Problem evaluating a simple metric


I have been trying to get this SimpleMetric to work for quite some time.

    {
        "id": "BenchSiteTiming_MonitoringStatusSeries",
        "name": "BenchSiteTiming",
        "description": "Benchmark timing evolution on a site",
        "tsDecl": {
            "data": "status",
            "filter": "job.expressions == expressions &&
                       job.filter == concat('intersects(id, [\"', site, '\"])')",
            "value": "timing",
            "treatment": "RATE",
            "start": "timestamp"
        },
        "srcType": "MonitoringStatusSeries",
        "variables": [{ "name": "expressions" }, { "name": "site" }],
        "unit": { "id": "msec" }
    }
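As a sanity check on the filter, the `concat('intersects(id, ["', site, '"])')` piece should expand to a plain string once the `site` binding is substituted. A minimal Python illustration of that string construction (this is not platform code, just the string logic):

```python
def build_site_filter(site: str) -> str:
    """Mirror the tsDecl expression concat('intersects(id, ["', site, '"])')."""
    return 'intersects(id, ["' + site + '"])'

# With the binding site='s00313', job.filter must equal this exactly:
print(build_site_filter("s00313"))  # intersects(id, ["s00313"])
```

If the stored `job.filter` differs from this string even by whitespace or quoting, the equality comparison in the metric's filter will never match.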

I tried without the filter and variables, replacing `value` with a constant, etc., but evaluation with the following bindings always returns a series of 100:

  bindings: {site: 's00313', expressions: 'Sum_ICHA_Energy_Consumption'}

even though all the required objects exist in the database. Some more info:

  1. v7.6.1
  2. definition of MonitoringStatusSeries
entity type MonitoringStatusSeries mixes MetricEvaluatable schema name 'MonSTSeries' {
    status: [MonitoringStatus](parent)
    target: string
    tags: string
}
  3. definition of MonitoringStatus
entity type MonitoringStatus mixes TimedValueHistory<MonitoringStatusSeries> schema name 'MonST' {
    job: MetricCachingJob
    quantity: integer
    timing: integer
}



@AlexBakic I’m not sure what you are trying to do here with these types, but I see several non-recommended ways of modeling data.

  • Having the caching job directly on the data point type? Are you sure?

I would start with something simple and work your way up to your full expression. It will be difficult to say why the returned value is 100 without actually debugging this.
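For instance, a stripped-down variant with the filter and variables removed (the `id` and `name` here are illustrative, not from the original post) would isolate whether the basic tsDecl paths work before the filter is reintroduced:

    {
        "id": "BenchSiteTimingSimple_MonitoringStatusSeries",
        "name": "BenchSiteTimingSimple",
        "srcType": "MonitoringStatusSeries",
        "tsDecl": {
            "data": "status",
            "value": "timing",
            "treatment": "RATE",
            "start": "timestamp"
        },
        "unit": { "id": "msec" }
    }

If even this returns the constant series of 100, the problem is in the tsDecl paths rather than in the filter expression.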


I inherited the code; it is supposed to have a series type (the header) and a simple datum type (the status). The status keeps the amount of data processed and the processing time (the processing being evalMetrics). Perhaps it’s a missing path; I will play with it. Thanks.


I added the job field just for linking purposes.


One more thing: what we measure is not helpful at all, because jobs get enqueued, so the timing measured at the end is huge and rather uniform. Is there a way to get a more precise timing of evalMetrics? (I can’t check at the moment whether EvalMetricsResult has timing(s).)


The overall evalMetrics time should be available via Splunk. Also see ExprCompileOptions, which needs to be passed along with explain: true.


Hmm, I might try the profiler’s time; I hope it does not slow down the evaluation (significantly). Thanks.


It was pilot error; everything is defined correctly and it works.