Problem evaluating a simple metric


#1

I have been trying to get this SimpleMetric to work for quite some time.

{
    "id": "BenchSiteTiming_MonitoringStatusSeries",
    "name": "BenchSiteTiming",
    "description": "Benchmark timing evolution on a site",
    "tsDecl": {
        "data": "status",
        "filter": "job.expressions == expressions &&
                   job.filter == concat('intersects(id, [\"', site, '\"])')",
        "value": "timing",
        "treatment": "RATE",
        "start": "timestamp"
    },
    "srcType": "MonitoringStatusSeries",
    "variables": [{ "name": "expressions" }, { "name": "site" }],
    "unit": { "id": "msec" }
}
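
If I read the variable substitution correctly, with the bindings I pass below the filter should reduce to the following (just my understanding of how the two variables get spliced in):

job.expressions == 'Sum_ICHA_Energy_Consumption' && job.filter == 'intersects(id, ["s00313"])'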

I tried without the filter and variables, replacing value with a constant, etc., but the following call always returns a series where every value is 100:

// mss is the MonitoringStatusSeries (header) instance the metric is evaluated on
c3Viz(MonitoringStatusSeries.evalMetrics({
  ids: [mss.id],
  // missing() should be 0 where BenchSiteTiming finds data and 100 where it does not
  expressions: ['missing(BenchSiteTiming)'],
  start: '2018-03-01',
  end: '2018-03-10',
  interval: 'DAY',
  // bound to the metric's declared variables
  bindings: {site: 's00313', expressions: 'Sum_ICHA_Energy_Consumption'}
}))

even though all the objects needed exist in the database (a direct fetch confirming this is sketched after the type definitions below). Some more info:

  1. v7.6.1
  2. definition of MonitoringStatusSeries
entity type MonitoringStatusSeries mixes MetricEvaluatable schema name 'MonSTSeries' {

    status: [MonitoringStatus](parent)
    target: string
    tags: string
...
}
  3. definition of MonitoringStatus
@db(compactType=true,
    datastore='cassandra',
    partitionKeyField='parent',
    persistenceOrder='timestamp',
    persistDuplicates=false,
    shortId=true,
    shortIdReservationRange=100000)
entity type MonitoringStatus mixes TimedValueHistory<MonitoringStatusSeries> schema name 'MonST' {
    job: MetricCachingJob
...
    quantity: integer
    timing: integer
}
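
To double-check that the raw rows really are there, a direct fetch on the datum type should return them (a sketch, assuming the usual fetch API; the filter string is the reduced form shown above):

MonitoringStatus.fetch({
  filter: "job.expressions == 'Sum_ICHA_Energy_Consumption' && " +
          "job.filter == 'intersects(id, [\"s00313\"])'",
  limit: 10
})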

Thanks


#2

@AlexBakic I’m not sure what you are trying to do here with the types, but I see several non-recommended ways of modeling the data.

  • Having the caching job directly on the data-point type? Are you sure?

I would start with something simple and work your way up to the full expression, as sketched below. It will be difficult to say why the returned value is 100 without actually debugging this.
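
For example, a stripped-down metric like this (just a sketch; the id and name are illustrative, and I’m assuming AVG is acceptable as a treatment here, RATE as in your original would also do) tells you whether the raw timing values come through at all, before you re-introduce the filter and variables:

{
    "id": "BenchSiteTimingSimple_MonitoringStatusSeries",
    "name": "BenchSiteTimingSimple",
    "description": "Raw benchmark timing, no filter or variables",
    "tsDecl": {
        "data": "status",
        "value": "timing",
        "treatment": "AVG",
        "start": "timestamp"
    },
    "srcType": "MonitoringStatusSeries",
    "unit": { "id": "msec" }
}

If that returns real values per DAY bucket, add the filter back one clause at a time.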


#3

I inherited the code; it is supposed to have a series type (the header) and a simple datum type (the status). Each status keeps the amount of data processed and the processing time (the processing being an evalMetrics call). Perhaps it’s a missing path; I will play with it. Thanks


#4

I added the job field just for linking purposes.


#5

One more thing: what we measure is not helpful at all because jobs get enqueued, so the timing at the end is huge and rather uniform. Is there a way to get a more precise timing of evalMetrics? (I can’t check at the moment whether EvalMetricsResult carries timings.)


#6

The overall evalMetrics time should be available via Splunk. Also see ExprCompileOptions, which needs to be passed along with explain:true.
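
I don’t remember the exact field name on the spec, so treat this as a hypothetical placement (check EvalMetricsSpec / ExprCompileOptions for where it really goes), but roughly:

MonitoringStatusSeries.evalMetrics({
  ids: [mss.id],
  expressions: ['BenchSiteTiming'],
  start: '2018-03-01',
  end: '2018-03-10',
  interval: 'DAY',
  bindings: {site: 's00313', expressions: 'Sum_ICHA_Energy_Consumption'},
  // hypothetical field name -- check EvalMetricsSpec for the real one
  compileOptions: {explain: true}
})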


#7

Hmm, I might try the profiler’s time; I hope it does not slow down the evaluation (significantly). Thanks
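
As a crude cross-check I can also just time the interactive call from the console (plain JavaScript; this only measures the synchronous round trip, not the cached-job path):

var t0 = Date.now();
var result = MonitoringStatusSeries.evalMetrics({
  ids: [mss.id],
  expressions: ['BenchSiteTiming'],
  start: '2018-03-01',
  end: '2018-03-10',
  interval: 'DAY',
  bindings: {site: 's00313', expressions: 'Sum_ICHA_Energy_Consumption'}
});
console.log('evalMetrics round trip: ' + (Date.now() - t0) + ' ms');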


#8

It was pilot error; everything is defined correctly and it works now.