Evaluate metrics on HOUR intervals

#1

I am experiencing an issue with evaluating a metric on an HOUR interval.
Context:
I have a compound metric that computes the average value over N days of ten-minute interval data.
The expression is:
"eval('AVG', 'TEN_MINUTE', window('AVG',CompoundMetricA, -x, x))"
where x is some integer.

Note: there is a SimpleMetric that CompoundMetricA depends on which is cached at the 'HOUR' interval.
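
For reference, here is a minimal sketch of what such definitions could look like, just to make the setup concrete. Everything in it (the ids, srcType, path, the simple-metric expression, the exact cache fields, and the use of TagMeasurements as the inner metric) is an assumption for illustration, not our actual definitions:

    // Hypothetical sketch only -- ids, paths, expressions, and cache fields are placeholders.
    // The cached SimpleMetric (shown here as TagMeasurements for illustration), cached at 'HOUR':
    var tagMeasurements = SimpleMetric.make({
        id: "TagMeasurements_MyType",
        name: "TagMeasurements",
        srcType: "MyType",
        path: "measurements",
        expression: "avg(avg(normalized.data.value))",
        cache: { intervals: ["HOUR"], monthsInPast: 12 }
    });

    // The compound metric; the inner metric name and the integer x are placeholders.
    var compoundMetricA = CompoundMetric.make({
        id: "CompoundMetricA",
        name: "CompoundMetricA",
        expression: "eval('AVG', 'TEN_MINUTE', window('AVG', TagMeasurements, -x, x))"
    });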

I will now outline how the results differ as I vary the metric expression and the evalMetricsSpec slightly (a consolidated sketch of the full call follows the scenarios):
Scenario 0:
Command: MyType.evalMetric({expressions: ['TagMeasurements', 'CompoundMetricA'], ... , "interval":"HOUR"})
Expression (note: many intervals were tried): "eval('AVG', 'HOUR', window('AVG', TagMeasurements, -smoothingWindow, smoothingWindow))"
Output:
TagMeasurements – no missing data
CompoundMetricA – no missing data

Scenario 1:
Command: MyType.evalMetric({expressions: ['TagMeasurements', 'CompoundMetricA'], ... , "interval":"HOUR"})
Expression (note: many intervals were tried): "eval('AVG', 'ANY_INTERVAL_BESIDES_HOUR', window('AVG', TagMeasurements, -smoothingWindow, smoothingWindow))"
Output:
TagMeasurements – no missing data
CompoundMetricA – all data is missing

Scenario 2:
Command: MyType.evalMetric({expressions: ['TagMeasurements', 'CompoundMetricA'], ... , "interval":"ANY_INTERVAL"})
Expression: "eval('AVG', 'HOUR', window('AVG', TagMeasurements, -smoothingWindow, smoothingWindow))"
Output:
TagMeasurements – no missing data
CompoundMetricA – no missing data

Scenario 3:
Command: MyType.evalMetric({expressions: ['TagMeasurements', 'CompoundMetricA'], ... , "interval":"ANY_INTERVAL"})
Expression: "window('AVG', TagMeasurements, -smoothingWindow, smoothingWindow)"
Output:
TagMeasurements – no missing data
CompoundMetricA – no missing data
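
For completeness, the commands in the scenarios above all have the shape sketched below; the id, start, and end values are hypothetical placeholders for the parts elided with "..." in the commands, and any spec field not shown in the commands is an assumption.

    // Sketch of the evalMetric call shape used in the scenarios; ids/start/end are placeholders.
    var result = MyType.evalMetric({
        ids: ["SOME_MYTYPE_ID"],
        expressions: ["TagMeasurements", "CompoundMetricA"],
        start: "2019-01-01T00:00:00",
        end: "2019-01-08T00:00:00",
        interval: "HOUR"   // "ANY_INTERVAL" in Scenarios 2 and 3
    });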

We have tried refreshing the cached metric on the MyType instance we are evaluating the metric on, but it has not changed the result.

Question:
Why does this metric not return the expected output on an HOUR interval, and how can we adjust the cached metric results?


#2

What do you mean by "fail to evaluate"? Do you see a 500 error?


#3

No, there are no 500 errors. The metric finishes evaluating, but the output is a list of all zeros when it shouldn't be.


#4

@m-a.hameed What version of the server are you using? I remember fixing this issue recently, and I believe the fix was cherry-picked to the latest v7.8 as well. Depending on the version you are using, this issue may have already been reported and fixed.


#5

@rohit.sureka Hi Rohit, we are currently using c3-server-7.8.8.76-1.x86_64.rpm
