Validity of `evalMetric[s]`


I am not sure how errors are handled during `evalMetric[s]`, or what the right approach to them is. It seems to me that the engine will evaluate any metric without throwing exceptions, even if the metric cannot be evaluated for some reason: you may get all 0s, for example.

Whether a metric has been evaluated correctly should be determined by an accompanying meta-metric, which is based on the same inputs but only outputs a boolean (in practice, enum {0,1}).

The value of evalMetric is a timeseries whose data codomain is [double], which has no room for anything but the value, which may be correct or not.

For example, if C is a metric about a consumption that only makes sense during the heating season, the meta-metric C_valid would output 1 iff the timestamp falls within the heating season or, perhaps, also outside the heating season whenever C does not evaluate to 0 (assuming it is possible to consume energy by mistake even outside the heating season).

The emphasis is on C_valid as a separate metric, as opposed to just adding something like HeatingSeason * ... to C's expression, which may also hide those non-0 consumptions.
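To make the pattern concrete, here is a minimal sketch, assuming nothing about the real engine: the names `eval_consumption`, `eval_consumption_valid`, and the dict-based timeseries are all hypothetical, and only illustrate evaluating a boolean meta-metric alongside the value metric instead of masking the value itself.

```python
def eval_consumption(points):
    """Value metric C: the raw consumption per timestamp.

    The codomain is just [double], so a 0 here may be a genuine
    reading or an artifact; the value alone cannot tell you which.
    """
    return dict(points)


def eval_consumption_valid(points, heating_season):
    """Meta-metric C_valid: 1 if a point is meaningful, else 0.

    A point counts as valid inside the heating season, or outside it
    whenever a non-zero consumption was recorded (energy consumed by
    mistake is still real consumption).
    """
    return {
        t: 1 if (t in heating_season or v != 0) else 0
        for t, v in points.items()
    }


points = {"2024-01-15": 12.5, "2024-06-15": 0.0, "2024-06-20": 3.2}
heating_season = {"2024-01-15"}

c = eval_consumption(points)
c_valid = eval_consumption_valid(points, heating_season)
# Consumers read C only where C_valid == 1, so the out-of-season
# non-zero point on 2024-06-20 is kept rather than masked away.
```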

Could someone explain the right approach?



Yes, errors during metric evaluation are handled silently. For example, if you have metrics that use an ActionMetricDecl or a MetricFunctionLibrary, errors in your js are only logged to splunk with no other indication; your only indicator is the m_missing values.
However, certain types of errors during evalMetric, such as unit-conversion failures, are propagated back to the caller.

What can be confusing is that m_missing is also set when you simply have no data to populate your metrics (e.g. the fetch returns nothing).

Your only option is to debug your metric properly and check the logs before you go to production.
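The ambiguity described above can be sketched as follows. This is not the engine's API; `evaluate`, `Reason`, and the callables are hypothetical, and only show how an empty fetch and an evaluation error collapse into the same missing marker unless a reason is kept alongside the value.

```python
from enum import Enum


class Reason(Enum):
    OK = "ok"
    NO_DATA = "no data at source"       # the fetch returned nothing
    EVAL_ERROR = "evaluation error"     # e.g. a bug in the js expression


def evaluate(fetch, expr):
    """Evaluate expr over fetched data, never raising.

    A silent engine would surface only 'value is missing' in both
    failure branches; returning the Reason as well is one way to
    distinguish data genuinely missing at the source from an error.
    """
    data = fetch()
    if data is None:
        return None, Reason.NO_DATA
    try:
        return expr(data), Reason.OK
    except Exception:
        return None, Reason.EVAL_ERROR
```

With only an m_missing-style flag, the first two cases below would be indistinguishable; the reason restores the difference:

```python
evaluate(lambda: None, lambda d: d)      # (None, Reason.NO_DATA)
evaluate(lambda: 4, lambda d: d / 0)     # (None, Reason.EVAL_ERROR)
evaluate(lambda: 4, lambda d: d * 2)     # (8, Reason.OK)
```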



That is fine when there is an exception, but too often there is only a missing value, and even then we do not know whether the data are really missing at the source or the missing value is due to some error (like a missing link, which is, perhaps justifiably, interpreted as missing data).