I am not sure how errors are handled during evalMetric[s], or how to go about handling them. It seems to me that the engine will evaluate any metric without throwing exceptions, even if it cannot be evaluated for some reason: you may get all 0s, for example.
Whether a metric has been evaluated correctly should therefore be determined by an accompanying meta-metric, which is based on the same inputs but only outputs a boolean (in practice, 0s and 1s). The value of evalMetric is a timeseries whose data codomain is [double], which has no room for anything but the value itself, which may or may not be correct.
For example, if C is a metric for a consumption that only makes sense during the heating season, the meta-metric C_valid would output 1s iff within the heating season or, perhaps, also outside the heating season whenever C does not evaluate to 0 (assuming that it is possible to consume energy by mistake even outside the heating season).
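To make the idea concrete, here is a minimal sketch of the metric / meta-metric pairing. All names (eval_metric_c, eval_metric_c_valid, the plain-list timeseries representation) are hypothetical illustrations of the pattern, not the engine's actual API:

```python
def eval_metric_c(consumption: list[float]) -> list[float]:
    """The metric itself: a timeseries of doubles with no error channel."""
    return list(consumption)

def eval_metric_c_valid(consumption: list[float],
                        in_heating_season: list[bool]) -> list[float]:
    """Meta-metric: 1.0 where a sample of C is considered valid, 0.0 otherwise.

    A sample is valid inside the heating season or, outside it, whenever
    the consumption is non-zero (energy consumed by mistake is still real).
    """
    return [1.0 if season or value != 0 else 0.0
            for value, season in zip(consumption, in_heating_season)]

consumption = [5.0, 0.0, 3.0, 0.0]
season = [True, True, False, False]

c = eval_metric_c(consumption)
c_valid = eval_metric_c_valid(consumption, season)
# Only samples of c where c_valid is 1.0 should be trusted downstream.
```

A consumer would then filter or flag samples of C based on C_valid, rather than trying to infer validity from the values of C alone.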
The emphasis is on C_valid as a separate metric, as opposed to just adding something like HeatingSeason * ... to the expression, which may also hide non-zero values that would indicate a real problem. Could someone explain the right approach?