Thank you both for your suggestions. We have performance issues with the v5 application currently deployed: evalMetric takes 3 to 8 seconds for a year's worth of data at the month interval, and up to 30 s when data is being loaded in parallel. We strongly suspect that having to evaluate it at the minute grain is the cause.
Therefore, we want to properly design our v7 application to avoid replicating suboptimal patterns. I understand that overriding normalization might not be the right way to do this, but if we want to benchmark it, do you have an exhaustive list of the individual steps you started listing, Rohit?
In order to obtain the consolidated index from the numbers the meter reports, we currently compute the rollingDiff and then filter out negative values, but that’s a bad approximation when data is missing (and it’s slow).
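To make the problem concrete, here is a minimal plain-Python sketch of the current approach (rollingDiff, then dropping negative values); the function name and data are illustrative, not the real metric code. It shows why a meter reset is silently swallowed:

```python
def consumption_current(index):
    """Diff consecutive readings and zero out negative diffs (resets)."""
    diffs = []
    for prev, cur in zip(index, index[1:]):
        d = cur - prev
        diffs.append(d if d >= 0 else 0.0)  # a reset shows up as a large negative diff
    return diffs

# The reset between 120 and 5 becomes a 0, so whatever was consumed around
# the reset is lost; a missing sample would degrade the result further.
readings = [100.0, 110.0, 120.0, 5.0, 15.0]
print(consumption_current(readings))  # -> [10.0, 10.0, 0.0, 10.0]
```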
What we’d really want is something like this:
rollingDiff(eval('MINUTE', interpolate(rolling('SUM', rollingDiff(NormalizedIndex, 0.01), 'LINEAR', 'MISSING'))))
rolling(rollingDiff()) is used to get rid of the resets and reconstruct a monotonically increasing index, interpolate() to smooth the consumption when data is missing, and the outer rollingDiff to get back to the (approximated) consumption.
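The intended pipeline can be sketched in plain Python (assumed semantics for the C3 functions, not their real implementations): cumulate the non-negative diffs to rebuild a reset-free index, linearly interpolate the gaps, then diff again.

```python
def reconstruct_index(index):
    """rolling('SUM', rollingDiff(...)): cumulative sum of non-negative diffs.
    None marks a missing sample; negative diffs (resets) are skipped."""
    out, total, prev = [], 0.0, None
    for cur in index:
        if cur is None:
            out.append(None)
            continue
        if prev is not None and cur >= prev:
            total += cur - prev
        prev = cur
        out.append(total)
    return out

def interpolate_linear(series):
    """interpolate(..., 'LINEAR', 'MISSING'): fill None runs linearly."""
    s, i = list(series), 0
    while i < len(s):
        if s[i] is None:
            j = i
            while j < len(s) and s[j] is None:
                j += 1
            if 0 < i and j < len(s):  # only interior gaps are fillable
                step = (s[j] - s[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    s[k] = s[k - 1] + step
            i = j
        else:
            i += 1
    return s

def consumption(index):
    """Outer rollingDiff over the interpolated, reconstructed index."""
    rebuilt = interpolate_linear(reconstruct_index(index))
    return [b - a for a, b in zip(rebuilt, rebuilt[1:])]

# Index with a missing sample (None) and a reset (120 -> 5): the gap is
# smoothed into two 5.0 steps instead of producing a hole.
readings = [100.0, 110.0, None, 120.0, 5.0, 15.0]
print(consumption(readings))  # -> [10.0, 5.0, 5.0, 0.0, 10.0]
```

Note the reset itself still yields a 0 (consumption across the reset instant is unrecoverable without extra information); the gain is that the reconstructed index is monotone, so missing data no longer corrupts neighbouring intervals.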
This is out of the question considering our current performance issues, hence the need to come up with a better scheme. Maybe a custom expression engine function could help, but I don't know whether we would lose C3 optimizations by doing so.
Anyway, if we could store the reconstructed index as the normalized value, we could evaluate this timeseries at a 15- or 30-minute interval (currently impossible because we might miss a reset in the middle) and I suspect it would greatly reduce the response time of this evalMetric.
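A quick sketch of why storing the reconstructed index makes coarse intervals safe (hypothetical helper, not an existing API): once the index is reset-free and monotone, the consumption over any window is simply last minus first, so skipping intermediate minutes can no longer miss a reset.

```python
def window_consumption(monotone_index, start, end):
    """Consumption over [start, end] read off a reset-free, increasing index."""
    return monotone_index[end] - monotone_index[start]

# Reconstructed (monotone) index at minute grain; sampling only the two
# endpoints of the window gives the exact total for that window.
rebuilt = [0.0, 10.0, 15.0, 20.0, 20.0, 30.0]
print(window_consumption(rebuilt, 0, 5))  # -> 30.0
```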
Further suggestions are welcome; meanwhile, I'll follow the steps highlighted above.