Normalization failed since numOfNormalizedValues=700800 exceeded maxAllowed=700800

#1

I’m trying to evaluate a metric with MyType.evalMetric({. . .}), but it fails with the following error:

```
Error: c3.love.exceptions.C3RuntimeException: c3.love.exceptions.C3RuntimeException: MetricEngine error :
c3.love.exceptions.C3RuntimeException: Error c3.love.exceptions.C3RuntimeException: DataLoaderImpl Error :
Call to fetchNormalizedData failed : Normalization failed since numOfNormalizedValues=700800 exceeded
maxAllowed=700800, for type=PointPhysicalMeasurementSeries,
pid=00-AsNet1.00-Group00000000.AMB_UTA3_231_04____01, tsField=quantity, grain=MINUTE, normalizedGrain=MINUTE
    at c3.love.expr.DataLoaderImpl.throwError(DataLoaderImpl.java:103)
    at c3.love.expr.DataLoaderImpl.fetchNormalizedData(DataLoaderImpl.java:135)
    at c3.love.expr.DataLoaderImpl.getNormalizedData(DataLoaderImpl.java:368)
    at c3.love.expr.eval.EvalVisitor.processTimeseriesNodes(EvalVisitor.java:813)
    at c3.love.expr.eval.EvalVisitor.processTimeseriesContext(EvalVisitor.java:901)
    ...
    at c3.service.metric.SimpleMetricEvaluator.evaluateExpressionBasedMetric(SimpleMetricEvaluator.java:577)
    at c3.service.metric.SimpleMetricEvaluator.evaluateMetric(SimpleMetricEvaluator.java:528)
    at c3.service.metric.MetricEvaluatable.compute(MetricEvaluatable.java:296)
    at c3.service.metric.MetricEvaluatable.computeResultsFromHierarchies(MetricEvaluatable.java:159)
    at c3.service.metric.MetricEvaluatable.eval(MetricEvaluatable.java:108)
    ...
    at c3.service.metric.CompoundMetricEvaluator.evaluateMetricsForSource(CompoundMetricEvaluator.java:660)
    at c3.service.metric.CompoundMetricEvaluator.evaluateMetrics(CompoundMetricEvaluator.java:870)
    at c3.service.metric.MetricEngine.evalMetric(MetricEngine.java:381)
    ...
    at java.lang.Thread.run(Thread.java:745)
Caused by: c3.love.exceptions.C3RuntimeException: Normalization failed since numOfNormalizedValues=700800
exceeded maxAllowed=700800, for type=PointPhysicalMeasurementSeries,
pid=00-AsNet1.00-Group00000000.AMB_UTA3_231_04____01, tsField=quantity, grain=MINUTE, normalizedGrain=MINUTE
    at c3.engine.…:153)
    at c3.engine.database.timeseries.normn.FetchNormalizedDataTask.normalize(FetchNormalizedDataTask.java:1361)
    at c3.engine.database.timeseries.normn.FetchNormalizedDataTask.evaluate(FetchNormalizedDataTask.java:219)
    at c3.engine.database.timeseries.normn.FetchNormalizedDataTask.call(FetchNormalizedDataTask.java:169)
    at c3.engine.database.DatabaseEngine.execute(DatabaseEngine.java:1348)
    ...
    at c3.love.expr.DataLoaderImpl.fetchNormalizedData(DataLoaderImpl.java:133)
    ... 53 more
msg: Failed to eval expression with bindings for srcId: 00-AsNet1.00-Group00000000.AMB_UTA3_231_04____01
srcId=00-AsNet1.00-Group00000000.AMB_UTA3_231_04____01
    at new C3.client.ActionError (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:1154:13)
    at Object.request (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:906:15)
    at Object.call (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:589:27)
    at c3Call (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:97:20)
    at Object._call (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:2613:20)
    at Object.eval (eval at get (https://claradomus.c3iot.com/typesys/1/all.js?env=browser&compat:3003:20), <anonymous>:5:15)
    at <anonymous>:1:31
```

I tried reducing the time range (i.e. the start/end dates) and the interval, but it did not help!
I also tried setting the TenantConfig parameter MaxNormalizedPointsLimit to a large value, but that did not help either, as my servers are on 7.2:

TenantConfig.merge({id: 'MaxNormalizedPointsLimit', name: 'to avoid Normalization failed since numOfNormalizedValues', value: 1099511627776})

Any ideas that could help resolve this issue?

0 Likes

#2

You are hitting the limit on the number of points stored in Cassandra under the same parent that can be normalized.
It is currently equivalent to 20 years of 15-minute interval data (20 * 365 * 24 * 4 = 700,800).
I believe this is a config that you can change (@rohit.sureka please confirm).
But I would not recommend storing larger volumes of data, as Cassandra is not designed for this kind of use case.
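For reference, the arithmetic behind that default cap can be sketched in plain JavaScript (the constant names here are illustrative, not actual C3 identifiers):

```javascript
// Default normalization cap: 20 years of 15-minute interval data.
const YEARS = 20;
const POINTS_PER_DAY_AT_15_MIN = 24 * 4;   // 96 quarter-hours per day
const maxAllowed = YEARS * 365 * POINTS_PER_DAY_AT_15_MIN;
console.log(maxAllowed);                   // 700800

// The same cap is reached much sooner at MINUTE grain:
const daysAtMinuteGrain = maxAllowed / (24 * 60);
console.log(daysAtMinuteGrain);            // ~486.7 days of 1-minute data
```

So a MINUTE-grain series hits the default cap after well under two years of data, which matters for the discussion below.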

You should use cheaper storage such as S3; we also support normalization for that data store.

1 Like

#3

@bachr The maximum number of points stored in Cassandra is configurable via TenantConfig: there is a field called MaxNormalizedPointsLimit on TenantConfig. Update: it is available in 7.6.1.

1 Like

#4

@bachr the feature should be available in v7.2 as well. And yes, you would have to renormalize the series in order for metric evals to go through. We avoid automatic renormalization since it could put unnecessary pressure on the database if the root error has not been resolved.

It is also covered in the normalization in-depth documentation:

TenantConfig.upsert({id: "MaxNormalizedPointsLimit", value: 1051200});
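The value 1051200 in that snippet is exactly two years of MINUTE-grain data. Before raising the limit, it can be worth sanity-checking a candidate value against the grain and time span you actually need; a quick sketch in plain JavaScript (illustrative arithmetic only, not C3 API):

```javascript
// Sanity-check a candidate MaxNormalizedPointsLimit value.
const MINUTES_PER_YEAR = 365 * 24 * 60;      // 525600
const candidateLimit = 2 * MINUTES_PER_YEAR; // the 1051200 used above
console.log(candidateLimit);                 // 1051200

// How many years of MINUTE-grain data a given limit can hold:
function yearsAtMinuteGrain(limit) {
  return limit / MINUTES_PER_YEAR;
}
console.log(yearsAtMinuteGrain(700800));     // ~1.33 years under the default
```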

1 Like

#5

Thanks all for the clarifications.

@rohit.sureka will it be better in my case to switch from storing my type in Cassandra to a FileData (I guess this is what @romain.juban is suggesting)?

0 Likes

#6

I still have this problem, and the strange thing is that PointMeasurement has only a few points:

PointMeasurement.fetchCount({filter: Filter.eq('parent.id', parentId)});
5284

Looking at the point’s collection, I see that the minimum difference between two subsequent measurements is 0 ms and the maximum is 5523652000 ms (which is 63 days 22:20:52)!
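That kind of gap analysis can be reproduced offline once the raw timestamps are exported; a minimal sketch in plain JavaScript (the sample data below is made up):

```javascript
// Given measurement timestamps (epoch millis), find the min and max gap
// between consecutive points -- duplicate timestamps show up as a 0 ms gap.
function gapStats(timestampsMillis) {
  const ts = [...timestampsMillis].sort((a, b) => a - b);
  let min = Infinity, max = -Infinity;
  for (let i = 1; i < ts.length; i++) {
    const gap = ts[i] - ts[i - 1];
    if (gap < min) min = gap;
    if (gap > max) max = gap;
  }
  return { min, max };
}

// Made-up sample: two duplicate points and one large hole.
console.log(gapStats([0, 60000, 60000, 5523712000]));
// { min: 0, max: 5523652000 }
```

A 0 ms minimum gap means duplicate timestamps, and a multi-day maximum gap means normalization has to fill a long hole at the series grain.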

I tried to re-kick the normalization, with

PointPhysicalMeasurementSeries.refreshNormalization()

or

Tag.rebuild('superrtag')

but it keeps failing for this series:

> c3QReport(NormalizationQueue)
|targetTenantTagId|targetTenantTag      |targetTypeId|targetType                    |status |count|
|-----------------|---------------------|------------|------------------------------|-------|-----|
|5                |supertenant/superrtag|378         |RegisterMeasurementSeries     |failed |40   |
|5                |supertenant/superrtag|562         |PointPhysicalMeasurementSeries|failed |26   |
|5                |supertenant/superrtag|562         |PointPhysicalMeasurementSeries|pending|1    |

Here is the series causing problems:

 id: series-id
 version: 1
 name: series-id
 meta: { ... }
   tenantTagId: 5
   tenant: supertenant
   tag: superrtag
   created: 2018-01-26T12:33:25.000Z
   createdBy: bachr@cousco.us
   updated: 2018-01-26T12:33:25.000Z
   updatedBy: bachr@cousco.us
   timestamp: 2018-01-26T12:33:31.000Z
   sourceSystem: SOURCE_SYSTEM_NAME
   sourceFile: SOURCE_FILE_MeasurementSeries_v4.3
 unitConstraint: { ... }
   id: dimensionless
 treatment: AVG
 multiplier: 1
 sensorVisibility: false
 servicePoint: { ... }
   id: Service_Point_ID
 measurementType: MeasurementTypeName
 origin: SYSTEM_NAME
 interval: MINUTE
 resource: { ... }
   id: resourceId
 description: Series description
 typeIdent: TYP:IDENT:SAMPLE

Could the fact that the series has no dimension be causing the normalization problem?

0 Likes

#7

Increasing the normalization interval from ‘MINUTE’ to ‘QUARTER_HOUR’ for the troublesome PointPhysicalMeasurementSeries seems to fix the problem:

var series = PointPhysicalMeasurementSeries.get('series-id')
series.interval = "QUARTER_HOUR"
series.merge()

But we want the ‘MINUTE’ granularity!
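Coarsening the grain works because it divides the normalized point count by 15. A back-of-the-envelope check in plain JavaScript (the span value is illustrative):

```javascript
// Normalized point count is roughly span / grain, so moving from MINUTE to
// QUARTER_HOUR divides the count by 15 (illustrative arithmetic only).
const spanMinutes = 734388;   // example span between first and last point
const maxAllowed = 700800;    // default normalization cap

const pointsAtMinute = spanMinutes;               // one slot per minute
const pointsAtQuarterHour = Math.ceil(spanMinutes / 15);

console.log(pointsAtMinute > maxAllowed);         // true  -> MINUTE fails
console.log(pointsAtQuarterHour > maxAllowed);    // false -> QUARTER_HOUR fits
```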

0 Likes

#8

@bachr The limit is on the number of normalized points. Even if you have only two points, say one in 1970 and the other in 2018, and you request minute-level normalization, it will generate more than the allowed limit and throw that error. This behavior exists to avoid accidentally running the database out of disk space if you introduce a bad point. You can surgically change the limit by applying the tenant config. We have chosen the default to be 20 years of 15-minute data, i.e. 700,800 points.
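The “two points far apart” scenario can be illustrated like this (plain JavaScript; a sketch of the behavior described above, not the server’s actual code):

```javascript
// Even two points can blow the limit: normalization generates one slot per
// grain interval across the whole span between the first and last point.
const MS_PER_MINUTE = 60 * 1000;
const first = Date.UTC(1970, 0, 1);   // one point in 1970
const last  = Date.UTC(2018, 0, 1);   // one point in 2018
const maxAllowed = 700800;            // default cap

const normalizedSlots = Math.ceil((last - first) / MS_PER_MINUTE);
console.log(normalizedSlots);               // 25246080 minutes in those 48 years
console.log(normalizedSlots > maxAllowed);  // true -> normalization would fail
```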

0 Likes

#9

Thanks @rohit.sureka for the explanation, now I understand better what this limit is about.

I looked at my measurements, and the difference between the first and last point is 734,387.15 minutes, which is bigger than the default limit of 700,800.

But the thing is, in my TenantConfig I did set MaxNormalizedPointsLimit to 1099511627776. Maybe the system, which runs c3-server-7.2.0.1050-1.x86_64.rpm, is not honoring this value!

0 Likes

#10

Maybe the value being used is cached. Is running Cluster.emptyAllCaches() an option?

1 Like