upsertBatch for FileData

On c3server 7.8.x.

Given I have:

type MyMeasurementSeries mixes FileTimedDataHeader<MyMeasurementSeries>... 

and

@db(partitionKeyField="parent")
type MyFileMeasurement mixes FileTimedDataPoint<MyMeasurementSeries, MyFileMeasurement> {

  @ts(treatment='previous')
  value: double

  anotherValue: double
}

I am looking to write a JsMapReduce job that will modify all MyFileMeasurement objects where statusCode != "Good" by moving the value of the value field over to the anotherValue field.

Using normal TimedDataPoints this is trivial: you can just set the fields on each point and then call upsertBatch on those data points. With FileTimedDataPoints, however, there doesn't appear to be an id field, so calling upsertBatch results in duplicates. There is no removeBatch either, so you cannot simply remove the problematic data points and then upsert the modified ones.
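For reference, the per-batch transformation itself is simple. A minimal sketch, using plain objects to stand in for data points and a hypothetical helper name (with normal TimedDataPoints the result could then be passed straight to upsertBatch):

```javascript
// Hypothetical helper: for each point whose statusCode is not "Good",
// copy `value` into `anotherValue`. "Moving" is assumed to mean
// copy-then-clear; drop `value: null` if the source field should be kept.
function moveValueToAnotherValue(points) {
  return points.map(function (p) {
    if (p.statusCode === "Good") return p;
    return Object.assign({}, p, { anotherValue: p.value, value: null });
  });
}

// In a JsMapReduce map() body (sketch only -- this is where the
// duplicate problem appears for FileTimedDataPoints):
//   MyFileMeasurement.upsertBatch(moveValueToAnotherValue(batchPoints));
```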

I looked into calling MyFileMeasurement.removeAll({...}), but it looks like the only filter you are allowed to specify there is a partitionKeyFilter, which removes all data under that partition :frowning:

Is there an elegant solution for handling this situation? We have been looking at various FileData APIs and can't seem to find anything that will work. The only solution I can think of at the moment is reading each file associated with a partition key, processing all data points in that file, then overwriting the file with the updated data points.
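For concreteness, the read-process-overwrite workaround would look roughly like this per partition key. The readPoints/writePoints callbacks are hypothetical stand-ins for whichever FileData read/write calls apply on 7.8.x; the point of the sketch is only the overwrite-the-whole-file shape:

```javascript
// Per-partition rewrite: read the whole file, transform the bad points,
// and write the file back out in full (overwrite, not append, so no
// duplicates are created).
function rewritePartition(partitionKey, readPoints, writePoints) {
  var updated = readPoints(partitionKey).map(function (p) {
    if (p.statusCode === "Good") return p;
    // Assumes "moving" means copy then clear the source field.
    return Object.assign({}, p, { anotherValue: p.value, value: null });
  });
  writePoints(partitionKey, updated);
  return updated;
}
```

In a JsMapReduce job, each map batch would invoke this once per partition key it receives.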