Type Triggers that delete on insert

#1

Hi there,

I’d like to make a type that holds at most 70 records and, on each insert, deletes the oldest record.

Could someone please explain how to achieve this functionality?

0 Likes

#2

There are afterCreate and afterUpdate functions available on persistable types.

0 Likes

#3

Thanks Uday, where can I read about these and see their usage?

0 Likes

#4

0 Likes

#5

The type holds 70 records. One record is added every day, and on each insert the oldest record should be deleted.

0 Likes

#6

It looks like afterCreate is more applicable.

0 Likes

#7

Yes, you can implement MyType.afterCreate to do this. A FetchSpec like {offset: 70, order: "descending(meta.created)"} followed by a removeBatch should do the job of deleting all records older than the newest 70.

In general, writing less code is better, so I’ll offer an alternative approach just in case it happens to suit your application: wherever you are reading these records, you can retrieve them with a FetchSpec like {limit: 70, order: "descending(meta.created)"} so that you don’t have to maintain additional logic to remove old records.
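To make the trim logic above concrete, here is a plain-JavaScript sketch (not the C3 API itself) of "keep the newest 70, drop everything beyond that offset". The record shape ({created: ...}) and the function name trimToNewest are illustrative assumptions:

```javascript
// Sketch: split records into the newest `limit` (kept) and the rest (stale).
// Mirrors a FetchSpec of {offset: limit, order: "descending(created)"};
// the `stale` list is what you would hand to removeBatch.
function trimToNewest(records, limit) {
  // Sort newest-first by creation time.
  var sorted = records.slice().sort(function (a, b) {
    return b.created - a.created;
  });
  return {
    keep: sorted.slice(0, limit),  // the newest `limit` records
    stale: sorted.slice(limit)     // everything at offset >= limit
  };
}
```

With 75 records, keep holds the 70 newest and stale holds the 5 oldest.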

0 Likes

#8

I want automation, I don’t want to have to run the command daily.

Could someone please provide the equivalent of a typical database table trigger, along with a practical example? Thanks.

0 Likes

#9

You would implement afterCreate using the logic Matt described. For example:

MyType.c3typ:
entity type MyType {
  afterCreate: ~ js server
}

MyType.js:
function afterCreate(objs) {
  // Remove as many of the oldest records as were just created
  // (assumes the type is already at its 70-record cap).
  MyType.fetch({limit: objs.length, order: "ascending(meta.created)"}).each(function (t) { t.remove(); });
}

The result is that after N objects of MyType are created, your afterCreate() is automatically invoked, removing the N oldest MyType objects. (You could probably write better logic in afterCreate to batch the removal using a removeAll, but I’ll leave that to you.)
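The batched variant hinted at above can be sketched in plain JavaScript. The C3 names used here (fetch, removeBatch, meta.created) are assumptions taken from this thread, and MyType is a small in-memory stub so the flow can run anywhere; only afterCreate carries the actual trigger logic:

```javascript
// In-memory stub standing in for a persistable type (NOT the real C3 API).
var MyType = {
  _store: [],
  create: function (obj) {
    this._store.push(obj);
    afterCreate([obj]);              // simulate the trigger firing on insert
  },
  fetch: function (spec) {
    // Stub honors only ascending(meta.created) and limit.
    var objs = this._store.slice().sort(function (a, b) {
      return a.meta.created - b.meta.created;
    });
    if (spec.limit != null) objs = objs.slice(0, spec.limit);
    return { objs: objs };
  },
  removeBatch: function (toRemove) {
    this._store = this._store.filter(function (o) {
      return toRemove.indexOf(o) < 0;
    });
  }
};

var LIMIT = 70;

// Trigger logic: one removeBatch call instead of per-record remove() calls,
// and it only deletes once the type actually exceeds the cap.
function afterCreate(objs) {
  var excess = MyType._store.length - LIMIT;
  if (excess <= 0) return;
  var oldest = MyType.fetch({ limit: excess, order: "ascending(meta.created)" });
  MyType.removeBatch(oldest.objs);
}
```

After 75 daily creates, the store stays capped at 70 and the five oldest records are gone.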

However, I agree with Matt that your requirement is probably not a good one: why would you ever want to remove data? If you only need the 70 most recent records for some sort of algorithm, you can fetch the 70 newest with the call Matt provided; what’s the point of removing the old ones? Once data is removed it can never be recreated, and you might find a use for it later.

Unless these objects are a GB each, there’s very little chance that one object a day could have a material impact on cost or performance. (As a practice at C3 we almost never remove data for any reason.)

0 Likes

#11

It’s a big rollup. The performance of the rollup is unacceptable and gives a really horrendous UX.
My intention is to persist the values the rollup is fetching and store them in a temp table, eliminating the users’ need to wait for a big query.

This way, instead of millions of calcs, the UI can just fetch a few hundred thousand records and plot them.
Removal is only used so the table (type) does not grow; its sole purpose is to feed the UI.

Works great in Oracle.

0 Likes

#12

Yes, storing aggregates will make UI queries faster, and that’s probably best practice (without knowing your specific use case).

You could still keep old aggregates around; for example, you may one day want to change the amount of historical data presented in the UI.

If removing data is really a requirement then the afterCreate function above should automatically remove old entries as you wish.

0 Likes