Is there a way to have the server process API requests one at a time?


As an example, let's say I want to clone a type instance. Each clone has a name following the scheme originalName-copyX, where X is the number of copies. We do not want two clones with the same name, so we should only do one cloning operation at a time.

Is there a way to specify that behavior for a server side method?


I think you have to implement the logic yourself with a combination of copyObj and fetch, e.g.

cloneObj: function(this: !mixing Persistable, type: !Type): !Obj

which can be implemented as

function cloneObj(type: !Type) {
  var cloneInstance = this.copyObj(type);
  var prefix = + '-copy';
  var previousClone = type.fetch({
    filter: 'startsWith(name, "'+prefix+'")',
    order: 'descending(name)',
    include: 'name',
    limit: 1
  if(_.isUndefined(previousClone)) { = prefix + '1';
  }else { = prefix + parseInt(previousClone.replace(prefix, ''));
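The suffix arithmetic can be exercised in plain JavaScript, independent of the fetch. This is an illustrative sketch; `nextCloneName` is a hypothetical helper, not a platform API:

```javascript
// Hypothetical helper: given the original object's name and the name of the
// latest existing clone (or undefined), compute the next clone name.
function nextCloneName(originalName, previousCloneName) {
  var prefix = originalName + '-copy';
  if (previousCloneName === undefined) {
    return prefix + '1';
  }
  // Strip the prefix and increment the numeric suffix.
  return prefix + (parseInt(previousCloneName.replace(prefix, ''), 10) + 1);
}

console.log(nextCloneName('pump', undefined));    // → pump-copy1
console.log(nextCloneName('pump', 'pump-copy3')); // → pump-copy4
```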

Hope that helps.


This is a good way to make the clone, but I believe there is a race condition here. If two users on different machines on the same tag clone the same instance at the same time, they would have equal previousClone values, and therefore both generate a clone with the same name. Is there a way to have the server throttle these requests so that only one gets processed at a time?
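The race can be reproduced in a plain-JavaScript simulation (an illustrative mock, not platform code): both requests read the same "latest clone" name before either write lands, so both derive the same next name.

```javascript
// Minimal simulation of the check-then-act race: two requests both read the
// current latest clone name, then both derive the same next name.
var names = ['pump-copy1']; // existing clones in the mock "database"

function readLatest() {
  return names.slice().sort().reverse()[0]; // descending(name), limit 1
}

// Request A and request B both read before either writes:
var latestSeenByA = readLatest();
var latestSeenByB = readLatest();

var nameFromA = 'pump-copy' + (parseInt(latestSeenByA.replace('pump-copy', ''), 10) + 1);
var nameFromB = 'pump-copy' + (parseInt(latestSeenByB.replace('pump-copy', ''), 10) + 1);

names.push(nameFromA);
names.push(nameFromB);

console.log(nameFromA === nameFromB); // → true: both wrote 'pump-copy2'
```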


I’m not sure we have mechanisms for synchronization, but to be sure that no one else can create another clone with the same name, use a unique constraint in the db annotation, like:

@db(unique=['name'])
entity type TargetType {
  . . .
}

This will guarantee uniqueness for name, and as a result the second user attempting to create a clone with the same clone name will fail.
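With the unique index in place, the caller still has to handle the failed write. A minimal retry loop might look like the sketch below, which simulates the store with a Set and assumes the unique-index violation surfaces as a thrown error:

```javascript
// Simulated store: the Set plays the role of the unique index on `name`.
var existingNames = new Set(['pump-copy1']);

function createClone(name) {
  if (existingNames.has(name)) {
    throw new Error('unique index violation on name: ' + name);
  }
  existingNames.add(name);
  return name;
}

// Retry with the next suffix until the write succeeds.
function cloneWithRetry(originalName, maxAttempts) {
  for (var i = 1; i <= maxAttempts; i++) {
    try {
      return createClone(originalName + '-copy' + i);
    } catch (e) {
      // Unique violation: another clone took this suffix; try the next one.
    }
  }
  throw new Error('could not create clone after ' + maxAttempts + ' attempts');
}

var created = cloneWithRetry('pump', 10);
console.log(created); // → pump-copy2 ('pump-copy1' was already taken)
```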


@Alexander Bauer you could acquire a DbLock with a specific key and pass the lambda function that you need to synchronize. It's pretty straightforward; you can read examples and documentation in lock.c3doc. It also has examples of implementing your own custom locking logic (not using the existing db lock), examples of how to use the DbLock, steps to debug, log statements to search for, etc.


      .timeout(5 * 60 * 1000) // 5 minutes
      .synchronize((key) -> {
         // CRITICAL SECTION - any custom code that needs to be protected
         return null;
      });

but in principle, I agree with David: each unit of work should be idempotent; that is the only way you can scale. If you want, I can go over the use case with you to see if you can get by without acquiring a db lock.
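One way to make the cloning unit of work idempotent is to derive the clone's id deterministically from a caller-supplied request key, so a retried request is a no-op rather than a second clone. A plain-JavaScript sketch (a Map stands in for the database; the request-key scheme is an illustration, not a platform feature):

```javascript
// Sketch of an idempotent unit of work: the clone's id is derived from a
// caller-supplied request key, so replaying the same request returns the
// existing clone instead of creating a duplicate.
var store = new Map();

function cloneIdempotent(sourceId, requestKey) {
  var cloneId = sourceId + ':clone:' + requestKey;
  if (store.has(cloneId)) {
    return store.get(cloneId); // replay: return the existing clone
  }
  var clone = { id: cloneId, clonedFrom: sourceId };
  store.set(cloneId, clone);
  return clone;
}

var first = cloneIdempotent('pump', 'req-42');
var second = cloneIdempotent('pump', 'req-42'); // same request retried
console.log(first === second); // → true: no duplicate was created
console.log(store.size);       // → 1
```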


I wrote a test function:

    lockTest: function(k: string) js all


function lockTest(key) {
  // assuming the server-side log API; the call was truncated in the original
  log.info('## LOCK TEST {x}', { x: key });
}

but the following command at the console blocks indefinitely (more than 15 min):

      // .doNotThrowOnFailure()
      .timeout(5 * 60 * 1000) // 5 minutes

(X should be replaced by a real type name.)
I have looked at the Lock documentation. I wonder what language is used in the example, because a JavaScript lambda is not accepted.
What is wrong with the above?



I don’t know if this is the issue, but I notice that the lambda is slightly different in Rohit's example than in yours. I think Rohit's code is Java and won't work for you exactly. To create a JS lambda you could do:

Lambda.fromJavaScript(function lockTest(key) {
  // assuming the server-side log API; the call was truncated in the original
  log.info('## LOCK TEST {x}', { x: key });
});

See the “Lambda” type for more useful methods.


Side note: you shouldn’t design an application that requires single threading. You will lose significant performance benefits that come from parallelism and horizontal scaling.


I need to serialize access to a type because there is a chain creation that must stay linear (two simultaneous accesses could create two references to the same “parent” object, which is not allowed). I will minimize the time spent in the critical section.


@db(unique=['reference'])
type YourType {
  reference: TheParentType
}
should solve this for you no?
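The effect of a unique index on the parent reference can be simulated in plain JavaScript: only one child per parent can ever be written, which keeps the chain linear. A minimal sketch (a Map stands in for the unique index):

```javascript
// Simulation: `childOf` plays the role of a unique index on the parent
// reference field; parentId -> childId.
var childOf = new Map();

function appendToChain(parentId, childId) {
  if (childOf.has(parentId)) {
    // Second writer loses, exactly like a unique-index violation in the db.
    throw new Error('parent ' + parentId + ' already has a child');
  }
  childOf.set(parentId, childId);
  return childId;
}

appendToChain('root', 'a');   // first writer wins
var failed = false;
try {
  appendToChain('root', 'b'); // concurrent second writer
} catch (e) {
  failed = true;              // rejected: the chain stays linear
}
console.log(failed);              // → true
console.log(childOf.get('root')); // → a
```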


Actually, I get the same error as when I pass the JS function directly:

"Invalid c3 type: metadata.Lambda is NOT a typesys.lambda_, at JSON document at 1:186"


      // .doNotThrowOnFailure()
      .timeout(5000) // 5s
      .synchronize(Lambda.fromJavaScript(function lockTest(key) {
        // assuming the server-side log API; the call was truncated in the original
        log.info('## LOCK TEST {x}', { x: key });
      }));


May work, sounds better, thanks.


Strange, this code managed to create the first two objects (root and another):

    var pmrh = PointMeasurementReportHeader.create({
        evalAt: evalAt,
        start: startEval,
        end: endEval,
        prev: prev ? { id: } : // NOTE: @db(unique)
            PointMeasurementReportHeader.create({ id: 'root' })
    });

But whenever I try again, I get an error

Error: Write failed: Object with same prev already exists in type PointMeasurementReportHeader for unique index: C3_2_PTMESRPTHDR_U_1 for object with id 09c18bfe-06a6-43f8-9811-7de112f7bdd0. Please change to unique values.

and no new objects are created (the first two are all that fetch returns [1]).
There is no object in the DB with the id or prev mentioned in the error.
There are no other writes (create or otherwise) in the code but these two.
This is the start of type declaration:

entity type PointMeasurementReportHeader schema name "PTMESRPTHDR" {

I kept entity, will see what happens if I remove it.


VLog.strIP('{x}', {x: PointMeasurementReportHeader.fetch().objs}) ==>
16:47:18.789 "[
  {
    "type": "PointMeasurementReportHeader",
    "evalAt": "2019-02-14T16:03:55.568+01:00",
    "start": "2017-01-01T00:00:00.000",
    "end": "2017-01-08T00:00:00.000",
    "prev": {
      "id": "root"
    },
    "id": "bb67ce36-e145-4117-b524-7d4ac992ab4f",
    "version": 1,
    "meta": {
      "tenantTagId": 33,
      "tenant": "engie-vertuoz",
      "tag": "test2",
      "created": "2019-02-14T15:03:56.000Z",
      "createdBy": "",
      "updated": "2019-02-14T15:03:56.000Z",
      "updatedBy": "",
      "timestamp": "2019-02-14T15:03:56.000Z"
    }
  },
  {
    "type": "PointMeasurementReportHeader",
    "id": "root",
    "version": 1,
    "meta": {
      "tenantTagId": 33,
      "tenant": "engie-vertuoz",
      "tag": "test2",
      "created": "2019-02-14T15:03:56.000Z",
      "createdBy": "",
      "updated": "2019-02-14T15:03:56.000Z",
      "updatedBy": "",
      "timestamp": "2019-02-14T15:03:56.000Z"
    }
  }
]"

EDIT: I found the root cause: I used order: 'descending(evalAt)' when fetching the object to become prev, where evalAt is a timestamp that did not exist on the root object. The fetch nevertheless returned root as the latest, so the second create was trying to bind prev to root again (although not to the random-id object). After setting evalAt of the root to '1970-01-01', it works with @db(unique=['prev']).
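The failure mode is easy to reproduce in plain JavaScript: when the sort key is missing on one object, "latest by evalAt" can silently return that object. A defensive version filters first. This is a hypothetical helper, not platform code; the field names mirror the type above:

```javascript
// Hypothetical helper: pick the latest object by evalAt, ignoring objects
// on which the field is missing (like the 'root' header before the backfill).
function latestByEvalAt(objs) {
  return objs
    .filter(function (o) { return o.evalAt !== undefined; })
    .sort(function (a, b) { return b.evalAt < a.evalAt ? -1 : 1; })[0];
}

var headers = [
  { id: 'root' },                                  // no evalAt before the fix
  { id: 'h1', evalAt: '2019-02-14T16:03:55.568Z' }
];

console.log(latestByEvalAt(headers).id); // → h1, never 'root'
```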