Benchmark timing/performance of c3 methods


Is there a good/recommended way to benchmark the timing of a method in C3, other than running a for loop with many iterations in the JS console and taking the average? Is there any convenience function I can leverage for this?

  1. I would write a little method on a C3 type that you can provision and run server-side, so you don’t account for latency and data transfer between your laptop and the server.
  2. The first call is typically slower than the rest, so if you are interested in the performance of an action that will be executed often, I’d recommend capturing multiple runs and reporting the average of the fastest 3 calls (IPython’s %timeit does it that way).
  3. Note that if Splunk is available, you can simply run the action many times and inspect its performance (with a lot of detail) in the Action Profiler tab, which gives you average, min, max, percentiles, etc.
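
Point (2) can be sketched in plain JavaScript (runnable in Node; `averageOfFastest` and the stand-in workload are illustrative names, not C3 APIs):

```javascript
// Time many runs and average the fastest few, which discounts the slow
// first (warm-up) call. `fn` stands in for the action being benchmarked.
function averageOfFastest(fn, iterations, keep) {
  var timings = [];
  for (var i = 0; i < iterations; i++) {
    var start = Date.now();
    fn();
    timings.push(Date.now() - start); // elapsed wall-clock ms for this run
  }
  timings.sort(function (a, b) { return a - b; }); // fastest first
  var fastest = timings.slice(0, keep);
  return fastest.reduce(function (s, t) { return s + t; }, 0) / fastest.length;
}

// Example: average of the 3 fastest out of 10 runs of a dummy workload
var avgMs = averageOfFastest(function () {
  for (var j = 0; j < 1e6; j++) {} // stand-in workload
}, 10, 3);
```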


Going off @lpoirier’s point (1), this can be done server-side using the C3.util.StopWatch utility type. Something along these lines could work:

function benchMarkMyMethod(iterations) {
  var stopWatch = new C3.util.StopWatch('sw');
  _.times(iterations, function (num) {
    if (num === 0) {
      stopWatch.start();
    }

    // invoke the method being benchmarked here

    if (num === iterations - 1) {
      stopWatch.stop();
    } else {
      stopWatch.lap();
    }
  });

  // toString() yields a comma-separated string of name=value entries
  var times = stopWatch.toString().split(",").map(function (r) {
    return r.split("=");
  });

  // the first entry carries the timer name; its value is the total time
  var total = parseFloat(times.shift()[1]);

  // remaining entries are the per-lap timings
  var runs = _.map(times, function (r) {
    return parseFloat(r[1]);
  });

  var min = _.min(runs);
  var max = _.max(runs);
  var avg = total / runs.length;

  return {
    min: min,
    max: max,
    avg: avg
  };
}

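The parsing above assumes StopWatch.toString() produces a comma-separated name=value string (e.g. "sw=60,lap1=10,..."); that assumption, and the min/max/avg arithmetic, can be checked with the same pipeline in plain Node:

```javascript
// Same split/parse pipeline as benchMarkMyMethod, run against a sample
// string in the "name=value" format the snippet assumes StopWatch.toString()
// produces (an assumption -- verify against your server's actual output).
function parseStopWatch(s) {
  var times = s.split(",").map(function (r) { return r.split("="); });
  var total = parseFloat(times.shift()[1]);      // first entry: total time
  var runs = times.map(function (r) { return parseFloat(r[1]); });
  return {
    min: Math.min.apply(null, runs),
    max: Math.max.apply(null, runs),
    avg: total / runs.length
  };
}

var stats = parseStopWatch("sw=60,lap1=10,lap2=20,lap3=30");
// stats.min === 10, stats.max === 30, stats.avg === 20
```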

One could also use the JSMapReduce framework to run a large number of iterations of a C3 method in parallel in a reasonably short period of time, and store the average execution time for further analysis/comparison. To do so, one can use the snippet below:

    var map = function (batch, objs, job) {
        var executionTime = new C3.typesys.Mapp(new C3.typesys.MappType(
                C3.typesys.PrimitiveType.String, C3.typesys.PrimitiveType.Double));

        var my_ids = [objs[0].id, objs[1].id];
        var results = PiTag.evalMetrics({
            ids: my_ids,
            expressions: ["HotTagMeasurements"],
            interval: "DAY",
            start: "2018-01-01",
            end: "2018-12-01"
        });

        var ts1 = results.result.get(my_ids[0]).HotTagMeasurements;
        var ts2 = results.result.get(my_ids[1]).HotTagMeasurements;

        var t_s = Date.now(); // wall-clock start (ms)
        var pc = PiTag.pearsonCorrelationAsDouble(ts1, ts2);
        var t_e = Date.now(); // wall-clock end (ms)

        executionTime.set('time', t_e - t_s);

        return executionTime;
    };

    var reduce = function (outKey, interValues, job) {
        var count = 0;
        interValues.each(function (c) {
            count += c;
        });
        return [count / interValues.length];
    };

    var spec = JSMapReduceSpec.make({
        targetType: {
            typeName: "PiTag"
        },
        include: "id, isPredictionTarget, meta.created",
        filter: "exists(id) && isPredictionTarget=='true'",
        limit: 100,
        order: "meta.created",
        batchSize: 2,
        map: map.toString(),
        reduce: reduce.toString()
    });

    var job = JS.mapReduce(spec);

The resulting JSMapReduceJob can be monitored while it runs.


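One simple pattern is to poll the job’s status until it reaches a terminal state. The exact status call (e.g. `job.status()`) is an assumption here, so the sketch below takes a generic status callback:

```javascript
// Generic polling helper: calls getStatus() until a terminal state or the
// poll budget is exhausted. In a C3 console, getStatus might wrap something
// like job.status() -- treat that call as an assumption, not confirmed API.
function waitForJob(getStatus, maxPolls) {
  for (var i = 0; i < maxPolls; i++) {
    var state = getStatus();
    if (state === 'completed' || state === 'failed') {
      return state;
    }
  }
  return 'running'; // still in flight after maxPolls checks
}

// Example with a stubbed status source that completes on the third check
var calls = 0;
var finalState = waitForJob(function () {
  calls += 1;
  return calls < 3 ? 'running' : 'completed';
}, 10);
// finalState === 'completed'
```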

Once the job is complete, the saved results (in this case, the average execution time) can be accessed via the ReduceResult type:

    ReduceResult.fetch({
        filter: Filter.eq('', job.get('').at('')) // fill in the field names for your job
    });


Check out the MaximGun and MaximGunOptions types. They are used for performance testing.

In the options you specify the action to run and the count.


There also appears to be a TestApiPsr type that is geared toward running PSR tests that simulate multiple users running the same command against an environment.