Is it possible to use the distributed computation resources of Hadoop clusters (e.g., Spark, MLlib) with C3?
C3 distributes your workload across its own cluster. At the simplest level, you can use batch and event processing.
An example of how to use MapReduce is available in the platform documentation:
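For context on what the platform's MapReduce support models, here is a minimal sketch of the MapReduce pattern itself in plain Python. This is not the C3 API; the function names and the word-count task are illustrative assumptions only.

```python
# Illustrative sketch of the MapReduce pattern (NOT the C3 API):
# word count as a map phase, a shuffle/group phase, and a reduce phase.
from collections import defaultdict
from functools import reduce

def map_phase(docs):
    # Emit a (word, 1) pair for every word in every document.
    for doc in docs:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Group the emitted pairs by key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the grouped counts for each word.
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["spark on hadoop", "hadoop hadoop"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'spark': 1, 'on': 1, 'hadoop': 3}
```

In a real distributed runtime the map and reduce phases run in parallel across the cluster, with the shuffle handled by the framework.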
As of 7.8, Spark is not offered as a hosted service in C3.