I’m trying to load data with ~200 columns into a local Docker container. When I cURL files of 5,000 records to the /import API for my Canonical, the import creates and persists records for about 90% of the data and then hangs with a SourceFile.status of “initial” and DataIntegStatuses of “initial” for the CHUNK task, “processing” for the TARGET:SOURCE task, and “completed” for the TRANSFORM task. The imports hang indefinitely in this state, with no tasks listed in Cluster.actionDump().
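For reference, this is roughly the shape of each request I’m sending; the host, port, auth header, and file name are placeholders from my local setup, not necessarily the product’s defaults:

```bash
# Hypothetical sketch of the request shape; host, port, token,
# and file name are placeholders from my local Docker setup.
curl -X POST "http://localhost:8080/import" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: text/csv" \
  --data-binary @records_5000.csv
```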
To rule out a problem with the way my environment is configured, I experimented with other record counts. Files of 25 records process to completion; files of more than 40 records hang exactly like the 5,000-record set. If 25 records is my effective batch size and I have to cURL each chunk individually, the full load would take about 27 hours and require over 32,000 cURLed files, which is what the workaround below would look like.
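Here is a minimal sketch of that workaround, assuming a CSV source with a header row; the endpoint, token, and file names are again placeholders from my setup:

```bash
# Hypothetical workaround sketch: split a large CSV into 25-record chunks,
# re-attach the header line to each chunk, and POST them one at a time.
head -n 1 records_full.csv > header.csv
# -a 4 widens the suffix so split can produce 32,000+ chunk files
tail -n +2 records_full.csv | split -a 4 -l 25 - chunk_
for f in chunk_*; do
  cat header.csv "$f" > payload.csv
  curl -X POST "http://localhost:8080/import" \
    -H "Authorization: Bearer $API_TOKEN" \
    -H "Content-Type: text/csv" \
    --data-binary @payload.csv
done
```

At roughly three seconds per request, this loop is exactly the 27-hour process I’m hoping to avoid.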
Is this a known issue, is there a workaround, or is there a problem with my configuration? I’ve searched for 7.9 data integration resources without success. Any assistance is appreciated.