Reading Large Files from S3 via C3 Type System in Python

I’m trying to read a large CSV file from S3 to perform some validations. I tried the File.readLines() method, which returns a stream, but it runs out of memory. What’s the best way to read the data?

(My file in question is ~5 GB)

Thank you


@mahesh If you can run these actions on a worker, that would be best; at the very least it won’t bring an environment down.
Use File.make().readObjs({limit: 10, offset: 30, serType: <yourtype>}) to read a sample of the records from the file. For more info, check FileObjsOperSpec.
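The limit/offset arguments are the key to avoiding the OOM: pull a fixed-size page of records at a time instead of materializing the whole ~5 GB file. As a plain-Python sketch of that paging loop (this is not the C3 API; `fetch_page` is a hypothetical stand-in for a readObjs-style call):

```python
# Sketch: process a large record source in fixed-size batches so only
# one batch is in memory at a time. fetch_page stands in for a call
# like File.readObjs({limit, offset, serType}) in the C3 Type System.

def iter_batches(fetch_page, batch_size=1000):
    """Yield successive batches until the source is exhausted."""
    offset = 0
    while True:
        batch = fetch_page(limit=batch_size, offset=offset)
        if not batch:  # empty page means we've read everything
            return
        yield batch
        offset += len(batch)

# Demo with an in-memory "file" of 25 records.
records = list(range(25))

def fetch_page(limit, offset):
    return records[offset:offset + limit]

batches = list(iter_batches(fetch_page, batch_size=10))
# Yields three batches of 10, 10, and 5 records.
```

Your validations would then run inside the loop, one batch at a time, so peak memory stays bounded by `batch_size` regardless of file size.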