What is the best way to delete an S3File created by a MapReduce job from its corresponding test file?

I have a test file for a MapReduce job called 'test_MeterAnalyticResultMapExport'; the job simply converts the target type into a CSV file on the S3 file system. The fileId used inside the map function is derived from the batch parameter passed to the map function. In my test I start the job, wait for it to finish, and then delete the file created by the job using the hardcoded value batch=1 (since the batch is small and all objects are processed at once). This gets a green build in Jenkins, but I am not certain the batch number would be 1 in all cases. So I want to know the best way to delete an S3File created by a MapReduce job from its test.
The MapReduce job I am testing is MeterAnalyticResultMapExport.
The URL of the file to be deleted is built from these values:
var batch = 1;
var fileId = jobPrefix + batch; // jobPrefix is specified on the instance of the MapReduce job created

S3.deleteFiles('url of directory or file')

S3.deleteFiles is a member function, and I don't have a handle to the file system. I retrieve a handle to the file using: S3File.make({ url: url }).putField('contentEncoding', 'gzip');
Is it possible to retrieve the instance of the S3 type, which can then be used to call deleteFiles?

S3 is a singleton, so you should be able to call it directly. If not, use S3.inst().

File.make('url of file').delete()
FileSystem.deleteFiles(...)
If your export job ever runs on another cloud provider, the S3-specific code will not work, so prefer the generic File/FileSystem calls above.
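One way to sidestep the hardcoded batch=1 entirely is to delete the whole export directory for the job rather than guessing individual batch file names. A minimal sketch of the URL construction, assuming the batch files all land under a directory named after the job prefix (exportDirUrl and baseUrl are hypothetical names for illustration, not platform APIs):

```javascript
// Hypothetical helper: build the URL of the directory that holds every
// batch file (jobPrefix + 1, jobPrefix + 2, ...), so a single recursive
// deleteFiles call removes them all regardless of how many batches ran.
function exportDirUrl(baseUrl, jobPrefix) {
  // Normalize any trailing slash on the base URL before appending.
  return baseUrl.replace(/\/+$/, '') + '/' + jobPrefix;
}

// Usage sketch against the calls discussed above, e.g.:
//   FileSystem.deleteFiles(exportDirUrl('s3://bucket/exports', 'meterExport'));
```

With this shape, the test does not need to know how many batches the job produced; it only needs the jobPrefix it configured on the job instance.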