I am new to Flink. My understanding is that the following API call
StreamExecutionEnvironment.getExecutionEnvironment().readFile(format, path)
will read the files in parallel for the given S3 bucket path.
We store log files in S3. The requirement is to serve multiple client requests, each reading from a different folder identified by a timestamp. To serve these requests I am evaluating Flink, so I want Flink to read from several different AWS S3 file paths in parallel.
Is it possible to achieve this in a single Flink job? Any suggestions?
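For context, here is the pattern I am considering (a minimal sketch, not tested against a live S3 setup): create one `readFile` source per folder and `union` the resulting streams into a single `DataStream`, so one job covers all paths. The bucket name and folder paths below are hypothetical, and this assumes the Flink S3 filesystem plugin (e.g. `flink-s3-fs-hadoop`) is configured.

```java
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Arrays;
import java.util.List;

public class MultiPathS3Reader {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical folder paths, one per client request / timestamp.
        List<String> paths = Arrays.asList(
                "s3://my-bucket/logs/2023-01-01/",
                "s3://my-bucket/logs/2023-01-02/");

        // Build one file source per path, then union them into one stream.
        DataStream<String> combined = null;
        for (String p : paths) {
            DataStream<String> stream =
                    env.readFile(new TextInputFormat(new Path(p)), p);
            combined = (combined == null) ? stream : combined.union(stream);
        }

        combined.print();
        env.execute("multi-path-s3-read");
    }
}
```

Each source is itself parallelized across the task slots of the job, so the folders are read concurrently within the single job.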