Flink provides an HDFS connector that can be used to write data to any file system supported by Hadoop's FileSystem abstraction.
The provided sink is a BucketingSink, which partitions the data stream into folders containing rolling part files. Both the bucketing behavior and the writing can be configured with parameters such as the batch size and the batch rollover interval.
The Flink documentation gives the following example:
import java.time.ZoneId;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.SequenceFileWriter;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

DataStream<Tuple2<IntWritable, Text>> input = ...;

// The sink's type parameter must match the input stream's element type.
BucketingSink<Tuple2<IntWritable, Text>> sink = new BucketingSink<>("/base/path");
sink.setBucketer(new DateTimeBucketer<>("yyyy-MM-dd--HHmm", ZoneId.of("America/Los_Angeles")));
sink.setWriter(new SequenceFileWriter<IntWritable, Text>());
sink.setBatchSize(1024 * 1024 * 400);          // roll part files at 400 MB...
sink.setBatchRolloverInterval(20 * 60 * 1000); // ...or after 20 minutes, whichever comes first
input.addSink(sink);
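The DateTimeBucketer shown above buckets records into time-based folders, but the bucketing behavior can also be customized by implementing the connector's Bucketer interface, whose getBucketPath method decides which folder each element lands in. As a minimal sketch (KeyBucketer is a hypothetical name, not part of Flink), a bucketer that partitions by the tuple's key instead of by time might look like this:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.connectors.fs.Clock;
import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Hypothetical Bucketer that groups records by key rather than by time,
// producing folders such as /base/path/key-7.
public class KeyBucketer implements Bucketer<Tuple2<IntWritable, Text>> {

    private static final long serialVersionUID = 1L;

    @Override
    public Path getBucketPath(Clock clock, Path basePath, Tuple2<IntWritable, Text> element) {
        // Derive the bucket folder from the element's key (the tuple's first field).
        return new Path(basePath, "key-" + element.f0.get());
    }
}

It would then be registered on the sink in place of the DateTimeBucketer, via sink.setBucketer(new KeyBucketer()). Note that a Bucketer must be serializable, since Flink ships it to the task managers with the sink.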