I would like to read and write some data from S3 with Apache Flink 1.11.2. The documentation recommends using the Presto plugin for checkpoints and the Hadoop plugin for pipeline data.

After reading this section, I understand that you have to copy the plugins from the opt/ directory into the plugins/ directory. I can find flink-s3-fs-presto-1.11.2.jar under opt/, but there is no flink-s3-fs-hadoop-1.11.2.jar. Where can I find the s3-hadoop plugin for setting up my production environment?
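For context, the copy step I mean looks roughly like this (a sketch following the documentation's convention that each plugin gets its own subdirectory under plugins/; the subdirectory name is my choice):

```sh
# Run from the root of the Flink distribution.
# Each plugin must sit in its own subdirectory under plugins/.
mkdir -p ./plugins/s3-fs-presto
cp ./opt/flink-s3-fs-presto-1.11.2.jar ./plugins/s3-fs-presto/
```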

And how can I use these plugins in the IDE? Is it enough to simply add them to pom.xml as provided dependencies? And how can I then pass the credentials in the IDE?

1 Answer

That is weird; both jars should be present under opt/ in the official binaries (I can see them there in 1.11.1). However, if you can't find them, you can simply download the jars from Maven Central and copy them to the required place, i.e. into their own subdirectory under plugins/. Another thing that may work is adding the dependency to the project with compile scope.
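As a sketch of the compile-scope approach, the Maven coordinates for the Hadoop-based S3 filesystem are org.apache.flink:flink-s3-fs-hadoop (the Presto variant is flink-s3-fs-presto):

```xml
<!-- Hadoop-based S3 filesystem. Compile scope is the Maven default,
     so the jar ends up on the classpath when running from the IDE. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-s3-fs-hadoop</artifactId>
    <version>1.11.2</version>
</dependency>
```

For a cluster deployment, the plugin mechanism described above is still the recommended route.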

Running the job locally is described here. There are various ways of configuring the credentials when running the job in the IDE; one might be adding a core-site.xml file with the proper configuration to the resources folder.
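A minimal sketch of such a file, assuming the Hadoop/s3a filesystem and static credentials (the placeholder values are yours to fill in):

```xml
<?xml version="1.0"?>
<!-- src/main/resources/core-site.xml -->
<configuration>
    <!-- Static credentials for the s3a filesystem used by flink-s3-fs-hadoop. -->
    <property>
        <name>fs.s3a.access.key</name>
        <value>YOUR_ACCESS_KEY</value>
    </property>
    <property>
        <name>fs.s3a.secret.key</name>
        <value>YOUR_SECRET_KEY</value>
    </property>
</configuration>
```

Environment variables or the default AWS credential chain may also work, depending on your setup.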

EDIT: Local execution is explained in a bit more detail here.