1 vote

Folks,

We have a few Flink jobs, each built as a separate executable JAR.

Each of these Flink jobs uses the following to run:

>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
>     try {
>         env.execute("FLINK-JOB");
>     } catch (Exception ex) {
>         // Some message
>     }

But when we deploy these Flink jobs (5 in all), only one runs and the others shut down.

We deploy them via bin/flink run.

Thanks much.

What do you see in the jobmanager logs? - David Anderson
Do you run out of resources? E.g. not enough slots? - TobiSH
You're probably better off not catching exceptions from env.execute(). If an exception is thrown there and you let it propagate, Flink will write it to the logs and generally make it easy for you to see. - CantankerousBullMoose

2 Answers

0 votes

I guess you may be using the default startup method of Flink standalone, via bin/start-cluster.sh and bin/stop-cluster.sh. This method relies on conf/masters and conf/workers to determine the number of cluster component instances; by default there is only one TaskManager, with one slot.

With a single slot, only one job with parallelism 1 can run at a time (and a job with parallelism greater than 1 cannot be scheduled at all). When you do not have enough TaskManagers (slots), you cannot run all of your jobs, since each job needs at least one slot.

You can add TaskManagers (slots) by following the steps in the Flink documentation; a configuration sketch follows below.

Flink documentation link
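For example, a minimal sketch of the standalone configuration change, assuming a default Flink distribution layout (the slot count of 5 here is only an assumption chosen to match the 5 jobs):

    # conf/flink-conf.yaml
    # The default is 1 slot per TaskManager; 5 jobs with parallelism 1
    # need at least 5 slots in total across the cluster.
    taskmanager.numberOfTaskSlots: 5

Alternatively, you can add TaskManager instances to a running standalone cluster:

    # Run once per extra TaskManager process, then check the available
    # slots in the web UI before resubmitting the jobs.
    bin/taskmanager.sh start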

0 votes

This might be because you are using the same job name in env.execute("FLINK-JOB"). Try making the name different for each of your 5 jobs. Alternatively, you can pass the job name as a parameter when deploying the Flink job and use env.execute(params.get("your-job-name")); a sketch follows below. Having a unique job name per job should help. Thanks.
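A minimal sketch of that suggestion, assuming the name is passed as --job-name on the command line (ParameterTool is Flink's bundled argument parser; the class and argument names here are illustrative, not from the original question):

    import org.apache.flink.api.java.utils.ParameterTool;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class MyFlinkJob {
        public static void main(String[] args) throws Exception {
            // Read --job-name from the command-line arguments, with a fallback
            ParameterTool params = ParameterTool.fromArgs(args);
            String jobName = params.get("job-name", "FLINK-JOB");

            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // ... define sources, transformations, and sinks here ...

            // Letting exceptions propagate (per the comment above) makes
            // failures visible in the Flink logs instead of being swallowed.
            env.execute(jobName);
        }
    }

Each job would then be deployed with something like bin/flink run -c MyFlinkJob myjob.jar --job-name job-1 (the class and JAR names are hypothetical).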