First of all, I am running Flink in standalone mode!
I have been trying to find a configuration option for this, but I haven't found anything.
In Spark there are configuration options that let you limit the number of CPUs used on each slave:
- SPARK_WORKER_CORES (worker configurations)
- spark.executor.cores (cluster configuration)
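To make the comparison concrete, this is roughly where those two Spark settings go (the values 4 and 2 are only example numbers, not a recommendation):

```
# conf/spark-env.sh -- per-worker limit (standalone mode)
SPARK_WORKER_CORES=4

# conf/spark-defaults.conf -- per-executor limit
spark.executor.cores 2
```

So in Spark you can cap the cores a worker offers, and separately cap the cores each executor takes.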
But in Flink you can only set the maximum memory to use and the number of task slots (which just divides that memory), as stated in the official documentation:
- taskmanager.numberOfTaskSlots: The number of parallel operator or user function instances that a single TaskManager can run (DEFAULT: 1). If this value is larger than 1, a single TaskManager takes multiple instances of a function or operator. That way, the TaskManager can utilize multiple CPU cores, but at the same time, the available memory is divided between the different operator or function instances. This value is typically proportional to the number of physical CPU cores that the TaskManager’s machine has (e.g., equal to the number of cores, or half the number of cores).
And here, more directly related to my question:
Each task slot represents a fixed subset of resources of the TaskManager. A TaskManager with three slots, for example, will dedicate 1/3 of its managed memory to each slot. Slotting the resources means that a subtask will not compete with subtasks from other jobs for managed memory, but instead has a certain amount of reserved managed memory. Note that no CPU isolation happens here; currently slots only separate the managed memory of tasks.
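For reference, this is a sketch of how those two knobs look in flink-conf.yaml (the key names are taken from the documentation quoted above; the values are just illustrative):

```
# flink-conf.yaml (sketch; values are examples only)

# Total heap memory for each TaskManager process
taskmanager.heap.mb: 4096

# Four slots -> each slot gets 1/4 of the managed memory,
# but, per the docs above, no CPU limit is enforced per slot
taskmanager.numberOfTaskSlots: 4
```

So slots partition memory, but nothing here corresponds to SPARK_WORKER_CORES or spark.executor.cores.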
Thanks!!