I am trying to figure out how to set up a work queue with Argo. The Argo Workflows are computationally expensive. We need to plan for many simultaneous requests. The workflow items are added to the work queue via HTTP requests.
The flow can be illustrated like this:
client
=> hasura # user authentication
=> redis # work queue
=> argo events # queue listener
=> argo workflows
=> redis + hasura # inform that workflow has finished
=> client
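For reference, the piece between the queue and the workflows would, if I understand Argo Events correctly, be a Sensor that submits one Workflow per queue item (Argo Events can apparently subscribe to a Redis channel as an event source). This is only a sketch under my own assumptions: the EventSource name (redis-queue), the event name (jobs), and the container are placeholders.

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: work-queue-sensor            # assumed name
spec:
  dependencies:
    - name: queue-item
      eventSourceName: redis-queue   # assumed Redis EventSource
      eventName: jobs                # assumed event name
  triggers:
    - template:
        name: run-expensive-workflow
        argoWorkflow:
          operation: submit          # submit a new Workflow per incoming event
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: expensive-job-
              spec:
                entrypoint: main
                templates:
                  - name: main
                    container:
                      image: alpine:3.19   # placeholder for the expensive job
                      command: [sh, -c, "echo processing one work item"]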
I have never built a K8s cluster that exceeds its resources. Where do I limit the execution of workflows? Or do Argo Events and Argo Workflows limit them according to the resources available in the cluster?
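The closest thing I have found so far is the controller-wide parallelism setting plus per-workflow semaphores, but I am not sure this is the intended mechanism for my case. A sketch of what I mean (the limit values and the semaphore-config ConfigMap are assumptions of mine):

# workflow-controller-configmap: caps how many Workflows run concurrently in total
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  parallelism: "10"                  # assumed value; caps total Workflows running at once
---
# per-workflow semaphore taken from a ConfigMap key, limiting one class of workflows
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: expensive-job-
spec:
  entrypoint: main
  synchronization:
    semaphore:
      configMapKeyRef:
        name: semaphore-config       # assumed ConfigMap holding the limit, e.g. expensive: "2"
        key: expensive
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [sh, -c, "echo heavy work"]

If this is the right knob, my remaining question is what happens to items that arrive while the cap is reached.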
The example above could probably be simplified to the following, but the problem remains: what happens when the processing queue is full? (A sketch of the HTTP listener follows the diagram.)
client
=> argo events # HTTP request listener
=> argo workflows
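The HTTP listener in this simplified flow would, as far as I understand, be a webhook EventSource along these lines (name, port, and endpoint are my own placeholders), and my worry is what it does with incoming requests once the workflow cap above is reached:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: workflow-requests            # assumed name
spec:
  webhook:
    submit:                          # assumed event name, referenced by the Sensor dependency
      port: "12000"
      endpoint: /submit
      method: POST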