3 votes

I am working on a multi-tenant service that will process long-running jobs from multiple tenants.

I am considering Service Bus queues to hold requests from tenants. The background job processor can process a limited number of jobs at any given time, and it has to give equal priority to jobs from all tenants. We don't want any single tenant to monopolize the processor's resources; instead, the processor should handle requests in a round-robin way.

A simple Service Bus queue is not the right solution, so I was thinking about partitioned queues, with each partition holding requests from one tenant. Does this sound like a correct solution, or do I need to create a separate queue for each tenant? Does a partitioned queue have the capability to return messages in a round-robin way, or can the processor somehow check each partition one by one to see if there is a request to process?

Have a look at topics/subscriptions. – Thomas
What @Thomas said. That way you can assign different processing power to each topic if needed. It would require scaling out processing per tenant, though; depending on the number of tenants, that might not be the best option. – Sean Feldman
I'm not sure how topics/subscriptions would solve the problem. I can't scale out processing due to cost limitations, so I have to multiplex the tenants' requests somehow. – Usman Khan
The topic gives you one endpoint for all the messages. You can create a subscription per tenant (with a filter); then you can have a process per tenant/subscription. – Thomas
My situation is the opposite: I have a single process that can handle, say, only 5 requests concurrently. If 5 tenants create 20 requests at more or less the same time, the request processor should process one request from each tenant before revisiting a tenant to process that tenant's subsequent requests. – Usman Khan
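The interleaving described in the comments can be sketched independently of any queueing service. This is a minimal in-memory illustration (tenant names and the `round_robin` helper are hypothetical, not part of any Service Bus API): jobs are grouped per tenant, and the dispatcher yields one job per tenant per cycle, so no single tenant's backlog starves the others.

```python
from collections import deque
from itertools import cycle

def round_robin(jobs_by_tenant):
    """Yield jobs one tenant at a time, cycling through tenants,
    so a tenant with a large backlog cannot starve the others."""
    queues = {t: deque(jobs) for t, jobs in jobs_by_tenant.items()}
    tenants = cycle(list(queues))
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        tenant = next(tenants)
        if queues[tenant]:          # skip tenants whose queue is drained
            yield queues[tenant].popleft()
            remaining -= 1

jobs = {"tenant-a": ["a1", "a2", "a3"], "tenant-b": ["b1"], "tenant-c": ["c1"]}
print(list(round_robin(jobs)))  # ['a1', 'b1', 'c1', 'a2', 'a3']
```

In a real deployment the same cycling logic would sit in front of per-tenant queues (or subscriptions), pulling at most one message per tenant per pass and feeding the 5 available processing slots.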

1 Answer

0 votes

If Service Bus is not the only choice, then depending on your requirements, Redis may or may not be a good fit. Based on my experience, you could create one Redis list per tenant and cycle through the lists in a circular (round-robin) fashion.
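A sketch of that idea, without assuming a running Redis instance: Python deques stand in for the per-tenant Redis lists (with redis-py you would enqueue with `LPUSH jobs:<tenant>` and dequeue with `RPOP jobs:<tenant>`, which together give the same FIFO order as `append`/`popleft` below). The key names and helper functions are illustrative, not a fixed convention.

```python
from collections import deque

# Stand-ins for Redis lists keyed "jobs:<tenant>".
lists = {
    "jobs:tenant-a": deque(),
    "jobs:tenant-b": deque(),
}

def enqueue(key, job):
    lists[key].append(job)  # with redis-py: r.lpush(key, job)

def drain_round_robin():
    """Circularly scan the tenant keys, popping at most one job per
    tenant per pass, until every list is empty."""
    processed = []
    while any(lists.values()):
        for key in lists:                          # one pass over all tenants
            if lists[key]:
                processed.append(lists[key].popleft())  # r.rpop(key)
    return processed

enqueue("jobs:tenant-a", "a1")
enqueue("jobs:tenant-a", "a2")
enqueue("jobs:tenant-b", "b1")
print(drain_round_robin())  # ['a1', 'b1', 'a2']
```

With real Redis, the scan order over the key set plays the role of the circular rotation; each worker pass takes at most one job per tenant, which is exactly the fairness the question asks for.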