1
votes

I want to create a custom Azure Logic App that does some heavy processing, and I am reading as much as I can about this. I want to describe what I wish to do as I currently understand it, and I am hoping someone can point out where my understanding is incorrect, or point out a more ideal way to do this.

What I want to do is take an application that runs a heavy computational process on a 3D mesh and turn it into a node to use in Azure Logic App flows.

What I am thinking so far, in a basic form, is this:

  1. HTTP Trigger App: This logic app receives a reference to a 3D mesh to be processed, saves the mesh to Azure Storage, and passes that storage reference to the next logic app.
  2. Mesh Computation Process App: This receives the Azure Storage reference to the 3D mesh. It launches a high-performance server with many CPUs and GPUs; that server downloads the mesh, processes it, and uploads the result back to Azure Storage. This app then passes the reference to the processed mesh to the next logic app. Finally, it shuts down the high-performance server so it doesn't consume resources unnecessarily.
  3. Email Notification App: This receives the Azure Storage reference to the processed mesh, then sends the user an email with the download link.
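To make the data flow between the steps concrete, here is a minimal stdlib-only sketch of step 1, with a plain dict standing in for Azure Storage (the names `http_trigger` and `meshRef` are made up for illustration):

```python
import hashlib
import json

def http_trigger(body: bytes, storage: dict) -> str:
    """Step 1 stand-in: save the incoming mesh bytes to storage and
    return the reference that gets passed to the next logic app."""
    # Content-addressed name, so re-uploading the same mesh dedupes.
    ref = "meshes/" + hashlib.sha256(body).hexdigest()[:12] + ".obj"
    storage[ref] = body
    return json.dumps({"meshRef": ref})
```

In the real flow the dict would be a Blob container and the returned JSON would be the response body the next app consumes.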

Is this possible? From what I've read so far it appears to be; I just want someone to verify this in case I've severely misunderstood something.

Also, I am hoping to get a little guidance on the mechanism for launching and shutting down a high-performance server within the 'Mesh Computation Process App'. The only place the Azure documentation mentions asynchronous, long-running task processing in Logic Apps is this page: https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-create-api-app

It says you need to launch an API App or a Web App to receive the Azure Logic App request and then report status back to Logic Apps. I was wondering: is it possible to do this in a serverless manner? The 'Mesh Computation Process App' would fire off an Azure Function that spins up the high-performance server; another Azure Function would periodically ping that server for status until it completes; at that point a final Azure Function would trigger the server to shut down and signal the 'Mesh Computation Process App' that it is complete, so it can continue on to the next logic app. Is it possible to do it in that manner?
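The spin-up / poll / shut-down sequence described above can be sketched as plain control flow; the three callables below stand in for the hypothetical Azure Functions that would do the real work:

```python
import time

def run_compute_job(start_server, get_status, stop_server,
                    poll_interval=0.0, max_polls=100):
    """Launch the high-performance server, poll it until it reports
    'complete', then always shut it down so it doesn't keep billing."""
    server_id = start_server()
    try:
        for _ in range(max_polls):
            if get_status(server_id) == "complete":
                return True
            time.sleep(poll_interval)
        return False  # gave up; the caller should handle the timeout
    finally:
        stop_server(server_id)  # runs on success, timeout, or error
```

In practice each callable would be its own Azure Function (the poller typically a timer trigger) and `server_id` the VM's resource ID; this sketch only shows the control flow, not the Azure SDK calls.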

Any comments or guidance on how to better approach or think about this would be appreciated. This is the first time I've dug into Azure, so I am simultaneously trying to orient myself in Azure and design a system like this.


1 Answer

2
votes

Should be possible. At the moment I'm not exactly sure whether Logic Apps themselves can create all of those things for you, but it can definitely be done with Azure Functions, in a serverless manner.

For your second step, if I understand correctly, you need it to run for a long time just so it can pass something along once the VM is done? You don't really need that. In a serverless design, try not to think in terms of long-running tasks, and remember that everything is an event.

Putting something into Azure Blob storage is an event you can react to, which removes your need for explicit linking between the apps.

Your first app saves the mesh to Azure Storage, and that's it; it doesn't need to do anything else.

Your second app triggers on the inserted blob to initiate processing.

The VM processes your mesh and puts the result back in storage.

The email app triggers on files landing in the "processed" folder. Another app triggers on the same file to shut down the VM.
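The event-driven chain described above can be sketched with an in-memory stand-in for Blob storage and its triggers (container names and handler names here are all made up for illustration):

```python
handlers = {}

def on_blob(container):
    """Register a handler for blobs landing in a container,
    mimicking a Blob trigger."""
    def register(fn):
        handlers.setdefault(container, []).append(fn)
        return fn
    return register

def put_blob(store, container, name, data):
    """Stand-in for a blob upload: store it, then fire the triggers."""
    store[(container, name)] = data
    for fn in handlers.get(container, []):
        fn(store, name, data)

@on_blob("incoming")
def start_processing(store, name, data):
    # Stand-in for the VM's heavy mesh work.
    put_blob(store, "processed", name, data.upper())

@on_blob("processed")
def send_email(store, name, data):
    store[("emails", name)] = f"download link for {name}"

@on_blob("processed")
def shut_down_vm(store, name, data):
    store[("vm", "state")] = "deallocated"
```

Uploading to "incoming" cascades through processing, email, and VM shutdown without any app directly calling the next one, which is the point of the event-driven design.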

This way you remove the long-running state management and the direct chaining of the apps; each one does only what it needs to do, and the next app triggers automatically on the results of the previous flow.

If you do need some kind of state management/orchestration across all of your steps, and you still want to stay serverless, look into Durable Azure Functions. They are serverless, but the actions they take and the results they get are stored in Table storage, so an orchestration can be recreated and restored to the state it was in before. All of that is done for you automatically; it just changes a bit what exactly you can do inside the function for it to remain durable. One piece of state management you might actually want is keeping track of all the VMs and trying to reuse them, instead of spending time spinning them up and killing them. But don't complicate it too much for now.

https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
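For a feel of the shape: Durable Functions orchestrators in Python are generators that yield activity calls, and the runtime replays them from stored history. Here is a stdlib-only imitation of that pattern (the real API lives in the `azure.durable_functions` package; the activity names below are hypothetical):

```python
def orchestrate():
    """Generator orchestrator: each yield is an activity call whose
    result the framework records, so the flow can be replayed."""
    vm_id = yield ("start_vm", None)
    mesh_ref = yield ("process_mesh", vm_id)
    yield ("send_email", mesh_ref)
    yield ("stop_vm", vm_id)

def run_orchestration(activities):
    """Tiny driver standing in for the Durable Functions runtime."""
    history = []
    gen = orchestrate()
    try:
        request = next(gen)
        while True:
            name, arg = request
            result = activities[name](arg)
            history.append((name, result))
            request = gen.send(result)
    except StopIteration:
        return history
```

Each activity would be a separate Azure Function in the real thing; the driver here just shows how results feed back into the generator.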

Of course you still need to think about error handling: what happens if your VM just dies without uploading anything? You don't want to silently lose work, so you can trigger special flows to handle retries and errors, send different emails, and so on.
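A minimal sketch of that retry idea: wrap the flaky step, and hand permanent failures to a separate error flow instead of dropping them (the `on_give_up` callback stands in for the "different email" path; all names are illustrative):

```python
def with_retries(step, attempts=3, on_give_up=None):
    """Run a flaky step up to `attempts` times; if it never succeeds,
    route the failure to an error-handling flow instead of losing it."""
    last_error = None
    for _ in range(attempts):
        try:
            return step()
        except Exception as exc:
            last_error = exc
    if on_give_up is not None:
        on_give_up(last_error)  # e.g. trigger a failure-notification flow
    raise last_error
```

Durable Functions has built-in retry options for activity calls that serve the same purpose; this just shows the pattern in isolation.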