Developer Notes

Cross section of a shell showing exponential growth. Picture by Pixabay

Serverless message processing in Azure

When given the task of implementing a message handler for Service Bus in Azure, an obvious choice is to go with serverless functions. But when balancing cost, security, and complexity, there may be other alternatives that are worthwhile, especially if you already have a platform that can scale to zero, like Azure Kubernetes Service or Azure Container Apps.

Case study

A while back I hit this very issue. My team wanted to start adopting event-driven processing to replace the batch processing currently in use all over the place. The application, a monolith, had already been lifted to Azure and was now running in AKS. Work was underway to modernize the application and break it up into more manageable slices. The time was right. 

Then a story popped up on the board that was perfect to serve as a pilot for event-driven processing. So, we started a proof of concept with the first technology that seemed to make sense: serverless Azure Functions.

First try: Serverless functions

Tooling support for Azure Functions is quite good, so it's not that hard to get up and running and develop the functionality. With a quick first version running on a dev's machine, it was time to start testing, so we cobbled together the Bicep templates and added deployment steps to the pipeline.

This was already a stretch for the team. Their whole process was aimed at AKS, which was working pretty well for them. AKS doesn't expect the developer to author and deploy Bicep, and besides that, there were decent templates and processes available, so the team hardly needed to spend time contemplating how to deploy their code.

The dev pushed on though and, with some help, we had the whole thing up and running quite quickly.

Seeing all the resources spin up on deployment in Azure triggered more concerns for me: complexity, cost, and security. While serverless functions are dirt cheap, the price goes up quite a bit when switching to the Premium plan to get VNet integration. Network security with VNets is a hard requirement for the company, and control of the VNet was not with the team but with a managed services provider.

The concern about complexity comes from the introduction of resource types that the team had not been using up to that point: App Service Plans, Function Apps, and Storage Accounts.
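To make the footprint concrete, here is a minimal Bicep sketch of those three resource types for a consumption-plan function app. All names are hypothetical, and the settings are trimmed to the essentials rather than copied from the original project:

```bicep
param location string = resourceGroup().location

// Storage account the Functions runtime requires (name must be globally unique)
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stmsgpilot001' // hypothetical
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}

// Consumption (serverless) plan; a Premium plan would use e.g. 'EP1'
resource plan 'Microsoft.Web/serverfarms@2022-09-01' = {
  name: 'plan-msgpilot'
  location: location
  sku: {
    name: 'Y1'
    tier: 'Dynamic'
  }
}

// The function app itself, wired to the plan and the storage account
resource functionApp 'Microsoft.Web/sites@2022-09-01' = {
  name: 'func-msgpilot'
  location: location
  kind: 'functionapp'
  properties: {
    serverFarmId: plan.id
    siteConfig: {
      appSettings: [
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storage.name};AccountKey=${storage.listKeys().keys[0].value}'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
          value: 'dotnet'
        }
      ]
    }
  }
}
```

Three resources and their plumbing for one message handler: for a team that otherwise only ships containers to AKS, this is exactly the kind of extra surface area that raised the complexity concern.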

Azure functions - consumption

Pros:
✔ Low cost
✔ Super easy to build
✔ Scalable

Cons:
✖ Takes effort to secure
✖ No VNet integration
✖ Cold starts are slow
✖ Poor fit with way of working

Azure functions - premium plan

Pros:
✔ Super easy to build
✔ Scalable
✔ VNet integration

Cons:
✖ Additional cost
✖ Still takes effort to secure
✖ Poor fit with way of working

Second try: Functions in AKS

To address all these concerns we modified the code to produce a container. There's decent documentation on how to containerize a function app and it works.
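The Dockerfile the Core Tools scaffold (via `func init --docker`) is roughly a standard two-stage build that layers the published app on top of the Functions host base image; the sketch below is from memory rather than a verbatim copy:

```dockerfile
# Build stage: publish the function app with the .NET SDK
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /home/site/wwwroot

# Runtime stage: the Azure Functions host image (this base image is
# what makes the container noticeably larger than a plain .NET 6 app)
FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY --from=build /home/site/wwwroot /home/site/wwwroot
```

The runtime stage ships the whole Functions host alongside your code, which is where the size and resource overhead discussed below comes from.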

With the function app running in AKS, we get security by default because everything in AKS runs in a VNet. Complexity came down a bit because we did not need any extra resources, which also meant no additional cost.

However... a containerized function app is very resource-hungry. The containers are large compared to .NET 6 apps and triggered AKS to scale out the node pool on some occasions.

There was an interesting insight though when we looked at automating the deployment into AKS. The function tools want to take care of this but I find that to be a poor fit for CI/CD pipelines. Fortunately, the tooling can output the Kubernetes manifests for the container. This makes it clear how KEDA is leveraged under the hood to scale the function app down to 0 when the queue is empty.
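If memory serves, the Core Tools can emit those manifests with a dry run (`func kubernetes deploy --dry-run`), and among them is a KEDA ScaledObject along these lines (workload and queue names are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor          # hypothetical workload name
spec:
  scaleTargetRef:
    name: orders-processor        # the Deployment KEDA scales
  minReplicaCount: 0              # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "5"         # target queue length per replica
      authenticationRef:
        name: orders-servicebus-auth
```

Nothing in this manifest is specific to the Functions host: KEDA only watches the queue and scales a Deployment, whatever happens to be running inside it.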

And... that sparked another idea: We don't need a function host if AKS and KEDA already take care of scaling and monitoring the queue. We can cut out the middleman and handle the messages with a simple command line app.

Azure functions in AKS

Pros:
✔ Super easy to build
✔ Scalable
✔ Secure by default (AKS)

Cons:
✖ Big containers are slow to start
✖ High resource consumption
✖ Custom deployment tooling

Third try: Command line apps

Containerized .NET 6 executables are efficient. Since most services in the cluster for this customer already run on .NET 6, even new deployments usually spin up in a matter of seconds. .NET executables are quite snappy, and the containers are small.

There are some things that you get for free in ASP.NET apps, including function apps, that you may take for granted but that a bare-bones command line app doesn't have:

  • Configuration
  • Dependency injection
  • Distributed logging and tracing
  • Application lifecycle (i.e. stop when the container gets a signal to do so)
  • Health checks

Modern .NET exposes all of these building blocks, and it's quite doable to come up with a solid base. This means the message handlers end up being simple little executables that are quite similar in setup to ASP.NET 6 web apps.
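As a rough sketch of that base, the .NET Generic Host provides configuration, DI, logging, and lifecycle handling, with the Service Bus SDK doing the message pumping. Queue names, configuration keys, and the worker class are illustrative, not from the original project:

```csharp
// Minimal message-handling executable built on the .NET Generic Host.
// Names and configuration keys are hypothetical.
using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args) // config, DI, logging, lifetime for free
    .ConfigureServices((ctx, services) =>
    {
        services.AddSingleton(_ =>
            new ServiceBusClient(ctx.Configuration["ServiceBus:ConnectionString"]));
        services.AddHostedService<QueueWorker>();
    })
    .Build();

await host.RunAsync(); // shuts down cleanly when Kubernetes sends SIGTERM

class QueueWorker : BackgroundService
{
    private readonly ServiceBusProcessor _processor;

    public QueueWorker(ServiceBusClient client) =>
        _processor = client.CreateProcessor("orders"); // hypothetical queue name

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _processor.ProcessMessageAsync += async args =>
        {
            // ...domain logic goes here...
            await args.CompleteMessageAsync(args.Message);
        };
        _processor.ProcessErrorAsync += _ => Task.CompletedTask; // log in real code

        await _processor.StartProcessingAsync(stoppingToken);
        await Task.Delay(Timeout.Infinite, stoppingToken); // run until shutdown
    }
}
```

The shape is deliberately close to an ASP.NET 6 app: same host builder, same DI container, just a BackgroundService instead of controllers.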

Command line apps in AKS

Pros:
✔ Simple tech that fits well
✔ Fast and resource-efficient
✔ Scalable
✔ Secure by default (AKS)
✔ More control over message processing parameters

Cons:
✖ Complexity: custom code is needed to make it all work

The cherry on top: KEDA

The command line apps get a competitive edge when you throw KEDA into the mix. KEDA can monitor a message queue and scale the message processor workload as needed.

So, during quiet times, the containers are scaled all the way down to 0, allowing other processes to use the available resources.

The interesting bit for me is that letting go of the Azure Functions host gives a lot more control over message processing, for example by introducing a circuit breaker to stop processing messages when an external integration is down.
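A rough sketch of that circuit breaker idea, assuming a ServiceBusProcessor like the one above; the external call, threshold, and cool-down are all illustrative:

```csharp
// Illustrative circuit breaker around a ServiceBusProcessor: after a run
// of consecutive failures, stop pulling messages and retry later.
int consecutiveFailures = 0;
const int threshold = 5; // arbitrary example value

processor.ProcessMessageAsync += async args =>
{
    try
    {
        await CallExternalIntegrationAsync(args.Message); // hypothetical handler
        consecutiveFailures = 0;
        await args.CompleteMessageAsync(args.Message);
    }
    catch (Exception)
    {
        if (++consecutiveFailures >= threshold)
        {
            // Stop/restart off the handler thread: StopProcessingAsync waits
            // for in-flight handlers, so awaiting it here could deadlock.
            _ = Task.Run(async () =>
            {
                await processor.StopProcessingAsync();     // open the circuit
                await Task.Delay(TimeSpan.FromMinutes(1)); // cool-down period
                consecutiveFailures = 0;
                await processor.StartProcessingAsync();    // half-open: try again
            });
        }
        await args.AbandonMessageAsync(args.Message); // message is redelivered
    }
};
```

While the circuit is open, messages simply wait in the queue, and KEDA's scale-to-zero means the idle processor isn't burning resources either. The Functions host offers no comparable hook into the message pump.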

Conclusion

Functions are versatile and really easy to code, but they have drawbacks. The function host is a bit heavy, and with KEDA available, it doesn't add all that much.