Developer Notes

Running Windows containers

Running .NET Framework applications in containers is not exactly a smooth experience. I'm used to running Linux containers, and that's what containerization solutions seem geared towards; Windows doesn't quite fit in. But, as I found out, there are things you can do to make Windows fit into the container ecosystem.

In this post I'll go over all the things I've learned running Windows containers in production at customers in Azure Kubernetes Service (AKS). AKS itself is out of scope; this post is limited to the things I do to build and run the containers and should be applicable to any Kubernetes cluster running Windows containers.

Containerizing your app

To containerize the app, you'll need to set up a Dockerfile. The first thing to do is select a base image. The obvious candidate is the ASP.NET 4.8 base image:

# Base image
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8 AS runtime


Containers typically start an application from the console and exit when it's done. ASP.NET 4.x web apps are not like that, they run in an AppPool inside IIS. IIS is a Windows service, which is normally started when the OS boots up, so there is no command line to wait on.

To bridge that gap, Microsoft has provided IIS.ServiceMonitor (source and docs on GitHub). It's a command line tool that monitors the IIS process. It will start IIS if it's not running yet and exit once it stops. Either due to a shutdown or a crash.

The tool is part of the ASP.NET 4.8 image and can serve as the entrypoint:

ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]

Service monitor will also propagate the container environment variables to the ASP.NET process so all container configuration is available within your application.

A word of warning though: if you decide to tweak the IIS configuration on container startup, a race condition can occur that causes the container to crash or hang. Changing the IIS configuration makes IIS restart; if ServiceMonitor is launching at the same time, it may find IIS not running yet and try to start it even though it is already starting. More details in this GitHub issue.

To prevent this, ensure IIS is not started on container startup:

# Disable IIS auto start
RUN ["cmd", "/S", "/C", "sc", "config", "w3svc", "start=demand"]

This has the added benefit of making container startup faster in some circumstances, because IIS only starts when everything is ready to go. When ServiceMonitor is used as the entrypoint, it will start IIS.

Container logs

While on the topic of crashes: it can be quite difficult to figure out why a container stopped, because IIS doesn't log to the console. Container hosts typically capture console output for crash analysis, but Windows doesn't work like that.

IIS can be configured to log requests to files, and you can enable tracing for failed requests. By default, IIS logs its status and some errors to the Windows EventLog, but all of that is gone when the container stops.

Microsoft has provided yet another tool to fill that gap: LogMonitor (source and docs on GitHub). The tool can capture logs from the following sources:

  • Log files - Periodically checks a directory to see if new files appear or data has been added, and streams that to the console. This can be used to log traffic and failed request traces to the console. Because of the polling, this data has a bit of a lag.
  • ETW logs - an XML-based format that can trace a lot of details. This data is close to real-time, but it's quite chatty so I prefer to only get warnings and errors.
  • Windows EventLogs - Periodically checks the Windows Eventlogs for issues.

Please check the sample config file to get an idea of what it can do.

After some experimentation, I found that I have little use for the log files because requests are typically captured by Application Insights (see Observability below). Here's the configuration I'm using:

{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "system", "level": "Warning" },
          { "name": "application", "level": "Warning" }
        ]
      },
      {
        "type": "ETW",
        "eventFormatMultiLine": false,
        "providers": [
          {
            "providerName": "IIS: WWW Server",
            "providerGuid": "3A2A4E84-4C21-4981-AE10-3FDA0D9B0F83",
            "level": "Information"
          },
          {
            "providerName": "Microsoft-Windows-IIS-Logging",
            "providerGuid": "7E8AD27F-B271-4EA2-A783-A47BDE29143B",
            "level": "Warning"
          }
        ]
      }
    ]
  }
}

The EventLog will output some noise because services start and stop during the lifetime of a Windows container but it has helped me diagnose some serious issues.

To activate LogMonitor, add it to the container and chain it in the entrypoint. ETW logging needs to be enabled explicitly in the IIS configuration. As mentioned above, changing the IIS configuration will restart the process if it's already running. So it's best to do this during the container build and prevent startup issues.

# Base image
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8 AS runtime

# Set the shell to PowerShell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; Set-ExecutionPolicy Unrestricted -Force;"]

# Disable IIS auto start
RUN ["cmd", "/S", "/C", "sc", "config", "w3svc", "start=demand"]

# Install LogMonitor.exe
RUN md c:\LogMonitor
COPY Tools/LogMonitor/. c:/LogMonitor

# Update IIS config
# Enable ETW logging for Default Web Site on IIS
RUN c:/windows/system32/inetsrv/appcmd.exe set config -section:system.applicationHost/sites /\"[name='Default Web Site'].logFile.logTargetW3C:\"File,ETW\"\" /commit:apphost

# .... copy in your application ....

ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "C:\\ServiceMonitor.exe", "w3svc"]

All this is done to help see what happens when the application has a catastrophic failure at the OS or IIS service level.

Observability

To see what's going on within the application, you'll need to install an APM. For .NET, Application Insights has just about everything you need.

At this point I recommend:

  • Adding or updating the Microsoft.ApplicationInsights packages to the latest version (v2.20 or newer)
  • Following the recommendations: use a connection string (not an instrumentation key) and use workspace-based resources to get faster ingestion
  • Setting the WEBSITE_HOSTNAME environment variable through the container environment to your application name; Application Insights will report that as the Role name without the need for additional telemetry initializers
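In Kubernetes, that last variable can be set directly on the deployment. A minimal sketch of the relevant fragment, where the app name, namespace, and image are placeholders for your own:

```yaml
# Fragment of a Deployment pod spec; 'my-app' and the image reference are placeholders
spec:
  containers:
    - name: my-app
      image: myregistry.azurecr.io/my-app:latest
      env:
        - name: WEBSITE_HOSTNAME
          value: "my-app"
```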

Many .NET Framework applications, most notably CMSes, use internal redirects to translate friendly URLs to internal URLs. In that case, you'll probably need to implement a telemetry initializer to adjust the URL for request telemetry.

Using a reverse proxy

To further improve security and resilience, you'll probably want to use a reverse proxy between the ASP.NET application and the public internet.

A reverse proxy will make it a lot easier to make a .NET framework application comply with modern security guidelines as well as handle TLS.

Whether the reverse proxy is an ingress controller, an Azure Application Gateway, or a similar service, the effect is the same: the URL and protocol that reach the application are not the same as the URL requested by the client. The original request properties are instead forwarded by the proxy in headers of the HTTP request. To make the application play nice with the reverse proxy (without changing the code), we'll need to set server variables from the proxy headers.

The IIS URL Rewrite module can help out here. The tool needs to be installed into the container:

# Add Rewrite module
RUN md c:\aspnet-startup ; \
    Invoke-WebRequest https://download.microsoft.com/download/1/2/8/128E2E22-C1B9-44A4-BE2A-5859ED1D4592/rewrite_amd64_en-US.msi -OutFile c:/aspnet-startup/rewrite_amd64_en-US.msi ; \
    Start-Process c:/aspnet-startup/rewrite_amd64_en-US.msi -ArgumentList "/qn" -Wait ; \
    Remove-Item c:/aspnet-startup/rewrite_amd64_en-US.msi -Force

Next, add the following rewrite rule in web.config to ensure the application understands the proxied requests:

		<!-- note that setting the allowed server variables requires changes to applicationhost.config during container startup -->
		<rewrite>
			<rules>
				<rule name="Allow ingress if source is HTTPS" stopProcessing="false">
					<match url="(.*)" />
					<serverVariables>
						<set name="HTTPS" value="on" replace="true" />
						<set name="HTTP_PROTOCOL" value="https" replace="true" />
						<set name="SERVER_PORT" value="443" />
						<set name="HTTP_HOST" value="{HTTP_X_FORWARDED_HOST}" replace="true" />
					</serverVariables>
					<action type="None" />
					<conditions logicalGrouping="MatchAny">
						<add input="{HTTP_X_FORWARDED_PROTO}" pattern="https" />
					</conditions>
				</rule>
				<rule name="Http to Https redirect" stopProcessing="true">
					<match url="(.*)" />
					<conditions>
						<add input="{HTTPS}" pattern="^OFF$" />
					</conditions>
					<action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther" />
				</rule>
			</rules>
		</rewrite>

By default, IIS does not allow setting server variables like this; it needs to be enabled in the IIS configuration. In the Dockerfile, add the following:

# Update IIS config
# Enable support for reverse proxy
RUN c:/windows/system32/inetsrv/appcmd.exe set config \"Default Web Site\" -section:system.webServer/rewrite/allowedServerVariables /+\"[name='HTTPS']\" /commit:apphost ; \
    c:/windows/system32/inetsrv/appcmd.exe set config \"Default Web Site\" -section:system.webServer/rewrite/allowedServerVariables /+\"[name='HTTP_HOST']\" /commit:apphost ; \
    c:/windows/system32/inetsrv/appcmd.exe set config \"Default Web Site\" -section:system.webServer/rewrite/allowedServerVariables /+\"[name='HTTP_PROTOCOL']\" /commit:apphost ; \
    c:/windows/system32/inetsrv/appcmd.exe set config \"Default Web Site\" -section:system.webServer/rewrite/allowedServerVariables /+\"[name='SERVER_PORT']\" /commit:apphost
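To check that the entries actually landed in applicationHost.config, you can list the section from inside a running container. A quick sanity check, not required for the build; it assumes the site is still named "Default Web Site":

```powershell
# List the allowed server variables configured for the site
c:\windows\system32\inetsrv\appcmd.exe list config "Default Web Site" -section:system.webServer/rewrite/allowedServerVariables
```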


Troubleshooting

Containers are usually accessed through a shell, not the UI, so there's no clicking around in IIS admin screens. Here are a couple of handy tricks to get more information in case of trouble.

Connect to a pod

Without going into too much detail, here's how to connect to a container. I'll assume you have the Azure CLI installed. Run az aks install-cli to install kubectl.

Now, to connect kubectl to your cluster:

  • Ensure you're logged in with the Azure CLI:
    az login
  • Ensure you're connected to the right subscription:
    az account show
  • Connect kubectl to your cluster:
    az aks get-credentials --resource-group my-rg --name my-cluster
  • List the pods:
    kubectl get pods -n my-namespace
  • Open a PowerShell session in a pod:
    kubectl exec -n my-namespace -it mycontainer-xxxxxxx-yyyy -- powershell

From the command line most of the usual ASP.NET troubleshooting techniques are available.

Enable rich error messages on the page

Editing files can be a bit tricky because there is no command-line editor on Windows containers, as far as I know anyway. So to tweak web.config, I resort to PowerShell. The following lines allow detailed exceptions to be returned to the client (aka the Yellow Screen of Death):

# Load, edit, and save web.config (default path in the ASP.NET base image; adjust if yours differs)
$XMLDoc = New-Object System.Xml.XmlDocument
$XMLDoc.Load("C:\inetpub\wwwroot\web.config")
$XMLDoc.DocumentElement."system.web".customErrors.mode = "Off"
$attr = $XMLDoc.DocumentElement."system.webServer".SelectSingleNode("httpErrors").Attributes.Append($XMLDoc.CreateAttribute("errorMode"))
$attr.Value = "Detailed"
$XMLDoc.Save("C:\inetpub\wwwroot\web.config")

Failed request tracing

Failed request tracing is a Windows feature that needs to be installed first. Once it is enabled, log files will appear in the logs folder.

# Enable failed request tracing
Install-WindowsFeature Web-Http-Tracing 
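Installing the feature is only half of it; tracing also has to be switched on for the site. A sketch with appcmd, assuming the app runs under "Default Web Site" (verify the rule syntax against your IIS version):

```powershell
# Turn on failed request tracing for the site
c:\windows\system32\inetsrv\appcmd.exe set site "Default Web Site" /traceFailedRequestsLogging.enabled:true

# Trace every request that fails with a 4xx or 5xx status code
c:\windows\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/tracing/traceFailedRequests /+"[path='*']" /commit:apphost
c:\windows\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/tracing/traceFailedRequests /"[path='*'].failureDefinitions.statusCodes:400-599" /commit:apphost
```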

Inspect event logs

# List all logs (to see which have entries)
Get-EventLog -List

# System
Get-EventLog -LogName "System" -Newest 20

# Powershell (if you use powershell at container startup)
Get-EventLog -LogName "Windows PowerShell" -Newest 20

# Application (could have exceptions in it)
Get-EventLog -LogName "Application" -Newest 20

# To get details for a specific message, use the index
Get-EventLog -LogName "Application" -Index 88 | Select-Object Message


Don't restart IIS

Be careful not to restart IIS, because that will terminate the container. Instead, recycle the AppPool. This will fix most hangs and is much more efficient.

# Recycle the apppool
c:\windows\system32\inetsrv\appcmd recycle apppool DefaultAppPool

Docker build on Windows: The remote name could not be resolved

On some rare occasions I've run into this error, and thankfully somebody has found a resolution for it. It has to do with the priority of network interfaces. More details on StackExchange.
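The fix from that thread boils down to giving the host's primary adapter a better (lower) interface metric than the container vNICs. A hedged sketch; the adapter alias "Ethernet" and the metric value are examples and depend on your machine:

```powershell
# Show adapters ordered by metric; the container vNICs should not outrank your real adapter
Get-NetIPInterface | Sort-Object InterfaceMetric | Format-Table ifIndex, InterfaceAlias, InterfaceMetric

# Give the primary adapter the lowest metric so name resolution uses it first
Set-NetIPInterface -InterfaceAlias "Ethernet" -InterfaceMetric 10
```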

Node image versions and build agents

When you're using CI/CD pipelines (and you really should be), make sure the OS of the build agent matches the OS of the Windows nodes in your cluster.

I got bitten when Azure DevOps moved to Windows 2022 agents while the cluster was still on Windows 2019 images. The result was containers that fail to start, or even to pull from the container registry.
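One way to keep the two aligned is to pin the Windows Server release in the base image tag rather than relying on a floating tag. The ltsc2019 tag below is an example; it should match your node pool's OS:

```dockerfile
# Pin the base image to the node OS version instead of a floating 4.8 tag
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019 AS runtime
```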
