Some time ago, the .NET and Windows containers team investigated how our customers were using certificates for HTTPS connections when using Windows containers to run web workloads on IIS, whether on AKS or not. At the time, we found that the management of SSL certificates on Windows containers, specifically for IIS, is very manual and doesn’t align well with the modern practices you’d expect when running in a containerized environment.
We found that most of our customers have scripts to load certificates into the Windows container environment, install the certificates, and have them configured as part of the IIS deployment alongside the application, its application pool in IIS, and its IIS bindings. The other scenario in which this isn’t necessary is when customers use an ingress controller, which handles the HTTPS traffic before it gets to the containers/pods.
At the time we investigated this, we missed an important feature that has existed in IIS since the pre-container era – Central Certificate Store. This feature was introduced in Windows Server 2012 as part of the then-new IIS 8.0. It allows server administrators to store and access certificates centrally on a file share. Windows Servers in a server farm can then be configured to load the certificates from the file share on demand. For Windows containers, this feature is useful because it’s exactly what we need to decouple the storing of files (certificates in this case) from the container.
Proof of concept with Docker Desktop
To validate that Central Certificate Store could be properly used for Windows containers, I tested the feature locally on my machine. This is what the architecture looks like in its simplest form:
The main point in the diagram above is that the certificate is not being loaded into the container. Instead, it sits in a local folder on my machine. To validate the above, here are the assets I used:
Dockerfile:
# escape=`
# Use the Windows Server Core image with IIS installed, targeting the 2022 LTSC version
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022

# Install the Centralized Certificates module
RUN powershell -command `
    Add-WindowsFeature Web-CertProvider

# Copy the LogMonitor JSON config file and download LogMonitor into the container
WORKDIR /LogMonitor
COPY LogMonitorConfig.json .
RUN powershell.exe -command wget -uri https://github.com/microsoft/windows-container-tools/releases/download/v1.2.1/LogMonitor.exe -outfile LogMonitor.exe

# Copy iiscentralstore.ps1 to the container
COPY iiscentralstore.ps1 .
ENTRYPOINT ["powershell", "-File", "C:\\LogMonitor\\iiscentralstore.ps1"]
The above Dockerfile will create a new image based on the Windows Server 2022 LTSC IIS image. It will install the Central Certificate Store (Web-CertProvider) feature. It will also download and configure LogMonitor so you can see the logs from IIS outside of the container. Finally, it will copy a PowerShell script that will be used as the entry point for the image.
The most important aspect of this PowerShell script is that it will only be called when a container is executed from the image. The script is not executed while the image is being built. This approach allows us to defer specifying usernames or passwords until the container is launched. This is a security best practice, as these could otherwise be traced through the image history.
Here’s what the PowerShell script looks like:
# Create a new local user account
$Password = ConvertTo-SecureString -AsPlainText $env:LocalUserPassword -Force
New-LocalUser -Name $env:LocalUsername -Password $Password -FullName $env:LocalUsername -Description 'IIS certificate manager user'

# Configure the Central Certificate Store
$PFXPassword = ConvertTo-SecureString -AsPlainText $env:PFXCertPassword -Force
$CertStorePath = "C:\CertificateStore"
Enable-IISCentralCertProvider -CertStoreLocation $CertStorePath -UserName $env:LocalUsername -Password $Password -PrivateKeyPassword $PFXPassword

# Update the IIS bindings to use the Central Certificate Store
$siteName = "Default Web Site"
Remove-Website -Name $siteName
$newSiteName = "CCSTest"
$newSitePhysicalPath = "C:\inetpub\wwwroot"
$newSiteBindingInformation = '*:443:'
New-IISSite -Name $newSiteName -PhysicalPath $newSitePhysicalPath -BindingInformation $newSiteBindingInformation -Protocol https -SslFlag CentralCertStore

# Call LogMonitor, ServiceMonitor, and IIS
C:\LogMonitor\LogMonitor.exe C:\ServiceMonitor.exe w3svc
The script starts by creating a new local user. This user will be used later for accessing the folder in which the certificate is stored. Note that I’m not hardcoding the username or password into the script. The point of this approach is to remove all secrets, including credentials, from the container image.
Next, we configure the Central Certificate Store. For that, we’ll use the user account and password from the previous step, but we also need the password for the certificate (PFX) file and the location in which the certificate will be stored. Traditionally, this would be an SMB file share. In our case, we’ll use a local folder. Later, this local folder will be a volume mounted into the container.
We then move on to updating the IIS binding to use the Central Certificate Store. To ensure only the website we need is there, we delete the Default Web Site (note: this could be done as part of the build process) and create a new one with the right configuration. Most importantly, the New-IISSite command includes the -SslFlag parameter indicating the certificate comes from the CentralCertStore.
Finally, we start LogMonitor, which calls ServiceMonitor, which in turn checks the state of the w3svc (IIS) service. As long as the service is up, ServiceMonitor will keep the container running and LogMonitor will send the logs to STDOUT, where they are captured by Docker or Kubernetes.
For a local deployment, that’s pretty much it. Now we need a certificate. On my local machine, I ran:
# Create the certificate locally
$cert = New-SelfSignedCertificate -DnsName "www.viniapccstest.com" -CertStoreLocation "cert:\LocalMachine\My"

# Specify the path to which to export the certificate.
# Note: Central Certificate Store matches certificates to sites by file name,
# so the PFX file should be named after the site's host name.
$certPath = "C:\Cert\www.viniapccstest.com.pfx"

# Export the certificate (including its private key) to a file
$certPassword = ConvertTo-SecureString -String "MySecurePassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $certPath -Password $certPassword
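Before building the image, it’s worth confirming the exported PFX file opens with the chosen password. A quick sanity check, assuming the PKI module that ships with Windows (adjust the path to wherever you exported your PFX file):

```powershell
# Get-PfxData throws if the password is wrong or the file is corrupt;
# otherwise it shows the subject and expiry of the contained certificate
$certPassword = ConvertTo-SecureString -String "MySecurePassword" -Force -AsPlainText
(Get-PfxData -FilePath "C:\Cert\www.viniapccstest.com.pfx" -Password $certPassword).EndEntityCertificates |
    Select-Object Subject, NotAfter
```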
The above created the PFX certificate file for the website I want to use. Next, we need to build the container image:
docker build -t iisccs:v1 .
With the image built, we can run a new container based on it:
docker run -e LocalUsername="<Username>" -e LocalUserPassword='<LocalUserPassword>' -e PFXCertPassword='<CertificatePassword>' -d -p 8081:443 -v C:\Cert:C:\CertificateStore iisccs:v1
The command above will instantiate a new container based on the image we just built. It will also map port 443 of the container to port 8081 on the host. I’ve provided the values for the environment variables needed, namely the local username, that user’s password, and the PFX file password. Finally, it maps the local folder on my machine to the volume inside the container, which results in the container being able to see the certificate we just created.
Since this is a proof of concept, I manually modified the HOSTS file on my machine to point the FQDN of the certificate to the IP address of my machine. When I open the browser and type https://www.viniapccstest.com:8081, it brings up the website correctly. (Naturally, I had to bypass the browser warning about the website, since the certificate is self-signed and not trusted by my machine.)
This proved that we can have an IIS website with HTTPS configured without a certificate loaded into the container image. Now, if I need to change the certificate, I don’t have to rebuild the image. All I have to do is update the certificate in the folder being mapped to a volume inside the container.
Don’t get me wrong, and let me clarify something right away: the main problem and absolute blocker here is the fact that I’m passing sensitive information when running the container. A simple docker inspect can reveal the username and passwords used as part of the docker run command:
PS C:\Users\user> docker run -e LocalUsername="username" -e LocalUserPassword='password' -e PFXCertPassword='password' -d -p 8080:80 -p 8081:443 -p 8172:8172 -v C:\Cert:C:\CertificateStore iisccs:v2
72ee37d2088e7673d4efb58a787bbe9005fe3c30f2c2e504330bc6396d21d679
PS C:\Users\user> docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS         PORTS                                                                 NAMES
72ee37d2088e   iisccs:v2   "powershell -File C:…"   12 seconds ago   Up 5 seconds   0.0.0.0:8172->8172/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:8081->443/tcp   beautiful_ellis
PS C:\Users\user> docker inspect 72ee37d2088e
<redacted>
"Env": [
    "LocalUsername=username",
    "LocalUserPassword=password",
    "PFXCertPassword=password"
<redacted>
Unfortunately, for Docker Desktop environments there’s not much to be done. Docker provides a great feature called Docker Secrets, but it is available for Docker Swarm environments only. Since this was just a proof of concept, it is fine to use for validation or development/testing purposes.
For Kubernetes environments, though, there are more secure options available that allow us to take this approach and validated concept to production environments.
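One small mitigation for local testing – not a real fix, since the values still show up in docker inspect – is to keep the secrets out of your shell history with an env file. A sketch, assuming a local secrets.env file that is excluded from source control:

```shell
# secrets.env contains one KEY=value pair per line, for example:
#   LocalUsername=username
#   LocalUserPassword=<LocalUserPassword>
#   PFXCertPassword=<CertificatePassword>
docker run --env-file secrets.env -d -p 8081:443 -v C:\Cert:C:\CertificateStore iisccs:v1
```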
IIS Central Certificate Store on Azure Kubernetes Service (AKS)
Now we have an IIS container image able to use Central Certificate Store to load the certificate into the container. What we need is to validate that we can do it in a secure and safe way, enabling it to be used in production environments. This is what our architecture in AKS will look like:
The above architecture is more complex than the previous one, for the obvious reason that we want to ensure a few things:
– Images are available in a registry so AKS cluster nodes can pull them.
– Certificates must be stored in a highly available service and can be mounted into the pods inside the AKS cluster.
– Usernames and passwords are sensitive information and must be kept private.
To achieve the above, I started building this environment by creating:
– An Azure Container Registry (ACR), following the instructions here. You also need to tag your image and push it to the registry, following the documentation.
– An AKS cluster with Windows nodes, following the instructions here. You will need to attach the ACR registry to the AKS cluster. You can do that while you build the cluster or you can attach the registry to the cluster following the documentation here. This will ensure only nodes in this AKS cluster can pull images from the registry.
With ACR and AKS in place, we can move on to configuring Azure Storage where the certificate will be stored and presented to AKS nodes as a persistent volume. To do this, follow the documentation here.
Azure Files storage can be used in two ways for AKS:
– Dynamically provision volumes: This is ideal for scenarios in which the application running in the pod needs a clean volume/disk to use. The volume is provisioned dynamically as deployments happen.
– Statically provision volumes: This is used when you want to create a volume that leverages file shares already present in Azure Files.
For our scenario we will use the option to statically provision volumes, which allows us to load the certificate into Azure Files prior to allocating it as a volume for the containers. Follow the documentation above and you will have a file share available in Azure Files. You can then upload the certificate from your machine to the Azure file share by using the Azure portal or the command below:
az storage file upload --account-name <storageaccountname> --share-name <filesharename> --source 'C:\folder\file.pfx' --path 'file.pfx'
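To confirm the upload landed, you can list the contents of the share. A sketch using the same placeholder names as above (authentication via account key or logged-in identity is assumed):

```shell
az storage file list --account-name <storageaccountname> --share-name <filesharename> --output table
```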
You should see the certificate in the share now:
With ACR, AKS, and Azure Files configured, we can move on to deploying the application. However, prior to deploying the application, we need to prepare the AKS cluster with the secrets the application will need – the username and passwords we saw earlier. To achieve that, we can run the following on the AKS cluster:
kubectl create secret generic iisccs-secrets --from-literal=LocalUsername="Username" --from-literal=LocalUserPassword='Password' --from-literal=PFXCertPassword='Password'
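To double-check that the secret was created with the expected keys, you can describe it. Note that describe shows only key names and sizes; the values themselves are merely base64-encoded in the cluster, so treat access to secrets as sensitive:

```shell
kubectl describe secret iisccs-secrets
```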
This created the Kubernetes secret to store the sensitive information that will be used when the container is instantiated. If you remember from the previous steps, the container image has been created with a PowerShell script that will run when the container/pod is created. That PowerShell script expects to find this information as environment variables.
To deploy the application, we can create the iiscentralstore.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iisccs-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iisccs
  template:
    metadata:
      labels:
        app: iisccs
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iisccs
        image: <your image from the ACR registry>
        ports:
        - containerPort: 443
        env:
        - name: LocalUsername
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: LocalUsername
        - name: LocalUserPassword
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: LocalUserPassword
        - name: PFXCertPassword
          valueFrom:
            secretKeyRef:
              name: iisccs-secrets
              key: PFXCertPassword
        resources:
          limits:
            cpu: "1"
            memory: "500Mi"
        volumeMounts:
        - name: azure
          mountPath: "C:\\CertificateStore"
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
  name: iisccs-service
spec:
  selector:
    app: iisccs
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  type: LoadBalancer
The above will create a deployment and a service. The deployment is based on the container image you pushed to ACR. It also tells the deployment to use the Kubernetes secret we just created to populate the assigned environment variables. Finally, as part of the deployment, it mounts a volume at the folder specified (in this example, this is the folder IIS is configured to use as the source for the Central Certificate Store; you could change this, or even use a ConfigMap to set it as a variable). The Service created is a standard LoadBalancer service with ports 80 and 443 open. Note that port 80 is not being used by this application, so you can safely remove it.
Also, as part of the deployment, we have the information on the persistentVolumeClaim. We need the iisccs_pvc.yaml file to deploy that:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  volumeName: azurefile
  resources:
    requests:
      storage: 1Gi
This creates the Persistent Volume Claim through which the deployment reaches a Persistent Volume. The file above references the volume via volumeName, which is defined in another specification, iisccs_pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: azurefile
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    volumeHandle: iisccs-volumeid
    volumeAttributes:
      shareName: iisccsshare
    nodeStageSecretRef:
      name: azure-secret
      namespace: default
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl
The above creates the Persistent Volume construct in the AKS cluster. It uses the azurefile-csi driver with the nodeStageSecretRef, which is a Kubernetes secret created as part of the deployment of the Azure Files storage (as in the documentation referenced above).
With the PVC and PV specification, we can go ahead and deploy it:
kubectl create -f iisccs_pv.yaml
kubectl apply -f iisccs_pvc.yaml
You can check if the PVC has been created and bound to the PV by using:
PS C:\Users\user> kubectl get pvc azurefile
NAME        STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS    AGE
azurefile   Bound    azurefile   1Gi        RWX            azurefile-csi   24h
This confirms the PVC and PV have been correctly configured. We can then deploy the application:
kubectl apply -f iiscentralstore.yaml
This will create the deployment and service in your AKS cluster. Once the image has been pulled, it should run in one of the Windows nodes in your cluster. You can check the public IP address of the service by running:
PS C:\Users\user> kubectl get service
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
iisccs-service   LoadBalancer   10.240.192.27   XXX.XXX.XXX.XXX   80:32726/TCP,443:32215/TCP   23h
kubernetes       ClusterIP      10.240.0.1      <none>            443/TCP                      7d1h
Remember that if you don’t have a real DNS record pointing to this IP address, you might need to change the HOSTS file in your machine so you can access the website with its name – that way IIS Central Certificate Store can match the website to the certificate in the store.
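For reference, a HOSTS file entry for this test would map the service’s external IP address (the placeholder from the output above) to the FQDN on the certificate:

```
XXX.XXX.XXX.XXX    www.viniapccstest.com
```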
Once you do that, you can access the website:
Conclusion
In this blog post we analyzed how to create a Windows container image for IIS that deploys a website using HTTPS without having to load the certificate into the container image. We validated the concept locally on Docker Desktop and created an image that successfully deployed a website using HTTPS with the certificate presented via a locally mounted volume. While successful, this approach doesn’t meet the security bar for production environments, given that Docker Desktop is intended for local development purposes.
We then explored a secure way to deploy this concept in AKS, by leveraging Kubernetes secrets for sensitive data and Azure Files storage for storing the certificate and presenting it to the pods on the Windows nodes as volumes.
The main goal of this exercise was to separate certificate lifecycle management from the container image. If we need to change the certificate, we can do so without changing the container image or re-deploying the pods. Furthermore, the pod lifecycle is also decoupled from the certificate’s. We can now manage them independently, which is helpful when thinking about DevOps practices and CI/CD pipelines.
We hope this helps you in the process of modernizing applications with Windows containers and AKS and offers a solution for TLS certificates lifecycle management. Let us know what you think in the comments section below.