Like many other libraries for .NET, Serilog provides diagnostic logging to files, the console, and elsewhere. It is easy to set up, has a clean API, and is portable between recent .NET platforms.
It is very easy to set up, but the question is: are we doing it correctly?
To get started with Serilog in .NET Core, we need to complete two important tasks:
1. Configure the settings for Serilog through the config file.
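A typical Serilog section in appsettings.json might look like the sketch below. The file sink and the log path are illustrative choices; the structure (Using, MinimumLevel, WriteTo) is the schema read by Serilog.Settings.Configuration.

```json
{
  "Serilog": {
    "Using": [ "Serilog.Sinks.File" ],
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Error",
        "Microsoft.Hosting.Lifetime": "Error",
        "System": "Error"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "logs/log-.txt",
          "rollingInterval": "Day"
        }
      }
    ]
  }
}
```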
In the above setting we configure three important things:
a. Define the sink. Sinks write log events to storage in various formats. Here is the complete list of sinks provided by Serilog: https://github.com/serilog/serilog/wiki/Provided-Sinks
b. Configure the minimum log level, which we override to Error for host, System, and framework events.
c. Define the WriteTo format for the log events. You can define your own custom format for the logged data here.
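For example, a custom format can be supplied through the sink's outputTemplate argument (the template shown here is Serilog's standard one; adjust it to taste):

```json
"WriteTo": [
  {
    "Name": "File",
    "Args": {
      "path": "logs/log-.txt",
      "outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level:u3}] {Message:lj}{NewLine}{Exception}"
    }
  }
]
```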
2. Initialize Serilog as the logging provider in Program.cs.
```csharp
var logger = new LoggerConfiguration()
    .ReadFrom.Configuration(builder.Configuration)
    .CreateLogger();

builder.Host.UseSerilog(logger, dispose: true);
```
Wait, are we doing it correctly? Of course not! The above code has a security issue of the kind that has led to vulnerabilities in the past, such as: CVE-2018-0285, CVE-2000-1127, CVE-2017-15113, CVE-2015-5742.
So how do we correct it? Well, the right way is to initialize Serilog as the logging provider through the Action method parameter, instead of creating a new instance of LoggerConfiguration yourself. Here is the code:
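A minimal sketch of the overload-based initialization (the same pattern as the complete Program.cs example later in this article):

```csharp
// Let the host supply the LoggerConfiguration instance;
// we only tell it to read settings from configuration.
builder.Host.UseSerilog((hostingContext, loggerConfiguration) =>
{
    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration);
});
```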
In this case, the .NET Core infrastructure takes care of the LoggerConfiguration internally through the dependency resolver, based on your config settings, which is secure.
Finally, here is the complete code to properly initialize Serilog as the logging provider by reading config from the appsettings.{your_environment}.json files.
```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// read configuration information from appsettings.{environment}.json
builder.Host.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;
    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
          .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
    config.AddEnvironmentVariables();
});

builder.Host.UseSerilog((hostingContext, loggerConfiguration) =>
{
    loggerConfiguration.ReadFrom.Configuration(hostingContext.Configuration);
});
```
Hope you enjoyed the content, follow me for more like this and please don’t forget to like/comment for it. Happy programming.
In both articles I gave the example of keeping secrets in GitHub Environments, but what if you want to store your secrets in Azure Key Vault, which has advantages over GitHub secrets? For example, you can verify a secret's value in Azure Key Vault, upgrade it programmatically to a new version if needed, and control access permissions based on need. So here is an example of preparing your appsettings.production.json (or other config files) by reading secrets from Azure Key Vault. Follow the steps below:
1. The first thing we need is connectivity to Azure so that the pipeline can perform the Azure login, and for this purpose I always suggest using a service principal instead of a user ID and password. This is the only setting you need to store in GitHub Secrets for the Azure login. Here is the command to generate the service principal:
```shell
az ad sp create-for-rbac --name "{your_serviceprincipal_name}" --scope /subscriptions/{subscription_id}/resourceGroups/{resourceGroupName} --role Contributor --sdk-auth
```
Note: I'm creating the service principal with the Contributor role at the resource group level for my needs, but I would recommend downgrading the role based on the access you actually require.
2. Next, we need to grant the service principal created above access to the secrets in the Key Vault. Log in to the Azure portal, navigate to your Key Vault => Access policies, and click +Create.
From the Permissions tab: select Get, List, and Decrypt under Key Permissions, and Get and List under Secret Permissions and Certificate Permissions. From the Principal tab: search for your service principal and select it.
3. Setup is done; now write the code in the GitHub Action to read the secrets from Azure Key Vault.
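The job below is a hedged sketch of those steps. The Key Vault name is a placeholder, the YourServicePrincipal and CONNECTIONSTRING secret names come from this article, and azure/get-keyvault-secrets is one action that can read Key Vault secrets (it has since been deprecated in favor of an Azure CLI step, so verify the current recommendation before use):

```yaml
jobs:
  prepare-config:
    runs-on: ubuntu-latest
    environment: test
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Azure login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.YourServicePrincipal }}

      - name: Read secret from Azure Key Vault
        id: keyvault
        uses: azure/get-keyvault-secrets@v1
        with:
          keyvault: "your-keyvault-name"   # placeholder: your Key Vault name
          secrets: "CONNECTIONSTRING"

      - name: Replace token in appsettings.Production.json
        uses: cschleiden/replace-tokens@v1
        with:
          tokenPrefix: '#{'
          tokenSuffix: '}#'
          files: '["**/appsettings.Production.json"]'
        env:
          ConnectionString: ${{ steps.keyvault.outputs.CONNECTIONSTRING }}
```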
In the above code: a. first we check out the code repository; b. we log in to Azure, where 'YourServicePrincipal' is the secret stored in the GitHub environment that you created in step 1; c. we read the secret 'CONNECTIONSTRING' from Azure Key Vault; and finally d. we use the secret 'ConnectionString' to replace the token in appsettings.Production.json.
Note: When reading multiple secrets, list all your secret names separated by commas.
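For example (APIKEY and STORAGEKEY are hypothetical secret names used only for illustration):

```yaml
- name: Read secrets from Azure Key Vault
  uses: azure/get-keyvault-secrets@v1
  with:
    keyvault: "your-keyvault-name"            # placeholder: your Key Vault name
    secrets: "CONNECTIONSTRING, APIKEY, STORAGEKEY"  # comma-separated secret names
```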
This article is going to be a bit lengthy, but if you stay with me, I'm sure it will be one-stop learning for automating your application's deployment to Azure Kubernetes Service.
I assume you know the basics of Kubernetes clusters; here we will be using Azure Kubernetes Service to orchestrate the cluster. We need an ingress controller to expose the pods for public access, and in this example I'll be using Azure Application Gateway to expose the service running in the pods.
AGIC is the Application Gateway Ingress Controller, which allows Azure Application Gateway to be used as the ingress for the AKS cluster.
Now let's create a GitHub Action to deploy the application to the cluster and expose it through the Azure Application Gateway. To do so, follow the steps below:
Step 1: First, we need to create a Docker container image of our application to host on the Kubernetes cluster. Add the Dockerfile to your main project and save it as Dockerfile.
```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS base
WORKDIR /app
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080

FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
COPY . .
WORKDIR "./MyApp"
RUN dotnet build "MyApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyApp.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```
In the above Dockerfile, I'm copying everything to the build stage, setting the working directory to MyApp (where MyApp.csproj is located), and then doing the build and publish.
Note: You can also generate the Dockerfile through Visual Studio if you enable Docker support, or through Visual Studio Code if you have the Docker extension installed. If so, just press Ctrl+Shift+P in Visual Studio Code and type the command >docker.
Step 2: We need Azure credentials to connect to Azure, saved as a GitHub secret. We will generate the credentials by running the command below at the command prompt.
At the command prompt, first do the Azure login, then run the command to create the credentials:

```shell
az login
az ad sp create-for-rbac --name "MyApp" --scope /subscriptions/{subscription_id}/resourceGroups/{resourceGroupName} --role Contributor --sdk-auth
```
The above command will output a result like the following; save it as a GitHub environment secret.
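The --sdk-auth output is a JSON document roughly of this shape (values are placeholders, and the real output includes a few additional endpoint URL fields):

```json
{
  "clientId": "<client-id-guid>",
  "clientSecret": "<client-secret>",
  "subscriptionId": "<subscription-id-guid>",
  "tenantId": "<tenant-id-guid>",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/"
}
```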
Step 3: To host the application on the Kubernetes cluster we need to create the following files: deployment.yml (for container deployment to pods), service.yml (to expose an internal proxy for accessing the application running in the pods), and ingress-appgateway.yml (the ingress resource that will expose the application for public access).
You can merge all these into a single file, but for better understanding and separation I'm keeping them separate. So first I'll create a folder named k8s (you can name it anything) in the parent directory to keep all three files.
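A hedged sketch of the three manifests is below, merged into one listing for readability. The resource names (myapp-*) are placeholders you should align across the files; the #{{CONTAINER_IMAGE}}# and #{{CONTAINER_REGISTRY_SECRET}}# tokens and port 8080 are taken from this article, and the azure/application-gateway ingress class is what AGIC watches for:

```yaml
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "#{{CONTAINER_IMAGE}}#"          # token replaced by the pipeline
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: "#{{CONTAINER_REGISTRY_SECRET}}#" # token replaced by the pipeline
---
# service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: ClusterIP          # internal only; exposed publicly via the ingress
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
---
# ingress-appgateway.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080
```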
All three files above are self-explanatory if you are aware of the basic concepts of Kubernetes clusters, but I have still highlighted the name strings, which should match what you name your deployment, app, and service.
Step 4: Now let's start creating the GitHub Actions pipeline by adding a YAML file as ./.github/workflows/service-deploy.yml with the code below.
```yaml
- name: Setup .NET Core
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: 6.0.x
- name: Run unit and integration tests
  shell: bash
  working-directory: ./src
  run: dotnet test -c Release
```
Step 6: Now we add the 'publish' job. As part of the publish job we will replace the tokens in appsettings.production.json (or appsettings.json), if any.
As per the above code, the appsettings.production.json file has a token defined as "#{ConnectionString}#" which has to be replaced with the value stored in the secret named 'CONNECTIONSTRING' in the GitHub environment (named 'test'). Also, this job requires the [build] job to run successfully.
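A sketch of that publish job is below; cschleiden/replace-tokens is one action that performs this kind of token substitution (the environment name, token, and secret name come from this article):

```yaml
publish:
  needs: [build]                # requires the build job to succeed
  environment: test             # GitHub environment holding the secrets
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3

    - name: Replace tokens in appsettings
      uses: cschleiden/replace-tokens@v1
      with:
        tokenPrefix: '#{'
        tokenSuffix: '}#'
        files: '["**/appsettings.production.json"]'
      env:
        ConnectionString: ${{ secrets.CONNECTIONSTRING }}
```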
Step 7: Next we continue with the publish job, adding steps to connect to Azure Container Registry and then build and push the container.
Here we log in to the ACR with the ACR URL, username, and password stored in GitHub environment secrets.
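For example, using the azure/docker-login action (the REGISTRY_* secret names and the myapp image name are placeholders of my own, not from the article):

```yaml
- name: Login to Azure Container Registry
  uses: azure/docker-login@v1
  with:
    login-server: ${{ secrets.REGISTRY_URL }}      # e.g. myregistry.azurecr.io
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}

- name: Build and push image
  run: |
    docker build -t ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }} .
    docker push ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }}
```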
In the 'deploy' job, the first step we add downloads the artifact. The artifact name here must match the name provided in the publish-artifact step of step 7.
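Sketched with the standard download-artifact action (the artifact name myapp is the one used in this article):

```yaml
- name: Download artifact
  uses: actions/download-artifact@v3
  with:
    name: myapp   # must match the name used when uploading in step 7
```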
Next, we need to set the AKS context, for which we use the Azure credentials generated in step 2, along with the resource group name and cluster name, all read from GitHub environment secrets.
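One way to do this is with azure/login followed by azure/aks-set-context (the AZURE_CREDENTIALS, resourceGroup, and clusterName secret names are placeholders; check the action docs for the current version):

```yaml
- name: Azure login
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}   # JSON output from step 2

- name: Set AKS context
  uses: azure/aks-set-context@v3
  with:
    resource-group: ${{ secrets.resourceGroup }}
    cluster-name: ${{ secrets.clusterName }}
```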
Now, if you remember, the deployment.yml file we created in step 3 has two tokens, #{{CONTAINER_IMAGE}}# and #{{CONTAINER_REGISTRY_SECRET}}#, so let's replace them.
In the above code, CONTAINER_REGISTRY_SECRET is a hardcoded string which you can name anything, or define as an environment input for your workflow and use it.
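The manifest token replacement can be sketched the same way as the appsettings one (the image reference and the REGISTRY_URL secret are placeholders; the pull secret name matches the one used in the final deploy step of this article):

```yaml
- name: Replace tokens in manifests
  uses: cschleiden/replace-tokens@v1
  with:
    tokenPrefix: '#{{'
    tokenSuffix: '}}#'
    files: '["myapp/k8s/*.yml"]'
  env:
    CONTAINER_IMAGE: ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }}
    CONTAINER_REGISTRY_SECRET: myapp-image-pull-secret
```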
Now, in the next step of the 'deploy' job, we will create the image pull secret, which the cluster will use to pull the image when creating pods and deploying the image.
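One action that can create this secret is azure/k8s-create-secret; the version and input names below are my assumptions, so verify them against the action's documentation (the REGISTRY_* secrets are placeholders, and the secret name matches the deploy step):

```yaml
- name: Create image pull secret
  uses: azure/k8s-create-secret@v2
  with:
    namespace: ${{ secrets.kubernetesNamespace }}
    secret-name: myapp-image-pull-secret
    container-registry-url: ${{ secrets.REGISTRY_URL }}
    container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
    container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
```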
And now the final step of the deploy job deploys to the cluster.
```yaml
- name: Deploy to AKS
  id: deploy-aks
  uses: Azure/k8s-deploy@v4
  with:
    namespace: ${{ secrets.kubernetesNamespace }}
    manifests: myapp/k8s/
    imagepullsecrets: myapp-image-pull-secret
```
And we are done. During the whole process you may hit two kinds of issues: either an Azure access issue (which you have to resolve) or wrong file references within the artifact (in my case, myapp), i.e. a manifest file or image tag file not found. In that case you can add the step below to see the folder structure of your artifact so that you can point to the files correctly.
```yaml
- name: Display structure of downloaded files - Debug
  shell: bash
  run: ls -R
  working-directory: ${{ inputs.artifactName }}
```
Note: use this step only after you have downloaded the artifact in the deploy job.
Now we are done. If you browse to the public IP of your application gateway, you will see the application running.
Note: If you hit any trouble, such as the application not being available at the public IP, then check the following:
1. Make sure the Backend Pools are correctly configured and pointing to your cluster IP on port 8080. This is configured automatically by the Kubernetes deployment when the manifest files are executed.
2. Check that the Kubernetes services, ingress, and pods are running healthy, either through the Azure portal or kubectl commands.
3. Check that your image is built healthy and the appsettings files are correctly detokenized. You can use the Docker extension in Visual Studio Code for all this, or the command prompt, whichever you feel comfortable with.