A background job is a task that needs to run in the background, like a scheduled job, and Azure WebJobs provides this capability as part of Azure App Service.
Azure WebJobs is a feature of Azure App Service that lets you run your background tasks alongside your web application deployed in App Service at NO additional cost.
In this article, we will talk about how to automate background task deployment to an Azure App Service WebJob along with the web application. Please note that you can also deploy only the background job as a WebJob in Azure App Service.
If you followed my previous article about automating web application deployment to Azure App Service, “CI/CD with GitHub Actions to deploy Applications to Azure App Service”, then you are already 90% of the way there. Yes, 90%, because the deployment process is all the same; for the remaining 10%, all you have to do is publish your background tasks to a specific folder along with your web application.
Note: If you are hosting the web application along with the web job, then you must deploy them together in a single pipeline, as I’ll be showing here; otherwise the wwwroot folder will get overridden by the latest deployment. This happens because everything goes into the wwwroot folder: web application content goes directly into wwwroot, while the background task goes inside wwwroot/App_Data/Jobs/Triggered/{your job name}.
For this example, I’ve got a simple .NET console application as a background task, which I’m going to deploy as a WebJob.
To achieve this, we need to modify Step 7 from my previous article to prepare the app settings. In my case, I’m adding one more execution step to the pipeline, as I’m deploying both the web application and the web job. The Step 7 code will finally be:
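As a sketch of what that extra step could look like (the action, file paths, and secret names here are assumptions based on my setup, not a definitive implementation):

```yaml
      # assumption: microsoft/variable-substitution patches JSON config files in place
      - name: App settings substitution for the web app
        uses: microsoft/variable-substitution@v1
        with:
          files: './src/MyApp/appsettings.Production.json'   # hypothetical path
        env:
          ConnectionStrings.DefaultConnection: ${{ secrets.CONNECTIONSTRING }}

      # the same substitution for the background task "DemoTask" in the webjob folder
      - name: App settings substitution for the web job
        uses: microsoft/variable-substitution@v1
        with:
          files: './webjob/DemoTask/appsettings.json'        # hypothetical path
        env:
          ConnectionStrings.DefaultConnection: ${{ secrets.CONNECTIONSTRING }}
```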
In the above code, I added an extra step to the build job to modify the connection string of the background task “DemoTask”, which in my case is a console application inside a folder called “webjob” in my code repo.
Next, we will modify Step 8 from my previous article to add a new step that builds and publishes the console application as a background task, i.e. the web job. Here is the modified Step 8:
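A minimal sketch of that publish step (the project paths and the artifact name are assumptions; the key part is the App_Data/Jobs/Triggered output path):

```yaml
      - name: Publish the web application
        run: dotnet publish -c Release -o '${{ env.DOTNET_ROOT }}/myapp'

      # publish the console app into the Triggered jobs folder of the web app output
      - name: Publish the background task as a web job
        run: dotnet publish ./webjob/DemoTask/DemoTask.csproj -c Release -o '${{ env.DOTNET_ROOT }}/myapp/App_Data/Jobs/Triggered/${{ env.webJobName }}'

      - name: Upload artifact for the deployment job
        uses: actions/upload-artifact@v3
        with:
          name: .net-app
          path: ${{ env.DOTNET_ROOT }}/myapp
```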
In the above code, as you can observe, the background task is published to a specific folder: “${{env.DOTNET_ROOT}}/myapp/App_Data/Jobs/Triggered/${{ env.webJobName }}”
So the only catch is that your background task must end up in App_Data/Jobs/Triggered/{job_name} inside wwwroot, which is the base path.
That’s all. We are done, and this is what the folder structure will look like in Azure App Service.
CI/CD is the process of automating the building, testing & deployment of your application.
CI/CD is no longer a novelty; it’s a necessity for every development team. Over the past few weeks, I got the chance to spend a significant amount of time implementing CI/CD from scratch, and based on that experience I suggest six best practices for any DevOps team:
Plan your repo
Choose your tools
Plan your Tests automation
Secure your pipelines and secrets
Pipelines for early-stage verification and deployment
Involve the Team.
Plan your repo

Your code repository is extremely important for avoiding a mess when working with small or big teams. A wise decision is always to avoid direct pushes to the main branch or a release branch, hence the recommendation is to have branches like this:
In this case, the Feature branch will be the branch that is synced to Main for any changes, and all developers should work from the Feature branch to avoid any accidental/unwanted changes to Main.
Also, the most important task here is to restrict access to the Main & Release branches: no direct pushes, required pull request approvals, etc.
Choose your tools

It is always important to scan the code being pushed to your branches for security issues & vulnerabilities. There are plenty of tools available for this, like SonarQube, Blackduck, etc.
These scanning tools help you make sure your code goes out safely into the internet ocean.
Plan your test automation

Unit tests and integration tests are important to make sure no breaking changes are being pushed and your application stays healthy, but relying on running those tests only on local/dev machines would be a big mistake.
So make sure you have a pipeline that triggers with every pull request to your target branch, builds the project, and runs the tests, and that a PR is accepted only after these pipelines execute successfully.
Secure your pipelines and secrets

For securing your application secrets, you can use a cloud key vault service with restricted access, e.g. Azure Key Vault.
Pipelines for early-stage verification and deployment

Issues in application code are like diseases in your body: the earlier they are caught, the better they can be treated.
So plan pipeline stages like code scanning (SonarQube, Blackduck, etc.), test execution, and dev/test deployment to run early, e.g. with each PR to the feature branch and main branch.
Involve the team

Last but not least, team involvement is very much required, as DevOps is not one person's responsibility. Whether it is writing tests or monitoring a pipeline’s progress and failures, everyone is equally responsible for making sure the pipeline goes green end-to-end.
Hope you enjoyed the content, follow me for more like this, and please don’t forget to LIKE it. Happy programming.
Writing Tests (Unit Tests, Integration Tests) is not only the best practice but also essential to ensure quality code.
From the DevOps side, it is essential to put a gate to check for successful unit test execution with each pull request or push. So in this article, we will see how to implement a pipeline to run the unit tests and publish the results.
What do we need?
We need a pipeline to be triggered with every pull request for your code repo.
Jobs to run the unit tests and publish the results.
Let’s create our pipeline for unit test execution. To do this, add a YAML file as .github/workflows/buildAndRunTest.yml in the root project folder, and then start by defining the name & triggering actions.
```yaml
name: Build and run tests

env:
  DOTNET_VERSION: '6.0'        # set this to the .NET Core version to use
  WORKING_DIRECTORY: './src'   # define the root folder path

on:
  workflow_dispatch:   # for manual trigger
  pull_request:        # trigger action with pull request
    branches:
      - main           # your branch name
```
As the next step, we will define the job, mention the target machine (Linux/Windows), and then perform the necessary steps for code checkout and setting up .NET Core.
```yaml
jobs:
  build:
    runs-on: ubuntu-latest   # target machine is Ubuntu

    steps:
      - uses: actions/checkout@v2

      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}  # reading it from the env variable defined above
```
I’m using the “EnricoMi/publish-unit-test-result-action” action from the marketplace with the JUnit logger, hence we need to install the related logger package for each test project as the next step.
It is an XML logger that produces a JUnit v5 compliant XML report when tests are run with “dotnet test” or “dotnet vstest”.
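As a sketch (assuming each test project references the JunitXml.TestLogger NuGet package; the LogFilePath and file glob are assumptions), the test-run and publish steps could look like:

```yaml
      - name: Run tests with JUnit logger
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: dotnet test --logger "junit;LogFilePath=../test-results/{assembly}.xml"

      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()   # publish results even when some tests fail
        with:
          files: "**/test-results/*.xml"
```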
In both articles I gave examples of keeping secrets in GitHub Environments, but what if you want to store your secrets in Azure Key Vault, which has some advantages over GitHub secrets? For example, you can view a secret’s value in Azure Key Vault, upgrade it programmatically to a new version if needed, and control access permissions based on need. So here is an example of preparing your appsettings.Production.json (or other config files) by reading secrets from Azure Key Vault. Follow the steps below:
1. The first thing we need is connectivity to Azure so that the pipeline can perform the Azure login, and for this purpose I always suggest using a service principal instead of a user ID & password. This is the only setting you need to store as a GitHub secret, so that the pipeline can use it to log in to Azure. Here is the command to generate the service principal:
```shell
az ad sp create-for-rbac --name "{your_serviceprincipal_name}" --scope /subscriptions/{subscription_id}/resourceGroups/{resourceGroupName} --role Contributor --sdk-auth
```
Note: I’m creating the service principal with the Contributor role at the resource group level for my needs, but I would recommend downgrading the role based on the access you actually require.
2. Next, we need to grant the above-created service principal access to read secrets from the Key Vault. To do this, log in to the Azure portal, navigate to your Key Vault => Access policies, and click on +Create.
From the Permissions tab, select Get, List, and Decrypt under Key Permissions, and Get and List under both Secret Permissions and Certificate Permissions. From the Principal tab, search for your service principal and select it.
3. The setup is done; now write the code in the GitHub Action to read the secrets from Azure Key Vault.
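A sketch of those steps (the Key Vault name, the token-replacement action, and the file path are assumptions; adjust them to your setup):

```yaml
    steps:
      # a. check out the code repository
      - uses: actions/checkout@v2

      # b. log in to Azure with the service principal stored as a GitHub secret
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.YourServicePrincipal }}

      # c. read the secret from Azure Key Vault
      - name: Read secrets from Azure Key Vault
        id: keyvault
        uses: Azure/get-keyvault-secrets@v1
        with:
          keyvault: 'your-keyvault-name'     # hypothetical Key Vault name
          secrets: 'CONNECTIONSTRING'

      # d. replace the token in appsettings.Production.json with the secret value
      - name: Replace tokens in appsettings.Production.json
        uses: cschleiden/replace-tokens@v1
        with:
          files: '["**/appsettings.Production.json"]'
        env:
          ConnectionString: ${{ steps.keyvault.outputs.CONNECTIONSTRING }}
```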
From the above code: a. first we check out the code repository; b. we log in to Azure, where ‘YourServicePrincipal’ is the secret stored in the GitHub environment, which you created in step 1; c. we read the secret ‘CONNECTIONSTRING’ from Azure Key Vault; and finally d. we use that secret to replace the ‘ConnectionString’ token in appsettings.Production.json.
Note: when reading multiple secrets, list all your secret names comma-separated, i.e.:
This article is going to be a bit lengthy, but if you stay with me, I’m sure it will be one-stop learning for automating your application’s deployment to Azure Kubernetes Service.
I assume you know the basics of Kubernetes clusters; here we will be using Azure Kubernetes Service (AKS) to orchestrate the cluster. We need an ingress controller to expose the pods for public access, and in this example I’ll be using Azure Application Gateway to expose the service running in the pods.
AGIC is the Azure Application Gateway Ingress Controller, which allows Azure Application Gateway to be used as the ingress for the AKS cluster.
Now let’s create the GitHub Action to deploy the application to the cluster and expose it through the Azure Application Gateway. To do so, follow the steps below:
Step 1: First, we need to create a Docker container of our application to host on the Kubernetes cluster. To do so, add a Dockerfile to your main project and save it as Dockerfile.
```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS base
WORKDIR /app
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080

FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
COPY . .
WORKDIR "./MyApp"
RUN dotnet build "MyApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyApp.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```
In the above Dockerfile, I’m copying everything to the build stage, setting the work directory to MyApp (where MyApp.csproj is available), and then doing the build & publish.
Note: You can also generate the Dockerfile through Visual Studio if you enable Docker support, or through Visual Studio Code if you have the Docker extension installed. If so, just press Ctrl+Shift+P in Visual Studio Code and type the command >docker
Step 2: We need Azure credentials to connect to Azure, saved as a GitHub secret. For this we will generate the credentials by running the command below from a command prompt.
```shell
# first, log in to Azure
az login

# then create the service principal credentials
az ad sp create-for-rbac --name "MyApp" --scope /subscriptions/{subscription_id}/resourceGroups/{resourceGroupName} --role Contributor --sdk-auth
```
The above command will output a result like the one below; save it as a GitHub environment secret.
Step 3: To host the application on the Kubernetes cluster we need to create the files deployment.yml (for container deployment to the pods), service.yml (to expose an internal proxy for accessing the application running in the pods), and ingress-appgateway.yml (the ingress resource which will expose the application for public access).
If you want, you can merge all of these into a single file, but for better understanding and separation I’m keeping them separate. So first I’ll create a folder named k8s (you can use any name) in the parent directory to keep all the files. Hence my directory looks like:
All three files above are self-explanatory if you are aware of the basic concepts of Kubernetes clusters, but I have still highlighted the name strings which should match the names you give your deployment, app, and service.
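For reference, a minimal sketch of what the deployment.yml could look like (the names, label, and replica count are assumptions; the two #{{...}}# tokens are the ones the pipeline will replace later):

```yaml
# k8s/deployment.yml (sketch; names and labels are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "#{{CONTAINER_IMAGE}}#"          # token replaced by the pipeline
          ports:
            - containerPort: 8080                 # matches the port exposed in the Dockerfile
      imagePullSecrets:
        - name: "#{{CONTAINER_REGISTRY_SECRET}}#" # token replaced by the pipeline
```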
Step 4: Now let’s start creating the GitHub Actions pipeline by adding a YAML file as ./.github/workflows/service-deploy.yml with the code below.
```yaml
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 6.0.x

      - name: Run unit and integration tests
        shell: bash
        working-directory: ./src
        run: dotnet test -c Release
```
Step 6: Now we add the ‘publish’ job. As part of the publish job we will replace the tokens in appsettings.Production.json (or appsettings.json), if any.
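A sketch of how that token replacement could be wired up (the replace-tokens action, environment name, and file glob are assumptions):

```yaml
  publish:
    runs-on: ubuntu-latest
    needs: [build]        # requires the build job to succeed first
    environment: test     # GitHub environment holding the secrets
    steps:
      - uses: actions/checkout@v2

      # replace "#{ConnectionString}#" tokens with the environment secret value
      - name: Replace tokens in appsettings
        uses: cschleiden/replace-tokens@v1
        with:
          tokenPrefix: '#{'
          tokenSuffix: '}#'
          files: '["**/appsettings.Production.json"]'
        env:
          ConnectionString: ${{ secrets.CONNECTIONSTRING }}
```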
As per the above code, the appsettings.Production.json file has a token defined as “#{ConnectionString}#” which has to be replaced with the value stored in the GitHub environment’s (named ‘test’) secret named ‘CONNECTIONSTRING’. Also, this job requires the [build] job to run successfully.
Step 7: Next we will continue within the publish job and add steps to connect to Azure Container Registry and then build & push the container.
Here we log in to the ACR with the ACR URL, username & password stored in GitHub environment secrets.
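A sketch of the login and build & push steps (the secret names and image name are assumptions):

```yaml
      - name: Login to Azure Container Registry
        uses: azure/docker-login@v1
        with:
          login-server: ${{ secrets.ACR_URL }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}

      # build the image from the Dockerfile and push it, tagged with the commit SHA
      - name: Build and push container image
        run: |
          docker build . -t ${{ secrets.ACR_URL }}/myapp:${{ github.sha }}
          docker push ${{ secrets.ACR_URL }}/myapp:${{ github.sha }}
```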
In the ‘deploy’ job, the first step we add downloads the artifact. Here the artifact name must match the name provided in Step 7’s publish-artifact step.
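That download step could look like this (the artifact name ‘myapp’ follows the example used in this article and is otherwise an assumption):

```yaml
      - name: Download artifact
        uses: actions/download-artifact@v3
        with:
          name: myapp   # must match the upload-artifact name from the publish job
```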
Next, we need to set the AKS context using the Azure credentials which we generated in Step 2 and stored as a GitHub environment secret, along with the resource group name and cluster name, which again we read from GitHub environment secrets.
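A sketch of those two steps (the secret names are assumptions):

```yaml
      # log in to Azure with the service principal credentials from Step 2
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # point kubectl at the target AKS cluster
      - name: Set AKS context
        uses: azure/aks-set-context@v3
        with:
          resource-group: ${{ secrets.RESOURCE_GROUP }}
          cluster-name: ${{ secrets.CLUSTER_NAME }}
```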
Now, if you remember, the deployment.yml file which we created in Step 3 has two tokens, #{{CONTAINER_IMAGE}}# and #{{CONTAINER_REGISTRY_SECRET}}#, so let’s replace them.
In the above code, CONTAINER_REGISTRY_SECRET is a hardcoded string which you can name anything, or define as an environment input for your workflow and use from there.
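As a sketch, that replacement could use the same token-replacement action with the #{{...}}# prefix/suffix (the action, paths, and secret names are assumptions):

```yaml
      - name: Replace tokens in deployment.yml
        uses: cschleiden/replace-tokens@v1
        with:
          tokenPrefix: '#{{'
          tokenSuffix: '}}#'
          files: '["myapp/k8s/deployment.yml"]'
        env:
          CONTAINER_IMAGE: ${{ secrets.ACR_URL }}/myapp:${{ github.sha }}
          CONTAINER_REGISTRY_SECRET: myapp-image-pull-secret   # hardcoded name, reused below
```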
Now, in the next step of the ‘deploy’ job, we will create the image pull secret, which the cluster will use to pull the image when creating the pods and deploying the image.
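A sketch of creating that secret in the cluster (the ACR secret names are assumptions; the secret name must match what was substituted into deployment.yml):

```yaml
      - name: Create image pull secret
        uses: azure/k8s-create-secret@v2
        with:
          container-registry-url: ${{ secrets.ACR_URL }}
          container-registry-username: ${{ secrets.ACR_USERNAME }}
          container-registry-password: ${{ secrets.ACR_PASSWORD }}
          secret-name: myapp-image-pull-secret
```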
And now the final step of the deploy job is deploying to the cluster.
```yaml
      - name: Deploy to AKS
        id: deploy-aks
        uses: Azure/k8s-deploy@v4
        with:
          namespace: ${{ secrets.kubernetesNamespace }}
          manifests: myapp/k8s/
          imagepullsecrets: myapp-image-pull-secret
```
And we are done. During the whole process you may hit two kinds of issues: either an Azure access issue (which you will have to resolve) or wrong file references from the artifact (in my case myapp), i.e. manifest file not found or image tag file not found. In the latter case you can add the step below to see the folder structure of your artifact, so that you can point to the files correctly.
```yaml
      - name: Display structure of downloaded files (debug)
        shell: bash
        run: ls -R
        working-directory: ${{ inputs.artifactName }}
```
Note: use this step only after you have downloaded the artifact in the deploy job.
Now we are done. If you browse to the public IP of your Application Gateway, you will see the application running.
Note: If you run into trouble, such as the application not being available at the public IP, then check the following. 1. Make sure the backend pools are correctly configured and pointing to your cluster IP on port 8080; they are configured automatically by the Kubernetes deployment when the manifest files are executed. 2. Check that the Kubernetes services, ingress, and pods are running healthy, either through the Azure portal or kubectl commands. 3. Check that your image was built healthy and the appsettings files were correctly detokenized. You can use the Docker extension in Visual Studio Code or the command prompt to do all this, whichever you feel comfortable with.
Hope you enjoyed the content, follow me for more like this and please don’t forget to like/comment for it. Happy programming.