Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers.
Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
This is a technology I feel everyone is aware of, whether they are using it or not. Today I’m going to list some best practices to tune up your Dockerfile if you are creating one. The points below will help you create an optimized, cleaner, and more maintainable Dockerfile:
1. Use a specific, appropriate image as your base image instead of starting from a generalized base image and installing the required packages yourself.
i.e. if I need an image to run my .NET Core application, I’ll check the available .NET Core images here https://hub.docker.com/_/microsoft-dotnet-aspnet/ and use the one I need, like:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
Microsoft provides well-defined images for most needs. Now think about a situation where you need a Linux image to host a Node.js application. In this case, instead of pulling a plain Linux image like ‘FROM ubuntu’ and then installing Node on top of it, always look for a Node.js image, i.e.:
FROM node:17.2.0
For the available versions, please visit: https://hub.docker.com/_/node
In both of the examples above I explicitly mentioned the image version, which is optional. “FROM node” is equivalent to “FROM node:latest”, so it will always pull the latest Node image whenever you build, which might break your stuff. Hence avoid doing that.
2. Always use the most lightweight image that suits your requirements.
Full-blown images come with extra tools/utilities/features which you might not need; without understanding them you might be creating a security issue, and they also increase the image size, which may cost you extra for storage. Hence stick to the specific image you need.
Most official Docker images come in various flavors such as alpine, bullseye, buster, nanoserver, etc., so do check the available variants and features before using them.
Use this command to inspect an image: docker image inspect {image_name}, e.g.:
docker image inspect mcr.microsoft.com/dotnet/aspnet:5.0
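As a sketch of this practice, the .NET example from point 1 could be switched to the smaller Alpine-based variant of the same runtime (the 5.0-alpine tag is published on the same Docker Hub page; check the actual image sizes yourself with docker images, as they change between releases):

```
# Full Debian-based runtime image (larger, more tooling baked in):
# FROM mcr.microsoft.com/dotnet/aspnet:5.0

# Alpine-based variant of the same runtime (noticeably smaller footprint):
FROM mcr.microsoft.com/dotnet/aspnet:5.0-alpine
```

Note that Alpine uses musl instead of glibc, so verify your application actually runs on it before committing to the slimmer variant.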
3. Optimize image layer caching
Each command in your Dockerfile creates a layer, and each layer is cached by Docker on the local file system. So when you rebuild the Docker image, if nothing has changed for a particular layer, Docker will re-use that layer from the cache. This gives us faster downloads and faster image builds.
Hence, to make use of Docker caching effectively, arrange layers (Dockerfile commands) from least to most frequently changing.
i.e. let’s say you use a Windows Server image, then install some dependencies (packages/tools etc.), build/package your project, and copy the output. In this case the correct order would be: pull the Windows Server image => install the packages/dependencies => build your project => copy the build.
Below is an example of a Node image with dependency packages installed in the correct order.
FROM node:17.2.0-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production
COPY myapp /app
CMD ["node", "src/index.js"]
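For contrast, here is a sketch of the anti-pattern: if the whole source tree is copied before npm install, any code change invalidates that COPY layer and every layer after it, forcing a full dependency reinstall on every build:

```
# Anti-pattern (sketch): copying all sources before installing dependencies
FROM node:17.2.0-alpine
WORKDIR /app
COPY . /app                    # any source change invalidates this layer...
RUN npm install --production   # ...so this re-runs on every build
CMD ["node", "src/index.js"]
```

Because package.json/package-lock.json change far less often than the source code, copying only them first (as in the example above) lets the npm install layer stay cached across most builds.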
4. Avoid copying files/folders to the image that are not required
Use a .dockerignore file to list the files/folders to exclude. The .dockerignore file should be at the root folder level of your project. Below is a typical example of a .dockerignore file.
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/azds.yaml
**/bin
**/charts
**/docker-compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md
5. Use the multi-stage build concept.
This is a very important concept: it keeps stuff like tools and files that are required only to build the project, but not to run the application, out of the final image.
i.e. take the example of a .NET Core application: to build it we need the .NET SDK, but to run the app we only need the .NET runtime. Hence we take advantage of multi-stage builds and use two different images, with build & publish stages, to produce the final image. The code below depicts this:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CoreWebAPIDemo.csproj", "."]
RUN dotnet restore "./CoreWebAPIDemo.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "CoreWebAPIDemo.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "CoreWebAPIDemo.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CoreWebAPIDemo.dll"]
6. Use the least-privileged user to start the application
By default, Docker runs the container as the root user, which has root-level access and could be a security risk. To avoid that, it is recommended to create a dedicated user with the least privileges required to run the application inside the container.
i.e. I have a dedicated user with all the permissions my application requires to run, so I set it up using the code below.
USER ContainerUser
ENTRYPOINT ["dotnet", "CoreWebAPIDemo.dll"]
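ContainerUser exists out of the box on Windows Nano Server images. Linux-based images have no such user by default, so on those you create one yourself first. Below is a sketch for a Debian-based image (the user name appuser is illustrative, not from my project; on Alpine the equivalent is adduser -D appuser):

```
# Linux variant (sketch): create a non-root user, then run as that user
RUN adduser --disabled-password --gecos "" appuser
USER appuser
ENTRYPOINT ["dotnet", "CoreWebAPIDemo.dll"]
```

Place the USER instruction after any steps that need root (package installs, chown of app folders), so only the application itself runs unprivileged.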
7. Perform Vulnerability scanning for Docker image
This is a must-do step before releasing your Docker image to production. Use the docker scan {image_name/id} command to perform the scan. The result will list the vulnerabilities, if any, and tell you which patched release contains a fix so that you can make use of it.
Learn more about docker scan here: https://docs.docker.com/engine/scan/
The above best practices apply to any Docker image you build, irrespective of the technology: Microsoft, Node.js, Java, etc. Hope you like it.
Thank you for reading. Don’t forget to clap if you liked it, and leave comments with suggestions. Follow me for updates on my next article(s).