Docker Fundamentals: Containerizing Your Applications

Docker revolutionized application deployment by packaging applications with their dependencies into portable containers. Let’s explore how to effectively containerize your applications.
Basic Dockerfile for Node.js
Create a simple Dockerfile for a Node.js application:
# Use official Node.js image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Set user for security
USER node
# Start application
CMD ["node", "server.js"]

Build and run:
# Build the image
docker build -t my-node-app:1.0 .
# Run the container
docker run -d -p 3000:3000 --name my-app my-node-app:1.0
# View running containers
docker ps
# View logs
docker logs my-app
# Stop container
docker stop my-app
# Remove container
docker rm my-app

Multi-Stage Builds
Optimize image size with multi-stage builds:
# Stage 1: Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production stage
FROM node:18-alpine
WORKDIR /app
# Copy only necessary files from builder
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
# Non-root user
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]

React Application Dockerfile
Containerize a React app with nginx:
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy custom nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy built files from build stage
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Nginx configuration (nginx.conf):
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api {
        proxy_pass http://backend:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Docker Compose for Multi-Container Apps
Orchestrate multiple services:
version: "3.8"
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - app-network
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network
    volumes:
      - ./backend:/app
      - /app/node_modules
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    networks:
      - app-network
    volumes:
      - redis-data:/data
networks:
  app-network:
    driver: bridge
volumes:
  postgres-data:
  redis-data:

Run with Docker Compose:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f backend
# Stop all services
docker-compose down
# Rebuild and start
docker-compose up --build -d
# Remove volumes
docker-compose down -v

Development vs Production Dockerfiles
Create separate configurations:
Dockerfile.dev:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
# Hot reload for development
CMD ["npm", "run", "dev"]

Dockerfile.prod:
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Full install: the build step needs devDependencies
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]

Docker Ignore File
Exclude unnecessary files (.dockerignore):
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.local
.DS_Store
coverage
.vscode
.idea
*.log
dist
build

Environment Variables
Manage configuration securely:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Set default environment variables
ENV NODE_ENV=production \
    PORT=3000 \
    LOG_LEVEL=info
EXPOSE 3000
CMD ["node", "server.js"]

Using environment file:
# .env.production
NODE_ENV=production
DATABASE_URL=postgresql://user:pass@db:5432/myapp
JWT_SECRET=your-secret-key
API_KEY=your-api-key
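Inside the container, application code reads these values from process.env. One common pattern is to centralize that lookup in a small helper; the sketch below is hypothetical and mirrors the ENV defaults from the Dockerfile above:

```javascript
// config.js — hypothetical helper mirroring the Dockerfile's ENV defaults.
// Values passed via --env-file or -e override the defaults baked into the image.
function loadConfig(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV || "production",
    port: parseInt(env.PORT || "3000", 10),
    logLevel: env.LOG_LEVEL || "info",
    // Secrets get no default: leaving them undefined makes a missing
    // --env-file obvious at startup instead of at first request
    databaseUrl: env.DATABASE_URL,
    jwtSecret: env.JWT_SECRET,
  };
}

module.exports = { loadConfig };
```

Keeping the defaults in one place makes it clear which settings the image bakes in and which must come from the runtime environment.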
# Run with environment file
docker run --env-file .env.production -p 3000:3000 my-app

Health Checks
Add health checks to your containers:
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
    CMD node healthcheck.js
CMD ["node", "server.js"]

healthcheck.js:
const http = require("http");

const options = {
  host: "localhost",
  port: 3000,
  path: "/health",
  timeout: 2000,
};

const request = http.request(options, (res) => {
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

request.on("error", () => {
  process.exit(1);
});

request.end();

Python Flask Application
Containerize a Python app:
FROM python:3.11-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && \
    chown -R appuser:appuser /app
USER appuser
EXPOSE 5000
# Use gunicorn for production
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]

requirements.txt:
Flask==2.3.0
gunicorn==21.2.0
psycopg2-binary==2.9.7
redis==5.0.0

Useful Docker Commands
# List images
docker images
# Remove unused images
docker image prune -a
# View container stats
docker stats
# Execute command in running container
docker exec -it my-app sh
# Copy files from container
docker cp my-app:/app/logs ./logs
# Inspect container
docker inspect my-app
# View port mappings
docker port my-app
# Save image to file
docker save -o my-app.tar my-app:1.0
# Load image from file
docker load -i my-app.tar
# Tag image
docker tag my-app:1.0 myregistry.com/my-app:1.0
# Push to registry
docker push myregistry.com/my-app:1.0

Best Practices
# 1. Pin specific versions, not 'latest'
# (Dockerfiles do not support trailing comments on instruction lines)
# ✅ FROM node:18.17-alpine
# ❌ FROM node:latest
# 2. Use .dockerignore to exclude files
# Reduces build context size
# 3. Combine RUN commands to reduce layers
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean
# 4. Order instructions by change frequency
# Most stable instructions first
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Source code changes frequently, so copy it last
COPY . .
# 5. Don't run as root
USER node
# 6. Use multi-stage builds
# Keeps final image small
# 7. Clean up in the same layer
RUN apt-get update && \
    apt-get install -y build-essential && \
    npm install && \
    apt-get remove -y build-essential && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

Conclusion
Docker simplifies deployment and ensures consistency across environments. Master these fundamentals to build efficient, secure containers for your applications. Start with simple Dockerfiles and gradually adopt advanced patterns like multi-stage builds as your needs grow.