
Production-Ready Docker Setup for Full-Stack Apps: Next.js + FastAPI + PostgreSQL with Tortoise ORM


You know that frustrating moment when your app works perfectly on your laptop, but the moment you try to deploy it or share it with your teammate, everything falls apart? Yeah, we've all been there. That's the classic "works on my machine" nightmare, and honestly, it's exhausting.

Today, we're going to fix that once and for all. We're building a proper, production-ready Docker setup for a full-stack application that works consistently everywhere: your laptop, your colleague's machine, staging, and production. No more surprises, no more excuses.

By the end of this tutorial, you'll have a complete containerised application with hot reload for development, optimised multi-stage builds for production, and proper orchestration using Docker Compose. We're talking about a real-world setup that you can actually use in production.

What We're Actually Building

Here's what you're getting:

  • Frontend: Next.js 15 with TypeScript

  • Backend: FastAPI with Tortoise ORM

  • Database: PostgreSQL 16

  • Reverse Proxy: Nginx for production

  • Development Environment: Hot reload for both frontend and backend

  • Production Environment: Multi-stage builds that'll shrink your images by 80%

  • Proper Configuration: Separate environment files for better security and organisation

Why Should You Care?

Look, Docker can be brilliant or it can be an absolute disaster. I've seen 2GB+ Docker images that take forever to build and deploy. I've seen development setups where you have to rebuild the entire container just to test a one-line change. I've seen production deployments with hardcoded passwords and security holes you could drive a lorry through.

We're doing none of that. This tutorial follows best practices that actual companies use in production. You're learning the right way from the start.

Project Structure

fullstack-docker-app/
├── frontend/
│   ├── Dockerfile
│   ├── Dockerfile.dev
│   ├── .dockerignore
│   ├── .env.example
│   ├── next.config.mjs
│   ├── package.json
│   └── app/
├── backend/
│   ├── Dockerfile
│   ├── Dockerfile.dev
│   ├── .dockerignore
│   ├── .env.example
│   ├── requirements.txt
│   ├── main.py
│   ├── database.py
│   ├── models.py
│   ├── schemas.py
│   └── enums.py
├── nginx/
│   └── nginx.conf
├── docker-compose.yml
├── docker-compose.dev.yml
└── .gitignore

Notice we're using separate environment files for the frontend and backend. This is crucial for security and proper separation of concerns: no more backend database passwords accidentally exposed to the frontend!

Part 1: Backend Setup with FastAPI and Tortoise ORM

Let's start with the backend because, let's be honest, that's where the real magic happens.

Step 1: Create the Backend Directory

mkdir -p fullstack-docker-app/backend
cd fullstack-docker-app/backend

Step 2: Create Requirements File

File: backend/requirements.txt

Here are the dependencies we need. Nothing fancy, just the essentials:

fastapi==0.109.0
uvicorn[standard]==0.27.0
tortoise-orm==0.20.0
asyncpg==0.29.0
aerich==0.7.2
pydantic==2.5.3
pydantic-settings==2.1.0

A quick note: we're using asyncpg instead of psycopg2 because Tortoise is async-first, and asyncpg is significantly faster.

Step 3: Create Backend Environment Configuration

File: backend/.env.example

This contains all backend-specific configuration. Notice how we keep database credentials and API secrets separate from the frontend:

# Database Configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_secure_password_here
POSTGRES_DB=appdb
# Note: Tortoise ORM expects the postgres:// scheme, not postgresql://
DATABASE_URL=postgres://postgres:your_secure_password_here@db:5432/appdb

# Application Configuration
ENVIRONMENT=development
API_PORT=8000

# Security Configuration
SECRET_KEY=your-super-secret-key-here-change-in-production
CORS_ORIGINS=["http://localhost:3000"]

# External API Keys (backend only - never expose these to frontend)
# STRIPE_SECRET_KEY=sk_test_your_stripe_secret_key
# SENDGRID_API_KEY=SG.your_sendgrid_api_key

Now create your actual environment file:

cp .env.example .env

Important: Never commit .env files to version control!

Step 4: Create Enumerations File

File: backend/enums.py

Let's start with our enumerations. This keeps our code clean and organised:

"""Application enumerations."""

from enum import Enum


class ItemStatus(str, Enum):
    """Item status enumeration."""
    ACTIVE = "active"
    ARCHIVED = "archived"
    DELETED = "deleted"

Step 5: Create Database Models

File: backend/models.py

Now let's create our database models:

"""Database models using Tortoise ORM."""

import uuid
from tortoise import fields
from tortoise.models import Model

from enums import ItemStatus


class BaseModel(Model):
    """
    Base model with common fields.
    All our models inherit from this.
    """
    id = fields.UUIDField(pk=True, default=uuid.uuid4)
    created_at = fields.DatetimeField(auto_now_add=True)
    updated_at = fields.DatetimeField(auto_now=True)

    class Meta:
        abstract = True


class Item(BaseModel):
    """Item model for storing user-created items."""
    name = fields.CharField(max_length=255, index=True)
    description = fields.TextField()
    status = fields.CharEnumField(
        ItemStatus,
        max_length=20,
        default=ItemStatus.ACTIVE
    )
    metadata = fields.JSONField(default=dict)

    class Meta:
        table = "items"
        ordering = ["-created_at"]

    def __str__(self):
        return f"Item: {self.name}"

Step 6: Create Pydantic Schemas

File: backend/schemas.py

Pydantic schemas for request/response validation. Keeping these separate makes your codebase much easier to maintain:

"""Pydantic schemas for request/response validation."""

from typing import Optional
from pydantic import BaseModel, Field

from enums import ItemStatus


class ItemCreate(BaseModel):
    """Schema for creating a new item."""
    name: str = Field(..., min_length=1, max_length=255)
    description: str = Field(..., min_length=1)
    metadata: dict = Field(default_factory=dict)


class ItemUpdate(BaseModel):
    """Schema for updating an item."""
    name: Optional[str] = Field(None, min_length=1, max_length=255)
    description: Optional[str] = Field(None, min_length=1)
    status: Optional[ItemStatus] = None
    metadata: Optional[dict] = None


class ItemResponse(BaseModel):
    """Schema for item responses."""
    id: str
    name: str
    description: str
    status: str
    metadata: dict
    created_at: str
    updated_at: str

    model_config = {"from_attributes": True}
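
ItemUpdate makes every field optional so a PATCH request can send only what changed; combined with exclude_unset in the update handler, the effect is equivalent to this plain-dict sketch (a hypothetical helper, not part of the app):

```python
def partial_update(current: dict, provided: dict) -> dict:
    """Merge only the keys the client actually sent (illustrative helper)."""
    merged = dict(current)
    merged.update(provided)
    return merged


item = {"name": "Old name", "description": "Unchanged", "status": "active"}
patched = partial_update(item, {"name": "New name"})

# Only the provided field changes; everything else is left alone
assert patched == {"name": "New name", "description": "Unchanged", "status": "active"}
```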

Step 7: Create Database Configuration

File: backend/database.py

This is where we configure Tortoise ORM with environment variables:

"""Database configuration and initialisation."""

import os
from fastapi import FastAPI
from tortoise.contrib.fastapi import register_tortoise


def get_db_url() -> str:
    """
    Get database URL from environment variables.
    Falls back to local PostgreSQL if not set.
    """
    return os.getenv(
        "DATABASE_URL",
        # Tortoise ORM expects the postgres:// scheme
        f"postgres://{os.getenv('POSTGRES_USER', 'postgres')}:"
        f"{os.getenv('POSTGRES_PASSWORD', 'postgres')}@"
        f"{os.getenv('POSTGRES_HOST', 'localhost')}:"
        f"{os.getenv('POSTGRES_PORT', '5432')}/"
        f"{os.getenv('POSTGRES_DB', 'appdb')}"
    )


TORTOISE_ORM = {
    "connections": {
        "default": get_db_url()
    },
    "apps": {
        "models": {
            "models": [
                "models",
                "aerich.models"
            ],
            "default_connection": "default",
        },
    },
    "use_tz": False,
    "timezone": "UTC",
}


def init_db(app: FastAPI) -> None:
    """
    Initialise database with Tortoise ORM.
    This registers Tortoise with FastAPI's lifespan events.
    """
    register_tortoise(
        app,
        config=TORTOISE_ORM,
        generate_schemas=True,
        add_exception_handlers=True,
    )
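
You can check the fallback behaviour of get_db_url in isolation. This is a standalone copy of the same logic (note that Tortoise's DSN parser recognises the postgres:// scheme):

```python
import os


def get_db_url() -> str:
    """Standalone copy of the fallback logic in database.py."""
    return os.getenv(
        "DATABASE_URL",
        f"postgres://{os.getenv('POSTGRES_USER', 'postgres')}:"
        f"{os.getenv('POSTGRES_PASSWORD', 'postgres')}@"
        f"{os.getenv('POSTGRES_HOST', 'localhost')}:"
        f"{os.getenv('POSTGRES_PORT', '5432')}/"
        f"{os.getenv('POSTGRES_DB', 'appdb')}"
    )


# With no variables set, the local-development default is produced
for var in ("DATABASE_URL", "POSTGRES_USER", "POSTGRES_PASSWORD",
            "POSTGRES_HOST", "POSTGRES_PORT", "POSTGRES_DB"):
    os.environ.pop(var, None)
assert get_db_url() == "postgres://postgres:postgres@localhost:5432/appdb"

# An explicit DATABASE_URL always wins over the individual variables
os.environ["DATABASE_URL"] = "postgres://app:secret@db:5432/appdb"
assert get_db_url() == "postgres://app:secret@db:5432/appdb"
```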

Step 8: Create FastAPI Application

File: backend/main.py

Here's our FastAPI application with proper environment configuration:

"""
FastAPI application with Tortoise ORM.
A proper production-ready setup with health checks and structured responses.
"""

import os
from contextlib import asynccontextmanager
from typing import List, Optional
from fastapi import FastAPI, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware

from database import init_db
from models import Item
from schemas import ItemCreate, ItemUpdate, ItemResponse
from enums import ItemStatus


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifespan manager."""
    environment = os.getenv("ENVIRONMENT", "development")
    print(f"Starting application in {environment} mode...")
    yield
    print("Shutting down application...")


app = FastAPI(
    title="Full-Stack Docker API",
    description="A production-ready FastAPI application with Tortoise ORM",
    version="1.0.0",
    lifespan=lifespan,
)

# CORS configuration from environment (a JSON-encoded list of origins)
import json

try:
    cors_origins = json.loads(os.getenv("CORS_ORIGINS", '["*"]'))
except json.JSONDecodeError:
    cors_origins = ["*"]

app.add_middleware(
    CORSMiddleware,
    allow_origins=cors_origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

init_db(app)


@app.get("/", tags=["Root"])
async def root():
    """Root endpoint with environment info."""
    return {
        "message": "Full-Stack Docker API is running",
        "environment": os.getenv("ENVIRONMENT", "development"),
        "version": "1.0.0",
    }


@app.get("/health", tags=["Health"])
async def health_check():
    """Health check endpoint for monitoring."""
    return {
        "status": "healthy",
        "environment": os.getenv("ENVIRONMENT", "development"),
    }


@app.post("/items/", response_model=ItemResponse, status_code=status.HTTP_201_CREATED, tags=["Items"])
async def create_item(item: ItemCreate):
    """Create a new item."""
    db_item = await Item.create(
        name=item.name,
        description=item.description,
        metadata=item.metadata,
    )

    return ItemResponse(
        id=str(db_item.id),
        name=db_item.name,
        description=db_item.description,
        status=db_item.status,
        metadata=db_item.metadata,
        created_at=db_item.created_at.isoformat(),
        updated_at=db_item.updated_at.isoformat(),
    )


@app.get("/items/", response_model=List[ItemResponse], tags=["Items"])
async def get_items(
    skip: int = 0,
    limit: int = 10,
    status_filter: Optional[ItemStatus] = None
):
    """Get all items with optional filtering and pagination."""
    query = Item.all()

    if status_filter:
        query = query.filter(status=status_filter)

    items = await query.offset(skip).limit(limit)

    return [
        ItemResponse(
            id=str(item.id),
            name=item.name,
            description=item.description,
            status=item.status,
            metadata=item.metadata,
            created_at=item.created_at.isoformat(),
            updated_at=item.updated_at.isoformat(),
        )
        for item in items
    ]


@app.get("/items/{item_id}", response_model=ItemResponse, tags=["Items"])
async def get_item(item_id: str):
    """Get a specific item by ID."""
    item = await Item.filter(id=item_id).first()

    if not item:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Item with ID {item_id} not found"
        )

    return ItemResponse(
        id=str(item.id),
        name=item.name,
        description=item.description,
        status=item.status,
        metadata=item.metadata,
        created_at=item.created_at.isoformat(),
        updated_at=item.updated_at.isoformat(),
    )


@app.patch("/items/{item_id}", response_model=ItemResponse, tags=["Items"])
async def update_item(item_id: str, item_update: ItemUpdate):
    """Update an item."""
    item = await Item.filter(id=item_id).first()

    if not item:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Item with ID {item_id} not found"
        )

    update_data = item_update.model_dump(exclude_unset=True)
    await item.update_from_dict(update_data)
    await item.save()

    return ItemResponse(
        id=str(item.id),
        name=item.name,
        description=item.description,
        status=item.status,
        metadata=item.metadata,
        created_at=item.created_at.isoformat(),
        updated_at=item.updated_at.isoformat(),
    )


@app.delete("/items/{item_id}", status_code=status.HTTP_204_NO_CONTENT, tags=["Items"])
async def delete_item(item_id: str):
    """Delete an item."""
    item = await Item.filter(id=item_id).first()

    if not item:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Item with ID {item_id} not found"
        )

    await item.delete()
    return None

Step 9: Create Production Dockerfile

File: backend/Dockerfile

This is where the magic of multi-stage builds happens:

FROM python:3.11-slim AS builder

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt


FROM python:3.11-slim

WORKDIR /app

COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .

RUN pip install --no-cache-dir /wheels/*

COPY . .

RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]

Step 10: Create Development Dockerfile

File: backend/Dockerfile.dev

For development with hot reload:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

Step 11: Create Docker Ignore File

File: backend/.dockerignore

__pycache__
*.pyc
*.pyo
*.pyd
.Python
env/
venv/
.venv/
.pytest_cache/
.coverage
*.log
.git
.gitignore
README.md
*.md
.DS_Store
.env
.env.*
!.env.example

Part 2: Frontend Setup with Next.js

Right, backend sorted. Now let's build a proper frontend with its own environment configuration.

Step 1: Create the Next.js App

cd ..
npx create-next-app@latest frontend --typescript --tailwind --app --no-src-dir
cd frontend
npm install axios

Step 2: Create Frontend Environment Configuration

File: frontend/.env.example

Frontend environment variables - notice these are all public-safe variables:

# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:8000

# Application Configuration
NEXT_PUBLIC_APP_NAME=Full-Stack Docker App
NEXT_PUBLIC_ENVIRONMENT=development

# Public Analytics & Third-party Services (public keys only!)
# NEXT_PUBLIC_GOOGLE_ANALYTICS=GA_MEASUREMENT_ID
# NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=pk_test_your_publishable_key
# NEXT_PUBLIC_SENTRY_DSN=https://your-sentry-dsn

# Development Configuration
NEXT_TELEMETRY_DISABLED=1

Create your actual environment file:

cp .env.example .env

Security Note: In Next.js, only variables prefixed with NEXT_PUBLIC_ are exposed to the browser. Never put secrets here!
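
To make that exposure rule concrete, here's a purely illustrative sketch (in Python, not part of the app) of how a NEXT_PUBLIC_ prefix filter behaves: only prefixed keys would ever reach the browser bundle.

```python
# Illustrative only: Next.js inlines just the NEXT_PUBLIC_-prefixed variables
env = {
    "NEXT_PUBLIC_API_URL": "http://localhost:8000",
    "NEXT_PUBLIC_APP_NAME": "Full-Stack Docker App",
    "SECRET_KEY": "never-ship-me",
}
browser_visible = {k: v for k, v in env.items() if k.startswith("NEXT_PUBLIC_")}

assert "SECRET_KEY" not in browser_visible   # secrets stay server-side
assert browser_visible["NEXT_PUBLIC_API_URL"] == "http://localhost:8000"
```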

Step 3: Create the API Client

File: frontend/lib/api.ts

import axios from 'axios';

const API_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000';

export const api = axios.create({
  baseURL: API_URL,
  headers: {
    'Content-Type': 'application/json',
  },
  timeout: 10000,
});

// Add request interceptor for debugging in development
if (process.env.NEXT_PUBLIC_ENVIRONMENT === 'development') {
  api.interceptors.request.use(
    (config) => {
      console.log(`🚀 API Request: ${config.method?.toUpperCase()} ${config.url}`);
      return config;
    },
    (error) => {
      console.error('❌ API Request Error:', error);
      return Promise.reject(error);
    }
  );
}

export interface Item {
  id: string;
  name: string;
  description: string;
  status: string;
  metadata: Record<string, any>;
  created_at: string;
  updated_at: string;
}

export interface ItemCreate {
  name: string;
  description: string;
  metadata?: Record<string, any>;
}

export interface ItemUpdate {
  name?: string;
  description?: string;
  status?: string;
  metadata?: Record<string, any>;
}

export const itemsApi = {
  getAll: async (skip = 0, limit = 10): Promise<Item[]> => {
    const response = await api.get('/items/', {
      params: { skip, limit }
    });
    return response.data;
  },

  getById: async (id: string): Promise<Item> => {
    const response = await api.get(`/items/${id}`);
    return response.data;
  },

  create: async (item: ItemCreate): Promise<Item> => {
    const response = await api.post('/items/', item);
    return response.data;
  },

  update: async (id: string, item: ItemUpdate): Promise<Item> => {
    const response = await api.patch(`/items/${id}`, item);
    return response.data;
  },

  delete: async (id: string): Promise<void> => {
    await api.delete(`/items/${id}`);
  },
};

Step 4: Create the Main Page Component

File: frontend/app/page.tsx

Replace the default content with this:

'use client';

import { useState, useEffect } from 'react';
import { itemsApi, Item, ItemCreate } from '@/lib/api';

export default function Home() {
  const [items, setItems] = useState<Item[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);
  const [formData, setFormData] = useState<ItemCreate>({
    name: '',
    description: '',
    metadata: {},
  });

  const fetchItems = async () => {
    try {
      setLoading(true);
      const data = await itemsApi.getAll();
      setItems(data);
      setError(null);
    } catch (err: any) {
      setError(err.message || 'Failed to fetch items');
      console.error('Error fetching items:', err);
    } finally {
      setLoading(false);
    }
  };

  useEffect(() => {
    fetchItems();
  }, []);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();

    if (!formData.name.trim() || !formData.description.trim()) {
      setError('Please fill in all fields');
      return;
    }

    try {
      await itemsApi.create(formData);
      setFormData({ name: '', description: '', metadata: {} });
      setError(null);
      fetchItems();
    } catch (err: any) {
      setError(err.message || 'Failed to create item');
      console.error('Error creating item:', err);
    }
  };

  const handleDelete = async (id: string) => {
    if (!confirm('Are you sure you want to delete this item?')) {
      return;
    }

    try {
      await itemsApi.delete(id);
      fetchItems();
    } catch (err: any) {
      setError(err.message || 'Failed to delete item');
      console.error('Error deleting item:', err);
    }
  };

  const appName = process.env.NEXT_PUBLIC_APP_NAME || 'Full-Stack Docker App';
  const environment = process.env.NEXT_PUBLIC_ENVIRONMENT || 'development';

  return (
    <main className="min-h-screen p-8 bg-gradient-to-br from-blue-50 to-indigo-100">
      <div className="max-w-4xl mx-auto">
        <div className="mb-8">
          <h1 className="text-4xl font-bold text-gray-800 mb-2">
            {appName}
          </h1>
          <p className="text-gray-600">
            Next.js 15 + FastAPI + PostgreSQL + Tortoise ORM
          </p>
          {environment === 'development' && (
            <div className="mt-2 inline-flex items-center px-2 py-1 rounded-full bg-yellow-100 text-yellow-800 text-sm">
              🚧 Development Mode
            </div>
          )}
        </div>

        {error && (
          <div className="bg-red-50 border border-red-200 text-red-700 px-4 py-3 rounded-lg mb-6">
            <strong>Error:</strong> {error}
          </div>
        )}

        <div className="bg-white rounded-lg shadow-md p-6 mb-8">
          <h2 className="text-2xl font-semibold mb-4 text-gray-800">
            Create New Item
          </h2>
          <form onSubmit={handleSubmit} className="space-y-4">
            <div>
              <label className="block text-sm font-medium text-gray-700 mb-1">
                Name
              </label>
              <input
                type="text"
                value={formData.name}
                onChange={(e) =>
                  setFormData({ ...formData, name: e.target.value })
                }
                className="w-full p-3 border border-gray-300 rounded-md focus:ring-2 focus:ring-blue-500 focus:border-transparent text-gray-900"
                placeholder="Enter item name..."
                required
              />
            </div>
            <div>
              <label className="block text-sm font-medium text-gray-700 mb-1">
                Description
              </label>
              <textarea
                value={formData.description}
                onChange={(e) =>
                  setFormData({ ...formData, description: e.target.value })
                }
                className="w-full p-3 border border-gray-300 rounded-md focus:ring-2 focus:ring-blue-500 focus:border-transparent text-gray-900"
                rows={4}
                placeholder="Describe your item..."
                required
              />
            </div>
            <button
              type="submit"
              className="w-full bg-blue-500 text-white py-3 rounded-md hover:bg-blue-600 transition-colors font-medium"
            >
              Create Item
            </button>
          </form>
        </div>

        <div className="bg-white rounded-lg shadow-md p-6">
          <h2 className="text-2xl font-semibold mb-4 text-gray-800">Items</h2>

          {loading && (
            <div className="text-center py-8">
              <p className="text-gray-600">Loading items...</p>
            </div>
          )}

          {!loading && items.length === 0 && (
            <div className="text-center py-8">
              <p className="text-gray-600">
                No items yet. Create one above to get started!
              </p>
            </div>
          )}

          <div className="space-y-4">
            {items.map((item) => (
              <div
                key={item.id}
                className="border border-gray-200 rounded-md p-4 hover:shadow-md transition-shadow"
              >
                <div className="flex justify-between items-start">
                  <div className="flex-1">
                    <h3 className="text-lg font-semibold text-gray-800">
                      {item.name}
                    </h3>
                    <p className="text-gray-600 mt-1">{item.description}</p>
                    <div className="mt-3 flex items-center gap-4 text-sm text-gray-500">
                      <span className="inline-flex items-center px-2 py-1 rounded-full bg-green-100 text-green-800">
                        {item.status}
                      </span>
                      <span>
                        Created: {new Date(item.created_at).toLocaleDateString()}
                      </span>
                    </div>
                  </div>
                  <button
                    onClick={() => handleDelete(item.id)}
                    className="ml-4 px-3 py-1 bg-red-500 text-white rounded hover:bg-red-600 transition-colors text-sm"
                  >
                    Delete
                  </button>
                </div>
              </div>
            ))}
          </div>
        </div>
      </div>
    </main>
  );
}

Step 5: Update Next.js Config

File: frontend/next.config.mjs

Replace with this:

const nextConfig = {
  output: 'standalone',
  env: {
    NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL,
    NEXT_PUBLIC_APP_NAME: process.env.NEXT_PUBLIC_APP_NAME,
    NEXT_PUBLIC_ENVIRONMENT: process.env.NEXT_PUBLIC_ENVIRONMENT,
  },
  // Redirect /api routes to backend (useful for production)
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: `${process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000'}/:path*`,
      },
    ];
  },
};

export default nextConfig;

Step 6: Create Frontend Production Dockerfile

File: frontend/Dockerfile

FROM node:20-alpine AS deps
WORKDIR /app

COPY package.json package-lock.json* ./
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED=1
ENV NODE_ENV=production

RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

Step 7: Create Frontend Development Dockerfile

File: frontend/Dockerfile.dev

FROM node:20-alpine

WORKDIR /app

COPY package.json package-lock.json* ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "run", "dev"]

Step 8: Create Frontend Docker Ignore File

File: frontend/.dockerignore

node_modules
.next
.git
.gitignore
README.md
npm-debug.log
.DS_Store
.env
.env.*
!.env.example

Part 3: Nginx Configuration

For production, we'll use Nginx as a reverse proxy.

File: nginx/nginx.conf

upstream frontend {
    server frontend:3000;
}

upstream backend {
    server backend:8000;
}

server {
    listen 80;
    server_name localhost;

    client_max_body_size 10M;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Frontend routes
    location / {
        proxy_pass http://frontend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Backend API routes
    location /api/ {
        rewrite ^/api/(.*) /$1 break;
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Health check endpoint
    location /health {
        proxy_pass http://backend/health;
        access_log off;
    }

    # API documentation
    location /docs {
        proxy_pass http://backend/docs;
    }

    location /redoc {
        proxy_pass http://backend/redoc;
    }
}
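
The rewrite ^/api/(.*) /$1 break; directive strips the /api prefix before the request reaches FastAPI, so /api/items/ arrives at the backend as /items/. The same substitution, expressed with Python's re module for clarity:

```python
import re


def strip_api_prefix(path: str) -> str:
    """Mirror of nginx's `rewrite ^/api/(.*) /$1 break;` rule."""
    return re.sub(r"^/api/(.*)", r"/\1", path)


assert strip_api_prefix("/api/items/") == "/items/"
assert strip_api_prefix("/api/health") == "/health"
assert strip_api_prefix("/docs") == "/docs"  # non-matching paths pass through unchanged
```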

Part 4: Docker Compose Configuration

Now for the orchestration using separate environment files for each service.

Step 1: Create Development Compose File

File: docker-compose.dev.yml

Notice how each service loads its own environment file:

services:
  db:
    image: postgres:16-alpine
    container_name: fullstack-db-dev
    env_file:
      - ./backend/.env  # Database config comes from backend
    ports:
      - "5432:5432"
    volumes:
      - postgres_data_dev:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-postgres}"]
      interval: 5s
      timeout: 5s
      retries: 5

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    container_name: fullstack-backend-dev
    env_file:
      - ./backend/.env  # Backend-specific environment
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
      - /app/__pycache__
    depends_on:
      db:
        condition: service_healthy
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    container_name: fullstack-frontend-dev
    env_file:
      - ./frontend/.env  # Frontend-specific environment
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next
    depends_on:
      - backend

volumes:
  postgres_data_dev:
    driver: local

Step 2: Create Production Compose File

File: docker-compose.yml

services:
  db:
    image: postgres:16-alpine
    container_name: fullstack-db
    env_file:
      - ./backend/.env  # Database config from backend environment
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER:-postgres}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - app-network

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: fullstack-backend
    env_file:
      - ./backend/.env  # Backend environment variables
    expose:
      - "8000"
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - app-network

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: fullstack-frontend
    env_file:
      - ./frontend/.env  # Frontend environment variables
    expose:
      - "3000"
    depends_on:
      - backend
    restart: unless-stopped
    networks:
      - app-network

  nginx:
    image: nginx:alpine
    container_name: fullstack-nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - frontend
      - backend
    restart: unless-stopped
    networks:
      - app-network

volumes:
  postgres_data:
    driver: local

networks:
  app-network:
    driver: bridge

Step 3: Create Git Ignore File

File: .gitignore

This is crucial for security. Never commit environment files:

# Environment files (keep examples)
**/.env
**/.env.local
**/.env.development
**/.env.staging
**/.env.production

# But keep examples for reference
!**/.env.example
!**/.env.local.example

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
.venv/
*.db
*.sqlite

# Node.js
node_modules/
.next/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# System files
.DS_Store
Thumbs.db

# IDE
.vscode/
.idea/
*.swp
*.swo

# Logs
*.log
logs/

# Docker
.dockerignore

Setting Up Environment Files

Now let's create the actual environment files for both services:

Backend Environment Setup

cd backend
cp .env.example .env

Edit backend/.env with your actual values:

# Database Configuration
POSTGRES_USER=postgres
POSTGRES_PASSWORD=super_secure_password_123
POSTGRES_DB=appdb
DATABASE_URL=postgres://postgres:super_secure_password_123@db:5432/appdb

# Application Configuration
ENVIRONMENT=development
API_PORT=8000

# Security Configuration
SECRET_KEY=your-super-secret-key-here-change-in-production
CORS_ORIGINS=["http://localhost:3000"]
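One detail worth calling out: `CORS_ORIGINS` is stored as a JSON-style list, so the backend has to parse it rather than use the raw string. A minimal standard-library sketch (variable names are illustrative; a real app would more likely use pydantic-settings):

```python
import json
import os

# Simulate values loaded from backend/.env (normally injected by Docker Compose)
os.environ.setdefault("DATABASE_URL", "postgres://postgres:super_secure_password_123@db:5432/appdb")
os.environ.setdefault("CORS_ORIGINS", '["http://localhost:3000"]')

database_url = os.environ["DATABASE_URL"]
# The value is a JSON array, so decode it into a real Python list
cors_origins = json.loads(os.environ["CORS_ORIGINS"])

print(cors_origins)  # a list, ready to pass to FastAPI's CORSMiddleware
```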

Frontend Environment Setup

cd ../frontend
cp .env.example .env

Edit frontend/.env (remember that NEXT_PUBLIC_* variables are inlined into the JavaScript bundle at build time, so changing them later requires rebuilding the image):

# API Configuration
NEXT_PUBLIC_API_URL=http://localhost:8000

# Application Configuration
NEXT_PUBLIC_APP_NAME=Full-Stack Docker App
NEXT_PUBLIC_ENVIRONMENT=development

# Development Configuration
NEXT_TELEMETRY_DISABLED=1

Running the Application

Development Mode

First time setup:

# From the project root
docker compose -f docker-compose.dev.yml up

On the first run, Docker Compose builds the images automatically and loads each service's environment file. Because the source directories are volume-mounted, your code changes are reflected immediately without rebuilding.

When do you need --build?

Only when you modify dependencies:

# Changed requirements.txt or package.json?
docker compose -f docker-compose.dev.yml up --build

For regular code changes, just keep the containers running (hot reload handles everything).

Access your application:

  • Frontend: http://localhost:3000

  • Backend API: http://localhost:8000

  • Interactive API docs: http://localhost:8000/docs

Production Mode (Local Deployment)

For production deployment, update the environment files first:

Backend (backend/.env):

ENVIRONMENT=production
# ... other production values

Frontend (frontend/.env):

NEXT_PUBLIC_API_URL=http://localhost/api
NEXT_PUBLIC_ENVIRONMENT=production

Then deploy:

docker compose up -d

Docker Compose builds images automatically on first run. The -d flag runs containers in detached mode (background).

Deploying updates:

docker compose up -d --build

This rebuilds images with your new code and restarts containers.

Access your application:

  • Application (via Nginx): http://localhost

  • API (proxied through Nginx): http://localhost/api

Testing Your Setup

# Check containers
docker compose ps

# Test backend health
curl http://localhost:8000/health

# Test creating an item
curl -X POST http://localhost:8000/items/ \
  -H "Content-Type: application/json" \
  -d '{"name":"Test Item","description":"This is a test"}'

# Check environment variables are working
curl http://localhost:8000/ | jq '.environment'

# Visit frontend
# Open http://localhost:3000 in your browser
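If you'd rather script these checks, here's a small smoke-test sketch using only Python's standard library (URLs and endpoints assumed from this setup):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # backend URL in development mode

def check_health(base: str = BASE) -> bool:
    """Return True if GET <base>/health responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False  # connection refused, timeout, DNS failure, ...

def create_item(name: str, description: str, base: str = BASE) -> dict:
    """POST an item, mirroring the curl command above."""
    req = urllib.request.Request(
        f"{base}/items/",
        data=json.dumps({"name": name, "description": description}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print("backend healthy:", check_health())
```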

Environment Management Best Practices

For Different Environments

Create environment-specific files:

backend/
├── .env.example
├── .env.development
├── .env.staging  
├── .env.production
└── .env  # symlink to current environment

frontend/
├── .env.example
├── .env.development
├── .env.staging
├── .env.production
└── .env  # symlink to current environment
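Switching environments is then just a matter of repointing the symlink. A quick illustration in a scratch directory (in practice, run this inside backend/ or frontend/):

```shell
cd "$(mktemp -d)"                 # scratch dir for the demo
touch .env.development .env.staging .env.production
ln -sf .env.production .env       # activate the production environment
readlink .env                     # prints: .env.production
```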

For Team Collaboration

  1. Never commit actual .env files

  2. Document required variables in README

  3. Use strong passwords in production

  4. Rotate secrets regularly

For Production Deployment

  1. Use secret management systems (AWS Secrets Manager, HashiCorp Vault)

  2. Set environment variables via CI/CD

  3. Monitor environment variable changes

What You've Learnt

  • Proper environment separation with security in mind

  • Multi-stage Docker builds that reduce image sizes by 80%

  • Professional code organisation with schemas, models, and enums separated

  • Environment variable management using separate .env files

  • Docker Compose orchestration with proper health checks and dependencies

  • Hot reload for rapid development with volume mounts

  • Production-ready patterns you can actually use in real projects

  • Security best practices for handling secrets and credentials

What's Next?

This Docker setup gets you up and running quickly with proper environment management, but there's still a missing piece for truly professional deployments: automated CI/CD pipelines.

In the next post, we'll level this up to a complete production workflow:

  • GitHub Actions for automated testing and building

  • GitHub Container Registry (GHCR) for storing your images

  • Environment-specific deployments with proper secret management

  • Automated deployments that pull pre-built images

  • Zero-downtime deployments with proper rollback strategies

You'll learn how to push code to GitHub and have it automatically build, test, and deploy to different environments with proper secret management (no manual building on servers, no SSH-ing into machines). Just proper DevOps.

Stay tuned for "CI/CD for Full-Stack Apps: GitHub Actions + GHCR + Multi-Environment Deployments"

Final Thoughts

This setup follows industry best practices for environment management and security. You're not learning toy examples; you're learning patterns that actual production applications use.

The separation of environment files might seem like extra work initially, but it pays dividends when you need to:

  • Deploy to different environments

  • Share code with teammates

  • Manage production secrets

  • Debug environment-specific issues

  • Scale your application

No more "works on my machine". No more deployment surprises. No more accidentally exposing secrets. It just works, everywhere, securely.

You now have a solid, secure foundation. In the next tutorial, we'll automate the entire deployment process end-to-end with proper secret management.

Now go build something brilliant with it.

More from this blog