Docker

Our server stack is fully containerized using Docker, enabling consistent, secure, and isolated deployments of all services.


What is Docker?

Think of Docker like a box that holds a mini computer.

Each Docker “box” (called a container) has everything it needs to run — the app, the tools it depends on, the settings — all bundled inside.

You can run lots of these containers side-by-side, and they won’t affect each other.


Why We Use Docker

  • Consistency: The same Docker container works the same way on every machine — dev, staging, production.
  • Isolation: One app crashing won't break others — they run in separate containers.
  • Simplicity: You don’t have to install tools like MySQL or Redis manually — they're already in containers, ready to go.
  • Portability: Images can be copied, moved, or upgraded easily; replacing a service is usually just pulling a new image and restarting its container.

How Docker is Used in Our Infrastructure

Every service in our infrastructure — from databases to monitoring tools — runs as a Docker container.

Examples:

  • grafana, loki, promtail: Monitoring stack
  • mysql, redis: Databases
  • jenkins, jenkins-agent, nexus: CI/CD and artifact storage
  • keycloak, gerrit, phpmyadmin: Internal tools
  • nginx: Reverse proxy to route external traffic to containers

Docker Compose & Dockerfile

Each major system in our infrastructure typically includes:

  • A Dockerfile to define how the image is built
  • A docker-compose.yml file to define how the container runs

Some services use both files together, while others may only require one — depending on whether we build the image ourselves or use an official one.
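As a sketch of the difference (service names here are purely illustrative, not our actual config), a service we build ourselves points at a Dockerfile via build:, while a service based on an official image just names it with image::

services:
  api:              # built from a Dockerfile in this directory (illustrative name)
    build: .
  redis:
    image: redis:7  # official image pulled from a registry; no Dockerfile needed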


Dockerfile

A Dockerfile is like a recipe for building a container image.

It tells Docker:

  • What base image to use (e.g. Ubuntu, Node.js, MySQL)
  • What files to copy into the image
  • What packages to install
  • What command to run when the container starts

Example:

FROM node:22-alpine

WORKDIR /app
COPY . .
RUN npm install

CMD ["npm", "start"]

This builds a lightweight Node.js container with your app inside.
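A common refinement, shown here as an illustrative variant rather than our actual file, is to copy package.json before the rest of the source. Docker can then cache the npm install layer and skip reinstalling dependencies when only application code changes:

FROM node:22-alpine

WORKDIR /app

# Copy dependency manifests first so this layer is cached
COPY package*.json ./
RUN npm install

# Now copy the application source
COPY . .

CMD ["npm", "start"]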


docker-compose.yml

While the Dockerfile builds the image, docker-compose.yml defines how to run the containers — including their:

  • Ports
  • Environment variables
  • Volumes
  • Dependencies
  • Restart policy

Example snippet:

services:
  api:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example

This launches both the app and a MySQL database in linked containers.


Together, Dockerfile and docker-compose.yml give us full control over building and running services across all environments.

Common Commands:

docker compose up -d                 # Start all services in the background
docker ps                            # List running containers
docker compose down                  # Stop and remove all services
docker compose restart <service>     # Restart a specific service
docker compose logs <service>        # View logs for a specific service

File Structure

Here’s a simplified layout of our Docker config on the server:

.
├── crowdsec-monitoring
│   ├── docker-binding
│   └── docker-compose.yml
├── database
│   ├── docker-binding
│   └── docker-compose.yml
├── elk-logging
│   ├── docker-binding
│   └── docker-compose.yml
├── gerrit-docker
│   ├── docker-compose.yml
│   └── plugins
├── grafana-docker
│   ├── docker-binding
│   └── docker-compose.yml
├── jenkins-docker
│   ├── agent
│   └── docker-compose.yml
├── keycloak-docker
│   ├── docker-compose.yml
│   └── keycloak_data
├── nexus-docker
│   ├── data
│   └── docker-compose.yml
├── redis-docker
│   ├── docker-binding
│   └── docker-compose.yml

Server Reboots & Docker Behavior

When the server is restarted, Docker attempts to automatically restart containers that were running — if they are configured with a restart policy.

Auto-Restart Policy

All our services use:

restart: unless-stopped

This means:

  • Services will auto-start after a reboot
  • If a service doesn’t restart, it can be manually started without re-creating it
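Applied in a compose file, the policy sits under each service. A minimal sketch (service and image names illustrative):

services:
  api:
    image: my-app:latest
    restart: unless-stopped   # auto-start after reboots; stays down only if stopped manually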

What to Do After a Reboot

  1. Check running containers:

     docker ps

  2. If a container is missing, check all (including stopped) containers:

     docker ps -a

  3. Restart any service that isn’t running:

     docker restart <container-name>

This is safer than docker compose up -d, which may recreate containers (and reattach volumes) if the compose config has changed since they were created.


Pro tip: If a container keeps failing to start, use docker logs <container-name> to see why.


Active Docker Services

Below is a current snapshot of the active containers running on our infrastructure:

witty-api-node
witty-web-nuxt-v3
witty-docs
loki
prometheus
promtail
witty-api-laravel
kibana
logstash
elasticsearch
phpmyadmin
mysql
jenkins-agent
redisinsight-product
redisinsight-staging
redis
grafana_staging
grafana_production
buildjenkins
keycloak
nexus
gerrit

When you run docker ps, you’ll see a similar list of active containers — possibly more or fewer, depending on when this documentation was last updated.

To view only container names:

docker ps --format '{{.Names}}'

Port Mappings in docker ps

Just before the container NAMES column in docker ps, you’ll also see the PORTS column. This shows how container ports are exposed to your machine or the outside world.

There are two common formats you’ll encounter:

1. Public / Global Binding

0.0.0.0:1234->5000/tcp, [::]:1234->5000/tcp
  • This means the port is open to all IP addresses (IPv4 and IPv6)
  • Anyone from the internet (if firewall allows) can reach this container via port 1234
  • Often used for services exposed externally via Nginx

2. Localhost-Only Binding

127.0.0.1:1234->5601/tcp
  • This port is only accessible locally from the server itself (localhost)
  • Not exposed to the public or external networks
  • Useful for sensitive tools (e.g. database UIs, internal-only APIs)

Best practice: keep most ports bound to 127.0.0.1 and expose externally only through Nginx reverse proxy + SSL.
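In docker-compose.yml, the difference is just the host address in the port mapping. A sketch with illustrative services and ports:

services:
  kibana:
    ports:
      - "127.0.0.1:5601:5601"   # localhost-only; reach it via Nginx or an SSH tunnel
  web:
    ports:
      - "3000:3000"             # binds 0.0.0.0; reachable externally if the firewall allows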


What Developers Need to Know

While Docker powers everything on the server, you do not need to run Docker locally to work on Witty projects.

Instead:

  • Use .env files for local configuration
  • Connect to remote databases via SSH tunnel
  • Rely on server-hosted services (e.g., Redis, MySQL, Jenkins)

For more details, check out the specific service documentation in this folder.