Docker Networking Basics: Bridge, Host, and Overlay
Most people learn Docker by writing a Dockerfile, running docker build, and calling it a day. Networking gets ignored until something breaks – a container can’t reach another container, or a service isn’t accessible from the host. Then you’re pasting error messages into ChatGPT at 11 PM trying to figure out why localhost doesn’t mean what you think it means inside a container.
Docker ships with three network drivers that cover the vast majority of use cases: bridge, host, and overlay. Each solves a different problem, and picking the wrong one usually means you’re fighting Docker instead of using it.
Bridge: The Default You’ll Outgrow
When you run docker run without specifying a network, your container lands on the default bridge network. Docker creates a virtual bridge interface (docker0) on the host, assigns each container an IP from a private subnet (typically 172.17.0.0/16), and routes traffic through NAT.
# This container is on the default bridge
docker run -d --name web nginx
# Check it
docker network inspect bridge
The default bridge works, but it has a real limitation: no automatic DNS resolution between containers. If you spin up two containers on the default bridge, they can reach each other by IP address, but not by name. You’d have to use --link (legacy – its environment variables were disabled by default in Engine v29 and will be removed in v30) or hardcode IPs (fragile).
The fix is a user-defined bridge network:
docker network create my-app
docker run -d --name api --network my-app node-api
docker run -d --name db --network my-app postgres
Now api can reach db by hostname. Docker’s embedded DNS server handles resolution automatically. This is what you actually want for local development and single-host deployments.
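You can watch the embedded DNS do its job from inside a container. A quick check, assuming the api container from above is based on an image that ships getent (most glibc- and musl-based images do):

```shell
# Docker's embedded DNS listens on 127.0.0.11 inside every container
# on a user-defined network; getent resolves "db" through it
docker exec api getent hosts db
```

The output is db’s current IP on the my-app network – which is the point: the name stays stable even when the IP changes across restarts.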
When to use bridge:
- Local development with multiple containers
- Single-host deployments where you control the full stack
- Any time you need container-to-container communication on one machine
What bridge won’t do:
- Cross-host networking. Bridge networks are scoped to a single Docker host. If your containers need to talk across machines, you need overlay.
A detail worth knowing
User-defined bridges also give you better isolation. Containers on different user-defined bridges can’t communicate with each other unless you explicitly connect them to both networks. On the default bridge, every container can reach every other container – not great if you’re running unrelated services on the same host.
# Connect a container to a second network
docker network connect frontend api
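The isolation claim is easy to verify with a throwaway pair of networks. A sketch – the alpine image and the container names here are arbitrary:

```shell
docker network create net-a
docker network create net-b
docker run -d --name one --network net-a alpine sleep 300
docker run -d --name two --network net-b alpine sleep 300

# Different user-defined bridges: this ping fails ("bad address")
docker exec one ping -c 1 two || true

# Connect "one" to net-b as well, and the same ping succeeds
docker network connect net-b one
docker exec one ping -c 1 two
```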
Host: Skip the Abstraction
Host networking removes the network isolation between the container and the Docker host entirely. The container shares the host’s network namespace – same interfaces, same IP, same ports.
docker run -d --network host nginx
No port mapping needed. Nginx binds to port 80 on the host directly. No NAT overhead, no docker0 bridge, no virtual ethernet pairs.
This sounds great until you realize the tradeoff: you lose port isolation. Two containers can’t both bind to port 80. You’re back to managing port conflicts manually, which is exactly the kind of problem containers were supposed to solve.
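The port conflict is easy to demonstrate on a Linux host (a sketch – container names are arbitrary):

```shell
# First nginx grabs port 80 on the host directly – no -p flag needed
docker run -d --name web1 --network host nginx
curl -s -o /dev/null -w "%{http_code}\n" http://localhost

# A second instance starts, but nginx inside it exits immediately:
# port 80 is already taken in the shared network namespace
docker run -d --name web2 --network host nginx
docker logs web2   # typically: bind() to 0.0.0.0:80 failed (Address already in use)
```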
When host networking makes sense:
- Performance-sensitive workloads where NAT overhead matters. Think high-throughput proxies, load balancers, or monitoring agents that need to see all host traffic.
- Containers that need to bind to a large range of ports (like an FTP server with passive mode).
- Quick debugging – sometimes you just want to run something without thinking about port mappings.
When to avoid it:
- Multi-tenant environments, or anywhere you’re running untrusted containers. No network isolation means a compromised container has direct access to the host network.
- Anywhere you need to run multiple instances of the same service on one host.
Host networking on Docker Desktop
This used to be Linux-only, and you’ll still find articles saying so. That changed in Docker Desktop 4.34 (September 2024) – host networking now works on Mac and Windows too. It’s an opt-in feature: go to Settings > Resources > Network and enable it, then restart Docker Desktop.
It works in both directions: containers can reach host services on localhost, and host processes can reach container services on localhost. TCP and UDP both work.
The catch is that Docker Desktop’s implementation operates at layer 4 only – protocols below TCP/UDP aren’t supported. It also won’t work if you have Enhanced Container Isolation (ECI) enabled, since network isolation and host network access are contradictory. And it’s Linux containers only, no Windows containers.
Overlay: Containers Across Hosts
Overlay networks solve the multi-host problem. They create a distributed network that spans multiple Docker hosts, letting containers on different machines communicate as if they were on the same LAN.
Under the hood, overlay uses VXLAN to encapsulate container traffic in UDP packets and route them between hosts. Each host gets a VTEP (VXLAN Tunnel Endpoint) that handles the encapsulation and decapsulation transparently.
# Initialize Swarm (required for overlay)
docker swarm init
# Create an overlay network
docker network create -d overlay my-overlay
# Deploy a service on the overlay
docker service create --name web --network my-overlay --replicas 3 nginx
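For this to work across real hosts, a few ports must be open between the nodes: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for the VXLAN data plane. A firewalld sketch for a RHEL-family host (adapt to ufw or security groups as needed):

```shell
sudo firewall-cmd --add-port=2377/tcp --permanent   # Swarm cluster management
sudo firewall-cmd --add-port=7946/tcp --permanent   # node-to-node communication
sudo firewall-cmd --add-port=7946/udp --permanent
sudo firewall-cmd --add-port=4789/udp --permanent   # VXLAN data plane
sudo firewall-cmd --reload
```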
Overlay networks require Docker Swarm or another orchestrator. You can’t just docker run with an overlay network on standalone hosts – there needs to be a control plane coordinating the network state across nodes. That said, you can attach standalone containers to overlay networks (not just Swarm services) by creating the network with the --attachable flag:
docker network create -d overlay --attachable my-overlay
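With --attachable in place, a plain docker run on any Swarm node can join the network – handy for one-off debugging containers sitting next to your services:

```shell
# Standalone container on the overlay – no docker service required
docker run -d --name debug-box --network my-overlay alpine sleep infinity

# Assuming the "web" service from earlier is on my-overlay,
# it's reachable by name from the standalone container
docker exec debug-box ping -c 1 web
```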
When to use overlay:
- Multi-host deployments where services on different machines need to communicate
- Docker Swarm services
- When you want to keep the networking simple while scaling horizontally
The reality check:
If you’re reaching for overlay networks, you’re probably at a scale where Kubernetes is worth evaluating. Swarm’s overlay networking works and is simpler to set up, but Kubernetes has won the orchestration war for production workloads. Docker’s own documentation now lists SwarmKit under products where development has slowed in favor of Kubernetes, though Mirantis has committed to supporting Swarm through at least 2030. For smaller teams or simpler architectures, Swarm + overlay still gets the job done – but go in with eyes open about its trajectory.
Overlay and encryption
By default, overlay traffic between hosts is not encrypted. The VXLAN packets travel in plain UDP. If your hosts are on an untrusted network (or really, even if they aren’t), enable encryption:
docker network create -d overlay --opt encrypted my-secure-overlay
This adds IPsec encryption between nodes. It costs some CPU, but the alternative is plaintext application traffic crossing your network.
One thing to be aware of: control plane traffic (swarm management messages) is always encrypted regardless of this flag. The --opt encrypted flag only affects application data.
There’s also a known issue on RHEL, CentOS, and Rocky Linux where the xt_u32 kernel module may not be installed (it was moved to kernel-modules-extra in RHEL 8.3). Without it, encrypted overlay networks silently fall back to transmitting unencrypted data – no error, no warning. If you’re running overlay networks on these distros, verify xt_u32 is loaded before assuming your traffic is encrypted.
Choosing the Right Driver
Skip the decision matrix. Here’s how it works in practice:
Developing locally or deploying to a single server? Use a user-defined bridge. Don’t bother with the default bridge – the DNS resolution alone is worth the 10 seconds it takes to create a custom one.
Need raw performance or host-level network access? Use host. Accept the port conflict tradeoff.
Running containers across multiple hosts? Use overlay with Swarm, or move to Kubernetes and let its CNI plugins handle networking.
Most Docker users will spend 90% of their time on bridge networks. That’s fine. The others exist for when bridge isn’t enough – and now you know when that is.
What This Article Didn’t Cover
Docker also has macvlan (assigns a MAC address to each container, making it appear as a physical device on the network) and none (disables networking entirely). Both are niche. Macvlan is useful when you need containers to be directly addressable on a physical network – common in IoT and legacy integration scenarios. None is for containers that genuinely shouldn’t have network access.
There’s also the question of Docker Compose networking, which creates a user-defined bridge per project automatically. If you’re using Compose (and you probably should be for local dev), you get sensible defaults without thinking about it. Each service is reachable by its service name within the project’s network.
# docker-compose.yml
services:
  api:
    image: node-api
  db:
    image: postgres
# api can reach db at hostname "db" – no extra config needed
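Bringing that up and checking the name resolution looks like this – assuming the api image has a shell and getent available:

```shell
docker compose up -d
docker compose exec api getent hosts db   # resolves to db's IP on the project network
docker compose down
```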
That’s the part most people actually need to know.