Docker Networking: A Deep Dive into Overlay Networks
After spending weeks debugging container networking issues in production, I decided to write down everything I learned about Docker's overlay network driver. This post assumes you're already familiar with basic Docker concepts.
How Overlay Networks Work
Docker Swarm uses VXLAN (Virtual Extensible LAN) to create overlay networks. Each container gets a virtual Ethernet (veth) pair connected to a bridge in a dedicated network namespace, and the VXLAN encapsulation happens in the kernel via the vxlan module, not in user space; the overlay driver's job is to program the kernel's forwarding and neighbor tables so encapsulated frames reach the right host.
# Create an overlay network (overlay networks require swarm mode,
# so run this on a manager node)
docker network create --driver overlay --subnet 10.0.0.0/24 mynet
# Inspect the network's address allocation
docker network inspect mynet | jq '.[0].IPAM.Config'
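You can see the VXLAN plumbing yourself. On a node running a task on the network, Docker keeps the overlay's bridge and VXLAN device in a hidden network namespace under /var/run/docker/netns. The namespace name below is illustrative; substitute one from your own listing:

```shell
# List Docker's hidden network namespaces; overlay namespaces
# are typically named "1-<short network id>"
ls /var/run/docker/netns

# Enter the overlay's namespace and show its VXLAN device in detail
# (-d prints driver-specific attributes, including the VNI);
# replace 1-abcd1234 with a name from the listing above
nsenter --net=/var/run/docker/netns/1-abcd1234 ip -d link show type vxlan
```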
The key insight is that Docker manages the VXLAN Network Identifier (VNI) automatically. Each overlay network gets a unique VNI, and the control plane distributes the mapping between container IPs and host IPs across all swarm nodes.
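If you need a specific VNI rather than an auto-assigned one (for example, to avoid a clash with existing VXLAN segments on your underlay), the overlay driver accepts it as an option at creation time. The network name and VNI below are placeholders:

```shell
# Pin the VNI explicitly instead of letting Docker pick one
docker network create --driver overlay \
  --opt com.docker.network.driver.overlay.vxlanid_list=4097 \
  mynet-pinned

# Confirm the option was recorded on the network
docker network inspect mynet-pinned --format '{{.Options}}'
```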
Service Discovery
Docker runs an embedded DNS server that is reachable at 127.0.0.11 inside each container's network namespace. When you resolve a service name, it returns the virtual IP (VIP) of that service by default, and IPVS load-balances connections to that VIP across the service's tasks at the kernel level. Services created with --endpoint-mode dnsrr skip the VIP and return task IPs directly in DNS round-robin fashion.
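A quick way to see this in action, assuming a swarm and an attachable overlay network (the service name, network name, and images below are illustrative; standalone containers can only join an overlay created with --attachable):

```shell
# Create a replicated service on the overlay network
docker service create --name web --network mynet --replicas 3 nginx

# Run a throwaway container on the same network to query the
# embedded DNS server at 127.0.0.11
docker run --rm --network mynet nicolaka/netshoot nslookup web        # the service VIP
docker run --rm --network mynet nicolaka/netshoot nslookup tasks.web  # individual task IPs
```

The tasks.&lt;service&gt; name is handy for clients that do their own load balancing and want to bypass the VIP.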
iptables Rules
Run iptables -t nat -L -n on any Docker host and you'll see the extensive NAT rules Docker manages. For custom firewall rules, the DOCKER-USER chain is the right place: it lives in the filter table (not nat), is evaluated before Docker's own forwarding rules, and Docker never flushes it.
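For example, to block traffic to published container ports from a given source range, insert a rule into DOCKER-USER rather than FORWARD, because Docker rewrites FORWARD itself. The range below is the documentation-reserved 203.0.113.0/24, used purely as a placeholder:

```shell
# Drop forwarded traffic from the placeholder range before
# Docker's own rules accept it
iptables -I DOCKER-USER -s 203.0.113.0/24 -j DROP

# Verify the rule landed where expected
iptables -L DOCKER-USER -n --line-numbers
```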
Common Pitfalls
- MTU mismatches between overlay network and underlying physical network
- Gossip protocol saturation in large clusters (50+ nodes)
- DNS resolution failures when the embedded DNS server is overloaded
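The MTU issue is the most common of the three. VXLAN encapsulation adds roughly 50 bytes of overhead, so if the physical network uses a 1500-byte MTU, the overlay's interfaces should use 1450 or less, or large packets get dropped or fragmented. A way to check and fix it (the value 1450 assumes a standard 1500-byte underlay, and the check requires an attachable network):

```shell
# Check the MTU a container actually sees on the overlay
docker run --rm --network mynet alpine ip link show eth0

# Create an overlay with an explicitly lowered MTU to leave
# headroom for the VXLAN headers
docker network create --driver overlay \
  --opt com.docker.network.driver.mtu=1450 \
  mynet-lowmtu
```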