- Initial Setup - Running Apache Container
- Try to Connect Through Container IP (From Host)
- Two Containers Under Default Bridge Network
- Two Containers Under Same Custom Network
- Two Containers Under Different Custom Networks
- Gateway Container - Bridging Two Networks
- Adding Static Routes (First Attempt - Permission Denied)
- Recreate Containers with NET_ADMIN Capability
- Configure Static Routes for Inter-Network Communication
- Summary and Key Learnings
graph TB
subgraph "Host Machine (192.168.0.105)"
H[Host Network<br/>192.168.0.0/24]
D[Docker Engine]
end
subgraph "Default Bridge Network (172.17.0.0/16)"
BR[Bridge Gateway<br/>172.17.0.1]
end
subgraph "Custom Networks"
subgraph "Backend Network (10.0.0.0/24)"
BG[Gateway<br/>10.0.0.1]
end
subgraph "Frontend Network (10.0.1.0/24)"
FG[Gateway<br/>10.0.1.1]
end
end
H -->|manages| D
D -->|creates| BR
D -->|creates| BG
D -->|creates| FG
style H fill:#e1f5ff
style D fill:#fff4e1
style BR fill:#ffe1e1
style BG fill:#e1ffe1
style FG fill:#f0e1ff
docker run -p 8080:80 -d httpd
curl http://localhost:8080
docker inspect <container_id>
Network Configuration:
"Networks": {
"bridge": {
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"MacAddress": "fe:41:b5:ee:eb:71"
}
}
graph LR
subgraph "Host Machine"
Host[Host<br/>192.168.0.105]
end
subgraph "Docker Bridge Network (172.17.0.0/16)"
Bridge[Bridge Gateway<br/>172.17.0.1]
C1[Container<br/>172.17.0.2:80]
end
Host -->|"curl 172.17.0.2 ❌<br/>Connection Failed"| C1
Host -->|"curl localhost:8080 ✅<br/>Port Mapping Works"| Bridge
Bridge --> C1
style Host fill:#e1f5ff
style Bridge fill:#ffe1e1
style C1 fill:#fff4e1
curl 172.17.0.2
# Result: Failed to connect to 172.17.0.2 port 80 after 3092 ms
curl 172.17.0.2:8080
# Result: Failed to connect to 172.17.0.2 port 8080 after 3105 ms
Why it failed:
- Container IP (172.17.0.2) is only accessible within Docker's bridge network
- Host machine cannot directly access container IPs without port mapping
- The container exposes port 80 internally, but it's mapped to host port 8080
- Correct way:
curl http://localhost:8080 (uses port mapping)
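The mismatch is easy to sanity-check with Python's `ipaddress` module (a sketch using the addresses from this setup; note that on Docker Desktop the bridge subnet lives inside a VM, so the host has no direct path to it):

```python
from ipaddress import ip_address, ip_network

host_lan = ip_network("192.168.0.0/24")      # host's physical network
docker_bridge = ip_network("172.17.0.0/16")  # Docker's default bridge subnet

container_ip = ip_address("172.17.0.2")

# The container's IP exists only inside the bridge subnet; the host's LAN
# has no route to it, so `curl 172.17.0.2` times out while the published
# port on localhost works.
print(container_ip in docker_bridge)  # True
print(container_ip in host_lan)       # False
```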
docker build . -t nhttpd
docker ps
docker stop f1
docker rm f1
graph TB
subgraph "Docker Default Bridge Network (172.17.0.0/16)"
Bridge[Bridge Gateway<br/>172.17.0.1]
S1[S1 Container<br/>172.17.0.2<br/>hostname: 581a662b95e1]
S2[S2 Container<br/>172.17.0.3<br/>hostname: 0ebb039968f1]
end
DNS[Docker Desktop DNS<br/>192.168.65.7]
Bridge --> S1
Bridge --> S2
S1 <-->|"✅ ping 172.17.0.3<br/>✅ curl 172.17.0.3"| S2
S1 -->|"❌ ping s2<br/>No DNS resolution"| S2
S1 -->|"DNS queries"| DNS
S2 -->|"DNS queries"| DNS
style Bridge fill:#ffe1e1
style S1 fill:#e1ffe1
style S2 fill:#e1f5ff
style DNS fill:#fff4e1
docker run --name s1 -d nhttpd
docker run --name s2 -d nhttpd
S1 Network Configuration:
"Networks": {
"bridge": {
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"MacAddress": "0a:f4:4e:e5:8f:1e"
}
}
S2 Network Configuration:
"Networks": {
"bridge": {
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"MacAddress": "86:fa:f9:fe:32:a9"
}
}
docker network inspect bridge
Connected Containers:
"Containers": {
"s2": {
"IPv4Address": "172.17.0.3/16",
"MacAddress": "86:fa:f9:fe:32:a9"
},
"s1": {
"IPv4Address": "172.17.0.2/16",
"MacAddress": "0a:f4:4e:e5:8f:1e"
}
}
Enter S1 Container:
docker exec -it s1 bash
Internet Connectivity Test:
ping google.com
# Result: 100% packet loss (35 packets transmitted, 0 received)
Why it failed:
- The host machine likely has network restrictions or firewall rules
- ICMP packets are being blocked
- DNS resolution works (resolves to 142.251.223.142) but packets don't reach destination
DNS Resolution:
hostname
# Output: 581a662b95e1
hostname -I
# Output: 172.17.0.2
nslookup google.com
# Server: 192.168.65.7
# Address: 142.250.193.78
DNS Lookup for Container Hostname:
nslookup 581a662b95e1
# Result: server can't find 581a662b95e1: NXDOMAIN
Why it failed:
- Default bridge network doesn't provide DNS resolution for container names/hostnames
- DNS server (192.168.65.7) is the Docker Desktop DNS
- Only custom networks support automatic container name resolution
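This behavior can be modeled as a per-network lookup table: Docker's embedded DNS keeps name records scoped to each user-defined network, and the default bridge simply publishes none. A minimal Python sketch (an illustrative model, not Docker's implementation; the names and addresses are the ones used in this walkthrough):

```python
# Per-network name records: the default bridge has no entries, while a
# user-defined network gets a record for every attached container.
records = {
    "bridge": {},                                   # default bridge: no names
    "custom": {"s1": "10.0.0.2", "s2": "10.0.0.3"},
}

def resolve(network: str, name: str):
    """Return the IP for `name`, but only within `network` (None = NXDOMAIN)."""
    return records.get(network, {}).get(name)

print(resolve("bridge", "s2"))  # None -> NXDOMAIN, as observed above
print(resolve("custom", "s2"))  # 10.0.0.3 -> works on a custom network
```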
Ping Tests (IP-based communication):
ping 172.17.0.2 # Self
# Result: 4 packets transmitted, 4 received, 0% packet loss
ping 172.17.0.3 # S2
# Result: 3 packets transmitted, 3 received, 0% packet loss
ping 172.17.0.1 # Gateway
# Result: 5 packets transmitted, 5 received, 0% packet loss
Success: Containers on the default bridge network can communicate via IP addresses.
HTTP Communication Tests:
curl 172.17.0.2 # S1
# Output: <title>It works! Apache httpd</title>
curl 172.17.0.3 # S2
# Output: <title>It works! Apache httpd</title>
Success: HTTP traffic works between containers on the default bridge network.
Modify S1 Index Page:
cd htdocs/
vim index.html
# Changed title to: "It is S1. Apache httpd"
curl 172.17.0.2
# Output: <title>It is S1. Apache httpd</title>
Modify S2 Index Page:
docker exec -it s2 bash
cd htdocs/
vim index.html
# Changed title to: "It is s2. Apache httpd"
hostname -I
# Output: 172.17.0.3
curl 172.17.0.3
# Output: <title>It is s2. Apache httpd</title>
Summary - Default Bridge Network:
- ✅ Containers can communicate via IP addresses
- ✅ HTTP/TCP services work
- ❌ No DNS resolution by container name
- ❌ No DNS resolution by hostname
- DNS Server: 192.168.65.7 (Docker Desktop DNS)
graph TB
subgraph "Backend Network (10.0.0.0/24)"
BG[Backend Gateway<br/>10.0.0.1]
S1B[S1 Container<br/>10.0.0.2<br/>DNSNames: s1, 581a662b95e1]
S2B[S2 Container<br/>10.0.0.3<br/>DNSNames: s2, 0ebb039968f1]
EDNS[Embedded DNS<br/>127.0.0.11]
end
BG --> S1B
BG --> S2B
S1B <-->|"✅ ping s2<br/>✅ ping 10.0.0.3<br/>✅ curl s2"| S2B
S1B -->|"DNS queries"| EDNS
S2B -->|"DNS queries"| EDNS
EDNS -->|"Resolves within<br/>same network"| S1B
EDNS -->|"Resolves within<br/>same network"| S2B
style BG fill:#e1ffe1
style S1B fill:#e1f5ff
style S2B fill:#ffe1ff
style EDNS fill:#fff4e1
docker network create backend --subnet 10.0.0.0/24
docker network inspect backend
Network Configuration:
{
"Name": "backend",
"Driver": "bridge",
"IPAM": {
"Config": [
{
"Subnet": "10.0.0.0/24"
}
]
},
"Internal": false,
"Containers": {}
}
Note: "Internal": true (set with the --internal flag of docker network create) restricts containers in this network to internal communication only, with no external internet access.
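The addresses that show up later in `docker inspect` follow directly from this subnet. In this walkthrough Docker's IPAM gave the gateway the first usable address and the first container the next one, which can be checked with `ipaddress` (observed behavior here, not a guaranteed allocation contract):

```python
from ipaddress import ip_network

subnet = ip_network("10.0.0.0/24")   # the backend subnet created above
hosts = list(subnet.hosts())         # usable addresses 10.0.0.1 .. 10.0.0.254

gateway = hosts[0]          # 10.0.0.1 -> "Gateway" in the inspect output
first_container = hosts[1]  # 10.0.0.2 -> s1's IPAddress after connecting

print(gateway, first_container)  # 10.0.0.1 10.0.0.2
```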
docker network connect backend s1
docker network connect backend s2
S1 Multi-Network Configuration:
"Networks": {
"backend": {
"Gateway": "10.0.0.1",
"IPAddress": "10.0.0.2",
"IPPrefixLen": 24,
"DNSNames": ["s1", "581a662b95e1"]
},
"bridge": {
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16
}
}
Key observations:
- Container has two IP addresses (one per network)
- Custom network provides DNS names: ["s1", "581a662b95e1"]
- Default bridge network has no DNS names
docker network disconnect bridge s1
Enter S1 Container:
docker exec -it s1 bash
DNS Lookups:
nslookup s1
# Server: 127.0.0.11 (local DNS server, not 192.168.65.7)
# Address: 10.0.0.2
nslookup s2
# Server: 127.0.0.11
# Address: 10.0.0.3
nslookup 581a662b95e1
# Server: 127.0.0.11
# Address: 10.0.0.2
Key Questions:
- Is 127.0.0.11 a DNS server? Yes, it's Docker's embedded DNS server
- Is it a local DNS server? Yes, it's a container-local DNS resolver that Docker provides for service discovery
Success: Custom networks provide automatic DNS resolution by container name and hostname!
Traceroute Tests:
traceroute s1
# 1 10.0.0.2 0.005ms 0.002ms 0.001ms
traceroute s2
# 1 10.0.0.3 0.014ms 0.002ms 0.002ms
Success: Direct routing within the same network (single hop).
Check Host Routing Table:
ip route
Output:
default via 192.168.0.1 dev wlp2s0 proto dhcp src 192.168.0.105 metric 600
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-3207027111f6 proto kernel scope link src 172.18.0.1 linkdown
192.168.0.0/24 dev wlp2s0 proto kernel scope link src 192.168.0.105 metric 600
traceroute 192.168.0.1
# 1 192.168.0.1 0.005ms 0.001ms 0.001ms
Question: Why doesn't it use the default gateway of s1 (10.0.0.1)?
Answer:
ip route get 192.168.0.1
# Output: 192.168.0.1 via 10.0.0.1 dev eth1 src 10.0.0.2 uid 0
- It does use the container's default gateway (10.0.0.1)
- The traceroute output is misleading
- ip route get confirms traffic goes via 10.0.0.1 first
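What `ip route get` performs is a longest-prefix match over the routing table. A small Python sketch of that lookup, using s1's two routes (the default via 10.0.0.1, plus the on-link 10.0.0.0/24):

```python
from ipaddress import ip_address, ip_network

# s1's routing table as (destination, next hop); None = directly connected
routes = [
    (ip_network("0.0.0.0/0"), "10.0.0.1"),  # default via backend gateway
    (ip_network("10.0.0.0/24"), None),      # on-link subnet
]

def lookup(dest: str):
    """Longest-prefix match: the most specific matching route wins."""
    d = ip_address(dest)
    matches = [(net, via) for net, via in routes if d in net]
    return max(matches, key=lambda r: r[0].prefixlen)

print(lookup("192.168.0.1")[1])  # 10.0.0.1 -> leaves via the default gateway
print(lookup("10.0.0.3")[1])     # None -> delivered on-link, no gateway
```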
Summary - Same Custom Network:
- ✅ Containers can communicate via IP addresses
- ✅ Containers can communicate via container names
- ✅ Containers can communicate via hostnames
- ✅ DNS resolution works automatically
- DNS Server: 127.0.0.11 (Docker embedded DNS)
- Container still has internet access
graph TB
subgraph "Backend Network (10.0.0.0/24)"
BG[Backend Gateway<br/>10.0.0.1]
S1[S1 Container<br/>10.0.0.2]
BDNS[Embedded DNS<br/>127.0.0.11]
end
subgraph "Frontend Network (10.0.1.0/24)"
FG[Frontend Gateway<br/>10.0.1.1]
S2[S2 Container<br/>10.0.1.2]
FDNS[Embedded DNS<br/>127.0.0.11]
end
BG --> S1
FG --> S2
S1 -.->|"❌ ping s2<br/>DNS not found"| S2
S1 -.->|"❌ ping 10.0.1.2<br/>No route"| S2
S1 -->|"Only resolves<br/>backend network"| BDNS
S2 -->|"Only resolves<br/>frontend network"| FDNS
style BG fill:#e1ffe1
style S1 fill:#e1f5ff
style FG fill:#ffe1ff
style S2 fill:#fff4e1
style BDNS fill:#ffebe1
style FDNS fill:#ffebe1
- Backend Network: 10.0.0.0/24 (S1)
- Frontend Network: 10.0.1.0/24 (S2)
docker network create frontend --subnet 10.0.1.0/24
docker network disconnect backend s2
docker network connect frontend s2
S1 (Backend only):
"Networks": {
"backend": {
"Gateway": "10.0.0.1",
"IPAddress": "10.0.0.2",
"IPPrefixLen": 24,
"DNSNames": ["s1", "581a662b95e1"]
}
}
S2 (Frontend only):
"Networks": {
"frontend": {
"Gateway": "10.0.1.1",
"IPAddress": "10.0.1.2",
"IPPrefixLen": 24,
"DNSNames": ["s2", "0ebb039968f1"]
}
}
From S1 Container:
docker exec -it s1 bash
ping s1
# Result: 4 packets transmitted, 4 received, 0% packet loss (10.0.0.2)
Success: S1 can reach itself.
ping s2
# Result: ping: s2: Name or service not known
Why it failed:
- S2 is not in the same network as S1
- Docker's embedded DNS (127.0.0.11) only resolves names within the same network
- No DNS record exists for S2 in backend network
ping 10.0.1.2
# Result: 32 packets transmitted, 0 received, 100% packet loss
Why it failed:
- No routing path between 10.0.0.0/24 (backend) and 10.0.1.0/24 (frontend)
- Networks are completely isolated by default
- Docker doesn't automatically route between custom networks
Summary - Different Custom Networks:
- ❌ Containers cannot communicate across different networks
- ❌ No DNS resolution for containers in other networks
- ❌ No routing between isolated networks
- Networks are completely isolated by default
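The isolation is easy to confirm arithmetically: the two subnets are disjoint, and since Docker installs no route between them, no packet can cross. A one-line check with `ipaddress`:

```python
from ipaddress import ip_network

backend = ip_network("10.0.0.0/24")
frontend = ip_network("10.0.1.0/24")

# Disjoint subnets plus no inter-network route = complete isolation,
# until a gateway container and static routes are added.
print(backend.overlaps(frontend))  # False
```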
graph TB
subgraph "Backend Network (10.0.0.0/24)"
BG[Backend Gateway<br/>10.0.0.1]
S1[S1 Container<br/>10.0.0.2]
GW1[Gateway Container<br/>eth1: 10.0.0.3]
end
subgraph "Frontend Network (10.0.1.0/24)"
FG[Frontend Gateway<br/>10.0.1.1]
S2[S2 Container<br/>10.0.1.2]
GW2[Gateway Container<br/>eth2: 10.0.1.3]
end
BG --> S1
BG --> GW1
FG --> S2
FG --> GW2
GW1 <-->|"Same container<br/>Multi-homed"| GW2
GW1 <-->|"✅ ping s1<br/>1 hop"| S1
GW2 <-->|"✅ ping s2<br/>1 hop"| S2
S1 -.->|"❌ ping 10.0.1.2<br/>No route yet"| S2
style BG fill:#e1ffe1
style S1 fill:#e1f5ff
style FG fill:#ffe1ff
style S2 fill:#fff4e1
style GW1 fill:#ffffcc
style GW2 fill:#ffffcc
docker run --name gw -d nhttpd
docker network disconnect bridge gw
docker network connect backend gw
docker network connect frontend gw
docker inspect gw
Gateway belongs to both networks:
"Networks": {
"backend": {
"Gateway": "10.0.0.1",
"IPAddress": "10.0.0.3",
"DNSNames": ["gw", "430ef1345f89"]
},
"frontend": {
"Gateway": "10.0.1.1",
"IPAddress": "10.0.1.3",
"DNSNames": ["gw", "430ef1345f89"]
}
}
Key observation: Gateway has two IPs (10.0.0.3 and 10.0.1.3).
Enter Gateway Container:
docker exec -it gw bash
ip route
# default via 10.0.0.1 dev eth1
# 10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.3
# 10.0.1.0/24 dev eth2 proto kernel scope link src 10.0.1.3
Ping Tests from Gateway:
ping s1
# Result: 4 packets transmitted, 4 received (10.0.0.2)
ping s2
# Result: 4 packets transmitted, 4 received (10.0.1.2)
Success: Gateway can reach both S1 and S2.
Traceroute from Gateway:
traceroute s1
# 1 10.0.0.2 0.017ms 0.003ms 0.003ms
traceroute s2
# 1 10.0.1.2 0.016ms 0.008ms 0.002ms
Success: Direct routing to both networks (single hop each).
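The gateway's forwarding decision is just a match of the destination against its connected subnets, one per interface, as listed in its `ip route` output. A sketch (the `egress` helper is hypothetical, for illustration):

```python
from ipaddress import ip_address, ip_network

# The gateway container's connected subnets, one per interface
connected = {
    "eth1": ip_network("10.0.0.0/24"),  # backend
    "eth2": ip_network("10.0.1.0/24"),  # frontend
}

def egress(dest: str) -> str:
    """Pick the interface whose subnet contains the destination."""
    d = ip_address(dest)
    for iface, net in connected.items():
        if d in net:
            return iface
    return "eth1"  # otherwise fall back to the default route via 10.0.0.1

print(egress("10.0.0.2"))  # eth1 -> reaches s1 directly
print(egress("10.0.1.2"))  # eth2 -> reaches s2 directly
```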
Enter S1 Container:
docker exec -it s1 bash
ping gw
# Result: 4 packets transmitted, 4 received (10.0.0.3)
Success: S1 can reach gateway.
ping s2
# Result: ping: s2: Name or service not known
Why it failed:
- DNS still doesn't work across networks
- S1's DNS only knows about backend network
ping 10.0.1.2
# Result: 3 packets transmitted, 0 received, 100% packet loss
Why it failed:
- S1 has no route to 10.0.1.0/24 network
- Even though gateway is connected to both networks, S1 doesn't know to route through it
- Need to add static route in S1
Summary - Gateway Without Static Routes:
- ✅ Gateway can reach both networks
- ❌ S1 and S2 still cannot communicate
- ❌ No automatic routing through gateway
- Static routes needed for inter-network communication
Attempt to add route in S1:
docker exec -it s1 bash
ip route add 10.0.1.0/24 via 10.0.0.3
# Result: RTNETLINK answers: Operation not permitted
Why it failed:
- Container lacks NET_ADMIN capability
- NET_ADMIN is required to modify routing tables
- Default Docker security restrictions prevent this
docker stop s1
docker rm s1
docker stop s2
docker rm s2
docker run --cap-add=NET_ADMIN --name s1 --network backend -d nhttpd
docker run --cap-add=NET_ADMIN --name s2 --network frontend -d nhttpd
What changed:
- --cap-add=NET_ADMIN grants permission to modify network configuration
- Containers can now add/modify routing rules
graph TB
subgraph "Backend Network (10.0.0.0/24)"
BG[Backend Gateway<br/>10.0.0.1]
S1[S1 Container<br/>10.0.0.2<br/><br/>Route:<br/>10.0.1.0/24 via 10.0.0.3]
GW1[Gateway Container<br/>eth1: 10.0.0.3]
end
subgraph "Frontend Network (10.0.1.0/24)"
FG[Frontend Gateway<br/>10.0.1.1]
S2[S2 Container<br/>10.0.1.2<br/><br/>Route:<br/>10.0.0.0/24 via 10.0.1.3]
GW2[Gateway Container<br/>eth2: 10.0.1.3]
end
BG --> S1
BG --> GW1
FG --> S2
FG --> GW2
GW1 <-->|"Same container<br/>Multi-homed"| GW2
S1 -->|"1. Packet to 10.0.1.2"| GW1
GW1 -->|"2. Forward"| GW2
GW2 -->|"3. Deliver"| S2
S2 -->|"4. Reply to 10.0.0.2"| GW2
GW2 -->|"5. Forward"| GW1
GW1 -->|"6. Deliver"| S1
S1 <-.->|"✅ ping 10.0.1.2<br/>TTL=63 (1 hop)<br/>❌ ping s2 (no DNS)"| S2
style BG fill:#e1ffe1
style S1 fill:#e1f5ff
style FG fill:#ffe1ff
style S2 fill:#fff4e1
style GW1 fill:#90EE90
style GW2 fill:#90EE90
Enter S1 Container:
docker exec -it s1 bash
ip route add 10.0.1.0/24 via 10.0.0.3
Success: Route added (no error this time).
Test Connectivity:
ping 10.0.1.2
# Result: 5 packets transmitted, 5 received, 0% packet loss
# ttl=63 indicates packet went through gateway
Success: S1 can now reach S2 by IP address!
Why ttl=63 instead of 64:
- Initial TTL is 64
- Each router hop decrements TTL by 1
- TTL=63 means packet passed through 1 router (the gateway container)
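The arithmetic is simple enough to state as code (assuming the common Linux initial TTL of 64):

```python
INITIAL_TTL = 64  # typical Linux default for outgoing packets

def ttl_after(router_hops: int) -> int:
    """Each router that forwards the packet decrements TTL by one."""
    return INITIAL_TTL - router_hops

print(ttl_after(0))  # 64 -> same-network ping, no router in the path
print(ttl_after(1))  # 63 -> one hop through the gateway container
```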
Try DNS Resolution:
ping s2
# Result: ping: s2: Name or service not known
Why it failed:
- DNS resolution still doesn't work across networks
- Docker's embedded DNS only resolves names within the same network
- Static routes only solve IP routing, not DNS
Enter S2 Container:
docker exec -it s2 bash
ip route
# default via 10.0.1.1 dev eth0
# 10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.2
Verify Gateway IP in S2's Network:
nslookup gw
# Server: 127.0.0.11
# Address: 10.0.1.3
Add Route to Backend Network:
ip route add 10.0.0.0/24 via 10.0.1.3
ip route
# default via 10.0.1.1 dev eth0
# 10.0.0.0/24 via 10.0.1.3 dev eth0
# 10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.2
Success: Route added to backend network via gateway.
Test Connectivity:
ping 10.0.0.2
# Result: 5 packets transmitted, 5 received, 0% packet loss
# ttl=63 indicates packet went through gateway
Success: S2 can now reach S1 by IP address through the gateway!
Summary - With Static Routes:
- ✅ S1 can reach S2 via IP address (through gateway)
- ✅ S2 can reach S1 via IP address (through gateway)
- ✅ Gateway acts as router between networks
- ❌ DNS resolution doesn't work across networks
- TTL=63 confirms routing through gateway (1 hop)
graph LR
subgraph "Communication Capabilities"
A[Default Bridge<br/>172.17.0.0/16]
B[Custom Network<br/>Same Network]
C[Custom Network<br/>Different Networks]
D[With Gateway<br/>+ Static Routes]
end
A -->|"✅ IP only<br/>❌ DNS"| A1[Communication]
B -->|"✅ IP + DNS<br/>✅ Name resolution"| B1[Communication]
C -->|"❌ Isolated<br/>❌ No routing"| C1[No Communication]
D -->|"✅ IP via gateway<br/>❌ DNS across networks"| D1[Limited Communication]
style A fill:#ffe1e1
style B fill:#e1ffe1
style C fill:#ffcccc
style D fill:#ffffcc
style A1 fill:#fff4e1
style B1 fill:#e1ffe1
style C1 fill:#ffcccc
style D1 fill:#ffffcc
graph LR
subgraph "Direct Communication (Same Network)"
S1A[S1<br/>TTL=64]
S2A[S2<br/>Receives TTL=64]
end
subgraph "Routed Communication (Via Gateway)"
S1B[S1<br/>TTL=64]
GW[Gateway<br/>TTL=64→63]
S2B[S2<br/>Receives TTL=63]
end
S1A -->|"Direct<br/>0 hops"| S2A
S1B -->|"Step 1"| GW
GW -->|"Step 2<br/>TTL-1"| S2B
style S1A fill:#e1f5ff
style S2A fill:#fff4e1
style S1B fill:#e1f5ff
style GW fill:#ffffcc
style S2B fill:#fff4e1
graph TB
subgraph "Default Bridge Network"
DB[DNS: 192.168.65.7<br/>Docker Desktop DNS]
DB -->|"❌ No container<br/>name resolution"| DBC[Containers]
end
subgraph "Custom Network A"
CA[DNS: 127.0.0.11<br/>Embedded DNS]
CA -->|"✅ Resolves names<br/>within Network A"| CAC[Containers in A]
end
subgraph "Custom Network B"
CB[DNS: 127.0.0.11<br/>Embedded DNS]
CB -->|"✅ Resolves names<br/>within Network B"| CBC[Containers in B]
end
CAC -.->|"❌ Cannot resolve<br/>names in Network B"| CBC
style DB fill:#ffe1e1
style CA fill:#e1ffe1
style CB fill:#ffe1ff
style DBC fill:#fff4e1
style CAC fill:#e1f5ff
style CBC fill:#fff4e1
| Network Type | DNS Resolution | Container Communication | DNS Server |
|---|---|---|---|
| Default Bridge | ❌ No (IP only) | ✅ Same network only | 192.168.65.7 |
| Custom Network | ✅ Yes (name/hostname) | ✅ Same network only | 127.0.0.11 |
| Different Networks | ❌ No | ❌ No (isolated) | 127.0.0.11 |
| With Gateway & Routes | ❌ No | ✅ Yes (via IP) | 127.0.0.11 |
1. Default Bridge Network (172.17.0.0/16):
- Containers get IPs automatically
- No DNS resolution by container name or hostname
- Containers can communicate via IP addresses
- DNS server: 192.168.65.7 (Docker Desktop DNS)
2. Custom Networks:
- Automatic DNS resolution by container name and hostname
- Local DNS server: 127.0.0.11 (Docker embedded DNS)
- Each network is isolated by default
- Better for service discovery
3. Network Isolation:
- Containers in different networks cannot communicate by default
- No routing between custom networks automatically
- DNS only resolves names within the same network
- Complete network segmentation
4. Gateway Container Pattern:
- Multi-homed container (connected to multiple networks)
- Can act as router between networks
- Requires static routes in other containers
- Gateway has multiple IPs (one per network)
5. Static Routing Requirements:
- Requires NET_ADMIN capability (--cap-add=NET_ADMIN)
- Routes traffic through gateway container IP
- DNS resolution doesn't work across networks
- TTL decrements when going through gateway (ttl=63)
- Must manually configure routes in each container
6. Container IP Accessibility:
- Container IPs not directly accessible from host (without port mapping)
- Port mapping (e.g., -p 8080:80) required for host access
- Container IPs only work within Docker networks
7. Internal Networks:
- Setting "Internal": true blocks external internet access
- Restricts communication to within the network only
- Useful for database or backend services
DNS Resolution Failures:
- Default bridge network doesn't support DNS by name
- Cross-network DNS queries fail (different networks)
- Only works within the same custom network
Connectivity Failures:
- No routing between different networks (need static routes)
- Missing NET_ADMIN capability (can't modify routes)
- Container IPs not accessible from host without port mapping
- ICMP/ping blocked by firewall or network policies
Routing Issues:
- Static routes required for inter-network communication
- Each container needs its own route configuration
- Routes must point to gateway IP in their own network
- Use custom networks for DNS-based service discovery
- Use network isolation for security boundaries
- Port mapping for host-to-container communication
- Gateway containers for controlled inter-network routing
- NET_ADMIN capability only when needed (security consideration)