UPDATE: If you want a turn-key appliance, you should check out ZeroShell (http://lmgtfy.com/?q=zeroshell). Not only do you get an easy-to-set-up Linux router, but you also get RRAS, Kerberos, RADIUS, DHCP, DNS, firewall, X.509 CA and HTTP proxy servers!
I’ve left the article below in case you really do want to configure your CentOS host as a NAT router. But I’d recommend ZeroShell, unless you hate the ‘example.com’ domain. 🙂
If you google it, you’ll find tons of articles, all with mostly the same information. Here’s a Quick and Dirty version if you’re setting up a simple Docker environment. Since I am dealing with Docker Swarm and Docker EE, I wanted to try creating a VM to do the NAT routing rather than rely on VMware Fusion’s services (and feel free to question the sanity of this). I created two custom isolated networks and put one Docker host VM on vmnet3 and another on vmnet4. Then I placed my CentOS7 NAT gateway VM on vmnet3, vmnet4, and default bridged network so I can treat it like a Jump Host. I’m just documenting here and sharing in case it might be useful for someone else.
If you need more detail on any of these, try a Google search for NAT Router site:DigitalOcean.com.
Here’s my list for configuring firewalld on a CentOS7/RHEL7 host.
CentOS 7 firewalld NAT router
Configure IP Forwarding
Add the following line to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
Then tell sysctl to reload /etc/sysctl.conf by running the following command:
sysctl -p
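Putting that step together with a quick check (the setting is the standard net.ipv4.ip_forward kernel toggle; run as root):

```shell
# Persist IPv4 forwarding and verify it took effect (run as root).
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p                            # reload /etc/sysctl.conf
sysctl net.ipv4.ip_forward           # expect: net.ipv4.ip_forward = 1
cat /proc/sys/net/ipv4/ip_forward    # expect: 1
```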
Add a new zone (if necessary)
firewall-cmd --permanent --new-zone=<ZONE_NAME>
…or delete a zone if necessary (firewall-cmd --permanent --delete-zone=<ZONE_NAME>)
My suggestion? Create a zone for each private network and add the appropriate interface to it. I was using ens33 on the public network, ens38 on vmnet3, and ens39 on vmnet4. ens33 was already in the public zone, so I just needed to create the two private zones and add each NIC.
firewall-cmd --permanent --new-zone=vmnet3
firewall-cmd --permanent --new-zone=vmnet4
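One gotcha: a zone created with --permanent exists only in the permanent configuration until firewalld reloads, so runtime queries won’t see it right away. A short sketch of the full sequence (zone names from my setup; --new-zone is the flag firewall-cmd documents for creating zones):

```shell
# Create the zones in the permanent config, then reload so the
# runtime configuration picks them up.
firewall-cmd --permanent --new-zone=vmnet3
firewall-cmd --permanent --new-zone=vmnet4
firewall-cmd --reload
firewall-cmd --get-zones   # both new zones should now be listed
```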
Add or remove interfaces to zones
firewall-cmd --permanent --zone=<ZONE_NAME> --add-interface=<interface name>
…or…(if you entered one incorrectly)
firewall-cmd --permanent --zone=<ZONE_NAME> --remove-interface=<interface name>
I did this:
firewall-cmd --permanent --zone=vmnet3 --add-interface=ens38
firewall-cmd --permanent --zone=vmnet4 --add-interface=ens39
(I didn’t bother with ens33, since it was already in the public zone, i.e., the bridged network.)
If the NIC doesn’t show up, restart the firewall daemon.
systemctl restart firewalld.
If the NIC still doesn’t show up via
firewall-cmd --info-zone=<ZONE_NAME>, check the /etc/sysconfig/network-scripts/ifcfg-<interface> configuration file for the ZONE= statement. If it is missing, check your command history and be sure you added the interface to the correct zone; I’ve had hit-and-miss results. Add the line
ZONE=<name_of_zone> to the end of the ifcfg-<interface> configuration file if it is missing (e.g.,
echo "ZONE=vmnet4" >> /etc/sysconfig/network-scripts/ifcfg-ens39, if your interface is ens39 and the zone name is ‘vmnet4’).
If the configuration file is accurate, as mentioned above, just restart the firewall service.
- Allow the isolated networks to send packets out through the router (the NIC on the bridged network, in my case ens33)
- Allow responses back from the router to the isolated network
PUBNIC = the NIC on the bridged network (the network the private NICs are isolated from); it masquerades for all private networks on the public side and forwards reply traffic back to the private networks.
firewall-cmd --direct --add-rule ipv4 nat POSTROUTING 0 -o $PUBNIC -j MASQUERADE
Private networks to be routed (all traffic, regardless of service or port)
For example, if the private NICs (PVTNIC) are ens38 and ens39, and the public NIC (PUBNIC) on the bridged network is ens33, you can enter the additions individually or in a for loop.
for PVTNIC in ens38 ens39;do firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i $PVTNIC -o $PUBNIC -j ACCEPT && \
firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i $PUBNIC -o $PVTNIC -m state --state RELATED,ESTABLISHED -j ACCEPT; done;
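Note that --direct rules added without --permanent live only in the runtime configuration and disappear when firewalld reloads or restarts. A sketch that records them permanently as well (interface names from my setup):

```shell
# Persist the NAT/forwarding rules; without --permanent they vanish
# when firewalld reloads or restarts.
PUBNIC=ens33
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -o "$PUBNIC" -j MASQUERADE
for PVTNIC in ens38 ens39; do
  firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i "$PVTNIC" -o "$PUBNIC" -j ACCEPT
  firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i "$PUBNIC" -o "$PVTNIC" -m state --state RELATED,ESTABLISHED -j ACCEPT
done
firewall-cmd --reload   # load the permanent rules into the runtime config
```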
Firewall rules (Common)
- DNS, 53/udp
- DHCP, 67/udp
- Chrony, 123/udp & 323/udp (or if running NTP, just 123/udp)
- ssh, 22/tcp
- Docker client communication, 2376/tcp
- Docker Swarm (manager nodes), 2377/tcp
- Docker container network discovery, 7946/tcp & /udp
- Docker ingress network (overlay), 4789/udp
firewall-cmd --add-port=53/udp --permanent
firewall-cmd --add-port=67/udp --permanent
firewall-cmd --add-port=123/udp --permanent
firewall-cmd --add-port=323/udp --permanent
firewall-cmd --add-port=22/tcp --permanent
firewall-cmd --add-port=2376/tcp --permanent
firewall-cmd --add-port=2377/tcp --permanent
firewall-cmd --add-port=7946/tcp --permanent
firewall-cmd --add-port=7946/udp --permanent
firewall-cmd --add-port=4789/udp --permanent
firewalld was made to be more user-friendly. You can also add by service name, but I believe it’s a good idea to know the ports you’re using rather than just the service names.
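For illustration, the service-name equivalent looks like this (ssh and dns are stock firewalld service definitions; verify what your version ships with --get-services):

```shell
# Add rules by service name instead of raw port numbers.
firewall-cmd --get-services           # list the predefined service names
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=dns
firewall-cmd --info-service=ssh       # shows the port(s) the service maps to
```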
Restart the `firewalld` service
systemctl restart firewalld
NOTE: If you’re connected remotely, you will probably be disconnected.
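After reconnecting, a few commands are handy for confirming everything came back up (all standard firewall-cmd queries):

```shell
# Sanity checks after restarting firewalld.
firewall-cmd --state                   # should print "running"
firewall-cmd --get-active-zones        # zones with their bound interfaces
firewall-cmd --list-ports              # ports opened in the default zone
firewall-cmd --direct --get-all-rules  # runtime direct (NAT/forward) rules
```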