
Configure a Linux Network Gateway / Firewall with DHCP & DNS

System Configuration


Configure hostname

echo gateway.lab.local > /etc/hostname
hostname $(cat /etc/hostname)
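
On systemd-based systems the same result can also be achieved in one step (an equivalent alternative, assuming systemd is in use):

hostnamectl set-hostname gateway.lab.local   # writes /etc/hostname and sets the running hostname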

Add the following to /etc/sysctl.conf

# Disable IPv6 networking (if not required)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
# Accept packets destined for other addresses
net.ipv4.ip_forward = 1

Apply settings defined in /etc/sysctl.conf

systemctl restart systemd-sysctl
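
To confirm that the settings have taken effect, read the values back (a quick check; the expected output assumes the settings above were applied):

sysctl -p                               # alternatively, reload /etc/sysctl.conf directly
sysctl net.ipv4.ip_forward              # should report: net.ipv4.ip_forward = 1
sysctl net.ipv6.conf.all.disable_ipv6   # should report: net.ipv6.conf.all.disable_ipv6 = 1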

Networking


[root@gateway ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:43:e6:71 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:a6:b8:f9 brd ff:ff:ff:ff:ff:ff

Determine the WAN and LAN ports by physically removing a network connection and viewing “dmesg”

[64476.759373] e1000: enp0s8 NIC Link is Down

The interface that went down (enp0s8) is connected on the LAN side, and so this will be used as the LAN port
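
If pulling cables is not practical, most NICs can identify themselves by blinking the port LED instead (a hedged alternative; requires the ethtool package and driver support):

yum install ethtool
ethtool -p enp0s8 10   # blink the LED on the port backing enp0s8 for 10 seconds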

Configure the relevant interface files for the WAN and LAN ports

sed -i '/^ONBOOT/s/=.*$/=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
sed -i '/^ONBOOT/s/=.*$/=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s8
sed -i '/^BOOTPROTO/s/=.*$/=none/' /etc/sysconfig/network-scripts/ifcfg-enp0s8
echo "IPADDR=172.24.10.1" >> /etc/sysconfig/network-scripts/ifcfg-enp0s8
echo "NETMASK=255.255.255.0" >> /etc/sysconfig/network-scripts/ifcfg-enp0s8
systemctl restart network
  •  ONBOOT=yes   Set network interface to be brought up on system boot
  •  BOOTPROTO=none  No protocol is used as static network details are supplied
  •  IPADDR=172.24.10.1   IP address for the interface that will be used as the gateway for the local network
  •  NETMASK=255.255.255.0   Netmask of the local network
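
After the edits above, the relevant lines of the LAN interface file should look something like this (a sketch of the settings touched here only; the DEVICE/NAME/UUID lines written by the installer are left as-is):

# /etc/sysconfig/network-scripts/ifcfg-enp0s8 (extract)
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.24.10.1
NETMASK=255.255.255.0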

Check that the new settings have been applied

[root@gateway ~]# ip addr
[...]
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:8d:83:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.12/24 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 691072sec preferred_lft 691072sec
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:74:ee:ac brd ff:ff:ff:ff:ff:ff
    inet 172.24.10.1/24 brd 172.24.10.255 scope global enp0s8
       valid_lft forever preferred_lft forever

[root@testclient1 ~]# ip route
default via 192.168.1.1 dev enp0s3 proto static metric 100
172.24.10.0/24 dev enp0s8 proto kernel scope link src 172.24.10.1 metric 100
192.168.1.0/24 dev enp0s3 proto kernel scope link src 192.168.1.12 metric 100

Firewall


Re-enable the traditional iptables firewall and clear the default rules

systemctl disable firewalld && systemctl stop firewalld
yum install iptables-services
> /etc/sysconfig/iptables
systemctl enable iptables && systemctl start iptables

Define and save new rules

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i enp0s3 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i enp0s3 -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP
iptables -A FORWARD -i enp0s3 --dst 172.24.10.0/24 -j ACCEPT
iptables -A FORWARD -i enp0s8 --src 172.24.10.0/24 -j ACCEPT
iptables -P FORWARD DROP
iptables -t nat -A POSTROUTING --src 172.24.10.0/24 -j MASQUERADE
service iptables save
  • iptables -A FORWARD -i enp0s3 --dst 172.24.10.0/24 -j ACCEPT   Allow packets arriving on the WAN interface that are destined for the LAN
  • iptables -A FORWARD -i enp0s8 --src 172.24.10.0/24 -j ACCEPT   Allow packets arriving from the LAN
  • iptables -t nat -A POSTROUTING --src 172.24.10.0/24 -j MASQUERADE   MASQUERADE only packets originating from the LAN. If packets arriving from the WAN were also masqueraded, forwarded connections would appear to LAN hosts to come from “gateway” rather than their real source
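
The active and saved rulesets can be reviewed at any time (a quick check; the output will mirror the rules defined above):

iptables -S                     # list running rules in iptables-save format
iptables -t nat -S              # include the NAT table
cat /etc/sysconfig/iptables     # rules persisted by "service iptables save"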

Software Installation

BIND DNS

yum install bind bind-utils
sed -i '/listen-on/s/127.0.0.1;/& 172.24.10.1;/' /etc/named.conf
sed -i '/allow-query/s/localhost/any/' /etc/named.conf
systemctl enable named && systemctl start named
iptables -A INPUT -i enp0s8 -p udp --dport 53 -j ACCEPT
  • listen-on   Specify interface addresses to listen on
  • allow-query   Set to “any” to allow queries on all listening interfaces
  • iptables -A INPUT -i enp0s8 -p udp --dport 53 -j ACCEPT   Allow DNS queries on LAN port
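
From a LAN client, resolution through the gateway can be tested directly with dig, which is part of bind-utils (the query below assumes the client can already reach 172.24.10.1):

dig @172.24.10.1 google.com +short   # should return one or more A records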

DHCPD

yum install dhcp

Configure /etc/dhcp/dhcpd.conf:

authoritative; # This DHCP server is the official DHCP server for the LAN.

default-lease-time 600;
max-lease-time 7200;

subnet 172.24.10.0 netmask 255.255.255.0 {
    range 172.24.10.100 172.24.10.200;
    option domain-name-servers 172.24.10.1;
    option domain-name "lab.local";
    option routers 172.24.10.1;
    option broadcast-address 172.24.10.255;
}

Note:  DHCPD matches each defined subnet against the IP addresses assigned to its interfaces and will not serve a subnet that does not match any of them. Subnet 172.24.10.0/24 will therefore be served on enp0s8.
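
Before enabling the service, the configuration can be syntax-checked (a useful sanity test; dhcpd exits non-zero if the file contains errors):

dhcpd -t -cf /etc/dhcp/dhcpd.conf   # test the configuration file and exit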

Enable the system service:

systemctl enable dhcpd && systemctl start dhcpd

Testing


Connect several clients to the LAN and set their network interfaces to come up on boot. DHCP is already enabled by default on the clients, so only ONBOOT needs to be changed

sed -i '/^ONBOOT/s/=.*$/=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
systemctl restart network

Check that DHCP is functioning correctly

                   +---------------+    ------ Discover ---->    +---------------+
                   |      DHCP     |    <----- Offer --------    |      DHCP     |
                   |     Client    |    ------ Request ----->    |     Server    |
                   +---------------+    <----- Acknowledge --    +---------------+

DHCP Client Server Interaction Steps
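
The exchange above can also be watched live on the gateway while a client boots (an optional check; assumes the tcpdump package is installed):

yum install tcpdump
tcpdump -ni enp0s8 'port 67 or port 68'   # show DHCP traffic on the LAN interface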


[root@gateway ~]# systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
   Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-06-20 10:16:10 BST; 8min ago
     Docs: man:dhcpd(8)
           man:dhcpd.conf(5)
 Main PID: 2051 (dhcpd)
   Status: "Dispatching packets..."
   CGroup: /system.slice/dhcpd.service
           └─2051 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid

Jun 20 10:23:05 gateway.lab.local dhcpd[2051]: DHCPDISCOVER from 08:00:27:b7:f0:59 via enp0s8
Jun 20 10:23:06 gateway.lab.local dhcpd[2051]: DHCPOFFER on 172.24.10.101 to 08:00:27:b7:f0:59 via enp0s8
Jun 20 10:23:06 gateway.lab.local dhcpd[2051]: DHCPREQUEST for 172.24.10.101 (172.24.10.1) from 08:00:27:b7:f0:59 via enp0s8
Jun 20 10:23:06 gateway.lab.local dhcpd[2051]: DHCPACK on 172.24.10.101 to 08:00:27:b7:f0:59 via enp0s8

[root@testclient1 ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:dc:b5:90 brd ff:ff:ff:ff:ff:ff
    inet 172.24.10.100/24 brd 172.24.10.255 scope global dynamic enp0s3
       valid_lft 483sec preferred_lft 483sec

[root@testclient2 ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:7e:75:7f brd ff:ff:ff:ff:ff:ff
    inet 172.24.10.101/24 brd 172.24.10.255 scope global dynamic enp0s3
       valid_lft 441sec preferred_lft 441sec

Forward SSH to access test clients

iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 220 -j DNAT --to 172.24.10.100:22
iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 221 -j DNAT --to 172.24.10.101:22
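
As with the earlier rules, these only survive a reboot if saved; the NAT table can then be reviewed to confirm both entries are present:

service iptables save
iptables -t nat -L PREROUTING -n --line-numbers   # list the DNAT rules just added
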
                                        Gateway
                                           __
                              Modem       |==|         LAN Switch
                               ____       |  |       _______________
             Internet  <------[_..°]------|__|------[_:::::::::::::_]
                                   (enp0s3)  (enp0s8)      | |
                                                       __  | |  __
                                                 ____ |==| | | |==| ____   
                                   testclient1  |    ||  |_| |_|  ||    |  testclient2
                                 172.24.10.100  |____||__|     |__||____|  172.24.10.101
                                                /::::/             /::::/

Test Client 1

$ ssh 192.168.1.12 -p 220
root@192.168.1.12's password:
Last login: Wed Jun 20 10:46:57 2018 from 192.168.1.19

[root@testclient1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search lab.local
nameserver 172.24.10.1

[root@testclient1 ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1f:2d:84 brd ff:ff:ff:ff:ff:ff
    inet 172.24.10.100/24 brd 172.24.10.255 scope global noprefixroute dynamic enp0s3
       valid_lft 457sec preferred_lft 457sec

[root@testclient1 ~]# ping -c3 google.com
PING google.com (216.58.198.110) 56(84) bytes of data.
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=1 ttl=56 time=6.30 ms
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=2 ttl=56 time=6.41 ms
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=3 ttl=56 time=7.43 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 6.303/6.717/7.431/0.507 ms

[root@testclient1 ~]# curl ipinfo.io/ip
212.42.180.148

Test Client 2

$ ssh 192.168.1.12 -p 221
root@192.168.1.12's password:
Last login: Wed Jun 20 10:45:01 2018 from 192.168.1.19

[root@testclient2 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search lab.local
nameserver 172.24.10.1

[root@testclient2 ~]# ip addr show enp0s3
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b7:f0:59 brd ff:ff:ff:ff:ff:ff
    inet 172.24.10.101/24 brd 172.24.10.255 scope global noprefixroute dynamic enp0s3
       valid_lft 450sec preferred_lft 450sec

[root@testclient2 ~]# ping -c3 google.com
PING google.com (216.58.198.110) 56(84) bytes of data.
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=1 ttl=56 time=7.68 ms
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=2 ttl=56 time=6.46 ms
64 bytes from lhr25s07-in-f110.1e100.net (216.58.198.110): icmp_seq=3 ttl=56 time=6.42 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 6.421/6.855/7.680/0.590 ms

[root@testclient2 ~]# curl ipinfo.io/ip
212.42.180.148

The clients have been automatically configured via DHCP, can resolve hostnames via DNS, and are sending and receiving network traffic through the gateway.

Logical Volume Management with RAID (mdadm)


Logical Volume Management


/ has just run out of space, what do we do now? With a traditional partition layout, increasing the size of a partition is difficult because it can’t simply be resized in place: typically you would have to back up your files, delete the partitions, create a new partition layout, create new file systems, and copy all your files back on.

That’s a lot of work just to add some room to a partition that’s running out of space. Graphical tools such as gparted make this easier, but the system would still have to be taken down for maintenance, and that downtime may not be an option for the server in question. This is where LVM comes in:

[root@localhost ~]# lvextend -L+1G /dev/new_volume/lv_root
  Size of logical volume new_volume/lv_root changed from 2.00 GiB (512 extents) to 3.00 GiB (768 extents).
  Logical volume lv_root successfully resized

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/new_volume-lv_root
                      2.0G  3.0M  1.9G   1% /mnt

[root@localhost ~]# resize2fs /dev/mapper/new_volume-lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/new_volume-lv_root is mounted on /mnt; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/new_volume-lv_root to 786432 (4k) blocks.
The filesystem on /dev/mapper/new_volume-lv_root is now 786432 blocks long.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/new_volume-lv_root
                      2.9G  3.0M  2.8G   1% /mnt

There are no guarantees with partition resizing. Always keep full up-to-date backups of your data.

With minimal time and work the root logical volume has now been increased by 1GB, and it was all done while the system was still running; no downtime was required.

LVM can be used on a single hard drive, but it can also take multiple hard drives and pool the available space into a single volume group. Space from the group is then allocated to the logical volumes that hold the Linux file systems. If space runs low, additional hard drives can be added to the volume group to provide room to extend the logical volumes, and logical volumes can be extended or shrunk as required.

[Diagram: three disks of mismatched sizes (550GB in total, smallest 100GB) pooled into a single LVM volume group]
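
For example, growing an existing volume group with an extra disk might look like this (a sketch only; /dev/sdd and the 50GB figure are placeholders, using the volume names from the example later in this post):

pvcreate /dev/sdd                            # prepare the new disk for LVM
vgextend new_volume /dev/sdd                 # add it to the existing volume group
lvextend -L+50G /dev/new_volume/lv_home      # hand the extra space to a logical volume
resize2fs /dev/mapper/new_volume-lv_home     # grow the ext4 filesystem to match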

LVM provides the ability to dynamically allocate space, but it does not provide redundancy on its own (although newer versions of LVM do include built-in RAID support). If a disk in a plain volume group died, the entire volume group and the data on it would be lost.
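
For completeness, recent LVM releases can create mirrored logical volumes themselves, without mdadm (a brief illustration, not used in the setup below; it needs at least two physical volumes in the volume group):

lvcreate --type raid1 -m 1 -L 1G -n lv_mirror new_volume   # one mirror copy = RAID 1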

LVM on Linux software RAID array


The purpose of a Redundant Array of Independent Disks (at level 1 or higher) is to provide redundancy and keep the system up and running when a hard disk fails; if this happens, the missing data is provided on-the-fly from the redundant information on the other hard disks. This allows the system to continue uninterrupted until the drive can be replaced and the missing data is rebuilt. Using LVM with a RAID allows LVM to continue working even if one of the physical disks dies.

Using a RAID array with LVM does have its disadvantages. RAID will use the size of the smallest disk when building the array due to the need to stripe information across the drives. In the example above the smallest disk is 100GB, and so only 100GB would be used from each of the other two drives despite their larger capacity.

In a RAID 5 array, the space of one drive will be used for redundancy which makes it unavailable for storage. This issue can be reduced by using more hard drives, but the more hard drives there are the greater the risk of failure becomes.


Total available space in a RAID 5 array = (size of smallest device) x (number of disks - 1)

100GB x (3 – 1) = 200GB. With LVM the entire 550GB was available for use in the volume group, but with a RAID 5 array only 200GB out of the 550GB would be available for use due to the mismatched device sizes. To make better use of the disk space the drive sizes should be matched more closely.

[Diagram: LVM volume group created on top of an mdadm RAID 5 array]

Example Setup


I have used a virtual machine with 5 partitions to create an mdadm RAID 5 array. Normally the RAID would be built from multiple physical disks, but the concept is the same for this example. The mdadm RAID array is then used to create a volume group from which the logical volumes can be created.

[root@localhost ~]# mdadm --create /dev/md0 --raid-devices=5 --level=5 /dev/xvdb5 \
> /dev/xvdb6 /dev/xvdb7 /dev/xvdb8 /dev/xvdb9
[...]
md0: WARNING: xvdb9 appears to be on the same physical disk as xvdb8.
[...]
md0: WARNING: xvdb5 appears to be on the same physical disk as xvdb8.
True protection against single-disk failure might be compromised.
md/raid:md0: device xvdb8 operational as raid disk 3
md/raid:md0: device xvdb7 operational as raid disk 2
md/raid:md0: device xvdb6 operational as raid disk 1
md/raid:md0: device xvdb5 operational as raid disk 0
md/raid:md0: allocated 0kB
md/raid:md0: raid level 5 active with 4 out of 5 devices, algorithm 2
md0: detected capacity change from 0 to 6476005376
[...]
mdadm: array /dev/md0 started.
 md0: unknown partition table
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdb9[5] xvdb8[3] xvdb7[2] xvdb6[1] xvdb5[0]
      6324224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>

The RAID device is now created and ready to be used. Next, create the LVM volumes on top of it.

[root@localhost ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
[root@localhost ~]# vgcreate new_volume /dev/md0
  Volume group "new_volume" successfully created
[root@localhost ~]# lvcreate -L 500M -n lv_swap new_volume
  Logical volume "lv_swap" created.
[root@localhost ~]# lvcreate -L 2G -n lv_root new_volume
  Logical volume "lv_root" created.
[root@localhost ~]# lvcreate -l 100%FREE -n lv_home new_volume
  Logical volume "lv_home" created.
[root@localhost ~]# mkfs.ext4 /dev/mapper/new_volume-lv_root
[...]
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost]~# mkfs.ext4 /dev/mapper/new_volume-lv_home
[...]
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
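
To finish off, the swap volume would be formatted and the new filesystems mounted (a sketch; the mount points are examples only):

mkswap /dev/mapper/new_volume-lv_swap    # initialise the swap logical volume
swapon /dev/mapper/new_volume-lv_swap    # activate it
mkdir -p /mnt/root /mnt/home
mount /dev/mapper/new_volume-lv_root /mnt/root
mount /dev/mapper/new_volume-lv_home /mnt/home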