In this article written by Omar Khedher, author of Mastering OpenStack, we will explore various aspects of networking and security in OpenStack. A major part of the article focuses on the different security layouts that can be built using Neutron.

In this article, we will discuss the following topics:

  • Understanding how Neutron facilitates the network management in OpenStack
  • Using security groups to enforce a security layer for instances

The story of an API

By analogy, the OpenStack compute service provides an API that exposes a virtual server abstraction to imitate compute resources. The network service performs in the same way, bringing a new generation of virtualization to network resources such as networks, subnets, and ports, which can be described in the following schema (a short CLI sketch of these abstractions follows the list):

  • Network: An abstraction for layer 2 network segmentation, similar to VLANs
  • Subnet: The associated abstraction layer for a block of IPv4/IPv6 addresses per network
  • Port: The associated abstraction layer used to attach a virtual NIC of an instance to a network
  • Router: An abstraction for layer 3 that performs routing between networks
  • Floating IP: A static public IP address mapped from an external network to an internal one
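The following is a minimal sketch of how these abstractions map to neutron CLI calls. The names used here (packt_net, packt_subnet, packt_router, and the external network public) and the 10.10.10.0/24 range are illustrative only:

# neutron net-create packt_net
# neutron subnet-create packt_net 10.10.10.0/24 --name packt_subnet
# neutron port-create packt_net
# neutron router-create packt_router
# neutron router-interface-add packt_router packt_subnet
# neutron floatingip-create public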

Security groups

Imagine a scenario where you have to apply certain traffic management rules to a dozen instances across your compute nodes. Assigning a set of rules to a specific group of instances is much easier than configuring each one at a time. Security groups enclose all the aspects of the rules that are applied to the ingress and egress traffic of instances, including the following:

  • The source and destination, which allow or deny traffic to instances from either internal OpenStack IP addresses or from the rest of the world
  • The protocols to which the rule applies, such as TCP, UDP, and ICMP
  • Egress/ingress traffic management to a Neutron port

In this way, OpenStack offers an additional security layer on top of the firewall rules that are available on the compute instance. The purpose is to manage traffic to several compute instances from one security group. Bear in mind that the networking security groups allow more granular traffic filtering than the compute firewall rules, since they are applied per port instead of per instance. Network security rules can be created in several ways, as shown in the following sections.

For more information on how iptables works on Linux, https://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-iptables.html is a very useful reference.

Managing security groups using Horizon

From Horizon, in the Access & Security section, you can add a security group and name it, for example, PacktPub_SG. Then a simple click on Edit Rules will do the trick. The following example illustrates how this network security function controls both ingress and egress traffic; the rule set shown here is a reconstruction based on the description that follows:
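Direction  Ether Type  IP Protocol  Port Range  Remote
Egress     IPv4        Any          Any         0.0.0.0/0 (CIDR)
Egress     IPv6        Any          Any         ::/0 (CIDR)
Ingress    IPv4        ICMP         Any         0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          22          0.0.0.0/0 (CIDR)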

The previous security group contains four rules. The first and second lines are rules that open all outgoing traffic for IPv4 and IPv6, respectively. The third line allows inbound ICMP traffic, while the last one opens port 22 for inbound SSH. You might also notice the CIDR fields, which are essential to understand: based on the CIDR, you allow or restrict traffic over the specified port. For example, a CIDR of 0.0.0.0/0 will allow traffic from all IP addresses over the port mentioned in your rule, whereas a CIDR of 32.32.15.5/32 will restrict traffic to a single host with the IP 32.32.15.5. If you would like to allow a range of IP addresses in the same subnet, you can use a CIDR such as 32.32.15.0/24, which restricts traffic to the addresses 32.32.15.0 through 32.32.15.255; any other IP address will not match the rule.
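The same CIDR-based restriction can be expressed on the command line through the --remote-ip-prefix option. The following is a minimal sketch that limits inbound SSH on PacktPub_SG to the 32.32.15.0/24 range discussed above:

# neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 32.32.15.0/24 PacktPub_SG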

Note that security group names must be unique per project.

Managing security groups using the Neutron CLI

Security groups can also be managed by using the Python-based neutron command-line client. From any host where the client is installed and can reach the Neutron server, you can list, for example, all the existing security groups in the following way:

# neutron security-group-list

The preceding command yields output similar to the following illustrative listing (IDs will differ in your environment):
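+--------------------------------------+-------------+-------------+
| id                                   | name        | description |
+--------------------------------------+-------------+-------------+
| 2d6d7c91-...                         | default     | default     |
| 8a9c1f0e-...                         | PacktPub_SG |             |
+--------------------------------------+-------------+-------------+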

To demonstrate how the PacktPub_SG security group rules illustrated previously are implemented on the host, we can add a new rule that allows ingress ICMP (ping) connections in the following way:

# neutron security-group-rule-create --protocol icmp --direction ingress PacktPub_SG

The preceding command produces a result similar to the following illustrative output (elided values will differ in your environment):
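Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | ...                                  |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  |                                      |
| security_group_id | ...                                  |
| tenant_id         | ...                                  |
+-------------------+--------------------------------------+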

The following command line will add a new rule that allows ingress connections to establish a secure shell connection (SSH):

# neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress PacktPub_SG

The preceding command gives a similar output, this time with the protocol field set to tcp and the port range set to 22.

By default, if no security group has been specified, the ports of an instance are associated with the project's default security group, which allows all outbound traffic and blocks all inbound traffic.
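You can verify this behavior by displaying the default group and its rules; a quick check (the output layout varies with the client version):

# neutron security-group-show default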

You may notice from the output of the previous command that the rules are listed by the current project ID rather than by security group.

Managing security groups using the Nova CLI

The nova command line also does the same trick if you intend to perform basic security group control, as follows:

$ nova secgroup-list-rules default

Since we are setting Neutron as our network service controller, we will proceed by using the networking security groups, which expose additional traffic control features. If you are still using the compute API to manage security groups, you can set security_group_api = neutron in the nova.conf file of each compute node to delegate those calls to Neutron.
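The following is a minimal sketch of the relevant nova.conf setting; in the releases this article targets, the option lives in the [DEFAULT] section:

[DEFAULT]
security_group_api = neutron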

To associate security groups with a running instance, you can use the nova client in the following way:

# nova add-secgroup test-vm1 PacktPub_SG

The following command shows the new association of the PacktPub_SG security group with the test-vm1 instance:

# nova show test-vm1

The following is an illustrative excerpt of the result, showing the security_groups field (other fields omitted, and values will differ in your environment):
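+--------------------------------------+----------------------------+
| Property                             | Value                      |
+--------------------------------------+----------------------------+
| ...                                  | ...                        |
| security_groups                      | default, PacktPub_SG       |
| ...                                  | ...                        |
+--------------------------------------+----------------------------+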

One of the best practices to troubleshoot connection issues for running instances is to start by checking the iptables rules on the compute node. Any rule added to a security group is applied to the iptables chains of the compute node. We can check the updated iptables chains on the compute host after applying the security group rules by using the following command:

# iptables-save

The preceding command yields output similar to the following illustrative excerpt, where the chain names are derived from the Neutron port ID (here, f7fabcce-f):
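-A neutron-openvswi-FORWARD -m physdev --physdev-out tapf7fabcce-f --physdev-is-bridged -j neutron-openvswi-sg-chain
-A neutron-openvswi-sg-chain -m physdev --physdev-out tapf7fabcce-f --physdev-is-bridged -j neutron-openvswi-if7fabcce-f
-A neutron-openvswi-if7fabcce-f -p icmp -j RETURN
-A neutron-openvswi-if7fabcce-f -p tcp -m tcp --dport 22 -j RETURN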

These rules describe the direction of the packet and the rule that is matched. For example, inbound traffic to the f7fabcce-f interface will be processed by the neutron-openvswi-if7fabcce-f chain.

It is important to know how iptables rules work in Linux. Updating the security groups will also change the iptables chains. Remember that chains are sets of rules that determine how packets should be filtered. Network packets traverse the rules in a chain, and a rule can jump to another chain. You can find different chains on each compute host, depending on the network filter setup.

If you have already created your own security groups, a series of iptables rules and chains are implemented on every compute node that hosts an instance associated with the corresponding security group. The following is an illustrative excerpt of such an update for a compute node that runs instances within the 10.10.10.0/24 subnet, where the instance shown has been assigned the address 10.10.10.2:
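-A neutron-openvswi-sf7fabcce-f -s 10.10.10.2/32 -m mac --mac-source FA:16:3E:7E:79:64 -j RETURN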

This rule dictates that traffic leaving the f7fabcce-f interface must be sourced from 10.10.10.2/32 and the FA:16:3E:7E:79:64 MAC address. The rule is useful when you wish to prevent an instance from performing MAC/IP address spoofing. It is possible to test ping and SSH to the instance via the router namespace in the following way:

# ip netns exec qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ping 10.10.10.2

The preceding command provides output similar to the following (timings will vary):
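PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=0.728 ms
64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=0.448 ms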

The testing of an SSH connection to the instance can be done by using the same router namespace, as follows:

# ip netns exec qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ssh <user>@10.10.10.2

If the security group rules are in place, the SSH connection will be established and you will be prompted to authenticate.

Web servers DMZ example

In the current example, we will show a simple setup of a security group that might be applied to a pool of web servers running on the Compute01, Compute02, and Compute03 nodes. We will allow inbound traffic from the Internet to access WebServer01, AppServer01, and DNSServer01 over HTTP and HTTPS. This is depicted in the following diagram:

Let's see how we can restrict ingress/egress traffic via the Neutron API:

$ neutron security-group-create DMZ_Zone --description "allow web traffic from the Internet"

$ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 DMZ_Zone --remote-ip-prefix 0.0.0.0/0

$ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 443 --port-range-max 443 DMZ_Zone --remote-ip-prefix 0.0.0.0/0

$ neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 53 --port-range-max 53 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
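Once the rules are in place, the group can be attached to the running instances; the following is a sketch using the nova client, assuming the instance names from the diagram above:

# nova add-secgroup WebServer01 DMZ_Zone
# nova add-secgroup AppServer01 DMZ_Zone
# nova add-secgroup DNSServer01 DMZ_Zone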

From Horizon, we can see the following security group rules added:

To conclude, we have presented different security layouts using Neutron. At this point, you should be comfortable with security groups and their use cases.

Further your OpenStack knowledge by designing, deploying, and managing a scalable OpenStack infrastructure with Mastering OpenStack.
