
A host is a server able to run virtual machines through a special software component called a hypervisor, and it is managed by the OpenNebula frontend.

The hosts do not need to have a homogeneous configuration: it is possible to use different hypervisors, on different GNU/Linux distributions, within a single OpenNebula cluster.

Using different hypervisors in your infrastructure is not just a technical exercise; it gives you greater flexibility and reliability. A few examples where having multiple hypervisors proves beneficial are as follows:

  • A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (for example, Windows 2000 Service Pack 4), but you can run it on hypervisor B without any problem.

  • You have a production infrastructure running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor might start requiring a license payment, or might go bankrupt.

The current version of OpenNebula gives you great flexibility regarding hypervisor usage, since it natively supports KVM and Xen (both open source) and VMware ESXi. In the future it will probably also support VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

  1. Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example, iptables/ebtables and any network hooks we have configured).

  2. The frontend's and hosts' hostnames should be resolvable, either by a local DNS or by a shared /etc/hosts file.

  3. The oneadmin on the frontend should be able to connect remotely through SSH to the oneadmin on the hosts without a password.

  4. Configure the shared network bridge that will be used by the VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands.

If you did not create it during the operating system installation, create the oneadmin user on the host with the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now, if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get a shell prompt without entering any password:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. Keep in mind that if we are going to set up shared storage, the UID number of the oneadmin user needs to be the same on the frontend and on every host. In other words, check with the id command that the oneadmin UID matches on both the frontend and the hosts.
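
A quick way to verify this is to compare the numeric UID reported by id on the frontend and on each host; if you need to create the account with a specific UID, adduser accepts a --uid option (the UID value 1001 below is just an example):

oneadmin@front-end $ id -u oneadmin
1001
oneadmin@front-end $ ssh oneadmin@kvm01 id -u oneadmin
1001
# only if you need to (re)create the account with a matching UID
youruser@host1 $ sudo adduser --uid 1001 oneadmin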

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server's key and ask for your permission to continue, with a message similar to the following:

The authenticity of host 'host01 (192.168.254.2)' can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks.

For this reason, you need to connect once from the oneadmin user on the frontend to every host, in order to save the remote hosts' fingerprints in oneadmin's known_hosts file. Not doing this will prevent OpenNebula from connecting to the remote hosts.
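
One possible shortcut is ssh-keyscan, which collects the host keys of several machines in a single pass (the host names below are the examples used in this article):

oneadmin@front-end $ ssh-keyscan kvm01 xen01 esx01 >> ~/.ssh/known_hosts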

In large environments, this requirement may slow down the configuration of new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote host keys, adding the following to oneadmin's ~/.ssh/config:

Host *
    StrictHostKeyChecking no
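
If you prefer not to disable the check globally, the same option can be scoped to the cluster hosts only, for example (host names as in the examples above):

Host kvm01 xen01 esx01
    StrictHostKeyChecking no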

If you do not have a local DNS (or you cannot or do not want to set one up), you can manually manage the /etc/hosts file on every host, with content similar to the following:

127.0.0.1      localhost
192.168.66.90  on-front
192.168.66.97  kvm01
192.168.66.98  xen01
192.168.66.99  esx01

Now you should be able to connect remotely from one node to another using plain hostnames:

$ ssh oneadmin@kvm01
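
One possible way to push the same /etc/hosts file from the frontend to every host is a small shell loop such as the following (the host names are the examples above, and the sudo copy assumes the passwordless sudo configuration described later in this article):

for h in kvm01 xen01 esx01; do
  scp /etc/hosts oneadmin@$h:/tmp/hosts
  ssh oneadmin@$h sudo cp /tmp/hosts /etc/hosts
done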

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing a plain hosts file on every host does not excite you, you can try installing and configuring dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally providing DHCP and TFTP as well) that serves a small-scale network well.

The OpenNebula frontend may be a good place to install it.

For an Ubuntu/Debian installation use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts file on the frontend will look similar to the following:

127.0.0.1      localhost
192.168.66.90  on-front
192.168.66.97  kvm01
192.168.66.98  xen01
192.168.66.99  esx01

Any other hostname can be added here, in the hosts file of the frontend running dnsmasq. On the other hosts, configure /etc/resolv.conf as follows:

# IP where dnsmasq is installed
nameserver 192.168.0.2
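
You can optionally verify name resolution directly against dnsmasq before testing SSH; dig is provided by the dnsutils package on Debian/Ubuntu, and the addresses below are the examples used in this article:

$ dig +short kvm01 @192.168.0.2
192.168.66.97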

Now you should be able to connect remotely from one node to another using plain hostnames:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and reload dnsmasq; thanks to dnsmasq, they will then resolve from every other host.
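
dnsmasq re-reads /etc/hosts when it receives a SIGHUP, so a reload can be as simple as the following command on the frontend (a full service restart works as well):

$ sudo pkill -HUP dnsmasq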

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration, which may look like the following:

# /etc/sudoers
Defaults env_reset
root   ALL=(ALL) ALL
%sudo  ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without being asked for their password before each command.

Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo
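
As a quick check from the frontend, sudo -n fails instead of prompting when a password would be required, so the following should print OK on a correctly configured host (the host name is one of the examples used here):

oneadmin@front-end $ ssh oneadmin@kvm01 sudo -n true && echo OK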

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-conscious people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges can save you some headaches. This is a suggested configuration, but it is not required to run OpenNebula.
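
If you prefer a tighter setup, a sketch such as the following grants passwordless sudo only for the network-related commands mentioned earlier; the file name and command paths are assumptions and may differ on your distribution:

# /etc/sudoers.d/oneadmin -- restrictive alternative (paths may vary)
oneadmin ALL=(ALL) NOPASSWD: /sbin/iptables, /sbin/ebtables, /sbin/brctl, /sbin/ip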

Configuring network bridges

Every host should have its bridges configured with the same names. Check the following /etc/network/interfaces file as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By omitting the bridge_ports parameter, you get a purely virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other.
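
As a sketch, such an additional bridge with no physical port attached can be declared in /etc/network/interfaces with bridge_ports none (the name vbr0 is arbitrary):

# a host-only bridge, not attached to any physical interface
auto vbr0
iface vbr0 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0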
