Configuring the NFS storage

NFS storage is a fairly common type of storage that is quite easy to set up and run even without special equipment. You can take a server with large disks and export an NFS directory. Despite the apparent simplicity of NFS, the setup should be done with attention to detail.
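
If you are preparing the export yourself, the work on the storage server amounts to creating a directory with the right ownership and exporting it. The following is a minimal sketch, driven from Python for consistency with the later examples; the export path and export options are assumptions to adapt to your environment, and the 36:36 ownership corresponds to the vdsm user and kvm group that oVirt uses to access NFS storage:

    # Prepare an NFS export for oVirt on the storage server (run as root).
    import subprocess

    export_dir = '/storage/nfs/data'  # example path; use your own

    subprocess.run(['mkdir', '-p', export_dir], check=True)
    # oVirt accesses NFS exports as vdsm:kvm (UID/GID 36).
    subprocess.run(['chown', '36:36', export_dir], check=True)

    # Append the export and re-export it; '*' allows any client for brevity.
    with open('/etc/exports', 'a') as exports:
        exports.write('{0} *(rw,sync,no_subtree_check)\n'.format(export_dir))
    subprocess.run(['exportfs', '-ra'], check=True)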

Once you have made sure that the NFS directory is suitable for use, proceed to connect the storage to the data center. Click on Configure Storage to open the dialog box in which you specify the basic storage configuration; the following options are displayed:

  • Name and Data Center: It is used to specify a name for the storage and its target data center
  • Domain Function/Storage Type: It is used to choose the domain function (Data) and the storage type (NFS)
  • Use Host: It is used to choose the host that will make the initial connection to the storage and act as the Storage Pool Manager (SPM)
  • Export Path: It is used to enter the storage server name and the path of the exported directory
  • Advanced Parameters: It provides additional connection options, such as the NFS version, the number of retransmissions, and the timeout; these should be changed only in exceptional cases

Fill in the required storage settings and click on the OK button; this will start the process of connecting storage.
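
If you prefer to script this step, the oVirt Python SDK can create the same storage domain. The following is a minimal sketch using ovirtsdk4 (shipped with the newer 4.x releases; the 3.x series used a different SDK), with placeholder URL, credentials, host, and export path:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the engine; the URL, credentials, and CA file are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    # Add a data domain backed by the NFS export. The host plays the same
    # role here as the Use Host field in the dialog box.
    sds_service = connection.system_service().storage_domains_service()
    sds_service.add(
        types.StorageDomain(
            name='nfs-data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.NFS,
                address='storage.example.com',
                path='/storage/nfs/data',
            ),
        ),
    )
    connection.close()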

The following image shows the New Storage dialog box while the NFS storage is being connected:

Configuring the iSCSI storage

This section explains how to connect iSCSI storage to a data center whose storage type is iSCSI. You can skip this section if you do not use iSCSI storage.

iSCSI is a technology for building SANs (Storage Area Networks). A key feature of this technology is the transmission of SCSI commands over IP networks, which allows block data to be transferred over IP. By using IP networks, data transfer can take place over long distances and through standard network equipment such as routers and switches. These features make iSCSI well suited for building low-cost SANs. oVirt supports iSCSI, and iSCSI storage can be connected to oVirt data centers.

Now begin the process of connecting the storage to the data center. Click on Configure Storage to open the dialog box in which you specify the basic storage configuration; the following options are displayed:

  • Name and Data Center: It is used to specify the name of the storage and its target data center.
  • Domain Function/Storage Type: It is used to specify the domain function and storage type; in this case, the Data function and the iSCSI type.
  • Use Host: It is used to specify the host that will make the initial connection to the storage and act as the SPM.

The following options are available for searching for iSCSI targets:

  • Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target
  • User Authentication: Enable this checkbox if authentication is to be used on the iSCSI target
  • CHAP username and password: It is used to specify the username and password for authentication

Click on the Discover button; oVirt Engine connects to the specified server and searches for iSCSI targets. In the resulting list, select the desired target and click on the Login button to authenticate. Upon successful authentication, the target's LUNs are displayed; check the required LUN and click on OK to start connecting the storage to the data center. The new storage will attach to the data center automatically. If it does not, select the storage from the list and click on the Attach button in the details pane, where you choose the target data center.
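
The discovery, login, and domain creation steps can be scripted as well. The sketch below follows the same ovirtsdk4 conventions as the NFS example; the addresses, target name, and LUN ID are invented placeholders, and the iscsi_discover/iscsi_login host actions (and the exact shape of their return values) should be checked against your SDK version:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    # Find the host that will perform the discovery and login -- the
    # equivalent of the Discover and Login buttons in the dialog box.
    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=host1')[0]
    host_service = hosts_service.host_service(host.id)

    # Discover targets on the storage server, then log in to the first one.
    iscsi = types.IscsiDetails(address='192.168.1.10', port=3260)
    targets = host_service.iscsi_discover(iscsi=iscsi)  # list of target names
    host_service.iscsi_login(
        iscsi=types.IscsiDetails(
            address='192.168.1.10',
            port=3260,
            target=targets[0],
        ),
    )

    # Create the data domain on a LUN exposed by the target; the LUN ID is
    # a made-up example of what discovery would report.
    connection.system_service().storage_domains_service().add(
        types.StorageDomain(
            name='iscsi-data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.ISCSI,
                volume_group=types.VolumeGroup(
                    logical_units=[
                        types.LogicalUnit(
                            id='36001405abcdeffedcba000000000001',
                            address='192.168.1.10',
                            port=3260,
                            target='iqn.2014-07.org.example:storage',
                        ),
                    ],
                ),
            ),
        ),
    )
    connection.close()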

Configuring the Fibre Channel storage

If you have selected Fibre Channel when creating the data center, you should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUNs). Skip this section if you do not use Fibre Channel equipment.

Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on Configure Storage to open the dialog box where you specify the basic storage configuration:

  • Name and Data Center: It is used to specify the name and data center
  • Domain Function/Storage Type: It is used to specify the data function and the Fibre Channel type
  • Use Host: It is used to specify the virtualization host that will act as the SPM

In the area below, the list of LUNs is displayed; enable the Add LUN checkbox next to the selected LUN to use it as Fibre Channel data storage.

Click on the OK button; this will start the process of connecting the storage to the data center. In the Storage tab, in the list of storages, you can see the newly created Fibre Channel storage. During the connection process its status will change, and at the end the new storage will be activated and connected to the data center. The connection process can also be observed in the events pane.
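
For completeness, the same domain can be created programmatically. Here is a minimal sketch in the style of the earlier examples (ovirtsdk4; the LUN ID below is a placeholder for one the host already sees through its HBA):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    # A Fibre Channel domain is defined only by the LUNs it is built on;
    # no address or path is needed, unlike NFS or iSCSI.
    connection.system_service().storage_domains_service().add(
        types.StorageDomain(
            name='fc-data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.FCP,
                volume_group=types.VolumeGroup(
                    logical_units=[
                        types.LogicalUnit(id='3600a0b800029b2d80000cafe00000001'),
                    ],
                ),
            ),
        ),
    )
    connection.close()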

The following screenshot shows the New Storage dialog box with the Fibre Channel storage type:

Configuring the GlusterFS storage

GlusterFS is a distributed, parallel, and linearly scalable filesystem. It can combine storage located on different servers into a single parallel network filesystem. GlusterFS's potential is very large, so the developers directed their efforts towards implementing and supporting GlusterFS in oVirt (GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 provides full support for data centers with the GlusterFS storage type.

Configuring the GlusterFS volume

Before attempting to connect the GlusterFS storage to the data center, we need to create the volume. The procedure for creating a GlusterFS volume is the same in all versions.

  1. Select the Volumes tab in the resource pane and click on Create Volume.
  2. In the window that opens, fill in the volume settings:
    • Data Center: It is used to specify the data center to which the GlusterFS storage will be attached.
    • Volume Cluster: It is used to specify the cluster in which the volume will be created.
    • Name: It is used to specify a name for the new volume.
    • Type: It is used to specify the type of the GlusterFS volume. There are seven types of volume, implementing different strategies for placing data on the filesystem. The base types are Distribute, Replicate, and Stripe; the remaining types are combinations of these: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional information can be found at http://gluster.org/community/documentation/index.php/GlusterFS_Concepts).
    • Bricks: With this button, the list of bricks from which the volume will be built is assembled. A brick is the basic unit of storage in GlusterFS, and bricks are distributed across the hosts. Since a brick is a separate directory, it should be placed on a separate partition.
    • Access Protocols: It defines the protocols that can be used to access the volume:
      • Gluster: It is the native access protocol for GlusterFS volumes; it is enabled by default.
      • NFS: It is an access protocol based on NFS.
      • CIFS: It is an access protocol based on CIFS.
    • Allow Access From: It allows us to enter a comma-separated list of IP addresses or hostnames (or * for all hosts) that are allowed to access the GlusterFS volume.
    • Optimize for oVirt Store: Enabling this checkbox sets extended options on the created volume so that it can be used as oVirt storage.

    The following screenshot shows the Create Volume dialog box:

  3. Fill in the parameters, click on the Bricks button, and go to the new window to add new bricks with the following properties:
    • Volume Type: This is used to change the volume type selected earlier
    • Server: It is used to specify the server that will export the GlusterFS brick
    • Brick Directory: It is used to specify the directory to use
  4. Specify the server and directory and click on Add. Depending on the type of volume, you may need to specify multiple bricks. After completing the list of bricks, click on the OK button to add them and return to the Create Volume window.
  5. Click on the OK button to create the GlusterFS volume with the specified parameters.

    The following screenshot shows the Add Bricks dialog box:

Now that we have a GlusterFS volume, select it from the list and click on Start.
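
Behind the dialog box, oVirt ultimately drives the standard Gluster tooling, so an equivalent volume can be created directly with the gluster command-line client on one of the storage nodes. The following is a sketch, again driven from Python; the host names, brick paths, and replica count are examples, and the storage.owner-* settings mirror what the Optimize for oVirt Store checkbox is generally expected to apply (oVirt accesses the volume as UID/GID 36), so verify them for your version:

    # Create and start a replicated GlusterFS volume; run as root on a node.
    import subprocess

    def gluster(*args):
        subprocess.run(['gluster', 'volume'] + list(args), check=True)

    # Two-way replica across two hosts; each brick is a dedicated directory,
    # ideally on its own partition.
    gluster('create', 'data', 'replica', '2',
            'gluster1.example.com:/bricks/data',
            'gluster2.example.com:/bricks/data')

    # Let vdsm:kvm (UID/GID 36) own files on the volume -- one of the
    # settings the Optimize for oVirt Store checkbox applies for you.
    gluster('set', 'data', 'storage.owner-uid', '36')
    gluster('set', 'data', 'storage.owner-gid', '36')

    # The equivalent of selecting the volume and clicking on Start.
    gluster('start', 'data')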

Configuring the GlusterFS storage domain

oVirt 3.3 has support for creating data centers with the GlusterFS storage type:

  1. The GlusterFS storage type requires a preconfigured data center.
  2. A pre-created cluster with the Gluster service enabled should be present inside the data center.
  3. Go to the Storage section in the resource pane and click on New Domain.
  4. In the dialog box that opens, fill in the details of the storage as follows:
    • Name and Data Center: It is used to specify the name and data center
    • Domain Function/Storage Type: It is used to specify the data function and GlusterFS type
    • Use Host: It is used to specify the host that will make the initial connection and act as the SPM
    • Path: It is used to specify the path to the location in the format hostname:volume_name
    • VFS Type: Leave it as glusterfs and leave Mount Option blank
  5. Click on the OK button; this will start the process of creating the storage domain.

The created storage automatically connects to the specified data center. If it does not, select the created storage from the list, and in the Data Center subtab of the details pane, click on the Attach button and choose your data center. After you click on OK, the process of connecting the storage to the data center starts.
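
As with the other storage types, this step can be scripted too. Here is a minimal ovirtsdk4 sketch with placeholder names; note how the hostname:volume_name path from the dialog box splits into the address and path fields:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    # A GlusterFS domain looks like an NFS domain with a different type and
    # an explicit VFS type, matching the fields in the dialog box.
    connection.system_service().storage_domains_service().add(
        types.StorageDomain(
            name='gluster-data',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='host1'),
            storage=types.HostStorage(
                type=types.StorageType.GLUSTERFS,
                address='gluster1.example.com',
                path='data',  # the GlusterFS volume name
                vfs_type='glusterfs',
            ),
        ),
    )
    connection.close()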

The following screenshot shows the New Storage dialog box with the GlusterFS storage type.

Summary

In this article, we learned how to configure NFS, iSCSI, Fibre Channel, and GlusterFS storage.
