How to Set Up CoreOS Environment


In this article, Kingston Smiler and Shantanu Agrawal, the authors of the book Learning CoreOS, explain how CoreOS can be installed on a variety of platforms such as bare-metal servers, cloud-provider virtual machines, and so on. This article describes in detail how to bring up your first CoreOS environment, focusing on deploying CoreOS on a virtual machine. When deploying in a virtualization environment, tools such as Vagrant come in very handy for managing CoreOS virtual machines. Vagrant makes it easy to set up CoreOS with multiple nodes, even on a single laptop or workstation, with minimal configuration. Vagrant supports VirtualBox, a commonly used virtualization application. Both Vagrant and VirtualBox are available for multiple architectures, such as Intel and AMD, and operating systems such as Windows, Linux, Solaris, and Mac.

This article covers setting up CoreOS on VirtualBox and VMware vSphere, and includes the following topics:

  • VirtualBox installation
  • Introduction to Vagrant
  • CoreOS on VMware vSphere setup

Git is used for downloading all the required software mentioned in this article.


Installing Git

Download the latest version of the Git installer as per the host operating system from the official Git website. After the download is complete, start the installation. Installing Git using this procedure applies to Mac and Windows. For Linux distributions, the Git client is available through the distribution's package manager. For example, if the operating system is CentOS, the package manager yum can be used to install Git.
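On CentOS, for instance, the client can be installed with yum and then verified from the terminal (the yum line is shown as a comment because it requires root privileges and applies only to RPM-based distributions):

```shell
# Install the Git client on an RPM-based distribution, then confirm
# that it is on the PATH:
#   sudo yum install -y git
git --version
```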

Installing VirtualBox

Download the latest version of VirtualBox as per the host operating system and architecture from the official VirtualBox website. After the download is complete, start the installation.

During installation, continue with the default options. VirtualBox installation resets the host machine’s network adapters, which causes the network connection to drop and come back briefly. After the installation succeeds, the installer reports the status of the operation.

Introduction to Vagrant

Vagrant provides a mechanism to install and configure a development, test, or production environment. Vagrant works along with various virtualization applications such as VirtualBox, VMware, AWS, and so on. All installation, setup information, configuration, and dependencies are maintained in a file and virtual machine can be configured and brought up using a simple Vagrant command. This also helps to automate the process of installation and configuration of machines using commonly available scripting languages. Vagrant helps in creating an environment that is exactly the same across users and deployments. Vagrant also provides simple commands to manage the virtual machines. In the context of CoreOS, Vagrant will help to create multiple machines of the CoreOS cluster with ease and with the same environment.

Installing Vagrant

Download and install the latest version of Vagrant from the official Vagrant website. Choose the default settings during installation.

Vagrant configuration files

The Vagrant configuration file contains the configuration and provisioning information for the virtual machines. The configuration file is named Vagrantfile and its syntax is Ruby. A configuration file can be present at any directory level starting from the current working directory: the file in the current working directory is read first, then the file (if present) one directory level up, and so on until /. Files are merged as they are read. For most configuration parameters, newer settings overwrite older ones, except for a few parameters where the values are appended.
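This lookup order can be demonstrated with a small shell sketch that builds a throwaway directory tree containing two Vagrantfiles (the paths are purely illustrative) and walks from the innermost directory up to /, printing each Vagrantfile that would be read, nearest first:

```shell
# Create a throwaway tree with two Vagrantfiles (illustrative paths)
mkdir -p /tmp/vdemo/project/sub
touch /tmp/vdemo/project/Vagrantfile /tmp/vdemo/project/sub/Vagrantfile
cd /tmp/vdemo/project/sub

# Walk from the current directory up to /, printing each Vagrantfile
# that Vagrant would read and merge
dir=$(pwd)
while :; do
  [ -f "$dir/Vagrantfile" ] && echo "$dir/Vagrantfile"
  [ "$dir" = "/" ] && break
  dir=$(dirname "$dir")
done
```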

A Vagrantfile template and other associated files can be cloned from the coreos-vagrant Git repository on GitHub. Run the following command from the terminal to clone the repository. Note that the procedure to start a terminal varies from OS to OS; in Windows, for example, Git commands are run from Git Bash.

$ git clone https://github.com/coreos/coreos-vagrant

A directory, coreos-vagrant, is created by git clone. Along with the other files associated with the Git repository, the directory contains Vagrantfile, user-data.sample, and config.rb.sample. Rename user-data.sample to user-data and config.rb.sample to config.rb.

git clone https://github.com/coreos/coreos-vagrant
Cloning into 'coreos-vagrant'...
remote: Counting objects: 402, done.
remote: Total 402 (delta 0), reused 0 (delta 0), pack-reused 402
Receiving objects: 100% (402/402), 96.63 KiB | 31.00 KiB/s, done.
Resolving deltas: 100% (175/175), done.

cd coreos-vagrant/
ls
config.rb.sample  DCO  LICENSE  MAINTAINERS  NOTICE  user-data.sample  Vagrantfile

Vagrantfile contains the template configuration to create and configure the CoreOS virtual machine using VirtualBox. Vagrantfile includes the config.rb file using the require directive:


CONFIG = File.join(File.dirname(__FILE__), "config.rb")

if File.exist?(CONFIG)
  require CONFIG
end

CLOUD_CONFIG_PATH = File.join(File.dirname(__FILE__), "user-data")

if File.exist?(CLOUD_CONFIG_PATH)
  config.vm.provision :file, :source => "#{CLOUD_CONFIG_PATH}",
    :destination => "/tmp/vagrantfile-user-data"
  config.vm.provision :shell, :inline => "mv /tmp/vagrantfile-user-data /var/lib/coreos-vagrant/",
    :privileged => true
end

Cloud-config files are special files that are executed by the cloud-init process when the CoreOS system starts or when the configuration is dynamically updated. Typically, the cloud-config file contains various OS-level configurations of the CoreOS instance, such as networking, user administration, systemd units, and so on. For CoreOS, user-data is the name of the cloud-config file, and it is present inside the base directory of the Vagrant folder. systemd unit files are configuration files containing information about a process.

The cloud-config file uses the YAML file format. A cloud-config file must contain #cloud-config as the first line, followed by an associative array that has zero or more of the following keys:

  • coreos: This key provides configuration of the services provided by CoreOS. Configurations for some of the important services are described next:
    • etcd2: This key replaces the previously used etcd service. The parameters for etcd2 are used to generate the systemd unit drop-in file for the etcd2 service. Some of the important parameters of the etcd2 configuration are:

      discovery: This specifies the unique token used to identify all the etcd members forming a cluster. The unique token can be generated by accessing the free discovery service (https://discovery.etcd.io/new?size=<clustersize>). This is used when the discovery mechanism is employed to identify the cluster's etcd members in cases where the IP addresses of all the nodes are not known beforehand. The generated token is also called the discovery URL. The discovery service helps cluster members connect to each other by storing the connected etcd members, the size of the cluster, and other metadata against the discovery URL, using the initial-advertise-peer-urls provided by each member.

      initial-advertise-peer-urls: This specifies the member’s own peer URLs that are advertised to the cluster. The IP should be accessible to all etcd members. Depending on accessibility, a public and/or private IP can be used.

      advertise-client-urls: This specifies the member’s own client URLs that are advertised to the cluster. The IP should be accessible to all etcd members. Depending on accessibility, a public and/or private IP can be used.

      listen-client-urls: This specifies the list of self URLs on which the member is listening for client traffic. All advertised client URLs should be part of this configuration.

      listen-peer-urls: This specifies the list of self URLs on which the member is listening for peer traffic. All advertised peer URLs should be part of this configuration.

On some platforms, providing the IP can be automated using the templating feature. Instead of actual IP addresses, the fields $public_ipv4 or $private_ipv4 can be provided.

$public_ipv4 is a substitution variable for the public IPv4 address of the machine.

$private_ipv4 is a substitution variable for the private IPv4 address of the machine.
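A fresh discovery token for a cluster can be requested from the discovery service with a plain HTTP GET; the size value below is an example for a three-member cluster, and because the request needs Internet access, the command falls back to a message when the service is unreachable:

```shell
# Request a new discovery URL; the response body itself is the token URL
curl -fsS "https://discovery.etcd.io/new?size=3" || echo "discovery service unreachable"
```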

The following is a sample coreos configuration in the cloud-config file:

    #cloud-config

    coreos:
      etcd2:
        discovery: https://discovery.etcd.io/new?size=3
        # multi-region and multi-cloud deployments need to use $public_ipv4
        advertise-client-urls: http://$public_ipv4:2379
        initial-advertise-peer-urls: http://$private_ipv4:2380
        # listen on both the official ports and the legacy ports
        # legacy ports can be omitted if your application doesn't depend on them
        listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
        listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001

  • fleet: The parameters for fleet are used to generate environment variables for the fleet service. The fleet service manages the running of containers on the cluster. Some of the important parameters of the fleet configuration are:

    etcd_servers: This provides the list of URLs through which the etcd service can be reached. The URLs configured should be one of the listen-client-urls of the etcd service.

    public-ip: The IP address that should be published with the local machine's state.

The following is a sample fleet configuration in the cloud-config file:



    fleet:
      etcd_servers: http://$public_ipv4:2379,http://$public_ipv4:4001
      public-ip: $public_ipv4
  • flannel: The parameters for flannel are used to generate environment variables for the flannel service. The flannel service provides communication between containers.
  • locksmith: The parameters for locksmith are used to generate environment variables for the locksmith service. The locksmith service provides reboot management of the cluster.
  • update: These parameters manipulate settings related to how CoreOS instances are updated.
  • units: These parameters specify the set of systemd units that need to be started after boot-up. Some of the important parameters of the unit configuration are:

    name: This specifies the name of the service.

    command: This parameter specifies the command to execute on the unit: start, stop, reload, restart, try-restart, reload-or-restart, reload-or-try-restart.

    enable: This flag (true/false) specifies whether the Install section of the unit file is to be processed, the equivalent of systemctl enable.

    drop-ins: This contains a list of the unit’s drop-in files. Each unit information set contains name, which specifies the unit’s drop-in files, and content, which is plain text representing the unit’s drop-in file.

The following is a sample unit configuration in the cloud-config file:

    units:
      - name: etcd2.service
        command: start
      - name: fleet.service
        command: start
      - name: docker-tcp.socket
        command: start
        enable: true
        content: |
          [Unit]
          Description=Docker Socket for the API

          [Socket]
          ListenStream=2375
          Service=docker.service
          BindIPv6Only=both

          [Install]
          WantedBy=sockets.target

  • ssh_authorized_keys: This parameter specifies the public SSH keys that will be authorized for the core user.
  • hostname: This specifies the hostname of the member.
  • users: This specifies the list of users to be created or updated on the member. Each user information contains name, password, homedir, shell, and so on.
  • write_files: This specifies the list of files that are to be created on the member. Each file information contains path, permission, owner, content, and so on.
  • manage_etc_hosts: This specifies the content of the /etc/hosts file for local name resolution. Currently, only localhost is supported.
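Since the #cloud-config header line is mandatory, a quick shell check can catch a malformed file before it ever reaches cloud-init. This sketch uses a throwaway file with placeholder content:

```shell
# Write a minimal demo cloud-config (placeholder content) and verify
# that its first line is exactly "#cloud-config"
printf '#cloud-config\n\nhostname: coreos-demo\n' > /tmp/user-data-demo
head -n 1 /tmp/user-data-demo
# prints "#cloud-config"
```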

The config.rb configuration file

This file contains information to configure the CoreOS cluster. This file provides the configuration value for the parameters used by Vagrantfile. Vagrantfile accesses the configuration by including the config.rb file. The following are the parameters:

  • $num_instances: This parameter specifies the number of nodes in the cluster
  • $shared_folders: This parameter specifies the list of shared folder paths on the host machine along with the respective path on the member
  • $forwarded_ports: This specifies the port forwarding from the member to the host machine
  • $vm_gui: This flag specifies if GUI is to be set up for the member
  • $vm_memory: This parameter specifies the memory for the member in MBs
  • $vm_cpus: This specifies the number of CPUs to be allocated for the member
  • $instance_name_prefix: This parameter specifies the prefix to be used for the member name
  • $update_channel: This parameter specifies the update channel (alpha, beta, and so on) for CoreOS

The following is a sample config.rb file:

# To automatically replace the discovery token on 'vagrant up', uncomment
# the lines below:
#
#if File.exists?('user-data') && ARGV[0].eql?('up')
#  require 'open-uri'
#  require 'yaml'
#
#  token = open($new_discovery_url).read
#
#  data = YAML.load(IO.readlines('user-data')[1..-1].join)
#
#  if data['coreos'].key? 'etcd'
#    data['coreos']['etcd']['discovery'] = token
#  end
#  if data['coreos'].key? 'etcd2'
#    data['coreos']['etcd2']['discovery'] = token
#  end
#
#  # Fix for YAML.load() converting reboot-strategy from 'off' to false
#  if data['coreos']['update'].key? 'reboot-strategy'
#    if data['coreos']['update']['reboot-strategy'] == false
#      data['coreos']['update']['reboot-strategy'] = 'off'
#    end
#  end
#
#  yaml = YAML.dump(data)
#  File.open('user-data', 'w') { |file| file.write("#cloud-config\n\n#{yaml}") }
#end

$instance_name_prefix = "coreOS-learn"
$image_version = "current"
$update_channel = 'alpha'
$vm_gui = false
$vm_memory = 1024
$vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}

Starting a CoreOS VM using Vagrant

Once the config.rb and user-data files are updated with the actual configuration parameters, execute the command vagrant up in the directory where the configuration files are present to start the CoreOS VM. Once the vagrant up command executes successfully, CoreOS in the VM environment is ready:

vagrant up

Bringing machine 'core-01' up with 'virtualbox' provider...
==> core-01: Checking if box 'coreos-alpha' is up to date...
==> core-01: Clearing any previously set forwarded ports...
==> core-01: Clearing any previously set network interfaces...
==> core-01: Preparing network interfaces based on configuration...
    core-01: Adapter 1: nat
    core-01: Adapter 2: hostonly
==> core-01: Forwarding ports...
    core-01: 22 => 2222 (adapter 1)
==> core-01: Running 'pre-boot' VM customizations...
==> core-01: Booting VM...
==> core-01: Waiting for machine to boot. This may take a few minutes...

    core-01: SSH address:
    core-01: SSH username: core
    core-01: SSH auth method: private key   
    core-01: Warning: Connection timeout. Retrying...
==> core-01: Machine booted and ready!
==> core-01: Setting hostname...
==> core-01: Configuring and enabling network interfaces...
==> core-01: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> core-01: flag to force provisioning. Provisioners marked to run always will still run.
vagrant status

Current machine states:

core-01                   running (virtualbox)

The VM is now running. To stop the VM, you can run vagrant halt to shut it down, or you can run vagrant suspend to simply suspend the virtual machine. In either case, to start it again, simply run vagrant up.

Setting up CoreOS on VMware vSphere

VMware vSphere is a server virtualization platform that uses VMware’s ESX/ESXi hypervisor. VMware vSphere provides the complete platform, toolsets, and virtualization infrastructure to provision and manage virtual machines on bare metal. VMware vSphere consists of VMware vCenter Server and VMware vSphere Client. VMware vCenter Server manages the virtual as well as the physical resources. VMware vSphere Client provides a GUI to install and manage virtual machines on bare metal.

Installing VMware vSphere Client

Download the latest version of the VMware vSphere Client installer as per the host operating system and architecture from the VMware website. After the download is complete, start the installation. During installation, continue with the default options.

Once the installation is complete, open the VMware vSphere Client application. This opens a new GUI. In the IP address / Name field, enter the IP address/hostname to directly manage a single host. Enter the IP address/hostname of vCenter Server to manage multiple hosts. In the User name and Password field, enter the username and password.

Download the latest version of the CoreOS image (an OVA file) from the CoreOS website. Once the download is complete, the next step is to create the VM using the downloaded OVA file. The steps to create the VM are as follows:

  1. Open the VMware vSphere Client application.
  2. Enter IP Address, username, and password as mentioned earlier.
  3. Click on the File menu.
  4. Click on Deploy OVF Template.
  5. This opens a new wizard. Specify the location of the ova file that was downloaded earlier. Click on Next.
  6. Specify the name of the VM and Inventory location in the Name and Location tab.
  7. Specify the host/server where this VM is to be deployed in the Host/Cluster tab.
  8. Specify the location where the VM image should be stored in the Storage tab.
  9. Specify the disk format in the Disk Format tab.
  10. Click on Next. It takes a while to deploy the VM image.

Once the VM image is deployed on the VMware server, we need to start the CoreOS VM with the appropriate cloud-config file containing the required configuration properties. In VMware vSphere, the cloud-config file is supplied by attaching a config-drive, an ISO filesystem with the volume label config-2, as a CD-ROM or a new drive. The following are the commands to create the config-drive ISO file on a Linux-based operating system:

  • Create a folder, say /tmp/new-drive/openstack/latest, as follows:
    mkdir -p /tmp/new-drive/openstack/latest
  • Copy the user_data file, which is the cloud-config file, into the folder:
    cp user_data /tmp/new-drive/openstack/latest/user_data
  • Create the iso file using the command mkisofs as follows:
    mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
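The three steps above can be consolidated into a small script. This is a sketch: the user_data written here is a placeholder (use your real cloud-config file instead), and mkisofs, which ships with the cdrtools/genisoimage packages, is only invoked if it is installed on the machine:

```shell
# Placeholder cloud-config; replace with your real user_data file
printf '#cloud-config\nhostname: coreos-demo\n' > user_data

# Build the directory layout that cloud-init expects on the config-drive
mkdir -p /tmp/new-drive/openstack/latest
cp user_data /tmp/new-drive/openstack/latest/user_data

# -R enables Rock Ridge extensions; -V sets the volume label, which must
# be exactly config-2 for cloud-init to find the drive. Skip the ISO
# creation when mkisofs is not installed on this machine.
if command -v mkisofs >/dev/null 2>&1; then
  mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive
fi
```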

Once the config-drive file is created, perform the following steps to attach the config-drive to the VM:

  1. Transfer the iso image to the machine wherein the VMware vSphere Client program is running.
  2. Open VMware vSphere Client.
  3. Click on the CoreOS VM and go to the Summary tab of the VM.

  4. Right-click over the DataStore section and click on Browse Datastore. This will open a new window called Datastore Browser.
  5. Select the folder named iso.
  6. Click on the Upload file to Datastore icon.
  7. Select the iso file in the local machine and upload the iso file to the data store.

The next step is to attach the iso file as a cloud-config file for the VM. Perform the following steps:

  1. Go to CoreOS VM and right-click.
  2. Click on Properties.
  3. Select CD/DVD drive 1.
  4. In the right-hand side, select Device Status as Connected as well as Connect at power on.
  5. Click on Datastore ISO File and select the uploaded iso file from the data store.

Once the iso file is uploaded and attached to the VM, start the VM. The CoreOS VM in the VMware environment is ready.


In this article, we were able to set up and run CoreOS on a single machine using Vagrant and VirtualBox, and saw how to deploy CoreOS on VMware vSphere.
