In this article by Joseph Hall, author of the book Mastering SaltStack, we will look at how Salt works under the hood. We will:

  • Discover how Salt manages configuration files
  • Look at how the Renderer system works
  • Discuss how the Loader system handles modules
  • Explore the State compiler, which drives so much of Salt

With a more comprehensive understanding of the internals of Salt, you will be able to craft configurations and States that take advantage of the architectural decisions that inspired the design of Salt.

Understanding the Salt configuration

One of the basic ideas around the Salt configuration is that a configuration management system should require as little configuration as possible. A concerted effort has been made by the developers to assign defaults that will apply to as many deployments as possible, while still allowing users to fine-tune the settings to their own needs.

If you are just starting with Salt, you may not need to change anything. In fact, most of the time the Master configuration will be exactly what is needed for a small installation, while Minions will require almost no changes, if any.

Following the configuration tree

By default, most operating systems (primarily Linux-based) will store the Salt configuration in the /etc/salt/ directory. Unix distributions will often use the /usr/local/etc/salt/ directory instead, while Windows uses the C:\salt directory. These locations were chosen in order to follow the design most commonly used by the operating system in question, while still using a location that was easy to make use of. We will refer to the /etc/salt/ directory, but you can go ahead and replace it with the correct directory for your operating system.

There are other paths that Salt makes use of as well. Various caches are typically stored in /var/cache/salt/, sockets are stored in /var/run/salt/, and State trees, Pillar trees, and Reactor files are stored in /srv/salt/, /srv/pillar/, and /srv/reactor/, respectively.

Looking inside /etc/salt/

Inside the /etc/salt/ directory, there will generally be one of two files: master and minion (both will appear if you treat your Master as a Minion). When the documentation refers to Master configuration, it generally means the /etc/salt/master file, and of course Minion configuration refers to the /etc/salt/minion file. All configuration for these two daemons can technically go into their respective files.

However, many users find reasons to break out their configuration into smaller files. This is often for organizational reasons, but there is a practical reason too: because Salt can manage itself, it is often easier to have it manage smaller, templated files, rather than one large, monolithic file.

Because of this, the Master can also include any file with a .conf extension, found in the /etc/salt/master.d/ directory (and the Minion likewise in the minion.d/ directory). This is in keeping with the numerous other services that also maintain similar directory structures.

Other subsystems inside Salt also make use of the .d/ directory structure. Notably, Salt Cloud makes use of a number of these directories. The /etc/salt/cloud, /etc/salt/cloud.providers, and /etc/salt/cloud.profiles files can also be broken out into the /etc/salt/cloud.d/, /etc/salt/cloud.providers.d/, and /etc/salt/cloud.profiles.d/ directories, respectively. Additionally, it is recommended to store cloud maps in the /etc/salt/cloud.maps.d/ directory.

While other configuration formats are available elsewhere in Salt, the format of all of these core configuration files is YAML. This is by necessity; Salt needs a stable starting point from which to configure everything else. Likewise, the /etc/salt/ directory is hard-coded as the default starting point to find these files, though that may be overridden using the --config-dir (or -C) option:

# salt-master --config-dir=/other/path/to/salt/

Managing Salt keys

Inside the /etc/salt/ directory, there is also a pki/ directory, inside which is a master/ or minion/ directory (or both). This is where the public and private keys are stored.

The Minion will only have three files inside the /etc/salt/pki/minion directory: minion.pem (the Minion’s private RSA key), minion.pub (the Minion’s public RSA key), and minion_master.pub (the Master’s public RSA key).

The Master will also keep its RSA keys in the /etc/salt/pki/master/ directory: master.pem and master.pub. However, at least three more directories will also appear in here. The minions_pre/ directory contains the public RSA keys for Minions that have contacted the Master but have not yet been accepted. The minions/ directory contains the public RSA keys for Minions that have been accepted on the Master. Finally, the minions_rejected/ directory will contain keys for any Minion that has contacted the Master, but been explicitly rejected.

There is nothing particularly special about these directories. The salt-key command on the Master is essentially a convenience tool for the user that moves public key files between directories, as requested. If needed, users can set up their own tools to manage the keys on their own, just by moving files around.
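
For example, assuming a pending Minion called myminion, the following commands (run on the Master) list all keys and then accept the pending one; under the hood, accepting simply moves the key file from the pending directory into minions/:

# salt-key -L
# salt-key -a myminion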

Exploring the SLS directories

As mentioned, Salt also makes use of other directory trees on the system. The most important of these are the directories that store SLS files, which are, by default, located in /srv/.

Of the SLS directories, /srv/salt/ is probably the most important. This directory stores the State SLS files, and their corresponding top files. It also serves as the default root directory for Salt’s built-in file server. There will typically be a top.sls file, and several accompanying .sls files and/or directories.

A close second is the /srv/pillar/ directory. This directory maintains a copy of the static pillar definitions, if used. Like the /srv/salt/ directory, there will typically be a top.sls file and several accompanying .sls files and directories. But while the top.sls file matches the format used in /srv/salt/, the accompanying .sls files are merely collections of key/value pairs. While they can use Salt’s Renderer, the resulting data does not need to conform to Salt’s State compiler.

Another directory which will hopefully find its way into your arsenal is the /srv/reactor/ directory. Unlike the others, there is no top.sls file in here. That is because the mapping is performed inside the Master configuration instead of the top system. However, the files in this directory do have a specific format.
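
As a quick illustration of that format (the event tag and file name here are placeholders), the Master configuration maps event tags to Reactor SLS files, and each file describes what to run when a matching event arrives:

reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/start.sls

A /srv/reactor/start.sls along these lines would run a highstate on any Minion that has just started:

highstate_run:
  local.state.highstate:
    - tgt: {{ data['id'] }}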

Examining the Salt cache

Salt also maintains a cache directory, usually at /var/cache/salt/ (again, this may differ on your operating system). As before, both the Master and the Minion have their own directory for cache data. The Master cache directory contains more entries than the Minion cache, so we’ll jump into that first.

The Master job cache

Probably the first cache directory that you’ll run across in everyday use is the jobs/ directory. In a default configuration, this contains all the data that the Master stores about the jobs that it executes.

This directory uses hashmap-style storage. That means that a piece of identifying information (in this case, a job ID, or JID) has been processed with a hash algorithm, and a directory or directory structure has been created using part or all of the hash. In this case, a split hash model has been used, where a directory has been created using the first two characters of the hash, and another directory under it has been created with the rest of the hash.

The default hash type for Salt is MD5. This can be modified by changing the hash_type value in the Master configuration:

hash_type: md5

Keep in mind that the hash_type is an important value that should be decided upon when first setting up a new Salt infrastructure, if MD5 is not the desired value. If it is changed (say, to SHA1) after an infrastructure has been using another value for a while, then any part of Salt that has been making use of it must be cleaned up manually.

The JID is easy to interpret: it is a date and time stamp. For instance, a job ID of 20141203081456191706 refers to a job that was started on December 3, 2014, at 56 seconds and 191706 microseconds past 8:14 AM. The MD5 of that JID would be f716a0e8131ddd6df3ba583fed2c88b7. Therefore, the data that describes that job would be located at the following path:

/var/cache/salt/master/jobs/f7/16a0e8131ddd6df3ba583fed2c88b7
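
A minimal sketch of how that path is derived (assuming the JID string is hashed directly with the configured hash type):

import hashlib

jid = '20141203081456191706'
jhash = hashlib.md5(jid.encode()).hexdigest()
# Split hash model: the first two characters form one directory, the rest form the next
path = '/var/cache/salt/master/jobs/{0}/{1}'.format(jhash[:2], jhash[2:])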

In that directory, there will be a file called jid. This will of course contain the job ID. There will also be a series of files with a .p extension. These files are all serialized by msgpack.

Looking inside msgpack files

If you have checked out a copy of Salt from Git, this data is easy to view. Inside the tests/ directory in Salt’s Git tree, there is a file called packdump.py. This can be used to dump the contents of the msgpack files to the console.
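
If you would rather not keep a Git checkout around, the same data can be read with a couple of lines of Python, since these are ordinary msgpack payloads (the path here is the example job from earlier):

import msgpack

with open('/var/cache/salt/master/jobs/f7/16a0e8131ddd6df3ba583fed2c88b7/.load.p', 'rb') as fh:
    print(msgpack.unpackb(fh.read()))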

First, there is a file called .minions.p (notice the leading dot), which contains a list of Minions that were targeted by this job. It will look something like this:

[
  "minion1",
  "minion2",
  "minion3"
]

The job itself will be described by a file called .load.p:

{
  "arg": [
    ""
  ],
  "fun": "test.ping",
  "jid": "20141203081456191706",
  "tgt": "*",
  "tgt_type": "glob",
  "user": "root"
}

There will also be one directory for each Minion that was targeted by the job, containing the return information for that Minion. Inside that directory will be a file called return.p that contains the return data, serialized by msgpack. Assuming that the job in question did a simple test.ping, the return will look like the following:

{
  "fun": "test.ping",
  "fun_args": [],
  "id": "minion1",
  "jid": "20141203081456191706",
  "retcode": 0,
  "return": true,
  "success": true
}

The Master-side Minion cache

Once Salt has started issuing jobs, another cache directory will show up, called minions/. This directory will contain one entry per Minion, with cached data about that Minion. Inside each Minion’s entry are two files: data.p and mine.p.

The data.p file contains a copy of the Grains and Pillar data for that Minion. A (shortened) data.p file may look like the following:

{
  "grains": {
    "biosreleasedate": "01/09/2013",
    "biosversion": "G1ET91WW (2.51 )",
    "cpu_model": "Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz",
    "cpuarch": "x86_64",
    "os": "Ubuntu",
    "os_family": "Debian",
  },
  "pillar": {
    "role": "web"
  }
}

The mine.p file contains mine data. A Minion can be configured to cache the return data from specific commands, in the cache directory on the Master, so that other Minions can look it up. For instance, if the output from test.ping and network.ip_addrs has been configured, the contents of the mine.p file will look as follows:

{
  "network.ip_addrs": [
    "192.168.2.101"
  ],
  "test.ping": true
}
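
For reference, mine data like this is produced because the Minion configuration (or its Pillar) carries a mine_functions block along these lines:

mine_functions:
  test.ping: []
  network.ip_addrs: []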

The external file server cache

In a default installation, Salt will keep its files in the /srv/salt/ directory. However, an external file server, by definition, maintains an external file store. For instance, the gitfs external file server keeps its files on a Git server, such as GitHub. However, it would be incredibly inefficient to ask the Salt Master to always serve files directly from Git. So, in order to improve efficiency, a copy of the Git tree is stored on the Master.
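
For context, a Master using gitfs will have something like the following in its configuration (the repository URL is, of course, a placeholder):

fileserver_backend:
  - git
gitfs_remotes:
  - https://github.com/example/salt-states.git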

The contents and layout of this tree will vary among the external file server modules. For instance, the gitfs module doesn’t store a full directory tree as one might see in a normal Git checkout; it only maintains the information used to create that tree, using whatever branches are available. Other external file servers, however, may contain a full copy of the external source, which is updated periodically. The full path to this cache may look like this:

/var/cache/salt/master/gitfs/

where gitfs is the name of the file server module.

In order to keep track of the file changes, a directory called hash/ will also exist inside the external file server’s cache. Inside hash/, there will be one directory per environment (that is, base, dev, prod, and so on). Each of those will contain what looks like a mirror image of the file tree. However, each actual file name will be appended with .hash.md5 (or the appropriate hash name, if different), and the contents will be the value of the checksum for that file.
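
For example, the checksum for a file such as vim/init.sls in the base environment would be stored at a path like this:

/var/cache/salt/master/gitfs/hash/base/vim/init.sls.hash.md5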

In addition to the file server cache, there will be another directory called file_lists/ that contains one directory per enabled file server. Inside that directory will be one file per environment, with a .p extension (such as base.p for the base environment). This file will contain a list of files and directories belonging to that environment’s directory tree. A shortened version might look like this:

{
  "dirs": [
    ".",
    "vim",
    "httpd",
  ],
  "empty_dirs": [
  ],
  "files": [
    "top.sls",
    "vim/init.sls",
    "httpd/httpd.conf",
    "httpd/init.sls",
  ],
  "links": []
}

This file helps Salt with a quick lookup of the directory structure, without having to constantly descend into a directory tree.

The Minion-side proc/ directory

The Minion doesn’t maintain nearly as many cache directories as the Master, but it does have a couple. The first of these is the proc/ directory, which maintains the data for active jobs on the Minion. It is easy to see this in action. From the Master, issue a sleep command to a Minion:

salt myminion test.sleep 300 --async

This will kick off a process on the Minion which will wait for 300 seconds (5 minutes) before returning True to the Master. Because the command includes the --async flag, Salt will immediately return a JID to the user.

While this process is running, log into the Minion and take a look at the /var/cache/salt/minion/proc/ directory. There should be a file bearing the name of the JID. The unpacked contents of this file will look like the following:

{'arg': [300],
 'fun': 'test.sleep',
 'jid': '20150323233901672076',
 'pid': 4741,
 'ret': '',
 'tgt': 'myminion',
 'tgt_type': 'glob',
 'user': 'root'}

This file will exist until the job is completed on the Minion. If you’d like, you can see the corresponding file on the Master. Use the hashutil.md5_digest function to find the MD5 value of the JID:

# salt myminion hashutil.md5_digest 20150323233901672076
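
As an aside, once the job has completed, its cached return can also be pulled from the Master’s job cache with the jobs runner:

# salt-run jobs.lookup_jid 20150323233901672076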

External modules

The other directory that you are likely to see on the Minion is the extmods/ directory. If custom modules have been synced to the Minion from the Master (using the _modules, _states, etc. directories on the Master), they will appear here.

This is also easy to see in action. On the Master, create a _modules/ directory inside /srv/salt/. Inside this directory, create a file called mytest.py, with the following contents:

def ping():
    # Return True, to show that this custom module was synced and loaded
    return True

Then, from the Master, use the saltutil module to sync your new module to a Minion:

salt myminion saltutil.sync_modules

After a moment, Salt will report that it has finished:

myminion:
    - modules.mytest

Log into the Minion and look inside /var/cache/salt/minion/extmods/modules/. There will be two files: mytest.py and mytest.pyc. If you look at the contents of mytest.py, you will see the custom module that you created on the Master. You will also be able to execute the mytest.ping function from the Master:

# salt myminion mytest.ping
myminion:
    True

The Renderer

While the main Master and Minion configuration files must necessarily be stored in YAML, other files in Salt can take advantage of the wealth of file formats that the modern world of technology has to offer. This is because of the rendering system built into Salt, which can take files of arbitrary formats and render them into a structure that is usable by Salt.

Rendering SLS files

By default, all SLS files in Salt are rendered twice: first through the Jinja templating engine, and then through the PyYAML library. This provides some significant advantages:

  • Jinja provides a fast, powerful, and easy-to-understand templating system that follows a Pythonic mindset comfortable to many administrators. It is particularly well suited to managing YAML files, as the short example after this list illustrates.
  • YAML has a very shallow learning curve, making it easy to learn and understand. While it does support more complex syntax, such as parentheses, brackets, and braces (JSON is technically syntactically correct YAML), none of that is required.
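
To make that concrete, here is a small, hypothetical SLS file that uses a Jinja loop to generate a repetitive block of YAML:

{% for pkg in ['vim', 'git', 'httpd'] %}
{{ pkg }}:
  pkg:
    - installed
{% endfor %}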

However, it was immediately apparent, even before any Renderers were written, that there would be some dissent among users as to which formats were best suited to their own environments.

  • A popular alternative to YAML, already in common use in other software, is JSON. This format is stricter, making it somewhat harder to read, and even more difficult to write correctly. However, because JSON is stricter about how data is declared, a properly formatted JSON file is less ambiguous than YAML, and easier to parse safely.
  • Mako was also an early addition to the Salt toolkit. While Jinja adds just enough functionality to create a dynamic toolkit, Mako is designed to bring the full power of Python to templates. This is especially popular with a number of users in the DevOps community, who are known to mix code with content in a number of innovative ways.

A primary design goal of Salt has always been to provide flexibility, and so the Renderer system was designed to be pluggable in the same way as the other components of Salt. While Jinja and YAML have been made the default, either or both can be replaced and, if necessary, even more Renderers can be brought into the mix.

If your needs include changing the global Renderer from yaml_jinja, you can do so in the Master configuration file:

renderer: json_mako

However, you should consider very carefully whether this is best. Keep in mind that community examples, repositories, and formulae are generally kept in YAML, and if any templating needs to be done, Jinja is usually used. This will affect how you deal with the community, or with enterprise support on any issues, and may confuse any experienced Salt users that your company hires.

That said, even with a standard base of Jinja + YAML, there are times when using a different set of Renderers for a small subset of your SLS files is appropriate.

Render pipes

As previously mentioned, SLS files will be rendered using the configured default. However, it is possible to change how a file is rendered by adding a shebang (also known as shabang) line to the top of the file. A file that is to be rendered only as YAML will begin with the following line:

#!yaml

However, in the Salt world, this is generally impractical. Adding a templating engine increases the power of an SLS file significantly. In order to use multiple Renderers in a specific order, add them to the shabang line in the desired order, separated by pipes:

#!jinja|yaml

This resembles the Unix method of piping smaller programs together, to create larger, more functional programs. There is no imposed limit on how many Renderers are piped together:

#!mako|pyobjects|jinja|yaml|json

However, this is pretty unrealistic. You will find that, in general, no more than two Renderers need to be used. Indeed, too many Renderers will create a complexity that is unreadable and unmaintainable. Use as many as are needed, and no more.

It is important to note that SLS files will ultimately result in a specific data structure. The most accurate way to say this in simple terms is that the data generated by SLS files must be usable by the msgpack serialization package. This is the format used extensively throughout the various subsystems inside Salt (notably, the cache system).

Serving templated files

SLS files are not the only files that can take advantage of the Renderer. Any file that is served from an SLS file may also be rendered through a templating engine. These files aren’t as specific as SLS files, because they do not need to return a specific data format; they only need to result in the arbitrary file contents that will be served by Salt.

The most common usage of this is with the file.managed State. Adding a template argument to this State will cause the file to be rendered accordingly:

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://httpd/httpd.conf
    - template: jinja
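
The source file itself uses the same templating syntax; for example, a hypothetical fragment of salt://httpd/httpd.conf might pull values from Grains and Pillar like this:

ServerName {{ grains['fqdn'] }}
Listen {{ salt['pillar.get']('httpd:port', '80') }}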

Because the templated file will not return data, Renderers that deal exclusively with data are not available here. But while YAML, JSON, msgpack, and the various Python-based Renderers are not available, Jinja, Mako, Cheetah, and the like can be used.

Understanding the Loader

The Loader system is at the heart of how Salt works. In a nutshell, Salt is a collection of modules, tied together with the Loader. Even the transport mechanisms, which enable communication between, and define, the Master, Minion, and Syndic hierarchies, make use of modules that are managed by the Loader.

Dynamic modules

Salt’s Loader system is a bit unconventional. Traditionally, most software has been designed to require that all supported components be installed. This is not the case with every package, of course. The Apache Web Server is an example of one project that supports a number of components that need not all be installed. Debian-based operating systems manage Apache modules by providing modules-available/ and modules-enabled/ directories. Red Hat-based systems take a different approach: all components that are supported by Apache’s httpd package are installed along with it.

Making such a demand with Salt is beyond unrealistic. The default installation of Salt supports so many packages, many of which compete with each other (and some of which compete, in some ways, with Salt itself), that building such a dependency tree into Salt would effectively turn Salt into its own operating system.

However, even this is not entirely accurate. Because Salt supports a number of different Linux distributions, in addition to several Unix flavors and even Windows, it would be more accurate to say that installing every package that is supported by Salt would effectively turn Salt into several mutually-exclusive operating systems. Obviously, this is just not possible.

Salt is able to handle this using multiple approaches. First, Grains provide critical information to Salt to help identify the platform on which it is running. Grains such as os and os_family are used often enough to help Salt know whether to use yum or apt to manage packages, or systemd or upstart to manage services.

Each module is also able to check other dependencies on the system. The bulk of Salt’s apache module makes use of the apachectl command (or apache2ctl as appropriate), so its availability is dependent upon whether or not that command exists on the system.
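
The mechanism behind this is the __virtual__() function that a module can define; the Loader only exposes the module if __virtual__() succeeds. A simplified sketch of the pattern (not the actual apache module’s code) might look like this; note that recent Salt releases provide salt.utils.path.which(), while older ones used salt.utils.which():

import salt.utils.path

def __virtual__():
    '''
    Only make this module available if apachectl (or apache2ctl) is present
    '''
    if salt.utils.path.which('apachectl') or salt.utils.path.which('apache2ctl'):
        return 'apache'
    return False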

This set of techniques enables Salt to appropriately detect, as the Minion process starts, which modules to make available to the user.

A relatively new feature of Salt’s Loader system is the ability to load modules on demand. Modules that support the Lazy Loader functionality will not actually load until requested by the user. This streamlines the start process for the Minion, and makes more effective use of the available resources.

Execution modules

It has often been said that most of the heavy lifting in Salt is performed by the execution modules. This is because Salt was designed originally as a remote execution system, and most module types that have been added to the loader have been designed to extend the functionality of remote execution.

For instance, State modules are designed with one purpose in mind: to enforce the State of a certain aspect of a system. This could be to ensure that a package is installed, or that a service is running. The State module itself doesn’t install the package or start the service; it calls out to the execution module to do so. A State module’s only job is to add idempotency to an execution module.
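
As a drastically simplified sketch (not the real pkg State module), the idempotency pattern looks something like this:

def installed(name):
    '''
    Ensure that the named package is installed (simplified sketch)
    '''
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}

    # Idempotency: ask the execution module whether there is anything to do
    if name in __salt__['pkg.list_pkgs']():
        ret['comment'] = 'Package {0} is already installed'.format(name)
        return ret

    # The actual work is delegated to the execution module
    __salt__['pkg.install'](name)
    ret['changes'][name] = 'installed'
    ret['comment'] = 'Package {0} was installed'.format(name)
    return ret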

One could say that an important differentiator between runner modules and execution modules is that runners are designed to be used from the Master, while execution modules are designed to execute remotely on the Minion. However, runners were actually designed with something more specific in mind. System administrators have been using shell scripts for decades. From csh in Unix to bash in Linux, and even batch files in DOS and Windows, this has been the long-running standard.

Runner modules were designed to allow Salt users to apply a scripting language to remote executions. Because so many early Salt users were also Python users, it was not generally difficult for them to use Python as their scripting language. As the Salt user base grew, so too did the number of users who were not fluent in Python, but so did the number of other options available to them.
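
A minimal runner is just a Python module that runs on the Master; a hypothetical example (assuming it lives in a directory listed in the Master’s runner_dirs setting) might script a remote execution like this:

import salt.client

def uptime_report():
    '''
    Ask every Minion for its uptime and return the combined results
    '''
    client = salt.client.LocalClient()
    return client.cmd('*', 'status.uptime')

Assuming the file is named myrunner.py, it would then be invoked from the Master with: salt-run myrunner.uptime_report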

Reactor modules are a type of module that can pull together execution modules and runner modules, and make them available to users with no programming experience. And because Salt States are actually applied using the State execution module, even States are available through Reactors.

Cloud modules

Cloud modules are not typically thought of as Salt modules, perhaps because Salt Cloud started as a project separate from Salt, but in fact they have always used the Loader system. However, they do work a little differently.

Unlike many other modules in Salt, Cloud modules do not make use of execution modules (although there is an execution module that makes use of Salt Cloud). This is in part because Salt Cloud was designed to run on the Salt Master. However, it does not make use of runner modules either (though again, there is a runner module that can make use of Salt Cloud).

Salt Cloud’s initial purpose was to create new VMs on various public cloud providers, and automatically accept their keys on the Salt Master. However, it quickly grew apparent that users wanted to control as many aspects of their cloud providers as possible; not just VM creation.

Now Salt Cloud is able to perform any action that is available against a cloud provider. Some providers support more functionality than others. In some cases, this is because demand has not been presented, and in other cases because the appropriate developer has not yet had the resources to make the addition. But often it is because the features available on the provider itself may be limited. Whatever the situation, if a feature is available, then it can be added and made available via the Loader system.

Plunging into the State compiler

Salt was initially designed as a remote execution system that was to be used for gathering data normally collected by monitoring systems, and storing it for later analysis. However, as functionality grew, so too did a need to manage the execution modules that were doing the heavy lifting. Salt States were born from this need and, before long, the engine that managed them had expanded into other areas of Salt.

Imperative versus declarative

A point of contention between various configuration management systems is the concept of declarative versus imperative configurations. Before we discuss Salt’s take on the matter, let’s take a moment to examine the two.

It may be easiest to think of imperative programming like a script: perform Task A and, when it is finished, perform Task B; once that has finished, perform Task C. This is what many administrators are used to, especially as it more closely resembles the shell scripts that have been their lifelines for so many decades. Chef is an example of a configuration management suite that is imperative in nature.

Declarative definition is a newer concept, and more representative of object-oriented programming. The basic idea is that the user declares which tasks need to be performed, and the software performs them in whichever order it sees fit. Generally, dependencies can also be declared, dictating that some tasks are not to be completed until others are. Puppet is a well-known example of a configuration management platform that is declarative in nature.

Salt is unique in that it supports both imperative ordering and declarative execution. If no dependencies are defined then, by default, Salt will attempt to execute States in the order in which they appear in the SLS files. If a State fails because it requires a task that appears later, then multiple Salt runs will be required to complete all tasks.

However, if dependencies are defined, States will be handled differently. They will still be evaluated in the order in which they appear, but dependencies can cause them to be executed in a different order. Consider the following Salt State:

mysql:
  service:
    - running
  pkg:
    - installed
  file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf

In the first several versions of Salt that supported States, this would have been evaluated lexicographically: the file would have been copied into place first, then the package installed, then the service started, because in the English alphabet, F comes before P, and P comes before S. Happily, this is also the order that is probably desired.

However, the default ordering system now in Salt is imperative, meaning States will be evaluated in the order in which they appear. Salt will attempt to start the mysql service, which will fail because the package is not installed. It will then attempt to install the mysql package, which will succeed. If this is a Debian-based system, installation of the package will also cause the service to start, in this case without the correct configuration file. Lastly, Salt will copy the my.cnf file into place, but will make no attempt to restart the service to apply the correct changes. A second State run will report success for all three States (the service is running, the package is installed, and the file is managed as requested), but a manual restart of the mysql service will still be required.

Requisites

To accommodate ordering issues like these, Salt uses requisites. These will affect the order in which States are evaluated and executed. Consider the following changes to the above Salt State:

mysql:
  service:
    - running
    - require:
      - pkg: mysql
    - watch:
      - file: mysql
  pkg:
    - installed
    - require:
      - file: mysql
  file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf

Even though the States have been defined in an order that is not appropriate, they will still be evaluated and executed correctly.

The following is the order in which the States will be defined:

  1. service: mysql
  2. pkg: mysql
  3. file: mysql

However, the mysql service requires that the mysql package is executed first. So, before executing the mysql service, it will look ahead and evaluate the mysql package. But, since the mysql package requires the mysql file to be executed first, it will jump ahead and evaluate the mysql file. Because the file State does not require anything else, Salt will execute it. Having completed the list of requirements for the pkg State, Salt will go back and execute it. And finally, having completed all service requirements, Salt will go back and execute the service.

Following successful completion of the service State, it will move onto the next State and see if it has already been executed. It will continue in this fashion until all States have been evaluated and executed.

It is in this manner that Salt is able to be both imperative (by allowing statements to be evaluated in the order in which they appear) and declarative (by allowing statements to be executed based on requisites).

High and Low States

The concept of High State has proven to be one of the most confusing things about Salt. Users understand that the state.highstate command performs a State run, but what exactly is a “High State”? And does the presence of a High State mean that there is a “Low State” as well?

There are two parts of the State system that are in effect. “High” data refers generally to data as it is seen by the user. “Low” data refers generally to data as it is ingested and used by Salt.

High States

If you have worked with State files, you have already seen every aspect of this part of the State system. There are three specific components, each of which builds upon the one before it:

  • High data
  • SLS file
  • High State

Each individual State represents a piece of high data. If the previous SLS were broken into individual States, they would look like this, respectively (ignoring the fact that duplicate top-level keys would make for invalid YAML):

mysql:
  service:
    - running
    - require:
      - pkg: mysql
    - watch:
      - file: mysql

mysql:
  pkg:
    - installed
    - require:
      - file: mysql

mysql:
  file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf

When combined, along with other States, they form an SLS file:

iptables:
  service:
    - running

mysql:
  service:
    - running
    - require:
      - pkg: mysql
    - watch:
      - file: mysql
  pkg:
    - installed
    - require:
      - file: mysql
  file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf

When these files are tied together using includes, and further glued together for use inside an environment using a top.sls file, they form a High State.

top.sls

base:
  '*':
    - mysql

mysql.sls

include:
  - iptables

mysql:
  service:
    - running
    - require:
      - pkg: mysql
    - watch:
      - file: mysql
  pkg:
    - installed
    - require:
      - file: mysql
  file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf

iptables.sls

iptables:
  service:
    - running

When the state.highstate function is executed, Salt will compile all relevant SLS files referenced in top.sls, along with any includes, into a single definition, called a High State. This can be viewed by using the state.show_highstate function:

# salt myminion state.show_highstate --out yaml
myminion:
  iptables:
    service:
    - running
    - order: 10000
    __sls__: iptables
    __env__: base
  mysql:
    service:
    - running
    - require:
      - pkg: mysql
    - watch:
      - file: mysql
    - order: 10001
    pkg:
    - installed
    - require:
      - file: mysql
    - order: 10002
    file:
    - managed
    - source: salt://mysql/my.cnf
    - name: /etc/mysql/my.cnf
    - order: 10003
    __sls__: mysql
    __env__: base

Take note of the extra fields that are included in this output. First, an order is declared. This is something that can be explicitly declared by the user in an SLS file using either real numbers, or the first or last keywords. All States that are set to be first will have their order adjusted accordingly. Numerically ordered States will appear next. Salt will then add 10000 to the last defined number (which is 0 by default), and add any States that are not explicitly ordered. Finally, any States set to last will be added.
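
When explicit ordering is needed, it is declared directly in the SLS file; for example (a hypothetical State, using the same old-style syntax as the rest of this article):

iptables:
  service:
    - running
    - order: 1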

Salt will also add some variables that it uses internally, to know which environment (__env__) to execute the State in, and which SLS file (__sls__) the State declaration came from.

Remember that the order is still no more than a starting point; the actual High State will be executed based first on requisites, and then on order.

Low States

Once the final High State has been generated, it will be sent to the State compiler. This will reformat the State data into a format that Salt uses internally to evaluate each declaration, and feed data into each State module (which will in turn call the execution modules, as necessary). As with high data, low data can be broken into individual components:

  • Low State
  • Low chunks
  • State module
  • Execution module(s)

The low data can be viewed using the state.show_lowstate function:

# salt myminion state.show_lowstate --out yaml
myminion:
- __env__: base
  __id__: iptables
  __sls__: iptables
  fun: running
  name: iptables
  order: 10000
  state: service
- __env__: base
  __id__: mysql
  __sls__: mysql
  fun: running
  name: mysql
  order: 10001
  require:
  - pkg: mysql
  state: service
  watch:
  - file: mysql
- __env__: base
  __id__: mysql
  __sls__: mysql
  fun: installed
  name: mysql
  order: 10002
  require:
  - file: mysql
  state: pkg
- __env__: base
  __id__: mysql
  __sls__: mysql
  fun: managed
  name: /etc/mysql/my.cnf
  order: 10003
  source: salt://mysql/my.cnf
  state: file

Together, all this comprises a Low State. Each individual item is a Low Chunk. The first Low Chunk on this list looks like this:

- __env__: base
  __id__: iptables
  __sls__: iptables
  fun: running
  name: iptables
  order: 10000
  state: service

Each low chunk maps to a State module (in this case, service) and a function inside that State module (in this case, running). An ID is also provided at this level (__id__). Salt will map relationships (that is, requisites) between States using a combination of State and __id__. If a name has not been declared by the user, then Salt will automatically use the __id__ as the name.

Once a function inside a State module has been called, it will usually map to one or more execution modules which actually do the work. Let’s take a moment to examine what happens when Salt gets to that point.

Summary

We have discussed how Salt manages its own configuration, as well as the Loader and Renderer systems. We have also gone into significant detail about how the State system works.
