The Salt system is amazingly simple and easy to configure. The two components
of the Salt system each have a respective configuration file: the
salt-master
is configured via the master configuration file, and the
salt-minion
is configured via the minion configuration file.
Primary Master Configuration
interface
Default: 0.0.0.0
(all interfaces)
The local interface to bind to. This must be an IP address.
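For example, to bind to a single address (the address shown is illustrative):
interface: 192.168.0.1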
ipv6
Default: False
Whether the master should listen for IPv6 connections. If this is set to True,
the interface option must be adjusted too (for example: interface: '::').
publish_port
Default: 4505
The network port to set up the publication interface.
master_id
Default: None
The id to be passed in the publish job to minions. This is used for MultiSyndics
to return the job to the requesting master.
Note
This must be the same string as the syndic is configured with.
master_id: MasterOfMaster
user
Default: root
The user to run the Salt processes
Note
Starting with version 3006.0, Salt's official packages ship with a default
configuration which runs the Master as a non-privileged user. The Master's
configuration file has the user option set to user: salt. Unless you
are absolutely sure you want to run salt as some other user, care should be
taken to preserve this setting in your Master configuration file.
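For example, to keep the packaged default of running the Master as the salt user:
user: salt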
enable_ssh_minions
Default: False
Tell the master to also use salt-ssh when running commands against minions.
Note
Enabling this does not influence the limitations on cross-minion communication.
The Salt mine and publish.publish do not work from regular minions
to SSH minions; the other way around is partly possible since 3007.0
(during state rendering on the master).
This means you can use the mentioned functions to call out to regular minions
in sls templates and wrapper modules, but state modules
(which are executed on the remote) relying on them still do not work.
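To enable this behavior:
enable_ssh_minions: True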
ret_port
Default: 4506
The port used by the return server. This is the server used by Salt to receive
execution returns and command executions.
pidfile
Default: /var/run/salt-master.pid
Specify the location of the master pidfile.
pidfile: /var/run/salt-master.pid
root_dir
Default: /
The system root directory to operate from, change this to make Salt run from
an alternative root.
conf_file
Default: /etc/salt/master
The path to the master's configuration file.
conf_file: /etc/salt/master
pki_dir
Default: <LIB_STATE_DIR>/pki/master
The directory to store the pki authentication keys.
<LIB_STATE_DIR> is the pre-configured variable state directory set during
installation via --salt-lib-state-dir. It defaults to /etc/salt. Systems
following the Filesystem Hierarchy Standard (FHS) might set it to
/var/lib/salt.
pki_dir: /etc/salt/pki/master
cluster_id
When defined, the master will operate in cluster mode. The master will send the
cluster key and id to minions instead of its own key and id. The master will
also forward its local event bus to the other masters defined by cluster_peers.
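For example (the cluster id shown is illustrative):
cluster_id: master_cluster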
cluster_peers
When cluster_id is defined, this setting is a list of the other masters
(hostnames or IPs) that will be in the cluster.
cluster_peers:
- master2
- master3
cluster_pki_dir
When cluster_id is defined, this sets the location where this cluster
will store its cluster public and private key as well as any minion keys. This
setting defaults to the value of pki_dir, but should be changed
to a filesystem location shared between the peers in the cluster.
cluster_pki_dir: /my/gluster/share/pki
extension_modules
Changed in version 2016.3.0: The default location for this directory has been moved. Prior to this
version, the location was a directory named extmods in the Salt
cachedir (on most platforms, /var/cache/salt/extmods). It has been
moved into the master cachedir (on most platforms,
/var/cache/salt/master/extmods).
Directory where custom modules are synced to. This directory can contain
subdirectories for each of Salt's module types such as runners,
output, wheel, modules, states, returners, engines,
utils, etc. This path is appended to root_dir.
Note, any directories or files not found in the module_dirs location
will be removed from the extension_modules path.
extension_modules: /root/salt_extmods
extmod_whitelist/extmod_blacklist
By using this dictionary, the modules that are synced to the master's extmod cache using saltutil.sync_* can be
limited. If nothing is set to a specific type, then all modules are accepted. To block all modules of a specific type,
whitelist an empty list.
extmod_whitelist:
  modules:
    - custom_module
  engines:
    - custom_engine
  pillars: []

extmod_blacklist:
  modules:
    - specific_module
Valid options:
modules
states
grains
renderers
returners
output
proxy
runners
wheel
engines
queues
pillar
utils
sdb
cache
clouds
tops
roster
tokens
module_dirs
Default: []
Like extension_modules, but a list of extra directories to search
for Salt modules.
module_dirs:
- /var/cache/salt/minion/extmods
cachedir
Default: /var/cache/salt/master
The location used to store cache information, particularly the job information
for executed salt commands.
This directory may contain sensitive data and should be protected accordingly.
cachedir: /var/cache/salt/master
verify_env
Default: True
Verify and set permissions on configuration directories at startup.
keep_jobs
Default: 24
Set the number of hours to keep old job information. Note that setting this option
to 0
disables the cache cleaner.
keep_jobs_seconds
Default: 86400
Set the number of seconds to keep old job information. Note that setting this option
to 0
disables the cache cleaner.
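For example, to keep job data for only one hour:
keep_jobs_seconds: 3600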
gather_job_timeout
Default: 10
The number of seconds to wait when the client is requesting information
about running jobs.
timeout
Default: 5
Set the default timeout for the salt command and api.
loop_interval
Default: 60
The loop_interval option controls the seconds for the master's Maintenance
process check cycle. This process updates file server backends, cleans the
job cache and executes the scheduler.
maintenance_interval
Default: 3600
Defines how often to restart the master's Maintenance process.
maintenance_interval: 9600
output
Default: nested
Set the default outputter used by the salt command.
outputter_dirs
Default: []
A list of additional directories to search for salt outputters in.
output_file
Default: None
Set the default output file used by the salt command. Default is to output
to the CLI and not to a file. Functions the same way as the "--out-file"
CLI option, only sets this to a single file for all salt commands.
output_file: /path/output/file
show_timeout
Default: True
Tell the client to show minions that have timed out.
show_jid
Default: False
Tell the client to display the jid when a job is published.
color
Default: True
By default output is colored, to disable colored output set the color value
to False.
color_theme
Default: ""
Specifies a path to the color theme to use for colored command line output.
color_theme: /etc/salt/color_theme
cli_summary
Default: False
When set to True
, displays a summary of the number of minions targeted,
the number of minions returned, and the number of minions that did not
return.
sock_dir
Default: /var/run/salt/master
Set the location to use for creating Unix sockets for master process
communication.
sock_dir: /var/run/salt/master
enable_gpu_grains
Default: False
Enable GPU hardware data for your master. Be aware that the master can
take a while to start up when lspci and/or dmidecode is used to populate the
grains for the master.
skip_grains
Default: False
MasterMinions should omit grains. A MasterMinion is "a minion function object
for generic use on the master" that omits pillar. A RunnerClient creates a
MasterMinion omitting states and renderer. Setting this to True can improve
master performance.
job_cache
Default: True
The master maintains a temporary job cache. While this is a great addition, it
can be a burden on the master for larger deployments (over 5000 minions).
Disabling the job cache will make previously executed jobs unavailable to
the jobs system and is not generally recommended. Normally it is wise to make
sure the master has access to a faster IO system or a tmpfs is mounted to the
jobs dir.
Note
Setting the job_cache to False will not cache minion returns, but
the JID directory for each job is still created. The creation of the JID
directories is necessary because Salt uses those directories to check for
JID collisions. By setting this option to False, the job cache
directory, which is /var/cache/salt/master/jobs/ by default, will be
smaller, but the JID directories will still be present.
Note that the keep_jobs_seconds option can be set to a lower
value, such as 3600, to limit the number of seconds jobs are stored in
the job cache. (The default is 86400 seconds.)
Please see the Managing the Job Cache
documentation for more information.
minion_data_cache
Default: True
The minion data cache is a cache of information about the minions stored on the
master. This information is primarily the pillar, grains, and mine data. The data
is cached via the cache subsystem in the Master cachedir under the name of the
minion or in a supported database. The data is used to predetermine which minions
are expected to reply to executions.
cache
Default: localfs
Cache subsystem module to use for minion data cache.
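For example, to use the consul cache driver instead of the default (the driver must be configured separately):
cache: consul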
memcache_expire_seconds
Default: 0
Memcache is an additional cache layer that keeps a limited amount of data
fetched from the minion data cache for a limited period of time in memory,
which makes cache operations faster. It doesn't make much sense for the localfs
cache driver but helps for more complex drivers like consul.
This option sets the memcache items' expiration time. By default it is set to 0,
which disables memcache.
memcache_expire_seconds: 30
memcache_max_items
Default: 1024
Set the memcache limit in items that are bank-key pairs. For example, the list
minion_0/data, minion_0/mine, minion_1/data contains 3 items. This value depends
on the count of minions usually targeted in your environment. The best value can
be found by analyzing the cache log with memcache_debug enabled.
memcache_full_cleanup
Default: False
If the cache storage gets full, i.e. the items count exceeds the
memcache_max_items value, memcache cleans up its storage. If this option
is set to False, memcache removes only the single oldest value from its storage.
If it is set to True, memcache removes all expired items and also
removes the oldest one if there are no expired items.
memcache_full_cleanup: True
memcache_debug
Default: False
Enable collecting memcache stats and logging them at the debug log level. If enabled,
memcache collects information about how many fetch calls have been made and
how many of them were served by memcache. It also outputs the rate value, which
is the result of dividing the first two values. This should help to choose the
right values for the expiration time and the cache size.
ext_job_cache
Default: ''
Used to specify a default returner for all minions. When this option is set,
the specified returner needs to be properly configured and the minions will
always default to sending returns to this returner. This will also disable the
local job cache on the master.
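For example, to default all minion returns to a configured redis returner:
ext_job_cache: redis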
event_return
Default: ''
Specify the returner(s) to use to log events. Each returner may have
installation and configuration requirements. Read the returner's
documentation.
Note
Not all returners support event returns. Verify that a returner has an
event_return()
function before configuring this option with a returner.
event_return:
- syslog
- splunk
event_return_queue
Default: 0
On busy systems, enabling event_returns can cause a considerable load on
the storage system for returners. Events can be queued on the master and
stored in a batched fashion using a single transaction for multiple events.
By default, events are not queued.
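For example, to queue events in batches of up to 1000 (value shown is illustrative):
event_return_queue: 1000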
event_return_whitelist
Default: []
Only return events matching tags in a whitelist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_whitelist:
- salt/master/a_tag
- salt/run/*/ret
event_return_blacklist
Default: []
Store all event returns _except_ the tags in a blacklist.
Changed in version 2016.11.0: Supports glob matching patterns.
event_return_blacklist:
- salt/master/not_this_tag
- salt/wheel/*/ret
max_event_size
Default: 1048576
Passing very large events can cause the minion to consume large amounts of
memory. This value tunes the maximum size of a message allowed onto the
master event bus. The value is expressed in bytes.
master_job_cache
Default: local_cache
Specify the returner to use for the job cache. The job cache will only be
interacted with from the salt master and therefore does not need to be
accessible from the minions.
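For example, to store the job cache in a configured redis returner instead of the local cache:
master_job_cache: redis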
job_cache_store_endtime
Default: False
Specify whether the Salt Master should store end times for jobs as returns
come in.
job_cache_store_endtime: False
enforce_mine_cache
Default: False
By default, when disabling the minion_data_cache, the mine will stop working,
since it is based on cached data. By enabling this option we explicitly enable
only the cache for the mine system.
enforce_mine_cache: False
max_minions
Default: 0
The maximum number of minion connections allowed by the master. Use this to
accommodate the number of minions per master if you have different types of
hardware serving your minions. The default of 0
means unlimited connections.
Please note that this can slow down the authentication process a bit in large
setups.
con_cache
Default: False
If max_minions is used in large installations, the master might experience
high-load situations because of having to check the number of connected
minions for every authentication. This cache provides the minion-ids of
all connected minions to all MWorker-processes and greatly improves the
performance of max_minions.
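To enable the connection cache:
con_cache: True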
presence_events
Default: False
Causes the master to periodically look for actively connected minions.
Presence events are fired on the event bus on a
regular interval with a list of connected minions, as well as events with lists
of newly connected or disconnected minions. This is a master-only operation
that does not send executions to minions.
detect_remote_minions
Default: False
When checking the minions connected to a master, also include the master's
connections to minions on the port specified in the setting remote_minions_port.
This is particularly useful when checking if the master is connected to any Heist-Salt
minions. If this setting is set to True, the master will check all connections on port 22
by default unless a user also configures a different port with the setting
remote_minions_port.
Changing this setting will check the remote minions the master is connected to when using
presence events, the manage runner, and any other parts of the code that call the
connected_ids method to check the status of connected minions.
detect_remote_minions: True
remote_minions_port
Default: 22
The port to use when checking for remote minions when detect_remote_minions is set
to True.
remote_minions_port: 2222
ping_on_rotate
Default: False
By default, the master AES key rotates every 24 hours. The next command
following a key rotation will trigger a key refresh from the minion which may
result in minions which do not respond to the first command after a key refresh.
To tell the master to ping all minions immediately after an AES key refresh,
set ping_on_rotate to True. This should mitigate the issue where a
minion does not appear to initially respond after a key is rotated.
Note that enabling this may cause high load on the master immediately after the
key rotation event as minions reconnect. Consider this carefully if this salt
master is managing a large number of minions.
If disabled, it is recommended to handle this event by listening for the
aes_key_rotate event with the key tag and acting appropriately.
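To enable pinging after key rotation:
ping_on_rotate: True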
transport
Default: zeromq
Changes the underlying transport layer. ZeroMQ is the recommended transport
while additional transport layers are under development. Supported values are
zeromq and tcp (experimental). This setting has a significant impact on
performance and should not be changed unless you know what you are doing!
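For example, to switch to the experimental TCP transport:
transport: tcp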
transport_opts
Default: {}
(experimental) Starts multiple transports and overrides options for each
transport with the provided dictionary. This setting has a significant impact on
performance and should not be changed unless you know what you are doing! The
following example shows how to start a TCP transport alongside a ZMQ transport.
transport_opts:
  tcp:
    publish_port: 4605
    ret_port: 4606
  zeromq: []
master_stats
Default: False
Turning on the master stats enables runtime throughput and statistics events
to be fired from the master event bus. These events will report on what
functions have been run on the master and how long these runs have, on
average, taken over a given period of time.
master_stats_event_iter
Default: 60
The time in seconds to fire master_stats events. These will only fire in
conjunction with receiving a request to the master; idle masters will not
fire these events.
sock_pool_size
Default: 1
To avoid blocking while waiting to write data to a socket, Salt supports a
socket pool for Salt applications. For example, a job with a large target host
list can cause a long blocking wait. This option is used by the ZMQ and TCP
transports; the other transport methods do not need a socket pool by definition.
Most Salt tools, including the CLI, only need a single socket in the pool. On
the other hand, it is highly recommended to set the socket pool size larger
than 1 for other Salt applications, especially the Salt API, which must write
data to sockets concurrently.
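For example, a larger pool size for a host running the Salt API (value shown is illustrative):
sock_pool_size: 15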
ipc_mode
Default: ipc
The ipc strategy (i.e., sockets versus tcp, etc.). Windows platforms lack
POSIX IPC and must rely on TCP-based inter-process communications. ipc_mode
is set to tcp by default on Windows.
ipc_write_buffer
Default: 0
The maximum size of a message sent via the IPC transport module can be limited
dynamically or by sharing an integer value lower than the total memory size. When
the value dynamic is set, salt will use 2.5% of the total memory as the
ipc_write_buffer value (rounded to an integer). A value of 0 disables
this option.
ipc_write_buffer: 10485760
tcp_master_pub_port
Default: 4512
The TCP port on which events for the master should be published if ipc_mode
is TCP.
tcp_master_pub_port: 4512
tcp_master_pull_port
Default: 4513
The TCP port on which events for the master should be pulled if ipc_mode
is TCP.
tcp_master_pull_port: 4513
tcp_master_publish_pull
Default: 4514
The TCP port from which events for the master should be pulled and then republished onto
the event bus on the master.
tcp_master_publish_pull: 4514
tcp_master_workers
Default: 4515
The TCP port for mworkers
to connect to on the master.
auth_events
Default: True
Determines whether the master will fire authentication events.
Authentication events are fired when
a minion performs an authentication check with the master.
minion_data_cache_events
Default: True
Determines whether the master will fire minion data cache events. Minion data
cache events are fired when a minion requests a minion data cache refresh.
minion_data_cache_events: True
http_connect_timeout
Default: 20
HTTP connection timeout in seconds.
Applied when fetching files using tornado back-end.
Should be greater than overall download time.
http_request_timeout
Default: 3600
HTTP request timeout in seconds.
Applied when fetching files using tornado back-end.
Should be greater than overall download time.
http_request_timeout: 3600
use_yamlloader_old
Default: False
Use the pre-2019.2 YAML renderer.
Uses legacy YAML rendering to support some legacy inline data structures.
See the 2019.2.1 release notes for more details.
use_yamlloader_old: False
req_server_niceness
Default: None
Process priority level of the ReqServer subprocess of the master.
Supported on POSIX platforms only.
pub_server_niceness
Default: None
Process priority level of the PubServer subprocess of the master.
Supported on POSIX platforms only.
fileserver_update_niceness
Default: None
Process priority level of the FileServerUpdate subprocess of the master.
Supported on POSIX platforms only.
fileserver_update_niceness: 9
maintenance_niceness
Default: None
Process priority level of the Maintenance subprocess of the master.
Supported on POSIX platforms only.
mworker_niceness
Default: None
Process priority level of the MWorker subprocess of the master.
Supported on POSIX platforms only.
mworker_queue_niceness
Default: None
Process priority level of the MWorkerQueue subprocess of the master.
Supported on POSIX platforms only.
mworker_queue_niceness: 9
event_return_niceness
Default: None
Process priority level of the EventReturn subprocess of the master.
Supported on POSIX platforms only.
event_publisher_niceness
Default: None
Process priority level of the EventPublisher subprocess of the master.
Supported on POSIX platforms only.
event_publisher_niceness: 9
reactor_niceness
Default: None
Process priority level of the Reactor subprocess of the master.
Supported on POSIX platforms only.
Master Security Settings
open_mode
Default: False
Open mode is a dangerous security feature. One problem encountered with pki
authentication systems is that keys can become "mixed up" and authentication
begins to fail. Open mode turns off authentication and tells the master to
accept all authentication. This will clean up the pki keys received from the
minions. Open mode should not be turned on for general use. Open mode should
only be used for a short period of time to clean up pki keys. To turn on open
mode set this value to True
.
auto_accept
Default: False
Enable auto_accept. This setting will automatically accept all incoming
public keys from minions.
keysize
Default: 2048
The size of key that should be generated when creating new keys.
autosign_timeout
Default: 120
Time in minutes that an incoming public key with a matching name found in
pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys
are removed when the master checks the minion_autosign directory. This method
of auto-accepting minions can be safer than an autosign_file because the
keyid record can expire and is limited to being an exact name match.
This should still be considered a less than secure option, due to the fact
that trust is based on just the requesting minion id.
autosign_file
Default: not defined
If the autosign_file is specified, incoming keys specified in the autosign_file
will be automatically accepted. Matches will be searched for first by string
comparison, then by globbing, then by full-string regex matching.
This should still be considered a less than secure option, due to the fact
that trust is based on just the requesting minion id.
Changed in version 2018.3.0: For security reasons the file must be readonly except for its owner.
If permissive_pki_access is True the owning group can also
have write access, but if Salt is running as root
it must be a member of that group.
A less strict requirement also existed in previous versions.
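For example (path shown is illustrative):
autosign_file: /etc/salt/autosign.conf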
autoreject_file
Default: not defined
Works like autosign_file, but instead allows you to specify
minion IDs for which keys will automatically be rejected. Will override both
membership in the autosign_file and the auto_accept setting.
autosign_grains_dir
Default: not defined
If the autosign_grains_dir is specified, incoming keys from minions with
grain values that match those defined in files in the autosign_grains_dir
will be accepted automatically. Grain values that should be accepted automatically
can be defined by creating a file named like the corresponding grain in the
autosign_grains_dir and writing the values into that file, one value per line.
Lines starting with a # will be ignored.
The minion must be configured to send the corresponding grains on authentication.
This should still be considered a less than secure option, due to the fact
that trust is based on just the requesting minion.
Please see the Autoaccept Minions from Grains
documentation for more information.
autosign_grains_dir: /etc/salt/autosign_grains
permissive_pki_access
Default: False
Enable permissive access to the salt keys. This allows you to run the
master or minion as root, but have a non-root group be given access to
your pki_dir. To make the access explicit, root must belong to the group
you've given access to. This is potentially quite insecure. If an autosign_file
is specified, enabling permissive_pki_access will allow group access to that
specific file.
permissive_pki_access: False
publisher_acl
Default: {}
Enable user accounts on the master to execute specific modules. These modules
can be expressed as regular expressions.
publisher_acl:
  fred:
    - test.ping
    - pkg.*
publisher_acl_blacklist
Default: {}
Blacklist users or modules.
This example would blacklist all non-sudo users, including root, from
running any commands. It would also blacklist any use of the "cmd"
module.
This is completely disabled by default.
publisher_acl_blacklist:
  users:
    - root
    - '^(?!sudo_).*$' # all non sudo users
  modules:
    - cmd.*
    - test.echo
sudo_acl
Default: False
Enforce publisher_acl and publisher_acl_blacklist when users have sudo
access to the salt command.
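To enforce the ACLs for sudo users:
sudo_acl: True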
external_auth
Default: {}
The external auth system uses the Salt auth modules to authenticate and
validate users to access areas of the Salt system.
external_auth:
  pam:
    fred:
      - test.*
token_expire
Default: 43200
Time (in seconds) for a newly generated token to live.
Default: 12 hours
token_expire_user_override
Default: False
Allow eauth users to specify the expiry time of the tokens they generate.
A boolean applies to all users or a dictionary of whitelisted eauth backends
and usernames may be given:
token_expire_user_override:
  pam:
    - fred
    - tom
  ldap:
    - gary
keep_acl_in_token
Default: False
Set to True to enable keeping the calculated user's auth list in the token
file. This is disabled by default and the auth list is calculated or requested
from the eauth driver each time.
Note: keep_acl_in_token will be forced to True when using external authentication
for REST API (rest is present under external_auth). This is because the REST API
does not store the password, and can therefore not retroactively fetch the ACL, so
the ACL must be stored in the token.
eauth_acl_module
Default: ''
Auth subsystem module to use to get authorized access list for a user. By default it's
the same module used for external authentication.
file_recv
Default: False
Allow minions to push files to the master. This is disabled by default, for
security purposes.
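To allow minions to push files to the master:
file_recv: True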
file_recv_max_size
Default: 100
Set a hard-limit on the size of the files that can be pushed to the master.
It will be interpreted as megabytes.
master_sign_pubkey
Default: False
Sign the master auth-replies with a cryptographic signature of the master's
public key. Please see the Multimaster-PKI with Failover Tutorial for how to
use these settings.
master_sign_key_name
Default: master_sign
The customizable name of the signing-key-pair without suffix.
master_sign_key_name: <filename_without_suffix>
master_pubkey_signature
Default: master_pubkey_signature
The name of the file in the master's pki-directory that holds the pre-calculated
signature of the master's public-key.
master_pubkey_signature: <filename>
master_use_pubkey_signature
Default: False
Instead of computing the signature for each auth-reply, use a pre-calculated
signature. The master_pubkey_signature
must also be set for this.
master_use_pubkey_signature: True
rotate_aes_key
Default: True
Rotate the salt-master's AES key when a minion public key is deleted with salt-key.
This is a very important security setting. Disabling it will enable deleted
minions to still listen in on the messages published by the salt-master.
Do not disable this unless it is absolutely clear what this does.
publish_session
Default: 86400
The number of seconds between AES key rotations on the master.
publish_session: 86400
publish_signing_algorithm
Default: PKCS1v15-SHA1
The RSA signing algorithm used by this master when signing published payloads.
Valid values are PKCS1v15-SHA1 and PKCS1v15-SHA224. Minions must be at version
3006.9 or greater if this is changed from the default setting.
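For example, to use the stronger digest (requires minions at version 3006.9 or greater):
publish_signing_algorithm: PKCS1v15-SHA224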
ssl
New in version 2016.11.0.
Default: None
TLS/SSL connection options. This could be set to a dictionary containing
arguments corresponding to the python ssl.wrap_socket method. For details see
the Tornado and Python documentation.
Note: to set enum argument values like cert_reqs and ssl_version, use the
constant names without the ssl module prefix: CERT_REQUIRED or PROTOCOL_SSLv23.
ssl:
  keyfile: <path_to_keyfile>
  certfile: <path_to_certfile>
  ssl_version: PROTOCOL_TLSv1_2
preserve_minion_cache
Default: False
By default, the master deletes its cache of minion data when the key for that
minion is removed. To preserve the cache after key deletion, set
preserve_minion_cache
to True.
WARNING: This may have security implications if compromised minions auth with
a previously deleted minion ID.
preserve_minion_cache: False
allow_minion_key_revoke
Default: True
Controls whether a minion can request its own key revocation. When True,
the master will honor the minion's request and revoke its key. When False,
the master will drop the request and the minion's key will remain accepted.
allow_minion_key_revoke: False
optimization_order
Default: [0, 1, 2]
In cases where Salt is distributed without .py files, this option determines
the priority of optimization level(s) Salt's module loader should prefer.
Note
This option is only supported on Python 3.5+.
optimization_order:
- 2
- 0
- 1
Master State System Settings
state_top
Default: top.sls
The state system uses a "top" file to tell the minions what environment to
use and what modules to use. The state_top file is defined relative to the
root of the base environment. The value of "state_top" is also used for the
pillar top file.
state_top_saltenv
This option has no default value. Set it to an environment name to ensure that
only the top file from that environment is considered during a
highstate.
Note
Using this value does not change the merging strategy. For instance, if
top_file_merging_strategy is set to merge, and state_top_saltenv is set to
foo, then any sections for environments other than foo in the top file for
the foo environment will be ignored. With state_top_saltenv set to base, all
states from all environments in the base top file will be applied, while all
other top files are ignored. The only way to set state_top_saltenv to
something other than base and not have the other environments in the targeted
top file ignored would be to set top_file_merging_strategy to merge_all.
top_file_merging_strategy
Changed in version 2016.11.0: A merge_all strategy has been added.
Default: merge
When no specific fileserver environment (a.k.a. saltenv) has been specified
for a highstate, all environments' top files are inspected. This config option
determines how the SLS targets in those top files are handled.
When set to merge, the base environment's top file is evaluated first,
followed by the other environments' top files. The first target expression
(e.g. '*') for a given environment is kept, and when the same target
expression is used in a different top file evaluated later, it is ignored.
Because base is evaluated first, it is authoritative. For example, if there
is a target for '*' for the foo environment in both the base and foo
environment's top files, the one in the foo environment would be ignored. The
environments will be evaluated in no specific order (aside from base coming
first). For greater control over the order in which the environments are
evaluated, use env_order. Note that, aside from the base environment's top
file, any sections in top files that do not match that top file's environment
will be ignored. So, for example, a section for the qa environment would be
ignored if it appears in the dev environment's top file. To keep use cases
like this from being ignored, use the merge_all strategy.
When set to same, then for each environment, only that environment's top
file is processed, with the others being ignored. For example, only the dev
environment's top file will be processed for the dev environment, and any
SLS targets defined for dev in the base environment's (or any other
environment's) top file will be ignored. If an environment does not have a top
file, then the top file from the default_top config parameter will be used as
a fallback.
When set to merge_all, then all states in all environments in all top files
will be applied. The order in which individual SLS files will be executed will
depend on the order in which the top files were evaluated, and the environments
will be evaluated in no specific order. For greater control over the order in
which the environments are evaluated, use env_order.
top_file_merging_strategy: same
env_order
Default: []
When top_file_merging_strategy
is set to merge
, and no
environment is specified for a highstate, this
config option allows for the order in which top files are evaluated to be
explicitly defined.
env_order:
- base
- dev
- qa
master_tops
Default: {}
The master_tops option replaces the external_nodes option by creating
a pluggable system for the generation of external top data. The external_nodes
option is deprecated by the master_tops option.
To gain the capabilities of the classic external_nodes system, use the
following configuration:
master_tops:
  ext_nodes: <Shell command which returns yaml>
renderer
Default: jinja|yaml
The renderer to use on the minions to render the state data.
userdata_template
New in version 2016.11.4.
Default: None
The renderer to use for templating userdata files in salt-cloud, if the
userdata_template
is not set in the cloud profile. If no value is set in
the cloud profile or master config file, no templating will be performed.
jinja_env
Default: {}
jinja_env overrides the default Jinja environment options for
all templates except sls templates.
To set the options for sls templates use jinja_sls_env
.
The default options are:
jinja_env:
  block_start_string: '{%'
  block_end_string: '%}'
  variable_start_string: '{{'
  variable_end_string: '}}'
  comment_start_string: '{#'
  comment_end_string: '#}'
  line_statement_prefix:
  line_comment_prefix:
  trim_blocks: False
  lstrip_blocks: False
  newline_sequence: '\n'
  keep_trailing_newline: False
jinja_sls_env
Default: {}
jinja_sls_env sets the Jinja environment options for sls templates.
The defaults and accepted options are exactly the same as they are
for jinja_env
.
The default options are:
jinja_sls_env:
  block_start_string: '{%'
  block_end_string: '%}'
  variable_start_string: '{{'
  variable_end_string: '}}'
  comment_start_string: '{#'
  comment_end_string: '#}'
  line_statement_prefix:
  line_comment_prefix:
  trim_blocks: False
  lstrip_blocks: False
  newline_sequence: '\n'
  keep_trailing_newline: False
Example using line statements and line comments to increase ease of use:
If your configuration options are
jinja_sls_env:
  line_statement_prefix: '%'
  line_comment_prefix: '##'
With these options, jinja will interpret anything after a % at the start of a
line (ignoring whitespace) as a jinja statement and will interpret anything
after a ## as a comment.
This allows the following more convenient syntax to be used:
## (this comment will not stay once rendered)
# (this comment remains in the rendered template)
## ensure all the formula services are running
% for service in formula_services:
enable_service_{{ service }}:
  service.running:
    name: {{ service }}
% endfor
The following less convenient but equivalent syntax would have to
be used if you had not set the line_statement and line_comment options:
{# (this comment will not stay once rendered) #}
# (this comment remains in the rendered template)
{# ensure all the formula services are running #}
{% for service in formula_services %}
enable_service_{{ service }}:
  service.running:
    name: {{ service }}
{% endfor %}
jinja_trim_blocks
Default: False
If this is set to True
, the first newline after a Jinja block is
removed (block, not variable tag!). Defaults to False
and corresponds
to the Jinja environment init variable trim_blocks
.
jinja_lstrip_blocks
Default: False
If this is set to True
, leading spaces and tabs are stripped from the
start of a line to a block. Defaults to False
and corresponds to the
Jinja environment init variable lstrip_blocks
.
jinja_lstrip_blocks: False
failhard
Default: False
Set the global failhard flag. This informs all states to stop running states
at the moment a single state fails.
state_verbose
Default: True
Controls the verbosity of state runs. By default, the results of all states are
returned, but setting this value to False
will cause salt to only display
output for states that failed or states that have changes.
state_output
Default: full
The state_output setting controls which results will be output as full multi-line output:
full, terse - each state will be full/terse
mixed - only states with errors will be full
changes - states with changes and errors will be full
full_id, mixed_id, changes_id and terse_id are also allowed;
when set, the state ID will be used as the name in the output.
state_output_diff
Default: False
The state_output_diff setting changes whether or not the output from
successful states is returned. Useful when even the terse output of these
states is cluttering the logs. Set it to True to ignore them.
state_output_profile
Default: True
The state_output_profile
setting changes whether profile information
will be shown for each state run.
state_output_profile: True
state_output_pct
Default: False
The state_output_pct
setting changes whether success and failure information
as a percent of total actions will be shown for each state run.
state_compress_ids
Default: False
The state_compress_ids
setting aggregates information about states which
have multiple "names" under the same state ID in the highstate output.
state_compress_ids: False
state_aggregate
Default: False
Automatically aggregate all states that have support for mod_aggregate by
setting this option to True. Or pass a list of state module names to
automatically aggregate just those types.
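For example, to aggregate only pkg states (or set the option to True to aggregate all supported state types):
state_aggregate:
  - pkg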
state_events
Default: False
Send progress events as each function in a state run completes execution
by setting to True
. Progress events are in the format
salt/job/<JID>/prog/<MID>/<RUN NUM>
.
yaml_utf8
Default: False
Enable extra routines for the YAML renderer used for states containing UTF characters.
runner_returns
Default: True
If set to False, runner jobs will not be saved to the job cache (defined by
master_job_cache).
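To stop saving runner jobs to the job cache:
runner_returns: False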
Pillar Configuration
pillar_roots
Default:
base:
  - /srv/pillar
Set the environments and directories used to hold pillar sls data. This
configuration is the same as file_roots
:
As of 2017.7.5 and 2018.3.1, it is possible to have __env__ as a catch-all environment.
Example:
pillar_roots:
  base:
    - /srv/pillar
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod
  __env__:
    - /srv/pillar/others
Taking dynamic environments one step further, __env__
can also be used in
the pillar_roots
filesystem path as of version 3005. It will be replaced
with the actual pillarenv
and searched for Pillar data to provide to the
minion. Note this substitution ONLY occurs for the __env__
environment. For
instance, this configuration:
pillar_roots:
  __env__:
    - /srv/__env__/pillar
is equivalent to this static configuration:
pillar_roots:
  dev:
    - /srv/dev/pillar
  test:
    - /srv/test/pillar
  prod:
    - /srv/prod/pillar
on_demand_ext_pillar
New in version 2016.3.6,2016.11.3,2017.7.0.
Default: ['libvirt', 'virtkey']
The external pillars permitted to be used on-demand using pillar.ext
.
on_demand_ext_pillar:
- libvirt
- virtkey
- git
Warning
This will allow minions to request specific pillar data via
pillar.ext
, and may be considered a
security risk. However, pillar data generated in this way will not affect
the in-memory pillar data, so this risk is
limited to instances in which states/modules/etc. (built-in or custom) rely
upon pillar data generated by pillar.ext
.
decrypt_pillar
Default: []
A list of paths to be recursively decrypted during pillar compilation.
decrypt_pillar:
- 'foo:bar': gpg
- 'lorem:ipsum:dolor'
Entries in this list can be formatted either as a simple string, or as a
key/value pair, with the key being the pillar location, and the value being the
renderer to use for pillar decryption. If the former is used, the renderer
specified by decrypt_pillar_default
will be used.
decrypt_pillar_delimiter
Default: :
The delimiter used to distinguish nested data structures in the
decrypt_pillar
option.
decrypt_pillar_delimiter: '|'
decrypt_pillar:
- 'foo|bar': gpg
- 'lorem|ipsum|dolor'
decrypt_pillar_default
Default: gpg
The default renderer used for decryption, if one is not specified for a given
pillar key in decrypt_pillar
.
decrypt_pillar_default: my_custom_renderer
decrypt_pillar_renderers
Default: ['gpg']
List of renderers which are permitted to be used for pillar decryption.
decrypt_pillar_renderers:
- gpg
- my_custom_renderer
gpg_decrypt_must_succeed
Default: False
If this is True
and the ciphertext could not be decrypted, then an error is
raised.
Sending the ciphertext through unchanged is basically never desired. For example,
if a state sets a database password from pillar and gpg rendering fails, then
the state will update the password to the ciphertext, which by definition is
not encrypted.
Warning
The value defaults to False
for backwards compatibility. In the
Chlorine
release, this option will default to True
.
gpg_decrypt_must_succeed: False
pillar_opts
Default: False
The pillar_opts
option adds the master configuration file data to a dict in
the pillar called master
. This can be used to set simple configurations in
the master config file that can then be used on minions.
Note that setting this option to True means the master config file will be
included in all minions' pillars. While this makes global configuration of services
and systems easy, it may not be desired if sensitive data is stored in the master
configuration.
pillar_safe_render_error
Default: True
The pillar_safe_render_error option prevents the master from passing pillar
render errors to the minion. This is on by default because the error could
contain templating data which would give that minion information it shouldn't
have, like a password! When set to True, the error message will only show:
Rendering SLS 'my.sls' failed. Please see master log for details.
pillar_safe_render_error: True
ext_pillar
The ext_pillar option allows for any number of external pillar interfaces to be
called when populating pillar data. The configuration is based on ext_pillar
functions. The available ext_pillar functions can be found herein:
salt/pillar
By default, the ext_pillar interface is not configured to run.
Default: []
ext_pillar:
  - hiera: /etc/hiera.yaml
  - cmd_yaml: cat /etc/salt/yaml
  - reclass:
      inventory_base_uri: /etc/reclass
There are additional details at Pillars
ext_pillar_first
Default: False
This option allows for external pillar sources to be evaluated before
pillar_roots
. External pillar data is evaluated separately from
pillar_roots
pillar data, and then both sets of pillar data are
merged into a single pillar dictionary, so the value of this config option will
have an impact on which key "wins" when there is one of the same name in both
the external pillar data and pillar_roots
pillar data. By
setting this option to True
, ext_pillar keys will be overridden by
pillar_roots
, while leaving it as False
will allow
ext_pillar keys to override those from pillar_roots
.
Note
For a while, this config option did not work as specified above, because of
a bug in Pillar compilation. This bug has been resolved in version 2016.3.4
and later.
pillarenv_from_saltenv
Default: False
When set to True
, the pillarenv
value will assume the value
of the effective saltenv when running states. This essentially makes salt-run
pillar.show_pillar saltenv=dev
equivalent to salt-run pillar.show_pillar
saltenv=dev pillarenv=dev
. If pillarenv
is set on the CLI, it
will override this option.
pillarenv_from_saltenv: True
Note
For salt remote execution commands this option should be set in the Minion
configuration instead.
pillar_raise_on_missing
Default: False
Set this option to True
to force a KeyError
to be raised whenever an
attempt to retrieve a named value from pillar fails. When this option is set
to False
, the failed attempt returns an empty string.
Git External Pillar (git_pillar) Configuration Options
git_pillar_provider
Specify the provider to be used for git_pillar. Must be either pygit2
or
gitpython
. If unset, then both will be tried in that same order, and the
first one with a compatible version installed will be the provider that is
used.
git_pillar_provider: gitpython
git_pillar_base
Default: master
If the desired branch matches this value, and the environment is omitted from
the git_pillar configuration, then the environment for that git_pillar remote
will be base
. For example, in the configuration below, the foo
branch/tag would be assigned to the base
environment, while bar
would
be mapped to the bar
environment.
git_pillar_base: foo
ext_pillar:
- git:
- foo https://mygitserver/git-pillar.git
- bar https://mygitserver/git-pillar.git
git_pillar_branch
Default: master
If the branch is omitted from a git_pillar remote, then this branch will be
used instead. For example, in the configuration below, the first two remotes
would use the pillardata
branch/tag, while the third would use the foo
branch/tag.
git_pillar_branch: pillardata
ext_pillar:
  - git:
    - https://mygitserver/pillar1.git
    - https://mygitserver/pillar2.git:
      - root: pillar
    - foo https://mygitserver/pillar3.git
git_pillar_env
Default: '' (unset)
Environment to use for git_pillar remotes. This is normally derived from the
branch/tag (or from a per-remote env
parameter), but if set this will
override the process of deriving the env from the branch/tag name. For example,
in the configuration below the foo
branch would be assigned to the base
environment, while the bar
branch would need to explicitly have bar
configured as its environment to keep it from also being mapped to the
base
environment.
git_pillar_env: base
ext_pillar:
  - git:
    - foo https://mygitserver/git-pillar.git
    - bar https://mygitserver/git-pillar.git:
      - env: bar
For this reason, this option is recommended to be left unset, unless the use
case calls for all (or almost all) of the git_pillar remotes to use the same
environment irrespective of the branch/tag being used.
git_pillar_root
Default: ''
Path relative to the root of the repository where the git_pillar top file and
SLS files are located. In the below configuration, the pillar top file and SLS
files would be looked for in a subdirectory called pillar
.
git_pillar_root: pillar
ext_pillar:
  - git:
    - master https://mygitserver/pillar1.git
    - master https://mygitserver/pillar2.git
Note
This is a global option. If only one or two repos need to have their files
sourced from a subdirectory, then git_pillar_root
can be
omitted and the root can be specified on a per-remote basis, like so:
ext_pillar:
  - git:
    - master https://mygitserver/pillar1.git
    - master https://mygitserver/pillar2.git:
      - root: pillar
In this example, for the first remote the top file and SLS files would be
looked for in the root of the repository, while in the second remote the
pillar data would be retrieved from the pillar
subdirectory.
git_pillar_ssl_verify
Changed in version 2016.11.0.
Default: False
Specifies whether or not to ignore SSL certificate errors when contacting the
remote repository. The False
setting is useful if you're using a
git repo that uses a self-signed certificate. However, keep in mind that
setting this to anything other than True is considered insecure, and using an
SSH-based transport (if available) may be a better option.
In the 2016.11.0 release, the default config value changed from False
to
True
.
git_pillar_ssl_verify: True
Note
pygit2 only supports disabling SSL verification in versions 0.23.2 and
newer.
git_pillar_global_lock
Default: True
When set to False, if there is an update/checkout lock for a git_pillar
remote and the pid written to it is not running on the master, the lock file
will be automatically cleared and a new lock will be obtained. When set to
True, Salt will simply log a warning when there is a lock present.
On single-master deployments, disabling this option can help automatically deal
with instances where the master was shutdown/restarted during the middle of a
git_pillar update/checkout, leaving a lock in place.
However, on multi-master deployments with the git_pillar cachedir shared via
GlusterFS, nfs, or another network filesystem, it is strongly recommended
not to disable this option as doing so will cause lock files to be removed if
they were created by a different master.
# Disable global lock
git_pillar_global_lock: False
git_pillar_includes
Default: True
Normally, when processing git_pillar remotes, if more than one repo under the same git
section in the ext_pillar
configuration refers to the same pillar
environment, then each repo in a given environment will have access to the
other repos' files to be referenced in their top files. However, it may be
desirable to disable this behavior. If so, set this value to False
.
For a more detailed examination of how includes work, see this
explanation from the git_pillar documentation.
git_pillar_includes: False
git_pillar_update_interval
Default: 60
This option defines the default update interval (in seconds) for git_pillar
remotes. The update is handled within the global loop, hence
git_pillar_update_interval
should be a multiple of loop_interval
.
git_pillar_update_interval: 120
Git External Pillar Authentication Options
These parameters only currently apply to the pygit2
git_pillar_provider
. Authentication works the same as it does
in gitfs, as outlined in the GitFS Walkthrough,
though the global configuration options are named differently to reflect that
they are for git_pillar instead of gitfs.
git_pillar_user
Default: ''
Along with git_pillar_password
, is used to authenticate to HTTPS
remotes.
git_pillar_password
Default: ''
Along with git_pillar_user
, is used to authenticate to HTTPS
remotes. This parameter is not required if the repository does not use
authentication.
git_pillar_password: mypassword
git_pillar_insecure_auth
Default: False
By default, Salt will not authenticate to an HTTP (non-HTTPS) remote. This
parameter enables authentication over HTTP. Enable this at your own risk.
git_pillar_insecure_auth: True
git_pillar_passphrase
Default: ''
This parameter is optional, required only when the SSH key being used to
authenticate is protected by a passphrase.
git_pillar_passphrase: mypassphrase
git_pillar_refspecs
Default: ['+refs/heads/*:refs/remotes/origin/*', '+refs/tags/*:refs/tags/*']
When fetching from remote repositories, by default Salt will fetch branches and
tags. This parameter can be used to override the default and specify
alternate refspecs to be fetched. This parameter works similarly to its
GitFS counterpart, in that it can be
configured both globally and for individual remotes.
git_pillar_refspecs:
- '+refs/heads/*:refs/remotes/origin/*'
- '+refs/tags/*:refs/tags/*'
- '+refs/pull/*/head:refs/remotes/origin/pr/*'
- '+refs/pull/*/merge:refs/remotes/origin/merge/*'
git_pillar_verify_config
Default: True
By default, as the master starts it performs some sanity checks on the
configured git_pillar repositories. If any of these sanity checks fail (such as
when an invalid configuration is used), the master daemon will abort.
To skip these sanity checks, set this option to False
.
git_pillar_verify_config: False
Pillar Merging Options
pillar_source_merging_strategy
Default: smart
The pillar_source_merging_strategy option allows you to configure merging
strategy between different sources. It accepts 5 values:
none
:
It will not do any merging at all and only parse the pillar data from the passed environment and 'base' if no environment was specified.
recurse
:
It will recursively merge data. For example, these two sources:
foo: 42
bar:
  element1: True

bar:
  element2: True
baz: quux
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz: quux
aggregate
:
instructs aggregation of elements between sources that use the #!yamlex renderer.
For example, these two documents:
foo: 42
bar: !aggregate {
  element1: True
}
baz: !aggregate quux

bar: !aggregate {
  element2: True
}
baz: !aggregate quux2
will be merged as:
foo: 42
bar:
  element1: True
  element2: True
baz:
  - quux
  - quux2
overwrite
:
Will use the behaviour of the 2014.1 branch and earlier.
Overwrites elements according to the order in which they are processed.
First pillar processed:
A:
  first_key: blah
  second_key: blah
Second pillar processed:
A:
  third_key: blah
  fourth_key: blah
will be merged as:
A:
  third_key: blah
  fourth_key: blah
smart
(default):
Guesses the best strategy based on the "renderer" setting.
Note
In order for yamlex based features such as !aggregate
to work as expected
across documents using the default smart
merge strategy, the renderer
config option must be set to jinja|yamlex
or similar.
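For example, to disable merging entirely:
pillar_source_merging_strategy: none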
pillar_merge_lists
Default: False
Recursively merge lists by aggregating them instead of replacing them.
pillar_merge_lists: False
pillar_includes_override_sls
New in version 2017.7.6,2018.3.1.
Default: False
Prior to version 2017.7.3, keys from pillar includes
would be merged on top of the pillar SLS. Since 2017.7.3, the includes are
merged together and then the pillar SLS is merged on top of that.
Set this option to True
to return to the old behavior.
pillar_includes_override_sls: True
Pillar Cache Options
pillar_cache
Default: False
A master can cache pillars locally to bypass the expense of having to render them
for each minion on every request. This feature should only be enabled in cases
where pillar rendering time is known to be unsatisfactory and any attendant security
concerns about storing pillars in a master cache have been addressed.
When enabling this feature, be certain to read through the additional pillar_cache_*
configuration options to fully understand the tunable parameters and their implications.
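To enable the pillar cache:
pillar_cache: True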
pillar_cache_ttl
Default: 3600
If and only if a master has set pillar_cache: True
, the cache TTL controls the amount
of time, in seconds, before the cache is considered invalid by a master and a fresh
pillar is recompiled and stored.
The cache TTL does not prevent pillar cache from being refreshed before its TTL expires.
pillar_cache_backend
Default: disk
If and only if a master has set pillar_cache: True, one of several storage providers
can be utilized:
disk
(default):
The default storage backend. This caches rendered pillars to the master cache.
Rendered pillars are serialized and deserialized as msgpack
structures for speed.
Note that pillars are stored UNENCRYPTED. Ensure that the master cache has permissions
set appropriately (sane defaults are provided).
memory
[EXPERIMENTAL]:
An optional backend for pillar caches which uses a pure-Python
in-memory data structure for maximal performance. There are several caveats,
however. First, because each master worker contains its own in-memory cache,
there is no guarantee of cache consistency between minion requests. This
works best in situations where the pillar rarely if ever changes. Secondly,
and perhaps more importantly, this means that unencrypted pillars will
be accessible to any process which can examine the memory of the salt-master!
This may represent a substantial security risk.
pillar_cache_backend: disk