In light of Python 2.7 reaching its End of Life (EOL) on January 1, 2020, Python 2 support will be dropped from SaltStack no earlier than the 3001 release (that is, in the 3001 release or a later one). This decision is pending further community discussion.
The following are known issues for the 2019.2.0 release and will be fixed for 2019.2.1:
In earlier releases, this was considered valid usage in Python 2, assuming that data was a list or dictionary containing keys/values which are unicode types:

/etc/foo.conf:
  file.managed:
    - source: salt://foo.conf.jinja
    - template: jinja
    - context:
        data: {{ data }}
One common use case for this is when using one of Salt's custom Jinja filters which return lists or dictionaries, such as the ipv4 filter. In Python 2, Jinja will render the unicode string types within the list/dictionary with the "u" prefix (e.g. {u'foo': u'bar'}). While not valid YAML, earlier releases would successfully load these values.
As of this release, the above SLS would result in an error message. To allow for a data structure to be dumped directly into your SLS file, use the tojson Jinja filter:
/etc/foo.conf:
  file.managed:
    - source: salt://foo.conf.jinja
    - template: jinja
    - context:
        data: {{ data|tojson }}
Another example where the new filter needs to be used is the following state:

grafana_packages:
  pkg.installed:
    - names: {{ server.pkgs }}
This will fail when pkgs is a list or dictionary. You will need to update the state:
grafana_packages:
  pkg.installed:
    - names: {{ server.pkgs|tojson }}
This approach has also been tested successfully with the yaml and json filters.
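The tojson fix works because JSON output is valid YAML, so the rendered SLS loads cleanly regardless of the Python version. A minimal sketch of the difference in plain Python (not Salt; json.dumps is used here as a stand-in for what the tojson filter emits):

```python
import json

# A dict whose repr() under Python 2 would contain u-prefixed strings,
# e.g. {u'foo': u'bar'} -- which is not valid YAML.
data = {"foo": "bar"}

# json.dumps produces plain JSON, which YAML parsers accept.
rendered = json.dumps(data)
print(rendered)
```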
Note

This filter was added in Jinja 2.9. However, fear not! The 2018.3.3 release added a tojson filter which will be used if this filter is not already present, making it available on platforms like RHEL 7 and Ubuntu 14.04 which provide older versions of Jinja.
Important

The json_encode_dict and json_encode_list filters do not actually dump the results to JSON. Since tojson accomplishes what those filters were designed to do, they are now deprecated and will be removed in the 3000 release. The tojson filter should be used in all cases where json_encode_dict and json_encode_list would have been used.
Along with the ansible modules included in the Oxygen release, support for running playbooks has been added in 2019.2.0 with the playbooks function. This also includes an ansible playbooks state module, which can be used on a targeted host to run ansible playbooks, or in an orchestration state runner.
install nginx:
  ansible.playbooks:
    - name: install.yml
    - git_repo: git://github.com/gtmanfred/playbook.git
    - git_kwargs:
        rev: master
The playbooks module also includes the ability to specify a git repo to clone and use, or a specific directory to use, when running the playbook.
Beginning with this release, Salt provides much broader support for a variety of network operating systems, and features for configuration manipulation or operational command execution.
Added in the previous release, 2018.3.0, the capabilities of the netbox Execution Module have been extended with a much longer list of available features:
netbox.create_circuit
netbox.create_circuit_provider
netbox.create_circuit_termination
netbox.create_circuit_type
netbox.create_device
netbox.create_device_role
netbox.create_device_type
netbox.create_interface
netbox.create_interface_connection
netbox.create_inventory_item
netbox.create_ipaddress
netbox.create_manufacturer
netbox.create_platform
netbox.create_site
netbox.delete_interface
netbox.delete_inventory_item
netbox.delete_ipaddress
netbox.get_circuit_provider
netbox.get_interfaces
netbox.get_ipaddresses
netbox.make_interface_child
netbox.make_interface_lag
netbox.openconfig_interfaces
netbox.openconfig_lacp
netbox.update_device
netbox.update_interface
Besides this Execution Module, Salt users can load data directly from NetBox into the device Pillar, via the netbox External Pillar module.
Netmiko, the multi-vendor library to simplify Paramiko SSH connections to network devices, is now officially integrated into Salt. The network community can use it via the netmiko Proxy Module or directly from any Salt Minion, passing the connection credentials - see the documentation for the netmiko Execution Module.
Arista switches can now be managed via the pyeapi Proxy Module, and RPC requests can be executed via the pyeapi Execution Module.
While support for SSH-based operations was added in the release codenamed Carbon (2016.11), the new nxos_api Proxy Module and nxos_api Execution Module allow management of Cisco Nexus switches via the NX-API.
It is important to note that these modules don't have third party dependencies, therefore they can be used straight away from any Salt Minion. This also means that the user may be able to install the regular Salt Minion on the Nexus switch directly and manage the network devices like a regular server.
The new ciscoconfparse Execution Module can be used for basic configuration parsing, audit or validation for a variety of network platforms having Cisco IOS style configuration (one space indentation), as well as brace-delimited configuration style.
The iosconfig Execution Module can be used for various configuration manipulations of Cisco IOS style configuration, such as configuration cleanup, tree representation of the config, etc.
Beginning with this release, NAPALM users are able to execute scheduled commits (broadly known as "commit at") and "commit confirmed" (revert the configuration change unless the user confirms by running another command). These features are available via the commit_in, commit_at, revert_in, or revert_at arguments for the net.load_config and net.load_template execution functions, or netconfig.managed.
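As a hedged sketch, a "commit confirmed" via netconfig.managed might look like the following (the state ID, template path, and timer value are hypothetical; consult the module documentation for the accepted time formats):

```yaml
ntp_config_with_rollback:
  netconfig.managed:
    - template_name: salt://templates/ntp.jinja
    - revert_in: 10m    # revert automatically unless the commit is confirmed
```

The change would then be kept by confirming the commit with net.confirm_commit before the timer expires.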
The counterpart execution functions net.confirm_commit and net.cancel_commit, as well as the State functions netconfig.commit_cancelled and netconfig.commit_confirmed, can be used to confirm or cancel a commit.
Please note that the commit confirmed and commit cancelled functionalities are available for any platform, whether or not the network device supports the features natively. However, be cautious and make sure you read and understand the caveats before using them in production.
The template_name argument of the net.load_template Execution and netconfig.managed State function now supports a list of templates. This is particularly useful when a very large Jinja template is split into multiple smaller and easier to read templates that can eventually be reused in other States. For example, the following syntax is now correct for managing the configuration of NTP and BGP simultaneously, using two different templates and changing the device configuration through one single commit:
manage_bgp_and_ntp:
  netconfig.managed:
    - template_name:
        - salt://templates/bgp.jinja
        - salt://templates/ntp.jinja
    - context:
        bgp: {{ pillar.bgp }}
        ntp: {{ pillar.ntp }}
Beginning with this release, any NAPALM command executed when running under a NAPALM Proxy Minion supports the force_reconnect magic argument.

Proxy Minions generally establish a connection with the remote network device at the time of Minion startup, and that connection is then used indefinitely. If one needed to execute a command on the device but connect using different parameters (due to various causes, e.g., being unable to authenticate the user specified in the Pillar because the authentication system - say, TACACS+ - is not available, or the DNS resolver is currently down and one would like to temporarily use the IP address instead, etc.), this previously implied updating the Pillar data and restarting the Proxy Minion process.

In particular cases like these, you can pass the force_reconnect=True keyword argument, together with the alternative connection details, to force the command to be executed over a separate connection.
For example, if the usual command is salt '*' net.arp, you can use the following to connect using a different username instead:

salt '*' net.arp username=my-alt-usr force_reconnect=True
The same goes for any of the other configuration arguments required for the NAPALM connection - see the NAPALM proxy documentation.
To replace various configuration chunks, you can use the new net.replace_pattern execution function, or the netconfig.replace_pattern State function. For example, if you want to update your configuration and rename a BGP policy referenced in many places, you can do so by running:

salt '*' net.replace_pattern OLD-POLICY-CONFIG new-policy-config
Similarly, you can also replace entire configuration blocks using the net.blockreplace function.
The net.save_config function can be used to save the configuration of the managed device into a file. For the State subsystem, the netconfig.saved function has been added, which provides a complete list of facilities for managing the target file where the configuration of the network device can be saved.
For example, to back up the running configuration of each device under its own directory tree:

/var/backups/{{ opts.id }}/running.cfg:
  netconfig.saved:
    - source: running
    - makedirs: true
All the new network automation modules mentioned above are directly exposed to NAPALM users, without requiring any architectural changes; at most, some additional requirements may need to be installed:
The features from the existing junos Execution Module are available via the following functions:

napalm.junos_cli: Execute a CLI command and return the output as text or a Python dictionary.
napalm.junos_rpc: Execute an RPC request on the remote Junos device, and return the result as a Python dictionary, easy to digest and manipulate.
napalm.junos_install_os: Install the given image on the device.
napalm.junos_facts: The complete list of Junos facts collected by the junos-eznc underlying library.
Note

To be able to use these features, you must ensure that you meet the requirements for the junos module. As junos-eznc is already a dependency of NAPALM, you will only have to install jxmlease.
Usage examples:

salt '*' napalm.junos_cli 'show arp' format=xml
salt '*' napalm.junos_rpc get-interface-information
The features from the newly added netmiko Execution Module are available as:

napalm.netmiko_commands: Execute one or more commands on the remote device, via Netmiko, and return the output as text.
napalm.netmiko_config: Load a list of configuration commands on the remote device, via Netmiko. The commands can equally be loaded from a local or remote path, and passed through Salt's template rendering pipeline (by default using Jinja as the template rendering engine).
Usage examples:

salt '*' napalm.netmiko_commands 'show version' 'show interfaces'
salt '*' napalm.netmiko_config config_file=https://bit.ly/2sgljCB
For various operations and extension modules, the following functions have been added to expose functionality from the pyeapi module:
napalm.pyeapi_run_commands: Execute a list of commands on the Arista switch, via the pyeapi library.
napalm.pyeapi_config: Configure the Arista switch with the specified commands, via the pyeapi Python library. Similarly to napalm.netmiko_config, you can use both local and remote files, with or without templating.
Usage examples:

salt '*' napalm.pyeapi_run_commands 'show version' 'show interfaces'
salt '*' napalm.pyeapi_config config_file=salt://path/to/template.jinja
In the exact same way as above, the user has absolute control by using the following primitives to manage Cisco Nexus switches via the NX-API:
napalm.nxos_api_show: Execute one or more show (non-configuration) commands, and return the output as plain text or a Python dictionary.
napalm.nxos_api_rpc: Execute arbitrary RPC requests via the Nexus API.
napalm.nxos_api_config: Configure the Nexus switch with the specified commands, via the NX-API. The commands can be loaded from the command line, or from a local or remote file, eventually rendered using the templating engine of choice (default: jinja).
Usage examples:

salt '*' napalm.nxos_api_show 'show bgp sessions' 'show processes' raw_text=False
The following list of functions may be handy when manipulating Cisco IOS or Junos style configurations:
napalm.config_filter_lines: Return a list of detailed matches, for the configuration blocks (parent-child relationship) whose parent and children respect the regular expressions provided.
napalm.config_find_lines: Return the configuration lines that match the regular expression provided.
napalm.config_lines_w_child: Return the configuration lines that match a regular expression, having child lines matching the child regular expression.
napalm.config_lines_wo_child: Return the configuration lines that match a regular expression, that don't have child lines matching the child regular expression.
Note

These functions require the ciscoconfparse Python library to be installed.
Usage example (find interfaces that are administratively shut down):

salt '*' napalm.config_lines_w_child 'interface' 'shutdown'
For Cisco IOS style configuration, the following features have been added to the napalm Execution Module:

napalm.config_tree: Transform Cisco IOS style configuration to a structured Python dictionary, using the configuration of the interrogated network device.
napalm.config_merge_tree: Return the merge tree of the configuration of the managed network device with a different configuration to be merged with (without actually loading any changes on the device).
napalm.config_merge_text: Return the merge result (as text) of the configuration of the managed network device with a different configuration to be merged with.
napalm.config_merge_diff: Return the merge diff after merging the configuration of the managed network device with a different configuration (without actually loading any changes on the device).
Reusing the already available connection credentials provided for NAPALM, the following features are now available:

napalm.scp_put: Transfer files and directories to the remote network device.
napalm.scp_get: Transfer files and directories from the remote network device to the localhost of the Minion.
The peeringdb Execution Module is useful for gathering information about other networks you can potentially peer with, and for automatically establishing BGP sessions. For example, given just a specific AS number, the rest of the data (i.e., IP addresses, locations where the remote network is available, etc.) is retrieved from PeeringDB, and the session configuration is automated with minimal to no effort (typing the IP addresses manually can be both tedious and error prone).
Docker containers can now be treated as actual minions without installing salt in the container, using the new docker proxy minion.

This proxy minion uses the docker executor to pass commands to the docker container using docker.call. Any state module calls are passed through the corresponding function from the docker module.
proxy:
  proxytype: docker
  name: keen_proskuriakova
You can now dynamically generate a Salt-SSH roster from the terraform resources defined with terraform-provider-salt.

This allows you to combine both terraform and Salt-SSH to provision and configure your hosts. See the terraform roster for an example of how to set this up and use it.
Starting in this release, if a custom grains function accepts a variable named grains, the Grains dictionary of the already compiled grains will be passed in. Because of the non-deterministic order in which grains are rendered, the only grains that can be relied upon to be passed in are core.py grains, since those are compiled first.
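As an illustrative sketch (the module filename and grain names are hypothetical), a custom grains function consuming the compiled core grains could look like:

```python
# _grains/role.py -- hypothetical custom grains module.

def role_from_os(grains):
    """Derive a 'role' grain from the already-compiled grains.

    Because the function signature contains an argument named `grains`,
    Salt passes in the dictionary of grains compiled so far. Only core.py
    grains (os, os_family, kernel, ...) are guaranteed to be present.
    """
    if grains.get("os_family") == "Debian":
        return {"role": "webserver"}
    return {"role": "generic"}
```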
virtual Grain

This release improves the accuracy of the virtual grain when running Salt in a nested virtualization environment (e.g. a systemd-nspawn container inside a VM) with virt-what installed.
Until now, the virtual grain was determined by matching against all output lines of virt-what instead of individual items, which could lead to imprecise results (e.g. reporting HyperV inside a systemd-nspawn container running within a Hyper-V-based VM).
Salt modules (states, execution modules, returners, etc.) can now have custom environment variables applied when running shell commands. This can be configured by setting a system-environment key either in Grains or Pillar. The syntax is as follows:
system-environment:
  <type>:
    <module>:
      # Namespace for all functions in the module
      _:
        <key>: <value>
      # Namespace only for a particular function in the module
      <function>:
        <key>: <value>
<type> would be the type of module (i.e. states, modules, etc.).

<module> would be the module's name.
Note

The module name can be either the virtual name (e.g. pkg), or the physical name (e.g. yumpkg).
<function> would be the function name within that module. To apply environment variables to all functions in a given module, use an underscore (i.e. _) as the function name. For example, to set the same environment variable for all package management functions, the following could be used:
system-environment:
  modules:
    pkg:
      _:
        SOMETHING: for_all
To set an environment variable in pkg.install only:
system-environment:
  modules:
    pkg:
      install:
        LC_ALL: en_GB.UTF-8
To set the same variable but only for SUSE minions (which use zypper for package management):
system-environment:
  modules:
    zypper:
      install:
        LC_ALL: en_GB.UTF-8
In APT, some packages have an associated list of packages which they provide. This allows one to do things like run apt-get install foo when the real package name is foo1.0, and get the right package installed.
Salt has traditionally designated as "virtual packages" those which are provided by an installed package, but for which there is no real package by that name installed. Given the above example, if one were to run a pkg.installed state for a package named foo, then pkg.list_pkgs would show a package version of simply 1 for package foo, denoting that it is a virtual package.
However, while this makes certain aspects of package management convenient, there are issues with this approach that make relying on "virtual packages" problematic. For instance, Ubuntu has four different mutually-conflicting packages for nginx:

nginx-core
nginx-full
nginx-light
nginx-extras
All four of these provide nginx. Yet there is an nginx package as well, which has no actual content and merely has dependencies on any one of the above four packages. If one used nginx in a pkg.installed state, and none of the above four packages were installed, then the nginx metapackage would be installed, which would pull in nginx-core. Later, if nginx were used in a pkg.removed state, the nginx metapackage would be removed, leaving nginx-core installed. The result would be that, since nginx-core provides nginx, Salt would now see nginx as an installed virtual package, and the pkg.removed state would fail. Moreover, nginx would not actually have been removed, since nginx-core would remain installed.
Starting with this release, Salt will no longer support using "virtual package" names in pkg states, and package names will need to be specified using the proper package name. The pkg.list_repo_pkgs function can be used to find matching package names in the repositories, given a package name (or glob):
# salt myminion pkg.list_repo_pkgs 'nginx*'
myminion:
    ----------
    nginx:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-common:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-core:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-core-dbg:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-doc:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-extras:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-extras-dbg:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-full:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-full-dbg:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-light:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
    nginx-light-dbg:
        - 1.10.3-0ubuntu0.16.04.2
        - 1.9.15-0ubuntu1
Alternatively, the newly-added pkg.show function can be used to get more detailed information about a given package and help determine what package name is correct:
# salt myminion pkg.show 'nginx*' filter=description,provides
myminion:
    ----------
    nginx:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                small, powerful, scalable web/proxy server
        1.9.15-0ubuntu1:
            ----------
            Description:
                small, powerful, scalable web/proxy server
    nginx-common:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                small, powerful, scalable web/proxy server - common files
        1.9.15-0ubuntu1:
            ----------
            Description:
                small, powerful, scalable web/proxy server - common files
    nginx-core:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (core version)
            Provides:
                httpd, httpd-cgi, nginx
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (core version)
            Provides:
                httpd, httpd-cgi, nginx
    nginx-core-dbg:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (core version) - debugging symbols
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (core version) - debugging symbols
    nginx-doc:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                small, powerful, scalable web/proxy server - documentation
        1.9.15-0ubuntu1:
            ----------
            Description:
                small, powerful, scalable web/proxy server - documentation
    nginx-extras:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (extended version)
            Provides:
                httpd, httpd-cgi, nginx
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (extended version)
            Provides:
                httpd, httpd-cgi, nginx
    nginx-extras-dbg:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (extended version) - debugging symbols
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (extended version) - debugging symbols
    nginx-full:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (standard version)
            Provides:
                httpd, httpd-cgi, nginx
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (standard version)
            Provides:
                httpd, httpd-cgi, nginx
    nginx-full-dbg:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (standard version) - debugging symbols
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (standard version) - debugging symbols
    nginx-light:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (basic version)
            Provides:
                httpd, httpd-cgi, nginx
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (basic version)
            Provides:
                httpd, httpd-cgi, nginx
    nginx-light-dbg:
        ----------
        1.10.3-0ubuntu0.16.04.2:
            ----------
            Description:
                nginx web/proxy server (basic version) - debugging symbols
        1.9.15-0ubuntu1:
            ----------
            Description:
                nginx web/proxy server (basic version) - debugging symbols
When a minion starts up, it sends a notification on the event bus with a tag that looks like this: salt/minion/<minion_id>/start. For historical reasons the minion also sends a similar event with an event tag like this: minion_start. This duplication can cause a lot of clutter on the event bus when there are many minions. Set enable_legacy_startup_events: False in the minion config to ensure only the salt/minion/<minion_id>/start events are sent.
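For example, a one-line minion config fragment opting out of the legacy event:

```yaml
# /etc/salt/minion
enable_legacy_startup_events: False
```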
The new enable_legacy_startup_events minion config option defaults to True, but will default to False beginning with the 3001 release of Salt.
The Salt Syndic currently sends an old-style syndic_start event as well, and likewise respects enable_legacy_startup_events.
It is now possible to override a global failhard setting with a state-level failhard setting. This is most useful in cases where the global failhard is set to True and you do not want execution to stop for a specific state that could fail; for that state, set failhard to False.
This also allows for the use of onfail*-requisites, which would previously be ignored when a global failhard was set to True.
This is a deviation from previous behavior, where the global failhard setting always resulted in an immediate stop whenever any state failed (regardless of whether the failing state had a failhard setting of its own, or whether any onfail*-requisites were used).
file.serialize State

The new serializer_opts and deserializer_opts options allow for more granular control over the way in which the dataset is serialized. See the documentation for the file.serialize state for more information.
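A hedged sketch of what passing serializer options might look like (the file path and dataset are hypothetical; the exact options accepted depend on the chosen serializer):

```yaml
/etc/app/config.json:
  file.serialize:
    - serializer: json
    - dataset:
        foo: bar
    - serializer_opts:
        - indent: 4
```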
file.patch State Rewritten

The file.patch state has been rewritten with several new features:

Patch sources can now be remote files instead of only salt:// URLs
Multi-file patches are now supported
Patch files can be templated

In addition, it is no longer necessary to specify what the hash of the patched file should be.
Pass a list of hosts using the no_proxy minion config option to bypass an HTTP proxy.

Note

This key does nothing unless proxy_host is configured, and it does not support any kind of wildcards.

no_proxy: [ '127.0.0.1', 'foo.tld' ]
slack Engine

The output returned to Slack from functions run using this engine is now formatted using that function's proper outputter. Earlier releases would format the output in YAML for all functions except for when states were run.
wtmp Beacon

A new key, action, has been added to the events fired by this beacon, which will contain either the string login or logout. This will simplify reactors which use this beacon's data, as it will no longer be necessary to check the integer value of the type key to know whether the event is a login or logout.

Additionally, in the event that your platform has a non-standard utmp.h, you can now configure which type numbers indicate a login and logout.

See the wtmp beacon documentation for more information.
Support for LocalClient's expr_form argument has been removed. Please use tgt_type instead. This change was made due to numerous reports of confusion among community members, since the targeting method is published to minions as tgt_type, and appears as tgt_type in the job cache as well.

Those who are using the LocalClient (either directly, or implicitly via a netapi module) need to update their code to use tgt_type.
>>> import salt.client
>>> local = salt.client.LocalClient()
>>> local.cmd("*", "cmd.run", ["whoami"], tgt_type="glob")
{'jerry': 'root'}
The master_shuffle configuration option is deprecated as of the 2019.2.0 release. Please use the random_master option instead.
The napalm_network module has been changed as follows:

Support for the template_path argument has been removed from the net.load_template function. This is because support for NAPALM native templates has been dropped.
The pip module has been changed as follows:

Support for the no_chown option has been removed from the pip.install function.
The trafficserver module has been changed as follows:

The trafficserver.match_var function was removed. Please use trafficserver.match_metric instead.
The trafficserver.read_var function was removed. Please use trafficserver.read_config instead.
The trafficserver.set_var function was removed. Please use trafficserver.set_config instead.
The win_update module has been removed. It has been replaced by win_wua.
The win_wua module has been changed as follows:

The win_wua.download_update and win_wua.download_updates functions have been removed. Please use win_wua.download instead.
The win_wua.install_update and win_wua.install_updates functions have been removed. Please use win_wua.install instead.
The win_wua.list_update function has been removed. Please use win_wua.get instead.
The win_wua.list_updates function has been removed. Please use win_wua.list instead.
The vault external pillar has been changed as follows:

Support for the profile argument was removed. Any options passed up until and following the first path= are discarded.
The cache roster has been changed as follows:

Support for roster_order as a list or tuple has been removed. As of the 2019.2.0 release, roster_order must be a dictionary.
The roster_order option now includes IPv6 in addition to IPv4 for the private, public, global or local settings. The syntax for these settings has changed to ipv4-* or ipv6-*, respectively.
The docker state module has been removed. In 2017.7.0, the states from this module were split into four separate state modules:

docker_container
docker_image
docker_volume
docker_network

The docker module remained for backward-compatibility, but it has now been removed. Please update SLS files to use the new state names:
docker.running => docker_container.running
docker.stopped => docker_container.stopped
docker.absent => docker_container.absent
docker.network_present => docker_network.present
docker.network_absent => docker_network.absent
docker.image_present => docker_image.present
docker.image_absent => docker_image.absent
docker.volume_present => docker_volume.present
docker.volume_absent => docker_volume.absent
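For instance, an SLS entry would be updated like this (the state ID and image are hypothetical):

```yaml
# Old (removed):
#   my_container:
#     docker.running:
#       - image: nginx:latest

my_container:
  docker_container.running:
    - image: nginx:latest
```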
The docker_network state module has been changed as follows:

The driver option has been removed from docker_network.absent. It had no functionality, as the state simply deletes the specified network name if it exists.
The deprecated ref option has been removed from the git.detached state. Please use rev instead.
The k8s state module has been removed in favor of the kubernetes state module. Please update SLS files as follows:

In place of k8s.label_present, use kubernetes.node_label_present
In place of k8s.label_absent, use kubernetes.node_label_absent
In place of k8s.label_folder_absent, use kubernetes.node_label_folder_absent
Support for the template_path option in the netconfig.managed state has been removed. This is because support for NAPALM native templates has been dropped.
Support for the no_chown option in the pip.installed state has been removed.
The trafficserver.set_var state has been removed. Please use trafficserver.config instead.
Support for the no_chown option in the virtualenv.managed state has been removed.
The win_update state module has been removed. It has been replaced by win_wua.
Support for virtual packages has been removed from the pkg state module.
The cloud utils module has had the following changes:

The cache_nodes_ip function has been removed. The function was incomplete and non-functional.
The vault utils module has had the following changes:

Support for specifying Vault connection data within a 'profile' has been removed. Please see the vault execution module documentation for details on the new configuration schema.
Salt-Cloud has been updated to use the pypsexec Python library instead of the winexe executable. Both winexe and pypsexec run remote commands against Windows OSes. Since winexe is not packaged for every system, it has been deprecated in favor of pypsexec.
Salt-Cloud has deprecated the use of impacket in favor of smbprotocol. This change was made because impacket is not compatible with Python 3.
Salt-SSH now works across different major Python versions: Python 2.7 through Python 3.x are supported transparently. The requirement, however, is that the Salt Master has Salt installed, including all related dependencies, for both Python 2 and Python 3; everything needs to be importable from the respective Python environment.
Salt-SSH can bundle up an arbitrary version of Salt. If, for example, an old box is running an outdated and unsupported Python 2.6, it is still possible to access it from a Salt Master running Python 3.5 or newer. This feature requires additional configuration in /etc/salt/master as follows:
ssh_ext_alternatives:
  2016.3:                       # Namespace, can be actually anything.
    py-version: [2, 6]          # Constraint to specific interpreter version
    path: /opt/2016.3/salt      # Main Salt installation
    dependencies:               # List of dependencies and their installation paths
      jinja2: /opt/jinja2
      yaml: /opt/yaml
      tornado: /opt/tornado
      msgpack: /opt/msgpack
      certifi: /opt/certifi
      singledispatch: /opt/singledispatch.py
      singledispatch_helpers: /opt/singledispatch_helpers.py
      markupsafe: /opt/markupsafe
      backports_abc: /opt/backports_abc.py
It is also possible to use several alternative versions of Salt. You can, for instance, generate a minimal tarball using runners and include that. This is only possible when that specific Salt version is also available on the master machine, although it does not need to be installed together with the older Python interpreter.
SaltSSH now supports private key passphrases. You can configure this with:
--priv-passwd for the salt-ssh CLI
salt_priv_passwd for the Salt master configuration file
priv_passwd for the Salt roster file
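For illustration, the master-config and roster forms might look like this (host, key path, and passphrase values are hypothetical):

```yaml
# /etc/salt/master -- passphrase for the master's SSH private key
salt_priv_passwd: my-key-passphrase

# roster entry -- per-host private key and passphrase
web1:
  host: 192.0.2.10
  priv: /etc/salt/pki/master/ssh/web1.pem
  priv_passwd: my-key-passphrase
```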
salt State Module (used in orchestration)
The test option now defaults to None. A value of True or False set here is passed to the state being run and can be used to override a test: True option set in the minion's config file. In previous releases the minion's config option would take precedence, and it was impossible to run an orchestration on a minion with test mode set to True in the config file.
If a minion is not in permanent test mode due to the config file, and the test argument here is left as None, then a value of test=True on the command line is passed correctly to the minion to run an orchestration in test mode. At present it is not possible to pass test=False on the command line to override a minion in permanent test mode, so the test: False option must still be set in the orchestration file.
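As a sketch, an orchestration that forces test mode on its targets might look like this (the target pattern and SLS name are hypothetical):

```yaml
# orch/deploy_preview.sls -- hypothetical orchestration file
deploy_preview:
  salt.state:
    - tgt: 'web*'
    - sls: apps.deploy
    - test: True   # passed through to the targeted minions
```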
event.send State
The event.send state does not know the results of the sent event, so it returns changed on every state run. It can now be set to return changed or unchanged.
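A minimal event.send state for reference (the event tag and data payload are made-up examples):

```yaml
notify_complete:
  event.send:
    - name: myco/orch/complete   # hypothetical event tag
    - data:
        status: ok
```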
influxdb_user.present Influxdb User Module State
The password parameter has been changed to passwd to remove the name collision with the influxdb client configuration (client_kwargs), allowing management of users when authentication is enabled on the influxdb instance.
Old behavior:
influxdb_user.present:
    - name: exampleuser
    - password: exampleuserpassword
    - user: admin
    - password: adminpassword
New behavior:
influxdb_user.present:
    - name: exampleuser
    - passwd: exampleuserpassword
    - user: admin
    - password: adminpassword
winrepo_cache_expire_min Windows Package Definitions Caching
The winrepo_cache_expire_min default has been changed from 0 to 1800 (30 minutes).
For example, when you run a highstate the package definitions are normally updated; now, if the package definitions are younger than winrepo_cache_expire_min (30 minutes), they will not be refreshed, reducing the time taken to run a second highstate. To get the old behaviour, change the value back to 0 in the minion configuration file. This also affects the behaviour of other functions which default to refreshing. pkg.refresh_db will always refresh the package definitions.
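Concretely, restoring the pre-2019.2.0 behaviour is a one-line minion config change:

```yaml
# /etc/salt/minion
winrepo_cache_expire_min: 0   # always refresh Windows package definitions
```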
groupattribute support
Previously, if Salt was using external authentication against a FreeIPA LDAP system, it could only search for users via the accountattributename field. This release adds an additional search using the groupattribute field as well. The original accountattributename search is done first, then the groupattribute search, allowing for backward compatibility with previous Salt releases.
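A hedged sketch of the two master-config options involved; the attribute values shown (uid, memberOf) are examples and depend on your LDAP schema:

```yaml
# /etc/salt/master -- only the search-attribute options are shown
auth.ldap.accountattributename: uid   # searched first
auth.ldap.groupattribute: memberOf    # searched second (new in this release)
```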
When a Jinja include template name begins with ./ or ../, the import will be relative to the importing file.
Prior practice required the following construct:
{% from tpldir ~ '/foo' import bar %}
A more "natural" construct is now supported:
{% from './foo' import bar %}
Comparatively, when importing from a parent directory, prior practice was:
{% from tpldir ~ '/../foo' import bar %}
New style for importing from a parent directory:
{% from '../foo' import bar %}
Previously, salt-api was not supported on Microsoft Windows platforms. Now it is! salt-api provides a RESTful interface to a running Salt system. It allows for viewing minions, runners, and jobs, as well as running execution modules and runners of a running Salt system through a REST API that returns JSON. See the Salt-API documentation: https://docs.saltproject.io/en/latest/topics/netapi/index.html
The Job ID (JID) can now be optionally included in both the minion and master logs by including jid in either the log_fmt_console or log_fmt_logfile configuration option:
log_fmt_console: "[%(levelname)-8s] %(jid)s %(message)s"
This will cause the JID to be included in any log entries that are related to a particular Salt job. The JID will be included using the default format, [JID: %(jid)s], but can be overridden with the log_fmt_jid configuration item:
log_fmt_jid: "[JID: %(jid)s]"
A password is no longer required with runas under normal circumstances. The password option is only needed if the minion process runs under a restricted (non-administrator) account; in that case, a password is required when using the runas argument to run a command as a different user.
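For reference, a state that uses runas (the user name is hypothetical); no password option is needed as long as the minion process runs with administrator rights:

```yaml
check_identity:
  cmd.run:
    - name: whoami
    - runas: serviceaccount   # hypothetical restricted user
```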
salt.modules.ciscoconfparse_mod
salt.modules.jira
salt.modules.google_chat
salt.modules.netmiko
salt.modules.peeringdb
salt.modules.purefb
netbox
salt.proxy.netmiko
salt.proxy.nxos_api
salt.proxy.pyeapi