Proxy Minion interface module for managing VMware ESXi clusters.
Dependencies:

  - pyVmomi
  - jsonschema
To use this integration proxy module, please configure the following:
Proxy minions get their configuration from Salt's Pillar. This can now happen from the proxy's configuration file.
Example pillars:
userpass mechanism:

  proxy:
    proxytype: esxcluster
    cluster: <cluster name>
    datacenter: <datacenter name>
    vcenter: <ip or dns name of parent vcenter>
    mechanism: userpass
    username: <vCenter username>
    passwords: (required if userpass is used)
      - first_password
      - second_password
      - third_password
sspi mechanism:

  proxy:
    proxytype: esxcluster
    cluster: <cluster name>
    datacenter: <datacenter name>
    vcenter: <ip or dns name of parent vcenter>
    mechanism: sspi
    domain: <user domain>
    principal: <host kerberos principal>
proxytype: To use this proxy module, set this to esxcluster.
cluster: Name of the managed cluster. Required.

datacenter: Name of the datacenter the managed cluster is in. Required.

vcenter: The host name or IP of the VMware vCenter server through which the
datacenter is managed. Required.

mechanism: The mechanism used to connect to the vCenter server. Supported
values are userpass and sspi. Required.
Note: Connections are attempted using all (username, password) combinations
on proxy startup.
username: The username used to log in to the host, such as root. Required if
mechanism is userpass.

passwords: A list of passwords used to try to log in to the vCenter server. At
least one password in this list is required if mechanism is userpass. When the
proxy comes up, it tries the passwords in the order listed.

domain: User domain. Required if mechanism is sspi.

principal: Kerberos principal. Required if mechanism is sspi.
protocol: If the ESXi host is not using the default protocol, set this value
to an alternate protocol. Default is https.

port: If the ESXi host is not using the default port, set this value to an
alternate port. Default is 443.
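Putting the optional keys together with the required ones, a userpass pillar that overrides the connection protocol and port might look like the following sketch (all values are placeholders):

```yaml
proxy:
  proxytype: esxcluster
  cluster: <cluster name>
  datacenter: <datacenter name>
  vcenter: <ip or dns name of parent vcenter>
  mechanism: userpass
  username: <vCenter username>
  passwords:
    - first_password
  # Optional overrides; omit them to use the defaults (https / 443).
  protocol: http
  port: 80
```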
After your pillar is in place, you can test the proxy. The proxy can run on any machine that has network connectivity to your Salt Master and to the vCenter server in the pillar. SaltStack recommends that the machine running the salt-proxy process also run a regular minion, though it is not strictly necessary.
To start a proxy minion, you first need to establish its identity, <proxy_id>:

  salt-proxy --proxyid <proxy_id>
On the machine that will run the proxy, make sure a proxy configuration file
is present. By default this is /etc/salt/proxy. If it lives in a different
location, the <configuration_folder> has to be specified when running the
proxy:

  salt-proxy --proxyid <proxy_id> -c <configuration_folder>
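The proxy configuration file itself can be minimal; assuming the proxy only needs to know where to find its master, a sketch might be:

```yaml
# Minimal proxy configuration file (e.g. /etc/salt/proxy).
master: <ip or hostname of salt-master>
```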
Once the proxy is running, it connects back to the specified master and
individual commands can be run against it:

  # Master - minion communication
  salt <cluster_name> test.ping

  # Test vcenter connection
  salt <cluster_name> vsphere.test_vcenter_connection
Associated states are documented in salt.states.esxcluster. Look there to find
an example Pillar structure as well as an example .sls file for configuring an
ESX cluster from scratch.
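As a rough illustration only (the authoritative state names and the full cluster_config schema live in salt.states.esxcluster; the cluster_configured state and the keys below are assumptions here), such an .sls file might look like:

```yaml
# Hypothetical sketch; consult salt.states.esxcluster for the real
# state signature and configuration schema.
configure-cluster:
  esxcluster.cluster_configured:
    - cluster_config:
        drs:
          enabled: true
        ha:
          enabled: true
```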
find_credentials: Cycle through all of the possible credentials and return the
first one that works.
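The credential cycling described above can be sketched as follows. This is illustrative, not the module's actual implementation: find_working_credentials is a made-up name, and try_login is a stand-in for the real vCenter connection attempt.

```python
def find_working_credentials(username, passwords, try_login):
    """Try each password in order; return the first pair that logs in.

    try_login(username, password) should return True on success.
    """
    for password in passwords:
        if try_login(username, password):
            return username, password
    raise RuntimeError("none of the supplied passwords worked")


# Example with a stub login function that only accepts "second_password":
creds = find_working_credentials(
    "root",
    ["first_password", "second_password"],
    lambda user, pw: pw == "second_password",
)
# creds is ("root", "second_password")
```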
get_details: Returns the cached connection details.
init: Called when the proxy starts up; the login protocol and port are cached
at this point.
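The caching behaviour of init and get_details follows the common Salt proxy-module pattern of a module-level dict populated at startup. A minimal sketch, with illustrative keys and defaults taken from the documentation above (this is not the actual esxcluster code):

```python
# Module-level cache, populated once when the proxy starts.
DETAILS = {}


def init(opts):
    """Cache connection details from the proxy pillar at startup."""
    proxy_conf = opts["proxy"]
    DETAILS["vcenter"] = proxy_conf["vcenter"]
    DETAILS["mechanism"] = proxy_conf["mechanism"]
    # protocol and port fall back to the documented defaults.
    DETAILS["protocol"] = proxy_conf.get("protocol", "https")
    DETAILS["port"] = proxy_conf.get("port", 443)
    return True


def get_details():
    """Return the details cached by init()."""
    return DETAILS
```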
ping: Returns True.

CLI Example:

  salt esx-cluster test.ping
shutdown: Shuts down the connection to the proxy device. For this proxy,
shutdown is a no-op.