Welcome to 0.8.5! Some important things have happened in this release that you'll want to take note of. The first thing that may trip you up when installing directly is that Paramiko is no longer a dependency, and botocore and sshpass are new dependencies. Read on to see what else has happened.
The documentation for Salt Cloud can be found on Read the Docs: https://salt-cloud.readthedocs.io
Salt Cloud can be downloaded and installed via PyPI:
https://pypi.python.org/packages/source/s/salt-cloud/salt-cloud-0.8.5.tar.gz
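If pip is available on your system, a minimal install sketch (assuming a standard pip setup) is:

pip install salt-cloud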
Some packages have been made available for salt-cloud, and more are on their way. Packages for Arch and FreeBSD are being made available thanks to the work of Christer Edwards, and packages for RHEL and Fedora are being created by Clint Savage. The Ubuntu PPA is being managed by Sean Channel. Package availability will be announced on the salt mailing list.
In 0.8.4, the default deploy script was set to bootstrap-salt-minion. Since then, the Salt Bootstrap script has been extended to install more than just minions, and as such, has been renamed. It is now called bootstrap-salt, and has been renamed in Salt Cloud accordingly. Check out the salt-bootstrap project for more details:
https://github.com/saltstack/salt-bootstrap
Just another reminder: For those of you still using "os" in your profiles, this option was renamed to "script" in 0.8.2, and your configuration should be updated accordingly.
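As a sketch of the change (the profile name and values here are illustrative, not from your configuration):

my-ec2-profile:
  provider: aws
  # old form, renamed in 0.8.2:
  # os: bootstrap-salt
  # new form:
  script: bootstrap-salt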
If you like running the latest and greatest version of salt-bootstrap, but you're sick of tracking down the source directory to update it, a new option has been added to update it for you.
salt-cloud -u
salt-cloud --update-bootstrap
Bear in mind that this updates to the latest (unstable) version, so use with caution.
As mentioned above, AWS instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function has been added which renames both the instance, and the salt keys.
salt-cloud -a rename mymachine newname=yourmachine
AWS allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed.
salt-cloud -a enable_term_protect mymachine
salt-cloud -a disable_term_protect mymachine
It has become increasingly common for users to set up multi-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file.
make_master: True
This will cause Salt Cloud to generate master keys for the instance, and tell salt-bootstrap to install the salt-master package, in addition to the salt-minion package.
The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map:
master:
  user: root
  interface: 0.0.0.0
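Putting the two together, a hypothetical profile that builds a combined master/minion with custom master settings might look like the following (the profile name and values are illustrative):

aws-master:
  provider: aws
  image: ami-1624987f
  size: Micro Instance
  ssh_username: ec2-user
  script: bootstrap-salt
  make_master: True
  master:
    user: root
    interface: 0.0.0.0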
When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the --keep-tmp option can be added:
salt-cloud -p myprofile mymachine --keep-tmp
For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable).
Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script:
aws-amazon:
  provider: aws
  image: ami-1624987f
  size: Micro Instance
  ssh_username: ec2-user
  script: bootstrap-salt
  script_args: -c /tmp/
This has also been tested to work with pipes, if needed:
script_args: '| head'
When an instance is destroyed, its IP address is usually recycled back into the IP pool. When such an IP is reassigned to you, and the old key is still in your known_hosts file, the deploy script will fail due to mismatched SSH keys. To mitigate this, add the following to your main cloud configuration:
delete_sshkeys: True