OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it.
This OpenStack driver uses the shade Python module, which is maintained by the OpenStack Infra team. The module is written to handle the different versions of the various OpenStack tools, so most commands are simply passed through to shade.
There are two ways to configure providers for this driver. The first is to let shade handle everything, configuring it through os-client-config by setting up /etc/openstack/clouds.yml.
clouds:
  democloud:
    region_name: RegionOne
    auth:
      username: 'demo'
      password: secret
      project_name: 'demo'
      auth_url: 'http://openstack/identity'
This cloud can then be referenced in the Salt provider configuration by its name, democloud.
myopenstack:
  driver: openstack
  cloud: democloud
  region_name: RegionOne
This allows a single configuration in /etc/openstack/clouds.yml to be shared by salt-cloud and any other OpenStack tools.
The other method is to specify everything in the provider config instead of using the extra configuration file. This makes it possible to pass salt-cloud configuration to minions only through pillars, without having to write a clouds.yml file on each minion.
myopenstack:
  driver: openstack
  region_name: RegionOne
  auth:
    username: 'demo'
    password: secret
    project_name: 'demo'
    user_domain_name: default
    project_domain_name: default
    auth_url: 'http://openstack/identity'
Or, if you need a vendor profile to set up additional options, it can be specified with profile, which accepts any of the vendor config options.
myrackspace:
  driver: openstack
  profile: rackspace
  auth:
    username: rackusername
    api_key: myapikey
  region_name: ORD
  auth_type: rackspace_apikey
This pulls in the profile for Rackspace and sets up the correct auth_url and the right API versions for its services.
Most of the options for building servers are just passed on to the create_server function from shade.
The Salt-specific options are:
ssh_key_file: The path to the SSH key that should be used to log in to the machine to bootstrap it
ssh_key_name: The name of the keypair in OpenStack
userdata_template: The renderer to use if the userdata is a file that is templated. Default: False
ssh_interface: The interface to use to login for bootstrapping: public_ips, private_ips, floating_ips, fixed_ips
ignore_cidr: Specify a CIDR range of unreachable private addresses for salt to ignore when connecting
centos:
  provider: myopenstack
  image: CentOS 7
  size: ds1G
  ssh_key_name: mykey
  ssh_key_file: /root/.ssh/id_rsa
This is the minimum setup required.
If metadata is used to indicate when the host has finished setting up, wait_for_metadata can be set to make Salt wait until those metadata keys report the expected values.
centos:
  provider: myopenstack
  image: CentOS 7
  size: ds1G
  ssh_key_name: mykey
  ssh_key_file: /root/.ssh/id_rsa
  meta:
    build_config: rack_user_only
  wait_for_metadata:
    rax_service_level_automation: Complete
    rackconnect_automation_status: DEPLOYED
If your OpenStack instances only have private IP addresses and a CIDR range of private addresses is not reachable from the salt-master, you may tell Salt to ignore that range when connecting:
my-openstack-config:
  ignore_cidr: 192.168.0.0/16
Anything else from the create_server docs can be passed through here.
image: Image dict, name or ID to boot with. image is required unless boot_volume is given.
flavor: Flavor dict, name or ID to boot onto.
auto_ip: Whether to take actions to find a routable IP for the server. (defaults to True)
ips: List of IPs to attach to the server (defaults to None)
ip_pool: Name of the network or floating IP pool to get an address from. (defaults to None)
root_volume: Name or ID of a volume to boot from (defaults to None - deprecated, use boot_volume)
boot_volume: Name or ID of a volume to boot from (defaults to None)
terminate_volume: If booting from a volume, whether it should be deleted when the server is destroyed. (defaults to False)
volumes: (optional) A list of volumes to attach to the server
meta: (optional) A dict of arbitrary key/value metadata to store for this server. Both keys and values must be <=255 characters.
files: (optional, deprecated) A dict of files to overwrite on the server upon boot. Keys are file names (i.e. /etc/passwd) and values are the file contents (either as a string or as a file-like object). A maximum of five entries is allowed, and each file must be 10k or less.
reservation_id: a UUID for the set of servers being requested.
min_count: (optional extension) The minimum number of servers to launch.
max_count: (optional extension) The maximum number of servers to launch.
security_groups: A list of security group names
userdata: User data to pass to be exposed by the metadata server. This can be a string or a file-like object.
key_name: (optional extension) name of previously created keypair to inject into the instance.
availability_zone: Name of the availability zone for instance placement.
block_device_mapping: (optional) A list of dictionaries representing legacy block device mappings for this server. See documentation for details.
block_device_mapping_v2: (optional) A list of dictionaries representing block device mappings for this server. See v2 documentation for details.
nics: (optional extension) an ordered list of nics to be added to this server, with information about connected networks, fixed IPs, port etc.
scheduler_hints: (optional extension) arbitrary key-value pairs specified by the client to help boot an instance
config_drive: (optional extension) value for config drive either boolean, or volume-id
disk_config: (optional extension) control how the disk is partitioned when the server is created. possible values are 'AUTO' or 'MANUAL'.
admin_pass: (optional extension) add a user supplied admin password.
timeout: (optional) Seconds to wait, defaults to 60.
reuse_ips: (optional) Whether to attempt to reuse pre-existing floating ips should a floating IP be needed (defaults to True)
network: (optional) Network dict or name or ID to attach the server to. Mutually exclusive with the nics parameter. Can also be a list of network names or IDs or network dicts.
boot_from_volume: Whether to boot from volume. 'boot_volume' implies True, but boot_from_volume=True with no boot_volume is valid and will create a volume from the image and use that.
volume_size: When booting an image from volume, how big should the created volume be? Defaults to 50.
nat_destination: Which network should a created floating IP be attached to, if it's not possible to infer from the cloud's configuration. (Optional, defaults to None)
group: ServerGroup dict, name or id to boot the server in. If a group is provided in both scheduler_hints and in the group param, the group param will win. (Optional, defaults to None)
Anything else that is not in this list can be added to an extras dictionary in the profile, and it will be passed on to the create_server function.
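For example, a profile might pass an arbitrary create_server keyword through extras (the keyword name below is hypothetical, for illustration only):

```yaml
centos:
  provider: myopenstack
  image: CentOS 7
  size: ds1G
  ssh_key_name: mykey
  ssh_key_file: /root/.ssh/id_rsa
  extras:
    some_extra_kwarg: value  # hypothetical key; passed straight through to create_server
```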
List available images for OpenStack
salt-cloud -f avail_images myopenstack
salt-cloud --list-images myopenstack
List available sizes for OpenStack
salt-cloud -f avail_sizes myopenstack
salt-cloud --list-sizes myopenstack
call(conn=None, call=None, kwargs=None)
Call function from shade.
function to call from shade.openstackcloud library
salt-cloud -f call myopenstack func=list_images
salt-cloud -f call myopenstack func=create_network name=mysubnet
Create a single VM from a data dict
destroy(name, conn=None, call=None)
Delete a single VM
Return the first configured instance.
Return a conn object for the passed VM data
Warn if dependencies aren't met.
Return True if we are to ignore the specified IP.
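The CIDR check itself can be sketched with the standard-library ipaddress module (a simplified illustration of the idea, not the actual Salt implementation):

```python
import ipaddress

def ignore_cidr(ip, cidr="192.168.0.0/16"):
    """Return True if ip falls inside the configured unreachable range.

    cidr stands in for the ignore_cidr value from the provider config;
    if it is unset, no address is ignored.
    """
    if not cidr:
        return False
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

# Addresses inside the ignored range are skipped when picking a bootstrap IP
print(ignore_cidr("192.168.1.5"))  # True: inside 192.168.0.0/16
print(ignore_cidr("10.0.0.5"))     # False: outside the ignored range
```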
List networks for OpenStack
salt-cloud -f list_networks myopenstack
Return a list of VMs
salt-cloud -f list_nodes myopenstack
Return a list of VMs with all the information about them
salt-cloud -f list_nodes_full myopenstack
Return a list of VMs with minimal information
salt-cloud -f list_nodes_min myopenstack
Return a list of VMs with the fields from query.selection
salt-cloud -f list_nodes_select myopenstack
list_subnets(conn=None, call=None, kwargs=None)
List subnets in a virtual network
network to list subnets of
salt-cloud -f list_subnets myopenstack network=salt-net
Return either an 'ipv4' (default) or 'ipv6' address, depending on the 'protocol' option. The list of 'ipv4' IPs is filtered by ignore_cidr() to remove any unreachable private addresses.
request_instance(vm_, conn=None, call=None)
Request an instance to be built
show_instance(name, conn=None, call=None)
Get VM on this OpenStack account
name of the instance
salt-cloud -a show_instance myserver
Return the ssh_interface type to connect to. Either 'public_ips' (default) or 'private_ips'.
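For example, to bootstrap over the private network instead of the default public one, the profile can set ssh_interface (a sketch reusing the profile from earlier in this page):

```yaml
centos:
  provider: myopenstack
  image: CentOS 7
  size: ds1G
  ssh_key_name: mykey
  ssh_key_file: /root/.ssh/id_rsa
  ssh_interface: private_ips
```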