salt.states.pcs

Management of Pacemaker/Corosync clusters with PCS

A state module to manage Pacemaker/Corosync clusters with the Pacemaker/Corosync configuration system (PCS)

New in version 2016.11.0.

depends:

pcs

Walkthrough of a complete PCS cluster setup: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/

Requirements:

PCS is installed, the pcsd service is started, and the password for the hacluster user is set and known.
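
A minimal sketch of meeting these requirements with generic Salt states (package and service names can vary by distribution, and the password hash is a placeholder, not a real credential):

pcs_package:
    pkg.installed:
        - name: pcs

pcsd_service:
    service.running:
        - name: pcsd
        - enable: True
        - require:
            - pkg: pcs_package

hacluster_user:
    user.present:
        - name: hacluster
        - password: '$6$placeholder'    # pre-hashed shadow password (placeholder)
        - require:
            - pkg: pcs_package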

Remark on the cibname variable used in the examples:

The use of the cibname variable is optional. Use it only if you want to deploy your changes into a CIB file first and then push that file. This only makes sense if you want to deploy multiple interdependent changes to the cluster at once.

First, the CIB file must be created:

mysql_pcs__cib_present_cib_for_galera:
    pcs.cib_present:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None

Then the CIB file can be modified by creating resources (only one resource is created here for demonstration; see also step 7):

mysql_pcs__resource_present_galera:
    pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
            - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
            - '--master'
        - cibname: cib_for_galera

After modifying the CIB file, it can be pushed to the live CIB in the cluster:

mysql_pcs__cib_pushed_cib_for_galera:
    pcs.cib_pushed:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
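
Because these three states build on each other, standard Salt requisites can enforce the ordering; a minimal sketch adding require to the push state above (requisites are plain Salt, nothing pcs-specific; resource_present would carry an analogous require on the cib_present state):

mysql_pcs__cib_pushed_cib_for_galera:
    pcs.cib_pushed:
        - cibname: cib_for_galera
        - require:
            - pcs: mysql_pcs__resource_present_galera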

Create a cluster from scratch:

  1. Authorize the nodes to each other. This probably won't work on Ubuntu, as it ships a default cluster that must be destroyed before a new cluster can be created. Because that is a little complicated, it's best to just run the cluster_setup below in most cases (if you do need to wipe the default cluster first, see the sketch after this step's example):

    pcs_auth__auth:
        pcs.auth:
            - nodes:
                - node1.example.com
                - node2.example.com
            - pcsuser: hacluster
            - pcspasswd: hoonetorg
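
    If you must wipe a distro-default cluster first, a minimal illustrative sketch using Salt's cmd.run (destructive; the state ID is an assumption, and it should run only on nodes that carry the default config):

    pcs_destroy_default_cluster:
        cmd.run:
            - name: 'pcs cluster destroy'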
    
  2. Do the initial cluster setup:

    pcs_setup__setup:
        pcs.cluster_setup:
            - nodes:
                - node1.example.com
                - node2.example.com
            - pcsclustername: pcscluster
            - extra_args:
                - '--start'
                - '--enable'
            - pcsuser: hacluster
            - pcspasswd: hoonetorg
    
  3. Optional: Set cluster properties:

    pcs_properties__prop_has_value_no-quorum-policy:
        pcs.prop_has_value:
            - prop: no-quorum-policy
            - value: ignore
            - cibname: cib_for_cluster_settings
    
  4. Optional: Set resource defaults:

    pcs_properties__resource_defaults_to_resource-stickiness:
        pcs.resource_defaults_to:
            - default: resource-stickiness
            - value: 100
            - cibname: cib_for_cluster_settings
    
  5. Optional: Set resource op defaults:

    pcs_properties__resource_op_defaults_to_monitor-interval:
        pcs.resource_op_defaults_to:
            - op_default: monitor-interval
            - value: 60s
            - cibname: cib_for_cluster_settings
    
  6. Configure fencing (note: fencing is often not optional on a production-ready cluster!):

    pcs_stonith__created_eps_fence:
        pcs.stonith_present:
            - stonith_id: eps_fence
            - stonith_device_type: fence_eps
            - stonith_device_options:
                - 'pcmk_host_map=node1.example.org:01;node2.example.org:02'
                - 'ipaddr=myepsdevice.example.org'
                - 'power_wait=5'
                - 'verbose=1'
                - 'debug=/var/log/pcsd/eps_fence.log'
                - 'login=hidden'
                - 'passwd=hoonetorg'
            - cibname: cib_for_stonith
    
  7. Add resources to your cluster:

    mysql_pcs__resource_present_galera:
        pcs.resource_present:
            - resource_id: galera
            - resource_type: "ocf:heartbeat:galera"
            - resource_options:
                - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
                - '--master'
            - cibname: cib_for_galera
    
  8. Optional: Add constraints (locations, colocations, orders):

    haproxy_pcs__constraint_present_colocation-vip_galera-haproxy-clone-INFINITY:
        pcs.constraint_present:
            - constraint_id: colocation-vip_galera-haproxy-clone-INFINITY
            - constraint_type: colocation
            - constraint_options:
                - 'add'
                - 'vip_galera'
                - 'with'
                - 'haproxy-clone'
            - cibname: cib_for_haproxy
    

New in version 2016.3.0.

salt.states.pcs.auth(name, nodes, pcsuser='hacluster', pcspasswd='hacluster', extra_args=None)

Ensure all nodes are authorized to the cluster

name

Irrelevant, not used (recommended: pcs_auth__auth)

nodes

a list of nodes which should be authorized to the cluster

pcsuser

user for communication with pcs (default: hacluster)

pcspasswd

password for pcsuser (default: hacluster)

extra_args

list of extra args for the 'pcs cluster auth' command; there are currently none, so this exists only for compatibility.

Example:

pcs_auth__auth:
    pcs.auth:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
        - extra_args: []
salt.states.pcs.cib_present(name, cibname, scope=None, extra_args=None)

Ensure that a CIB-file with the content of the current live CIB is created

Should be run on one cluster node only (there may be races)

name

Irrelevant, not used (recommended: {{formulaname}}__cib_present_{{cibname}})

cibname

name/path of the file containing the CIB

scope

specific section of the CIB (default: None)

extra_args

additional options for creating the CIB-file

Example:

mysql_pcs__cib_present_cib_for_galera:
    pcs.cib_present:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
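
Because this state should run on one node only, a common pattern is to guard it with Jinja in the SLS file; an illustrative sketch (the fqdn grain check and the hostname are assumptions about your environment):

{% if grains['fqdn'] == 'node1.example.com' %}
mysql_pcs__cib_present_cib_for_galera:
    pcs.cib_present:
        - cibname: cib_for_galera
{% endif %}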
salt.states.pcs.cib_pushed(name, cibname, scope=None, extra_args=None)

Ensure that a CIB file is pushed if it has changed since it was created with pcs.cib_present

Should be run on one cluster node only (there may be races)

name

Irrelevant, not used (recommended: {{formulaname}}__cib_pushed_{{cibname}})

cibname

name/path of the file containing the CIB

scope

specific section of the CIB

extra_args

additional options for pushing the CIB file

Example:

mysql_pcs__cib_pushed_cib_for_galera:
    pcs.cib_pushed:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
salt.states.pcs.cluster_node_present(name, node, extra_args=None)

Add a node to the Pacemaker cluster via PCS. Should be run on one cluster node only (there may be races). Can only be run on an already set-up/added node.

name

Irrelevant, not used (recommended: pcs_setup__node_add_{{node}})

node

node that should be added

extra_args

list of extra args for the 'pcs cluster node add' command

Example:

pcs_setup__node_add_node1.example.com:
    pcs.cluster_node_present:
        - node: node1.example.com
        - extra_args:
            - '--start'
            - '--enable'
salt.states.pcs.cluster_setup(name, nodes, pcsclustername='pcscluster', extra_args=None, pcsuser='hacluster', pcspasswd='hacluster', pcs_auth_extra_args=None, wipe_default=False)

Set up a Pacemaker cluster on the nodes. Should be run on one cluster node only to avoid race conditions. This performs auth as well as setup, so it can be run in place of the auth state. On Debian/Ubuntu it is recommended to skip the auth state for a new cluster and run only this state, because of the initial cluster config those distributions install by default.

name

Irrelevant, not used (recommended: pcs_setup__setup)

nodes

a list of nodes which should be set up

pcsclustername

Name of the Pacemaker cluster

extra_args

list of extra args for the 'pcs cluster setup' command

pcsuser

The username for authenticating the cluster (default: hacluster)

pcspasswd

The password for authenticating the cluster (default: hacluster)

pcs_auth_extra_args

Extra args to be passed to the auth function in case of reauth.

wipe_default

If set, remove the default configuration files that Debian-based operating systems install (default: False).

Example:

pcs_setup__setup:
    pcs.cluster_setup:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsclustername: pcscluster
        - extra_args:
            - '--start'
            - '--enable'
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
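
On Debian/Ubuntu, the wipe_default flag described above can additionally clear the distribution's default configuration during setup; a sketch varying the example:

pcs_setup__setup:
    pcs.cluster_setup:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsclustername: pcscluster
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
        - wipe_default: True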
salt.states.pcs.constraint_present(name, constraint_id, constraint_type, constraint_options=None, cibname=None)

Ensure that a constraint is created

Should be run on one cluster node only (there may be races). Can only be run on a node with a functional pacemaker/corosync.

name

Irrelevant, not used (recommended: {{formulaname}}__constraint_present_{{constraint_id}})

constraint_id

name for the constraint (create the constraint manually first to find out the auto-generated name)

constraint_type

constraint type (location, colocation, order)

constraint_options

options for creating the constraint

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

haproxy_pcs__constraint_present_colocation-vip_galera-haproxy-clone-INFINITY:
    pcs.constraint_present:
        - constraint_id: colocation-vip_galera-haproxy-clone-INFINITY
        - constraint_type: colocation
        - constraint_options:
            - 'add'
            - 'vip_galera'
            - 'with'
            - 'haproxy-clone'
        - cibname: cib_for_haproxy
salt.states.pcs.prop_has_value(name, prop, value, extra_args=None, cibname=None)

Ensure that a property in the cluster is set to a given value

Should be run on one cluster node only (there may be races)

name

Irrelevant, not used (recommended: pcs_properties__prop_has_value_{{prop}})

prop

name of the property

value

value of the property

extra_args

additional options for the pcs property command

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

pcs_properties__prop_has_value_no-quorum-policy:
    pcs.prop_has_value:
        - prop: no-quorum-policy
        - value: ignore
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_defaults_to(name, default, value, extra_args=None, cibname=None)

Ensure a resource default in the cluster is set to a given value

Should be run on one cluster node only (there may be races). Can only be run on a node with a functional pacemaker/corosync.

name

Irrelevant, not used (recommended: pcs_properties__resource_defaults_to_{{default}})

default

name of the default resource property

value

value of the default resource property

extra_args

additional options for the pcs command

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

pcs_properties__resource_defaults_to_resource-stickiness:
    pcs.resource_defaults_to:
        - default: resource-stickiness
        - value: 100
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_op_defaults_to(name, op_default, value, extra_args=None, cibname=None)

Ensure a resource operation default in the cluster is set to a given value

Should be run on one cluster node only (there may be races). Can only be run on a node with a functional pacemaker/corosync.

name

Irrelevant, not used (recommended: pcs_properties__resource_op_defaults_to_{{op_default}})

op_default

name of the operation default resource property

value

value of the operation default resource property

extra_args

additional options for the pcs command

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

pcs_properties__resource_op_defaults_to_monitor-interval:
    pcs.resource_op_defaults_to:
        - op_default: monitor-interval
        - value: 60s
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_present(name, resource_id, resource_type, resource_options=None, cibname=None)

Ensure that a resource is created

Should be run on one cluster node only (there may be races). Can only be run on a node with a functional pacemaker/corosync.

name

Irrelevant, not used (recommended: {{formulaname}}__resource_present_{{resource_id}})

resource_id

name for the resource

resource_type

resource type (e.g. ocf:heartbeat:IPaddr2 or VirtualIP)

resource_options

additional options for creating the resource

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

mysql_pcs__resource_present_galera:
    pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
            - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
            - '--master'
        - cibname: cib_for_galera
salt.states.pcs.stonith_present(name, stonith_id, stonith_device_type, stonith_device_options=None, cibname=None)

Ensure that a fencing resource is created

Should be run on one cluster node only (there may be races). Can only be run on a node with a functional pacemaker/corosync.

name

Irrelevant, not used (recommended: pcs_stonith__created_{{stonith_id}})

stonith_id

name for the stonith resource

stonith_device_type

name of the stonith agent (e.g. fence_eps or fence_xvm)

stonith_device_options

additional options for creating the stonith resource

cibname

use a cached CIB file named cibname instead of the live CIB

Example:

pcs_stonith__created_eps_fence:
    pcs.stonith_present:
        - stonith_id: eps_fence
        - stonith_device_type: fence_eps
        - stonith_device_options:
            - 'pcmk_host_map=node1.example.org:01;node2.example.org:02'
            - 'ipaddr=myepsdevice.example.org'
            - 'power_wait=5'
            - 'verbose=1'
            - 'debug=/var/log/pcsd/eps_fence.log'
            - 'login=hidden'
            - 'passwd=hoonetorg'
        - cibname: cib_for_stonith