
Configuring Cluster Resources and Properties

A resource is anything managed by the cluster. Each resource is represented by a resource agent script.

 

There are four main resource agent classes:

 

  • OCF – Open Cluster Framework agents
  • systemd – systemd unit files; these services are taken out of systemd's control and are started and stopped by the cluster instead
  • heartbeat – legacy scripts from the old Heartbeat clustering stack; avoid these if you can, as most have been replaced by OCF agents
  • stonith – scripts for STONITH (fencing) devices
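A quick way to check which classes and agents are actually available on a node is to list them with pcs (exact subcommand names can vary slightly between pcs versions):

pcs resource standards
pcs resource providers
pcs resource agents ocf:heartbeat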

 

The resource configuration lives in the CIB (Cluster Information Base).

 

Three main types of resources:

 

  • primitive – a single resource that can be managed; it usually only needs to be started once, e.g. for an IP address
  • clone – a resource that should run on multiple nodes at the same time
  • multi-state – (master/slave) a special form of clone; it applies only to specific resources, usually ones involving a master/slave relationship

 

 

group resource type – makes it easier to manage resources by grouping related resources together; it ensures they are started and stopped together and links them in a defined start/stop sequence (a pcs example is shown a little further below)

 

Resources that are part of the same resource group:

 

  • Start in the defined sequence.
  • Stop in the reverse order.
  • Always run on the same cluster node.

 

A group itself consists of primitives, so the resource object types you will work with are: primitives, clones, multi-state resources, and groups.
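As a sketch of how a group could be created with pcs (the resource names webip and webserver here are just illustrative):

pcs resource group add webgroup webip webserver

This puts webip and webserver into a group called webgroup; they then start in that order, stop in reverse order, and always run on the same node.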

 

Resource stickiness

 

This defines what should happen to a resource once the original situation is restored, i.e. when a node has been restored to the cluster after fencing.

 

e.g. should the resource migrate back to the original node, or stay where it is?

 

But generally it is best to avoid resources migrating from node to node.
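A common way to achieve this is to set a default stickiness score, which makes resources prefer to stay on the node where they are currently running. Roughly, one of the following should do it (syntax differs slightly between crm and pcs versions):

crm configure rsc_defaults resource-stickiness=100
pcs resource defaults resource-stickiness=100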

 

 

Creating resources

The resource agent scripts can be located on disk, e.g.:

find / -name IPaddr2

 

cd /usr/lib/ocf/resource.d

 

here we have

 

heartbeat
lvm2
ocfs2
pacemaker
.isolation – for docker wrappers

 

Under heartbeat there is a whole long list of scripts, e.g. IPaddr2.

 

Note: IPaddr2 uses the ip suite of network commands, while the older IPaddr uses ifconfig. You should only be using IPaddr2 nowadays.

 

 

The same list of classes can be seen from the crm shell:

crm ra classes

 

There is also the info command:

crm ra info IPaddr2

 

this displays the metadata of the IPaddr2 resource agent script

 

crm configure primitive newip ocf:heartbeat:IPaddr2 params ip=<ip-address> op monitor interval=10s

 

crm resource show newip

 

this is specific to the crm command
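For comparison, the pcs equivalent would look roughly like this (substitute a real IP address for the placeholder):

pcs resource create newip ocf:heartbeat:IPaddr2 ip=<ip-address> op monitor interval=10s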

 

crm_mon will show you a live view of the resources currently active and running on your cluster
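By default crm_mon keeps refreshing the display; for a single snapshot it can be run with the one-shot flag:

crm_mon -1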

 

cibadmin

 

allows you to query the CIB
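For example, to dump the whole CIB, or just its resources section, as XML (-Q queries, -o limits the scope to one section):

cibadmin -Q
cibadmin -Q -o resources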

 

 

Resource Constraints

 

CAUTION: these are dangerous, use with care!

 

Resources often need to be related to each other; this is done by creating resource constraints:

 

3 types:

 

Location: on which node/s the resource should run – can be done positively or negatively with scores

 

Colocation: with which resource a resource should run

 

Order: after/before which resource it should start (creation examples for all three types follow further below)

 

[root@centos1 corosync]# pcs constraint show
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
[root@centos1 corosync]#
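To create constraints of each of the three types with pcs, commands along these lines should work (newip and webserver are hypothetical resource names):

pcs constraint location newip prefers centos1.localdomain=INFINITY
pcs constraint location newip avoids centos2.localdomain
pcs constraint colocation add webserver with newip INFINITY
pcs constraint order newip then webserver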

 

Typically a score is used, between

 

INFINITY: must happen, and

 

-INFINITY: may not happen

 

Intermediate values: express a greater or lesser preference for it to happen or not

 

To ensure a certain action is never performed, use a negative score. Any score smaller than 0 will ban the resource from a node.

 

crm resource migrate / pcs resource move – these work by enforcing an INFINITY location constraint, so you will need to remove it afterwards using

 

crm resource unmigrate / pcs resource clear
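For example, with the hypothetical newip resource from above:

pcs resource move newip centos2.localdomain
pcs constraint show
pcs resource clear newip

pcs constraint show will now list the location constraint created by the move; pcs resource clear removes it again.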

 

NOTE: with a -INFINITY location constraint the resource will NEVER run on the specified node, not even if it is the last node left in the cluster!

 

 

To display an overview of all currently applying resource constraint scores:

 

[root@centos1 ~]# crm_simulate -sL

 

Current cluster status:
Online: [ centos1.localdomain centos2.localdomain centos3.localdomain ]

 

fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
fence_centos3 (stonith:fence_xvm): Started centos3.localdomain

 

Allocation scores:

pcmk__native_allocate: fence_centos1 allocation score on centos1.localdomain: 0
pcmk__native_allocate: fence_centos1 allocation score on centos2.localdomain: 0
pcmk__native_allocate: fence_centos1 allocation score on centos3.localdomain: 0
pcmk__native_allocate: fence_centos2 allocation score on centos1.localdomain: -INFINITY
pcmk__native_allocate: fence_centos2 allocation score on centos2.localdomain: -INFINITY
pcmk__native_allocate: fence_centos2 allocation score on centos3.localdomain: 0
pcmk__native_allocate: fence_centos3 allocation score on centos1.localdomain: -INFINITY
pcmk__native_allocate: fence_centos3 allocation score on centos2.localdomain: -INFINITY
pcmk__native_allocate: fence_centos3 allocation score on centos3.localdomain: 0

Transition Summary:
[root@centos1 ~]#

 

 

 

 
