brook davis
2015-01-21 20:06:47 UTC
Hi,
I've got a master-slave resource and I'd like to achieve the following
behavior with it:
* Only ever run (as master or slave) on 2 specific nodes (out of N
possible nodes). These nodes are predetermined and are specified at
resource creation time.
* Prefer one specific node (of the 2 selected for running the resource)
for starting in the Master role.
* Upon a failover event, promote the secondary node to master.
* Do not re-promote the failed node back to master, should it come back
online.
The last requirement is the one I'm currently struggling with. I can
force the resource to run on only the 2 nodes I want (out of 3 possible
nodes), but I can't get it to "stick" on the secondary node as master
after a failover and recovery. That is, when I take the original
master offline, the resource promotes correctly on the secondary, but if
I bring the original node back online, the resource is demoted on the
secondary and promoted back to master on the original node. I'd like to
avoid that last bit.
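In case it helps anyone reproduce this: the allocation and promotion
scores driving that fail-back should be visible with the stock
crm_simulate tool (nothing custom on my end); I haven't included its
output here.

        # show the scores the policy engine computes against the live CIB
        crm_simulate -sL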
Here are the relevant bits of my CRM configuration:
primitive NIMHA-01 ocf:heartbeat:nimha \
        op start interval="0" timeout="60s" \
        op monitor interval="30s" role="Master" \
        op stop interval="0" timeout="60s" \
        op monitor interval="45s" role="Slave"
ms NIMMS-01 NIMHA-01 \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true" target-role="Started" is-managed="true"
location prefer-elmy-inf NIMMS-01 5: elmyra
location prefer-elmyra-ms NIMMS-01 \
        rule $id="prefer-elmyra-rule" $role="Master" 10: #uname eq elmyra
location prefer-pres-inf NIMMS-01 5: president
location prefer-president-ms NIMMS-01 \
        rule $id="prefer-president-rule" $role="Master" 5: #uname eq president
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-42f2063" \
        cluster-infrastructure="corosync" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1421798334" \
        default-resource-stickiness="200" \
        symmetric-cluster="false"
I've set symmetric-cluster="false" to get "opt-in" behavior, per the
Pacemaker docs. From my understanding, these location constraints should
allow the resource to run on those two nodes, initially preferring
'elmyra' as Master. My question, then: is there a way to apply
stickiness to the Master role? I've tried adding explicit stickiness
settings (high numbers and INFINITY) via default-resource-stickiness, on
the actual "ms" resource, and on the primitive, all to no avail.
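For concreteness, adding stickiness to the "ms" resource itself meant a
meta attribute along these lines (resource-stickiness is the only new
bit relative to the config above; INFINITY stands in for the various
values I tried):

ms NIMMS-01 NIMHA-01 \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true" target-role="Started" \
        is-managed="true" resource-stickiness="INFINITY"

None of the variations of that made any difference to the fail-back
behavior.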
Does anyone have ideas on how to achieve stickiness on the Master role
in such a configuration?
Thanks for any and all help in advance,
brook
P.S. Please ignore/forgive the no-quorum-policy and stonith-enabled
settings in my configuration... I know they're not best practice. I
don't think they should affect the answer to the above question, though,
based on my understanding of the system.
_______________________________________________
Pacemaker mailing list: ***@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org