Comments on: Dual Primary DRBD on CentOS 6: GFS2 & Pacemaker
https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/

By: Justin Silver https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-2270 – Sun, 28 Aug 2016 00:24:05 +0000
In reply to Igor.

Hi Igor, that’s correct, although the end of the post isn’t very clear since I just connect to fileserver-1. If you look at the diagram at the top of the post, in this setup both NFS servers are fronted by a load balancer, and that is what the clients should *actually* connect to. Thanks for your comment!
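For example, clients would mount through the load balancer’s address rather than a specific node; the 10.0.0.30 VIP and the /data export below are only placeholders, not values from the post:

    # mount the NFS export via the load balancer VIP, never fileserver-1 or fileserver-2 directly
    mount -t nfs -o vers=3,proto=tcp 10.0.0.30:/data /mnt/data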

By: Igor https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-2267 – Tue, 23 Aug 2016 23:04:08 +0000
You will need a virtual IP resource as well; in the current config, if fileserver-1 dies the clients are screwed.
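A minimal sketch of such a resource in crm syntax, assuming a spare 10.0.0.30 address on eth0 (both placeholders, not from the post); Pacemaker would then move the address to a surviving node if its current owner dies:

    # floating IP that clients connect to
    primitive p_virtual_ip ocf:heartbeat:IPaddr2 \
        params ip="10.0.0.30" cidr_netmask="24" nic="eth0" \
        op monitor interval="30s"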

By: Justin Silver https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1905 – Mon, 04 May 2015 18:50:38 +0000
In reply to norman choe.

Hi Norman,

Sorry for the delayed response – I just got back from a 2-month road trip and didn’t keep up with all my email. For your first question, the name is arbitrary; it’s just an identifier.

The second item is a bug in the guide – I was using a “real” configuration as a guide but copied and pasted to make sure nothing proprietary made it to the Internet, and I forgot to update that section. The ipaddr attribute should be the IP address of each cluster node, used to connect to and restart the machine if necessary.
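To make that concrete, the corrected primitive might look roughly like the following; the 10.0.0.1 address, the fence login, and the key path are placeholders rather than values from the post, and for fence_virsh the port parameter is the libvirt domain name of the guest being fenced:

    # fences fileserver-1; ipaddr is the address used to reach the host that can reboot it
    primitive stonith_fence_virsh_fileserver1 stonith:fence_virsh \
        params action="reboot" ipaddr="10.0.0.1" login="fence" \
            secure="true" identity_file="/root/.ssh/id_rsa" port="fileserver-1" \
        op monitor interval="60s"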

Thanks!

By: norman choe https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1892 – Thu, 05 Mar 2015 15:41:09 +0000
Seriously an excellent resource, one that I’ve actually pointed the folks at DRBD to, and they agreed that you’ve done it right.

I have another Q though in the CRM section (and thanks for your explanation; I see that I don’t need to make changes on “both nodes”, and that when I make changes on one I can see them on the other):

location l_stonith_fence_virsh_machine1_noton_fileserver1 stonith_fence_virsh_fileserver1 -inf: host1
location l_stonith_fence_virsh_machine1_noton_fileserver2 stonith_fence_virsh_fileserver2 -inf: host2

What does the virsh_machine1 part mean there? I realize it’s just a name – is it arbitrary?

Also:

primitive stonith_fence_virsh_fileserver1 stonith:fence_virsh \
    params action="reboot" ipaddr="vm-host" \

… is the “vm-host” relevant? I don’t know the vm-host for a Rackspace machine.

OK, heading back to read more about CRMSH. RTFM, I know!

By: Justin Silver https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1890 – Thu, 26 Feb 2015 19:27:56 +0000
In reply to norman.

It doesn’t have to be root per se, just a user that is allowed to execute `reboot`. This is how one node can “shoot the other one in the head” by taking it offline and assuming the master role until the other node can be synced up. If you only allow logins to this user via your local network interface, it should be fairly secure.
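One possible way to wire that up, as a sketch (the fence account name and subnet are illustrative, not from the post): give a dedicated user sudo rights for reboot only, and limit where it can log in from.

    # /etc/sudoers.d/fence -- the fence account may run reboot as root and nothing else
    fence ALL=(root) NOPASSWD: /sbin/reboot

    # /etc/ssh/sshd_config -- AllowUsers is a whitelist, so list every account that
    # still needs SSH access; the fence user is only accepted from the private subnet
    AllowUsers admin fence@10.0.0.*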

Thanks for reading!

By: norman https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1889 – Thu, 26 Feb 2015 18:57:47 +0000
In reply to Justin Silver.

Ahh, okay. However, since I don’t allow root logins, I’m going to have to edit that a bit.

Or I guess I could allow SSH logins from a single host, but that’s a bit of a pain. Maybe something with sudo.

All in all though, this was a super helpful howto!

By: Justin Silver https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1888 – Wed, 25 Feb 2015 22:53:22 +0000
In reply to norman choe.

Sorry, that should have been `/dev/xvdb1`, I’ve updated the post.

As for STONITH, you are using CRMSH to manage the cluster, not a particular node. That means that when you save the configuration, it should be applied to all nodes in your cluster.
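In other words, roughly:

    # edit and commit the configuration on one node...
    [root@fileserver-1 ~]# crm configure edit

    # ...and the same CIB is visible from the other
    [root@fileserver-2 ~]# crm configure show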

By: norman choe https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1887 – Wed, 25 Feb 2015 21:49:55 +0000
You mention /dev/dvdb1 a couple of times, but it’s /dev/xvdb1 in the snippets. Also, could you fill in the STONITH part a bit? Does that configuration go on both nodes?

By: Justin Silver https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1828 – Mon, 03 Nov 2014 18:05:36 +0000
In reply to Richard.

Richard – as I mentioned in my email, the HA/failover is going to depend on your specific setup. If you use Heartbeat it will handle the assignment of shared IP addresses, so if Server1 is 10.0.0.10 and Server2 is 10.0.0.20, they might have a shared IP address of 10.0.0.30 that the clients actually connect to. You can logically spread the load, but technically all clients are mapped to a single server. One big advantage here is that you don’t have to have dual primaries, which can get pretty complicated.
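With classic Heartbeat that shared address is usually declared in /etc/ha.d/haresources, roughly as below; the eth0 interface and the use of the stock nfs init script are assumptions:

    # /etc/ha.d/haresources -- server1 normally owns the 10.0.0.30 address and the NFS service,
    # and Heartbeat moves both to server2 if server1 fails
    server1 IPaddr::10.0.0.30/24/eth0 nfs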

I ended up going with Pacemaker with a hardware load balancer in front of it, since we needed the load-balanced performance. Both servers were active in a dual-primary configuration, with all the clients connected to an NFS share on the load balancer. If a server went down, it was removed from the pool by the load balancer and the clients were none the wiser.
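As a rough software stand-in for that pool (the actual setup used a hardware load balancer), a TCP-mode HAProxy fragment could look like this; the 10.0.0.x addresses are placeholders, and NFSv3 side channels such as mountd would need the same treatment:

    # /etc/haproxy/haproxy.cfg (fragment) -- round-robin NFS over TCP with health checks
    listen nfs
        bind 10.0.0.30:2049
        mode tcp
        balance roundrobin
        server fileserver-1 10.0.0.10:2049 check
        server fileserver-2 10.0.0.20:2049 check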

Some HA testing was done, but nothing extensive.

By: Richard https://www.justinsilver.com/technology/linux/dual-primary-drbd-centos-6-gfs2-pacemaker/#comment-1827 – Sun, 02 Nov 2014 21:43:36 +0000
I’m about to build this out with bare hardware to play with it.
I guess the one thing I’m missing from the read-through is how to use the cluster for HA as well as load balancing.
I like the idea of splitting the NFS load between the 2 servers, instead of just having the second one mirroring.
But if I have server-a pointed at server-1 and server-b pointed at server-2 (1 and 2 being the NFS cluster) and server-1 goes down, how does server-a fail over to server-2 to continue operating without downtime?
