LPIC3 DIPLOMA Linux Clustering – LAB NOTES LESSON 14: OCFS2 (Oracle Cluster File System 2)


 

OCFS2 (Oracle Cluster File System 2) 

 

 

Oracle Cluster File System 2 (OCFS2) is a general-purpose journaling file system that has been fully integrated into the mainline Linux kernel since version 2.6. OCFS2 allows you to store application binaries, data files, and databases on shared storage devices. All nodes in a cluster have concurrent read and write access to the file system.

 

A user space control daemon, managed via a clone resource, provides the integration with the HA stack, in particular with Corosync and the Distributed Lock Manager (DLM).

 

So that you can download and install the correct version for your kernel, check your kernel version using the “uname -r” command.

 

# uname -r
2.6.9-22.EL
#

 

 

To install on Debian/Ubuntu:

 

apt-get install ocfs2-tools ocfs2console
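
After installing, you can optionally confirm that the OCFS2 kernel module is available for your running kernel (a quick sanity check; the module is normally just called ocfs2, but exact packaging varies by distribution):

modinfo ocfs2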

 

 

Firewalling for OCFS2:

 

Update the iptables rules on all nodes that use OCFS2 to allow traffic on the OCFS2 cluster port 7777:

iptables -I INPUT -p udp -m udp --dport 7777 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 7777 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

 

and restart the firewall:

 

service iptables restart
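
To verify that the rule is in place on each node (a simple check against the default filter table):

iptables -L INPUT -n | grep 7777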

 

 

 

How To Configure OCFS2

 

 

OCFS2 cluster nodes are configured through a file (/etc/ocfs2/cluster.conf). This file has all the settings for the OCFS2 cluster.

 

 

On RHEL/CentOS the main configuration file is located at /etc/sysconfig/o2cb.

 

However, on Debian-based Linux it is located at /etc/default/o2cb (I don’t think either location makes much sense; why not just put it in a directory called /etc/ocfs2?)

 

Open the file: /etc/default/o2cb and change the following line:

 

O2CB_ENABLED=false

 

to O2CB_ENABLED=true

 

This enables the o2cb cluster stack to start up at boot time (which you will obviously want if you are using such a file system).
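
If you prefer to make the change non-interactively, for example from a deployment script, a sed one-liner like the following should work (a sketch only; it assumes the line is still present in its default O2CB_ENABLED=false form):

sed -i 's/^O2CB_ENABLED=false/O2CB_ENABLED=true/' /etc/default/o2cb
grep O2CB_ENABLED /etc/default/o2cb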

 

 

Oracle RAC Clusters:

 

Note that although OCFS2 can be used to share datafiles between RAC nodes, the current recommendation is to avoid this and use ASM to control the shared disks directly as raw devices, or via the ASMLib software.

 

 

How To Set Up the OCFS2 cluster.conf

 

You’ll need to manually create this path:

 

mkdir /etc/ocfs2

 

This is an example of cluster.conf

 

node:
    ip_port = 7777
    ip_address = 192.168.1.115
    number = 1
    name = one
    cluster = ocfs2

node:
    ip_port = 7777
    ip_address = 192.168.1.116
    number = 2
    name = two
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2

 

The name value for each node must be that node’s actual hostname.

 

The name under the cluster: stanza is the name you are calling your cluster. Make sure the cluster name configured in /etc/default/o2cb matches it (by default it is set to ocfs2, but it can be changed if you like).

 

Also make sure that the cluster value under each node: stanza is correct; it is the name of the OCFS2 cluster you want those nodes to belong to.

 

Copy cluster.conf to every node in the cluster.
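
For example, using the node names from the cluster.conf above (one and two), you could push the file from node one with scp, assuming root SSH access between the nodes and that /etc/ocfs2 already exists on the target:

scp /etc/ocfs2/cluster.conf root@two:/etc/ocfs2/cluster.conf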

 

Then start the o2cb service on every node in the cluster.
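
On Debian-based systems this is typically done with the o2cb init script (exact service names can differ between distributions):

service o2cb start
service o2cb status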

 

Format your OCFS2 partition:

 

eg

 

mkfs.ocfs2 -b 4k -C 32K -L "MyOCFS2Cluster" -N 4 /dev/sda2

 

The "-N 4" creates slots for up to 4 nodes. Note that the number of slots can be increased later but cannot be decreased, so you shouldn’t set it higher than you need.
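
If you do need more node slots later, they can usually be added with tunefs.ocfs2 (a sketch against the example device above; traditionally this is done with the volume unmounted on all nodes):

tunefs.ocfs2 -N 8 /dev/sda2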

 

 

IMPORTANT!

 

If you mount the OCFS2 file system manually for testing, be sure to unmount it again before starting to use it via cluster resources!
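
A minimal manual test might look like this, assuming the device from the mkfs.ocfs2 example above and that the o2cb cluster is already online (mounted.ocfs2 ships with ocfs2-tools):

mkdir -p /mnt/shared
mount -t ocfs2 /dev/sda2 /mnt/shared
mounted.ocfs2 -f     # lists OCFS2 volumes and the nodes that have them mounted
umount /mnt/shared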

 

 

Using OCFS2 with Pacemaker

 

Before you can create OCFS2 volumes, you must configure the following resources as services in the cluster: DLM and a STONITH resource.

 

The following procedure uses the crm shell to configure the cluster resources.

 

NOTE: You need to configure a fencing device. Without a STONITH mechanism (like external/sbd) in place the configuration will fail.
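
For example, if you are fencing with SBD, a minimal STONITH primitive might look like the following (a sketch only; it assumes the shared SBD device and the sbd daemon are already configured on every node):

primitive stonith-sbd stonith:external/sbd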

 

To mount an OCFS2 volume with the High Availability software, configure an ocfs2 file system resource in the cluster.

 

 

 

This is an example of the procedure using crm configure:

 

primitive ocfs2-1 ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/mnt/shared" \
    fstype="ocfs2" options="acl" \
    op monitor interval="20" timeout="40" \
    op start timeout="60" op stop timeout="60" \
    meta target-role="Started"

 

 

Also make sure you add the ocfs2-1 primitive to the g-storage group you created for dlm (the Distributed Lock Manager).

 

eg

 

modgroup g-storage add ocfs2-1

 

The add subcommand appends the new group member by default. Because of the base group’s internal colocation and ordering, Pacemaker will only start the ocfs2-1 resource on nodes that also have a dlm resource already running.

 

The Distributed Lock Manager (DLM) in the kernel is the base component used by OCFS2, GFS2, Cluster MD, and Cluster LVM (lvmlockd) to provide active-active storage at each respective layer.

 

As OCFS2, GFS2, Cluster MD, and Cluster LVM (lvmlockd) all use DLM, it is enough to configure one resource for DLM. As the DLM resource runs on all nodes in the cluster it is configured as a clone resource.

 

If you have a setup that includes both OCFS2 and Cluster LVM, configuring one DLM resource for both OCFS2 and Cluster LVM is enough.

 

 

An example with crm configure:

 

Enter the following to create the primitive resource for DLM:

 

primitive dlm ocf:pacemaker:controld \
    op monitor interval="60" timeout="60"

 

 

 

Then create a base group for the DLM resource and further storage-related resources:

 

group g-storage dlm

Clone the g-storage group so that it runs on all nodes:

 

eg

 

clone cl-storage g-storage \
meta interleave=true target-role=Started

 

 

Verify the configuration with:

 

show

 

then, if everything looks correct, run:

 

commit

 

and then exit crm configure.
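
Back at the normal shell you can confirm that the cloned storage group (and the dlm and ocfs2-1 resources inside it) has started on all nodes (output layout varies with the crmsh and Pacemaker versions):

crm status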

 

Remember: If you set the global cluster option stonith-enabled to false for testing or troubleshooting purposes, the DLM resource and all services depending on it (such as Cluster LVM, GFS2, and OCFS2) will fail to start.
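
To check whether the property has been set explicitly, and to switch fencing back on, you can use crm configure:

crm configure show | grep stonith-enabled
crm configure property stonith-enabled=true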

 

 

 

 

About o2cb

 

o2cb is the default cluster stack of the OCFS2 file system.

 

It is an in-kernel cluster stack that includes a node manager (o2nm) to keep track of the nodes in the cluster, a disk heartbeat agent (o2hb) to detect node liveness, a network agent (o2net) for intra-cluster node communication, and a distributed lock manager (o2dlm) to keep track of lock resources.

 

o2cb also includes a synthetic file system, dlmfs, to allow applications to access the in-kernel dlm.
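
Once the o2cb stack is online, these components show up as loaded kernel modules and mounted pseudo file systems (the module and mount names below are the usual ones but can vary between kernel versions):

lsmod | grep ocfs2
mount | grep -E 'configfs|dlmfs'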

 

 

 

 

About DLM (Distributed Lock Manager)

 

 


 

DLM uses the cluster membership services from Pacemaker, which run in user space. Therefore, DLM needs to be configured as a clone resource that is present on each node in the cluster.

 


 


 

 


 
