commit 75934649b85259d1559eabca40be820095643239
Author: Andrew Price <anprice@redhat.com>
Date:   Tue Feb 12 09:58:11 2019 +0000

    gfs2.5: General updates and layout improvements
    
    - Update the manpage to mention lvmlockd and don't mention gfs2_quota
      or gfs_controld (both obsolete).
    - Simplify the setup instructions and refer to distribution-specific
      docs and support requirements.
    - Rearrange the "See also" section for relevance and incorporate the
      references from the setup section.
    
    Signed-off-by: Andrew Price <anprice@redhat.com>

diff --git a/gfs2/man/gfs2.5 b/gfs2/man/gfs2.5
index 56d1a008..436abc09 100644
--- a/gfs2/man/gfs2.5
+++ b/gfs2/man/gfs2.5
@@ -21,6 +21,20 @@ mounts which are equivalent to mounting a read-only block device and as
 such can neither recover a journal nor write to the filesystem, so do not
 require a journal assigned to them.
 
+The GFS2 documentation has been split into a number of sections:
+
+\fBmkfs.gfs2\fP(8) Create a GFS2 filesystem
+.br
+\fBfsck.gfs2\fP(8) The GFS2 filesystem checker
+.br
+\fBgfs2_grow\fP(8) Growing a GFS2 filesystem
+.br
+\fBgfs2_jadd\fP(8) Adding a journal to a GFS2 filesystem
+.br
+\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks
+.br
+\fBgfs2_edit\fP(8) A GFS2 debug tool (use with caution)
+
 .SH MOUNT OPTIONS
 
 .TP
@@ -200,220 +214,55 @@ versa.  Finally, when first enabling this option on a filesystem that had been
 previously mounted without it, you must make sure that there are no outstanding
 cookies being cached by other software, such as NFS.
 
-.SH BUGS
-
-GFS2 doesn't support \fBerrors=\fP\fIremount-ro\fR or \fBdata=\fP\fIjournal\fR.
-It is not possible to switch support for user and group quotas on and
-off independently of each other. Some of the error messages are rather
-cryptic, if you encounter one of these messages check firstly that gfs_controld
-is running and secondly that you have enough journals on the filesystem
-for the number of nodes in use.
-
-.SH SEE ALSO
-
-\fBmount\fP(8) for general mount options,
-\fBchmod\fP(1) and \fBchmod\fP(2) for access permission flags,
-\fBacl\fP(5) for access control lists,
-\fBlvm\fP(8) for volume management,
-\fBccs\fP(7) for cluster management,
-\fBumount\fP(8),
-\fBinitrd\fP(4).
-
-The GFS2 documentation has been split into a number of sections:
-
-\fBgfs2_edit\fP(8) A GFS2 debug tool (use with caution)
-\fBfsck.gfs2\fP(8) The GFS2 file system checker
-\fBgfs2_grow\fP(8) Growing a GFS2 file system
-\fBgfs2_jadd\fP(8) Adding a journal to a GFS2 file system
-\fBmkfs.gfs2\fP(8) Make a GFS2 file system
-\fBgfs2_quota\fP(8) Manipulate GFS2 disk quotas 
-\fBgfs2_tool\fP(8) Tool to manipulate a GFS2 file system (obsolete)
-\fBtunegfs2\fP(8) Tool to manipulate GFS2 superblocks
-
 .SH SETUP
 
-GFS2 clustering is driven by the dlm, which depends on dlm_controld to
-provide clustering from userspace.  dlm_controld clustering is built on
-corosync cluster/group membership and messaging.
-
-Follow these steps to manually configure and run gfs2/dlm/corosync.
-
-.B 1. create /etc/corosync/corosync.conf and copy to all nodes
-
-In this sample, replace cluster_name and IP addresses, and add nodes as
-needed.  If using only two nodes, uncomment the two_node line.
-See corosync.conf(5) for more information.
-
-.nf
-totem {
-        version: 2
-        secauth: off
-        cluster_name: abc
-}
-
-nodelist {
-        node {
-                ring0_addr: 10.10.10.1
-                nodeid: 1
-        }
-        node {
-                ring0_addr: 10.10.10.2
-                nodeid: 2
-        }
-        node {
-                ring0_addr: 10.10.10.3
-                nodeid: 3
-        }
-}
-
-quorum {
-        provider: corosync_votequorum
-#       two_node: 1
-}
-
-logging {
-        to_syslog: yes
-}
-.fi
-
-.PP
-
-.B 2. start corosync on all nodes
-
-.nf
-systemctl start corosync
-.fi
-
-Run corosync-quorumtool to verify that all nodes are listed.
-
-.PP
-
-.B 3. create /etc/dlm/dlm.conf and copy to all nodes
-
-.B *
-To use no fencing, use this line:
+GFS2 clustering is driven by the dlm, which depends on dlm_controld to provide
+clustering from userspace.  dlm_controld clustering is built on corosync
+cluster/group membership and messaging. GFS2 also requires clustered LVM,
+which is provided by lvmlockd (or, previously, clvmd). Refer to the
+documentation for each of these components and ensure that they are configured
+before setting up a GFS2 filesystem. Also refer to your distribution's
+documentation for any specific support requirements.
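+
+For example, on a systemd-based distribution the stack can typically be
+brought up and checked as follows (service and unit names may vary by
+distribution):
+
+.nf
+systemctl start corosync
+corosync-quorumtool     # check that all nodes are listed
+systemctl start dlm
+dlm_tool status         # check that all nodes are listed
+systemctl start lvmlockd
+.fi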
 
-.nf
-enable_fencing=0
-.fi
+Ensure that gfs2-utils is installed on all nodes which mount the filesystem,
+as it provides scripts required for correct handling of withdraw events.
 
-.B *
-To use no fencing, but exercise fencing functions, use this line:
-
-.nf
-fence_all /bin/true
-.fi
-
-The "true" binary will be executed for all nodes and will succeed (exit 0)
-immediately.
-
-.B *
-To use manual fencing, use this line:
-
-.nf
-fence_all /bin/false
-.fi
-
-The "false" binary will be executed for all nodes and will fail (exit 1)
-immediately.
-
-When a node fails, manually run: dlm_tool fence_ack <nodeid>
-
-.B *
-To use stonith/pacemaker for fencing, use this line:
-
-.nf
-fence_all /usr/sbin/dlm_stonith
-.fi
-
-The "dlm_stonith" binary will be executed for all nodes.  If
-stonith/pacemaker systems are not available, dlm_stonith will fail and
-this config becomes the equivalent of the previous /bin/false config.
-
-.B *
-To use an APC power switch, use these lines:
-
-.nf
-device  apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
-connect apc node=1 port=1
-connect apc node=2 port=2
-connect apc node=3 port=3
-.fi
-
-Other network switch based agents are configured similarly.
-
-.B *
-To use sanlock/watchdog fencing, use these lines:
-
-.nf
-device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
-connect wd node=1 host_id=1
-connect wd node=2 host_id=2
-unfence wd
-.fi
-
-See fence_sanlock(8) for more information.
-
-.B *
-For other fencing configurations see dlm.conf(5) man page.
-
-.PP
-
-.B 4. start dlm_controld on all nodes
-
-.nf
-systemctl start dlm
-.fi
-
-Run "dlm_tool status" to verify that all nodes are listed.
-
-.PP
-
-.B 5. if using clvm, start clvmd on all nodes
-
-systemctl clvmd start
-
-.PP
-
-.B 6. make new gfs2 file systems
+.B 1. Create the gfs2 filesystem
 
 mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage
 
-The cluster_name must match the name used in step 1 above.
-The fs_name must be a unique name in the cluster.
-The -j option is the number of journals to create, there must
-be one for each node that will mount the fs.
+The cluster_name must match the name configured in corosync (and thus dlm).
+The fs_name must be a unique name for the filesystem in the cluster.
+The -j option specifies the number of journals to create; there must
+be one for each node that will mount the filesystem.
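+
+For example, to create a filesystem with three journals for a cluster named
+\fImycluster\fP (the names and device path below are placeholders):
+
+.nf
+mkfs.gfs2 -p lock_dlm -t mycluster:myfs -j 3 /dev/vg_cluster/lv_gfs2
+.fi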
 
 .PP
+.B 2. Mount the gfs2 filesystem
 
-.B 7. mount gfs2 file systems
+If you are using a clustered resource manager, see its documentation for
+enabling a gfs2 filesystem resource. Otherwise, run:
 
 mount /path/to/storage /mountpoint
 
 Run "dlm_tool ls" to verify the nodes that have each fs mounted.
 
 .PP
+.B 3. Shut down
 
-.B 8. shut down
+If you are using a clustered resource manager, see its documentation for
+disabling a gfs2 filesystem resource. Otherwise, run:
 
-.nf
 umount -a -t gfs2
-systemctl clvmd stop
-systemctl dlm stop
-systemctl corosync stop
-.fi
 
 .PP
+.SH SEE ALSO
 
-.B More setup information:
-.br
-.BR dlm_controld (8),
-.br
-.BR dlm_tool (8),
-.br
-.BR dlm.conf (5),
-.br
-.BR corosync (8),
-.br
-.BR corosync.conf (5)
-.br
+\fBmount\fP(8) and \fBumount\fP(8) for general mount information,
+\fBchmod\fP(1) and \fBchmod\fP(2) for access permission flags,
+\fBacl\fP(5) for access control lists,
+\fBlvm\fP(8) for volume management,
+\fBdlm_controld\fP(8),
+\fBdlm_tool\fP(8),
+\fBdlm.conf\fP(5),
+\fBcorosync\fP(8),
+\fBcorosync.conf\fP(5).