Setting up a multicluster environment using General Parallel File System
Learn how to construct and deconstruct a simple multicluster of System x™ and System p™ computers using the General Parallel File System (GPFS). You can connect an existing GPFS cluster to another, remote cluster, and then mount a file system from the remote cluster using the GPFS secure communication protocol.
General Parallel File System (GPFS) (see Resources for more information) is the parallel file system from IBM for AIX® 5L and Linux® clusters made up of System x and System p computers. GPFS is often used to create high-performance storage clusters that serve terabytes of data. You might want to mount file systems across these clusters over a wide area network. This article illustrates the steps to create a multicluster setup containing the GPFS file system. This article assumes you have basic familiarity with the terminology and concepts of GPFS.
Follow these steps to create the multicluster:
- Create two separate GPFS clusters; Listing 1 shows the mmlscluster output for the two single-node clusters used in this article's example. Also create the NSDs and the file systems that will be mounted across the clusters; a minimal sketch of the creation commands follows Listing 1. A simple and concise method to create a GPFS cluster and an example file system is described in the developerWorks article Install and configure General Parallel File System (GPFS) on xSeries™. Secure shell (ssh) access between the clusters is not required: all intercluster communication is handled by the GPFS daemon, which internally uses Secure Socket Layer (SSL).
Listing 1. Cluster information for the two GPFS clusters
[root@gpfs-lin1 gpfs]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfs-lin1.in.ibm.com
  GPFS cluster id:           699960274622719275
  GPFS UID domain:           gpfs-lin1.in.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    gpfs-lin1.in.ibm.com
  Secondary server:  (none)

 Node  Daemon node name      IP address    Admin node name       Designation
------------------------------------------------------------------------------
   1   gpfs-lin1.in.ibm.com  9.182.194.41  gpfs-lin1.in.ibm.com  quorum-manager

[root@gpfs-lin2 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         gpfs-lin2.in.ibm.com
  GPFS cluster id:           699960278913802376
  GPFS UID domain:           gpfs-lin2.in.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    gpfs-lin2.in.ibm.com
  Secondary server:  (none)

 Node  Daemon node name      IP address    Admin node name       Designation
------------------------------------------------------------------------------
   1   gpfs-lin2.in.ibm.com  9.182.194.42  gpfs-lin2.in.ibm.com  quorum-manager
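If you need a starting point for this step, here is a minimal sketch of the commands that create a one-node cluster, an NSD, and a file system. The disk /dev/sdb, the descriptor file /tmp/nsd.desc, and the NSD name nsd1 are assumptions for illustration only, and the NSD descriptor format differs between GPFS releases, so consult the documentation for your level before running anything like this.

# Create a one-node cluster (run on gpfs-lin1); ssh and scp are the remote commands
mmcrcluster -N gpfs-lin1.in.ibm.com:quorum-manager -p gpfs-lin1.in.ibm.com \
    -r /usr/bin/ssh -R /usr/bin/scp

# Describe the disk to be turned into an NSD, then create it
# (descriptor format shown here is illustrative; it varies by release)
echo "/dev/sdb:gpfs-lin1.in.ibm.com::dataAndMetadata::nsd1" > /tmp/nsd.desc
mmcrnsd -F /tmp/nsd.desc

# Start GPFS, then create and mount a file system on the new NSD
mmstartup -a
mmcrfs /gpfs0 /dev/gpfs -F /tmp/nsd.desc -A yes
mmmount /dev/gpfs -a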
- Shut down the GPFS daemon on both clusters using the command mmshutdown -a. The two clusters will authenticate each other using SSL.
- Generate a key for secure communication between the two clusters by issuing the command mmauth genkey new at Cluster 1. mmauth is the command used to manage secure access to GPFS file systems, as shown in Listing 2.
Listing 2. Using mmauth to generate keys required for secure communication between clusters
[root@gpfs-lin1 gpfs]# mmauth genkey new
Generating RSA private key, 512 bit long modulus
.........++++++++++++
.........++++++++++++
e is 65537 (0x10001)
writing RSA key
mmauth: Command successfully completed
- Add security to the GPFS communication network by issuing the command mmchconfig cipherList=AUTHONLY at Cluster 1. If cipherList is not specified, or if the value DEFAULT is specified, GPFS does not authenticate or check authorization for network connections. If the value AUTHONLY is specified, GPFS authenticates and checks authorization for network connections, but data sent over the connection is not protected. Before setting cipherList for the first time, establish a public/private key pair for the cluster by using the mmauth genkey new command, as shown in Listing 3. A quick way to verify the setting follows the listing.
Listing 3. Enforcing authentication using the mmchconfig command
[root@gpfs-lin1 gpfs]# mmchconfig cipherList=AUTHONLY
mmchconfig: Command successfully completed
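To confirm that the setting took effect, you can inspect the cluster configuration. This is an optional check; the exact output format varies by GPFS release.

[root@gpfs-lin1 gpfs]# mmlsconfig | grep -i cipherList
cipherList AUTHONLY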
- Copy the key generated at Cluster 1 to any suitable location at the remote site (Cluster 2). Use the scp command, or manually copy the contents of the key file; if you make a manual copy, verify the integrity of the key file. In the example in this article, this key is cluster1.key, which is used later; a sketch of the copy follows this step.
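A minimal sketch of the copy, assuming the public key is in the default location /var/mmfs/ssl/id_rsa.pub (verify the path on your GPFS level) and that root can scp from Cluster 1 to Cluster 2:

[root@gpfs-lin1 gpfs]# scp /var/mmfs/ssl/id_rsa.pub root@gpfs-lin2:/tmp/cluster1.key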
- Repeat the key generation, cipherList configuration, and key copy steps on Cluster 2. Now both clusters have a key file from the other cluster. In the example, the second key generated is cluster2.key.
- Start GPFS on both clusters using the command mmstartup -a; a quick way to confirm that the daemons are up follows this step.
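To confirm that the daemons came up, you can run mmgetstate on each cluster; a ready node reports a GPFS state of active. Output here is abbreviated and will vary by release.

[root@gpfs-lin1 gpfs]# mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      gpfs-lin1        active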
- Prepare Cluster 1 to grant Cluster 2 secure access to the file systems Cluster 1 owns. Issue the command mmauth add cluster2 -k /tmp/cluster2.key at Cluster 1, as shown in Listing 4.
Listing 4. Adding key information about the second cluster to the first
[root@gpfs-lin1 gpfs]# mmauth add gpfs-lin2.in.ibm.com -k /tmp/cluster2.key
mmauth: Command successfully completed
- Add Cluster 1 to the set of remote clusters known to Cluster 2. Use the command mmremotecluster add cluster1 -k /tmp/cluster1.key -n cluster1-node, as shown in Listing 5.
Listing 5. Adding the remote cluster to the secondary
[root@gpfs-lin2 ~]# mmremotecluster add gpfs-lin1.in.ibm.com -k /tmp/cluster1.key -n gpfs-lin1.in.ibm.com
mmremotecluster: Command successfully completed
mmremotecluster: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
- Register the file systems from Cluster 1 that you want to access from Cluster 2, using the command mmremotefs add mygpfs -f cluster1-FS-gpfs0 -C cluster-1 -T /mygpfs, as shown in Listing 6.
Listing 6. Adding the remote file system from the first cluster to the secondary
[root@gpfs-lin2 ~]# mmremotefs add mygpfs -f /dev/gpfs -C gpfs-lin1.in.ibm.com -T /mygpfs
mmremotefs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
- At Cluster 1, grant access to the file systems that Cluster 2 will access remotely. You can grant access to each file system for each cluster one at a time using the command mmauth grant cluster2 -f cluster1.fs. If you do not want to grant access to each file system and each cluster separately, use the command mmauth grant all -f all, as shown in Listing 7. An optional check with mmauth show follows the listing.
Listing 7. Granting file system access to the second cluster
[root@gpfs-lin1 gpfs]# mmauth grant gpfs-lin2.in.ibm.com -f /dev/gpfs

mmauth: Granting cluster gpfs-lin2.in.ibm.com access to file system gpfs:
        access type rw; root credentials will not be remapped.
mmauth: Command successfully completed
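You can review the access granted so far with mmauth show, run at the owning cluster (Cluster 1). This is an optional check; output varies by release.

[root@gpfs-lin1 gpfs]# mmauth show gpfs-lin2.in.ibm.com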
- Mount the file system from Cluster 2 using the mount /mygpfs command; a quick way to verify the mount follows this step.
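A quick way to verify the mount from Cluster 2 is an ordinary df plus the GPFS mmlsmount command, which lists the nodes that have the file system mounted:

[root@gpfs-lin2 ~]# df -h /mygpfs
[root@gpfs-lin2 ~]# mmlsmount mygpfs -L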
- Verify your remote cluster using the commands shown in Listing 8.
Listing 8. Checking successful addition of remote file systems across clusters
[root@gpfs-lin2 ~]# mmremotecluster show all
Cluster name:    gpfs-lin1.in.ibm.com
Contact nodes:   gpfs-lin1.in.ibm.com
SHA digest:      1a30629010ba607ed59dd011d30261482b0957c5
File systems:    mygpfs (gpfs)

[root@gpfs-lin2 ~]# mmremotefs show all
Local Name  Remote Name  Cluster name          Mount Point  Mount Options  Automount
mygpfs      gpfs         gpfs-lin1.in.ibm.com  /mygpfs      rw             no
Deconstructing a multicluster setup
What if you would now like to deconstruct the same multicluster? Follow these steps:
- Unmount any individual file system, then delete it from the remote location using mmremotefs delete remote-fsname. To remove all remote file systems at once, use mmremotefs delete all. Note that this does not delete the file systems from the source cluster; it only removes them from the remote cluster. Run this command at Cluster 2, that is, where the primary cluster's file system is remotely mounted, as shown in Listing 9. A sketch of the preceding unmount follows the listing.
Listing 9. Deleting the remote file system information
[root@gpfs-lin2 gpfs]# mmremotefs delete /dev/mygpfs
mmremotefs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
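The unmount that precedes this deletion can be done with the GPFS mmumount command (or a plain umount on each node); a minimal sketch using the local device name from this article's example:

[root@gpfs-lin2 ~]# mmumount mygpfs -a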
- Delete the primary cluster's configuration data from the secondary cluster with the command mmremotecluster delete cluster-name, or use mmremotecluster delete all to delete all remote cluster information, as shown in Listing 10.
Listing 10. Deleting the remote cluster information
[root@gpfs-lin2 gpfs]# mmremotecluster delete gpfs-lin1.in.ibm.com
mmremotecluster: Command successfully completed
mmremotecluster: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
- From the primary cluster, remove the authentication information with the command mmauth delete secondary-cluster-name, as shown in Listing 11.
Listing 11. Deleting the authentication information
[root@gpfs-lin1 gpfs]# mmauth delete gpfs-lin2.in.ibm.com
mmauth: Command successfully completed
You have now deconstructed the multicluster: the remote file system, the remote cluster definition, and the authentication information shared between the clusters have all been removed.