
Configuring NFS

Scenario

Before deploying a cluster, you can deploy a Network File System (NFS) server as required to store NameNode metadata and enhance data reliability.

If the NFS server has been deployed and the NFS service is configured, you can perform the operations in this section to configure NFS for the cluster. These operations are optional.

Procedure

  1. Check the permissions of the shared directories on the NFS server to ensure that the NameNodes in the MRS cluster can access them.
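
    For example, assuming the IP address of the NFS server is 192.168.0.11 (replace it with the actual address), you can run the following command on a NameNode node to list the directories exported by the server and the clients that are allowed to access them:

    showmount -e 192.168.0.11
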
  2. Log in to the active NameNode as user root.
  3. Run the following commands to create a directory and set its owner and permissions:

    mkdir ${BIGDATA_DATA_HOME}/namenode-nfs

    chown omm:wheel ${BIGDATA_DATA_HOME}/namenode-nfs

    chmod 750 ${BIGDATA_DATA_HOME}/namenode-nfs
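
    You can then check that the owner and permissions of the directory are set as expected, for example:

    ls -ld ${BIGDATA_DATA_HOME}/namenode-nfs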

  4. Run the following command to mount the shared NFS directory on the active NameNode:

    mount -t nfs -o rsize=8192,wsize=8192,soft,nolock,timeo=3,intr <IP address of the NFS server>:<shared directory> ${BIGDATA_DATA_HOME}/namenode-nfs

    For example, if the IP address of the NFS server is 192.168.0.11 and the shared directory is /opt/Hadoop/NameNode, run the following command:

    mount -t nfs -o rsize=8192,wsize=8192,soft,nolock,timeo=3,intr 192.168.0.11:/opt/Hadoop/NameNode ${BIGDATA_DATA_HOME}/namenode-nfs
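
    After the mount is complete, you can verify it, for example, by checking the file system type of the mount point (the Type column should show nfs or nfs4):

    df -hT ${BIGDATA_DATA_HOME}/namenode-nfs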

  5. Perform Step 2 to Step 4 on the standby NameNode.

    The shared directories (for example, /opt/Hadoop/NameNode) used by the active and standby NameNodes on the NFS server must be different.
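
    For example, the /etc/exports file on the NFS server could contain entries similar to the following, where /opt/Hadoop/NameNode is shared with the active NameNode at 192.168.0.21 and /opt/Hadoop/NameNode2 is shared with the standby NameNode at 192.168.0.22 (the directories and IP addresses are examples only):

    /opt/Hadoop/NameNode 192.168.0.21(rw,sync,no_root_squash)
    /opt/Hadoop/NameNode2 192.168.0.22(rw,sync,no_root_squash)

    After modifying /etc/exports, run exportfs -ra on the NFS server for the changes to take effect.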

  6. Log in to FusionInsight Manager and choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations.
  7. In the search box, search for dfs.namenode.name.dir, append ${BIGDATA_DATA_HOME}/namenode-nfs to Value, separating paths with commas (,), and click Save.
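
    For example, if the current value of dfs.namenode.name.dir is ${BIGDATA_DATA_HOME}/namenode/data (the actual value in your cluster may differ), set the value to:

    ${BIGDATA_DATA_HOME}/namenode/data,${BIGDATA_DATA_HOME}/namenode-nfs
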
  8. Click OK. On the Dashboard tab page, choose More > Restart Service to restart the service.