
Commissioning an HDFS Application in the Windows Environment

Scenario

After the code development is complete, you can run an application in the Windows development environment. If the network between the local host and the cluster service plane is connected, you can commission the application on the local host.

After an HDFS application is run, you can learn the application running conditions by viewing the running result or HDFS logs.

Compiling and Running the Program

  1. (Optional) In the development environment (for example, IntelliJ IDEA), a running user must be specified for the sample code. There are two ways to open the run configuration and specify the user:

    • Select the sample program to be run (HdfsExample.java or ColocationExample.java), right-click the project, and choose Run Configurations from the shortcut menu. In the dialog box that is displayed, choose Java Application > HdfsExample to set the running parameters.
    • On the menu bar of IntelliJ IDEA, choose Run > Edit Configurations. In the dialog box that is displayed, set the running user.

    In either case, add the following VM option to specify the running user:

    -DHADOOP_USER_NAME=test

    The test user here is only an example. To run the sample code related to the Colocation operation, the user must be a member of the supergroup group. A programmatic alternative to this VM option is sketched after this list.

  2. If the running user is configured as described in 1, click Run to run the application. If it is not configured, run the following two sample programs separately:

    • Choose HdfsExample.java, right-click the project and choose Run 'HdfsExample.main()' from the shortcut menu to run the project.
    • Choose ColocationExample.java, right-click the project and choose Run 'ColocationExample.main()' from the shortcut menu to run the project.
    • Do not restart the HDFS service while an HDFS application is running; otherwise, the application will fail.
    • When the Colocation project is run, the HDFS parameter fs.defaultFS cannot be set to viewfs://ClusterX.
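
If you prefer to set the running user in code instead of through the VM option, the following is a minimal sketch, assuming the cluster uses simple authentication (no Kerberos); the user name test and the hdfs://hacluster URI are placeholder values, and the class is not part of the sample project. It only illustrates that HADOOP_USER_NAME must be set before the Hadoop client performs its first login.

      import java.net.URI;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.security.UserGroupInformation;

      public class RunAsUserSketch {
          public static void main(String[] args) throws Exception {
              // Equivalent in effect to the -DHADOOP_USER_NAME=test VM option;
              // must be set before the Hadoop client logs in for the first time.
              System.setProperty("HADOOP_USER_NAME", "test");

              // fs.defaultFS is normally read from the core-site.xml on the classpath;
              // the URI below is only a placeholder.
              Configuration conf = new Configuration();
              try (FileSystem fs = FileSystem.get(URI.create("hdfs://hacluster"), conf)) {
                  System.out.println("Running as: " + UserGroupInformation.getCurrentUser());
                  fs.mkdirs(new Path("/user/hdfs-examples"));
              }
          }
      }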

Checking the Commissioning Result

  • Learn the application running conditions by viewing the running result.
    • The running result of the HDFS Windows example application is as follows (a minimal sketch of the corresponding API calls is provided at the end of this section):
      1654 [main] WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory  - The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
      2013 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to create path /user/hdfs-examples
      2137 [main] WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      2590 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to write.
      3245 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to append.
      4447 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
      4447 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to read.
      4509 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete the file /user/hdfs-examples\test.txt
      4618 [main] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete path /user/hdfs-examples
      4743 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to create path /user/hdfs-examples/hdfs_example_1
      4743 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to create path /user/hdfs-examples/hdfs_example_0
      5087 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to write.
      5087 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to write.
      6507 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to append.
      6553 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to append.
      7505 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
      7505 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to read.
      7568 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete the file /user/hdfs-examples/hdfs_example_1\test.txt
      7583 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
      7583 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to read.
      7630 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete the file /user/hdfs-examples/hdfs_example_0\test.txt
      7677 [hdfs_example_1] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete path /user/hdfs-examples/hdfs_example_1
      7739 [hdfs_example_0] INFO  com.huawei.bigdata.hdfs.examples.HdfsExample  - success to delete path /user/hdfs-examples/hdfs_example_0

      In the Windows environment, the following exception occurs but does not affect services.

      java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

    • The running result of the Colocation Windows example application is as follows:
      1623 [main] WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory  - The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
      1670 [main] INFO  org.apache.zookeeper.ZooKeeper  - Client environment:zookeeper.version=***, built on 10/19/2017 04:21 GMT
      1670 [main] INFO  org.apache.zookeeper.ZooKeeper  - Client environment:host.name=siay7user1.china.huawei.com
      1670 [main] INFO  org.apache.zookeeper.ZooKeeper  - Client environment:java.version=***
      1670 [main] INFO  org.apache.zookeeper.ZooKeeper  - Client environment:java.vendor=Oracle Corporation
      1670 [main] INFO  org.apache.zookeeper.ZooKeeper  - Client environment:java.home=D:\Program Files\Java\jre1.8.0_131
      ......
      Create Group has finished.
      Put file is running...
      5930 [main] WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
      Put file has finished.
      Delete file is running...
      Delete file has finished.
      Delete Group is running...
      Delete Group has finished.
      6866 [main] INFO  org.apache.zookeeper.ZooKeeper  - Session: 0x13000074b7e464b7 closed
      6866 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn  - EventThread shut down for session: 0x13000074b7e464b7
      6928 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn  - EventThread shut down for session: 0x14000073f13b657b
      6928 [main] INFO  org.apache.zookeeper.ZooKeeper  - Session: 0x14000073f13b657b closed
  • Learn the application running conditions by viewing HDFS logs.

    The NameNode logs of HDFS offer immediate visibility into application running conditions. You can adjust the application accordingly based on these logs.
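
For reference, the log lines in the HDFS example output above correspond to the following sequence of HDFS FileSystem API calls. This is a minimal sketch of that sequence rather than the sample project's actual code; the directory /user/hdfs-examples, the file name test.txt, and the written content are taken from the sample output, and error handling is omitted.

      import java.io.BufferedReader;
      import java.io.InputStreamReader;
      import java.nio.charset.StandardCharsets;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class HdfsFlowSketch {
          public static void main(String[] args) throws Exception {
              // Reads fs.defaultFS and other client settings from the core-site.xml /
              // hdfs-site.xml files on the classpath.
              Configuration conf = new Configuration();

              try (FileSystem fs = FileSystem.get(conf)) {
                  Path dir = new Path("/user/hdfs-examples");
                  Path file = new Path(dir, "test.txt");

                  fs.mkdirs(dir);                                   // "success to create path"

                  try (FSDataOutputStream out = fs.create(file)) {  // "success to write."
                      out.write("hi, I am bigdata. It is successful if you can see me."
                              .getBytes(StandardCharsets.UTF_8));
                  }
                  try (FSDataOutputStream out = fs.append(file)) {  // "success to append."
                      out.write("I append this content.".getBytes(StandardCharsets.UTF_8));
                  }
                  try (FSDataInputStream in = fs.open(file);        // "success to read."
                       BufferedReader reader = new BufferedReader(
                               new InputStreamReader(in, StandardCharsets.UTF_8))) {
                      System.out.println("result is : " + reader.readLine());
                  }

                  fs.delete(file, false);                           // "success to delete the file"
                  fs.delete(dir, true);                             // "success to delete path"
              }
          }
      }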