Checking the Commissioning Result
Scenario
After an HDFS application is run, you can check its running status by viewing the command output or the HDFS logs.
Procedure
- Check the application running status by viewing the command output.
- The running result of the HDFS example application is shown as follows:
[root@192-168-33-94 hdfsDemo]#java -cp HDFSTest-0.0.1-SNAPSHOT.jar:conf/:lib/* com.huawei.bigdata.hdfs.examples.HdfsExample
0 [main] INFO org.apache.hadoop.security.UserGroupInformation - Login successful for user hdfsDevelop using keytab file user.keytab
1 [main] INFO com.huawei.hadoop.security.LoginUtil - Login success!!!!!!!!!!!!!!
568 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
582 [main] WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
793 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples
969 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
1068 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
1191 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
1191 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
1202 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/test.txt
1210 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples
1223 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples/hdfs_example_0
1224 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples/hdfs_example_1
1261 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
1264 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
2807 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
2810 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
2861 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
2861 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
2866 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/hdfs_example_0/test.txt
2874 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples/hdfs_example_0
2874 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
2874 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
2879 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/hdfs_example_1/test.txt
2885 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples/hdfs_example_1
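A clean single-threaded pass of the example logs one "success to ..." line per operation (create, write, append, read, and two deletes). The sketch below is illustrative only: it assumes you redirected the example's output to a file (for instance with `java -cp ... HdfsExample > output.log 2>&1`), and it fabricates a small sample file so the commands are runnable as shown.

```shell
# Hypothetical capture file; in a real run, redirect the example's output here.
OUT=/tmp/hdfs-example-output.log

# Sample entries (invented for illustration, matching the log format above).
cat > "$OUT" <<'EOF'
793 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples
969 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
1068 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
1191 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
1202 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/test.txt
1210 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples
EOF

# A complete single-threaded pass yields six "success to" lines.
grep -c 'success to' "$OUT"

# Any ERROR-level line suggests the run did not complete cleanly.
# (grep -c exits non-zero when the count is 0, hence the || true.)
grep -c 'ERROR' "$OUT" || true
```

The same check works for the multithreaded portion: each `hdfs_example_N` worker should contribute its own full set of "success to" lines.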
- The running result of the Colocation example application is shown as follows:
[root@192-168-33-94 hdfsDemo]#java -cp HDFSTest-0.0.1-SNAPSHOT.jar:conf/:lib/* com.huawei.bigdata.hdfs.examples.ColocationExample
0 [main] INFO com.huawei.hadoop.security.LoginUtil - JaasConfiguration loginContextName=Client principal=hdfsDevelop useTicketCache=false keytabFile=/opt/hdfsDemo/conf/user.keytab
817 [main] INFO org.apache.hadoop.security.UserGroupInformation - Login successful for user hdfsDevelop using keytab file user.keytab
817 [main] INFO com.huawei.hadoop.security.LoginUtil - Login success!!!!!!!!!!!!!!
1380 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1393 [main] WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=V100R002C30, built on 10/19/2017 04:21 GMT
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=192-168-33-94
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_144
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/opt/clientAll/JDK/jdk1.8.0_144/jre
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=colocation-examplesafty.jar:conf/:lib/hadoop-hdfs-3.1.1.jar:lib/commons-cli-1.2.jar:lib/slf4j-log4j12-1.7.10.jar:lib/zookeeper-3.5.1.jar:lib/hadoop-hdfs-colocation-3.1.1.jar:lib/hadoop-auth-3.1.1.jar:lib/smallfs-main-V100R002C30.jar:lib/hadoop-nfs-3.1.1.jar:lib/commons-codec-1.9.jar:lib/hadoop-hdfs-nfs-3.1.1.jar:lib/htrace-core-3.1.0-incubating.jar:lib/hadoop-annotations-3.1.1.jar:lib/guava-11.0.2.jar:lib/dynalogger-V100R002C30.jar:lib/hadoop-common-3.1.1.jar:lib/hadoop-hdfs-restore-3.1.1.jar:lib/commons-codec-1.4.jar:lib/smallfs-common-V100R002C30.jar:lib/hadoop-mapreduce-client-core-3.1.1.jar:lib/commons-lang-2.6.jar:lib/commons-io-2.4.jar:lib/hadoop-hdfs-client-3.1.1.jar:lib/commons-collections-3.2.2.jar:lib/commons-configuration-1.6.jar:lib/zookeeper-file-storage-0.0.1.jar:lib/commons-logging-1.1.3.jar:lib/hadoop-hdfs-datamovement-3.1.1.jar:lib/slf4j-api-1.7.10.jar:lib/log4j-1.2.17.jar:lib/protobuf-java-2.5.0.jar
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/opt/clientAll/JDK/jdk1.8.0_144/lib:/opt/clientAll/KrbClient/kerberos/lib:/opt/clientAll/Redis/bin:/opt/clientAll/Redis/bin::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=2.6.32-504.el6.x86_64
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/opt/hdfsDemo
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=395MB
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=7131MB
1415 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=481MB
1417 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192-168-33-94:24002,192-168-34-145:2181,192-168-35-218:2181 sessionTimeout=45000 watcher=com.huawei.hadoop.oi.colocation.ZooKeeperWatcher@38145825
1436 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout is not configured. Using default value 120000.
1436 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.client.bind.port.range is not configured.
1436 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.client.bind.address is not configured.
1439 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.client.FourLetterWordMain - connecting to 192-168-34-145 2181
1445 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Got server principal from the server and it is zookeeper/hadoop.<system domain name>
1445 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Using server principal zookeeper/hadoop.<system domain name>
1456 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.Login - successfully logged in.
1457 [Thread-4] INFO org.apache.zookeeper.Login - TGT refresh thread started.
1460 [Thread-4] INFO org.apache.zookeeper.Login - TGT valid starting at: Thu Oct 26 09:02:27 CST 2017
1460 [Thread-4] INFO org.apache.zookeeper.Login - TGT expires: Fri Oct 27 09:02:27 CST 2017
1461 [Thread-4] INFO org.apache.zookeeper.Login - TGT refresh sleeping until: Fri Oct 27 05:22:47 CST 2017
1461 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use GSSAPI as SASL mechanism.
1466 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192-168-34-145/192.168.34.145:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
1472 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /192.168.33.94:50807, server: 192-168-34-145/192.168.34.145:2181
1479 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192-168-34-145/192.168.34.145:2181, sessionid = 0x13001da607051e27, negotiated timeout = 45000
1573 [main] INFO com.huawei.hadoop.oi.colocation.ZKUtil - ZooKeeper colocation znode : /hadoop/colocationDetails. Will publish colocation details under this znode hierarchy.
Create Group is running...
2102 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192-168-33-94:2181,192-168-34-145:2181,192-168-35-218:2181 sessionTimeout=45000 watcher=com.huawei.hadoop.oi.colocation.ZooKeeperWatcher@1fa1cab1
2103 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout is not configured. Using default value 120000.
2103 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.client.bind.port.range is not configured.
2103 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.client.bind.address is not configured.
2104 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.client.FourLetterWordMain - connecting to 192-168-34-145 2181
2106 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Got server principal from the server and it is zookeeper/hadoop.<system domain name>
2106 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Using server principal zookeeper/hadoop.<system domain name>
2114 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.Login - successfully logged in.
2115 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use GSSAPI as SASL mechanism.
2115 [Thread-8] INFO org.apache.zookeeper.Login - TGT refresh thread started.
2115 [Thread-8] INFO org.apache.zookeeper.Login - TGT valid starting at: Thu Oct 26 09:02:28 CST 2017
2115 [Thread-8] INFO org.apache.zookeeper.Login - TGT expires: Fri Oct 27 09:02:28 CST 2017
2115 [Thread-8] INFO org.apache.zookeeper.Login - TGT refresh sleeping until: Fri Oct 27 04:20:56 CST 2017
2115 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192-168-34-145/192.168.34.145:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2117 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /192.168.33.94:50814, server: 192-168-34-145/192.168.34.145:2181
2123 [main-SendThread(192-168-34-145:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192-168-34-145/192.168.34.145:2181, sessionid = 0x13001da607051e28, negotiated timeout = 45000
2137 [main] INFO com.huawei.hadoop.oi.colocation.ZKUtil - ZooKeeper colocation znode : /hadoop/colocationDetails. Will publish colocation details under this znode hierarchy.
Create Group has finished.
Put file is running...
Put file has finished.
Delete file is running...
Delete file has finished.
Delete Group is running...
Delete Group has finished.
2378 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x13001da607051e27
2378 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x13001da607051e27 closed
2384 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x13001da607051e28
2385 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x13001da607051e28 closed
- The running result of the SmallFS example application is shown as follows:
[root@192-168-33-94 hdfsDemo]#java -cp HDFSTest-0.0.1-SNAPSHOT.jar:conf/:lib/* com.huawei.bigdata.hdfs.examples.HdfsExample
1 [main] INFO org.apache.hadoop.security.UserGroupInformation - Login successful for user hdfsDevelop using keytab file user.keytab
2 [main] INFO com.huawei.hadoop.security.LoginUtil - Login success!!!!!!!!!!!!!!
609 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
625 [main] WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
869 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples
1189 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
1289 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
1407 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
1407 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
1418 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/test.txt
1428 [main] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples
1440 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples/hdfs_example_1
1441 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to create path /user/hdfs-examples/hdfs_example_0
1476 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
1479 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to write.
2356 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
2389 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
2389 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
2394 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/hdfs_example_1/test.txt
2401 [hdfs_example_1] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples/hdfs_example_1
2946 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to append.
2973 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - result is : hi, I am bigdata. It is successful if you can see me.I append this content.
2973 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to read.
2979 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete the file /user/hdfs-examples/hdfs_example_0/test.txt
2985 [hdfs_example_0] INFO com.huawei.bigdata.hdfs.examples.HdfsExample - success to delete path /user/hdfs-examples/hdfs_example_0
- Check the application running status by viewing the HDFS logs.
The NameNode logs of HDFS record the application's operations as they happen. You can adjust the application based on these logs.
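As a minimal sketch of this step: the commands below scan a NameNode log for WARN/ERROR entries and for records that mention the example's working directory. The log path and the sample entries are assumptions for illustration; the actual NameNode log location and format depend on your cluster deployment, so substitute the real path there.

```shell
# Hypothetical path; replace with the actual NameNode log file on your cluster.
NN_LOG=/tmp/sample-namenode.log

# Sample entries (invented for illustration) so the commands run as shown.
cat > "$NN_LOG" <<'EOF'
2017-10-26 09:02:27 INFO  FSNamesystem: Roll Edit Log from 192.168.33.94
2017-10-26 09:02:28 WARN  SecurityLogger: Auth failed for 192.168.33.94
2017-10-26 09:02:29 INFO  StateChange: DIR* completeFile: /user/hdfs-examples/test.txt
EOF

# Surface only WARN/ERROR entries, which usually point at application problems.
grep -E 'WARN|ERROR' "$NN_LOG"

# Count entries that touch the example's working directory.
grep -c '/user/hdfs-examples' "$NN_LOG"
```

On a live cluster you would typically follow the log with `tail -f` instead of reading a static file, applying the same filters.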