HDFS HTTP REST APIs
Function Description
In the REST application development sample code, file operations include creating, reading, writing, appending to, and deleting files. For details about the related APIs, visit http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/WebHDFS.html.
Preparing an Operating Environment
- Install a client on the node, for example, in the /opt/client directory. For details about how to install the client, see Client Management in the MapReduce Service User Guide.
- You have installed HDFS on the server and confirmed that HDFS is running properly.
- JDK 1.7 or 1.8 has been installed on the client.
- Obtain the MRS_HDFS_Client.tar installation package, and run the following commands to decompress the package:
tar -xvf MRS_HDFS_Client.tar
tar -xvf MRS_HDFS_ClientConfig.tar
You are advised to install a client of the same version as the cluster on the server to avoid version incompatibility.
- Go to the MRS_HDFS_ClientConfig decompressed folder and run the following command to install the client:
sh install.sh /opt/client
In the preceding command, /opt/client is an example user-defined path.
- Go to the /opt/client client installation directory and run the following command to initialize the environment variables:
source bigdata_env
- Run the following command to perform user authentication. The following uses user hdfs as an example; change the username based on site requirements (skip this step for a normal cluster).
kinit hdfs
A ticket obtained by kinit is valid for 24 hours. Run the kinit command again if you run the sample application more than 24 hours later.
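You can run the klist command to check the expiration time of the current ticket:
klist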
- Run the following commands to prepare the testFile and testFileAppend files in the client directory. Their contents are "Hello, webhdfs user!" and "Welcome back to webhdfs!", respectively.
touch testFile
vi testFile
Write "Hello, webhdfs user!". Save the file and exit.
touch testFileAppend
vi testFileAppend
Write "Welcome back to webhdfs!". Save the file and exit.
- MRS clusters support only HTTPS-based access by default. If the HTTPS service is used, perform the following SSL protocol step. If the HTTP service is used (supported only by security clusters), skip to the dfs.http.policy step.
- HTTPS-based access differs from HTTP-based access. Because SSL security encryption is used when you access HDFS over HTTPS, you must ensure that the cluster supports an SSL protocol that the curl command also supports. If it does not, change the SSL protocol configured in the cluster. For example, if curl supports only the TLSv1 protocol, perform the following steps:
Log in to MRS Manager. Choose Service > HDFS > Service Configuration, and set Type to All. Search for hadoop.ssl.enabled.protocols in the search box and check whether the parameter value contains TLSv1. If it does not, add TLSv1 to the hadoop.ssl.enabled.protocols configuration item, and clear the value of ssl.server.exclude.cipher.list; otherwise, HDFS cannot be accessed over HTTPS. Click Save Configuration, select Restart the affected services or instances, and click Yes to restart the HDFS service.
TLSv1 has security vulnerabilities. Exercise caution when using it.
- Log in to MRS Manager. Choose Service > HDFS > Service Configuration, and set Type to All. Search for dfs.http.policy in the search box and select HTTP_AND_HTTPS. Click Save Configuration, select Restart the affected services or instances, and click Yes to restart the HDFS service.
Procedure
- Log in to MRS Manager and click Services. Click HDFS to access the HDFS service status page.
Because webhdfs is accessed over both HTTP and HTTPS, you need to obtain the IP address of the active NameNode as well as the HTTP and HTTPS ports.
- Click Instance to access the page displayed in Figure 1. Find the host name and IP address of NameNode(hacluster,active).
- Click Service Configuration to access the page displayed in Figure 2. Find namenode.http.port (9870) and namenode.https.port (9871).
For versions earlier than MRS 1.9.2, the default values of the preceding ports are 25002 and 25003. For details, see the related port information in MapReduce Service User Guide.
- Create a directory by referring to the following link.
http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Make_a_Directory
Click the link. The page shown in Figure 3 is displayed.
Go to the installation directory of the client, for example, /opt/client, and create a directory named huawei.
- Run the following command to check whether the directory named huawei exists:
hdfs dfs -ls /
The command output is as follows:
linux1:/opt/client # hdfs dfs -ls /
16/04/22 16:10:02 INFO hdfs.PeerCache: SocketCache disabled.
Found 7 items
-rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
drwxrwxrwx   - spark  supergroup          0 2016-04-22 15:19 /sparkJobHistory
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:50 /user
The output shows that the huawei directory does not yet exist in the current path.
- Run the command in Figure 3 to create a directory named huawei. Replace <HOST> with the host name or IP address and <PORT> with the port number obtained in step 1, and set <PATH> to the huawei directory to be created. Note that the HTTP and HTTPS ports are different.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei?op=MKDIRS"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 03:10:09 GMT
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 03:10:09 GMT
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCArhuv39Ttp6lhBlG3B0JAmFjv9weLp+SGFI+t2HSEHN6p4UVWKKy/kd9dKEgNMlyDu/o7ytzs0cqMxNsI69WbN5H
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462453809395&s=wiRF4rdTWpm3tDST+a/Sy0lwgA4="; Path=/; Expires=Thu, 05-May-2016 13:10:09 GMT; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
The return value {"boolean":true} indicates that the huawei directory is successfully created.
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei?op=MKDIRS"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Fri, 22 Apr 2016 08:13:37 GMT
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Expires: Fri, 22 Apr 2016 08:13:37 GMT
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCugB+yT3Y+z8YCRMYJHXF84o1cyCfJq157+NZN1gu7D7yhMULnjr+7BuUdEcZKewFR7uD+DRiMY3akg3OgU45xQ9R
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1461348817963&s=sh57G7iVccX/Aknoz410yJPTLHg="; Path=/; Expires=Fri, 22-Apr-2016 18:13:37 GMT; Secure; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
The return value {"boolean":true} indicates that the huawei directory is successfully created.
- Run the following command to check that the huawei directory now exists in the path:
linux1:/opt/client # hdfs dfs -ls /
16/04/22 16:14:25 INFO hdfs.PeerCache: SocketCache disabled.
Found 8 items
-rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
drwxr-xr-x   - hdfs   supergroup          0 2016-04-22 16:13 /huawei
drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
drwxrwxrwx   - spark  supergroup          0 2016-04-22 16:12 /sparkJobHistory
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 16:10 /user
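The directory can also be verified over REST using the GETFILESTATUS operation, as in the following sketch (replace the host and port with the values obtained in step 1):
linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei?op=GETFILESTATUS"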
- Create an upload request to obtain the address of the DataNode to which the file content will be written, returned in the Location header.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=CREATE"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 06:09:48 GMT
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 06:09:48 GMT
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCzQ6w+9pNzWCTJEdoU3z9xKEyg1JQNka0nYaB9TndvrL5S0neAoK2usnictTFnqIincAjwB6SnTtht8Q16WDlHJX/
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462464588403&s=qry87vAyYzSn9VsS6Rm6vKLhKeU="; Path=/; Expires=Thu, 05-May-2016 16:09:48 GMT; HttpOnly
Location: http://linux1:25010/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false
Content-Length: 0
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=CREATE"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 03:46:18 GMT
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 03:46:18 GMT
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCZMYR8GGUkn7pPZaoOYZD5HxzLTRZ71angUHKubW2wC/18m9/OOZstGQ6M1wH2pGriipuCNsKIfwP93eO2Co0fQF3
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462455978166&s=F4rXUwEevHZze3PR8TxkzcV7RQQ="; Path=/; Expires=Thu, 05-May-2016 13:46:18 GMT; Secure; HttpOnly
Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false
Content-Length: 0
- Based on the obtained Location information, create the /huawei/testHdfs file in HDFS and upload the content of the local testFile file to it (a scripted variant is sketched after these substeps).
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT -T testFile --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false"
In the preceding information, linux1 indicates <HOST> and 9864 indicates <PORT>.
- Command output
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Location: hdfs://hacluster/huawei/testHdfs
Content-Length: 0
Connection: close
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT -T testFile --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false"
In the preceding information, linux1 indicates <HOST> and 9865 indicates <PORT>.
- Command output
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Location: hdfs://hacluster/huawei/testHdfs
Content-Length: 0
Connection: close
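Instead of copying the Location value manually, the two-step upload can be scripted by extracting the Location header from the first response. The following is a minimal sketch; the grep/awk/tr parsing is an assumption about the response format and the shell environment:
LOCATION=$(curl -s -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=CREATE" | grep '^Location:' | awk '{print $2}' | tr -d '\r')
curl -i -X PUT -T testFile --negotiate -u: "$LOCATION"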
- Open the /huawei/testHdfs file and read its content (a partial-read variant is sketched after these substeps).
- Run the following command to access HTTP:
linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
Hello, webhdfs user!
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
Hello, webhdfs user!
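The OPEN operation also accepts optional offset and length parameters for reading only part of a file. For example, the following sketch skips the first 7 bytes and reads the next 12, returning webhdfs user:
linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN&offset=7&length=12"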
- Create an append request to obtain the address of the DataNode to which the content of the /huawei/testHdfs file will be appended, returned in the Location header.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X POST --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=APPEND"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 1349

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:35:02 GMT
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:35:02 GMT
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCTYvNX/2JMXhzsVPTw3Sluox6s/gEroHH980xMBkkYlCnO3W+0fM32c4/F98U5bl5dzgoolQoBvqq/EYXivvR12WX
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462462502626&s=et1okVIOd7DWJ/LdhzNeS2wQEEY="; Path=/; Expires=Thu, 05-May-2016 15:35:02 GMT; HttpOnly
Location: http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster
Content-Length: 0
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X POST --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=APPEND"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 1349

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:20:41 GMT
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:20:41 GMT
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCXgdjZuoxLHGtM1oyrPcXk95/Y869eMfXIQV5UdEwBZ0iQiYaOdf5+Vk7a7FezhmzCABOWYXPxEQPNugbZ/yD5VLT
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462461641713&s=tGwwOH9scmnNtxPjlnu28SFtex0="; Path=/; Expires=Thu, 05-May-2016 15:20:41 GMT; Secure; HttpOnly
Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster
- Based on the obtained Location information, append the content of the local testFileAppend file to the /huawei/testHdfs file in HDFS.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X POST -T testFileAppend --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster"
In the preceding information, linux1 indicates <HOST> and 9864 indicates <PORT>.
- Command output
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Length: 0
Connection: close
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X POST -T testFileAppend --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster"
In the preceding information, linux1 indicates <HOST> and 9865 indicates <PORT>.
- Command output
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Length: 0
Connection: close
- Open the /huawei/testHdfs file and read all of its content.
- Run the following command to access HTTP:
linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
Hello, webhdfs user! Welcome back to webhdfs!
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
Hello, webhdfs user! Welcome back to webhdfs!
- List detailed information about all directories and files in the huawei directory in HDFS.
A single LISTSTATUS request returns the information about all subfiles and subdirectories.
- Run the following command to access HTTP:
linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
Using LISTSTATUS together with the size and startafter parameters fetches the information about subfiles and subdirectories across multiple requests, which prevents the user interface from slowing down when a large number of child entries must be retrieved (a paging sketch follows these examples).
- Run the following command to access HTTP:
linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Delete the /huawei/testHdfs file from HDFS (a recursive-deletion variant is sketched after these substeps).
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X DELETE --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=DELETE"
In the preceding information, linux1 indicates <HOST> and 9870 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:54:37 GMT
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:54:37 GMT
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARC9k0/v6Ed8VlUBy3kuT0b4RkqkNMCrDevsLGQOUQRORkzWI3Wu+XLJUMKlmZaWpP+bPzpx8O2Od81mLBgdi8sOkLw
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462463677153&s=Pwxe5UIqaULjFb9R6ZwlSX85GoI="; Path=/; Expires=Thu, 05-May-2016 15:54:37 GMT; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X DELETE --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=DELETE"
In the preceding information, linux1 indicates <HOST> and 9871 indicates <PORT>.
- Command output
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 06:20:10 GMT
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 06:20:10 GMT
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCLY5vrVmgsiH2VWRypc30iZGffRUf4nXNaHCWni3TIDUOTl+S+hfjatSbo/+uayQI/6k9jAfaJrvFIfxqppFtofpp
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@HADOOP.COM&t=kerberos&e=1462465210180&s=KGd2SbH/EUSaaeVKCb5zPzGBRKo="; Path=/; Expires=Thu, 05-May-2016 16:20:10 GMT; Secure; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
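By default, DELETE removes only files and empty directories. To remove the huawei directory itself together with any remaining content, the documented recursive parameter can be added, as in the following sketch:
linux1:/opt/client # curl -i -k -X DELETE --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei?op=DELETE&recursive=true"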
The key management system provides the key management service through HTTP REST APIs. For details about these APIs, visit the following website:
http://hadoop.apache.org/docs/r2.7.2/hadoop-kms/index.html
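For example, the names of the existing keys can be listed with a request such as the following. This is a sketch only; the host and the default KMS port 16000 are assumptions that depend on the cluster configuration:
linux1:/opt/client # curl --negotiate -u: "http://linux1:16000/kms/v1/keys/names"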
Security hardening has been performed on the REST APIs to prevent script injection attacks. As a result, the REST APIs cannot be used to create directories or file names that contain the keywords <script, <iframe, <frame, or javascript:.