HTTP REST API
Function Description
Users can use the Representational State Transfer (REST) application programming interface (API) to create, read, write, append, and delete files. For details about the REST API, see the following official guidelines:
http://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html.
Preparing the Running Environment
- Install the client on the node, for example, in the /opt/client directory.
- Run the following command for user authentication. In the command, hdfs is used as an example and can be defined by users.
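The following is a sketch of the standard Kerberos authentication step; it assumes the kinit tool shipped with the cluster client, with hdfs as the user name:
kinit hdfs
Enter the password of the user as prompted.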
- Prepare the files testFile and testFileAppend, and write the content 'Hello, webhdfs user!' and 'Welcome back to webhdfs!' to them respectively. Run the following commands to prepare the testFile and testFileAppend files:
touch testFile
vi testFile
Hello, webhdfs user!
touch testFileAppend
vi testFileAppend
Welcome back to webhdfs!
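Alternatively, if you prefer not to edit the files interactively with vi, the following commands (a sketch using standard shell redirection) write the same content:
echo 'Hello, webhdfs user!' > testFile
echo 'Welcome back to webhdfs!' > testFileAppend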
- The MRS cluster supports only HTTPS access by default. To access the cluster using the HTTPS service, perform step 3. To access it using the HTTP service, perform step 4.
- HTTPS-based access differs from HTTP-based access. Because SSL security encryption is used, when you access HDFS over HTTPS you must ensure that an SSL protocol supported by the curl command is also supported by the cluster. If the cluster does not support that SSL protocol, change the SSL protocol configured in the cluster. For example, if the curl command supports only the TLSv1 protocol, modify the protocol configuration as follows:
Log in to FusionInsight Manager and choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations. Type hadoop.ssl.enabled.protocols in the search box and check whether the parameter value contains TLSv1. If it does not, add TLSv1 to the hadoop.ssl.enabled.protocols configuration item and clear the value of ssl.server.exclude.cipher.list; otherwise, HDFS cannot be accessed over HTTPS. Then click Save and choose More > Restart Service to restart the HDFS service. To check which protocols your curl build supports, see the sketch after this list.
TLSv1 has security vulnerabilities. Exercise caution when using it.
- Log in to the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS > Configurations > All Configurations. Type dfs.http.policy in the search box, select HTTP_AND_HTTPS, click Save, and choose More > Restart Service to restart the HDFS service.
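As referenced above, you can check which SSL/TLS protocols your curl build supports and force a specific protocol for a request. The following is a sketch; --tlsv1.2 is a standard curl option, but the protocols actually accepted depend on how curl was compiled, and the placeholders must be replaced with real values:
curl -V
curl --tlsv1.2 -k -i --negotiate -u: "https://<HOST>:9871/webhdfs/v1/?op=GETFILESTATUS"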
Procedure
- Log in to the FusionInsight Manager portal, click Cluster > Name of the desired cluster > Services, and then select HDFS. The HDFS page is displayed.
Because webhdfs is accessed through HTTP/HTTPS, you need to obtain the IP address of the active NameNode and the HTTP/HTTPS port.
- Click Instance. On the HDFS Instances page, view the host name and IP address of the active NameNode.
- Click Configurations. On the HDFS Service Configuration page, find namenode.http.port (9870) and namenode.https.port (9871). You can verify the obtained address and port with the check below.
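To verify the obtained address and port, you can send a simple WebHDFS request to the active NameNode. This is a sketch using the standard GETFILESTATUS operation on the root path; replace <HOST> and <PORT> with the values obtained above:
curl -k -i --negotiate -u: "https://<HOST>:<PORT>/webhdfs/v1/?op=GETFILESTATUS"
If the values are correct, an HTTP 200 response containing a FileStatus JSON object is returned.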
- Create a directory by referring to the following link:
http://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Make_a_Directory
The linked section describes the command for creating a directory, in the following format:
curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=MKDIRS[&permission=<OCTAL>]"
Go to the /opt/client directory, the installation directory of the client, and create the huawei directory.
- Run the following command to check whether the huawei directory exists in the current path.
hdfs dfs -ls /
The running results are as follows:
linux1:/opt/client # hdfs dfs -ls /
16/04/22 16:10:02 INFO hdfs.PeerCache: SocketCache disabled.
Found 7 items
-rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
drwxrwxrwx   - spark  supergroup          0 2016-04-22 15:19 /sparkJobHistory
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:50 /user
The huawei directory does not exist in the current path.
- Run the directory creation command in the format shown above, with huawei as the directory in <PATH>. Replace <HOST> and <PORT> in the command with the host name or IP address and port number obtained in 1.
<HOST> can be either the host name or the IP address. Note that the HTTP port differs from the HTTPS port.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei?op=MKDIRS"
In the command, <HOST> is replaced by linux1 and <PORT> is replaced by 9870.
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 03:10:09 GMT
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 03:10:09 GMT
Date: Thu, 05 May 2016 03:10:09 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCArhuv39Ttp6lhBlG3B0JAmFjv9weLp+SGFI+t2HSEHN6p4UVWKKy/kd9dKEgNMlyDu/o7ytzs0cqMxNsI69WbN5H
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462453809395&s=wiRF4rdTWpm3tDST+a/Sy0lwgA4="; Path=/; Expires=Thu, 05-May-2016 13:10:09 GMT; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
If {"boolean":true} returns, the huawei directory is successfully created.
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://10.120.172.109:9871/webhdfs/v1/huawei?op=MKDIRS"
In the command, <HOST> is replaced by the IP address 10.120.172.109 and <PORT> is replaced by 9871.
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Fri, 22 Apr 2016 08:13:37 GMT
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Expires: Fri, 22 Apr 2016 08:13:37 GMT
Date: Fri, 22 Apr 2016 08:13:37 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCugB+yT3Y+z8YCRMYJHXF84o1cyCfJq157+NZN1gu7D7yhMULnjr+7BuUdEcZKewFR7uD+DRiMY3akg3OgU45xQ9R
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1461348817963&s=sh57G7iVccX/Aknoz410yJPTLHg="; Path=/; Expires=Fri, 22-Apr-2016 18:13:37 GMT; Secure; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
If {"boolean":true} returns, the huawei directory is successfully created.
- Run the following command to check whether the huawei directory now exists in the path.
linux1:/opt/client # hdfs dfs -ls /
16/04/22 16:14:25 INFO hdfs.PeerCache: SocketCache disabled.
Found 8 items
-rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
drwxr-xr-x   - hdfs   supergroup          0 2016-04-22 16:13 /huawei
drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
drwxrwxrwx   - spark  supergroup          0 2016-04-22 16:12 /sparkJobHistory
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 16:10 /user
- Send a create request to obtain the Location information, which contains the address of the DataNode that the file will be written to.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=CREATE"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 06:09:48 GMT
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 06:09:48 GMT
Date: Thu, 05 May 2016 06:09:48 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCzQ6w+9pNzWCTJEdoU3z9xKEyg1JQNka0nYaB9TndvrL5S0neAoK2usnictTFnqIincAjwB6SnTtht8Q16WDlHJX/
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462464588403&s=qry87vAyYzSn9VsS6Rm6vKLhKeU="; Path=/; Expires=Thu, 05-May-2016 16:09:48 GMT; HttpOnly
Location: http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false
Content-Length: 0
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=CREATE"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 03:46:18 GMT
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 03:46:18 GMT
Date: Thu, 05 May 2016 03:46:18 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCZMYR8GGUkn7pPZaoOYZD5HxzLTRZ71angUHKubW2wC/18m9/OOZstGQ6M1wH2pGriipuCNsKIfwP93eO2Co0fQF3
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462455978166&s=F4rXUwEevHZze3PR8TxkzcV7RQQ="; Path=/; Expires=Thu, 05-May-2016 13:46:18 GMT; Secure; HttpOnly
Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false
Content-Length: 0
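If you want to capture the Location value automatically instead of copying it by hand, a sketch like the following can help (standard curl and grep; the URL is the same create request as above):
linux1:/opt/client # curl -i -k -s -X PUT --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=CREATE" | grep -i '^Location:'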
- According to the Location information, create the /huawei/testHdfs file on HDFS and upload the content of the local testFile file into the testHdfs file.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X PUT -T testFile --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false"
- The running result is displayed as follows:
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Location: hdfs://hacluster/huawei/testHdfs
Content-Length: 0
Connection: close
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X PUT -T testFile --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false"
- The running result is displayed as follows:
HTTP/1.1 100 Continue
HTTP/1.1 201 Created
Location: hdfs://hacluster/huawei/testHdfs
Content-Length: 0
Connection: close
- Read the content of the testHdfs file in the /huawei directory of HDFS.
- Run the following command to access HTTP:
linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
- The running result is displayed as follows:
Hello, webhdfs user!
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
- The running result is displayed as follows:
Hello, webhdfs user!
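As a cross-check, you can also read the file with the HDFS client installed earlier; this assumes the client environment is configured and the user is authenticated:
hdfs dfs -cat /huawei/testHdfs
The expected output is Hello, webhdfs user!.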
- Send an append request to obtain the Location information, which contains the address of the DataNode that stores the testHdfs file.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X POST --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=APPEND"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 1349

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:35:02 GMT
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:35:02 GMT
Date: Thu, 05 May 2016 05:35:02 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCTYvNX/2JMXhzsVPTw3Sluox6s/gEroHH980xMBkkYlCnO3W+0fM32c4/F98U5bl5dzgoolQoBvqq/EYXivvR12WX
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462462502626&s=et1okVIOd7DWJ/LdhzNeS2wQEEY="; Path=/; Expires=Thu, 05-May-2016 15:35:02 GMT; HttpOnly
Location: http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster
Content-Length: 0
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X POST --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=APPEND"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Cache-Control: must-revalidate,no-cache,no-store
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Content-Type: text/html; charset=iso-8859-1
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 1349

HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:20:41 GMT
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:20:41 GMT
Date: Thu, 05 May 2016 05:20:41 GMT
Pragma: no-cache
Content-Type: application/octet-stream
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCXgdjZuoxLHGtM1oyrPcXk95/Y869eMfXIQV5UdEwBZ0iQiYaOdf5+Vk7a7FezhmzCABOWYXPxEQPNugbZ/yD5VLT
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462461641713&s=tGwwOH9scmnNtxPjlnu28SFtex0="; Path=/; Expires=Thu, 05-May-2016 15:20:41 GMT; Secure; HttpOnly
Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster
- According to the Location information, append the content of the local testFileAppend file to the testHdfs file in the /huawei directory of HDFS.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X POST -T testFileAppend --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster"
- The running result is displayed as follows:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Length: 0
Connection: close
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X POST -T testFileAppend --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster"
- The running result is displayed as follows:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Length: 0
Connection: close
- Read all content of the testHdfs file in the /huawei directory of HDFS again.
- Run the following command to access HTTP:
linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
- The running result is displayed as follows:
Hello, webhdfs user! Welcome back to webhdfs!
- Run the following command to access HTTPS:
linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
- The running result is displayed as follows:
Hello, webhdfs user! Welcome back to webhdfs!
- List the information about all directories and files in the huawei directory of HDFS.
LISTSTATUS returns the information about all child files and folders in a single request.
- Run the following command to access HTTP.
linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
- The result is displayed as follows:
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Run the following command to access HTTPS.
linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
- The result is displayed as follows:
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
LISTSTATUS used together with the size and startafter parameters fetches the information about child files and folders through multiple requests, which prevents the user interface from slowing down when there is a large amount of child information to fetch.
- Run the following command to access HTTP.
linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
- The result is displayed as follows:
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Run the following command to access HTTPS.
linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
- The result is displayed as follows:
{"FileStatuses":{"FileStatus":[ {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"} ]}}
- Delete the testHdfs file from the /huawei directory of HDFS.
- Run the following command to access HTTP:
linux1:/opt/client # curl -i -X DELETE --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=DELETE"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 05:54:37 GMT
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 05:54:37 GMT
Date: Thu, 05 May 2016 05:54:37 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARC9k0/v6Ed8VlUBy3kuT0b4RkqkNMCrDevsLGQOUQRORkzWI3Wu+XLJUMKlmZaWpP+bPzpx8O2Od81mLBgdi8sOkLw
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462463677153&s=Pwxe5UIqaULjFb9R6ZwlSX85GoI="; Path=/; Expires=Thu, 05-May-2016 15:54:37 GMT; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
- Run the following command to access HTTPS:
linux1:/opt/client # curl -i -k -X DELETE --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=DELETE"
- The running result is displayed as follows:
HTTP/1.1 401 Authentication required
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
Content-Length: 0

HTTP/1.1 200 OK
Cache-Control: no-cache
Expires: Thu, 05 May 2016 06:20:10 GMT
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Expires: Thu, 05 May 2016 06:20:10 GMT
Date: Thu, 05 May 2016 06:20:10 GMT
Pragma: no-cache
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCLY5vrVmgsiH2VWRypc30iZGffRUf4nXNaHCWni3TIDUOTl+S+hfjatSbo/+uayQI/6k9jAfaJrvFIfxqppFtofpp
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462465210180&s=KGd2SbH/EUSaaeVKCb5zPzGBRKo="; Path=/; Expires=Thu, 05-May-2016 16:20:10 GMT; Secure; HttpOnly
Transfer-Encoding: chunked

{"boolean":true}linux1:/opt/client #
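To confirm the deletion, you can list the /huawei directory with the HDFS client (assuming the client is configured as described in Preparing the Running Environment):
hdfs dfs -ls /huawei
If the deletion succeeded, testHdfs no longer appears in the listing.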
The Key Management Server (KMS) uses the HTTP REST API to provide key management services for external systems. For details about the API, see:
http://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html.
The REST API interface has been hardened against script injection attacks. As a result, it cannot be used to create directories or files whose names contain the keywords "<script ", "<iframe", "<frame", or "javascript:".
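As a minimal sketch of a KMS REST call, the following request lists the names of all keys. It assumes the KMS default port 9600 and a completed Kerberos authentication; <KMS_HOST> is a placeholder for the KMS host name:
curl --negotiate -u: "http://<KMS_HOST>:9600/kms/v1/keys/names"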