Updated on: 2024-10-18 GMT+08:00

Introduction to HDFS HTTP REST APIs

Function Description

The file operations covered in the REST application development sample code include creating, reading, writing, appending, and deleting files. For the complete and detailed interface descriptions and their usage, see the official documentation:

http://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
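
For quick reference, the operations in this section map to the following WebHDFS op parameters. The commands below are a minimal sketch drawn from the steps that follow; <HOST>, <PORT>, and <PATH> are placeholders that are filled in under "Preparing the Running Environment" and "Procedure".

  # Create and write: a two-step PUT (the first request returns a redirect Location)
  curl -i -X PUT --negotiate -u: "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE"
  # Read
  curl -L --negotiate -u: "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=OPEN"
  # Append: a two-step POST, analogous to CREATE
  curl -i -X POST --negotiate -u: "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=APPEND"
  # Delete
  curl -i -X DELETE --negotiate -u: "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE"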

Preparing the Running Environment

  1. Install the cluster client. In this example, the client installation directory is "/opt/client".

    1. Run the following command for user authentication. The hdfs user is used as an example here; replace it with the actual username.
      kinit hdfs

      A kinit ticket is valid for 24 hours by default. To run the sample again after it expires, run kinit again.
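
      To check how long the current ticket remains valid, the standard Kerberos klist command can be used (shown here as a convenience; the Expires column of its output indicates when kinit must be run again):

      klist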

    2. In the client directory, create the files "testFile" and "testFileAppend" with the contents "Hello, webhdfs user!" and "Welcome back to webhdfs!" respectively.

      touch testFile

      vi testFile

      Hello, webhdfs user!

      touch testFileAppend

      vi testFileAppend

      Welcome back to webhdfs!
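
      Alternatively, the two files can be created non-interactively; the following sketch is equivalent to the touch and vi steps above:

      echo "Hello, webhdfs user!" > testFile
      echo "Welcome back to webhdfs!" > testFileAppend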

  2. By default, an MRS cluster supports access over HTTPS only. For access over HTTPS, go to 3; for access over HTTP, go to 4.
  3. Compared with HTTP access, accessing HDFS over HTTPS uses SSL encryption, so you must ensure that the SSL protocols supported by the curl command are enabled in the cluster. If they are not, modify the cluster's SSL protocol configuration accordingly. For example, if curl supports only the TLSv1 protocol, modify the configuration as follows:

    Log in to FusionInsight Manager, choose "Cluster > Name of the target cluster > Services > HDFS > Configurations > All Configurations", and search for "hadoop.ssl.enabled.protocols" in the search box. Check whether the parameter value contains "TLSv1"; if it does not, append ",TLSv1" to "hadoop.ssl.enabled.protocols". Clear the value of the "ssl.server.exclude.cipher.list" configuration item; otherwise, HDFS cannot be accessed over HTTPS. Click "Save > OK", and restart the HDFS service after the save completes.

    The TLSv1 protocol has known security vulnerabilities. Exercise caution when using it.
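
    To test whether a given TLS version can be negotiated with the cluster before changing any configuration, a handshake can be attempted directly. This is a sketch; --tlsv1.2 is an example option, <HOST> is the NameNode host, and -k skips certificate verification as elsewhere in this section:

      curl -k -I --tlsv1.2 "https://<HOST>:9871/"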

  4. Log in to FusionInsight Manager, choose "Cluster > Name of the target cluster > Services > HDFS > Configurations > All Configurations", search for "dfs.http.policy" in the search box, select "HTTP_AND_HTTPS", click "Save", and then choose "More > Restart Service" to restart the HDFS service.

Procedure

  1. Log in to FusionInsight Manager, choose "Cluster > Name of the target cluster > Services", select "HDFS", and click to enter the HDFS service status page.

    Because WebHDFS is accessed over HTTP/HTTPS, you need the IP address of the active NameNode and the HTTP/HTTPS ports.

    1. Click "Instances" to open the HDFS instance page, and find the host name and IP address of "NameNode(hacluster, Active)".
    2. Click "Configurations" to open the HDFS service configuration page, and find "namenode.http.port" (9870) and "namenode.https.port" (9871).
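
    With the host name and port from the previous two substeps, a quick connectivity check can be run before proceeding. This is a sketch; GETFILESTATUS is a standard WebHDFS operation, and linux1 and 9870 are the example values used throughout this section:

      HOST=linux1   # host name or IP address of the active NameNode, from 1.a
      PORT=9870     # namenode.http.port; for HTTPS, use 9871 with https:// and -k
      curl -i --negotiate -u: "http://$HOST:$PORT/webhdfs/v1/?op=GETFILESTATUS"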

  2. Create a directory by referring to the following link:

    http://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Make_a_Directory

    Click the link; the sample command is shown in Figure 1.

    Figure 1 Sample command for creating a directory

    Go to the client installation directory, "/opt/client" in this example, and create a directory named "huawei".

    1. Run the following command to check whether a directory named "huawei" currently exists:

      hdfs dfs -ls /

      The command output is as follows:

      linux1:/opt/client # hdfs dfs -ls /
      16/04/22 16:10:02 INFO hdfs.PeerCache: SocketCache disabled.
      Found 7 items
      -rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
      drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
      drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
      drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
      drwxrwxrwx   - spark  supergroup          0 2016-04-22 15:19 /sparkJobHistory
      drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
      drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:50 /user

      The "huawei" directory does not exist under the current path.

    2. Run the command in Figure 1 to create a directory named "huawei". Replace <HOST> and <PORT> in the command with the host name (or IP address) and port found in 1, and enter the directory to be created, "huawei", in <PATH>.

      Either the host name or the IP address can replace <HOST>; note that the HTTP and HTTPS ports differ.

      • Run the following command to access over HTTP:
        linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei?op=MKDIRS"

        Here, linux1 replaces <HOST> and 9870 replaces <PORT>.

      • Command output:
        HTTP/1.1 401 Authentication required
        Date: Thu, 05 May 2016 03:10:09 GMT
        Pragma: no-cache
        Date: Thu, 05 May 2016 03:10:09 GMT
        Pragma: no-cache
        X-Frame-Options: SAMEORIGIN
        WWW-Authenticate: Negotiate
        Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
        Content-Length: 0
        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Expires: Thu, 05 May 2016 03:10:09 GMT
        Date: Thu, 05 May 2016 03:10:09 GMT
        Pragma: no-cache
        Expires: Thu, 05 May 2016 03:10:09 GMT
        Date: Thu, 05 May 2016 03:10:09 GMT
        Pragma: no-cache
        Content-Type: application/json
        X-Frame-Options: SAMEORIGIN
        WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCArhuv39Ttp6lhBlG3B0JAmFjv9weLp+SGFI+t2HSEHN6p4UVWKKy/kd9dKEgNMlyDu/o7ytzs0cqMxNsI69WbN5H
        Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462453809395&s=wiRF4rdTWpm3tDST+a/Sy0lwgA4="; Path=/; Expires=Thu, 05-May-2016 13:10:09 GMT; HttpOnly
        Transfer-Encoding: chunked
        {"boolean":true}linux1:/opt/client # 

        The return value {"boolean":true} indicates that the directory was created successfully.

      • Run the following command to access over HTTPS:
        linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://10.120.172.109:9871/webhdfs/v1/huawei?op=MKDIRS"

        Here, the IP address 10.120.172.109 replaces <HOST> and 9871 replaces <PORT>.

      • Command output:
        HTTP/1.1 401 Authentication required
        Date: Fri, 22 Apr 2016 08:13:37 GMT
        Pragma: no-cache
        Date: Fri, 22 Apr 2016 08:13:37 GMT
        Pragma: no-cache
        X-Frame-Options: SAMEORIGIN
        WWW-Authenticate: Negotiate
        Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
        Content-Length: 0
        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Expires: Fri, 22 Apr 2016 08:13:37 GMT
        Date: Fri, 22 Apr 2016 08:13:37 GMT
        Pragma: no-cache
        Expires: Fri, 22 Apr 2016 08:13:37 GMT
        Date: Fri, 22 Apr 2016 08:13:37 GMT
        Pragma: no-cache
        Content-Type: application/json
        X-Frame-Options: SAMEORIGIN
        WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCugB+yT3Y+z8YCRMYJHXF84o1cyCfJq157+NZN1gu7D7yhMULnjr+7BuUdEcZKewFR7uD+DRiMY3akg3OgU45xQ9R
        Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1461348817963&s=sh57G7iVccX/Aknoz410yJPTLHg="; Path=/; Expires=Fri, 22-Apr-2016 18:13:37 GMT; Secure; HttpOnly
        Transfer-Encoding: chunked
        
        {"boolean":true}linux1:/opt/client # 

        The return value {"boolean":true} indicates that the directory was created successfully.

    3. Run the following command again; the "huawei" directory now appears under the path:
      linux1:/opt/client # hdfs dfs -ls /
      16/04/22 16:14:25 INFO hdfs.PeerCache: SocketCache disabled.
      Found 8 items
      -rw-r--r--   3 hdfs   supergroup          0 2016-04-20 18:03 /PRE_CREATE_DIR.SUCCESS
      drwxr-x---   - flume  hadoop              0 2016-04-20 18:02 /flume
      drwx------   - hbase  hadoop              0 2016-04-22 15:19 /hbase
      drwxr-xr-x   - hdfs  supergroup          0 2016-04-22 16:13 /huawei
      drwxrwxrwx   - mapred hadoop              0 2016-04-20 18:02 /mr-history
      drwxrwxrwx   - spark  supergroup          0 2016-04-22 16:12 /sparkJobHistory
      drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 14:51 /tmp
      drwxrwxrwx   - hdfs   hadoop              0 2016-04-22 16:10 /user
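
    MKDIRS also accepts an optional permission parameter (an octal mode), as described in the Apache WebHDFS documentation linked above. For example, the following sketch creates the directory with mode 755:

      curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei?op=MKDIRS&permission=755"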

  3. Issue the create (upload) request to obtain the Location information, that is, the address of a writable DataNode allocated by the cluster.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=CREATE"
    • Command output:
      HTTP/1.1 401 Authentication required
      Date: Thu, 05 May 2016 06:09:48 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 06:09:48 GMT
      Pragma: no-cache
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
      Content-Length: 0
      
      HTTP/1.1 307 TEMPORARY_REDIRECT
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 06:09:48 GMT
      Date: Thu, 05 May 2016 06:09:48 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 06:09:48 GMT
      Date: Thu, 05 May 2016 06:09:48 GMT
      Pragma: no-cache
      Content-Type: application/octet-stream
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCzQ6w+9pNzWCTJEdoU3z9xKEyg1JQNka0nYaB9TndvrL5S0neAoK2usnictTFnqIincAjwB6SnTtht8Q16WDlHJX/
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462464588403&s=qry87vAyYzSn9VsS6Rm6vKLhKeU="; Path=/; Expires=Thu, 05-May-2016 16:09:48 GMT; HttpOnly
      Location: http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false
      Content-Length: 0
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -i -k -X PUT --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=CREATE"
    • Command output:
      HTTP/1.1 401 Authentication required
      Date: Thu, 05 May 2016 03:46:18 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 03:46:18 GMT
      Pragma: no-cache
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
      Content-Length: 0
      
      HTTP/1.1 307 TEMPORARY_REDIRECT
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 03:46:18 GMT
      Date: Thu, 05 May 2016 03:46:18 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 03:46:18 GMT
      Date: Thu, 05 May 2016 03:46:18 GMT
      Pragma: no-cache
      Content-Type: application/octet-stream
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCZMYR8GGUkn7pPZaoOYZD5HxzLTRZ71angUHKubW2wC/18m9/OOZstGQ6M1wH2pGriipuCNsKIfwP93eO2Co0fQF3
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462455978166&s=F4rXUwEevHZze3PR8TxkzcV7RQQ="; Path=/; Expires=Thu, 05-May-2016 13:46:18 GMT; Secure; HttpOnly
      Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false
      Content-Length: 0

  4. Using the obtained Location information, create the "/huawei/testHdfs" file in the HDFS file system and upload the content of the local "testFile" file to it.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -i -X PUT -T testFile --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUf4lZdIoBVKOV3XQOCBSyXvFAp92alcRs4j-KNulnN6wUoBJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster&overwrite=false"
    • Command output:
      HTTP/1.1 100 Continue
      HTTP/1.1 201 Created
      Location: hdfs://hacluster/huawei/testHdfs
      Content-Length: 0
      Connection: close
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -i -k -X PUT -T testFile --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=CREATE&delegation=HgAFYWRtaW4FYWRtaW4AigFUfwX3t4oBVKMSe7cCCBSFJTi9j7X64QwnSz59TGFPKFf7GhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster&overwrite=false"
    • Command output:
      HTTP/1.1 100 Continue
      HTTP/1.1 201 Created
      Location: hdfs://hacluster/huawei/testHdfs
      Content-Length: 0
      Connection: close
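
    Steps 3 and 4 can also be combined in a small script that extracts the Location header from the first response and feeds it to the upload request. This is a sketch, assuming the example host name and HTTP port used above:

      # Request a writable DataNode address; keep only the redirect target
      LOCATION=$(curl -s -i -X PUT --negotiate -u: \
        "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=CREATE" \
        | grep -i '^Location:' | awk '{print $2}' | tr -d '\r')
      # Upload the local file to the returned DataNode address
      curl -i -X PUT -T testFile --negotiate -u: "$LOCATION"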

  5. Open the "/huawei/testHdfs" file and read the content that was uploaded and written to it.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
    • Command output:
      Hello, webhdfs user!
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
    • Command output:
      Hello, webhdfs user!

  6. Issue the append request to obtain the Location information, that is, the address of a writable DataNode that the cluster allocated for the existing "/huawei/testHdfs" file.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -i -X POST --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=APPEND"
    • Command output:
      HTTP/1.1 401 Authentication required
      Cache-Control: must-revalidate,no-cache,no-store
      Date: Thu, 05 May 2016 05:35:02 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 05:35:02 GMT
      Pragma: no-cache
      Content-Type: text/html; charset=iso-8859-1
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
      Content-Length: 1349
      
      HTTP/1.1 307 TEMPORARY_REDIRECT
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 05:35:02 GMT
      Date: Thu, 05 May 2016 05:35:02 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 05:35:02 GMT
      Date: Thu, 05 May 2016 05:35:02 GMT
      Pragma: no-cache
      Content-Type: application/octet-stream
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCTYvNX/2JMXhzsVPTw3Sluox6s/gEroHH980xMBkkYlCnO3W+0fM32c4/F98U5bl5dzgoolQoBvqq/EYXivvR12WX
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462462502626&s=et1okVIOd7DWJ/LdhzNeS2wQEEY="; Path=/; Expires=Thu, 05-May-2016 15:35:02 GMT; HttpOnly
      Location: http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster
      Content-Length: 0
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -i -k -X POST --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=APPEND"
    • Command output:
      HTTP/1.1 401 Authentication required
      Cache-Control: must-revalidate,no-cache,no-store
      Date: Thu, 05 May 2016 05:20:41 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 05:20:41 GMT
      Pragma: no-cache
      Content-Type: text/html; charset=iso-8859-1
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
      Content-Length: 1349
      
      HTTP/1.1 307 TEMPORARY_REDIRECT
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 05:20:41 GMT
      Date: Thu, 05 May 2016 05:20:41 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 05:20:41 GMT
      Date: Thu, 05 May 2016 05:20:41 GMT
      Pragma: no-cache
      Content-Type: application/octet-stream
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCXgdjZuoxLHGtM1oyrPcXk95/Y869eMfXIQV5UdEwBZ0iQiYaOdf5+Vk7a7FezhmzCABOWYXPxEQPNugbZ/yD5VLT
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462461641713&s=tGwwOH9scmnNtxPjlnu28SFtex0="; Path=/; Expires=Thu, 05-May-2016 15:20:41 GMT; Secure; HttpOnly
      Location: https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster

  7. Using the obtained Location information, append the content of the local "testFileAppend" file to the "/huawei/testHdfs" file in the HDFS file system.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -i -X POST -T testFileAppend --negotiate -u: "http://linux1:9864/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf2mGHooBVKN2Ch4KCBRzjM3jwSMlAowXb4dhqfKB5rT-8hJXRUJIREZTIGRlbGVnYXRpb24UMTAuMTIwLjE3Mi4xMDk6MjUwMDA&namenoderpcaddress=hacluster"
    • Command output:
      HTTP/1.1 100 Continue
      HTTP/1.1 200 OK
      Content-Length: 0
      Connection: close
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -i -k -X POST -T testFileAppend --negotiate -u: "https://linux1:9865/webhdfs/v1/huawei/testHdfs?op=APPEND&delegation=HgAFYWRtaW4FYWRtaW4AigFUf1xi_4oBVKNo5v8HCBSE3Fg0f_EwtFKKlODKQSM2t32CjhNTV0VCSERGUyBkZWxlZ2F0aW9uFDEwLjEyMC4xNzIuMTA5OjI1MDAw&namenoderpcaddress=hacluster"
    • Command output:
      HTTP/1.1 100 Continue
      HTTP/1.1 200 OK
      Content-Length: 0
      Connection: close
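
    As with CREATE, steps 6 and 7 can be scripted by extracting the Location header from the APPEND response; a sketch under the same assumptions:

      LOCATION=$(curl -s -i -X POST --negotiate -u: \
        "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=APPEND" \
        | grep -i '^Location:' | awk '{print $2}' | tr -d '\r')
      curl -i -X POST -T testFileAppend --negotiate -u: "$LOCATION"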

  8. Open the "/huawei/testHdfs" file and read all of its content.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -L --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=OPEN"
    • Command output:
      Hello, webhdfs user!
      Welcome back to webhdfs!
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -k -L --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=OPEN"
    • Command output:
      Hello, webhdfs user!
      Welcome back to webhdfs!

  9. List detailed information about all directories and files under the "huawei" directory in the file system.

    LISTSTATUS returns the information of all child files and folders in a single request.
    • Run the following command to access over HTTP:
      linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
    • Command output:
      {"FileStatuses":{"FileStatus":[
      {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"}
      ]}}
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=LISTSTATUS"
    • Command output:
      {"FileStatuses":{"FileStatus":[
      {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"}
      ]}}

    LISTSTATUS with the size and startafter parameters helps retrieve the information of child files and folders over multiple requests, avoiding a slow user interface when a large number of children would otherwise be returned at once; a pagination sketch follows the examples below.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
    • Command output:
      {"FileStatuses":{"FileStatus":[
      {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"}
      ]}}
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -k --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/?op=LISTSTATUS&startafter=sparkJobHistory&size=1"
    • Command output:
      {"FileStatuses":{"FileStatus":[
      {"accessTime":1462425245595,"blockSize":134217728,"childrenNum":0,"fileId":17680,"group":"supergroup","length":70,"modificationTime":1462426678379,"owner":"hdfs","pathSuffix":"testHdfs","permission":"755","replication":3,"storagePolicy":0,"type":"FILE"}
      ]}}
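
    To page through a large directory, the startafter value of each request can be taken from the pathSuffix of the last entry in the previous response. The following loop is a sketch; it assumes jq is installed, uses the example host and port from above, and treats an empty FileStatus array as the end (names containing special characters would additionally need URL encoding):

      START=""
      while :; do
        RESP=$(curl -s --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/?op=LISTSTATUS&size=100${START:+&startafter=$START}")
        echo "$RESP"
        # The pathSuffix of the last entry seeds the next page; empty means done
        START=$(echo "$RESP" | jq -r '.FileStatuses.FileStatus[-1].pathSuffix // empty')
        [ -z "$START" ] && break
      done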

  10. Delete the "/huawei/testHdfs" file from HDFS.

    • Run the following command to access over HTTP:
      linux1:/opt/client # curl -i -X DELETE --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei/testHdfs?op=DELETE"
    • Command output:
      HTTP/1.1 401 Authentication required
      Date: Thu, 05 May 2016 05:54:37 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 05:54:37 GMT
      Pragma: no-cache
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
      Content-Length: 0
      HTTP/1.1 200 OK
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 05:54:37 GMT
      Date: Thu, 05 May 2016 05:54:37 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 05:54:37 GMT
      Date: Thu, 05 May 2016 05:54:37 GMT
      Pragma: no-cache
      Content-Type: application/json
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARC9k0/v6Ed8VlUBy3kuT0b4RkqkNMCrDevsLGQOUQRORkzWI3Wu+XLJUMKlmZaWpP+bPzpx8O2Od81mLBgdi8sOkLw
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462463677153&s=Pwxe5UIqaULjFb9R6ZwlSX85GoI="; Path=/; Expires=Thu, 05-May-2016 15:54:37 GMT; HttpOnly
      Transfer-Encoding: chunked
      {"boolean":true}linux1:/opt/client # 
    • Run the following command to access over HTTPS:
      linux1:/opt/client # curl -i -k -X DELETE --negotiate -u: "https://linux1:9871/webhdfs/v1/huawei/testHdfs?op=DELETE"
    • Command output:
      HTTP/1.1 401 Authentication required
      Date: Thu, 05 May 2016 06:20:10 GMT
      Pragma: no-cache
      Date: Thu, 05 May 2016 06:20:10 GMT
      Pragma: no-cache
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate
      Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Secure; HttpOnly
      Content-Length: 0
      HTTP/1.1 200 OK
      Cache-Control: no-cache
      Expires: Thu, 05 May 2016 06:20:10 GMT
      Date: Thu, 05 May 2016 06:20:10 GMT
      Pragma: no-cache
      Expires: Thu, 05 May 2016 06:20:10 GMT
      Date: Thu, 05 May 2016 06:20:10 GMT
      Pragma: no-cache
      Content-Type: application/json
      X-Frame-Options: SAMEORIGIN
      WWW-Authenticate: Negotiate YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCLY5vrVmgsiH2VWRypc30iZGffRUf4nXNaHCWni3TIDUOTl+S+hfjatSbo/+uayQI/6k9jAfaJrvFIfxqppFtofpp
      Set-Cookie: hadoop.auth="u=hdfs&p=hdfs@<system domain name>&t=kerberos&e=1462465210180&s=KGd2SbH/EUSaaeVKCb5zPzGBRKo="; Path=/; Expires=Thu, 05-May-2016 16:20:10 GMT; Secure; HttpOnly
      Transfer-Encoding: chunked
      {"boolean":true}linux1:/opt/client #
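
    DELETE also accepts an optional recursive parameter, as described in the Apache WebHDFS documentation linked above. The following sketch removes the "huawei" directory and everything under it:

      curl -i -X DELETE --negotiate -u: "http://linux1:9870/webhdfs/v1/huawei?op=DELETE&recursive=true"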

The key management system provides key management services externally through an HTTP REST API. For interface details, see the official documentation:

http://hadoop.apache.org/docs/r3.1.1/hadoop-kms/index.html

The REST API has been security-hardened to prevent script injection attacks. As a result, directories and file names containing the keywords "<script ", "<iframe", "<frame", or "javascript:" cannot be created through the REST API.
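
For example, a request whose path contains one of these keywords (URL-encoded here) is expected to be rejected. This is an illustrative sketch only; the exact error response may vary:

  curl -i -X PUT --negotiate -u: "http://linux1:9870/webhdfs/v1/%3Cscript%20demo?op=MKDIRS"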
