When the Hadoop Client Is Used to Delete Data from OBS, It Does Not Have Permission on the .Trash Directory
Issue
When a user deletes data from OBS using the Hadoop client, an error message is displayed indicating that the user does not have permission on the .Trash directory.
Symptom
After the hadoop fs -rm obs://<obs_path> command is executed, the following error information is displayed:
exception [java.nio.file.AccessDeniedException: user/root/.Trash/Current/: getFileStatus on user/root/.Trash/Current/: status [403]
Cause Analysis
When deleting a file, Hadoop first moves it to the .Trash directory. If the user does not have permission on that directory, error 403 is reported.
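For example, with the Hadoop trash mechanism enabled (fs.trash.interval greater than 0), a deletion such as the following (the bucket and file names are hypothetical) does not remove the object directly:
hadoop fs -rm obs://mybucket/tmp/test.txt
Instead, the client tries to move the object into the current user's trash directory, for example obs://mybucket/user/root/.Trash/Current/tmp/test.txt. The getFileStatus check on that directory is what fails with status 403 when the agency bound to the cluster cannot access it.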
Procedure
Solution 1:
Run the hadoop fs -rm -skipTrash command to delete the file directly, without moving it to the .Trash directory.
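For example, to permanently delete a file without moving it to the trash directory (the bucket and path below are placeholders):
hadoop fs -rm -skipTrash obs://mybucket/tmp/test.txt
Note that a file deleted with -skipTrash bypasses the .Trash directory and cannot be restored from it.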
Solution 2:
Grant the agency bound to the cluster the permission to access the .Trash directory.
- On the Dashboard tab page of the cluster, query and record the name of the agency bound to the cluster.
- Log in to the IAM console.
- Choose Permissions. On the displayed page, click Create Custom Policy.
- Policy Name: Enter a policy name.
- Scope: Select Global services.
- Policy View: Select Visual editor.
- Policy Content:
- Allow: Select Allow.
- Select service: Select Object Storage Service (OBS).
- Select all operation permissions.
- Specific resources:
- Set object to Specify resource path, click Add resource path, and enter the path of the .Trash directory in Path, for example, obs_bucket_name/user/root/.Trash/*.
- Set bucket to Specify resource path, click Add resource path, and enter obs_bucket_name in Path.
Replace obs_bucket_name with the actual OBS bucket name.
- (Optional) Request condition: no conditions need to be added in this scenario.
Figure 1 Custom policy
- Click OK.
- Select Agency and click Assign Permissions in the Operation column of the agency recorded in the first step.
- Search for and select the custom policy created earlier.
- Click OK.
- Run the hadoop fs -rm obs://<obs_path> command again.
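If the agency permissions have taken effect, the command succeeds and the deleted object is moved into the trash directory instead of triggering the 403 error. A quick check (the bucket and path below are placeholders) could look like this:
hadoop fs -rm obs://mybucket/tmp/test.txt
hadoop fs -ls obs://mybucket/user/root/.Trash/Current/tmp/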