Connecting Hadoop to OBS
Overview
Hadoop provides a distributed resource scheduling engine for processing and analyzing large-scale data sets. OBS implements the Hadoop HDFS protocol, so it can replace HDFS in the Hadoop system and connect to big data components such as Spark, MapReduce, and Hive, serving as the data lake storage for big data computing.
The HDFS protocol is defined through the FileSystem abstract class in Hadoop, which can be implemented by different storage systems, such as the HDFS service built into Hadoop and Huawei Cloud OBS.
Constraints
The following HDFS semantics are not supported:
- Lease
- Symbolic link operations
- Proxy users
- File concat
- File checksum
- File replication factor
- Extended attributes (Xattrs) operations
- Snapshot operations
- Storage policy
- Quota
- POSIX ACL
- Delegation token operations
Precautions
To reduce output logs, add the following configuration to the /opt/hadoop-3.1.1/etc/hadoop/log4j.properties file:

```properties
log4j.logger.com.obs=ERROR
```
Procedure
Hadoop 3.1.1 is used here as an example. Using the latest Hadoop version is recommended; versions earlier than 2.8.3 are not recommended.
- Download hadoop-3.1.1.tar.gz and decompress it to the /opt/hadoop-3.1.1 directory.
- Add the following content to the /etc/profile file:
```shell
export HADOOP_HOME=/opt/hadoop-3.1.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```
- Install hadoop-huaweicloud.
- Download it from GitHub.
If no JAR package of the required version is available, modify the Hadoop version in the POM file in the hadoop-huaweicloud directory and recompile.
- Copy the hadoop-huaweicloud-x.x.x-hw-y.jar package to the /opt/hadoop-3.1.1/share/hadoop/tools/lib and /opt/hadoop-3.1.1/share/hadoop/common/lib directories.
In a hadoop-huaweicloud-x.x.x-hw-y.jar package name, x.x.x indicates the Hadoop version number, and y indicates the OBSA version number. For example, in the hadoop-huaweicloud-3.1.1-hw-40.jar package name, 3.1.1 is the Hadoop version number, and 40 is the OBSA version number.
- Configure Hadoop.
Add OBS configurations to the /opt/hadoop-3.1.1/etc/hadoop/core-site.xml file:
```xml
<property>
  <name>fs.obs.impl</name>
  <value>org.apache.hadoop.fs.obs.OBSFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.obs.impl</name>
  <value>org.apache.hadoop.fs.obs.OBS</value>
</property>
<property>
  <name>fs.obs.access.key</name>
  <value>xxx</value>
  <description>HuaweiCloud Access Key Id</description>
</property>
<property>
  <name>fs.obs.secret.key</name>
  <value>xxx</value>
  <description>HuaweiCloud Secret Access Key</description>
</property>
<property>
  <name>fs.obs.endpoint</name>
  <value>xxx</value>
  <description>HuaweiCloud Endpoint</description>
</property>
```
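Storing a permanent AK/SK in plain text in core-site.xml is convenient for testing but not ideal for production. The appendix table also documents fs.obs.session.token for temporary credentials. A minimal sketch of the temporary-credential variant is shown below; the value is a placeholder, and the temporary AK/SK pair in fs.obs.access.key and fs.obs.secret.key must come from the same credential set as the token:

```xml
<!-- Optional: required only when a temporary AK/SK pair is used. -->
<property>
  <name>fs.obs.session.token</name>
  <value>xxx</value>
  <description>HuaweiCloud Security Token</description>
</property>
```

Alternatively, fs.obs.security.provider can name a class implementing the com.obs.services.IObsCredentialsProvider API so that credentials are obtained at runtime instead of being written into the configuration file.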
- Check whether the connection is successful.
You can use a CLI or MapReduce program for verification. Examples are provided as follows:
- CLI
```shell
hadoop fs -ls obs://obs-bucket/
```

Command output:

```
-rw-rw-rw-   1 root root 1087 2018-06-11 07:49 obs://obs-bucket/test1
-rw-rw-rw-   1 root root 1087 2018-06-11 07:49 obs://obs-bucket/test2
```
- MapReduce program
```shell
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount obs://example-bucket/input/test.txt obs://obs-bucket/output
```
[Appendix] hadoop-huaweicloud Configurations
| Configuration Item | Default Value | Mandatory | Description |
| --- | --- | --- | --- |
| fs.obs.impl | org.apache.hadoop.fs.obs.OBSFileSystem | Yes | - |
| fs.AbstractFileSystem.obs.impl | org.apache.hadoop.fs.obs.OBS | Yes | - |
| fs.obs.endpoint | N/A | Yes | Endpoint of Huawei Cloud OBS. |
| fs.obs.access.key | N/A | Yes | Access key ID (AK) for accessing the corresponding OBS bucket. |
| fs.obs.secret.key | N/A | Yes | Secret access key (SK) for accessing the corresponding OBS bucket. |
| fs.obs.session.token | N/A | No | Security token for accessing the corresponding OBS bucket. Required when a temporary AK/SK pair is used. |
| fs.obs.security.provider | N/A | No | Class implementing the com.obs.services.IObsCredentialsProvider API, used to obtain the credentials for accessing OBS. |
| fs.obs.connection.ssl.enabled | FALSE | No | Whether to access OBS over HTTPS. |
| fs.obs.threads.keepalivetime | 60 | No | keepAliveTime of the read/write thread pool. |
| fs.obs.threads.max | 20 | No | corePoolSize and maximumPoolSize of the read/write thread pool. |
| fs.obs.max.total.tasks | 20 | No | Controls the capacity of the BlockingQueue of the read/write thread pool. The queue capacity is the sum of fs.obs.threads.max and fs.obs.max.total.tasks. |
| fs.obs.multipart.size | 104857600 | No | Part size of a multipart upload. |
| fs.obs.fast.upload.buffer | disk | No | Cache method. All written data is cached and then uploaded to OBS. |
| fs.obs.buffer.dir | ${hadoop.tmp.dir} | No | Cache directory used when fs.obs.fast.upload.buffer is set to disk. Multiple directories can be specified, separated by commas (,). |
| fs.obs.bufferdir.verify.enable | FALSE | No | Whether to verify that the cache directory exists and is writable when fs.obs.fast.upload.buffer is set to disk. |
| fs.obs.fast.upload.active.blocks | 4 | No | Maximum number of cache blocks allowed per stream operation (the maximum number of tasks that can be submitted to the multipart upload thread pool). This caps the cache space each stream operation can use at fs.obs.fast.upload.active.blocks x fs.obs.multipart.size. |
| fs.obs.fast.upload.array.first.buffer | 1048576 | No | Initial size of the JVM on-heap buffer when fs.obs.fast.upload.buffer is set to array. |
| fs.obs.readahead.range | 1048576 | No | Size of the part that is read ahead. |
| fs.obs.multiobjectdelete.enable | TRUE | No | Whether to use batch deletion when deleting directories. |
| fs.obs.delete.threads.max | 20 | No | corePoolSize and maximumPoolSize of the deletion thread pool. |
| fs.obs.multiobjectdelete.maximum | 1000 | No | Maximum number of objects that can be deleted in one batch deletion request. The maximum value is 1000. |
| fs.obs.multiobjectdelete.threshold | 3 | No | Minimum number of objects for a batch deletion. If fewer objects are to be deleted, batch deletion is not used. |
| fs.obs.list.threads.core | 30 | No | corePoolSize of the list thread pool. |
| fs.obs.list.threads.max | 60 | No | maximumPoolSize of the list thread pool. |
| fs.obs.list.workqueue.capacity | 1024 | No | Capacity of the BlockingQueue of the list thread pool. |
| fs.obs.list.parallel.factor | 30 | No | Concurrency factor for list operations. |
| fs.obs.paging.maximum | 1000 | No | Maximum number of objects that can be returned in one list request. The maximum value is 1000. |
| fs.obs.copy.threads.max | 40 | No | Controls the copy thread pool used when a directory in a bucket is renamed: corePoolSize and maximumPoolSize are set to half of this value, and the BlockingQueue capacity is 1024. |
| fs.obs.copypart.size | 104857600 | No | Part size of a multipart copy. If the object to be copied is larger than this value, multipart copy is performed with this part size; otherwise, simple copy is performed. |
| fs.obs.copypart.threads.max | 5368709120 | No | Controls the multipart copy thread pool used when a single object is copied in parts: corePoolSize and maximumPoolSize are set to half of this value, and the BlockingQueue capacity is 1024. |
| fs.obs.getcanonicalservicename.enable | FALSE | No | Controls the return value of the getCanonicalServiceName() API. |
| fs.obs.multipart.purge | FALSE | No | Whether to clear multipart upload tasks in a bucket when OBSFileSystem is initialized. |
| fs.obs.multipart.purge.age | 86400 | No | Age beyond which multipart upload tasks in a bucket are cleared when OBSFileSystem is initialized. |
| fs.obs.trash.enable | FALSE | No | Whether to enable the trash feature. |
| fs.obs.trash.dir | N/A | No | Directory for storing deleted files. |
| fs.obs.block.size | 134217728 | No | Block size. |
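To show how several of the parameters above combine in practice, the core-site.xml fragment below sketches a write-throughput tuning for a host with fast local disks. The values and cache-directory paths are illustrative assumptions, not recommendations; size them to your workload and hardware:

```xml
<!-- Example only: tune multipart uploads and the write cache. -->
<property>
  <name>fs.obs.multipart.size</name>
  <value>104857600</value><!-- 100 MB upload parts (the default) -->
</property>
<property>
  <name>fs.obs.threads.max</name>
  <value>40</value><!-- larger read/write thread pool than the default 20 -->
</property>
<property>
  <name>fs.obs.fast.upload.buffer</name>
  <value>disk</value><!-- cache written data on disk before upload -->
</property>
<property>
  <name>fs.obs.buffer.dir</name>
  <value>/data1/obs-cache,/data2/obs-cache</value><!-- hypothetical comma-separated cache dirs -->
</property>
```

With disk buffering, each open output stream may use up to fs.obs.fast.upload.active.blocks x fs.obs.multipart.size of cache space, so the cache directories must have enough free capacity for the expected number of concurrent writers.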