Installing Spark
Prerequisites
A Linux server with access to the public network is available. The recommended node specifications are 4 vCPUs and 8 GiB of memory or higher.
Configuring the JDK
This section uses CentOS as an example to describe how to install JDK 1.8.
- Obtain the available version.
yum -y list java*
- Install JDK 1.8.
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
- Check the version after the installation.
# java -version
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (build 25.382-b05, mixed mode)
- Add environment variables.
- Linux environment variables are configured in the /etc/profile file.
vim /etc/profile
- In the editing mode, add the following content to the end of the file (the JDK directory name varies by patch release; verify it with the check after this list):
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.382.b05-1.el7_9.x86_64
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
- Save and close the profile file. Run the following command for the modification to take effect:
source /etc/profile
- Check the JDK environment variables.
echo $JAVA_HOME
echo $PATH
echo $CLASSPATH
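The exact OpenJDK directory name under /usr/lib/jvm depends on the installed patch release, so the JAVA_HOME value above may differ on your system. A quick check (a sketch; adjust the glob if needed):
# List the installed OpenJDK 1.8 directories to confirm the JAVA_HOME path
ls -d /usr/lib/jvm/java-1.8.0-openjdk*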
Obtaining the Spark Package
The OBS connector supports Hadoop 2.8.3 and 3.1.1. Hadoop 3.1.1 is used in this example.
- Download Spark v3.1.3. If Git is not installed, run yum install git to install it.
git clone -b v3.1.3 https://github.com/apache/spark.git
- Modify the ./spark/dev/make-distribution.sh file and specify the Spark version so that the version check can be skipped during compilation.
- Search for the line where VERSION resides and check the number of the line where the version number is located.
grep -n -A18 '^VERSION=' ./spark/dev/make-distribution.sh
- Comment out the content displayed from lines 129 to 147 and specify the version.
sed -i '129,147s/^/#/g' ./spark/dev/make-distribution.sh
sed -i '148a VERSION=3.1.3\nSCALA_VERSION=2.12\nSPARK_HADOOP_VERSION=3.1.1\nSPARK_HIVE=1' ./spark/dev/make-distribution.sh
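To confirm that the modification took effect, re-run the search; the output should now show the hardcoded values (a quick sanity check):
grep -n -A3 '^VERSION=' ./spark/dev/make-distribution.sh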
- Download Maven, which is required for compilation.
wget https://archive.apache.org/dist//maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
tar -zxvf apache-maven-3.6.3-bin.tar.gz && mv apache-maven-3.6.3 ./spark/build
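Optionally verify that Maven landed where the build expects it (a sketch, assuming the directory layout created by the commands above):
# Print the Maven version from the copy placed under ./spark/build
./spark/build/apache-maven-3.6.3/bin/mvn -version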
- Run the following command to perform compilation:
./spark/dev/make-distribution.sh --name hadoop3.1 --tgz -Pkubernetes -Pyarn -Dhadoop.version=3.1.1
- Wait for the compilation to complete. The generated software package, named spark-3.1.3-bin-hadoop3.1.tgz, is placed in the ./spark directory.
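Before moving on, you can confirm that the package was generated (a quick check):
ls -lh ./spark/spark-3.1.3-bin-hadoop3.1.tgz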
Configuring the Runtime Environment for Spark
To simplify operations, use the root user and place the compiled package spark-3.1.3-bin-hadoop3.1.tgz in the /root directory of the node where you perform these operations.
- Move the software package to the /root directory.
mv ./spark/spark-3.1.3-bin-hadoop3.1.tgz /root
- Run the following command to install Spark:
tar -zxvf spark-3.1.3-bin-hadoop3.1.tgz
mv spark-3.1.3-bin-hadoop3.1 spark-obs
cat >> ~/.bashrc <<EOF
PATH=/root/spark-obs/bin:\$PATH
PATH=/root/spark-obs/sbin:\$PATH
export SPARK_HOME=/root/spark-obs
EOF
source ~/.bashrc
- Run the following command to check the Spark version (the spark-submit binary is now on the PATH):
spark-submit --version
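Optionally, run the bundled SparkPi example in local mode to verify that the installation works end to end (a sketch; the examples jar name assumes the Scala 2.12 / Spark 3.1.3 build produced above):
# Run SparkPi locally on 2 cores; the output includes an approximation of Pi
spark-submit --master local[2] \
  --class org.apache.spark.examples.SparkPi \
  /root/spark-obs/examples/jars/spark-examples_2.12-3.1.3.jar 100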
Interconnecting Spark with OBS
- Obtain the Huawei Cloud OBS JAR package. This example uses hadoop-huaweicloud-3.1.1-hw-45.jar, which can be obtained from https://github.com/huaweicloud/obsa-hdfs/tree/master/release.
wget https://github.com/huaweicloud/obsa-hdfs/releases/download/v45/hadoop-huaweicloud-3.1.1-hw-45.jar
- Copy the package to the jars directory of Spark.
cp hadoop-huaweicloud-3.1.1-hw-45.jar /root/spark-obs/jars/
- Modify the Spark configuration items. To interconnect Spark with OBS, add the following configuration items for Spark:
- Obtain the AK/SK. For details, see Access Keys.
- Change the values of AK_OF_YOUR_ACCOUNT, SK_OF_YOUR_ACCOUNT, and OBS_ENDPOINT to the actual values.
- AK_OF_YOUR_ACCOUNT: indicates the AK obtained in the previous step.
- SK_OF_YOUR_ACCOUNT: indicates the SK obtained in the previous step.
- OBS_ENDPOINT: indicates the OBS endpoint. It can be obtained in Regions and Endpoints.
cp ~/spark-obs/conf/spark-defaults.conf.template ~/spark-obs/conf/spark-defaults.conf
cat >> ~/spark-obs/conf/spark-defaults.conf <<EOF
spark.hadoop.fs.obs.readahead.inputstream.enabled=true
spark.hadoop.fs.obs.buffer.max.range=6291456
spark.hadoop.fs.obs.buffer.part.size=2097152
spark.hadoop.fs.obs.threads.read.core=500
spark.hadoop.fs.obs.threads.read.max=1000
spark.hadoop.fs.obs.write.buffer.size=8192
spark.hadoop.fs.obs.read.buffer.size=8192
spark.hadoop.fs.obs.connection.maximum=1000
spark.hadoop.fs.obs.access.key=AK_OF_YOUR_ACCOUNT
spark.hadoop.fs.obs.secret.key=SK_OF_YOUR_ACCOUNT
spark.hadoop.fs.obs.endpoint=OBS_ENDPOINT
spark.hadoop.fs.obs.buffer.dir=/root/hadoop-obs/obs-cache
spark.hadoop.fs.obs.impl=org.apache.hadoop.fs.obs.OBSFileSystem
spark.hadoop.fs.obs.connection.ssl.enabled=false
spark.hadoop.fs.obs.fast.upload=true
spark.hadoop.fs.obs.socket.send.buffer=65536
spark.hadoop.fs.obs.socket.recv.buffer=65536
spark.hadoop.fs.obs.max.total.tasks=20
spark.hadoop.fs.obs.threads.max=20
spark.kubernetes.container.image.pullSecrets=default-secret
EOF
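Note that spark.hadoop.fs.obs.buffer.dir above points to a local directory used for buffering uploads. If the directory does not exist yet, create it before submitting jobs (a minimal sketch matching the path configured above):
# Create the local write-buffer directory referenced by fs.obs.buffer.dir
mkdir -p /root/hadoop-obs/obs-cache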
Pushing an Image to SWR
To run Spark tasks in Kubernetes, build a Spark container image of the same version and upload it to SWR. A Dockerfile was generated during compilation. Use this file to create an image and push it to SWR.
- Create an image.
cd ~/spark-obs
docker build -t spark:3.1.3-obs --build-arg spark_uid=0 -f kubernetes/dockerfiles/spark/Dockerfile .
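You can confirm that the image was built successfully (a quick check):
docker images | grep spark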
- Upload the image.
- (Optional) Log in to the SWR console, choose Organizations in the navigation pane, and click Create Organization in the upper right corner of the page.
Skip this step if you already have an organization.
- Choose My Images in the navigation pane and click Upload Through Client. On the page displayed, click Generate a temporary login command and copy the command.
- Run the login command copied in the previous step on the node. If the login is successful, the message "Login Succeeded" is displayed.
- Log in to the node where the image was created and run the following commands to tag and push the image:
docker tag {Image name}:{Version name} swr.ap-southeast-1.myhuaweicloud.com/{Organization name}/{Image name}:{Version name}
docker push swr.ap-southeast-1.myhuaweicloud.com/{Organization name}/{Image name}:{Version name}
Record the image access address for later use.
For example, record the image address as swr.ap-southeast-1.myhuaweicloud.com/dev-container/spark:3.1.3-obs.
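With the image built earlier and the example address above, the tag and push commands would look like this (a sketch; dev-container is a placeholder organization name):
docker tag spark:3.1.3-obs swr.ap-southeast-1.myhuaweicloud.com/dev-container/spark:3.1.3-obs
docker push swr.ap-southeast-1.myhuaweicloud.com/dev-container/spark:3.1.3-obs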
Configuring Spark History Server
- Modify the ~/spark-obs/conf/spark-defaults.conf file, enable Spark event logging, and configure the OBS bucket name and directory.
cat >> ~/spark-obs/conf/spark-defaults.conf <<EOF
spark.eventLog.enabled=true
spark.eventLog.dir=obs://{bucket-name}/{log-dir}/
EOF
- spark.eventLog.enabled: set this to true to enable Spark event logging.
- spark.eventLog.dir: the OBS path for storing event logs, in the format obs://{bucket-name}/{log-dir}/, for example, obs://spark-sh1/history-obs/. Ensure that the OBS bucket name and directory are correct.
- Modify the ~/spark-obs/conf/spark-env.sh file. If the file does not exist, run the following command to create it from the template:
cp ~/spark-obs/conf/spark-env.sh.template ~/spark-obs/conf/spark-env.sh
cat >> ~/spark-obs/conf/spark-env.sh <<EOF
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=obs://{bucket-name}/{log-dir}/"
EOF
The OBS address must be the same as the one configured in spark-defaults.conf in the previous step.
- Start Spark History Server.
start-history-server.sh
Information similar to the following is displayed:
starting org.apache.spark.deploy.history.HistoryServer, logging to /root/spark-obs/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-spark-sh1.out
- Access the server through port 18080 on the node.
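If no browser is available on the node, a local request can confirm that the server is listening (assuming curl is installed):
curl -sI http://localhost:18080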
To stop the server, run the following command:
stop-history-server.sh