A Spark Job Fails to Run Due to Incorrect JAR File Import
Symptom
A Spark job fails to run after it is submitted.
Cause Analysis
The JAR file specified when the Spark job was submitted is incorrect (for example, a wrong name, path, or file type). As a result, the Spark job fails to run.
Procedure
- Log in to any Master node.
- Run the cd /opt/Bigdata/MRS_*/install/FusionInsight-Spark-*/spark/examples/jars command to view the JAR file of the sample program.
A JAR file name contains a maximum of 1023 characters and cannot include the special characters ;|&>,<'$. In addition, it cannot be empty or consist only of spaces.
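The naming rules above can be checked before submitting a job. The following is a minimal sketch; check_jar_name is a hypothetical helper written for illustration, not an MRS tool.

```shell
# Hypothetical check: returns 0 if the JAR file name satisfies the
# documented constraints, 1 otherwise.
check_jar_name() {
  name="$1"
  # Reject names that are empty or consist only of spaces.
  [ -n "$(printf '%s' "$name" | tr -d ' ')" ] || return 1
  # Reject names longer than 1023 characters.
  [ "${#name}" -le 1023 ] || return 1
  # Reject the forbidden special characters ; | & > , < ' $
  printf '%s' "$name" | grep -q "[;|&><,'\$]" && return 1
  return 0
}
```

For example, check_jar_name "spark-examples.jar" succeeds, while a name such as "bad;name.jar" is rejected because it contains a semicolon.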
- Check the path of the executable programs in HDFS or the OBS bucket. The path may vary depending on the file system.
- OBS storage path: starts with obs://, for example, obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar.
- HDFS storage path: starts with /user. A Spark Script file must end with .sql; MapReduce and Spark JAR files must end with .jar. The .sql and .jar extensions are case-insensitive.
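The path rules above can be expressed as a quick pre-submission check. This is a sketch under the stated rules; check_program_path is a hypothetical helper, and it applies the extension rule to both OBS and HDFS paths.

```shell
# Hypothetical check: returns 0 if the executable-program path matches
# the documented prefix and extension rules, 1 otherwise.
check_program_path() {
  path="$1"
  # The path must be an OBS path (obs://...) or an HDFS path (/user...).
  case "$path" in
    obs://*|/user*) ;;
    *) return 1 ;;
  esac
  # Compare the extension case-insensitively (.sql and .jar are
  # case-insensitive per the rule above).
  lower=$(printf '%s' "$path" | tr 'A-Z' 'a-z')
  case "$lower" in
    *.jar|*.sql) return 0 ;;  # .jar for MapReduce/Spark, .sql for Spark Script
    *) return 1 ;;
  esac
}
```

For example, obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar passes, while a JAR stored under /tmp is rejected because an HDFS path must start with /user.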