A Spark Job Fails to Run Due to Incorrect JAR File Import
Symptom
A Spark job fails to run.
Cause Analysis
An incorrect JAR file was imported when the Spark job was submitted. As a result, the job fails to run.
Procedure
- Log in to any Master node.
- Run the cd /opt/Bigdata/MRS_*/install/FusionInsight-Spark-*/spark/examples/jars command to view the JAR files of the sample programs.
A JAR file name can contain a maximum of 1023 characters and cannot include the special characters ;|&><,'$. In addition, it cannot be blank or consist only of spaces.
- Check the executable program in the OBS bucket. Executable programs can be stored in HDFS or OBS, and the path format depends on the file system.
- OBS storage path: starts with obs://, for example, obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar.
- HDFS storage path: starts with /user. A Spark Script program must end with .sql, and an MR or Spark program must end with .jar. The extensions .sql and .jar are case-insensitive.
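The naming and path rules above can be sketched as a small shell check. This is an illustrative helper, not part of the MRS tooling: the function names and sample paths are assumptions for demonstration.

```shell
#!/bin/sh
# Sketch of the JAR-name and program-path rules described above.
# Function names and sample paths are illustrative assumptions.

# Validate a JAR file name: at most 1023 characters, none of the
# special characters ;|&><,'$, and not blank or only spaces.
check_jar_name() {
  name="$1"
  if [ -z "$(printf '%s' "$name" | tr -d ' ')" ]; then
    echo "invalid: name is blank or consists only of spaces"; return 1
  fi
  if [ "${#name}" -gt 1023 ]; then
    echo "invalid: name exceeds 1023 characters"; return 1
  fi
  if printf '%s' "$name" | grep -q "[;|&><,'\$]"; then
    echo "invalid: name contains a special character"; return 1
  fi
  echo "valid JAR name: $name"
}

# Validate a program path: OBS paths start with obs://; HDFS paths
# start with /user and must end with .jar (MR/Spark) or .sql
# (Spark Script), case-insensitively.
check_program_path() {
  path="$1"
  lower=$(printf '%s' "$path" | tr '[:upper:]' '[:lower:]')
  case "$path" in
    obs://*) echo "OBS path: $path" ;;
    /user*)
      case "$lower" in
        *.jar|*.sql) echo "HDFS path: $path" ;;
        *) echo "invalid: HDFS program must end with .jar or .sql"; return 1 ;;
      esac ;;
    *) echo "invalid: path must start with obs:// or /user"; return 1 ;;
  esac
}

check_jar_name "hadoop-mapreduce-examples-2.7.x.jar"
check_program_path "obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar"
```

Running a submitted path through such a check before creating the job makes it easier to spot the incorrect JAR import described in the cause analysis.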