Updated on 2023-11-30 GMT+08:00

A Spark Job Fails to Run Due to Incorrect JAR File Import

Symptom

A Spark job fails to be executed.

Cause Analysis

The JAR file imported when the Spark job was submitted is incorrect. As a result, the Spark job fails to be executed.

Procedure

  1. Log in to any Master node.
  2. Run the cd /opt/Bigdata/MRS_*/install/FusionInsight-Spark-*/spark/examples/jars command to view the JAR file of the sample program.

    A JAR file name can contain a maximum of 1023 characters and cannot include the special characters (;|&>,<'$). In addition, it cannot be empty or consist only of spaces.

  3. Check the path of the executable program in HDFS or the OBS bucket. The path format varies depending on the file system.

    • OBS storage path: starts with obs://, for example, obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar.
    • HDFS storage path: starts with /user. A Spark Script file must end with .sql, while MapReduce and Spark program files must end with .jar. The .sql and .jar extensions are case-insensitive.
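
    The file-name and path rules above can be sketched as a quick pre-submission check. This is a minimal illustration only; the helper function names are hypothetical and not part of MRS or Spark:

    ```python
    import re

    # Hypothetical helper reflecting the JAR file-name rules above:
    # at most 1023 characters, none of the special characters ;|&>,<'$,
    # and not empty or made up only of spaces.
    def is_valid_jar_name(name: str) -> bool:
        if not name or name.strip() == "":
            return False
        if len(name) > 1023:
            return False
        if re.search(r"[;|&><,'$]", name):
            return False
        return True

    # Hypothetical helper reflecting the storage-path rules above:
    # OBS paths start with obs://, HDFS paths start with /user, and
    # the file must end with .sql (Spark Script) or .jar (MapReduce/
    # Spark), with the extension matched case-insensitively.
    def is_valid_program_path(path: str) -> bool:
        if not (path.startswith("obs://") or path.startswith("/user")):
            return False
        return path.lower().endswith((".sql", ".jar"))

    # Example from this article: an OBS path to a sample JAR file.
    print(is_valid_jar_name("hadoop-mapreduce-examples-2.7.x.jar"))                        # True
    print(is_valid_program_path("obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar"))  # True
    print(is_valid_jar_name("bad;name.jar"))                                               # False
    ```

    Running such a check before submitting the job surfaces an invalid file name or path immediately, instead of letting the Spark job fail at execution time.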