Updated on 2022-09-14 GMT+08:00

A Spark Job Fails to Run Due to Incorrect JAR File Import

Issue

A Spark job fails to run.

Symptom

A Spark job fails to run.

Cause Analysis

An incorrect JAR file is imported when the Spark job is submitted. As a result, the job fails to run.

Procedure

  1. Log in to any Master node.
  2. Run the cd /opt/Bigdata/MRS_*/install/FusionInsight-Spark-*/spark/examples/jars command to go to the directory that contains the sample program JAR files, and check the file names.

    A JAR file name contains a maximum of 1023 characters and cannot contain the special characters ;|&>,<'$. In addition, it cannot be empty or consist only of spaces.

  3. Check the path of the executable program. Executable programs can be stored in HDFS or OBS, and the path format varies according to the file system.

    • OBS storage path: starts with obs://, for example, obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar.
    • HDFS storage path: starts with /user. For Spark Script, the file name must end with .sql; for MapReduce and Spark, it must end with .jar. The .sql and .jar extensions are case-insensitive.
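The file-name constraints in step 2 can be checked before submitting a job. Below is a minimal sketch; the function name and sample file names are illustrative only, not part of MRS.

```shell
#!/bin/bash
# Sketch: validate a JAR file name against the documented constraints:
# at most 1023 characters, no special characters ;|&>,<'$,
# and not empty or consisting only of spaces.
validate_jar_name() {
  local name="$1"
  # Must not be empty or consist only of spaces
  if [[ -z "${name// /}" ]]; then
    echo "invalid: empty or all spaces"; return 1
  fi
  # Must contain a maximum of 1023 characters
  if (( ${#name} > 1023 )); then
    echo "invalid: longer than 1023 characters"; return 1
  fi
  # Must not contain the special characters ; | & > , < ' $
  if printf '%s' "$name" | grep -q "[;|&>,<'\$]"; then
    echo "invalid: contains a forbidden character"; return 1
  fi
  echo "valid"
}

# Illustrative file names
validate_jar_name "spark-examples_2.11-2.x.x.jar"
validate_jar_name "bad;name.jar"
```

The function returns a nonzero status for an invalid name, so it can also be used directly in an `if` test before uploading a file.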
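The path rules in step 3 can likewise be sketched as a small check: classify the path by its prefix (obs:// for OBS, /user for HDFS) and verify the extension expected for the job type. The function name, job-type labels, and sample paths are assumptions for illustration.

```shell
#!/bin/bash
# Sketch: classify a program path as OBS or HDFS and verify its extension.
# job_type is one of: spark_script | mr | spark (illustrative labels).
check_program_path() {
  local path="$1" job_type="$2"
  case "$path" in
    obs://*)  echo "OBS path" ;;                      # OBS storage path
    /user/*)  echo "HDFS path" ;;                     # HDFS storage path
    *)        echo "unrecognized path"; return 1 ;;
  esac
  # The .sql and .jar extensions are case-insensitive, so compare in lowercase
  local lower="${path,,}"
  case "$job_type" in
    spark_script) [[ "$lower" == *.sql ]] || { echo "expected .sql"; return 1; } ;;
    mr|spark)     [[ "$lower" == *.jar ]] || { echo "expected .jar"; return 1; } ;;
  esac
  echo "extension ok"
}

# Illustrative checks
check_program_path "obs://wordcount/program/hadoop-mapreduce-examples-2.7.x.jar" mr
check_program_path "/user/scripts/query.SQL" spark_script
```

Both sample calls succeed: the first is recognized as an OBS path with a .jar extension, the second as an HDFS path whose .SQL extension matches case-insensitively.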