
What Should I Do If the "had a not serializable result" Error Is Reported When a Spark Task Reads HBase Data?

Question

What should I do if the error "Task 0.0 in stage 0.0 (TID 0) had a not serializable result: org.apache.hadoop.hbase.io.ImmutableBytesWritable" is reported when a Spark task reads HBase data?

Answer

This error occurs because org.apache.hadoop.hbase.io.ImmutableBytesWritable does not implement java.io.Serializable, so Spark's default Java serializer cannot serialize task results that contain it. Switch to the Kryo serializer and register the HBase classes through a custom registrator. You can do this in either of the following ways (a sketch of the registrator class follows the list):

  • Run the following lines of code before initializing SparkConf:
    System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    System.setProperty("spark.kryo.registrator", "com.huawei.bigdata.spark.examples.MyRegistrator");
  • Set the parameters on the SparkConf object by using its set method. The code is as follows:
    val conf = new SparkConf().setAppName("HbaseTest");
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    conf.set("spark.kryo.registrator", "com.huawei.bigdata.spark.examples.MyRegistrator");