
What Should I Do If the "had a not serializable result" Error Is Reported When a Spark Task Reads HBase Data?

Updated on 2022-09-14 GMT+08:00

Question

What should I do if the error "Task 0.0 in stage 0.0 (TID 0) had a not serializable result: org.apache.hadoop.hbase.io.ImmutableBytesWritable" is reported when a Spark task reads HBase data?

Answer

You can resolve this exception by using either of the following methods:

  • Run the following lines of code before initializing SparkConf:
    System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    System.setProperty("spark.kryo.registrator", "com.huawei.bigdata.spark.examples.MyRegistrator");
  • Use the set method on the SparkConf object. The code is as follows:
    val conf = new SparkConf().setAppName("HbaseTest");
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    conf.set("spark.kryo.registrator", "com.huawei.bigdata.spark.examples.MyRegistrator");
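Both methods above reference a custom registrator class, com.huawei.bigdata.spark.examples.MyRegistrator, whose implementation is not shown here. A minimal sketch of what such a registrator could look like, assuming it only needs to register the HBase classes that appear in task results (the exact class list is an assumption, not taken from the source):

```scala
package com.huawei.bigdata.spark.examples

import com.esotericsoftware.kryo.Kryo
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical sketch: tell Kryo about the HBase classes that a Spark task
// returns when reading HBase, so they can be serialized across the cluster.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // ImmutableBytesWritable is the row-key type named in the error message.
    kryo.register(classOf[ImmutableBytesWritable])
    // Result is the value type typically returned by newAPIHadoopRDD for HBase.
    kryo.register(classOf[Result])
  }
}
```

Registering the classes explicitly lets Kryo serialize them without requiring java.io.Serializable, which is why the "had a not serializable result" error disappears once the Kryo serializer and registrator are configured.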