Message "Can't get the Kerberos realm" Is Displayed in Yarn-cluster Mode
Symptom
A Spark task fails to be submitted due to an authentication failure.
Cause Analysis
- According to the exception printed in the driver log, the token used to connect to HDFS cannot be found.
16/03/22 20:37:10 WARN Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 192 for admin) can't be found in cache
16/03/22 20:37:10 WARN Client: Failed to cleanup staging dir .sparkStaging/application_1458558192236_0003
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 192 for admin) can't be found in cache
- The native Yarn web UI shows that the ApplicationMaster fails to start twice and the task exits.
Figure 1 ApplicationMaster start failure
- The ApplicationMaster log shows the following error information:
Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: org.apache.spark.SparkException: Unable to load YARN support
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm
Caused by: java.lang.reflect.InvocationTargetException
Caused by: KrbException: Cannot locate default realm
Caused by: KrbException: Generic error (description in e-text) (60) - Unable to locate Kerberos realm
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 86 more
Caused by: javax.jdo.JDOFatalInternalException: Unexpected exception caught.
NestedThrowables: java.lang.reflect.InvocationTargetException
... 110 more
- When you execute ./spark-submit --class yourclassname --master yarn-cluster /yourdependencyjars to submit a task in Yarn-cluster mode, the driver runs inside the cluster. The spark.driver.extraJavaOptions value from the client's spark-defaults.conf is shipped with the task, but the kdc.conf file it points to does not exist at that path on the cluster node. The driver therefore cannot obtain the information required for Kerberos authentication, and the ApplicationMaster fails to start.
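To confirm this cause, you can check whether the Kerberos configuration path baked into the client-side defaults actually exists on the node where the driver runs. The following is a minimal diagnostic sketch; the spark-defaults.conf path and the fallback sample line are illustrative assumptions based on the examples on this page:

```shell
#!/bin/sh
# Sketch: extract the -Djava.security.krb5.conf path from a client-side
# spark-defaults.conf and check whether that path exists on this node.
# CONF_FILE and the fallback sample line are illustrative assumptions.
CONF_FILE=${CONF_FILE:-/opt/client/Spark/spark/conf/spark-defaults.conf}

if [ -r "$CONF_FILE" ]; then
  opts=$(grep '^spark.driver.extraJavaOptions' "$CONF_FILE")
else
  # Fall back to a sample line so the check can be demonstrated anywhere.
  opts='spark.driver.extraJavaOptions -Djava.security.krb5.conf=/opt/client/KrbClient/kerberos/var/krb5kdc/krb5.conf'
fi

# Pull out the krb5.conf path the driver JVM would try to read.
krb5=$(printf '%s\n' "$opts" | sed -n 's/.*-Djava.security.krb5.conf=\([^ ]*\).*/\1/p')
echo "driver expects krb5.conf at: $krb5"
[ -r "$krb5" ] || echo "note: $krb5 does not exist on this node; the ApplicationMaster would fail here"
```

If the reported path is missing on the cluster node, the "Cannot locate default realm" error above is expected.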
Solution
When submitting a task on the client, specify the spark.driver.extraJavaOptions parameter on the command line with --conf. This way, the spark.driver.extraJavaOptions value in the spark-defaults.conf file is not loaded automatically from the client path. Start the Spark task as follows (note that the quotation mark after spark.driver.extraJavaOptions= is mandatory):
./spark-submit --class yourclassname --master yarn-cluster --conf spark.driver.extraJavaOptions="
-Dlog4j.configuration=file:/opt/client/Spark/spark/conf/log4j.properties -Djetty.version=x.y.z -Dzookeeper.server.principal=zookeeper/hadoop.794bbab6_9505_44cc_8515_b4eddc84e6c1.com -Djava.security.krb5.conf=/opt/client/KrbClient/kerberos/var/krb5kdc/krb5.conf -Djava.security.auth.login.config=/opt/client/Spark/spark/conf/jaas.conf -Dorg.xerial.snappy.tempdir=/opt/client/Spark/tmp -Dcarbon.properties.filepath=/opt/client/Spark/spark/conf/carbon.properties" ../yourdependencyjars
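Rather than typing the long option string inline each time, you can assemble it in a small wrapper script. This is a minimal sketch assuming the example client paths from the command above; adjust them to your own installation:

```shell
#!/bin/sh
# Sketch: assemble the driver JVM options explicitly instead of relying on
# the client-side spark-defaults.conf. Paths are the examples from this
# page and are assumptions; override via environment variables.
KRB5_CONF=${KRB5_CONF:-/opt/client/KrbClient/kerberos/var/krb5kdc/krb5.conf}
JAAS_CONF=${JAAS_CONF:-/opt/client/Spark/spark/conf/jaas.conf}

EXTRA_OPTS="-Djava.security.krb5.conf=$KRB5_CONF -Djava.security.auth.login.config=$JAAS_CONF"

# Warn (do not fail) if a file is absent on the node running this check.
for f in "$KRB5_CONF" "$JAAS_CONF"; do
  [ -r "$f" ] || echo "warning: $f not readable here" >&2
done

echo "$EXTRA_OPTS"
# The submission command then becomes:
#   ./spark-submit --class yourclassname --master yarn-cluster \
#     --conf spark.driver.extraJavaOptions="$EXTRA_OPTS" ../yourdependencyjars
```

Quoting the whole value passed to --conf keeps the embedded spaces between the -D options intact.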