Error Code 139 Reported When Python Pipeline Runs in the ARM Environment
Question
When a Python plug-in pipeline runs on a TaiShan (Arm) server, the job fails with error code 139. The error information is as follows:
subprocess exited with status 139
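For context, an exit status of 139 means the subprocess was killed by a signal: 139 = 128 + 11, and signal 11 is SIGSEGV (segmentation fault). The standard library can decode this:

```python
import signal

status = 139  # subprocess exit status reported by the pipeline
# Statuses above 128 conventionally mean "terminated by signal (status - 128)".
print(signal.Signals(status - 128).name)  # → SIGSEGV
```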
Answer
The Python program loads both libcrypto.so and libssl.so. If the Hadoop native library directory is added to LD_LIBRARY_PATH, the dynamic linker picks up libcrypto.so from the Hadoop native library but libssl.so from the operating system (the Hadoop native directory does not contain that library). The two libraries come from different OpenSSL versions and are incompatible, so the Python process crashes with a segmentation fault while running.
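A minimal diagnostic sketch, assuming a Linux /proc filesystem: importing ssl forces the interpreter to load libssl and libcrypto, and printing their mapped paths shows which copies were actually picked up. If LD_LIBRARY_PATH points into the Hadoop native directory, the mixed origin of the two libraries becomes visible here.

```python
import ssl  # noqa: F401 - imported only to force OpenSSL libraries to load

def loaded_openssl_libs():
    """Return the file paths of every libssl/libcrypto mapped into this process."""
    libs = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            path = line.split()[-1]  # last field is the mapped file, if any
            if "libssl" in path or "libcrypto" in path:
                libs.add(path)
    return sorted(libs)

for path in loaded_openssl_libs():
    print(path)
```

(If the two printed directories differ, the process is mixing OpenSSL builds.)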
Solution
Solution 1:
Modify the spark-defaults.conf file in the conf directory on the Spark2x client: clear the values of spark.driver.extraLibraryPath, spark.yarn.cluster.driver.extraLibraryPath, and spark.executor.extraLibraryPath.
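After the edit, the three entries would look like this (a sketch; an entry whose value is empty is treated as cleared):

```
spark.driver.extraLibraryPath=
spark.yarn.cluster.driver.extraLibraryPath=
spark.executor.extraLibraryPath=
```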
Solution 2:
On the Spark2x page of FusionInsight Manager, modify the preceding three parameters, restart the Spark2x instances, and then download the client again. The procedure is as follows:
- Log in to FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark2x > Configurations > All Configurations, search for the spark.driver.extraLibraryPath and spark.executor.extraLibraryPath parameters, and clear their values.
- Choose All Configurations > SparkResource2x and, in the custom area, add the three parameters from Solution 1 with empty values.
- Click Save, restart the Spark2x instances whose configuration has expired, and then download and install the client again.
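To double-check the re-downloaded client, the configuration file can be scanned for leftover values. A minimal sketch, where the sample text is illustrative (point the function at the contents of your client's spark-defaults.conf instead):

```python
import re

def uncleared_library_paths(conf_text):
    """Return the watched parameters that still carry a non-empty value."""
    watched = {
        "spark.driver.extraLibraryPath",
        "spark.yarn.cluster.driver.extraLibraryPath",
        "spark.executor.extraLibraryPath",
    }
    leftovers = []
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # spark-defaults.conf separates key and value with whitespace or '='
        tokens = re.split(r"[=\s]+", line, maxsplit=1)
        key = tokens[0]
        value = tokens[1] if len(tokens) > 1 else ""
        if key in watched and value:
            leftovers.append(key)
    return leftovers

# Illustrative sample: one parameter was cleared, one is still set.
sample = """\
spark.driver.extraLibraryPath=
spark.executor.extraLibraryPath=/opt/hadoop/lib/native
"""
print(uncleared_library_paths(sample))  # → ['spark.executor.extraLibraryPath']
```

An empty result means all three parameters are cleared and the pipeline should no longer pick up the mismatched libcrypto.so.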