Updated on 2022-11-18 GMT+08:00

Error Code 139 Reported When Python Pipeline Runs in the ARM Environment

Question

Error code 139 is returned when a Python pipeline runs on a TaiShan (ARM) server. The error information is as follows:

subprocess exited with status 139

Answer

The Python process loads both libcrypto.so and libssl.so. When the Hadoop native library directory is added to LD_LIBRARY_PATH, libcrypto.so is loaded from the Hadoop native directory, while libssl.so is still taken from the system (the Hadoop native directory does not ship this library). The two libraries therefore come from different OpenSSL builds, and the version mismatch causes a segmentation fault (exit code 139) while the Python program runs.
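To see the mismatch for yourself, two quick checks can help. This is a sketch: the Hadoop native directory path in the comment is illustrative and varies by client installation.

```shell
# Sketch: two quick checks before clearing the library paths.
# 1) The OpenSSL build that Python's ssl module reports:
python3 -c 'import ssl; print(ssl.OPENSSL_VERSION)'

# 2) What the Hadoop native dir actually ships (path is illustrative):
# ls /opt/client/HDFS/hadoop/lib/native | grep -E 'libcrypto|libssl'
# libcrypto.so is typically present there while libssl.so is not,
# which is why the two libraries end up coming from different builds.
```

If the version printed in step 1 differs from the libcrypto.so shipped in the native directory, the mismatch described above is in play.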

Solution

Solution 1:

Modify the spark-defaults.conf file in the conf directory of the Spark2x client. Clear the values of spark.driver.extraLibraryPath, spark.yarn.cluster.driver.extraLibraryPath, and spark.executor.extraLibraryPath.
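For reference, the cleared entries in spark-defaults.conf would look like the following. This is a sketch of the properties format; leaving each key with no value (or removing the lines entirely) has the same effect, as long as no library path is set:

```properties
# The three properties left with empty values (no library path):
spark.driver.extraLibraryPath
spark.yarn.cluster.driver.extraLibraryPath
spark.executor.extraLibraryPath
```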

Solution 2:

On the Spark2x page of FusionInsight Manager, clear the preceding three parameters, restart the Spark2x instances, and download the client again. The procedure is as follows:

  1. Log in to FusionInsight Manager, choose Cluster > Name of the desired cluster > Services > Spark2x > Configurations > All Configurations, search for the spark.driver.extraLibraryPath and spark.executor.extraLibraryPath parameters, and clear their values.
  2. Choose All Configurations > SparkResource2x. In the custom area, add the three parameters listed in Solution 1.

  3. Click Save. Restart the Spark2x instances whose configurations have expired, and then download and install the client again.
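After the client is reinstalled, the change can be verified with a check like the one below. The real configuration path is site-specific, so a sample file stands in here to keep the sketch self-contained:

```shell
# Sketch: a sanity check on the freshly downloaded client's
# spark-defaults.conf. The real path is site-specific; a sample
# file stands in so the check itself is easy to see.
conf=$(mktemp)
cat > "$conf" <<'EOF'
spark.driver.extraLibraryPath
spark.yarn.cluster.driver.extraLibraryPath
spark.executor.extraLibraryPath
EOF

# Any extraLibraryPath line that still carries a value (a second
# field) means the native Hadoop library path was not cleared.
bad=$(awk '/extraLibraryPath/ && NF > 1' "$conf")
if [ -z "$bad" ]; then
  echo "extraLibraryPath values are cleared"
else
  echo "still set:"
  echo "$bad"
fi
rm -f "$conf"
```

Running the same awk check against the actual spark-defaults.conf in the client's conf directory confirms that no library path remains set.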