Updated on 2024-08-10 GMT+08:00

Preparing a Local Application Development Environment

Table 1 describes the environments required for developing and running an application.

Table 1 Environment requirements

OS

  • Development environment: Windows 7 or later.
  • Running environment: Windows or Linux.

    If the program needs to be debugged locally, the running environment must be able to communicate with the cluster service plane.

JDK installation

Basic configuration for the development and running environments. The version requirements are as follows:

The MRS server and client support only the built-in OpenJDK 1.8.0_272; no other JDK can be used with them.

For customer applications that reference the SDK JAR packages and run in their own process, the JDK requirements are as follows:

  • x86 client: Oracle JDK 1.8 or IBM JDK 1.8.5.11.
  • TaiShan client: OpenJDK 1.8.0_272.
NOTE:

For security purposes, the server supports only TLS V1.2 or later.

However, the IBM JDK supports only TLS V1.0 by default. If you are using an IBM JDK, set com.ibm.jsse2.overrideDefaultTLS to true so that TLS V1.0, V1.1, and V1.2 are all supported (see the example after this table). For details, see https://www.ibm.com/docs/en/sdk-java-technology/8?topic=customization-matching-behavior-sslcontextgetinstancetls-oracle#matchsslcontext_tls.

IntelliJ IDEA installation and configuration

Tool used for developing applications. Use version 2019.1 or another compatible version.

NOTE:
  • If you are using an IBM JDK, ensure that the JDK configured in IntelliJ IDEA is the IBM JDK.
  • If you are using an Oracle JDK, ensure that the JDK configured in IntelliJ IDEA is the Oracle JDK.
  • If you are using OpenJDK, ensure that the JDK configured in IntelliJ IDEA is OpenJDK.
  • Do not use the same workspace or the same sample project path for different IntelliJ IDEA projects.

Maven installation

Basic configuration for the development environment. This tool is used for project management throughout the software development lifecycle.

Scala installation

Basic configuration for the Scala development environment. The required version is 2.12.14.

Scala plug-in installation

Basic configuration for the Scala development environment. Use version 2018.2.11 or another compatible version.

Editra installation

Editra is an editor for the Python development environment and is used to write Python programs. You can also use other IDEs for Python programming.

7-Zip

A tool used to decompress *.zip and *.rar packages. Version 16.04 is supported.

Python installation

The version must be 3.7 or later.
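
The following is a minimal example of the IBM JDK setting mentioned in the note above; the JAR file name is a placeholder for your own application:

java -Dcom.ibm.jsse2.overrideDefaultTLS=true -jar your-application.jar

The property can also be set programmatically with System.setProperty("com.ibm.jsse2.overrideDefaultTLS", "true"), provided this happens before the first SSL context is created.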

Preparing a Running Environment

During application development, prepare the environment for running and commissioning code to verify that the application can run properly.

  • If the local Windows development environment can communicate with the cluster service plane, download the cluster client to the local host, obtain the cluster configuration files required for commissioning, configure the network connection, and commission the application in Windows.
    1. Log in to the FusionInsight Manager portal and choose Cluster > Dashboard > More > Download Client. Set Select Client Type to Configuration Files Only. Select the platform type based on the type of the node where the client is to be installed (select x86_64 for the x86 architecture and aarch64 for the Arm architecture) and click OK. After the client files are packaged and generated, download the client to the local PC as prompted and decompress it.

      For example, if the client file package is FusionInsight_Cluster_1_Services_Client.tar, decompress it to obtain FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles.tar. Then, decompress this file to the D:\FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles directory on the local PC. The directory name cannot contain spaces.

    2. Go to the client decompression path FusionInsight_Cluster_1_Services_ClientConfig_ConfigFiles\Spark2x\config and manually import the configuration files to the configuration file directory (usually the resources folder) of the Spark sample project.

      Table 2 describes the main configuration files.

      Table 2 Configuration files

      File                         Description
      carbon.properties            CarbonData configurations
      core-site.xml                HDFS parameters
      hdfs-site.xml                HDFS parameters
      hbase-site.xml               HBase parameters
      hive-site.xml                Hive parameters
      jaas-zk.conf                 Java authentication configurations
      log4j-executor.properties    Executor log configurations
      mapred-site.xml              Hadoop MapReduce configurations
      ranger-spark-audit.xml       Ranger audit log configurations
      ranger-spark-security.xml    Ranger permission management configurations
      yarn-site.xml                Yarn parameters
      spark-defaults.conf          Spark parameters
      spark-env.sh                 Spark environment variable configurations

    3. During application development, if you need to commission the application in the local Windows system, copy the content of the hosts file in the decompression directory to the local hosts file, and ensure that the local host can communicate with all hosts listed in that file. A verification sketch follows this procedure.
      • If the client host is outside the cluster, configure network connectivity between the client and the cluster to prevent errors when you run commands on the client.
      • In a Windows environment, the local hosts file is typically stored at C:\WINDOWS\system32\drivers\etc\hosts.
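
    After the configuration files are imported, a small check program can confirm that they are picked up and that the cluster is reachable. This is a minimal sketch, assuming the files from Table 2 are on the classpath (for example, in the resources folder) and the Hadoop client dependencies are available; ConfigCheck is a hypothetical class name, and a security-enabled cluster would additionally require Kerberos authentication before this call succeeds:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ConfigCheck {
        public static void main(String[] args) throws Exception {
            // core-site.xml and hdfs-site.xml are loaded from the classpath;
            // naming them explicitly makes the dependency visible.
            Configuration conf = new Configuration();
            conf.addResource("core-site.xml");
            conf.addResource("hdfs-site.xml");
            System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
            // Opening the file system and probing the root directory verifies
            // that the local host can reach the cluster service plane.
            try (FileSystem fs = FileSystem.get(conf)) {
                System.out.println("Root exists: " + fs.exists(new Path("/")));
            }
        }
    }

    If fs.defaultFS prints the cluster address and the probe succeeds, the configuration files and the hosts mapping are set up correctly.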
  • To use the Linux environment for project commissioning, install the cluster client on the Linux node and obtain related configuration files.
    1. Install the client on the node. For example, install the client in the /opt/client directory.

      The difference between the client time and the cluster time must be less than 5 minutes.

      For details about how to use a client on the master or core nodes inside a cluster, see Using an MRS Client on Nodes Inside a Cluster. For details about how to use a client outside a cluster, see Using an MRS Client on Nodes Outside a Cluster.

    2. Log in to the FusionInsight Manager portal and download the cluster client software package to the active management node. Then, log in to the active management node as user root, go to the decompression path of the cluster client, and copy all configuration files in the FusionInsight_Cluster_1_Services_ClientConfig/Spark2x/config directory to the conf directory where the compiled JAR package is stored, for example, /opt/client/conf, for subsequent commissioning.

      For example, if the client software package is FusionInsight_Cluster_1_Services_Client.tar and it is downloaded to the /tmp/FusionInsight-Client directory on the active management node, run the following commands:

      cd /tmp/FusionInsight-Client

      tar -xvf FusionInsight_Cluster_1_Services_Client.tar

      tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

      cd FusionInsight_Cluster_1_Services_ClientConfig

      scp Spark2x/config/* root@IP address of the client node:/opt/client/conf

      Table 3 Configuration files

      File                         Description
      carbon.properties            CarbonData configurations
      core-site.xml                HDFS parameters
      hdfs-site.xml                HDFS parameters
      hbase-site.xml               HBase parameters
      hive-site.xml                Hive parameters
      jaas-zk.conf                 Java authentication configurations
      log4j-executor.properties    Executor log configurations
      mapred-site.xml              Hadoop MapReduce configurations
      ranger-spark-audit.xml       Ranger audit log configurations
      ranger-spark-security.xml    Ranger permission management configurations
      yarn-site.xml                Yarn parameters
      spark-defaults.conf          Spark parameters
      spark-env.sh                 Spark environment variable configurations

    3. Check the network connection of the client node.

      During client installation, the system automatically configures the hosts file on the client node. You are advised to check whether the /etc/hosts file contains the host names of all nodes in the cluster. If it does not, manually copy the content of the hosts file in the decompression directory to the hosts file on the node where the client is deployed, and then verify connectivity as shown below.
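
      For example, the following commands confirm the mapping and basic reachability from the client node; the host name in angle brackets is a placeholder for an actual cluster node:

      cat /etc/hosts

      ping -c 3 <host name of a cluster node>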