Updated on 2024-08-10 GMT+08:00

Preparing for Development and Operating Environment

Preparing Development Environment

Table 1 describes the environment required for application development.

Table 1 Development environment

Item

Description

OS

  • Development environment: Windows OS. Windows 7 or later is supported.
  • Operating environment: Linux OS.

    If the program needs to be commissioned locally, the running environment must be able to communicate with the cluster service plane network.

JDK installation

Basic configuration of the development and running environment. The version requirements are as follows:

The server and client support only the built-in OpenJDK, which cannot be replaced with other JDKs.

If customer applications need to reference the JAR packages of the SDK classes and run them in the application process, the following JDK requirements apply:

  • For x86 nodes that run clients, use the following JDKs:
    • Oracle JDK 1.8
    • IBM JDK 1.8.0.7.20 and 1.8.0.6.15
  • For Arm nodes that run clients, use the following JDKs:
    • OpenJDK 1.8.0_272 (built-in JDK, which can be obtained from the JDK folder in the cluster client installation directory)
    • BiSheng JDK 1.8.0_272

IntelliJ IDEA installation and configuration

IntelliJ IDEA is a tool used to develop Flink applications. The version must be 2019.1 or another compatible version.

Scala installation

Installing Scala is a basic configuration for the Scala development environment. The required version is 2.11.7.

Scala plug-in installation

Installing the Scala plug-in is a basic configuration for the Scala development environment. The required version is 1.5.4.

Maven installation

Basic configuration of the development environment. Maven is used for project management throughout the software development lifecycle.

7-Zip

A tool used to decompress .zip and .rar packages. 7-Zip 16.04 is supported.

Python3

Used to run Flink Python jobs. Python 3.6 or later is required.
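
You can quickly confirm that the required tools are available by checking their versions from the command line. The commands below are a general check, not part of any specific installation procedure; command names and output depend on the OS and the installed versions:

java -version

mvn -v

scala -version

python3 -V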

Preparing an Operating Environment

During application development, you need to prepare an environment for running and commissioning the code to verify that the application runs properly.

  • If you use the Linux environment for commissioning, you need to prepare the Linux node where the cluster client is to be installed and obtain related configuration files.
    1. Install the client on the node. For example, the client installation directory is /opt/client.

      Ensure that the difference between the client time and the cluster time is less than 5 minutes.

      For details about how to use the client on a Master or Core node in the cluster, see Using an MRS Client on Nodes Inside a Cluster. For details about how to install the client outside the MRS cluster, see Using an MRS Client on Nodes Outside a Cluster.
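
      For example, you can run the date command on the client node and on a cluster node and compare the results to confirm the time difference is within 5 minutes (the output format below is only an example):

      date "+%Y-%m-%d %H:%M:%S"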

    2. Log in to the FusionInsight Manager portal and download the cluster client software package to the active management node. Log in to the active management node as user root and decompress the package. Then go to the decompression path of the cluster client and copy all configuration files in the FusionInsight_Cluster_1_Services_ClientConfig/Flink/config directory to the conf directory where the compiled JAR package is stored for subsequent commissioning, for example, /opt/client/conf.

      For example, if the client software package is FusionInsight_Cluster_1_Services_Client.tar and the download path is /tmp/FusionInsight-Client on the active management node, run the following commands:

      cd /tmp/FusionInsight-Client

      tar -xvf FusionInsight_Cluster_1_Services_Client.tar

      tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

      cd FusionInsight_Cluster_1_Services_ClientConfig

      scp Flink/config/* root@IP address of the client node:/opt/client/conf

      Table 2 describes the main configuration files.

      Table 2 Configuration file

      File Name

      Function

      core-site.xml

      Configures HDFS parameters.

      hdfs-site.xml

      Configures HDFS parameters.

      yarn-site.xml

      Configures Yarn parameters.

      flink-conf.yaml

      Configures Flink parameters.

    3. Check the network connection of the client node.

      During the client installation, the system automatically configures the hosts file on the client node. You are advised to check whether the /etc/hosts file contains the host names of the nodes in the cluster. If not, manually copy the content of the hosts file in the decompression directory to the hosts file on the node where the client resides, to ensure that the local host can communicate with each host in the cluster.
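
      For example, you can view the file on the client node and check that it contains one entry for each cluster node (the IP addresses and host names below are placeholders for illustration only):

      cat /etc/hosts

      192.168.0.11 node-master1
      192.168.0.12 node-core1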

    4. (Optional) To run a Python job, perform the following additional configurations:
      1. Log in to the node where the Flink client is installed as the root user and run the following command to check whether Python 3.6 or later has been installed:

        python3 -V

      2. Go to the Python 3 installation path, for example, /srv/pyflink-example, and install virtualenv:

        cd /srv/pyflink-example

        virtualenv venv --python=python3.x

        source venv/bin/activate
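
        After the virtual environment is activated, you can optionally confirm that the expected interpreter is in use:

        python -V

        which python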

      3. Copy the Flink/flink/opt/python/apache-flink-*.tar.gz file from the client installation directory to /srv/pyflink-example:

        cp Client installation directory/Flink/flink/opt/python/apache-flink-*.tar.gz /srv/pyflink-example

      4. Install the dependency packages. If command output similar to the following is displayed, the installation is successful:

        python -m pip install apache-flink-libraries-*.tar.gz

        python -m pip install apache-flink-Version number.tar.gz
        ...
        Successfully built apache-flink
         Installing collected packages: apache-flink
          Attempting uninstall: apache-flink
           Found existing installation: apache-flink x.xx.x
           Uninstalling apache-flink-x.xx.x:
            Successfully uninstalled apache-flink-x.xx.x
        Successfully installed apache-flink-x.xx.x
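
        You can optionally verify that PyFlink is available in the virtual environment, for example with the following commands:

        python -m pip show apache-flink

        python -c "import pyflink; print('PyFlink is available')"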