
Preparing for Development and Operating Environment

Updated on 2024-08-10 GMT+08:00

Preparing Development Environment

Table 1 describes the environment required for application development.

Table 1 Development environment

Item

Description

OS

  • Development environment: Windows OS. Windows 7 or later is supported.
  • Operating environment: Linux OS.

    If the program needs to be commissioned locally, the running environment must be able to communicate with the cluster service plane network.

JDK installation

Basic configuration of the development and running environment. The version requirements are as follows:

The MRS server and client support only the built-in OpenJDK; other JDKs cannot be used.

If JAR packages of SDK classes referenced by customer applications run in the application process, the JDK requirements are as follows:

  • For x86 nodes that run clients, use the following JDKs:
    • Oracle JDK 1.8
    • IBM JDK 1.8.0.7.20 and 1.8.0.6.15
  • For Arm nodes that run clients, use the following JDKs:
    • OpenJDK 1.8.0_272 (built-in JDK, which can be obtained from the JDK folder in the cluster client installation directory.)
    • BiSheng JDK 1.8.0_272

IntelliJ IDEA installation and configuration

IntelliJ IDEA is the tool used to develop Flink applications. Version 2019.1 or another compatible version is required.

Scala installation

Installing Scala is a basic configuration step for the Scala development environment. Version 2.11.7 is required.

Scala plug-in installation

Installing the Scala plug-in is a basic configuration step for the Scala development environment. Plug-in version 1.5.4 is required.

Maven installation

Basic configuration of the development environment for project management throughout the lifecycle of software development.

Developer account preparation

See Preparing MRS Application Development User for configuration.

7-Zip

A tool used to decompress .zip and .rar packages. 7-Zip 16.04 is supported.

Python3

Used to run Flink Python jobs. Python 3.6 or later is required.
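As a quick sanity check of the JDK requirement in Table 1, a small helper like the following can parse the first line of `java -version` output and confirm a 1.8 release. This is a hypothetical illustration, not part of any MRS tool; the function name is an assumption.

```python
import re

def is_jdk8(version_output: str) -> bool:
    # `java -version` prints a line such as: openjdk version "1.8.0_272"
    # (note: the JVM writes this to stderr, not stdout).
    # Check that the reported release is a 1.8 build, as Table 1 requires.
    match = re.search(r'version "([^"]+)"', version_output)
    return bool(match) and match.group(1).startswith("1.8")
```

For example, the output of `subprocess.run(["java", "-version"], capture_output=True, text=True).stderr` could be passed to this check.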

Preparing an Operating Environment

During application development, you need to prepare the code running and commissioning environment to verify that the application is running properly.

  • If you use the Linux environment for commissioning, you need to prepare the Linux node where the cluster client is to be installed and obtain related configuration files.
    1. Install the client on the node. For example, the client installation directory is /opt/hadoopclient.

      Ensure that the difference between the client time and the cluster time is less than 5 minutes.

      For details about how to use the client on a Master or Core node in the cluster, see Using an MRS Client on Nodes Inside a Cluster. For details about how to install the client outside the MRS cluster, see Using an MRS Client on Nodes Outside a Cluster.

      NOTE:
      • Verify that the configuration options in the Flink client configuration file flink-conf.yaml are correctly configured. For details, see Installing the Client and Preparing for Security Authentication.
      • In security mode, append the service IP address of the node where the client is installed and the floating IP address of Manager to the jobmanager.web.allow-access-address configuration item in the flink-conf.yaml file. Use commas (,) to separate IP addresses.
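The note about jobmanager.web.allow-access-address amounts to merging new addresses into a comma-separated value. A minimal sketch of that merge, operating on the file content in memory (the helper name is an assumption; it is not part of the Flink client):

```python
def append_allowed_addresses(conf_text: str, new_ips) -> str:
    # Merge IP addresses into the jobmanager.web.allow-access-address
    # option of flink-conf.yaml content, keeping the value a
    # comma-separated list as the note describes.
    key = "jobmanager.web.allow-access-address"
    lines = conf_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith(key):
            current = line.split(":", 1)[1].strip()
            merged = [ip for ip in current.split(",") if ip]
            for ip in new_ips:
                if ip not in merged:   # avoid duplicate entries
                    merged.append(ip)
            lines[i] = f"{key}: {','.join(merged)}"
            break
    else:
        # Key absent: add it with the new addresses.
        lines.append(f"{key}: {','.join(new_ips)}")
    return "\n".join(lines)
```

The caller would read flink-conf.yaml, pass its text through this helper, and write the result back.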
    2. Log in to the FusionInsight Manager portal and download the cluster client software package to the active management node. Log in to the active management node as user root, decompress the package, go to the decompression path, and copy all configuration files in the FusionInsight_Cluster_1_Services_ClientConfig/Flink/config directory to the conf directory where the compiled JAR package is stored, for example, /opt/hadoopclient/conf, for subsequent commissioning.

      For example, if the client software package is FusionInsight_Cluster_1_Services_Client.tar and the download path is /tmp/FusionInsight-Client on the active management node, run the following command:

      cd /tmp/FusionInsight-Client

      tar -xvf FusionInsight_Cluster_1_Services_Client.tar

      tar -xvf FusionInsight_Cluster_1_Services_ClientConfig.tar

      cd FusionInsight_Cluster_1_Services_ClientConfig

      scp Flink/config/* root@IP address of the client node:/opt/hadoopclient/conf

      The keytab file obtained in Preparing MRS Application Development User is also stored in this directory. Table 2 describes the main configuration files.

      Table 2 Configuration files

        • core-site.xml: Configures Hadoop Core parameters.
        • hdfs-site.xml: Configures HDFS parameters.
        • yarn-site.xml: Configures Yarn parameters.
        • flink-conf.yaml: Configures Flink parameters.
        • user.keytab: Provides user information for Kerberos security authentication.
        • krb5.conf: Provides Kerberos server configuration information.
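After copying, the presence of the Table 2 files in the conf directory can be verified with a short check. This is an illustrative sketch (the function name and return convention are assumptions), using the file names from Table 2:

```python
import os

# The configuration files listed in Table 2.
REQUIRED_FILES = (
    "core-site.xml", "hdfs-site.xml", "yarn-site.xml",
    "flink-conf.yaml", "user.keytab", "krb5.conf",
)

def missing_config_files(conf_dir: str):
    # Return the Table 2 files that are absent from the conf directory,
    # for example /opt/hadoopclient/conf. An empty list means all
    # required files are in place.
    return [name for name in REQUIRED_FILES
            if not os.path.isfile(os.path.join(conf_dir, name))]
```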

    3. Check the network connection of the client node.

      During client installation, the system automatically configures the hosts file on the client node. You are advised to check whether the /etc/hosts file contains the host names of all nodes in the cluster. If it does not, manually copy the content of the hosts file in the decompression directory to the hosts file on the node where the client resides, so that the local host can communicate with each host in the cluster.
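The hosts-file check above can be sketched as a helper that parses hosts-file content and reports which cluster host names are still missing. The function name and list of expected host names are assumptions for illustration:

```python
def hosts_missing_entries(hosts_text: str, cluster_hostnames):
    # Parse hosts-file content ("IP name [alias ...]" per line, with
    # `#` starting a comment) and report which of the expected cluster
    # host names are not yet present.
    present = set()
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        present.update(line.split()[1:])  # skip the leading IP column
    return [name for name in cluster_hostnames if name not in present]
```

Running it against the content of /etc/hosts and the host names from the cluster's hosts file would show what needs to be copied over.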

    4. (Optional) To run a Python job, perform the following additional configurations:
      1. Log in to the node where the Flink client is installed as user root and run the following command to check whether Python 3.6 or later has been installed:

        python3 -V

      2. Go to a working directory, for example, /srv/pyflink-example, and create and activate a virtual environment:

        cd /srv/pyflink-example

        virtualenv venv --python=python3.x

        source venv/bin/activate

      3. Copy the Flink/flink/opt/python/apache-flink-*.tar.gz file from the client installation directory to /srv/pyflink-example:

        cp Client installation directory/Flink/flink/opt/python/apache-flink-*.tar.gz /srv/pyflink-example

      4. Install the dependency package. If the following command output is displayed, the installation is successful:

        python -m pip install apache-flink-libraries-*.tar.gz

        python -m pip install apache-flink-Version number.tar.gz
        ...
        Successfully built apache-flink
        Installing collected packages: apache-flink
          Attempting uninstall: apache-flink
            Found existing installation: apache-flink x.xx.x
            Uninstalling apache-flink-x.xx.x:
              Successfully uninstalled apache-flink-x.xx.x
        Successfully installed apache-flink-x.xx.x
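Once the steps above have run, the two prerequisites for Flink Python jobs (a 3.6+ interpreter and an importable apache-flink package) can be confirmed from inside the virtual environment. A minimal sketch, assuming the package's top-level module is named pyflink:

```python
import importlib.util
import sys

def flink_python_env_ready(min_version=(3, 6)):
    # Report the two prerequisites for running Flink Python jobs:
    # interpreter version (Python 3.6 or later) and importability of
    # the apache-flink package (top-level module: pyflink).
    version_ok = sys.version_info[:2] >= min_version
    pyflink_ok = importlib.util.find_spec("pyflink") is not None
    return version_ok, pyflink_ok
```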
