HBase Phoenix APIs

Updated on 2024-08-16 GMT+08:00

Version Mapping

If you want to use Phoenix, download the Phoenix version corresponding to the current MRS cluster. For details, see https://phoenix.apache.org. Table 1 lists the version mapping between MRS and Phoenix.

Table 1 Version mapping between MRS and Phoenix

MRS Version | Phoenix Version   | Remarks
MRS 1.9.2   | x.xx.x-HBase-1.3  | Example: 4.14.1-HBase-1.3
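
If you are unsure which Phoenix package matches your cluster, you can check the HBase version on a node where the MRS client is installed and choose the Phoenix release built for the same HBase line. A minimal sketch, assuming the client is installed in /opt/client as in the examples later in this topic:

  # Load the MRS client environment variables (adjust the path to your installation).
  source /opt/client/bigdata_env
  # Print the HBase version; pick the Phoenix package built for the same HBase line,
  # for example x.xx.x-HBase-1.3 for an HBase 1.3.x cluster.
  hbase version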

Configuration Method

MRS 3.x and later support Phoenix natively, so you can use Phoenix directly on a node where the HBase client is installed; for details about operations on clusters with Kerberos authentication enabled or disabled, see Phoenix Command Line. For versions earlier than MRS 3.x, download the third-party Phoenix package from the official website and perform the following configuration:

  1. Download the Phoenix binary package (for example, apache-phoenix-4.14.1-HBase-1.3-bin.tar.gz) from the official website (https://phoenix.apache.org/download.html) and upload it to any Master node in the cluster. Decompress the package, change its owner, and switch to user omm:
    tar -xvf apache-phoenix-4.14.1-HBase-1.3-bin.tar.gz
    chown omm:wheel apache-phoenix-4.14.1-HBase-1.3-bin -R
    su - omm
  2. Go to the apache-phoenix-4.14.1-HBase-1.3-bin directory and create the following script, for example, installPhoenixJar.sh. Run it as sh installPhoenixJar.sh <PHOENIX_HBASE_VERSION> <MRS_VERSION> <IPs>, where <IPs> lists the IP addresses of the nodes where HBase is installed, that is, all Master and Core nodes of the current cluster; use the actual IP addresses of your cluster (an example invocation is shown at the end of this section). The script is as follows:
    #!/bin/bash
    
    PHOENIX_HBASE_VERSION=$1
    shift
    MRS_VERSION=$1
    shift
    IPs=$1
    shift
    check_cmd_result() {
        echo "executing command: $*"
        str="$@"
        if [ ${#str} -eq 7 ]; then
            echo "please check input args, such as, sh installPhoenixJar.sh 5.0.0-HBase-2.0 2.0.1 xx.xx.xx.xx,xx.xx.xx.xx,xx.xx.xx.xx"
            exit 1
        fi
        if ! eval $*
        then
            echo "Failed to execute: $*"
            exit 1
        fi
    }
    
    check_cmd_result [ -n "$PHOENIX_HBASE_VERSION" ]
    check_cmd_result [ -n "$MRS_VERSION" ]
    check_cmd_result [ -n "$IPs" ]
    
    if [ ${MRS_VERSION}X = "1.8.5"X ]; then
        MRS_VERSION="1.8.3"
    fi
    if [[ ${MRS_VERSION} =~ "1.6" ]]; then
        WORKDIR="FusionInsight"
    elif [[ ${MRS_VERSION} =~ "1.7" ]]; then
        WORKDIR="MRS"
    else
        WORKDIR="MRS_${MRS_VERSION}/install"
    fi
    
    check_cmd_result HBASE_LIBDIR=$(ls -d /opt/Bigdata/${WORKDIR}/FusionInsight-HBase-*/hbase/lib)
    # copy jars to local node.
    check_cmd_result cp phoenix-${PHOENIX_HBASE_VERSION}-server.jar ${HBASE_LIBDIR}
    check_cmd_result cp phoenix-core-${PHOENIX_HBASE_VERSION}.jar ${HBASE_LIBDIR}
    
    check_cmd_result chmod 700 ${HBASE_LIBDIR}/phoenix-${PHOENIX_HBASE_VERSION}-server.jar
    check_cmd_result chmod 700 ${HBASE_LIBDIR}/phoenix-core-${PHOENIX_HBASE_VERSION}.jar
    
    check_cmd_result chown omm:wheel ${HBASE_LIBDIR}/phoenix-${PHOENIX_HBASE_VERSION}-server.jar
    check_cmd_result chown omm:wheel ${HBASE_LIBDIR}/phoenix-core-${PHOENIX_HBASE_VERSION}.jar
    
    if [[ "$MRS_VERSION" =~ "2." ]]; then
        check_cmd_result rm -rf ${HBASE_LIBDIR}/htrace-core-3.1.0-incubating.jar
        check_cmd_result rm -rf /opt/client/HBase/hbase/lib/joda-time-2.1.jar
        check_cmd_result ln -s /opt/share/htrace-core-3.1.0-incubating/htrace-core-3.1.0-incubating.jar \
    ${HBASE_LIBDIR}/htrace-core-3.1.0-incubating.jar
        check_cmd_result ln -s /opt/share/joda-time-2.1/joda-time-2.1.jar /opt/client/HBase/hbase/lib/joda-time-2.1.jar    
    fi
    
    # copy jars to other nodes.
    localIp=$(hostname -i)
    ipArr=($(echo "$IPs" | sed "s|\,|\ |g"))
    for ip in ${ipArr[@]}
    do
        if [ "$ip"X = "$localIp"X ]; then
            echo "skip copying jar to local node."
            continue
        fi
        check_cmd_result scp ${HBASE_LIBDIR}/phoenix-${PHOENIX_HBASE_VERSION}-server.jar ${ip}:${HBASE_LIBDIR} 2>/dev/null
        check_cmd_result scp ${HBASE_LIBDIR}/phoenix-core-${PHOENIX_HBASE_VERSION}.jar ${ip}:${HBASE_LIBDIR} 2>/dev/null
        if [[ "$MRS_VERSION" =~ "2." ]]; then
            check_cmd_result ssh $ip "rm -rf ${HBASE_LIBDIR}/htrace-core-3.1.0-incubating.jar" 2>/dev/null
            check_cmd_result ssh $ip "rm -rf /opt/client/HBase/hbase/lib/joda-time-2.1.jar" 2>/dev/null
            check_cmd_result ssh $ip "ln -s /opt/share/htrace-core-3.1.0-incubating/htrace-core-3.1.0-incubating.jar \
    ${HBASE_LIBDIR}/htrace-core-3.1.0-incubating.jar" 2>/dev/null
            check_cmd_result ssh $ip "ln -s /opt/share/joda-time-2.1/joda-time-2.1.jar /opt/client/HBase/hbase/lib/joda-time-2.1.jar" 2>/dev/null
        fi
    done
    echo "installing phoenix jars to hbase successfully..."
    NOTE:
    • Copy the preceding script as plain text (.txt) to avoid formatting errors.
    • <PHOENIX_HBASE_VERSION>: Current Phoenix version. For example, versions earlier than MRS 3.x support Phoenix 4.14.1-HBase-1.3.
    • <MRS_VERSION>: Current MRS version.
    • <IPs>: IP addresses of the nodes where HBase is installed, that is, the IP addresses of the Master and Core nodes in the current cluster. Separate the IP addresses with commas (,).
    • If the message "installing phoenix jars to hbase successfully..." is displayed after the script is executed, Phoenix has been successfully installed.
  3. Log in to MRS Manager and restart the HBase service.
  4. Configure the Phoenix client parameters. You can skip this step for a cluster with Kerberos authentication disabled.
    1. Configure authentication information for a Phoenix connection. Go to $PHOENIX_HOME/bin and edit the hbase-site.xml file. Set the parameters listed in Table 2.
      Table 2 Phoenix parameters

      Parameter                              | Description                                                      | Default Value
      hbase.regionserver.kerberos.principal  | Principal of the RegionServer of the current cluster             | Not set
      hbase.master.kerberos.principal        | Principal of the HMaster of the current cluster                  | Not set
      hbase.security.authentication          | Authentication mode used for initializing the Phoenix connection | kerberos

      You can configure the parameters as follows:
      <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hbase/hadoop.hadoop.com@HADOOP.COM</value>
      </property>
      <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hbase/hadoop.hadoop.com@HADOOP.COM</value>
      </property>
      <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
      </property>
      NOTE:
      The hbase.master.kerberos.principal and hbase.regionserver.kerberos.principal parameters are the Kerberos principals used by HBase in a security cluster with Kerberos authentication enabled. You can obtain their values by searching the hbase-site.xml file on the client. For example, if the client is installed in the /opt/client directory of the Master node, run the grep "kerberos.principal" /opt/client/HBase/hbase/conf/hbase-site.xml -A1 command to obtain the principal of HBase, as shown in Figure 1.
      Figure 1 Obtaining the principal of HBase.
    2. Modify the sqlline.py script (for example, apache-phoenix-4.14.1-HBase-1.3-bin/bin/sqlline.py) in the bin directory of the Phoenix path and add the dependency information of the HBase client, as shown in Figure 2.
      Figure 2 Phoenix dependencies and ZooKeeper authentication

      The configuration details are as follows:

      • Add the lib directory of the HBase client (for example, $HBASE_HOME/lib/*:) to the classpath used by sqlline.py.
      • Add the related authentication information (for example, $HBASE_OPTS).
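
For reference, here is an example invocation of the installPhoenixJar.sh script from step 2. The version strings and IP addresses below are placeholders and must be replaced with the actual values of your cluster:

  # Run on the Master node where the Phoenix package was decompressed.
  # Arguments: <PHOENIX_HBASE_VERSION> <MRS_VERSION> <IPs> (comma-separated Master and Core node IPs).
  sh installPhoenixJar.sh 4.14.1-HBase-1.3 1.9.2 192.168.0.11,192.168.0.12,192.168.0.13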

Usage

Phoenix enables you to operate HBase using SQL statements. The following describes how to use SQL statements to create tables, insert data, query data, and delete tables. Phoenix also allows you to operate HBase using JDBC. For details, see HBase SQL Query Sample Code.

  1. Connect to Phoenix.
    source /opt/client/bigdata_env
    kinit <MRS cluster user> (the MRS cluster user can be the built-in user hbase or another user that has been added to the hbase group; skip this command for a cluster with Kerberos authentication disabled)
    cd $PHOENIX_HOME
    bin/sqlline.py zookeeperIp:2181
    NOTE:

    1. For versions earlier than MRS 1.9.2, the ZooKeeper port number is 24002. For details, see the ZooKeeper cluster configurations on MRS Manager.

    2. If the Phoenix index function is used, add the following configurations to the HBase server (including HMaster and RegionServer). For details, see https://phoenix.apache.org/secondary_indexing.html.

    <property>
      <name>hbase.regionserver.wal.codec</name>
      <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>

  2. Create a table.
    CREATE TABLE TEST (id VARCHAR PRIMARY KEY, name VARCHAR);
  3. Insert data.
    UPSERT INTO TEST(id,name) VALUES ('1','jamee');
  4. Query data.
    SELECT * FROM TEST;
  5. Delete a table.
    DROP TABLE TEST;
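
The statements above can also be executed in one batch by passing a SQL file as the second argument to sqlline.py. A minimal sketch; the file path and ZooKeeper address are assumptions and should be replaced with your own values:

  # Write the statements to a SQL file (the path is an assumption).
  echo "CREATE TABLE TEST (id VARCHAR PRIMARY KEY, name VARCHAR);" > /tmp/phoenix_demo.sql
  echo "UPSERT INTO TEST(id,name) VALUES ('1','jamee');" >> /tmp/phoenix_demo.sql
  echo "SELECT * FROM TEST;" >> /tmp/phoenix_demo.sql
  echo "DROP TABLE TEST;" >> /tmp/phoenix_demo.sql
  # Run the file non-interactively; replace zookeeperIp with the address of a ZooKeeper node in the cluster.
  cd $PHOENIX_HOME
  bin/sqlline.py zookeeperIp:2181 /tmp/phoenix_demo.sql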
