
Using HBase Dual-Read

Updated on 2024-08-10 GMT+08:00

Scenario

The HBase client application implements dual-read by loading custom configuration items for both the active and standby clusters. HBase dual-read is a key feature for improving the high availability of an HBase cluster system. It applies to four query scenarios: reading data with Get, reading data in batches with Get, reading data with Scan, and querying data through a secondary index. HBase can read from the active and standby clusters at the same time, which reduces query latency spikes. The advantages are as follows:
  • High success rate: The concurrent dual-read mechanism ensures a high success rate of read requests.
  • High availability: If one cluster is faulty, the query service is not interrupted, and a brief network jitter does not prolong the query time.
  • High generality: Dual-read does not support dual-write, but it does not affect the existing real-time write scenario.
  • Ease of use: The capability is encapsulated in the client and is transparent to services.
NOTE:

Restrictions on HBase dual-read:

  • The HBase dual-read feature is implemented based on replication. Data read from the standby cluster may differ from data in the active cluster, so only eventual consistency can be achieved.
  • Currently, the HBase dual-read feature is used only for queries. If the active cluster breaks down, the latest data cannot be synchronized, so the latest data cannot be queried in the standby cluster.
  • An HBase Scan operation may be split into multiple RPC calls. Because session information is not synchronized between the clusters, the returned data may not be identical, so dual-read takes effect only on the first RPC call. All requests issued before the ResultScanner is closed go to the cluster that served the first RPC call.
  • The HBase Admin API and the real-time write API access only the active cluster. Therefore, if the active cluster breaks down, the Admin API and the real-time write API become unavailable, and only the Get and Scan query services remain available.

Add the Active/Standby Cluster Configuration to the hbase-dual.xml File

  1. Obtain the client configuration files core-site.xml, hbase-site.xml, and hdfs-site.xml of the HBase active cluster and save them to the src/main/resources/conf/active directory. You need to create this directory yourself. For details, see Preparing for Development and Operating Environment.
  2. Obtain the client configuration files core-site.xml, hbase-site.xml, and hdfs-site.xml of the standby cluster and save them to the src/main/resources/conf/standby directory. You need to create this directory yourself. For details, see Preparing for Development and Operating Environment.
  3. Create the hbase-dual.xml configuration file and save it to the src/main/resources/conf/ directory. For details about the configuration items in the configuration file, see Table 1.

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
    <!--Configuration file directory of the active cluster-->
        <property>
            <name>hbase.dualclient.active.cluster.configuration.path</name>
            <value>{Sample code directory}\\src\\main\\resources\\conf\\active</value>
        </property>
    <!--Configuration file directory of the standby cluster-->
        <property>
            <name>hbase.dualclient.standby.cluster.configuration.path</name>
            <value>{Sample code directory}\\src\\main\\resources\\conf\\standby</value>
        </property>
    <!--Connection implementation of the dual-read mode-->
        <property>
            <name>hbase.client.connection.impl</name>
            <value>org.apache.hadoop.hbase.client.HBaseMultiClusterConnectionImpl</value>
        </property>
    <!--Normal mode-->
        <property>
            <name>hbase.security.authentication</name>
            <value>simple</value>
        </property>
    <!--Normal mode-->
        <property>
            <name>hadoop.security.authentication</name>
            <value>simple</value>
        </property>
    </configuration>

  4. Create a dual-read configuration.

    • The following code snippet belongs to the init method of the TestMain class in the com.huawei.bigdata.hbase.examples package. A usage sketch follows the snippet.
      private static void init() throws IOException {
          // Default load from conf directory
          conf = HBaseConfiguration.create();
          //In Windows environment
          String userdir = TestMain.class.getClassLoader().getResource("conf").getPath() + File.separator;
          //In Linux environment
          //String userdir = System.getProperty("user.dir") + File.separator + "conf" + File.separator;
          conf.addResource(new Path(userdir + "hbase-dual.xml"), false);
        }
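    • For orientation, the following is a minimal usage sketch, not part of the sample project: it assumes the standard HBase client classes (Connection, ConnectionFactory, Table, TableName, Get, Result, Bytes) are imported, and the table name, row key, and column names are placeholders. Because hbase.client.connection.impl points to HBaseMultiClusterConnectionImpl, ConnectionFactory returns a dual-read connection for this configuration.
      // Hypothetical usage of the dual-read configuration created in init();
      // the table name, row key, and column names are placeholders.
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(TableName.valueOf("example_table"))) {
          Get get = new Get(Bytes.toBytes("row1"));
          Result result = table.get(get);
          LOG.info("Dual-read result: "
              + Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))));
      }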

  5. Determine the data source cluster.

    • Get request. The following code snippet belongs to the testGet method of the HBaseSample class in the com.huawei.bigdata.hbase.examples package. A batch Get variant is sketched at the end of this list.
      Result result = table.get(get); 
      if (result instanceof DualResult) {
           LOG.info(((DualResult)result).getClusterId()); 
      }
    • Scan request. The following code snippet belongs to the testScanData method of the HBaseSample class in the com.huawei.bigdata.hbase.examples package.
      ResultScanner rScanner = table.getScanner(scan);  
      if (rScanner instanceof HBaseMultiScanner) {
           LOG.info(((HBaseMultiScanner)rScanner).getClusterId()); 
      }
    • The client can print metric information.

      Add the following content to the log4j.properties file so that the client exports metric information to the specified file. For details about the metrics, see Printing Metric Information.

      log4j.logger.DUAL=debug,DUAL 
      log4j.appender.DUAL=org.apache.log4j.RollingFileAppender 
      # Local dual-read log path on the client. Change it to an actual directory that has the write permission.
      log4j.appender.DUAL.File=/var/log/dual.log
      log4j.additivity.DUAL=false 
      log4j.appender.DUAL.MaxFileSize=${hbase.log.maxfilesize} 
      log4j.appender.DUAL.MaxBackupIndex=${hbase.log.maxbackupindex} 
      log4j.appender.DUAL.layout=org.apache.log4j.PatternLayout 
      log4j.appender.DUAL.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
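    • Batch Get request. The scenario of reading data in batches with Get can be checked in the same way. The following is a hedged sketch (the row keys are placeholders, java.util.List and ArrayList plus the classes from the earlier snippets are assumed to be imported, and the snippet is not part of the sample project); Table.get(List<Get>) returns one Result per Get, and each element can be inspected for its source cluster.
      // Batch Get sketch: check which cluster served each returned result.
      List<Get> gets = new ArrayList<>();
      gets.add(new Get(Bytes.toBytes("row1")));
      gets.add(new Get(Bytes.toBytes("row2")));
      Result[] results = table.get(gets);
      for (Result r : results) {
          if (r instanceof DualResult) {
              LOG.info(((DualResult) r).getClusterId());
          }
      }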

Configure the Active/Standby Cluster Configuration in HBaseMultiClusterConnection

  1. Create a dual-read configuration: uncomment the call to testHBaseDualReadSample in the main method of the TestMain class in the com.huawei.bigdata.hbase.examples package, and ensure that IS_CREATE_CONNECTION_BY_XML in the HBaseDualReadSample class of the same package is set to false.
  2. Add related configurations to the addHbaseDualXmlParam method of the HBaseDualReadSample class. For details about related configuration items, see HBase Dual-Read Configuration Items.

    private void addHbaseDualXmlParam(Configuration conf) {
        // We need to set the optional parameters contained in hbase-dual.xml to conf
        // when we use configuration transfer solution
        conf.set(CONNECTION_IMPL_KEY, DUAL_READ_CONNECTION);
        // conf.set("", "");
    }

  3. Add configurations related to the active cluster client to the initActiveConf method of the HBaseDualReadSample class.

    private void initActiveConf() {
        // The hbase-dual.xml configuration scheme is used to generate the client configuration of the active cluster.
        // In actual application development, you need to generate the client configuration of the active cluster.
        String activeDir = HBaseDualReadSample.class.getClassLoader().getResource(Utils.CONF_DIRECTORY).getPath()
            + File.separator + ACTIVE_DIRECTORY + File.separator;
        Configuration activeConf = Utils.createConfByUserDir(activeDir);
        HBaseMultiClusterConnection.setActiveConf(activeConf);
    }

  4. Add configurations related to the standby cluster client to the initStandbyConf method of the HBaseDualReadSample class. A sketch of the overall initialization flow follows this step.

    private void initStandbyConf() {
        // The hbase-dual.xml configuration scheme is used to generate the client configuration of the standby cluster.
        // In actual application development, you need to generate the client configuration of the standby cluster.
        String standbyDir = HBaseDualReadSample.class.getClassLoader().getResource(Utils.CONF_DIRECTORY).getPath()
            + File.separator + STANDBY_DIRECTORY + File.separator;
        Configuration standbyConf = Utils.createConfByUserDir(standbyDir);
        HBaseMultiClusterConnection.setStandbyConf(standbyConf);
    }
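    Taken together, one plausible initialization sequence when building the connection in code is shown below. This is a sketch only: it reuses the methods and constants described in the preceding steps and is not copied from the sample project.

    // Sketch: set the dual-read connection implementation, register the active and
    // standby client configurations, then open the connection as usual.
    Configuration conf = HBaseConfiguration.create();
    addHbaseDualXmlParam(conf);   // sets CONNECTION_IMPL_KEY to DUAL_READ_CONNECTION
    initActiveConf();             // HBaseMultiClusterConnection.setActiveConf(activeConf)
    initStandbyConf();            // HBaseMultiClusterConnection.setStandbyConf(standbyConf)
    Connection conn = ConnectionFactory.createConnection(conf);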

  5. Determine the data source cluster.

    • Get request. The following code snippet belongs to the testGet method of the HBaseSample class in the com.huawei.bigdata.hbase.examples package.
      Result result = table.get(get); 
      if (result instanceof DualResult) {
           LOG.info(((DualResult)result).getClusterId()); 
      }
    • Scan request. The following code snippet belongs to the testScanData method of the HBaseSample class in the com.huawei.bigdata.hbase.examples package.
      ResultScanner rScanner = table.getScanner(scan);  
      if (rScanner instanceof HBaseMultiScanner) {
           LOG.info(((HBaseMultiScanner)rScanner).getClusterId()); 
      }

  6. The client can print metric information.

    Add the following content to the log4j.properties file so that the client exports metric information to the specified file. For details about the metrics, see Printing Metric Information.

    log4j.logger.DUAL=debug,DUAL 
    log4j.appender.DUAL=org.apache.log4j.RollingFileAppender 
    # Local dual-read log path on the client. Change it to an actual directory that has the write permission.
    log4j.appender.DUAL.File=/var/log/dual.log
    log4j.additivity.DUAL=false 
    log4j.appender.DUAL.MaxFileSize=${hbase.log.maxfilesize} 
    log4j.appender.DUAL.MaxBackupIndex=${hbase.log.maxbackupindex} 
    log4j.appender.DUAL.layout=org.apache.log4j.PatternLayout 
    log4j.appender.DUAL.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
