
Performance Test Methods

Updated on 2025-01-03 GMT+08:00

Objectives

RTA-based advertising poses high technical requirements for advertisers, including fast responses to media-side requests and low data storage costs. In recent years, GeminiDB Redis API has been widely used as a key-value (KV) signature database in RTA scenarios, delivering good performance at low cost.

This section describes a pressure test of GeminiDB Redis instances in RTA scenarios, covering performance in terms of data compression, QPS, bandwidth, and latency.

Test Environment

This test used a GeminiDB Redis cluster and Elastic Cloud Servers (ECSs) with the following specifications.

  • GeminiDB Redis cluster specifications
    • Region: CN East-Shanghai1
    • AZ type: deployed across AZ 1, AZ 2, and AZ 3
    • vCPUs per node: 16
    • Nodes: 20
    • Total storage space: 2 TB

  • ECS specifications
    • AZ type: AZ 1
    • Specifications: c7.4xlarge.2 (three ECSs)
    • vCPUs: 16
    • Memory: 32 GiB
    • Operating system (OS): CentOS 8.2 64-bit

Test Tool

This test used memtier_benchmark, a multi-threaded load testing tool developed by Redis Labs. For details, see memtier_benchmark.

Test Metrics

Service scale of the simulated RTA scenario: 1 TB of data, 1.6 million QPS, and 1.5 Gbit/s of bandwidth.
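The three targets are linked: dividing the bandwidth target by the QPS target gives the average response payload per request. The following is a rough back-of-the-envelope figure that ignores protocol and TCP/IP overhead:

```python
target_qps = 1_600_000        # requests per second
bandwidth_bps = 1.5e9         # 1.5 Gbit/s of read traffic

# Average response payload per request, in bytes.
avg_bytes_per_request = bandwidth_bps / target_qps / 8
print(f"~{avg_bytes_per_request:.0f} bytes of response data per request")  # ~117 bytes
```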

  1. Data samples

    The data samples fall into three categories:

    • Hash: a 34-character key; the value is 10 field-value pairs, where each field contains 10 characters and each value contains 20 to 80 characters.
    • String: a 68-character key; the value is 32 random characters.
    • String: a 19-character key; the value is 500 to 2,000 random characters.

    Four billion keys need to be stored in the Redis cluster. The proportion of the three data categories is about 2:7:1, and frequently accessed (hot) data accounts for 50% of the total.

  2. Metrics

    The test measures the following metrics for database operations:

    • QPS: number of requests executed per second.
    • Avg Latency (ms): average request latency, indicating the overall performance of the GeminiDB Redis cluster.
    • p99 Latency (ms): 99% of requests complete in less than this time.
    • p9999 Latency (ms): 99.99% of requests complete in less than this time.
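As an illustration of what the percentile metrics mean, here is a minimal nearest-rank percentile over a latency sample. This is a simplified sketch, not memtier_benchmark's actual histogram-based implementation:

```python
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile: at least p% of samples are <= the returned value."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# 10,000 synthetic latency samples from 0.01 ms to 100.00 ms
samples = [i / 100 for i in range(1, 10_001)]
print(percentile(samples, 99))     # p99 latency  -> 99.0
print(percentile(samples, 99.99))  # p9999 latency -> 99.99
```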

Test Procedure

  1. Inject test data.

    Before the test, generate and inject test data. Configure the three categories of data as follows:

    1. Hash
      • A key consists of 34 characters in the format of string prefix + nine digits. The digits are consecutive from 100 million to 900 million. The key is used to control the total data volume and hot data distribution.
      • Inject 10 field-value pairs. Each field contains 10 characters and each value contains 20 to 80 random characters; the average value length is 50 characters.
      • Construct and inject 800 million keys.
        memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 20 -t 20 -n 2500000 --command='hset __key__ mendke398d __data__ mebnejkehe __data__ fmebejdbnf __data__ j3i45u8923 __data__ j43245i908 __data__ jhiriu2349 __data__ 21021034ji __data__ jh23ui45j2 __data__ jiu5rj9234 __data__ j23io45u29 __data__' -d 50 --key-maximum=900000000 --key-minimum=100000000 --key-prefix='ewfdjkff43ksdh41fuihikucl' --command-key-pattern=P --pipeline=100
    2. String
      • A key consists of 68 characters in the format of string prefix + 10 digits. The digits are consecutive from 1 billion to 3.8 billion. The key is used to control the total data volume and hot data distribution.
      • Inject 32 random characters for a value.
      • Construct and inject 2.8 billion keys.
        memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 20 -t 20 -n 7500000 -d 32 --key-maximum=3800000000 --key-minimum=1000000000 --key-prefix='cefkljrithuin123894873h4523bhj4b2jkjh2iu13bnfdhsbnkfhsdjkh' --key-pattern=P:P --ratio=1:0 --pipeline=100
    3. String
      • A key consists of 19 characters in the format of string prefix + 9 digits. The digits are consecutive from 100 million to 300 million. The key is used to control the total data volume and hot data distribution.
      • Inject 500 to 2,000 random characters for a value; the average value length is 1,250 characters.
      • Construct and inject 400 million keys.
        memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 20 -t 20 -n 520000 -d 1250 --key-maximum=300000000 --key-minimum=100000000 --key-prefix='miqjkfdjiu' --key-pattern=P:P --ratio=1:0 --pipeline=100

    After injection, there were 3,809,940,889 (about 3.8 billion) keys. The total data volume, obtained from the GeminiDB Redis API console, was used to calculate the data compression ratio: the compressed storage space was 155 GB, and the compression ratio was 13.8%.

    CAUTION:
    • The current version of memtier_benchmark generated about 3.8 billion of the planned 4 billion data records. The data distribution among categories is not affected.
    • Random character strings constructed by memtier_benchmark contain long runs of consecutive characters, which compress well, so the measured compression ratio was unusually low. In actual production, the data compression ratio is typically 30% to 50%.
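Putting the numbers together: the per-category key counts and average record sizes from the injection steps give a rough estimate of the uncompressed data volume, which can be cross-checked against the reported 155 GB and 13.8% figures. This is a back-of-the-envelope sketch that ignores per-key metadata overhead and assumes the ratio is compressed size divided by raw size:

```python
# Key counts and approximate record sizes taken from the injection steps above.
keys = {"hash": 800_000_000, "string68": 2_800_000_000, "string19": 400_000_000}
record_bytes = {
    "hash": 34 + 10 * (10 + 50),  # 34-char key + 10 field-value pairs (avg 50-char values)
    "string68": 68 + 32,          # 68-char key + 32-char value
    "string19": 19 + 1250,        # 19-char key + avg 1,250-char value
}

raw_tb = sum(keys[k] * record_bytes[k] for k in keys) / 1e12
print(f"estimated raw volume: ~{raw_tb:.2f} TB")    # ~1.29 TB

# Back-calculate the raw volume from the reported compression figures.
raw_from_ratio_gb = 155 / 0.138                     # 155 GB compressed at 13.8%
print(f"raw volume implied by the ratio: ~{raw_from_ratio_gb:.0f} GB")  # ~1123 GB
```

Both estimates land in the same ballpark (roughly 1.1 to 1.3 TB), consistent with the 1 TB service scale stated in Test Metrics.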
  2. Pressure test commands

    Run pressure tests against the GeminiDB Redis cluster from the three ECSs separately. The pressure test tasks are as follows:

    1. On ECS 1, run the HGETALL command for hashes and set a range for keys to allow access to hot data only.
      memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 20 -t 30 --test-time 1200 --random-data --randomize --distinct-client-seed --command='hgetall __key__' --key-maximum=600000000 --key-minimum=200000000 --key-prefix='ewfdjkff43ksdh41fuihikucl' --out-file=./output_filename
    2. On ECS 2, run the GET command for data category 2 and set a range for keys to allow access to hot data only.
      memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 70 -t 30 --test-time 1200 --random-data --randomize --distinct-client-seed --key-maximum=2400000000 --key-minimum=1000000000 --key-prefix='cefkljrithuin123894873h4523bhj4b2jkjh2iu13bnfdhsbnkfhsdjkh' --ratio=0:1 --out-file=./output_filename
    3. On ECS 3, run the GET command for data category 3 and set a range for keys to allow access to hot data only.
      memtier_benchmark -s ${ip} -a ${passwd} -p ${port} -c 10 -t 30 --test-time 1200 --random-data --randomize --distinct-client-seed --key-maximum=300000000 --key-minimum=100000000 --key-prefix='miqjkfdjiu' --ratio=0:1 --out-file=./output_filename

    The number of connections (the product of -c and -t), the number of client instances, and their configuration were adjusted to reach a QPS of 1,600,000 and read request traffic of 1.5 Gbit/s. The service volume was then held constant while the performance of GeminiDB Redis API was evaluated.
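For the parameters above, the total connection count across the three tasks works out as follows (-c is clients per thread, -t is threads; the per-connection QPS is a derived figure that assumes the load spreads evenly):

```python
# (clients per thread, threads) for the three pressure test tasks above
tasks = {"ECS 1 (HGETALL)": (20, 30), "ECS 2 (GET)": (70, 30), "ECS 3 (GET)": (10, 30)}

total_connections = sum(c * t for c, t in tasks.values())
print(total_connections)                      # 3000 connections in total

target_qps = 1_600_000
print(round(target_qps / total_connections))  # ~533 QPS per connection
```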
