Updated on 2022-09-21 GMT+08:00

Test Method

This section describes the performance test of DDS Community Edition 3.4 and 4.0, including the test environment, procedures, and results.

The following uses cluster and replica set instances as examples.

Test Environment

  • AZ: AZ1
  • Elastic Cloud Server (ECS): s3.2xlarge.2 flavor with 8 vCPUs, 16 GB of memory, and a CentOS 7.5 64-bit image.
  • Each cluster instance includes two shard nodes.
  • Specifications of the tested cluster and replica set instances: All specifications supported by the cluster and replica set instances are tested. For details, see Table 1 and Table 2.
    Table 1 Cluster instance class

    Cluster ID   DB Version   Shard Class       Storage Space
    Cluster 1    3.4          1 vCPU | 4 GB     600 GB
    Cluster 2    3.4          2 vCPUs | 4 GB    600 GB
    Cluster 3    3.4          2 vCPUs | 8 GB    600 GB
    Cluster 4    3.4          4 vCPUs | 8 GB    600 GB
    Cluster 5    3.4          4 vCPUs | 16 GB   600 GB
    Cluster 6    3.4          8 vCPUs | 16 GB   600 GB
    Cluster 7    3.4          8 vCPUs | 32 GB   600 GB
    Cluster 8    4.0          1 vCPU | 4 GB     600 GB
    Cluster 9    4.0          2 vCPUs | 4 GB    600 GB
    Cluster 10   4.0          2 vCPUs | 8 GB    600 GB
    Cluster 11   4.0          4 vCPUs | 8 GB    600 GB
    Cluster 12   4.0          4 vCPUs | 16 GB   600 GB
    Cluster 13   4.0          8 vCPUs | 16 GB   600 GB
    Cluster 14   4.0          8 vCPUs | 32 GB   600 GB

    Table 2 Replica set instance class

    Replica Set ID   DB Version   Node Class        Storage Space
    Replica set 1    3.4          1 vCPU | 4 GB     600 GB
    Replica set 2    3.4          2 vCPUs | 4 GB    600 GB
    Replica set 3    3.4          2 vCPUs | 8 GB    600 GB
    Replica set 4    3.4          4 vCPUs | 8 GB    600 GB
    Replica set 5    3.4          4 vCPUs | 16 GB   600 GB
    Replica set 6    3.4          8 vCPUs | 16 GB   600 GB
    Replica set 7    3.4          8 vCPUs | 32 GB   600 GB
    Replica set 8    4.0          1 vCPU | 4 GB     600 GB
    Replica set 9    4.0          2 vCPUs | 4 GB    600 GB
    Replica set 10   4.0          2 vCPUs | 8 GB    600 GB
    Replica set 11   4.0          4 vCPUs | 8 GB    600 GB
    Replica set 12   4.0          4 vCPUs | 16 GB   600 GB
    Replica set 13   4.0          8 vCPUs | 16 GB   600 GB
    Replica set 14   4.0          8 vCPUs | 32 GB   600 GB

Test Tool

YCSB is an open-source database performance testing tool. YCSB 0.12.0 is used in this test.

For details on how to use this tool, see YCSB.

Test Metrics

Operations per Second (OPS): the number of operations that a database executes per second
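YCSB reports the OPS metric in its run output as the overall throughput line. The sketch below shows how that figure can be pulled out of a result file; the sample values are illustrative, not measurements from this test:

```shell
# Sketch: extract the OPS metric from a YCSB run result file.
# The values below are illustrative; YCSB writes the overall throughput
# in the form "[OVERALL], Throughput(ops/sec), <value>".
cat > sample_run.result <<'EOF'
[OVERALL], RunTime(ms), 1800000
[OVERALL], Throughput(ops/sec), 25000.5
EOF

# OPS is the overall throughput reported by YCSB.
OPS=$(awk -F', ' '/^\[OVERALL\], Throughput/ {print $3}' sample_run.result)
echo "OPS = $OPS"   # prints: OPS = 25000.5
```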

Test Procedure

  1. Configure the workload configuration file.

    Set the values of readproportion, insertproportion, and updateproportion in the workload file by referring to Table 3.

    Set the value of recordcount in the workload file based on the preset data volume listed in Table 4.

    Example: Configure the workload_s1 file.

    • recordcount = 100000000
    • operationcount = 100000000
    • insertproportion = 1
    • readproportion = 0
    • updateproportion = 0
    • scanproportion = 0

    The values of recordcount and operationcount are the same.

    The sum of the values of insertproportion, readproportion, updateproportion, and scanproportion is 1.

  2. Use workload_s1 as an example. Run the following command to prepare test data:

    ./bin/ycsb load mongodb -s -P workloads/workload_s1 -p mongodb.url="mongodb://${userName}:${password}@${mongosIP}:${port}/ycsb?authSource=admin" -threads ${threadNum} 1>workload_s1_load.result 2> workload_s1_load.log

  3. Use workload_s1 as an example. Run the following command to test the performance:

    ./bin/ycsb run mongodb -s -P workloads/workload_s1 -p mongodb.url="mongodb://${userName}:${password}@${mongosIP}:${port}/ycsb?authSource=admin" -threads ${threadNum} -p maxexecutiontime=1800 1>workload_s1_run.result 2> workload_s1_run.log

  • ${mongosIP} indicates the private IP address of the mongos node in the DDS cluster instance.
  • ${userName} and ${port} indicate the username and port used to connect to the DDS instance.
  • ${password} indicates the administrator password of the DDS instance.
  • ${threadNum} indicates the number of concurrent threads used in the test. In this test, 128 concurrent threads are used.
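Putting steps 2 and 3 together, the load and run phases can be sketched as a small wrapper script. All variable values below are placeholders (assumptions, not values from this test), and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Sketch: wrapper around the YCSB load and run phases for one workload.
# Every value below is a placeholder; substitute your own instance details.
USER_NAME="rwuser"        # DDS username (placeholder)
PASSWORD="your-password"  # DDS administrator password (placeholder)
MONGOS_IP="192.168.0.10"  # private IP of the mongos node (placeholder)
PORT="8635"               # instance port (placeholder)
THREAD_NUM=128            # concurrency used in this test
WORKLOAD="workloads/workload_s1"

URL="mongodb://${USER_NAME}:${PASSWORD}@${MONGOS_IP}:${PORT}/ycsb?authSource=admin"

# Phase 1: load the preset data volume (recordcount documents).
echo ./bin/ycsb load mongodb -s -P "$WORKLOAD" \
    -p mongodb.url="$URL" -threads "$THREAD_NUM"

# Phase 2: run the mixed workload for at most 1800 seconds.
echo ./bin/ycsb run mongodb -s -P "$WORKLOAD" \
    -p mongodb.url="$URL" -threads "$THREAD_NUM" -p maxexecutiontime=1800
```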

Testing Models

  • Workload model
    Table 3 Service model

    Service Model No.   Service Model
    S1                  100% insert
    S2                  90% update, 10% read
    S3                  65% read, 25% insert, 10% update
    S4                  90% read, 5% insert, 5% update
    S5                  50% update, 50% read
    S6                  100% read

  • Number of concurrent threads: 128
  • Document model

    Use the default configuration of YCSB: The size of each document is 1 KB, and the default index is _id.

  • Data volume to be prepared

    In this test, two data volumes (low-level and high-level) are prepared for each cluster instance. For details, see Table 4.

    Table 4 Data volume to be prepared

    Specifications    Low-Level Data Volume         High-Level Data Volume
    1 vCPU | 4 GB     10 GB (10,000,000 records)    100 GB (100,000,000 records)
    2 vCPUs | 4 GB    10 GB (10,000,000 records)    100 GB (100,000,000 records)
    2 vCPUs | 8 GB    10 GB (10,000,000 records)    100 GB (100,000,000 records)
    4 vCPUs | 8 GB    10 GB (10,000,000 records)    100 GB (100,000,000 records)
    4 vCPUs | 16 GB   10 GB (10,000,000 records)    100 GB (100,000,000 records)
    8 vCPUs | 16 GB   10 GB (10,000,000 records)    100 GB (100,000,000 records)
    8 vCPUs | 32 GB   10 GB (10,000,000 records)    100 GB (100,000,000 records)

  • Data consistency model

    Weak consistency: The default write concern setting {w: 1, j: false} is used. An acknowledgment is returned once a single node has applied the write, without waiting for the journal; data is persisted to disk asynchronously.
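For reference, the six service models above can be expanded into YCSB workload files mechanically. The sketch below writes one file per model using the high-level data volume from Table 4, with operationcount equal to recordcount as in step 1 of the test procedure; the gen helper and output paths are illustrative, not part of the original procedure:

```shell
#!/bin/sh
# Sketch: generate the workload files for service models S1-S6 (Table 3)
# at the high-level data volume (Table 4). The gen helper and file layout
# are illustrative assumptions.
mkdir -p workloads

gen() {  # gen <name> <read> <insert> <update> <recordcount>
  cat > "workloads/workload_$1" <<EOF
recordcount=$5
operationcount=$5
readproportion=$2
insertproportion=$3
updateproportion=$4
scanproportion=0
EOF
}

#   name read insert update records
gen s1  0    1    0    100000000   # S1: 100% insert
gen s2  0.1  0    0.9  100000000   # S2: 90% update, 10% read
gen s3  0.65 0.25 0.1  100000000   # S3: 65% read, 25% insert, 10% update
gen s4  0.9  0.05 0.05 100000000   # S4: 90% read, 5% insert, 5% update
gen s5  0.5  0    0.5  100000000   # S5: 50% update, 50% read
gen s6  1    0    0    100000000   # S6: 100% read
```

In every file, the proportions sum to 1 and recordcount matches operationcount, as required by step 1 of the test procedure.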