Highlights
Cloud-native GeminiDB is a key-value (KV) database service featuring high stability, cost-effectiveness, elasticity, and easy O&M. It is fully compatible with the Redis protocol and supports advanced functions such as point-in-time recovery (PITR) for game rollbacks and FastLoad for feature data import, and it allows you to set expiration times for individual hash fields and blacklists for high-risk keys.
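Because GeminiDB is compatible with the Redis protocol, standard Redis clients can connect to it directly. The minimal sketch below uses the open-source redis-py client to set a TTL on a single hash field; the instance address and password are placeholders, and the HEXPIRE/HTTL commands follow open-source Redis 7.4 syntax, which is assumed here to be how GeminiDB exposes per-field expiration (check the command reference for the exact syntax).

```python
# Minimal sketch: connect over the Redis protocol and set a TTL on a single
# hash field. The address and password are placeholders. HEXPIRE/HTTL follow
# open-source Redis 7.4 syntax and are assumed to be the commands GeminiDB
# exposes for per-field expiration.
import redis

client = redis.Redis(host="<geminidb-instance-address>", port=6379,
                     password="<password>", decode_responses=True)

# Keep a player's session attributes in one hash key.
client.hset("player:1001", mapping={"token": "abc123", "zone": "eu-west"})

# Expire only the "token" field after 30 minutes; the rest of the hash stays.
client.execute_command("HEXPIRE", "player:1001", 1800, "FIELDS", 1, "token")

# Remaining TTL of that field, in seconds.
print(client.execute_command("HTTL", "player:1001", "FIELDS", 1, "token"))
```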
GeminiDB is widely used in scenarios such as game friends list and player rankings, ad placement, personalized recommendations, e-commerce inventory, IoV data storage, and ERP systems. For details, see Application Scenarios.
GeminiDB has the following advantages over open-source on-premises KV databases (such as Redis and Pika):
| Dimension | Item | Open-Source On-Premises KV Database | GeminiDB |
| --- | --- | --- | --- |
| Stability | Performance jitter caused by forks | Service stability is severely affected by fork issues. When RDB backups are generated, the append-only file (AOF) is rewritten, or full data is synchronized, a fork is called, which increases latency and can cause out-of-memory (OOM) issues. | Service stability is improved because fork issues are eliminated. There is no performance jitter during backup or synchronization. |
| | Long latency in big key scenarios | The single-thread architecture slows down subsequent requests: a big key request delays all requests behind it and may trigger flow control or OOM issues on shards. | The multi-thread architecture improves concurrency and reduces the impact of big keys on subsequent reads and writes of other keys. |
| | Bandwidth limiting during peak hours | Flow control is easily triggered, affecting services. Open-source on-premises databases typically use a hybrid deployment that strictly limits bandwidth, so flow control is easily triggered on smaller instances. | Up to 10 Gbit/s is supported, allowing GeminiDB to handle service surges. With an independent container deployment, GeminiDB can use a load balancer to provide 10 Gbit/s of bandwidth. |
| | Impact of scale-out on services | Scale-out can take minutes or even hours, greatly affecting services, because adding nodes involves data migration. | Smooth scale-out with minimal impact on services. Scale-out completes in seconds without interrupting services; adding nodes does not require any data migration, and there is only a few seconds of jitter. |
| | HA scenarios such as node breakdowns and primary/secondary switchovers | Long switchover time: RTO > 30s | Second-level jitter: RTO < 10s |
| Performance | QPS | QPS per shard: 80,000 to 100,000. In a single-thread architecture, the QPS of a single shard does not increase after CPUs are added. | QPS per shard: 100,000 to 300,000. In a multi-thread architecture, QPS increases linearly as CPUs are added. |
| | Latency | Low latency | Low latency. In most service scenarios, the average latency is 1 ms and the p99 latency is about 2 ms. |
| O&M capabilities | Audit logs of risky operations | Not supported | Supported. High-risk commands can be traced. |
| | Circuit breakers triggered by abnormal requests to keys | Not supported | Supported. Key blacklists and one-click circuit breakers for high-risk operations keep the entire instance from being affected. |
| | Slow query logs | Supported | Supported, with more detail in the logs. |
| | Big key diagnosis | Not supported | Online diagnosis of big keys by category is supported. |
| | Hot key diagnosis | Supported | Online diagnosis of hot keys is supported. |
| Cost | Utilization cost | In-memory storage is expensive. | Costs are 30% lower than an open-source on-premises database with the same specifications. Compute and storage resources can be purchased independently, eliminating the resource waste caused by coupled storage and compute. |
| | Data compression | Not supported | A 4:1 compression ratio lets a database with the same specifications store more data. |
| | Scale-out | Coupled storage and compute increases costs exponentially. | Decoupled storage and compute supports independent scaling of compute and storage resources. |
| Availability | / | If any pair of primary and standby nodes is faulty, the entire cluster becomes unavailable. | GeminiDB provides N-1 fault tolerance. |
| Data reliability | / | Weak. Thousands or tens of thousands of records may be lost when nodes restart or the network fluctuates, and weak data consistency can cause dirty reads. | High. GeminiDB provides three-copy storage, ensures strong data consistency, and avoids dirty reads, so it can serve as the primary database and replace the traditional DB+Cache solution (see the sketch after this table). |
| Advanced features | Autoscaling | Not supported | Supported |
| | Setting the expiration time for fields in hashes | Not supported | Supported. Service design is simpler and concurrency is higher. |
| | Fast data loading | Not supported | FastLoad imports feature data faster, reducing the impact on online services. |
| | Point-in-time recovery (PITR) | Not supported | Supported. PITR rollbacks and quick data restoration to the original instance make GeminiDB a great fit for gaming applications. |
| | DR instances | Not supported | Intra-region and cross-region DR instances can be created. |
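To illustrate the "primary database instead of DB+Cache" point in the Data reliability row, the sketch below contrasts a typical cache-aside read path with a direct read against a persistent Redis-compatible store. It assumes nothing about GeminiDB beyond Redis protocol compatibility and persistent, strongly consistent storage; load_from_sql() and the key names are hypothetical placeholders.

```python
# Sketch of the read-path difference. Both functions talk to a Redis-protocol
# endpoint; load_from_sql() is a hypothetical stand-in for a relational query
# used by the traditional DB+Cache design.
import json
import redis

kv = redis.Redis(host="<instance-address>", port=6379, decode_responses=True)

def load_from_sql(user_id: str) -> dict:
    # Hypothetical placeholder for a query against the system of record.
    return {"id": user_id, "name": "example"}

def get_profile_cache_aside(user_id: str) -> dict:
    """Traditional design: Redis is a volatile cache in front of a database,
    so every miss falls back to the database and repopulates the cache."""
    cached = kv.get(f"profile:{user_id}")
    if cached is not None:
        return json.loads(cached)
    profile = load_from_sql(user_id)
    kv.set(f"profile:{user_id}", json.dumps(profile), ex=300)
    return profile

def get_profile_direct(user_id: str) -> dict:
    """With a persistent, strongly consistent KV store, the store itself is
    the system of record, so there is no miss path or cache invalidation."""
    raw = kv.get(f"profile:{user_id}")
    return json.loads(raw) if raw is not None else {}
```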