HBase Dual-Read Configuration Items
This section describes the configuration items required for the HBase dual-read feature.
HBase Dual-Read Operations
| Configuration Item | Description | Default Value | Level |
|---|---|---|---|
| hbase.dualclient.active.cluster.configuration.path | HBase client configuration directory of the active cluster | None | Mandatory |
| hbase.dualclient.standby.cluster.configuration.path | HBase client configuration directory of the standby cluster | None | Mandatory |
| dual.client.schedule.update.table.delay.second | Interval, in seconds, for updating DR (disaster recovery) table information | 5 | Optional |
| hbase.dualclient.glitchtimeout.ms | Maximum glitch time, in milliseconds, that can be tolerated in the active cluster | 50 | Optional |
| hbase.dualclient.slow.query.timeout.ms | Threshold, in milliseconds, beyond which a query is logged as a slow-query alarm | 180000 | Optional |
| hbase.dualclient.active.cluster.id | Active cluster ID | ACTIVE | Optional |
| hbase.dualclient.standby.cluster.id | Standby cluster ID | STANDBY | Optional |
| hbase.dualclient.active.executor.thread.max | Maximum size of the thread pool for processing requests to the active cluster | 100 | Optional |
| hbase.dualclient.active.executor.thread.core | Core size of the thread pool for processing requests to the active cluster | 100 | Optional |
| hbase.dualclient.active.executor.queue | Queue size of the thread pool for processing requests to the active cluster | 256 | Optional |
| hbase.dualclient.standby.executor.thread.max | Maximum size of the thread pool for processing requests to the standby cluster | 100 | Optional |
| hbase.dualclient.standby.executor.thread.core | Core size of the thread pool for processing requests to the standby cluster | 100 | Optional |
| hbase.dualclient.standby.executor.queue | Queue size of the thread pool for processing requests to the standby cluster | 256 | Optional |
| hbase.dualclient.clear.executor.thread.max | Maximum size of the thread pool for clearing resources | 30 | Optional |
| hbase.dualclient.clear.executor.thread.core | Core size of the thread pool for clearing resources | 30 | Optional |
| hbase.dualclient.clear.executor.queue | Queue size of the thread pool for clearing resources | Integer.MAX_VALUE | Optional |
| dual.client.metrics.enable | Whether to print client metric information | true | Optional |
| dual.client.schedule.metrics.second | Interval, in seconds, for printing client metric information | 300 | Optional |
| dual.client.asynchronous.enable | Whether to asynchronously request the active and standby clusters | false | Optional |
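As a concrete illustration, the items above can be set in the client-side configuration file following the standard hbase-site.xml property format. This is a minimal sketch; the directory paths shown are hypothetical placeholders, and only the two mandatory items must be set:

```xml
<!-- Sketch of a client-side hbase-site.xml fragment for dual-read.
     /opt/client/active-conf and /opt/client/standby-conf are hypothetical
     example paths; point them at the actual HBase client configuration
     directories of the active and standby clusters. -->
<property>
  <name>hbase.dualclient.active.cluster.configuration.path</name>
  <value>/opt/client/active-conf</value>
</property>
<property>
  <name>hbase.dualclient.standby.cluster.configuration.path</name>
  <value>/opt/client/standby-conf</value>
</property>
<!-- Optional: request the active and standby clusters asynchronously
     (the default is false, i.e. synchronous dual-read). -->
<property>
  <name>dual.client.asynchronous.enable</name>
  <value>true</value>
</property>
```

All other items in the table are optional and fall back to the listed default values when omitted.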
Printing Metric Information
| Metric Name | Description | Log level |
|---|---|---|
| total_request_count | Total number of queries in a period | INFO |
| active_success_count | Number of successful queries in the active cluster in a period | INFO |
| active_error_count | Number of failed queries in the active cluster in a period | INFO |
| active_timeout_count | Number of query timeouts in the active cluster in a period | INFO |
| standby_success_count | Number of successful queries in the standby cluster in a period | INFO |
| standby_error_count | Number of failed queries in the standby cluster in a period | INFO |
| Active Thread pool | Periodically printed information about the thread pool for processing requests to the active cluster | DEBUG |
| Standby Thread pool | Periodically printed information about the thread pool for processing requests to the standby cluster | DEBUG |
| Clear Thread pool | Periodically printed information about the thread pool for releasing resources | DEBUG |
| Metric Name | Description | Log level |
|---|---|---|
| averageLatency(ms) | Average latency | INFO |
| minLatency(ms) | Minimum latency | INFO |
| maxLatency(ms) | Maximum latency | INFO |
| 95thPercentileLatency(ms) | Latency at or below which 95% of requests complete | INFO |
| 99thPercentileLatency(ms) | Latency at or below which 99% of requests complete | INFO |
| 99.9PercentileLatency(ms) | Latency at or below which 99.9% of requests complete | INFO |
| 99.99PercentileLatency(ms) | Latency at or below which 99.99% of requests complete | INFO |