
How Do I Troubleshoot High CPU Usage of a GeminiDB Mongo Instance?

Analyzing Current Queries

  1. Connect to a GeminiDB Mongo instance using Mongo Shell.
  2. Run the following command to view operations being performed on the instance:

    db.currentOp()

    Command output:

    {
            "raw" : {
                    "shard0001" : {
                            "inprog" : [
                                    {
                                            "desc" : "StatisticsCollector",
                                            "threadId" : "140323686905600",
                                            "active" : true,
                                            "opid" : 9037713,
                                            "op" : "none",
                                            "ns" : "",
                                            "query" : {
     
                                            },
                                            "numYields" : 0,
                                            "locks" : {
     
                                            },
                                            "waitingForLock" : false,
                                            "lockStats" : {
     
                                            }
                                    },
                                    {
                                            "desc" : "conn2607",
                                            "threadId" : "140323415066368",
                                            "connectionId" : 2607,
                                            "client" : "xxx.xxx.xxx.xxx:xxx",
                                            "appName" : "MongoDB Shell",
                                            "active" : true,
                                            "opid" : 9039588,
                                            "secs_running" : 0,
                                            "microsecs_running" : NumberLong(63),
                                            "op" : "command",
                                            "ns" : "admin.",
                                            "query" : {
                                                    "currentOp" : 1
                                            },
                                            "numYields" : 0,
                                            "locks" : {
     
                                            },
                                            "waitingForLock" : false,
                                            "lockStats" : {
     
                                            }
                                    }
                            ],
                            "ok" : 1
                    },
        ...
    }
    • client indicates the IP address of the client that sends the request.
    • opid indicates the unique operation ID.
    • secs_running indicates the elapsed time for execution, in seconds. If the returned value of this field is too large, check whether there is something wrong with the request.
    • microsecs_running indicates the elapsed time for execution, in microseconds. If the returned value of this field is too large, check whether there is something wrong with the request.
    • op indicates the operation type. The operations can be query, insert, update, delete, or command.
    • ns indicates the target collection.

    For details, see the description of the db.currentOp() command in the official MongoDB documentation.

  3. Based on the command output, check whether any requests are taking a long time to process.

    If the CPU usage is low during routine operation but rises sharply during specific operations, analyze the requests that take an excessively long time to execute.

    If an abnormal query is found, note the opid of the operation and run db.killOp(opid) to terminate it.
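    For example, the following is a minimal sketch of narrowing the output to active operations that have been running for more than 3 seconds (the 3-second threshold is an assumption; adjust it to your workload) and then terminating one of them by its opid:

    db.currentOp({"active": true, "secs_running": {"$gt": 3}})
    db.killOp(9039588)  // Replace 9039588 with the opid of the abnormal operation.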

Analyzing Slow Queries

Slow query profiling is enabled by default for GeminiDB Mongo instances. The system automatically records any query that takes longer than 500 ms to execute in the system.profile collection of the corresponding database. To analyze slow queries, do as follows:

  1. Connect to a GeminiDB Mongo instance using Mongo Shell.
  2. Select a specific database (using the test database as an example):
    use test
  3. Check whether slow queries have been recorded in the system.profile collection.
    show collections
    • If the command output includes system.profile, slow queries have been generated. Go to the next step.

    • If the command output does not include system.profile, no slow queries have been generated, and slow query analysis is not required.
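    Optionally, you can check the current profiling configuration with the standard mongo shell helper. This is a sketch; whether a managed GeminiDB Mongo instance exposes this helper, and the exact values it returns, depend on the instance:

    db.getProfilingStatus()  // Returns the current profiling level and the slow operation threshold (slowms).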

  4. Check the slow query logs in the database.

    db.system.profile.find().pretty()
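    If many records have accumulated, you can narrow the output. The following sketch lists the ten most recent queries that took longer than 500 ms (the millis and ts fields are shown in the sample log in the next step):

    db.system.profile.find({ millis: { $gt: 500 } }).sort({ ts: -1 }).limit(10).pretty()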

  5. Analyze slow query logs to find the cause of the high CPU usage.

    The following is an example of a slow query log. It shows a request that scanned the entire collection (1,561,632 documents) without using any index.

    {
            "op" : "query",
            "ns" : "taiyiDatabase.taiyiTables$10002e",
            "query" : {
                    "find" : "taiyiTables",
                    "filter" : {
                            "filed19" : NumberLong("852605039766")
                    },
                    "shardVersion" : [
                            Timestamp(1, 1048673),
                            ObjectId("5da43185267ad9c374a72fd5")
                    ],
                    "chunkId" : "10002e"
            },
            "keysExamined" : 0,
            "docsExamined" : 1561632,
            "cursorExhausted" : true,
            "numYield" : 12335,
            "locks" : {
                    "Global" : {
                            "acquireCount" : {
                                    "r" : NumberLong(24672)
                            }
                    },
                    "Database" : {
                            "acquireCount" : {
                                    "r" : NumberLong(12336)
                            }
                    },
                    "Collection" : {
                            "acquireCount" : {
                                    "r" : NumberLong(12336)
                            }
                    }
            },
            "nreturned" : 0,
            "responseLength" : 157,
            "protocol" : "op_command",
            "millis" : 44480,
            "planSummary" : "COLLSCAN",
            "execStats" : {
                  "stage" : "SHARDING_FILTER",                                                                                                                                       [3/1955]
                    "nReturned" : 0,
                    "executionTimeMillisEstimate" : 43701,
                    "works" : 1561634,
                    "advanced" : 0,
                    "needTime" : 1561633,
                    "needYield" : 0,
                    "saveState" : 12335,
                    "restoreState" : 12335,
                    "isEOF" : 1,
                    "invalidates" : 0,
                    "chunkSkips" : 0,
                    "inputStage" : {
                            "stage" : "COLLSCAN",
                            "filter" : {
                                    "filed19" : {
                                            "$eq" : NumberLong("852605039766")
                                    }
                            },
                            "nReturned" : 0,
                            "executionTimeMillisEstimate" : 43590,
                            "works" : 1561634,
                            "advanced" : 0,
                            "needTime" : 1561633,
                            "needYield" : 0,
                            "saveState" : 12335,
                            "restoreState" : 12335,
                            "isEOF" : 1,
                            "invalidates" : 0,
                            "direction" : "forward",
                            "docsExamined" : 1561632
                    }
            },
            "ts" : ISODate("2019-10-14T10:49:52.780Z"),
            "client" : "xxx.xxx.xxx.xxx",
            "appName" : "MongoDB Shell",
            "allUsers" : [
                    {
                            "user" : "__system",
                            "db" : "local"
                    }
            ],
           "user" : "__system@local"
    }

    The following stages and fields in a slow query log can indicate the cause of a slow query:

    • COLLSCAN, indicating a full collection (full table) scan

      When a request (such as a query, update, or delete) requires a full collection scan, it occupies a large amount of CPU resources. If COLLSCAN appears in a slow query log, the corresponding request may be consuming excessive CPU resources.

      If such requests are frequent, create indexes on the queried fields (see the example after this list).

    • docsExamined, indicating the number of documents scanned

      You can check the value of docsExamined to see how many documents were scanned by a query. A larger value indicates higher CPU usage.

    • IXSCAN and keysExamined, indicating index scans
      • An excessive number of indexes can affect write and update performance.
      • If your application is write-heavy, creating indexes may increase write latency.

      You can check the value of keysExamined to see how many index entries were scanned by a query. A larger value indicates higher CPU usage.

      If an index has low selectivity or matches a large number of results, using it does not reduce CPU usage much, and the query is still slow.

      Example: In a collection, field a has only two distinct values (1 and 2), while field b has many distinct values.

      The following compares possible indexes for the query {a: 1, b: 2}.

      db.collection.createIndex({a: 1}): The query is slow because most documents share the same value of field a, so the index has low selectivity.

      db.collection.createIndex({a: 1, b: 1}): The query is slow because the leading field a has low selectivity.

      db.collection.createIndex({b: 1}): The query is fast because field b has few duplicate values, so the index is highly selective.

      db.collection.createIndex({b: 1, a: 1}): The query is fast because the leading field b is highly selective.

      For the differences between {a: 1} and {b: 1, a: 1}, see the official MongoDB documentation.

    • SORT and hasSortStage, which may indicate sorting of a large amount of data

      If the hasSortStage field in a system.profile record is true, the query involves sorting. If the sort cannot be satisfied by an index, the query results are sorted in memory, which is a CPU-intensive operation. In this case, create indexes on the fields that are frequently sorted.

      If a system.profile record contains SORT, you can use indexes to improve sorting performance.

      Other CPU-intensive operations, such as index creation and aggregation (combinations of traversal, query, update, and sort), can be analyzed in the same way. For more information about profiling, see the official MongoDB documentation.
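    For example, the sample slow query log above shows a COLLSCAN on taiyiDatabase.taiyiTables because the filter on filed19 cannot use an index. The following is a minimal sketch of creating the missing index and verifying the new query plan (the database, collection, and field names are taken from the sample log; adapt them to your own workload):

    use taiyiDatabase
    db.taiyiTables.createIndex({ filed19: 1 })
    db.taiyiTables.find({ filed19: NumberLong("852605039766") }).explain("executionStats")  // The winning plan should now use IXSCAN instead of COLLSCAN.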

Analyzing Service Capabilities

After you analyze and optimize the current requests and slow queries so that all requests use appropriate indexes, the CPU usage usually becomes stable. If the CPU usage remains high after the preceding troubleshooting, the instance may have reached a performance bottleneck and cannot meet your workload requirements. In this case, do as follows:

  1. View monitoring information to analyze instance resource usage. For details, see Viewing Metrics.
  2. Change the instance class or add shard nodes. For details, see the documents corresponding to your instance type.