
High CPU Usage

If CPU usage reaches 80%, the instance has a CPU bottleneck. Data reads and writes slow down, affecting your services.

The following describes how to analyze the operations that are currently running and the recorded slow queries. After analysis and optimization, queries run faster and use indexes more efficiently.

Analyzing Current Queries

  1. Connect to an instance using Mongo Shell.
  2. Run the following command to view the operations being performed on the database:

    db.currentOp()

    Command output:

    {
            "raw" : {
                    "shard0001" : {
                            "inprog" : [
                                    {
                                            "desc" : "StatisticsCollector",
                                            "threadId" : "140323686905600",
                                            "active" : true,
                                            "opid" : 9037713,
                                            "op" : "none",
                                            "ns" : "",
                                            "query" : {
     
                                            },
                                            "numYields" : 0,
                                            "locks" : {
     
                                            },
                                            "waitingForLock" : false,
                                            "lockStats" : {
     
                                            }
                                    },
                                    {
                                            "desc" : "conn2607",
                                            "threadId" : "140323415066368",
                                            "connectionId" : 2607,
                                            "client" : "172.16.36.87:37804",
                                            "appName" : "MongoDB Shell",
                                            "active" : true,
                                            "opid" : 9039588,
                                            "secs_running" : 0,
                                            "microsecs_running" : NumberLong(63),
                                            "op" : "command",
                                            "ns" : "admin.",
                                            "query" : {
                                                    "currentOp" : 1
                                             },
                                            "numYields" : 0,
                                            "locks" : {
     
                                            },
                                            "waitingForLock" : false,
                                            "lockStats" : {
     
                                            }
                                    }
                            ],
                            "ok" : 1
                    },
        ...
    }
    • client: IP address of the client that sent the request
    • opid: unique ID of the operation
    • secs_running: how long the operation has been running, in seconds. If the value is too large, check whether the request is reasonable.
    • microsecs_running: how long the operation has been running, in microseconds. If the value is too large, check whether the request is reasonable.
    • op: operation type, such as query, insert, update, delete, or command
    • ns: target collection
    • For details, see the description of the db.currentOp() command in the official documentation.
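
    To narrow the output to long-running operations, you can pass a filter document to db.currentOp(). The following is a minimal sketch; the 5-second threshold is an arbitrary value chosen for illustration.

    // List only active operations that have been running for more than 5 seconds.
    db.currentOp({ "active": true, "secs_running": { "$gt": 5 } })
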
  3. Based on the command output, check whether there are requests that take a long time to process.

    If CPU usage is normally low during routine service processing but spikes only when certain operations run, analyze the requests that take a long time to execute.

    If an abnormal query is found, note the opid of the operation and run db.killOp(opid) to terminate it, as shown below.
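
    The following is a minimal sketch that uses opid 9039588 from the example output above; substitute the opid you actually observed.

    // Terminate the long-running operation identified in db.currentOp().
    db.killOp(9039588)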

Analyzing Slow Queries

Slow query profiling is enabled for DDS by default. The system automatically records any operation that takes longer than 500 ms to the system.profile collection of the corresponding database. To analyze these records, perform the following steps:

  1. Connect to an instance using Mongo Shell.

    For connection methods, see the instructions for accessing an instance over a public network or over a private network.

  2. Select a specific database (using the test database as an example):

    use test
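
    Optionally, confirm the profiler configuration before checking system.profile. The following is a minimal sketch; db.getProfilingStatus() is a standard mongo shell helper, and the slowms value it reports should correspond to the 500 ms threshold described above.

    // Show the current profiling level and slow-operation threshold (slowms).
    db.getProfilingStatus()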

  3. Check whether slow queries have been recorded in the system.profile collection.

    show collections;

    • If the command output includes system.profile, slow queries have been recorded. Go to the next step.
      mongos> show collections
      system.profile
      test
    • If the command output does not contain system.profile, no slow queries have been recorded, and slow query analysis is not required.
      mongos> show collections
      test
  4. Check the slow query logs in the database.

    db.system.profile.find().pretty()
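
    To focus on the slowest requests, you can filter and sort the profile collection. The following is a minimal sketch; millis and ts are standard system.profile fields, and the 500 ms threshold matches the default described above.

    // Show the 10 most recent operations that took longer than 500 ms.
    db.system.profile.find({ millis: { $gt: 500 } }).sort({ ts: -1 }).limit(10).pretty()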

  5. Analyze slow query logs to find the cause of the high CPU usage.

    The following is an example of a slow query log. It shows a request that scanned the entire collection (1,561,632 documents) without using an index.

    {
            "op" : "query",
            "ns" : "taiyiDatabase.taiyiTables$10002e",
            "query" : {
                    "find" : "taiyiTables",
                    "filter" : {
                            "filed19" : NumberLong("852605039766")
                    },
                    "shardVersion" : [
                            Timestamp(1, 1048673),
                            ObjectId("5da43185267ad9c374a72fd5")
                    ],
                    "chunkId" : "10002e"
            },
            "keysExamined" : 0,
            "docsExamined" : 1561632,
            "cursorExhausted" : true,
            "numYield" : 12335,
            "locks" : {
                    "Global" : {
                            "acquireCount" : {
                                    "r" : NumberLong(24672)
                            }
                    },
                    "Database" : {
                            "acquireCount" : {
                                    "r" : NumberLong(12336)
                            }
                    },
                    "Collection" : {
                            "acquireCount" : {
                                    "r" : NumberLong(12336)
                            }
                    }
            },
            "nreturned" : 0,
            "responseLength" : 157,
            "protocol" : "op_command",
            "millis" : 44480,
            "planSummary" : "COLLSCAN",
            "execStats" : {
                  "stage" : "SHARDING_FILTER",                                                                                                                                       [3/1955]
                    "nReturned" : 0,
                    "executionTimeMillisEstimate" : 43701,
                    "works" : 1561634,
                    "advanced" : 0,
                    "needTime" : 1561633,
                    "needYield" : 0,
                    "saveState" : 12335,
                    "restoreState" : 12335,
                    "isEOF" : 1,
                    "invalidates" : 0,
                    "chunkSkips" : 0,
                    "inputStage" : {
                            "stage" : "COLLSCAN",
                            "filter" : {
                                    "filed19" : {
                                            "$eq" : NumberLong("852605039766")
                                    }
                            },
                            "nReturned" : 0,
                            "executionTimeMillisEstimate" : 43590,
                            "works" : 1561634,
                            "advanced" : 0,
                            "needTime" : 1561633,
                            "needYield" : 0,
                            "saveState" : 12335,
                            "restoreState" : 12335,
                            "isEOF" : 1,
                            "invalidates" : 0,
                            "direction" : "forward",
                            "docsExamined" : 1561632
                    }
            },
            "ts" : ISODate("2019-10-14T10:49:52.780Z"),
            "client" : "172.16.36.87",
            "appName" : "MongoDB Shell",
            "allUsers" : [
                    {
                            "user" : "__system",
                            "db" : "local"
                    }
            ],
           "user" : "__system@local"
    }

    The following stages in the execution plan are common causes of slow queries:

    • COLLSCAN indicates a full collection (full table) scan.

      When a request (such as a query, update, or delete) requires a full collection scan, it occupies a large amount of CPU resources. If COLLSCAN appears in the slow query log, the request performed a full collection scan, which consumes significant CPU.

      If such requests are frequent, create indexes on the fields being queried, as in the sketch below.
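
      The following is a minimal sketch based on the example slow query log above, in which the filter field is filed19 on the taiyiTables collection; substitute your own collection and field names.

      // Create an index on the filtered field so the query no longer requires a COLLSCAN.
      db.taiyiTables.createIndex({ filed19: 1 })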

    • docsExamined indicates the number of documents scanned.

      Check the value of docsExamined to see how many documents a request scanned. A larger value means more documents were read and more CPU was used.

    • IXSCAN and keysExamined indicate index scans.
      • An excessive number of indexes can affect write and update performance.
      • If your application is write-heavy, creating indexes may increase write latency.

      Check the value of keysExamined to see how many index keys were scanned in a query. A larger value indicates higher CPU usage.

      If an index is poorly designed or matches a large number of results, CPU usage does not drop much and execution remains slow.

      Example: In the following collection, the a field has only a few distinct values (1 and 2), while the b field has many distinct values.

      { a: 1, b: 1 }
      { a: 1, b: 2 }
      { a: 1, b: 3 }
      ......
      { a: 1, b: 100000} 
      { a: 2, b: 1 }
      { a: 2, b: 2 }
      { a: 2, b: 3 }
      ......
      { a: 2, b: 100000 }

      The following compares possible indexes for the query {a: 1, b: 2}.

      db.collection.createIndex({a: 1}): The query is not effective because the a field has too many duplicate values.
      db.collection.createIndex({a: 1, b: 1}): The query is not effective because the leading a field has too many duplicate values.
      db.collection.createIndex({b: 1}): The query is effective because the b field has few duplicate values.
      db.collection.createIndex({b: 1, a: 1}): The query is effective because the leading b field has few duplicate values.

      For the differences between the {a: 1} and {b: 1, a: 1} indexes, see the official documentation. You can verify which index a query actually uses with explain(), as in the sketch below.
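
      The following is a minimal sketch; explain("executionStats") is a standard mongo shell method, and the collection name is a placeholder.

      // Verify which index the query uses and how many keys and documents it examines.
      db.collection.find({ a: 1, b: 2 }).explain("executionStats")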

    • SORT and hasSortStage may indicate sorting of a large amount of data.

      If hasSortStage is true in a system.profile record, the query involves sorting. If the sort cannot be satisfied by an index, the results are sorted in memory, which is a CPU-intensive operation. In this case, create indexes on the fields that are frequently sorted.

      If a system.profile record contains a SORT stage, you can use an index to speed up sorting, as in the sketch below.
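
      The following is a minimal sketch with hypothetical field names (status and createTime); it shows the usual pattern of creating a compound index whose key order matches the query's filter and sort so that the in-memory SORT stage can be avoided.

      // The index supports filtering on status and sorting by createTime without an in-memory sort.
      db.collection.createIndex({ status: 1, createTime: -1 })
      db.collection.find({ status: 1 }).sort({ createTime: -1 })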

    Other CPU-intensive operations, such as index creation and aggregation (a combination of traversal, query, update, and sorting), can be analyzed in the same way. For more information about profiling, see the official documentation.
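
    An aggregation's execution plan can be inspected in the same way as a query's. The following is a minimal sketch; explain() and aggregate() are standard mongo shell methods, and the collection, fields, and pipeline are placeholders.

    // Check whether the $match stage of an aggregation can use an index.
    db.collection.explain("executionStats").aggregate([
        { $match: { status: 1 } },
        { $group: { _id: "$status", total: { $sum: 1 } } }
    ])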

Analyzing Service Capabilities

After you analyze and optimize the requests that are currently running and the slow requests, all requests should use proper indexes and the CPU usage should become stable. If the CPU usage remains high after analysis and troubleshooting, the instance may have reached its performance bottleneck and can no longer meet service requirements. In this case, you can do the following:
  1. View monitoring information to analyze instance resource usage. For details, see Viewing Monitoring Metrics.
  2. Change the DDS instance class or add shard nodes.