PERF04-03 Performance Test Procedure
- Risk level
High
- Key strategies
1. Determine acceptance criteria
Bring together users, developers, maintenance administrators, and other roles to define target baselines for service, data, and resource metrics based on the capabilities of the production environment. The goal is for the system to handle the maximum expected data volume with the minimum resources while maintaining a good user experience. Also define SLAs for each scenario.
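The acceptance criteria can also be captured in a machine-readable form so that later test runs can be evaluated automatically. The following is a minimal sketch in Python; the scenario names and threshold values are hypothetical placeholders, not prescribed targets.
```python
# Hypothetical acceptance criteria (SLAs) per test scenario; agree on the
# actual scenario names and values with all roles involved.
SLAS = {
    "login": {
        "max_avg_response_ms": 200.0,   # average response time ceiling
        "max_p99_response_ms": 800.0,   # 99th-percentile response time ceiling
        "min_throughput_rps": 150.0,    # minimum sustained requests per second
        "max_error_rate": 0.001,        # allowed fraction of failed requests
        "max_cpu_utilization": 0.70,    # resource ceiling on application nodes
    },
    "search": {
        "max_avg_response_ms": 500.0,
        "max_p99_response_ms": 2000.0,
        "min_throughput_rps": 80.0,
        "max_error_rate": 0.005,
        "max_cpu_utilization": 0.75,
    },
}
```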
2. Create test plan
Design specific scenarios and conditions for performance testing under system load. The performance test plan must be comprehensive, covering all networking scenarios and following a design template. Based on performance testing requirements, test models, and network topology diagrams, it outlines key test points and breaks down test scenarios and steps. The test plan simulates real-world user behavior and system loads, and gives performance testers the means to evaluate service performance under various conditions, including the test environments, tools, and monitoring metrics to be used.
It enables replication of different load levels, such as concurrent user access, peak traffic periods, or specific scenarios. By testing these load levels, teams can identify performance bottlenecks and optimize resource allocation. The final deliverable is an executable performance test plan.
- Define user behaviors: Identify test scenarios by analyzing frequent user interactions (for example, login, search, batch operations, import/export, form submissions, and feature access) to replicate real-world behavior and load patterns. Break down each scenario into specific interaction steps, covering page flows, transactions, and mixed system load conditions.
- Establish data models: Determine required test background data. Generate realistic datasets with varied scenarios, user profiles, and data volumes to ensure comprehensive performance evaluation.
- Design test scripts: Create executable scripts containing system operations, HTTP requests, or API/UI interactions. Use performance testing tools with parameterization and dynamic data processing (see the sketch after this list). Debug scripts to resolve errors, missing or incorrect actions, or data issues, ensuring reliable test execution.
- Configure test variables and parameters: Set variables and parameters to simulate authentic scenarios, including multi-user logins, data queries, and randomized actions reflecting diverse user behaviors and workload responses.
- Automate scripts: Automate scripts and continuously optimize them based on feedback, test results, or change requirements. Improve script logic, parameterization, and error handling, or add additional validation checkpoints. Configure automated scripts to enable iterative testing without manual script modifications.
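As one possible way to implement the scripted, parameterized scenarios described in this list, the sketch below uses the open-source Locust load-testing tool. The endpoints, credentials file, task weights, and validation checks are illustrative assumptions, not part of this procedure.
```python
import csv
import random

from locust import HttpUser, task, between


def load_accounts(path="test_accounts.csv"):
    """Read parameterized test accounts (hypothetical CSV of user/password pairs)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


ACCOUNTS = load_accounts()


class TypicalUser(HttpUser):
    # Think time between actions to mimic real user pacing.
    wait_time = between(1, 5)

    def on_start(self):
        # Multi-user login with randomized credentials (parameterization).
        account = random.choice(ACCOUNTS)
        self.client.post("/api/login", json={
            "username": account["username"],
            "password": account["password"],
        })

    @task(5)  # weight: searches occur far more often than exports
    def search(self):
        keyword = random.choice(["invoice", "order", "report"])
        with self.client.get(f"/api/search?q={keyword}",
                             name="/api/search", catch_response=True) as resp:
            # Validation checkpoint: fail the sample if the payload is wrong.
            if resp.status_code != 200 or "results" not in resp.text:
                resp.failure("unexpected search response")

    @task(1)
    def export_report(self):
        self.client.post("/api/export", json={"format": "csv"})
```
A script like this can then be run against the test environment (for example, locust -f locustfile.py --host https://<test-environment>) and ramped up to the designed load levels without further manual script changes.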
3. Configure test environment
Set up a test environment with the same software versions, system networking, and hardware specifications as the production environment, and isolate it so that test results are not affected by other workloads or external factors.
Key considerations include:
- System networking and architecture: active/standby, cluster, or distributed networking and architecture analysis to identify dependency services.
- Hardware specifications: required number of servers and their specifications, including CPU frequency/cores, memory capacity, disk type and capacity, storage pool type and capacity, and network bandwidth.
- Software environment: software versions and configurations, such as operating system version, service version, database version, and other performance-related configurations.
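To help verify that the test environment matches production on these points, one simple option (an assumption, not a mandated tool) is to record the key specifications of each server and compare them with the production inventory. A minimal Python sketch:
```python
import json
import os
import platform
import shutil
import socket


def collect_specs() -> dict:
    """Collect a minimal hardware/software snapshot of this host (Linux-oriented)."""
    total_mem_bytes = None
    try:
        # Physical memory via POSIX sysconf; not available on all platforms.
        total_mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    except (AttributeError, ValueError, OSError):
        pass
    disk = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "cpu_cores": os.cpu_count(),
        "memory_bytes": total_mem_bytes,
        "root_disk_bytes": disk.total,
        # Service, database, and other software versions would be added
        # from your deployment records or package manager queries.
    }


if __name__ == "__main__":
    # Compare this output against the production inventory before testing.
    print(json.dumps(collect_specs(), indent=2))
```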
4. Complete test design
Design the system loads, background data volumes, and data models described above for the tests.
5. Perform the tests
Execute the performance tests using the selected tools. The process involves monitoring and recording key performance metrics, observing system behavior, and checking for performance issues. Track and collect metrics, including response time, throughput, and CPU/memory utilization.
Apply the defined test plan to subject the workloads to the expected load levels. Conduct testing under varying load conditions, including normal, peak, and stress levels, to analyze workload behavior across different scenarios.
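Most load-testing tools report these figures directly; where only raw samples are available, the sketch below shows one way to summarize them into the metrics mentioned above (response time, throughput, and error rate). The sample structure is an assumption about how your tooling exports results.
```python
import statistics
from dataclasses import dataclass


@dataclass
class Sample:
    """One recorded request: latency in milliseconds and whether it succeeded."""
    latency_ms: float
    ok: bool


def summarize(samples: list[Sample], duration_s: float) -> dict:
    """Aggregate raw samples into headline performance metrics."""
    latencies = sorted(s.latency_ms for s in samples)
    errors = sum(1 for s in samples if not s.ok)
    p99_index = max(0, int(len(latencies) * 0.99) - 1)
    return {
        "avg_response_ms": statistics.fmean(latencies),
        "p99_response_ms": latencies[p99_index],
        "throughput_rps": len(samples) / duration_s,
        "error_rate": errors / len(samples),
    }
```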
6. Analyze the results
Examine the performance metrics collected from the tests to identify bottlenecks. For each performance issue found, submit a ticket for optimization. Key aspects include:
- Review performance metrics: Evaluate metrics including response time, throughput, error rate, CPU/memory utilization, and network usage to assess overall service performance.
- Identify performance bottlenecks: Analyze metrics to pinpoint constraints such as high response times, excessive resource utilization, database issues, network latency, and scalability limitations, guiding targeted optimizations and capacity adjustments.
- Correlate performance metrics: Examine the relationships between metrics, for example, how background data volume and resource utilization affect response time, to obtain guidance for different scenarios and to determine which nodes require scaling.
- Evaluate acceptance conditions: Compare results against predefined SLAs to determine production readiness. Implement optimizations or scaling when requirements are unmet.
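Acceptance evaluation can then be a direct comparison of the measured metrics against the agreed SLA thresholds (such as those sketched under step 1), with each violation becoming a ticket for optimization or scaling. The metric names and values below are hypothetical.
```python
def evaluate(metrics: dict, sla: dict) -> list:
    """Compare measured metrics against SLA thresholds and list violations."""
    checks = [
        ("avg_response_ms", "max_avg_response_ms", "above"),
        ("p99_response_ms", "max_p99_response_ms", "above"),
        ("error_rate", "max_error_rate", "above"),
        ("throughput_rps", "min_throughput_rps", "below"),
    ]
    violations = []
    for metric, limit, bad_direction in checks:
        value, bound = metrics[metric], sla[limit]
        failed = value > bound if bad_direction == "above" else value < bound
        if failed:
            violations.append(f"{metric}={value} violates {limit}={bound}")
    return violations


# Hypothetical measured results and thresholds for a "search" scenario.
measured = {"avg_response_ms": 620.0, "p99_response_ms": 2400.0,
            "throughput_rps": 95.0, "error_rate": 0.002}
sla = {"max_avg_response_ms": 500.0, "max_p99_response_ms": 2000.0,
       "min_throughput_rps": 80.0, "max_error_rate": 0.005}

for issue in evaluate(measured, sla):
    print("SLA violation:", issue)  # raise an optimization ticket for each
```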
7. Determine the baseline
A performance baseline is a set of reference values established when the product is formed. It is archived and released in the version design phase based on the layering principle, and provides a reference point for comparing performance results over time. The baseline serves as the reference for workload performance.
Consider the workload objectives and record performance results so that the baseline can be compared and optimized as versions iterate. Use the baseline metrics as the benchmark for future performance tests and to identify whether the service has degraded.
To establish a performance baseline for future benchmarking, follow these steps:
- Determine performance metrics: Identify and agree upon metrics, such as:
  - Response time: the speed at which a service responds to a request
  - Throughput: the number of requests processed per unit of time
  - Resource utilization: CPU, memory, and disk usage
- Record performance-related metrics: Record the performance metrics obtained during the test as baseline metrics. Compare them with the pre-defined SLA values.
- Compare future test results: In subsequent tests, evaluate performance metrics against the established baseline and thresholds. This comparison helps to detect potential performance degradation.
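As a minimal illustration of these steps, the sketch below archives baseline metrics as a JSON file and flags later results that degrade beyond a tolerance; the file name and the 10% tolerance are assumptions to be replaced by your own baseline management rules.
```python
import json
from pathlib import Path

BASELINE_FILE = Path("performance_baseline.json")  # hypothetical archive location


def save_baseline(metrics: dict) -> None:
    """Archive the agreed metrics as the published baseline for this version."""
    BASELINE_FILE.write_text(json.dumps(metrics, indent=2))


def find_regressions(current: dict, tolerance: float = 0.10) -> dict:
    """Return metrics that degraded by more than `tolerance` versus the baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    higher_is_worse = {"avg_response_ms", "p99_response_ms", "error_rate"}
    regressions = {}
    for name, base in baseline.items():
        now = current[name]
        if name in higher_is_worse:
            degraded = now > base * (1 + tolerance)
        else:  # e.g. throughput_rps: lower is worse
            degraded = now < base * (1 - tolerance)
        if degraded:
            regressions[name] = {"baseline": base, "current": now}
    return regressions


# Usage: call save_baseline(...) once when the baseline is released, then run
# find_regressions(latest_metrics) after each subsequent test to detect degradation.
```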
Principles and requirements for managing performance baselines:
- Baseline immutability: Published baselines cannot be modified within the same version. Any changes require alignment with product SEs and PMs, followed by official re-release.
- Baseline inheritance: Baseline values must be validated during each version's design phase. Complete overhaul of existing baselines is prohibited.
- Baseline ownership: Testing teams must strictly adhere to published baseline requirements for performance tests and evaluations.
- Validation rights: Teams may reject verification requests for non-baseline tests that lack product relevance.
Testing teams have the right to challenge unreasonable baseline content. Based on test data, live network applications, customer specifications, and standards, they may initiate reviews with SEs and product managers for baseline adjustments.