Built-in Check Items
SecMaster can scan cloud services for risks in key configuration items, report scan results by category, generate alerts for incidents, and provide hardening suggestions and guidelines.
To view the details of each check item, such as the check status, risk severity, and check content, go to the check item details page. For details, see Viewing the Check Result.
This topic describes check items for cloud service baseline settings.
Security Standard |
Check Categories |
Total Check Items |
---|---|---|
Cloud Security Compliance Check 1.0 |
5 |
78 |
Network Security |
6 |
24 |
Huawei Cloud Security Configuration |
8 |
131 |
General Data Protection Regulation (GDPR) |
7 |
54 |
Common Weak Password Detection |
1 |
1 |
Password Complexity Policy Detection |
1 |
5 |
PCI-DSS |
7 |
56 |
NIST SP 800-53 |
15 |
58 |
Cloud Security Compliance Check — Identity and Access Management
Check Item |
Description |
---|---|
IAM users |
After Identity and Access Management (IAM) is enabled, IAM users in the default user group admin are granted access to all of your Huawei Cloud services. SecMaster checks whether there are at least two IAM users under your account and whether they all belong to the admin user group. |
IAM user login protection |
If you enable login protection in IAM, IAM users will be asked to verify identity through virtual MFA device, SMS messages, or emails. This further improves account security and prevents phishing attacks or unexpected password leakage. SecMaster checks whether login protection is enabled in IAM security settings. |
IAM user operation protection |
If you enable IAM user operation protection, you and the users created using your account will be authenticated by a virtual MFA device, SMS message, or email before being allowed to perform critical console operations, such as deleting an ECS or unbinding an EIP. SecMaster checks whether operation protection is enabled for IAM users. |
Administrator account AK/SK |
An access key (Access Key ID/Secret Access Key) is used as a long-term identity credential of an account. Anyone with access to this key pair would be able to manage IAM users or perform other critical operations. It is recommended that the AK/SK be disabled for the administrator because if the key was accidentally lost or disclosed, the security repercussions would be severe. SecMaster checks whether the access key is enabled for the administrator account. |
IAM user password configuration |
Strong password policies are recommended for IAM users. A password should contain at least three types of the following characters: uppercase letters, lowercase letters, digits, and special characters. A password must contain at least eight characters. The new password cannot be the same as any of the last three passwords. Check whether the password policies for the IAM user meet the requirements. |
IAM login authentication policy (account lockout) |
Users with the Security Administrator permission can configure login authentication policies to ensure user information and system security. IAM allows its users to set a duration to lock IAM users out if a specific number of failed login attempts is reached within a certain period. It is recommended that you lock an IAM user out after five failed login attempts within 60 minutes. SecMaster checks whether the account lockout policy specifies that a user will be locked out after five failed login attempts within 60 minutes. |
IAM login authentication policy (account lockout duration) |
Users with the Security Administrator permission can configure login authentication policies to ensure user information and system security. IAM allows its users to set a duration to lock accounts out if a specific number of failed login attempts is reached within a certain period. IAM allows users to configure the account lockout duration. During this period, users cannot enter passwords. The account lockout duration should be set to 15 minutes. SecMaster checks whether the account lockout duration is set to 15 minutes. |
IAM password policy (password reuse) |
IAM allows users to configure password policies. If you enable the password reuse policy, the new password cannot be the same as any of the recently used passwords. Check whether the password reuse rule is enabled in the IAM password policy and whether the number of recent passwords that cannot be reused is less than 5. |
Session timeout policy |
IAM allows users to configure a session timeout duration. You can configure a session timeout that will apply if you or users created using your account do not perform any operations within a specific period. SecMaster checks whether the session timeout limit is set to 15 minutes. |
Account disabling |
IAM users can log in to the Huawei Cloud management console using their usernames and passwords. If a user does not log in to the management console for 90 days or longer, disable the user's permission to access the console. SecMaster checks whether the account disabling policy is enabled and whether the validity period is set to 90 days. |
IAM user password strength |
Login passwords of IAM users must be strong. Login passwords of IAM users are classified into weak, medium, and strong. Set a strong password and change it periodically to make your account more secure. Check whether the IAM user login password is strong. |
Enabling of MFA for CBH instances |
After you enable MFA for CBH instances, stricter identity authentication is required for logins through SSH or web browsers. MFA methods include SMS, mobile OTP, USB keys, and dynamic tokens. Check whether MFA is enabled for CBH instances. |
Cloud Security Compliance Check — Inspection
Check Item |
Description |
---|---|
ELB backend server health check |
ELB periodically sends health check requests to backend servers to check whether they are healthy. If an unhealthy backend server is detected, the associated load balancer stops forwarding requests to this server. After the backend server recovers, the load balancer resumes routing requests to it. SecMaster checks whether health check is enabled for all load balancers and whether the backend server health status is normal. |
Enabling of CTS |
Cloud Trace Service (CTS) records operations on any of the resources under your account. The records provided by CTS let you analyze how safe your system is, track any resource changes, perform compliance audits, or locate faults. Check whether CTS has been enabled for all projects and whether at least one tracker is running. |
Enabling of OBS Bucket Access Logging |
After access logging is enabled for a given bucket, OBS automatically logs access requests to the bucket, generates logs, and saves the log files to a destination bucket. With access logs, you can analyze the type and pattern of requests made to the bucket. Check whether access logging is enabled for all OBS buckets. |
Enabling of database audit (for RDS instances) |
Database Security Service (DBSS) database audit provides the audit function in bypass mode. It records user access to the database in real time, generates fine-grained audit reports, and sends real-time alerts for risky operations and attacks, helping users locate internal violations and improper operations in the database. Check whether database audit is enabled. |
Enabling of Cloud Eye |
Cloud Eye is a comprehensive platform to monitor a variety of cloud resources such as ECS and bandwidth usage. You can use Cloud Eye to monitor resources, set alert rules, identify resource exceptions, and quickly respond to resource changes. Check whether Cloud Eye is enabled. |
Cloud Eye server monitoring |
Server Monitoring includes basic monitoring, OS monitoring, and process monitoring. Basic Monitoring provides you with installation-free monitoring for basic metrics. OS Monitoring and Process Monitoring provide system-wide, active, and fine-grained monitoring for servers by installing open-source plug-ins on the servers. Check whether Cloud Eye Agent is installed on ECSs. |
Cloud Eye website monitoring |
Website monitoring is the monitoring of remote server statuses, such as availability and connectivity, by simulating real users' access to remote servers. Check whether website monitoring is configured. |
Cloud Security Compliance Check — Infrastructure Protection
Check Item |
Description |
---|---|
Key pair login of an ECS with an EIP |
If an ECS is bound to an EIP and therefore exposed to the Internet, logging in to the ECS with a key pair is more secure than using a password. |
Log metric filtering and alert incidents (VPC changes) |
The log audit module is a core component necessary for information security audit and an important information system providing security risk management and control for enterprises and organizations. Cloud Trace Service (CTS) keeps track of user activities and resource changes on your cloud resources. It helps you collect, store, and query operational records for security analysis, audit and compliance, and fault location. SecMaster checks whether there are logs and alerts generated in CTS due to VPC changes. |
Log metric filtering and alert incidents (network gateway changes) |
The log audit module is a core component necessary for information security audit and an important information system providing security risk management and control for enterprises and organizations. Cloud Trace Service (CTS) keeps track of user activities and resource changes on your cloud resources. It helps you collect, store, and query operational records for security analysis, audit and compliance, and fault location. SecMaster checks whether there are logs and alerts generated in CTS due to network gateway changes. |
Log metric filtering and alert incidents (subnet changes) |
The log audit module is a core component necessary for information security audit and an important information system providing security risk management and control for enterprises and organizations. Cloud Trace Service (CTS) keeps track of user activities and resource changes on your cloud resources. It helps you collect, store, and query operational records for security analysis, audit and compliance, and fault location. SecMaster checks whether there are logs and alerts generated in CTS due to subnet changes. |
Log metric filtering and alert incidents (VPN changes) |
The log audit module is a core component necessary for information security audit and an important information system providing security risk management and control for enterprises and organizations. Cloud Trace Service (CTS) keeps track of user activities and resource changes on your cloud resources. It helps you collect, store, and query operational records for security analysis, audit and compliance, and fault location. SecMaster checks whether there are logs and alerts generated in CTS due to VPN changes. |
ELB shared load balancer access control |
Access control allows you to add an IP address whitelist or blacklist to control who can access the listener. An IP address whitelist allows only specified IP addresses to access the listener. An IP address blacklist blocks only specified IP addresses from accessing the listener. Access control policies protect backend servers: when a whitelist is enabled, only the IP addresses on it can access the listener. Check whether access control is enabled for all ELB load balancers. |
Network ACL rule configuration |
A network ACL is an access control policy system for subnets. Based on inbound and outbound rules, it determines whether data packets are allowed in or out of any associated subnet. Configuring a network ACL for subnets in a VPC adds an additional layer of protection to implement more refined and sophisticated security access control. Check whether a network ACL rule is configured. |
VPC peering connection route table |
A VPC peering connection is a network connection between two VPCs. The route tables configured for a VPC peering connection must meet the principle of least privilege. The destination of both the local and peer routes should be a subnet of the minimum size. Check whether the route table for the VPC peering connection meets the principle of least privilege. |
VPC planning |
If you have multiple service systems in a region and each service system requires an isolated network, you can create a separate VPC for each service system. You can use a VPC peering connection to enable communication between two VPCs. VPCs are region-specific. By default, networks in different VPCs are not connected, even within the same region, and the communications on these networks are completely isolated from each other. This is not the case for different AZs in the same VPC: two networks in the same VPC can communicate with each other even if they are in different AZs. Check whether the VPC planning is appropriate. |
Enabling of WAF (cloud/dedicated/ELB mode) |
With WAF, all public traffic destined for your website goes to WAF first. WAF identifies and filters out the illegitimate traffic, and routes only the legitimate traffic to your origin server to keep your website secure, stable, and available. Check whether WAF is enabled. |
WAF back-to-source IP address configuration (no ELB configured) |
After Web Application Firewall (WAF) is enabled, you can configure the origin server to allow only access requests from WAF to access the origin server. This ensures normal access and prevents the IP address of the origin server from being exposed. If ELB is not used, add the WAF back-to-source IP address to the source IP address of the security group associated with the ECS. |
WAF geolocation access control rules |
You can configure a WAF geolocation access control rule to control access of IP addresses from a specific country or region. This can reduce the attack surface of your website. (This function is not included in the detection or professional edition.) |
WAF basic web protection configuration |
You can configure WAF Basic Web Protection to Block to let WAF block and log attacks, or configure it to Log only to let WAF only log attacks. The detection edition supports only the Log only mode. Basic web protection must be enabled and set to Block for all domain names protected with WAF. In doing so, WAF can defend against common web attacks, such as SQL injection, XSS, remote overflow attacks, file inclusion, Bash vulnerability exploits, remote command execution, directory traversal, unauthorized sensitive file access, and command and code injections by default. Check whether Basic Web Protection is enabled and set to Block. |
Enabling of VSS |
Vulnerability Scan Service (VSS) scans for website vulnerabilities, provides analysis reports, and gives you fix suggestions. Check whether VSS is enabled. |
Enabling of Anti-DDoS |
Cloud Native Anti-DDoS Basic (Anti-DDoS) protects cloud resources against DDoS attacks on the network and application layers and sends alerts once attacks have been detected. It will improve your bandwidth usage and service stability. Check whether Anti-DDoS is enabled. |
Enabling of Advanced Anti-DDoS |
Advanced Anti-DDoS (AAD) ensures the continuity of important enterprise services. AAD protects your mission-critical workloads from DDoS attacks by routing all traffic destined for origin servers to AAD IP addresses and scrubbing malicious attacks. Check whether Advanced Anti-DDoS is enabled. |
Enabling of CBH |
Cloud Bastion Host (CBH) is a security management and control platform that enables you to centrally manage accounts, authorization, authentication, and audits (4A). If you enable this service, you can centrally manage and audit cloud resources, such as ECSs, databases, and application systems. It helps you improve system security and meet applicable compliance requirements. Check whether CBH is enabled. |
Enabling of HSS |
Host Security Service (HSS) checks your assets and protects them from harm you may or may not have noticed, including intrusions, vulnerabilities, and unsafe settings. This service can identify and manage data assets on your servers, scan for risks in real time, and defend against intrusions to your servers. This service helps easily build a security system to protect your servers. HSS should be enabled for each ECS. The minimum edition is enterprise edition. The ultimate and web tamper protection editions are recommended. Check whether HSS is enabled for each ECS. |
Enabling of HSS web tamper protection and direction configuration |
Web Tamper Protection (WTP) monitors website directories in real time, backs up files, and restores tampered files using the backup. WTP protects your websites from Trojans, illegal links, and tampering. For web applications, mission-critical systems, and servers hosting applications you want to protect, HSS WTP must be enabled and directories must be configured. Check whether WTP is enabled and whether the protected directories are configured. |
Emergency host vulnerability fixing |
Host Security Service (HSS) can detect Linux OS, Windows OS, and Web-CMS vulnerabilities. Check whether any emergency vulnerabilities are detected by HSS on each server. |
CDN access control settings |
You can configure hotlinking prevention and an IP address blacklist so that CDN can identify and filter out malicious visitors. Check whether an access control rule is configured for CDN. |
Cloud Security Compliance Check — Data Protection
Check Item |
Description |
---|---|
ELB certificate validity |
ELB allows you to deploy server or CA certificates on a load balancer. When you configure an HTTPS listener, bind a server certificate to the listener. If two-way authentication is enabled, you also need to bind a CA certificate. SecMaster checks whether all ELB certificates are valid. If an SSL certificate has expired, website visitors will see a warning indicating that the website security certificate has expired when they access the website. |
CDN certificate validity |
You can configure the HTTPS certificate of the acceleration domain name and deploy it on network-wide CDN nodes to implement secure acceleration. If an SSL certificate for CDN has expired, website visitors will see a warning indicating that the website security certificate has expired when they access the website. |
SSL Certificate Validity |
SSL Certificate Manager (SCM) is a platform to centrally manage your Secure Sockets Layer (SSL) certificates. After an SSL certificate is deployed on the server, HTTPS is enabled for server access. Expired SSL certificates cannot be used. Check the validity period of every issued SSL certificate (unissued SSL certificates are not included). |
RDS DB instance EIP binding settings |
For a publicly accessible Relational Database Service (RDS) DB instance, the minimum access privilege, SSL authenticated channel, and non-default database port are required to prevent data breaches. Check whether access control is configured, SSL is enabled, and the default port is changed if an RDS DB instance is publicly accessible. |
DDS DB instance EIP settings |
For a publicly accessible Document Database Service (DDS) instance, the minimum access privilege, SSL authenticated channel, and non-default database port are required to prevent data breaches. Check whether access control is configured, SSL is enabled, and the default port is changed if a DDS instance is publicly accessible. |
DCS DB instance EIP binding settings |
For a publicly accessible Distributed Cache Service (DCS) instance, the minimum access privilege, SSL authenticated channel, and non-default database port are required to prevent data breaches. Check whether access control is configured, SSL is enabled, and the default port is changed if a DCS instance is publicly accessible. |
GaussDB instance EIP settings |
For a publicly accessible GaussDB instance, the minimum access privilege, SSL authenticated channel, and non-default database port are required to prevent data breaches. Check whether access control is configured, SSL is enabled, and the default port is changed if a GaussDB instance is publicly accessible. |
RDS DB instance EIP binding |
Public connections are not recommended for RDS DB instances. If you configure public connections for an RDS DB instance, service data will be transmitted over the Internet, which may cause data leakage. Check whether public connections are configured for RDS DB instances. |
DDS DB instance EIP binding |
Public connections are not recommended for DDS DB instances. If you configure public connections for a DDS DB instance, service data will be transmitted over the Internet, which may cause data leakage. Check whether public connections are configured for DDS DB instances. |
DCS DB instance EIP binding |
Public connections are not recommended for DCS instances. If you configure public connections for a DCS instance, service data will be transmitted over the Internet, which may cause data leakage. Check whether public connections are configured for DCS instances. |
GaussDB instance EIP binding |
Public connections are not recommended for GaussDB instances. If you configure public connections for a GaussDB instance, service data will be transmitted over the Internet, which may cause data leakage. Check whether public connections are configured for GaussDB instances. |
RDS DB instance security group rules |
Check whether there are any insecure rules in the security groups associated with all RDS DB instances. If access is granted from 0.0.0.0/0 or if the inbound rule is left unconfigured for a security group, access to the DB instances in the security group is granted from any IP address, which is quite dangerous. Any inbound rule that allows all source IP addresses (0.0.0.0/0) to access port 1 - 65535 or a specific database service port, such as port 3306, over all protocols, is insecure. |
GaussDB instance security group rule |
Inbound rules of security groups must meet the principle of least privilege. A port with any of the following configurations fails to meet the principle of least privilege, unless otherwise required by the workloads (The configuration listed first brings the highest risk.): IPv4: The source IP address is set to 0.0.0.0/0. The mask of public IP addresses is smaller than 32. The subnet mask of internal IP addresses is smaller than 24. IPv6: The source IP address is ::/0 |
OBS Bucket Server-Side Encryption |
With Object Storage Service (OBS) server-side encryption, data is encrypted on the server and then uploaded to OBS buckets. When you download encrypted data, it is decrypted on the server and then sent to you. Encrypting data on server before storing it in OBS buckets improves security. Check whether server-side encryption is enabled for all OBS buckets. |
OBS Bucket ACL Permissions |
An ACL rule is used to control access to a specific bucket on a basis of accounts or user groups. Anonymous users are visitors who have not registered with Huawei Cloud. If an OBS bucket ACL grants bucket access to anonymous users, anyone can access the OBS bucket without authentication. SecMaster checks all OBS buckets to see whether access permission to any buckets or ACLs is granted to anonymous users. |
MySQL DB instance user root remote login |
Remote login to MySQL DB instances using user root must be controlled to prevent the account from being cracked. Only remote login from applications or DAS management network segments is allowed. |
RDS DB instance security group inbound rules |
Inbound rules of security groups must meet the principle of least privilege. A port with any of the following configurations fails to meet the principle of least privilege, unless otherwise required by the workloads (The configuration listed first brings the highest risk.): IPv4: The source IP address is set to 0.0.0.0/0. The mask of public IP addresses is smaller than 32. The subnet mask of internal IP addresses is smaller than 24. IPv6: The source IP address is ::/0. Check whether the inbound rules of the security group associated with the RDS DB instance comply with the principle of least privilege. |
DCS DB instance security group inbound rules |
Inbound rules of security groups must meet the principle of least privilege. A port with any of the following configurations fails to meet the principle of least privilege, unless otherwise required by the workloads (The configuration listed first brings the highest risk.): IPv4: The source IP address is set to 0.0.0.0/0. The mask of public IP addresses is smaller than 32. The subnet mask of internal IP addresses is smaller than 24. IPv6: The source IP address is ::/0. Check whether the inbound rules of the security group associated with the Distributed Cache Service (DCS) instance comply with the principle of least privilege. |
DDS DB instance security group inbound rules |
Inbound rules of security groups must meet the principle of least privilege. A port with any of the following configurations fails to meet the principle of least privilege, unless otherwise required by the workloads (The configuration listed first brings the highest risk.): IPv4: The source IP address is set to 0.0.0.0/0. The mask of public IP addresses is smaller than 32. The subnet mask of internal IP addresses is smaller than 24. IPv6: The source IP address is ::/0. Check whether the inbound rules of the security group configured for the DDS DB instance comply with the principle of least privilege. |
RDS DB instance security group ports |
Check whether there are any insecure rules in the security groups associated with all RDS DB instances. Insecure rules: an inbound rule opens the full port range (1 to 65535) or ports other than the database service port (for example, 3306 for RDS for MySQL). Check whether unused ports are enabled for each RDS DB instance. |
DCS DB instance ports |
Check whether there are any insecure rules in the security groups associated with all DCS instances. Insecure rules: an inbound rule opens the full port range (1 to 65535) or ports other than the database service port (for example, 6379 for DCS Redis). Check whether unused ports are enabled for each DCS instance. |
DDS DB instance ports |
Check whether there are any insecure rules in the security groups associated with all DDS DB instances. Insecure rules: an inbound rule opens the full port range (1 to 65535) or ports other than the database service port (for example, 8635 for DDS). Check whether unused ports are enabled for each DDS instance. |
Cloud Security Compliance Check — Event Response
Check Item |
Description |
---|---|
CBR disk backup availability |
Cloud Backup and Recovery (CBR) lets you back up EVS disks with ease. If there is a virus intrusion, accidental deletion, or software/hardware fault, data can be restored to any backup point. Check whether CBR is enabled for all EVS disks. |
OBS bucket cross-region replication |
Object Storage Service (OBS) provides you with cross-region replication for disaster recovery. By creating cross-region replication rules, data in a source bucket can be automatically and asynchronously replicated to the destination bucket in different regions under the same account. This feature gives you the ability to back up data remotely. Check whether cross-region replication is enabled for all OBS buckets. |
Enabling of CTS key incident notifications |
You can create key incident notifications on CTS so that Simple Message Notification (SMN) can send messages to notify you of key operations. This function is triggered by CTS, but notifications are sent by SMN. |
LTS log transfer (to OBS/DIS) |
Logs reported from ECSs and cloud services are retained in LTS for 7 days by default. LTS automatically deletes logs older than the retention period. You can transfer logs to OBS buckets for long-term storage. Check whether log transfer (to OBS/DIS) is enabled in LTS. |
CBR cloud server backup for ECSs/BMSs |
Cloud Backup and Recovery (CBR) lets you back up cloud servers (including ECSs, HECSs, and BMSs), disks, and on-premises VMware virtual environments with ease. If there is a virus intrusion, accidental deletion, or software/hardware fault, data can be restored to any backup point. Check whether CBR cloud server backups have been enabled for ECSs and BMSs. |
RDS DB instance backup |
Automatic backup should be enabled for each RDS DB instance to ensure data reliability. Check whether automated backup is enabled for each RDS DB instance. |
GaussDB instance backup |
Automatic backup should be enabled for each GaussDB instance to ensure data reliability. Check whether automated backup is enabled for each GaussDB instance. |
WAF logs |
If you authorize WAF to access Log Tank Service (LTS), you can use the WAF logs recorded by LTS for quick and efficient real-time analysis, device O&M management, and analysis of service trends. Check whether LTS is enabled for WAF. |
WAF event alarm notification |
With alert notifications enabled, WAF will send notifications to the recipients you configured through emails or SMS messages once an incident occurs. In this way, O&M personnel can respond to attacks in a timely manner and refine alert reporting frequency and incident type to meet workload changes. Check whether alert notifications are enabled for WAF incidents. |
DBSS audit log backup |
DBSS database audit enables you to back up audit logs to OBS buckets so that you can back up or restore database audit logs as needed. Check whether log backup is configured for database audit. |
DBSS database audit alert notifications |
After configuring alert notifications, you can receive DBSS alerts on database risks. Check whether alert notifications are enabled for database audit. |
EVS disk backups |
Cloud Backup and Recovery (CBR) lets you back up EVS disks with ease. If there is a virus intrusion, accidental deletion, or software/hardware fault, data can be restored to any backup point. Check whether a backup is available in CBR for EVS disks. |
RDS DB instance backup |
RDS lets you back up and restore database instances to ensure data reliability. By default, an automatic data backup policy is enabled for RDS DB instances, and data is backed up once a day. Check whether automatic backup is enabled for each RDS instance. |
Automatic backup for DDS DB instances |
DDS lets you back up and restore database instances to ensure data reliability. By default, an automatic data backup policy is enabled for DDS DB instances, and data is backed up once a day. Check whether automatic backup is enabled for each DDS instance. |
Network Protection Check — Security Kit
Check Item |
Description |
---|---|
HSS status |
Host Security Service (HSS) helps you identify and manage the assets on your servers, eliminate risks, and defend against intrusions and web page tampering. There are also advanced protection and security operations functions available to help you easily detect and handle threats. Check whether HSS protection is enabled for servers. |
HSS agent status |
Host Security Service (HSS) helps you manage and maintain the security of all your servers and reduce common risks. To enable HSS protection, install HSS agent on the servers you want to protect first. Check whether the HSS agent is in the Online status. |
Server risk detection result |
Host Security Service (HSS) detects risks and abnormal operations on servers in real time and performs a full scan on servers at 00:00 every day. You can view the detection results for each server, fix unsafe settings, and ignore trusted settings in a timely manner. Check the detection results to see if there are any unsafe settings or risky operations. |
WAF basic web protection (cloud instance) |
After this function is enabled, WAF can defend against common web attacks, such as SQL injections, XSS, remote overflow vulnerabilities, file inclusions, Bash vulnerabilities, remote command execution, directory traversal, sensitive file access, and command/code injections. Check whether basic web protection is enabled for cloud WAF. |
WAF policy configuration (cloud instance) |
You can configure WAF Basic Web Protection to Block to let WAF block and log attacks, or configure it to Log only to let WAF only log attacks. The detection edition supports only the Log only mode. Basic web protection must be enabled and set to Block for all domain names protected with WAF. In doing so, WAF can defend against common web attacks, such as SQL injection, XSS, remote overflow attacks, file inclusion, Bash vulnerability exploits, remote command execution, directory traversal, unauthorized sensitive file access, and command and code injections by default. Check whether Basic Web Protection is enabled and set to Block for cloud WAF. |
WAF basic web protection (dedicated instances) |
After this function is enabled, WAF can defend against common web attacks, such as SQL injections, XSS, remote overflow vulnerabilities, file inclusions, Bash vulnerabilities, remote command execution, directory traversal, sensitive file access, and command/code injections. Check whether basic web protection is enabled for all dedicated WAF instances. |
WAF policy configuration (dedicated instance) |
You can configure WAF Basic Web Protection to Block to let WAF block and log attacks, or configure it to Log only to let WAF only log attacks. The detection edition supports only the Log only mode. Basic web protection must be enabled and set to Block for all domain names protected with WAF. In doing so, WAF can defend against common web attacks, such as SQL injection, XSS, remote overflow attacks, file inclusion, Bash vulnerability exploits, remote command execution, directory traversal, unauthorized sensitive file access, and command and code injections by default. Check whether Basic Web Protection is enabled and set to Block for dedicated WAF instances. |
HSS agent status |
After you install the HSS agent on your servers, you will be able to check the server security status and risks in a region on the HSS console. HSS comes in basic, enterprise, premium, and WTP editions. The basic edition is used only for testing and individual user protection. The enterprise edition or later is recommended. Check all servers to see whether HSS enterprise edition or a higher edition is used. |
Network Security — Account Hardening
Check Item |
Description |
---|---|
Administrator account AK/SK |
An access key (Access Key ID/Secret Access Key) is used as a long-term identity credential of an account. Anyone with access to this key pair would be able to manage IAM users or perform other critical operations. It is recommended that the AK/SK be disabled for the administrator because if the key was accidentally lost or disclosed, the security repercussions would be severe. The service checks whether the access key is enabled for the administrator account. |
Host weak password |
HSS can check for accounts using weak passwords so you can remind users to change them, preventing easy guessing. Check whether weak passwords are used to log in to servers. |
Agency account |
By creating an agency, you can share your resources with another account, or delegate an individual or team to manage your resources. You do not need to share your security credentials (the password and access keys) with the delegated party. Instead, the delegated party can log in with its own account credentials and then switch roles to your account to manage your resources. Agencies created for individual accounts are not recommended as such agencies may be untrusted in the cloud service environment. Check whether there is any agency created for an individual account. |
Agency permissions for global services |
Check whether Security Administrator or Tenant Administrator permissions are assigned to any agency created for global services. |
Agency permissions for project-level services |
Check whether Security Administrator or Tenant Administrator permissions are assigned to any agency created for project-level services. |
Network Security — Host Hardening
Check Item |
Description |
---|---|
High-risk open ports |
HSS can detect open ports on your servers so you can quickly know which assets on them are unsafe. If dangerous or unnecessary ports are found enabled, check whether they are mandatory for services, and disable them if they are not. For dangerous ports, you are advised to further check their program files, and delete or isolate their source files if necessary. Check each server to see whether high-risk ports or unused ports are open to the internet. |
Kubernetes version of a CCE cluster |
CCE provides highly scalable, high-performance enterprise-class Kubernetes clusters and supports Docker containers. With CCE, you can easily deploy, manage, and scale containerized applications on Huawei Cloud. Kubernetes should be upgraded to version 1.15 or later because versions earlier than 1.15 are insecure in CCE. Check each CCE cluster to see whether Kubernetes is earlier than version 1.15. |
VPC peering connection configuration |
A VPC peering connection is a network connection between two VPCs that enables them to communicate using private IP addresses as if they were in the same VPC. You can create a peering connection between your own VPCs or between your VPC and another VPC in the same region. Check whether a VPC peering connection has been created in the VPC. If yes, check whether high-risk ports or unused ports are enabled. |
VPC configuration (VPN Gateway) |
A VPN gateway is an egress gateway for a VPC. With a VPN gateway, you can create a secure, reliable, and encrypted connection between a VPC and an on-premises data center or between two VPCs in different regions. Check whether a VPN gateway has been created for the VPC. |
Network Security — Sudo Vulnerability
Check Item |
Description |
---|---|
Check whether there are sudo vulnerabilities. |
HSS can check and handle vulnerabilities in your Linux operating systems and the software (such as SSH, OpenSSL, Apache, and MySQL) you obtained from official sources and have not compiled. Check whether there is any sudo vulnerability on each server. |
Network Security — Access Control
Check Item |
Description |
---|---|
Security group inbound rules |
Inbound rules of security groups must meet the principle of least privilege. Unless your workloads require otherwise, an inbound rule with any of the following configurations fails to meet the principle of least privilege: the source IP address is set to 0.0.0.0/0 (high risk); the mask of a public source IP address is smaller than 32 (medium risk); the subnet mask of an internal source IP address is smaller than 24 (low risk); or the IPv6 source address is ::/0. A minimal evaluation sketch follows this table. |
Network Security — Sensitive Data Scanning
Check Item |
Description |
---|---|
OBS bucket ACL permissions |
An ACL rule is used to control access to a specific bucket on a basis of accounts or user groups. Anonymous users are visitors who have not registered with Huawei Cloud. If an OBS bucket ACL grants bucket access to anonymous users, anyone can access the OBS bucket without authentication. SecMaster checks all OBS buckets to see whether access permission to any buckets or ACLs is granted to anonymous users. |
Enabling of OBS Bucket Access Logging |
After access logging is enabled for a given bucket, OBS automatically logs access requests to the bucket, generates logs, and saves the log files to a destination bucket. With access logs, you can analyze the type and pattern of requests made to the bucket. Check whether access logging is enabled for all OBS buckets. |
Sensitive data in databases |
Data Security Center (DSC) identifies sensitive data in databases using preset policies. It leverages multiple preset and customized masking algorithms to provide full-stack protection for such sensitive data. Check whether there is sensitive data in each database. |
Sensitive data in OBS buckets |
DSC identifies sensitive data in OBS buckets using preset policies. It leverages multiple preset and customized masking algorithms to provide full-stack protection for such sensitive data. Check whether there is sensitive data in each OBS bucket. |
Sensitive data in Elasticsearch |
DSC identifies sensitive data in Elasticsearch using preset policies. It leverages multiple preset and customized masking algorithms to provide full-stack protection for such sensitive data. Check whether sensitive data is stored in Elasticsearch. |
Huawei Cloud Security Configuration — Network
Check Item |
Description |
---|---|
Disabling Internet Access over SSH |
The SSH protocol is mainly used to remotely connect to and manage hosts. The default port number is 22. The SSH protocol is an easy target for resource scanning and brute force cracking in network attacks. When you configure network ACL rules or security groups for VPC subnets, do not configure SSH port rules whose source IP address is 0.0.0.0/0 or ::/0. If SSH access from the Internet is required, whitelist specific source IP addresses instead. |
Disabling the Source IP Address 0.0.0.0/0 to Access Remote Management Ports and High-Risk Ports for VPC Security Groups |
When configuring security group rules for a VPC, do not add remote management ports or high-risk ports to inbound rules. If these ports are required by services, open them only to the minimum extent necessary. A rule does not meet the principle of least privilege if the source IP address is 0.0.0.0/0 or ::/0, if the mask of a public source IP address is less than 32, or if the mask of an internal source IP address is less than 24. High-risk ports include 20, 21, 135, 137, 138, 139, 445, 389, 593, and 1025. Remote management ports include 23, 177, 513, 4899, 6000 to 6063, 5900, and 5901. |
Disabling Access to Remote Management Ports and High-Risk Ports over the Source IP Address 0.0.0.0/0 for Subnet ACL |
When configuring network ACL rules for a VPC subnet, do not add remote management ports or high-risk ports to inbound rules. If these ports are required by services, open them only to the minimum extent necessary. A rule does not meet the principle of least privilege if the source IP address is 0.0.0.0/0 or ::/0, if the mask of a public source IP address is less than 32, or if the mask of an internal source IP address is less than 24. High-risk ports include 20, 21, 135, 137, 138, 139, 445, 389, 593, and 1025. Remote management ports include 23, 177, 513, 4899, 6000 to 6063, 5900, and 5901. |
Disabling Internet Access over RDP |
The RDP protocol is mainly used to remotely connect to and manage hosts. The default port number is 3389. The RDP protocol is an easy target for resource scanning and brute force cracking in network attacks. When you configure network ACL rules or security groups for VPC subnets, do not configure RDP port rules whose source IP address is 0.0.0.0/0 or ::/0. If RDP access from the Internet is required, whitelist specific source IP addresses instead. |
Enabling Access Control for ELB Listeners |
You can configure an IP address whitelist or blacklist for your ELB load balancer to control which IP addresses can access its listener. An IP address whitelist allows only specified IP addresses to access the listener. An IP address blacklist blocks only specified IP addresses from accessing the listener. Whitelists and blacklists do not conflict with inbound security group rules. Whitelists define the IP addresses that are allowed to access the listeners, while blacklists specify the IP addresses that are denied access to the listeners. Inbound security group rules control access to backend servers by specifying the protocol, ports, and IP addresses. Access control does not restrict the ping command, so the backend servers are still pingable from restricted IP addresses. To ping the IP address of a shared load balancer, you need to add a listener and associate a backend server with it. To ping the IP address of a dedicated load balancer, you only need to add a listener to it. |
Ensuring the Principle of Least Privilege over VPC Peering Connections |
A VPC peering connection is a network connection between two VPCs. The routes configured for a VPC peering connection must meet the principle of least privilege. The destination of both the local and peer routes should be a subnet of the minimum size. |
Huawei Cloud Security Configuration — Identity and Access Management
Check Item |
Description |
---|---|
Avoiding Setting Access Keys for Users with Console Passwords When Setting Initial IAM Users |
To improve account resource security, you are advised not to set access keys for IAM users who have console passwords when setting initial IAM users. |
Enabling Access Key Management |
To ensure the security of your account and resources, enable access key management. This option is disabled by default. After you enable this option, only the administrator can create, enable, disable, or delete access keys of IAM users. |
Enabling Login Protection |
To improve account security and prevent phishing attacks and password leakage, the administrator can enable login protection on the Account Security Settings page. When users log in to the console, they need to verify their identities by virtual MFA, SMS, or email in addition to their passwords. |
Ensuring that the IAM Passwords Must Be Changed At Least Once Every 180 Days or Less |
The password validity period policy of IAM users must be set. The following requirements should be met: After the password expires, the system forces the user to change the password. (The password validity period should be set to 180 days or shorter.) |
Ensuring That the Minimum Password Age Is Set |
The minimum password age for IAM users must be set. The following requirements should be met: A password can be changed only after it is used for a specified time. (The minimum password age should be set to 5 minutes.) |
Ensuring that the Minimum Password Length Specified in IAM password Policy Is Not Less Than 8 |
Configure the password policy. Customers are advised to configure a strong password policy for IAM users that requires passwords to contain at least eight characters. |
Enabling Operation Protection |
To improve account security and secure access to cloud services, the administrator can enable operation protection in IAM. When the account administrator or users created using the account perform a critical operation, such as deleting an Elastic Cloud Server (ECS) and unbinding an Elastic IP (EIP), on the console, they need to verify their identities again by virtual MFA device, SMS, or email. |
Ensuring That Only One Active Access Key Is Available for an IAM User |
To improve account resource security, you are advised to assign only one active access key to an IAM user. |
Ensuring that the IAM Password Policy Meets Password Strength Requirements |
Customers are advised to configure a strong password policy that contains the following requirements for IAM users:
|
Enabling MFA for the Administrator Account |
Virtual multi-factor authentication (MFA) is an authentication method that requires users to install an MFA application (such as an authenticator) on their smart devices and bind the application to their accounts as a virtual MFA device. When users log in to the console or perform a critical operation, they need to enter a 6-digit code generated by the virtual MFA device. MFA devices can be hardware- or software-based. Currently, software-based virtual MFA devices are supported. MFA must be enabled for users to log in to the management console. |
IAM Password Policy (Password Reuse) |
Strong password policies must be configured for IAM users. It is recommended that the password policies meet the following requirements: The new password cannot be any of the three recently used passwords. |
Configuring an ACL |
The administrator can set an access control list (ACL) to allow user access only from specified IP address ranges, IPv4 CIDR blocks, or VPC endpoints.
|
Disabling AK/SK for the Administrator Account |
An access key (AK/SK) is a long-term identity credential of an account. Anyone with access to the administrator's access key could manage IAM users or perform other critical operations. To limit the impact of accidental key loss or disclosure, disable the access key (AK/SK) for the administrator account. |
No IAM Policy Is Created to Allow the *:* Permissions |
To improve account resource security, do not create an IAM policy that allows the *:* management permission. |
Configuring a Login Authentication Policy |
The administrator can set a login authentication policy that covers Session Timeout, Account Lockout, Account Disabling, Recent Login Information, and Custom Information.
|
Creating IAM Users with Non-Administrator Permissions |
The default user admin has operation permissions for all cloud service resources. It is insecure if all users belong to the admin user group or share the same enterprise administrator account. To control the access of cloud resources by users or applications, customers can use Identity and Access Management (IAM) to create IAM users for their employees or applications. |
Huawei Cloud Security Configuration — Security
Check Item |
Description |
---|---|
Enabling Ransomware Prevention (Premium/Container/WTP Edition) |
Ransomware is one of the biggest cybersecurity threats today. Ransomware can intrude into a server, encrypt data, and demand ransom, causing service interruption, data leakage, or data loss. Attackers may not unlock the data even after receiving the ransom. HSS provides static and dynamic ransomware prevention. You can periodically back up server data to reduce potential losses. |
Enabling CBH and Multi-Factor Authentication |
CBH can monitor the usage of the CBH system, monitor O&M activities on each managed resource, and identify suspicious O&M actions in real time. This protects resources and data from being accessed or damaged by external or internal users. CBH reports alarms to customers, who can then handle or audit O&M issues in a timely, centralized manner. To further secure user accounts in Cloud Bastion Host (CBH), users can enable multi-factor authentication (MFA) included in CBH. After MFA is enabled, multi-factor authentication is required every time a user logs in to the CBH system through a web browser or an SSH client. MFA methods include SMS, mobile OTP, USB keys, and dynamic tokens. |
Enabling HSS (Basic/Professional/Enterprise/Premium Edition) |
Install and enable Host Security Service (HSS) for servers, such as Elastic Cloud Server (ECS) and Bare Metal Server (BMS), to comprehensively identify and manage server assets, monitor risks in servers in real time, and prevent unauthorized intrusions, helping enterprises enhance server security. |
Enabling Alarm Notifications for WAF Events |
You can enable notifications for attack logs. If this function is enabled, WAF sends you SMS or email notifications if an attack is detected. |
Configuring the WAF back-to-source IP addresses (origin servers deployed on ECSs) |
A back-to-source IP address is a source IP address used by WAF to forward client requests to origin servers. To origin servers, all web requests come from WAF, and all source IP addresses are WAF back-to-source IP addresses. The real client IP address is encapsulated into the HTTP header field. If no ELB load balancers are configured between WAF and origin servers, you need to configure ECSs housing web services to allow only WAF back-to-source IP addresses. |
Enabling Automatic Notification of SecMaster High-Risk Alarms |
Enable automatic notification of high-risk alarms in SecMaster. When a high-risk alarm is detected, you receive an email or SMS notification in a timely manner so that you can quickly handle and respond to the alarm. |
Enabling WAF Basic Web Protection Block Mode |
Basic web protection has two modes: Block and Log only. In Log only mode, WAF logs attacks only. In Block mode, WAF blocks and records every attack detected. |
Enabling CFW |
Enabling CFW is recommended if your services have elastic IP addresses (EIPs) configured. After CFW is enabled, all public network traffic destined for the EIP will go to CFW first. CFW detects and blocks malicious attack traffic, and then routes only the legitimate traffic to the service server, keeping the server secure, stable, and reliable. |
Enabling WAF |
If you run web-based services, enable the Web Application Firewall (WAF) service. After WAF is enabled, all public traffic to the website first goes through WAF. Malicious traffic is detected and filtered out by WAF, while normal traffic is forwarded to the origin server, ensuring the security, stability, and availability of the origin server. |
Enabling CFW Alarm Notification |
You can create alarm rules to monitor log exceptions in real time. If a log meets the preset rules, you will receive an alarm notification via SMS or email. |
Enabling DEW for Secret Hosting |
The secret hosting feature of Data Encryption Workshop (DEW) enables unified management, retrieval, and secure storage of various types of secrets, such as database passwords, server passwords, SSH keys, and access keys. |
Configuring a Geolocation Access Control Rule in WAF |
Geolocation access control rules in WAF enable you to restrict source IP addresses from a specific country or region to access the protected website. |
Configuring the WAF back-to-source IP addresses (origin servers added to ELB load balancers) |
A back-to-source IP address is a source IP address used by WAF to forward client requests to origin servers. To origin servers, all web requests come from WAF, and all source IP addresses are WAF back-to-source IP addresses. The real client IP address is encapsulated into the HTTP header field. If ELB load balancers are deployed between WAF and origin servers, you need to configure the ELB load balancers to allow only WAF back-to-source IP addresses. |
Enabling HSS Web Tamper Protection |
The HSS Web Tamper Protection (WTP) edition should be enabled to protect websites from Trojans, illegal URLs, viruses, and tampering. |
Huawei Cloud Security Configuration — Logging and Monitoring
Check Item |
Description |
---|---|
Enabling RDS Database Audit |
If you enable the SQL audit function, all SQL operations are logged, and you can download and query the logs. SQL audit is disabled by default. Enabling this function may affect database performance. |
Enabling Database Audit Alarm Notifications |
After configuring alarm notifications, you can receive DBSS alarms on database risks. If this function is not enabled, you have to log in to the management console to view alarms. |
Enabling DBSS Database Audit |
The database audit function should be enabled for databases. The database audit function of Database Security Service (DBSS) provides the audit function in bypass mode. It records user access to the database in real time, generates fine-grained audit reports, and sends real-time alarms for risky operations and attacks, helping users locate internal violations and improper operations in the database. |
Setting OBS Bucket for Storing Logs to Not Publicly Readable |
Ensure that the bucket storing audit logs is not publicly readable, preventing unauthorized access to audit logs. |
Enabling CTS |
A tracker is automatically created when you enable CTS. The tracker identifies and associates with all cloud services you are using, and records all operations on the services. CTS allows you to collect, store, and query all operations on your cloud resources and use these records for security analysis, compliance auditing, resource tracking, and fault locating. |
Enabling the Key Event Notifications in CTS |
Key events include real-time detection of high-risk operations (such as VM restart and security configuration changes), cost-sensitive operations (such as expensive resource creation and deletion), and service-sensitive operations (such as network configuration changes).
After key event notifications are enabled, CTS sends notifications about these key operations to subscribers through Simple Message Notification (SMN): the notification is triggered by CTS, and the message is sent by SMN. You need to enable key event notifications in CTS and are advised to set the operation type to Custom (including deletion, creation, and login). For example, if the operation type is set to Root Login, a notification is sent when the account administrator logs in. If the operation type is set to CTS change, a notification is sent when the CTS tracker changes. |
Enabling Logging for OBS Buckets |
You can enable OBS logging to facilitate bucket analysis or auditing as required. With access logs, you can analyze the types and patterns of requests made to the bucket. When logging is enabled for a bucket, OBS automatically logs access requests to the bucket and generates and writes log files into a specified bucket. |
Enabling Encrypted Storage of Log Files |
When dumping audit logs to OBS, you can configure encrypted storage to prevent unauthorized access to files. |
Enabling WAF Logs |
You can authorize WAF to access LTS so that all attack and access logs can be collected in Log Tank Service (LTS). With LTS, users can perform real-time decision analysis, device O&M management, and service trend analysis in a timely and efficient manner. Enabling LTS for WAF does not affect WAF performance. |
Enabling LTS Log Transfer |
HSS and cloud service logs transferred to LTS will be deleted automatically after the default log retention duration expires. If you want to store logs for a longer period, you need to configure log transfer in LTS. LTS can dump logs to the following cloud services:
|
Creating a VPC Flow Log |
VPC flow logs record information about traffic going to and from VPCs. You can use flow logs to monitor network traffic, analyze network attacks, and determine whether security groups and firewall rules need to be changed. If you want to know the traffic details of NICs in a VPC, you can view flow logs about the NICs through LTS. |
Enabling ECS Logs in LTS |
ICAgent collects logs from hosts based on your specified collection rules, and packages and sends the collected log data to LTS on a log-stream basis. You can view logs on the LTS console in real time. |
Ensuring That Log Retention Duration in LTS Meets Requirements |
After the log data of hosts and cloud services is reported to LTS, it is automatically deleted when the default log retention duration expires. Therefore, configure the log retention duration based on service requirements (see the sketch after this table). |
Enabling ELB Access Logging |
When distributing external traffic, Elastic Load Balance (ELB) logs details of HTTP and HTTPS requests, such as URIs, client IP addresses and ports, and status codes. You can use these logs for auditing. You can also search for logs using time and keywords and use a variety of SQL aggregation functions to analyze requests in a specified period to learn about the website visit frequency. |
Enabling FunctionGraph Logging |
You can enable FunctionGraph logging to facilitate analysis or audit as required. By accessing logs, the function owner can analyze the function execution process and quickly locate faults. |
Enabling the CFW Log Management Capability |
You can record attack event logs, access control logs, and traffic logs to Log Tank Service (LTS) and use these logs to quickly and efficiently perform real-time decision analysis, device operation management, and service trend analysis. |
Enabling Log File Integrity Verification |
When dumping audit logs to OBS, you can enable file verification to ensure the integrity of audit files and prevent files from being tampered with. |
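The retention requirement above can be approximated with a simple script. The sketch below is illustrative only: the log-group data is a hand-written stand-in for whatever inventory export you have, and the 180-day threshold is an assumed example rather than a SecMaster or LTS default.

```python
from dataclasses import dataclass

# Assumed example requirement; substitute your organization's policy.
REQUIRED_RETENTION_DAYS = 180

@dataclass
class LogGroup:
    name: str
    retention_days: int     # retention configured in LTS for this log group
    transfer_enabled: bool  # whether long-term transfer (for example, to OBS) is configured

# Hypothetical inventory; in practice this would come from an export of your LTS configuration.
log_groups = [
    LogGroup("lts-group-hss", retention_days=30, transfer_enabled=False),
    LogGroup("lts-group-cts", retention_days=365, transfer_enabled=True),
]

def check_retention(groups, required_days):
    """Flag log groups whose retention is too short and that lack log transfer."""
    findings = []
    for g in groups:
        if g.retention_days < required_days and not g.transfer_enabled:
            findings.append(
                f"{g.name}: retention {g.retention_days}d < {required_days}d and no transfer configured"
            )
    return findings

for finding in check_retention(log_groups, REQUIRED_RETENTION_DAYS):
    print("NON-COMPLIANT:", finding)
```

The same pattern applies to any retention-style check: export the current configuration, compare it against the documented requirement, and report the gap.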
Huawei Cloud Security Configuration — VMs and Containers
Cloud Service |
Check Item |
Description |
---|---|---|
Cloud Container Engine (CCE) |
Enabling the Container Security Edition of HSS |
Host Security Service (HSS) is designed to protect server workloads in hybrid clouds and multi-cloud data centers. It protects servers and containers and prevents web pages from malicious modification. You are advised to enable HSS to protect the nodes in the CCE cluster and the containers running on them. |
Forbidding Containers to Obtain Host Machine Metadata |
If CCE clusters are used as shared resource pools to build advanced services and end users of advanced services are allowed to create uncontrollable container workloads in the clusters, you need to restrict the access to metadata on hosts. |
|
Enabling LTS and Collecting Container Logs |
Log Tank Service (LTS) collects log data from hosts and cloud services. By processing a large number of logs efficiently, securely, and in real time, LTS provides useful insights for you to optimize the availability and performance of cloud services and applications. It also helps you efficiently perform real-time decision-making, device O&M, and service trend analysis. You are advised to collect container logs in a unified manner (including standard container output, log files in containers, node log files, and Kubernetes events) and report them to LTS. |
|
Disabling Kubernetes Cluster Versions That Have Reached EOS |
After a cluster version reaches end of service (EOS), CCE no longer supports creating clusters of that version or provides technical support for it, including new feature updates, vulnerability or issue fixes, new patches, service ticket guidance, and online checks. |
|
Preventing Clusters from Being Exposed to Public Networks over API Servers |
Kubernetes APIs have access control capabilities, but CVE vulnerabilities that bypass access control occasionally appear. To prevent attackers from probing Kubernetes API versions, you are advised not to bind EIPs to clusters unless necessary, reducing the attack surface. If an EIP must be bound, properly configure firewall or security group rules to restrict access to only the necessary ports and IP addresses. |
|
Restricting the Access of Containers to the Management Plane |
If service containers on a node do not need to access kube-apiserver, you are advised to disable container network traffic on the node from accessing kube-apiserver. |
|
Handling Vulnerabilities Released on the CCE Official Website in a Timely Manner |
CCE releases vulnerabilities periodically. You need to pay attention to and handle the vulnerabilities in a timely manner. CCE fixes high-risk vulnerabilities within one month after the Kubernetes community detects them and releases fixing solutions. The fixing policies are the same as those of the community. Before the vulnerabilities are completely fixed, you can refer to the mitigations provided by CCE to minimize the impact of the vulnerabilities. |
|
Preventing Cluster Nodes from Being Exposed to Public Networks |
If a node is exposed to the public network, attackers can attack the node and further control the cluster over the Internet. Do not bind an EIP to a cluster node unless necessary, to reduce the attack surface. If an EIP must be bound, properly configure firewall or security group rules to restrict access to only the necessary ports and IP addresses. You may have configured the kubeconfig.json file on a node in your cluster; kubectl can use the certificate and private key in this file to control the entire cluster. You are advised to delete unnecessary files from the /root/.kube directory on the node (for example, rm -rf /root/.kube) to prevent malicious use. |
|
Hardening the Security Group Rules of the VPC Where the Kubernetes Cluster Resides |
CCE is a general-purpose container platform, and its default security group rules apply to common scenarios. Based on your security requirements, you can harden the security group rules set for CCE clusters on the Security Groups page of the Network Console (see the sketch after this table). |
|
Elastic Cloud Server (ECS) |
Updating the Password Reset Plug-in in ECS to the Latest Version |
ECS provides the one-click password resetting function. If you have installed the password reset plug-in, you can reset the password with a few clicks when your ECS password is lost or expires. Updating the password reset plug-in ensures that vulnerabilities are fixed in a timely manner. |
Setting Firewall Policies in ECS to Restrict the Accesses to Metadata |
ECS metadata contains basic information about the ECS on the cloud platform, such as the cloud service ID, host name, and network information, and may also contain sensitive information. Because the multi-user design of the guest OS gives metadata a large access scope, an SSRF vulnerability in a tenant application may lead to metadata leakage. You can configure firewall policies on the ECS to restrict access to metadata and mitigate this risk. |
|
Enabling Encryption for Private Images |
Image encryption supports encrypting private images. When creating an ECS, if you select an encrypted image, the system disk of the created ECS automatically has encryption enabled, implementing system disk encryption. |
|
Using a Key Pair to Securely Log In to an ECS |
Key pairs are a set of security credentials for identity authentication for remote logins. A key pair consists of a public key and a private key. Key Pair Service (KPS) stores the public key and you store the private key. If you have imported a public key into a Linux server, you can use the corresponding private key to log in to the server without a password. You do not need to worry about password interception, cracking, or leakage. |
|
Bare Metal Server (BMS) |
Using a Key Pair to Securely Log In to a BMS |
Key pairs are a set of security credentials for identity authentication for remote logins. A key pair consists of a public key and a private key. Key Pair Service (KPS) stores the public key and you store the private key. If you have imported a public key into a Linux cloud server, you can use the corresponding private key to log in to the server without a password. You do not need to worry about password interception, cracking, or leakage. |
Updating the Password Reset Plug-in in BMS to the Latest Version |
BMS provides the one-click password resetting function. If you have installed the password reset plug-in, you can reset the password in a few clicks when your BMS password is lost or expires. Updating the password reset plug-in ensures that vulnerabilities are fixed in a timely manner. |
|
Setting Firewall Policies in BMS to Restrict the Accesses to Metadata |
BMS metadata contains basic BMS information, such as the cloud service ID, host name, and network information, and may also contain sensitive information. Because the multi-user design of the guest OS gives metadata a large access scope, an SSRF vulnerability in a tenant application may lead to metadata leakage. You can configure firewall policies on the BMS to restrict access to metadata and mitigate this risk. |
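To make the security group hardening item above concrete, the sketch below shows one way to flag overly permissive inbound rules. The rule data is hypothetical and hand-written, and the set of "sensitive ports" is an illustrative assumption, not a CCE or ECS default; in practice the rules would be exported from your VPC configuration.

```python
import ipaddress

# Example ports that should normally not be open to the whole Internet
# (SSH, RDP, kube-apiserver, kubelet); adjust to your own policy.
SENSITIVE_PORTS = {22, 3389, 6443, 10250}

# Hypothetical security group rules.
rules = [
    {"direction": "inbound", "protocol": "tcp", "port": 22,   "source": "0.0.0.0/0"},
    {"direction": "inbound", "protocol": "tcp", "port": 443,  "source": "10.0.0.0/8"},
    {"direction": "inbound", "protocol": "tcp", "port": 6443, "source": "203.0.113.0/24"},
]

def is_world_open(cidr: str) -> bool:
    """Return True if the source CIDR covers the entire IPv4 address space."""
    return ipaddress.ip_network(cidr).num_addresses >= ipaddress.ip_network("0.0.0.0/0").num_addresses

for rule in rules:
    if (
        rule["direction"] == "inbound"
        and rule["port"] in SENSITIVE_PORTS
        and is_world_open(rule["source"])
    ):
        print(f"Harden this rule: port {rule['port']} is open to {rule['source']}")
```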
Huawei Cloud Security Configuration — Databases
Cloud Service |
Check Item |
Description |
---|---|---|
RDS for MySQL |
Configuring Proper Security Group Rules |
A security group logically groups instances within a VPC that have the same security requirements, helping ensure database security and stability. |
Avoiding Binding an EIP to Access RDS for MySQL over Internet |
Do not deploy your instance on the Internet or in a demilitarized zone (DMZ). Instead, deploy it on an intranet and use routers or firewalls to control access to your instance. Do not bind EIPs to RDS for MySQL instances for access over the Internet. In this way, unauthorized access and DDoS attacks can be prevented. |
|
Enabling Encrypted Communication |
If TLS encrypted connections are not enabled, data is transmitted in plaintext between the MySQL client and server, which is vulnerable to eavesdropping, tampering, and man-in-the-middle attacks. If you are connected to the MySQL server through an insecure network such as the Internet, it is important to enable the TLS encrypted connection. |
|
Disallowing the Use of the Default Port |
The default port of MySQL is 3306. The default port is more likely to be scanned and eavesdropped on, which poses security risks. You are advised to use a non-default port. |
|
Enabling Transparent Data Encryption |
Transparent Data Encryption (TDE) performs real-time I/O encryption and decryption on data files. Data is encrypted before being written to disks and is decrypted when being read from disks to memory. This effectively protects databases and data files. |
|
Updating the Database Version to the Latest |
When a new CVE vulnerability is released in the MySQL community, the impact of the vulnerability will be analyzed in a timely manner and the patch release plan will be determined based on the actual impact of the vulnerability. You are advised to upgrade the system and repair vulnerabilities in a timely manner to prevent vulnerabilities from affecting data security. |
|
Enabling Database Audit Log |
The audit function records all user activities on databases in real time. You can view audit logs to perform security assessments and pinpoint problem causes, thereby enhancing system operational efficiency. |
|
RDS for PostgreSQL |
Enabling the Backup Function and Configuring a Backup Policy |
Regularly backing up your databases is recommended. If your database becomes faulty or data is corrupted, you can restore it from backups. |
Enabling Log Recording for User Logins |
To ensure database security and traceability, all operations performed by login users are recorded for security audit. The log_connections parameter records an authentication log entry for each connection attempt to the server, and log_disconnections records session termination logs. If important data is lost due to attacks or misoperations by internal employees, the login IP address can then be located in a timely manner. |
|
Disabling Default Ports |
The default port of PostgreSQL is 5432. The default port is more likely to be eavesdropped on. You are advised to use a non-default port. |
|
Configuring Proper Security Group Rules |
A security group logically groups instances within a VPC that have the same security requirements, helping ensure database security and stability. |
|
Configuring a Client Authentication Timeout |
The authentication_timeout parameter specifies the maximum duration allowed to complete client authentication, in seconds. This parameter prevents clients from occupying a connection for a long time. The default value is 60s. |
|
Restricting the IP Addresses That Can Connect to Databases |
If databases are accessible over any IP addresses, the risk of being attacked will increase. |
|
Enabling Database Audit Log |
You can use the PostgreSQL audit extension (pgAudit) with RDS for PostgreSQL instances to record all operations performed on the database. By viewing audit logs, you can perform security audit and root cause analysis on the database, improving system O&M efficiency. |
|
Updating the Database Version to the Latest Version |
PostgreSQL 9.5/9.6/10 has reached EOL and is no longer maintained by the community. The EOS notice has been released for PostgreSQL 9.5/9.6 on the cloud. Earlier versions may contain vulnerabilities, and running the latest version helps protect the system from known attacks. |
|
GaussDB databases |
Configuring Maximum Number of Concurrent Database Connections |
GaussDB connections use server resources. If there are too many concurrent connections, the database response will become slow. The maximum concurrent connections must be configured. This value cannot exceed the maximum thread capacity of the operating system in use, or the configuration becomes invalid. By configuring the maximum connections appropriately, you can mitigate the risk of Distributed Denial-of-Service (DDoS) attacks and optimize system resource utilization for optimal operational response capabilities. |
Permissions Management |
To prevent any database account from creating tables or other database objects in the PUBLIC schema, CREATE permissions must be restricted for the PUBLIC schema. |
|
Enabling Database Audit Log |
The audit function records all user activities on databases in real time. You can view audit logs to perform security assessments and pinpoint problem causes, thereby enhancing system operational efficiency. |
|
Security Authentication |
To prevent accounts from being cracked, you can set the maximum login retries and the automatic unlocking duration in GaussDB. |
|
User Password Security |
User passwords are stored in the system catalog pg_authid. To prevent password leakage, GaussDB encrypts user passwords before storing them. The cryptographic algorithm is determined by the configuration parameter password_encryption_type. All passwords of GaussDB database users must have a validity period. You can configure password_effect_time to set a validity period, and configure password_notify_time to set password change reminders. |
|
WAL Archiving Configuration |
Write-Ahead Log (WAL), also called Xlog, records database changes before they are written to data files. Archiving WAL logs preserves them for recovery after a failure, so WAL archiving should be properly configured. |
|
Enabling the Backup Function and Configuring a Backup Policy |
GaussDB provides high availability, but if a primary database or a table in it is maliciously or mistakenly deleted, data in the standby database will also be deleted. In this case, you can only restore the deleted data from backups. |
|
Document Database Service (DDS) |
Enabling Encrypted Communication |
If TLS encrypted connections are not enabled, data is transmitted in plaintext between the MongoDB client and server, which is vulnerable to eavesdropping, tampering, and man-in-the-middle attacks. If you are connected to the MongoDB server through an insecure network such as the Internet, it is important to enable the TLS encrypted connection. |
Enabling the Backup Function and Configuring a Backup Policy |
DDS instances support automatic backup and manual backup. You can periodically back up the database. If the database is faulty or data is damaged, you can use the backup files to restore the database, ensuring data reliability. |
|
Disallowing the Use of the Default Port |
The default port for MongoDB is 27017. The default port is more likely to be eavesdropped on, so a non-default port is recommended. |
|
Patch Upgrade |
DDS supports patch upgrade. The upgrade involves adding new functions, fixing issues, and improving security and performance. |
|
Disabling the Script Running Function |
If the security.javascriptEnabled option is enabled, JavaScript scripts can run on MongoDB servers, which brings security risks. If javascriptEnabled is disabled, commands such as mapReduce and group cannot be used. If your application does not require such operations, you are advised to disable javascriptEnabled (see the configuration-check sketch after this table). |
|
Setting Second-Level Monitoring and Alarm Rules |
DDS monitors DB instances by default. When a metric value exceeds the preset threshold, an alarm is triggered. The system automatically sends an alarm notification to the cloud account through SMN, helping you learn about the status of DDS DB instances in a timely manner. |
|
Setting Maximum Connections |
MongoDB connections use server resources. If there are too many concurrent connections, the database response to operations (e.g. query, insert, update, and delete) will become slow. The maximum concurrent connections must be configured. This value cannot exceed the maximum thread capacity of the operating system in use, or the configuration becomes invalid. By configuring the maximum connections appropriately, you can mitigate the risk of Denial-of-Service (DoS) attacks and optimize system resource utilization for optimal operational response capabilities. |
|
Enabling Database Audit Log |
The audit function records all user activities on databases in real time. You can view audit logs to perform security assessments and pinpoint problem causes, thereby enhancing system operational efficiency. |
|
Enabling Disk Encryption |
Enabling static data encryption improves data security but slightly affects read/write performance. |
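The default-port and javascriptEnabled items above can be approximated with a simple configuration check. The dictionary below is a hypothetical stand-in for a parsed mongod.conf or instance parameter export; it is a sketch of the check logic, not a DDS API.

```python
# Hypothetical representation of a DDS/MongoDB configuration; in practice you would
# parse the instance parameters or mongod.conf instead of hard-coding a dict.
mongo_config = {
    "net": {"port": 27017},
    "security": {"javascriptEnabled": True, "authorization": "enabled"},
}

DEFAULT_PORT = 27017

def check_mongo_config(cfg: dict) -> list[str]:
    """Return a list of findings for the port and script-execution checks described above."""
    findings = []
    if cfg.get("net", {}).get("port", DEFAULT_PORT) == DEFAULT_PORT:
        findings.append("Default port 27017 is in use; switch to a non-default port.")
    if cfg.get("security", {}).get("javascriptEnabled", True):
        findings.append("security.javascriptEnabled is on; disable it if mapReduce/group are not needed.")
    return findings

for finding in check_mongo_config(mongo_config):
    print("NON-COMPLIANT:", finding)
```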
Huawei Cloud Security Configuration Baseline — Storage
Cloud Service |
Check Item |
Description |
---|---|---|
Object Storage Service (OBS) |
Enabling Server-Side Object Encryption in OBS |
With server-side encryption enabled, OBS encrypts a user's object as ciphertext before storing it on the server. |
Enabling WORM in Compliance Scenarios |
OBS provides the Write Once Read Many (WORM) function, that is, data can be written once and read multiple times. This ensures that data of a specified object version cannot be overwritten or deleted within a specified period of time. |
|
Using a User-Defined Domain Name in Scenarios Where OBS Objects Need to Be Previewed Online |
Based on security and compliance requirements, Huawei Cloud OBS forbids online preview of objects in a bucket using the default domain name of OBS. That is, when you use the default domain name to access objects (such as videos, images, and web pages) in the bucket from a browser, the object content is not displayed but is downloaded as an attachment. If you want to preview OBS objects online, use a user-defined domain name. |
|
Disabling Anonymous Access |
Disable anonymous public access to OBS buckets to protect private data from disclosure (except in scenarios where static websites need to be publicly accessible). |
|
Using OBS Temporary Data Sharing to Share Specified Data |
If you need to share the objects, that is, files or folders, stored in OBS with other users, you can use temporary data sharing to generate a sharing URL. This URL will become invalid after the specified validity period expires. |
|
Controlling Permissions of OBS Resources Using Both VPC Endpoint and OBS Bucket Policies |
You can set a VPC endpoint policy to restrict servers, such as ECS, CCE, and BMS, in a VPC to access specific resources in OBS. You can also set a bucket policy to restrict servers in a specified VPC to access OBS buckets. In this way, access control with both VPC endpoint and bucket policies is implemented. |
|
Enabling Versioning |
OBS versioning can be used to retain multiple versions of an object in a bucket, improving the quick recovery capability in case of data exceptions. |
|
Enabling URL Validation |
OBS URL validation can help protect user data from being stolen. |
|
Using Bucket Policies to Restrict Access to OBS Buckets over HTTPS |
The SecureTransport condition in the bucket policy specifies that HTTPS must be used to perform operations on the bucket, ensuring transmission security during data upload and download (see the policy sketch after this table). |
|
Avoiding Creating Public Objects in a Private Bucket |
Disabling public read permissions for anonymous users on objects in OBS buckets prevents objects from being exposed to everyone and avoids data breaches. |
|
Enabling Cross-Region Replication |
The cross-region replication function enables cross-region data disaster recovery. |
|
Scalable File Service (SFS) |
Ensuring that the SFS Turbo File System Encryption Is Enabled |
SFS Turbo file system encryption protects your static data. SFS Turbo file system encryption ensures that data is automatically encrypted when it is written from your applications to SFS Turbo file system and automatically decrypted when it is read from SFS Turbo file system. |
Elastic Volume Service (EVS) |
Ensuring That EVS Encryption Is Enabled |
EVS encryption protects your static data. EVS encryption ensures that data is automatically encrypted when it is written from cloud servers to EVS disks and is decrypted when read from disks. |
Cloud Backup and Recovery (CBR) |
Enabling Cross-Region Backup Replication |
Cross-region replication is more secure and reliable. |
Enabling Forcible Backup |
Enabling forcible backup can protect the security and accuracy of user data to the greatest extent and ensure service security. |
|
Enabling Re-confirmation for Deleting Backup Data |
To prevent backup data from being deleted by mistake, enable the re-confirmation mechanism. |
|
Selecting an Encryption Disk for Storing Backup Data |
You can select an encryption disk in CBR to store backups. The encryption attribute of backups cannot be changed manually. |
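As a rough illustration of the SecureTransport item above, the sketch below assembles a deny-unless-HTTPS policy statement. The statement structure (Sid, Effect, Principal, Action, Resource, Condition) is assumed to follow an S3-style grammar; the bucket name and statement ID are placeholders, and the exact field names should be taken from the OBS bucket policy documentation.

```python
import json

# Hypothetical bucket name; replace with your own.
BUCKET = "example-bucket"

# Sketch of a statement that denies any request not sent over HTTPS,
# based on the SecureTransport condition described above.
deny_plain_http = {
    "Sid": "DenyNonHttpsAccess",
    "Effect": "Deny",
    "Principal": {"ID": "*"},
    "Action": ["*"],
    "Resource": [BUCKET, f"{BUCKET}/*"],
    "Condition": {"Bool": {"SecureTransport": "false"}},
}

print(json.dumps({"Statement": [deny_plain_http]}, indent=2))
```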
Huawei Cloud Security Configuration Baseline — Enterprise Intelligence
Cloud Service |
Check Item |
Description |
---|---|---|
Data Warehouse Service (DWS) |
Enabling Cluster Data Encryption |
GaussDB(DWS) can enable database encryption for clusters to protect data at rest and avoid security issues such as database cracking. |
Enabling DWS Database Audit Log |
GaussDB(DWS) supports database operation audit logs. DWS database audit logs are separated from the audit logs on the management plane. Database audit logs can record operations performed by each user in the database. It is recommended that you enable the operation log function as needed. This will facilitate historical data tracing and enhance data security. |
|
Enabling GaussDB(DWS) Management Console Audit Logs |
GaussDB(DWS) uses Cloud Trace Service (CTS) to record mission-critical operations, such as cluster creation, snapshot creation, cluster scale-out, and cluster restart, performed on the GaussDB(DWS) management console. The logs can be used for scenarios such as security analysis, compliance audit, resource tracing, and fault locating. After this function is enabled, operations on the console can be audited, and faults can be located easily. |
|
Enabling GaussDB(DWS) Database Audit Log Transfer |
Database audit logs in GaussDB(DWS) capture details about connections and user activities in a database. These audit logs help monitor databases for security purposes, troubleshooting, or locating historical operation records. By default, audit logs are stored in the database. You can dump audit logs to OBS to ensure that audit logs are backed up and to make it easier for you to view audit logs. |
|
Enabling Rights Separation on GaussDB(DWS) |
By default, the administrator specified when you create a GaussDB(DWS) cluster is the database system administrator. The administrator can create other users and view the audit logs of the database. That is, separation of permissions is disabled. To ensure cluster data security, GaussDB(DWS) supports separation of duties for clusters. Different types of users have different permissions. |
|
Enabling SSL-Encrypted Transmission |
SSL is a security protocol that uses digital certificates and digital signatures to mutually authenticate the server and client and to secure data transmission. |
|
ModelArts |
Using an IP Address Whitelist for Access to Notebook |
Notebook instances can be accessed directly over SSH and authenticated using key pairs. For enhanced security, using an IP address whitelist to limit which endpoints can access the instance is recommended (see the sketch after this table). |
Using Independent Agencies for Different IAM Users |
An authorization is required for using ModelArts resources. To control the permissions of each sub-user, you are advised to grant permissions separately when allocating agency permissions to each sub-user in the ModelArts global configuration function. Do not share one agency credential with multiple sub-users. |
|
Using a Dedicated Resource Pool |
If training, inference, and development environments are used, a dedicated resource pool should be used for the production environment. This ensures exclusive compute resources and resource isolation, enhancing security. |
|
Running a Custom Image as a Non-root User |
You can create Dockerfiles for custom images and push them to SWR. Considering the permission control scope, you are advised to explicitly define the default running user as a non-root user when customizing an image to reduce security risks during container running. |
|
Enabling "Strict Mode" |
When using ModelArts resources, you need to assign different agencies to different sub-users to minimize authorization. |
|
MapReduce Service (MRS) |
Cluster EIP and Security Group Control |
An EIP can be bound to an MRS cluster. If an EIP is bound and the security group allows the access, the MRS Manager page can be reached over the EIP, and the cluster can also be logged in to over SSH. To ensure security, properly manage and control security groups: avoid allowing untrusted IP addresses to access MRS clusters, and open ports for allowed IP addresses only as needed. Enabling all ports is not recommended. |
Separate Deployment of Management, Control, and Data Planes |
Templates Compact, OMS-separate, and Full-size are commonly used for deploying MRS clusters. To isolate data nodes from management and control nodes, the OMS-separate and Full-size templates are recommended. |
|
Enabling Kerberos Authentication |
MRS cluster components use Kerberos authentication. When Kerberos authentication is enabled, users can access component resources only after being authenticated. However, if Kerberos authentication is disabled, authentication and authorization are not required for accessing MRS components, which introduces risks to the cluster. |
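The notebook IP whitelist item above boils down to a CIDR allowlist check, which ModelArts enforces for you. The sketch below only illustrates the underlying logic; the CIDR ranges and client addresses are placeholders.

```python
import ipaddress

# Hypothetical allowlist of CIDR ranges permitted to reach the notebook instance.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any allow-listed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

for ip in ("10.1.2.3", "198.51.100.7"):
    print(ip, "allowed" if is_allowed(ip) else "blocked")
```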
General Data Protection Regulation — Disclosure to Third Parties
Check Item |
Description |
---|---|
The data controller shall notify data subjects and obtain their consent before disclosing their data to a third party. |
The data controller shall notify data subjects and obtain their consent before disclosing their data to a third party. Check all scenarios where personal data is disclosed to third parties, and check whether data subjects are notified and their consent is obtained before data disclosure. |
If the data controller entrusts a third party with personal data processing, the data controller needs to sign a data processing agreement (DPA) with the third party to specify the responsibilities and obligations of the third party as a data processor. |
If the data controller entrusts a third party with personal data processing, the data controller needs to sign a data processing agreement (DPA) with the third party to specify the responsibilities and obligations of the third party as a data processor. Check whether the responsibilities and obligations of the third-party data processor are specified in contracts and agreements in the following aspects:
|
When disclosing personal data to a third party, a contract or agreement needs to be signed to bind the third party and the joint controllers to their responsibilities and data protection measures, and timely responses to changes of the third party must be taken. |
When disclosing personal data to a third party, a contract or agreement needs to be signed to bind the third party and the joint controllers to their responsibilities and data protection measures, and timely responses to changes of the third party must be taken.
|
As a data processor, the organization shall establish a way to notify the data controller when there is a request from a third party to disclose personal data. |
Check whether a way to notify the data controller of personal data disclosure requests from third parties has been established. |
The data controller needs to communicate any rectification or erasure of personal data or restriction of processing carried out to each recipient to whom the personal data has been disclosed. |
The data controller needs to communicate any rectification or erasure of personal data or restriction of processing carried out to each recipient to whom the personal data has been disclosed.
|
General Data Protection Regulation — Cross-Border Data Transfer
Check Item |
Description |
---|---|
Your organization must develop a comprehensive system to ensure that cross-border data transfers meet GDPR requirements. |
Check whether the cross-border data transfer process is standardized through regulations, whether relevant personnel are arranged to review related documents, and whether the process implementation is supervised. |
The organization shall consider filtering or anonymizing personal data where cross-border transfers may take place to ensure that personal data cannot be restored in any way before transferring personal data as non-personal data across borders. |
The organization shall consider filtering or anonymizing personal data where cross-border transfers may take place to ensure that personal data cannot be restored in any way before transferring personal data as non-personal data across borders.
|
The organization shall ensure that the country where the data importer is located is listed in the adequacy decision list released by the European Commission on its website where cross-border data transfers are necessary for operating businesses. |
The organization shall ensure that the country where the data importer is located is listed in the adequacy decision list released by the European Commission on its website where cross-border data transfers are necessary for operating businesses.
|
The organization shall provide appropriate safeguards and enforceable data subject rights and effective legal remedies for data subjects when transferring their personal data to a country not included in the adequacy decision list released by the European Commission on its website. |
The organization shall provide appropriate safeguards and enforceable data subject rights and effective legal remedies for data subjects when transferring their personal data to a country not included in the adequacy decision list released by the European Commission on its website.
|
The organization shall meet the GDPR requirements on cross-border data transfers where the country where the data importer is located is not included in the adequacy decision list released by the European Commission on its website and appropriate safeguards are absent. |
The organization shall meet the GDPR requirements on cross-border data transfers where the country where the data importer is located is not included in the adequacy decision list released by the European Commission on its website and appropriate safeguards are absent.
|
As the data processor, you shall obtain the consent of the data controller before proactively transferring data across borders. |
As the data processor, you shall obtain the consent of the data controller before proactively transferring data across borders. Check whether a way to obtain consent from data controllers is specified in processes relating to cross-border data transfers. |
General Data Protection Regulation — Use, Retention, and Disposal of Personal Data
General Data Protection Regulation — Data Subjects' Access
Check Item |
Description |
---|---|
For systems that collect, process, and store personal data, the organization shall provide a way for data subjects to access their personal data. |
For systems that collect, process, and store personal data, the organization shall provide a way for data subjects to access their personal data. Check whether there is a way for data subjects to access personal data they provided. Check whether data subjects can learn if their personal data is being processed and if they can access their personal data. They can learn:
|
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects or data controllers to access their personal data. |
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects or data controllers to access their personal data. Check whether there is a way for data subjects to access personal data they provided. Check whether data subjects can learn if their personal data is being processed and if they can access their personal data. They can learn:
|
For systems that collect, process, and store personal data, the data controller shall provide a way for data subjects to modify their personal data. |
For systems that collect, process, and store personal data, the data controller shall provide a way for data subjects to modify their personal data. Check whether there is a way for data subjects to modify personal data they have provided. |
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects to modify their personal data. |
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects to modify their personal data. Check whether there is a way for data subjects to modify personal data they have provided. |
For systems that collect, process, and store personal data, the data controller shall provide a way for data subjects to delete their personal data. |
For systems that collect, process, and store personal data, the data controller shall provide a way for data subjects to delete their personal data. Check whether there is a way for data subjects to delete personal data they have provided. |
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects or data controllers to delete personal data provided by the data subjects. |
For systems that collect, process, and store personal data, the device provider shall provide a way for data subjects or data controllers to delete personal data provided by the data subjects. Check whether there is a way for data subjects to delete personal data they have provided. |
For systems that collect, process, and store personal data, your organization shall provide a way for data subjects to restrict the processing of their personal data. |
For systems that collect, process, and store personal data, your organization shall provide a way for data subjects to restrict the processing of their personal data. Check whether a mechanism is provided for data subjects to restrict the processing of their personal data. |
For systems that collect, process, and store personal data, the organization shall provide a way for data subjects to export their personal data. |
For systems that collect, process, and store personal data, the organization shall provide a way for data subjects to export their personal data. Check whether a mechanism is provided for data subjects to export personal data. |
A way to respond to legal requests from data subjects within a specified time limit must be available in accordance with applicable laws and standards to protect the legal rights of data subjects. |
A way to respond to legal requests from data subjects within a specified time limit must be available in accordance with applicable laws and standards to protect the legal rights of data subjects. Check with internal personnel responsible for personal data compliance whether a way to handle data subjects' legal requests is provided in accordance with applicable laws and standards and whether the response time limit is specified, to protect the rights of data subjects. The mechanism includes but is not limited to the following:
|
The organization shall provide a way to notify users of the use of their personal data for direct marketing purposes and to allow users to withdraw their consent to such use. |
The organization shall provide a way to notify users of the use of their personal data for direct marketing purposes and to allow users to withdraw their consent to such use.
|
General Data Protection Regulation — Notifications
Check Item |
Description |
---|---|
The data controller shall provide a privacy statement for data subjects. |
The data controller shall provide a privacy statement for data subjects.
|
The device vendor shall provide a description of any personal data that may be processed by their products and provide a privacy statement page as required by the data controller. |
The device vendor shall provide a description of any personal data that may be processed by their products and provide a privacy statement page as required by the data controller.
|
The device vendor shall provide a description of any personal data that may be processed by their products when obtaining personal data from a third party. |
The device vendor shall provide a description of any personal data that may be processed by their products when obtaining personal data from a third party. Check whether your organization has provided documentation describing the personal data processed by your products. |
End user-oriented systems shall provide a way to verify user credentials when requiring users to register their personal data by self-service. |
End user-oriented systems shall provide a way to verify user credentials when requiring users to register their personal data by self-service. Check whether a way to verify data subject identities has been provided. |
The data processor shall inform the data controller of any intended changes concerning the addition or replacement of other processors and obtain written authorization from the data controller for such changes. |
The data processor shall inform the data controller of any intended changes concerning the addition or replacement of other processors and obtain written authorization from the data controller for such changes. Check whether there is a way to notify the data controller and to obtain the written authorization from the data controller when the data processing involves the addition or replacement of other data processors. |
The data controller shall establish a way to notify data subjects of personal data breaches. |
The data controller shall establish a way to notify data subjects of personal data breaches. Check whether there is a way to notify data subjects of personal data breaches. |
The data controller shall establish a way to notify supervisory authorities of personal data breaches. |
The data controller shall establish a way to notify supervisory authorities of personal data breaches. Check whether there is a way to notify supervisory authorities of personal data breaches. |
The data processor shall establish a way to notify data controllers of personal data breaches. |
The data processor shall establish a way to notify data controllers of personal data breaches. Check whether there is a way to notify data controllers of personal data breaches. |
General Data Protection Regulation — Choice and Consent
Check Item |
Description |
---|---|
The data controller shall only collect and process personal data with data subject's consent, for the purpose of contract or agreement fulfillment, or for other legal reasons. The data controller shall also provide a way to withdraw consent. |
The data controller shall only collect and process personal data with data subject's consent, for the purpose of contract or agreement fulfillment, or for other legal reasons. The data controller shall also provide a way to withdraw consent.
|
A privacy impact assessment (PIA) shall be conducted if the legal basis for the processing of personal data is "legitimate interest." |
A privacy impact assessment (PIA) shall be conducted if the legal basis for the processing of personal data is "legitimate interest." Check whether there is a way to conduct a privacy impact assessment (PIA). |
The privacy policy and user agreement shall be accessible at any time. |
The privacy policy and user agreement shall be accessible at any time. Check whether the privacy policy and user agreement are placed in an intelligible and easily accessible form. |
The organization shall provide a way to obtain users' explicit consent (i.e. ticking a box) before collecting user's personal data (e.g. upon user registration or initial use of apps). |
The organization shall provide a way to obtain users' explicit consent (i.e. ticking a box) before collecting users' personal data (e.g. upon user registration or initial use of apps). Check whether user consent is obtained before personal data collection, whether consent is obtained through the user's proactive actions, and whether there is any misleading behavior. |
Processing of personal data relating to criminal convictions and offences shall be carried out only under the control of an official authority. |
Processing of personal data relating to criminal convictions and offences shall be carried out only under the control of an official authority.
|
Methods and channels shall be provided to withdraw consent or opt out of data collection. The data controller must stop collecting or processing the personal data after data subjects withdraw their consent. |
Methods and channels shall be provided to withdraw consent or opt out of data collection. The data controller must stop collecting or processing the personal data after data subjects withdraw their consent. Check whether there is a way to partially consent and withdraw consent if desired. Note: Withdrawing consent means that data subjects can withdraw their consent to personal data collection in a convenient form, for example, in the same form used to grant their consent. |
Withdrawing consent must be as easy as giving consent. |
Withdrawing consent must be as easy as giving consent.
|
General Data Protection Regulation — Organizational Structure
Check Item |
Description |
---|---|
The data controller or the processor shall designate in writing a representative in the EU while providing related goods or services to data subjects in the EU or monitoring the behavior of data subjects. |
The data controller or the processor shall designate in writing a representative in the EU while providing related goods or services to data subjects in the EU or monitoring the behavior of data subjects.
|
The organization shall designate a data protection officer. |
The organization shall designate a data protection officer.
|
Common Weak Password Detection — Weak Password Detection
Check Item |
Description |
---|---|
Weak password detection |
This item scans for common weak passwords and reminds users to change insecure passwords (see the blocklist sketch after this table). |
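At its core, a weak-password scan is a lookup against a dictionary of known weak values. The sketch below uses a tiny hand-written blocklist for illustration; real scans rely on much larger dictionaries and also check variants such as keyboard patterns.

```python
# Hypothetical blocklist; real weak-password scans use much larger dictionaries.
WEAK_PASSWORDS = {"123456", "password", "admin123", "qwerty", "111111"}

def is_weak(password: str) -> bool:
    """Return True if the password appears in the blocklist (case-insensitive)."""
    return password.lower() in WEAK_PASSWORDS

for pwd in ("P@ssw0rd!2024", "admin123"):
    print(pwd, "-> weak" if is_weak(pwd) else "-> not in blocklist")
```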
Password Complexity Policy Detection — Password Complexity
Check Item |
Description |
---|---|
Password length check |
This item checks the password length policy set for a server to make sure each password is not shorter than a specified length. |
Uppercase letter check |
This item checks the password complexity policy set for a server to make sure the number of uppercase letters in a password is not less than a specified value. |
Lowercase letter check |
This item checks the password complexity policy set for a server to make sure the number of lowercase letters in a password is not less than a specified value. |
Digit check |
This item checks the password complexity policy set for a server to help make sure the number of digits in a password is not less than a specified value. |
Special character check |
This item checks the password complexity policy set for a server to help make sure the number of special characters in a password is not less than a specified value (see the sketch after this table, which consolidates these checks). |
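The sketch below consolidates the complexity checks above into a single validator. The thresholds (minimum length 8, at least one character from each class) are example values assumed for illustration; align them with the password policy actually configured on the server.

```python
import re

# Example thresholds; align these with the password policy configured on the server.
MIN_LENGTH = 8
MIN_UPPER = 1
MIN_LOWER = 1
MIN_DIGITS = 1
MIN_SPECIAL = 1

def complexity_findings(password: str) -> list[str]:
    """Return the list of complexity rules the password fails to meet."""
    findings = []
    if len(password) < MIN_LENGTH:
        findings.append(f"shorter than {MIN_LENGTH} characters")
    if len(re.findall(r"[A-Z]", password)) < MIN_UPPER:
        findings.append("too few uppercase letters")
    if len(re.findall(r"[a-z]", password)) < MIN_LOWER:
        findings.append("too few lowercase letters")
    if len(re.findall(r"\d", password)) < MIN_DIGITS:
        findings.append("too few digits")
    if len(re.findall(r"[^A-Za-z0-9]", password)) < MIN_SPECIAL:
        findings.append("too few special characters")
    return findings

print(complexity_findings("Weakpass"))     # fails the digit and special-character rules
print(complexity_findings("Str0ng!Pass"))  # []
```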
PCI-DSS — Maintain an Information Security Policy
Check Item |
Description |
---|---|
Resources must be well prepared so that a budget for personnel, technologies, network environments, facilities, information, and finance can be developed to help achieve the cybersecurity and privacy protection objectives. |
Check whether a budget for personnel, technologies, network environments, facilities, information, and finance has been developed. For example, specific personnel are designated for 24/7 alarm responses. |
A dedicated team or individual must be authorized or appointed by the executive to oversee cybersecurity and privacy protection. Responsibilities and permissions for these roles are clearly defined. |
|
The organization shall review and update cybersecurity and privacy protection management policies, processes, standards, and documentation at least once a year based on the information obtained from continuously monitored data and regular assessments. Personnel shall be designated to develop, distribute, and update the documentation as needed. |
|
Your organization's assets, including physical devices, systems, virtual devices, software, and data, are identified, and asset risks can be detected based on asset criticality, threat impacts, and risk likelihood. |
Check the asset risk threat report to see if assets within the scope have been identified. Determine the risks based on asset criticality, threat impacts, and risk likelihood. The assets include:
|
Your organization must create and maintain an asset list that covers all components and asset importance, owner, location, status, and asset associations. |
|
Your organization must develop a data governance policy and explicitly define roles and responsibilities for all parties engaged in the data lifecycle, including data collection, use, storage, transmission, sharing, disclosure, and secure destruction. |
|
Security awareness and job skill training plans for different positions must be prepared, carried out, and periodically updated. |
|
Internal and external audits must be performed at least once a year or when major changes occur to ensure compliance with security policies, standards, and requirements. Quarterly reviews must be conducted and review process records must be documented. |
Check audit records to see whether internal and external audits are performed once a year or when major changes occur, including but not limited to the following aspects:
Check audit records to see whether your organization, as the personal data controller, meets the following requirements during the internal audit:
Check whether quarterly reviews are conducted and whether review process records are documented. The review content includes but is not limited to the following:
|
PCI-DSS — Implement Strong Access Control Measures
Check Item |
Description |
---|---|
Your organization must classify and mark assets based on factors such as business importance and data sensitivity. |
Check whether assets are classified and marked. Check whether asset identification records are used to record asset protection requirements. |
Your organization must review the asset list at least once a year or after each major change. |
Check with asset management personnel whether the asset list is reviewed at least once a year. Check with asset management personnel whether the asset list is reviewed after major changes. |
Your organization must develop a media management system to restrict and protect media usage and access and take physical and logical protection measures. |
Check with asset management personnel whether they take physical and logical protection measures for the use of media. Check with asset management personnel what measures are taken to restrict the use of media. |
Your organization must develop an authorization system for media transfer, implement security controls, and take protective measures during transfers. |
Check whether there is a storage media protection system to:
Check whether protection measures are taken during storage media transfer, including but not limited to the following:
|
Your organization must securely destroy assets that are no longer used, including permanently deleting data and destroying media. |
Check with asset management personnel whether they securely destroy the data storage assets that are no longer used. Check with asset management personnel whether they physically destroy the assets by shredding or incineration. If the assets are destroyed by a qualified third party, check with the third party whether the assets are physically destroyed and whether the certificates of data destruction are provided. |
You have implemented an appropriate user identification management policy, including assigning a unique account name and unique identification code to users, setting validity periods, and identifying cross-organization accounts. |
Check whether the configuration of the user account management system meets the following requirements:
Check whether there is a cross-organization account differentiation mechanism. |
You have security controls in place to automatically lock accounts after a certain number of failed login attempts. |
Check whether there is an approach to automatically locking accounts. Check account lockout settings to see if an account will be locked for at least 30 minutes after six consecutive failed login attempts (see the lockout sketch after this table). |
Your permissions management must comply with the principles of on-demand allocation, least privilege, and separation of duties (SOD). |
Check whether the existing permissions management specifications adhere to the following principles:
|
Role-based or attribute-based access control mechanism must be developed. The access requirements of each role must be defined clearly, and access permissions must be assigned based on the role requirements. |
Check whether the permission management policies include but are not limited to the following:
If the management of cardholder data is involved, check whether the access to any database that contains cardholder data (including applications, administrators, and all other users) is restricted. |
Accounts and permissions must be changed within 24 hours after the responsibilities of internal and external employees change. |
Check whether the accounts and permissions are changed within 24 hours after an employee transfer or departure. |
Your password policies must comply with industry standards. Do not use common or shared passwords or those that are the same as accounts. |
Check whether the system password policies meet the following requirements:
|
A password assignment policy must be created. For example, a random password is assigned for the first login, and the password must be changed after the first login. The new password must meet the password complexity requirements. |
Check whether documents related to password control are in place, including but not limited to the following:
Check the password assignment records and check whether there are requirements for the initial password and for password complexity in general. |
Authentication credentials such as passwords must be encrypted using encryption algorithms such as AES, RSA, or TDES during transmission and storage, and encrypted channels must be used during transmission. |
Check the password transmission and storage policies to see whether authentication credentials such as passwords are encrypted using encryption algorithms such as AES, TDES/TDEA, or RSA. Check whether transmission channels have been encrypted during password transmission. Check whether unencrypted static credentials are prohibited from being included in applications or access scripts. |
Your multi-factor authentication (MFA) settings must meet security requirements. Ensure that MFA is associated with a unique account and cannot be shared among multiple accounts. At least one of the authentication methods used must be encrypted. |
Check whether the management requirements for MFA factors in permission management specifications include but are not limited to the following:
Note: In MFA, two or more authentication methods (such as passwords, cryptographic techniques, and biometric authentication) must be used to authenticate users. |
All accounts and their permissions must be reviewed periodically, for example, at least once a month. If any deviation between the accounts and permissions is found, rectify the issue within the specified period. |
Check whether a list of all accounts and their permissions is maintained. Check the maintenance records for the list to see whether all accounts and permissions are reviewed at least once a month and whether deviations between accounts and permissions are rectified within the specified period. |
Physical access control measures must be implemented to restrict physical access to assets such as physical ports, network jacks, wireless access points, and telecommunication lines. |
Check whether a physical access control policy document for assets has been developed. The document includes but is not limited to the following content:
Check whether the following assets are under access control:
|
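The account lockout requirement above (lock for at least 30 minutes after six consecutive failures) can be illustrated with a small state machine. The thresholds below mirror the example values in the check; the in-memory dictionary is a placeholder for whatever persistent store a real system would use.

```python
import time

MAX_FAILURES = 6            # consecutive failed attempts before lockout
LOCKOUT_SECONDS = 30 * 60   # lock for at least 30 minutes

# In-memory state for illustration; a real system would persist this per account.
_state = {}  # account -> {"failures": int, "locked_until": float}

def record_login_attempt(account: str, success: bool, now: float | None = None) -> str:
    """Update lockout state for an account and return its status."""
    now = time.time() if now is None else now
    entry = _state.setdefault(account, {"failures": 0, "locked_until": 0.0})

    if now < entry["locked_until"]:
        return "locked"
    if success:
        entry["failures"] = 0
        return "ok"
    entry["failures"] += 1
    if entry["failures"] >= MAX_FAILURES:
        entry["locked_until"] = now + LOCKOUT_SECONDS
        entry["failures"] = 0
        return "locked"
    return "failed"

# Six consecutive failures lock the account.
for _ in range(6):
    status = record_login_attempt("alice", success=False)
print(status)  # -> locked
```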
PCI-DSS — Build and Maintain a Secure Network and Systems
Check Item |
Description |
---|---|
Your organization's data flow must be mapped out and data inventories must be maintained. Data storage modes, data processing and transmission information, and data locations must be recorded. |
Check whether data flow diagrams and data inventories have been developed and maintained, and whether data locations have been documented. The following is an example:
|
You have hardened all software and hardware assets in accordance with security configuration baselines. |
Check whether a formal security configuration baseline document is in place. The document should list specific security configuration requirements for different types of software and hardware assets, such as operating systems, databases, and network devices. Review the existing software and hardware configurations manually or with automated tools to ensure that they comply with the established security configuration baselines, including but not limited to firewall rules, user permission settings, and service and port statuses. Check whether security hardening steps derived from the security configuration baselines are applied during system deployment or updates and incorporated into the change management process (see the drift-check sketch after this table). |
You have established security configuration baselines that comply with industry standards and meet the following requirements:
|
Check whether a security configuration baseline has been established. Check whether the security configuration baseline meets the following requirements:
Check whether the time sources of NTP servers comply with industry standards. |
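To illustrate the time-source requirement above, the following sketch counts the servers configured in a chrony- or ntpd-style configuration file. The file path and the threshold of three time sources are assumptions taken for illustration.

```python
# Sketch: counting configured NTP time sources in a chrony/ntpd style config.
# The path and the three-source threshold are assumptions for illustration.
from pathlib import Path

def count_time_sources(conf_path: str = "/etc/chrony.conf") -> int:
    sources = 0
    for line in Path(conf_path).read_text().splitlines():
        line = line.strip()
        if line.startswith(("server ", "pool ", "peer ")):
            sources += 1
    return sources

if __name__ == "__main__":
    n = count_time_sources()
    print("compliant" if n >= 3 else f"non-compliant: only {n} time source(s) configured")
```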
Automated tools must be provided for centralized management of security configuration baselines and periodical inspections on configuration file changes and content file integrity. |
Check whether automated tools have been deployed for security configuration baseline management. Check whether the automated tools can:
|
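As a minimal sketch of the integrity-inspection capability described above, the snippet below records SHA-256 digests of monitored configuration files and reports any file whose digest has changed since the previous run. A production tool would also cover scheduling, alerting, and tamper-proof storage of the digest database; the watch list and snapshot file are placeholders.

```python
# Minimal file-integrity sketch: detect changes to monitored configuration
# files by comparing SHA-256 digests against a previously saved snapshot.
import hashlib
import json
from pathlib import Path

SNAPSHOT = Path("config_digests.json")                 # digest database (illustrative)
MONITORED = ["/etc/ssh/sshd_config", "/etc/passwd"]    # example watch list

def digest(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_integrity() -> list:
    previous = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
    current = {p: digest(p) for p in MONITORED if Path(p).exists()}
    # A file is flagged only if it was seen before and its digest differs now.
    changed = [p for p, d in current.items() if previous.get(p) not in (None, d)]
    SNAPSHOT.write_text(json.dumps(current, indent=2))
    return changed

if __name__ == "__main__":
    for path in check_integrity():
        print(f"integrity change detected: {path}")
```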
Your change and rollback procedure must be tested based on the risk assessment to ensure that there is no negative impact on operations and security of the organization. |
Check whether a documented test process for the change and rollback procedure is in place. Check whether change and rollback tests meet the following requirements:
|
Anti-malware tools must be used to detect, remove, and defend against all known types of malware or malicious code. |
Check whether there are anti-malware tools in place to detect, remove, and defend against all the known types of malware or malicious code. Check whether the anti-malware tools meet at least the following requirements:
|
The network security design and configuration information must be recorded in the network topology document and updated. |
Check whether there is a network topology document recording the network security design and configuration information. Check whether the document is updated in a timely manner and whether the document includes but is not limited to the following content:
|
Security devices or services are deployed at network borders and cross-border traffic is secure and controllable. |
Check whether security devices or services are deployed at network borders to ensure that cross-border network traffic is secure and controllable. The deployed security devices or services include but are not limited to:
|
Accesses to network borders must be controlled. Access control rules that comply with industry standards must be set to control the inbound and outbound data packets. |
Check whether the access control list (ACL) denies malicious IP addresses and, by default, blocks communication on uncontrolled ports and from uncontrolled IP addresses. Check the ACL maintenance records to see whether invalid rules are deleted. Check whether cross-network access requires assessment and authorization. |
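A simplified evaluation of the default-deny requirement above could look like the sketch below. The rule format, the approved port list, and the findings are hypothetical and do not follow the syntax of any particular firewall or cloud ACL.

```python
# Sketch: verifying that an inbound rule set follows a default-deny policy and
# only permits explicitly approved ports. The rule format is hypothetical.
APPROVED_PORTS = {443, 22}

rules = [
    {"action": "allow", "port": 443,   "source": "0.0.0.0/0"},
    {"action": "allow", "port": 3389,  "source": "0.0.0.0/0"},   # should be flagged
    {"action": "deny",  "port": "any", "source": "0.0.0.0/0"},   # default deny
]

def audit_rules(rule_set):
    findings = []
    if not any(r["action"] == "deny" and r["port"] == "any" for r in rule_set):
        findings.append("missing default-deny rule")
    findings += [
        f"port {r['port']} allowed but not in the approved list"
        for r in rule_set
        if r["action"] == "allow" and r["port"] not in APPROVED_PORTS
    ]
    return findings

if __name__ == "__main__":
    for finding in audit_rules(rules):
        print("finding:", finding)
```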
Vulnerability remediation solutions must be developed for identified vulnerabilities and all critical assets must be protected appropriately. |
Check whether there is a formal vulnerability management policy or process document that clearly specifies how to identify, assess, and fix vulnerabilities in the system. The document should include the selection and use of vulnerability scanning tools, vulnerability prioritization, patch deployment schedules, and patch effectiveness verification methods. Review remediation documents and related log files to verify that vulnerability-specific remediation solutions have been developed and that remediation activities have been documented in detail, including specific steps, assigned responsible personnel, estimated completion time, and actual remediation results. |
PCI-DSS — Maintain a Vulnerability Management Program
Check Item |
Description |
---|---|
Security skill training and appraisal must be regularly arranged for internal and external employees who assume security roles and key responsibilities. Information security requirements must be incorporated into the performance appraisal system. |
|
You have formal design specifications and security architecture documents that provide guidelines on software development and system deployment. The documents must cover key security areas such as authentication, authorization, encryption, logging, and monitoring. |
|
Secure coding specifications must be formulated, and developers must write code in compliance with the specifications. The specifications should contain key security practices, such as input validation, error handling, data encryption, and session management. |
|
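As one small illustration of the input-validation practice named in the check item above, the sketch below validates an untrusted identifier against an allow-list pattern before it is used and fails closed on anything else. The pattern, the length limit, and the error handling are illustrative assumptions only.

```python
# Illustrative input-validation helper: accept only identifiers that match a
# strict allow-list pattern, and fail closed on anything else.
import re

IDENTIFIER_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")    # illustrative rule

def validate_identifier(value: str) -> str:
    if not IDENTIFIER_PATTERN.fullmatch(value):
        raise ValueError("invalid identifier")                # generic error, input not echoed
    return value

if __name__ == "__main__":
    print(validate_identifier("order_2024-01"))
    try:
        validate_identifier("1; DROP TABLE orders")           # rejected
    except ValueError as exc:
        print("rejected:", exc)
```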
Hosts, containers, and servers must be protected from malware. |
|
PCI-DSS — Regularly Monitor and Test Networks
Check Item |
Description |
---|---|
A penetration test plan must be developed, and dedicated personnel must be assigned and authorized to perform penetration tests on web applications and internal support systems at least once a year and whenever there are major updates or modifications to the applications. |
|
A change detection system must be deployed to detect unauthorized modifications and such modifications must be rectified. |
|
Logging must be enabled for network devices, hosts, virtualization platforms, and application software. |
|
Logging is enabled for access controls, O&M operations, sensitive data access, and system events. |
|
A log management system that meets relevant external standards or internal management requirements must be used to collect and analyze logs centrally. |
|
Logs must be backed up periodically and measures must be taken to protect the logs and their backups. |
|
Security audit logs must be retained for at least one year, and logs from at least the most recent three months must be queryable online or restorable from backups. |
|
Documented vulnerability scanning plans must be established and network environments must be periodically scanned (at least once a quarter) with manual or automated tools by specified individuals. |
|
Applicable techniques, such as intrusion detection, firewalls, and anti-DDoS systems, have been adopted to centrally monitor network attacks on network devices, hosts, containers, application systems, and security devices. |
|
A security monitoring platform must be used to continuously monitor collected security logs, identify and record attacks or abnormal behaviors such as unauthorized changes to key systems, log file integrity monitoring or change detection, abnormal behaviors of privileged accounts, and invalid logical access attempts. |
|
If a network attack or exception occurs, an alarm must be triggered and the related owner must be assigned to trace, verify, and handle the alarm. |
|
PCI-DSS — Protect Account Data
Check Item |
Description |
---|---|
You have identified scenarios where data must be encrypted for transmission based on data classification. |
Identify scenarios where encrypted transmission is required based on data classification, including but not limited to:
|
Technical measures must be taken to ensure data authenticity, confidentiality, and integrity during transmission. |
Check whether technical measures have been taken to ensure data authenticity, confidentiality, and integrity during transmission. These measures include but are not limited to the following:
Check whether insecure protocols, including SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, SSHv1, and IKEv1, have been prohibited. |
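For the protocol requirement above, the following standard-library sketch refuses to negotiate anything below TLS 1.2 when connecting to a host and reports the negotiated version. The host name is a placeholder, not a service referenced by this document.

```python
# Sketch: enforce a TLS 1.2+ floor on an outbound connection and report the
# negotiated protocol version. The host name is a placeholder.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # rejects SSLv3, TLS 1.0, TLS 1.1
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()                       # e.g. "TLSv1.3"

if __name__ == "__main__":
    print(check_tls("example.com"))
```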
Technical measures must be taken to ensure data confidentiality and integrity during data storage. |
|
You have a system to regularly identify and delete the data that is in excess of the retention period or no longer needed. |
|
Policies controlling how to use and protect cryptographic keys throughout their lifecycles must be developed. |
Check whether policies on how to use and protect cryptographic keys throughout their lifecycles are developed and meet the following requirements:
|
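One small aspect of a key lifecycle policy, periodic rotation, could be verified with a sketch like the one below. The key metadata structure and the 365-day rotation period are assumptions for illustration, not values defined by this check item.

```python
# Sketch: flag cryptographic keys that have exceeded an assumed rotation period.
# The metadata structure and the 365-day limit are illustrative assumptions.
from datetime import datetime, timedelta, timezone

ROTATION_PERIOD = timedelta(days=365)

keys = [
    {"id": "app-signing-key",   "created": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"id": "db-encryption-key", "created": datetime.now(timezone.utc) - timedelta(days=30)},
]

def keys_due_for_rotation(key_list):
    now = datetime.now(timezone.utc)
    return [k["id"] for k in key_list if now - k["created"] > ROTATION_PERIOD]

if __name__ == "__main__":
    for key_id in keys_due_for_rotation(keys):
        print(f"key '{key_id}' has exceeded the rotation period")
```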
PCI-DSS — Others
Check Item |
Description |
---|---|
Changes must be documented and risks of changes must be assessed and classified. |
Check whether risk assessment and classification for changes are documented. Check whether the following aspects of changes are included during risk assessment:
|
Policies on change notifications and implementation must be developed. Upon the completion of each change, the validity of the change must be verified, and the configuration library and applicable documentation must be updated. |
Check whether documented change notification and implementation policies are in place and check whether the following requirements are included in the documentation:
|
NIST SP 800-53 — System and Services Acquisition
Check Item |
Description |
---|---|
Secure coding specifications must be formulated, and developers must write code in compliance with the specifications. The specifications should contain key security practices, such as input validation, error handling, data encryption, and session management. |
|
NIST SP 800-53 — Program Management
Check Item |
Description |
---|---|
Resources must be well prepared so that a budget for personnel, technologies, network environments, facilities, information, and finance can be developed to help achieve the cybersecurity and privacy protection objectives. |
Check whether a budget for personnel, technologies, network environments, facilities, information, and finance has been developed. For example, specific personnel are designated for 24/7 alarm responses. |
A dedicated team or individual must be authorized or appointed by the executive to oversee cybersecurity and privacy protection. Responsibilities and permissions for these roles are clearly defined. |
|
Your organization must develop a data governance policy and explicitly define roles and responsibilities for all parties engaged in the data lifecycle, including data collection, use, storage, transmission, sharing, disclosure, and secure destruction. |
|
NIST SP 800-53 — Assessment, Authorization, and Monitoring
Check Item |
Description |
---|---|
A penetration test plan must be developed, and dedicated personnel must be assigned and authorized to perform penetration tests on web applications and internal support systems at least once a year and whenever there are major updates or modifications to the applications. |
|
Internal and external audits must be performed at least once a year or when major changes occur to ensure compliance with security policies, standards, and requirements. Quarterly reviews must be conducted and review process records must be documented. |
|
Vulnerability remediation solutions must be developed for identified vulnerabilities and all critical assets must be protected appropriately. |
|
NIST SP 800-53 — Audit and Accountability
Check Item |
Description |
---|---|
You have identified scenarios where data must be encrypted for transmission based on data classification. |
Check whether scenarios where encrypted transmission is required are identified based on data classification, including but not limited to:
|
Logging must be enabled for network devices, hosts, virtualization platforms, and application software. |
Check the log management policy document. Check whether logs are collected from the following systems and components:
|
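For application software specifically, the logging requirement above can be illustrated with the standard-library sketch below, which writes structured audit events both to a local file and to a syslog collector. The collector address and the log fields are placeholders; in production the address of the central log collector would be used.

```python
# Sketch: enabling application audit logging to a local file and a syslog
# collector. The collector address and event fields are placeholders.
import logging
import logging.handlers

logger = logging.getLogger("app.audit")
logger.setLevel(logging.INFO)

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")

file_handler = logging.FileHandler("audit.log")
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# In production, point this at the central log collector instead of localhost.
syslog_handler = logging.handlers.SysLogHandler(address=("localhost", 514))
syslog_handler.setFormatter(formatter)
logger.addHandler(syslog_handler)

logger.info("user=alice action=login result=success source_ip=198.51.100.7")
```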
Logging is enabled for access controls, O&M operations, sensitive data access, and system events. |
Check whether there is a log management policy document in place. Check whether the following types of logs are collected:
|
A log management system that meets relevant external standards or internal management requirements must be used to collect and analyze logs centrally. |
|
Logs must be backed up periodically and measures must be taken to protect the logs and their backups. |
Check whether there is a log protection policy document in place. Check whether log protection measures meet the following requirements: Access control is enforced to prevent unauthorized modifications.
|
Security audit logs must be retained for at least one year, and logs from at least the most recent three months must be queryable online or restorable from backups. |
|
A security monitoring platform must be used to continuously monitor collected security logs, identify and record attacks or abnormal behaviors such as unauthorized changes to key systems, log file integrity monitoring or change detection, abnormal behaviors of privileged accounts, and invalid logical access attempts. |
|
NIST SP 800-53 — Media Protection
Check Item |
Description |
---|---|
Your organization must develop a media management system to restrict and protect media usage and access and take physical and logical protection measures. |
|
Your organization must develop an authorization system for media transfer, implement security controls, and take protective measures during transfers. |
|
NIST SP 800-53 — System and Communications Protection
Check Item |
Description |
---|---|
Technical measures must be taken to ensure data authenticity, confidentiality, and integrity during transmission. |
Check whether technical measures have been taken to ensure data authenticity, confidentiality, and integrity during transmission. These measures include but are not limited to the following:
Check whether insecure protocols, including SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, SSHv1, and IKEv1, have been prohibited. |
Technical measures must be taken to ensure data confidentiality and integrity during data storage. |
|
Policies controlling how to use and protect cryptographic keys throughout their lifecycles must be developed. |
Check whether policies on how to use and protect cryptographic keys throughout their lifecycles are developed and meet the following requirements:
|
You have established security configuration baselines that comply with industry standards and meet the following requirements: (1) Only necessary and secure services, protocols, and ports are enabled based on the minimization principle. (2) Network devices are set to reject all network communication by default and the latest stable versions of them are used. (3) Unnecessary services, protocols, functions, and ports are disabled. (4) Default accounts are deleted, or the default account usernames and passwords are changed. (5) If insecure functions need to be enabled, additional security control measures must be implemented. (6) The clock synchronization server must comply with industry standards and use three time synchronization sources. (7) Documented records are kept. (8) Approval is required. |
|
Anti-malware tools must be used to detect, remove, and defend against all known types of malware or malicious code. |
|
The network security design and configuration information must be recorded in the network topology document and updated. |
Check whether there is a network topology document recording the network security design and configuration information. Check whether the document is updated in a timely manner and whether the document includes but is not limited to the following content:
|
Security devices or services are deployed at network borders and cross-border traffic is secure and controllable. |
Check whether security devices or services are deployed at network borders to ensure that cross-border network traffic is secure and controllable. The deployed security devices or services include but are not limited to:
|
Accesses to network borders must be controlled. Access control rules that comply with industry standards must be set to control the inbound and outbound data packets. |
|
Applicable techniques, such as intrusion detection, firewalls, and anti-DDoS systems, have been adopted to centrally monitor network attacks on network devices, hosts, containers, application systems, and security devices. |
|
NIST SP 800-53 — Incident Response
Check Item |
Description |
---|---|
If a network attack or exception occurs, an alarm must be triggered and the related owner must be assigned to trace, verify, and handle the alarm. |
|
NIST SP 800-53 — Physical and Environmental Protection
Check Item |
Description |
---|---|
Your organization must securely destroy assets that are no longer used, including permanent data deletion and media destruction. |
|
Physical access control measures must be implemented to restrict physical access to assets such as physical ports, network jacks, wireless access points, and telecommunication lines. |
|
NIST SP 800-53 — Planning and Policies
Check Item |
Description |
---|---|
A strategic plan with clear milestones has been developed for cybersecurity and privacy protection. The plan and its objectives are consistent with the overall business strategy of your organization. |
|
You have formal design specifications and security architecture documents that provide guidelines on software development and system deployment. The documents must cover key security areas such as authentication, authorization, encryption, logging, and monitoring. |
|
NIST SP 800-53 — System and Information Integrity
Check Item |
Description |
---|---|
You have a system to regularly identify and delete the data that is in excess of the retention period or no longer needed. |
|
A change detection system must be deployed to detect unauthorized modifications and such modifications must be rectified. |
|
A documented change and rollback procedure must be prepared for each change. A documented approval by authorized parties must be presented for each change. |
|
Your change and rollback procedure must be tested based on the risk assessment to ensure that there is no negative impact on operations and security of the organization. |
|
Hosts, containers, and servers must be protected from malware. |
|
NIST SP 800-53 — Access Control
Check Item |
Description |
---|---|
You have security controls in place to automatically lock accounts after a certain number of failed login attempts. |
|
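A minimal sketch of the lockout control above: track consecutive failed logins per account and lock the account once an assumed threshold is reached. The five-failure limit, the 60-minute window, and the in-memory storage are illustrative assumptions.

```python
# Sketch: lock an account after five failed logins within a 60-minute window.
# Thresholds and in-memory storage are simplifications for illustration.
import time
from collections import defaultdict, deque

MAX_FAILURES = 5
WINDOW_SECONDS = 60 * 60

failed_attempts = defaultdict(deque)   # account -> timestamps of recent failures
locked_accounts = set()

def record_failed_login(account, now=None):
    """Record a failed login; return True if the account is now locked."""
    now = time.time() if now is None else now
    attempts = failed_attempts[account]
    attempts.append(now)
    while attempts and now - attempts[0] > WINDOW_SECONDS:   # drop stale attempts
        attempts.popleft()
    if len(attempts) >= MAX_FAILURES:
        locked_accounts.add(account)
    return account in locked_accounts

if __name__ == "__main__":
    locked = False
    for _ in range(5):
        locked = record_failed_login("alice")
    print("alice locked:", locked)
```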
Your permissions management must comply with the principles of on-demand allocation, least privilege, and separation of duties (SOD). |
Check whether the existing permissions management specifications adhere to the following principles:
|
Role-based or attribute-based access control mechanism must be developed. The access requirements of each role must be defined clearly, and access permissions must be assigned based on the role requirements. |
|
Accounts and permissions must be changed within 24 hours after the responsibilities of internal and external employees change. |
Check whether the accounts and permissions are changed within 24 hours after an employee transfer or departure. |
All accounts and their permissions must be reviewed periodically, for example, at least once a month. If any deviation between the accounts and permissions is found, rectify the issue within the specified period. |
Check whether a list of all accounts and their permissions is maintained. Check the maintenance records for the list to see whether all accounts and permissions are reviewed at least once and whether deviations between accounts and permissions are rectified within the specified period. |
NIST SP 800-53 — Risk Assessment
Check Item |
Description |
---|---|
Your organization's assets, including physical devices, systems, virtual devices, software, and data, are identified, and asset risks can be detected based on asset criticality, threat impacts, and risk possibility. |
Check the asset risk threat report to see if assets within the scope have been identified. Determine the risks based on the asset criticality, threat impacts, and risk possibility. The assets include:
|
Documented vulnerability scanning plans must be established and network environments must be periodically scanned (at least once a quarter) with manual or automated tools by specified individuals. |
|
NIST SP 800-53 — Configuration Management
Check Item |
Description |
---|---|
Your organization must create and maintain an asset list that covers all components and asset importance, owner, location, status, and asset associations. |
Check whether an asset list is developed and maintained. Check whether all components are included in the list. Check whether the asset list describes the asset importance, owner, location, status, and asset association. |
Your organization must review the asset list at least once a year or after each major change. |
Check with asset management personnel whether the asset list is reviewed at least once a year. Check with asset management personnel whether the asset list is reviewed after major changes. |
Your organization's data flow must be mapped out and data inventories must be maintained. Data storage modes, data processing and transmission information, and data locations must be recorded. |
Check whether data flow diagrams and data inventories have been developed and maintained, and whether data locations have been documented. The following is an example:
|
You have hardened all software and hardware assets in accordance with security configuration baselines. |
|
Automated tools must be provided for centralized management of security configuration baselines and periodical inspections on configuration file changes and content file integrity. |
Check whether automated tools have been deployed for security configuration baseline management. Check whether the automated tools can:
|
Changes must be documented and risks of changes must be assessed and classified. |
|
Policies on change notifications and implementation must be developed. Upon the completion of each change, the validity of the change must be verified, and the configuration library and applicable documentation must be updated. |
Check whether documented change notification and implementation policies are in place and check whether the following requirements are included in the documentation:
|
NIST SP 800-53 — Awareness and Training
Check Item |
Description |
---|---|
The organization shall review and update cybersecurity and privacy protection management policies, processes, standards, and documentation at least once a year based on the information obtained from continuously monitored data and regular assessments. Personnel shall be designated to develop, distribute, and update the documentation as needed. |
Check whether there are records for annual reviews of cybersecurity and privacy protection management policies, processes, standards, and related documents. Check whether employees in charge of developing, distributing, and updating the related documents are designated. |
Security awareness and job skill training plans for different positions must be prepared, carried out, and periodically updated. |
Check whether there are records of position-specific security awareness and skill training. Check whether there are training plan update records. |
Security skill training and appraisal must be regularly arranged for internal and external employees who assume security roles and key responsibilities. Information security requirements must be incorporated into the performance appraisal system. |
Check whether position skill training is provided for personnel who assume security roles and key responsibilities. Check whether information security requirements are incorporated in the performance appraisal system. Check the technical skill appraisal records, including the training and appraisal of developers on the latest secure coding technologies. |
NIST SP 800-53 — Identification and Authentication
Check Item |
Description |
---|---|
Your organization must classify and mark assets based on factors such as business importance and data sensitivity. |
Check whether assets are classified and marked. Check whether asset identification records are used to record asset protection requirements. |
You have implemented an appropriate user identification management policy, including assigning a unique account name and unique identification code to users, setting validity periods, and identifying cross-organization accounts. |
|
Your password policies must comply with industry standards. Do not use common or shared passwords or those that are the same as accounts. |
Check whether the system password policies meet the following requirements:
|
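The requirements themselves are defined in the policy document referenced above. Purely as a hedged illustration, a complexity check of the kind commonly required (minimum length, several character classes, and a password that differs from the account name) might look like the sketch below; the thresholds are assumptions, not the product's policy.

```python
# Illustrative password-complexity check: minimum length, at least three of the
# four character classes, and not identical to the account name. The exact
# thresholds are assumptions, not the product's policy.
import re

def is_compliant(password: str, account: str, min_length: int = 8) -> bool:
    if len(password) < min_length or password.lower() == account.lower():
        return False
    classes = [
        bool(re.search(r"[a-z]", password)),
        bool(re.search(r"[A-Z]", password)),
        bool(re.search(r"\d", password)),
        bool(re.search(r"[^A-Za-z0-9]", password)),
    ]
    return sum(classes) >= 3

if __name__ == "__main__":
    print(is_compliant("Example#2024", "alice"))   # True
    print(is_compliant("alice", "alice"))          # False
```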
A password assignment policy must be created. For example, a random password is assigned for the first login, and the password must be changed after the first login. The new password must meet the password complexity requirements. |
Check whether documents related to password control are in place, including but not limited to the following:
Check the password assignment records and check whether there are requirements for the initial password and for password complexity in general. |
Authentication credentials such as passwords must be encrypted using encryption algorithms such as AES, TDES/TDEA, or RSA during transmission and storage, and encrypted channels must be used during transmission. |
|
Your multi-factor authentication (MFA) settings must meet security requirements. Ensure that MFA is associated with a unique account and cannot be shared among multiple accounts. At least one of the authentication methods used must be encrypted. |
Note: In MFA, two or more authentication methods (such as passwords, cryptographic techniques, and biometric authentication) must be used to authenticate users. |