Data Planning
Firewalls
Firewalls must meet the requirements listed in Table 1.
| Source Device | Source IP Address | Source Port | Target Device | Target IP Address | Destination Port (Listening) | Protocol | Port Description | Listening Port Configurable | Authentication Mode | Encryption Mode |
|---|---|---|---|---|---|---|---|---|---|---|
| ucsctl executors | IP address of each ucsctl executor | All | All nodes | IP address of each node | 22 | TCP | SSH | No | Certificate/Username and password | TLS 1.2 |
| All nodes | IP address of each node | All | NTP server | IP address of the NTP server | 123 | UDP | NTP | No | None | None |
| All nodes | IP address of each node | All | DNS server | IP address of the DNS server | 53 | UDP | DNS | No | None | None |
| All nodes | IP address of each node | All | Self-built APT repositories | IP address of each APT repository | 80/443 | TCP | HTTP | No | None | None |
| All nodes | IP address of each node | All | Load balancer or virtual IP address | IP address of the load balancer or virtual IP address bound to the nodes | 5443 | TCP | kube-apiserver | No | HTTPS and certificate | TLS 1.2 |
| All nodes | IP address of each node | 1024-65535 | All nodes | IP address of each node | 1024-65535 | All | None | No | None | None |
| All nodes | IP address of each node | All | All nodes | IP address of each node | 8472 | UDP | VXLAN | No | None | None |
| Nodes that need to access the Ingress | IP address of each node that needs to access the Ingress | All | Network nodes | IP address of each network node | 80, 443, or a specified port | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
| All nodes | IP address of each node | All | Three master nodes | IP address of each master node | 5444 | TCP | kube-apiserver | No | HTTPS and certificate | TLS 1.2 |
| ucsctl executors | IP address of each ucsctl executor | All | Huawei Cloud Object Storage Service (OBS) | IP address of the OBS endpoint | 443 | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
| Three master nodes | IP address of each master node | All | UCS | 124.70.21.61 proxyurl.ucs.myhuaweicloud.com | 30123 | TCP | gRPC | No | HTTPS and certificate | TLS 1.2 |
| Three master nodes | IP address of each master node | All | Identity and Access Management (IAM) | Domain name used by external systems to access IAM | 443 | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
| All nodes | IP address of each node | All | SoftWare Repository for Container (SWR) | IP address of the SWR endpoint | 443 | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
| All nodes | IP address of each node | All | Official Ubuntu repositories/Proxy repositories in China | IP address of each repository | 80/443 | TCP | HTTP | No | None | None |
| Monitoring nodes | IP address of each monitoring node | All | Application Operations Management (AOM) | IP address mapping a domain name | 443 | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
| Monitoring nodes | IP address of each monitoring node | All | Log Tank Service (LTS) | IP address mapping a domain name | 443 | TCP | HTTP | No | HTTPS and certificate | TLS 1.2 |
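Before installation, the TCP rules in the table can be spot-checked from each source device. The following is a minimal sketch, not part of the product tooling: the ENDPOINTS list is a placeholder to fill with your planned host:port pairs, and the UDP ports in the table (123, 53, 8472) need a different probe such as `nc -u`.

```shell
#!/usr/bin/env bash
# Minimal reachability pre-check for the TCP destination ports in Table 1.
# ENDPOINTS is a placeholder; fill it with host:port pairs from your plan.

check_tcp() {   # check_tcp <host> <port> [timeout-s]; 0 if the port accepts a connection
  timeout "${3:-3}" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

ENDPOINTS=${ENDPOINTS:-}          # e.g. "10.0.0.5:22 10.0.0.6:5443"
for ep in $ENDPOINTS; do
  if check_tcp "${ep%%:*}" "${ep##*:}"; then
    echo "OK   $ep"
  else
    echo "FAIL $ep"
  fi
done
```

Run it on each source device with ENDPOINTS set to the targets that apply to that device's rows in the table.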
Resource Specifications
UCS on-premises clusters are installed in HA mode to meet DR requirements for commercial use. The following tables list resource specifications.
| Node Type | Quantity | CPU (Cores) | Memory (GiB) | System Disk (GiB) | High-Performance Disk (GiB) | Data Disk (GiB) | Remarks |
|---|---|---|---|---|---|---|---|
| Cluster management nodes | 3 | 8 | 16 | 100 | 50 | 300 | A virtual IP address is required for HA. |
| Cluster compute nodes | As required | 2 | 4 | 40 | - | 100 | You can increase the number of nodes as required. |
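Before adding a server as a management node, its resources can be compared with the table. The snippet below is a Linux-only sketch, with thresholds taken from the management-node row; the memory check is relaxed to 15 GiB to allow for kernel-reserved memory not reported in MemTotal.

```shell
# Sketch: compare a node's resources with the management-node row above.
cpu_ok() { [ "$(nproc)" -ge "$1" ]; }   # argument: required core count
mem_ok() {                              # argument: required memory in GiB
  [ "$(awk '/^MemTotal/ {print int($2/1024/1024)}' /proc/meminfo)" -ge "$1" ]
}

cpu_ok 8  && echo "CPU:    meets 8-core spec"  || echo "CPU:    below spec"
mem_ok 15 && echo "Memory: meets 16 GiB spec"  || echo "Memory: below spec"
```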
| Node Type | CPU (Cores) | Memory (GiB) |
|---|---|---|
| prometheus node | Requests: 1; Limits: 4 | Requests: 2; Limits: 12 |
| log-agent node | Requests: 0.5; Limits: 3 | Requests: 1.5; Limits: 2.5 |
| Node Type | Quantity | CPU (Cores) | Memory (GiB) | System Disk (GiB) | High-Performance Disk (GiB) | Data Disk (GiB) |
|---|---|---|---|---|---|---|
| operator-chef | 1 | Requests: 0.5; Limits: 2 | Requests: 0.5; Limits: 2 | N/A | N/A | 10 (for storing logs) |
| helm-operator | 1 | Requests: 0.3; Limits: 1.5 | Requests: 0.3; Limits: 1.5 | N/A | N/A | 10 (for storing logs) |
| ops-operator | 1 | Requests: 0.3; Limits: 1.5 | Requests: 0.3; Limits: 1.5 | N/A | N/A | 10 (for storing logs) |
External Dependencies
| Dependency | Function |
|---|---|
| DNS server | The DNS server resolves the domain names of services such as OBS, SWR, IAM, and DNS. For details about the domain names, see Regions and Endpoints. If a node is accessed over a public network, the node can automatically identify the default DNS settings; you only need to configure a public upstream DNS server in advance. If a node is accessed over a private network, the node cannot identify the default DNS settings; you need to configure DNS resolution for VPC endpoints in advance. For details, see Preparations. If you have not set up a DNS server, set one up by referring to DNS. |
| APT or Yum repository | An APT or Yum repository provides dependency packages for installing components such as NTP on nodes (servers) added to on-premises clusters. NOTICE: APT repositories apply to nodes running Ubuntu, and Yum repositories apply to nodes running Huawei Cloud EulerOS or Red Hat. |
| NTP server | (Optional) An NTP server is used for time synchronization between nodes in a cluster. An external NTP server is recommended. |
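A quick way to confirm the DNS dependency is to resolve the required endpoints through the configured resolver. The snippet below is a minimal sketch; the DOMAINS list is a placeholder to fill with the OBS, SWR, and IAM endpoints for your region.

```shell
# Sketch: check that required service domain names resolve via the configured DNS.
resolves() { getent hosts "$1" >/dev/null; }   # 0 if the name resolves

DOMAINS=${DOMAINS:-}   # e.g. "obs.<region>.myhuaweicloud.com swr.<region>.myhuaweicloud.com"
for d in $DOMAINS; do
  resolves "$d" && echo "resolved:     $d" || echo "NOT resolved: $d"
done
```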
Disk Volumes
| Node Type | Disk Mount Point | Available Size (GiB) | Used For |
|---|---|---|---|
| Cluster management nodes | /var/lib/containerd | 50 | Directory for storing containerd images |
| | /run/containerd | 30 | Directory for storing containerd |
| | /var/paas/run | 50 | Directory for storing etcd data (SSDs are recommended.) |
| | /var/paas/sys/log | 20 | Directory for storing logs |
| | /mnt/paas | 40 | Directory where volumes are mounted when containers are running |
| | /tmp | 20 | Directory for storing temporary files |
| Cluster compute nodes | /var/lib/containerd | 100 | Directory for storing containerd images |
| | /run/containerd | 50 | Directory for storing containerd |
| | /mnt/paas | 50 | Directory where volumes are mounted when containers are running |
| | /tmp | 20 | Directory for storing temporary files |
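The rows above can be checked mechanically before installation. The snippet below is a sketch with the management-node sizes hard-coded from the table; it assumes the paths are already mounted, and simply reports FAIL for any path that is missing or short on space.

```shell
# Sketch: verify each planned mount point has the free space the table requires.
need_gib() {   # need_gib <path> <min-GiB>; 0 if the path has at least min-GiB free
  local avail_kib
  avail_kib=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print $4}') || return 1
  [ -n "$avail_kib" ] && [ "$avail_kib" -ge $(( $2 * 1024 * 1024 )) ]
}

# management-node requirements from the table
while read -r path gib; do
  need_gib "$path" "$gib" && echo "OK   $path (${gib} GiB)" \
                          || echo "FAIL $path needs ${gib} GiB free"
done <<'EOF'
/var/lib/containerd 50
/run/containerd 30
/var/paas/run 50
/var/paas/sys/log 20
/mnt/paas 40
/tmp 20
EOF
```

For compute nodes, substitute the sizes from the compute-node rows.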
Load Balancing
If master nodes in an on-premises cluster are deployed in HA mode for DR, a unified access address is required for cluster compute nodes and other external services. There are two ways to provide this address: a virtual IP address or a load balancer.
- Virtual IP address
An idle IP address must be planned as a virtual IP address shared by the three master nodes. The virtual IP address is bound to one of the master nodes. If that node becomes abnormal, the virtual IP address automatically fails over to another master node to ensure HA.
Table 7 IP addresses

| IP Type | IP Address | Used For |
|---|---|---|
| Virtual IP address | 10.10.11.10 (example) | An IP address used for HA. Plan the IP address based on site requirements. |
- Load balancers
If you have an external load balancer, on-premises clusters can connect to it for HA. Configure it as follows:
- Listeners: 3 TCP listeners with three different ports (80, 443, and 5443)
- Backend server groups: 3 TCP backend server groups with three different ports (corresponding to ports 80, 443, and 5444 of the three master nodes)
Table 8 lists the requirements for the TCP backend server groups associated with the listeners.
Table 8 Listeners and TCP backend server groups

| Listener (Protocol/Port) | Backend Server Group | Backend Server Group Node Mapping and Port |
|---|---|---|
| TCP/80 | ingress-http | master-01-IP:80, master-02-IP:80, master-03-IP:80 |
| TCP/443 | ingress-https | master-01-IP:443, master-02-IP:443, master-03-IP:443 |
| TCP/5443 | kube-apiserver | master-01-IP:5444, master-02-IP:5444, master-03-IP:5444 |
- The configuration page varies depending on the external load balancer. Configure the preceding mappings based on site requirements.
- Before installing on-premises clusters, configure the mappings between the TCP listeners and TCP backend server groups for the external load balancer and ensure that the external load balancer is available.
- The load balancer can route traffic from processes (such as the kubelet process) on all nodes (including master nodes) to three master nodes. In addition, the load balancer can automatically detect and stop routing traffic to unavailable processes, which improves service capabilities and availability. You can also use load balancers provided by other cloud vendors or related hardware devices or use Keepalived and HAProxy to provide HA for master nodes.
- Recommended configuration: Enable source IP transparency for the preceding listening ports and disable loop checking. If loop checking cannot be disabled separately, disable source IP transparency. To check whether loop checking exists, perform the following steps:
- Create an HTTP service on a server that can be accessed over external networks, change the default listening port from 80 to 88, and add an index.html file for testing.

```
yum install -y httpd
sed -i 's/Listen 80/Listen 88/g' /etc/httpd/conf/httpd.conf
echo "This is a test page" > /var/www/html/index.html
systemctl start httpd
```
Enter ${IP address of the server}:88 in the address bar of a browser. "This is a test page" is displayed.
- Configure a listening port, for example, 30088, for the load balancer to route traffic to port 88 of the server, and enable source IP transparency.
- Use the private IP address of the load balancer to access the HTTP service.

```
curl -v ${ELB_IP}:30088
```
Check whether the HTTP status code is 200. If the status code is not 200, loop checking exists.
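As noted above, Keepalived and HAProxy can be used instead of a cloud load balancer. As one illustration, the Table 8 mappings could be expressed as the following haproxy.cfg fragment. This is a sketch only, not the product's shipped configuration: master-01-IP, master-02-IP, and master-03-IP are placeholders for your planned master node addresses.

```
# Sketch: haproxy.cfg fragment for the Table 8 listener/backend mappings.
# master-0N-IP are placeholders; ingress-https (TCP/443 -> node port 443)
# follows the same pattern as ingress-http below.
defaults
    mode    tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kube-apiserver-front
    bind *:5443
    default_backend kube-apiserver

backend kube-apiserver
    balance roundrobin
    option  tcp-check                 # drop unavailable masters from rotation
    server master-01 master-01-IP:5444 check
    server master-02 master-02-IP:5444 check
    server master-03 master-03-IP:5444 check

frontend ingress-http-front
    bind *:80
    default_backend ingress-http

backend ingress-http
    balance roundrobin
    server master-01 master-01-IP:80 check
    server master-02 master-02-IP:80 check
    server master-03 master-03-IP:80 check
```

The `check` option on each server gives the behavior described above: health checks automatically stop routing traffic to unavailable processes.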
Users
| User | User Group | User ID | User Group ID | Password | Used For |
|---|---|---|---|---|---|
| root | root | 0 | 0 | - | Default user used for installing on-premises clusters. You can also specify another user with the required permissions. NOTE: After an on-premises cluster is installed, you can change the password or restrict the root permissions. |
| paas | paas | 10000 | 10000 | - | User and user group created during the installation of on-premises clusters and used to run on-premises cluster services. The user name and user group name are paas:paas, and the user ID and user group ID are 10000:10000. Ensure that the user name, user group name, user ID, and user group ID are not occupied before the installation. If any of them is occupied, delete the existing one in advance. |
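The "not occupied" requirement for paas:paas and 10000:10000 can be checked with getent before installation; a minimal sketch:

```shell
# Sketch: confirm the paas user/group names and IDs (10000:10000) are unused.
user_free()  { ! getent passwd "$1" >/dev/null; }   # 0 if the user/UID is free
group_free() { ! getent group  "$1" >/dev/null; }   # 0 if the group/GID is free

for key in paas 10000; do
  user_free  "$key" || echo "user/UID $key is already taken"
  group_free "$key" || echo "group/GID $key is already taken"
done
```

If the script prints nothing, the names and IDs are available.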