Add-on Management
In addition to its core components, Kubernetes can run optional components as add-ons, such as Kubernetes DNS and Kubernetes Dashboard.
On the CCI console, install the CoreDNS add-on to extend CCI features.
CoreDNS
The CoreDNS add-on provides the internal domain name resolution service for your other workloads. Do not delete or upgrade this workload; otherwise, the internal domain name resolution service will become unavailable.
Installing an Add-on
- Log in to the CCI console. In the navigation pane, choose Add-ons > Add-on Marketplace. Then, click on the card of the add-on you want to install.
Figure 1 CoreDNS add-on
- Select a version from the Add-on Version drop-down list, and click Submit.
- When installing CoreDNS v2.5.9 or later, you must configure the following parameters:
- Stub Domain: A DNS server that resolves user-defined domain names. A stub domain consists of a DNS domain name suffix followed by one or more DNS IP addresses. For example, acme.local -- 1.2.3.4,6.7.8.9 means that DNS requests with the .acme.local suffix are forwarded to a DNS server listening at 1.2.3.4 or 6.7.8.9 (see the Corefile sketch after this list).
- Upstream DNS Server: A DNS server that resolves all domain names except intra-cluster service domain names and user-defined domain names. The value can be one or more DNS IP addresses, for example, 8.8.8.8,8.8.4.4.
- When installing CoreDNS v2.5.10 or later, you can also configure the following parameter:
- Log Output: You can select the types of domain name resolution logs to be printed based on service requirements, for example, Success log and Error log. For details, see Configuring Log Output Options for CoreDNS.
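For orientation, the fragment below is a minimal sketch of how the example values above (stub domain acme.local served by 1.2.3.4 and 6.7.8.9, and upstream servers 8.8.8.8 and 8.8.4.4) could be reflected in the CoreDNS Corefile. The configuration actually generated by the add-on may differ.
.:5353 {
    # ... default plugins (errors, health, kubernetes, cache, ...) unchanged ...
    proxy . 8.8.8.8 8.8.4.4       # Upstream DNS Server: all other domain names
}
acme.local:5353 {                 # Stub Domain: *.acme.local
    errors
    cache 30
    proxy . 1.2.3.4 6.7.8.9       # custom DNS servers for the stub domain
}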
After the installation is complete, the add-on is available under Add-ons > Add-on Instances.
Figure 2 CoreDNS installed
Configuring Stub Domains for CoreDNS
As a cluster administrator, you can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works. You can configure stub domains for CoreDNS using the proxy plug-in.
Assume that you are a cluster administrator and you have a Consul DNS server located at 10.150.0.1 and all Consul domain names have the suffix .consul.local. To configure this Consul DNS server in CoreDNS, you need to write the following information in the CoreDNS ConfigMap:
consul.local:5353 {
    errors
    cache 30
    proxy . 10.150.0.1
}
ConfigMap after modification:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        cache 30
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream /etc/resolv.conf
            fallthrough in-addr.arpa ip6.arpa
        }
        loadbalance round_robin
        prometheus 0.0.0.0:9153
        proxy . /etc/resolv.conf
        reload
    }
    consul.local:5353 {
        errors
        cache 30
        proxy . 10.150.0.1
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
Configuring Log Output Options for CoreDNS
CoreDNS uses the log plug-in to print the domain name resolution logs to standard output. You can configure Log Output to define the log content to be output, and view the resolution logs on the AOM console. If there is a large number of domain name resolution requests, frequent log printing may affect the CoreDNS performance.
The backend configuration format is as follows:
log [NAMES...] [FORMAT] { class CLASSES... }
CLASSES indicates the classes of responses that should be logged. It is a list separated by spaces.
The log output options include:
- Success log:
If this option is selected, the success response parameter is added to the CLASSES list of the log plug-in, and CoreDNS prints logs for queries that are successfully resolved to the standard output.
- Denial log:
If this option is selected, the denial response parameter is added to the CLASSES list of the log plug-in, and CoreDNS prints logs for queries that fail to be resolved, for example, NXDOMAIN or nodata responses (the name exists but the record type does not), to the standard output.
- Error log:
If this option is selected, the error response parameter is added to the CLASSES list of the log plug-in, and CoreDNS prints logs about resolution errors to the standard output, for example, SERVFAIL, NOTIMP, and REFUSED. This helps detect problems such as DNS server unavailability in a timely manner.
- Deselect all:
If none of the preceding options is selected, the log plug-in is disabled.
Disabling the log plug-in affects only the resolution records printed by CoreDNS. Logs of the CoreDNS service process are still displayed; they are small in volume and do not affect performance.
For example, if Success log and Denial log are selected, the log plug-in is configured as follows:
log . {
    class success denial
}
The corresponding CoreDNS ConfigMap is then as follows:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        cache 30
        errors
        log . {
            class success denial
        }
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream /etc/resolv.conf
            fallthrough in-addr.arpa ip6.arpa
        }
        loadbalance round_robin
        prometheus 0.0.0.0:9153
        proxy . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
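Similarly, if only Error log were selected, the log plug-in block would carry only the error class, as sketched below based on the option descriptions above:
log . {
    class error
}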
Viewing Resolution Logs
After configuring the log plug-in, you can view resolution logs on the AOM console.
- Log in to the CCI console. In the navigation pane, choose Add-ons > Add-on Instances. Select CoreDNS on the right to display the CoreDNS page.
Figure 4 Add-on instances
- Click CoreDNS Deployment in the resource list to go to the pod list.
Figure 5 CoreDNS deployment
- Click View Logs in the Operation column of the pod list to go to the AOM console and view the CoreDNS logs.
Figure 6 Pod list
How Does Domain Name Resolution Work in Kubernetes?
DNS policies can be set on a per-pod basis. Kubernetes supports four types of DNS policies: Default, ClusterFirst, ClusterFirstWithHostNet, and None. For details, see DNS for Services and Pods. These policies are specified in the dnsPolicy field of the pod spec.
- Default: Pods inherit the name resolution configuration from the node that runs the pods. The custom upstream DNS server and the stub domain cannot be used together with this policy.
- ClusterFirst: Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to the upstream name server inherited from the node. Cluster administrators may have extra stub domains and upstream DNS servers configured.
- ClusterFirstWithHostNet: For pods running with hostNetwork, explicitly set the DNS policy to ClusterFirstWithHostNet.
- None: Allows a pod to ignore DNS settings from the Kubernetes environment. All DNS settings are expected to be provided using the dnsConfig field in the pod spec (see the pod spec example below).
- Clusters of Kubernetes v1.10 and later support Default, ClusterFirst, ClusterFirstWithHostNet, and None. Clusters earlier than Kubernetes v1.10 support only Default, ClusterFirst, and ClusterFirstWithHostNet.
- Default is not the default DNS policy. If dnsPolicy is not explicitly specified, ClusterFirst is used.
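For illustration, the following pod spec is a minimal sketch of the None policy combined with a dnsConfig block. The pod name, image, and DNS values are placeholders rather than values required by CCI.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # placeholder name
spec:
  containers:
  - name: app
    image: nginx:alpine        # placeholder image
  dnsPolicy: "None"            # ignore DNS settings from the Kubernetes environment
  dnsConfig:                   # with None, all DNS settings come from this block
    nameservers:
    - 1.2.3.4                  # example DNS server IP
    searches:
    - ns1.svc.cluster.local    # example search domain
    options:
    - name: ndots
      value: "2"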
Routing
- Without stub domain configurations: Any query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to the upstream DNS server inherited from the node.
- With stub domain configurations: If you have configured stub domains and upstream DNS servers, DNS queries are routed according to the following flow:
- The query is first sent to the DNS caching layer in CoreDNS.
- From the caching layer, the suffix of the request is examined, and the request is then forwarded to the appropriate DNS server based on the following cases (see the annotated sketch after the figure):
- Names with the cluster suffix, for example, .cluster.local: The request is sent to CoreDNS.
- Names with the stub domain suffix, for example, .acme.local: The request is sent to the configured custom DNS resolver, listening for example at 1.2.3.4.
- Names that do not match the suffix (for example, widget.com): The request is forwarded to the upstream DNS.
Figure 7 Routing
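The annotated Corefile fragment below is a sketch that reuses the example values from this section (the acme.local stub domain at 1.2.3.4 and an 8.8.8.8 upstream server) to show where each of the cases above is handled. It illustrates the routing logic only and is not the exact configuration generated by the add-on.
.:5353 {
    # Names with the cluster suffix (.cluster.local) are answered by CoreDNS itself.
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    cache 30
    # Names that match no other suffix (for example, widget.com) are forwarded to the upstream DNS.
    proxy . 8.8.8.8
}
# Names with the stub domain suffix (.acme.local) are forwarded to the custom DNS resolver at 1.2.3.4.
acme.local:5353 {
    errors
    cache 30
    proxy . 1.2.3.4
}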
Follow-Up Operations
After the add-on is installed, you can perform the following operations on the add-on.
| Operation | Description |
|---|---|
| Upgrade | Click the upgrade icon. Select the target version, and click Next. Then, confirm the new configuration information, and click Submit. |
| Rollback | Click the rollback icon. Then, select the version to which the add-on is to be rolled back, and click Submit. |
| Deletion | Click the delete icon and then click Yes. NOTICE: Deleted add-ons cannot be recovered. Exercise caution when performing this operation. |
CoreDNS Release History
| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|---|---|---|---|
| 1.30.6 | v1.21, v1.23, v1.25, v1.27, v1.28, v1.29, v1.30 | | |
| 1.29.5 | v1.21, v1.23, v1.25, v1.27, v1.28, v1.29 | Fixed some issues. | |
| 1.29.4 | v1.21, v1.23, v1.25, v1.27, v1.28, v1.29 | CCE clusters 1.29 are supported. | |
| 1.28.7 | v1.21, v1.23, v1.25, v1.27, v1.28 | Supported hot module replacement. Rolling upgrade is not required. | |
| 1.28.5 | v1.21, v1.23, v1.25, v1.27, v1.28 | Fixed some issues. | |
| 1.28.4 | v1.21, v1.23, v1.25, v1.27, v1.28 | CCE clusters 1.28 are supported. | |
| 1.27.4 | v1.19, v1.21, v1.23, v1.25, v1.27 | None | |
| 1.25.14 | v1.19, v1.21, v1.23, v1.25 | | |
| 1.25.11 | v1.19, v1.21, v1.23, v1.25 | | |
| 1.25.1 | v1.19, v1.21, v1.23, v1.25 | CCE clusters 1.25 are supported. | |
| 1.23.3 | v1.15, v1.17, v1.19, v1.21, v1.23 | Regular upgrade of add-on dependencies | |
| 1.23.2 | v1.15, v1.17, v1.19, v1.21, v1.23 | Regular upgrade of add-on dependencies | |
| 1.23.1 | v1.15, v1.17, v1.19, v1.21, v1.23 | CCE clusters 1.23 are supported. | |
| 1.17.15 | v1.15, v1.17, v1.19, v1.21 | CCE clusters 1.21 are supported. | |
| 1.17.9 | v1.15, v1.17, v1.19 | Regular upgrade of add-on dependencies | |
| 1.17.7 | v1.15, v1.17, v1.19 | Updated the add-on to its community version v1.8.4. | |
| 1.17.4 | v1.17, v1.19 | CCE clusters 1.19 are supported. | |
| 1.17.3 | v1.17 | Supported clusters 1.17 and fixed stub domain configuration issues. | |
| 1.17.1 | v1.17 | Clusters 1.17 are supported. | |