Service
Direct Access to a Pod
How can you access a workload after it is created? Accessing a workload means accessing its pods. However, accessing pods directly has the following problems:
- A pod can be deleted and recreated at any time by a controller such as a Deployment, so the result of accessing a specific pod is unpredictable.
- A pod's IP address is allocated only after the pod starts, so it cannot be known in advance.
- An application usually consists of multiple pods running the same image, and accessing them one by one is inefficient.
For example, suppose an application uses Deployments to create its frontend and backend, and the frontend calls the backend for computing, as shown in Figure 1. The backend runs three pods, which are independent and replaceable. When a backend pod is recreated, the new pod is assigned a new IP address, and the frontend has no way of knowing it.
How Services Work
Kubernetes Services solve the preceding pod access problems. A Service has a fixed IP address and forwards traffic to pods selected by labels. In addition, the Service load balances the traffic across these pods.
In the preceding example, two Services are added: one for the frontend pods and one for the backend pods. This way, the frontend does not need to be aware of changes on the backend pods, as shown in Figure 2.
Creating a Service
In the following example, a Service named nginx is created, and a selector is used to select pods with the label app: nginx. The port of the target pods is 80, and the port exposed by the Service is 8080.
The Service can be accessed through <Service name>:<exposed port>, which is nginx:8080 in this example. Other workloads can then access the pods associated with the nginx Service through nginx:8080.
apiVersion: v1 kind: Service metadata: name: nginx #Service name spec: selector: #Label selector, which selects pods with the label of app=nginx app: nginx ports: - name: service0 targetPort: 80 #Pod port port: 8080 #Service external port protocol: TCP #Forwarding protocol type. The value can be TCP or UDP. type: ClusterIP #Service type
NodePort Services are not supported in CCI.
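For the selector to take effect, the target pods must carry the matching label. As a minimal sketch (the Deployment name, image, and replica count here are illustrative, not from the original example), a Deployment whose pod template carries the app: nginx label could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # Matches the Service's label selector
    spec:
      containers:
      - name: container-0
        image: nginx:latest
        ports:
        - containerPort: 80   # Port the Service's targetPort points to
```

Any pod created from this template is automatically picked up by the nginx Service, including pods recreated after a failure.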
Save the Service definition to nginx-svc.yaml and use kubectl to create the Service.
# kubectl create -f nginx-svc.yaml -n $namespace_name
service/nginx created
# kubectl get svc -n $namespace_name
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.247.9.190     <none>        53/UDP,53/TCP   7m
nginx      ClusterIP   10.247.148.137   <none>        8080/TCP        1h
You can see that the Service has a ClusterIP, which is fixed unless the Service is deleted. You can use this ClusterIP to access the Service internally.
kube-dns is a Service reserved for domain name resolution. It is automatically created in CCI. For details about domain name resolution, see Using ServiceName to Access a Service.
Using ServiceName to Access a Service
In CCI, the coredns add-on resolves domain names for Services, so you can access a Service through <ServiceName>:<Port>. This is the most common access mode in Kubernetes. For details about how to install coredns, see Add-on Management.
After coredns is installed, it acts as a DNS server. When a Service is created, coredns records its name and IP address, so pods can obtain the Service's IP address by querying the Service name.
The Service can be accessed through nginx.<namespace>.svc.cluster.local, where nginx is the Service name, <namespace> is the namespace, and svc.cluster.local is the domain name suffix. In actual use, pods in the same namespace can omit <namespace>.svc.cluster.local and use just the Service name.
For example, for the Service named nginx created earlier, you can access the Service through nginx:8080, which then forwards the requests to the backend pods.
An advantage of using the ServiceName is that you can write it into your application code during development, so you do not need to know the IP address of a specific Service.
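The pattern above can be sketched with a hypothetical client pod that reaches the Service by name (the client pod, the busybox image, and the wget command are illustrative assumptions, not part of the original example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: client
    image: busybox:latest
    # coredns resolves the Service name "nginx" to its ClusterIP,
    # so the client does not need to know the IP address.
    command: ["wget", "-qO-", "http://nginx:8080"]
  restartPolicy: Never
```

Because the name stays stable while pods and the ClusterIP may change across Service recreation, this is the recommended way for workloads to reach each other.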
The coredns add-on occupies compute resources. It runs two pods, each occupying 0.5 vCPUs and 1 GiB of memory, and you are billed for these resources.
LoadBalancer Services
The previous sections described ClusterIP Services, whose backend pods can be accessed through the cluster-internal IP address.
CCI also supports LoadBalancer Services. You can bind a load balancer to a Service so that traffic sent to the load balancer is forwarded to the Service.
A load balancer can work on a private network or a public network. If the load balancer has a public IP address, it can route requests over the public network. You can create a load balancer by using the API or the ELB console.
- The load balancer must be in the same VPC as the Service.
- Cross-namespace access cannot be achieved using the Service name or the ELB domain name. It can be implemented only through <private IP address of the load balancer>:<port>.

The following example binds an existing load balancer to the Service by specifying its ID in the kubernetes.io/elb.id annotation:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kubernetes.io/elb.id: 77e6246c-a091-xxxx-xxxx-789baa571280   # ID of the load balancer
spec:
  selector:
    app: nginx
  ports:
  - name: service0
    targetPort: 80
    port: 8080             # Port configured for the load balancer
    protocol: TCP
  type: LoadBalancer       # Service type
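After saving the manifest (the file name nginx-lb-svc.yaml below is chosen here for illustration), you can create the Service and check which address the load balancer exposes, in the same way as for a ClusterIP Service:

```
# kubectl create -f nginx-lb-svc.yaml -n $namespace_name
# kubectl get svc nginx -n $namespace_name    # The EXTERNAL-IP column shows the load balancer address
```

Clients then access the workload through <load balancer address>:8080, and the load balancer forwards the traffic to the Service's backend pods.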