Configuring gRPC Backend Services for an Nginx Ingress
This section describes how to route traffic to gRPC backend services through an Nginx ingress.
Introduction to gRPC
gRPC is a high-performance, open-source universal RPC framework. It uses Protocol Buffers (Protobuf) as its Interface Definition Language (IDL) and underlying message interchange format. In addition, gRPC uses HTTP/2 as its transport protocol, which provides features such as multiplexing, header compression, and flow control. This greatly improves communication efficiency between the client and server. For more information, see Introduction to gRPC.
With gRPC, you can build distributed applications and services more easily: a client application can call a method on a server application on a different machine as if it were a local object. Like other RPC systems, gRPC is based on defining a service that specifies the methods that can be called remotely, along with their parameters and return types. The server implements this interface and runs a gRPC server to handle client requests.
Preparations
- A CCE standard cluster is available. For details, see Buying a CCE Standard/Turbo Cluster.
- The NGINX Ingress Controller add-on has been installed in the cluster.
- kubectl has been installed and configured. For details, see Connecting to a Cluster Using kubectl.
- gRPCurl has been installed. For details, see gRPCurl.
Example of the gRPC Service
Define the gRPC service in a proto file, where the client can call the SayHello method of the helloworld.Greeter service. For details about the source code of the sample gRPC service, see gRPC Hello World.
syntax = "proto3";

option go_package = "google.golang.org/grpc/examples/helloworld/helloworld";
option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}
When using an Nginx ingress, gRPC runs only on the HTTPS port (443 by default). Therefore, in a production environment, you need a domain name and its corresponding SSL certificate. In this example, grpc.example.com and a self-signed SSL certificate are used.
Step 1: Creating an SSL Certificate
- Copy the following content and save it to the openssl.cnf file:
[req]
distinguished_name = req_distinguished_name
attributes = req_attributes

[req_distinguished_name]

[req_attributes]

[test_ca]
basicConstraints = critical,CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
keyUsage = critical,keyCertSign

[test_server]
basicConstraints = critical,CA:FALSE
subjectKeyIdentifier = hash
keyUsage = critical,digitalSignature,keyEncipherment,keyAgreement
subjectAltName = @server_alt_names

[server_alt_names]
DNS.1 = grpc.example.com

[test_client]
basicConstraints = critical,CA:FALSE
subjectKeyIdentifier = hash
keyUsage = critical,nonRepudiation,digitalSignature,keyEncipherment
extendedKeyUsage = critical,clientAuth
- Copy the following content and save it to the create.sh file, which must be located in the same directory as the openssl.cnf file:
#!/bin/bash
# Create the server CA certs.
openssl req -x509 \
  -newkey rsa:4096 \
  -nodes \
  -days 3650 \
  -keyout ca_key.pem \
  -out ca_cert.pem \
  -subj /C=CN/ST=CA/L=SVL/O=gRPC/CN=test-server_ca/ \
  -config ./openssl.cnf \
  -extensions test_ca \
  -sha256
# Create the client CA certs.
openssl req -x509 \
  -newkey rsa:4096 \
  -nodes \
  -days 3650 \
  -keyout client_ca_key.pem \
  -out client_ca_cert.pem \
  -subj /C=CN/ST=CA/L=SVL/O=gRPC/CN=test-client_ca/ \
  -config ./openssl.cnf \
  -extensions test_ca \
  -sha256
# Generate a server cert.
openssl genrsa -out server_key.pem 4096
openssl req -new \
  -key server_key.pem \
  -days 3650 \
  -out server_csr.pem \
  -subj /C=CN/ST=CA/L=SVL/O=gRPC/CN=test-server1/ \
  -config ./openssl.cnf \
  -reqexts test_server
openssl x509 -req \
  -in server_csr.pem \
  -CAkey ca_key.pem \
  -CA ca_cert.pem \
  -days 3650 \
  -set_serial 1000 \
  -out server_cert.pem \
  -extfile ./openssl.cnf \
  -extensions test_server \
  -sha256
openssl verify -verbose -CAfile ca_cert.pem server_cert.pem
# Generate a client cert.
openssl genrsa -out client_key.pem 4096
openssl req -new \
  -key client_key.pem \
  -days 3650 \
  -out client_csr.pem \
  -subj /C=CN/ST=CA/L=SVL/O=gRPC/CN=test-client1/ \
  -config ./openssl.cnf \
  -reqexts test_client
openssl x509 -req \
  -in client_csr.pem \
  -CAkey client_ca_key.pem \
  -CA client_ca_cert.pem \
  -days 3650 \
  -set_serial 1000 \
  -out client_cert.pem \
  -extfile ./openssl.cnf \
  -extensions test_client \
  -sha256
openssl verify -verbose -CAfile client_ca_cert.pem client_cert.pem
rm *_csr.pem
- Run the following command to generate a certificate:
chmod +x ./create.sh && ./create.sh
After the command is executed, the server_key.pem private key file and the server_cert.pem certificate file are generated in the current directory.
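Before creating the secret, you can optionally confirm that the generated server certificate carries the expected domain name in its Subject Alternative Name field. The command below assumes server_cert.pem produced by create.sh is in the current directory and that OpenSSL 1.1.1 or later is installed:

```shell
# Print the subject and the Subject Alternative Name of the server certificate.
# The SAN output should include DNS:grpc.example.com.
openssl x509 -in server_cert.pem -noout -subject -ext subjectAltName
```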
- Run the following command to create a TLS secret named grpc-secret:
kubectl create secret tls grpc-secret --key server_key.pem --cert server_cert.pem
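Alternatively, if you manage cluster resources declaratively, the same secret can be described in a YAML manifest. This is a sketch: the data values are placeholders that must be replaced with the Base64-encoded contents of the key and certificate files generated earlier.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grpc-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server_cert.pem>   # e.g. output of: base64 -w0 server_cert.pem
  tls.key: <base64-encoded server_key.pem>    # e.g. output of: base64 -w0 server_key.pem
```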
Step 2: Creating a Workload for the gRPC Application
Create a gRPC-compliant workload in the cluster.
- Copy the following YAML content to create the grpc.yaml file. In this example, a Docker image built from the official gRPC sample application is used.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    description: ''
  labels:
    appgroup: ''
    version: v1
  name: grpc-hello
  namespace: default
spec:
  selector:
    matchLabels:
      app: grpc-hello
      version: v1
  template:
    metadata:
      labels:
        app: grpc-hello
        version: v1
    spec:
      containers:
        - name: container-1
          image:   # The image in this section is for reference only.
          imagePullPolicy: IfNotPresent
      imagePullSecrets:
        - name: default-secret
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
  replicas: 1
- Run the following command to create the workload:
kubectl apply -f grpc.yaml
Step 3: Creating a Service and Ingress for the Workload
- Copy the following YAML content to create the grpc-svc.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
  namespace: default
  labels:
    app: grpc-hello
spec:
  ports:
    - name: cce-service-0
      protocol: TCP
      port: 50051
      targetPort: 50051
  selector:
    app: grpc-hello
  type: NodePort
  sessionAffinity: None
- Run the following command to create the Service:
kubectl apply -f grpc-svc.yaml
- Copy the following YAML content to create the grpc-ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-hello
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC   # Specify the gRPC backend service.
spec:
  ingressClassName: nginx
  tls:
    - secretName: grpc-secret
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-hello
                port:
                  number: 50051
- Run the following command to create ingress routing rules:
kubectl apply -f grpc-ingress.yaml
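If your service uses long-lived gRPC streaming calls, NGINX may close idle streams after its default timeout. One common approach, sketched below and not specific to this example, is to extend the gRPC timeouts through the server-snippet annotation supported by the NGINX Ingress Controller (note that some installations disable snippet annotations for security reasons):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    # Extend gRPC stream timeouts for long-lived streaming RPCs (values are illustrative).
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 3600s;
      grpc_send_timeout 3600s;
```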
Verification
After gRPCurl is installed, run the following command to list the gRPC services exposed through the ingress and check whether traffic is routed correctly:
./grpcurl -insecure -servername "grpc.example.com" <ip_address>:443 list
In the preceding command, <ip_address> is the IP address of the load balancer, which can be obtained by running kubectl get ingress.
Expected output:
grpc.examples.echo.Echo
grpc.reflection.v1.ServerReflection
grpc.reflection.v1alpha.ServerReflection
helloworld.Greeter
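Beyond listing services, you can invoke the SayHello method itself. The request body below assumes the official helloworld sample server, which replies with a greeting containing the supplied name; replace <ip_address> with the load balancer IP address as before:

```shell
# Call helloworld.Greeter/SayHello through the ingress.
./grpcurl -insecure -servername "grpc.example.com" \
  -d '{"name": "CCE"}' \
  <ip_address>:443 helloworld.Greeter/SayHello
```

For the sample server, a reply such as {"message": "Hello CCE"} indicates that the end-to-end gRPC routing works.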