What Is ServiceStage?
ServiceStage is an application management and O&M platform on which you can deploy, roll out, monitor, and maintain applications in one place. It supports the Java, Go, PHP, Node.js, Python, Docker, and Tomcat technology stacks, and hosts web applications, microservice applications (such as Apache ServiceComb, Spring Cloud, Dubbo, and service mesh), and common applications, making it easier to migrate enterprise applications to the cloud.
ServiceStage provides the following capabilities:
- Application management: supports application lifecycle management and environment management.
- Microservice application access: supports the Java Chassis, Go Chassis, Spring Cloud, and Dubbo microservice frameworks as well as ServiceComb Mesher; works with the microservice engine to implement service registry and discovery, configuration management, and service governance.
- AOM (Application Operations Management): supports application O&M through logs, monitoring, and alarms.
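The registry-and-discovery flow mentioned above can be sketched as a minimal in-memory service center. This is an illustrative sketch only: the class and method names are hypothetical, not the actual ServiceStage or microservice engine API.

```python
# Minimal sketch of service registry and discovery (illustrative only;
# not the ServiceStage/microservice engine API).
from collections import defaultdict


class ServiceCenter:
    """In-memory registry mapping a service name to its instance endpoints."""

    def __init__(self):
        self._instances = defaultdict(list)

    def register(self, service_name, endpoint):
        # An instance registers itself with the service center when it starts.
        self._instances[service_name].append(endpoint)

    def deregister(self, service_name, endpoint):
        # An instance deregisters when it shuts down.
        self._instances[service_name].remove(endpoint)

    def discover(self, service_name):
        # A consumer looks up the currently registered endpoints by name.
        return list(self._instances[service_name])


center = ServiceCenter()
center.register("order-service", "10.0.0.5:8080")
center.register("order-service", "10.0.0.6:8080")
print(center.discover("order-service"))  # both registered endpoints
```

In a real deployment the registry also tracks instance health via heartbeats and evicts dead instances; the sketch omits that to keep the core register/discover contract visible.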
Application Development
Application Management
After an application is developed, it can be hosted on ServiceStage, which provides you with complete application lifecycle management:
- Application creation from source code, software packages (JAR, WAR, or ZIP), or container images, so that applications can be deployed directly.
- End-to-end process management from application creation to decommissioning, covering creation, deployment, start, upgrade, rollback, scaling, stop, and deletion.
- Multi-dimensional metrics monitoring for application components, helping you understand the running status of online applications.
- GUI-based log query and search, helping you quickly locate faults.
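The lifecycle stages listed above can be pictured as a simple state machine. The transitions below are an illustrative sketch, not ServiceStage's exact internal lifecycle model.

```python
# Illustrative application lifecycle state machine (hypothetical transitions,
# not ServiceStage's internal model).
ALLOWED = {
    "created":     {"deployed"},
    "deployed":    {"running"},
    "running":     {"upgrading", "scaling", "stopped"},
    "upgrading":   {"running", "rolled_back"},
    "rolled_back": {"running"},
    "scaling":     {"running"},
    "stopped":     {"running", "deleted"},
}


class Application:
    def __init__(self):
        self.state = "created"

    def transition(self, new_state):
        # Reject transitions the lifecycle does not allow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state


app = Application()
for step in ("deployed", "running", "upgrading", "running", "stopped", "deleted"):
    app.transition(step)
print(app.state)  # deleted
```

The point of modeling the lifecycle explicitly is that operations like rollback only make sense from specific states (here, only during an upgrade), which is what the platform enforces for you.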
Microservice Governance
After an application developed using a microservice framework is managed on ServiceStage, its microservices are registered with the service center when the application instances start. You can then govern these microservices. The supported service governance policies are described in Table 1.
Name | Description
---|---
Load Balancing | When access traffic is heavy and a single server cannot handle the load, configure load balancing to distribute traffic across multiple servers. This reduces latency and prevents server overload.
Rate Limiting | When the number of requests per second sent by a rate-limited object to the current service instance exceeds the specified threshold, the instance stops accepting requests from that object.
Fault Tolerance | A policy for handling exceptions on a service instance. After an exception occurs, the system retries the request or accesses a new service instance according to the configured policy.
Service Degradation | A special form of fault tolerance. When throughput is high and resources are insufficient, degrade less important, poorly performing services so they stop consuming resources, ensuring that core services run normally.
Circuit Breaker | If a service becomes overloaded or faulty, use a circuit breaker to stop sending it requests and protect the system from cascading failures.
Fault Injection | Injects latency or faults to test the fault tolerance of microservices, helping you verify whether the system runs properly when delays or failures occur.
Black and White List | Allows you to control access by configuring blacklists and whitelists associated with routes.
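As a concrete illustration of one policy above, a basic circuit breaker can be sketched as follows. This is the generic pattern, not ServiceStage's implementation: the class, threshold parameter, and behavior are assumptions chosen for the example.

```python
# Minimal circuit breaker sketch (generic pattern, illustrative only).
class CircuitBreaker:
    """Opens after `threshold` consecutive failures; an open circuit rejects calls."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, func, *args):
        if self.open:
            # Fail fast instead of sending traffic to a faulty service.
            raise RuntimeError("circuit open: request rejected")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # stop calling the faulty service
            raise
        self.failures = 0  # a success resets the failure count
        return result


breaker = CircuitBreaker(threshold=2)


def flaky():
    raise ConnectionError("service unavailable")


for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.open)  # True: further calls are rejected immediately
```

Production implementations (e.g. the resilience features in microservice frameworks) add a half-open state that periodically lets a trial request through so the circuit can close again once the service recovers; the sketch omits that for brevity.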