How Do I Evaluate and Protect the Safety of Pangu Models?
Updated on 2025-11-04 GMT+08:00
Safety evaluation and assurance for Pangu models cover the following aspects:
- Data security and privacy protection: LLMs are trained on large amounts of data, which is a crucial asset. To keep this data secure, anti-tampering, privacy protection, encryption, audit, and data sovereignty mechanisms must cover the full lifecycle of data and model training: extraction, processing, transmission, training, inference, and deletion. During training and inference, data masking and privacy computing technologies identify and protect sensitive data, preventing privacy leakage and ensuring the security of personal data.
- Content safety: Positive values are instilled in LLMs through pre-training and reinforcement learning on prompts that align with human values. A content review module filters harmful information that violates laws or damages social ethics.
- Model safety: Model obfuscation keeps the model in an obfuscated state at runtime, effectively preventing its structure and weights from being stolen or exposed.
- System security: Network isolation, identity authentication and authorization, and web security solutions safeguard the LLM system, both hardening the system itself and defending it against attacks from external systems.
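The data masking mentioned under data security can be sketched as follows. This is a minimal illustration using regular expressions; the pattern set and placeholder format are assumptions, and a production pipeline would use a dedicated PII-detection service rather than two hand-written patterns.

```python
import re

# Illustrative patterns only; real systems detect many more sensitive data types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-\s]?\d{4}[-\s]?\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive spans with type placeholders before training or inference."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(mask_sensitive("Contact alice@example.com or 138-1234-5678."))
# → Contact [EMAIL] or [PHONE].
```

Masking at ingestion time means sensitive values never reach the training corpus in the clear, which is what makes downstream leakage prevention tractable.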
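The content review module described under content safety can be sketched as a filtering gate. This is a toy keyword-based version under stated assumptions: the term list is purely illustrative, and real deployments combine trained classifiers, denylists, and human review rather than substring matching alone.

```python
# Hypothetical denylist; a real content review module uses trained classifiers.
BLOCKED_TERMS = {"how to make a bomb", "credit card dump"}

def passes_review(text: str) -> bool:
    """Return True if the text passes content review, False if it is filtered out."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

assert passes_review("What is the capital of France?")
assert not passes_review("Where can I buy a credit card dump?")
```

In practice such a gate runs on both the user prompt and the model output, so harmful content is filtered regardless of which side produces it.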
Parent topic: FAQs Related to LLM Concepts