
Configuring the HA Function of SAP S/4HANA

Updated on 2022-03-04 GMT+08:00

Scenarios

To prevent SAP S/4HANA from being affected by a single point of failure and to improve its availability, configure the HA mechanism for both the active and standby ASCS nodes. If the active and standby nodes are located in the same AZ, you can directly configure the HA function of SAP S/4HANA. If they are located in different AZs, three additional ECSs are required and iSCSI is used to create a shared disk for SBD before the HA function is configured. For details, see section Configuring iSCSI (Cross-AZ HA Deployment).
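
For a cross-AZ deployment, you can confirm on each cluster node that the shared SBD disk created over iSCSI is attached before configuring HA. The following is a minimal sketch; /dev/sdb is a placeholder for your actual SBD device, and the dump step only succeeds after the device has been initialized for SBD.

    # List the block devices and confirm that the shared disk is attached (device name is an assumption)
    lsblk

    # Show the SBD header of the shared device to confirm it is usable by SBD
    sbd -d /dev/sdb dump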

Prerequisites

  • The mutual trust relationship has been established between the active and standby ASCS nodes.
  • You have disabled the firewall of the OS. For details, see section Modifying OS Configurations.
  • To ensure that the communication between the active and standby ASCS nodes is normal, add the mapping between the virtual IP addresses and virtual hostnames to the hosts file after installing the SAP S/4HANA instance.
    1. Log in to the active and standby ASCS nodes one by one and modify the /etc/hosts file:

      vi /etc/hosts

    2. Map the virtual hostnames to the virtual IP addresses, as shown in the following example:
      10.0.3.52       S/4HANA-0001
      10.0.3.196      S/4HANA-0002
      10.0.3.220      ascsha
      10.0.3.2        ersha
      NOTE:

      ascsha indicates the virtual hostname of the active ASCS node and ersha indicates the virtual hostname of the standby ASCS node. Virtual hostnames can be customized.

  • Check that both the active and standby ASCS nodes have the /var/log/cluster directory. If the directory does not exist, create one.
  • Update the SAP resource-agents package on the active and standby ASCS nodes.
    1. Run the following command to check whether the resource-agents package has been installed:

      sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

      • If the following information is displayed, the patch package has been installed. No further action is required.

        <parameter name="IS_ERS" unique="0" required="0">

      • If the preceding information is not displayed, install the patch package. Go to 2.
    2. Install the resource-agents package.

      If the image is SLES 12 SP1, run the following command:

      sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1

      If the image is SLES 12 SP2, run the following command:

      sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2018-1923=1

      If the image is SLES 12 SP3, run the following command:

      sudo zypper in -t patch SUSE-SLE-HA-12-SP3-2018-1922=1

  • Update the sap_suse_cluster_connector package on the active and standby ASCS nodes.
    1. Run the following command to uninstall the old package. Note that the software package name uses underscores (_).

      zypper remove sap_suse_cluster_connector

    2. Run the following command to install the new package. Note that the software package name uses hyphens (-):

      zypper install sap-suse-cluster-connector

    3. Run the following command to obtain the version information about the newly installed sap-suse-cluster-connector package:

      /usr/bin/sap_suse_cluster_connector gvi --out version

    4. View the version file and check that the version is 3.1.0 or later. A combined sketch of these prerequisite checks follows this list.
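
The prerequisite checks above can be verified together on each node. The following is a minimal sketch, assuming the example virtual hostnames ascsha and ersha from the hosts file above; adapt the names and paths to your environment.

    # Confirm that the virtual hostnames resolve to the virtual IP addresses
    getent hosts ascsha ersha

    # Create the cluster log directory if it does not exist
    mkdir -p /var/log/cluster

    # Confirm that the resource-agents patch providing the IS_ERS parameter is installed
    sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance

    # Write the sap-suse-cluster-connector version to the version file and display it (3.1.0 or later is required)
    /usr/bin/sap_suse_cluster_connector gvi --out version && cat version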

Procedure

  1. Log in to the ASCS instance node, obtain the ha_auto_script.zip package, and decompress it to any directory.

    1. Obtain the ha_auto_script.zip package.
    2. Run the following commands to decompress the package:

      cd /sapmnt

      unzip ha_auto_script.zip

  2. Set parameters in the ascs_ha.cfg file based on the site requirements. Table 1 describes the parameters in the file.

    Table 1 Parameters in the ascs_ha.cfg file

    Type            Name                  Description
    masterNode      masterName            ASCS instance node name
                    masterHeartbeatIP1    Heartbeat plane IP address 1 of the ASCS instance node
                    masterHeartbeatIP2    Service plane IP address of the ASCS instance node
    slaveNode       slaveName             ERS instance node name
                    slaveHeartbeatIP1     Heartbeat plane IP address 1 of the ERS instance node
                    slaveHeartbeatIP2     Service plane IP address of the ERS instance node
    ASCSInstance    ASCSFloatIP           Service IP address of the ASCS instance node
                    ASCSInstanceDir       Directory of the ASCS instance
                    ASCSDevice            Disk partition used by the ASCS instance directory
                    ASCSProfile           Profile file of the ASCS instance
    ERSInstance     ERSFloatIP            Service IP address of the ERS instance node
                    ERSInstanceDir        Directory of the ERS instance
                    ERSDevice             Disk partition used by the ERS instance directory
                    ERSProfile            Profile file of the ERS instance
    trunkInfo       SBDDevice             Disk partition(s) used by the SBD. One or three partitions are supported, separated by commas (,), for example, /dev/sda,/dev/sdb,/dev/sdc.

    NOTE:

    Log in to the ERS instance node to obtain the values of the ERSInstanceDir, ERSDevice, and ERSProfile parameters.

  3. Run the following command to perform automatic HA deployment:

    sh ascs_auto_ha.sh

  4. Run the crm status command to check the resource status.

    NOTE:

    After the HA function is configured, the SUSE High Availability Extension (HAE) manages the resources. Do not start or stop resources by any other means. If you need to manually perform test or modification operations, switch the cluster to maintenance mode first (an example sequence is provided after this procedure).

    crm configure property maintenance-mode=true

    Exit the maintenance mode after the modification is complete.

    crm configure property maintenance-mode=false

    If you need to stop or restart the node, manually stop the cluster service.

    systemctl stop pacemaker

    After the ECS is started or restarted, run the following command to start the cluster service:

    systemctl start pacemaker

    To clear the HA configuration, run the following command on the active node where the HA mechanism is configured (if the active and standby nodes have been switched over, roll back to the initial state first):

    sh ascs_auto_ha.sh unconf
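
As an example of the workflow described in the note above, a manual test or modification can be wrapped as follows. This is a minimal sketch using only the commands shown in this section; the placeholder comment stands for whatever operation you need to perform.

    # Switch the cluster to maintenance mode before any manual test or modification
    crm configure property maintenance-mode=true

    # ... perform the manual test or modification here (placeholder) ...

    # Exit maintenance mode after the changes are complete
    crm configure property maintenance-mode=false

    # Confirm that the resources are managed and running as expected
    crm status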

Verifying the Configuration

  1. Open a browser and ensure that JavaScript and cookies are enabled.
  2. In the address bar, enter the IP address or hostname of the active or standby node. The login port is 7630.

    https://HOSTNAME_OR_IP_ADDRESS:7630/

    NOTE:

    If a certificate warning is displayed when you access the URL for the first time, a self-signed certificate is in use. By default, a self-signed certificate is not considered a trusted certificate.

    Click Continue to this website (not recommended) or add an exception in the browser to suppress the warning. (A command-line check of the Hawk port is sketched after these steps.)

  3. On the login page, enter the username and password of user hacluster or any other user who belongs to the hacluster group.

    NOTE:

    The default username is hacluster and the default password is linux. Change the password after the first login.

  4. Click Login. You can view the cluster node and resource status on the displayed page.
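
In addition to the web UI, you can confirm from the command line that Hawk is reachable and that the cluster reports the expected status. The following is a minimal sketch; replace HOSTNAME_OR_IP_ADDRESS with the address used above, and -k is needed because of the self-signed certificate.

    # Confirm that the Hawk web service responds on port 7630 (skip verification of the self-signed certificate)
    curl -k -I https://HOSTNAME_OR_IP_ADDRESS:7630/

    # Check the cluster node and resource status directly on a cluster node
    crm status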
