Configuring the HA Function of SAP S/4HANA
Scenarios
To prevent SAP S/4HANA from being affected by a single point of failure and to improve its availability, configure the HA mechanism for the active and standby ASCS nodes. If the two nodes are located in the same AZ, you can configure the HA function of SAP S/4HANA directly. If the active and standby nodes are located in different AZs, three additional ECSs are required and iSCSI is used to create a shared disk for SBD before the HA function is configured. For details, see section Configuring iSCSI (Cross-AZ HA Deployment).
Prerequisites
- The mutual trust relationship has been established between the active and standby ASCS nodes.
- You have disabled the firewall of the OS. For details, see section Modifying OS Configurations.
- To ensure that the communication between the active and standby ASCS nodes is normal, add the mapping between the virtual IP addresses and virtual hostnames to the hosts file after installing the SAP S/4HANA instance.
- Log in to the active and standby ASCS nodes one by one and modify the /etc/hosts file:
vi /etc/hosts
- Change the IP addresses corresponding to the virtual hostnames to the virtual IP addresses.
10.0.3.52 S/4HANA-0001
10.0.3.196 S/4HANA-0002
10.0.3.220 ascsha
10.0.3.2 ersha
ascsha indicates the virtual hostname of the active ASCS node and ersha indicates the virtual hostname of the standby ASCS node. Virtual hostnames can be customized.
- Check that both the active and standby ASCS nodes have the /var/log/cluster directory. If the directory does not exist, create one.
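For example, the directory can be created on each node with:
mkdir -p /var/log/cluster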
- Update the SAP resource-agents package on the active and standby ASCS nodes.
- Run the following command to check whether the resource-agents package has been installed:
sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
- If the following information is displayed, the patch package has been installed, and no further action is required:
<parameter name="IS_ERS" unique="0" required="0">
- If the information is not displayed, install the patch package. Go to 2.
- Install the resource-agents package.
If the image is SLES 12 SP1, run the following command:
sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
If the image is SLES 12 SP2, run the following command:
sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2018-1923=1
If the image is SLES 12 SP3, run the following command:
sudo zypper in -t patch SUSE-SLE-HA-12-SP3-2018-1922=1
- Update the sap_suse_cluster_connector package on the active and standby ASCS nodes.
- Run the following command to uninstall the old package. Note that the software package name uses underscores (_).
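On SLES, the old package can typically be removed with zypper (the legacy package name sap_suse_cluster_connector is assumed here):
sudo zypper remove sap_suse_cluster_connector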
- Run the following command to install the new package. Note that the software package name uses hyphens (-):
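On SLES, the new package can typically be installed with zypper (the package name sap-suse-cluster-connector is assumed here):
sudo zypper install sap-suse-cluster-connector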
- Run the following command to obtain the version information about the newly installed sap-suse-cluster-connector package:
/usr/bin/sap_suse_cluster_connector gvi --out version
- View the version file and check that the version is 3.1.0 or later.
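For example, assuming the preceding command writes its output to a file named version in the current directory:
cat version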
Procedure
- Log in to the ASCS instance node, obtain the ha_auto_script.zip package, and decompress it to any directory.
- Set the parameters in the ascs_ha.cfg file based on the site requirements. Table 1 describes the parameters in the file, and an illustrative example follows the table.
Table 1 Parameters in the ascs_ha.cfg file (grouped by parameter type)

masterNode
- masterName: ASCS instance node name
- masterHeartbeatIP1: Heartbeat plane IP address 1 of the ASCS instance node
- masterHeartbeatIP2: Service plane IP address of the ASCS instance node

slaveNode
- slaveName: ERS instance node name
- slaveHeartbeatIP1: Heartbeat plane IP address 1 of the ERS instance node
- slaveHeartbeatIP2: Service plane IP address of the ERS instance node

ASCSInstance
- ASCSFloatIP: Service IP address of the ASCS instance node
- ASCSInstanceDir: Directory of the ASCS instance
- ASCSDevice: Disk partition used by the ASCS instance directory
- ASCSProfile: Profile file of the ASCS instance

ERSInstance
NOTE: Log in to the ERS instance node to obtain the values of the ERSInstanceDir, ERSDevice, and ERSProfile parameters.
- ERSFloatIP: Service IP address of the ERS instance node
- ERSInstanceDir: Directory of the ERS instance
- ERSDevice: Disk partition used by the ERS instance directory
- ERSProfile: Profile file of the ERS instance

trunkInfo
- SBDDevice: Disk partition(s) used by SBD. One or three partitions are supported, separated by commas (,), for example, /dev/sda,/dev/sdb,/dev/sdc.
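For reference, the following is an illustrative sketch of what ascs_ha.cfg might contain, assuming a simple key=value layout; all values (hostnames, IP addresses, directories, devices, and profile paths) are placeholders, and the actual layout is defined by the ha_auto_script package:
masterName=s4ascs
masterHeartbeatIP1=10.0.3.52
masterHeartbeatIP2=10.0.3.54
slaveName=s4ers
slaveHeartbeatIP1=10.0.3.196
slaveHeartbeatIP2=10.0.3.198
ASCSFloatIP=10.0.3.220
ASCSInstanceDir=/usr/sap/S4H/ASCS00
ASCSDevice=/dev/vdb1
ASCSProfile=/sapmnt/S4H/profile/S4H_ASCS00_ascsha
ERSFloatIP=10.0.3.2
ERSInstanceDir=/usr/sap/S4H/ERS10
ERSDevice=/dev/vdc1
ERSProfile=/sapmnt/S4H/profile/S4H_ERS10_ersha
SBDDevice=/dev/sdd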
- Run the following command to perform automatic HA deployment:
sh ascs_auto_ha.sh
- Run the crm status command to check the resource status.
After the HA function is configured, HAE manages the resources. Do not start or stop resources in any other way. If you need to manually perform test or modification operations, switch the cluster to maintenance mode first.
crm configure property maintenance-mode=true
Exit the maintenance mode after the modification is complete.
crm configure property maintenance-mode=false
If you need to stop or restart a node, manually stop the cluster service first.
systemctl stop pacemaker
After the ECS is started or restarted, run the following command to start the cluster service:
systemctl start pacemaker
To clear the HA configuration, run the following command on the active node for which the HA mechanism is configured. (If the active and standby nodes have been switched over, roll them back to the initial state first.)
sh ascs_auto_ha.sh unconf
Verifying the Configuration
- Open a browser and ensure that JavaScript and cookies are enabled.
- Enter the IP address or host name of the active or standby node as the URL. The login port is 7630.
https://HOSTNAME_OR_IP_ADDRESS:7630/
If a certificate warning is displayed when you access the URL for the first time, a self-signed certificate is in use. By default, a self-signed certificate is not considered trusted.
Click Continue to this website (not recommended) or add an exception in the browser to dismiss the warning.
- On the login page, enter the username and password of user hacluster or any other user who belongs to the hacluster group.
The default username is hacluster and the default password is linux. Change the password after the first login.
- Click Login. You can view the cluster node and resource status on the displayed page.