
Configuring Mutual Trust Between MRS Clusters

When security-mode clusters managed by different Manager systems need to access each other's resources, the system administrators can configure mutual trust between the systems so that users of the external system can access resources in the local system.

If cross-cluster mutual trust is not configured, the resources of each cluster can be accessed only by its own users. The scope in which each system's users can be securely used is called a domain, and different Manager systems must be assigned unique domain names. Cross-Manager access is therefore cross-domain access by users.

In MRS 3.x or later, a maximum of 500 mutual-trust clusters can be configured for a cluster.
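
In Kerberos terms, each system domain corresponds to a realm, and cross-domain access relies on cross-realm authentication. The fragment below only illustrates what the realm configuration in a client krb5.conf may look like once two example domains, DOMAINA.HW (local) and DOMAINB.HW (peer), trust each other. MRS generates the actual file when you download the client, so treat this as an illustration: the IP addresses are assumptions, and 21732 is the default kdc_port.

    [libdefaults]
        # Local system domain name
        default_realm = DOMAINA.HW

    [realms]
        # Local KDC instances
        DOMAINA.HW = {
            kdc = 192.168.0.11:21732
            kdc = 192.168.0.12:21732
        }
        # Peer KDC instances, reachable over the service plane
        DOMAINB.HW = {
            kdc = 10.0.0.1:21732
            kdc = 10.0.0.2:21732
        }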

Impact on the System

  • Once cross-Manager cluster mutual trust is configured, users of the external system can be used in the local system. The system administrator should periodically review user permissions in Manager to ensure they meet enterprise service and security requirements.
  • Configuring mutual trust between clusters requires restarting the affected services, which interrupts those services.
  • After cross-Manager cluster mutual trust is configured, the internal Kerberos users krbtgt/Local cluster domain name@External cluster domain name and krbtgt/External cluster domain name@Local cluster domain name are added to both trusted clusters. These internal users cannot be deleted. (A command-line check for these principals is sketched after this list.)
    • For MRS 2.x or earlier, the default password is Crossrealm@123.
    • For MRS 3.x or later, the system administrator should regularly update the passwords of the four users in the mutually trusted systems and ensure they are consistent. For details, see Changing the Passwords for MRS Cluster Component Running Users. When the passwords are changed, the connectivity between cross-cluster service applications may be affected.
  • For MRS 3.x or later, if the system domain name is changed and there is any running HetuEngine compute instance, restart the compute instance.
  • For MRS 3.x or later, after cross-Manager cluster mutual trust is configured, the clients of each cluster need to be redownloaded and reinstalled.
  • After cross-Manager cluster mutual trust is configured, verify that the system works properly and see Configuring User Permissions for Mutually Trusted MRS Clusters for details on how to access resources of the peer system as a user of the local system.
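
To confirm on a node running the Kerberos service that the cross-realm principals described above exist, a check like the following can be used. This is a sketch only: it assumes the MIT Kerberos kadmin.local tool is available on that node and that you run it as a user with the required permissions (for example, omm); the domain names are examples.

    # List krbtgt principals on the KDC node. After mutual trust is configured,
    # entries such as krbtgt/DOMAINA.HW@DOMAINB.HW and krbtgt/DOMAINB.HW@DOMAINA.HW
    # should appear in addition to the local krbtgt/DOMAINA.HW@DOMAINA.HW.
    kadmin.local -q "listprincs krbtgt*"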

Prerequisites

  • The system administrator has clarified service requirements and planned domain names for the systems. A domain name can contain uppercase letters, numbers, periods (.), and underscores (_), and must start with a letter or number. For example, DOMAINA.HW and DOMAINB.HW.
  • The domain names of the two Managers are different. When an ECS or BMS cluster is created on MRS, a unique system domain name is randomly generated. Generally, you do not need to change the system domain name.
  • The two clusters do not have the same host name or the same IP address.
  • The system time of the two clusters is consistent, and the NTP services in the two systems use the same clock source.
  • The running status of all components in the Manager clusters is Normal.
  • The two clusters are in the same VPC. If they are not, create a VPC peering connection between them. For details, see VPC Peering Connection. (A connectivity and clock pre-check is sketched after this list.)
  • For MRS 3.x or later, the acl.compare.shortName parameter of the ZooKeeper service in every cluster under Manager must retain its default value true. If it has been changed, set it back to true and restart the ZooKeeper service.
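
Before configuring mutual trust, a minimal pre-check can be run from any node in the local cluster to confirm the clock source and the reachability of the peer KDC. This is a sketch only: the peer KDC IP address is an example, 21732 is the default kdc_port, and the availability of the ntpq and nc tools on the node is an assumption.

    # Confirm that the node synchronizes with the expected NTP clock source
    ntpq -p
    # Confirm that the peer KDC address and port are reachable
    ping -c 3 10.0.0.1
    nc -zv 10.0.0.1 21732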

Configuring Mutual Trust Between MRS Clusters (MRS 3.x or Later)

  1. Log in to FusionInsight Manager of one of the two clusters.
  2. Choose System > Permission > Domain and Mutual Trust.
  3. Modify Peer Mutual Trust Domain.

    Table 1 Parameters

    realm_name: Enter the domain name of the peer system.

    ip_port: Enter the KDC address of the peer system.

      Value format: IP address of the node accommodating the Kerberos service in the peer system:Port number

      • In dual-plane networking, enter the service plane IP address.
      • If an IPv6 address is used, enclose the IP address in square brackets ([]).
      • Use commas (,) to separate the KDC addresses if the active and standby Kerberos services are deployed or if multiple clusters in the peer system need to establish mutual trust with the local system.
      • You can obtain the port number from the kdc_ports parameter of the KrbServer service. The default value is 21732. To obtain the IP address of the node where the service is deployed, click the Instance tab on the KrbServer page and view Service IP Address of the KerberosServer role.

        For example, if the Kerberos service in the peer system is deployed on the nodes 10.0.0.1 and 10.0.0.2, set this parameter to 10.0.0.1:21732,10.0.0.2:21732.

    If you need to configure mutual trust with multiple Managers, click the add button to add a new item and set its parameters, or click the delete button to remove an unnecessary configuration.

  4. Click OK.
  5. Log in to the active management node as user omm and run the following command to update the domain configuration:

    sh ${BIGDATA_HOME}/om-server/om/sbin/restart-RealmConfig.sh

    The command is executed successfully if the following information is displayed:

    Modify realm successfully. Use the new password to log in to FusionInsight again.

    After the restart, some hosts and services are temporarily inaccessible and an alarm is generated. The problem is automatically resolved about 1 minute after restart-RealmConfig.sh is run.

  6. Log in to FusionInsight Manager and restart the cluster or the configuration-expired instances:

    Check whether the system domain name of Manager is changed.

    • If the system domain name is changed, click More on the home page, click Start, enter the password, select the check box for confirming the impact, and click OK. Wait until the cluster is started successfully.
    • If the system domain name is not changed, click More on the home page and select Restart Configuration-Expired Instances. Enter the password, select the check box for confirming the impact, and click OK. Wait until the services are restarted.

    Restarting a cluster or a role instance in a cluster will interrupt services. Perform this operation during off-peak hours or after confirming that the impact on upper-layer services is limited.

  7. Log out of FusionInsight Manager and then log in again. If the login is successful, the configuration is successful. (A command-line sanity check is sketched after this procedure.)
  8. If a HetuEngine compute instance is running, restart the compute instance.

    1. Log in to FusionInsight Manager as the user who is used to access the HetuEngine web UI.
    2. Choose Cluster > Services > HetuEngine to go to the HetuEngine service page.
    3. In the Basic Information area on the Dashboard page, click the link next to HSConsole WebUI. The HSConsole page is displayed.
    4. For a running compute instance, click Stop in the Operation column. After the compute instance is in the Stopped state, click Start to restart the compute instance.

  9. Log in to FusionInsight Manager of another cluster and repeat the preceding steps.
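
After the procedure has been completed on both clusters, the command-line sanity check referenced in step 7 can be run from a node where the re-downloaded cluster client is installed. This is a minimal sketch: the client installation directory, keytab file, and username are assumptions.

    # Load the client environment variables (the client directory is an example)
    source /opt/client/bigdata_env
    # Authenticate in the local realm with an example keytab and user
    kinit -kt /opt/client/user.keytab devuser
    # The ticket cache should now contain a valid TGT for the local system domain
    klist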

Configuring Mutual Trust Between MRS Clusters (MRS 2.x or Earlier)

  1. On the MRS management console, query all security groups of the two clusters.

    • If the security groups of the two clusters are the same, go to 3.
    • If the security groups of the two clusters are different, go to 2.

  2. On the VPC management console, choose Access Control > Security Groups. On the Security Groups page, locate the row containing the target security group, click Manage Rule in the Operation column.

    On the Inbound Rules tab page, click Add Rule. In the Add Inbound Rule dialog box that is displayed, configure related parameters.

    • Priority: The value ranges from 1 to 100. The default value is 1, which indicates the highest priority. A smaller value indicates a higher priority.
    • Action: Select Allow.
    • Protocol & Port: Choose Protocols > All.
    • Type: Select IPv4 or IPv6.
    • Source: Select Security group and the security group of the peer cluster.
      • To add an inbound rule to the security group of cluster A, set Source to Security group and the security group of cluster B (peer cluster).
      • To add an inbound rule to the security group of cluster B, set Source to Security group and the security group of cluster A (peer cluster).

    For a common cluster with Kerberos authentication disabled, performing steps 1 to 2 completes the cross-cluster mutual trust configuration. For a security cluster with Kerberos authentication enabled, continue with the following steps after completing the preceding ones.

  3. Log in to MRS Manager of the two clusters separately. Click Service and check whether the Health Status of all components is Good.

    • If yes, go to 4.
    • If no, contact O&M personnel for troubleshooting.

  4. Query configuration information.

    1. On MRS Manager of the two clusters, choose Services > KrbServer > Instance. Query the OM IP Address of the two KerberosServer hosts.
    2. Click Service Configuration. Set Type to All. Choose KerberosServer > Port in the navigation tree on the left. Query the value of kdc_ports. The default value is 21732.
    3. Click Realm and query the value of default_realm. (An alternative command-line check is sketched after this procedure.)

  5. On MRS Manager of either cluster, modify the peer_realms parameter.

    Table 2 Parameter description

    realm_name: Domain name of the mutual-trust cluster, that is, the value of default_realm obtained in step 4.

    ip_port: KDC address of the peer cluster. Format: IP address of a KerberosServer node in the peer cluster:kdc_port

      The addresses of the two KerberosServer nodes are separated by a comma (,). For example, if the IP addresses of the KerberosServer nodes are 10.0.0.1 and 10.0.0.2, the value of this parameter is 10.0.0.1:21732,10.0.0.2:21732.

    • To deploy trust relationships with multiple clusters, click the add button to add items and specify the relevant parameters. To delete an item, click the delete button.
    • A cluster can have trust relationships with a maximum of 16 clusters. By default, no trust relationship exists between different clusters that are trusted by a local cluster.

  6. Click Save Configuration. In the dialog box that is displayed, select Restart the affected services or instances and click OK. If you do not select Restart the affected services or instances, manually restart the affected services or instances.

    After Operation successful is displayed, click Finish.

    Restarting a cluster or a role instance in a cluster will interrupt services. Perform this operation during off-peak hours or after confirming that the impact on upper-layer services is limited.

  7. Exit MRS Manager and log in to it again. If the login is successful, the configurations are valid.
  8. Log in to MRS Manager of the other cluster and repeat steps 5 to 7.
  9. Perform subsequent operations by referring to Updating the Client Configuration of Mutually Trusted Clusters (MRS 2.x or Earlier).
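
As an alternative to querying default_realm in step 4.3, the local domain name can also be read on a cluster node, assuming the node keeps a standard Kerberos configuration file at /etc/krb5.conf (the path is an assumption and may differ in your environment).

    # Print the default_realm configured on this node
    grep default_realm /etc/krb5.conf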

Updating the Client Configuration of Mutually Trusted Clusters (MRS 2.x or Earlier)

After cross-cluster mutual trust is configured, the service configuration parameters are modified on MRS Manager and the service is restarted. Therefore, you need to prepare the client configuration file again and update the client.

Scenario 1:

Cluster A and cluster B (peer and mutually trusted clusters) are of the same type, for example, analysis cluster or streaming cluster. Refer to Updating Client Configurations (Version 2.x or Earlier) to update the client configuration files accordingly.

  • Update the client configuration file of cluster A.
  • Update the client configuration file of cluster B.

Scenario 2:

Cluster A and cluster B (peer cluster and mutually trusted cluster) are of different types. Perform the following steps to update the configuration files.

  • Update the client configuration file of cluster A to cluster B.
  • Update the client configuration file of cluster B to cluster A.
  • Update the client configuration file of cluster A.
  • Update the client configuration file of cluster B.
  1. Log in to MRS Manager of cluster A.
  2. Click Services, and then Download Client.

    Figure 1 Downloading a client

  3. Set Client Type to Only configuration files.
  4. Set Download to to Remote host.
  5. Set Host IP Address to the IP address of the active Master node of cluster B, Host Port to 22, and Save Path to /tmp.

    • If the default port 22 for logging in to cluster B using SSH is changed, set Host Port to a new port.
    • The value of Save Path contains a maximum of 256 characters.

  6. Set Login User to root.

    If another user is used, ensure that the user has permissions to read, write, and execute the save path.

  7. Select Password or SSH Private Key for Login Mode.

    • Password: Enter the password of user root set during cluster creation.
    • SSH Private Key: Select and upload the key file used for creating the cluster.

  8. Click OK to generate a client file.

    • If the following information is displayed, the client package is saved. Click Close.
      Client files downloaded to the remote host successfully.
    • If the following information is displayed, check the username, password, and security group configurations of the remote host. Ensure that the username and password are correct and that an inbound rule for the SSH (22) port has been added to the security group of the remote host. Then go to step 2 to download the client again.
      Failed to connect to the server. Please check the network connection or parameter settings.

  9. Log in to the ECS of cluster B using VNC. For details, see Logging In to a Windows ECS Using VNC.

    All images support Cloud-Init. The preset username for Cloud-Init is root, and the password is the one set during cluster creation.

  10. Run the following command to switch to the client directory, for example, /opt/Bigdata/client:

    cd /opt/Bigdata/client

  11. Run the following command to update the client configuration of cluster A to cluster B:

    sh refreshConfig.sh Client installation directory Full path of the client configuration file package

    For example, run the following command:

    sh refreshConfig.sh /opt/Bigdata/client /tmp/MRS_Services_Client.tar

    If the following information is displayed, the configurations have been updated successfully.

    ReFresh components client config is complete.
    Succeed to refresh components client config.

    You can also refer to method 2 in Updating Client Configurations (Version 2.x or Earlier) to perform the operations in steps 1 to 11.

  12. Repeat steps 1 to 11 to update the client configuration file of cluster B to cluster A.
  13. Refer to Updating Client Configurations (Version 2.x or Earlier) to update the client configuration file of the local cluster. (A final smoke test is sketched after this list.)

    • Update the client configuration file of cluster A.
    • Update the client configuration file of cluster B.
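
After the client configuration files of both clusters have been updated, the final smoke test referenced in step 13 can be run from a client node to confirm that the refreshed configuration loads and that authentication still works. This is a minimal sketch: the keytab file and username are assumptions, and /opt/Bigdata/client is the example client directory used above.

    # Load the client environment (the directory matches the example used above)
    source /opt/Bigdata/client/bigdata_env
    # Authenticate with an example user and keytab, then run a basic HDFS check
    kinit -kt /opt/Bigdata/client/user.keytab devuser
    hdfs dfs -ls /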