Updated on 2022-02-22 GMT+08:00

Configuring Agencies

The following problems may occur during job execution in Data Development:

  • The Data Development module executes a job as the user who started it. For a job executed in periodic scheduling mode, if the IAM account used to start the job is deleted during the scheduling period, the system cannot obtain the user's identity authentication information, and the job fails to be executed.
  • If a job is started by a user with insufficient permissions, the job fails to be executed.

To solve the preceding problems, configure an agency. After an agency is configured, the job interacts with other services under the agency identity during execution, which prevents the job failures described above.

Role of an Agency

Cloud services interwork with each other, and some cloud services are dependent on other services. You can create an agency to delegate a cloud service to access other services and perform resource O&M on your behalf.

Agency Classification

Agencies are classified into workspace-level agencies and job-level agencies.

  • Workspace-level agencies can be globally applied to all jobs in the workspace.
  • Job-level agencies can only be applied to a single job.

A job-level agency takes precedence over a workspace-level agency. If neither is configured, the job is executed as the user who starts it.
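The priority rule above can be sketched as follows. This is a minimal illustration only; the function and its parameters are hypothetical, not an actual DAYUDGC API.

```python
def resolve_identity(job_agency, workspace_agency, starting_user):
    """Return the identity a job runs under, following the priority rule:
    job-level agency > workspace-level agency > user who started the job."""
    if job_agency:
        return job_agency
    if workspace_agency:
        return workspace_agency
    return starting_user

# A job-level agency overrides the workspace-level agency:
print(resolve_identity("job_agency", "ws_agency", "alice"))  # job_agency
# With neither configured, the job runs as the user who started it:
print(resolve_identity(None, None, "alice"))  # alice
```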

Constraints

  • To create or modify an agency, you must have the Security Administrator permissions.
  • To configure a workspace-level agency, you must have the DAYUDGC Administrator or Tenant Administrator policy.
  • Any user who has permission to view the agency list can configure a job-level agency.

Creating an Agency

  1. Log in to the IAM console.
  2. Choose Agencies. On the displayed page, click Create Agency.
  3. Enter an agency name, for example, DAYUDGC_agency.
  4. Set Agency Type to Cloud service and select DAYUDGC for Cloud Service. DAYUDGC can then perform resource O&M operations on your behalf. See Figure 1.
  5. Set Validity Period to Unlimited.
    Figure 1 Creating an agency
  6. Click Assign Permissions in the Permissions area.
  7. On the displayed page, search for the Tenant Administrator policy, select it, and click OK. See Figure 2.
    • Users assigned the Tenant Administrator policy have all permissions on all services except IAM. Therefore, delegating the Tenant Administrator policy to DAYUDGC allows DAYUDGC to access all related services.
    • To meet security control requirements with fewer permissions, you only need to configure the OBS OperateAccess permissions. (Execution logs are written to OBS during job execution, so the OBS OperateAccess permissions are required.) Then, configure agency permissions based on the node types in the job. For example, if a job contains only the Import GES node, you can configure the GES Administrator and OBS OperateAccess permissions. For details, see Permission Assignment.
      Figure 2 Assigning permissions
  8. Click OK.

Permission Assignment

After an account's operation permissions are delegated to DAYUDGC, you need to configure permissions for the agency identity so that DAYUDGC can interact with other services.

To meet security control requirements with fewer permissions, you can configure service-level Admin permissions based on the node types in jobs. For details, see Table 1.

Instead of Admin permissions, you can also configure permissions based on the operations, resources, and request conditions of a specific service. Such permissions are defined by service APIs (actions) and allow more fine-grained, secure access control of cloud resources. Configure them according to Table 2. For example, for a job containing only the Import GES node, you only need to create a custom policy and select ges:graph:getDetail (viewing graph details), ges:jobs:getDetail (querying task status), and ges:graph:access (using graphs).
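For the Import GES example above, a custom policy granting only those three actions could look like the following. This is a sketch in the IAM custom policy format; verify the exact syntax against the IAM custom policy documentation.

```json
{
  "Version": "1.1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ges:graph:access",
        "ges:graph:getDetail",
        "ges:jobs:getDetail"
      ]
    }
  ]
}
```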

  • MRS-related nodes (MRS Presto SQL, MRS Spark, MRS Spark Python, MRS Flink Job, and MRS MapReduce) and directly connected nodes (MRS Spark SQL and MRS Hive SQL) do not support job submission in agency mode. Therefore, jobs containing these node types cannot be configured with agencies.
  • MRS clusters that support job submission in agency mode are as follows:
    • Non-security cluster
    • Security cluster whose version is later than 2.1.0, and that has MRS 2.1.0.1 or later installed
  • Configure the service-level Admin permissions.

    During job execution, execution log information needs to be written to OBS. Therefore, the OBS OperateAccess permissions must be added for all jobs during coarse-grained authorization.

Table 1 Admin permissions for related nodes

| Node Name | System Permission | Description |
| --- | --- | --- |
| CDM Job, DIS Stream, DIS Dump, and DIS Client | DAYUDGC Administrator | All DAYUDataLake Governance Center permissions |
| Import GES | GES Administrator | Permissions required to perform all operations on GES. This role depends on the Tenant Guest and Server Administrator roles in the same project. |
| MRS Presto SQL, MRS Spark, MRS Spark Python, MRS Flink Job, and MRS MapReduce; MRS Spark SQL and MRS Hive SQL (connecting to MRS clusters through MRS APIs) | MRS Administrator; KMS Administrator | Users assigned the MRS Administrator role can perform all operations on MRS. This role depends on the Tenant Guest and Server Administrator roles in the same project. Users assigned the KMS Administrator role have the administrator permissions for encryption keys in DEW. |
| MRS Spark SQL, MRS Hive SQL, MRS Kafka, and Kafka Client (connecting to the clusters in proxy mode) | DAYUDGC Administrator; KMS Administrator | DAYUDGC Administrator has all permissions required for DAYUDGC. Users assigned the KMS Administrator role have the administrator permissions for encryption keys in DEW. |
| DLI Flink Job, DLI SQL, and DLI Spark | DLI Service Admin | All operation permissions for DLI |
| DWS SQL, Shell, and RDS SQL (connecting to data sources in proxy mode) | DAYUDGC Administrator; KMS Administrator | DAYUDGC Administrator has all permissions required for DAYUDGC. Users assigned the KMS Administrator role have the administrator permissions for encryption keys in DEW. |
| CSS | DAYUDGC Administrator; Elasticsearch Administrator | DAYUDGC Administrator has all permissions required for DAYUDGC. Users assigned the Elasticsearch Administrator role have all permissions for CSS. This role depends on the Tenant Guest and Server Administrator roles in the same project. |
| Create OBS, Delete OBS, and OBS Manager | OBS OperateAccess | Basic object operation permissions, such as viewing buckets, uploading objects, obtaining objects, deleting objects, and obtaining object ACLs |
| SMN | SMN Administrator | All operation permissions for SMN |

  • Configure fine-grained permissions. (Create custom policies based on the actions supported by each service.)

    For details on how to create a custom policy, see Creating Custom Policies.

  • Execution logs are written to OBS during job execution. In fine-grained authorization mode, the following OBS permissions must be added for all types of jobs:
    • obs:bucket:GetBucketLocation
    • obs:object:GetObject
    • obs:bucket:CreateBucket
    • obs:object:PutObject
    • obs:bucket:ListAllMyBuckets
    • obs:bucket:ListBucket
  • CDM Job, DIS Stream, DIS Dump and DIS Client nodes belong to the DAYUDGC module. DAYUDGC does not support fine-grained authorization. Therefore, only the DAYUDGC Administrator policy can be configured for jobs containing these types of nodes.
  • CSS does not support fine-grained authorization and requires a proxy. Therefore, the DAYUDGC Administrator and Elasticsearch Administrator policies can be configured for jobs containing these nodes.
  • SMN does not support fine-grained authorization. Therefore, jobs containing these nodes require the SMN Administrator permissions.
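In fine-grained mode, the OBS log-writing permissions listed above can be granted together in one custom policy, for example as follows. This is a sketch in the IAM custom policy format; confirm the exact syntax in the IAM custom policy documentation.

```json
{
  "Version": "1.1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "obs:bucket:GetBucketLocation",
        "obs:object:GetObject",
        "obs:bucket:CreateBucket",
        "obs:object:PutObject",
        "obs:bucket:ListAllMyBuckets",
        "obs:bucket:ListBucket"
      ]
    }
  ]
}
```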
Table 2 Creating a custom policy

| Node Name | Action |
| --- | --- |
| Import GES | ges:graph:access; ges:graph:getDetail; ges:jobs:getDetail |
| MRS Presto SQL, MRS Spark, MRS Spark Python, MRS Flink Job, and MRS MapReduce; MRS Spark SQL and MRS Hive SQL (connecting to MRS clusters through MRS APIs) | mrs:job:delete; mrs:job:stop; mrs:job:submit; mrs:cluster:get; mrs:cluster:list; mrs:job:get; mrs:job:list; kms:dek:crypto; kms:cmk:get |
| MRS Spark SQL, MRS Hive SQL, MRS Kafka, and Kafka Client (connecting to the clusters in proxy mode) | kms:dek:crypto; kms:cmk:get; DAYUDGC Administrator (role) |
| DLI Flink Job, DLI SQL, and DLI Spark | dli:jobs:get; dli:jobs:update; dli:jobs:create; dli:queue:submit_job; dli:jobs:list; dli:jobs:list_all |
| DWS SQL, Shell, and RDS SQL (connecting to data sources in proxy mode) | kms:dek:crypto; kms:cmk:get; DAYUDGC Administrator (role) |
| Create OBS, Delete OBS, and OBS Manager | obs:bucket:GetBucketLocation; obs:bucket:ListBucketVersions; obs:object:GetObject; obs:bucket:CreateBucket; obs:bucket:DeleteBucket; obs:object:DeleteObject; obs:object:PutObject; obs:bucket:ListAllMyBuckets; obs:bucket:ListBucket |

Configuring a Workspace-Level Agency

A workspace-level agency affects all jobs in the workspace, and some jobs may contain MRS-related nodes, which do not support agencies. Exercise caution when performing this operation.

  1. In the navigation tree on the left, choose Specifications.
  2. Click Agency. On the displayed page, configure an agency.
  3. You can select an agency from the agency list or create a new one. For details on how to create an agency and configure permissions, see Creating an Agency.
    Figure 3 Configuring a workspace-level agency
  4. Click OK to return to the Agency Configuration page. Then, click the save button to save the settings.

Configuring a Job-level Agency

You can create a job-level agency when creating a job. You can also modify the agency of an existing job.

Configuring an agency when creating a job

  1. In the navigation pane of the Data Development homepage, choose Development > Develop Job.
  2. Right-click the job directory and choose Create Job from the shortcut menu. The Create Job dialog box is displayed. If a workspace-level agency has been configured, it is used for the job by default. You can also select another agency from the agency list.
    Figure 4 Configuring a Job Agency

Modifying the agency of an existing job

  1. In the navigation pane of the Data Development homepage, choose Development > Develop Job.
  2. In the job directory, double-click an existing job. On the far right of the displayed page, click Basic Info. The dialog box of the job's basic settings is displayed. If a workspace-level agency has been configured, it is used by default. You can also select another agency from the agency list.