Updated on 2024-08-20 GMT+08:00

Overview of DLI Elastic Resource Pools and Queues

DLI queues fall into three types: queues in an elastic resource pool, standard queues, and the default queue.

  • Queues in an elastic resource pool:

    An elastic resource pool offers compute resources (CPU and memory) required for running DLI jobs, which can adapt to the changing demands of services.

    You can create multiple queues within an elastic resource pool. These queues are associated with specific jobs and data processing tasks, and serve as the basic unit for resource allocation and usage within the pool. This means queues are specific compute resources required for executing jobs.

    Queues within an elastic resource pool can share the pool's resources to execute jobs, provided the queue allocation policy is set correctly. This improves queue utilization.

    For how to buy an elastic resource pool and create queues within it, see Creating an Elastic Resource Pool and Creating Queues in an Elastic Resource Pool.

  • Standard queue (deprecated and not recommended): Previous-generation compute resources for running DLI jobs. Before buying such a queue, you need to estimate the resources you will need.
  • default queue:
    • The default queue is preset in DLI and allocates resources on demand. If you are unsure how much queue capacity you will require, or you are not yet able to create your own queues, you can use the default queue to run your jobs.
    • The default queue is typically used by users who are new to DLI. However, because it is shared, resource contention may prevent your jobs from getting the resources they need. You are advised to create your own queue to run your jobs.

Queues in an elastic resource pool are recommended, as they can be scaled as needed and deliver higher resource utilization.

Use Cases of Elastic Resource Pools

Resources are too fixed to meet a range of requirements.

The quantity of compute resources required for jobs changes at different times of the day. If resources cannot be scaled with service requirements, they are either wasted or insufficient. Figure 1 shows the resource usage over a day.

  • After ETL jobs are complete, no other jobs run from 04:00 to 07:00 in the early morning. The resources could be released during that window.
  • From 09:00 a.m. to 12:00 p.m. and from 02:00 p.m. to 04:00 p.m., a large number of ETL report and job queries queue for compute resources.
    Figure 1 Fixed resources
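To make the cost of a fixed allocation concrete, the following sketch compares a fixed queue size against demand that varies over the day, in the pattern described above. All figures (hourly demand, the 60-CU fixed capacity) are invented for illustration and are not DLI metering data:

```python
# Hypothetical hourly demand (in CUs) over one day: idle after ETL jobs
# finish in the early morning, peaks at 09:00-12:00 and 14:00-16:00.
hourly_demand = [
    20, 20, 20, 20,  5,  5,  5, 10,   # 00:00-07:59 (ETL done by 04:00)
    40, 90, 90, 90, 40, 40, 90, 90,   # 08:00-15:59 (query peaks)
    40, 30, 30, 20, 20, 20, 20, 20,   # 16:00-23:59
]

fixed_capacity = 60  # a fixed queue sized between the idle and peak loads

# Capacity paid for but left idle, and demand the fixed queue cannot serve.
wasted = sum(max(fixed_capacity - d, 0) for d in hourly_demand)
shortfall = sum(max(d - fixed_capacity, 0) for d in hourly_demand)

print(f"CU-hours wasted (capacity idle): {wasted}")     # 715
print(f"CU-hours short (jobs queuing):   {shortfall}")  # 150
```

With these illustrative numbers, the fixed queue is simultaneously over-provisioned most of the day and under-provisioned at the peaks, which is exactly the situation elastic scaling addresses.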

Resources are isolated and cannot be shared.

A company has two departments, and each runs its jobs on its own DLI queue. Department A is idle from 08:00 a.m. to 12:00 p.m. and has spare resources, while department B receives a large number of service requests during this period and needs more resources. Since the resources are isolated and cannot be shared between departments A and B, the idle resources are wasted.
Figure 2 Resource waste due to resource isolation

Elastic resource pools can be shared by different queues and scaled automatically, improving resource utilization and absorbing demand peaks.

You can use elastic resource pools to centrally manage and allocate resources. Multiple queues can be bound to an elastic resource pool to share the pooled resources.
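The department scenario above can be sketched as a toy allocator. This is not DLI's actual scheduler; the queue names (`dept_a`, `dept_b`), the `allocate` function, and all capacity figures are hypothetical. It only illustrates the idea that queues bound to one pool declare a guaranteed minimum and a cap, and idle capacity flows to whichever queue is busy:

```python
# Minimal sketch (NOT DLI's real scheduler): queues bound to one pool
# each get their guaranteed minimum, then spare capacity is granted to
# queues with unmet demand, up to their caps.
POOL_CAPACITY = 128  # total CUs in the pool (illustrative figure)

def allocate(pool_capacity, queues):
    """Give each queue min(min, demand), then split the remaining
    capacity among queues with unmet demand, busiest first."""
    alloc = {name: min(q["min"], q["demand"]) for name, q in queues.items()}
    spare = pool_capacity - sum(alloc.values())
    for name, q in sorted(queues.items(), key=lambda kv: -kv[1]["demand"]):
        extra = min(q["demand"], q["max"]) - alloc[name]
        grant = min(extra, spare)
        if grant > 0:
            alloc[name] += grant
            spare -= grant
    return alloc

# 10:00 a.m.: department A is idle, department B is at its peak.
queues = {
    "dept_a": {"min": 16, "max": 64, "demand": 8},
    "dept_b": {"min": 16, "max": 112, "demand": 110},
}
print(allocate(POOL_CAPACITY, queues))
# {'dept_a': 8, 'dept_b': 110}
```

Because both queues draw from the same pool, department B's peak is served by capacity that department A is not using, instead of that capacity sitting idle behind an isolation boundary.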