Updated on 2024-09-20 GMT+08:00

Calling UDFs in Spark SQL Jobs

Scenario

DLI allows you to query data using Hive user-defined functions (UDFs). A UDF operates on a single row of data at a time and is suitable for scenarios that insert or delete a single record.

Constraints

  • To perform UDF-related operations on DLI, you need to create a SQL queue instead of using the default queue.
  • When a UDF is used across accounts, users other than its creator must be granted permission before they can use it. To grant permission:

    Log in to the DLI console and choose Data Management > Package Management. On the displayed page, select your UDF Jar package and click Manage Permissions in the Operation column. On the permission management page, click Grant Permission in the upper right corner and select the required permissions.

  • If you use a static class or interface in a UDF, add a try...catch block to capture exceptions. Otherwise, package conflicts may occur.
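
    The constraint above can be sketched as follows. This is a minimal, hypothetical example (the class name and parse logic are illustrative, not from this guide) of guarding a static call inside evaluate with try...catch so that an exception surfaces as a null result instead of failing the job:

    ```java
    // Hypothetical sketch: wrap a static class/interface call inside the
    // UDF's evaluate method so that exceptions (for example, from package
    // conflicts) are caught rather than propagated to the job.
    class SafeUdfSketch {
        public Integer evaluate(String value) {
            try {
                // Integer.parseInt stands in for any static call here.
                return Integer.parseInt(value);
            } catch (Exception e) {
                // Return null on failure instead of letting the job fail.
                return null;
            }
        }
    }
    ```

    In a real UDF, this class would also extend org.apache.hadoop.hive.ql.exec.UDF as described in Procedure.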

Environment Preparations

Before you start, set up the development environment.

Table 1 Development environment

  • OS: Windows 7 or later
  • JDK: JDK 1.8
  • IntelliJ IDEA: Used for application development. Use version 2019.1 or another compatible version.
  • Maven: Basic component of the development environment. Maven manages the project throughout the software development lifecycle.

Development Process

The process of developing a UDF is as follows:
Figure 1 Development process
Table 2 Process description

  1. Create a Maven project and configure the POM file. (IntelliJ IDEA) Write the UDF code by referring to the steps in Procedure.
  2. Write UDF code. (IntelliJ IDEA)
  3. Debug, compile, and pack the code into a Jar package. (IntelliJ IDEA)
  4. Upload the Jar package to OBS. (OBS console) Upload the UDF Jar file to an OBS directory.
  5. Create the UDF on DLI. (DLI console) Create the UDF on the SQL job management page of the DLI console.
  6. Verify and use the UDF on DLI. (DLI console) Use the UDF in your DLI job.

Procedure

  1. Create a Maven project and configure the POM file. This step uses IntelliJ IDEA 2020.2 as an example.
    1. Start IntelliJ IDEA and choose File > New > Project.
      Figure 2 Creating a project
    2. Choose Maven, set Project SDK to 1.8, and click Next.
      Figure 3 Choosing Maven
    3. Set the project name, configure the storage path, and click Finish.
      Figure 4 Creating a project
    4. Add the following content to the pom.xml file.
      <dependencies>
              <dependency>
                  <groupId>org.apache.hive</groupId>
                  <artifactId>hive-exec</artifactId>
                  <version>1.2.1</version>
              </dependency>
      </dependencies>
      Figure 5 Adding configurations to the POM file
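      If the target runtime already ships the Hive classes, the dependency is often marked with the provided scope so that it is available at compile time but not bundled into the Jar. This variant is a common Maven convention, not a requirement stated in this guide:

      ```xml
      <dependencies>
              <dependency>
                  <groupId>org.apache.hive</groupId>
                  <artifactId>hive-exec</artifactId>
                  <version>1.2.1</version>
                  <!-- Optional: "provided" keeps hive-exec out of the
                       packaged Jar when the runtime supplies it. -->
                  <scope>provided</scope>
              </dependency>
      </dependencies>
      ```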
    5. Choose src > main and right-click the java folder. Choose New > Package to create a package and a class file.
      Figure 6 Creating a package and a class file

      Set the package name as you need. In this example, set Package to com.huawei.demo. Then, press Enter.

      Figure 7 Customizing a package

      Create a Java Class file in the package path. In this example, the Java Class file is SumUdfDemo.

      Figure 8 Creating a Java class file
  2. Write UDF code.
    1. The UDF class must inherit org.apache.hadoop.hive.ql.exec.UDF.
    2. You must implement the evaluate function, which can be overloaded.

    For details about how to implement the UDF, see the following sample code:

    package com.huawei.demo;

    import org.apache.hadoop.hive.ql.exec.UDF;

    public class SumUdfDemo extends UDF {
        public int evaluate(int a, int b) {
            return a + b;
        }
    }
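
    Because evaluate can be overloaded, one function name can handle several input types; Hive selects the matching signature at query time. A minimal sketch of this (the base class is omitted here only so the snippet stands alone; in the real UDF both methods would sit in the class that extends org.apache.hadoop.hive.ql.exec.UDF):

    ```java
    // Sketch of overloaded evaluate methods. In the actual UDF, these
    // belong in a class extending org.apache.hadoop.hive.ql.exec.UDF
    // (omitted so the snippet compiles without the hive-exec dependency).
    class SumUdfSketch {
        // Matches integer arguments, for example TestSumUDF(1, 2).
        public int evaluate(int a, int b) {
            return a + b;
        }

        // Overload for long arguments, for example TestSumUDF(1L, 2L).
        public long evaluate(long a, long b) {
            return a + b;
        }
    }
    ```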
  3. Use IntelliJ IDEA to compile the code and pack it into a JAR package.
    1. Click Maven in the toolbar on the right, and then click clean and compile to compile the code.

      After the compilation is successful, click package.

      Figure 9 Compiling and packaging

      The generated JAR package is stored in the target directory. In this example, MyUDF-1.0-SNAPSHOT.jar is stored in D:\DLITest\MyUDF\target.

      Figure 10 Generating a JAR file
  4. Log in to the OBS console and upload the file to the OBS path.

    The region of the OBS bucket to which the Jar package is uploaded must be the same as the region of the DLI queue. Cross-region operations are not allowed.

  5. (Optional) Upload the file to DLI for package management.
    1. Log in to the DLI management console and choose Data Management > Package Management.
    2. On the Package Management page, click Create in the upper right corner.
    3. In the Create Package dialog, set the following parameters:
      1. Type: Select JAR.
      2. OBS Path: Specify the OBS path for storing the package.
      3. Set Group and Group Name as required for package identification and management.
    4. Click OK.
  6. Create the UDF on DLI.
    1. Log in to the DLI console and choose SQL Editor. Set Engine to spark, and select the created SQL queue and database.
      Figure 11 Selecting the queue and database
    2. In the SQL editing area, run the following statement to create a UDF and click Execute.
      CREATE FUNCTION TestSumUDF AS 'com.huawei.demo.SumUdfDemo' USING JAR 'obs://dli-test-obs01/MyUDF-1.0-SNAPSHOT.jar';
  7. Restart the original SQL queue for the added function to take effect.
    1. Log in to the DLI console and choose Queue Management from the navigation pane. In the Operation column of the SQL queue job, click Restart.
    2. In the Restart dialog box, click OK.
  8. Call the UDF.

    Use the UDF created in step 6 in a SELECT statement as follows:

    select TestSumUDF(1,2);
    Figure 12 Execution result
  9. (Optional) Delete the UDF.

    If the UDF is no longer used, run the following statement to delete it:

    DROP FUNCTION TestSumUDF;