Creating User-Defined Hive Functions
Scenario
When the built-in functions of Hive cannot meet requirements, you can compile user-defined functions (UDFs) and use them in queries. This section describes how to develop and run UDFs.
Introduction to Hive UDF
| Type | Description |
|---|---|
| User-Defined Functions (UDFs) | User-defined scalar function whose input and output have a one-to-one relationship: each input row produces one output value. |
| User-Defined Aggregating Functions (UDAFs) | User-defined aggregating function, which aggregates multiple input records into one output value. It can be used with the GROUP BY clause in SQL. |
| User-Defined Table-Generating Functions (UDTFs) | User-defined table-generating function, which is used to output multiple rows of data upon a single function call. It is the only user-defined function that can return multiple fields. |
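As an illustration of the one-to-many contract of a UDTF, the following is a minimal sketch of a table-generating function that splits a comma-separated string into one output row per token. The class and column names are illustrative and are not part of the AddDoublesUDF sample developed later; the GenericUDTF base class and the ObjectInspector types are from the standard Hive API.

```java
package com.xxx.bigdata.hive.example.udf;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Illustrative UDTF: one input string in, multiple output rows out.
public class SplitToRowsUDTF extends GenericUDTF {
    @Override
    public StructObjectInspector initialize(ObjectInspector[] argOIs) throws UDFArgumentException {
        // Declare a single output column named "value" of type STRING.
        List<String> fieldNames = new ArrayList<>();
        List<ObjectInspector> fieldOIs = new ArrayList<>();
        fieldNames.add("value");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] args) throws HiveException {
        if (args[0] == null) {
            return;
        }
        // Emit one output row per comma-separated token.
        for (String part : args[0].toString().split(",")) {
            forward(new Object[] { part });
        }
    }

    @Override
    public void close() throws HiveException {
        // No state to clean up in this sketch.
    }
}
```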
UDFs are classified as follows based on their usage:
- Temporary functions: used only in the current session and must be recreated after a session restarts.
- Permanent functions: used in multiple sessions. You do not need to create them every time a session restarts.
Notes and Constraints
- You need to properly control the memory and thread usage in UDF code. Improper control may cause memory overflow or high CPU usage.
- If Ranger authentication is enabled for the cluster, you need to disable Ranger authentication before using Python UDFs.
Step 1: Develop a UDF
This section uses AddDoublesUDF as an example to describe how to compile and use UDFs.
AddDoublesUDF is used to add two or more floating point numbers. Note the following:
- A common UDF must extend org.apache.hadoop.hive.ql.exec.UDF.
- A common UDF must implement at least one evaluate() method. evaluate() supports overloading.
- To develop a UDF, add the hive-exec-*.jar dependency package to the project. You can obtain the package from the Hive service installation directory, for example, ${BIGDATA_HOME}/components/FusionInsight_HD_*/Hive/disaster/plugin/lib/.
The following is the sample code of AddDoublesUDF. xxx indicates the name of the organization that develops the program.
```java
package com.xxx.bigdata.hive.example.udf;

import org.apache.hadoop.hive.ql.exec.UDF;

public class AddDoublesUDF extends UDF {
    public Double evaluate(Double... a) {
        Double total = 0.0;
        // Processing logic
        for (int i = 0; i < a.length; i++) {
            if (a[i] != null) {
                total += a[i];
            }
        }
        return total;
    }
}
```

Package the preceding sample program into AddDoublesUDF.jar.
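If you package the class manually rather than through an IDE, a minimal sketch using the standard JDK tools looks as follows; the hive-exec JAR path is an assumption, so use the copy obtained from your cluster:

mkdir -p classes
javac -cp hive-exec-*.jar -d classes AddDoublesUDF.java
jar cf AddDoublesUDF.jar -C classes .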
Step 2: Create a User for Executing the UDF
- Log in to Manager as user admin, choose Cluster > Cluster Properties, and check and record the authentication method of the cluster.
- Choose Cluster > Services > Hive, click More in the upper right corner of the page, and check whether Ranger authentication is enabled for Hive.
- Choose System > Permission > User, click Create, set the following parameters, and click OK.
Table 2 Creating a user

| Parameter | Example Value | Description |
|---|---|---|
| Username | test | Enter a custom username. |
| User Type | Human-Machine | The options are Human-Machine and Machine-Machine. |
| Password | xxx | Enter the password of the user. |
| Confirm Password | xxx | Enter the password of the user again. |
| User Group | Click Add, select the hive and hadoop user groups, and click OK. | In the User Group area, click Add to add one or more user groups to the list. |
- Assign permissions to the new user based on the cluster authentication mode (security or normal) and whether Ranger authentication is enabled for Hive:
- If the Enable Ranger button is grayed out (Ranger authentication has been enabled for Hive), go to 5.
- If the Disable Ranger button is grayed out (Ranger authentication has not been enabled for Hive), no further action is required.
- Hive uses Ranger for authentication. Log in to the Ranger management page as the Ranger administrator (rangeradmin for security mode and admin for normal mode) to add Hive permission control policies for the user.
- Choose Cluster > Services > Ranger and click the hyperlink on the right of Ranger web UI.
- For a cluster in security mode, click the username in the upper right corner of the page, click Log Out, and then log in to the Ranger management page as user rangeradmin.
- On the home page, click the component plug-in name in the HADOOP SQL area, for example, Hive.
- In the Access tab, click Add New Policy, set the following parameters, and click Add:
Table 3 Adding a Hive access control policy

| Parameter | Example Value | Description |
|---|---|---|
| Policy Name | test_hive | Custom policy name, which must be unique within the service. |
| database | default | Name of the Hive database to which the policy applies. For a permanent function, enter the name of the database to which the function is to be added, for example, default. For a temporary function, switch database to global and enter a specific function name or set it to *. |
| table | - | Name of the Hive table to which the policy applies. Switch the value to udf and enter a specific function name or set it to *. This parameter is not required for temporary functions. |
| Allow Conditions | - | Conditions allowed by the policy, that is, the permitted users, permissions, and exceptions. Select the new user in the Select User column and add the required permissions in Permissions. For a permanent function, grant permissions based on service requirements, for example, create, select, and drop. For a temporary function, add the Temporary UDF Admin permission. |
- Hive uses role-based authentication in Manager. Grant the new user the Hive administrator permission so that it can execute permanent and temporary functions.
- On the homepage of Manager, choose System > Permission > Role, click Create Role, set the following parameters, and click OK.
- Role Name: Enter a role name, for example, test_role.
- Configure Resource Permission: Click the name of the desired cluster, click Hive, and select Admin.
- Click User, locate the row of the user created in 3, and click Modify.
- On the Modify User page, click Add on the right of Role, add the newly created role with the Hive administrator rights, and click OK.
Step 3: Run the UDF
- Upload the AddDoublesUDF.jar package to the client installation node, for example, to the /opt directory.
- Log in to the node where the client is installed as the client installation user and upload AddDoublesUDF.jar to a specified directory in HDFS, for example, /user/hive_examples_jars. Both the user who creates the function and the user who uses the function must have the read permission on the package.
- Go to the client installation directory and configure environment variables.
Go to the client installation directory.
cd Client installation directory
Configure environment variables.
source bigdata_env
- Authenticate the user.
- For a cluster with Kerberos authentication enabled (security mode):
kinit Service user
- For a cluster with Kerberos authentication disabled (normal mode):
export HADOOP_USER_NAME=Service user
- Upload the UDF JAR package to the HDFS directory.
hdfs dfs -put /opt/AddDoublesUDF.jar /user/hive_examples_jars
Modify permissions.
hdfs dfs -chmod 777 /user/hive_examples_jars
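To confirm that the package is in place and readable, you can optionally list the directory:
hdfs dfs -ls /user/hive_examples_jars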
- Log in to the Hive client.
- For a cluster with Kerberos authentication enabled (security mode), run the following command:
beeline
If the user is assigned the Hive administrator role, run the following command in each new beeline session to switch to the admin role before performing maintenance operations:
set role admin;
- For a cluster with Kerberos authentication disabled (normal mode), run the following command:
beeline -n Hive service user
- Run the following commands on the Hive Server to create a custom function.
- Create a permanent function.
CREATE FUNCTION addDoubles AS 'com.xxx.bigdata.hive.example.udf.AddDoublesUDF' using jar 'hdfs://hacluster/user/hive_examples_jars/AddDoublesUDF.jar';
addDoubles is the name of the function, which is used for SELECT operations. xxx is typically the name of the organization that develops the program.
- Create a temporary function.
CREATE TEMPORARY FUNCTION addDoubles AS 'com.xxx.bigdata.hive.example.udf.AddDoublesUDF' using jar 'hdfs://hacluster/user/hive_examples_jars/AddDoublesUDF.jar';
- addDoubles is the name of the function, which is used for SELECT operations.
- TEMPORARY indicates that the function is used only in the current session of the Hive Server.
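After creating the function with either statement above, you can optionally confirm that it is registered by running the following statement in the same session:
DESCRIBE FUNCTION addDoubles;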
- Run the following command on the Hive Server to use the function:
SELECT addDoubles(1,2,3);
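Because AddDoublesUDF sums its non-null arguments, this statement should return 6.0. The function can likewise be applied to table columns, for example, SELECT addDoubles(col1, col2) FROM tbl;, where tbl is a hypothetical table with DOUBLE columns.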
If the error message "[Error 10011]" is displayed when you reconnect to the client and use the function, run the reload function; command first and then use the function again. An example of the error message is as follows:
Error: Error while compiling statement: FAILED: SemanticException [Error 10011]: Line 1:7 ...
- Run the following command on the Hive Server to delete the function:
DROP FUNCTION addDoubles;