Loading Data
Function
This topic describes how to use Hive Query Language (HQL) to load data into the existing employees_info table, either from a local file system or from HDFS in the MRS cluster. The LOCAL keyword distinguishes local data sources from non-local ones.
To perform the following operations on a cluster with the security service enabled, you must have the update permission on the database, the owner permission, and read/write permissions on the files to be loaded. For details about the permission requirements, see Overview. An illustrative permission example is shown after these notes.
If LOCAL is used in a data loading statement, data is loaded from a local directory. In addition to the update permission on the table, you must have the read permission on the data path, and the data must be accessible to user omm on the active HiveServer node.
If OVERWRITE is used in a data loading statement, the existing data in the table is overwritten by the new data. If OVERWRITE is omitted, the new data is appended to the table.
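For reference, the following is a minimal sketch of how the required table permissions might be granted, assuming SQL-standard-based authorization is enabled and using a hypothetical user name hiveuser; the actual permission model of your cluster may differ, so see Overview for the authoritative procedure.
-- Hypothetical grant, assuming SQL-standard-based authorization: allow hiveuser to load (insert) data into the table.
GRANT INSERT ON TABLE employees_info TO USER hiveuser;
-- Hypothetical grant: allow hiveuser to query the loaded data.
GRANT SELECT ON TABLE employees_info TO USER hiveuser;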
Example Codes
-- Load the employee_info.txt file from the /opt/hive_examples_data/ directory of the local file system to the employees_info table.
---- Overwrite the original data using the new data.
LOAD DATA LOCAL INPATH '/opt/hive_examples_data/employee_info.txt' OVERWRITE INTO TABLE employees_info;
---- Retain the original data and add the new data to the table.
LOAD DATA LOCAL INPATH '/opt/hive_examples_data/employee_info.txt' INTO TABLE employees_info;

-- Load /user/hive_examples_data/employee_info.txt from HDFS to the employees_info table.
---- Overwrite the original data using the new data.
LOAD DATA INPATH '/user/hive_examples_data/employee_info.txt' OVERWRITE INTO TABLE employees_info;
---- Retain the original data and add the new data to the table.
LOAD DATA INPATH '/user/hive_examples_data/employee_info.txt' INTO TABLE employees_info;
Loading data copies the data to the specified table in the Hadoop distributed file system (HDFS).
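As a quick check after loading, you can query the table to confirm that the data has been copied; the following statements are illustrative and are not part of the original example.
-- Count the rows now stored in the table.
SELECT COUNT(*) FROM employees_info;
-- Inspect a few of the loaded rows.
SELECT * FROM employees_info LIMIT 10;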
Extensions
None