Updated on 2022-06-01 GMT+08:00

Creating a Table

Function Description

This section describes how to use HiveQL to create internal and external tables. You can create a table by using any of the following methods:

  • Customize the table structure, and use the keyword EXTERNAL to differentiate between internal and external tables.
    • If all data is to be processed by Hive, create an internal table. When an internal table is deleted, the metadata and data in the table are deleted together.
    • If data is to be processed by multiple tools (such as Pig), create an external table. When an external table is deleted, only metadata is deleted.
  • Create a table based on an existing table. Use CREATE TABLE ... LIKE to copy the original table structure exactly, including the storage format.
  • Create a table based on query results using CREATE TABLE ... AS SELECT.

    This method is flexible: when copying the structure of an existing table, you can select only the fields you need for the new table. The storage format, however, is not copied.

Sample Code

-- Create the external table employees_info.
CREATE EXTERNAL TABLE IF NOT EXISTS employees_info 
( 
id INT, 
name STRING, 
usd_flag STRING, 
salary DOUBLE, 
deductions MAP<STRING, DOUBLE>, 
address STRING, 
entrytime STRING 
) 
-- Specify the field delimiter.
-- "FIELDS TERMINATED BY" indicates that the column delimiter is ',', and "MAP KEYS TERMINATED BY" indicates that the delimiter between keys and values in the MAP is '&'.
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' MAP KEYS TERMINATED BY '&'  
-- Set the table storage format to TEXTFILE.
STORED AS TEXTFILE;  
 
-- Use CREATE TABLE ... LIKE to create a table.
CREATE TABLE employees_like LIKE employees_info; 

-- Use CREATE TABLE ... AS SELECT to create a table based on query results, copying only the selected fields.
CREATE TABLE employees_as_select AS SELECT id, name FROM employees_info; 

-- Run the DESCRIBE command to check the structures of the employees_info, employees_like, and employees_as_select tables.
DESCRIBE employees_info; 
DESCRIBE employees_like; 
DESCRIBE employees_as_select; 
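The delimiters declared in ROW FORMAT determine how Hive parses the table's text files. As a minimal sketch (the HDFS path, file name, and data values below are illustrative assumptions, not part of the original example), a matching data row and the statement to load it could look like this:

```sql
-- A line of the source text file: fields separated by ',',
-- and the key/value of the deductions MAP separated by '&':
--   1,Wang,R,8000.01,personal income tax&0.05,Country1:City1,2014

-- Load the file from an HDFS path (placeholder) into the external table.
LOAD DATA INPATH '/user/hive_examples_data/employee_info.txt'
OVERWRITE INTO TABLE employees_info;
```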

Extended Applications

  • Create a partition table.

    A table may have one or more partitions. Each partition is saved as an independent folder in the table directory. Partitions help minimize the query scope, accelerate data query, and allow users to manage data based on certain criteria.

    A partition is defined using the PARTITIONED BY clause during table creation.

     CREATE EXTERNAL TABLE IF NOT EXISTS employees_info_extended 
     ( 
     id INT, 
     name STRING, 
     usd_flag STRING, 
     salary DOUBLE, 
     deductions MAP<STRING, DOUBLE>, 
     address STRING 
     ) 
    -- Use "PARTITIONED BY" to specify the column name and data type of the partition.
     PARTITIONED BY (entrytime STRING)  
     STORED AS TEXTFILE;     
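    Once the table is created, the partition column behaves like a regular column in queries, and each distinct value maps to its own folder. A brief sketch of writing to and querying a partition (the partition value '2014' is an illustrative assumption):

    ```sql
    -- Write rows into one partition; only that partition's folder is touched.
    INSERT INTO TABLE employees_info_extended PARTITION (entrytime = '2014')
    SELECT id, name, usd_flag, salary, deductions, address
    FROM employees_info
    WHERE entrytime = '2014';

    -- A filter on the partition column restricts the scan to matching folders.
    SELECT name, salary FROM employees_info_extended WHERE entrytime = '2014';
    ```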
  • Update the table structure.

    After a table is created, you can use ALTER TABLE to add or delete fields, modify table attributes, and add partitions.

    -- Add the tel_phone and email fields to the employees_info_extended table.
     ALTER TABLE employees_info_extended ADD COLUMNS (tel_phone STRING, email STRING);     
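    Besides adding columns, ALTER TABLE can also modify table attributes and add partitions, as noted above. A hedged sketch (the property value and partition value are illustrative assumptions):

    ```sql
    -- Modify a table attribute.
    ALTER TABLE employees_info_extended
      SET TBLPROPERTIES ('comment' = 'extended employee table');

    -- Add a partition for a new entrytime value.
    ALTER TABLE employees_info_extended ADD PARTITION (entrytime = '2015');
    ```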
  • Configure Hive data encryption when creating a table.

    Set the table format to RCFile (recommended) or SequenceFile, and the encryption algorithm to ARC4Codec. SequenceFile is Hadoop's own file format, and RCFile is a Hive file format with column-oriented storage optimization. When a big table is queried, RCFile provides higher performance than SequenceFile.

     set hive.exec.compress.output=true; 
     set hive.exec.compress.intermediate=true; 
     set hive.intermediate.compression.codec=org.apache.hadoop.io.encryption.arc4.ARC4Codec; 
     create table seq_Codec (key string, value string) stored as RCFile;
  • Enable Hive to use OBS.

    You need to set the following parameters in beeline. To obtain the AK/SK, log in to the OBS console and access the My Credentials page.

    set fs.obs.access.key=AK;
    set fs.obs.secret.key=SK;
    set metaconf:fs.obs.access.key=AK;
    set metaconf:fs.obs.secret.key=SK;

    Set the storage location of the new table to OBS.

    create table obs(c1 string, c2 string) stored as orc location 'obs://obs-lmm/hive/orctest' tblproperties('orc.compress'='SNAPPY');

    When Hive uses OBS to store data, a table and its partitions cannot reside in different buckets.

    For example, suppose you create a partition table and set its storage location to a folder in OBS bucket 1. Modifying the storage location of a table partition to another bucket then does not take effect: when data is inserted, the table's storage location is used.

    1. Create a partition table and specify the path for storing the table.
      create table table_name(id int,name string,company string) partitioned by(dt date) row format delimited fields terminated by ',' stored as textfile location "obs://OBS bucket 1/Folder in the bucket";
    2. Modifying the storage location of the table partition to another bucket does not take effect (the partition value below is illustrative).
      alter table table_name partition(dt='2021-06-17') set location "obs://OBS bucket 2/Folder in the bucket";