Updated on 2022-08-16 GMT+08:00

Storm-HDFS Development Guideline

Scenario

This topic applies only to the interaction between Storm and HDFS. Determine the versions of the jar packages described in this chapter based on the actual situation.

Login in security mode is classified into ticket login and keytab file login, and the procedures for the two modes are the same. Ticket login is an open-source capability that requires manual ticket uploading, which may cause reliability and usability problems. Therefore, keytab file login is recommended.

Procedure for Developing an Application

  1. Verify that the Storm and HDFS components have been installed and are running properly.
  2. Import storm-examples to the IntelliJ IDEA development environment. For details, see Environment Preparation.
  3. If security services are enabled in the cluster, perform the related configuration based on the login mode.

    • Keytab mode: Obtain a human-machine user from the administrator for login to the FusionInsight Manager platform and authentication, and obtain the keytab file of the user.
    • Ticket mode: Obtain a human-machine user from the administrator for subsequent secure login, enable the renewable and forwardable functions of the Kerberos service, set the ticket update interval, and restart Kerberos and related components.
    • The obtained user must belong to the storm group.
    • The default validity period of a user password is 90 days. Therefore, the validity period of the obtained keytab file is 90 days. To prolong the validity period of the keytab file, modify the user password policy and obtain the keytab file again.
    • The parameters for enabling the renewable and forwardable functions and setting the ticket update interval are on the System tab of the Kerberos service configuration page. The ticket update interval can be set to kdc_renew_lifetime or kdc_max_renewable_life based on the actual situation.

  4. Download and install the HDFS client.
  5. Obtain the related configuration files using the following method.

    Go to the /opt/clientHDFS/HDFS/hadoop/etc/hadoop directory on the installed HDFS client, and obtain configuration files core-site.xml and hdfs-site.xml.

    In keytab mode, obtain the keytab file by following step 3. In ticket mode, no extra configuration file is required.

    The obtained keytab file is named user.keytab by default. A user can directly change the file name as required. However, the changed file name must be uploaded as a parameter when the task is submitted.
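Collecting these files before packaging can be scripted. A minimal sketch, assuming the client path from this step; the destination directory ./conf and the keytab location are assumptions to adjust to your layout:

```shell
# Copy the HDFS client configuration files (and, in keytab mode, the
# keytab obtained in step 3) into a local directory before packaging.
# CLIENT_CONF, DEST, and the keytab location are assumptions.
CLIENT_CONF="/opt/clientHDFS/HDFS/hadoop/etc/hadoop"
DEST="./conf"
mkdir -p "${DEST}"
for f in core-site.xml hdfs-site.xml; do
  cp "${CLIENT_CONF}/${f}" "${DEST}/" 2>/dev/null || echo "not found: ${f}"
done
# Keytab mode only; skip this line in ticket mode.
cp ./user.keytab "${DEST}/" 2>/dev/null || echo "not found: user.keytab"
```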

  6. Obtain the related JAR packages.

    • Go to the HDFS/hadoop/share/hadoop/common/lib directory on the installed HDFS client, and obtain the following JAR packages:
      • commons-cli-<version>.jar
      • commons-io-<version>.jar
      • commons-lang-<version>.jar
      • commons-lang3-<version>.jar
      • commons-collections-<version>.jar
      • commons-configuration2-<version>.jar
      • commons-logging-<version>.jar
      • guava-<version>.jar
      • hadoop-*.jar
      • protobuf-java-<version>.jar
      • jackson-databind-<version>.jar
      • jackson-core-<version>.jar
      • jackson-annotations-<version>.jar
      • re2j-<version>.jar
      • jaeger-core-<version>.jar
      • opentracing-api-<version>.jar
      • opentracing-noop-<version>.jar
      • opentracing-tracerresolver-<version>.jar
      • opentracing-util-<version>.jar
    • Go to the HDFS/hadoop/share/hadoop/common directory on the installed HDFS client, and obtain the hadoop-*.jar package.
    • Go to the HDFS/hadoop/share/hadoop/client directory on the installed HDFS client, and obtain the hadoop-*.jar package.
    • Go to the HDFS/hadoop/share/hadoop/hdfs directory on the installed HDFS client, and obtain the hadoop-hdfs-*.jar package.
    • Obtain the following JAR packages from the sample project /src/storm-examples/storm-examples/lib:
      • storm-hdfs-<version>.jar
      • storm-autocreds-<version>.jar
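Gathering the JARs listed above also lends itself to a short script. A sketch, assuming the client root from step 4; the destination directory ./lib and the exact version suffixes are assumptions to adjust per your client version:

```shell
# Gather the dependency JARs listed above from the HDFS client into ./lib.
# CLIENT_HOME and DEST are assumptions -- adjust to your installation.
CLIENT_HOME="/opt/clientHDFS/HDFS/hadoop"
DEST="./lib"
LIB="${CLIENT_HOME}/share/hadoop/common/lib"
mkdir -p "${DEST}"
# Version-suffixed JARs from common/lib.
for p in commons-cli commons-io commons-lang commons-lang3 \
         commons-collections commons-configuration2 commons-logging \
         guava protobuf-java jackson-databind jackson-core \
         jackson-annotations re2j jaeger-core opentracing-api \
         opentracing-noop opentracing-tracerresolver opentracing-util; do
  cp "${LIB}/${p}-"*.jar "${DEST}/" 2>/dev/null || echo "not found: ${p}"
done
# hadoop-*.jar from common/lib, common, and client; hadoop-hdfs-*.jar from hdfs.
for d in common/lib common client; do
  cp "${CLIENT_HOME}/share/hadoop/${d}/hadoop-"*.jar "${DEST}/" 2>/dev/null \
    || echo "no hadoop JARs in ${d}"
done
cp "${CLIENT_HOME}/share/hadoop/hdfs/hadoop-hdfs-"*.jar "${DEST}/" 2>/dev/null \
  || echo "no hadoop-hdfs JARs"
# storm-hdfs and storm-autocreds from the sample project lib directory.
cp /src/storm-examples/storm-examples/lib/storm-*.jar "${DEST}/" 2>/dev/null \
  || echo "no storm-* JARs"
```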

IntelliJ IDEA Code Sample

Create a topology.

  public static void main(String[] args) throws Exception  
   { 
     TopologyBuilder builder = new TopologyBuilder(); 

     // Separator. Use "|" instead of the default "," to separate fields in the tuple. 
     // Mandatory HdfsBolt parameter 
     RecordFormat format = new DelimitedRecordFormat() 
             .withFieldDelimiter("|"); 

     // Synchronization policy. Synchronize the file system every 1000 tuples. 
     // Mandatory HdfsBolt parameter 
     SyncPolicy syncPolicy = new CountSyncPolicy(1000); 

     // File rotation policy. When a file reaches 5 MB, it is rotated and writing continues in a new file.
     // Mandatory HdfsBolt parameter 
     FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB); 

     // Target file path in HDFS
     // Mandatory HdfsBolt parameter 
     FileNameFormat fileNameFormat = new DefaultFileNameFormat() 
             .withPath("/user/foo/"); 


     //Create HdfsBolt. 
     HdfsBolt bolt = new HdfsBolt() 
             .withFsUrl(DEFAULT_FS_URL)
             .withFileNameFormat(fileNameFormat) 
             .withRecordFormat(format) 
             .withRotationPolicy(rotationPolicy) 
             .withSyncPolicy(syncPolicy); 

     //The spout generates random sentences. 
     builder.setSpout("spout", new RandomSentenceSpout(), 1);  
     builder.setBolt("split", new SplitSentence(), 1).shuffleGrouping("spout"); 
     builder.setBolt("count", bolt, 1).fieldsGrouping("split", new Fields("word")); 

     Config conf = new Config(); 

     //Add the plugins required for Kerberos authentication to the list. Mandatory in security mode. 
     setSecurityConf(conf, AuthenticationType.KEYTAB);

     //Write the plugin list configured on the client to the corresponding config item. Mandatory in security mode. 
     conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS, auto_tgts); 

     if(args.length >= 2) 
     { 
         //The default keytab file name is changed by the user. Specify the new keytab file name as a parameter. 
         conf.put(Config.STORM_CLIENT_KEYTAB_FILE, args[1]); 
     } 

     //Run a command to submit the topology. 
     StormSubmitter.submitTopology(args[0], conf, builder.createTopology()); 

   }

The target file path written by Storm cannot be in an SM4-encrypted HDFS partition.

Running the Application and Viewing Results

  1. Export the local JAR package. For details, see Packaging IntelliJ IDEA Code.
  2. Combine the configuration files and JAR packages obtained in steps 5 and 6, and export a complete service JAR package. For details, see Packaging Services.
  3. Run a command to submit the topology.

    In keytab mode, if the user has changed the keytab file name, for example, to huawei.keytab, the new file name must be passed to the command as a parameter. The following is an example submission command (the topology name is hdfs-test):

    storm jar /opt/jartarget/source.jar com.huawei.storm.example.hdfs.SimpleHDFSTopology hdfs-test huawei.keytab

    Before submitting source.jar, ensure that Kerberos security login has been performed. In keytab mode, the login user must be the same as the user to whom the uploaded keytab file belongs.

  4. After the topology is submitted successfully, log in to the HDFS cluster to view the files written by the topology.
  5. In ticket mode, perform the following operations to regularly upload a ticket. The upload interval depends on the deadline for renewing the ticket.

    1. Add the following content to a new line at the end of the Storm/storm-1.2.1/conf/storm.yaml file in the Storm client installation directory:

      topology.auto-credentials:

      - org.apache.storm.security.auth.kerberos.AutoTGT

    2. Run the ./storm upload-credentials hdfs-test command.
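The periodic upload can be automated with cron. A sketch of a crontab entry, assuming the Storm client is installed under /opt/client and a 12-hour interval, which must be shorter than the ticket renewal deadline; both the path and the interval are assumptions:

```shell
# Crontab entry (config fragment): re-upload credentials for the
# hdfs-test topology every 12 hours. The client path and the interval
# are assumptions -- adjust to your installation and ticket policy.
0 */12 * * * cd /opt/client/Storm/storm-1.2.1/bin && ./storm upload-credentials hdfs-test
```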