Example
Scenario
Assume that the Kafka component receives one word record every second.
The Spark application to be developed needs to implement the following function:
Count the total number of records of each word in real time.
log1.txt example file:
LiuYang YuanJing GuoYijun CaiXuyu Liyuan FangBo LiuYang YuanJing GuoYijun CaiXuyu FangBo
Data Planning
- Ensure that the cluster, including HDFS, Yarn, Spark, and Kafka, is installed successfully.
- Create an input_data1.txt file on the local host and copy the content of the log1.txt file into it.
On the node where the client is installed, create the /home/data directory and upload the input_data1.txt file to that directory.
- Set the Kafka Broker configuration item allow.everyone.if.no.acl.found to true.
- Create a topic.
{zkQuorum} indicates ZooKeeper cluster information. The format is IP:port.
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper {zkQuorum}/kafka --replication-factor 1 --partitions 3 --topic {Topic}
- Start the Producer of Kafka and send data to Kafka.
java -cp {ClassPath} com.huawei.bigdata.spark.examples.StreamingExampleProducer {BrokerList} {Topic}
In this command, ClassPath must include the absolute path of the Kafka JAR package of the Spark client, for example: /opt/client/Spark2x/spark/jars/*:/opt/client/Spark2x/spark/jars/streamingClient010/*
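The StreamingExampleProducer class is provided in the sample project and its source is not reproduced here. Purely as a rough sketch of what a producer matching the scenario (one word record per second) might look like, using the standard kafka-clients API; the class name, word list, and interval below are assumptions, not the sample's actual code, and in security mode the SASL/Kerberos producer properties would also be required.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StreamingExampleProducerSketch {
    public static void main(String[] args) throws InterruptedException {
        String brokerList = args[0];  // {BrokerList}
        String topic = args[1];       // {Topic}

        Properties props = new Properties();
        props.put("bootstrap.servers", brokerList);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // In security mode, SASL_PLAINTEXT and Kerberos properties would also be required here.

        String[] words = {"LiuYang", "YuanJing", "GuoYijun", "CaiXuyu", "Liyuan", "FangBo"};
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            int i = 0;
            while (true) {
                // Send one word record per second, as assumed in the scenario.
                producer.send(new ProducerRecord<>(topic, words[i++ % words.length]));
                Thread.sleep(1000L);
            }
        }
    }
}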
Development Approach
- Receive data from Kafka and generate DStream.
- Collect the statistics of word records by category.
- Calculate and print the result (a minimal code sketch of these steps follows this list).
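The sample class used later is com.huawei.bigdata.spark.examples.SecurityKafkaWordCount; its source is not shown in this document. The following is only a minimal Java sketch of the three steps above, written against the standard spark-streaming-kafka-0-10 API rather than the sample's actual code. The four arguments correspond to the <checkpointDir>, <brokers>, <topic>, and <batchTime> parameters described under Running Tasks; the security-mode Kafka properties (SASL, Kerberos) and the group id value are placeholders.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.Optional;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import scala.Tuple2;

public class KafkaWordCountSketch {
    public static void main(String[] args) throws Exception {
        String checkpointDir = args[0];            // <checkpointDir>
        String brokers = args[1];                  // <brokers>
        String topic = args[2];                    // <topic>
        long batchTime = Long.parseLong(args[3]);  // <batchTime>, in seconds

        SparkConf sparkConf = new SparkConf().setAppName("KafkaWordCountSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(batchTime));
        jssc.checkpoint(checkpointDir);  // required for the stateful word counts

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", brokers);
        kafkaParams.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaParams.put("group.id", "DemoConsumer");  // hypothetical group id
        // In security mode, SASL_PLAINTEXT and Kerberos properties would also be required here.

        // 1. Receive data from Kafka and generate a DStream.
        JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(Collections.singleton(topic), kafkaParams));

        // 2. Collect the statistics of word records by category (running count per word).
        JavaPairDStream<String, Integer> wordCounts = stream
                .map(ConsumerRecord::value)
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum)
                .updateStateByKey((newValues, state) -> {
                    int sum = state.orElse(0);
                    for (Integer v : newValues) {
                        sum += v;
                    }
                    return Optional.of(sum);
                });

        // 3. Calculate and print the result.
        wordCounts.print();

        jssc.start();
        jssc.awaitTermination();
    }
}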
Configuration Operations Before Running
In security mode, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal from the FusionInsight Manager page. The user in the example code is sparkuser; change it to the prepared development user name.
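The sample project's own login helper is not reproduced in this document. Purely as an illustration of where the user.keytab path, the krb5.conf path, and the user name appear, a typical Kerberos login in a Hadoop-based client application might look like the following sketch (the paths are the example values given under Packaging the Project; the class and variable names are hypothetical).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
    public static void main(String[] args) throws Exception {
        // Example paths from this guide; change them to the actual locations on the client server.
        String userKeytab = "/opt/female/user.keytab";
        String krb5Conf = "/opt/female/krb5.conf";
        String principal = "sparkuser";  // change to the prepared development user name

        // Point the JVM at the Kerberos configuration and log in with the keytab.
        System.setProperty("java.security.krb5.conf", krb5Conf);
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(hadoopConf);
        UserGroupInformation.loginUserFromKeytab(principal, userKeytab);
    }
}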
Packaging the Project
- Upload the user.keytab and krb5.conf files to the server where the client is installed.
- Use the Maven tool provided by IDEA to package the project and generate a JAR file. For details, see Compiling and Running the Application.
- Before compilation and packaging, change the paths of the user.keytab and krb5.conf files in the sample code to the actual paths on the client server where the files are located. Example: /opt/female/user.keytab and /opt/female/krb5.conf.
- Upload the JAR package to any directory (for example, /opt) on the server where the Spark client is located.
- Prepare dependency packages and upload the following JAR packages to the $SPARK_HOME/jars/streamingClient010 directory on the server where the Spark client is located.
- spark-streaming-kafkaWriter-0-10_2.12-3.1.1-hw-ei-311001.jar
- kafka-clients-xxx.jar
- kafka_2.12-xxx.jar
- spark-sql-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- spark-streaming-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- spark-token-provider-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- For dependency packages whose version number contains "hw-ei", download them from the Huawei open-source mirror site.
- For dependency packages whose version number does not contain "hw-ei", obtain them from the Maven central repository.
Running Tasks
When running the sample program, you need to specify <checkpointDir>, <brokers>, <topic>, and <batchTime>. <checkpointDir> indicates the HDFS path for storing the program result backup. <brokers> indicates the Kafka address for obtaining metadata. <topic> indicates the topic name read from Kafka. <batchTime> indicates the interval for Spark Streaming batch processing.
The Spark Streaming Kafka dependency package on the client is stored in a different location from the other dependency packages: it resides in $SPARK_HOME/jars/streamingClient010, whereas the other dependency packages reside in $SPARK_HOME/jars. Therefore, when running an application, add a configuration item to the spark-submit command to specify the path of the Spark Streaming Kafka dependency package, for example: --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}")
- Add configuration items to $SPARK_HOME/conf/jaas.conf:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true
debug=false;
};
- Add configuration items to $SPARK_HOME/conf/jaas-zk.conf:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="./user.keytab"
principal="sparkuser@<system domain name>"
useTicketCache=false
storeKey=true
debug=true;
};
- Use --files and a relative path to submit the keytab file, so that the keytab file is loaded into the containers of the executors.
- For the port number in <brokers>, use the SASL_PLAINTEXT port for the Kafka 0-10 Write To Print example, and the PLAINTEXT port for the Write To Kafka 0-10 example.
- Example code (Spark Streaming read Kafka 0-10 Write To Print)
bin/spark-submit --master yarn --deploy-mode client --files ./jaas.conf,./user.keytab --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}") --class com.huawei.bigdata.spark.examples.SecurityKafkaWordCount /opt/SparkStreamingKafka010JavaExample-1.0.jar <checkpointDir> <brokers> <topic> <batchTime>
The configuration example is as follows:
--files ./jaas.conf,./user.keytab //Use --files to specify the jaas.conf and keytab files.
- Spark Streaming Write To Kafka 0-10 example code:
bin/spark-submit --master yarn --deploy-mode client --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}") --class com.huawei.bigdata.spark.examples.JavaDstreamKafkaWriter /opt/SparkStreamingKafka010JavaExample-1.0.jar <groupId> <brokers> <topics>
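The JavaDstreamKafkaWriter class relies on the spark-streaming-kafkaWriter dependency package listed above, and its actual logic and argument handling are not shown in this document. Purely as a generic illustration of writing a DStream to Kafka with the standard producer API (not the sample's method), a sketch might look like the following; the queue-based input, batch interval, and class name are assumptions made only to keep the example self-contained.
import java.util.Arrays;
import java.util.LinkedList;
import java.util.Properties;
import java.util.Queue;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DstreamToKafkaSketch {
    public static void main(String[] args) throws Exception {
        String brokers = args[0];  // <brokers>
        String topic = args[1];    // a single target topic, for simplicity

        SparkConf conf = new SparkConf().setAppName("DstreamToKafkaSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        // A queue-based DStream just to have some words to write; the real example builds its own input.
        Queue<JavaRDD<String>> queue = new LinkedList<>();
        queue.add(jssc.sparkContext().parallelize(Arrays.asList("LiuYang", "YuanJing", "GuoYijun")));
        JavaDStream<String> words = jssc.queueStream(queue);

        // Write each partition of each micro-batch to Kafka with a plain producer.
        words.foreachRDD(rdd -> rdd.foreachPartition(records -> {
            Properties props = new Properties();
            props.put("bootstrap.servers", brokers);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                while (records.hasNext()) {
                    producer.send(new ProducerRecord<>(topic, records.next()));
                }
            }
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}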