Development Plan
Overview
Assume that the Kafka component receives one word record every second.
The Spark application to be developed must implement the following function:
Calculate the sum of records for each word in real time.
log1.txt example file:
LiuYang YuanJing GuoYijun CaiXuyu Liyuan FangBo LiuYang YuanJing GuoYijun CaiXuyu FangBo
Preparing Data
- Ensure that the cluster is installed with all the required components, namely HDFS, YARN, Spark, and Kafka.
- Create the input_data1.txt file on the local host and copy the content of the log1.txt file into it.
On the client installation node, create the /home/data directory and upload the input_data1.txt file to that directory.
- Set the allow.everyone.if.no.acl.found parameter of Kafka Broker to true.
- Create a topic.
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper {zkQuorum}/kafka --replication-factor 1 --partitions 3 --topic {Topic}
{zkQuorum} indicates the ZooKeeper cluster information in the IP address:Port number format.
- Start the Producer of Kafka and send data to Kafka.
java -cp {ClassPath} com.huawei.bigdata.spark.examples.StreamingExampleProducer {BrokerList} {Topic}
{ClassPath} must contain the absolute paths of the Kafka JAR packages on the Spark client, for example, /opt/client/Spark2x/spark/jars/*:/opt/client/Spark2x/spark/jars/streamingClient010/*. A sketch of such a producer is shown below.
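The following is a minimal, hedged sketch of a producer that sends one word record per second, matching the assumption in the overview. It is not the source of com.huawei.bigdata.spark.examples.StreamingExampleProducer; the class name, word list, and producer settings are illustrative, and a security-mode cluster additionally requires the SASL/Kerberos producer properties.

import java.util.Properties;
import java.util.Random;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class WordProducerSketch {
    public static void main(String[] args) throws Exception {
        String brokerList = args[0]; // {BrokerList}
        String topic = args[1];      // {Topic}
        String[] words = {"LiuYang", "YuanJing", "GuoYijun", "CaiXuyu", "Liyuan", "FangBo"};

        Properties props = new Properties();
        props.put("bootstrap.servers", brokerList);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // In security mode, SASL/Kerberos properties (security.protocol, sasl.kerberos.service.name, ...) are also required.

        Random random = new Random();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // Send one random word record per second.
                producer.send(new ProducerRecord<>(topic, words[random.nextInt(words.length)]));
                Thread.sleep(1000);
            }
        }
    }
}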
Development Guidelines
- Receive data from Kafka and generate DStream.
- Classify word records.
- Calculate and print the result.
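The following is a minimal sketch of these three steps using the Spark Streaming Kafka 0-10 Java API, assuming a running total per word maintained with updateStateByKey. It is illustrative only and is not the implementation of com.huawei.bigdata.spark.examples.SecurityKafkaWordCount; the class name, group.id, and state logic are assumptions made for this sketch.

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.Optional;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import scala.Tuple2;

public class KafkaWordCountSketch {
    public static void main(String[] args) throws Exception {
        String checkpointDir = args[0];            // <checkpointDir>
        String brokers = args[1];                  // <brokers>
        String topic = args[2];                    // <topic>
        long batchTime = Long.parseLong(args[3]);  // <batchTime>, in seconds

        SparkConf conf = new SparkConf().setAppName("KafkaWordCountSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(batchTime));
        jssc.checkpoint(checkpointDir); // required by updateStateByKey

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", brokers);
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "word-count-sketch"); // placeholder consumer group
        kafkaParams.put("auto.offset.reset", "latest");

        // Receive data from Kafka and generate a DStream.
        JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.Subscribe(Collections.singleton(topic), kafkaParams));

        // Classify word records: split each record value into words and pair each word with 1,
        // then keep a running total per word across batches.
        Function2<List<Integer>, Optional<Integer>, Optional<Integer>> updateTotal = (values, state) -> {
            int sum = state.isPresent() ? state.get() : 0;
            for (Integer v : values) {
                sum += v;
            }
            return Optional.of(sum);
        };
        JavaPairDStream<String, Integer> counts = stream
                .flatMap(record -> Arrays.asList(record.value().split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .updateStateByKey(updateTotal);

        // Calculate and print the result of each batch.
        counts.print();

        jssc.start();
        jssc.awaitTermination();
    }
}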
Preparations
For clusters with the security mode enabled, the Spark Core sample code needs to read two files (user.keytab and krb5.conf), which are the authentication files used in security mode. Download the authentication credentials of the user principal from the FusionInsight Manager page. The user in the sample code is sparkuser; change it to the prepared development user name.
Packaging the Project
- Upload the user.keytab and krb5.conf files to the server where the client is located.
- Use the Maven tool provided by IDEA to package the project and generate the JAR file. For details, see Commissioning a Spark Application in a Linux Environment.
Before compilation and packaging, change the paths of the user.keytab and krb5.conf files in the sample code to the actual paths on the client server, for example, /opt/female/user.keytab and /opt/female/krb5.conf (see the sketch after this list).
- Upload the JAR package to any directory (for example, /opt) on the server where the Spark client is located.
- Prepare dependency packages and upload the following JAR packages to the $SPARK_HOME/jars/streamingClient010 directory on the server where the Spark client is located.
- spark-streaming-kafkaWriter-0-10_2.12-3.1.1-hw-ei-311001.jar
- kafka-clients-xxx.jar
- kafka_2.12-xxx.jar
- spark-sql-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- spark-streaming-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- spark-token-provider-kafka-0-10_2.12-3.1.1-hw-ei-311001-SNAPSHOT.jar
- Download the dependency package whose version number contains hw-ei from Huawei Mirrors.
- Download the dependency package whose version number does not contain hw-ei from the Maven central repository.
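As an illustration of the path change described above, the authentication file paths typically appear in the sample code as plain string constants. The snippet below is a hypothetical sketch; the actual field names and structure of the Huawei sample code may differ.

public class SecurityConfigSketch {
    // Change these values to the actual paths on the client server before compiling and packaging.
    static final String USER_KEYTAB_PATH = "/opt/female/user.keytab";
    static final String KRB5_CONF_PATH = "/opt/female/krb5.conf";
    static final String PRINCIPAL = "sparkuser"; // the prepared development user
}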
Running the Task
When running the sample project, you need to specify <checkpointDir>, <brokers>, <topic>, and <batchTime>. <checkpointDir> indicates the HDFS path for backing up program results (the checkpoint directory). <brokers> indicates the Kafka address for obtaining metadata. <topic> indicates the name of the topic read from Kafka. <batchTime> indicates the batch interval for Spark Streaming processing.
On the client, the Kafka dependency package of Spark Streaming is stored in a different path from the other dependency packages: the other packages are in $SPARK_HOME/jars, while the Kafka package is in $SPARK_HOME/jars/streamingClient010. Therefore, when running the application, add an option to the spark-submit command to specify the path of the Spark Streaming Kafka dependency packages, for example, --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}").
- Add configuration items to $SPARK_HOME/conf/jaas.conf:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=false
useTicketCache=true
debug=false;
};
- Add the following configuration to the $SPARK_HOME/conf/jaas-zk.conf file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="./user.keytab"
principal="sparkuser@<System domain name>"
useTicketCache=false
storeKey=true
debug=true;
};
- Use --files and a relative path to submit the keytab file so that the keytab file is loaded into the container of the executor.
- For the port number in <brokers>, use the SASL_PLAINTEXT port for the Spark Streaming read Kafka 0-10 Write To Print example, and use the PLAINTEXT port for the Spark Streaming Write To Kafka 0-10 example.
- Sample code (Spark Streaming read Kafka 0-10 Write To Print)
bin/spark-submit --master yarn --deploy-mode client --files ./jaas.conf,./user.keytab --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}") --class com.huawei.bigdata.spark.examples.SecurityKafkaWordCount /opt/SparkStreamingKafka010JavaExample-1.0.jar <checkpointDir> <brokers> <topic> <batchTime>
The configuration example is as follows:
--files ./jaas.conf,./user.keytab //Use --files to specify the jaas.conf and keytab files.
- Sample code (Spark Streaming Write To Kafka 0-10)
bin/spark-submit --master yarn --deploy-mode client --jars $(files=($SPARK_HOME/jars/streamingClient010/*.jar); IFS=,; echo "${files[*]}") --class com.huawei.bigdata.spark.examples.JavaDstreamKafkaWriter /opt/SparkStreamingKafka010JavaExample-1.0.jar <groupId> <brokers> <topics>
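For reference, the following is a minimal sketch of writing a DStream to Kafka with the standard producer API inside foreachRDD/foreachPartition. It is not the implementation of com.huawei.bigdata.spark.examples.JavaDstreamKafkaWriter (which relies on the spark-streaming-kafkaWriter dependency); the class name, source DStream, and producer settings are assumptions, and a security-mode cluster additionally requires the SASL/Kerberos producer properties.

import java.util.Arrays;
import java.util.LinkedList;
import java.util.Properties;
import java.util.Queue;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DstreamToKafkaSketch {
    public static void main(String[] args) throws Exception {
        String brokers = args[0]; // <brokers>
        String topic = args[1];   // <topics>

        SparkConf conf = new SparkConf().setAppName("DstreamToKafkaSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        // Build a simple source DStream from an in-memory queue of RDDs (illustrative only).
        Queue<JavaRDD<String>> queue = new LinkedList<>();
        queue.add(jssc.sparkContext().parallelize(
                Arrays.asList("LiuYang", "YuanJing", "GuoYijun", "CaiXuyu", "Liyuan", "FangBo")));
        JavaDStream<String> lines = jssc.queueStream(queue);

        // Write each partition to Kafka; one producer is created per partition on the executor.
        lines.foreachRDD(rdd -> rdd.foreachPartition(partition -> {
            Properties props = new Properties();
            props.put("bootstrap.servers", brokers);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // In security mode, SASL/Kerberos properties are also required here.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                while (partition.hasNext()) {
                    producer.send(new ProducerRecord<>(topic, partition.next()));
                }
            }
        }));

        jssc.start();
        jssc.awaitTermination();
    }
}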