
Java

To avoid API compatibility or reliability problems, you are advised to use open source APIs of the corresponding version.

Common Spark Core APIs

Spark mainly uses the following classes (a short usage sketch follows the list):

  • JavaSparkContext: external API of Spark, which provides Spark functionality to the Java applications that invoke it, for example, connecting to a Spark cluster and creating RDDs, accumulators, and broadcast variables. It acts as a container for these functions.
  • SparkConf: Spark application configuration class, which is used to configure the application name, execution model, executor memory, and so on.
  • JavaRDD: class for defining a JavaRDD in Java applications, similar in function to the RDD class of Scala.
  • JavaPairRDD: JavaRDD class in the key-value format. This class provides methods such as groupByKey and reduceByKey.
  • Broadcast: broadcast variable class. It holds one read-only variable and caches it on each machine, instead of shipping a copy with every task.
  • StorageLevel: data storage levels, including memory (MEMORY_ONLY), disk (DISK_ONLY), and memory plus disk (MEMORY_AND_DISK).
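
As a quick orientation, the following minimal sketch wires these classes together; the application name, sample data, and threshold value are illustrative, and Java 8 lambdas are assumed:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;
import org.apache.spark.storage.StorageLevel;

public class CoreClassesSketch {
  public static void main(String[] args) {
    // SparkConf configures the application; JavaSparkContext is the entry point.
    SparkConf conf = new SparkConf().setAppName("CoreClassesSketch");
    JavaSparkContext jsc = new JavaSparkContext(conf);

    // Create a JavaRDD and cache it at an explicit storage level.
    JavaRDD<Integer> numbers = jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
    numbers.persist(StorageLevel.MEMORY_AND_DISK());

    // A broadcast variable is cached once per machine rather than copied per task.
    Broadcast<Integer> threshold = jsc.broadcast(3);
    long aboveThreshold = numbers.filter(n -> n > threshold.value()).count();
    System.out.println("Elements above threshold: " + aboveThreshold);

    jsc.stop();
  }
}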

JavaRDD supports two types of operations: Transformation and Action. Table 1 and Table 2 describe their common methods.

Table 1 Transformation

  • <R> JavaRDD<R> map(Function<T,R> f): Applies the function to each element of the RDD and returns a new RDD of the results.
  • JavaRDD<T> filter(Function<T,Boolean> f): Applies the function to each element of the RDD and returns a new RDD containing only the elements for which the function returns true.
  • <U> JavaRDD<U> flatMap(FlatMapFunction<T,U> f): Applies the function to each element of the RDD and then flattens the results.
  • JavaRDD<T> sample(boolean withReplacement, double fraction, long seed): Samples the RDD.
  • JavaRDD<T> distinct(int numPartitions): Removes duplicate elements.
  • JavaPairRDD<K,Iterable<V>> groupByKey(int numPartitions): Returns (K, Iterable[V]) pairs, grouping the values that share a key into one collection.
  • JavaPairRDD<K,V> reduceByKey(Function2<V,V,V> func, int numPartitions): Merges the values of each key using the given function.
  • JavaPairRDD<K,V> sortByKey(boolean ascending, int numPartitions): Sorts data by key. If ascending is set to true, keys are sorted in ascending order.
  • JavaPairRDD<K,scala.Tuple2<V,W>> join(JavaPairRDD<K,W> other): Joins a (K,V) dataset with a (K,W) dataset and returns a dataset of (K,(V,W)) pairs.
  • JavaPairRDD<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairRDD<K,W> other, int numPartitions): Groups a (K,V) dataset with a (K,W) dataset and returns a dataset of (K,(Iterable[V],Iterable[W])) pairs. numPartitions indicates the number of partitions and therefore the number of concurrent tasks.
  • JavaPairRDD<T,U> cartesian(JavaRDDLike<U,?> other): Returns the Cartesian product of this RDD and another RDD.
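
The following illustrative sketch (the word list, application name, and partition counts are arbitrary) chains several of these transformations; note that in the Java API, mapToPair rather than map is used to produce a JavaPairRDD:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class TransformationSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("TransformationSketch"));

    JavaRDD<String> words =
        jsc.parallelize(Arrays.asList("spark", "core", "spark", "rdd"));

    // filter keeps only the elements for which the function returns true.
    JavaRDD<String> longWords = words.filter(w -> w.length() > 3);

    // mapToPair builds the JavaPairRDD needed for the key-value
    // transformations in Table 1 (groupByKey, reduceByKey, and so on).
    JavaPairRDD<String, Integer> counts = longWords
        .mapToPair(w -> new Tuple2<>(w, 1))
        .reduceByKey((a, b) -> a + b, 2);   // sum counts per word, 2 partitions

    // sortByKey(true, ...) sorts by key in ascending order.
    counts.sortByKey(true, 2).collect().forEach(System.out::println);

    jsc.stop();
  }
}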

Table 2 Action

  • T reduce(Function2<T,T,T> f): Aggregates the elements of the RDD using the given function.
  • java.util.List<T> collect(): Returns a list that contains all elements of the RDD.
  • long count(): Returns the number of elements in the dataset.
  • T first(): Returns the first element in the dataset.
  • java.util.List<T> take(int num): Returns the first num elements.
  • java.util.List<T> takeSample(boolean withReplacement, int num, long seed): Randomly samples the dataset and returns num elements. withReplacement indicates whether sampling is done with replacement.
  • void saveAsTextFile(String path, Class<? extends org.apache.hadoop.io.compress.CompressionCodec> codec): Writes the dataset to a text file in the local file system, HDFS, or any other file system supported by Hadoop. Spark converts each record to a line of text and writes it to the file.
  • java.util.Map<K,Object> countByKey(): Counts the number of times each key occurs.
  • void foreach(VoidFunction<T> f): Runs the function f on each element of the dataset.
  • java.util.Map<T,Long> countByValue(): Counts the number of times each element occurs in the RDD.
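
The following illustrative sketch (arbitrary input numbers and application name) exercises several of these actions; results are pulled back to the driver, so they should stay small:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ActionSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("ActionSketch"));

    JavaRDD<Integer> numbers = jsc.parallelize(Arrays.asList(3, 1, 4, 1, 5));

    // reduce aggregates all elements with an associative function.
    int sum = numbers.reduce((a, b) -> a + b);

    // collect and take return results to the driver as java.util.List.
    List<Integer> all = numbers.collect();
    List<Integer> firstTwo = numbers.take(2);

    System.out.println("sum=" + sum + ", count=" + numbers.count()
        + ", first=" + numbers.first() + ", all=" + all + ", take=" + firstTwo);

    // countByValue returns how often each element occurs.
    System.out.println(numbers.countByValue());

    jsc.stop();
  }
}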

Common Spark Streaming APIs

Spark Streaming mainly uses the following classes:

  • JavaStreamingContext: main entry point of Spark Streaming. It provides methods for creating DStreams. The batch interval must be set in the input parameters.
  • JavaDStream: represents a continuous sequence of RDDs, that is, a continuous data stream.
  • JavaPairDStream: API for DStreams of key-value pairs. It provides operations such as reduceByKey and join.
  • JavaReceiverInputDStream<T>: defines an input stream that receives data over the network.

The common methods of Spark Streaming are similar to those of Spark Core. Table 3 lists some Spark Streaming methods; a minimal application sketch follows the table.

Table 3 Spark Streaming methods

  • JavaReceiverInputDStream<java.lang.String> socketStream(java.lang.String hostname, int port): Creates an input stream that receives data through a TCP socket from the specified hostname and port. The received bytes are interpreted as UTF-8 text. The default storage level is memory plus disk.
  • JavaDStream<java.lang.String> textFileStream(java.lang.String directory): Creates an input stream that monitors a Hadoop-compatible file system for new files and reads them as text files. The directory parameter is an HDFS directory.
  • void start(): Starts the streaming computation.
  • void awaitTermination(): Blocks the current thread until the streaming computation terminates, whether it is stopped manually (for example, with Ctrl+C) or by an error.
  • void stop(): Stops the streaming computation.
  • <T> JavaDStream<T> transform(java.util.List<JavaDStream<?>> dstreams, Function2<java.util.List<JavaRDD<?>>,Time,JavaRDD<T>> transformFunc): Applies the given function to each batch of RDDs to obtain a new DStream. The order of the JavaRDDs in the list must match the order of the corresponding DStreams.
  • <T> JavaDStream<T> union(JavaDStream<T> first, java.util.List<JavaDStream<T>> rest): Creates a unified DStream from multiple DStreams that have the same type and slide duration.
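
The following minimal sketch shows the typical lifecycle: create the context with a batch interval, define the stream, then start and await termination. It uses socketTextStream, the text-line variant of socketStream; the hostname, port, and batch interval are illustrative:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingSketch {
  public static void main(String[] args) throws InterruptedException {
    // The batch interval (here 5 seconds) is set when creating the context.
    JavaStreamingContext jssc = new JavaStreamingContext(
        new SparkConf().setAppName("StreamingSketch"), Durations.seconds(5));

    // Receive UTF-8 text lines over a TCP socket (hostname/port are examples).
    JavaReceiverInputDStream<String> lines =
        jssc.socketTextStream("localhost", 9999);

    // A simple per-batch transformation; print() outputs to the console.
    JavaDStream<Long> lineLengths = lines.map(l -> (long) l.length());
    lineLengths.print();

    jssc.start();              // start the streaming computation
    jssc.awaitTermination();   // block until the computation is stopped
  }
}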

Table 4 Streaming enhanced feature APIs

  • JAVADStreamKafkaWriter.writeToKafka(): Writes data from a DStream to Kafka in batches.
  • JAVADStreamKafkaWriter.writeToKafkaBySingle(): Writes data from a DStream to Kafka one record at a time.

Common Spark SQL APIs

Spark SQL mainly uses the following classes (a short entry-point sketch follows the list):

  • SQLContext: main entry point of Spark SQL functions and the DataFrame API.
  • DataFrame: a distributed dataset organized into named columns.
  • DataFrameReader: API for loading a DataFrame from external storage systems.
  • DataFrameStatFunctions: provides statistics functions for DataFrames.
  • UserDefinedFunction: a user-defined function.
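
A minimal sketch of these entry points, assuming the Spark 1.x-style Java API that this section documents; the JSON path, the UDF name, and the data layout are illustrative:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

public class SqlEntrySketch {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("SqlEntrySketch"));
    SQLContext sqlContext = new SQLContext(jsc);   // entry point for Spark SQL

    // DataFrameReader (sqlContext.read()) loads a DataFrame from an
    // external source; the path is an example.
    DataFrame df = sqlContext.read().json("/tmp/people.json");

    // Register a user-defined function that can then be used in SQL.
    sqlContext.udf().register("strLen",
        (UDF1<String, Integer>) s -> s.length(), DataTypes.IntegerType);

    df.show();
    jsc.stop();
  }
}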

Table 5 lists common Action methods.

Table 5 Spark SQL methods

  • Row[] collect(): Returns an array that contains all rows of the DataFrame.
  • long count(): Returns the number of rows in the DataFrame.
  • DataFrame describe(java.lang.String... cols): Computes statistics for the specified columns, including the count, mean, standard deviation, minimum, and maximum.
  • Row first(): Returns the first row.
  • Row[] head(int n): Returns the first n rows.
  • void show(): Displays the first 20 rows of the DataFrame in tabular form.
  • Row[] take(int n): Returns the first n rows of the DataFrame.
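
An illustrative use of these actions; the JSON path and the "age" column are assumptions about the sample data:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

public class SqlActionSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("SqlActionSketch"));
    DataFrame df = new SQLContext(jsc).read().json("/tmp/people.json"); // example path

    long rows = df.count();        // number of rows
    Row first = df.first();        // first row
    Row[] top = df.head(5);        // first 5 rows
    df.describe("age").show();     // count, mean, stddev, min, max of "age"
    System.out.println("rows=" + rows + ", first=" + first
        + ", top=" + top.length);

    jsc.stop();
  }
}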

Table 6 Basic DataFrame functions

  • void explain(boolean extended): Prints the logical and physical plans of the SQL statement.
  • void printSchema(): Prints the schema to the console.
  • void registerTempTable(java.lang.String tableName): Registers the DataFrame as a temporary table whose lifetime is bound to the SQLContext.
  • DataFrame toDF(java.lang.String... colNames): Returns a DataFrame with the columns renamed.
  • DataFrame sort(java.lang.String sortCol, java.lang.String... sortCols): Sorts by the specified columns, in ascending or descending order.
  • GroupedData rollup(Column... cols): Creates a multi-dimensional rollup of the current DataFrame on the specified columns.
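
The following sketch combines several of these functions with a SQL query; the JSON path and the "age" and "name" columns are illustrative assumptions:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SqlBasicsSketch {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("SqlBasicsSketch"));
    SQLContext sqlContext = new SQLContext(jsc);
    DataFrame df = sqlContext.read().json("/tmp/people.json"); // example path

    df.printSchema();                          // print the schema to the console
    df.explain(true);                          // logical and physical plans
    DataFrame sorted = df.sort("age", "name"); // sort by columns (ascending)

    // Register as a temporary table; its lifetime is tied to this SQLContext.
    sorted.registerTempTable("people");
    sqlContext.sql("SELECT name FROM people WHERE age > 21").show();

    jsc.stop();
  }
}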