Spark REST APIs
Function Description
Spark REST APIs expose the metrics shown on the web UI in JSON format, giving users a simpler way to build display and monitoring tools and to query information about both running and completed applications. The open source Spark REST APIs allow users to query information about Jobs, Stages, Storage, Environment, and Executors. MRS adds REST APIs for querying information about SQL, JDBC/ODBC Server, and Streaming. For details about the open source REST APIs, visit https://spark.apache.org/docs/2.2.2/monitoring.html#rest-api.
Preparing an Operating Environment
Install a client in a directory on a node, for example, /opt/client.
- You have installed Spark on the server and confirmed that Spark is running properly.
- You have installed JDK 1.7 or 1.8 in the client operating environment.
- Obtain the MRS_Spark_Client.tar installation package, and run the following commands to decompress the package:
tar -xvf MRS_Spark_Client.tar
tar -xvf MRS_Spark_ClientConfig.tar
You are advised to install a client whose version matches the cluster version to avoid incompatibility.
- Go to the decompressed MRS_Spark_ClientConfig directory and run the following command to install the client:
sh install.sh /opt/client
In the preceding command, /opt/client is an example user-defined path.
- Go to /opt/client (the client installation directory) and run the following command to initialize environment variables:
source bigdata_env
REST APIs
You can run the following commands, which skip the REST API filter, to obtain application information:
- Obtaining information about all applications in JobHistory
- Command:
curl https://192.168.227.16:18080/api/v1/applications?mode=monitoring --insecure
192.168.227.16 is the service IP address of the JobHistory node, and 18080 is the port number of the JobHistory node.
- Command output:
[ { "id" : "application_1478570725074_0042", "name" : "Spark-JDBCServer", "attempts" : [ { "startTime" : "2016-11-09T16:57:15.237CST", "endTime" : "2016-11-09T17:01:22.573CST", "lastUpdated" : "2016-11-09T17:01:22.614CST", "duration" : 247336, "sparkUser" : "spark", "completed" : true } ] }, { "id" : "application_1478570725074_0047-part1", "name" : "SparkSQL::192.168.169.84", "attempts" : [ { "startTime" : "2016-11-10T11:57:36.626CST", "endTime" : "1969-12-31T07:59:59.999CST", "lastUpdated" : "2016-11-10T11:57:48.613CST", "duration" : 0, "sparkUser" : "admin", "completed" : false } ] }]
- Result analysis:
By running this command, you can query all Spark applications (including running applications and completed applications) in the current cluster. Table 1 provides information about the applications.
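As an illustrative sketch (not part of MRS), the JSON returned by the applications endpoint can be processed as follows. The sample body below is trimmed from the command output above; in a live setup you would fetch it over HTTPS instead, for example with an HTTP client configured to skip certificate verification, mirroring curl --insecure.

```python
import json

# Sample response trimmed from the JobHistory "applications" output above.
body = '''
[ { "id" : "application_1478570725074_0042", "name" : "Spark-JDBCServer",
    "attempts" : [ { "sparkUser" : "spark", "completed" : true } ] },
  { "id" : "application_1478570725074_0047-part1", "name" : "SparkSQL::192.168.169.84",
    "attempts" : [ { "sparkUser" : "admin", "completed" : false } ] } ]
'''

def split_by_state(apps):
    """Partition applications by the 'completed' flag of their latest attempt."""
    completed = [a["id"] for a in apps if a["attempts"][-1]["completed"]]
    running = [a["id"] for a in apps if not a["attempts"][-1]["completed"]]
    return completed, running

completed, running = split_by_state(json.loads(body))
print("completed:", completed)
print("running:", running)
```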
- Obtaining information about an application in JobHistory
- Command:
curl https://192.168.227.16:18080/api/v1/applications/application_1478570725074_0042?mode=monitoring --insecure
192.168.227.16 is the service IP address of the JobHistory node, and 18080 is the port number of the JobHistory node. application_1478570725074_0042 is the application ID.
- Command output:
{ "id" : "application_1478570725074_0042", "name" : "Spark-JDBCServer", "attempts" : [ { "startTime" : "2016-11-09T16:57:15.237CST", "endTime" : "2016-11-09T17:01:22.573CST", "lastUpdated" : "2016-11-09T17:01:22.614CST", "duration" : 247336, "sparkUser" : "spark", "completed" : true } ] }
- Result analysis:
By running this command, you can query information about a Spark application. Table 1 provides information about the application.
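For scripting, the request URL can be assembled from the pieces described above. A minimal sketch, using the illustrative host, port, and application ID from this page:

```python
def app_info_url(host, port, app_id):
    """Build the JobHistory REST URL for one application, mirroring the curl
    command above; the mode=monitoring query skips the REST API filter."""
    return f"https://{host}:{port}/api/v1/applications/{app_id}?mode=monitoring"

# Values from the example above; substitute your own JobHistory address.
url = app_info_url("192.168.227.16", 18080, "application_1478570725074_0042")
print(url)
```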
- Obtaining information about the executor of a running application
- Command for an alive executor:
curl https://192.168.169.84:26001/proxy/application_1478570725074_0046/api/v1/applications/application_1478570725074_0046/executors?mode=monitoring --insecure
- Command for all executors (both alive and dead):
curl https://192.168.169.84:26001/proxy/application_1478570725074_0046/api/v1/applications/application_1478570725074_0046/allexecutors?mode=monitoring --insecure
192.168.169.84 is the service IP address of the active ResourceManager node, 26001 is the port number of ResourceManager, and application_1478570725074_0046 is the application ID in Yarn.
- Command output:
[{ "id" : "driver", "hostPort" : "192.168.169.84:23886", "isActive" : true, "rddBlocks" : 0, "memoryUsed" : 0, "diskUsed" : 0, "activeTasks" : 0, "failedTasks" : 0, "completedTasks" : 0, "totalTasks" : 0, "totalDuration" : 0, "totalInputBytes" : 0, "totalShuffleRead" : 0, "totalShuffleWrite" : 0, "maxMemory" : 278019440, "executorLogs" : { } }, { "id" : "1", "hostPort" : "192.168.169.84:23902", "isActive" : true, "rddBlocks" : 0, "memoryUsed" : 0, "diskUsed" : 0, "activeTasks" : 0, "failedTasks" : 0, "completedTasks" : 0, "totalTasks" : 0, "totalDuration" : 0, "totalInputBytes" : 0, "totalShuffleRead" : 0, "totalShuffleWrite" : 0, "maxMemory" : 555755765, "executorLogs" : { "stdout" : "https://XTJ-224:26010/node/containerlogs/container_1478570725074_0049_01_000002/admin/stdout?start=-4096", "stderr" : "https://XTJ-224:26010/node/containerlogs/container_1478570725074_0049_01_000002/admin/stderr?start=-4096" } } ]
- Result analysis:
By running this command, you can query information about all executors (including drivers of the current application). Table 2 provides basic information about each executor.
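A sketch of summarizing the executor list, reusing a trimmed copy of the command output above. Note that the driver appears in the list alongside the ordinary executors, so it is filtered out before aggregating.

```python
import json

# Executor list trimmed from the command output above.
body = '''
[ { "id" : "driver", "isActive" : true, "maxMemory" : 278019440,
    "completedTasks" : 0, "failedTasks" : 0 },
  { "id" : "1", "isActive" : true, "maxMemory" : 555755765,
    "completedTasks" : 0, "failedTasks" : 0 } ]
'''

executors = json.loads(body)
# Exclude the driver when summing executor storage memory.
workers = [e for e in executors if e["id"] != "driver"]
alive_ids = [e["id"] for e in executors if e["isActive"]]
total_max_memory = sum(e["maxMemory"] for e in workers)
print("alive:", alive_ids)
print("executor storage memory (bytes):", total_max_memory)
```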
Enhanced REST APIs
- SQL commands: Obtain all SQL statements and the SQL statement with the longest execution time.
- SparkUI command:
curl https://192.168.195.232:26001/proxy/application_1476947670799_0053/api/v1/applications/Spark-JDBCServerapplication_1476947670799_0053/SQL?mode=monitoring --insecure
192.168.195.232 is the service IP address of the active ResourceManager node, and 26001 is the port number of ResourceManager. application_1476947670799_0053 is the application ID in Yarn, and Spark-JDBCServer is the Spark application name.
- JobHistory command:
curl https://192.168.227.16:22500/api/v1/applications/application_1478570725074_0004-part1/SQL?mode=monitoring --insecure
192.168.227.16 is the service IP address of the JobHistory node, and 22500 is the port number of the JobHistory node. application_1478570725074_0004-part1 is the application ID.
- Command output:
The query results of the SparkUI and JobHistory commands are as follows:
{ "longestDurationOfCompletedSQL" : [ { "id" : 0, "status" : "COMPLETED", "description" : "getCallSite at SQLExecution.scala:48", "submissionTime" : "2016/11/08 15:39:00", "duration" : "2 s", "runningJobs" : [ ], "successedJobs" : [ 0 ], "failedJobs" : [ ] } ], "sqls" : [ { "id" : 0, "status" : "COMPLETED", "description" : "getCallSite at SQLExecution.scala:48", "submissionTime" : "2016/11/08 15:39:00", "duration" : "2 s", "runningJobs" : [ ], "successedJobs" : [ 0 ], "failedJobs" : [ ] }] }
- Result analysis:
After running the commands, you can query information about all SQL statements of the current application (the sqls part of the result) and about the SQL statement with the longest execution time (the longestDurationOfCompletedSQL part of the result). Table 3 describes the information returned for each SQL statement.
Table 3 Basic SQL statement information
- id: ID of an SQL statement
- status: Execution status of an SQL statement; one of RUNNING, COMPLETED, or FAILED
- runningJobs: List of jobs generated by the SQL statement that are still running
- successedJobs: List of jobs generated by the SQL statement that completed successfully
- failedJobs: List of jobs generated by the SQL statement that failed
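A sketch of consuming the SQL endpoint's response, reusing a trimmed copy of the sample output above: it flags statements that produced failed jobs and reads out the slowest completed statement reported by the server.

```python
import json

# Trimmed copy of the SQL endpoint output above.
body = '''
{ "longestDurationOfCompletedSQL" : [ { "id" : 0, "status" : "COMPLETED",
    "duration" : "2 s", "failedJobs" : [ ] } ],
  "sqls" : [ { "id" : 0, "status" : "COMPLETED",
    "duration" : "2 s", "failedJobs" : [ ] } ] }
'''

result = json.loads(body)
# Statements with at least one failed job, and the slowest completed one.
with_failures = [s["id"] for s in result["sqls"] if s["failedJobs"]]
slowest = result["longestDurationOfCompletedSQL"][0]
print("statements with failed jobs:", with_failures)
print("slowest completed SQL:", slowest["id"], slowest["duration"])
```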
- JDBC/ODBC Server commands: Obtain the number of connections, the number of running SQL statements, as well as information about all sessions and all SQL statements.
- Command:
curl https://192.168.195.232:26001/proxy/application_1476947670799_0053/api/v1/applications/application_1476947670799_0053/sqlserver?mode=monitoring --insecure
192.168.195.232 is the service IP address of the active ResourceManager node, 26001 is the port number of ResourceManager, and application_1476947670799_0053 is the application ID in Yarn.
- Command output:
{ "sessionNum" : 1, "runningSqlNum" : 0, "sessions" : [ { "user" : "spark", "ip" : "192.168.169.84", "sessionId" : "9dfec575-48b4-4187-876a-71711d3d7a97", "startTime" : "2016/10/29 15:21:10", "finishTime" : "", "duration" : "1 minute 50 seconds", "totalExecute" : 1 } ], "sqls" : [ { "user" : "spark", "jobId" : [ ], "groupId" : "e49ff81a-230f-4892-a209-a48abea2d969", "startTime" : "2016/10/29 15:21:13", "finishTime" : "2016/10/29 15:21:14", "duration" : "555 ms", "statement" : "show tables", "state" : "FINISHED", "detail" : "== Parsed Logical Plan ==\nShowTablesCommand None\n\n== Analyzed Logical Plan ==\ntableName: string, isTemporary: boolean\nShowTablesCommand None\n\n== Cached Logical Plan ==\nShowTablesCommand None\n\n== Optimized Logical Plan ==\nShowTablesCommand None\n\n== Physical Plan ==\nExecutedCommand ShowTablesCommand None\n\nCode Generation: true" } ] }
- Result analysis:
After running this command, you can query the number of sessions, the number of running SQL statements, as well as information about all sessions and SQL statements of the current JDBC/ODBC application. Table 4 provides information about each session. Table 5 provides information about each SQL statement.
Table 4 Basic session information
- user: User connected to the session
- ip: IP address of the node where the session resides
- sessionId: Session ID
- startTime: Time when the session started the connection
- finishTime: Time when the session ended the connection
- duration: Session connection duration
- totalExecute: Number of SQL statements executed in the session
Table 5 Basic SQL information
- user: User who executed the SQL statement
- jobId: List of job IDs contained in the SQL statement
- groupId: ID of the group to which the SQL statement belongs
- startTime: SQL start time
- finishTime: SQL end time
- duration: SQL execution duration
- statement: The SQL statement text
- detail: Logical plan and physical plan
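A sketch of consuming the sqlserver endpoint's response, reusing a trimmed copy of the sample output above: it lists statements that are still executing and tallies statements per user.

```python
import json

# Trimmed copy of the sqlserver endpoint output above.
body = '''
{ "sessionNum" : 1, "runningSqlNum" : 0,
  "sessions" : [ { "user" : "spark", "totalExecute" : 1 } ],
  "sqls" : [ { "user" : "spark", "statement" : "show tables",
               "state" : "FINISHED", "duration" : "555 ms" } ] }
'''

info = json.loads(body)
# Statements still executing, and total statements per user.
unfinished = [s["statement"] for s in info["sqls"] if s["state"] != "FINISHED"]
per_user = {}
for s in info["sqls"]:
    per_user[s["user"]] = per_user.get(s["user"], 0) + 1
print("sessions:", info["sessionNum"], "running SQL:", info["runningSqlNum"])
print("unfinished:", unfinished)
print("statements per user:", per_user)
```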
- Streaming commands: Obtain the average input frequency, average scheduling delay, average execution duration, and average total delay.
- Command:
curl https://192.168.195.232:26001/proxy/application_1477722033672_0008/api/v1/applications/NetworkWordCountapplication_1477722033672_0008/streaming?mode=monitoring --insecure
192.168.195.232 is the service IP address of the active ResourceManager node, and 26001 is the port number of ResourceManager. application_1477722033672_0008 is the application ID in Yarn, and NetworkWordCount is the Spark application name.
- Command output:
{ "avgInputRate" : "0.00 events/sec", "avgSchedulingDelay" : "1 ms", "avgProcessingTime" : "72 ms", "avgTotalDelay" : "73 ms" }
- Result analysis:
After running this command, you can query the average input frequency, average scheduling delay, average execution duration, and average total delay of the current Streaming application.
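The streaming endpoint returns human-readable strings rather than raw numbers. A small helper (hypothetical, not part of the API) converts the millisecond values back to numbers for thresholding or alerting:

```python
# Sample response copied from the streaming command output above.
response = { "avgInputRate" : "0.00 events/sec", "avgSchedulingDelay" : "1 ms",
             "avgProcessingTime" : "72 ms", "avgTotalDelay" : "73 ms" }

def millis(value):
    """Parse a '<number> ms' string into a float number of milliseconds."""
    number, unit = value.split()
    assert unit == "ms"
    return float(number)

total_delay = millis(response["avgTotalDelay"])
print("average total delay (ms):", total_delay)
```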