Updated on 2024-09-10 GMT+08:00

Using Spark2x to Analyze IoV Drivers' Driving Behavior

Application Scenarios

The best practices for Huawei Cloud MapReduce Service (MRS) guide you through the basic functions of MRS. This case shows you how to use the Spark2x component of MRS to analyze and collect statistics on driver behavior and obtain the analysis results.

The raw data in this practice includes information on driver behavior, such as sudden acceleration, sudden braking, neutral coasting, speeding, and fatigue driving. With the Spark2x component, you can analyze and collect statistics on the frequency of these behaviors within a specified time frame.

This practice uses MRS 3.1.0 as an example. You can create a cluster of this version to follow along.
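To make the goal concrete, the kind of aggregation this practice performs (counting each driver's behavior events within a time frame) can be sketched in plain Python. This is only an illustration, not the actual Spark job; the record layout and behavior names below are assumptions, since the sample data defines its own schema.

```python
from collections import Counter
from datetime import datetime

# Illustrative record layout only: (driver_id, behavior, timestamp).
# The real sample data files define their own schema.
records = [
    ("D001", "sudden_acceleration", "2021-01-01 08:15:00"),
    ("D001", "sudden_braking",      "2021-01-01 08:20:00"),
    ("D002", "speeding",            "2021-01-01 09:05:00"),
    ("D001", "sudden_acceleration", "2021-01-01 21:30:00"),
]

def behavior_counts(records, start, end):
    """Count occurrences of each (driver, behavior) pair in [start, end)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    s, e = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    counts = Counter()
    for driver, behavior, ts in records:
        if s <= datetime.strptime(ts, fmt) < e:
            counts[(driver, behavior)] += 1
    return counts

# Only the three morning records fall inside this window.
counts = behavior_counts(records, "2021-01-01 00:00:00", "2021-01-01 12:00:00")
```

The Spark2x job performs the same style of grouping and counting, but distributed across the cluster over the full sample data set.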

Solution Architecture

Figure 1 shows the running architecture of a Spark application.

  1. An application runs in the cluster as a collection of processes, and the Driver coordinates its running.
  2. To run an application, the Driver connects to the cluster manager (such as Standalone, Mesos, or YARN) to apply for executor resources and start ExecutorBackend. The cluster manager schedules resources between applications, while the Driver schedules the DAG, divides it into stages, and generates tasks for the application.
  3. Spark then sends the application code (the code passed to SparkContext, defined in a JAR or Python file) to the executors.
  4. After all tasks are finished, the user application stops.
Figure 1 Spark application running architecture
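The driver/executor flow above can be loosely sketched in plain Python. This is an analogy using a thread pool, not Spark's actual scheduler: the "driver" divides the work into tasks, the "executors" run them in parallel, and the driver aggregates the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(data, n_partitions=4):
    # "Driver" divides the work into tasks (one per partition).
    chunk = max(1, len(data) // n_partitions)
    tasks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # A pool of "executors" runs the tasks in parallel.
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        partials = list(pool.map(sum, tasks))
    # The driver collects and aggregates the partial results.
    return sum(partials)

total = run_job(list(range(100)))
```

In real Spark the cluster manager allocates the executor processes on worker nodes and the driver ships the application code to them, but the divide-run-aggregate shape is the same.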

Procedure

The operation process of this practice is as follows:

  1. Creating an MRS Cluster: Create an MRS 3.1.0 analysis cluster with Kerberos authentication disabled.
  2. Preparing the Sample Program and Data: Create an OBS parallel file system and upload the Spark2x sample program and sample data files to the OBS parallel file system.
  3. Creating a Job: Create and run a SparkSubmit job on the MRS management console.
  4. Viewing the Execution Results: Obtain the log file from the OBS path and view the execution result.

Creating an MRS Cluster

  1. Go to the Buy Cluster page.
  2. Click the Custom Config tab.

    Configure cluster software information according to Table 1.
    Table 1 Software configurations

    | Parameter | Description | Example Value |
    |---|---|---|
    | Region | Region where the MRS resources belong. MRS clusters in different regions cannot communicate with each other over an intranet. For lower network latency and quick resource access, select the region nearest to you. NOTE: This document uses CN-Hong Kong as an example. If you want to perform operations in other regions, ensure that all operations are performed in the same region. | CN-Hong Kong |
    | Billing Mode | Billing mode of the cluster. | Pay-per-use |
    | Cluster Name | Name of the MRS cluster. | mrs_demo |
    | Cluster Type | Type of the MRS cluster. | Analysis cluster (for offline data analysis) |
    | Version Type | Version type of the MRS cluster. | Normal |
    | Cluster Version | MRS cluster version. NOTE: This practice is available for MRS 3.1.0 only. | MRS 3.1.0 |
    | Component | Components in the MRS cluster. | All components |
    | Metadata | Storage for cluster metadata. | Local |

    Figure 2 Software configurations

  3. Click Next to configure hardware.

    Configure cluster hardware information according to Table 2.
    Table 2 Hardware configurations

    | Parameter | Description | Example Value |
    |---|---|---|
    | AZ | AZ associated with the cluster region. | AZ2 |
    | Enterprise Project | Enterprise project to which the cluster belongs. | default |
    | VPC | VPC where you want to create the cluster. You can click View VPC to view the name and ID. If no VPC is available, create one. | xxx |
    | Subnet | Subnet where your cluster belongs. You can access the VPC management console to view the names and IDs of existing subnets in the VPC. If no subnet has been created in the VPC, click Create Subnet to create one. | xxx |
    | Security Group | A security group is a set of ECS access rules. It provides access policies for ECSs that have the same security protection requirements and are mutually trusted in a VPC. | Auto create |
    | EIP | An EIP allows you to access the Manager web UI of the cluster. | Bind an EIP. |
    | Cluster Node | Cluster node details. | Default settings |

    Figure 3 Hardware configurations

  4. Click Next. On the Set Advanced Options page, set the following parameters by referring to Table 3 and retain the default settings for other parameters.

    Table 3 Advanced configurations

    | Parameter | Description | Example Value |
    |---|---|---|
    | Kerberos Authentication | Whether to enable Kerberos authentication when logging in to Manager. | Disabled |
    | Username | Name of the MRS Manager administrator. admin is used by default. | admin |
    | Password | Password of the MRS Manager administrator. | xxx |
    | Confirm Password | Enter the password of the Manager administrator again. | xxx |
    | Login Mode | Method for logging in to ECS nodes in the cluster. | Select Password. |
    | Username | User for logging in to the ECSs. The default value is root. | root |
    | Password | Password for logging in to ECSs. | xxx |
    | Confirm Password | Enter the password for logging in to ECSs again. | xxx |

  5. Click Next. On the Confirm Configuration page, check the cluster configuration information. If you need to adjust the configuration, go back to the corresponding tab page and configure the parameters again.
  6. Select Secure Communications and click Buy Now.
  7. Click Back to Cluster List to view the cluster status.

    Cluster creation takes some time. The initial status of the cluster is Starting. After the cluster has been created successfully, the cluster status becomes Running.

Preparing the Sample Program and Data

  1. Create an OBS parallel file system to store the Spark sample program, sample data, job execution results, and logs.

    1. Log in to the HUAWEI CLOUD management console.
    2. In the Service List, choose Storage > Object Storage Service.
    3. In the navigation pane on the left, choose Parallel File System and click Create Parallel File System to create a file system named obs-demo-analysis-hwt4. Retain the default values for parameters such as Policy.

  2. Click the name of the file system. In the navigation pane on the left, choose Files. On the displayed page, click Create Folder to create the program and input folders, as shown in Figure 4.

    Figure 4 Creating a folder

  3. Download the sample program driver_behavior.jar from https://mrs-obs-ap-southeast-1.obs.ap-southeast-1.myhuaweicloud.com/mrs-demon-samples/demon/driver_behavior.jar to the local PC.
  4. Go to the program folder. Click Upload File and select the local driver_behavior.jar sample program.
  5. Click Upload to upload the sample program to the OBS parallel file system.
  6. Obtain Spark sample data from https://mrs-obs-ap-southeast-1.obs.ap-southeast-1.myhuaweicloud.com/mrs-demon-samples/demon/detail-records.zip.
  7. Decompress the downloaded detail-records.zip package to obtain the sample data files.

    Figure 5 Sample data

  8. Go to the input folder. Click Upload File and select the local Spark sample data.
  9. Click Upload to upload the sample data to the OBS parallel file system.

    Upload the data files decompressed in step 7 to the input folder.

    Figure 6 Uploading sample data

Creating a Job

  1. Log in to the MRS console and click the mrs_demo cluster on the Active Clusters page.
  2. Click the Jobs tab and then Create to create a job.
  3. Set job parameters.

    Table 4 Configuring job parameters

    | Parameter | Description | Example Value |
    |---|---|---|
    | Type | Type of the job you want to create. | Select SparkSubmit. |
    | Name | Job name. | Enter driver_behavior_task. |
    | Program Path | Path for storing the program package to be executed. | Click OBS and select the driver_behavior.jar package uploaded in Preparing the Sample Program and Data. |
    | Program Parameter | Optimization parameters for resource usage and job execution performance. | Select --class in Parameter, and enter com.huawei.bigdata.spark.examples.DriverBehavior in Value. |
    | Parameters | Program execution parameters, entered in the order: AK for accessing OBS, SK for accessing OBS, 1, input path, output path. See the notes below. | AK information SK information 1 obs://obs-demo-analysis-hwt4/input/ obs://obs-demo-analysis-hwt4/output/ |
    | Service Parameter | Service parameter modifications of the job to be executed. | This parameter is left blank by default. Retain the default settings. |

    Notes on the Parameters field:

    • For details about how to obtain the AK/SK, see the NOTE below.
    • 1 is a fixed input that is used to specify the program function invoked during job execution.
    • The input path is the OBS path where the sample data was uploaded, for example, obs://obs-demo-analysis-hwt4/input/.
    • The output path must be a directory that does not exist, for example, obs://obs-demo-analysis-hwt4/output/.

    NOTE:

    To obtain the AK/SK for accessing OBS, perform the following steps:

    1. Log in to the HUAWEI CLOUD management console.
    2. Click the username in the upper right corner and choose My Credentials.
    3. In the navigation pane on the left, choose Access Keys.
    4. Click Create Access Key to add a key. Enter the password and verification code as prompted. The browser automatically downloads the credentials.csv file. In this comma-separated file, the middle part is the AK and the last part is the SK.

    Figure 7 Creating a job

  4. Click OK to start executing the program.
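The Parameters value can also be assembled programmatically from the downloaded credentials.csv. The sketch below is hedged: the CSV layout (a header row plus one data row whose middle field is the AK and last field is the SK) follows the NOTE above but should be verified against your own file, and the helper names are illustrative.

```python
import csv
import io

# Illustrative credentials.csv content; verify the layout of your own file.
sample_csv = "User Name,Access Key Id,Secret Access Key\nmyuser,MYACCESSKEYID,MYSECRETKEY\n"

def read_ak_sk(text):
    # Skip the header row; in the data row the middle field is the AK
    # and the last field is the SK.
    header, row = list(csv.reader(io.StringIO(text)))[:2]
    return row[-2], row[-1]

def build_parameters(ak, sk, input_path, output_path):
    # "1" is the fixed function id this practice always passes.
    return " ".join([ak, sk, "1", input_path, output_path])

ak, sk = read_ak_sk(sample_csv)
params = build_parameters(ak, sk,
                          "obs://obs-demo-analysis-hwt4/input/",
                          "obs://obs-demo-analysis-hwt4/output/")
```

The resulting string matches the order shown in the example value: AK, SK, 1, input path, output path. Remember that the output directory must not already exist when the job runs.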

Viewing the Execution Results

  1. Go to the Jobs page to view the job execution status.

    Figure 8 Execution status

  2. Wait 1 to 2 minutes, and then log in to the OBS console. Go to the output path of the obs-demo-analysis-hwt4 file system to view the execution result. Click Download in the Operation column of the generated CSV file to download the file to your local PC.
  3. Open the downloaded CSV file in Excel and split the data into columns according to the fields defined in the program to obtain the job execution results.

    Figure 9 Execution result
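Besides Excel, the downloaded result file can be split into columns with Python's csv module. The record layout below is an assumption for illustration only; the actual columns come from the fields defined in the sample program.

```python
import csv
import io

# Illustrative output content; the real schema is defined by the sample program.
sample_output = "D001,sudden_acceleration,3\nD002,speeding,1\n"

# csv.reader splits each line into its fields, handling quoting correctly.
rows = list(csv.reader(io.StringIO(sample_output)))
for driver, behavior, count in rows:
    print(driver, behavior, count)
```

For a real download, replace the in-memory sample with `open("result.csv", newline="")` and iterate the reader the same way.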