MRS Java SDK Demo

MapReduce Service (MRS) provides users with a fully controllable, enterprise-class big data cloud service, ensuring the smooth running of big data components such as Hadoop, Spark, HBase, Kafka, and Storm.
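All of the snippets below assume an authenticated OpenStack4j client instance named osclient. A minimal sketch of obtaining one with Keystone v3 authentication follows; the endpoint, domain, project, and credential values are placeholders you must replace with your own:

```java
import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.openstack.OSFactory;

// Placeholder endpoint and credentials -- substitute your actual values
OSClientV3 osclient = OSFactory.builderV3()
        .endpoint("https://your-identity-endpoint/v3")
        .credentials("user", "password", Identifier.byName("your-domain"))
        .scopeToProject(Identifier.byName("your-project"),
                        Identifier.byName("your-domain"))
        .authenticate();
```

The returned client carries the authentication token and is the object on which the mrs() service calls below are made.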

Creating a Cluster and Submitting the Job

You can create a cluster and submit a job using OpenStack4j with the following code. After the cluster is created, it is displayed on the cluster page of the MRS console.

public void createClusterAndRunAJob() {
    // Component (e.g. Hadoop, Spark) to be installed in the cluster
    MapReduceComponent component = MapReduceComponent.builder()
            .id(component_id).name(component_name)
            .version(component_version).desc(component_desc)
            .build();
    // Cluster definition: node counts and sizes, network, storage, and components
    MapReduceClusterCreate cluster = MapReduceClusterCreate.builder()
            .dataCenter(data_center)
            .masterNodeNum(master_node_num).masterNodeSize(master_node_size)
            .coreNodeNum(core_node_num).coreNodeSize(core_node_size)
            .name(cluster_name).availablilityZoneId(available_zone_id)
            .vpcName(vpc).vpcId(vpc_id)
            .subnetName(subnet_name).subnetId(subnet_id)
            .version(cluster_version).type(cluster_type)
            .volumeSize(volume_size).volumeType(volume_type)
            .keypair(node_public_cert_name).safeMode(safe_mode)
            .components(Lists.newArrayList(component))
            .build();
    // Job to run once the cluster is up
    MapReduceJobExeCreate jobExe = MapReduceJobExeCreate.builder()
            .jobType(job_type).jobName(job_name)
            .jarPath(jar_path).arguments(arguments)
            .input(input).output(output).jobLog(job_log)
            .fileAction(file_action).hql(hql).hiveScriptPath(hive_script_path)
            .shutdownCluster(shutdown_cluster)
            .submitJobOnceClusterRun(submit_job_once_cluster_run)
            .build();
    // Create the cluster and submit the job in a single call
    MapReduceClusterCreateResult result = osclient.mrs().clusters().createAndRunJob(cluster, jobExe);
}

Querying Cluster Details

You can query the details of a cluster by specifying its ID, using OpenStack4j as follows:

public void describeCluster() {
    // Retrieve details of the cluster identified by "id"
    MapReduceClusterInfo cluster = osclient.mrs().clusters().get(id);
}

Terminating a Cluster

You can terminate a cluster by specifying its ID, using OpenStack4j as follows:

public void deleteCluster() {
    // Terminate the cluster identified by "id"
    ActionResponse delete = osclient.mrs().clusters().delete(id);
}

Adding and Executing a Job

You can add a job and execute it on an existing cluster using OpenStack4j with the following code. After the job is created, it is displayed on the job page of the MRS console.

public void submitAndExecuteJob() {
    // Job definition targeting an existing cluster (cluster_id)
    MapReduceJobExeCreate jobExeCreate = MapReduceJobExeCreate.builder()
            .jobType(job_type).jobName(job_name).clusterId(cluster_id)
            .jarPath(jar_path).arguments(arguments)
            .input(input).output(output).jobLog(job_log)
            .fileAction(file_action).hql(hql).hiveScriptPath(hive_script_path)
            .isProtected(is_protected).isPublic(is_public)
            .build();
    // Submit the job for execution
    MapReduceJobExe exe = osclient.mrs().jobExes().create(jobExeCreate);
}

Querying the Job Exe Object List

You can query the list of job exe objects, optionally filtered by cluster, state, and pagination options, using OpenStack4j as follows:

public void getJobExeList() {
    // Filter and paginate the job exe listing
    JobExeListOptions options = JobExeListOptions.create()
            .page(current_page).pageSize(page_size)
            .clusterId(cluster_id).state(state);
    List<? extends MapReduceJobExe> list = osclient.mrs().jobExes().list(options);
}

Querying Details of a Job Exe Object

You can query the details of a job exe object by specifying its ID, using OpenStack4j as follows:

public void getJobExe() {
    // Retrieve the job exe object identified by "id"
    MapReduceJobExe exe = osclient.mrs().jobExes().get(id);
}

Deleting a Job Execution Object

You can delete a job execution object by specifying its ID, using OpenStack4j as follows:

public void deleteJobExecution() {
    // Delete the job execution object identified by "id"
    ActionResponse delete = osclient.mrs().jobExecutions().delete(id);
}