Interconnecting Spark with OBS

Updated on 2024-11-29 GMT+08:00

Interconnecting with OBS

In an MRS cluster, you can interconnect Spark with OBS in two ways: set Location to an OBS file system path when creating a Spark table, or connect Spark to OBS through the Hive Metastore.

  • Setting the location to an OBS path during table creation:
    1. Log in to the node where the client is installed as the client installation user and access the spark-sql client.

      cd Client installation directory

      kinit Component operation user

      spark-sql --master yarn

    2. Set Location to the OBS file system path when creating a table.

      For example, to create a table named testspark whose Location is obs://obs-test/test/Database name/Table name, run the following command:

      create external table testspark(name string) location "obs://obs-test/test/Database name/Table name";
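
      After creating the table, you can optionally check its location in the same spark-sql session; the Location row of the output should show the OBS path specified above (testspark is the table created in the preceding example):

      desc formatted testspark;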

  • Interconnecting Spark with OBS through Hive Metastore:
    1. Complete the configurations by referring to Interconnecting Hive with OBS using MetaStore.
    2. Log in to FusionInsight Manager, choose Cluster > Services > Spark and choose Configurations > All Configurations.
    3. In the navigation pane on the left, choose SparkResource > Customization. In the custom configuration items, add the custom parameter spark.sql.warehouse.location.first and set its value to true.
      Figure 1 spark.sql.warehouse.location.first configuration
    4. In the navigation pane on the left, choose JDBCServer > Customization. In the custom configuration items, add the custom parameter spark.sql.warehouse.location.first and set its value to true.
      Figure 2 spark.sql.warehouse.location.first configuration
    5. Click Save to save the configuration. Click the Dashboard tab, choose More > Restart Service, enter the password, click OK, and then click OK again to restart Spark.
    6. After Spark is restarted, choose More > Download Client to download and install the Spark client again. Then, go to 7.
      If you do not download and install the client again, you can perform the following steps to update the Spark client configuration file (assume that the client directory is /opt/client):
      1. Log in to the node where the Spark client is deployed as user root and switch to the client installation directory.

        cd /opt/client

      2. Run the following command to modify hive-site.xml in the configuration file directory of the Spark client:

        vi Spark/spark/conf/hive-site.xml

        Change the value of hive.metastore.warehouse.dir to the corresponding OBS path, for example, obs://hivetest/user/hive/warehouse/.

        <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>obs://hivetest/user/hive/warehouse/</value>
        </property>
      3. Run the following command to modify the spark-defaults.conf file in the configuration file directory of the Spark client and set spark.sql.warehouse.location.first to true:

        vi Spark/spark/conf/spark-defaults.conf
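
        For example, the parameter can be set by adding a line similar to the following to spark-defaults.conf:

        spark.sql.warehouse.location.first = true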

    7. In clusters with Kerberos authentication enabled, configure the OBS directory permissions for the component operation user by referring to Configuring Ranger Permissions.
    8. Go to the SparkSQL CLI and spark-beeline, create a table, and check whether the location of the table is an OBS path.

      source bigdata_env

      kinit Service user (skip this step for normal clusters)

      • Go to the SparkSQL CLI.

        spark-sql

        create table d(a int);

        desc formatted d;

        In the command output, the Location of table d should be the specified OBS path.
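
        For illustration, assuming the warehouse directory configured earlier (obs://hivetest/user/hive/warehouse/) and the default database, the relevant row of the desc formatted output would look similar to the following:

        Location        obs://hivetest/user/hive/warehouse/d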

      • Go to spark-beeline.

        spark-beeline

        create table e(a int);

        desc formatted e;

        In the command output, the Location of table e should also be the specified OBS path.

Configuring Ranger Permissions

  1. Log in to FusionInsight Manager and choose System > Permission > User Group. On the displayed page, click Create User Group.
  2. Create a user group without a role, for example, obs_spark, and bind the user group to the corresponding user.
  3. Log in to the Ranger management page as the rangeradmin user.
  4. On the home page, click the component plug-in name OBS in the EXTERNAL AUTHORIZATION area.
  5. Click Add New Policy to add the Read and Write permissions on the OBS paths to the user group created in 2. If the OBS path does not exist, create it in advance (the wildcard character * is not allowed).

NOTE:
  • Cascading authorization is not supported for view tables.
  • Cascading authorization can be performed only on databases and tables, not on partitions. If a partition path is outside the table path, you need to authorize the partition path manually.
  • Cascading authorization for Deny Conditions in a Hive Ranger policy is not supported. That is, Deny Conditions restrict only the table permissions and do not generate corresponding permissions on the HDFS storage source.
  • An HDFS Ranger policy takes precedence over the permissions on the HDFS storage source generated by cascading authorization. If an HDFS Ranger permission has already been set for the HDFS storage source of a table, the cascading permission does not take effect.

Configuring Permissions for CDL Service Users

If Kerberos authentication is enabled for the cluster (the cluster is in security mode) and you need to store real-time data to OBS after the interconnection, perform the following operations to grant the Read and Write permissions on the corresponding OBS path to the CDL service user:

  1. Log in to FusionInsight Manager and choose System > Permission > User Group. On the displayed page, click Create User Group.
  2. Create a user group without a role, for example, obs_cdl, and bind the user group to the corresponding CDL service user, for example, cdluser.
  3. Log in to the Ranger management page as the rangeradmin user.
  4. On the home page, click the component plug-in name OBS in the EXTERNAL AUTHORIZATION area.
  5. Click Add New Policy to add the Read and Write permissions on the OBS paths to the user group created in 2. If the OBS path does not exist, create it in advance (the wildcard character * is not allowed).

    For example, to allow the CDL service user to write real-time data to obs://OBS parallel file system name/cdldata, add the Read and Write permissions on that path to the user group obs_cdl.
