Step 2: Develop Data

Updated on 2024-11-12 GMT+08:00

This step describes how to use the BI report data to analyze the 10 products users like most and the 10 products users dislike most. The analysis jobs are executed periodically, and the results are exported to tables every day for data analysis.

Analyze 10 Products Users Like Most

  1. On the DataArts Studio console, locate a workspace and click DataArts Factory.
  2. Create a DLI SQL script for creating the data tables by entering DLI SQL statements in the editor. (A sketch of what the DDL might look like is provided after this procedure.)

    Figure 1 Creating a script

  3. In the SQL editor, enter the following SQL statements and click Execute to calculate the 10 products users like most from the original data table in the OBS bucket and save the result to the top_like_product table.

    INSERT OVERWRITE TABLE top_like_product
    SELECT
      product.brand AS brand,
      COUNT(product.brand) AS like_count
    FROM
      action
      JOIN product ON (action.product_id = product.product_id)
    WHERE
      action.type = 'like'
    GROUP BY
      brand
    ORDER BY
      like_count DESC
    LIMIT
      10
    Figure 2 Script for analyzing the 10 products users like most

    The key parameters are as follows:
    • Data Connection: DLI data connection created in Step 4
    • Database: database created in Step 6
    • Resource Queue: The default resource queue default can be used.
      NOTE:
      • The Spark component version of the default DLI queue may be outdated, and an error may be reported indicating that a table creation statement cannot be executed. In this case, create your own queue to run the tasks. To enable table creation statements in the default queue, contact DLI customer service or technical support.
      • The default DLI queue (default) is intended for trial use only and may be shared by multiple users at the same time, so the required resources may be unavailable. If execution takes a long time or fails, retry during off-peak hours or run the job on a queue you have created.

  4. After debugging the script, click Save to save the script and name it top_like_product. Click Submit to submit the script version. This script will be referenced later in Developing and Scheduling a Job.
  5. After the script is saved and executed successfully, you can use the following SQL statement to view data in the top_like_product table. You can also download or dump the table data by referring to Figure 3.

    SELECT * FROM top_like_product
    Figure 3 Viewing the data in the top_like_product table
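
The walkthrough does not show the DDL for the result table. As a rough sketch only, assuming the column names and types implied by the analysis statement above (the actual DDL depends on the database you created in Step 6):

    CREATE TABLE IF NOT EXISTS top_like_product (
      brand STRING,       -- product brand, taken from product.brand
      like_count BIGINT   -- number of 'like' actions, from COUNT(product.brand)
    );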

Analyze 10 Products Users Dislike Most

  1. On the DataArts Studio console, locate a workspace and click DataArts Factory.
  2. Create a DLI SQL script for creating the data tables by entering DLI SQL statements in the editor. (A sketch of what the DDL might look like is provided after this procedure.)

    Figure 4 Creating a script

  3. In the SQL editor, enter the following SQL statements and click Execute to calculate the 10 products users dislike most from the original data table in the OBS bucket and save the result to the top_bad_comment_product table.

    INSERT OVERWRITE TABLE top_bad_comment_product
    SELECT
      DISTINCT product_id,
      comment_num,
      bad_comment_rate
    FROM
      comment
    WHERE
      comment_num > 3
    ORDER BY
      bad_comment_rate DESC
    LIMIT
      10
    Figure 5 Script for analyzing the 10 products users dislike most
    The key parameters are as follows:
    • Data Connection: DLI data connection created in Step 4
    • Database: database created in Step 6
    • Resource Queue: The default resource queue default can be used.
      NOTE:
      • The Spark component version of the default DLI queue may be outdated, and an error may be reported indicating that a table creation statement cannot be executed. In this case, create your own queue to run the tasks. To enable table creation statements in the default queue, contact DLI customer service or technical support.
      • The default DLI queue (default) is intended for trial use only and may be shared by multiple users at the same time, so the required resources may be unavailable. If execution takes a long time or fails, retry during off-peak hours or run the job on a queue you have created.

  4. After debugging the script, click Save to save the script and name it top_bad_comment_product. Click Submit to submit the script version. This script will be referenced later in Developing and Scheduling a Job.
  5. After the script is saved and executed successfully, you can use the following SQL statement to view data in the top_bad_comment_product table. You can also download or dump the table data by referring to Figure 6.

    SELECT * FROM top_bad_comment_product
    Figure 6 Viewing the data in the top_bad_comment_product table
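
As before, the DDL for the result table is not shown. A sketch under the same assumptions (column names and types inferred from the analysis statement; adjust them to your actual schema):

    CREATE TABLE IF NOT EXISTS top_bad_comment_product (
      product_id STRING,        -- product identifier, from comment.product_id
      comment_num BIGINT,       -- number of comments on the product
      bad_comment_rate DOUBLE   -- proportion of negative comments
    );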

Developing and Scheduling a Job

Assume that the BI reports in the OBS bucket change every day. To update the analysis results daily, use the job orchestration and scheduling functions of DataArts Factory.

  1. On the DataArts Studio console, locate a workspace and click DataArts Factory.
  2. Create a batch processing job named BI_analysis.

    Figure 7 Creating a job
    Figure 8 Configuring the job

  3. Open the created job, drag two Dummy nodes and two DLI SQL nodes to the canvas, connect the nodes, and orchestrate the job as shown in Figure 9.

    Figure 9 Connecting nodes and configuring node properties

    Key nodes:

    • Begin (Dummy node): serves only as a start identifier.
    • top_like_product (DLI SQL node): In Node Properties, associate it with the DLI SQL script top_like_product developed in Analyze 10 Products Users Like Most.
    • top_bad_comment_product (DLI SQL node): In Node Properties, associate it with the DLI SQL script top_bad_comment_product developed in Analyze 10 Products Users Dislike Most.
    • Finish (Dummy node): serves only as an end identifier.

  4. Click the test button to test the job.
  5. If the job runs properly, click Scheduling Setup in the right pane and configure the scheduling policy for the job.

    Figure 10 Configuring scheduling

    Note:

    • Scheduling Type: Select Run periodically.
    • Scheduling Properties: The job is executed at 01:00 every day from Feb 09 to Feb 28, 2022.
    • Dependency Properties: You can configure a dependency job for this job. You do not need to configure it in this practice.
    • Cross-Cycle Dependency: Select Independent on the previous schedule cycle.

  6. Click Save, Submit, and Execute. The job will then be executed automatically every day, and the BI report analysis results will be saved to the top_like_product and top_bad_comment_product tables, respectively.
  7. If you want to check the job execution result, choose Monitoring > Monitor Instance in the left navigation pane.

    Figure 11 Viewing the job execution status

You can also configure notifications to be sent through SMS messages or emails when a job encounters exceptions or fails.

Now you have learned the data development process based on e-commerce BI reports. You can extend this analysis to the age distribution and gender ratio of users, as well as their browsing, purchase, and review behavior, to provide valuable input for marketing decision-making, advertising, credit rating, brand monitoring, and user behavior prediction.
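
As one illustration of such an extension, the following sketch computes an age-band and gender distribution. It assumes a hypothetical user_info table with age and gender columns, which is not defined in this walkthrough; adapt the table and column names to your own schema:

    SELECT
      CASE
        WHEN age < 20 THEN 'under 20'
        WHEN age < 40 THEN '20-39'
        ELSE '40 and above'
      END AS age_band,
      gender,
      COUNT(*) AS user_count
    FROM
      user_info
    GROUP BY
      CASE
        WHEN age < 20 THEN 'under 20'
        WHEN age < 40 THEN '20-39'
        ELSE '40 and above'
      END,
      gender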
