Common Concepts
HBase Table
An HBase table is a three-dimensional map composed of one or more rows and columns of data; the third dimension is the timestamped version of each cell.
Column
A column is a dimension of an HBase table. A column name takes the form <family>:<label>, where <family> is the name of a column family and <label> (the qualifier) can be any combination of characters. An HBase table consists of a set of column families, and each column in the table belongs to one column family.
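A minimal sketch of reading and writing a single column, assuming the HBase 2.x client API and a hypothetical table user_table with a column family info; the column written here is info:name:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("user_table"))) { // hypothetical table
      // Write to column "info:name": family "info", label (qualifier) "name"
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
      table.put(put);

      // Read the same column back
      Get get = new Get(Bytes.toBytes("row1"));
      Result result = table.get(get);
      byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
      System.out.println(Bytes.toString(value));
    }
  }
}
```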
Column Family
A column family is a collection of columns defined in the HBase schema. To create columns, you must create their column family first. A column family groups data with the same properties: within a row, all data of the same column family is stored on the same server. Storage attributes such as compression, time to live (TTL), and data block caching are configured per column family.
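As a sketch of per-family attributes, assuming the HBase 2.x Admin API (the table and family names are hypothetical), the following creates a table whose only column family enables Snappy compression, block caching, and a one-day TTL:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("user_table"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setCompressionType(Compression.Algorithm.SNAPPY) // per-family compression
              .setBlockCacheEnabled(true)                       // per-family block caching
              .setTimeToLive(86400)                             // per-family TTL in seconds
              .build())
          .build());
    }
  }
}
```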
MemStore
MemStore is a core component of HBase storage. Written data is first recorded in the WAL and then placed in the MemStore, where it is kept sorted in memory. When the amount of data in the MemStore reaches the upper limit, it is flushed to disk as a StoreFile.
RegionServer
RegionServer is a service running on each DataNode in the HBase cluster. It is responsible for serving and managing regions, reporting region load information to the HMaster, and handling clients' read and write requests.
Timestamp
A timestamp is a 64-bit integer used to index different versions of the same data. A timestamp can be assigned automatically by HBase when data is written or assigned explicitly by the user.
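A minimal sketch of both behaviors, assuming the HBase 2.x client API and the same hypothetical user_table: the Put assigns a timestamp explicitly, and the Get reads back multiple timestamped versions:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimestampExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("user_table"))) { // hypothetical table
      // Assign the version timestamp explicitly instead of letting HBase pick one
      long ts = System.currentTimeMillis();
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), ts, Bytes.toBytes("Berlin"));
      table.put(put);

      // Read back up to three timestamped versions of the cell
      Get get = new Get(Bytes.toBytes("row1"));
      get.readVersions(3);
      Result result = table.get(get);
      result.listCells().forEach(c -> System.out.println(c.getTimestamp()));
    }
  }
}
```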
Store
A Store is a core unit of HBase storage. Each Store corresponds to one column family of a table in a region and hosts one MemStore and multiple StoreFiles.
Index
An index is a data structure that improves the efficiency of data retrieval in a database table. An index built on one or more columns of a table enables fast random retrieval of data and efficient access to ordered records.
Coprocessor
A coprocessor is an interface provided by HBase for running computation logic on RegionServers. Coprocessors are classified into system coprocessors, which are loaded for all tables on a RegionServer, and table coprocessors, which are loaded for a specified table.
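A minimal table-coprocessor sketch, assuming the HBase 2.x coprocessor interfaces (the class name and log output are illustrative); it observes each write before it is applied on the RegionServer:

```java
import java.io.IOException;
import java.util.Optional;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WALEdit;

// A table coprocessor: attach it to a specific table to run logic on the RegionServer.
public class PutAuditObserver implements RegionCoprocessor, RegionObserver {
  @Override
  public Optional<RegionObserver> getRegionObserver() {
    return Optional.of(this);
  }

  // Called on the RegionServer before each Put is applied to the region.
  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
                     WALEdit edit, Durability durability) throws IOException {
    System.out.println("About to write row: " + Bytes.toStringBinary(put.getRow()));
  }
}
```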
Block Pool
A block pool is a collection of blocks that belong to a single namespace. DataNodes store blocks from all block pools in a cluster. Each block pool is managed independently, which allows a namespace to generate an ID for a new block without relying on other namespaces. If one NameNode fails, the DataNodes can still provide services for the other NameNodes in the cluster.
DataNode
A DataNode is a worker node in the HDFS cluster. As instructed by clients or the NameNode, DataNodes store and retrieve data blocks and periodically report their block information to the NameNode.
File Block
A file block is the minimum logical unit of storage in HDFS. Each HDFS file is stored as one or more file blocks, and all file blocks are stored in DataNodes.
Block Replica
A replica is a copy of a block stored in HDFS. Each file block has multiple replicas to improve system availability and fault tolerance.
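The replication factor can be set per file through the Hadoop FileSystem API. A minimal sketch (the file path is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/data/example.txt"); // hypothetical file
      // Request three replicas for each of this file's blocks
      fs.setReplication(file, (short) 3);
      short replication = fs.getFileStatus(file).getReplication();
      System.out.println("replication factor: " + replication);
    }
  }
}
```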
NodeManager
NodeManager launches and monitors application containers, tracks their usage of resources (CPUs, memory, disks, and network), and reports the resource usage to the ResourceManager.
ResourceManager
ResourceManager schedules the resources required by applications. It provides a scheduling plug-in that allocates cluster resources among multiple queues and applications, scheduling either by capacity or by the fair scheduling model.
Kafka Partitions
Each topic can be divided into multiple partitions. Each partition corresponds to an append-only log file, and the order of messages within a partition is fixed.
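A minimal sketch of creating a partitioned topic, assuming the Kafka AdminClient API (the broker address, topic name, and counts are hypothetical):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
    try (AdminClient admin = AdminClient.create(props)) {
      // A topic with 3 partitions, each an append-only log replicated twice
      NewTopic topic = new NewTopic("orders", 3, (short) 2);
      admin.createTopics(Collections.singleton(topic)).all().get();
    }
  }
}
```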
Follower
A follower processes read requests and works with the leader to process write requests. It also serves as a backup for the leader: if the leader fails, a new leader is elected from the followers, preventing a single point of failure.
Observer
Observers do not take part in leader election or in voting on write requests. They only process read requests and forward write requests to the leader, improving processing efficiency.
DStream
DStream (discretized stream) is an abstraction provided by Spark Streaming. It is a continuous data stream obtained either directly from a data source or by transforming another input stream. In essence, a DStream is a continuous series of resilient distributed datasets (RDDs).
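A minimal word-count sketch using the Spark Streaming Java API (the host, port, and batch interval are hypothetical); it shows a DStream obtained from a source and another obtained by transformation:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DStreamExample {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("DStreamExample").setMaster("local[2]");
    // Each batch interval produces one RDD in the underlying series
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

    // A DStream obtained from a data source (a text socket here)
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    // A DStream obtained by transforming another DStream
    JavaDStream<String> words =
        lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
    words.countByValue().print();

    jssc.start();
    jssc.awaitTermination();
  }
}
```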
Heap Memory
The heap is the Java Virtual Machine (JVM) runtime data area from which memory for all class instances and arrays is committed. The initial heap memory is controlled by the JVM startup parameter -Xms.
- Maximum heap memory: the maximum amount of heap memory the system can commit to the program, specified by the -Xmx parameter.
- Committed heap memory: the total heap memory currently committed by the system for running the program. It ranges between the initial heap memory and the maximum heap memory.
- Used heap memory: the heap memory currently used by the program. It is never larger than the committed heap memory.
- Non-heap memory: memory managed by the JVM outside the heap, used for running the JVM itself. Non-heap memory has the following three memory pools:
  - Code Cache: stores JIT-compiled code. Its size is set through the JVM startup parameters -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize. The default maximum is 240 MB.
  - Compressed Class Space: stores the class metadata referenced by compressed class pointers. Its size is set through the JVM startup parameter -XX:CompressedClassSpaceSize. The default value is 1024 MB.
  - Metaspace: stores class metadata. Its size is set through the JVM startup parameters -XX:MetaspaceSize and -XX:MaxMetaspaceSize.
- Maximum non-heap memory: the maximum amount of non-heap memory the system can commit to the program. Its value is the sum of the reserved code cache, the compressed class space, and the maximum metaspace.
- Committed non-heap memory: the total non-heap memory currently committed by the system for running the program. It ranges between the initial non-heap memory and the maximum non-heap memory.
- Used non-heap memory: the non-heap memory currently used by the program. It is never larger than the committed non-heap memory.
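All of these figures can be read at run time through the standard java.lang.management API. A minimal sketch (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryReport {
  public static void main(String[] args) {
    MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
    MemoryUsage heap = bean.getHeapMemoryUsage();
    MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
    // init/used/committed/max correspond to the initial, used,
    // committed, and maximum memory described above (-1 means undefined)
    System.out.printf("heap:     init=%d used=%d committed=%d max=%d%n",
        heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());
    System.out.printf("non-heap: init=%d used=%d committed=%d max=%d%n",
        nonHeap.getInit(), nonHeap.getUsed(), nonHeap.getCommitted(), nonHeap.getMax());
  }
}
```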
Hadoop
Hadoop is a distributed system framework. It allows users to develop distributed applications using the high-speed computing and storage of a cluster without needing to know the underlying details of the distributed system, and it reliably and efficiently processes massive amounts of data in a scalable, distributed manner. Hadoop is reliable because it maintains multiple copies of working data and can redistribute processing away from failed nodes. It is efficient because it processes data in parallel. It is scalable because it can process petabytes of data. Hadoop consists of HDFS, MapReduce, HBase, and Hive.
Role
A role is an element of a service. A service contains one or more roles. Services are installed on servers through roles so that they can run properly.
Cluster
A cluster is a computing technology that enables multiple servers to work together as a single system. Clusters improve the stability, reliability, and data processing or service capability of the system. For example, clusters can prevent single points of failure (SPOFs), share storage resources, reduce system load, and improve system performance.
Instance
An instance is formed when a service role is installed on a host. A service has one or more role instances.
Metadata
Metadata is data that provides information about other data. It is used to define data properties, specify data storage locations and historical data, retrieve resources, and record files.