
Creating a Parallel File System (SDK for Python)

Function

Parallel file systems (PFSs) are containers for storing objects in OBS. Files uploaded to a PFS are stored in it as objects. This API creates a PFS.

When creating a PFS, you can configure parameters such as the storage class, region, and access control as needed.

Restrictions

  • To create a PFS, you must have the obs:bucket:CreateBucket permission. IAM is recommended for granting permissions. For details, see IAM Custom Policies.
  • The mapping between OBS regions and endpoints must comply with what is listed in Regions and Endpoints.

    When creating a PFS, if you use the endpoint obs.myhuaweicloud.com for client initialization, you do not have to specify a region (indicated by location) for the PFS, because OBS automatically creates it in the CN North-Beijing1 (cn-north-1) region. If you use any other endpoint, you must specify a region that matches that endpoint; otherwise, status code 400 is returned.

    For example, if the endpoint used for initialization is obs.ap-southeast-1.myhuaweicloud.com, you must set Location to ap-southeast-1 when creating a PFS.

  • A maximum of 100 PFSs (regardless of regions) can be created for an account. There is no limit on the number and size of objects in a PFS.
  • The name of a PFS must be unique in OBS. If you create a PFS with the same name as one you already own in the same region, HTTP status code 200 is returned. In any other case, creating a PFS with a name that already exists returns HTTP status code 409, indicating that such a PFS already exists.
  • The name of a deleted PFS can be reused for another bucket or PFS about 30 minutes after the deletion.

Method

ObsClient.createBucket(bucketName, header, location, extensionHeaders)
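
The parameters are passed positionally in the order shown above. The following is a minimal sketch of such a call (error handling is omitted, and the bucket name, endpoint, and region are examples to replace with your own); it creates a PFS in a region that matches the endpoint used for client initialization, as required in Restrictions. A complete, runnable example is provided in Code Examples.

from obs import ObsClient, CreateBucketHeader
import os

# Credentials are read from environment variables; see Code Examples for details.
ak = os.getenv("AccessKeyID")
sk = os.getenv("SecretAccessKey")

# The endpoint used here determines which value of location is valid.
obsClient = ObsClient(access_key_id=ak, secret_access_key=sk,
                      server="https://obs.ap-southeast-1.myhuaweicloud.com")

# isPFS=True makes createBucket create a parallel file system instead of a bucket.
header = CreateBucketHeader(isPFS=True)

# location must match the region in the endpoint above.
resp = obsClient.createBucket("examplebucket", header, "ap-southeast-1")
print(resp.status)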

Request Parameters

Table 1 List of request parameters

bucketName (str; mandatory)
Explanation: PFS name.
Restrictions:
  • A PFS name must be unique across all accounts and regions.
  • A PFS name:
    • Must be 3 to 63 characters long and start with a digit or letter. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed.
    • Cannot be formatted as an IP address.
    • Cannot start or end with a hyphen (-) or period (.).
    • Cannot contain two consecutive periods (..), for example, my..bucket.
    • Cannot contain periods (.) and hyphens (-) adjacent to each other, for example, my-.bucket or my.-bucket.
Default value: None

header (CreateBucketHeader; optional)
Explanation: Header used to configure basic information about the PFS, such as the storage class and redundancy policy.
Value range: See Table 2.
Default value: None

location (str; mandatory unless the region where OBS resides is the default region)
Explanation: Region where the PFS is to be created.
Restrictions: If the endpoint obs.myhuaweicloud.com is used, this parameter is not required. If any other endpoint is used, this parameter is required.
Value range: For valid regions and endpoints, see Regions and Endpoints. An endpoint is the request address for calling an API. Endpoints vary depending on services and regions. To obtain the regions and endpoints, contact the enterprise administrator.
Default value: If obs.myhuaweicloud.com is used as the endpoint and no region is specified, cn-north-1 (the CN North-Beijing1 region) is used by default.

extensionHeaders (dict; optional)
Explanation: Extension headers.
Value range: See User-defined Headers (SDK for Python).
Default value: None

Table 2 CreateBucketHeader

aclControl (str; optional)
Explanation: ACL to apply to the PFS when it is created.
Value range: See HeadPermission (Table 3).
Default value: PRIVATE

storageClass (str; optional)
Explanation: Storage class of the PFS, specified at creation time.
Value range: See Table 4.
Default value: STANDARD

extensionGrants (list of ExtensionGrant; optional)
Explanation: Extension permissions to grant when the PFS is created.
Value range: See Table 5.
Default value: None

availableZone (str; optional)
Explanation: Data redundancy type of the PFS, specified at creation time.
Restrictions: Multi-AZ redundancy is not available for Archive storage. If the region where the PFS is to be created does not support multi-AZ storage, the PFS uses single-AZ storage by default.
Value range: To use multi-AZ storage, set this parameter to 3az. To use single-AZ storage (the default assigned by OBS), do not specify this parameter.
Default value: If this parameter is left blank, single-AZ storage is used.

epid (str; optional)
Explanation: Enterprise project ID, specified at creation time. If you have enabled EPS, you can obtain the project ID from the EPS console.
Restrictions: The value of epid is a UUID, for example, 9892d768-2d13-450f-aac7-ed0e44c2585f. epid is not required if you have not enabled EPS.
Value range: See How Do I Obtain an Enterprise Project ID?
Default value: None

isPFS (bool; optional)
Explanation: Whether to create a PFS.
Value range:
  • True: A PFS is created.
  • False: A bucket is created.
Default value: False
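
As an illustration of the fields above, the following sketch builds a CreateBucketHeader for a private, Standard, multi-AZ PFS bound to an enterprise project. It only constructs the header; pass it to ObsClient.createBucket as shown in Code Examples. The enterprise project ID is the example value from this page and must be replaced with your own (omit epid if EPS is not enabled).

from obs import CreateBucketHeader

# All fields are optional; unset fields fall back to the defaults listed in Table 2.
header = CreateBucketHeader(
    aclControl="private",       # ACL value; see Table 3 (HeadPermission)
    storageClass="STANDARD",    # storage class; see Table 4 (StorageClass)
    availableZone="3az",        # multi-AZ redundancy; omit for single-AZ (the default)
    epid="9892d768-2d13-450f-aac7-ed0e44c2585f",  # example enterprise project ID; replace with your own
    isPFS=True                  # create a parallel file system rather than a bucket
)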

Table 3 HeadPermission

HeadPermission.PRIVATE (value: private)
Private read/write. A bucket or object can be accessed only by its owner.

HeadPermission.PUBLIC_READ (value: public-read)
Public read, private write. If this permission is granted on a bucket, anyone can read the object list, multipart uploads, metadata, and object versions in the bucket. If it is granted on an object, anyone can read the content and metadata of the object.

HeadPermission.PUBLIC_READ_WRITE (value: public-read-write)
Public read/write. If this permission is granted on a bucket, anyone can read the object list, multipart uploads, metadata, and object versions in the bucket, and can upload or delete objects, initiate multipart uploads, upload parts, assemble parts, copy parts, and abort multipart uploads. If it is granted on an object, anyone can read the content and metadata of the object.

HeadPermission.PUBLIC_READ_DELIVERED (value: public-read-delivered)
Public read on a bucket and the objects in it. If this permission is granted on a bucket, anyone can read the object list, multipart uploads, metadata, and object versions in the bucket, and can read the content and metadata of objects in the bucket.
NOTE: PUBLIC_READ_DELIVERED cannot be applied to objects.

HeadPermission.PUBLIC_READ_WRITE_DELIVERED (value: public-read-write-delivered)
Public read/write on a bucket and the objects in it. If this permission is granted on a bucket, anyone can read the object list, multipart uploads, metadata, and object versions in the bucket, and can upload or delete objects, initiate multipart uploads, upload parts, assemble parts, copy parts, and abort multipart uploads. They can also read the content and metadata of objects in the bucket.
NOTE: PUBLIC_READ_WRITE_DELIVERED cannot be applied to objects.

HeadPermission.BUCKET_OWNER_FULL_CONTROL (value: bucket-owner-full-control)
If this permission is granted on an object, only the bucket owner and the object owner have full control over the object. By default, if you upload an object to a bucket owned by another user, the bucket owner does not have permissions on your object. After you grant this permission to the bucket owner, the bucket owner has full control over your object.

Table 4 StorageClass

STANDARD (Standard storage class)
Features low access latency and high throughput. Suitable for storing large amounts of frequently accessed data (multiple times a month) or small objects (< 1 MB) that require quick response.

WARM (Infrequent Access storage class)
Suitable for storing data that is accessed less frequently (fewer than 12 times a year) but must be instantly available when needed.

COLD (Archive storage class)
Suitable for storing rarely accessed (once a year) data.

INTELLIGENT_TIERING (Intelligent Tiering storage class)
Optimizes storage costs by automatically moving data to a more economical access tier when access patterns change. Ideal for data with constantly changing or unpredictable access patterns.

Table 5 ExtensionGrant

granteeId (str; optional)
Explanation: Account (domain) ID of the grantee.
Value range: To obtain the account ID, see How Do I Get My Account ID and IAM User ID? (SDK for Python)
Default value: None

permission (str; optional)
Explanation: User-defined permissions for the PFS.
Value range: One or more permissions from Table 6.
Default value: None

Table 6 Permission

READ
Read permission. A grantee with this permission for a bucket can obtain the list of objects, multipart uploads, bucket metadata, and object versions in the bucket. A grantee with this permission for an object can obtain the object content and metadata.

WRITE
Write permission. A grantee with this permission for a bucket can upload, overwrite, and delete any object or part in the bucket. This permission is not applicable to objects.

READ_ACP
Permission to read ACL configurations. A grantee with this permission can obtain the ACL of a bucket or object. A bucket or object owner always has this permission for the bucket or object.

WRITE_ACP
Permission to modify ACL configurations. A grantee with this permission can update the ACL of a bucket or object. A bucket or object owner always has this permission for the bucket or object. Because a grantee with this permission can modify the ACL, they can grant themselves full access.

FULL_CONTROL
Full control, including read and write permissions for a bucket and its ACL, or for an object and its ACL. A grantee with this permission for a bucket has the READ, WRITE, READ_ACP, and WRITE_ACP permissions for the bucket. A grantee with this permission for an object has the READ, READ_ACP, and WRITE_ACP permissions for the object.
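
The sketch below grants READ permission on the new PFS to another account at creation time. It assumes ExtensionGrant can be imported from the obs package in the same way as CreateBucketHeader; the grantee account ID is a placeholder.

from obs import CreateBucketHeader, ExtensionGrant

# Grant READ (see Table 6) to the account identified by granteeId (placeholder value).
grant = ExtensionGrant(granteeId="grantee-account-id", permission="READ")
header = CreateBucketHeader(isPFS=True, extensionGrants=[grant])
# Pass this header to ObsClient.createBucket as shown in Code Examples.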

Responses

Table 7 Responses

Type: GetResult
Explanation: SDK common results. For the fields of GetResult, see Table 8.

Table 8 GetResult

status (int)
Explanation: HTTP status code.
Value range: A status code is a group of digits that indicates the status of a response. 2xx indicates success; 4xx or 5xx indicates an error. For more information, see Status Code.
Default value: None

reason (str)
Explanation: Reason description.
Default value: None

errorCode (str)
Explanation: Error code returned by the OBS server. If the value of status is less than 300, this parameter is left blank.
Default value: None

errorMessage (str)
Explanation: Error message returned by the OBS server. If the value of status is less than 300, this parameter is left blank.
Default value: None

requestId (str)
Explanation: Request ID returned by the OBS server.
Default value: None

indicator (str)
Explanation: Error indicator returned by the OBS server.
Default value: None

hostId (str)
Explanation: Requested server ID. If the value of status is less than 300, this parameter is left blank.
Default value: None

resource (str)
Explanation: Error source (a bucket or an object). If the value of status is less than 300, this parameter is left blank.
Default value: None

header (list)
Explanation: Response header list, composed of tuples. Each tuple has two elements: the key and the value of a response header.
Default value: None

body (object)
Explanation: Result content returned when the operation succeeds. If the value of status is 300 or greater, body is null. The content varies with the API being called. For details, see Bucket-Related APIs (SDK for Python) and Object-Related APIs (SDK for Python).
Default value: None
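
The fields above can be read directly from the returned object. A brief sketch, where resp is the value returned by a createBucket call as in Code Examples:

# resp = obsClient.createBucket(bucketName, header, location)
if resp.status < 300:
    # Success: requestId identifies the request; header holds (key, value) tuples.
    print('requestId:', resp.requestId)
    for key, value in resp.header:
        print(key, value)
else:
    # Failure: errorCode and errorMessage come from the OBS server.
    print(resp.errorCode, resp.errorMessage)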

Code Examples

This example creates a parallel file system named examplebucket and specifies its location, ACL, storage class, and redundancy type.

from obs import CreateBucketHeader
from obs import ObsClient
import os
import traceback

# Obtain an AK and SK pair from environment variables (recommended) or obtain them in other ways. Hard-coding the AK and SK may result in leakage.
# Obtain an AK and SK pair on the management console. For details, see https://support.huaweicloud.com/intl/en-us/usermanual-ca/ca_01_0003.html.
ak = os.getenv("AccessKeyID")
sk = os.getenv("SecretAccessKey")
# (Optional) If you use a temporary AK and SK pair and a security token to access OBS, obtain them using environment variables.
# security_token = os.getenv("SecurityToken")
# Set server to the endpoint corresponding to the bucket. CN-Hong Kong is used here as an example. Replace it with the one currently in use.
server = "https://obs.ap-southeast-1.myhuaweicloud.com" 

# Create an obsClient instance.
# If you use a temporary AK and SK pair and a security token to access OBS, you must specify security_token when creating an instance.
obsClient = ObsClient(access_key_id=ak, secret_access_key=sk, server=server)
try:
    # Specify additional request headers to create a private parallel file system in the Standard storage class with multi-AZ redundancy.
    header = CreateBucketHeader(aclControl="private", storageClass="STANDARD", availableZone="3az", isPFS=True)
    # Specify the region where the bucket is to be created. The region must be the same as that in the endpoint passed. ap-southeast-1 is used as an example.
    location = "ap-southeast-1"
    bucketName = "examplebucket"
    # Create a bucket.
    resp = obsClient.createBucket(bucketName, header, location)
    # If status code 2xx is returned, the API was called successfully. Otherwise, the call failed.
    if resp.status < 300:
        print('Create Bucket Succeeded')
        print('requestId:', resp.requestId)
    else:
        print('Create Bucket Failed')
        print('requestId:', resp.requestId)
        print('errorCode:', resp.errorCode)
        print('errorMessage:', resp.errorMessage)
except:
    print('Create Bucket Failed')
    print(traceback.format_exc())