Cloud File Storage (CFS) provides scalable file storage that can be used with other Tencent Cloud services such as CVM, TKE, and BatchCompute. CFS offers the following storage classes; select one based on your business needs.
Storage classes
Standard
Standard is a highly cost-effective file system that uses mixed storage media and accelerates data reads and writes through a data tiering mechanism. Data is stored as three replicas on three independent physical servers on different racks, guaranteeing strong consistency and that every write to the file system is stored successfully. The access layer supports hot migration to ensure data reliability and high service availability. This class is suitable for scenarios that require small-scale, general-purpose data storage.
High-Performance
High-Performance is a low-latency file system that uses NVMe media only and delivers high storage performance. Data is stored as three replicas on three independent physical servers on different racks, guaranteeing strong consistency and that every write to the file system is stored successfully. The access layer supports hot migration to ensure data reliability and high service availability. This class is suitable for small-scale, latency-sensitive core businesses.
Standard Turbo
Standard Turbo is a parallel file system that uses mixed storage media and an asymmetric architecture in which data nodes and metadata nodes are deployed independently. Because it is mounted over a private protocol, a single client can reach the performance of the entire storage cluster, and underlying resources are isolated so that the cluster has exclusive use of its storage. Data is stored as three replicas on three independent physical servers on different racks, guaranteeing strong consistency and that every write to the file system is stored successfully. The access layer supports hot migration to ensure data reliability and high service availability. This class is suitable for scenarios that require large-scale throughput and mixed workloads.
High-Performance Turbo
High-Performance Turbo is a high-bandwidth, low-latency parallel file system that uses NVMe media only and an asymmetric architecture in which data nodes and metadata nodes are deployed independently. Because it is mounted over a private protocol, a single client can reach the performance of the entire storage cluster, and underlying resources are isolated so that the cluster has exclusive use of its storage. Data is stored as three replicas on three independent physical servers on different racks, guaranteeing strong consistency and that every write to the file system is stored successfully. The access layer supports hot migration to ensure data reliability and high service availability. This class is suitable for scenarios that involve a large number of small files.
High-Throughput
High-Throughput is a parallel file system with a layered architecture. It provides more flexible bandwidth scaling and access over the SMB protocol, meeting storage requirements that combine a smaller capacity with high bandwidth. Data is stored as three replicas on three independent physical servers on different racks, guaranteeing strong consistency and that every write to the file system is stored successfully. The access layer supports hot migration to ensure data reliability and high service availability. This class is suitable for read-intensive scenarios such as rendering, game battle servers, and non-linear editing.
Performance and specifications
General series
Item | Standard | High-Performance
Positioning | Cost-effective, suitable for small-scale general data storage | High performance and low latency, suitable for small-scale latency-sensitive core businesses
Scenario | Small-scale enterprise file sharing, data backup/archive, and log storage | Small-scale CI/CD development and testing environments, high-performance web services, OLTP databases, and high-performance file sharing
Storage capacity | 0–160 TiB | 0–32 TiB
Bandwidth (MiB/s) | Min{100 + 0.1 x capacity in GiB, 300} | Min{200 + 0.2 x capacity in GiB, 1,024}
Read IOPS | Min{2,000 + 8 x capacity in GiB, 15,000} | Min{2,500 + 30 x capacity in GiB, 30,000}
Write IOPS | Min{2,000 + 8 x capacity in GiB, 15,000} | Min{2,500 + 30 x capacity in GiB, 30,000}
Maximum OPS | Read/Write: 10,000/1,000 | Read/Write: 30,000/3,000
Latency | 4K single-thread read: 3 ms; 4K single-thread write: 7 ms | 4K single-thread read: 1 ms; 4K single-thread write: 1.5 ms
Cost | 0.05 USD/GiB/month | 0.2286 USD/GiB/month
Supported protocol | NFS/SMB | NFS
Scaling | Auto | Auto
Supported OS | Linux/Windows | Linux/Windows
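To make the General series formulas above concrete, here is a minimal Python sketch (not an official calculator) that evaluates them for an example used capacity; the function names and the 1 TiB figure are purely illustrative.

```python
def standard_perf(used_gib):
    """Standard class limits, per the table above (based on used capacity)."""
    bandwidth_mib_s = min(100 + 0.1 * used_gib, 300)
    iops = min(2_000 + 8 * used_gib, 15_000)   # same formula for read and write
    return bandwidth_mib_s, iops

def high_performance_perf(used_gib):
    """High-Performance class limits, per the table above."""
    bandwidth_mib_s = min(200 + 0.2 * used_gib, 1_024)
    iops = min(2_500 + 30 * used_gib, 30_000)  # same formula for read and write
    return bandwidth_mib_s, iops

# Example: with 1 TiB (1,024 GiB) of used storage, a Standard file system
# gets about 202 MiB/s and 10,192 IOPS, while a High-Performance file
# system gets about 405 MiB/s and hits the 30,000 IOPS cap.
print(standard_perf(1024))          # ≈ (202.4, 10192)
print(high_performance_perf(1024))  # ≈ (404.8, 30000)
```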
Turbo series
Item | Standard Turbo | High-Performance Turbo
Positioning | High throughput and large storage, suitable for businesses that require high throughput and mixed workloads | High throughput and high IOPS, suitable for businesses with a large number of small files
Scenario | Non-linear media asset editing, image rendering, AI inference, OLAP businesses, and high-performance computing | High-performance large-scale computation, AI training, OLTP databases, big data analysis, and OLAP services
Storage capacity | 20 TiB to 100 PiB | 10 TiB to 100 PiB
Bandwidth (MiB/s) | Min{0.1 x capacity in GiB, 100,000} | Min{0.2 x capacity in GiB, 100,000}
Read IOPS | Min{2 x capacity in GiB, 2,000,000} | Min{20 x capacity in GiB, 10,000,000}
Write IOPS | Min{1 x capacity in GiB, 1,000,000} | Min{5 x capacity in GiB, 3,000,000}
Maximum OPS | Read/Write: 300,000/20,000 | Read/Write: 300,000/20,000
Latency | 4K single-thread read: 0.2 ms; 4K single-thread write: 3 ms | 4K single-thread read: 0.2 ms; 4K single-thread write: 1.5 ms
Cost | 0.0857 USD/GiB/month | 0.2 USD/GiB/month
Supported protocol | POSIX/MPI | POSIX/MPI
Scaling | Manual | Manual
Supported OS | Linux | Linux
Consistency | Strong consistency | Strong consistency
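As a similar illustration for the Turbo series, the sketch below evaluates the formulas above at the minimum purchasable capacities (20 TiB for Standard Turbo, 10 TiB for High-Performance Turbo). Turbo formulas use the capacity purchased for the cluster, as described in the Notes section; the function names are only examples.

```python
def standard_turbo_perf(purchased_gib):
    """Standard Turbo limits, per the table above (based on purchased capacity)."""
    bandwidth_mib_s = min(0.1 * purchased_gib, 100_000)
    read_iops = min(2 * purchased_gib, 2_000_000)
    write_iops = min(1 * purchased_gib, 1_000_000)
    return bandwidth_mib_s, read_iops, write_iops

def high_performance_turbo_perf(purchased_gib):
    """High-Performance Turbo limits, per the table above."""
    bandwidth_mib_s = min(0.2 * purchased_gib, 100_000)
    read_iops = min(20 * purchased_gib, 10_000_000)
    write_iops = min(5 * purchased_gib, 3_000_000)
    return bandwidth_mib_s, read_iops, write_iops

# Minimum cluster sizes: 20 TiB (20,480 GiB) and 10 TiB (10,240 GiB).
print(standard_turbo_perf(20 * 1024))          # ≈ (2048.0, 40960, 20480)
print(high_performance_turbo_perf(10 * 1024))  # ≈ (2048.0, 204800, 51200)
```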
Infrequent Access (IA)
Item | Infrequent Access (IA)
Positioning | Storage of infrequently accessed warm and cold data
Scenario | Used together with Standard Turbo or High-Performance Turbo to achieve automatic hot-cold data tiering, reducing storage costs
Storage capacity | 0–1 EiB
Bandwidth | 600 MiB/s
Cost | Storage usage: 0.0171 USD/GiB/month; Data transfer: 0.0085 USD/GiB
Scaling | Auto
Note:
An IA file system cannot be mounted directly for access. It must be used together with Standard Turbo or High-Performance Turbo to achieve automatic hot-cold data tiering.
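As a rough cost illustration only, the sketch below applies the IA unit prices from the table above. It assumes the data transfer price is charged per GiB moved between the hot tier and IA during the month; the capacity figures are purely hypothetical.

```python
# Hypothetical monthly cost for data tiered to Infrequent Access (IA),
# using the unit prices listed in the IA table above.
IA_STORAGE_PRICE_USD_GIB_MONTH = 0.0171  # storage usage
IA_TRANSFER_PRICE_USD_GIB = 0.0085       # data transfer (assumed per GiB moved)

def ia_monthly_cost(stored_gib, transferred_gib):
    return (stored_gib * IA_STORAGE_PRICE_USD_GIB_MONTH
            + transferred_gib * IA_TRANSFER_PRICE_USD_GIB)

# Example: 10 TiB parked in IA with 1 TiB moved during the month.
print(ia_monthly_cost(10 * 1024, 1 * 1024))  # ≈ 183.81 USD
```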
High-Throughput
Item | High-Throughput
Positioning | High throughput and large capacity, suitable for large-scale read-intensive businesses
Scenario | Read-intensive scenarios such as video rendering, game battle servers, and non-linear editing
Storage capacity | 0–1 PiB
Bandwidth | 0–200 GiB/s (depending on the deployment workload)
Read IOPS | Min{2 x capacity in GiB, 2,000,000}
Write IOPS | Min{1 x capacity in GiB, 1,000,000}
Maximum OPS | Read/Write: 300,000/20,000
Latency | 4K single-thread read: 5 ms; 4K single-thread write: 10 ms
Cost | Capacity: 0.1428 USD/GiB/month; Bandwidth: 428.571 USD/GiB/s/month
Supported protocol | SMB
Scaling | Auto
Supported OS | Windows
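Because High-Throughput is priced on both capacity and bandwidth, a quick back-of-the-envelope calculation can help with sizing. The sketch below is only an illustration that assumes the two unit prices from the table above are billed independently; the 100 TiB / 10 GiB/s inputs are example values, not recommendations.

```python
# Hypothetical monthly cost for a High-Throughput file system, assuming the
# capacity and bandwidth unit prices from the table above are billed separately.
CAPACITY_PRICE_USD_GIB_MONTH = 0.1428     # per GiB of capacity per month
BANDWIDTH_PRICE_USD_GIBS_MONTH = 428.571  # per GiB/s of bandwidth per month

def high_throughput_monthly_cost(capacity_gib, bandwidth_gib_s):
    return (capacity_gib * CAPACITY_PRICE_USD_GIB_MONTH
            + bandwidth_gib_s * BANDWIDTH_PRICE_USD_GIBS_MONTH)

# Example: 100 TiB (102,400 GiB) of capacity with 10 GiB/s of bandwidth.
print(high_throughput_monthly_cost(100 * 1024, 10))  # ≈ 18,908 USD per month
```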
Notes
In the performance-related formulas, the capacities of Standard Turbo and High-Performance Turbo refer to the capacities purchased for the cluster. For Standard and High-Performance, the capacities refer to the storage that is actually used by the instances.
The tables above show the capabilities of each file system. To reach the upper performance limits, you usually need to perform multi-threaded reads and writes from multiple compute nodes.
Performance benchmarks are measured under interference-free conditions; results of mixed tests or other workloads may vary.
OPS indicates the number of metadata operations the file system can process per second, which is not the same as IOPS.
Currently, High-Throughput CFS is not available for purchase in the console. If you need to purchase it, please submit a ticket.