Configuration Item | Description |
Billing Mode | Pay-as-you-go (postpaid): a bill is generated hourly based on actual resource usage, and you pay only for what you use. |
Region | Cloud Data Warehouse is currently available in the Shanghai, Hong Kong (China), Beijing, Guangzhou, Singapore, and Silicon Valley regions. We recommend selecting the region closest to your users; the region cannot be changed after purchase. |
Availability Zone | Each region offers different availability zones; select one as needed on the purchase page. |
Network | A VPC is an isolated, highly secure, and dedicated network environment. You can create a new VPC and subnet or select existing ones. |
High Availability | In HA mode, each shard has two replicas; in non-HA mode, each shard has only one replica, so the entire cluster becomes unavailable if that replica fails. We therefore recommend HA mode for production environments (see the query sketch after this table for a way to check the shard/replica layout). |
Compute Node Type | Three types of compute nodes are available. Standard: 4-core 16 GB, 8-core 32 GB, 16-core 64 GB, 24-core 96 GB, 32-core 128 GB, 64-core 256 GB, 90-core 224 GB, and 128-core 256 GB. Storage-Optimized: 32-core 128 GB (with twelve 3720 GB SATA HDDs), 64-core 256 GB (with twenty-four 3720 GB SATA HDDs), and 84-core 320 GB (with twenty-four 3720 GB SATA HDDs). High-Performance: 32-core 128 GB (with two 3570 GB NVMe SSDs), 64-core 256 GB (with four 3570 GB NVMe SSDs), and 84-core 320 GB (with four 3570 GB NVMe SSDs). Higher specifications deliver better performance; select a specification that matches your workload. |
ZooKeeper Node Type | Available specifications are 4-core 16 GB, 8-core 32 GB, 16-core 64 GB, 24-core 96 GB, 32-core 128 GB, 64-core 256 GB, 90-core 224 GB, and 128-core 256 GB. The heavier the cluster load, the higher the specification needed; select one that matches your workload. |
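Once the cluster is running, the shard and replica layout implied by the High Availability setting can be checked from the system.clusters table over the HTTP interface described further below. This is an illustrative query: the access IP address is a placeholder, and default_cluster is the cluster name used in the table creation statements later on this page. In HA mode each shard_num should list two replica_num entries; in non-HA mode only one.

echo "select cluster, shard_num, replica_num, host_name from system.clusters where cluster = 'default_cluster'" | curl 'http://xxx.xxx.xxx.xxx:8123/' --data-binary @-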
Prepare the test data in an account.csv file:

AccountId, Name, Address, Year
1, 'GHua', 'WuHan Hubei', 1990
2, 'SLiu', 'ShenZhen Guangzhou', 1991
3, 'JPong', 'Chengdu Sichuan', 1992
Download the ClickHouse client packages on a CVM instance that can access the cluster:

wget https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-client-20.7.2.30-2.noarch.rpm
wget https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-common-static-20.7.2.30-2.x86_64.rpm
rpm -ivh *.rpm
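If the packages installed successfully, the client should now be on the PATH; a quick check (the exact version string depends on the packages downloaded):

clickhouse-client --version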
TCP port 9000: connect to the cluster with the command-line client.

clickhouse-client -h xxx.xxx.xxx.xxx --port 9000
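Besides the interactive session, the same client can run a one-off statement with the --query option, which the import step below also relies on; for example:

clickhouse-client -h xxx.xxx.xxx.xxx --port 9000 --query="select version()"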
HTTP port 8123: get the specific access IP address in Cluster Access Address (HTTP) on the cluster details page, then check the connection. The query below returns the server version (for example, 21.3.9.83):
echo "select version()" | curl 'http://xxx.xxx.xxx.xxx:8123/' --data-binary @-
echo "select * from system.clusters" | curl 'http://xxx.xxx.xxx.xxx:8123/' --data-binary @-
For an HA cluster, create the database and a ReplicatedMergeTree table on the cluster:

CREATE DATABASE IF NOT EXISTS testdb ON CLUSTER default_cluster;

CREATE TABLE testdb.account ON CLUSTER default_cluster
(
    accountid UInt16,
    name String,
    address String,
    year UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/testdb/account', '{replica}')
ORDER BY (accountid);
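Optionally, queries can be routed across all shards through a Distributed table built on top of the local replicated table. This is a hedged sketch rather than part of the quick start: the table name account_dist and the rand() sharding key are illustrative assumptions, while default_cluster, testdb, and account come from the statements above.

clickhouse-client -h xxx.xxx.xxx.xxx --port 9000 --query="
CREATE TABLE testdb.account_dist ON CLUSTER default_cluster
AS testdb.account
ENGINE = Distributed(default_cluster, testdb, account, rand())"

Selecting from testdb.account_dist then reads from all shards, and inserts into it are fanned out according to the sharding key.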
For a non-HA cluster, create the database and a MergeTree table instead:

CREATE DATABASE IF NOT EXISTS testdb ON CLUSTER default_cluster;

CREATE TABLE testdb.account ON CLUSTER default_cluster
(
    accountid UInt16,
    name String,
    address String,
    year UInt64
)
ENGINE = MergeTree()
ORDER BY (accountid);
Upload account.csv to the /data directory of the CVM instance connected to the ClickHouse cluster and run the following command to import the data:

cat /data/account.csv | clickhouse-client -h xxx.xxx.xxx.xxx --database=testdb --query="INSERT INTO account FORMAT CSVWithNames"
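The file could also be loaded through the HTTP port used earlier. This is an illustrative alternative, assuming the same access address, with the INSERT statement URL-encoded as the query parameter:

cat /data/account.csv | curl 'http://xxx.xxx.xxx.xxx:8123/?query=INSERT%20INTO%20testdb.account%20FORMAT%20CSVWithNames' --data-binary @-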
Query the table to confirm the import:

select * from testdb.account;
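As a quick sanity check, the row count should match the three sample records; an illustrative non-interactive form using the client installed above:

clickhouse-client -h xxx.xxx.xxx.xxx --port 9000 --database=testdb --query="select count() from account"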