| Module | Supported | Service Module to Restart |
|---|---|---|
| YARN | Yes | NodeManager |
| Hive | Yes | HiveServer and HiveMetastore |
| Spark | Yes | NodeManager |
| Sqoop | Yes | NodeManager |
| Presto | Yes | HiveServer, HiveMetastore, and Presto |
| Flink | Yes | None |
| Impala | Yes | None |
| EMR | Yes | None |
| Self-built component | To be supported in the future | None |
| HBase | Not recommended | None |
In Cloudera Manager, add the following to Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml:

```xml
<property>
    <name>fs.cosn.userinfo.secretId</name>
    <value>AK***</value>
</property>
<property>
    <name>fs.cosn.userinfo.secretKey</name>
    <value></value>
</property>
<property>
    <name>fs.cosn.impl</name>
    <value>org.apache.hadoop.fs.CosFileSystem</value>
</property>
<property>
    <name>fs.AbstractFileSystem.cosn.impl</name>
    <value>org.apache.hadoop.fs.CosN</value>
</property>
<property>
    <name>fs.cosn.bucket.region</name>
    <value>ap-shanghai</value>
</property>
```
These settings take effect in core-site.xml. For other settings, see the Hadoop documentation.

| COSN Configuration Item | Value | Description |
|---|---|---|
| fs.cosn.userinfo.secretId | AKxxxx | API key information of the account |
| fs.cosn.userinfo.secretKey | Wpxxxx | API key information of the account |
| fs.cosn.bucket.region | ap-shanghai | Bucket region |
| fs.cosn.impl | org.apache.hadoop.fs.CosFileSystem | The implementation class of COSN for FileSystem, which is fixed at `org.apache.hadoop.fs.CosFileSystem` |
| fs.AbstractFileSystem.cosn.impl | org.apache.hadoop.fs.CosN | The implementation class of COSN for AbstractFileSystem, which is fixed at `org.apache.hadoop.fs.CosN` |
Copy the Hadoop-COS JAR (hadoop-cos-2.7.3-shaded.jar) to the CDH hadoop-hdfs library directory:

```shell
cp hadoop-cos-2.7.3-shaded.jar /opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hadoop-hdfs/
```
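Before running jobs, a quick sanity check is to list the bucket from a cluster node. This is a minimal sketch, assuming the JAR has been copied to every node, the affected services have been restarted, and `examplebucket-1250000000` is the placeholder bucket used in the examples below:

```shell
# Sanity check: list the COSN bucket root.
# Replace examplebucket-1250000000 with your own bucket name.
hadoop fs -ls cosn://examplebucket-1250000000/
```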
Run the TeraGen and TeraSort examples against the COSN bucket:

```shell
hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar teragen -Dmapred.job.maps=500 -Dfs.cosn.upload.buffer=mapped_disk -Dfs.cosn.upload.buffer.size=-1 1099 cosn://examplebucket-1250000000/terasortv1/1k-input

hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar terasort -Dmapred.max.split.size=134217728 -Dmapred.min.split.size=134217728 -Dfs.cosn.read.ahead.block.size=4194304 -Dfs.cosn.read.ahead.queue.size=32 cosn://examplebucket-1250000000/terasortv1/1k-input cosn://examplebucket-1250000000/terasortv1/1k-output
```
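If you want to check the sorted output, the same examples JAR also ships a TeraValidate job. The sketch below reuses the output path from the command above; the `1k-validate` report directory is a hypothetical choice:

```shell
# TeraValidate checks that the TeraSort output is globally sorted.
# The 1k-validate report path is arbitrary; replace the bucket with your own.
hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar teravalidate \
  cosn://examplebucket-1250000000/terasortv1/1k-output \
  cosn://examplebucket-1250000000/terasortv1/1k-validate
```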
Replace the content after the `cosn://` scheme with your own bucket path.

For Hive, create a table whose LOCATION points to COSN:

```sql
CREATE TABLE `report.report_o2o_pid_credit_detail_grant_daily`(
  `cal_dt` string,
  `change_time` string,
  `merchant_id` bigint,
  `store_id` bigint,
  `store_name` string,
  `wid` string,
  `member_id` bigint,
  `meber_card` string,
  `nickname` string,
  `name` string,
  `gender` string,
  `birthday` string,
  `city` string,
  `mobile` string,
  `credit_grant` bigint,
  `change_reason` string,
  `available_point` bigint,
  `date_time` string,
  `channel_type` bigint,
  `point_flow_id` bigint)
PARTITIONED BY (`topicdate` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'cosn://examplebucket-1250000000/user/hive/warehouse/report.db/report_o2o_pid_credit_detail_grant_daily'
TBLPROPERTIES (
  'last_modified_by'='work',
  'last_modified_time'='1589310646',
  'transient_lastDdlTime'='1589310646')
```
```sql
select count(1) from report.report_o2o_pid_credit_detail_grant_daily;
```
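If existing data is already stored in COSN, a partition can be pointed at it with an explicit `cosn://` location, just as with HDFS. This is a sketch only; the partition value `20200512` and its directory are hypothetical:

```shell
# Hypothetical partition value and path; replace the bucket and path with your own.
hive -e "ALTER TABLE report.report_o2o_pid_credit_detail_grant_daily ADD PARTITION (topicdate='20200512') LOCATION 'cosn://examplebucket-1250000000/user/hive/warehouse/report.db/report_o2o_pid_credit_detail_grant_daily/topicdate=20200512';"
```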
Take the Spark word count test conducted with COSN as an example:

```shell
spark-submit --class org.apache.spark.examples.JavaWordCount --executor-memory 4g --executor-cores 4 ./spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar cosn://examplebucket-1250000000/wordcount
```
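The `cosn://examplebucket-1250000000/wordcount` argument is the input path read by JavaWordCount, so the text to count must be uploaded to COSN first. A minimal sketch, assuming a hypothetical local file named `words.txt`:

```shell
# words.txt is a hypothetical local input file; replace the bucket with your own.
hadoop fs -mkdir -p cosn://examplebucket-1250000000/wordcount
hadoop fs -put ./words.txt cosn://examplebucket-1250000000/wordcount/
```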
Copy the Hadoop-COS JAR to the Sqoop directory (for example, `/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/sqoop/`), then run the import:

```shell
sqoop import --connect "jdbc:mysql://IP:PORT/mysql" --table sqoop_test --username root --password 123** --target-dir cosn://examplebucket-1250000000/sqoop_test
```
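After the import finishes, the result can be inspected directly on COSN. The `part-m-*` file names below are Sqoop's default map-output naming and are shown as an assumption:

```shell
# Replace the bucket with your own; part-m-* is Sqoop's default output naming.
hadoop fs -ls cosn://examplebucket-1250000000/sqoop_test
hadoop fs -cat cosn://examplebucket-1250000000/sqoop_test/part-m-* | head
```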
(2) Copy the Hadoop-COS JAR to the Presto directory, for example, `/usr/local/services/cos_presto/plugin/hive-hadoop2`.

(3) Presto does not load the gson-2.x JAR file (only used for CHDFS) from Hadoop Common, so you need to manually put it into the Presto directory, for example, `/usr/local/services/cos_presto/plugin/hive-hadoop2`.

(4) Restart HiveServer, HiveMetaStore, and Presto.

Query example:

```sql
select * from cosn_test_table where bucket is not null limit 1;
```

`cosn_test_table` is a table whose location uses the `cosn` scheme.
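The query can be issued from the Presto CLI after the restart. The coordinator address, catalog, and schema below are assumptions and should be replaced with your own values:

```shell
# localhost:8080, hive, and default are assumed values; adjust to your deployment.
presto --server localhost:8080 --catalog hive --schema default \
  --execute "select * from cosn_test_table where bucket is not null limit 1;"
```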