Component | COSN support | Service restart required
Yarn | Supported | Restart NodeManager
Hive | Supported | Restart HiveServer and HiveMetastore
Spark | Supported | Restart NodeManager
Sqoop | Supported | Restart NodeManager
Presto | Supported | Restart HiveServer, HiveMetastore, and Presto
Flink | Supported | No
Impala | Supported | No
EMR | Supported | No
Self-built components | Support planned | None
HBase | Not recommended | None
<property>
    <name>fs.cosn.userinfo.secretId</name>
    <value>AK***</value>
</property>
<property>
    <name>fs.cosn.userinfo.secretKey</name>
    <value></value>
</property>
<property>
    <name>fs.cosn.impl</name>
    <value>org.apache.hadoop.fs.CosFileSystem</value>
</property>
<property>
    <name>fs.AbstractFileSystem.cosn.impl</name>
    <value>org.apache.hadoop.fs.CosN</value>
</property>
<property>
    <name>fs.cosn.bucket.region</name>
    <value>ap-shanghai</value>
</property>
COSN configuration item | Value | Description
fs.cosn.userinfo.secretId | AKxxxx | The SecretId of the account's API key
fs.cosn.userinfo.secretKey | Wpxxxx | The SecretKey of the account's API key
fs.cosn.bucket.region | ap-shanghai | Region where the user's bucket resides
fs.cosn.impl | org.apache.hadoop.fs.CosFileSystem | COSN implementation class for FileSystem; fixed to org.apache.hadoop.fs.CosFileSystem
fs.AbstractFileSystem.cosn.impl | org.apache.hadoop.fs.CosN | COSN implementation class for AbstractFileSystem; fixed to org.apache.hadoop.fs.CosN
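With these items in place in core-site.xml, a quick sanity check is to list the bucket through the Hadoop filesystem shell (this assumes the hadoop-cos jar is already on the classpath; the bucket name is the example bucket used throughout this page):

hadoop fs -ls cosn://examplebucket-1250000000/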
cp hadoop-cos-2.7.3-shaded.jar /opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hadoop-hdfs/
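The shaded jar has to be present on every node that runs COSN workloads, which is also why the components in the table above need a restart afterwards. A minimal distribution sketch, assuming a hypothetical nodes.txt file listing the cluster hostnames and passwordless SSH between nodes:

while read host; do
    scp hadoop-cos-2.7.3-shaded.jar "$host":/opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p0.3/lib/hadoop-hdfs/
done < nodes.txt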
hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar teragen -Dmapred.job.maps=500 -Dfs.cosn.upload.buffer=mapped_disk -Dfs.cosn.upload.buffer.size=-1 1099 cosn://examplebucket-1250000000/terasortv1/1k-input

hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar terasort -Dmapred.max.split.size=134217728 -Dmapred.min.split.size=134217728 -Dfs.cosn.read.ahead.block.size=4194304 -Dfs.cosn.read.ahead.queue.size=32 cosn://examplebucket-1250000000/terasortv1/1k-input cosn://examplebucket-1250000000/terasortv1/1k-output
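To confirm the sort actually succeeded, the same examples jar provides a teravalidate step; the 1k-validate report path below is an illustrative choice, not part of the original example:

hadoop jar ./hadoop-mapreduce-examples-2.7.3.jar teravalidate cosn://examplebucket-1250000000/terasortv1/1k-output cosn://examplebucket-1250000000/terasortv1/1k-validate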
Replace the path after the cosn:// scheme with the bucket path used by your own big data workloads.

CREATE TABLE `report.report_o2o_pid_credit_detail_grant_daily`(
  `cal_dt` string,
  `change_time` string,
  `merchant_id` bigint,
  `store_id` bigint,
  `store_name` string,
  `wid` string,
  `member_id` bigint,
  `meber_card` string,
  `nickname` string,
  `name` string,
  `gender` string,
  `birthday` string,
  `city` string,
  `mobile` string,
  `credit_grant` bigint,
  `change_reason` string,
  `available_point` bigint,
  `date_time` string,
  `channel_type` bigint,
  `point_flow_id` bigint)
PARTITIONED BY (`topicdate` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION 'cosn://examplebucket-1250000000/user/hive/warehouse/report.db/report_o2o_pid_credit_detail_grant_daily'
TBLPROPERTIES ('last_modified_by'='work','last_modified_time'='1589310646','transient_lastDdlTime'='1589310646')
select count(1) from report.report_o2o_pid_credit_detail_grant_daily;
spark-submit --class org.apache.spark.examples.JavaWordCount --executor-memory 4g --executor-cores 4 ./spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar cosn://examplebucket-1250000000/wordcount
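JavaWordCount expects the input path to exist before the job is submitted; a minimal sketch that uploads a sample file first (words.txt is a hypothetical local file):

echo "hello cos hello hadoop" > words.txt
hadoop fs -put words.txt cosn://examplebucket-1250000000/wordcount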
sqoop import --connect "jdbc:mysql://IP:PORT/mysql" --table sqoop_test --username root --password 123** --target-dir cosn://examplebucket-1250000000/sqoop_test
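Sqoop writes each map task's output as part-* files under the --target-dir; assuming the import above succeeded, the rows can be inspected directly from COS:

hadoop fs -cat 'cosn://examplebucket-1250000000/sqoop_test/part-*'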
select * from cosn_test_table where bucket is not null limit 1;
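The query above is issued from the Presto CLI against the Hive catalog; a typical invocation, where the coordinator address localhost:8080 and the default schema are assumptions for illustration:

./presto --server localhost:8080 --catalog hive --schema default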