This document describes how to configure and verify common Zeppelin interpreters (Zeppelin v0.9.1 is used as an example; the steps are similar for later versions).
To use the Spark interpreter, configure the following interpreter properties:
SPARK_HOME: /usr/local/service/spark
spark.master: yarn
spark.submit.deployMode: cluster
spark.app.name: zeppelin-spark
To verify the configuration, first upload a wordcount.txt file to the /tmp path of the EMR HDFS. Here, hdfs://HDFS45983 is the value of the fs.defaultFS configuration item in core-site.xml.
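For example, assuming wordcount.txt sits in the current directory of a shell on a cluster node, it can be uploaded with:
hdfs dfs -put ./wordcount.txt hdfs://HDFS45983/tmp/wordcount.txt
Then run the word count in a %spark paragraph: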
%spark
// Read the file uploaded to HDFS in the previous step
val data = sc.textFile("hdfs://HDFS45983/tmp/wordcount.txt")
case class WordCount(word: String, count: Int)
// Split each line into words, count occurrences, and wrap each pair in the case class
val result = data.flatMap(x => x.split(" ")).map(x => (x, 1)).reduceByKey(_ + _).map(x => WordCount(x._1, x._2))
// Register a temporary view so the result can be queried from a %sql paragraph
result.toDF().createOrReplaceTempView("result")
Then query the registered view with the SQL interpreter:
%sql
select * from result
To use the Flink interpreter, configure the following properties:
FLINK_HOME: /usr/local/service/flink
flink.execution.mode: yarn
Then verify with a batch word count:
%flink
val data = benv.fromElements("hello world", "hello flink", "hello hadoop")
data.flatMap(line => line.split("\\s"))
.map(w => (w, 1))
.groupBy(0)
.sum(1)
.print()
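The Flink interpreter also exposes a streaming environment as senv. Below is a minimal streaming sketch of the same word count; note that in yarn mode the print() output may appear in the task logs rather than in the notebook:
%flink
val stream = senv.fromElements("hello world", "hello flink", "hello hadoop")
stream.flatMap(_.split("\\s"))
.map((_, 1))
.keyBy(0)
.sum(1)
.print()
// Unlike the batch example, a streaming job must be started explicitly
senv.execute("streaming word count")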
To use the HBase interpreter, configure the following properties:
hbase.home: /usr/local/service/hbase
hbase.ruby.sources: lib/ruby
zeppelin.hbase.test.mode: false
Note: The JAR packages this interpreter depends on are already integrated into the /usr/local/service/zeppelin/local-repo path of the cluster, so you do not need to configure dependencies. Configure dependencies only if you want to use your own JAR packages.
%hbase
help 'get'
%hbase
list
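Any regular HBase shell command can be run the same way. Below is a minimal sketch that creates a table, writes a cell, and scans it back (the table and column family names are hypothetical):
%hbase
create 'zeppelin_test', 'cf'
put 'zeppelin_test', 'row1', 'cf:col1', 'value1'
scan 'zeppelin_test'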
To use the Livy interpreter, configure the following property (replace ip with the address of your Livy server):
zeppelin.livy.url: http://ip:8998
%livy.spark
sc.version
%livy.pyspark
print "1"
%livy.sparkr
hello <- function(name) {
  sprintf("Hello, %s", name)
}
hello("livy")
To use the Kylin interpreter, configure the following properties (replace ip with the address of your Kylin server):
kylin.api.url: http://ip:16500/kylin/api/query
kylin.api.user: ADMIN
kylin.api.password: KYLIN
kylin.query.project: default
%kylin(default)
select count(*) from table1
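Any query answerable by a cube in the project works here; table1 and column1 below are placeholders for a table and a dimension column that exist in your Kylin project:
%kylin(default)
select column1, count(*) from table1 group by column1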
To use the MySQL interpreter, configure the following JDBC properties (replace ip and the credentials with your own):
default.url: jdbc:mysql://ip:3306
default.user: xxx
default.password: xxx
default.driver: com.mysql.jdbc.Driver
Note: The JAR packages this interpreter depends on are already integrated into the /usr/local/service/zeppelin/local-repo path of the cluster, so you do not need to configure dependencies. Configure dependencies only if you want to use your own JAR packages.
%mysql
show databases
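Any SQL the database accepts can be run; for example, listing tables through information_schema:
%mysql
select table_schema, table_name from information_schema.tables limit 10;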
To use the Hive interpreter, configure the following JDBC properties:
default.url: jdbc:hive2://ip:7001
default.user: hadoop
default.password:
default.driver: org.apache.hive.jdbc.HiveDriver
Note: The JAR packages this interpreter depends on are already integrated into the /usr/local/service/zeppelin/local-repo path of the cluster, so you do not need to configure dependencies. Configure dependencies only if you want to use your own JAR packages.
%hive
show databases
%hive
use default;
show tables;
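To also verify writes, you can create and query a table; the table name below is hypothetical:
%hive
create table if not exists zeppelin_test (id int, name string);
select * from zeppelin_test;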
To use the Presto interpreter, configure the following JDBC properties:
default.url: jdbc:presto://ip:9000?user=hadoop
default.user: hadoop
default.password:
default.driver: io.prestosql.jdbc.PrestoDriver
Note: The JAR packages this interpreter depends on are already integrated into the /usr/local/service/zeppelin/local-repo path of the cluster, so you do not need to configure dependencies. Configure dependencies only if you want to use your own JAR packages.
%presto
show catalogs;
%presto
show schemas from hive;
%presto
show tables from hive.default;
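Once the catalogs and schemas are visible, tables can be queried by their fully qualified names; test_table below is hypothetical:
%presto
select * from hive.default.test_table limit 10;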
For more versions and interpreter configurations, see the Zeppelin documentation.