Apache Zeppelin is a web-based notebook that enables interactive data analysis. It allows you to create interactive collaborative documents with various prebuilt language backends (or interpreters), such as Scala (Apache Spark), Python (Apache Spark), Spark SQL, Hive, and Shell.
Note
The Flink, HBase, Kylin, Livy, and Spark interpreters are configured by default for EMR v3.3.0 or later and EMR v2.6.0 or later. To configure interpreters of other components, or for other EMR versions, see Documentation and configure them based on your Zeppelin version.
Prerequisites
You have created a cluster and selected the Zeppelin service. For more information, see Creating EMR Cluster.
In the EMR security group of the cluster, ports 22, 30001, and 18000 and the necessary private network IP ranges are enabled (ports 22 and 30001 are enabled for a new cluster by default). A new security group must be named in the format "emr-xxxxxxxx_yyyyMMdd", and the name cannot be modified manually.
Services are added as needed, such as Spark, Flink, HBase, and Kylin.
Logging In to Zeppelin
1. Create a cluster and select the Zeppelin service. For more information, see Creating EMR Cluster.
2. On the left sidebar in the EMR console, select Cluster Services.
3. Click the Zeppelin block and click WebUI address to access the WebUI.
4. For EMR v2.5.0 or earlier and EMR v3.2.1 or earlier, a default login is preset: both the username and password are admin. To change the password, modify the users and roles options in the configuration file /usr/local/service/zeppelin-0.8.2/conf/shiro.ini (a sketch of this file is shown after these steps). For more configuration instructions, see here.
5. For EMR v2.6.0 or later and EMR v3.3.0 or later, Zeppelin login is integrated with the OpenLDAP account, so you can log in only with an OpenLDAP account and password. After a cluster is created, the default OpenLDAP accounts are root and hadoop, and the default password is the cluster password. Only the root account has the Zeppelin admin permissions and thus access to the interpreter configuration page.
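For reference, the admin account on those older versions is defined in standard Apache Shiro INI syntax. The following is only a minimal sketch of what the users and roles sections of shiro.ini may look like; the password value and role mapping here are placeholders, so edit the entries you actually find in the file on your cluster:
[users]
# Assumption: "admin" stays the login name; replace "NewPassword" with the password you want to set
admin = NewPassword, admin
[roles]
admin = *
After saving the change, restart the Zeppelin service so that the new credentials take effect.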
1. Click Create new note on the left and create a notebook on the pop-up page.
2. For EMR v3.3.0 or later and EMR v2.6.0 or later, the Spark interpreter is configured to connect to the EMR cluster (Spark on YARN) by default.
If you are using EMR v3.1.0, EMR v2.5.0, or EMR v2.3.0, see Documentation to configure the Spark interpreter. If you are using EMR v3.2.1, see Documentation to configure the Spark interpreter.
3. Go to your own notebook.
4. Write a wordcount program and run the following commands:
// Read the input file from COS; replace the path with your own data file if needed
val data = sc.textFile("cosn://huanan/zeppelin-spark-randomint-test")
// Case class describing one word and its occurrence count
case class WordCount(word: String, count: Integer)
// Split each line into words, count every word, and wrap each result in a WordCount record
val result = data.flatMap(x => x.split(" ")).map(x => (x, 1)).reduceByKey(_ + _).map(x => WordCount(x._1, x._2))
// Register the result as a temporary table so that it can be queried with Spark SQL
result.toDF().registerTempTable("result")
%sql select * from result
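For comparison, the same word count can also be written with the DataFrame API instead of the RDD-plus-case-class approach above. The snippet below is only a sketch: it reuses the same input path, relies on the same implicit conversions that the toDF() call above already depends on, and names the temporary view result_df (a name chosen here for illustration). createOrReplaceTempView is the non-deprecated equivalent of registerTempTable in Spark 2.0 and later.
// Split the input into words and turn them into a single-column DataFrame
val words = sc.textFile("cosn://huanan/zeppelin-spark-randomint-test").flatMap(_.split(" ")).toDF("word")
// Count occurrences of each word with the DataFrame API
val counts = words.groupBy("word").count()
// Expose the result to the Spark SQL interpreter under a separate view name
counts.createOrReplaceTempView("result_df")
%sql select * from result_df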