Member since: 10-08-2015
Posts: 108
Kudos Received: 62
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4205 | 06-03-2017 12:11 AM
| 5477 | 01-24-2017 01:02 PM
| 6000 | 12-27-2016 11:38 AM
| 3254 | 12-20-2016 09:52 AM
| 2359 | 12-07-2016 02:15 AM
06-03-2021
02:20 AM
Hi @Lleal, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
06-06-2017
08:55 AM
I installed the python interpreter and now everything works fine. Thank You.
08-21-2018
12:22 PM
According to this Spark JIRA, this is only available (or planned) in Spark 2.4. @jzhang, could you confirm?
05-24-2017
09:29 PM
The sandbox file name is HDP_2.6_virtualbox_05_05_2017_14_46_00_hdp.ova for me
02-19-2018
03:14 PM
Hi, I have the exact same problem. I am using the Spark thrift server with the following configuration in hive-site.xml: <configuration>
<!--
<property>
<name>hive.server2.transport.mode</name>
<value>http</value>
</property>
-->
<property>
<name>hive.server2.authentication</name>
<value>KERBEROS</value>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>thrift/iman@EXAMPLE.COM</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>thrift/iman@EXAMPLE.COM</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.keytab</name>
<value>/opt/nginx/iman.keytab</value>
<description>Keytab file for Spark Thrift server principal</description>
</property>
</configuration>
When I start the thrift server by running start-thriftserver.sh, the following error occurs: 18/02/19 18:16:57 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
javax.security.auth.login.LoginException: Kerberos principal should have 3 parts: spark
at org.apache.hive.service.auth.HiveAuthFactory.getAuthTransFactory(HiveAuthFactory.java:148)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:58)
at java.lang.Thread.run(Thread.java:748)
18/02/19 18:16:57 INFO HiveServer2: Shutting down HiveServer2
It seems like Thrift is mistakenly taking the current user name (spark) as the principal name. However, if I omit hive.server2.authentication.kerberos.principal from the config file, I get a "no principal specified" error instead, so the configuration entry is not missing. I've had a frustrating time with Kerberos and Apache Thrift. Can anyone please help? Thanks in advance.
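For anyone hitting the same error: the check that fails can be sketched roughly as below. This is an illustrative Python sketch of the three-part rule (primary/instance@REALM), not the actual Hive code; the principal strings come from the config above. A bare user name such as "spark" has only one part, which is exactly what the stack trace complains about.

```python
import re

def split_principal(principal):
    """Split a Kerberos service principal of the form primary/instance@REALM.

    Illustrative sketch of the three-part check behind the
    "Kerberos principal should have 3 parts" error; a bare user
    name such as "spark" fails it.
    """
    m = re.fullmatch(r"([^/@]+)/([^/@]+)@([^/@]+)", principal)
    if m is None:
        raise ValueError(
            "Kerberos principal should have 3 parts: %s" % principal)
    return m.groups()
```

So the fix is making sure the Thrift server actually picks up a full three-part principal rather than falling back to the login user name.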
02-13-2017
01:38 AM
livy-env.sh is shared by all sessions, which means one Livy instance can only run one version of Python. I would recommend using the Spark configurations spark.pyspark.driver.python and spark.pyspark.python in Spark 2 (HDP 2.6) so that each session can set its own Python version. https://issues.apache.org/jira/browse/SPARK-13081
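To make the per-session idea concrete, here is a sketch of how those two Spark settings could be passed when creating a session through Livy's REST API. The snippet only builds the JSON body for POST /sessions (it does not call a server), and the interpreter paths used in the test are hypothetical placeholders; point them at the Python installations on your cluster.

```python
import json

def livy_session_payload(python_path, driver_python_path=None):
    """Build the JSON body for Livy's POST /sessions, pinning the
    Python interpreter per session via Spark conf.

    The paths are placeholders; use the interpreters installed on
    your cluster nodes.
    """
    conf = {"spark.pyspark.python": python_path}
    if driver_python_path is not None:
        conf["spark.pyspark.driver.python"] = driver_python_path
    # "kind": "pyspark" asks Livy for a PySpark session
    return json.dumps({"kind": "pyspark", "conf": conf})
```

Each session created with a different payload then gets its own Python version, without touching the shared livy-env.sh.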
01-23-2017
09:34 AM
4 Kudos
Introduction

Apache Zeppelin is a web-based notebook that enables interactive data analytics, while Apache Pig is a platform for analyzing large data sets that provides a high-level language for expressing data analysis programs. Pig Latin is a very powerful language for data flow processing. One drawback the Pig community complains about is that Pig Latin is not a standard language like SQL, so very few BI tools integrate with it, and it is therefore hard to visualize results from Pig. The good news is that Pig is integrated into Zeppelin 0.7, where you can write Pig Latin and visualize the result.

Use the Pig interpreter

The Pig interpreter is supported starting with Zeppelin 0.7.0, so first you need to install Zeppelin; you can refer to this link for how to install and start it. Zeppelin supports two kinds of Pig interpreters for now:

%pig (default interpreter)
%pig.query

%pig is like the Pig grunt shell: anything you can run in the grunt shell can be run in the %pig interpreter. It is used for running Pig scripts where you don't need to visualize the data, and is suitable for data munging. %pig.query is a little different: it is used for exploratory data analysis in Pig Latin, where you can leverage Zeppelin's visualization ability. There are two minor differences in the last statement between %pig and %pig.query:

No Pig alias in the last statement in %pig.query (see the examples below).
The last statement must be on a single line in %pig.query.

Here I will give four simple examples to illustrate how to use these two interpreters. These four examples are another implementation of the Zeppelin tutorial where Spark is used; we just do the same thing using Pig instead.

This script does the data preprocessing:

%pig
bankText = load 'bank.csv' using PigStorage(';');
bank = foreach bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as balance;
bank = filter bank by age != '"age"';
bank = foreach bank generate (int)age, REPLACE(job,'"','') as job, REPLACE(marital, '"', '') as marital, (int)(REPLACE(balance, '"', '')) as balance;
store bank into 'clean_bank.csv' using PigStorage(';'); -- this statement is optional; it just shows that most of the time %pig is used for data munging before querying the data.

Get the number of each age where age is less than 30:

%pig.query
bank_data = filter bank by age < 30;
b = group bank_data by age;
foreach b generate group, COUNT($1);

The same as above, but using a dynamic text form so that the user can specify the variable maxAge in a textbox (see screenshot below). Dynamic forms are a very cool feature of Zeppelin; you can refer to this link for details.

%pig.query
bank_data = filter bank by age < ${maxAge=40};
b = group bank_data by age;
foreach b generate group, COUNT($1);

Get the number of each age for a specific marital type, again using a dynamic form. The user can choose the marital type in the dropdown list (see screenshot below).

%pig.query
bank_data = filter bank by marital=='${marital=single,single|divorced|married}';
b = group bank_data by age;
foreach b generate group, COUNT($1);

The following is a screenshot of these 4 examples. You can also check the pig tutorial note, which contains all the code from this blog in Zeppelin.

Configuration

The Pig interpreter in Zeppelin supports all the execution engines that Pig supports:

Local Mode
Nothing needs to be done for local mode.

MapReduce Mode
HADOOP_CONF_DIR needs to be specified in ZEPPELIN_CONF_DIR/zeppelin-env.sh.

Tez Local Mode
Nothing needs to be done for tez local mode.

Tez Mode
HADOOP_CONF_DIR and TEZ_CONF_DIR need to be specified in ZEPPELIN_CONF_DIR/zeppelin-env.sh.

The default mode is mapreduce, but you can change that in the interpreter setting. You can also set any Pig configuration on the interpreter setting page. Here's one screenshot of that.

Future work

This is the first phase of the work to integrate Pig into Zeppelin. There's lots of work to do in the future. Here's my current to-do list:

Integrate the Spark engine so that we can use Spark SQL together with Pig Latin
Integrate Spark MLlib so that we can use Pig Latin to do machine learning
Add a new interpreter %pig.udf to allow users to write Java UDFs in Zeppelin
Integrate more closely with DataFu

If you have any other new ideas, please contact me at jzhang@hortonworks.com, or you can file a ticket in the Apache Zeppelin JIRA: https://issues.apache.org/jira/browse/ZEPPELIN
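For the mapreduce and tez modes above, the zeppelin-env.sh entries could look like the sketch below. The paths are assumptions; adjust them to wherever your cluster keeps its Hadoop and Tez client configs.

```shell
# ZEPPELIN_CONF_DIR/zeppelin-env.sh (example paths; adjust to your cluster)
export HADOOP_CONF_DIR=/etc/hadoop/conf   # required for mapreduce and tez modes
export TEZ_CONF_DIR=/etc/tez/conf         # additionally required for tez mode
```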
12-20-2016
05:34 PM
@jzhang good call, I changed to yarn-cluster mode for the Livy interpreter and was not able to reproduce the error in HDP 2.5.
04-17-2018
01:33 PM
Sorry, the mongodb interpreter is not one of Zeppelin's built-in interpreters, so I don't know its mechanism.