Member since: 01-26-2021
Posts: 11
Kudos Received: 1
Solutions: 0
11-23-2021
02:43 AM
Hi @lwmonster @Shrilesh, please add the Linux paths of these three jars to the dependencies section of the hbase interpreter: hbase-client.jar, hbase-protocol.jar, hbase-common.jar.
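On an HDP node these jars typically sit under the HBase client lib directory, for example (illustrative paths only; the actual file names usually carry version suffixes, so verify them on your node):

/usr/hdp/current/hbase-client/lib/hbase-client.jar
/usr/hdp/current/hbase-client/lib/hbase-common.jar
/usr/hdp/current/hbase-client/lib/hbase-protocol.jar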
11-21-2021
07:42 PM
Hi guys, I am currently working with Zeppelin running on an HDP cluster; the Zeppelin version is 0.7.3. I configured the Pig interpreter by manually importing the jars from the Maven repository. When I run a simple script that loads a file and dumps the output to the console, it works properly in local mode with the file in the local (Linux) file system. However, when I run it in Tez mode the script fails with the exception below. The same script works when I run Pig in Tez mode from the CLI. I have set HADOOP_CONF_DIR and TEZ_CONF_DIR in zeppelin-env.sh.

The script is:

data = LOAD '/path/sample.txt' as (C1:chararray,C2:chararray,C3:chararray);
b = FILTER data by C1 is not null;
DUMP b;

The exception is:

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias b
    at org.apache.pig.PigServer.openIterator(PigServer.java:1020)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:782)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:383)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
    at org.apache.pig.PigServer.registerScript(PigServer.java:781)
    at org.apache.pig.PigServer.registerScript(PigServer.java:858)
    at org.apache.pig.PigServer.registerScript(PigServer.java:821)
    at org.apache.zeppelin.pig.PigInterpreter.interpret(PigInterpreter.java:100)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:97)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:498)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Couldn't retrieve job.
    at org.apache.pig.PigServer.store(PigServer.java:1084)
    at org.apache.pig.PigServer.openIterator(PigServer.java:995)
    ... 18 more

Please help me with your inputs.
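For reference, the exports in zeppelin-env.sh are along these lines (the config directories shown are the usual HDP locations and are assumptions; the exact paths may differ on your cluster):

# zeppelin-env.sh - point Zeppelin's Pig interpreter at the cluster configs
export HADOOP_CONF_DIR=/etc/hadoop/conf
export TEZ_CONF_DIR=/etc/tez/conf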
Labels:
- Apache Pig
- Apache Zeppelin
05-06-2021
01:18 AM
1 Kudo
Hi @Magudeswaran, refer to this KB article. Directly exporting and importing transactional tables is not supported; you need to follow the workaround described there. Thanks, Megh
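For illustration, the workaround generally amounts to staging the data in a non-transactional copy of the table and exporting that instead. The table names and export path below are placeholders, and the KB article has the authoritative steps:

-- stage the ACID table's rows in a plain (non-transactional) table
CREATE TABLE sales_export STORED AS ORC TBLPROPERTIES ('transactional'='false')
AS SELECT * FROM sales_acid;

-- export the staging table; load it on the target cluster with IMPORT TABLE
EXPORT TABLE sales_export TO '/tmp/sales_export';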
04-13-2021
12:47 AM
Thanks. We have decided to introduce a Priority attribute and use the corresponding prioritizer for the connection.
04-07-2021
12:31 AM
Hi @Magudeswaran, I can rewrite the query using sub-queries, but I see the same problem:

select count(*) as num_all_changes
from (
    select s.cod_pers, count(*) as num_changes
    from (
        select t.cod_pers, t.cod_address, count(*) as num_address
        from be_prd_prt.test_case as t
        group by t.cod_pers, t.cod_address
    ) as s
    group by s.cod_pers
    having count(*) > 1
) as gg;

+------------------+
| num_all_changes  |
+------------------+
| 63               |
| 58               |
| 64               |
| 59               |
+------------------+
4 rows selected (18.077 seconds)

As you can see, always 4 rows...
02-11-2021
04:50 AM
Hi @Magudeswaran, the Ambari server uses the appropriate database connector jar, which is located under /usr/share/java/ (the jar name includes the database name, e.g. *<databasename>*.jar). The same connector jar is used to connect to any external or internal database, and it needs to be registered with the ambari-server setup --jdbc-db / --jdbc-driver command; refer to the documentation for the exact syntax. The issue could be caused by the connector jar that Ambari is configured to use. For example, if Ambari is configured for Postgres but you are trying to connect to MariaDB and the proper jar is not configured, it won't connect and you will see this issue. Let me know if this resolves your issue; otherwise, please share a screenshot and the error logs so we can check further.
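For example, for a MySQL/MariaDB database the registration typically looks like this (the driver path is an assumption; point it at the connector jar actually present on your Ambari host):

# register the MySQL/MariaDB JDBC driver with Ambari
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar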