Member since: 02-22-2017
Posts: 33
Kudos Received: 6
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| | 2837 | 10-28-2016 09:38 AM |
12-19-2017
03:34 AM
Hi, were you able to fix the issue? We have the same problem.
10-28-2016
09:38 AM
I solved the issue myself. The solution was:
1) Under the folder in which workflow.xml is located, create a lib folder and put there all the Hive jar files from the sharelib dir (/user/oozie/share/lib/lib_20160928171540)/hive.
2) Create hive-site.xml with the following contents:
<configuration>
<property>
<name>ambari.hive.db.schema.name</name>
<value>hive</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://xxxxx:9083</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>xxxx:2181,yyyyy:2181,zzzzzz:2181</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/smartdata/hive/</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://xxxxx:5432/hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
</configuration>
and put it on HDFS, for example at /tmp/hive-site.xml.
3) Add the following line to workflow.xml: <file>/tmp/hive-site.xml</file>
This solved my issue. A minimal workflow.xml sketch is below for reference.
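For reference, here is a rough workflow.xml sketch showing where the <file> element from step 3 goes inside the Sqoop action. The action names, namespace version and the import command are illustrative placeholders, not my exact workflow; the lib folder with the Hive jars from step 1 sits next to this file on HDFS:

<workflow-app name="sqoop-to-hive" xmlns="uri:oozie:workflow:0.4">
  <start to="sqoop-node"/>
  <action name="sqoop-node">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
      <job-tracker>${resourceManager}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <!-- run in the queue configured in job.properties -->
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <!-- placeholder import command; replace with the real Oracle connect string and table -->
      <command>import --connect jdbc:oracle:thin:@//dbhost:1521/ORCL --table SOME_TABLE --hive-import</command>
      <!-- ship the hive-site.xml created in step 2 with the launcher -->
      <file>/tmp/hive-site.xml</file>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Sqoop action failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="end"/>
</workflow-app>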
10-27-2016
09:25 AM
Hello, thanks for the advice, but the shared libraries are all fine:
$ oozie admin -oozie http://localhost:11000/oozie -shareliblist
[Available ShareLib]
hive
distcp
mapreduce-streaming
spark
oozie
hcatalog
hive2
sqoop
pig
spark_orig
$ oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
[ShareLib update status]
sharelibDirOld = hdfs://os-2471.homecredit.ru:8020/user/oozie/share/lib/lib_20160928171540
host = http://localhost:11000/oozie
sharelibDirNew = hdfs://os-2471.homecredit.ru:8020/user/oozie/share/lib/lib_20160928171540
status = Successful
$ oozie admin -oozie http://localhost:11000/oozie -shareliblist
[Available ShareLib]
hive
distcp
mapreduce-streaming
spark
oozie
hcatalog
hive2
sqoop
pig
spark_orig
On the Resource Manager UI everything looks fine as well; see the attached logs.
10-26-2016
12:47 PM
resource-manager-ui.txt
Hello,
Our HDP version is 2.5. When we try to run a Sqoop action (to load data from Oracle to Hive) from Oozie, we get the following error in /var/log/oozie/oozie-error.log:
JOB[0000004-161024200820785-oozie-oozi-W] ACTION[0000004-161024200820785-oozie-oozi-W@sqoop] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
There is nothing more useful for diagnostics. The job.properties file is listed below:
# properties
nameNode = hdfs://xxxxx:8020
resourceManager = xxxx:8050
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path = hdfs://xxxxxx:8020/smartdata/oozie/hive_test.xml
mapreduce.framework.name = yarn
When we run this job from the command line with "sqoop ..... " as the command, everything works fine. Could someone please tell me how to solve or troubleshoot this?
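For anyone troubleshooting the same error, the usual way to see the launcher's real stderr behind exit code [1] is to pull its YARN container logs; a sketch, where the application id is a placeholder for the launcher application shown in the Resource Manager UI:

# fetch stdout/stderr of the launcher container that ran SqoopMain
yarn logs -applicationId application_1477000000000_0004 | less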
Labels:
- Apache Oozie
- Apache Sqoop
10-17-2016
08:05 AM
Is there any workaround for this, or some hotfix?
10-14-2016
06:58 PM
1 Kudo
After we enabled HDFS HA, the PutHiveStreaming processor in our NiFi stopped working and generates the following errors:
2016-10-14 21:50:53,840 WARN [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=01571000-c4de-1bfd-0f09-5c439230e84e] Processor Administratively Yielded for 1 sec due to processing failure
2016-10-14 21:50:53,840 WARN [Timer-Driven Process Thread-6] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding PutHiveStreaming[id=01571000-c4de-1bfd-0f09-5c439230e84e] due to uncaught Exception: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdpCROC
2016-10-14 21:50:53,847 WARN [Timer-Driven Process Thread-6] o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.IllegalArgumentException: java.net.UnknownHostException: hdpCROC
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411) ~[na:na]
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311) ~[na:na]
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) ~[na:na]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688) ~[na:na]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629) ~[na:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159) ~[na:na]
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761) ~[na:na]
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99) ~[na:na]
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795) ~[na:na]
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777) ~[na:na]
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386) ~[na:na]
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) ~[na:na]
at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:234) ~[na:na]
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:289) ~[na:na]
at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:253) ~[na:na]
at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdaters(AbstractRecordWriter.java:245) ~[na:na]
at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:189) ~[na:na]
at org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:41) ~[na:na]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:607) ~[na:na]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:555) ~[na:na]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:441) ~[na:na]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:421) ~[na:na]
at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$7(HiveWriter.java:250) ~[na:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
Caused by: java.net.UnknownHostException: hdpCROC
hdpCROC is our HDP cluster name and the value of the dfs.nameservices property. All the config files, such as hive-site.xml, hdfs-site.xml and core-site.xml, are up to date. What can cause this issue?
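For context, these are the HA-related client properties that the hdfs-site.xml visible to the processor needs for the hdpCROC nameservice to resolve. This is a sketch with placeholder namenode ids and hosts (nn1/nn2, nn1-host/nn2-host), not our exact config:

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hdpCROC</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hdpCROC</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hdpCROC.nn1</name>
    <value>nn1-host:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hdpCROC.nn2</name>
    <value>nn2-host:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hdpCROC</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>

If these are missing from the copies of the config files that NiFi actually reads, the HDFS client treats hdpCROC as a plain hostname, which matches the createNonHAProxy call in the stack trace.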
Labels:
- Apache Hadoop
- Apache Hive
- Apache NiFi
10-07-2016
04:11 PM
Thank you for the quick reply. Can you please tell me where I can get the ConsumeKafka_0_10 NiFi processor?
10-07-2016
02:49 PM
1 Kudo
When we try to use GetKafka, we see the following error:
2016-10-07 17:37:39,469 INFO [pool-24-thread-1-EventThread] org.I0Itec.zkclient.ZkClient zookeeper state changed (Expired)
2016-10-07 17:37:39,470 INFO [ZkClient-EventThread-465-hdp-name1.lab.croc.ru:2181] k.consumer.ZookeeperConsumerConnector [95446e62-0157-1000-7951-fd4244e9aec2_###############-1475841346967-f0d261ce], exception during rebalance
kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"jmx_port":-1,"timestamp":"1475501559373","endpoints":["PLAINTEXT://############:6667"],"host":"#############","version":3,"port":6667}
Next we see:
Caused by: kafka.common.KafkaException: Unknown version of broker registration. Only versions 1 and 2 are supported.{"jmx_port":-1,"timestamp":"1475501559373","endpoints":["PLAINTEXT://#########:6667"],"host":"##########","version":3,"port":6667}
Our HDP version is 2.5 and our HDF version is 2.0.
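For reference, the broker registration that GetKafka is failing to parse can be inspected directly in ZooKeeper; a rough sketch, assuming the default Kafka chroot and using a placeholder broker id (1001):

# read the broker registration JSON (the "version":3 field is what the old consumer rejects)
/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh hdp-name1.lab.croc.ru:2181 get /brokers/ids/1001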
Labels:
- Apache Kafka
- Apache NiFi
10-05-2016
07:05 AM
Thank you very much, the suggestion you provided solved my issue.