Member since: 09-25-2015
Posts: 109
Kudos Received: 36
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2842 | 04-03-2018 09:08 PM
 | 3975 | 03-14-2018 04:01 PM
 | 11109 | 03-14-2018 03:22 PM
 | 3169 | 10-30-2017 04:29 PM
 | 1595 | 10-17-2017 04:49 PM
10-03-2016
05:14 PM
@Matt Burgess Here is the new error:
Caused by: java.net.URISyntaxException: Illegal character in path at index 73: hive2://dummyhost:00000/;principal=hive/_HOST@EXAMPLE.COM
at java.net.URI$Parser.fail(URI.java:2848) ~[na:1.8.0]
at java.net.URI$Parser.checkChars(URI.java:3021) ~[na:1.8.0]
at java.net.URI$Parser.parseHierarchical(URI.java:3105) ~[na:1.8.0]
at java.net.URI$Parser.parse(URI.java:3053) ~[na:1.8.0]
at java.net.URI.<init>(URI.java:588) ~[na:1.8.0]
at java.net.URI.create(URI.java:850) ~[na:1.8.0]
... 33 common frames omitted
Here is the Database Connection URL: jdbc:hive2://host.name.net:10000/;principal=hive/_HOST@EXAMPLE.COM
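For context, a minimal sketch of how java.net.URI produces this kind of failure: characters such as a stray space are illegal in the path, so one leftover character in the connection string fails at parse time. The host, port, and the trailing-space cause below are assumptions for illustration:

```java
import java.net.URI;

public class UriCheck {
    public static void main(String[] args) {
        // Placeholder URL with an assumed trailing space, one character java.net.URI rejects.
        String url = "hive2://dummyhost:10000/;principal=hive/_HOST@EXAMPLE.COM ";
        try {
            URI.create(url); // throws IllegalArgumentException wrapping URISyntaxException
        } catch (IllegalArgumentException e) {
            System.err.println(e.getCause()); // java.net.URISyntaxException: Illegal character in path ...
        }
        System.out.println(URI.create(url.trim())); // parses cleanly once the stray character is removed
    }
}
```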
09-22-2016
08:27 PM
1 Kudo
The PutHiveQL and SelectHiveQL processors are looking for the class 'org.apache.hive.jdbc.HiveDriver', which I guessed would already be bundled. Is there any option to specify it in the HiveConnectionPool? The error is as follows:
2016-09-19 17:55:30,084 ERROR [Timer-Driven Process Thread-4] o.a.nifi.processors.hive.SelectHiveQL SelectHiveQL[id=4467ed6d-0157-1000-5114-2998b4c5f3aa] Unable to execute HiveQL select query select * from nifitest; due to org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot create JDBC driver of class 'org.apache.hive.jdbc.HiveDriver' for connect URL '!connect jdbc:hive2://host.name.net:10000/;principal=hive/_HOST@EXAMPLE.COM '.
No FlowFile to route to failure: org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot create JDBC driver of class 'org.apache.hive.jdbc.HiveDriver' for connect URL '!connect jdbc:hive2://host.name.net:10000/;principal=hive/_HOST@EXAMPLE.COM '
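Worth noting: the connect URL in the log begins with "!connect", which is a Beeline shell command rather than part of a JDBC URL, and it ends with a trailing space. A minimal sketch of loading the driver with the bare URL instead, assuming the hive-jdbc jar is on the classpath and a Kerberos ticket is already in place:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class HiveJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Register the Hive JDBC driver class the processors are looking for.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Bare JDBC URL: no "!connect" prefix and no trailing space.
        String url = "jdbc:hive2://host.name.net:10000/;principal=hive/_HOST@EXAMPLE.COM";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + (!conn.isClosed()));
        }
    }
}
```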
Labels:
- Apache Hive
- Apache NiFi
05-24-2016
08:51 PM
The heap size was 1 GB; it looks like that is the default. Is there a class to read the "yarn-nm-recovery" state store to check how many jobs were running and had to be recovered?
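For what it's worth, the NodeManager recovery state under yarn.nodemanager.recovery.dir is a LevelDB store, so a rough sketch like the one below can dump it offline. Everything here is an assumption for illustration: the store path, the "ContainerManager/containers/" key prefix, and the use of the leveldbjni library; the NodeManager should be stopped first, since LevelDB holds an exclusive lock.

```java
import java.io.File;
import java.nio.charset.StandardCharsets;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBIterator;
import org.iq80.leveldb.Options;

public class NmRecoveryDump {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; point at the LevelDB directory under yarn.nodemanager.recovery.dir.
        File dir = new File("/hadoop/yarn/yarn-nm-recovery/yarn-nm-state");
        Options options = new Options();
        options.createIfMissing(false);
        long containerKeys = 0;
        try (DB db = JniDBFactory.factory.open(dir, options);
             DBIterator it = db.iterator()) {
            for (it.seekToFirst(); it.hasNext(); it.next()) {
                String key = new String(it.peekNext().getKey(), StandardCharsets.UTF_8);
                // Assumed prefix used for per-container recovery records.
                if (key.startsWith("ContainerManager/containers/")) {
                    containerKeys++;
                }
            }
        }
        System.out.println("Container recovery keys: " + containerKeys);
    }
}
```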
05-24-2016
08:18 PM
@Geoffrey Shelton Okot I am specifically looking for the heap sizes of the YARN ResourceManager and NodeManager daemons, not for container memory allocation sizing.
05-24-2016
04:47 PM
How do you determine and size the NodeManager heap to avoid "GC overhead limit exceeded" errors on NodeManager startup when yarn.nodemanager.recovery.enabled = true?
Labels:
- Apache YARN
05-17-2016
07:18 PM
My application needs commons-codec 1.10, but the cluster has 1.4. When I launch the YARN application using HADOOP_CLASSPATH=/home/*/common-codec-1.10.jar and yarn jar <*.jar>, YARN picks up 1.4 first and only then 1.10, so I get a method-not-found exception.
How can I tell the YARN application to read through the full classpath instead of stopping at 1.4? A sketch of one option follows.
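The usual knob for this is to put user jars ahead of the cluster's on the task classpath. A hedged sketch for a MapReduce-based job using the stock mapreduce.job.user.classpath.first property; the job name is made up, and for plain yarn jar launches, exporting HADOOP_USER_CLASSPATH_FIRST=true is the analogous switch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class UserClasspathFirstJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Place user-supplied jars before the cluster's on the task classpath.
        conf.setBoolean("mapreduce.job.user.classpath.first", true);
        Job job = Job.getInstance(conf, "codec-1.10-job"); // hypothetical job name
        // ... set jar, mapper, reducer, and paths as usual, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```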
Labels:
- Apache YARN
04-21-2016
02:27 PM
2 Kudos
The defaults fs.trash.interval=0 and fs.trash.checkpoint.interval=0 indicate that the trash feature is disabled. What is the recommended value for production-like clusters? And if these values are 0, what is the command to empty all HDFS trash directories on a periodic basis?
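For reference, expired trash checkpoints can be removed with hdfs dfs -expunge, and the same call is available programmatically. A minimal sketch via the org.apache.hadoop.fs.Trash API, where the 1440-minute (1-day) interval is purely an illustrative value, not a stated recommendation:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Trash;

public class ExpungeTrash {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative retention: 1440 minutes (1 day); 0 disables trash entirely.
        conf.setLong("fs.trash.interval", 1440);
        FileSystem fs = FileSystem.get(conf);
        // Deletes trash checkpoints older than the interval, like "hdfs dfs -expunge".
        new Trash(fs, conf).expunge();
    }
}
```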
Labels:
- Apache Hadoop
04-05-2016
02:01 AM
This can be solved by prefixing the properties with "oozie.launcher.":
<configuration>
<property>
<name>oozie.launcher.dfs.nameservices</name>
<value>${nameService1},${nameService2}</value>
</property>
<property>
<name>oozie.launcher.dfs.ha.namenodes.${nameService2}</name>
<value>${nn21},${nn22}</value>
</property>
<property>
<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn21}</name>
<value>${nn21_fqdn}:8020</value>
</property>
<property>
<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn22}</name>
<value>${nn22_fqdn}:8020</value>
</property>
<property>
<name>oozie.launcher.dfs.client.failover.proxy.provider.${nameService2}</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${defaultFS1},${defaultFS2}</value>
</property>
</configuration>
03-03-2016
11:03 PM
3 Kudos
How would we go about configuring the number of HBase snapshots that are retained? Is there a purge process or setting?
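If there is no built-in retention setting, one possible approach is a small cleanup job against the HBase Admin API. A hedged sketch that keeps only the N most recent snapshots; the keep-count is a made-up choice:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.SnapshotDescription;

public class SnapshotPruner {
    static final int KEEP = 5; // hypothetical retention count

    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            List<SnapshotDescription> snaps = new ArrayList<>(admin.listSnapshots());
            // Sort newest first by creation time, then delete everything past KEEP.
            snaps.sort(Comparator.comparingLong(SnapshotDescription::getCreationTime).reversed());
            for (int i = KEEP; i < snaps.size(); i++) {
                admin.deleteSnapshot(snaps.get(i).getName());
            }
        }
    }
}
```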
Labels:
- Apache HBase