Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4131 | 08-20-2018 08:26 PM |
| | 2004 | 08-15-2018 01:59 PM |
| | 2431 | 08-13-2018 02:20 PM |
| | 4232 | 07-23-2018 04:37 PM |
| | 5115 | 07-19-2018 12:52 PM |
01-17-2017
10:05 AM
Hi @Sunile Manjee, Cloudbreak doesn't support HDF yet. You can create a cluster with Cloudbreak, then manually install the HDF stack and create the NiFi service.
05-23-2017
08:31 AM
Please first validate your JSON using JSON Formatter and JSON Validator.
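If you'd rather check from the command line instead of a web tool, Python's built-in json.tool module does the same job (payload.json is a placeholder filename):

```
# Parses the file and pretty-prints it, or reports the position of the syntax error
python -m json.tool payload.json
```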
12-27-2016
04:08 AM
@PJ Hadoop relies heavily on being able to perform forward and reverse lookups of the hostname. For node-to-node communication it uses TCP/IP; more here: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#The+Communication+Protocols Therefore, passwordless SSH is not required between nodes.
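To verify both lookup directions on a node, something like this works (a sketch; 10.0.0.1 is a placeholder for your node's actual IP):

```
# Forward lookup: resolve the node's fully-qualified hostname to an IP
nslookup $(hostname -f)
# Reverse lookup: resolve the IP back to the hostname (substitute your node's IP)
nslookup 10.0.0.1
```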
12-22-2016
10:16 PM
Without the full exception stack trace it's difficult to know what happened. If you are instantiating Hive, you may need to add hive-site.xml and the DataNucleus jars to the job, e.g.: --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar --files /usr/hdp/current/spark-client/conf/hive-site.xml
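Put together, a full invocation might look like this (a sketch: the application class and jar name are placeholders, and the DataNucleus jar versions must match your HDP install):

```
spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar \
  --files /usr/hdp/current/spark-client/conf/hive-site.xml \
  my-app.jar
```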
12-20-2016
03:29 AM
1 Kudo
HWX support found my issue. Here is what I was doing wrong: I forgot the double quotes around the JDBC URL; the working command is shown below. Now it works. @milind pandit, thank you for your help. Port checking should certainly be on the list of things to check, but the issue here was the missing double quotes.
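For reference, the corrected beeline invocation with the quotes in place (hostnames exactly as given in the post):

```
beeline -u "jdbc:hive2://example.com:2181,example.com:2181,example:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```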
11-13-2017
10:18 PM
@Hoang Le, did you ever get this resolved? I am interested in knowing how to fix this issue too.
12-19-2016
03:35 PM
@Artem Ervits @Timothy Spann I have confirmed this is a known bug; it will be fixed in the next patch.
12-20-2016
02:43 AM
Do you have Ranger audit enabled? If so, please share what the log shows when NiFi tries to access /tmp.
12-21-2016
07:00 PM
@Sunile Manjee Also keep in mind that NiFi content archiving is enabled by default, with a retention period of 12 hours or 50% disk utilization before the archived content is removed. Manually purging FlowFiles within your dataflow will not trigger the deletion of the archived copies; the relevant settings are shown below.
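The properties in nifi.properties that control this behavior (a sketch assuming a stock install; the values shown are the defaults the post describes):

```
# Content repository archiving: archived content is purged once either
# the retention period or the disk usage cap is reached
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
```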
06-08-2018
09:58 AM
@Qi Wang Could you please mention the version of Atlas in which the fix has been provided? I am using Atlas 0.8.0 with HDP 2.6.4 and am facing the same issue. Could you please help?