Member since: 09-23-2015
Posts: 800
Kudos Received: 898
Solutions: 185
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5397 | 08-12-2016 01:02 PM |
| | 2200 | 08-08-2016 10:00 AM |
| | 2607 | 08-03-2016 04:44 PM |
| | 5496 | 08-03-2016 02:53 PM |
| | 1421 | 08-01-2016 02:38 PM |
04-17-2016
06:59 PM
@Benjamin Leonhardi - This was indeed part of the reason. Thank you very much for your help!
04-12-2016
05:28 AM
Thanks @Benjamin Leonhardi
06-05-2017
06:58 AM
Hi all, I found a great list of automated code deployment tools. Check it out: Link. Regards, DevOps Online Training
04-10-2019
05:20 PM
@Benjamin Leonhardi could you please tell me how to actually increase the hash cache? I agree 100 MB is not much. I'm running an HDP 2.6.5 cluster with HBase 1.1.2 and Phoenix enabled. I couldn't find the property phoenix.query.maxServerCacheBytes defined in the hbase-site.xml config file, so I tried adding it in the "Custom hbase-site" section in Ambari and setting it to 500 MB. Now I can see the variable defined in hbase-site.xml, but I am still getting the error "...maximum allowed size (104857600 bytes)" even though I have changed that setting to 500 MB. Any ideas or suggestions? Thanks in advance!
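For anyone hitting the same wall, this is roughly what that Custom hbase-site entry should render to in hbase-site.xml (a sketch; the property name comes from the post above, and 500 MB expressed in bytes is 524288000). One thing worth checking: phoenix.query.maxServerCacheBytes is documented as a client-side setting, so it needs to be visible to the Phoenix client issuing the query, not only to the region servers.

```xml
<!-- Custom hbase-site entry as it renders in hbase-site.xml.
     500 MB = 500 * 1024 * 1024 = 524288000 bytes. -->
<property>
  <name>phoenix.query.maxServerCacheBytes</name>
  <value>524288000</value>
</property>
```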
04-05-2016
10:18 AM
So CASCADE works because it forces the deletion of all objects belonging to that object (similar to DELETE ... CASCADE for row deletes). Now the question is why your DROP FUNCTION did not work, and I don't know; we might have to look into the logs to figure that out. I have seen flakiness with functions in Hive before on an older version, so it might just be a bug, or a restart might be required. But again, without logs it's hard to say.
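A quick sketch of the distinction, using a hypothetical database mydb and UDF my_udf, run through beeline (adjust the JDBC URL for your cluster):

```bash
# Plain DROP fails if the database still contains tables, views, or functions:
beeline -u jdbc:hive2://localhost:10000 -e "DROP DATABASE mydb;"

# CASCADE forces the drop by deleting every object in the database first:
beeline -u jdbc:hive2://localhost:10000 -e "DROP DATABASE mydb CASCADE;"

# A permanent function can also be dropped directly:
beeline -u jdbc:hive2://localhost:10000 -e "DROP FUNCTION IF EXISTS mydb.my_udf;"
```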
03-31-2017
02:35 AM
Great article!
06-01-2016
08:58 AM
Hi @Sree Venkata. To add to Neeraj's already excellent answer, and to follow up on your comment: NiFi now *does* support Kerberised clusters. There is also now an RDBMS connector, although I'd still say use Sqoop if you're transferring very large chunks of RDBMS data and you want the transfer parallelised across the whole Hadoop cluster, and use NiFi if you've got smaller chunks to transfer that can be parallelised over a smaller NiFi cluster. Hope that (in combination with Neeraj's answer) fulfills your requirements.
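To make the "big chunks via Sqoop" advice concrete, here is a hypothetical import command (the connection string, table, and paths are made up); the -m flag controls how many parallel map tasks the transfer fans out across the Hadoop cluster:

```bash
# Pull a large table with 8 parallel mappers, each importing a slice
# of the table split on the primary key.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --split-by order_id \
  --target-dir /data/raw/orders \
  -m 8
```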
03-17-2016
04:44 AM
1 Kudo
Hello Benjamin, many thanks for your explanations. I will forward them to the Hyper-V admin. 🙂 Klaus