Member since: 03-16-2016
Posts: 707
Kudos Received: 1753
Solutions: 203
07-25-2016
11:03 PM
6 Kudos
@Devvrata Priyadarshi Avoid copying and pasting from a rich-text editor. Try typing the code manually and see if you still get the issue. You can also paste into a plain-text editor such as vi, Notepad, or TextWrangler first and then copy from there; this is most likely a malformed string. Additionally, check the top of the tutorial notebook: there should be an interpreter binding section. Click the Save button there and hopefully everything starts working normally again. If not, please post the exception dump. If this response helped, please vote and accept it as the best answer.
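If you want to confirm that the paste is the culprit, a quick check along these lines (the file name is just a placeholder; assumes a shell with GNU grep) will reveal smart quotes or other non-ASCII characters that rich-text editors typically introduce:

grep --color -nP '[^\x00-\x7F]' pasted_snippet.txt   # flag any non-ASCII characters with line numbers
cat -A pasted_snippet.txt                            # alternatively, make hidden control characters visible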
07-25-2016
10:25 PM
@Prakash Punj Could you post your hue.ini files as text? The screenshots provided are a bit difficult to read and I would like to see the actual content. Side question: which OS and HDP versions are you using? Hue on CentOS 7 with HDP 2.4 is not supported.
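To keep the post readable, you could dump the file with comments and blank lines stripped; the path below is an assumption and may differ on your installation:

grep -vE '^[[:space:]]*(#|$)' /etc/hue/conf/hue.ini   # print only the active settings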
07-23-2016
01:58 AM
4 Kudos
@Aman Poonia Yes, you can. It is called NameNode Federation: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/Federation.html. However, Ambari does not support it: https://issues.apache.org/jira/browse/AMBARI-10982. If this response is helpful, please vote/accept.
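For reference, a federated deployment defines multiple nameservices in hdfs-site.xml (dfs.nameservices plus a dfs.namenode.rpc-address.<nsid> per NameNode). A quick, rough way to see what a client currently resolves:

hdfs getconf -confKey dfs.nameservices   # lists the configured nameservice IDs
hdfs getconf -namenodes                  # lists the NameNode hosts the client knows about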
07-23-2016
01:51 AM
2 Kudos
@Liam MacInnes The normal process requires an administrator to invite a user, and an email address is required. You are asking for a hack 🙂, but the email address is a mandatory field because notifications are sent out on various actions. I'm sorry, but that's how Cloudbreak currently works. If this answer is reasonable, please vote/accept it.
07-23-2016
01:33 AM
1 Kudo
@Mehul Ramani Since you want to make the call from the GetHTTP NiFi processor, you need to set up an SSL Context Service to be able to make an HTTPS call. In short, configure the truststore properties of your SSL Context Service instance with the path to the default cacerts truststore that comes bundled with your Java installation, located at JAVA_HOME/jre/lib/security/cacerts. The truststore type is JKS, and the default truststore password is "changeit". If $JAVA_HOME is set on your system, it should point you in the right direction. If not, the location of cacerts varies by environment, but is approximately the following for each OS:

OS X: /Library/Java/JavaVirtualMachines/jdk<version>.jdk/Contents/Home/jre/lib/security/cacerts (you can also use $(/usr/libexec/java_home) to find the Java home path).
Windows: C:\Program Files\Java\jdk<version>\jre\lib\security\cacerts
Linux: /usr/lib/jvm/java-<version>/jre/lib/security/cacerts (you can also use $(readlink -f $(which java))).

If this response was helpful, vote/accept it as the best answer.
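As a sanity check before wiring up the SSL Context Service, you can list the bundled truststore with keytool (assumes $JAVA_HOME is set; "changeit" is the JDK default password):

keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit | head   # show the first few trusted CA entries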
07-23-2016
01:14 AM
@Prakash Punj You say you "modified all the parameters in hue.ini". Does that mean you made those changes manually? If so, have you restarted Hue so the new changes take effect? Also check the ownership of the file: it may no longer be owned by the Hue user if you made the changes under a different account. If you still have the issue after a restart, are you pointing at the server that runs HiveServer2? If helpful, please vote/accept this response.
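A quick way to check both points; the hue.ini path and service name are assumptions that may differ on your installation:

ls -l /etc/hue/conf/hue.ini    # confirm the file is still owned by the hue user after your edits
service hue restart            # restart Hue so the new settings are picked up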
07-22-2016
11:05 PM
3 Kudos
@Mukesh Kumar It would have been easier to respond if you had been more specific about the infrastructure you have available (your laptop/desktop, a group of servers, or an AWS/Azure account), whether a single node is enough (you could use the HDP Sandbox: http://hortonworks.com/products/sandbox/#tutorial_gallery?utm_source=google&utm_medium=cpc&utm_campaign=Hortonworks_Sandbox_Search) or you want a cluster, and which version of Hadoop you want to evaluate; OS images have slightly different configurations. Anyhow, if it is only for play, use the HDP Sandbox, or if you want to use your laptop/desktop to simulate the cluster experience, consider using Vagrant. Check my article: https://community.hortonworks.com/articles/39156/setup-hortonworks-data-platform-using-vagrant-virt.html#comment-43820 It is a good starting point for testing the components. As you can see from the steps in the demo (read the VagrantFile), the CentOS 6.7 image is pulled from a public repository; other versions are available. If this response led you in the right direction, please vote and accept it.
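If you just want to kick the tires with Vagrant before following the article, something along these lines works (the centos/6 box name is a placeholder; the article ships its own VagrantFile):

vagrant init centos/6    # generate a Vagrantfile for a CentOS 6 base box
vagrant up               # download the box and boot the VM
vagrant ssh              # log into the guest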
07-21-2016
08:30 PM
4 Kudos
@Kumar Veerappan Your user needs an HDFS home directory. As the hdfs user, create a directory for your OS user under /user and grant your user ownership of that /user/youruser folder. As root, run:

sudo su hdfs
hadoop fs -mkdir /user/youruser
hadoop fs -chown youruser /user/youruser

Try again with your OS user, and if this response addressed your problem, please vote and accept it as the best response.
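You can then verify the directory and its ownership (youruser is a placeholder for your actual OS user):

hadoop fs -ls /user    # the new /user/youruser entry should show youruser as the owner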
07-21-2016
08:00 PM
@Sunile Manjee Yes. You have already applied a map-side join, which does pretty much the same thing, and you seem to have the small table last in the sequence; I assume your small table fits in memory. My take is that when you use so many columns (almost all of them) in the join, you practically negate the benefit of the columnar format. Have you gathered statistics? Without statistics the CBO will not propose the best plan. What does the EXPLAIN plan say?
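If statistics have not been gathered yet, something along these lines would do it; the table and column names below are placeholders, not your actual schema:

hive -e "ANALYZE TABLE big_table COMPUTE STATISTICS;"               # table-level stats
hive -e "ANALYZE TABLE big_table COMPUTE STATISTICS FOR COLUMNS;"   # column-level stats for the CBO
hive -e "EXPLAIN SELECT b.* FROM big_table b JOIN small_table s ON b.id = s.id;"   # inspect the plan Hive actually chooses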
07-21-2016
07:01 PM
@Sunile Manjee As I understand it, you have a small table and a big table. It is good practice, with or without ORC, to provide a hint that streams a table in this kind of join. Have you tried that? Check this: https://www.linkedin.com/pulse/20141002060036-90038370-hive-join-optimization-stream-table-in-joins
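A rough sketch of the hint (table and column names are placeholders): the STREAMTABLE hint tells Hive which side of the join to stream instead of buffering it in memory:

hive -e "SELECT /*+ STREAMTABLE(b) */ b.*, s.val
         FROM big_table b JOIN small_table s ON (b.id = s.id);"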