Member since: 07-15-2016
Posts: 43
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4431 | 09-21-2017 05:43 PM
 | 3650 | 04-12-2017 03:30 PM
09-05-2017 04:16 PM
@Chiranjeevi Nimmala What change did you make to resolve this issue? Should I change it to verify=enable, or do I need to make some changes in the Ambari server/agent configs? Thank you.
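For context, my assumption is that the verify flag in question is the one in /etc/python/cert-verification.cfg, which controls Python's HTTPS certificate checking for the Ambari agent on CentOS/RHEL 7:

```
# Assumption: the verify setting referenced above lives in
# /etc/python/cert-verification.cfg on the agent hosts.
grep -A1 '^\[https\]' /etc/python/cert-verification.cfg
# Example output:
# [https]
# verify=disable    # 'enable' enforces certificate verification
```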
07-13-2017 06:56 PM
1 Kudo
I am running into a Java heap space issue while running a complex query against a big dataset. The same query works fine from the Hive CLI, and small queries run fine in Beeline. Is there any way I can increase the Beeline Java heap size for all users so they do not run out of memory?
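For anyone hitting the same thing, a minimal sketch of one client-side knob, assuming the heap exhaustion is in the Beeline JVM itself (the host name and heap size below are placeholders):

```
# Per session: raise the client JVM heap before launching Beeline
export HADOOP_CLIENT_OPTS="-Xmx4g"
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default"

# For all users: put the same export in hadoop-env.sh (e.g. via the
# hadoop-env template in Ambari) so every client session picks it up.
```

If a query fails only in Beeline but works in the Hive CLI, the memory pressure may instead be on the HiveServer2 side, in which case its heap would need raising rather than the client's.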
Labels:
- Apache Hive
04-12-2017 07:03 PM
@arjun more I did not notice the last line. Yes, I took that approach.
04-12-2017 03:30 PM
The above approach will not work because it requires 'hdp-select', and according to Hortonworks, HDP is not allowed on a cluster where HDF is installed (and vice versa). An adequate solution is to install Apache Hadoop (the same version as your HDP). Steps I followed (sketched in the commands below):

1. Set up Java. A running HDF cluster will already have it, but there is no harm in double-checking.
2. Download Hadoop from an Apache mirror, unpack it, and move it to the desired location.
3. Set HADOOP_HOME and HADOOP_CONF_DIR in /etc/profile. By default the configuration directory is HADOOP_HOME/etc/hadoop, but it is good to keep your configs separate; I created HADOOP_HOME/conf.
4. Important step: copy the existing HDP configs (/etc/hadoop/conf) to HADOOP_HOME/conf.
5. Do not format or start Hadoop, since we are connecting to an existing cluster.
6. Last step: add HADOOP_HOME/bin to the PATH in your user profile file (usually .bash_profile or .profile).

That's it; try the 'hadoop' or 'hdfs' command. I hope this helps somebody in the future!
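A minimal sketch of those steps, assuming Hadoop 2.7.3 (to match HDP 2.5.3) and /opt/hadoop as the install location; adjust versions and paths to your environment:

```
# Download and unpack a vanilla Apache Hadoop matching the HDP version
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -xzf hadoop-2.7.3.tar.gz
sudo mv hadoop-2.7.3 /opt/hadoop

# Keep configs separate and reuse the existing HDP client configs
sudo mkdir -p /opt/hadoop/conf
sudo cp /etc/hadoop/conf/* /opt/hadoop/conf/

# In /etc/profile (or ~/.bash_profile for a single user):
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export PATH=$PATH:$HADOOP_HOME/bin

# Do not format or start anything; just verify the client works
hdfs dfs -ls /
```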
04-12-2017 03:07 PM
Thank you @arjun more. But this is not working on an HDF cluster. I took another approach and installed the Apache client manually.
04-11-2017 02:34 PM
Hello, I was following this community post to install the Hadoop client without yum, but with the latest HDP repo (2.5.3.0) I am getting the exception below. I want to install the HDFS client on our HDF cluster so it can access the HDP cluster's HDFS. Any suggestions on approaches or how to do it? I downloaded the packages from the following URLs:

- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-hdfs-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-libhdfs-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-yarn-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-mapreduce-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hadoop/hadoop_2_5_3_0_37-client-2.7.3.2.5.3.0-37.el6.x86_64.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/zookeeper/zookeeper_2_5_3_0_37-3.4.6.2.5.3.0-37.el6.noarch.rpm
- http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/bigtop-jsvc/bigtop-jsvc-1.0.15-37.el6.x86_64.rpm

Install command:

    rpm -Uvh hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        hadoop_2_5_3_0_37-hdfs-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        hadoop_2_5_3_0_37-client-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        hadoop_2_5_3_0_37-mapreduce-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        hadoop_2_5_3_0_37-libhdfs-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        hadoop_2_5_3_0_37-yarn-2.7.3.2.5.3.0-37.el6.x86_64.rpm \
        zookeeper_2_5_3_0_37-3.4.6.2.5.3.0-37.el6.noarch.rpm \
        bigtop-jsvc-1.0.15-37.el6.x86_64.rpm

Error:

    error: Failed dependencies:
        ranger_2_5_3_0_37-hdfs-plugin is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        ranger_2_5_3_0_37-yarn-plugin is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        hdp-select >= 2.5.3.0-37 is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        spark_2_5_3_0_37-yarn-shuffle is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        spark2_2_5_3_0_37-yarn-shuffle is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        nc is needed by hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64
        hdp-select >= 2.5.3.0-37 is needed by zookeeper_2_5_3_0_37-3.4.6.2.5.3.0-37.el6.noarch

Thank you in advance!
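As an aside for anyone debugging a similar failure, each package's declared dependencies can be listed before attempting the transaction (file names are the ones from the post above):

```
# -q query, -p operate on a package file, -R list declared dependencies;
# this surfaces the hdp-select and ranger plugin requirements up front.
rpm -qpR hadoop_2_5_3_0_37-2.7.3.2.5.3.0-37.el6.x86_64.rpm
rpm -qpR zookeeper_2_5_3_0_37-3.4.6.2.5.3.0-37.el6.noarch.rpm
```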
Labels:
- Apache Hadoop
04-06-2017 06:53 PM
Thank you @Namit Maheshwari.
04-06-2017 06:53 PM
Thank you guys for the prompt response. Root cause: the Kerberos admin session had somehow expired, so no keytabs were being created or set up. Resolution: I fixed it by restarting Ambari. After that, regenerating the keytabs resolved the problem.
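For reference, a sketch of that resolution (assuming a standard Ambari setup; the keytab regeneration itself happens in the Ambari UI):

```
# Restart Ambari to re-establish the Kerberos admin session
ambari-server restart

# Then in the Ambari UI: Admin > Kerberos > Regenerate Keytabs
```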
04-06-2017 05:25 PM
I installed Ranger and Ambari Infra in my cluster via Ambari, but the install hung on the "Setup Keytab" step. The service itself installed properly, but ranger-usersync failed because service.keytab does not exist. Can anyone suggest how to regenerate the keytab for a particular service? HDP 2.5.3, Ambari 2.4.2.0.
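In case it helps, a hypothetical manual fallback for an MIT KDC; the principal, realm, and paths below are illustrative examples, not values from my cluster:

```
# Regenerate the usersync service keytab by hand with kadmin (ktadd
# creates new keys and writes them to the keytab file)
kadmin -p admin/admin -q "ktadd -k /etc/security/keytabs/rangerusersync.service.keytab rangerusersync/host01.example.com@EXAMPLE.COM"

# Lock the file down for the service user (ownership is illustrative)
chown ranger:hadoop /etc/security/keytabs/rangerusersync.service.keytab
chmod 400 /etc/security/keytabs/rangerusersync.service.keytab
```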
Labels:
- Apache Ranger
03-27-2017 08:29 PM
@vperiasamy Agreed, that's what I learned the hard way 🙂 I thought Ranger would sync groups with users as well as users without groups. So should I disable group search first and keep the user and user-group mapping (settings sketched below)? Any suggestions?
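To make that concrete, the settings I have in mind (my assumption: property names from Ranger's usersync config, ranger-ugsync-site; the values are only examples of what I'm considering):

```
# ranger-ugsync-site: disable group search, map groups from user entries
ranger.usersync.group.searchenabled=false
ranger.usersync.ldap.user.groupnameattribute=memberof,ismemberof
```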