Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2487 | 04-27-2020 03:48 AM |
|  | 4959 | 04-26-2020 06:18 PM |
|  | 4042 | 04-26-2020 06:05 PM |
|  | 3277 | 04-13-2020 08:53 PM |
|  | 4991 | 03-31-2020 02:10 AM |
08-28-2019 11:53 PM
1 Kudo
There is no out-of-the-box option to downgrade an HDP version. Also, we do not see any HDP 2.7 release, and HDP 3.x and HDP 2.6 have major differences in terms of components. Is there any specific reason you are looking for a downgrade? If this is a freshly built cluster on HDP 3.x and you want to use HDP 2.6, then it is better to freshly install HDP 2.6, which will save a lot of time and effort compared to manually downgrading and fixing all the components and configs.
08-28-2019 06:07 AM
1 Kudo
@Manoj690 Are you sure that the NameNode is running on "localhost" (the machine where you are opening the mentioned URL in the browser)?

1. Can you specify the NameNode IP address/hostname in the URL instead of "localhost"?

2. Can you also check whether the NameNode is listening on port 50070, and whether that port is open (firewall disabled) on the NameNode host?

# netstat -tnlpa | grep 50070
# service iptables stop

3. Please check whether you are able to reach the NameNode hostname and port from the machine where you are running the browser:

# telnet $NAMENODE_HOST 50070
(OR)
# nc -v $NAMENODE_HOST 50070

4. Check and share the NameNode log. Usually it can be found at "/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxxxxxxxxxxxxxxxx.log"
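If the port checks pass but the UI still does not load, it may also help to confirm the exact host:port the NameNode web UI is actually bound to. A minimal sketch, assuming an HDP 2.x-era HDFS client is configured on the NameNode host (the hostname below is a placeholder):

```bash
# Print the configured NameNode web UI address (the property behind the 50070 UI)
hdfs getconf -confKey dfs.namenode.http-address

# Probe that exact host:port from the browser machine; expect HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://namenode.example.com:50070/
```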
08-28-2019 05:51 AM
@Manoj690 Try this: first switch to the "root" user using "su -", then from the "root" account run the "su - hdfs" command.

# su -
# su - hdfs
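Alternatively, if your account has sudo rights, you can switch to the hdfs user (or run a one-off command as it) without going through root first; a minimal sketch, assuming sudo is configured for your user:

```bash
# Open a full login shell as the hdfs user
sudo -u hdfs -i

# Or run a single command as hdfs without opening a shell
sudo -u hdfs hdfs dfs -ls /
```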
08-28-2019 12:36 AM
@vinodnerella How much heap you should allocate for ZooKeeper depends on the scenario. In your case, if you keep noticing that the ZooKeeper heap is reaching its 1 GB maximum, then it is better to increase the heap to a larger value and, if needed, enable GC logging for ZooKeeper so you can monitor GC usage over a period of time and find the approximate heap your environment requires. As you have already set the ZooKeeper heap to 4 GB, it should be good for now; we can monitor it for some time. A common cause of ZooKeeper OutOfMemory errors is clients submitting requests faster than ZooKeeper can process them, especially when there are a lot of clients. You can also take a look at parameters like "zookeeper.snapCount", but it is better to monitor ZooKeeper with the 4 GB heap for some time before tuning such parameters.
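For when you do want the GC data, a minimal sketch of enabling GC logging for ZooKeeper; the log path is a placeholder, the flags are Java 8-style GC options, and in an Ambari-managed cluster this would go into the zookeeper-env template:

```bash
# zookeeper-env.sh (or java.env): zkServer.sh picks up SERVER_JVMFLAGS.
# -Xmx4g matches the heap already set; the GC flags write a log to review later.
export SERVER_JVMFLAGS="-Xmx4g -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/var/log/zookeeper/zookeeper_gc.log"
```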
08-27-2019 11:35 PM
After following your suggestion, the problem seems to have been solved.
08-26-2019 11:09 PM
Thank you very much for your inputs.
08-26-2019 06:34 PM
@rvillanueva HDF and HDP versions in a cluster can be different; they need not be exactly the same. For reference, see https://supportmatrix.hortonworks.com/ and click on "HDP 3.1" (or on the desired HDF version, such as HDF 3.4.1.1); you will then find the compatibility matrix with Ambari + HDF versions.
08-24-2019 08:21 PM
Hi @cfarnes Thanks for the reply. I gave 28 GB of RAM to VirtualBox to run the CDA.
08-23-2019 12:53 PM
Perfect. Here's my solution based on your feedback.

    - name: Check if Ambari setup has already run
      command: grep 'jdbc' /etc/ambari-server/conf/ambari.properties
      register: ambari_setup_check
      check_mode: no
      ignore_errors: yes
      changed_when: no

    - name: Setup Ambari server
      # The pipe needs the shell module; the command module does not support pipes.
      shell: printf '%s\n' y ambari y y n | ambari-server setup -j /usr/lib/jvm/java-8-openjdk-amd64
      become: yes
      when: ambari_setup_check.rc == 1
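As a quick sanity check of the idempotency condition (the path comes from the first task above), the grep exit code is exactly what drives the `when:` clause:

```bash
# rc=0 -> 'jdbc' found, setup already ran (second task is skipped)
# rc=1 -> not found, second task runs
grep -q 'jdbc' /etc/ambari-server/conf/ambari.properties; echo "rc=$?"
```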
08-22-2019 03:45 PM
Can we use the Ambari_helper concept to write a script to delete dead nodes from Ambari? Could you please guide me on where I can find details about the Ambari_helper classes?