Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 24017 | 10-16-2018 11:27 AM |
| | 8064 | 09-29-2018 06:59 AM |
| | 1234 | 07-17-2018 08:44 AM |
| | 6864 | 04-18-2018 08:59 AM |
02-19-2018
06:29 AM
I am now able to access the Ambari web UI. The issue was a firewall rule I had not set in Google Cloud Platform; it got resolved after creating a firewall rule for the server. Thanks
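For anyone hitting the same problem, a GCP firewall rule along these lines opens the Ambari web UI port (the rule name and target tag below are placeholders, 8080 is Ambari's default web UI port, and the source range should be restricted to suit your environment):

# gcloud compute firewall-rules create allow-ambari-ui --allow=tcp:8080 --source-ranges=<your-ip-range> --target-tags=<ambari-server-tag>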
01-31-2018
10:03 AM
@Anurag Mishra You can name the "host_groups" anything you like, such as "host_group_1", "host_group_2", "host_group_3", etc. But the "fqdn" value must be the exact hostname: it should match the Ambari agent's FQDN, which you can get by running the following command on the Ambari agent machines: # hostname -f
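For example, a minimal host-mapping file might look like this (the blueprint name and hostnames below are placeholders; each "fqdn" must match the output of hostname -f on that agent):

{
  "blueprint": "multinode-hdp",
  "host_groups": [
    { "name": "host_group_1", "hosts": [ { "fqdn": "node1.example.com" } ] },
    { "name": "host_group_2", "hosts": [ { "fqdn": "node2.example.com" } ] }
  ]
}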
01-26-2018
07:42 PM
2 Kudos
Hi Anurag: The files under /kafka-logs are the actual data files used by Kafka. They aren't the application logs for the Kafka brokers. The files under /var/log/kafka are the application logs for the brokers. Thank you, Jeff Groves
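A quick way to see the difference from the shell (directory contents will vary with your topics and your log.dirs setting):

# ls /kafka-logs       # Kafka data: one directory per topic partition, e.g. my-topic-0
# ls /var/log/kafka    # broker application logs, e.g. server.log, controller.log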
02-02-2018
07:20 AM
@Anurag Mishra This thread looks like a duplicate of: https://community.hortonworks.com/questions/170086/hdfs-folder-migration-from-one-cluster-to-another.html Can you please close one of them, to avoid duplication?
01-11-2018
08:25 AM
@Jay Kumar SenSharma
I am familiar with creating a blueprint manually, but exporting a blueprint from another cluster leaves me with some confusion. Below I am listing the steps I follow while creating a blueprint manually, and what replaces each one when we export from another cluster.

1. When we do it manually, our first step is to create the hostmapping.json file. When we export the blueprint, we still have to do the same.
2. Then we create the cluster_configuration.json file. In the export method all these configs come automatically, so we do not need to create this file. Am I right about this?
3. Then we have to set up the internal repo.
4. Register the blueprint with the Ambari server by executing the command below (in place of cluster_config.json we pass the Ambari blueprint that we exported from the other cluster):

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/blueprints/multinode-hdp -d @cluster_config.json

5. Set up internal repos and pull the trigger! The command below starts the cluster installation:

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/multinode-hdp -d @hostmap.json

Conclusion: basically all the steps are the same, except that instead of creating the cluster_configuration.json file we export it from another cluster. This is my whole understanding; if I am wrong somewhere, please highlight it. Thanks in advance. @Jay Kumar SenSharma can you please check the above and help me out?
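For reference, the export step itself is just a GET against the source cluster's Ambari API (the cluster name below is a placeholder), saving the output as the blueprint file to register:

curl -H "X-Requested-By: ambari" -u admin:admin "http://<source-ambari-hostname>:8080/api/v1/clusters/<cluster-name>?format=blueprint" > cluster_config.json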
01-09-2018
07:27 AM
@Jay Kumar SenSharma thanks Jay
01-08-2018
10:40 AM
@Anurag Mishra Disabling the firewall is one of the major requirements while setting up the cluster; otherwise you will need to manually unblock many ports. For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:

# systemctl disable firewalld
# service firewalld stop

You can restart iptables after setup is complete. If the security protocols in your environment prevent disabling iptables, you can proceed with iptables enabled, as long as all required ports are open and available. Ambari checks whether iptables is running during the Ambari Server setup process. If iptables is running, a warning displays, reminding you to check that required ports are open and available. The Host Confirm step in the Cluster Install Wizard also issues a warning for each host that has iptables running.

NOTE: An HDP cluster can have multiple HDP components, and every component can have multiple ports that need to be accessed remotely. Hence, if you enable the firewall, you will need to manually unblock the various ports used by the various services on your own, which might become complicated. Please see the following link to learn more about the ports used by the various HDP components: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_reference/content/hdfs-ports.html
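If your environment does require the firewall to stay up, opening individual ports with firewalld looks like this (8080 shown here for the Ambari web UI; repeat --add-port for each service port from the HDP port list linked above):

# firewall-cmd --permanent --add-port=8080/tcp
# firewall-cmd --reload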
01-04-2018
08:24 AM
@Anurag Mishra, Please put the table name in quotes ('').

# hbase shell
hbase(main):014:0> list               # Lists all tables
hbase(main):014:0> list 'table_name'  # Lists only a single table name
hbase(main):014:0> scan 'table_name'  # Gets the contents of the table
Thanks, Aditya
12-04-2017
11:31 AM
If you feel this clears your queries, please mark the answer as accepted.
05-10-2018
07:16 AM
Nice explanation @Jay Kumar SenSharma