Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
  | 2439 | 04-27-2020 03:48 AM
  | 4870 | 04-26-2020 06:18 PM
  | 3974 | 04-26-2020 06:05 PM
  | 3212 | 04-13-2020 08:53 PM
  | 4918 | 03-31-2020 02:10 AM
05-29-2017
05:20 AM
Hi @Jay SenSharma, that solved the issue. Thanks for the support.
10-30-2018
12:40 AM
@Geoffrey Shelton Okot
Only the user who owns the thread, or a user with 1000+ points, can mark other users' answers as accepted. I have marked your previous answer from "Aug 09, 2017" as "Accepted", as that answer looks more informative from this HCC thread's perspective.
05-22-2017
08:58 AM
1 Kudo
@nshelke
These warning messages indicate that the growth rate exceeded the growth-rate threshold defined in the alert for the specified day/week. This normally happens when an excessive job runs or a load test stores a large amount of data on HDFS. The threshold can be tuned to match your requirements, or your observed growth, by clicking the "Edit" button on the alert definition in the Ambari UI.
[WARNING] [HARD] [HDFS] [increase_nn_heap_usage_daily] (NameNode Heap Usage (Daily)) The variance for this alert is 63MB which is 34% of the 186MB average (37MB is the limit). This service-level alert is triggered if the NameNode heap usage deviation has grown beyond the specified threshold within a day period.

[WARNING] [HARD] [HDFS] [namenode_increase_in_storage_capacity_usage_daily] (HDFS Storage Capacity Usage (Daily)) The variance for this alert is 950,843,960B which is 36% of the 2,626,832,493B average (788,049,748B is the limit). This service-level alert is triggered if the increase in storage capacity usage deviation has grown beyond the specified threshold within a day period.
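The threshold math these alerts use can be sketched from the numbers in the warnings themselves. This is a minimal illustration, not Ambari's actual code; the function name and the roughly 20% default implied by "37MB is the limit (of a 186MB average)" are assumptions.

```python
# Minimal sketch of the deviation check behind Ambari's "Daily" alerts,
# reconstructed from the numbers in the warning text above. The helper
# name and the 20% threshold are illustrative assumptions, not Ambari code.

def deviation_alert(average: float, variance: float, threshold_pct: float) -> bool:
    """Return True when the observed variance exceeds threshold_pct of the average."""
    limit = average * threshold_pct / 100.0
    return variance > limit

# NameNode heap example from the alert: a 63 MB variance against a 186 MB
# average, where the stated 37 MB limit corresponds to about a 20% threshold.
average_mb, variance_mb = 186, 63
print(deviation_alert(average_mb, variance_mb, 20))  # fires, since 63 MB exceeds the ~37 MB limit
```

Raising the threshold percentage in the alert definition (via the "Edit" button mentioned above) simply widens that limit, which is why the warnings stop after tuning.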
09-30-2017
11:33 AM
I have 8 GB of physical RAM and have allocated 4 GB to the Sandbox, but it's not booting up. Is that much RAM enough for the Sandbox? If yes, please help me figure out what to look for in order to troubleshoot. Thanks.
05-16-2017
07:07 PM
Now that I think more about this, I guess that

Unable to connect to: https://hadoop-m:8441/agent/v1/register/hadoop-m.c.hdp-1-163209.internal

started to occur when the Ambari server started throwing NPEs, which I detected right after (re)starting it. I had restarted it right after copying HUE into Ambari's stack with:

sudo git clone https://github.com/EsharEditor/ambari-hue-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/HUE

(these are the very first steps of the HUE-via-Ambari installation guide). Regarding FQDNs, Ambari server <-> Ambari agent connectivity, and the handshake/registration ports (8440/8441):

@hadoop-m
$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.0.4 hadoop-m.c.hdp-1-163209.internal hadoop-m # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
$ hostname -f
hadoop-m.c.hdp-1-163209.internal
@hadoop-w-0
$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.0.2 hadoop-w-0.c.hdp-1-163209.internal hadoop-w-0 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google

$ hostname -f
hadoop-w-0.c.hdp-1-163209.internal

$ telnet hadoop-m 8440
Trying 10.132.0.4...
Connected to hadoop-m.
Escape character is '^]'.

$ telnet hadoop-m 8441
Trying 10.132.0.4...
Connected to hadoop-m.
Escape character is '^]'.
$ openssl s_client -connect hadoop-m:8440
CONNECTED(00000003)
(... I removed lines ...)
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBATAN (...I removed rest...)
$ openssl s_client -connect hadoop-m:8441
CONNECTED(00000003)
(... I removed lines ...)
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBAT (...I removed rest...)
@hadoop-w-1
$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.0.3 hadoop-w-1.c.hdp-1-163209.internal hadoop-w-1 # Added by Google
169.254.169.254 metadata.google.internal # Added by Google

$ hostname -f
hadoop-w-1.c.hdp-1-163209.internal
$ telnet hadoop-m 8440
Trying 10.132.0.4...
Connected to hadoop-m.
Escape character is '^]'.

$ telnet hadoop-m 8441
Trying 10.132.0.4...
Connected to hadoop-m.
Escape character is '^]'.

$ openssl s_client -connect hadoop-m:8440
CONNECTED(00000003)
(... I removed lines ...)
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBATAN (...I removed rest...)

$ openssl s_client -connect hadoop-m:8441
CONNECTED(00000003)
(... I removed lines ...)
-----BEGIN CERTIFICATE-----
MIIFnDCCA4SgAwIBAgIBAT (...I removed rest...)

It looks as it should, doesn't it?
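The manual checks above (FQDN lookup on each host, then telnet to the Ambari handshake/registration ports 8440 and 8441) can be scripted. A minimal sketch using only the Python standard library; the server name `hadoop-m` and the ports come from this thread, everything else is a generic connectivity probe rather than anything Ambari-specific:

```python
# Sketch of the per-host checks done by hand above: print the local FQDN
# and probe TCP reachability of the Ambari registration ports (8440/8441).
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False


if __name__ == "__main__":
    server = "hadoop-m"  # Ambari server hostname used in this thread
    print("local FQDN:", socket.getfqdn())
    for port in (8440, 8441):
        state = "open" if port_open(server, port) else "closed"
        print(f"{server}:{port} -> {state}")
```

Running this on each agent host gives the same information as the `hostname -f` / `telnet` pairs above, in one pass.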
05-15-2017
06:41 PM
The second was what I was looking for. Thanks!
05-13-2017
06:54 AM
Thanks @Jay SenSharma
05-11-2017
02:11 PM
Thanks for your response. I already have a Grafana crt file created. Do I need to import it again? Can you tell me how to import it?
05-11-2017
07:22 AM
@saravanan gopalsamy
Great to know that your issue is resolved. It would be wonderful if you could click the "Accept" button on this thread to mark this comment as the answer; that helps community users quickly find the correct answer.
05-10-2017
06:14 PM
Hi Jay, I found the issue using the two given URLs: I got the request_id and stage_id values for "status" : "PENDING" and then fixed it. I connected to the Ambari DB, issued the command below, and then continued the upgrade:

update host_role_command set status='COMPLETED' where request_id='523' and stage_id='107';
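To illustrate the fix, here is a minimal sketch using an in-memory SQLite database as a stand-in for the real Ambari database (which is typically PostgreSQL or MySQL). The table name, columns, and the request_id/stage_id values mirror the post; the schema is simplified for illustration only:

```python
# Sketch of the "stuck PENDING task" fix from the post, against a toy
# SQLite copy of host_role_command. Not the real Ambari schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE host_role_command (request_id TEXT, stage_id TEXT, status TEXT)"
)
conn.execute("INSERT INTO host_role_command VALUES ('523', '107', 'PENDING')")

# Step 1: find the stuck task by looking for status = 'PENDING'.
stuck = conn.execute(
    "SELECT request_id, stage_id FROM host_role_command WHERE status = 'PENDING'"
).fetchone()
print("stuck task (request_id, stage_id):", stuck)

# Step 2: mark it COMPLETED so the upgrade can continue (the command
# quoted in the post).
conn.execute(
    "UPDATE host_role_command SET status='COMPLETED' "
    "WHERE request_id='523' AND stage_id='107'"
)
print("new status:", conn.execute(
    "SELECT status FROM host_role_command"
).fetchone()[0])
```

As a caution, editing the Ambari database directly like this is a last resort; the post describes it working during an upgrade, but it bypasses Ambari's own state management.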