History Server cannot start
Labels: Apache Hadoop
Created ‎12-21-2016 10:04 AM
After installing a fresh HDP 2.5.3 cluster (Ambari 2.4.1.0), all services (default selection) installed successfully without any warnings. When starting the services, the History Server fails to start and causes MapReduce to fail as well.
curl: (52) Empty reply from server 100
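Not part of the original post, but a minimal sketch of how to probe the JobHistory Server by hand, assuming the stock web UI port 19888 and a placeholder hostname jhs-host:

```bash
# Hit the JobHistory Server REST endpoint directly
# (19888 is the default mapreduce.jobhistory.webapp.address port).
curl -v http://jhs-host:19888/ws/v1/history/info

# A daemon that crashed or never bound the port produces exactly the
# kind of "Empty reply from server" / "Connection refused" that the
# alert above reports.
```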
Created ‎12-22-2016 06:34 PM
@Jay SenSharma
The real problem is the NameNode heap memory. When the History Server tries to start, the NameNode's memory usage quickly climbs past the 1 GB limit (the default configuration) and brings the service down. After raising the maximum heap to 3 GB it works fine. I had previously installed Ambari 2.4.0.1 and did not see this behaviour (2.4.2.0 behaves the same as 2.4.1.0). Do you know why?
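For reference, on a cluster that is not managed by Ambari the same change can be made directly in hadoop-env.sh; a minimal sketch, assuming the usual config location (Ambari rewrites this file itself, so on an Ambari cluster use the UI setting discussed below instead):

```bash
# /etc/hadoop/conf/hadoop-env.sh
# Raise the NameNode JVM heap ceiling from the 1 GB default to 3 GB.
export HADOOP_NAMENODE_OPTS="-Xmx3072m ${HADOOP_NAMENODE_OPTS}"
```

Restart the NameNode afterwards so the new -Xmx takes effect.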
Created ‎01-20-2017 01:18 AM
Hi, in which section do I increase the memory? HDFS or MapReduce? Thanks a lot.
Created ‎01-20-2017 09:23 PM
Hi Aldo, in HDFS. The parameter is called "NameNode Java heap size".
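To verify that the new limit is active (and to watch the heap climb while the History Server starts), the NameNode's built-in JMX servlet can be queried; a sketch, assuming the default NameNode web UI port 50070 and a placeholder hostname nn-host:

```bash
# JvmMetrics exposes MemHeapUsedM (current usage) and MemHeapMaxM,
# which should read roughly 3072 after the change.
curl -s 'http://nn-host:50070/jmx?qry=Hadoop:service=NameNode,name=JvmMetrics'
```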
Created ‎10-30-2017 11:08 AM
Hi, thank you to each of you for contributing insights and experiences.
Just my contribution:
I installed my first cluster recently and ran into this same behaviour; the cause was a dumb distraction: I thought /etc/hosts was fine on every node and it was not (missing entries). After fixing the files properly and restarting the services, everything came up promptly, and the reds turned green in the Ambari dashboard. (A sketch of the expected layout follows below.)
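Not Massa's actual file, just a sketch of the shape /etc/hosts needs on every node: one line per cluster host, with the IP, the fully qualified name, then the short name (all addresses and hostnames here are placeholders):

```
# /etc/hosts -- identical entries needed on every node of the cluster
127.0.0.1     localhost
192.168.1.10  master1.example.com  master1
192.168.1.11  worker1.example.com  worker1
192.168.1.12  worker2.example.com  worker2
```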
cheers and good luck
Massa
