Member since 08-08-2017
1652 Posts
30 Kudos Received
11 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1964 | 06-15-2020 05:23 AM |
| | 16010 | 01-30-2020 08:04 PM |
| | 2105 | 07-07-2019 09:06 PM |
| | 8233 | 01-27-2018 10:17 PM |
| | 4663 | 12-31-2017 10:12 PM |
01-31-2018
10:04 AM
Hi all, we have three master machines (master01, master02, master03), and each master machine runs its own set of components. How can we verify via the API that all components on a specific master machine are stopped? For example, to verify that all components on master01 are stopped, the API should return the status (stopped/started) of every component running on master01.
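A hedged sketch of one way to do this with Ambari's REST API: the per-host host_components endpoint lists every component on a given host, and the fields filter can restrict the response to HostRoles/state. The cluster name, host name, and credentials below are placeholders; adjust them to your environment.

```shell
# Sketch only: list the state of every component on one host via the Ambari
# REST API. CLUSTER and HOST are placeholder values for this example.
CLUSTER=HDP
HOST=master01
URL="http://localhost:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components?fields=HostRoles/state"
# Uncomment to run against a live Ambari server:
# curl -u "$USER:$PASSWORD" -H "X-Requested-By: ambari" -X GET "$URL"
echo "$URL"
```

In Ambari, a stopped component reports HostRoles/state as INSTALLED, so verifying that no entry in the response reports STARTED confirms everything on that host is down.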
Labels:
- Apache Ambari
- Apache Hadoop
01-31-2018
09:07 AM
Need advice: why do we get the error "Failed to replace a bad datanode on the existing pipeline due to no more good datanodes"? I also saw another question describing my problem: https://community.hortonworks.com/questions/27153/getting-ioexception-failed-to-replace-a-bad-datano.html Log description: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[34.2.31.31:50010,DS-8234bb39-0fd4-49be-98ba-32080bc24fa9,DISK], DatanodeInfoWithStorage[34.2.31.33:50010,DS-b4758979-52a2-4238-99f0-1b5ec45a7e25,DISK]], original=[DatanodeInfoWithStorage[34.2.31.31:50010,DS-8234bb39-0fd4-49be-98ba-32080bc24fa9,DISK], DatanodeInfoWithStorage[34.2.31.33:50010,DS-b4758979-52a2-4238-99f0-1b5ec45a7e25,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1036)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1110)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1268)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:993)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:500)
---2018-01-30T15:15:15.015 INFO [][][] [dal.locations.LocationsDataFramesHandler]
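For what it's worth, on small clusters (three or fewer DataNodes) this error is commonly worked around by relaxing the client-side replacement policy the log message itself points to. A sketch of the relevant client settings in hdfs-site.xml; the values are illustrative for a small cluster, not a universal recommendation:

```xml
<!-- Client-side settings in hdfs-site.xml; illustrative values for a small cluster -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- NEVER: do not try to replace a failed datanode in the write pipeline -->
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
```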
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Spark
01-30-2018
04:53 PM
We get the following in the Spark logs:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage DatanodeInfoWithStorage\
The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1036)

My Ambari cluster includes only three worker machines, and each worker has only one data disk. Searching Google, I found a suggested solution: set the block replication to 1 instead of 3 (HDFS). Is that true? Second, since each worker machine has only one data disk, could that be part of the problem? Block replication = the number of copies of each file kept in the file system, as specified by the dfs.replication factor;
setting dfs.replication=1 means there will be only one copy of each file in the file system.
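For reference, a hedged sketch of the standard HDFS commands for inspecting and changing replication; the path is a hypothetical example. Note that lowering dfs.replication to 1 removes redundancy entirely, so on a three-worker cluster a value of 2 or 3 is more typical.

```shell
# Sketch only: HDFS replication commands (run on a cluster node; path is an example).
# hdfs dfs -stat %r /tmp/example.txt       # show current replication of one file
# hdfs dfs -setrep -w 2 /tmp/example.txt   # change an existing file's replication to 2
# hdfs fsck / -files -blocks               # report under-replicated blocks
TARGET_REPLICATION=2
echo "target replication: $TARGET_REPLICATION"
```

Changing dfs.replication in the configuration only affects files written afterwards; existing files keep their replication until changed with -setrep.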
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Spark
01-29-2018
09:14 AM
What do we need to put in {host-name}? (We already set the hostname in ambari-host.)
01-29-2018
06:52 AM
Why does the API not capture the real status of a component? For example, with the API call below we query the status of the APP_TIMELINE_SERVER component, and it shows the component as started ("state" : "STARTED"), but in fact the real status in Ambari is STOPPED.

curl -u $USER:$PASSWORD -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/HDP/components/APP_TIMELINE_SERVER
{
"href" : "http://localhost:8080/api/v1/clusters/HDP/components/APP_TIMELINE_SERVER",
"ServiceComponentInfo" : {
"category" : "MASTER",
"cluster_name" : "HDP",
"component_name" : "APP_TIMELINE_SERVER",
"display_name" : "App Timeline Server",
"init_count" : 0,
"install_failed_count" : 0,
"installed_count" : 1,
"recovery_enabled" : "true",
"service_name" : "YARN",
"started_count" : 0,
"state" : "STARTED",
"total_count" : 1,
"unknown_count" : 0
},
"host_components" : [
{
"href" : "http://localhost:8080/api/v1/clusters/HDP/hosts/master01.sys67.com/host_components/APP_TIMELINE_SERVER",
"HostRoles" : {
"cluster_name" : "HDP",
"component_name" : "APP_TIMELINE_SERVER",
"host_name" : "master01.sys67.com"
}
}
  ]
}

So how can that be? Why does the API show a status different from the Ambari GUI? Or is something wrong with my API syntax?
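One likely explanation: the cluster-level components endpoint returns ServiceComponentInfo, whose aggregate state may not match what is happening on each host, while the per-host host_components endpoint reports the live HostRoles/state. A hedged sketch (cluster, host, and component names mirror the post; credentials are placeholders):

```shell
# Sketch only: query the per-host state of one component instead of the
# cluster-level aggregate. Values mirror the post; credentials are placeholders.
CLUSTER=HDP
HOST=master01.sys67.com
COMPONENT=APP_TIMELINE_SERVER
URL="http://localhost:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/$COMPONENT?fields=HostRoles/state"
# Uncomment to run against a live Ambari server:
# curl -u "$USER:$PASSWORD" -H "X-Requested-By: ambari" -X GET "$URL"
echo "$URL"
```

A stopped component should report "state" : "INSTALLED" in the HostRoles object of this response.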
Labels:
- Apache Ambari
- Apache Hadoop
01-27-2018
09:59 PM
From the logs below we can see that the NameNode is in safe mode. What is the command to take the NameNode out of safe mode?

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file/spark2-history/.cffb30ea-b88b-4044-8e1d-017039458f94. Name node is in safe mode.
The reported blocks 4 needs additional 10 blocks to reach the threshold 0.9900 of total blocks 14.
The number of live datanodes 3 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1391)
... 13 more
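The log itself explains the condition: the NameNode stays in safe mode until the reported blocks reach the 0.99 threshold. A hedged sketch of the standard dfsadmin commands (run as the hdfs user on a cluster node):

```shell
# Sketch only: check and, if necessary, force-exit NameNode safe mode.
# hdfs dfsadmin -safemode get    # prints "Safe mode is ON" or "Safe mode is OFF"
# hdfs dfsadmin -safemode leave  # force the NameNode out of safe mode
CHECK_CMD="hdfs dfsadmin -safemode get"
echo "$CHECK_CMD"
```

Forcing safe mode off while blocks are still missing can expose unreadable files, so checking first with -safemode get (or waiting for the threshold) is the safer path.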
Labels:
- Apache Ambari
- Apache Hadoop
01-27-2018
09:15 PM
@Jay, can we just do ambari-server restart and install the blueprint again (in case we have time for the installation)?
01-27-2018
09:13 PM
@Jay, if you think this isn't the right direction, then what should we capture from the log?
01-27-2018
09:03 PM
Why does it check the repo http://public-repo-1.hortonworks.com/HDP/sles12/2.x/updates/2.6.4.0/HDP-2.6.4.0-91.xml? We use a local repo in the Ambari server.