Member since: 01-06-2016
Posts: 131
Kudos Received: 99
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1853 | 03-08-2016 08:34 PM |
| | 4195 | 03-02-2016 07:04 PM |
| | 2267 | 01-29-2016 05:47 PM |
03-03-2016
05:31 PM
4 Kudos
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{
"RequestInfo":{
"context":"Recommission DataNodes",
"command":"RECOMMISSION",
"parameters":{
"slave_type":"DATANODE",
"included_hosts":"c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
},
"operation_level":{
"level":"HOST_COMPONENT",
"cluster_name":"c1"
}
},
"Requests/resource_filters":[
{
"service_name":"HDFS",
"component_name":"NAMENODE"
}
]
}' http://localhost:8080/api/v1/clusters/c1/requests
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{
"RequestInfo":{
"context":"Recommission NodeManagers",
"command":"RECOMMISSION",
"parameters":{
"slave_type":"NODEMANAGER",
"included_hosts":"c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
},
"operation_level":{
"level":"HOST_COMPONENT",
"cluster_name":"c1"
}
},
"Requests/resource_filters":[
{
"service_name":"YARN",
"component_name":"RESOURCEMANAGER"
}
]
}' http://localhost:8080/api/v1/clusters/c1/requests

@Artem Ervits Can I use these commands like this to recommission the DataNodes and NodeManagers? Which one has to be executed first, the DataNodes?
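As a quick sketch on top of the question above (not part of the original post): each of these POST calls returns a request id in its JSON response, and that id can be polled to watch the recommission progress. Assuming Ambari reported request id 25, for example:

curl -u admin:admin -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/clusters/c1/requests/25

The response includes the overall request status (e.g. IN_PROGRESS or COMPLETED) and the per-host tasks for the recommission.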
Labels:
- Apache Ambari
- Apache Hadoop
03-03-2016
03:56 PM
2 Kudos
Labels:
- Apache Ambari
- Apache Hadoop
03-02-2016
07:04 PM
Step 1: Decommission the NodeManagers from the cluster.

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X POST -d '{ "RequestInfo":{ "context":"Decommission NodeManagers", "command":"DECOMMISSION", "parameters":{ "slave_type":"NODEMANAGER", "excluded_hosts":"serf010ext.etops.tllsc.net,serf020ext.etops.tllsc.net,villein010ext.etops.tllsc.net,villein020ext.etops.tllsc.net" }, "operation_level":{ "level":"HOST_COMPONENT", "cluster_name":"Name of the cluster" } }, "Requests/resource_filters":[ { "service_name":"YARN", "component_name":"RESOURCEMANAGER" } ]}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/requests

Step 2: Decommission the DataNodes from the cluster.

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X POST -d '{ "RequestInfo":{ "context":"Decommission DataNodes", "command":"DECOMMISSION", "parameters":{ "slave_type":"DATANODE", "excluded_hosts":"serf010ext.etops.tllsc.net,serf020ext.etops.tllsc.net,villein010ext.etops.tllsc.net,villein020ext.etops.tllsc.net" }, "operation_level":{ "level":"HOST_COMPONENT", "cluster_name":"Name of the cluster" } }, "Requests/resource_filters":[ { "service_name":"HDFS", "component_name":"NAMENODE" } ]}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/requests

Step 3: Stop the DataNode component on each of the decommissioned nodes (a looped version of these four commands is sketched after the steps).

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/hosts/serf010ext.etops.tllsc.net/host_components/DATANODE

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/hosts/serf020ext.etops.tllsc.net/host_components/DATANODE

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/hosts/villein010ext.etops.tllsc.net/host_components/DATANODE

Command: curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://ambari_hostname:8080/api/v1/clusters/cluster_name/hosts/villein020ext.etops.tllsc.net/host_components/DATANODE

Step 4: Check the under-replicated and corrupted block counts on the Ambari dashboard; they will show some non-zero numbers.

Step 5: Restart the Standby NameNode.

Step 6: Restart the Active NameNode.

Step 7: Check the under-replicated and corrupted block counts on the Ambari dashboard again; they should be zero. Restarting the NameNodes redistributes the blocks onto the live DataNodes only.

Here serf010ext, serf020ext, villein010ext and villein020ext are the nodes being decommissioned from the cluster. Thank you.
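For what it's worth, a minimal sketch (using the same placeholder password, Ambari host, cluster name and hostnames as above) that runs the four Step 3 commands in a single shell loop:

for host in serf010ext.etops.tllsc.net serf020ext.etops.tllsc.net villein010ext.etops.tllsc.net villein020ext.etops.tllsc.net; do
  # put the DataNode component on each decommissioned host into the INSTALLED (stopped) state
  curl -u admin:password -i -H 'X-Requested-By: ambari' -X PUT \
    -d '{"HostRoles": {"state": "INSTALLED"}}' \
    "http://ambari_hostname:8080/api/v1/clusters/cluster_name/hosts/${host}/host_components/DATANODE"
done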
03-02-2016
06:49 PM
1 Kudo
I tried stopping the DATANODE and NODEMANAGER services first and then decommissioning the nodes, but I was unable to decommission them; the nodes did not even show as decommissioned and I got an internal exception. I then decommissioned the NODEMANAGER and DATANODE components, in that order, using the curl commands, and changed the DATANODE components to the INSTALLED state. I restarted the NameNodes to refresh the live DataNode status, and it updated successfully. Before the NameNode restart I could see corrupted and under-replicated blocks; after the restart they went to zero. On the Ambari dashboard I can now see four nodes live.
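As an aside (my own sketch, not from the original reply): the same under-replicated and corrupt block counts can also be read from the command line on an HDFS client node instead of the Ambari dashboard, for example:

# the fsck summary lists "Under-replicated blocks" and "Corrupt blocks"
sudo -u hdfs hdfs fsck / | grep -E 'Under-replicated blocks|Corrupt blocks'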
03-01-2016
07:49 PM
@Artem Ervits Once those nodes are excluded and decommissioned, they will not be in the STARTED state. How can we stop them using the link you gave? Can you please tell me the procedure to follow while decommissioning?
03-01-2016
07:24 PM
How can we stop it then? If you have any example or link, could you please post it here?
03-01-2016
07:18 PM
1 Kudo
I decommissioned 4 DataNodes and NodeManagers out of 8 DataNodes and NodeManagers. I checked the dfs.exclude file; it contains the hostnames of the decommissioned nodes. I restarted the NameNode. The dashboard is still showing 8 DataNodes live and 4 NodeManagers live. Why is the DataNode count not affected?
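As a side note (a sketch assuming shell access to a NameNode host, not part of the original question): the NameNode can be told to re-read the exclude file without a full restart, and the per-DataNode decommission state can be checked outside the dashboard:

# re-read the include/exclude files without restarting the NameNode
sudo -u hdfs hdfs dfsadmin -refreshNodes
# list each DataNode with its decommission status (Normal / Decommission in progress / Decommissioned)
sudo -u hdfs hdfs dfsadmin -report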
Labels:
- Apache Ambari
- Apache Hadoop
03-01-2016
04:48 PM
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=41812517
03-01-2016
04:36 PM
2 Kudos
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start HDFS via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://`hostname -f`:8080/api/v1/clusters/$CLUSTER_NAME/services/HDFS

In the above curl command, what does "hostname -f" mean? I am new to using REST APIs, so it may be a stupid question, but I would like to get a clear idea of the above command.
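For illustration (a hypothetical sketch, not part of the original question), the backticks run the command and substitute its output into the URL:

hostname -f
# prints this machine's fully qualified domain name, e.g. c6401.ambari.apache.org
echo "http://`hostname -f`:8080"
# prints http://c6401.ambari.apache.org:8080, i.e. the Ambari server URL built from that FQDN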
Labels:
- Apache Ambari
03-01-2016
04:10 PM
1 Kudo
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start HDFS via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://`hostname -f`:8080/api/v1/clusters/$CLUSTER_NAME/services/HDFS

In the above curl command, what does "hostname -f" mean? I am new to using REST APIs, so it may be a stupid question, but I would like to get a clear idea of the above command.