Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1918 | 06-15-2020 05:23 AM |
| | 15460 | 01-30-2020 08:04 PM |
| | 2071 | 07-07-2019 09:06 PM |
| | 8108 | 01-27-2018 10:17 PM |
| | 4570 | 12-31-2017 10:12 PM |
11-11-2017
07:09 PM
hi aditya, everything was great until I tried to install the service. For this example we try to install the Knox service. I try this: curl -u xxxx:xxxx -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' http://182.243.5.12:8080/api/v1/clusters/HDP101/services/Knox and I get: HTTP/1.1 400 Bad Request
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=13q980rkix8lh1ewku7n3xlhfh;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Content-Length: 107
Server: Jetty(8.1.19.v20160209)
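A common cause of a 400 from Ambari on modifying requests is a missing X-Requested-By header (Ambari's CSRF protection rejects PUT/POST/DELETE without it); the original command also has a stray space in the URL before :8080. Below is a minimal sketch of a corrected call, shown as a dry run that only prints the command rather than sending it. The credentials, the "Install Knox" context string, and the upper-cased service name are assumptions, not confirmed details from the thread:

```shell
#!/bin/sh
# Dry run: build and print a corrected curl command instead of sending it.
# Host and cluster are from the post; credentials are placeholders.
AMBARI="http://182.243.5.12:8080"   # note: no space before :8080
CLUSTER="HDP101"
SERVICE="KNOX"                      # Ambari service names are usually upper-case
BODY='{"RequestInfo":{"context":"Install Knox"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}'

CMD="curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '$BODY' $AMBARI/api/v1/clusters/$CLUSTER/services/$SERVICE"
echo "$CMD"
```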
11-10-2017
01:14 PM
another remark: if I set this value to 1, does it mean that HDFS will start up even though the volume is bad or not in use?
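For context: dfs.datanode.failed.volumes.tolerated controls how many volumes a DataNode may lose before it shuts itself down. With the default of 0, any single failed volume stops the DataNode. A sketch of the hdfs-site.xml fragment for tolerating one failed volume (this lets the DataNode keep running and HDFS start, but it does not repair the bad volume):

```xml
<!-- hdfs-site.xml: allow each DataNode to tolerate one failed volume -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```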
11-10-2017
01:09 PM
yes sure thank you so much
11-10-2017
01:07 PM
hi aditya, can you give me a real example for the first API? Second, how do I get these values: stack_versions {stack-version-no} and repository_versions {repository-version-no}?
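Those identifiers can typically be read back from Ambari's own REST endpoints. A minimal sketch, printed as a dry run; the host and cluster name come from the posts above, while the HDP 2.6 stack name/version and the credentials are assumptions for illustration:

```shell
#!/bin/sh
# Dry run: print the GET endpoints that expose the ids the question asks about.
AMBARI="http://182.243.5.12:8080"
CLUSTER="HDP101"

# {stack-version-no}: the cluster's stack versions (each entry carries an id)
STACK_URL="$AMBARI/api/v1/clusters/$CLUSTER/stack_versions"
# {repository-version-no}: repository versions registered for a given stack
REPO_URL="$AMBARI/api/v1/stacks/HDP/versions/2.6/repository_versions"

echo "curl -u admin:admin $STACK_URL"
echo "curl -u admin:admin $REPO_URL"
```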
11-10-2017
12:59 PM
hi aditya, regarding http://{ambari-server}:{port}/api/v1/clusters/{clustername}/services — can you please show me the full approach (the full syntax)?
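The usual full sequence against that endpoint is: register the service, register its component(s), then drive the desired state. A sketch printed as a dry run; the KNOX/KNOX_GATEWAY names and the credentials are illustrative assumptions:

```shell
#!/bin/sh
# Dry run: print the typical Ambari REST sequence for adding a service.
AMBARI="http://182.243.5.12:8080"
CLUSTER="HDP101"
BASE="$AMBARI/api/v1/clusters/$CLUSTER"
H="-H 'X-Requested-By: ambari' -u admin:admin"

# 1. register the service with the cluster
echo "curl $H -X POST $BASE/services/KNOX"
# 2. register the service's component(s)
echo "curl $H -X POST $BASE/services/KNOX/components/KNOX_GATEWAY"
# 3. install (host components must be assigned to hosts first)
echo "curl $H -X PUT -d '{\"ServiceInfo\":{\"state\":\"INSTALLED\"}}' $BASE/services/KNOX"
# 4. start
echo "curl $H -X PUT -d '{\"ServiceInfo\":{\"state\":\"STARTED\"}}' $BASE/services/KNOX"
```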
11-10-2017
12:56 PM
hi Jay, I have an idea but I'm not sure about it, so I need your advice. On the problematic worker we have an extra volume, sdg, and the bad volume is sdf. So maybe we should umount sdf and mount sdg in its place, change the DataNode directories in the Ambari GUI from sdf to sdg, and then restart the HDFS component on the worker. What do you think?
11-10-2017
12:50 PM
hi Aditya, on each worker machine we have 5 volumes, and we do not want to stay with 4 volumes on the problematic workers, so regarding option 2 we do not want to remove the volume. Second, what is the meaning of setting dfs.datanode.failed.volumes.tolerated to 1? After an HDFS restart, will it fix the problem?
11-10-2017
12:44 PM
hi Jay - grep dfs.datanode.failed.volumes.tolerated /etc/hadoop/conf/hdfs-site.xml returns <name>dfs.datanode.failed.volumes.tolerated</name>, so the property is already set in the xml file
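That grep only proves the property name exists; to see the configured value you also need the <value> line that follows it. A minimal sketch using a throwaway copy of the file (on a real node, point CONF at /etc/hadoop/conf/hdfs-site.xml instead; the value 0 here is just the HDFS default used for illustration):

```shell
#!/bin/sh
# Sketch: extract the configured value, not just the name, from hdfs-site.xml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>
</configuration>
EOF

# grep -A1 also prints the <value> line after the matching <name> line;
# sed strips the surrounding tags and whitespace.
VAL=$(grep -A1 'dfs.datanode.failed.volumes.tolerated' "$CONF" \
      | grep '<value>' | sed -e 's/.*<value>//' -e 's/<\/value>.*//')
echo "dfs.datanode.failed.volumes.tolerated=$VAL"
rm -f "$CONF"
```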
11-10-2017
12:37 PM
In my Ambari cluster we have some services that are not installed yet, such as the Graphite service, as described here (in the picture). What are the API commands required to identify which service(s) are available to install, and the API call that installs a service?
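One way to answer this with the REST API: the stack definition lists every service the stack supports, and the cluster resource lists what is already installed; the difference is what can still be added. A dry-run sketch that prints the relevant calls — the stack name/version (HDP 2.6) and credentials are assumptions, and KNOX is just an example target:

```shell
#!/bin/sh
# Dry run: print the two GETs whose difference gives the installable services.
AMBARI="http://182.243.5.12:8080"
CLUSTER="HDP101"

# all services defined in the stack
ALL_URL="$AMBARI/api/v1/stacks/HDP/versions/2.6/services"
# services already installed in the cluster
CUR_URL="$AMBARI/api/v1/clusters/$CLUSTER/services"

echo "curl -u admin:admin $ALL_URL"
echo "curl -u admin:admin $CUR_URL"
# installing one of the missing services then starts with a POST:
echo "curl -u admin:admin -H 'X-Requested-By: ambari' -X POST $AMBARI/api/v1/clusters/$CLUSTER/services/KNOX"
```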
Labels:
- Apache Ambari
- Apache Hadoop
11-10-2017
12:29 PM
here are the permissions: ls -ltr /xxxxx/sdc/hadoop/hdfs/data/ drwxr-xr-x. 3 hdfs hadoop 4096 current -rw-r--r--. 1 hdfs hadoop 28 in_use.lock