Member since: 10-24-2015
Posts: 171
Kudos Received: 379
Solutions: 23
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2624 | 06-26-2018 11:35 PM
 | 4337 | 06-12-2018 09:19 PM
 | 2871 | 02-01-2018 08:55 PM
 | 1435 | 01-02-2018 09:02 PM
 | 6741 | 09-06-2017 06:29 PM
12-20-2017
07:33 PM
1 Kudo
@Michael Bronson, HDFS in this cluster is in safemode, which is why the Timeline Server is failing to start. Kindly check the HDFS Namenode log to see why the Namenode is in safemode. You can explicitly turn off safemode by running "hdfs dfsadmin -safemode leave".
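For reference, a quick sequence to diagnose and clear safemode (standard hdfs CLI commands; run them as the hdfs superuser):

# check whether the Namenode is currently in safemode
hdfs dfsadmin -safemode get
# look for missing or corrupt blocks that may be keeping it there
hdfs fsck /
# force-exit safemode once the underlying cause is understood
hdfs dfsadmin -safemode leave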
12-08-2017
08:34 PM
7 Kudos
@yassine sihi, these are two different concepts. HDFS can be deployed in two modes: 1) without HA and 2) with HA. In the non-HA mode, HDFS has a Namenode and a Secondary Namenode. The Secondary Namenode periodically checkpoints the Namenode metadata by merging the edit log into the fsimage, so a recent copy of the metadata always exists. In case of a Namenode failure, that checkpoint can be used to recover most of the metadata, but the failover is manual and edits made after the last checkpoint can be lost. In HA mode, HDFS has two Namenodes: one acts as the active Namenode and the other as the standby Namenode. The standby Namenode continuously tracks the active Namenode's state, and in case of an active Namenode failure it automatically takes over and becomes active. This way users do not notice the Namenode failure, and high availability is guaranteed.
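For illustration, a minimal hdfs-site.xml sketch of what HA mode looks like (the nameservice name "mycluster" and the hosts host1/host2 are placeholders, not values from any real cluster):

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>host1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>host2:8020</value>
</property>
<property>
  <!-- automatic failover is handled by ZKFC and requires a ZooKeeper quorum -->
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>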
10-10-2017
06:36 PM
1 Kudo
@Nikita Kiselev, you can also use the yarn CLI to figure out the active/standby RM. You can find the RM IDs in yarn-site.xml; look for the yarn.resourcemanager.ha.rm-ids property:

<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>

Run "yarn rmadmin -getServiceState rm1" to find out the state of rm1. It will return "active" if rm1 is the active RM; otherwise it will return "standby". You can run the same command to check rm2 as well: yarn rmadmin -getServiceState rm2.
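If you script this, a small shell sketch that checks every RM in one pass (assuming the rm-ids rm1 and rm2 from the property above):

# print the HA state of each ResourceManager ID
for id in rm1 rm2; do
  echo "$id: $(yarn rmadmin -getServiceState "$id")"
done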
09-26-2017
07:01 PM
1 Kudo
It worked. Thanks.
09-26-2017
06:37 PM
1 Kudo
@Aditya Sirna, the /tmp/testa dir is present, and the livy user has permission to write to it. I received the output below while trying to use the WebHDFS REST API.

[root@xx user]# curl -i -X PUT "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?user.name=livy&op=CREATE"
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 26 Sep 2017 17:33:17 GMT
Date: Tue, 26 Sep 2017 17:33:17 GMT
Pragma: no-cache
Expires: Tue, 26 Sep 2017 17:33:17 GMT
Date: Tue, 26 Sep 2017 17:33:17 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=livy&p=livy&t=simple&e=1506483197716&s=dRvADKPG0lrenLje4fmEEdgChFw="; Path=/; HttpOnly
Location: http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26.hwx)
[root@xxx user]# curl -i -T /tmp/a.txt "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?op=CREATE&overwrite=false"
HTTP/1.1 100 Continue
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Tue, 26 Sep 2017 17:33:49 GMT
Date: Tue, 26 Sep 2017 17:33:49 GMT
Pragma: no-cache
Expires: Tue, 26 Sep 2017 17:33:49 GMT
Date: Tue, 26 Sep 2017 17:33:49 GMT
X-FRAME-OPTIONS: SAMEORIGIN
Location: http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&namenoderpcaddress=xx:8020&createflag=&createparent=true&overwrite=false
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26.hwx)
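For anyone hitting the same problem: in the two-step WebHDFS create, the second request with the file data has to go to the datanode URL returned in the Location header of the first 307 response, keeping the user.name parameter, rather than back to the Namenode. A sketch with placeholder hosts:

# step 1: ask the Namenode where to write; no data is sent yet
curl -i -X PUT "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?user.name=livy&op=CREATE"
# step 2: PUT the file to the exact Location URL from the 307 response
curl -i -X PUT -T /tmp/a.txt "<Location URL from step 1>"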
09-26-2017
05:50 PM
1 Kudo
I'm looking for a WebHDFS REST API example to upload a file to HDFS. I tried the calls below but could not upload a file to HDFS:

curl -i -X PUT "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?user.name=livy&op=CREATE"
curl -i -T /tmp/a.txt "http://<namenode host>:50070/webhdfs/v1/tmp/testa/a.txt?op=CREATE&overwrite=false"
Labels:
- Apache Hadoop
09-15-2017
06:31 PM
1 Kudo
@Palash Dutta, the articles below show how to rotate HDFS logs and also zip them:
https://community.hortonworks.com/articles/50058/using-log4j-extras-how-to-rotate-as-well-as-zip-th.html
https://community.hortonworks.com/questions/78699/how-to-rotate-and-archive-hdfs-audit-log-file.html
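As a quick reference, a minimal log4j.properties sketch in the style of the first article, using the log4j-extras rolling appender to roll daily and gzip old files (the appender name DRFA and the ${hadoop.log.dir}/${hadoop.log.file} variables follow the usual Hadoop layout; adjust for your cluster):

# daily rolling appender from the log4j-extras companion; rolled logs are gzipped
log4j.appender.DRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFA.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hadoop.log.dir}/${hadoop.log.file}.%d{yyyy-MM-dd}.gz
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n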
09-11-2017
06:39 PM
1 Kudo
@Sebastien Chausson, you can refer to the document below to set up the Spark keystore/truststore:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.5/bk_spark-component-guide/content/spark-encryption.html
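For a rough idea of what that setup looks like, a sketch of the relevant spark-defaults.conf entries (the paths and passwords are placeholders; the linked doc is the authoritative reference):

# enable SSL for Spark and point it at the keystore/truststore
spark.ssl.enabled true
spark.ssl.keyStore /etc/spark/conf/keystore.jks
spark.ssl.keyStorePassword <keystore-password>
spark.ssl.keyPassword <key-password>
spark.ssl.trustStore /etc/spark/conf/truststore.jks
spark.ssl.trustStorePassword <truststore-password>
spark.ssl.protocol TLSv1.2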
09-06-2017
06:29 PM
11 Kudos
@Sanaz Janbakhsh, check the maximum-applications and maximum-am-resource-percent properties in your cluster. Try increasing the values of the properties below to allow more applications to run at a time.

yarn.scheduler.capacity.maximum-applications / yarn.scheduler.capacity.<queue-path>.maximum-applications
Maximum number of applications in the system that can be concurrently active, both running and pending. Limits on each queue are directly proportional to their queue capacities and user limits. This is a hard limit, and any applications submitted when this limit is reached will be rejected. The default is 10000. This can be set for all queues with yarn.scheduler.capacity.maximum-applications and can also be overridden on a per-queue basis by setting yarn.scheduler.capacity.<queue-path>.maximum-applications. An integer value is expected.

yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
Maximum percent of resources in the cluster that can be used to run application masters; this controls the number of concurrently active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float, e.g. 0.5 = 50%. The default is 10%. This can be set for all queues with yarn.scheduler.capacity.maximum-am-resource-percent and can also be overridden on a per-queue basis by setting yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent.
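For illustration, here is how the overrides could look in capacity-scheduler.xml (the queue path root.default and the values are example numbers, not recommendations):

<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>10000</value>
</property>
<property>
  <!-- example: allow AMs to use up to 20% of this queue's resources -->
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>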
08-30-2017
11:26 PM
6 Kudos
@parag dharmadhikari, the permissions on the /tmp dir in HDFS are not correct. Typically the /tmp dir has 777 permissions. Run the command below on your cluster; it will resolve this permission-denied error.

hdfs dfs -chmod 777 /tmp
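To verify before and after, a quick check (standard hdfs CLI; run the chmod as the hdfs superuser):

# show the permissions of the /tmp directory itself
hdfs dfs -ls -d /tmp
hdfs dfs -chmod 777 /tmp
# should now show drwxrwxrwx
hdfs dfs -ls -d /tmp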