Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 11531 | 03-08-2019 06:33 PM
 | 4912 | 02-15-2019 08:47 PM
 | 4171 | 09-26-2018 06:02 PM
 | 10598 | 09-07-2018 10:33 PM
 | 5659 | 04-25-2018 01:55 AM
04-15-2016
07:23 PM
@Amit Tewari - Please check my answer below and accept it if it resolves your issue.
04-15-2016
07:15 PM
@Hazarathkumar bobba - Please check the answers below and accept the one that is most relevant.
04-15-2016
07:13 PM
@Davide Isoardi - You are welcome! 🙂
04-15-2016
07:10 PM
@sunil kanthety - Can you please try the command below to delete KERBEROS_CLIENT:

curl -H "X-Requested-By:ambari" -u admin:admin -X DELETE "http://<AMBARI-SERVER>:8080/api/v1/clusters/<CLUSTER-NAME>/services/KERBEROS/components/KERBEROS_CLIENT"

To re-install, you can again use the Ambari API; however, the easiest method is to disable and re-enable Kerberos from the Ambari UI. Please let me know how it goes.
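As a hedged sketch, the whole delete-and-reinstall flow can also be done per host through the Ambari REST API. Everything below is a placeholder (AMBARI_HOST, CLUSTER_NAME, HOST_FQDN, the admin:admin credentials), and the script only prints the curl calls unless DRY_RUN=0:

```shell
#!/bin/sh
# Sketch only: prints the Ambari REST calls to delete and re-install
# KERBEROS_CLIENT on one host. All names are placeholders; set DRY_RUN=0
# to actually execute against a live Ambari server.
AMBARI_HOST="${AMBARI_HOST:-ambari.example.com}"
CLUSTER_NAME="${CLUSTER_NAME:-mycluster}"
HOST_FQDN="${HOST_FQDN:-node1.example.com}"
BASE="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER_NAME}"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

# 1. Delete the broken client from the host
run curl -H "X-Requested-By:ambari" -u admin:admin -X DELETE \
    "${BASE}/hosts/${HOST_FQDN}/host_components/KERBEROS_CLIENT"

# 2. Re-add it, then drive it to the INSTALLED state
run curl -H "X-Requested-By:ambari" -u admin:admin -X POST \
    "${BASE}/hosts/${HOST_FQDN}/host_components/KERBEROS_CLIENT"
run curl -H "X-Requested-By:ambari" -u admin:admin -X PUT \
    -d '{"HostRoles":{"state":"INSTALLED"}}' \
    "${BASE}/hosts/${HOST_FQDN}/host_components/KERBEROS_CLIENT"
```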
04-15-2016
06:21 AM
@gaurav sharma - If you look at the logs carefully, you will notice the message below:

java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.receiveFile(TransferFsImage.java:517)
    at ...

1. Please move the existing fsimage from the SNN to some other location, and make sure the disk on the SNN has capacity to store the fsimage from the NN (check the size of the fsimage on the NN and see whether the total disk capacity on the SNN is sufficient to store it).
2. Shut down the Secondary NN.
3. Run the command below to force the Secondary NN to do checkpointing:

hadoop secondarynamenode -checkpoint force

Note - Please run the above command as the hdfs user.
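A minimal sketch of the capacity check in step 1, assuming a hypothetical ~1 GB fsimage (substitute the real size from the NN); the hdfs command it prints is the one from the steps above and should be run as the hdfs user:

```shell
#!/bin/sh
# Sketch only: check that the Secondary NN's checkpoint disk can hold the
# NameNode's fsimage before forcing a checkpoint. The 1 GB fsimage size is
# an assumption; substitute the real size from the NN.

# Succeeds when free space (KB) covers the fsimage size (KB) plus a 10% margin.
enough_space() {
  needed_kb=$1
  free_kb=$2
  [ "$free_kb" -ge $(( needed_kb + needed_kb / 10 )) ]
}

FREE_KB=$(df -Pk . | awk 'NR==2 {print $4}')   # free KB on the current disk

if enough_space 1048576 "$FREE_KB"; then       # assume a ~1 GB fsimage
  echo "sufficient space; force the checkpoint as the hdfs user:"
  echo "  su - hdfs -c 'hadoop secondarynamenode -checkpoint force'"
else
  echo "move the old fsimage aside or free up disk space first"
fi
```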
04-15-2016
06:08 AM
@Indrajit swain
1. Start your sandbox and ssh in using a terminal:
ssh root@127.0.0.1 -p 2222
2. Run the following command:
ambari-admin-password-reset
The following prompts will appear:
Please set the password for admin:
Please retype the password for admin:
After setting and retyping your new password, run:
ambari-agent restart
3. The Ambari admin password should now be reset.
4. Open the Ambari login page and verify that your new password allows you to log in as the admin user.
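As a hedged sketch of the last verification step, the new password can also be checked from the shell by hitting a simple Ambari REST endpoint instead of the login page. The host, port, and credentials below are the usual sandbox defaults, but treat them as assumptions:

```shell
#!/bin/sh
# Sketch only: returns success when Ambari accepts the given credentials.
# 127.0.0.1:8080 is the usual sandbox default; adjust if yours differs.
check_login() {
  host=$1; user=$2; pass=$3
  code=$(curl -s --max-time 10 -o /dev/null -w '%{http_code}' \
    -u "${user}:${pass}" "http://${host}:8080/api/v1/clusters")
  [ "$code" = "200" ]
}

# Usage on the sandbox, after the password reset:
#   check_login 127.0.0.1 admin 'your-new-password' && echo "login ok"
```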
04-14-2016
04:59 PM
2 Kudos
@sunil kanthety - It looks like there are some issues with the installation of KERBEROS_CLIENT. Can you please remove it using the API and re-install? https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
04-14-2016
04:42 PM
3 Kudos
@Davide Isoardi Can you please try the command below:

hdfs dfsadmin -fs hdfs://<problematic-nn-fqdn>:8020 -refreshNodes

Note - Please run the above command as the hdfs user. Also, can you please check the datanode logs on hdpslave04 and see whether there is anything in there?
04-14-2016
12:28 PM
6 Kudos
@R K Good question!
1. Regarding I/O - A block won't get written on a single node; instead it gets replicated to 2 other datanodes. The NN doesn't take responsibility for the replication itself, to avoid extra overhead; it gives the locations of the other 2 datanodes, and each datanode continues the chain of replication, with an acknowledgement for each replica. Read more about the HDFS write anatomy here.
2. Hadoop is designed to process big data, so files with small sizes won't give us much benefit. That's correct - the ext3 filesystem on Linux has a block size of 4 KB. When a MapReduce program, a Hadoop client, or any other application tries to read a file from HDFS, the block is the basic unit. Regarding "Does the 64 MB block in HDFS get split into x * 4 KB blocks of the underlying operating system?" - logically speaking, 1 HDFS block corresponds to 1 file in the local filesystem on the datanode; if you run the stat command on that file, you should get block-related info from the underlying FS.
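The arithmetic behind that last question is easy to sanity-check: a 64 MB HDFS block stored as one local file sits on 16384 ext3 blocks of 4 KB each:

```shell
#!/bin/sh
# One HDFS block (64 MB) stored as a single local file on a datanode,
# expressed in 4 KB ext3 filesystem blocks.
HDFS_BLOCK=$(( 64 * 1024 * 1024 ))   # 64 MB in bytes
FS_BLOCK=$(( 4 * 1024 ))             # 4 KB ext3 block in bytes
echo "$(( HDFS_BLOCK / FS_BLOCK )) local blocks per HDFS block"
# prints: 16384 local blocks per HDFS block
```

(Running stat on the block file on the datanode should report the same underlying I/O block size, e.g. 4096.)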
04-14-2016
07:31 AM
3 Kudos
@gsharma - Judging by the error, it looks like there is some problem with the secondary namenode's local storage. Can you please check the value of dfs.namenode.checkpoint.dir and see whether there are any issues, such as a read-only mount, full storage, or maybe a bad disk? Also, the under-replicated block issue is not related to this one. How many datanodes do you have? What is the replication factor? Are all the datanodes healthy?
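A hedged sketch of those storage checks, with the checkpoint directory as a placeholder (on a real SNN, point DIR at the configured value of dfs.namenode.checkpoint.dir):

```shell
#!/bin/sh
# Sketch only: probe the checkpoint directory for the failure modes above -
# a read-only mount, a permissions problem, or a (nearly) full disk.
DIR="${DIR:-/tmp}"   # placeholder; use dfs.namenode.checkpoint.dir on the SNN

# Writable? (catches read-only mounts and permission problems)
if touch "${DIR}/.ckpt_probe" 2>/dev/null; then
  rm -f "${DIR}/.ckpt_probe"
  echo "writable: yes"
else
  echo "writable: NO - check for a read-only mount or bad permissions"
fi

# How full is the backing filesystem?
USED=$(df -P "$DIR" | awk 'NR==2 {gsub("%","",$5); print $5}')
echo "disk used: ${USED}%"
if [ "$USED" -ge 95 ]; then
  echo "WARNING: filesystem nearly full - checkpointing will likely fail"
fi
```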