Member since: 12-27-2022
Posts: 21
Kudos Received: 0
Solutions: 3

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3616 | 01-02-2023 11:15 PM |
| | 1381 | 12-30-2022 04:11 PM |
| | 3935 | 12-29-2022 02:05 PM |
01-02-2023
08:22 AM
2023-01-01 17:31:58,726 ERROR org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Most of the disks failed. 2/2 local-dirs have errors:
[ /media/sanjay/hdd02/yarn/nm : Cannot create directory: /media/sanjay/hdd02/yarn/nm, error mkdir of /media/sanjay/hdd02 failed,
  /media/sanjay/hdd03/yarn/nm : Cannot create directory: /media/sanjay/hdd03/yarn/nm, error mkdir of /media/sanjay/hdd03 failed ]
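For anyone hitting the same thing, a quick sanity check to run on each node, taking the paths from the log above (the yarn service user below is the usual CDH default, so treat that as an assumption):

    # Verify the drives are mounted and writable before blaming YARN itself.
    for d in /media/sanjay/hdd02 /media/sanjay/hdd03; do
      mountpoint -q "$d" && echo "$d is mounted" || echo "$d is NOT a mount point"
      ls -ld "$d"                          # who owns the mount root?
      sudo -u yarn mkdir -p "$d/yarn/nm"   # reproduce exactly what the NodeManager attempts
    done

If the mkdir fails even as root, the problem is below YARN (an unmounted or read-only drive), not a Hadoop configuration issue.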
01-02-2023
08:21 AM
You are right. The errors are as follows:

[ /media/sanjay/hdd02/yarn/nm : Cannot create directory: /media/sanjay/hdd02/yarn/nm,
  /media/sanjay/hdd03/yarn/nm : Cannot create directory: /media/sanjay/hdd03/yarn/nm ]

I have tried changing the permissions and owners of the folders (yarn:hadoop), and in ultimate desperation made the folders 777 as well, just to see if the problem would go away. No luck yet. Not sure how to solve this; I am running out of ideas fast, LOL.
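When even 777 does not help, the failure is usually below the permission layer: the drive is not actually mounted, or it has been remounted read-only. A few checks worth running on the affected nodes (the probe filename is mine):

    mount | grep -E 'hdd0[23]'    # confirm both drives are mounted rw, not ro
    dmesg | tail -n 20            # look for recent disk or filesystem errors
    touch /media/sanjay/hdd02/probe && rm /media/sanjay/hdd02/probe   # raw write test as root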
12-30-2022
07:13 PM
I have a three-node cluster. Using cores=8, memory=32GB, disks=2, hbase=False.

Profile: cores=8 memory=31744MB reserved=1GB usableMem=31GB disks=2
Num Container=4 Container Ram=7680MB Used Ram=30GB Unused Ram=1GB

yarn.scheduler.minimum-allocation-mb=7680
yarn.scheduler.maximum-allocation-mb=30720
yarn.nodemanager.resource.memory-mb=30720
mapreduce.map.memory.mb=3840
mapreduce.map.java.opts=-Xmx3072m
mapreduce.reduce.memory.mb=7680
mapreduce.reduce.java.opts=-Xmx6144m
yarn.app.mapreduce.am.resource.mb=3840
yarn.app.mapreduce.am.command-opts=-Xmx3072m
mapreduce.task.io.sort.mb=1024
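These values line up with the usual container-sizing arithmetic (containers = min(2 × cores, ceil(1.8 × disks), usableMem / minContainerSize)); the formula is my assumption about how they were derived, but a minimal sketch reproduces the numbers:

    # Reproduce the container math from the profile above.
    CORES=8; DISKS=2; MEM_MB=31744; RESERVED_MB=1024
    USABLE_MB=$((MEM_MB - RESERVED_MB))        # 30720
    CONTAINERS=4                               # min(2*8, ceil(1.8*2), 30720/2048) = 4
    CONTAINER_MB=$((USABLE_MB / CONTAINERS))   # 7680 -> yarn.scheduler.minimum-allocation-mb
    echo "per-container RAM: ${CONTAINER_MB} MB"
    echo "yarn.nodemanager.resource.memory-mb: $((CONTAINERS * CONTAINER_MB))"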
Labels:
- Apache Hadoop
- Apache YARN
12-30-2022
04:11 PM
OK, I solved it. Basically, the ownership of the following datanode drives on all three of my machines had to be changed:

    sudo chown -R hdfs:hadoop /media/sanjay/hdd02 /media/sanjay/hdd03
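A quick way to confirm the fix took on each node (the probe filename is mine):

    sudo -u hdfs touch /media/sanjay/hdd02/probe && sudo rm /media/sanjay/hdd02/probe && echo "hdfs can write"
    ls -ld /media/sanjay/hdd02 /media/sanjay/hdd03   # should now show hdfs:hadoop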
12-30-2022
12:31 PM
I got CM up and running and want to add the HDFS service, but I keep getting this error. In fact, I went to the Linux disk and, in desperation, ran chmod -R 755 on the root namenode folder. I still get this error:

    java.io.IOException: Cannot create directory /media/sanjay/hdd02/dfs/nn/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:416)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:601)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:173)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
    2022-12-30 11:51:27,073 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    java.io.IOException: Cannot create directory /media/sanjay/hdd02/dfs/nn/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:416)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:579)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:601)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:173)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1169)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
    2022-12-30 11:51:27,074 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: Cannot create directory /media/sanjay/hdd02/dfs/nn/current
    2022-12-30 11:51:27,076 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at hp8300one/10.0.0.3
    ************************************************************/
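This matches the fix that eventually worked above: chmod -R 755 leaves the directory owned by root, and the NameNode runs as the hdfs user, so it still cannot create dfs/nn/current. Ownership, not mode, is what matters (hdfs:hadoop is the usual CDH default; treat the exact group as an assumption):

    sudo mkdir -p /media/sanjay/hdd02/dfs/nn
    sudo chown -R hdfs:hadoop /media/sanjay/hdd02/dfs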
Labels:
- HDFS
12-30-2022
12:00 PM
You know this 🙂 . I unfortunately don't, since I did not find any documentation on how to actually find and use the allkeys.asc file. So I decided to be innovative and installed the agents myself.
12-29-2022
02:05 PM
OK - I finally managed to bring up the CDH cluster. I had to go to the two nodes that were failing to install, and on each of them I ran the following:

    sudo apt-get install cloudera-manager-agent
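If anyone else has to install the agent by hand like this, it usually also needs to be pointed at the CM server and started; a sketch of the full sequence (the server hostname below is a placeholder):

    sudo apt-get install -y cloudera-manager-agent cloudera-manager-daemons
    sudo sed -i 's/^server_host=.*/server_host=cm-server.example.com/' /etc/cloudera-scm-agent/config.ini
    sudo systemctl enable --now cloudera-scm-agent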
12-29-2022
09:46 AM
I find only one ERROR in /var/log/cloudera-scm-server/cloudera-scm-server.log:

    sudo grep ERROR /var/log/cloudera-scm-server/cloudera-scm-server.log | cut -d ' ' -f 3- | sort -u

    ERROR ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloaderImpl: Failed to download manifest. Status code: 404 URI: https://archive.cloudera.com/accumulo6/6.1.0/parcels/manifest.json/
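That 404 is just the Accumulo parcel repository and is most likely unrelated to the host installs failing. For the two failing nodes, the agent-side log is usually more informative; a similar one-liner to run on each of them:

    sudo grep -iE 'error|fail' /var/log/cloudera-scm-agent/cloudera-scm-agent.log | sort -u | tail -n 20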
12-29-2022
09:34 AM
@paras Hi. To clarify: out of the three nodes, installation succeeded on one node (the node that has the CM manager daemon running). The installation failed on the other two nodes. I will go and find out the details about allkeys.asc and see where exactly to put that file on each node. I am just puzzled that the install worked on one node but is failing on the remaining two. I will also re-check the CM server logs to see if there are any clues during startup that might help me solve this problem. I have been using Cloudera CM and CDH since 2011! Back then I had a community handle called sanjumani, but after the Cloudera rehaul of the past two years I can no longer get to that account, so sanjaysubs is my new handle. Thanks
12-28-2022
11:24 AM
To confirm, this is a 3-node Hadoop cluster I am trying to set up with Cloudera:
- Ubuntu 18.04.5 LTS
- I installed Anaconda3-2022.10-Linux-x86_64.sh on each node
- On each node I installed openjdk version "1.8.0_352"
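A quick per-node check to confirm the stack described above (expected outputs in the comments):

    lsb_release -ds                   # expect: Ubuntu 18.04.5 LTS
    java -version 2>&1 | head -n 1    # expect: openjdk version "1.8.0_352"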