Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
| 830 | 10-19-2023 04:36 PM
| 4366 | 12-08-2018 06:56 PM
| 5456 | 10-05-2018 06:28 AM
| 19856 | 04-19-2018 02:27 AM
| 19878 | 04-18-2018 09:40 AM
04-24-2018
02:06 PM
Where did you get this command from? Do you have the hadoop-examples jar? Check the jars under /home/cloudera/ by running ll /home/cloudera/
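If the jar isn't in your home directory, a quick way to look for the stock examples jar (a sketch; the /usr/lib path is an assumption based on typical CDH package installs, adjust for parcels):

    # list what is in your home directory
    ll /home/cloudera/
    # common location of the bundled examples jar (assumed path)
    ls /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar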
04-19-2018
02:27 AM
sudo -u hdfs hdfs dfs -chown -R cloudera /
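To confirm the change took effect, the owner column of the HDFS root listing should now show cloudera:

    sudo -u hdfs hdfs dfs -chown -R cloudera /
    hdfs dfs -ls /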
04-18-2018
09:40 AM
@subbu You keep making the same error. To read from or write to HDFS, the user running the command needs permissions on the target path. You can solve this by always running as the superuser "hdfs", which means prefixing your command with sudo -u hdfs, so your command should be: sudo -u hdfs hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew Alternatively, since I see you always run as the cloudera user, you can change the owner of / to cloudera, or change the permissions on the root folder. So:
1- Use the superuser:
sudo -u hdfs hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew
2- Change the owner of the root dir to cloudera:
sudo -u hdfs hdfs dfs -chown -R cloudera /
then run:
hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew
3- Change the permissions:
sudo -u hdfs hdfs dfs -chmod -R 777 /
then run:
hadoop jar /home/cloudera/WordCount.jar WordCount /inputnew/inputfile.txt /outputnew
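Whichever option you pick, a quick sanity check afterwards (note that chmod -R 777 / is acceptable on a single-user sandbox VM but far too permissive for any shared cluster):

    # inspect ownership and permissions at the HDFS root
    hdfs dfs -ls /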
04-18-2018
06:55 AM
@dpugazhe Generally the / mount on Linux servers is small. Could you share the output of the df -h command from your Linux box? I would suggest changing the location of the parcels and the logs. For example, if you have a larger mount on your Linux box called /xxxxx, change /var/lib and /var/log to /xxxx/hadoop/lib and /xxxx/hadoop/log, and do the same for the parcels. Since you are using Cloudera Manager, these changes can be done quickly (see the sketch after this list). To do that:
1- Stop the Cloudera Manager services.
2- Move the old logs to the new partition.
3- Delete the old logs.
4- Start the Cloudera Manager services.
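A minimal sketch of those four steps for the Cloudera Manager server logs (assuming the new mount is /xxxx; the service name applies to the CM server host, agents use cloudera-scm-agent, and the new log location still has to be repointed in the CM configuration):

    # 1- stop Cloudera Manager
    sudo service cloudera-scm-server stop
    # 2- copy the old logs to the new partition
    sudo mkdir -p /xxxx/hadoop/log
    sudo rsync -a /var/log/cloudera-scm-server/ /xxxx/hadoop/log/cloudera-scm-server/
    # 3- delete the old logs once the copy is verified
    sudo rm -rf /var/log/cloudera-scm-server/*
    # 4- start Cloudera Manager again
    sudo service cloudera-scm-server start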
04-15-2018
04:26 AM
Please send me the output of the ll command.
04-14-2018
02:19 PM
1) [cloudera@localhost ~]$ sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: `/home/cloudera/ipf.txt': No such file or directory
The file /home/cloudera/ipf.txt doesn't exist on your local host; you can check with ll /home/cloudera/
In the two commands below you are not using sudo -u hdfs as you did in the command above. ** You faced the same issue in another post. Please use: sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
2) [cloudera@localhost ~]$ hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x
3) [cloudera@localhost ~]$ sudo -u cloudera hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
put: Permission denied: user=cloudera, access=WRITE, inode="/inputnew":hdfs:supergroup:drwxr-xr-x
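Putting the two fixes together, a sketch (the echo content is a placeholder for creating the missing local file):

    # create the local file if it is missing (placeholder content)
    echo "test data" > /home/cloudera/ipf.txt
    # either put as the hdfs superuser ...
    sudo -u hdfs hdfs dfs -put /home/cloudera/ipf.txt /inputnew/
    # ... or hand /inputnew to the cloudera user and put normally
    sudo -u hdfs hdfs dfs -chown cloudera /inputnew
    hdfs dfs -put /home/cloudera/ipf.txt /inputnew/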
04-13-2018
05:01 PM
It depends on which CDH version you are upgrading from ... You need to look at the services your cluster includes and check whether their versions changed, for example Spark, HDFS, YARN and so on. For example, when I upgraded my cluster from 5.5.4 to 5.13.0, I only had to care about the Spark jobs, since the Spark version changed and the job dependencies needed updating, plus some minor changes we made to refresh Hive tables. I would recommend going to the latest major version minus one, so 5.13, and using its latest minor version, so I recommend 5.13.3.
04-13-2018
04:55 PM
1 Kudo
@subbu use: sudo -u hdfs hdfs dfs -mkdir /inputnew
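If the cloudera user then needs to write into the new directory, handing over ownership usually follows (a sketch; the cloudera group name is an assumption):

    sudo -u hdfs hdfs dfs -mkdir /inputnew
    sudo -u hdfs hdfs dfs -chown cloudera:cloudera /inputnew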
02-12-2018
02:42 AM
Hi Guys, I upgraded my test cluster to CDH 5.13.0 and I noticed that when I send an API request to the Resource Manager, it doesn't show the active Resource Manager. This is the standby RM. The redirect URL is: /ws/v1/cluster/apps?finishedTimeBegin=1518342909577 As you can see, the URL does not include the RM node.
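For context, each RM exposes its HA state through the standard cluster info endpoint, which is one way to identify the active node yourself (the rm1/rm2 hostnames and port 8088 are placeholders):

    # query each RM directly; the active one reports "haState":"ACTIVE"
    curl -s http://rm1:8088/ws/v1/cluster/info
    curl -s http://rm2:8088/ws/v1/cluster/info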
Labels:
- Apache YARN
12-29-2017
07:35 PM
Why not compact the historical data ... For example, compact the daily files into one file for everything older than now-14days: a compaction job that runs daily and compacts the data from two weeks back. This way you can make sure you are not impacting data freshness.
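A minimal sketch of such a daily job, assuming date-partitioned directories like /data/dt=YYYY-MM-DD (the layout and paths are assumptions):

    # merge the small files of the day that just crossed the 14-day boundary
    d=$(date -d "14 days ago" +%Y-%m-%d)
    hdfs dfs -mkdir -p /data_compacted/dt=$d
    hdfs dfs -cat /data/dt=$d/* | hdfs dfs -put - /data_compacted/dt=$d/part-00000
    # once verified, swap the compacted output in place of the originals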