Member since
03-29-2018
41
Posts
4
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 10958 | 01-24-2018 10:43 AM |
| 2542 | 11-17-2017 02:41 PM |
06-27-2021
07:59 AM
@sandeepksaini Can you share a screenshot of the Add service wizard to highlight which option is currently greyed out?
11-18-2019
10:39 PM
Generally, we use big data tools to collect data from many different sources. That data can arrive in distinct forms, so we cannot process it effectively with a single conventional system. The Hadoop big data ecosystem provides a variety of tools, so we can perform these operations effectively.
09-02-2019
01:44 AM
What are the impacts on other ports if I change from TCP6 to TCP? And will my Ambari server work on TCP?
04-01-2018
04:05 PM
@Aishwarya Sudhakar You need to understand the HDFS directory structure; that is what is causing the issue for you. Some explanation follows. Let's say the username for these example commands is ash. When ash creates a directory in HDFS with the following command:

hadoop fs -mkdir demo
//This creates a directory inside the user's HDFS home directory
//The complete directory path will be /user/ash/demo

This is different from the command below:

hadoop fs -mkdir /demo
//This creates a directory under the root directory
//The complete directory path will be /demo

So a suggestion here: whenever you access directories, use absolute paths to avoid the confusion. In this case, when you create a directory using hadoop fs -mkdir demo and load the file into HDFS using hadoop fs -copyFromLocal dataset.csv demo, your file exists at /user/ash/demo/dataset.csv
//Not at /demo

So the reference to this file in your Spark code should be sc.textFile("hdfs:///user/ash/demo/dataset.csv") Hope this helps!
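The relative-versus-absolute distinction above can be sketched without a cluster. This toy shell function (purely illustrative, not part of HDFS) resolves a path the way `hadoop fs` does, with /user/ash standing in for the user's HDFS home directory:

```shell
# Toy resolver mimicking how `hadoop fs` interprets paths.
# Assumption: the HDFS home directory is /user/ash, as in the example above.
hdfs_resolve() {
  case "$1" in
    /*) echo "$1" ;;               # absolute path: used as-is
    *)  echo "/user/ash/$1" ;;     # relative path: prefixed with the HDFS home
  esac
}

hdfs_resolve demo               # -> /user/ash/demo
hdfs_resolve /demo              # -> /demo
hdfs_resolve demo/dataset.csv   # -> /user/ash/demo/dataset.csv
```

This is why `-mkdir demo` and `-mkdir /demo` land in different places: only the leading slash decides whether the home prefix is applied.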
01-25-2018
06:33 AM
@Jay Kumar SenSharma Thanks a lot! I got the desired output, but why is it showing "hdfs" as the owner instead of "Mark", although I changed the ownership? Correct me where I went wrong. First of all I created the user "Mark":
[root@namenode] adduser Mark
Then checked the space for the user:
[root@namenode] hadoop fs -ls /user
My user was not shown in the list, so I created a space for my user:
[root@namenode] sudo -u hdfs hadoop fs -mkdir /user/Mark
[root@namenode] hadoop fs -ls /user (now the user shows in the list)
Changed the ownership:
[root@namenode] sudo -u hdfs hadoop fs -chown Mark:hdfs /user/Mark
Now I logged in as the user hdfs:
[hdfs@namenode] hdfs dfs -mkdir /user/Mark/cards
[hdfs@namenode] hdfs dfs -touchz /user/Mark/cards/largedeck.txt
Then logged in with the "Mark" user and typed:
[Mark@namenode] hadoop fs -ls /user/Mark/cards Thanks
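A likely explanation (my reading of the steps above, not something stated in the thread): the cards directory and largedeck.txt were created while logged in as hdfs, and HDFS assigns ownership to the creating user, so the earlier chown on /user/Mark does not apply to files created afterwards. A sketch of two possible fixes, assuming the same node and usernames:

```shell
# Option 1 (hypothetical session): create the files as Mark in the
# first place, now that Mark owns /user/Mark.
su - Mark
hdfs dfs -mkdir /user/Mark/cards
hdfs dfs -touchz /user/Mark/cards/largedeck.txt

# Option 2: recursively re-own what was already created by hdfs.
sudo -u hdfs hdfs dfs -chown -R Mark:hdfs /user/Mark
hdfs dfs -ls /user/Mark/cards   # the owner column should now read Mark
```

These commands need a running HDFS cluster, so treat them as an outline rather than a copy-paste script.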
01-17-2018
10:19 AM
@Geoffrey Shelton Okot Thank you very much for the information.
11-21-2017
04:58 PM
@Michael Bronson rm -rf -> This is a Linux/Unix command which will only delete a directory created in the Unix/Linux file system. Whereas hdfs dfs -rmr /DirectoryPath -> is for deletion of files/directories in the HDFS filesystem. In case I misinterpreted your question and you meant to ask the difference between "hdfs dfs -rmr" and "hdfs dfs -rm -r -f": "-rmr" is the older, now-deprecated shorthand for "-rm -r". HDFS does accept a "-f" option on rm, but it only suppresses the error message when the target does not exist; it does not change what gets deleted. So "hdfs dfs -rm -r /DirectoryPath" is the usual way to delete a directory and its files in HDFS.
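A side-by-side sketch of the commands discussed (assumes a running HDFS cluster and a hypothetical directory named mydir):

```shell
# Local filesystem: removes ./mydir from the Linux filesystem only.
rm -rf mydir

# HDFS equivalents (these act on the cluster, not the local disk):
hdfs dfs -rm -r /mydir       # modern form; moves to trash unless -skipTrash is given
hdfs dfs -rmr /mydir         # deprecated shorthand for -rm -r
hdfs dfs -rm -r -f /mydir    # -f only silences the error if /mydir does not exist
```

Since these require an HDFS deployment, they are shown for comparison rather than as a runnable script.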
11-17-2017
11:24 AM
Nope, reducers don't communicate with each other, and neither do the mappers. All of them run in separate JVM containers and have no information about each other. The ApplicationMaster is the daemon which takes care of and manages these JVM-based containers (mappers/reducers).