Member since: 05-16-2016
Posts: 785
Kudos Received: 114
Solutions: 39

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1866 | 06-12-2019 09:27 AM |
 | 3070 | 05-27-2019 08:29 AM |
 | 5109 | 05-27-2018 08:49 AM |
 | 4480 | 05-05-2018 10:47 PM |
 | 2782 | 05-05-2018 07:32 AM |
07-14-2017
02:13 AM
Great, it helped me too.
07-13-2017
01:32 PM
@csguna Kernel 2.6.32-573.22.1.el6.x86_64, Red Hat 6.7
07-10-2017
01:37 AM
After several searches, I think the problem is not blocking anything because all services are working. But do you think that name resolution could be the cause of this problem?
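If you want to rule name resolution in or out, a minimal check on each node (the namenode.example.com host below is just a placeholder for your own cluster hosts) would be:

```
# Sketch: quick name-resolution sanity check on a cluster node
hostname -f                          # fully-qualified hostname the services advertise
getent hosts $(hostname -f)          # how the OS resolves that name (via /etc/hosts or DNS)
getent hosts namenode.example.com    # placeholder: repeat for each of the other cluster hosts
cat /etc/hosts                       # look for stale or duplicate entries
```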
07-06-2017
07:31 AM
That is correct.
07-05-2017
02:11 AM
Yes, install a web browser on the same machine and try accessing it from that browser. Try this first, instead of trying from your host (if you are using VirtualBox or another virtualization tool). You can ignore the "unable to retrieve non-local non-loopback ip address" error.
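If installing a browser inside the VM is awkward, a rough equivalent is to probe the web UI from the same machine with curl; the port below assumes Cloudera Manager's default 7180, so adjust it to whichever UI you are testing:

```
# Sketch: check from inside the VM that the web UI answers locally
curl -I http://localhost:7180    # any HTTP response, even a login redirect, means the service is up
# If this works but the host's browser cannot reach it, the issue is VM networking/port forwarding.
```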
07-04-2017
11:58 PM
Which user are you running Spark as? Is the path you are referring to, /home/cloudera/partfile, in HDFS or on the local filesystem? Run the following and let me know if the files are listed: hadoop fs -ls /home/cloudera/
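For example (output will vary; the path is the one from your post):

```
# Sketch: confirm which user submits the job and where the path actually lives
whoami                            # user running the Spark job
hadoop fs -ls /home/cloudera/     # does the path exist in HDFS?
ls -l /home/cloudera/partfile     # or only on the local filesystem?
```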
06-30-2017
06:34 AM
Interesting story. The decommission process will not complete until every block has at least one good replica on other DataNodes (a good replica is one that is not stale and sits on a DataNode that is not being decommissioned or already decommissioned). The DirectoryScanner in a DataNode scans the entire data directory, reconciling inconsistencies between the in-memory block map and the on-disk replicas, so it will eventually pick up the added replica; it is just a matter of time.
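If you want to watch this progress, two standard commands (run from any host with an HDFS client) are:

```
# Sketch: monitor decommissioning and replica placement
hdfs dfsadmin -report -decommissioning        # DataNodes still draining their blocks
hdfs fsck / -files -blocks -locations | less  # where each block's replicas currently live
```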
06-27-2017
02:32 PM
This problem was caused by the Sqoop tables not having been created in PostgreSQL.
To solve it, go to CM > Sqoop 2 service, click the Actions button, and choose "Create Sqoop Database". After that, try to start the Sqoop 2 service again.
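If you want to confirm afterwards that the repository tables really exist, a quick check against PostgreSQL looks like the following (the database and user names are assumptions; use whatever your Sqoop 2 service is configured with):

```
# Sketch: list the Sqoop 2 repository tables in PostgreSQL
# the "sqoop" database and user are placeholders for your actual configuration
psql -h localhost -U sqoop -d sqoop -c '\dt'
```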
06-27-2017
08:10 AM
Although I haven't tried it, try putting 777 permissions on that directory and run it as root: chmod -R 777 /etc/init.d/ (it is very bad practice, though).
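If you do try it, it is worth saving the current permissions first so they can be restored later; this sketch assumes the acl package (getfacl/setfacl) is installed:

```
# Sketch: back up current permissions before loosening them
getfacl -R /etc/init.d/ > /root/initd-perms.acl   # save existing modes and owners
chmod -R 777 /etc/init.d/                         # temporary, for diagnosis only
# once the real cause is found, restore the originals:
# setfacl --restore=/root/initd-perms.acl
```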
06-26-2017
07:02 PM
If you read my previous reply, that is exactly what I said: fs.defaultFS (fs.default.name in older releases) should be the same across all the nodes, and every node should have the same version of core-site.xml. To be more precise, this property specifies the default file system and needs to be set to an HDFS address. It is needed for client configuration as well, so your local configuration file should include this element: hdfs://192.168.1.200:9000/. Here 9000 is the NameNode RPC port, which is also where DataNodes send their heartbeats unless a separate service RPC address is configured.
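One quick way to confirm that every node agrees is to query the client configuration on each host (the config path below assumes a typical packaged install and may differ in yours):

```
# Sketch: verify fs.defaultFS is identical on every node
hdfs getconf -confKey fs.defaultFS                         # should print the same hdfs:// address on every host
grep -A1 'fs.defaultFS' /etc/hadoop/conf/core-site.xml     # adjust the path to your installation
```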