Member since: 06-26-2013
Posts: 416
Kudos Received: 104
Solutions: 49
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6812 | 03-23-2016 08:06 AM |
| | 11697 | 10-12-2015 01:56 PM |
| | 3992 | 03-05-2015 11:11 AM |
| | 5585 | 02-19-2015 02:41 PM |
| | 10632 | 01-26-2015 09:55 AM |
10-16-2013 12:19 PM (17 Kudos)
That command must be run as the HDFS superuser (usually "hdfs"). So, try it this way:
sudo -u hdfs hdfs dfsadmin -safemode leave
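If you want to double-check the state before and after, the same tool has a "get" option (run it as the superuser again):
sudo -u hdfs hdfs dfsadmin -safemode get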
HTH
10-16-2013 11:37 AM (1 Kudo)
@VINNU Hadoop is a platform and framework for data storage, processing, search, and more. It provides the tools to handle massive amounts of data in a fast, scalable manner, but it does not come with native tools for specific use cases such as the one you are describing. You will have to write your own application that knows how to detect fraudulent banking activity, and then use Hadoop to run that application over your data set.
Hadoop proper is really just the core HDFS and MapReduce projects, for storage of data and batch processing of it. However, there is a rich ecosystem of complementary projects around Hadoop, such as Apache Mahout, that are geared toward specific tasks. Mahout, for example, is a machine learning/analytics library that can be used to build applications like the one you are describing; you then run those applications in your Hadoop cluster against the data you have loaded into HDFS.
I hope this helps somewhat.
10-08-2013 11:22 AM
OK, I see that you are running under Windows, so that may be part of the issue. On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts (the Unix-style /etc/hosts does not apply there), and it needs to contain the IP/hostname info for all the cluster hosts.
The basic point is that all the hosts in the cluster need to be able to ping each other by hostname and by IP, so double-check that they all can. Your client machine (wherever you are running the app from) also needs to be able to resolve all the IPs and hostnames of the cluster machines.
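For reference, here is roughly what the entries should look like; the IPs and hostnames below are made up, so substitute your own cluster's values:
192.168.1.10 master01.example.com master01
192.168.1.11 worker01.example.com worker01
192.168.1.12 worker02.example.com worker02
Then from each machine, verify resolution both ways, e.g.:
ping master01.example.com
ping 192.168.1.10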
09-27-2013 12:59 PM
@vcr: I have moved this post to the Impala board in the hope that somebody there can assist you.
Regards,
Clint
09-17-2013 11:39 AM
David,
I have moved your post to the Cloudera Manager board so that someone there can hopefully give you the feedback you are seeking.
Clint
09-17-2013 11:35 AM (1 Kudo)
@bertbert98,
I have moved this post to the Cloudera Manager board in case that helps you get more attention. I do believe it's fine to run a KDC on CentOS, though; that's how I run mine. The Cloudera Manager doc you referenced is the correct one.
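For what it's worth, getting a basic MIT KDC going on CentOS is only a handful of steps. This is just a rough sketch (the EXAMPLE.COM realm is a placeholder), so do follow the doc for the full procedure:
yum install krb5-server krb5-workstation
(edit /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf for your realm)
kdb5_util create -s -r EXAMPLE.COM
service krb5kdc start
service kadmin start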
Clint
09-17-2013 10:00 AM
We no longer supply a QuickStart VM with CDH3 on it, as that version is EOM (end of maintenance). Also, I am not sure what results you might get trying to run that version of Pig with CDH 4.3, as the combination has never been tested; there is certainly potential for incompatibilities.
09-17-2013 09:57 AM (1 Kudo)
@Nag,
Did you, by any chance, recently switch your CDH cluster over to using parcels? If you upgraded to parcels, you must also go back and remove all the old CDH RPMs manually, and update your symlinks to the new parcel locations. It's one of the last steps in the upgrade doc and easy to overlook. If CM still has its symlinks pointed at the old CDH command locations in /usr/lib, those commands won't be there once the RPMs are removed. You should be able to update the symlinks by restarting the Cloudera Manager agents on all machines:
service cloudera-scm-agent restart
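After the restart, you can spot-check that the commands resolve into the parcel directory; the paths here are what a typical parcel install looks like, so adjust for your environment:
ls -l /usr/bin/hadoop
(it should ultimately resolve somewhere under /opt/cloudera/parcels/CDH/)
rpm -qa | grep hadoop
(any output here means old CDH RPMs are still installed and should be removed)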
HTH,
Clint
09-09-2013 07:11 AM (1 Kudo)
The best way is to use the CM API. Here is a handy blog entry describing how to use it:
http://blog.cloudera.com/blog/2012/09/automating-your-cluster-with-cloudera-manager-api/
Also, we have v2 of the API out now, which is described here:
http://cloudera.github.io/cm_api/apidocs/v2/index.html
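For a quick taste, the API is plain REST over HTTP, so you can poke at it with curl. The host, port, and admin credentials below are placeholders for your own CM server:
curl -u admin:admin 'http://cm-host.example.com:7180/api/v2/clusters'
curl -u admin:admin 'http://cm-host.example.com:7180/api/v2/clusters/Cluster%201/services'
The first call lists your clusters; the second lists the services in a (hypothetically named) cluster.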