Member since: 01-20-2014
Posts: 578
Kudos Received: 101
Solutions: 94
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4083 | 10-28-2015 10:28 PM
| 1608 | 10-10-2015 08:30 PM
| 3648 | 10-10-2015 08:02 PM
| 2753 | 10-07-2015 02:38 PM
| 1481 | 10-06-2015 01:24 AM
10-31-2018
01:10 AM
1 Kudo
As far as I know, setting up those groups will let the users in them log on with the corresponding privilege; everyone else will just be a read-only user. With external authentication, the script can return a negative number even when the username and password are valid, which prevents those users from logging on. A sketch of such a script is below.
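For illustration, here is a minimal Python sketch of what such an external authentication script could look like. The calling convention (username as the first argument) and the group names are assumptions made for this example only; check the Cloudera Manager external authentication documentation for the real interface and the exit-code-to-role mapping.

#!/usr/bin/env python
# Hypothetical CM external-authentication script (sketch only).
# Assumption: the username is passed as the first command-line argument;
# verify the real interface and exit-code meanings in the CM docs.
import grp
import pwd
import sys

ALLOWED_GROUPS = {"cm_admins", "cm_operators"}  # example group names

def user_groups(username):
    """Return the POSIX groups (primary + supplementary) the user belongs to."""
    groups = {g.gr_name for g in grp.getgrall() if username in g.gr_mem}
    try:
        groups.add(grp.getgrgid(pwd.getpwnam(username).pw_gid).gr_name)
    except KeyError:
        pass  # unknown user: no groups
    return groups

if __name__ == "__main__":
    username = sys.argv[1] if len(sys.argv) > 1 else ""
    if user_groups(username) & ALLOWED_GROUPS:
        sys.exit(0)   # accepted; CM maps the return value to a privilege level
    else:
        sys.exit(-1)  # per this thread, a negative value rejects the login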
10-31-2018
12:11 AM
Hi @JoaoBarreto, you could configure external authentication in Cloudera Manager. With it, you can set up a script that rejects anyone who is not in certain groups, or implement any other scheme you like. Does this help? https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_external_auth.html#cmug_topic_13_9_3 Regards, Gautam
07-04-2017
03:54 AM
1 Kudo
@rampo wrote: "Also when I check Impala Parcels on http://archive.cloudera.com/impala/parcels/latest/ I see there is no EL7 parcel." I'm glad you mentioned the parcel repo. With the latest releases of CDH, Impala is included in the CDH parcel itself and doesn't need a separate repository. The version of Impala in CDH 5.11.1 is 2.8.0; see this URL: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_package_tarball_511.html This means that if you can create a CDH cluster on a bunch of CentOS 7 hosts, you should be able to add the Impala service as well. Please don't add the parcel repository quoted above, as it's no longer required.
07-03-2017
05:36 PM
@rampo wrote: "But in CentOS 6 (since Impala is not supported on CentOS 7, had to try CentOS 6), getting this error on all hosts." This is not true; our latest versions of CDH do run on CentOS 7, which means Impala is supported on it as well, e.g. 5.11.1: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_download_511.html
05-10-2016
12:11 AM
1 Kudo
Vik11, you can work around this by setting Java's tmp dir to something other than /tmp. This solution has worked in the past for a customer, YMMV. Of course, that mount point should not have noexec set (see the sketch below). In the YARN configuration, append '-Djava.io.tmpdir=/path/to/other/temp/dir' to the following properties:
1. ApplicationMaster Java Opts Base
2. Java Configuration Options for JobHistory Server
3. Java Configuration Options for NodeManager
4. Java Configuration Options for ResourceManager
For jobs: Cloudera Manager --> YARN --> search for "Gateway Client Environment Advanced Configuration Snippet (Safety Valve) for hadoop-env.sh" and add this:
HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/path/to/other/temp/dir"
Now redeploy the YARN client configuration.
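To double-check the noexec point above, here is a small, hypothetical Python sketch that looks up the mount options of the directory you plan to use; the path is the same placeholder used in the steps above.

# Sketch: verify the chosen temp dir is not on a mount with "noexec" set.
import os

def mount_options(path):
    """Return (mount point, options) of the longest matching entry in /proc/mounts."""
    path = os.path.realpath(path)
    best = ("", "")
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            mnt, opts = fields[1], fields[3]
            if path.startswith(mnt) and len(mnt) > len(best[0]):
                best = (mnt, opts)
    return best

mnt, opts = mount_options("/path/to/other/temp/dir")  # placeholder path from above
print("Mounted on %s with options: %s" % (mnt, opts))
if "noexec" in opts.split(","):
    print("WARNING: this mount has noexec set; pick a different temp dir.")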
10-29-2015
04:53 PM
Yes. When you delete it from the list, Cloudera Manager will automatically remove it from the ensemble and update the config files of the other three servers. They will need a restart to pick up the change.
10-29-2015
04:36 PM
Four is not an ideal number. I'd suggest you simply stop and delete the "down" ZooKeeper server from Cloudera Manager. That will remove the server from the ensemble; you'll then have to restart all of ZooKeeper to enforce the change.
10-29-2015
04:05 PM
The message means you must have an odd number (1, 3, 5, 7, ...) of ZooKeeper servers in the cluster. It doesn't matter whether they're running or not at the time. How many ZooKeeper servers do you have within the service?
10-28-2015
10:30 PM
1 Kudo
Please read this document as well, specifically the known issue "Hive replication fails if 'Force Overwrite' is not set": http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_rn_known_issues.html
10-28-2015
10:28 PM
3 Kudos
This error likely occurs because the target Hive service already has a version of that table, but with different metadata properties. To overwrite it, select the "Force Overwrite" option in the Hive replication schedule. How many times do you see this, and how many tables are affected by this error?
10-21-2015
06:38 PM
Does your "cluster" definition look like this? The CDH property should be the version, not the entire parcel name:

cluster {
    products {
        CDH: 5.4.7
    }
    parcelRepositories: ["http://archive.cloudera.com/cdh5/parcels/5.4.7/"]
}
10-10-2015
08:30 PM
Just distributing a parcel is a means to ensure the same file is present on all nodes. Parcels don't deal with running any roles; that is achieved via a custom CSD. https://github.com/cloudera/cm_ext/wiki/CSD-Overview
10-10-2015
08:06 PM
The {latest_supported} URL is new to CM 5.4; it ensures CM does not download/install a parcel that is newer than itself, e.g. CM 5.4 with CDH 5.5 (when it is released). This URL text will not be parsed as valid in CM 5.3, so just replace it with http://archive.cloudera.com/cdh5/parcels/5.3/ (to use the latest 5.3.x CDH). edit: fixed URL
10-10-2015
08:02 PM
No version of CDH supports CentOS 6.7 as of now. It would be best if you applied updates that are specific to CentOS 6.5.
10-07-2015
02:39 PM
I can't understand the issue clearly; please provide a screenshot if you can.
10-07-2015
02:38 PM
Please follow the steps under "Adding ZooKeeper Roles" to add a new ZooKeeper server to the existing service. The doc refers to CM 4.x, but the steps are still valid in 5.x: http://www.cloudera.com/content/cloudera/en/documentation/archives/cloudera-manager-4/v4-5-4/Cloudera-Manager-Enterprise-Edition-User-Guide/cmeeug_topic_5_2.html
10-06-2015
01:24 AM
If the parcel needs to be deployed on RHEL 6, you must use the el6 suffix, and so on for the other platforms. Otherwise CM won't know that a valid parcel exists for the platform the node runs on.
10-05-2015
10:46 PM
The suffix denotes the platform the parcel targets. So el6 is for Red Hat 6 (and CentOS, Oracle Linux, etc.). You can model the file names on the CDH parcels in this URL: http://archive.cloudera.com/cdh5/parcels/5/
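As a small, hypothetical illustration of that convention, the Python sketch below composes parcel file names for a few platforms; the product name and version are placeholders, and the <name>-<version>-<distro>.parcel pattern is modeled on the CDH parcels in the repository linked above.

# Sketch: compose parcel file names per target platform.
PRODUCT = "MYPRODUCT"   # placeholder product name
VERSION = "1.0.0"       # placeholder version string
DISTRO_SUFFIXES = ["el6", "el7", "precise", "trusty", "sles11"]  # example platforms

for distro in DISTRO_SUFFIXES:
    print("%s-%s-%s.parcel" % (PRODUCT, VERSION, distro))
# e.g. MYPRODUCT-1.0.0-el6.parcel targets Red Hat/CentOS 6 hosts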
09-24-2015
07:47 PM
"Name node is in safe mode" means not enough datanodes have reported in with the block reports. Visit the Namenode UI to find out which datanodes have not reported in yet. Alternatively one or more datanodes have lost blocks from their local filesystem. You can run the "sudo -u hdfs hdfs fsck / -listcourrptblocks" command to see which files are corrupt
09-24-2015
04:05 AM
The easiest way is to build your own parcel. It would be best if your custom jars are stored in a location outside of the CDH bits; this way upgrades work smoothly. You can always add the location to yarn.application.classpath in Cloudera Manager, making it easier to support. https://github.com/cloudera/cm_ext/wiki/Parcels:-What-and-Why%3F
08-27-2015
11:39 PM
Please post a screenshot of the error.
08-26-2015
04:25 PM
Do you have the standalone Spark service also present in the cluster? If so, please rename the existing SPARK service to, say, "SPARK standalone" before attempting to add Spark on YARN using "Add a Service".
08-26-2015
04:21 PM
Kafka is currently only available via the parcel distribution, not the RPM/Deb/tarball distributions. Apologies for the inconvenience.
08-19-2015
05:55 PM
Can you run a simple Python command to see if it can bind to port 9000? This will eliminate CDH/CM from the equation.

# python -m SimpleHTTPServer 9000

If this works, then the port is available and the issue is with the agent. If it fails, then the kernel has the port open for a process that netstat doesn't know about; the only solution would be to reboot the host and try again.
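If you'd rather not leave a web server running, here is an equivalent, minimal Python sketch that just tries to bind the port directly; port 9000 is assumed from this thread.

# Sketch: try to bind port 9000 the same way a server process would.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 9000))
    print("Port 9000 is free; the problem is likely with the agent itself.")
except socket.error as err:
    print("Could not bind port 9000: %s" % err)
finally:
    s.close()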
08-18-2015
11:34 PM
Try running "netstat -anp | grep 9000" and see if that tells you which process holds port 9000. Once you figure out how to stop that process, you can try starting the agent again.
08-18-2015
08:23 PM
Sorry to hear you're having trouble upgrading. Please describe what the actual error is. Is the "yum upgrade" itself failing or is Director throwing an error after the upgrade?
08-17-2015
05:27 PM
Please have a read of this page and let us know if you still need assistance: http://www.cloudera.com/content/cloudera/en/documentation/cloudera-director/latest/topics/director_upgrade.html
08-13-2015
07:30 PM
50070 and 50470 are the NameNode's HTTP ports; they are not used for the RPC calls that browse HDFS. http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_ports_cdh5.html Have you tried pointing the tool at port 8020 on the NameNode host instead of 50070/50470?
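As a quick, hypothetical check that the RPC port is reachable from the tool's host, something like the Python sketch below could be used; the hostname is a placeholder, and 8020 is the default NameNode RPC port mentioned above.

# Sketch: test TCP reachability of the NameNode RPC port (default 8020).
import socket

NAMENODE_HOST = "namenode.example.com"  # placeholder; use your NameNode host
try:
    conn = socket.create_connection((NAMENODE_HOST, 8020), timeout=5)
    print("Connected to %s:8020; point the tool at this port instead of 50070." % NAMENODE_HOST)
    conn.close()
except socket.error as err:
    print("Could not reach %s:8020: %s" % (NAMENODE_HOST, err))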
08-13-2015
07:20 PM
Got it; I wasn't aware the hosts are not connected to the Internet. We have the procedure for this documented already:

Creating and Using a Package Repository for Cloudera Manager
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_create_local_package_repo.html

Creating a Local Yum Repository
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_yumrepo_local_create.html

edit: changed the second link