Member since: 06-03-2014
Posts: 62
Kudos Received: 3
Solutions: 6

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2854 | 11-30-2017 10:32 AM
 | 4848 | 01-20-2016 05:08 PM
 | 2210 | 01-13-2015 02:42 PM
 | 4527 | 11-12-2014 11:09 AM
 | 10679 | 08-20-2014 09:29 AM
06-18-2015
01:56 PM
In cloudera-scm-server.log I see an odd entry:

2015-06-18 13:52:19,346 INFO ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloaderImpl: Preparing to download: http://archive.cloudera.com/cdh5/parcels/5.4/CDH-5.4.2-1.cdh5.4.2.p0.2-trusty.parcel

My question is: why is CM trying to download the parcel when I have already placed it in the parcel-repo folder? That URL is not open in my firewall, so the download will fail. Shouldn't the install pick the parcel up from the local folder? Kevin
06-18-2015
01:46 PM
Thank you for your help. CM did locate the parcels after I copied them into the parcel-repo folder. I selected the parcels and clicked Continue. However, CM is not showing any progress for the selected parcels, and it seems like the cluster installation has stalled. Do you know what could be happening? Are there logs I can look in to see if there is a problem? I've attached a screenshot. Thanks! Kevin
06-18-2015
11:26 AM
Was CM 5.4.2 pulled? I can't find CM 5.4.2 in the distribution list: http://archive.cloudera.com/cm5/ubuntu/precise/amd64/cm/ Wouldn't I need CM 5.4.2 to distribute CDH 5.4.2? This is the message I get from CM 5.4.1 when I try to distribute the CDH 5.4.2 parcels: "Versions of CDH that are too new for this version of Cloudera Manager (5.4.1) will not be shown." Any suggestions would help. Regards, Kevin
01-13-2015
02:42 PM
The external authentication feature, including LDAP, is available only with a Cloudera Enterprise license. For the Cloudera Enterprise Data Hub Edition Trial, the feature will no longer be available after you end the trial or the trial license expires. To obtain a license for Cloudera Enterprise, fill out this form or call 866-843-7207. After you install a Cloudera Enterprise license, the feature will be available.
01-12-2015
12:50 PM
I am running CDH 5.2 managed by parcels, with CM 5.2. I have a few Java applications that require ZooKeeper, and I don't want to point them at a specific named host because that host could change. Is there a ZooKeeper alias I can use that will direct my requests to the leader of the ZooKeeper quorum? Or can I just point to any of the three ZooKeeper servers and expect my requests to work? What I'm looking for is one name for ZooKeeper that will be correct no matter which hosts the ZooKeeper servers are running on. Any advice would be appreciated.
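For reference, a plain ZooKeeper connection string can already list every server in the quorum, and the client library fails over between them on its own, so no single leader alias is strictly required. A minimal sketch with the stock ZooKeeper Java client, assuming hypothetical hostnames zk1/zk2/zk3 and the default client port 2181:

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkQuorumClient {
    public static void main(String[] args) throws Exception {
        // All three quorum members go into one connection string; the client
        // connects to a live server and reconnects elsewhere if it goes down.
        String connectString =
                "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181";

        final CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(connectString, 30000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (event.getState() == Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });

        connected.await();
        System.out.println("znodes under /: " + zk.getChildren("/", false));
        zk.close();
    }
}
```

Writes are forwarded internally to whichever server is currently the leader, so the client never needs to target the leader explicitly.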
Labels: Apache Zookeeper
01-12-2015
10:43 AM
I would like to use Cloudera Manager to move a Solr server from one host to another host that has much more memory. I am using CDH 5.2 with CM 5.2. My questions: Is it possible to move the Solr server from one host to another? And what about the indexes: how are they moved? Any advice would be appreciated.
01-08-2015
04:44 PM
1 Kudo
I have two clusters behind a firewall, and I would like to run distcp to copy data from one cluster to the other. Which ports should I open in the firewall for this communication? For example, I know I need 50070 for the NameNode, but what other ports are required?
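For context, this is roughly what a distcp launch looks like against the Hadoop 2.x DistCp API that CDH 5 ships. The hdfs:// URIs address the remote NameNode's RPC endpoint (8020 by default in CDH), not the 50070 web UI port, and the map tasks additionally stream blocks directly from the remote DataNodes. A rough sketch, with hypothetical cluster hostnames and paths:

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class DistCpSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Hypothetical clusters and paths; the port here is the NameNode RPC
        // endpoint (8020 by default in CDH), not the 50070 web UI port.
        Path source = new Path("hdfs://nn.cluster-a.example.com:8020/data/events");
        Path target = new Path("hdfs://nn.cluster-b.example.com:8020/data/events");

        DistCpOptions options =
                new DistCpOptions(Collections.singletonList(source), target);

        // execute() submits the MapReduce copy job and blocks until it finishes.
        Job job = new DistCp(conf, options).execute();
        System.out.println("distcp successful: " + job.isSuccessful());
    }
}
```

Roughly speaking, that means the remote NameNode RPC port and the DataNode data-transfer port (50010 by default) also need to be reachable across the firewall, in addition to 50070 if webhdfs or hftp is used.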
Labels: HDFS
01-05-2015
05:25 PM
1 Kudo
Thank you, Gautam, that was very helpful.
01-05-2015
05:16 PM
I run CDH 5.2 and have an odd problem. I am running HBase with seven RegionServers and 1200 regions, but the bulk of the regions are assigned to two or three of the RegionServers instead of being balanced across all of them. This leads to a problem where two of the RegionServers run out of memory and their Java processes fail, while the other RegionServers sit idle. Is there a command I can run to balance the regions across all RegionServers?
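For context, HBase has a region balancer that can be triggered on demand through the client admin API, which is the same thing the `balance_switch true` and `balancer` commands do in the hbase shell. A minimal sketch against the HBase 0.98-era client API that CDH 5.2 ships, assuming an hbase-site.xml with the ZooKeeper quorum is on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RunHBaseBalancer {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml (ZooKeeper quorum, etc.) from the classpath.
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Make sure the balancer is switched on, then ask the master to run it.
            admin.setBalancerRunning(true, true);
            boolean ran = admin.balancer();
            // false typically means the balancer is disabled or regions are in transition.
            System.out.println("Balancer run triggered: " + ran);
        } finally {
            admin.close();
        }
    }
}
```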
Labels: Apache HBase
12-18-2014
09:09 AM
Edmund, when I've seen Pig scripts show 0% complete and never finish, I've usually resolved it by adjusting YARN's resource settings. How many nodes are you running in your cluster? How much memory is available to your nodes? Kevin