Member since
08-16-2016
48
Posts
14
Kudos Received
9
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 641 | 02-11-2019 05:40 PM
 | 3502 | 05-15-2017 03:07 PM
 | 3736 | 01-26-2017 02:05 PM
 | 17221 | 01-20-2017 03:17 PM
 | 5595 | 01-20-2017 03:12 PM
06-14-2019
04:40 AM
I could not connect to Hue at all, so I restarted all services (in Cloudera Manager, click the arrow to the right of the cluster and choose Restart). The last service it restarts is Hue. Once it completes, click the Hue tab and the "new Hue" will show.
05-20-2019
07:24 AM
Hi Dennis, As mentioned in the (edited) post, the solution suggested above finally worked for me. Thanks again for the help! Regards, Michal
02-11-2019
05:40 PM
1 Kudo
So the issue is that the environment you created is not a secure environment. You will need to create a new environment that is secure; it is a checkbox during environment creation.
01-25-2019
04:55 AM
1 Kudo
It depends on the data layers in your HDFS directory. For instance, if you have a raw and a standard layer, one common practice is this: raw is the first landing of the data and needs to stay as close to the original as possible; standard is the staging area where the data is converted into different formats, but no semantic changes have been made to it yet. The structure for raw data and metadata is:

raw/businessarea/sourcesystem/data/date&time
raw/businessarea/sourcesystem/meta/date&time

The structure for the standard data/meta folders is:

standard/businessarea/sourcesystem/data/date&time
standard/businessarea/sourcesystem/meta/date&time

These conventions can also help when building Sentry/Ranger policies based on AD groups.
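As a rough illustration of the convention above, a small helper can build these paths; the layer, business-area, and source-system names here are hypothetical examples, and the timestamp format is one possible choice:

from datetime import datetime

def layer_path(layer, business_area, source_system, kind, ts=None):
    """Build a path following the layer/businessarea/sourcesystem/kind/datetime convention."""
    ts = ts or datetime.now()
    return "/{0}/{1}/{2}/{3}/{4}".format(
        layer, business_area, source_system, kind, ts.strftime("%Y%m%d_%H%M%S"))

# Example: landing paths for a hypothetical "sales" area fed by an "erp" source
print(layer_path("raw", "sales", "erp", "data", datetime(2019, 1, 25, 4, 55)))
# /raw/sales/erp/data/20190125_045500
print(layer_path("standard", "sales", "erp", "meta", datetime(2019, 1, 25, 4, 55)))
# /standard/sales/erp/meta/20190125_045500

On a real cluster you would then create the directories with `hadoop fs -mkdir -p <path>` and attach Sentry/Ranger policies at the layer or business-area level.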
06-29-2018
10:03 AM
You should be able to resolve the PyPI mirrors. You can always download the latest bits from https://console.altus.cloudera.com/downloads/latest-cli as well.
06-13-2018
07:12 PM
So if you go to your subscription and then look at Access control (IAM), you should see a Cloudera Altus application. Also, if your environment is created, you can go ahead and create the cluster and see if that succeeds. Hope this helps. Suraj
03-20-2018
02:45 PM
#!/usr/bin/env python
# Restart the SOLR_SERVER roles on a given list of hosts, one at a time,
# waiting for each role to reach the STARTED state before moving on.
import ssl
import sys
import time
from cm_api.api_client import ApiResource

# The CM server uses a self-signed certificate here; skip verification.
ssl._create_default_https_context = ssl._create_unverified_context

try:
    cm = ApiResource("CM_SERVER", 7183, "admin", "CM_PASS", use_tls=True, version=15)
    cluster = cm.get_cluster("CLUSTER_NAME")
except:
    print "Failed to log into cluster %s" % ("CLUSTER_NAME")
    sys.exit(1)

servers = ["server1.company.com", "server2.company.com", "server3.company.com"]

s = cluster.get_service("solr")
ra = []
for r in s.get_roles_by_type("SOLR_SERVER"):
    hostname = cm.get_host(r.hostRef.hostId).hostname
    if hostname in servers:
        ra.append([hostname, r])
ra.sort()

print "\nWill restart %s SOLR instances" % len(ra)
for hostname, r in ra:
    print "\nRestarting SOLR on %s" % (hostname)
    s.restart_roles(r.name)
    r = s.get_role(r.name)
    wait = time.time() + 180  # three minutes
    while r.roleState != "STARTED":
        print "Role State = %s" % (r.roleState)
        print "Waiting for role state to be STARTED"
        print time.strftime("%H:%M:%S")
        if time.time() > wait:
            print "SOLR failed to restart on %s" % (hostname)
            sys.exit(1)
        time.sleep(10)
        r = s.get_role(r.name)
    print "SOLR restarted on %s" % (hostname)
print "\nAll SOLR roles restarted"
sys.exit(0)
- Tags:
- solr
01-09-2018
12:00 PM
Since this is an external table ( EXTERNAL_TABLE), Hive will not keep any stats on the table since it is assumed that another application is changing the underlying data at will. Why keep stats if we can't trust that the data will be the same in another 5 minutes? For a managed (non-external) table, data is manipulated through Hive SQL statements (LOAD DATA, INSERT, etc.) so the Hive system will know about any changes to the underlying data and can update the stats accordingly. Using the HDFS utilities to check the directory file sizes will give you the most accurate answer.
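Following the suggestion above, `hdfs dfs -du -s <path>` prints the sizes for a table's directory; a small (hypothetical) helper to pull the numbers out of that output might look like this — the warehouse path in the example is made up:

def parse_hdfs_du(line):
    """Parse one line of `hdfs dfs -du -s` output.
    Recent Hadoop prints '<size> <disk-space-consumed> <path>';
    older releases print only '<size> <path>'. Either way the first
    field is the logical size and the last field is the path."""
    parts = line.split()
    return int(parts[0]), parts[-1]

# Sample line as printed by `hdfs dfs -du -s /user/hive/warehouse/mydb.db/ext_table`
size, path = parse_hdfs_du("1099511627776  3298534883328  /user/hive/warehouse/mydb.db/ext_table")
print(size, path)
# 1099511627776 /user/hive/warehouse/mydb.db/ext_table

For a managed table you could instead refresh the stats with `ANALYZE TABLE <tbl> COMPUTE STATISTICS` in Hive.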
10-27-2017
11:33 PM
Thanks, I found the repo in Cloudera, but what I mean is that I cannot find these CSDs on the deployed server.
06-09-2017
03:52 AM
Thanks a lot @adminzeeshan, yes, it was due to OpenJDK. The issue is resolved and the installation is successful. Regards, Shrilesh
05-25-2017
12:50 PM
From the looks of it, the permissions on the file /opt/cloudera/security/CAcerts/cmhost-keystore.jks are incorrect. The process usually runs as the cloudera-scm user, so check the permissions on that file.
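A quick way to inspect the ownership and mode is a few lines of Python; the keystore path is the one from the post, and the throwaway temp file below just stands in for it so the snippet runs anywhere:

import os
import stat
import pwd
import tempfile

def check_keystore(path):
    """Return (owner, mode, owner_readable) for a file, so a
    permissions problem like the one above is easy to spot."""
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    mode = stat.S_IMODE(st.st_mode)
    return owner, oct(mode), bool(mode & stat.S_IRUSR)

# Demonstrated on a throwaway file; on the CM host you would point this at
# /opt/cloudera/security/CAcerts/cmhost-keystore.jks and expect either
# cloudera-scm as the owner or a mode that lets cloudera-scm read it.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
os.chmod(tmp.name, 0o640)
print(check_keystore(tmp.name))

If the owner turns out to be wrong, a `chown cloudera-scm <keystore>` as root followed by a restart of the affected role should clear the error.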
01-31-2017
05:45 PM
@Tim Armstrong Thank you very much for your explanation. 🙂 Gatsby
01-26-2017
09:39 AM
I was able to solve this issue by trying various combinations, and this is how it worked. Can we make all the instances in a cluster spot instances (for testing scenarios only)? The answer is yes. In the configuration file, the key thing to remember is that for the instance attribute of any instance type (say, master) you must include the following keywords:

useSpotInstances: true
spotBidUSDPerHr: 2.760 (the spot price of the instance type you are using)

So the pseudo-structure would look like this:

workers-spot {
    count: 10
    #
    # Minimum number of instances required to set up the cluster.
    # Fail and quit if minCount number of instances is not available in this cloud
    # environment. Else, continue setting up the cluster.
    # minCount is always set to 0 when using spot instances.
    minCount: 0
    instance: ${instances.d24x} {
        useSpotInstances: true   # required for spot instances
        spotBidUSDPerHr: 2.760   # required for spot instances
        tags {
            Name: "regionserver-REPLACE-ME"
            Owner: "owner-REPLACE-ME"
        }
    }
    roles {
        HDFS: [DATANODE]
        YARN: [NODEMANAGER]
        HBASE: [REGIONSERVER]
    }
    # Optional custom role configurations
    # Configuration keys containing periods must be enclosed in double quotes.
    configs {
        HBASE {
            REGIONSERVER {
                hbase_regionserver_handler_count: 64
            }
        }
    }
}
postCreateScripts: ["""#!/bin/sh

Hopefully this helps someone. Thanks, Anil
01-24-2017
08:25 PM
Currently Cloudera supports both Java 1.7 and 1.8. There is no support for Java 1.9; Java 1.9 is not expected to be GA until mid-July.
01-21-2017
07:42 AM
1 Kudo
I would also suggest checking out our community knowledge article How to setup Cloudera Quickstart VM. It has some great tips for a smoother install and covers some of the common questions and issues people have. Best of luck. 🙂
01-19-2017
01:20 AM
Thanks a lot. I got confused between CM and CDH version. It was in fact CM 5.7.0 which was installed on my other machines.
01-17-2017
05:21 PM
Dear surajacharya, thanks, very helpful! Have a good day!
01-17-2017
03:30 PM
1 Kudo
Currently, Cloudera does not have a parcel with R in it. If you are trying to run it with Spark, here is a good discussion about it: https://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/SparkR-in-CDH-5-5/td-p/34602