Member since: 08-16-2016
Posts: 48
Kudos Received: 15
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1260 | 02-11-2019 05:40 PM |
|  | 5440 | 05-15-2017 03:07 PM |
|  | 6517 | 01-26-2017 02:05 PM |
|  | 21957 | 01-20-2017 03:17 PM |
|  | 7195 | 01-20-2017 03:12 PM |
02-14-2024
01:34 AM
1 Kudo
Hey everyone, just wanted to share my experience with the same Solr Server error message, which I recently encountered. Following @surajacharya's advice above, I compared the permissions on the truststore file between a functioning Solr Server host and the problematic one. The permissions were set to 400 on the problematic host and 644 on the good one. I adjusted the permissions on the truststore file of the problematic host to 644 and then restarted the Solr Server. Voila! The issue was resolved. Just thought I'd share this in case someone else runs into a similar problem.
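For anyone who wants to script the check, here is a minimal sketch; the truststore path is hypothetical, so substitute the one from your Solr TLS configuration:

#!/bin/sh
# Hypothetical path; take the real one from your Solr TLS settings in CM.
TRUSTSTORE=/opt/cloudera/security/jks/truststore.jks

# Compare mode, owner, and group against a working host (run on both).
stat -c '%a %U:%G %n' "$TRUSTSTORE"

# Make the file readable by the solr process user (644), then restart
# the Solr Server role from Cloudera Manager.
chmod 644 "$TRUSTSTORE"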
06-14-2019
04:40 AM
I could not connect to Hue at all, so I restarted all services (Cloudera Manager tab -> arrow to the right -> Restart). The last service it restarts is Hue. Once the restart completes, click on the Hue tab and the "new Hue" will show.
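If you would rather script this, a rough equivalent through the Cloudera Manager REST API might look like the following; the host, credentials, cluster name, and API version here are placeholders:

#!/bin/sh
# Placeholders: substitute your CM host, admin credentials, cluster name,
# and a supported API version.
CM_HOST=cm.example.com
CLUSTER=Cluster1

# Ask CM to restart every service in the cluster; progress can be watched
# in the CM UI under Running Commands.
curl -u admin:admin -X POST \
  "http://$CM_HOST:7180/api/v15/clusters/$CLUSTER/commands/restart"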
05-20-2019
07:24 AM
Hi Dennis, As mentioned in the (edited) post, the solution suggested above finally worked for me. Thanks again for the help! Regards, Michal
02-11-2019
05:40 PM
1 Kudo
So the issue is that the environment you created is not a secure environment. You will need to create a new environment that is secure; it is a checkbox during environment creation.
01-25-2019
04:55 AM
1 Kudo
It depends on the data layers in your HDFS directory structure. For instance, if you have a raw and a standard layer, this would be one common practice. Raw is the first landing zone for data and needs to stay as close to the original data as possible. Standard is the staging area where the data is converted into different formats, while no semantic changes have yet been made to it. The structure for the raw data and metadata is:

raw/businessarea/sourcesystem/data/date&time
raw/businessarea/sourcesystem/meta/date&time

The structure for the standard data/meta folders is:

standard/businessarea/sourcesystem/data/date&time
standard/businessarea/sourcesystem/meta/date&time

These conventions can also help when building Sentry/Ranger policies based on AD groups.
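A minimal sketch of creating this layout in HDFS, assuming an absolute /data root and illustrative business area, source system, and timestamp values:

#!/bin/sh
# Illustrative values; substitute your own business area, source system,
# and load timestamp. The /data root prefix is an assumption.
BA=finance
SRC=sap
TS=2019-01-25T0455

# Raw layer: first landing, kept as close to the source as possible.
hdfs dfs -mkdir -p "/data/raw/$BA/$SRC/data/$TS" "/data/raw/$BA/$SRC/meta/$TS"

# Standard layer: format-converted staging, same folder conventions.
hdfs dfs -mkdir -p "/data/standard/$BA/$SRC/data/$TS" "/data/standard/$BA/$SRC/meta/$TS"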
03-20-2018
02:45 PM
#!/usr/bin/env python
# Rolling restart of selected Solr Server roles via the Cloudera Manager API.
import ssl
import sys
import time

from cm_api.api_client import ApiResource

# Accept the CM server's self-signed TLS certificate.
ssl._create_default_https_context = ssl._create_unverified_context

try:
    cm = ApiResource("CM_SERVER", 7183, "admin", "CM_PASS", use_tls=True, version=15)
    cluster = cm.get_cluster("CLUSTER_NAME")
except Exception:
    print "Failed to log into cluster %s" % ("CLUSTER_NAME")
    sys.exit(1)

# Hosts whose Solr Server roles should be restarted, one at a time.
servers = ["server1.company.com", "server2.company.com", "server3.company.com"]

s = cluster.get_service("solr")

# Collect (hostname, role) pairs for the Solr Servers on the listed hosts.
ra = []
for r in s.get_roles_by_type("SOLR_SERVER"):
    hostname = cm.get_host(r.hostRef.hostId).hostname
    if hostname in servers:
        ra.append([hostname, r])
ra.sort()

print "\nWill restart %s SOLR instances" % len(ra)
for hostname, r in ra:
    print "\nRestarting SOLR on %s" % (hostname)
    s.restart_roles(r.name)
    r = s.get_role(r.name)
    wait = time.time() + 180  # three minutes
    while r.roleState != "STARTED":
        print "Role State = %s" % (r.roleState)
        print "Waiting for role state to be STARTED"
        print time.strftime("%H:%M:%S")
        if time.time() > wait:
            print "SOLR failed to restart on %s" % (hostname)
            sys.exit(1)
        time.sleep(10)
        r = s.get_role(r.name)
    print "SOLR restarted on %s" % (hostname)

print "\nAll SOLR roles restarted"
sys.exit(0)
01-09-2018
12:00 PM
Since this is an external table (EXTERNAL_TABLE), Hive will not keep any stats on the table since it is assumed that another application is changing the underlying data at will. Why keep stats if we can't trust that the data will be the same in another 5 minutes? For a managed (non-external) table, data is manipulated through Hive SQL statements (LOAD DATA, INSERT, etc.) so the Hive system will know about any changes to the underlying data and can update the stats accordingly. Using the HDFS utilities to check the directory file sizes will give you the most accurate answer.
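To check from HDFS, something along these lines should do it; the table path here is hypothetical, so look up the real LOCATION first:

#!/bin/sh
# Find the table's actual LOCATION with: DESCRIBE FORMATTED my_table;
# The path below is only an example.
hdfs dfs -du -s -h /data/external/my_table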
10-27-2017
11:33 PM
Thanks, I found the repo in Cloudera, but what I meant is that I cannot find these CSDs on the deployed server.
01-31-2017
05:45 PM
@Tim Armstrong Thank you very much for your explanation. 🙂 Gatsby
01-26-2017
09:39 AM
I was able to solve this issue by trying various combinations, and this is how it worked. Can we make all the instances in a cluster spot instances (for a testing scenario only)? The answer is yes. In the configuration file, the key thing to remember is that for the instance attribute of any instance group (say, master) you have to include the following keywords:

useSpotInstances: true
spotBidUSDPerHr: 2.760 (this is the spot price of the instance type you are using)

So the pseudo-structure would look like this:

workers-spot {
  count: 10

  # Minimum number of instances required to set up the cluster.
  # Fail and quit if minCount instances are not available in this cloud
  # environment; otherwise, continue setting up the cluster.
  # minCount is always set to 0 when using spot instances.
  minCount: 0

  instance: ${instances.d24x} {
    useSpotInstances: true    # required for spot instances
    spotBidUSDPerHr: 2.760    # required for spot instances
    tags {
      Name: "regionserver-REPLACE-ME"
      Owner: "owner-REPLACE-ME"
    }
  }

  roles {
    HDFS: [DATANODE]
    YARN: [NODEMANAGER]
    HBASE: [REGIONSERVER]
  }

  # Optional custom role configurations.
  # Configuration keys containing periods must be enclosed in double quotes.
  configs {
    HBASE {
      REGIONSERVER {
        hbase_regionserver_handler_count: 64
      }
    }
  }
}

postCreateScripts: ["""#!/bin/sh

Hopefully this helps someone. Thanks, Anil
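If it helps, the Director client can sanity-check a configuration file like this before you bootstrap with it; this assumes the standard client invocation and a config file named cluster.conf:

#!/bin/sh
# Validate the configuration file before using it to bootstrap a cluster.
cloudera-director validate cluster.conf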