Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2093 | 11-26-2019 11:47 PM |
| | 1900 | 11-25-2019 11:44 AM |
| | 11388 | 08-07-2019 12:48 AM |
| | 2985 | 04-17-2019 03:09 AM |
| | 5047 | 02-18-2019 12:23 AM |
01-06-2020
09:26 AM
Hi, as mentioned in the previous posts, have you tried increasing the memory, and did that solve the issue? Please let us know if you are still facing any problems. Thanks, AKR
11-26-2019
11:47 PM
1 Kudo
The solution is quite simple; I was not aware that service-wide configurations live on the service, not on the roles. So the solution is to use the ServicesResourceApi endpoint and its read_service_config method. Something like this:

```python
import cm_client
from cm_client.rest import ApiException


def get_service_config(self, service_name):
    """Returns the service-wide configuration of the given service."""
    services_instance = cm_client.ServicesResourceApi(self.api)
    view = 'summary'
    try:
        api_response = services_instance.read_service_config(
            self.cluster_name, service_name, view=view)
        return api_response.to_dict()
    except ApiException as exception:
        print("Exception when calling "
              f"ServicesResourceApi->read_service_config: {exception}\n")
```
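If it helps, here is a small helper for flattening the returned dictionary into plain name/value pairs. It assumes the usual Swagger-generated shape (`{'items': [{'name': ..., 'value': ..., 'default': ...}, ...]}`); `config_to_map` is just an illustrative name, not part of the cm_client API:

```python
def config_to_map(config_dict):
    """Flatten a read_service_config() .to_dict() result into {name: value},
    falling back to the default when no explicit value is set."""
    result = {}
    for item in config_dict.get('items', []):
        value = item.get('value')
        result[item['name']] = value if value is not None else item.get('default')
    return result
```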
11-25-2019
11:44 AM
It looks like the Java class com.cloudera.enterprise.dbutil.DbProvisioner expects the user to have superuser privileges on PostgreSQL, so the CREATEDB and CREATEROLE privileges alone are not enough (AWS RDS unfortunately does not grant superuser). I had to work around the issue by creating the databases upfront.
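For anyone hitting the same wall, a sketch of generating the statements to pre-create the databases as the RDS master user. The role name, password, and database list below are illustrative examples only (use the service databases your deployment actually needs):

```python
def precreate_statements(owner, databases):
    """Build the CREATE ROLE / CREATE DATABASE statements to run as the
    RDS master user before pointing Cloudera Manager at the instance."""
    # NOTE: 'changeme' is a placeholder password, not a recommendation.
    stmts = [f"CREATE ROLE {owner} LOGIN PASSWORD 'changeme';"]
    for db in databases:
        stmts.append(f"CREATE DATABASE {db} OWNER {owner} ENCODING 'UTF8';")
    return stmts


# Example run with illustrative database names:
for stmt in precreate_statements("scm", ["scm", "amon", "rman", "hue"]):
    print(stmt)
```

Feed the output to psql (or the RDS console) before starting the Cloudera Manager database setup.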
08-12-2019
08:11 PM
Hi @Tomas79, I think I found a way to deal with it. Since we can get Hue's version from the Cloudera Manager web UI, I installed the corresponding version from the tarball source into my local development environment. After finishing development on my local Hue, I first backed up the /opt/cloudera/parcels/CDH/lib/hue folder on the server, then replaced the server's hue/ folder with my local hue/ folder (using tar, lrzsz, rm, unzip, ...). Last but not least, I replaced the new hue/build/ folder with the backed-up hue/build/ folder, because the original component contains a lot of key configuration we need to inherit, since CM starts Hue differently than a local instance does. Then I restarted Hue from the Cloudera Manager web UI, and it works! My new feature is in effect. Please let me know if you have any questions; I'm still exploring this.
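The back-up/replace/restore dance described above can be sketched like this (a simplified model using shutil; the paths and the `swap_hue` helper name are illustrative, not the poster's exact commands):

```python
import shutil
from pathlib import Path


def swap_hue(server_hue: Path, local_hue: Path, backup_dir: Path):
    """Back up the server's hue/ tree, replace it with the locally built one,
    then restore the original hue/build/ so CM-specific configuration survives."""
    shutil.copytree(server_hue, backup_dir)        # 1. back up server hue/
    shutil.rmtree(server_hue)
    shutil.copytree(local_hue, server_hue)         # 2. drop in local hue/
    shutil.rmtree(server_hue / "build")            # 3. restore original build/
    shutil.copytree(backup_dir / "build", server_hue / "build")
```

After the swap, restart Hue from the Cloudera Manager web UI as described above.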
08-06-2019
08:23 AM
You can use a script like this to create snapshots of old and new files, i.e. list files older than 3 days and files newer than 3 days. Just make sure you use the correct path to the Cloudera jars. In the case of CDH 5.15:

```bash
#!/bin/bash
now=$(date +"%Y-%m-%dT%H:%M:%S")
hdfs dfs -rm /data/cleanup_report/part=older3days/*
hdfs dfs -rm /data/cleanup_report/part=newer3days/*
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar \
    org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime +3 \
    | sed "s/^/${now}\tolder3days\t/" \
    | hadoop fs -put - /data/cleanup_report/part=older3days/data.csv
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar \
    org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime -3 \
    | sed "s/^/${now}\tnewer3days\t/" \
    | hadoop fs -put - /data/cleanup_report/part=newer3days/data.csv
```

Then create an external table with partitions on top of this HDFS folder.
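The -mtime +3 / -mtime -3 split the script relies on can be illustrated in plain Python (a local analogy of the age test, not what HdfsFindTool actually runs; `split_by_age` is an illustrative name):

```python
import time


def split_by_age(paths_with_mtime, days=3, now=None):
    """Split (path, mtime_epoch) pairs into those modified more than
    `days` days ago and those modified within the last `days` days."""
    now = time.time() if now is None else now
    cutoff = now - days * 86400  # 86400 seconds per day
    older = [p for p, m in paths_with_mtime if m < cutoff]
    newer = [p for p, m in paths_with_mtime if m >= cutoff]
    return older, newer
```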
05-18-2019
05:02 PM
Hi Tomas, this message is normal behaviour and is expected when the DataNode's security key manager rolls its keys. Clients print it whenever they use an older cached key, but after printing the message the client refetches the new key and the job completes. Since Impala is an HDFS client, there is no cause for concern about this message; it is part of normal operation. We also see it in HBase logs, which is, again, normal. Hope the above helps. Cheers, Eric
05-16-2019
07:07 AM
Could you share what your /var/log/cloudera-scm-agent/certmanager.log looked like after the successful installation, for comparison?
03-29-2019
09:11 AM
1 Kudo
If I understand correctly, you are talking about the logs in the configured --log_dir. By default, Kudu keeps 10 log files per severity level. There is a flag to change that value, but it is currently marked as "experimental". It has been in Kudu for some time, so not promoting it to stable is probably an oversight; I opened an Apache Kudu jira (KUDU-2754) to change it to a stable config. In the meantime, you can use the --max_log_files configuration by unlocking experimental configurations via --unlock_experimental_flags.
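The retention behaviour itself is roughly as follows (a toy model of glog-style pruning, not Kudu's actual implementation; `prune_logs` and the file names are illustrative, and it assumes file names sort chronologically, as timestamped log names do):

```python
def prune_logs(files_by_severity, max_log_files=10):
    """For each severity level, keep only the `max_log_files` most recent
    files and return the list of files that would be deleted."""
    to_delete = []
    for severity, files in files_by_severity.items():
        # All but the lexicographically-last max_log_files entries are excess.
        to_delete.extend(sorted(files)[:-max_log_files])
    return to_delete
```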