Member since
07-01-2015
460
Posts
78
Kudos Received
43
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2089 | 11-26-2019 11:47 PM |
| | 1899 | 11-25-2019 11:44 AM |
| | 11388 | 08-07-2019 12:48 AM |
| | 2985 | 04-17-2019 03:09 AM |
| | 5047 | 02-18-2019 12:23 AM |
11-26-2019
11:47 PM
1 Kudo
The solution is quite simple: I was not aware that service-wide configurations live on services rather than on roles. So the solution is to use the ServicesResourceApi endpoint and its read_service_config method. Something like this:

```python
import cm_client
from cm_client.rest import ApiException

def get_service_config(self, service_name):
    """Returns the configuration of the service."""
    services_instance = cm_client.ServicesResourceApi(self.api)
    view = 'summary'
    try:
        api_response = services_instance.read_service_config(
            self.cluster_name, service_name, view=view)
        return api_response.to_dict()
    except ApiException as exception:
        print(f"Exception when calling ServicesResourceApi->read_service_config: {exception}\n")
```
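For completeness, the returned dictionary can then be reduced to just the explicitly set properties. This is a minimal sketch assuming the usual `{'items': [...]}` shape of `to_dict()`; the sample data below is hypothetical:

```python
def explicit_service_config(config_dict):
    """Return {name: value} for properties that are explicitly set
    (i.e. whose 'value' is not the unset default of None)."""
    return {item['name']: item['value']
            for item in config_dict.get('items', [])
            if item.get('value') is not None}

# Hypothetical sample shaped like read_service_config(...).to_dict()
sample = {'items': [
    {'name': 'hdfs_replication', 'value': '3'},
    {'name': 'core_site_safety_valve', 'value': None},
]}
print(explicit_service_config(sample))  # {'hdfs_replication': '3'}
```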
11-25-2019
11:44 AM
It looks like the Java class com.cloudera.enterprise.dbutil.DbProvisioner expects the user to have superuser privileges on PostgreSQL, so the Create DB and Create Role attributes are not enough (AWS RDS unfortunately does not provide a real superuser). I had to work around the issue by creating the databases upfront.
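For reference, the upfront creation can be scripted. This is a minimal sketch that only generates the SQL you would run as dbroot before launching Director; the role and database names are hypothetical placeholders. It relies on the PostgreSQL rule that a non-superuser can only create a database owned by a role it is a member of, hence the GRANT:

```python
def precreate_statements(db_name, owner):
    """Generate the SQL to pre-create a CM database on RDS, where
    dbroot lacks real superuser and must be granted the owner role."""
    return [
        f'CREATE ROLE {owner} LOGIN;',
        # Without superuser, dbroot must be a member of the owner role
        # before it can create a database owned by it:
        f'GRANT {owner} TO dbroot;',
        f"CREATE DATABASE {db_name} OWNER {owner} ENCODING 'UTF8';",
    ]

for stmt in precreate_statements('scm', 'cmadmin_example'):
    print(stmt)
```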
11-25-2019
03:32 AM
Hi Cloudera,
the Cloudera Altus Director cannot create the databases for CM and throws this error, even though it has root user access to the external AWS RDS PostgreSQL database:
org.postgresql.util.PSQLException: ERROR: must be member of role "cmadmin_dkndwlxo"
I could not find any hint in the docs what this exact role means and why the root user must be member of this role.
Postgres error log:
2019-11-25 11:07:06 UTC:10.150.1.7(43878):dbroot@postgres:[12215]:ERROR: must be member of role "cmadmin_dkndwlxo"
2019-11-25 11:07:06 UTC:10.150.1.7(43878):dbroot@postgres:[12215]:STATEMENT: create database scm_75ilop0jdikuhinujsfs7l5m1n owner cmadmin_dkndwlxo encoding 'UTF8'
2019-11-25 11:07:17 UTC:10.150.1.7(43880):dbroot@postgres:[12313]:ERROR: must be member of role "cmadmin_wrhjespw"
2019-11-25 11:07:17 UTC:10.150.1.7(43880):dbroot@postgres:[12313]:STATEMENT: create database scm_38kegs9qab7j5l6hgqo069h3am owner cmadmin_wrhjespw encoding 'UTF8'
2019-11-25 11:07:28 UTC:10.150.1.7(43882):dbroot@postgres:[12422]:ERROR: must be member of role "cmadmin_kfelwpnh"
2019-11-25 11:07:28 UTC:10.150.1.7(43882):dbroot@postgres:[12422]:STATEMENT: create database scm_5vrk2jc93r9h4nq9n87c3majfp owner cmadmin_kfelwpnh encoding 'UTF8'
2019-11-25 11:07:48 UTC:10.150.1.7(43884):dbroot@postgres:[12703]:ERROR: must be member of role "cmadmin_xxyehrrb"
2019-11-25 11:07:48 UTC:10.150.1.7(43884):dbroot@postgres:[12703]:STATEMENT: create database scm_fprfmbk5dq8n7n659594goeukg owner cmadmin_xxyehrrb encoding 'UTF8'
2019-11-25 11:08:19 UTC:10.150.1.7(43886):dbroot@postgres:[13017]:ERROR: must be member of role "cmadmin_qgathjfw"
2019-11-25 11:08:19 UTC:10.150.1.7(43886):dbroot@postgres:[13017]:STATEMENT: create database scm_fo6j4rn05hdlrid3g0l584urjs owner cmadmin_qgathjfw encoding 'UTF8'
Postgres users:
test=> \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------+-----------------
cmadmin_dkndwlxo | | {}
cmadmin_kfelwpnh | | {}
cmadmin_qgathjfw | | {}
cmadmin_wrhjespw | | {}
cmadmin_xxyehrrb | | {}
dbroot | Create role, Create DB +| {rds_superuser}
| Password valid until infinity |
Any hints?
Thanks
Labels:
- Cloudera Manager
11-20-2019
11:48 AM
Hi,
I am wondering if it is possible to get Service-Wide configurations via the read_config method of the RoleConfigGroupsResourceApi class.
https://archive.cloudera.com/cm6/6.3.0/generic/jar/cm_api/swagger-html-sdk-docs/python/docs/RoleConfigGroupsResourceApi.html#read_config
The read_roles method of RolesResourceApi returns these roles:
CD-HDFS-eHtEMKVf-DATANODE-BASE
CD-HDFS-eHtEMKVf-SECONDARYNAMENODE-BASE
CD-HDFS-eHtEMKVf-HTTPFS-BASE
CD-HDFS-eHtEMKVf-DATANODE-BASE
CD-HDFS-eHtEMKVf-DATANODE-BASE
CD-HDFS-eHtEMKVf-NAMENODE-BASE
But when I query all of these roles, I cannot find the Service-Wide Advanced Configuration Snippet property for core-site.xml.
Reading configuration for CD-HDFS-eHtEMKVf-DATANODE-BASE:

```
{'items': [{'default': None,
            'description': 'For advanced use only, key-value pairs (one on '
                           "each line) to be inserted into a role's "
                           'environment. Applies to configurations of this '
                           'role except client configuration.',
            'display_name': 'DataNode Environment Advanced Configuration '
                            'Snippet (Safety Valve)',
            'name': 'DATANODE_role_env_safety_valve',
            'related_name': '',
            'required': False,
            'sensitive': False,
            'validation_message': None,
            'validation_state': 'OK',
            'validation_warnings_suppressed': False,
            'value': None},
           {'default': '{"critical":"never","warning":"1000000.0"}',
            'description': 'The health test thresholds of the number of blocks '
                           'on a DataNode',
            'display_name': 'DataNode Block Count Thresholds',
            'name': 'datanode_block_count_thresholds',
            'related_name': '',
            'required': False,
            'sensitive': False,
            'validation_message': None,
            'validation_state': 'OK',
            'validation_warnings_suppressed': None,
            'value': None},
           {'default': None,
```
Maybe I should look in other classes? Please advise.
Thanks
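Incidentally, when dumping many role config groups like this, a small helper that searches all of them for a property name saves a lot of scrolling. A minimal sketch assuming the `{'items': [...]}` shape shown above; the sample dump below is a hypothetical excerpt:

```python
def find_property(configs_by_role, needle):
    """Return (role, property name) pairs whose name contains needle."""
    hits = []
    for role, config in configs_by_role.items():
        for item in config.get('items', []):
            if needle in item['name']:
                hits.append((role, item['name']))
    return hits

# Hypothetical dumps keyed by role config group name
dumps = {
    'CD-HDFS-eHtEMKVf-DATANODE-BASE': {'items': [
        {'name': 'DATANODE_role_env_safety_valve', 'value': None},
        {'name': 'datanode_block_count_thresholds', 'value': None},
    ]},
}
print(find_property(dumps, 'safety_valve'))
# [('CD-HDFS-eHtEMKVf-DATANODE-BASE', 'DATANODE_role_env_safety_valve')]
```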
Labels:
- Cloudera Manager
08-07-2019
12:48 AM
1 Kudo
Hi, the probable root cause is that the Spark job submitted by the Jupyter notebook has different memory config parameters. So I don't think the issue is Jupyter, but rather the executor and driver memory settings: YARN is not able to provide enough resources (i.e. memory):

19/08/06 23:10:41 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Check your cluster settings:
- how much memory YARN has allocated to the NodeManagers, and how big a container can be
- what the submit options of your Spark job are
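To make that comparison concrete: the container a Spark executor requests is roughly the executor memory plus an overhead of max(384 MiB, 10% of executor memory), and that total must fit under yarn.scheduler.maximum-allocation-mb. A minimal sketch of the arithmetic; the figures below are hypothetical examples, not values from any particular cluster:

```python
def executor_container_mb(executor_memory_mb, overhead_factor=0.10,
                          min_overhead_mb=384):
    """Approximate YARN container size requested per Spark executor:
    executor memory plus max(384 MiB, 10% of executor memory)."""
    overhead = max(min_overhead_mb, int(executor_memory_mb * overhead_factor))
    return executor_memory_mb + overhead

# Hypothetical cluster limit vs a hypothetical --executor-memory 4g job
yarn_max_allocation_mb = 4096            # yarn.scheduler.maximum-allocation-mb
requested = executor_container_mb(4096)
print(requested, requested <= yarn_max_allocation_mb)  # 4505 False
```

In this hypothetical case the request exceeds the YARN maximum, which would leave the job stuck with the "Initial job has not accepted any resources" warning.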
08-06-2019
08:23 AM
You can use a script like this to create snapshots of old and new files, i.e. find files older than 3 days and files newer than 3 days. Just make sure you use the correct path to the Cloudera jars. In the case of CDH 5.15:

```bash
#!/bin/bash
now=$(date +"%Y-%m-%dT%H:%M:%S")
hdfs dfs -rm /data/cleanup_report/part=older3days/*
hdfs dfs -rm /data/cleanup_report/part=newer3days/*
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime +3 | sed "s/^/${now}\tolder3days\t/" | hadoop fs -put - /data/cleanup_report/part=older3days/data.csv
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime -3 | sed "s/^/${now}\tnewer3days\t/" | hadoop fs -put - /data/cleanup_report/part=newer3days/data.csv
```

Then create an external table with partitions on top of this HDFS folder.
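The old/new split the script performs can be sketched in plain Python, which also documents what -mtime +3 / -mtime -3 mean (modified more / less than 3 whole days ago). The paths and timestamps below are hypothetical; the sketch folds the boundary case into the "newer" bucket for simplicity:

```python
from datetime import datetime, timedelta

def classify(paths_with_mtime, now, days=3):
    """Split (path, mtime) pairs into those modified more than `days`
    days ago (like -mtime +3) and more recently (like -mtime -3)."""
    cutoff = now - timedelta(days=days)
    older = [p for p, m in paths_with_mtime if m < cutoff]
    newer = [p for p, m in paths_with_mtime if m >= cutoff]
    return older, newer

now = datetime(2019, 8, 6, 12, 0)
sample = [('/data/a', datetime(2019, 8, 1)),   # older than 3 days
          ('/data/b', datetime(2019, 8, 5))]   # newer than 3 days
print(classify(sample, now))  # (['/data/a'], ['/data/b'])
```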
08-06-2019
07:23 AM
Hi, I don't think it is so easy to do. At least I tried it once, downloading and compiling from source. That was the easier part: I just had to install some development libraries, gcc and other tools. But the issue is that HUE in CDH runs with specific versions of Python packages, especially pyOpenSSL, lpysaml, asn1crypto and others. The problem was that I had to change (upgrade/downgrade) the system packages to make the "external" HUE work, but then the other services and components stopped working. I am sorry for this generic answer; I don't have the exact details, as I have already deleted that environment. Please let me know if you find any solution to this.
05-15-2019
11:31 PM
Hi Cloudera, I see a lot of these warnings in the Impala Daemon logs:

W0516 07:12:24.227567 1049 ShortCircuitCache.java:826] ShortCircuitCache(0x119fb869): could not load 1399296933_BP-76826636-10.197.31.86-1501521881839 due to InvalidToken exception.

Does it indicate some bad configuration? What can I do to eliminate these warnings? Thanks
Labels:
- Apache Impala
04-17-2019
03:35 AM
1 Kudo
Make sure that the command above returns not just the short name of the server but the fully qualified domain name, in your case "mugzy.c.essential-rider-208218.internal". You can do this by editing the /etc/hosts file, or check the GCP documentation on FQDNs for VMs for your specific Linux OS.
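As a quick sanity check, the difference between a short hostname and an FQDN can be tested programmatically. A minimal sketch using the hostname from this thread plus a hypothetical short form:

```python
def looks_like_fqdn(name):
    """Heuristic: a fully qualified name contains a domain part
    (at least one interior dot); a bare hostname does not."""
    return '.' in name.strip('.')

print(looks_like_fqdn('mugzy'))                                    # False
print(looks_like_fqdn('mugzy.c.essential-rider-208218.internal'))  # True
# On the VM itself, `hostname -f` (or Python's socket.getfqdn())
# should print the FQDN once /etc/hosts is fixed.
```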