Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26232 | 03-03-2020 08:12 AM |
| | 16367 | 02-28-2020 10:43 AM |
| | 4703 | 12-16-2019 12:59 PM |
| | 4470 | 11-12-2019 03:28 PM |
| | 6648 | 11-01-2019 09:01 AM |
07-10-2019
03:04 AM
@kernel8liang Apologies for the miscommunication. Please follow the solution provided by @bgooley.
07-09-2019
01:26 PM
1 Kudo
@kal, Yup, you can stop Service Monitor and Host Monitor and then remove the contents of the cloudera-host-monitor and cloudera-service-monitor directories. When you restart Service Monitor and Host Monitor, new indexes will be built from scratch. You will only lose the historical monitoring data.
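A minimal sketch of the procedure, assuming the default storage locations under /var/lib; verify the actual storage directories in each role's configuration before deleting anything:

```
# Stop the Service Monitor and Host Monitor roles from Cloudera Manager first.
# The paths below are assumed defaults; confirm them in each role's storage
# directory setting before removing anything.
sudo rm -rf /var/lib/cloudera-service-monitor/*
sudo rm -rf /var/lib/cloudera-host-monitor/*
# Restart both roles from Cloudera Manager; they rebuild their indexes from
# scratch, so only historical monitoring data is lost.
```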
07-01-2019
07:12 PM
Hi, did you fix this problem? I have the same issue.
06-23-2019
08:37 PM
1 Kudo
This looks like a case of edit logs getting reordered. As @bgooley noted, it is similar to HDFS-12369, where the OP_CLOSE appears after OP_DELETE, causing the file to be absent when replaying the edits.

The simplest fix, depending on whether this is the only instance of the reordering in your edit logs, would be to run the NameNode manually in an edits-recovery mode and "skip" this edit when it catches the error. The rest of the edits should apply normally and let you start up your NameNode. The recovery mode of the NameNode is detailed at https://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-system/

If you're using CM, you'll need to use the NameNode's most recently generated configuration directory under /var/run/cloudera-scm-agent/process/ on the NameNode host as the HADOOP_CONF_DIR, while logged in as the 'hdfs' user, before invoking the manual NameNode startup command. Once you've followed the prompts and the NameNode appears to start up, quit out/kill it and restart it from Cloudera Manager normally.

If you have a Support subscription, I'd recommend filing a case for this, as the process could get more involved depending on how widespread the issue is.
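A hedged sketch of the manual recovery invocation on a CM-managed cluster (the process directory name pattern is illustrative and varies per deployment and restart):

```
# Find the most recent NameNode process directory generated by the CM agent
# (directory name pattern is an assumption; it carries an incrementing prefix).
CONF_DIR=$(ls -dt /var/run/cloudera-scm-agent/process/*NAMENODE* | head -1)

# Run the NameNode in recovery mode as the 'hdfs' user with that configuration;
# answer the prompts to skip the reordered edit when the error is reported.
sudo -u hdfs HADOOP_CONF_DIR="$CONF_DIR" hdfs namenode -recover

# Once recovery completes, start the NameNode normally from Cloudera Manager.
```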
06-18-2019
11:37 AM
1 Kudo
@MichalAR , Right, so SAML can be used for authentication and LDAP for user/group sync. If you are not using LDAP for authentication, then "Create LDAP users on login" won't affect you.

If you want to prevent the creation of a Hue user when a new user logs in, you can set the following:

```
[libsaml]
create_users_on_login=False
```

If you do that, though, you need to be sure that all of your users already exist in Hue before they authenticate; otherwise, they will get an error.

If you would rather leave create_users_on_login set to True but change the default group membership, you can adjust the "default" group assigned to new users:

```
[useradmin]
default_user_group=<name_of_your_preferred_group>
```

That way, you don't prevent users from authenticating via SAML when they don't already exist as Hue users, but you can restrict the resources they can access. It's just another option to consider that may help you achieve the type of configuration you want.
06-13-2019
04:07 AM
@bgooley I need your help once again. This is for Hue connecting to HiveServer2 instances through an haproxy load balancer. Below is the haproxy configuration for HiveServer2 connections from Hue; kindly advise if there is any mistake in it.

```
frontend hivejdbc_front
    bind *:10003
    mode tcp
    option tcplog
    timeout client 720m
    timeout server 720m
    default_backend hive-hue

#---------------------------------------------------------------------
# source balancing between the various backends
#---------------------------------------------------------------------
backend hive-hue
    balance source
    mode tcp
    server hs2_1 a301-9941-0809.ldn.swissbank.com:10001 check
    server hs2_2 a301-9941-1309.ldn.swissbank.com:10001 check
```

I also updated the Hue config property "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini" from the CM UI and added the following; kindly confirm whether it is correct:

```
[beeswax]
hive_server_host=a301-9941-0727.ldn.swissbank.com
hive_server_port=10003
```

Also, please help me with the haproxy config for ODBC connectivity to Hive from BI tools.

- Vijay M
06-06-2019
11:28 PM
yarn logs -applicationId <application ID> should help. This typically occurs due to improper container memory allocation relative to the physical memory available on the cluster.
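For example (the application ID below is hypothetical):

```
# Fetch the aggregated logs for a finished application
yarn logs -applicationId application_1559875200000_0042 > app.log

# Look for container-kill messages indicating a memory limit was exceeded
grep -i "beyond physical memory" app.log
```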
06-03-2019
07:06 PM
Hi Ben, I appreciate your quick and kind response. I have one more question. After reading your answer, I started developing against the time-series API. During programming, I found a problem with data older than about 30 days. I'd like to obtain every single minute's data for the whole period, but only every 10 minutes' (or coarser) data seems to be available when I try to get older resource data. I used the API below. Could you help me figure out how to get per-minute data, such as CPU/memory usage?

```
import time
import datetime

import cm_client  # api_client is assumed to be configured earlier in the script

api_instance = cm_client.TimeSeriesResourceApi(api_client)
from_time = datetime.datetime.fromtimestamp(time.time() - 7776000)  # 90 days ago
to_time = datetime.datetime.fromtimestamp(time.time())
query = "select cpu_user_rate where entityname = 'xx'"

# Retrieve time-series data from the Cloudera Manager (CM) time-series data
# store using a tsquery.
result = api_instance.query_time_series(_from=from_time, query=query, to=to_time)
# , desired_rollup='RAW', must_use_desired_rollup='true')

ts_list = result.items[0]
for ts in ts_list.time_series:
    print(ts.metadata.attributes['entityName'], ts.metadata.metric_name)
    for point in ts.data:
        print(point.timestamp, point.value)
```

I appreciate your response, Ben.
05-24-2019
04:44 PM
@sree3192 , Welcome to the Community. I started a new thread since your output indicates a different issue than the older thread to which you originally replied.

Key information:
- The problem occurs when importing credentials (import_credentials.sh).
- The error is "kinit: Client 'USERNAME-REDACTED' not found in Kerberos database while getting initial credentials".
- The error comes from the MIT Kerberos libraries and means that the user (redacted in the output) cannot be found in the configured KDC.

Please make sure you have created the user principal you specified for Cloudera Manager to use when importing the admin user's keytab. For instance, if you typed in my_cm_user/admin, make sure that your KDC has a principal for that user.
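A quick way to check, sketched with the MIT Kerberos admin tools (the principal name is the illustrative one from above):

```
# On the KDC host, check whether the principal CM was told to use exists
sudo kadmin.local -q "listprincs" | grep my_cm_user

# If it is missing, create it, then retry the credential import
sudo kadmin.local -q "addprinc my_cm_user/admin"
```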