Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3926 | 10-13-2017 09:42 PM
 | 7342 | 09-14-2017 11:15 AM
 | 3734 | 09-13-2017 10:35 PM
 | 5924 | 09-13-2017 10:25 PM
 | 6498 | 09-13-2017 10:05 PM
02-28-2017
08:52 AM
1 Kudo
Hi @admcdh,

It sounds as if you are looking to export event data to a flat file. If so, you can use the "events" API like this:

http://my_cmhost.example.com:7180/api/v15/events (your API version may differ)

See: https://cloudera.github.io/cm_api/apidocs/v15/index.html

Ben
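As a sketch of how such an export might be scripted (the hostname, port, and filter parameters below are placeholders, and a real deployment would also need admin credentials and possibly HTTPS):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def events_url(cm_host, api_version="v15", **filters):
    # Build the Cloudera Manager events endpoint URL; optional
    # filters are appended as query parameters.
    base = f"http://{cm_host}:7180/api/{api_version}/events"
    return base + ("?" + urlencode(sorted(filters.items())) if filters else "")

def export_events(cm_host, out_path):
    # Fetch the events as JSON and dump them to a flat file.
    # (Add an Authorization header for a secured cluster.)
    with urlopen(events_url(cm_host)) as resp:
        events = json.load(resp)
    with open(out_path, "w") as f:
        json.dump(events, f, indent=2)
```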
02-24-2017
05:22 PM
Hi @IgorYakushin,

To add to what @mbigelow mentioned: you can enable Kerberos without using TLS to secure communication between your agents and Cloudera Manager, but that would allow the Kerberos keytabs to be transmitted from Cloudera Manager to your agents in the clear (risking a malicious party gaining access to your keytabs). Most of the security you will likely need is taken care of by enabling TLS for agent communication in this section: Configuring TLS Encryption for Cloudera Manager Agents. This will encrypt communication when the agent gets the keytabs and other files from CM.

If you want more security by having the agents verify Cloudera Manager's certificate signer and hostname, then you can configure the trust file for each agent (to trust the CM signer).

In summary, you don't need to have TLS enabled to enable Kerberos. If you need to protect the keytabs, enable TLS encryption for agents. If you need higher security by having the agents trust the signer of the Cloudera Manager server certificate, you can proceed with the other steps: https://www.cloudera.com/documentation/enterprise/latest/topics/how_to_configure_cm_tls.html#topic_3

Ben
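As a rough illustration of the agent-side settings involved (option names and paths as I recall them; verify against the linked documentation for your CM version), these live in the agent's config.ini:

```ini
; /etc/cloudera-scm-agent/config.ini (sketch; paths are examples)
[Security]
; Encrypt agent <-> Cloudera Manager traffic (protects keytabs in transit)
use_tls=1
; Optional, for higher security: have the agent verify the signer of
; the Cloudera Manager server certificate
verify_cert_file=/opt/cloudera/security/pki/rootca.pem
```

The first option alone covers the keytab-in-transit concern; the second adds signer verification as described in the linked steps.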
02-24-2017
02:51 PM
@Piks, I think I have seen this before. Check the /opt directory on the ZooKeeper hosts to make sure that it has "execute" permission for others. I reproduced what you see by setting the following:

drwxr-xr--. 3 root root 21 Feb 7 13:50 opt

Try running:

chmod 755 /opt

so that it looks like this:

drwxr-xr-x. 3 root root 21 Feb 7 13:50 opt

Try starting the services again after that.

Regards,
Ben
02-23-2017
06:47 PM
Thomas, you have a legitimate request and concern. First, there is no perfectly fool-proof solution, because resource consumption depends somewhat on what happens at runtime, and not all memory consumption is tracked by Impala (but most of it is). We are constantly making improvements in this area, though.

1. I'd recommend fixing num_scanner_threads for your queries. A different number of scanner threads can result in different memory consumption from run to run (and depending on what else is going on in the system at the time).

2. The operators of a query do not run one by one. Some of them run concurrently (e.g. join builds may execute concurrently), so just looking at the highest peak in the exec summary is not enough. Taking the sum of the peaks over all operators is a safer bet, but tends to overestimate the actual consumption.

Hope this helps!
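To illustrate point 2 with a back-of-the-envelope sketch (the "Peak Mem" string format here is an assumption; check the exact formatting in your own exec summaries):

```python
UNITS = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30}

def parse_mem(s):
    # Parse a peak-memory string like "512.00 MB" into bytes.
    value, unit = s.split()
    return int(float(value) * UNITS[unit])

def conservative_estimate(operator_peaks):
    # Summing per-operator peaks overestimates the true peak
    # (operators rarely all hit their peaks at once), but it is a
    # safer bound than taking only the single highest operator.
    return sum(parse_mem(s) for s in operator_peaks)
```

For point 1, the scanner thread count can be pinned per session in impala-shell with `SET NUM_SCANNER_THREADS=N;` before running the query.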
02-23-2017
12:49 PM
Digging into the cluster, I found that one of the applications running outside the Hadoop cluster has clients that run hdfs dfs -put against the cluster. These clients did not have an hdfs-site.xml, so they picked up the cluster's default replication factor. What did I do? I tested hdfs dfs -put from a client server inside my cluster and from the client outside the cluster, and noticed that the outside client put files with replication factor 3. To solve the issue, I added hdfs-site.xml to each of the clients outside the cluster and overrode the default replication factor in that file.
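For anyone else doing the same, the per-client override is the dfs.replication property in hdfs-site.xml (the value 2 below is just an example, not the value from this cluster):

```xml
<!-- hdfs-site.xml on the external client hosts -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

A one-off alternative is to pass the property on the command line, e.g. `hdfs dfs -D dfs.replication=2 -put file /path`.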
02-22-2017
06:13 PM
Maybe not possible. As you know, the upgrade is almost done, and I have already upgraded the HDFS metadata. I think I can't roll back without the Cloudera Manager Enterprise version.
02-22-2017
01:52 AM
Hi,

Thanks for your answer. Yes, that is what I was thinking. I had used the hdfs and hive users. Since then, I have solved the problem by executing the commands below (from the top of https://www.cloudera.com/documentation/enterprise/5-6-x/topics/sg_sentry_service_config.html):

$ sudo -u hdfs hdfs dfs -chmod -R 777 /user/hive/warehouse
$ sudo -u hdfs hdfs dfs -chown -R hive:hive /user/hive/warehouse
02-21-2017
12:39 PM
The CDH version is actually 5.9.
02-20-2017
06:01 AM
Finally I got my cluster up and running! As @mbigelow said, two of my three JNs were up and running, but the dfs.namenode.shared.edits.dir property was misdeclared in hdfs-site.xml. After changing it, the NameNode service started! Now everything appears to be in order. I hope my problem can help this community. Thanks @saranvisa and @mbigelow!
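For anyone hitting the same issue: that property takes a qjournal URI listing every JournalNode, so a typo in any host or in the nameservice ID will break NameNode startup. A sketch of the expected shape (hostnames and the nameservice ID below are placeholders):

```xml
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
```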