Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2268 | 08-28-2018 02:00 AM |
|  | 2095 | 07-31-2018 06:55 AM |
|  | 4974 | 07-26-2018 03:02 AM |
|  | 2353 | 07-19-2018 02:30 AM |
|  | 5748 | 05-21-2018 03:42 AM |
07-28-2019
05:13 PM
@cjervis Very good news, and thanks so much for sharing the happy moment with us! Waiting for the update!! I have gone through the FAQ about the new reputation program... it looks very interesting! Still, I am not 100% clear about my current/existing reputation. I have contributed to the Cloudera Community for 2.5 years with more than 500 posts and received the Champion award for 2017. Unfortunately, I have not been able to contribute recently (since Jan-2019) due to my new role/technologies and some additional work... I hope I can restart my contributions in a couple of months... Now my question is: will my reputation still be valid after you roll out the new reputation system, or will all my effort be voided? Please clarify.
12-11-2018
07:29 AM
1 Kudo
@orak Are you using Cloudera Enterprise by any chance? If so, you can generate a report from CM -> Clusters (top menu) -> Reports -> Directory Usage. For more details, please refer to https://www.cloudera.com/documentation/enterprise/5-13-x/topics/cm_dg_disk_usage_reports.html#cmug_topic_12_1
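If you are not on the Enterprise edition (or simply prefer the command line), a rough equivalent is the HDFS du command. A minimal sketch, assuming /user/hive/warehouse is the tree you want to measure (the path is just an example):
## Per-directory disk usage, human-readable
$ hdfs dfs -du -h /user/hive/warehouse
## Single rolled-up total for the whole tree
$ hdfs dfs -du -s -h /user/hive/warehouse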
10-26-2018
11:24 AM
@DanielWhite I had a similar issue a while back, and below are my findings. Please check the owner of the HDFS folders/files for the corresponding DB that you are trying to delete. If you are the owner and you delete the table/DB from Hive/Impala, it will delete both the metadata and the HDFS files/folders. Whereas if you are not the owner of the HDFS folders/files but were granted access in Hive/Impala to manage the data, deleting it will remove only the metadata, not the underlying folders/files in HDFS. Please try this with a sample DB/table to see the behavior for yourself.
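A minimal sketch of that experiment, assuming a scratch database named testdb, the default warehouse path, and access to beeline (all names here are examples, not from the original thread):
## Check who owns the database directory in HDFS (example path)
$ hdfs dfs -ls /user/hive/warehouse | grep testdb.db
## Drop the database and its tables from Hive
$ beeline -u jdbc:hive2://localhost:10000 -e "DROP DATABASE testdb CASCADE;"
## If you owned the files, the directory is gone; otherwise only the metadata was removed
$ hdfs dfs -ls /user/hive/warehouse | grep testdb.db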
10-23-2018
12:50 PM
1 Kudo
@Broche Please refer to the link below; it may help you: https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_set.html
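For a quick illustration, SET is issued inside an impala-shell session; a minimal sketch (the option values below are examples only):
$ impala-shell -i localhost
## Inside the shell:
> SET;                  -- list all query options with their current values
> SET MEM_LIMIT=2g;     -- cap memory for subsequent queries in this session
> SET EXPLAIN_LEVEL=2;  -- produce more verbose EXPLAIN output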
10-17-2018
07:40 AM
@chriswalton007 According to the link below, if you have CDH 5.7 or above then you can upgrade an existing cluster to Cloudera Enterprise 6. But this applies to the Enterprise edition; are you trying it from the Enterprise edition? https://community.cloudera.com/t5/Community-News-Release/ANNOUNCE-Cloudera-Enterprise-6-0-Released/m-p/79235
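Before planning the upgrade, it is worth confirming the exact CDH version you are running; a quick check from any cluster node:
## The build string includes the CDH release, e.g. 2.6.0-cdh5.13.0
$ hadoop version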
10-04-2018
01:21 PM
@fil You can get this report from Cloudera Navigator. Search by user ID and apply filters as needed.
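If you need the same data programmatically, Navigator also exposes an audit REST API. This is a hedged sketch only: the host, port, API version, credentials, and query syntax below are placeholders that must be verified against the API docs for your Navigator release.
## Hypothetical query: audit events for user 'jdoe'
$ curl -u admin:admin "http://navigator-host:7187/api/v9/audits?query=username%3D%3Djdoe&limit=100"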
09-25-2018
07:45 PM
@mdjedaini This is not really specific to Cloudera, as there are many other tools available in the market for this. I am not sure how big your environment is, but in general, those who run big environments with many nodes use tools like Chef, Puppet, Terraform, Ansible, etc. to achieve your requirement (for the cloud there is a different set of tools, like CloudFormation, etc.). At a high level, you can divide them into two categories, push-based and pull-based:
a. Tools like Puppet and Chef are pull-based: an agent/client on each server periodically checks a central server (the master) for configuration information.
b. Ansible is push-based: the central server pushes configuration to the target servers, so you control when the changes are made. A minimal push-style example is sketched below.
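A minimal sketch of the push model using Ansible ad-hoc commands (the inventory file, group name, and file paths are examples):
## Verify connectivity to every host in the example inventory
$ ansible all -i hosts.ini -m ping
## Push a config file to a hypothetical 'datanodes' group in one shot
$ ansible datanodes -i hosts.ini -m copy -a "src=hdfs-site.xml dest=/etc/hadoop/conf/hdfs-site.xml" --become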
09-06-2018
02:39 AM
@phaothu To do it via CM, log in to CM as admin -> HDFS -> Instances -> 'Federation and High Availability' button -> Actions -> Manual Failover
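For reference, the same failover can be driven from the command line with the haadmin tool; a minimal sketch, assuming your NameNode service IDs are nn1 and nn2 (check dfs.ha.namenodes.* in hdfs-site.xml for the real IDs):
## See which NameNode is currently active
$ hdfs haadmin -getServiceState nn1
$ hdfs haadmin -getServiceState nn2
## Fail over from the active (nn1) to the standby (nn2)
$ hdfs haadmin -failover nn1 nn2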
08-29-2018
03:29 AM
@Matt_ I can give you two easy steps; they may reduce your burden. 1. To list the valid Kerberos principals:
$ cd /var/run/cloudera-scm-agent/process/<pid>-hdfs-DATANODE
$ klist -kt hdfs.keytab
## The klist command will list the valid kerberos principals in the format "hdfs/<NODE_FQDN>@<OUR_REALM>"
2. To kinit with one of the principals listed above:
$ kinit -kt hdfs.keytab <any one of the hdfs principals from the klist output above>
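A concrete run might look like this (the principal and realm below are hypothetical; use one actually printed by klist):
$ kinit -kt hdfs.keytab hdfs/node1.example.com@EXAMPLE.COM
## Verify the ticket was obtained
$ klist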