Member since
09-02-2016
523
Posts
89
Kudos Received
42
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2006 | 08-28-2018 02:00 AM
| 1666 | 07-31-2018 06:55 AM
| 4451 | 07-26-2018 03:02 AM
| 1902 | 07-19-2018 02:30 AM
| 5138 | 05-21-2018 03:42 AM
04-20-2017
06:47 AM
@onsbt Can you translate your issue into English? Also, if it is not related to Java heap space, I would recommend creating a new thread instead, so that it is easier to track and others can contribute as well.
04-19-2017
07:02 AM
@onsbt In general, a service restart is required after any configuration change. Again, as I mentioned, it is recommended to apply any configuration change via CM.
04-19-2017
06:56 AM
@jpayne1 You can achieve this with Apache Sentry: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/sg_sentry_overview.html
04-19-2017
06:51 AM
@onsbt In general, the path is /etc/hadoop/conf, but I would recommend not updating this file directly; instead, update it via Cloudera Manager -> YARN -> Configuration. If you are not using CM, ask your admin. One more recommendation: you can set those values 'temporarily' and directly in HDFS/Hive, and test to find the suitable value for your environment before you make the permanent change in the configuration file.
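As a sketch of the "try temporarily first" approach: in a Hive (Beeline) session you can override memory-related properties for that session only, with nothing written to /etc/hadoop/conf or Cloudera Manager. The values and the table name below are illustrative assumptions, not recommendations:

```sql
-- Session-level overrides in Hive/Beeline: they apply only to this
-- session, so the cluster-wide configuration is left untouched.
SET mapreduce.map.memory.mb=2048;      -- illustrative value
SET mapreduce.reduce.memory.mb=4096;   -- illustrative value

-- Run a representative query to see whether the new sizes behave well
SELECT COUNT(*) FROM sample_table;     -- hypothetical table
```

Once you find values that work, make them permanent via Cloudera Manager so they survive restarts and apply cluster-wide.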
04-18-2017
01:56 PM
@epowell We can use 15 GB and above for practice purposes, but 64 GB and above is recommended at the enterprise level. To make it clearer, go to Cloudera Manager -> select a Host -> Resources (menu) -> Memory; it will show you the memory consumption of each service. 1. Check it on all the nodes, including the node where you have installed Cloudera Manager. In an enterprise, it is no surprise if the Cloudera Management Service (mgmt) alone consumes up to 20 GB, depending on the traffic. 2. In addition to Impala, the following services also require memory by default: HDFS, YARN, ZooKeeper, Spark, etc. Also, there is a difference between one person using a multi-node cluster (1 to n) and multiple persons using a multi-node cluster (n to n). The 64 GB-and-above figure in the documentation is based on n to n.
04-17-2017
12:41 PM
@SanjeevkishoreY Most of the points are covered by others; my 2 cents: 1. Configure the NameNode and SecondaryNameNode on different servers. Make sure both servers have a similar configuration. 2. Configure Cloudera Manager, Hive, and Impala on different servers. 3. If possible, keep Cloudera Manager and the YARN ResourceManager on different nodes.
04-12-2017
01:24 PM
@KRIJAN This link may help you: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-warning
04-11-2017
02:44 PM
@geko 1. Are you applying Sentry to a file on the local filesystem or in HDFS? If it is local, it will not work (as per my understanding), because Sentry currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala, and HDFS (limited to Hive table data). https://cwiki.apache.org/confluence/display/SENTRY/Sentry+Tutorial 2. If you are referring to HDFS, please ignore my point above and confirm the following: have you completed the HDFS & Sentry synchronization? https://www.cloudera.com/documentation/enterprise/5-6-x/topics/sg_hdfs_sentry_sync.html
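For context on why Sentry is tied to Hive objects rather than arbitrary local files: Sentry permissions are expressed against databases, tables, and URIs through SQL statements run by an admin in Beeline. A typical grant looks like the following sketch; the role, table, and group names are hypothetical:

```sql
-- Sentry authorization is defined on Hive metastore objects,
-- not on local filesystem paths.
CREATE ROLE analyst_role;                               -- hypothetical role
GRANT SELECT ON TABLE sales_db.orders TO ROLE analyst_role;  -- hypothetical table
GRANT ROLE analyst_role TO GROUP analysts;              -- hypothetical group
```

With HDFS/Sentry synchronization enabled, these table-level grants are also reflected as HDFS ACLs on the underlying Hive data files.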
04-10-2017
12:36 PM
@MSharma You have to run the below command before using the table across services: INVALIDATE METADATA [[db_name.]table_name] https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_invalidate_metadata.html
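A minimal usage sketch in impala-shell, assuming a table my_db.events that was just created or loaded through Hive (the database and table names are hypothetical):

```sql
-- In impala-shell: reload the metastore metadata for this one table,
-- so Impala sees objects created or changed outside of Impala.
INVALIDATE METADATA my_db.events;   -- hypothetical db/table

-- The table should now be queryable from Impala
SELECT COUNT(*) FROM my_db.events;
```

Running INVALIDATE METADATA without a table name invalidates metadata for all tables, which is more expensive on a large metastore, so scoping it to the affected table is usually preferable.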
04-06-2017
10:44 AM
1 Kudo
@Vitali1 If you are not using Navigator in your project, then you can: 1. Go to CM -> Management -> Configuration -> "Navigator Audit Server Data Expiration Period" (default 90 days) and reduce it to 45 days or less --or-- 2. Purge the old Navigator history and start again.