Member since
09-02-2016
523
Posts
89
Kudos Received
42
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 08-28-2018 02:00 AM |
| | 2696 | 07-31-2018 06:55 AM |
| | 5686 | 07-26-2018 03:02 AM |
| | 2983 | 07-19-2018 02:30 AM |
| | 6466 | 05-21-2018 03:42 AM |
01-17-2018
12:54 PM
Thank you @saranvisa. Thank you @Divyani.
01-11-2018
01:18 PM
So I have been doing some testing today, and I noticed that once I disable impersonation I am able to run queries in Hue via the Impala editor (not sure if LDAP is in action here), BUT I am unable to access Impala via the command line. Maybe Hue needs Sentry after all? Side note: the authorized_proxy_user_config flag is still blank in the impalad startup, so if anyone has LDAP working with Hue and Impala, I would be interested in seeing what yours looks like.
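For reference, a minimal sketch of how that impalad startup flag is typically populated; the `hue=*` value is an illustrative assumption meaning the `hue` service account may impersonate any user:

```
# Illustrative impalad startup flag (assumed value, adjust to your setup):
# lets the 'hue' account act as a proxy for any end user
--authorized_proxy_user_config=hue=*
```

If the flag really is blank in your impalad startup line, Impala will reject impersonated requests from Hue regardless of how LDAP is configured.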
12-28-2017
08:08 AM
@Iron I don't think it is mandatory to enable safe mode during copyToLocal. You can use safe mode to make sure nobody is updating/deleting/inserting data during the copy. I know the difficulties without Cloudera Manager/Hortonworks, etc. A while back, I used the export/import method below for Hive table backups. Again, this exports the data to HDFS, so you still have to use copyToLocal, but the advantage is that it also takes care of the metadata: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport#LanguageManualImportExport-ExportSyntax You can use these options as a temporary solution, but once you start using Cloudera Manager or any other management tool, I would recommend the backup option I mentioned earlier.
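As a rough sketch of the export/import approach from the linked manual page (the table name and HDFS path here are made up for illustration):

```sql
-- Export the table's data AND metadata to an HDFS directory
EXPORT TABLE sales TO '/backup/sales_export';

-- Later, restore it (optionally under a new name) from that directory
IMPORT TABLE sales_restored FROM '/backup/sales_export';
```

After the EXPORT, you would still pull the directory to local disk with something like `hdfs dfs -copyToLocal /backup/sales_export /local/backup/`.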
12-06-2017
01:17 AM
Hey Harsha, I am facing a similar problem with the CDH 5.13 version. I have shared the details here: http://community.cloudera.com/t5/Data-Ingestion-Integration/Problem-in-connecting-to-Hbase-from-scala-code-in-Cloudera/m-p/62519#M2779 Please let me know if there is something wrong that I am doing. Thanks
11-30-2017
12:53 AM
To clarify, we don't have HA set up for MySQL, but we do use an external MySQL database for CDH services, which is located on one of the namenodes (we have an HA Hadoop cluster). The Cloudera Management Services use another MySQL database on another host.
11-27-2017
08:12 PM
Below is an example; let me know if that works for you. You can use range or hash partitioning, and the two can also be combined, or you can do hash partitioning alone using buckets. The table below has a primary key, and we partition on one of its key columns (which is good practice). Note that every column named in the PRIMARY KEY must also be declared, and the key columns must come first: CREATE TABLE customersDetails (
state STRING,
name STRING,
PRIMARY KEY (state, name)
)
PARTITION BY RANGE (state)
(
PARTITION VALUE = 'al',
PARTITION VALUE = 'ak',
PARTITION VALUE = 'wv',
PARTITION VALUE = 'wy'
)
STORED AS KUDU;
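Since the post mentions combining range and hash partitioning, here is a hedged sketch of what that could look like for the same illustrative table (the bucket count of 4 is an arbitrary example value):

```sql
-- Hash partitioning on name spreads rows across 4 buckets
-- within each of the range partitions on state.
CREATE TABLE customersDetailsHashed (
  state STRING,
  name STRING,
  PRIMARY KEY (state, name)
)
PARTITION BY HASH (name) PARTITIONS 4,
RANGE (state)
(
  PARTITION VALUE = 'al',
  PARTITION VALUE = 'ak'
)
STORED AS KUDU;
```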
11-15-2017
07:38 AM
1 Kudo
@hparteaga The correct way is: 1. In Cloudera Manager -> add the Sentry service and make sure it has Hue. 2. Log in to Hue -> go to the Security menu -> it will have a submenu called either Sentry Tables or Hive Tables (the link below explains why it is one or the other). Use this option to set db-, table-, and column-level authorization: http://community.cloudera.com/t5/Security-Apache-Sentry/Hive-Tables-instead-Sentry-Tables/m-p/48740#M190
11-14-2017
01:41 AM
@HarshJ Thanks for the inputs. I checked the heap charts on the JobTracker instance, and it is hitting the maximum heap value frequently and then dropping to a slightly lower value. Also, there has not been any change/increase in load. I checked the JobTracker logs but couldn't find any pauses, as GC logging is not enabled. Can you please let me know what the history retention configurations of the JobTracker are? Can you also suggest how to identify the reason behind GC taking significant time? Thanks, Priya
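As an aside on the GC-logging point above: long pauses are usually diagnosed by enabling GC logging on the JVM. A minimal sketch of the standard HotSpot flags for a Java 7/8-era daemon like the JobTracker (the log path is an illustrative assumption):

```
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/var/log/hadoop/jobtracker-gc.log
```

These would typically be appended to the JobTracker's Java configuration options and take effect after a restart; the resulting log shows the duration of each collection, which makes long pauses easy to spot.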
11-13-2017
11:27 AM
As noted in the previous reply, I did not have any nodes with the Failover Controller role. Importantly, I also had not enabled Automatic Failover despite running in an HA configuration. I went ahead and added the Failover Controller role to both namenodes, the good one and the bad one. After that, I attempted to enable Automatic Failover using the link shown in the screenshot from this post. To do that, however, I needed to first start ZooKeeper. At that point, if I recall correctly, the other namenode was still not active, but I then restarted the entire cluster and the automatic failover kicked in, using the other namenode as the active one and leaving the bad namenode in a stopped state.
11-03-2017
08:16 AM
2 Kudos
@dubislv Please follow these steps: 1. e.g. Impala -> Instances -> Role Groups -> Create (as needed, based on the existing group). 2. e.g. Impala -> Instances -> Role Groups -> click the already existing group (in your case Impala Daemon Default Group) -> select the host -> Actions for Selected -> Move to Different Role Group -> select the newly created group.