Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 08-28-2018 02:00 AM |
| | 2697 | 07-31-2018 06:55 AM |
| | 5688 | 07-26-2018 03:02 AM |
| | 2988 | 07-19-2018 02:30 AM |
| | 6466 | 05-21-2018 03:42 AM |
03-16-2017
02:03 PM
@cjervis I am also getting a similar issue while downloading data from the Solr dashboard. Hue doesn't have any option for this in its Solr settings, and there is no option under CM -> Solr -> Configuration either. Is there a reference we can raise with the Solr team?
03-16-2017
07:40 AM
@UjjwalRana It seems your issue is an access problem: "Access denied for user 'APP'@'storage'". Please refer to the link below for an answer: http://stackoverflow.com/questions/23040084/hive-connectivity-to-mysql-access-denied-for-user-hivelocalhost-hive
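If it is the same root cause as in that answer, the fix is to grant the metastore user access from the connecting host. A minimal sketch, taking the user and host from the error message and assuming a metastore database named `metastore` and a placeholder password (all assumptions; substitute your own):

```bash
# Grant the metastore user access from the host named in the error message.
# 'APP'@'storage', the database name, and the password are all placeholders.
mysql -u root -p -e "
  GRANT ALL PRIVILEGES ON metastore.* TO 'APP'@'storage' IDENTIFIED BY 'secret';
  FLUSH PRIVILEGES;"
```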
03-15-2017
04:46 PM
@dwill Please go to the YARN application monitor page that you mentioned above; there is a link called 'Finished' on the left side.
03-15-2017
12:05 PM
@dwill The job might already have completed, so check for it under the Finished link. Alternatively, when you run the job on the command line it prints the URL to track the job, for example:
INFO mapreduce.Job: The url to track the job: http://ipaddress:8088/proxy/application_1489467591893_1234/
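You can also list completed applications from the command line with the YARN CLI; a minimal sketch (the application ID below is just the example from the log line above):

```bash
# List applications that have finished; the output includes each tracking URL.
yarn application -list -appStates FINISHED

# Or query one specific application by ID (ID taken from the example above).
yarn application -status application_1489467591893_1234
```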
03-13-2017
10:45 AM
1 Kudo
@wenjie Please try this: https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_mc_adding_hosts.html
03-09-2017
12:31 PM
1 Kudo
@vsreddy For object-based security you have to implement Sentry:
1. Install Kerberos (prerequisite for Sentry).
2. Enable Kerberos authentication for Hadoop (note: installing Kerberos is a separate step from enabling it for Hadoop): http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_intro_kerb.html
3. Add the Sentry service to the cluster.
4. Enable the Sentry service for Hive & Impala: http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_sentry_service.html
5. Create the necessary groups and users in the OS and match the same in Hue. You can try this manually for a few users/groups for testing purposes. For example, for role creation see https://community.cloudera.com/t5/Security-Apache-Sentry/How-to-create-the-following-user-roles/m-p/49374 and the sketch after this list.
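To illustrate step 5, here is a minimal sketch of creating and granting a Sentry role through beeline; the connection string, role name, group name, and database name are all assumptions:

```bash
# Connect to HiveServer2 with Sentry enabled (host and Kerberos principal
# are placeholders; use your cluster's values).
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM" -e "
  CREATE ROLE analyst_role;                             -- hypothetical role name
  GRANT ROLE analyst_role TO GROUP analysts;            -- OS/Hue group from step 5
  GRANT SELECT ON DATABASE sales TO ROLE analyst_role;  -- hypothetical database
"
```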
03-08-2017
12:27 PM
Interesting!! Both links are from Cloudera. For Hive, as mentioned above, there is no need to assume anything; we can confirm it by running hive> SET mapred.output.compression.codec; For Impala, since it doesn't use MapReduce, we need to go by the link that you mentioned.
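As a sketch, the same check can also be run non-interactively from a shell; SET echoes the property back as key=value:

```bash
# Print the codec Hive will use for job output
# (non-interactive form of the hive> SET command above).
hive -e "SET mapred.output.compression.codec;"
```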
03-08-2017
11:58 AM
1. List of compression codecs and the default:
gzip - org.apache.hadoop.io.compress.GzipCodec
bzip2 - org.apache.hadoop.io.compress.BZip2Codec
LZO - com.hadoop.compression.lzo.LzopCodec
Snappy - org.apache.hadoop.io.compress.SnappyCodec
Deflate - org.apache.hadoop.io.compress.DeflateCodec
From the above list, Snappy is NOT the default; DeflateCodec is. You can confirm this by running hive> SET mapred.output.compression.codec; Refer to this link for the list of compression types: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/introduction_compression.html#concept_wlk_hgy_pv__section_sth_1rx_pv
2. Refer to the link below to understand how to set up Snappy (see also the sketch that follows): https://www.cloudera.com/documentation/enterprise/5-9-x/topics/introduction_compression_snappy.html#topic_23_5
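As a sketch of point 2, Snappy can be enabled for a single Hive session by setting the output codec before the query; the table name and output directory below are hypothetical:

```bash
# Enable Snappy-compressed job output for one Hive session only.
# '/tmp/snappy_out' and sample_table are placeholders; adjust to your data.
hive -e "
  SET hive.exec.compress.output=true;
  SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
  INSERT OVERWRITE DIRECTORY '/tmp/snappy_out' SELECT * FROM sample_table;
"
```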
03-08-2017
06:55 AM
2 Kudos
@yexianyi The actual answer to your question is that you need to change the owner/group of /user/customer.tbl.1 so it is accessible by hive/impala. In addition, the default Cloudera-recommended path for Hive/Impala tables is "/user/hive/warehouse/". So in your case, create a DB called customer under that default path as follows, make sure its owner/group is accessible by hive/impala, and try again:
hdfs dfs -ls /user/hive/warehouse/customer.db
hdfs dfs -ls /user/hive/warehouse
drwxrwxrwt - hive hive 0 2016-11-25 15:11 /user/hive/warehouse/customer.db
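A minimal sketch of those two fixes, assuming the hdfs superuser and a hive:hive owner/group (both assumptions; adjust to your cluster):

```bash
# Make the staged file readable by Hive/Impala (owner/group are assumptions).
sudo -u hdfs hdfs dfs -chown hive:hive /user/customer.tbl.1

# Create the database directory under the recommended warehouse path
# and hand it to the hive user/group as well.
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/warehouse/customer.db
sudo -u hdfs hdfs dfs -chown hive:hive /user/hive/warehouse/customer.db
```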
03-07-2017
07:23 AM
1 Kudo
@guillaume_vande You cannot go by the roles alone; in addition, you need to check the memory consumed by each role on each node, because it may not always be balanced across them. To check that, as mentioned in the warning message, go to:
CM -> Hosts -> Node01 (or the relevant host) -> Resources -> Memory -> collect the memory consumed by each role -> manually sum
CM -> Hosts -> Node02 (or the relevant host) -> Resources -> Memory -> collect the memory consumed by each role -> manually sum
Thanks
Kumar
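If you want to cross-check outside Cloudera Manager, a rough sketch is to sum the resident memory of the role JVMs on each host; the process-name patterns below are assumptions, so adjust them to the roles that node actually runs:

```bash
# Sum resident memory (RSS, reported by ps in KiB) of Hadoop role processes
# on this host. The grep pattern is an assumption; extend it with your roles.
ps -eo rss,args | grep -Ei "java .*(datanode|nodemanager|regionserver)" | grep -v grep \
  | awk '{sum += $1} END {printf "total RSS: %.1f GiB\n", sum/1024/1024}'
```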