Member since: 11-10-2015
Posts: 22
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2183 | 04-18-2016 02:55 PM |
 | 2909 | 03-29-2016 01:46 AM |
12-04-2018 07:19 AM
1 Kudo
Hello, we are testing Hive 2.1.1 on Cloudera 6.0.1, but we are getting unexpected behaviour. With some data types we get this error from org.apache.hadoop.hive.ql.exec.DDLTask:
java.lang.UnsupportedOperationException: Parquet does not support date. See HIVE-6384
However, the issue HIVE-6384 is marked as resolved since Hive 1.2. To replicate the problem, for example:
create table testdate ( part int , a date ) STORED AS PARQUET;
We tried this on two clusters, both upgraded from 5.x. Thank you in advance
Labels:
- Apache Hive
09-15-2016 01:21 AM
Hello, this is the official documentation: CDH 5.8 official upgrade doc. I think the critical point is the Kafka upgrade from version 0.8 to 0.9, if you have a custom application that uses Kafka as a distributed queue system. Regards
09-14-2016 12:40 PM
Hello, could you post the log from the directory /var/log/cloudera-scm-agent ? Regards
09-14-2016 12:36 PM
Hello, Gateway is a particular type of role: it marks a host that will receive a client configuration. It is not a service with a running process, which is why it is shown in gray. Regards
05-19-2016 05:47 AM
Hello, are you using the Cloudera Labs (CLABS) Phoenix parcel? Which version of it? Kind Regards
04-27-2016 07:56 AM
Hello, yarn makes three checks (source code):
- compare the user name with the string "root" ( strcmp(user, "root") == 0 )
- verify whether the user is whitelisted ( !is_whitelisted(user) )
- check the uid of the user against the minimum uid ( user_info->pw_uid < min_uid )
For now the only workaround I found is to create a new user with UID and GID equal to 0, insert the name of that user in the whitelist, and set the minimum user id to 0. There is an important motivation to use root: if you need to use distcp to back up to a target location that is an NFS filesystem, or a shared filesystem mounted locally on the datanode/worker node. In that case, if you run the job as a normal user, it is not possible to change the owner of the files, so the distcp backup will fail. Obviously if you run it as root it will fail too, because of the hard-coded check. Kind Regards
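The three checks above can be sketched in Python as a simplified model (this is only an illustration; the real logic is in YARN's C container-executor, and the whitelist contents and minimum uid here are hypothetical):

```python
# Simplified model of the three container-executor checks described above.
# Illustration only, not the actual YARN C source.
MIN_USER_ID = 1000          # corresponds to min.user.id
WHITELIST = {"backupsvc"}   # hypothetical allowed.system.users entry

def check_user(name, uid, min_uid=MIN_USER_ID, whitelist=WHITELIST):
    """Return True if the user would be allowed to launch containers."""
    if name == "root":                            # strcmp(user, "root") == 0
        return False
    if name not in whitelist and uid < min_uid:   # !is_whitelisted && pw_uid < min_uid
        return False
    return True

print(check_user("root", 0))        # the name "root" is always rejected
print(check_user("hdfs", 500))      # uid below min.user.id and not whitelisted
print(check_user("backupsvc", 0))   # whitelisting bypasses the uid check
```

This also shows why the workaround works: a user with UID/GID 0 that is not literally named root passes the name check, and whitelisting it (or setting the minimum user id to 0) passes the uid check.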
04-18-2016 02:55 PM
Workaround: insert the following rows in the hue.ini safety valve in CM:
[hadoop]
[[hdfs_clusters]]
[[[default]]]
webhdfs_url=http://loadbalancer_httpfs_server_fwdn_name:14000/webhdfs/v1/
04-12-2016 01:44 AM
Hello Darren, could you explain in more detail what the problems with the PostgreSQL license are? I am also in the PostgreSQL community, so if possible I will help with it. Kind Regards
03-29-2016 01:46 AM
moved to http://community.cloudera.com/t5/Cloudera-Manager-Installation/CM-5-5-failed-to-validate-hue-configuration-with-httpfs-load/m-p/39115#U39115
03-29-2016 01:45 AM
Hello, we have deployed two HttpFS instances in CDH 5.5.2 with CM 5.5.3, balanced the two services with an external load balancer, and populated the "HttpFS Load Balancer" field (CM => HDFS => scope = HttpFS) with loadbalancer:port. We tested it with curl and it works like a charm. After that, in CM => Hue it is possible to choose the load balancer for browsing HDFS from Hue; however, if it is selected, CM gives a "configuration error: null". As a workaround we used the safety valve to define the load balancer web URL. Any idea? Have we hit a bug in CM? Kind Regards
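For reference, the curl test we ran boils down to hitting the WebHDFS REST endpoint through the load balancer. A small Python sketch of how that URL is composed (the host name, port, and hdfs user below are placeholders for your environment):

```python
# Build the WebHDFS REST URL served by HttpFS / the load balancer.
# Host, port, and user are placeholders; adapt them to your cluster.
from urllib.parse import urlencode

def webhdfs_url(host, path="/", user="hdfs", port=14000, op="LISTSTATUS"):
    """Compose the URL that a curl smoke test against HttpFS would request."""
    query = urlencode({"op": op, "user.name": user})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

print(webhdfs_url("loadbalancer"))
# http://loadbalancer:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs
```

The printed URL is what we passed to curl to verify that the balanced HttpFS endpoint answers before wiring it into Hue.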