Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3971 | 10-13-2017 09:42 PM |
| | 7460 | 09-14-2017 11:15 AM |
| | 3789 | 09-13-2017 10:35 PM |
| | 6024 | 09-13-2017 10:25 PM |
| | 6595 | 09-13-2017 10:05 PM |
10-23-2017
05:41 AM
1 Kudo
/opt/cloudera/parcels/SPARK2/ should not be used as SPARK_HOME. The correct path to use is /opt/cloudera/parcels/SPARK2/lib/spark2/. We were simply mistaken about using the spark2-xxxx scripts.
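For example, a minimal sketch of pointing a shell at the parcel (the export below is an illustration, not an official requirement):

```
export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
$SPARK_HOME/bin/spark-submit --version   # quick sanity check
```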
10-19-2017
07:15 PM
There are a couple of areas that need tuning at the query level (a short sketch of points 1 and 3 follows the list):
1. Statistics for the table are a must for good performance.
2. When joining two tables, make sure the large table is listed last and the first table is the smaller one.
3. You can also use HINTS to improve query performance.
4. The Hive table's file format is a big factor.
5. Choose carefully between partitioning and bucketing.
6. Allocate adequate memory to HiveServer2 and the metastore.
7. Tune the heap size.
8. Put a load balancer in front of the hosts: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/admin_cm_ha_hosts.html#concept_qkr_bfd_pr
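As a hedged illustration of points 1 and 3 via beeline (the connection URL, table, and column names are hypothetical):

```
# gather table and column statistics so the optimizer can plan the join well
beeline -u jdbc:hive2://hs2host:10000 -e "ANALYZE TABLE web_logs COMPUTE STATISTICS;"
beeline -u jdbc:hive2://hs2host:10000 -e "ANALYZE TABLE web_logs COMPUTE STATISTICS FOR COLUMNS;"

# hint a map-side join of a small dimension table against the large fact table
beeline -u jdbc:hive2://hs2host:10000 \
  -e "SELECT /*+ MAPJOIN(d) */ l.url, d.category FROM small_dim d JOIN web_logs l ON (d.id = l.dim_id);"
```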
10-19-2017
07:25 AM
Also, is there a way to confirm that the CSD file is properly deployed? I also don't see Scala 2.11 libraries under /opt/cloudera/parcels/CDH/jars, only Scala 2.10 libraries. I heard that Scala 2.10 and 2.11 are both installed with CDH 5.7 and later. Shouldn't Scala 2.11 be available? Is this also a cause of the Spark2 service not appearing? I did all the steps as mentioned and they all completed successfully; the Spark2 parcel is activated now.
Regards,
Hitesh
10-16-2017
09:49 AM
@jackyyipjk, The following error indicates that the "hue_hive" user is not authorized to act as a proxy for other users:

Failed to validate proxy privilege of hue_hive for administrator:14:13'

Hue authenticates to Hive as "hue_hive", but it must then act as a proxy for the end user. This is restricted by default. Usually, this can be configured in Cloudera Manager by editing:

HDFS --> Configuration --> Service Wide --> Advanced --> Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml

You can add, for instance:

hadoop.proxyuser.hue_hive.groups = *

XML representation:

```
<property>
  <name>hadoop.proxyuser.hue_hive.groups</name>
  <value>*</value>
</property>
```

The above will allow hue_hive to act as a proxy for any user (including "administrator").
Regards,
Ben
10-13-2017
10:05 PM
Kerberos service principals have three parts: the service name, the hostname, and the realm (domain) name. The hostname must be in the form of a fully qualified domain name. That is why the service is looking for the principal in that format while the keytab does not contain an entry for it. Recreate the keytab file with the principal in the correct format and you should be good.
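As a sketch, recreating the entry with MIT kadmin might look like the following (the service, host, and realm are hypothetical placeholders; substitute your own):

```
kadmin -p admin/admin@EXAMPLE.COM
kadmin:  addprinc -randkey hive/node01.example.com@EXAMPLE.COM
kadmin:  xst -k hive.keytab hive/node01.example.com@EXAMPLE.COM
```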
10-13-2017
09:42 PM
1 Kudo
It is a group. By default, Hadoop creates the user hdfs in the group hdfs. The first statement does make it confusing, but it assumes the defaults, since that is the only user in the group. You could add users to the group as well (not recommended). The last portion referencing the Kerberos principal is just pointing out that it isn't enough to have a user in the superusergroup/supergroup; they also need a valid Kerberos principal. In reality, the users in the group you assign to that property will already have Kerberos principals. I also recommend, as Cloudera does, not using the default hdfs group.
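For illustration, overriding the default superuser group in hdfs-site.xml could look like this (the group name hdfsadmins is a hypothetical example):

```
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hdfsadmins</value>
</property>
```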
10-02-2017
06:58 PM
Hi Penta, Did it work? Actually, I'm facing the same issue, and this is what I have used:

```
a1.sources.Twitter.consumerKey=XXX
a1.sources.Twitter.consumerSecret=XXX
a1.sources.Twitter.accessToken=XXX
a1.sources.Twitter.accessTokenSecret=XXX
```

I am trying to run the Flume agent in the Cloudera VM. Please advise if you or anyone knows the solution. Appreciate your suggestions/help!
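For reference, a typical way to launch such an agent (the config file name twitter.conf is a hypothetical placeholder):

```
flume-ng agent --conf conf --conf-file twitter.conf --name a1 -Dflume.root.logger=INFO,console
```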
09-28-2017
11:37 AM
As an alternative, you could enable LDAP authentication for Impala and then connect to the slave nodes directly, thus bypassing Kerberos and the load balancer.
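A sketch of connecting directly with impala-shell using LDAP authentication (the host name and user are hypothetical):

```
impala-shell -l -u myuser -i impalad01.example.com:21000
```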
09-26-2017
08:36 PM
@mbigelow Any ideas?
09-20-2017
03:21 PM
The snippet posted shows that the tablet server is unable to verify the TLS certificate generated for it because the certificate's 'valid from' field is in the future. That is most likely because the master host's clock is at least 1 second ahead of the tablet server host's clock.

Tablet server TLS certificates are generated by the master when the tablet server connects to the master for the first time after starting up. The tablet server will retry the connection on the next heartbeat to the master, sending a new certificate signing request, and the master will again generate a certificate with a validity date in the future. So I suspect the error will continue to appear even if you restart Kudu; restarting will not help.

You need to synchronize the clocks across the machines in the cluster to within 1 second. If NTP does not work for you, I would recommend at least running 'ntpdate' on every machine of your cluster prior to starting the Kudu servers.
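For example (the NTP server below is a placeholder; use your site's own):

```
# run on every node in the cluster before starting the Kudu servers
sudo ntpdate pool.ntp.org
```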