Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26359 | 03-03-2020 08:12 AM |
| | 16506 | 02-28-2020 10:43 AM |
| | 4790 | 12-16-2019 12:59 PM |
| | 4502 | 11-12-2019 03:28 PM |
| | 6764 | 11-01-2019 09:01 AM |
07-10-2018
04:44 PM
@balusu, Good testing and solid results. So this means that Active Directory has disabled the account. You may need to check the Event Viewer's Security log entries to find out why. Also check the zookeeper credential object in AD and verify that it is enabled, never expires, and that its password does not need to be changed.
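If it is easier to check from a Linux host, something along these lines will show the relevant flags (ldapsearch from openldap-clients; the AD host, bind user, base DN, and account name below are placeholders you would replace with your own). A userAccountControl value with bit 0x2 set means the account is disabled:

```bash
# Hypothetical example -- substitute your own AD host, bind account, base DN,
# and the actual sAMAccountName of the zookeeper credential object.
ldapsearch -LLL -H ldaps://ad.example.com -D 'binduser@EXAMPLE.COM' -W \
  -b 'dc=example,dc=com' '(sAMAccountName=zookeeper)' \
  userAccountControl accountExpires pwdLastSet
```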
07-10-2018
03:53 PM
1 Kudo
@balusu, That's great news, and the new issue seems to be something completely unrelated. When ZooKeeper tries to log in via "kinit", it fails because it cannot reach the KDC port. By default Kerberos uses UDP, so I wonder if UDP packets are being blocked on the way to your KDC. I would try the following:

klist -kt /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' |grep zookeeper| tail -1`/zookeeper.keytab

This should show you the zookeeper principal. Note the principal name, then try running "kinit" like this:

kinit -kt /var/run/cloudera-scm-agent/process/`ls -lrt /var/run/cloudera-scm-agent/process/ | awk '{print $9}' |grep zookeeper| tail -1`/zookeeper.keytab principal_from_klist_output

Even if this succeeds, I'd consider forcing Kerberos clients to use TCP, just to see, by adding this to the [libdefaults] section of /etc/krb5.conf on the ZooKeeper host:

udp_preference_limit = 1

Hope some of this helps you track down the cause.
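For reference, this is all the change amounts to; a minimal sketch of that section of /etc/krb5.conf (your existing [libdefaults] entries stay as they are):

```
# /etc/krb5.conf on the ZooKeeper host -- force Kerberos clients to use TCP
[libdefaults]
    udp_preference_limit = 1
```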
07-10-2018
02:15 PM
@balusu, It appears that you have already modified the script to try to force it to use LDAP instead of LDAPS, which I would guess is why error 50 is being returned by Active Directory. Either way, the main reason we did things this way is to protect the password. If you ship the password in the clear (LDAP), there is an opportunity for it to be intercepted and used to act on the cluster. It isn't "supported," but if you can get it to work that way, it's up to you. If the change doesn't work, you can always revert, use the LDAPS we "hard coded" into the product, and debug this further from the AD side. Let us know how it goes.
07-10-2018
01:57 PM
OOPS... I scanned the post earlier and made the mistake of thinking it was a duplicate! My bad... I moved this to the right place, though, so the Kudu folks can have a look. Cheers, Ben
07-10-2018
01:55 PM
I moved this from the Cloudera Manager message board to the correct board, but it appears this is a duplicate of http://community.cloudera.com/t5/Interactive-Short-cycle-SQL/Apache-kudu/m-p/69597#M4670
07-10-2018
10:51 AM
2 Kudos
@Prav, More or less, I think we are on the same page. One thing to keep in mind is the offset, so that you can make sure you are seeing all the results in the time period. For example:

Return the first 1000 queries, starting from the most recent:
https://hostname:7183/api/v17/clusters/cluster/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-10&limit=1000

Return the next 1000 queries:
https://hostname:7183/api/v17/clusters/cluster/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-10&limit=1000&offset=1000

Return the next 1000:
https://hostname:7183/api/v17/clusters/cluster/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-10&limit=1000&offset=2000

(Keep doing this until you get 0 results.)

If you get 0 results AND you also have a warning, that means you have another partition to traverse. In that case, use the date/time in the warning to populate the "to" parameter in the next query (assuming the warning shows the date/time 2018-07-10T01:16:17.434Z):
https://hostname:7183/api/v17/clusters/cluster/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-10T01:16:17.434Z&limit=1000

If the number of queries returned equals the limit, increment the offset to return the next 1000:
https://hostname:7183/api/v17/clusters/cluster/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-10T01:16:17.434Z&limit=1000&offset=1000

and repeat until 0 queries are returned. If you get another warning date/time, replace the "to" parameter value with it and repeat. If you get 0 results and 0 warnings, there are no more queries to retrieve.

NOTE: While you are running all these queries, running queries may complete, so it is a good idea to specify an initial "to" date that is a little bit in the past if you want consistent results.
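If it helps, here is a rough scripted sketch of that loop. It assumes curl and jq are available; the host, credentials, and time window are placeholders, and the "queries" and "warnings" field names are how I recall the API response, so adjust if yours differ:

```bash
#!/bin/bash
# Sketch only: page through completed Impala queries via the Cloudera Manager API.
HOST="hostname:7183"                    # placeholder CM host
FROM="2018-07-09T12:59:32.776Z"
TO="2018-07-10T00:00:00.000Z"           # start a little in the past for consistent results
LIMIT=1000
OFFSET=0

while true; do
  RESP=$(curl -s -k -u admin:admin \
    "https://${HOST}/api/v17/clusters/cluster/services/impala/impalaQueries?from=${FROM}&to=${TO}&limit=${LIMIT}&offset=${OFFSET}")
  COUNT=$(echo "$RESP" | jq '.queries | length')

  if [ "$COUNT" -eq 0 ]; then
    WARN=$(echo "$RESP" | jq -r '.warnings[0] // empty')
    if [ -n "$WARN" ]; then
      # Scan limit reached: move the "to" boundary back to the timestamp in the
      # warning and start over at offset 0 to traverse the next partition.
      TO=$(echo "$WARN" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:.]+Z')
      OFFSET=0
      continue
    fi
    break    # 0 results and 0 warnings: nothing left to retrieve
  fi

  echo "$RESP" | jq -c '.queries[]'      # process this page of queries
  OFFSET=$((OFFSET + LIMIT))
done
```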
07-10-2018
10:26 AM
@Szczechla, Impala is included in the CDH parcel, so there is no need for the separate Impala parcel. As for the parcel being stuck in "undistributing", that usually indicates that one or more agents cannot perform an action necessary for the operation to complete. The first thing to do is isolate which host or hosts are "stuck". You may be able to see more information in /var/log/cloudera-scm-server/cloudera-scm-server.log, but I am not certain what to look for. Also, I wonder if some process is actually using the 5.9.3 parcel. To check, go to the Parcel Usage page and see whether any clusters listed there show CDH 5.9.3 in use; if so, you may need to shut down the service before undistributing the parcel. Another thing to try is to look at all your agent logs for errors regarding a parcel or directory. After you have cleaned up, restart all your agents as well so they have a fresh view of their parcels.
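On each host you suspect, the checks I have in mind look roughly like this (default agent log path; adjust for your install, and use systemctl on systemd-based hosts):

```bash
# Look for parcel- or directory-related errors in the agent log.
grep -iE 'parcel|error' /var/log/cloudera-scm-agent/cloudera-scm-agent.log | tail -50

# After cleaning up, restart the agent so it gets a fresh view of its parcels.
sudo service cloudera-scm-agent restart    # or: sudo systemctl restart cloudera-scm-agent
```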
07-10-2018
09:41 AM
@prabhat10, Thank you for clarifying. In recent CDH versions there should be no need to edit the "hbase_conf_dir" value, as Cloudera Manager configures that on its own. I would recommend reverting any changes you made while troubleshooting and then reproducing the issue. Explain exactly what you are clicking on in Hue and what happens, so we have some context for helping. Next, look at the /var/log/hue/runcpserver.log file to see what messages or errors occur when Hue attempts to connect to the Thrift server. If you are unsure what to do or what the messages mean, share the portion of your runcpserver.log that covers the time of your attempt to view tables in Hue.
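For example, something along these lines while you reproduce the problem (the log path is the one mentioned above; the grep pattern is just a guess at useful keywords):

```bash
# Watch the Hue log live while you try to view tables in Hue.
tail -f /var/log/hue/runcpserver.log

# Or, after the attempt, pull recent Thrift-related messages and errors.
grep -iE 'thrift|error|exception' /var/log/hue/runcpserver.log | tail -50
```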
07-10-2018
09:27 AM
@ArunAppathurai, Hi, I moved the post to the Sentry Community Board so we can get those with more expertise in this area looking at it. Cloudera does not support Ranger at this time, so I am hoping you can explain your goals and the community may be able to find a way to achieve them via Sentry.
07-09-2018
05:16 PM
@Prav, I am still a tad confused about how this works, but...
- Queries are returned from most recent to least recent.
- The default result limit is 100.
- The default offset is 0.

It seems that if the number of queries in the partition is greater than the "limit" value + offset, then you will get the warning. I suggest playing a bit more with the limit and offset values. For example:

https://hostname:7183/api/v18/clusters/cluster_name/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-09T17:04:32.776Z&limit=1000

Check the number of results returned and the warning. If you do hit a partition, use its date as the "to" value for the next query. Since the queries are listed from most recent, going back in time, the partition date will become the "to" value once you have exhausted all results in the partition. It is more or less this:
- Return queries.
- If the number of queries equals the limit value, set offset = limit + offset and query again.
- If the number of queries is 0 and the warning shows "Impala query scan limit reached", set the "to" date to the value in the warning and continue querying for results as shown above until 0 queries are returned.

I ran out of time today to write this out fully... play a bit with limit and offset and see if it makes sense. Let us know your progress.
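A quick way to eyeball one page of results and any scan-limit warning, assuming curl and jq are on hand and using placeholder credentials (the "queries" and "warnings" field names are how I recall the API response):

```bash
curl -s -k -u admin:admin \
  "https://hostname:7183/api/v18/clusters/cluster_name/services/impala/impalaQueries?from=2018-07-09T12:59:32.776Z&to=2018-07-09T17:04:32.776Z&limit=1000" \
  | jq '{returned: (.queries | length), warnings: .warnings}'
```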