Member since
04-26-2017
58
Posts
4
Kudos Received
6
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2959 | 12-18-2018 07:30 AM
 | 1882 | 12-04-2018 08:44 AM
 | 7729 | 04-09-2018 01:43 AM
 | 3012 | 03-23-2018 02:23 AM
 | 2806 | 08-09-2017 03:10 AM
08-19-2020
06:42 AM
Hi @rohit19, As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also give you the opportunity to provide details specific to your environment, which could help others give a more accurate answer to your question.
09-25-2019
03:03 AM
Hi Harsha, Thanks for the explanation. To extend the topic, I need a small clarification: we recently implemented Sentry on Impala, and based on the KB article below [1], we can't execute "Invalidate all metadata and rebuild index" or "Perform incremental metadata update", since we don't have access to all the DBs, which is fair. Now my questions are:
1. I am not able to see a new DB in Hue Impala, but I can see it from beeline or impala-shell. How can I fix this?
2. I can execute INVALIDATE METADATA on a table from impala-shell, but I have 50+ DBs and tens of tables in each DB. Is there any option to run INVALIDATE METADATA at the DB level instead of on each individual table?
[1] https://my.cloudera.com/knowledge/INVALIDATE-METADATA--Sentry-Enabled--ERROR?id=71141
Thanks Krishna
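Since Impala has no database-level INVALIDATE METADATA statement, one workaround is to script it per table. A minimal sketch follows; the gen_invalidate helper and the "db.table" list format are my own assumptions for illustration, not anything from the KB article:

```shell
# Hypothetical helper: turn a "db.table" list (one name per line) into
# per-table INVALIDATE METADATA statements, since Impala offers no
# single DB-level invalidate when Sentry restricts global metadata ops.
gen_invalidate() {
  while read -r tbl; do
    [ -n "$tbl" ] && printf 'INVALIDATE METADATA %s;\n' "$tbl"
  done
}

# Example: generate statements for two tables.
printf 'sales.orders\nsales.items\n' | gen_invalidate
# The generated SQL could then be piped on, e.g.: ... | impala-shell -f -
```

The table list itself could be built by running SHOW TABLES IN each database the user is allowed to access.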
05-29-2019
06:02 AM
I'm using Cloudera Manager. Once I added the node, I needed to provide the URL for the Cloudera Manager packages; once that step finished, an automatic step kicked in to distribute the parcels.
02-27-2019
07:42 PM
Any thoughts on my previous query? Can someone let me know how to upgrade an unmanaged cluster from 5.16 to 6.1?
02-12-2019
10:03 AM
Hi @Tulasi, Great to hear the issue got resolved! I will report this internally to our documentation team to see how we can improve it. Thanks, Li
12-04-2018
08:44 AM
1 Kudo
In case anyone comes across this: it only took around 3 seconds per 1,000 rows.
12-04-2018
06:27 AM
2 Kudos
Make sure that the nosuid flag isn't set on the /var (or /var/lib) mount point in /etc/fstab. As of this release, the container-executor has moved to /var/lib/yarn-ce, which for many users will be on a different mount than it was previously (perhaps /opt or /usr). This should probably be in the release notes for v5.16, as it isn't clear that the default location of container-executor has moved, or what implications this may have. Matt
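To check whether a mount carries the nosuid flag, you can inspect the options field (field 4) of the relevant /etc/fstab entry. A minimal sketch, where the has_nosuid helper name and the sample fstab line are illustrative assumptions:

```shell
# Check an fstab-style line for the nosuid mount option.
# Field 4 of /etc/fstab holds the comma-separated options list.
has_nosuid() {
  echo "$1" | awk '{print $4}' | tr ',' '\n' | grep -qx nosuid
}

line='/dev/sdb1 /var ext4 defaults,nosuid 0 2'
if has_nosuid "$line"; then
  echo "nosuid set: setuid container-executor under /var/lib/yarn-ce will fail"
fi
```

On a live system, `findmnt -no OPTIONS --target /var/lib/yarn-ce` shows the options actually in effect, which can differ from what fstab says.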
10-09-2018
09:04 AM
1 Kudo
Setting up the cron job will make this particular error go away, but eventually you are bound to run into a lot of other issues. Feel free to try, though. Also, let me know your experience after trying it 🙂
07-31-2018
06:55 AM
1 Kudo
@chriswalton007 There are different types of latency: NameNode RPC latency, JournalNode fsync latency, network latency, etc. A few points here:
1. Your network latency will vary based on the traffic in your cluster, and it may cause trouble during peak hours.
2. Latency issues can lead to the following: the master daemons expect an update from the child daemons every few seconds, and a master will consider a child unavailable/dead after any delay and look for an alternate. That is unnecessary churn unless there is a real issue with the child.
3. As far as NameNode RPC is concerned, in an HA cluster the active and standby NameNodes have to talk to each other and stay in sync within a few seconds. If they are not in sync and something goes wrong on the active NameNode, the standby becomes active but may not be up to date, which will lead to confusion.
At the end of the day, every second matters in a distributed cluster. In your use case, I'm not sure whether you are going to use Cloudera Director; if so, the link you shared says it will not allow you to create a mixed cloud/on-premise cluster. But if you are going to use a different tool that does allow you to configure a mixed cloud/on-prem cluster, you can go ahead, provided that:
1. you try this in a non-prod environment first, and
2. you have light workloads.
05-01-2018
01:28 AM
The reason you were seeing HdfsParquetTableWriter::ColumnWriter is that I was testing the bug using the syntax: CREATE TABLE db.newTable STORED AS PARQUET AS SELECT a.topLevelField, b.priceFromNestedField FROM db.table a LEFT JOIN a.nestedField b. This was purely to force the bug to occur: if you just ran the SELECT in Hue it would often succeed, because Hue only brings back the first 100 rows; to consistently trigger the crash I had to make Impala read from both Parquet files. No other query was running at the time. Anyway, as Chris says, the bug appears to be fixed in 5.14.2. The job which originally triggered the crash consistently has now been running unchanged over the same source data for 20 hours without a hitch. Thanks for your help, Matt