Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 608 | 06-04-2025 11:36 PM |
| | 1168 | 03-23-2025 05:23 AM |
| | 578 | 03-17-2025 10:18 AM |
| | 2172 | 03-05-2025 01:34 PM |
| | 1369 | 03-03-2025 01:09 PM |
03-23-2021 07:50 AM
For future reference, adding here the link to our public documentation on how to connect NiFi with Hive on CDP.
03-22-2021 02:43 AM
Hi @Priya09, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-18-2021 03:47 AM
@Shelton I sent you a request.
03-15-2021 02:03 PM
Thanks @Shelton for the response. I believe I had already tried this. I tried replacing the Ambari Tez configuration again, but I still get the same error. Also, our production cluster currently has the old value, yet for some reason it is working fine.
03-15-2021 01:48 PM
Thank you for your reply. I don't want to use the Spark warehouse; I want to use the Hive warehouse, the global Hive one.
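If the client here is Spark (a guess from the thread; the warehouse path and metastore host below are placeholders), a minimal sketch of pointing a Spark session at the global Hive warehouse instead of the local spark-warehouse directory might look like:

```bash
# Hypothetical launch: use the Hive catalog rather than Spark's in-memory one,
# and point at the cluster's Hive metastore (host/port and path are placeholders)
spark-shell \
  --conf spark.sql.catalogImplementation=hive \
  --conf spark.sql.warehouse.dir=/warehouse/tablespace/managed/hive \
  --conf spark.hadoop.hive.metastore.uris=thrift://metastore-host:9083
```

With spark.sql.catalogImplementation=hive, tables created through Spark SQL land in the Hive metastore and its warehouse directory rather than in a local spark-warehouse folder.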
03-15-2021 07:30 AM
Hoping you might be able to point me in the right direction. I'm experiencing this same error, and "cat /proc/sys/fs/file-max" reports 3136547 (over 1 million). What would you suggest? When I run "cat /proc/<PID>/limits" for the NiFi process ID, it reports a soft and hard limit of 4096 for open files. Does this mean that my NiFi process is limited to 4096 open files at a given time? I'm wondering whether the problem is that the limit is just too low, or more of a NiFi issue where it's failing to close files at a fast enough rate.
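For reference, a minimal sketch of how to inspect this on Linux (the pgrep pattern assumes NiFi's usual main class and is a placeholder; adjust to your install):

```bash
# Find the NiFi process ID (org.apache.nifi.NiFi is the usual main class)
NIFI_PID=$(pgrep -f org.apache.nifi.NiFi | head -n 1)

# Per-process soft and hard limits on open files
grep "Max open files" "/proc/$NIFI_PID/limits"

# How many file descriptors the process holds right now
ls "/proc/$NIFI_PID/fd" | wc -l
```

If the count hovers near the 4096 limit while the system-wide fs.file-max is in the millions, the bottleneck is the per-process limit (nofile in /etc/security/limits.conf or the service unit), not the kernel-wide one.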
03-13-2021 02:11 PM
1 Kudo
@SnehasishRSC In the common case where you add new data files for an existing table, REFRESH reloads the metadata immediately, but it only loads the block location data for the newly added files, making it a less expensive operation overall. It is recommended to run COMPUTE STATS once about 30% of the data in a table has been altered, where "altered" means files/data were added or deleted. INVALIDATE METADATA is a relatively expensive operation compared to the incremental metadata update done by REFRESH, so in the common scenario of adding new data files to an existing table, prefer REFRESH. INVALIDATE METADATA marks the metadata for one or all tables as stale; the next time the Impala service runs a query against a table whose metadata is invalidated, it reloads the associated metadata before the query proceeds. Hope that helps
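A minimal sketch of the three statements in practice, run through impala-shell (the database and table names are placeholders):

```bash
# Cheap, incremental: reload metadata and pick up newly added data files
impala-shell -q "REFRESH mydb.sales"

# Recompute table and column statistics after a sizeable share of the data changed
impala-shell -q "COMPUTE STATS mydb.sales"

# Expensive: mark the metadata stale; a full reload happens on the next query
impala-shell -q "INVALIDATE METADATA mydb.sales"
```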
03-02-2021 09:23 AM
Yes @PabitraDas, it's accessible now. It would be helpful to have a secondary repository, or to have maintenance windows announced.
03-01-2021 01:07 PM
@Shelton Thanks for the advice. Let's say a subscription is currently out of our budget, but we may consider one in the future. Previously, before Hortonworks was acquired by Cloudera, many of the HDP packages were publicly accessible. Since the acquisition, I believe many of the HDP policies changed, making the HDP packages inaccessible unless you have a subscription. Do you have any suggestions for upgrading any Hadoop components while a subscription is out of our budget? If we were to upgrade the Hadoop components, would the best way be to deploy a brand new cluster from open source and migrate the data from the old cluster to the new one? Any advice is much appreciated. Thanks,
03-01-2021 10:24 AM
@Alex_IT From my Oracle knowledge, there are two options for migrating the same ORACLE_HOME [DB] from 12c to 19c. If you are running 12.1.0.2, you have the direct upgrade path; see the attached matrix. With this option, you won't need to change the hostname. The other option is to export your current schemas (CM, Oozie, Hive, Hue, Ranger, etc.), install a fresh Oracle 19c box with an empty database, and import the old schemas. This could be a challenge, as you might have to rebuild indexes or recompile some database packages, but both are doable. Hope that helps
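For the export/import route, a minimal Data Pump sketch (the connect strings, directory object, and schema list are placeholders; the exact schemas depend on your deployment):

```bash
# On the 12c source: export the service schemas with Data Pump
expdp system@ORCL12 schemas=SCM,HIVE,OOZIE,HUE,RANGER \
      directory=DATA_PUMP_DIR dumpfile=cdh_schemas.dmp logfile=exp_cdh.log

# On the fresh 19c target: import the schemas into the empty database
impdp system@ORCL19 schemas=SCM,HIVE,OOZIE,HUE,RANGER \
      directory=DATA_PUMP_DIR dumpfile=cdh_schemas.dmp logfile=imp_cdh.log

# Afterwards, recompile any objects the import left invalid
sqlplus "/ as sysdba" "@?/rdbms/admin/utlrp.sql"
```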