03-14-2021
12:04 PM
@Jay2021 Impala and Hive share a metadata catalog, i.e., the Hive Metastore. When a database or table is created in Hive, it is readily available to Hive users but not to Impala! To successfully query a table or database created in Hive, there is a caveat: you need to run INVALIDATE METADATA from the impala-shell before the table is available for Impala queries. INVALIDATE METADATA marks the table's metadata as stale; the next time the current Impala node performs a query against it, Impala reloads all the metadata the query needs. Without this step, the query fails because Impala does not yet know the table exists. In the common case where you only add new data files to an existing table, you can use REFRESH instead: it reloads the metadata immediately, but only loads the block location data for the newly added data files, making it a less expensive operation overall.

Syntax:

INVALIDATE METADATA [[db_name.]table_name]

Example:

$ impala-shell
> INVALIDATE METADATA new_db_from_hive.new_table_from_hive;
> SHOW TABLES IN new_db_from_hive;
+---------------------+
| new_table_from_hive |
+---------------------+

That should resolve your issue. Happy hadooping!
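If you later just add data files to that same table, the cheaper REFRESH statement has a similar form (a sketch reusing the example table from above):

REFRESH [db_name.]table_name

> REFRESH new_db_from_hive.new_table_from_hive;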
03-13-2021
02:11 PM
1 Kudo
@SnehasishRSC REFRESH, in the common case where you add new data files to an existing table, reloads the metadata immediately, but only loads the block location data for the newly added files, making it a less expensive operation overall. It is recommended to run COMPUTE STATS when roughly 30% of the data in a table has been altered, where altered means the addition or deletion of files/data. INVALIDATE METADATA is a relatively expensive operation compared to the incremental metadata update done by REFRESH, so in the common scenario of adding new data files to an existing table, prefer REFRESH. INVALIDATE METADATA marks the metadata for one or all tables as stale; the next time the Impala service performs a query against a table whose metadata is invalidated, Impala reloads the associated metadata before the query proceeds. Hope that helps
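As a quick sketch of that workflow after a batch load (the database and table names here are hypothetical):

-- new files were added to an existing table: cheap, incremental update
REFRESH sales_db.transactions;

-- once roughly 30% of the data has changed, refresh the optimizer statistics
COMPUTE STATS sales_db.transactions;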
03-01-2021
01:06 PM
@raghurok Bad news: as of February 1, 2021, all downloads of CDH and Cloudera Manager require a username and password and use a modified URL. You must use the modified URL, including the username and password, when downloading the Cloudera repository contents. Hope that helps
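The modified URL simply embeds the credentials, along these lines (a sketch; the username, password, and repo path are placeholders for your own values):

# placeholders only -- substitute your paywall credentials and the repo you need
wget https://<username>:<password>@archive.cloudera.com/p/cm7/7.4.4/redhat7/yum/cloudera-manager.repo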
03-01-2021
12:58 PM
@ryu My advice is: just don't attempt it, because the HDP software is tightly coupled. Rigorous unit testing and compatibility checks are performed before a version is certified. HDP is packaged software: when you upgrade, it's all or nothing. You can't upgrade a single component, except Ambari and the underlying databases for Hive, Oozie, Ranger, etc. Yes, the good old days of real open source are gone. I loved HWX. If you are running production clusters, then you definitely need a subscription. Hope that helps
03-01-2021
12:41 PM
@totti1 You will need to copy hdfs-site.xml and core-site.xml to a local path accessible to your Windows machine, and you will need to update your hosts file entry to make the VM reachable from the Windows machine. You should be able to ping your VM from the Windows machine and vice versa. Edit core-site.xml and hdfs-site.xml and change the FQDN:8020 to an IP, i.e., for a class C network something like 192.168.10.201:8020, then restart the processors and let me know. Hope that helps?
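For reference, the core-site.xml change would look something like this sketch (the IP is just the class C example from above, not your actual address):

<!-- point fs.defaultFS at the VM's IP instead of its FQDN -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.10.201:8020</value>
</property>

and the Windows hosts file (C:\Windows\System32\drivers\etc\hosts) gets a matching line mapping the VM's hostname to that IP.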
03-01-2021
10:24 AM
@Alex_IT From my Oracle knowledge, there are 2 options for migrating the same ORACLE_HOME [DB] from 12c to 19c. If you are running 12.1.0.2, then you have the direct upgrade path; see the attached matrix. With this option, you won't need to change the hostname. The other option is to export your current schemas (CM, Oozie, Hive, Hue, Ranger, etc.), install a fresh Oracle 19c box with an empty database, and import the old schemas. This could be a challenge, as you might have to rebuild indexes or recompile some database packages, but both are doable. Hope that helps
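For the export/import route, a Data Pump sketch (credentials, schema names, and file names are placeholders):

# export the service schemas from the 12c database
expdp system/<password> schemas=CM,OOZIE,HIVE,HUE,RANGER directory=DATA_PUMP_DIR dumpfile=cdh_schemas.dmp logfile=cdh_exp.log

# import them into the empty 19c database
impdp system/<password> schemas=CM,OOZIE,HIVE,HUE,RANGER directory=DATA_PUMP_DIR dumpfile=cdh_schemas.dmp logfile=cdh_imp.log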
03-01-2021
10:11 AM
@totti1 The NiFi cluster is not aware of your Hadoop cluster until you copy these 2 files from your cluster, /etc/hadoop/conf/hdfs-site.xml and /etc/hadoop/conf/core-site.xml, to your local NiFi installation and point the HDFS processors at them:

Hadoop Configuration Resources=/local/dir/hdfs-site.xml,/local/dir/core-site.xml

Set that property on any of the HDFS processors in your processor group. Hope that helps
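Copying the files over might look like this (a sketch; the cluster hostname and local directory are placeholders):

# pull the configs from a cluster node onto the NiFi host
scp user@hadoop-node:/etc/hadoop/conf/hdfs-site.xml /local/dir/
scp user@hadoop-node:/etc/hadoop/conf/core-site.xml /local/dir/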
02-16-2021
11:04 PM
@rohit_sharma Can you change your syntax as below? Note the ZooKeeper ensemble:

bin/kafka-topics.sh --create \
  --zookeeper zk1:2181,zk2:2181,zk3:2181 \
  --topic "topic_name" \
  --partitions 1 \
  --replication-factor 2

Hope that helps
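Once created, you can verify the topic against the same ensemble (a sketch):

bin/kafka-topics.sh --describe \
  --zookeeper zk1:2181,zk2:2181,zk3:2181 \
  --topic "topic_name"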
01-12-2021
11:58 AM
1 Kudo
@zetta4ever In a Hadoop cluster, three types of nodes exist: master, worker, and edge nodes. The distinction of roles helps maintain efficiency. Master nodes control which nodes perform which tasks and what processes run on what nodes. The majority of the work is assigned to worker nodes: they store most of the data and perform most of the calculations. Edge nodes, aka gateway nodes, facilitate communication from end users to master and worker nodes.

The 3 master nodes should carry the NameNode [Active & Standby], YARN ResourceManager [Active & Standby], the ZooKeeper quorum [3 masters], and the other components you intend to install. On the 6 worker nodes, aka slave nodes, you will install the NodeManagers, DataNodes, and all the clients. There is no need to install the clients on the master nodes; some nodes have important tasks, which may impact performance if interrupted.

Edge nodes allow end users to contact worker nodes when necessary, providing a network interface for the cluster without leaving the entire cluster open to communication. That limitation improves reliability and security. As work is evenly distributed between worker nodes, the edge node's role helps avoid data skewing and performance issues.

See my document on edge nodes: https://community.cloudera.com/t5/Support-Questions/Edge-node-or-utility-node-packages/td-p/202164#

Hope that helps
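As a sketch of that layout (the exact service placement is illustrative, based on the roles above):

master1: NameNode (Active), ResourceManager (Standby), ZooKeeper
master2: NameNode (Standby), ResourceManager (Active), ZooKeeper
master3: ZooKeeper, remaining master services (e.g., Hive Metastore, Oozie)
worker1-6: DataNode, NodeManager, clients
edge1: client/gateway configurations only (the users' entry point)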