Member since
11-12-2018
218
Posts
178
Kudos Received
35
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 280 | 08-08-2025 04:22 PM |
| | 349 | 07-11-2025 08:48 PM |
| | 553 | 07-09-2025 09:33 PM |
| | 1081 | 04-26-2024 02:20 AM |
| | 1427 | 04-18-2024 12:35 PM |
12-26-2018
07:48 AM
3 Kudos
@Jack From your output above I can see it's a MANAGED_TABLE. If the table has TBLPROPERTIES ("auto.purge"="true"), the previous data of the table is not moved to Trash when an INSERT OVERWRITE query is run against the table. This behaviour applies only to managed tables and is turned off when the "auto.purge" property is unset or set to false. For more detail, see HIVE-15880.
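For reference, a minimal sketch of checking and enabling the property from the Hive CLI; the table name my_managed_table is just a placeholder:

# Hypothetical table name; run from a host with the hive client installed.
# Inspect the current TBLPROPERTIES (look for auto.purge in the output).
hive -e "SHOW CREATE TABLE my_managed_table;"

# Enable auto.purge so INSERT OVERWRITE does not move the old data to Trash.
hive -e "ALTER TABLE my_managed_table SET TBLPROPERTIES ('auto.purge'='true');"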
12-26-2018
01:10 PM
Where are its log files generated? I checked in /var/logs/superset but I didn't find anything.
12-26-2018
06:02 AM
3 Kudos
@Michael Bronson As for the automatic clean-up not getting triggered, it may be due to (or at least related to) this unresolved bug reported against YARN: https://issues.apache.org/jira/browse/YARN-4540
12-27-2018
10:49 AM
I know, I just wanted to avoid managing multiple configurations.
12-25-2018
07:41 AM
2 Kudos
If the issue persists, can you please attach the full error logs so we can debug further?
12-04-2018
11:45 AM
2 Kudos
For 20 Kafka machines, I would suggest going with a 3-node ZooKeeper ensemble.
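For illustration only, a minimal sketch of the ensemble section of zoo.cfg on each of the three servers; the hostnames and file paths are placeholders for whatever your environment uses:

# Hypothetical hostnames zk1..zk3; 2888/3888 are the usual peer/election ports.
# Every ZooKeeper server lists all ensemble members.
cat >> /etc/zookeeper/conf/zoo.cfg <<'EOF'
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
EOF
# On each node, the myid file in the dataDir must contain that node's id (1, 2 or 3).

On an Ambari-managed cluster you would make this change through the ZooKeeper config screen rather than editing the file by hand.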
12-04-2018
12:41 PM
# Lists the installed HDFS components and the HDP version they point to
hdp-select status | grep -i hdfs
hadoop-hdfs-client - 2.6.4.0-91
hadoop-hdfs-datanode - 2.6.4.0-91
11-26-2018
02:13 PM
2 Kudos
@vamsi valiveti Shuffling is the process of transferring data from the mappers to the reducers, so it is necessary for the reducers; otherwise they would have no input (or at least no input from every mapper). Shuffling can start even before the map phase has finished, to save some time. That's why you can see a reduce status greater than 0% (but less than 33%) while the map status is not yet 100%.

Sorting saves time for the reducer by helping it easily distinguish when a new reduce task should start: to put it simply, it starts a new reduce task when the next key in the sorted input data is different from the previous one. Each reduce task takes a list of key-value pairs, but it has to call the reduce() method, which takes a key plus a list of values as input, so it has to group the values by key. That is easy to do when the input data is pre-sorted (locally) in the map phase and simply merge-sorted in the reduce phase (since the reducers get data from many mappers). A great source of information for these steps is the Yahoo tutorial.

Note that shuffling and sorting are not performed at all if you specify zero reducers (setNumReduceTasks(0)). In that case the MapReduce job stops at the map phase, and the map phase does not include any kind of sorting (so even the map phase is faster). Please accept the answer you found most useful.
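To make the last point concrete, here is a rough sketch of a map-only Hadoop Streaming job with zero reducers; the jar path and HDFS directories are placeholders and will differ per cluster:

# Placeholder paths; on HDP the streaming jar typically lives under /usr/hdp/current/hadoop-mapreduce-client/.
# mapreduce.job.reduces=0 skips shuffle and sort entirely: each mapper writes its
# output straight to HDFS as unsorted part-m-* files.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
  -D mapreduce.job.reduces=0 \
  -input /user/test/input \
  -output /user/test/output_map_only \
  -mapper /bin/cat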
11-24-2018
06:58 AM
Hi @Jagadeesan A S, thanks, that was it. However, since I'm using the Sandbox, I discovered I can only change the settings in Ambari: each time I edited hdfs-site.xml directly, it was overwritten when I restarted.
11-25-2018
10:25 AM
3 Kudos
@raja reddy
You can copy the HDFS files from your dev cluster to the prod cluster, re-create the Hive tables on the prod cluster, and then recover the partition metadata with the MSCK REPAIR TABLE command. For re-creating the Hive tables, you can get the CREATE statement of each table by running SHOW CREATE TABLE <table_name> on your dev cluster.
The following are the high-level steps involved in a Hive migration (a consolidated command sketch follows after these steps):
Use the distcp command to copy the data in the Hive warehouse database directory (/user/hive/warehouse) from the dev cluster to the prod cluster.
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/administration/content/using_distcp.html
Once the files are moved to the new prod cluster, take the DDL from the dev cluster and create the Hive tables on the prod cluster (i.e., SHOW CREATE TABLE <table_name>). https://community.hortonworks.com/articles/107762/how-to-extract-all-hive-tables-ddl.html
Run a metastore check with MSCK REPAIR TABLE, which adds metadata to the Hive metastore for partitions that don't already have it. https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)
If the clusters are Kerberized, you can refer to the link below for configuring distcp.
https://community.hortonworks.com/content/supportkb/151079/configure-distcp-between-two-clusters-with-kerbero.html
Note: there's no need for an export, because you can copy the data directly between the two clusters' HDFS. Please accept the answer you found most useful.
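For reference, a rough end-to-end sketch of the steps above as shell commands; the NameNode addresses, database name, and table name are placeholders and will differ in your environment:

# 1. Copy the warehouse data with distcp (nn-dev / nn-prod are hypothetical NameNode hosts).
hadoop distcp hdfs://nn-dev:8020/user/hive/warehouse/mydb.db \
              hdfs://nn-prod:8020/user/hive/warehouse/mydb.db

# 2. Extract the DDL on the dev cluster and replay it against the prod cluster.
hive -e "SHOW CREATE TABLE mydb.my_table;" > my_table_ddl.sql
hive -f my_table_ddl.sql

# 3. Recover partition metadata on the prod cluster for each partitioned table.
hive -e "MSCK REPAIR TABLE mydb.my_table;"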