Support Questions

Find answers, ask questions, and share your expertise

hive issue while renaming location


I am doing an activity in which I created an external table in one cluster (the source cluster) and created the same table in a destination cluster with a different HDFS location. I then copied the HDFS folder of the source location (the Hive data) to the destination cluster and renamed that folder to the destination table's location path. But when I try to select data in the destination cluster, I get the error below:

FAILED: SemanticException Unable to determine if hdfs://<namenode>:8020/apps/hive/warehouse/test_ext is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://<namenode>:8020/apps/hive/warehouse/test_ext, expected: hdfs://<namenode>:8020

Please find the details below:

1. Created the Hive table in the source cluster:

CREATE EXTERNAL TABLE IF NOT EXISTS test_ext (ID int, DEPT int, NAME string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/poc_hive';

2. Loaded data into the table:

load data local inpath '/tmp/hive_data.txt' into table test_ext;

3. Created the same table in the destination cluster:

CREATE EXTERNAL TABLE IF NOT EXISTS test_ext (ID int, DEPT int, NAME string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/old_hive';

No data was loaded here.

4. Ran distcp from source to destination:

hadoop distcp hdfs://<namenode>:8020/poc_hive hdfs://<namenode>:8020/

5. Renamed /poc_hive to /old_hive on the destination cluster.
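
The exact rename command was not shown; presumably it was something along these lines on the destination cluster:

# rename the copied folder to the path the destination table's LOCATION points at
hdfs dfs -mv /poc_hive /old_hive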

6. When I try to fetch data in the destination cluster, I get an error:

select * from test_ext;

Error:

FAILED: SemanticException Unable to determine if hdfs://<namenode>:8020/apps/hive/warehouse/test_ext is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://<namenode>:8020/apps/hive/warehouse/test_ext, expected: hdfs://<namenode>:8020
1 ACCEPTED SOLUTION


Hi,

I have now done this POC successfully. The error was due to a configuration issue on my side; the approach itself works: create the external table in the source cluster and load the data, create an external table with the same schema in the destination cluster, then use distcp to move the data from source to destination and rename the folder to the one the destination Hive table points to.
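
A minimal sketch of that working sequence (namenode hosts and paths are the placeholders used above):

# on the source cluster: create the external table and load the data (HiveQL as in the question),
# then copy the table's data folder to the destination cluster
hadoop distcp hdfs://<source-namenode>:8020/poc_hive hdfs://<destination-namenode>:8020/

# on the destination cluster: rename the copied folder to the destination table's LOCATION
hdfs dfs -mv /poc_hive /old_hive

# query the table on the destination cluster
hive -e "SELECT * FROM test_ext;"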


5 REPLIES

Master Mentor

@Anurag Mishra

Your error "table is encrypted: java.lang.IllegalArgumentException: Wrong FS" looks similar to the one described in the following Support KB article:

https://community.hortonworks.com/content/supportkb/48759/javalangillegalargumentexception-wrong-fs-...


@Jay Kumar SenSharma

Hi Jay,

Thanks for your reply. I have gone through the above link and have some queries; please help me with the same. As the article explains:

"If you have data in some directories outside of the normal warehouse directory (e.g. /apps/hive/warehouse), you must run the metatool with updateLocations to get those other paths in the FS Roots output."

But when I give a LOCATION (an HDFS path other than /apps/hive/warehouse) while creating an external table, I do not encounter any issue fetching the records. The error only appears when I move the external table's data to another cluster, rename the folder to the LOCATION I gave when creating the table on the destination side, and then try to fetch the data.

My point is: the article says that if I have data in directories outside the normal warehouse directory, I must run the metatool to get those paths into the FS Roots output. Yet creating the table with a data folder outside the warehouse causes no problem when fetching records; it is only after moving the data to another cluster and renaming the folder to that table's LOCATION that fetching throws this error.
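
For reference, the metatool commands the article refers to look roughly like this (run on the Hive metastore host; the namenode URIs are placeholders):

# list the filesystem roots currently recorded in the metastore
hive --service metatool -listFSRoot

# rewrite metastore location URIs from the old namenode to the new one
hive --service metatool -updateLocation hdfs://<new-namenode>:8020 hdfs://<old-namenode>:8020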


@Jay Kumar SenSharma @Geoffrey Shelton Okot @Sindhu

Hi, can anyone please help me with this? Is there another way to get it done?

Thanks in Advance.

Contributor
@Anurag Mishra

Can you post the output of DESCRIBE FORMATTED test_ext from the destination cluster? Also, try dropping and recreating the external table:

CREATE EXTERNAL TABLE IF NOT EXISTS test_ext (ID int, DEPT int, NAME string ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/old_hive';
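
For completeness, a sketch of the check and recreate on the destination cluster (table and path names as above):

# show where the metastore thinks the table's data lives (check the Location field)
hive -e "DESCRIBE FORMATTED test_ext;"

# dropping an EXTERNAL table removes only the metadata; the data under /old_hive is kept
hive -e "DROP TABLE IF EXISTS test_ext;"
# then re-run the CREATE EXTERNAL TABLE statement above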
