Member since: 12-10-2015
Posts: 12
Kudos Received: 2
Solutions: 0
10-17-2017
06:56 PM
1. The command below works; I tried it. It also creates the folders 2016 and 2017:

hadoop distcp hdfs://nn1:port/foo/bar/2016 hdfs://nn1:port/foo/bar/2017 hdfs://nn2:port/foo/bar

2. If that is not working, you can try this (both are effectively the same):

hadoop distcp -f hdfs://nn1:port/srclist hdfs://nn2:port/foo/bar

where srclist contains:

hdfs://nn1:port/foo/bar/2016
hdfs://nn1:port/foo/bar/2017
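The second form can be sketched end to end as follows. The nn1/nn2 host:port values and paths are the placeholders from the example above, not a real cluster, so the distcp invocation itself is left commented out:

```shell
# Build the source list file used by "distcp -f".
# Each line becomes one source path for the copy.
cat > srclist <<'EOF'
hdfs://nn1:port/foo/bar/2016
hdfs://nn1:port/foo/bar/2017
EOF

# Sanity check: the list should contain two source paths.
wc -l < srclist

# Upload the list to HDFS and run the copy (needs a live cluster,
# so these lines are commented out):
# hdfs dfs -put -f srclist hdfs://nn1:port/srclist
# hadoop distcp -f hdfs://nn1:port/srclist hdfs://nn2:port/foo/bar
```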
09-08-2017
09:10 AM
@Nilesh Shrimant Good to know that your issue is resolved. It would be wonderful if you could mark the correct answer as "Accepted", so that other HCC users can quickly browse the answered threads.
02-01-2017
10:11 AM
3 Kudos
@Nilesh Shrimant Try creating the table in Parquet format, and set this config:

set hive.fetch.task.conversion=more;

https://issues.apache.org/jira/browse/HIVE-11785

hive> create table repo (lvalue int, charstring string) stored as parquet;
OK
Time taken: 0.34 seconds
hive> load data inpath '/tmp/repo/test.parquet' overwrite into table repo;
Loading data to table default.repo
chgrp: changing ownership of 'hdfs://nameservice1/user/hive/warehouse/repo/test.parquet': User does not belong to hive
Table default.repo stats: [numFiles=1, numRows=0, totalSize=610, rawDataSize=0]
OK
Time taken: 0.732 seconds
hive> set hive.fetch.task.conversion=more;
hive> select * from repo;

Option 2: There is some info here: http://stackoverflow.com/questions/26339564/handling-newline-character-in-hive

Records in Hive are hard-coded to be terminated by the newline character (even though there is a LINES TERMINATED BY clause, it is not implemented).
Write a custom InputFormat that uses a RecordReader that understands non-newline-delimited records. Look at the code for LineReader / LineRecordReader and TextInputFormat. Alternatively, use a format other than text/ASCII, such as Parquet. I would recommend that regardless, as text is probably the worst format you can store data in anyway.
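As a small illustration of what such a record reader does, awk can split input on a non-newline record separator. The Ctrl-A (\001) delimiter and the sample records here are assumed for the sketch, not something the table above actually uses:

```shell
# Two logical records separated by \001 (Ctrl-A); record one contains an
# embedded newline, which a plain line-based reader would mis-split.
printf 'id=1 part-a\npart-b\001id=2 whole\001' > records.dat

# Split on \001 instead of newline, the way a custom RecordReader would.
# NF skips the empty trailing record produced by the final \001.
awk 'BEGIN { RS = "\001" } NF { printf "record %d starts with: %s\n", NR, $1 }' records.dat
```

The same idea carries over to Hadoop: the RecordReader, not the file's newlines, decides where one record ends and the next begins.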
12-17-2016
02:23 PM
1 Kudo
One example:

sqoop import -D oracle.sessionTimeZone=America/Los_Angeles \
  --connect jdbc:oracle:thin:@//db.example.com/foo --table bar

https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_oracle
02-04-2016
10:04 AM
@Gangadhar Kadam, thanks for the suggestion, but we circumvented the problem in an unorthodox way. We took a backup of the table, renamed the partition directories in the Hive warehouse, and ran the msck repair table command, which created new partitions in Hive under the renamed paths. We then recreated directories with the old partition names and removed those partitions with a drop partition command, which deleted their metastore entries.
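The sequence described above can be sketched roughly as follows. The table name (sales), partition values, and warehouse paths are all hypothetical stand-ins, and the exact paths depend on your warehouse location:

```sql
-- 1. Back up the table data (from the shell):
--    hdfs dfs -cp /user/hive/warehouse/sales /tmp/sales_backup
-- 2. Rename the partition directory in the warehouse (from the shell):
--    hdfs dfs -mv /user/hive/warehouse/sales/dt=2016-01-01 \
--                 /user/hive/warehouse/sales/dt=2016-02-01
-- 3. Let Hive discover the renamed directory as a new partition:
msck repair table sales;
-- 4. Recreate an empty directory under the old name (from the shell):
--    hdfs dfs -mkdir /user/hive/warehouse/sales/dt=2016-01-01
--    then drop that partition so its metastore entry is removed:
alter table sales drop partition (dt='2016-01-01');
```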