Member since: 11-21-2017
Posts: 70
Kudos Received: 5
Solutions: 0
02-20-2018
11:29 AM
1 Kudo
I want to get FTP files into HDFS. On the FTP server, files are created in a new date directory every day, and I need to automate this job. What would be the best way to do this? One possible approach is sketched after the labels below.
Labels:
- Apache Hadoop
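A minimal sketch of one way to automate this, assuming the FTP server exposes a /data/YYYY-MM-DD/ directory per day (host, credentials, and paths are placeholders). Hadoop's built-in FTPFileSystem lets distcp read ftp:// sources directly, so a small script can copy yesterday's directory and be scheduled with cron or an Oozie coordinator:

#!/bin/bash
# Pull yesterday's FTP date directory into HDFS (all names assumed)
DT=$(date -d "yesterday" +%Y-%m-%d)
SRC="ftp://user:password@ftp.example.com/data/${DT}"
DEST="/landing/data/${DT}"
hdfs dfs -mkdir -p "${DEST}"
# distcp reads the ftp:// source via Hadoop's FTPFileSystem
hadoop distcp "${SRC}" "hdfs://namenode:8020${DEST}"

A cron entry such as 0 1 * * * /opt/scripts/ftp_to_hdfs.sh would run it nightly; an Oozie coordinator is the more robust choice if you need retries and SLA alerts.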
01-24-2018
08:23 AM
The incremental import brings in the updated row, but we already imported an older version of that row in an earlier import, so the table ends up with both versions. How can we avoid this?
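One way to avoid keeping both versions, sketched under assumptions (the last-updated timestamp column lastupdated and the key column journey_id are invented here): Sqoop's lastmodified mode with --merge-key runs a merge job that keeps only the newest copy of each key. Note that this merge-based flow writes to a target directory rather than going through --hive-import:

sqoop import \
  --connect "jdbc:sqlserver://10.21.29.15:1433;database=db;username=ReportingServices;password=ReportingServices" \
  --table JourneyPositions \
  --incremental lastmodified \
  --check-column lastupdated \
  --merge-key journey_id \
  --target-dir /data/journeypositions \
  -m 1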
01-23-2018
12:28 PM
@ssharma Thank you, I will try this.
01-23-2018
11:33 AM
I have an Ambari 2.5.2 cluster and I did not find any Oozie view in it. Is it possible to create an Oozie view in Ambari? If so, what are the steps? One possible route is sketched after the labels below.
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Oozie
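For what it's worth: Ambari 2.5 handles Oozie through the Workflow Manager view, which is not instantiated by default. An instance can be created under Manage Ambari > Views in the UI, or via the views REST API. A sketch follows; the view name WORKFLOW_MANAGER and version 1.0.0 are assumptions, so list the deployed views first to confirm them:

AMBARI=http://ambari-host:8080
# List the views (and versions) actually deployed on this Ambari server
curl -u admin:admin "${AMBARI}/api/v1/views"
# Create a view instance (view name, version, and instance name assumed)
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"ViewInstanceInfo":{"label":"Workflow Manager","description":"Oozie workflow view"}}' \
  "${AMBARI}/api/v1/views/WORKFLOW_MANAGER/versions/1.0.0/instances/WFM"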
01-15-2018
12:00 PM
@Jay Kumar SenSharma Thank you. Do you have any idea about installing NiFi on an HDP cluster?
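Not definitive, but the usual route is the HDF management pack, since NiFi is not part of the HDP stack itself. A rough sketch (the mpack URL is a placeholder; pick the HDF version that matches your Ambari and HDP release):

# Install the HDF management pack on the Ambari server host (URL is a placeholder)
sudo ambari-server install-mpack \
  --mpack=http://public-repo-1.hortonworks.com/HDF/centos7/hdf-ambari-mpack-<version>.tar.gz \
  --verbose
sudo ambari-server restart
# Then in the Ambari UI: Actions -> Add Service -> NiFi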
... View more
01-15-2018
09:50 AM
How can I find the Flume installation directory in an HDP cluster? A few commands that usually locate it are sketched after the labels below.
Labels:
- Apache Flume
- Apache Hadoop
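A short sketch, assuming the standard HDP layout, which installs components under /usr/hdp/<version>/ and points a 'current' symlink at the active version:

# Component symlinks point at the versioned install directory
ls -l /usr/hdp/current/ | grep -i flume
# Resolve the flume-ng client script on the PATH, if any
which flume-ng
# Check for Flume packages installed via RPM
rpm -qa | grep -i flume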
01-10-2018
12:38 PM
Hi @Aditya Sirna, I want to schedule this job, so I cannot hard-code a value like 3; it should pick up the last value dynamically.
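A saved Sqoop job is one way to get that behaviour; this sketch assumes a job name, password file, and target directory (all invented here). The job stores the last imported batchid in the Sqoop metastore, so every run continues from where the previous one stopped, with no hard-coded --last-value:

# Create the job once (--password-file avoids putting the password on the command line)
sqoop job --create journeypositions_incr -- import \
  --connect "jdbc:sqlserver://10.21.29.15:1433;database=db" \
  --username ReportingServices --password-file /user/sqoop/sqoop.pw \
  --table JourneyPositions \
  --incremental append --check-column batchid \
  --target-dir /data/journeypositions -m 1
# Each execution updates the stored last-value; schedule this line with cron or Oozie
sqoop job --exec journeypositions_incr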
... View more
01-10-2018
08:32 AM
Hi @Aditya Sirna, thanks for the response. I need to load a Hive table incrementally; my source table has a batchid column. I tried --incremental append --check-column batchid, but when I run the import a second time the records are doubled. How can I achieve this without duplicates? The command I used:

sqoop import --connect "jdbc:sqlserver://10.21.29.15:1433;database=db;username=ReportingServices;password=ReportingServices" --check-column batchid --incremental append -m 1 --hive-table mmidwpresentation.journeypositions_archive --table JourneyPositions --hive-import -- --schema safedrive
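If doubled rows have already landed in the Hive table, a cleanup sketch that keeps one row per key (the key column journey_id is an assumption, and the outer SELECT must list the table's real columns):

hive -e "
INSERT OVERWRITE TABLE mmidwpresentation.journeypositions_archive
SELECT col1, col2, batchid   -- replace with the table's actual columns
FROM (
  SELECT *, row_number() OVER (PARTITION BY journey_id ORDER BY batchid DESC) AS rn
  FROM mmidwpresentation.journeypositions_archive
) t
WHERE rn = 1;
"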
01-10-2018
08:31 AM
Hi @Shu, thanks for the response. I need to load a Hive table incrementally; my source table has a batchid column. I tried --incremental append --check-column batchid, but when I run the import a second time the records are doubled. How can I achieve this without duplicates? The command I used:

sqoop import --connect "jdbc:sqlserver://10.21.29.15:1433;database=db;username=ReportingServices;password=ReportingServices" --check-column batchid --incremental append -m 1 --hive-table mmidwpresentation.journeypositions_archive --table JourneyPositions --hive-import -- --schema safedrive
01-09-2018
08:42 AM
I have one Hive table with historical data, and every day a new Hive table is created with that day's data. I want to load the new data into the historical table without overwriting the older data. A minimal sketch follows after the labels below.
Labels:
- Apache Hive
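A minimal sketch, assuming the historical table and the daily table share the same schema (both table names invented here). Hive's INSERT INTO appends rows and leaves the existing data untouched, unlike INSERT OVERWRITE:

# Append the day's rows to the historical table (names assumed)
hive -e "INSERT INTO TABLE history_table SELECT * FROM daily_table_20180109;"

Partitioning history_table by a load date would also make reruns idempotent, since one day's partition can be rewritten in isolation.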