Member since: 05-18-2016
Posts: 71
Kudos Received: 39
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3490 | 12-16-2016 06:12 PM |
| | 1200 | 11-02-2016 05:35 PM |
| | 4618 | 10-06-2016 04:32 PM |
| | 1953 | 10-06-2016 04:21 PM |
| | 1702 | 09-12-2016 05:16 PM |
08-12-2016
06:21 PM
1 Kudo
In Hadoop, an update is effectively a large MapReduce job: find the record(s) that need to be updated, then perform an insert and a delete. As you can see, from a MapReduce perspective this is an expensive operation involving multiple levels of MapReduce. With ACID turned on, all of the above answers are correct. But you should design your data structures to be append-only, with a date/time stamp and/or a version reference marking the latest state of each record. Even though ACID supports updates, to manage performance I would recommend inserting instead of updating (more like an upsert).
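A minimal sketch of that append-only pattern (table and column names here are hypothetical; windowing functions assume Hive 0.11+):

```sql
-- Hypothetical append-only history table: every change is a new row
-- stamped with a version timestamp; existing rows are never rewritten.
CREATE TABLE customer_history (
  customer_id BIGINT,
  name        STRING,
  status      STRING,
  updated_ts  TIMESTAMP
)
STORED AS ORC;

-- "Upsert" = just insert the new state from the incoming batch
-- (customer_updates is an assumed staging table).
INSERT INTO TABLE customer_history
SELECT customer_id, name, status, CURRENT_TIMESTAMP
FROM customer_updates;

-- Read the latest state per record with a window function.
SELECT customer_id, name, status
FROM (
  SELECT customer_id, name, status,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY updated_ts DESC) AS rn
  FROM customer_history
) latest
WHERE rn = 1;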
08-11-2016
03:38 PM
1 Kudo
This is a great article for anyone looking to ingest data quickly and store it in compressed formats. It will work very well for POC, testing, and sandbox types of activities. I used something like this and made it production grade at a client by automating some of the jobs using Oozie. Once the data was loaded, we also had verification scripts that would audit what came in and what got dropped. We also had cleanup scripts that would remove all the raw data from HDFS once the data was set in Hive in ORC format, compressed and partitioned. With the advent of NiFi and Spark, it's worth looking at building a NiFi processor in conjunction with Spark jobs to load the data seamlessly into Hive/HBase in compressed formats as it is being loaded.
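A rough sketch of that raw-to-compressed-ORC conversion step (table names, columns, and the HDFS path are illustrative, not from the article):

```sql
-- Assumed external table over the raw landed files.
CREATE EXTERNAL TABLE raw_events (
  event_id STRING,
  payload  STRING,
  event_dt STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/landing/events';

-- Compressed, partitioned ORC target.
CREATE TABLE events_orc (
  event_id STRING,
  payload  STRING
)
PARTITIONED BY (event_dt STRING)
STORED AS ORC
TBLPROPERTIES ('orc.compress' = 'ZLIB');

-- Dynamic-partition insert from raw into ORC.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT INTO TABLE events_orc PARTITION (event_dt)
SELECT event_id, payload, event_dt FROM raw_events;

-- Audit idea: row counts in and out should match
-- before any cleanup script removes the raw files.
SELECT COUNT(*) FROM raw_events;
SELECT COUNT(*) FROM events_orc;
```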
08-04-2016
03:59 PM
Here is a tutorial on using Spark to load CSV data into Hive: http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/
07-29-2016
05:39 PM
@Robert Levas Thank you so much.
07-29-2016
05:12 PM
Thanks Robert, this makes sense. Does it increase the complexity of the install? Also, which versions are recommended? Are there any known issues with particular versions?
07-29-2016
04:46 PM
Labels:
- Hortonworks Data Platform (HDP)
07-25-2016
07:32 PM
2 Kudos
You would have to create the column as a date field instead of a string field to store dates; that way everything is stored with the date datatype. Once you have that in place, your ingest process can use a UDF as @Sunile Manjee suggested. With the data stored as a date in Hive, you can use any of the Hive date functions to present it in whatever format you prefer: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions (search for "Date Functions").
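A small sketch of the idea (table and column names are hypothetical; date_format assumes Hive 1.2+):

```sql
-- Store the column as DATE rather than STRING.
CREATE TABLE orders (
  order_id   BIGINT,
  order_date DATE
)
STORED AS ORC;

-- Convert the incoming string during ingest with a built-in date function
-- (staging_orders is an assumed staging table with a string date column).
INSERT INTO TABLE orders
SELECT order_id, TO_DATE(order_date_str)
FROM staging_orders;

-- Once stored as DATE, render it however you prefer at query time.
SELECT order_id,
       DATE_FORMAT(order_date, 'MM/dd/yyyy') AS pretty_date,
       YEAR(order_date)                      AS order_year
FROM orders;
```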
06-22-2016
02:53 PM
1 Kudo
Hi @alain
One more way, a 3-step method:

Step 1: Create an external table pointing to an HDFS location conforming to the schema of your CSV file. You can drop the CSV file(s) into the external table location.

Step 2: Create a managed Hive table in ORC format.

Step 3: Insert into the managed table by selecting from the external table. (Once the records are copied, delete the files from the external directory.)

A sketch of these steps follows below. This process can be automated via scripting with Oozie or cron; I have used it for mass batch ingestion. A more recent way of doing this is Apache NiFi with the Hive table processor, which makes life much simpler. :) If you want to read about NiFi, please go to http://hortonworks.com/products/hdf/

Thanks,
Satish
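Here is what the three steps look like in HiveQL (the path and columns are illustrative, adjust to your CSV schema):

```sql
-- Step 1: external table over the directory where the CSV files are dropped.
CREATE EXTERNAL TABLE staging_csv (
  id    INT,
  name  STRING,
  value DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/staging/csv';

-- Step 2: managed table in ORC format.
CREATE TABLE final_orc (
  id    INT,
  name  STRING,
  value DOUBLE
)
STORED AS ORC;

-- Step 3: copy the records across; afterwards the files under
-- /data/staging/csv can be removed by a scripted cleanup job.
INSERT INTO TABLE final_orc
SELECT id, name, value FROM staging_csv;
```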
06-17-2016
05:52 PM
Hi Jan, WebHCat is the REST interface for HCatalog, Hadoop's metadata management layer, so for both Pig and Hive it is HCatalog that stores the schema-related information. HiveServer2 is the actual engine that runs Hive. Please see this tutorial: http://hortonworks.com/hadoop-tutorial/how-to-use-hcatalog-basic-pig-hive-commands/ Data can be accessed via the WebHCat REST APIs, which in turn call the Hive APIs. More references: https://cwiki.apache.org/confluence/display/Hive/WebHCat+Reference and https://cwiki.apache.org/confluence/display/Hive/Hive+APIs+Overview#HiveAPIsOverview-WebHCat%28REST%29
06-16-2016
08:54 PM
I agree with Sindhu's comments; the link provides some basic setup to optimize queries in Hive. @Roberto Sancho Can you please let us know how the table is partitioned and bucketed? If it is partitioned, please also tell us whether the WHERE clause makes use of the partition columns.
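To illustrate why that matters, here is a hypothetical partitioned and bucketed table; only the first query lets Hive prune partitions:

```sql
-- Illustrative table, partitioned by date and bucketed by id.
CREATE TABLE sales (
  sale_id BIGINT,
  amount  DOUBLE
)
PARTITIONED BY (sale_date STRING)
CLUSTERED BY (sale_id) INTO 32 BUCKETS
STORED AS ORC;

-- Filtering on the partition column: Hive prunes partitions and
-- scans only the matching directory.
SELECT SUM(amount)
FROM sales
WHERE sale_date = '2016-06-01';

-- Filtering only on non-partition columns: every partition is scanned.
SELECT SUM(amount)
FROM sales
WHERE amount > 100.0;
```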