Member since: 09-25-2015
Posts: 112
Kudos Received: 88
Solutions: 12
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9364 | 03-15-2017 03:17 PM
 | 5833 | 02-17-2017 01:03 PM
 | 1675 | 01-02-2017 10:47 PM
 | 2497 | 11-16-2016 07:03 PM
 | 1011 | 10-07-2016 05:24 PM
06-12-2020
08:41 AM
Hello, does Cobrix support Python? I see only Scala APIs at https://github.com/AbsaOSS/cobrix. Please advise. Thanks, Sreedhar Y
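For what it's worth, Cobrix is packaged as a Spark data source, so it can typically be reached from PySpark through the generic reader API. A rough sketch only; the package coordinates, version, and paths below are placeholders, not taken from the thread:

```shell
# Sketch: launch PySpark with the Cobrix package on the classpath
# (coordinates and version are placeholders -- check the Cobrix README).
pyspark --packages za.co.absa.cobrix:spark-cobol_2.12:<version>

# Then, inside the PySpark session, the generic reader API applies:
#   df = spark.read.format("cobol") \
#            .option("copybook", "/path/to/copybook.cpy") \
#            .load("/path/to/ebcdic/data")
```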
04-02-2020
08:57 AM
@bpreachuk Yes, the keytabs need to be regenerated.
02-17-2020
10:07 PM
What if I want to use date_table within a subquery? Can you provide the syntax?
04-10-2017
07:51 PM
Hi @Ciarán Porter. A hotfix is required for this issue. In the meantime you can use the beeline CLI and send the output to CSV format. A good explanation is found here: https://community.hortonworks.com/questions/25789/how-to-dump-the-output-from-beeline.html
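The beeline workaround could look roughly like this; the host, port, query, and output path are placeholders, not from the thread:

```shell
# Sketch: dump a query result to CSV via beeline
# (connection string and query are placeholders).
beeline -u jdbc:hive2://hiveserver2-host:10000/default \
  --silent=true \
  --outputformat=csv2 \
  -e "SELECT * FROM my_table" > output.csv
```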
02-27-2017
08:02 AM
I was able to import all tables in the following format: sqoop import --connect jdbc:oracle:thin:@&lt;fqdn&gt;/&lt;server&gt; --username &lt;username&gt; -P --table CUST_NAV --columns "&lt;column names separated by commas&gt;" --hive-import --hive-table databasenameinhive.New_CUST_NAV --target-dir '&lt;location in hdfs&gt;'

@bpreachuk I understood the workaround to my problem using your suggestion: I'll import all tables as-is from the Oracle DB and create different views, which I can then use in my SELECT statements. Thanks, guys.
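The view-based workaround described above could look something like this in Hive; the view name, columns, and filter are hypothetical, not from the thread:

```sql
-- Hypothetical view over the imported table; adjust names to your schema.
CREATE VIEW cust_nav_view AS
SELECT cust_id, nav_date, nav_value
FROM databasenameinhive.New_CUST_NAV
WHERE nav_value IS NOT NULL;
```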
07-19-2017
07:54 PM
Thanks @Wynner!
01-27-2017
05:45 PM
Thanks @Eugene Koifman!
Can you point to complete and up-to-date documentation or a book on Hive features (ACID, LLAP, etc.)?
01-04-2018
11:54 AM
Hey everyone, I have a somewhat similar question, which I posted here: https://community.hortonworks.com/questions/155681/how-to-defragment-hdfs-data.html I would really appreciate any ideas. cc @Lester Martin @Jagatheesh Ramakrishnan @rbiswas
11-04-2017
12:19 PM
Hi @Jeff Watson. You are correct about SAS's use of String datatypes. Good catch! One of my customers also had to deal with this: String datatype conversions can perform very poorly in SAS. With SAS/ACCESS to Hadoop you can set the libname option DBMAX_TEXT (added in the SAS 9.4M1 release) to globally restrict the character length of all columns read into SAS. However, for restricting column size, SAS specifically recommends using the VARCHAR datatype in Hive whenever possible: http://support.sas.com/documentation/cdl/en/acreldb/67473/HTML/default/viewer.htm#n1aqglg4ftdj04n1eyvh2l3367ql.htm
Use Case
Large Table, All Columns of Type String: Table A stored in Hive has 40 columns, all of type String, with 500M rows. By default, SAS/ACCESS converts String to $32K, i.e. 32K in length for each character column. The math for a table this size yields a roughly 1.2MB row length x 500M rows. This brings the system to a halt: too large to store in LASR or WORK. The following techniques can be used to work around the challenge in SAS, and they all work:
1. Use CHAR and VARCHAR in Hive instead of String.
2. Set the libname option DBMAX_TEXT to globally restrict the character length of all columns read in.
3. In Hive, use SET TBLPROPERTIES with SASFMT properties to add SAS formats to the schema in Hive.
4. Add formatting to the SAS code during inbound reads, for example: Sequence Length 8 Informat 10. Format 10.
I hope this helps.
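Technique 3 (SASFMT table properties) could look roughly like this; the table and column names are made up for illustration, and the exact property syntax should be checked against the SAS/ACCESS documentation:

```sql
-- Hypothetical: advise SAS/ACCESS to read this column as CHAR(20)
-- instead of the $32K default (table and column names are illustrative).
ALTER TABLE cust_nav SET TBLPROPERTIES ('SASFMT:cust_name'='CHAR(20)');
```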
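The sizing math above can be checked with a quick calculation, using the numbers from the post and taking $32K as 32 * 1024 bytes:

```shell
# Row-size math for the 40-column, all-String table described above.
cols=40
bytes_per_col=$((32 * 1024))        # SAS default $32K per String column
rows=500000000                      # 500M rows
row_len=$((cols * bytes_per_col))   # bytes per row
total=$((row_len * rows))           # total bytes across all rows
echo "row length: ${row_len} bytes"   # ~1.3 MB per row
echo "total:      ${total} bytes"     # ~655 TB in total
```

That total is far beyond what LASR or WORK can hold, which is why restricting column widths up front matters.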