- Member since: 08-16-2018
- Posts: 55
- Kudos Received: 4
- Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2032 | 10-16-2019 08:18 AM |
|  | 8439 | 08-05-2019 06:38 AM |
03-09-2020 02:41 AM
@ARVINDR Yes, this is the default behaviour in Hive. When an empty value ('') is inserted into a STRING column, it is stored and returned as-is. If you would like empty STRING values to be treated as NULL instead, set the serialization.null.format property on the table of your choice. For example, you can do this when creating a new table or on an existing one:

New table:

```sql
CREATE TABLE null_test_1 (
  service1end string,
  service1start string,
  service2end string,
  service2start string,
  firstlinemaintcost double)
TBLPROPERTIES ('serialization.null.format'='');
```

Existing table:

```sql
ALTER TABLE null_test SET TBLPROPERTIES ('serialization.null.format'='');
```

Once done, empty values inserted into the STRING columns will be returned as NULL. Hope this helps!
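A quick way to verify the behaviour, as a sketch against the null_test_1 table above (the inserted values are illustrative):

```sql
-- Insert a row with empty strings for the STRING columns
INSERT INTO null_test_1 VALUES ('', '', '', '', 10.5);

-- With serialization.null.format set to '', the empty strings
-- are interpreted as NULL on read
SELECT service1end, service1end IS NULL FROM null_test_1;
```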
10-16-2019 08:18 AM (1 Kudo)
Hello, Oracle Data Integrator (ODI) connects to Hive using JDBC and uses the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Source - HERE

The points you mentioned from the documentation are for blocking external applications and non-service users from accessing the Hive Metastore directly. Since ODI connects to Hive over JDBC, it should connect to HiveServer2, as described in this documentation. A query executed from ODI goes to HiveServer2, which in turn contacts the Hive Metastore to fetch the metadata of the tables being queried, and then proceeds with execution. It is not necessary for ODI to connect to the Hive Metastore directly.

For details about Hive Metastore HA, please read HERE.
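For reference, a typical HiveServer2 JDBC URL has the shape below (the hostname and database here are placeholders; 10000 is the default HiveServer2 port):

```
jdbc:hive2://<hiveserver2-host>:10000/default
```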
08-05-2019 06:38 AM
@smkmuthu If you are using the CDH distribution, you can use HdfsFindTool to accomplish this. A sample command to find files in the directory "/user/hive" that are older than 3 days (note the +3: as with Unix find, -mtime +3 matches files modified more than 3 days ago, while -mtime -3 would match files modified within the last 3 days):

```shell
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar \
  org.apache.solr.hadoop.HdfsFindTool -find /user/hive -type f -mtime +3
```

Please modify the /opt/cloudera/parcels path in the command as per the version of CDH you are using, and the target directory as per your requirement. More details about HdfsFindTool can be found HERE. Hope it helps!
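If the goal is to then remove the matching files, the tool's output can be piped to hdfs dfs -rm. A sketch, assuming the same CDH layout and target directory as the command above and that the tool prints one path per line; dry-run the find stage on its own first to check what would be deleted:

```
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar \
  org.apache.solr.hadoop.HdfsFindTool -find /user/hive -type f -mtime +3 \
  | xargs -r -n 100 hdfs dfs -rm
```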
07-30-2019 03:45 AM
Hello, have you tried doing this with Hive queries? I think it would be possible:

1. CREATE a new, empty table whose columns have the correct datatypes as per the requirement (i.e. the final file's column structure).
2. INSERT data into this new table using a SELECT query that JOINs the data from both views.
3. The resulting files will be present in the table's HDFS directory. This would be your final file.

Thanks!
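The steps above can be sketched as follows (the table, view, and column names are hypothetical):

```sql
-- 1. New table with the final column structure
CREATE TABLE final_output (
  id string,
  amount double);

-- 2. Populate it by joining the two views
INSERT INTO TABLE final_output
SELECT v1.id, v2.amount
FROM view_one v1
JOIN view_two v2 ON v1.id = v2.id;

-- 3. The data files are then in the table's HDFS directory,
--    e.g. under /user/hive/warehouse/final_output/ by default
```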
07-23-2019 10:01 PM
Can you check whether short-circuit reads are configured as per the documentation HERE?
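For reference, short-circuit reads are typically enabled in hdfs-site.xml with the two properties below (the socket path shown is the common default; adjust it for your environment):

```xml
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
```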