Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7705 | 09-17-2018 06:33 AM |
| | 1919 | 08-29-2018 07:48 AM |
| | 2871 | 08-28-2018 12:38 PM |
| | 2238 | 08-03-2018 05:42 AM |
| | 2104 | 07-27-2018 04:00 PM |
05-15-2018
03:59 PM
1 Kudo
Unfortunately "--hive-overwrite" option destroy hive table structure and re-create it after that which is not acceptable way. The only way is: 1. hive> truncate table sample; 2. sqoop import --connect jdbc:mysql://yourhost/test --username test --password test01 --table sample --hcatalog-table sample
09-01-2017
03:27 PM
When importing data from Oracle using Sqoop, it fails with the following error:
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException:
ORA-09817: Write to audit file failed.
Linux-x86_64 Error: 28: No space left on device
Additional information: 12
ORA-02002: error while writing to audit trail
Cause: This issue occurs when there is insufficient space in the /var/log/audit directory on the Oracle server.
Solution: To resolve the issue, clear space in the /var/log/audit directory on the Oracle server.
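As a rough illustration, assuming the audit files land under /var/log/audit as described above and your retention policy allows removing old *.aud files (verify this before running anything destructive), the cleanup could look like:

```bash
# Check how full the filesystem holding the audit directory is
df -h /var/log/audit

# Find the largest files under the audit directory
du -ah /var/log/audit | sort -rh | head -20

# Example cleanup: delete Oracle audit files older than 30 days
# (hypothetical retention window; adjust the pattern and age to your policy)
find /var/log/audit -name '*.aud' -mtime +30 -delete
```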
06-02-2018
09:55 PM
Can you please share what exact known issues you are referring to?
08-29-2017
03:03 PM
1) For item 2, you don't need a superuser to own a database.
2) For item 3, you just opened up the whole world to access your HAWQ cluster (all users from all IPs can access all DBs) with a superuser role. According to the error, you were trying to log in as "etl_users" from a local socket to access the "etl_users" DB on the HAWQ master, but the HAWQ master didn't find any matching entry in pg_hba. You can either set $PGHOST to the master IP (that way, psql will try to access HAWQ over TCP), or create a local entry with the md5/password auth method. For example, "local etl_user etl_user md5".
3) For item 4, you don't need to restart the HAWQ cluster. "hawq stop --reload" should work in this case.
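For reference, a sketch of what the corresponding pg_hba.conf entries on the HAWQ master could look like (the file is assumed to live under the master data directory; role/database names follow the example above, and the subnet in the host line is purely illustrative):

```
# TYPE   DATABASE    USER        ADDRESS        METHOD
# Local Unix-socket logins for the etl_user role to its own database
local    etl_user    etl_user                   md5

# Optional: the same role over TCP from a specific subnet (adjust the CIDR)
host     etl_user    etl_user    10.0.0.0/24    md5
```

After editing, reload the configuration without a restart using "hawq stop --reload" as noted above.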
09-14-2017
11:50 AM
Hi Sindhu, we are facing the same issue with insert overwrite, but it is not a local directory. We are facing this issue after upgrading from 2.5.3 to 2.6.1. Tried running with a different destination; it created the folder but fails with the errors below:
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [xyz] does not have [WRITE] privilege on [/tmp/*] (state=42000,code=40000)
Closing: 0: jdbc:hive2://host:2181,host:2181,host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [xyz] does not have [WRITE] privilege on [/user/*] (state=42000,code=40000)
Closing: 0: jdbc:hive2://host:2181,host:2181,host:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
09-04-2017
08:34 AM
@Sindhu Many thanks for your answer... it didn't work as such, but it apparently did work with the database check disabled. At least the upgrade completed. Many thanks!
08-25-2017
07:38 AM
For now, it is not possible to encrypt the data in transit from Sqoop. This is a known limitation; Jira SQOOP-917 is already in place to track the feature.
07-28-2017
01:41 PM
2 Kudos
@Kiran Kumar You can see when the Full Support and Technical Guidance periods end in the table at the bottom of this page: https://hortonworks.com/agreements/support-services-policy/
07-28-2017
02:23 PM
I'm just curious: why can't these options be set up in the Hive config from the Ambari web UI? What that means is, if I want to use the Hive ORC file format with advanced TBLPROPERTIES such as orc.compress, orc.compress.size, orc.stripe.size, orc.create.index, etc., I have to specify these TBLPROPERTIES every time I create an ORC-format Hive table.
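As an illustration of what that per-table boilerplate looks like, here is a hypothetical ORC table definition carrying those properties explicitly (the table/column names and property values are made up for the example):

```sql
-- Hypothetical example: spelling out the advanced ORC properties per table
CREATE TABLE sales_orc (
  id    BIGINT,
  item  STRING,
  price DECIMAL(10,2)
)
STORED AS ORC
TBLPROPERTIES (
  'orc.compress'      = 'ZLIB',      -- compression codec
  'orc.compress.size' = '262144',    -- compression chunk size, in bytes
  'orc.stripe.size'   = '67108864',  -- stripe size, in bytes
  'orc.create.index'  = 'true'       -- build lightweight row indexes
);
```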
07-26-2017
05:47 AM
@rahul gulati I don't think we can handle \n characters with RegexSerDe, as by default all '\n' characters are treated as line delimiters by Hive. You might need to handle newlines using the Omniture Data SerDe; refer to the link for details.
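To make the limitation concrete, a hypothetical RegexSerDe table like the one below still cannot recover records containing embedded '\n', because Hive's text input format splits the file on newlines before the SerDe ever sees a record (all names and the regex are illustrative, not from the original thread):

```sql
-- Hypothetical illustration: the regex never gets a chance to span lines,
-- since the underlying TextInputFormat has already split records on '\n'.
CREATE TABLE clickstream_raw (
  ts      STRING,
  ip      STRING,
  payload STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'input.regex' = '([^\\t]*)\\t([^\\t]*)\\t(.*)'
)
STORED AS TEXTFILE;
```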