Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9094 | 09-17-2018 06:33 AM
 | 2374 | 08-29-2018 07:48 AM
 | 3367 | 08-28-2018 12:38 PM
 | 2858 | 08-03-2018 05:42 AM
 | 2580 | 07-27-2018 04:00 PM
09-04-2017
06:25 AM
@Hugo Felix Did you try running ambari-server start --auto-fix-database?
09-04-2017
06:23 AM
@Jiri Novak It seems the requirement is to enable Ranger policies at the individual function level rather than at the generic UDF level; this feature is not available for now.
09-04-2017
06:09 AM
1 Kudo
@Riccardo Iacomini The issue seems to be related to several known issues with ORC split generation. Try running the query with hive.exec.orc.split.strategy=BI.
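As a sketch, the setting can be applied at the session level in Beeline or the Hive CLI before re-running the query:
SET hive.exec.orc.split.strategy=BI;
The BI strategy generates splits from file sizes without reading the ORC stripe footers, which sidesteps the split-generation code path that triggers these issues.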
09-01-2017
03:27 PM
When importing data from Oracle using Sqoop, the import fails with the following error:
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException:
ORA-09817: Write to audit file failed.
Linux-x86_64 Error: 28: No space left on device
Additional information: 12
ORA-02002: error while writing to audit trail
Cause: This issue occurs when there is insufficient space in the /var/log/audit directory on the Oracle server.
Solution: To resolve the issue, free up space in the /var/log/audit directory on the Oracle server.
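A rough sketch of checking and freeing space on the Oracle server (the path is the one from the error; archive or delete old audit files according to your retention policy):
# Check free space on the filesystem that holds the audit trail
df -h /var/log/audit
# List the largest audit files so the old ones can be archived or removed
ls -lS /var/log/audit | head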
08-31-2017
09:18 AM
@milind pandit The OOM seems to be caused by the hash table array allocation; try running the query after setting "hive.mapjoin.hybridgrace.hashtable=false".
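As a sketch, in the Beeline or Hive CLI session that runs the join:
SET hive.mapjoin.hybridgrace.hashtable=false;
Disabling the hybrid grace hash table makes the map join fall back to the classic in-memory hash table, avoiding the large up-front array allocation.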
08-26-2017
05:11 AM
While accessing psql for HAWQ, the command fails with the following error:
[etl_user@hdpmaster2 ~]$ source /usr/local/hawq/greenplum_path.sh
[etl_user@hdpmaster2 ~]$ psql
psql: FATAL: no pg_hba.conf entry for host "[local]", user "etl_user", database "etl_user", SSL off
This issue occurs when the etl_user is missing from pg_hba.conf and from the Postgres database. To resolve the issue, create a new user from postgres by performing the following steps:
1. Log in as gpadmin: sudo su - gpadmin
2. Create the new role and database:
CREATE USER <user_name> SUPERUSER;
CREATE DATABASE <database_name> WITH OWNER <user_name>;
3. Edit pg_hba.conf under the master data directory and add the following entries:
local all <user_name> trust
host all <user_name> 0.0.0.0/0 trust
host all <user_name> ::/0 trust
4. Restart the HAWQ cluster (see the sketch below).
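As a sketch of the last two steps, assuming HAWQ 2.x command-line utilities (older installs use the equivalent gpstop/gpstart commands):
[gpadmin@hdpmaster2 ~]$ hawq restart cluster -a        # restart so the new pg_hba.conf entries take effect
[etl_user@hdpmaster2 ~]$ psql -d <database_name> -U <user_name>   # verify the new role can log in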
08-26-2017
05:06 AM
While running a Hive query from Hue, the query fails with the following error:
WARN [HiveServer2-Handler-Pool: Thread-41]: thrift.ThriftCLIService (ThriftCLIService.java:FetchResults(596)) - Error fetching results:
org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle:
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1cd26ef5-00f8-4003-be5c-65e2a542ee18]
at org.apache.hive.service.cli.operation.OperationManager.getOperationLogRowSet
(OperationManager.java:257)
at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:656)
at sun.reflect.GeneratedMethodAccessor104.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60)
at com.sun.proxy.$Proxy29.fetchResults(Unknown Source)
at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:427)
at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:587)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
This is a known issue, recorded under Apache Jira HIVE-15100 and Hortonworks internal BUG-37219.
Workaround
Hard-code the hive.server2.logging.operation.log.location parameter to /tmp/hive/operation_logs and assign that directory permission 777 on the HiveServer2 node, as sketched below.
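A minimal sketch of applying the workaround: set hive.server2.logging.operation.log.location=/tmp/hive/operation_logs in hive-site.xml (for example through the Ambari Hive configs), prepare the directory on the HiveServer2 node, and restart HiveServer2:
# On the HiveServer2 node (path as in the workaround above)
mkdir -p /tmp/hive/operation_logs
chmod 777 /tmp/hive/operation_logs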
08-26-2017
04:37 AM
While running an INSERT OVERWRITE LOCAL DIRECTORY query from Beeline, the query fails with the following error:
0: jdbc:hive2://xxx.com:10000> insert overwrite local directory '/xxx' row format delimited fields terminated by '|' null defined as '' select * from xx.yy limit 20;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [xxx] does not have [WRITE] privilege on [/xxx] (state=42000,code=40000)
The same query (INSERT OVERWRITE LOCAL DIRECTORY) works fine from the Hive CLI:
hive> insert overwrite local directory '/xxx' row format delimited fields terminated by '|' null defined as '' select * from xx.yy limit 20;
Query ID = xx_abae1e37-0e40-4743-a7c6-a33ca9e5156c
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1492503032060_2596)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 4 4 0 0 0 0
Reducer 2 ...... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 10.13 s
--------------------------------------------------------------------------------
Moving data to local directory /xxx
OK
Time taken: 14.594 seconds
Cause
The issue is related to Jira HIVE-11666 and a behavioral difference between the Hive CLI and Beeline. When the INSERT OVERWRITE LOCAL DIRECTORY query is run from the Hive CLI it writes to the local host, whereas from Beeline it writes to a directory on the node where HiveServer2 is running.
Solution
This is a known limitation, and Hortonworks feature request RMP-8974 has been raised to address this behavioral difference between the Hive CLI and Beeline in a future release.
Workaround
Grant the querying user (xxx in the example above) write permission on the target directory (/xxx) on the HiveServer2 machine.
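A minimal sketch of the workaround on the HiveServer2 node, using the placeholders from the example above (/xxx is the target directory, xxx the querying user):
# Run on the HiveServer2 host
mkdir -p /xxx
chown xxx /xxx        # or chmod to grant the querying user write access to the directory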
08-25-2017
07:38 AM
For now, it is not possible to encrypt the data during its transmission from Sqoop. This is a known limitation; Jira SQOOP-917 tracks the feature request.
08-25-2017
07:33 AM
1 Kudo
Hi @Ramya Jayathirtha Adding to @Sonu Sahi's reply, the CSVSerde is available in Hive 0.14 and later.
The following example creates a table for TSV (tab-separated) data:
CREATE TABLE my_table(a string, b string, ...)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = "\t",
  "quoteChar" = "'",
  "escapeChar" = "\\"
)
STORED AS TEXTFILE;
The default SerDe properties are for comma-separated (CSV) files:
DEFAULT_ESCAPE_CHARACTER \
DEFAULT_QUOTE_CHARACTER "
DEFAULT_SEPARATOR ,
This SerDe works for most CSV data but does not handle embedded newlines. To use the SerDe, specify the fully qualified class name org.apache.hadoop.hive.serde2.OpenCSVSerde. If you want to use the TextFile format instead, use the ESCAPED BY clause in the DDL: "Enable escaping for the delimiter characters by using the 'ESCAPED BY' clause (such as ESCAPED BY '\'). Escaping is needed if you want to work with data that can contain these delimiter characters. A custom NULL format can also be specified using the 'NULL DEFINED AS' clause (default is '\N')."
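As a sketch of that TextFile alternative, a DDL using ESCAPED BY and NULL DEFINED AS might look like this (the table name and delimiters are illustrative):
CREATE TABLE my_text_table(a string, b string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  ESCAPED BY '\\'
  NULL DEFINED AS ''
STORED AS TEXTFILE;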