Member since 04-11-2016
535 Posts | 148 Kudos Received | 77 Solutions
09-01-2017
03:27 PM
When importing data from Oracle using Sqoop, the import fails with the following error:
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException:
ORA-09817: Write to audit file failed.
Linux-x86_64 Error: 28: No space left on device
Additional information: 12
ORA-02002: error while writing to audit trail
Cause: This issue occurs when there is insufficient space in the /var/log/audit directory on the Oracle server.
Solution: To resolve the issue, free up space in the /var/log/audit directory on the Oracle server.
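A hedged sketch for confirming and reclaiming the space (the '*.aud' file pattern and the 30-day retention window are assumptions; adjust to your audit policy):
# Check free space on the filesystem holding the audit trail
df -h /var/log/audit
# List OS audit files older than 30 days (hypothetical retention)
find /var/log/audit -type f -name '*.aud' -mtime +30 -print
# After verifying the list, delete them
find /var/log/audit -type f -name '*.aud' -mtime +30 -delete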
08-26-2017
05:11 AM
While accessing psql for HAWQ, the command fails with the following error:
[etl_user@hdpmaster2 ~]$ source /usr/local/hawq/greenplum_path.sh
[etl_user@hdpmaster2 ~]$ psql
psql: FATAL: no pg_hba.conf entry for host "[local]", user "etl_user", database "etl_user", SSL off
This issue occurs when the etl_user entry is missing from pg_hba.conf and the user does not exist in the Postgres database. To resolve the issue, create a new user from postgres by performing the following steps (a concrete sketch follows the list):
1. Log in as gpadmin: sudo su - gpadmin
2. Create the new role and database:
CREATE USER <user_name> SUPERUSER;
CREATE DATABASE <database_name> WITH OWNER <user_name>;
3. Edit pg_hba.conf under the master data directory and add the following entries:
local all <user_name> trust
host all <user_name> 0.0.0.0/0 trust
host all <user_name> ::/0 trust
4. Restart the HAWQ cluster.
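A minimal end-to-end sketch for the etl_user from the error above (the /data/hawq/master path is an assumption; substitute your cluster's actual master data directory):
sudo su - gpadmin
psql -d postgres -c "CREATE USER etl_user SUPERUSER;"
psql -d postgres -c "CREATE DATABASE etl_user WITH OWNER etl_user;"
# Append the trust entries to pg_hba.conf (path is hypothetical)
echo 'local all etl_user trust' >> /data/hawq/master/pg_hba.conf
echo 'host all etl_user 0.0.0.0/0 trust' >> /data/hawq/master/pg_hba.conf
echo 'host all etl_user ::/0 trust' >> /data/hawq/master/pg_hba.conf
hawq restart cluster -a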
08-26-2017
05:06 AM
While running a Hive query from Hue, the query fails with the following error:
WARN [HiveServer2-Handler-Pool: Thread-41]: thrift.ThriftCLIService (ThriftCLIService.java:FetchResults(596)) - Error fetching results:
org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle:
OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=1cd26ef5-00f8-4003-be5c-65e2a542ee18]
at org.apache.hive.service.cli.operation.OperationManager.getOperationLogRowSet
(OperationManager.java:257)
at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:656)
at sun.reflect.GeneratedMethodAccessor104.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60)
at com.sun.proxy.$Proxy29.fetchResults(Unknown Source)
at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:427)
at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:587)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
This is a known issue, recorded in Jira HIVE-15100 and Hortonworks internal BUG-37219.
Workaround
Hard-code the hive.server2.logging.operation.log.location parameter to /tmp/hive/operation_logs and assign that directory permission level 777 on the HiveServer2 node.
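A minimal sketch of the workaround on the HiveServer2 host (the hive:hadoop ownership is an assumption; match your Hive service user and group):
# Create the operation log directory and open up its permissions
mkdir -p /tmp/hive/operation_logs
chown hive:hadoop /tmp/hive/operation_logs
chmod 777 /tmp/hive/operation_logs
Then set hive.server2.logging.operation.log.location to /tmp/hive/operation_logs (for example, via Ambari -> Hive configs) and restart HiveServer2.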
08-26-2017
04:37 AM
0: jdbc:hive2://xxx.com:10000> insert overwrite local directory '/xxx' row format delimited fields terminated by '|' null defined as '' select * from xx.yy limit 20;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [xxx] does not have [WRITE] privilege on [/xxx] (state=42000,code=40000)
The same query (INSERT OVERWRITE LOCAL DIRECTORY) works fine from the Hive CLI:
hive> insert overwrite local directory '/xxx' row format delimited fields terminated by '|' null defined as '' select * from xx.yy limit 20;
Query ID = xx_abae1e37-0e40-4743-a7c6-a33ca9e5156c
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1492503032060_2596)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 4 4 0 0 0 0
Reducer 2 ...... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 02/02 [==========================>>] 100% ELAPSED TIME: 10.13 s
--------------------------------------------------------------------------------
Moving data to local directory /xxx
OK
Time taken: 14.594 seconds
Cause
The issue is related to Jira HIVE-11666 and a behavioral difference between Hive CLI and Beeline: when the INSERT OVERWRITE LOCAL DIRECTORY query is run from the Hive CLI, it writes to the local host where the CLI runs, whereas Beeline writes to the local directory on the node where HiveServer2 is running.
Solution
This is a known limitation, and Hortonworks feature request RMP-8974 has been raised to address this behavioral difference between Hive CLI and Beeline in a future release.
Workaround
Grant the querying user (xxx in the example above) write permission on the target directory (/xxx) on the HiveServer2 machine, as sketched below.
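A minimal sketch (user and path follow the redacted xxx placeholders from the example):
# On the HiveServer2 host: create the target directory and grant write access
mkdir -p /xxx
chown xxx /xxx
# Alternatively, if ownership cannot be changed:
# chmod 777 /xxx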
08-25-2017
07:38 AM
For now, it is not possible to encrypt data in transit from Sqoop. This is a known limitation; Jira SQOOP-917 is already in place for the feature.
07-07-2017
05:54 AM
If the mysqldump was taken from a Hive version other than 2.1.1000 (the version shipped with HDP 2.6), do the following (a version-check sketch follows these steps):
1. Stop Hive services from Ambari.
2. Create a new database under MySQL, say hive2:
mysql> create database hive2;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on hive2.* to 'hive'@'%' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
3. Restore the database:
mysql -u hive -phive hive2 < dumpfilename.sql
4. Update the MySQL database connection string under Ambari -> Hive configs.
5. Save the configuration and try restarting. Because the VERSION table differs, service startup will fail.
6. Run the Hive schematool command to upgrade the schema:
[hive@ssnode260 bin]$ /usr/hdp/2.6.0.3-8/hive2/bin/schematool -upgradeSchema -dbType mysql
7. Restart Hive services from Ambari.
If the Hive metadata version is the same as Hive 2.1.1000 in HDP 2.6, follow only steps 1 through 5.
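To check which case applies, schematool can report the current metastore schema version; a hedged sketch:
[hive@ssnode260 bin]$ /usr/hdp/2.6.0.3-8/hive2/bin/schematool -info -dbType mysql
# Compare the reported metastore schema version with the version schematool expects;
# if they differ, run -upgradeSchema as in step 6.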
02-21-2017
07:59 AM
SYMPTOM
When running Hive queries, the following error is displayed in the ResourceManager log:
2017-02-07 15:08:32,140 ERROR impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(148)) - Got sink exception, retry in 4600ms
org.apache.hadoop.metrics2.MetricsException: Failed to putMetrics
at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:216)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
Caused by: java.net.UnknownHostException: http
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at java.net.Socket.<init>(Socket.java:425)
at java.net.Socket.<init>(Socket.java:280)
ROOT CAUSE
This issue occurs when a reverse lookup returns an incorrect hostname for the IP address.
RESOLUTION
To resolve this issue, fix the DNS configuration in /etc/resolv.conf.
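A minimal sketch for verifying that forward and reverse DNS agree on the affected host (the 10.0.0.10 address is a placeholder):
# Forward lookup of this host's FQDN
hostname -f
nslookup $(hostname -f)
# Reverse lookup of the host's IP (placeholder address)
nslookup 10.0.0.10
# The hostname returned by the reverse lookup should match hostname -f;
# if it does not, correct the nameserver entries in /etc/resolv.conf or the PTR records.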
01-03-2017
05:33 PM
The limit of 5000 tables displayed is imposed by the dbms.py script under Hue; below is the relevant snippet:
def get_tables(self, database='default', table_names='*'):
    hql = "SHOW TABLES IN %s '%s'" % (database, table_names)  # self.client.get_tables(database, table_names) is too slow
    query = hql_query(hql)
    handle = self.execute_and_wait(query, timeout_sec=15.0)
    if handle:
        result = self.fetch(handle, rows=5000)
        self.close(handle)
        return [name for table in result.rows() for name in table]
    else:
        return []
To increase the number of tables displayed in Hue, do the following (a scripted alternative follows the list):
1. Back up the script: cp /usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py /tmp
2. Stop the Hue service: service hue stop
3. Edit /usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py and change the rows value to 8000:
    if handle:
        result = self.fetch(handle, rows=8000)  # <-- this value needs to be changed
        self.close(handle)
        return [name for table in result.rows() for name in table]
    else:
        return []
4. Restart Hue: service hue restart
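If you prefer a scripted edit for step 3, a hedged one-liner (run against the live file only after taking the backup in step 1):
sed -i 's/rows=5000/rows=8000/' /usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py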
12-29-2016
10:17 AM
3 Kudos
The table definition "LINES TERMINATED BY" only supports newline '\n' right now. This is a known issue, and Jira HIVE-11996 has already been raised for it.
To handle newline characters within the data, you can use the Omniture data format, whose EscapedLineReader gets around Omniture's pesky escaped tabs and newlines.
Note that the data files need to include '\' characters before each newline within the data. Run the commands below in sequence; the required jars are attached along with the data file:
add jar /tmp/omnituredata-1.0.2-SNAPSHOT-jar-with-dependencies.jar;
add jar /tmp/omnituredata-1.0.2-SNAPSHOT-javadoc.jar;
add jar /tmp/omnituredata-1.0.2-SNAPSHOT-sources.jar;
add jar /tmp/omnituredata-1.0.2-SNAPSHOT.jar;
(Note: the jars are available in the HDFS /tmp folder.)
CREATE TABLE test8(id string, desc string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS INPUTFORMAT 'org.rassee.omniture.hadoop.mapred.OmnitureDataFileInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
LOCATION '/apps/hive/warehouse/test8';
A sample file under the HDFS location '/apps/hive/warehouse/test8' looks like this:
[hive@sindhu root]$ hdfs dfs -cat /apps/hive/warehouse/test8/file.txt
id desc
1 Hi\
I am a member and would like to open savings accts for both my kids aged 12 and 16.\
Is that possible and what documents do I need to bring?\
Also do I need to make an appt first?\
Thx!
By contrast, a table stored as plain TEXTFILE does not understand the escaped newline characters:
4 rows selected (0.165 seconds)
0: jdbc:hive2://sindhu:2181/> CREATE TABLE test9(id string,desc string)
0: jdbc:hive2://sindhu:2181/> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
0: jdbc:hive2://sindhu:2181/> STORED AS textfile LOCATION '/apps/hive/warehouse/test9';
No rows affected (0.209 seconds)
0: jdbc:hive2://sindhu:2181/> select * from test9;
+---------------------------------------------------------------------------------------+-------------------+--+
| test9.id | test9.desc |
+---------------------------------------------------------------------------------------+-------------------+--+
| id | desc |
| 1 | Hi\ |
| I am a member and would like to open savings accts for both my kids aged 12 and 16.\ | NULL |
| Is that possible and what documents do I need to bring?\ | NULL |
| Also do I need to make an appt first?\ | NULL |
| Thx! | NULL |
| 2 | hi jihidp\ |
| uiunoo! | NULL |
| 3 | hi who are you\ |
| talking with | NULL |
+---------------------------------------------------------------------------------------+-------------------+--+