Member since: 04-07-2017
Posts: 9
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
  | 2823 | 04-21-2017 03:10 AM
04-03-2020 07:12 AM
Can you try either of the two options below: 1) Convert your Hive table to an external table and try the read again. 2) Check the data types of the table, which might be misaligned with the metastore metadata.
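For option 1, a minimal sketch via beeline; the JDBC URL and table name below are placeholders for your own:

# Convert the managed table to external (URL and table are placeholders):
beeline -u "jdbc:hive2://node:10000/hive_instance" \
  -e "ALTER TABLE hive_instance.my_table SET TBLPROPERTIES('EXTERNAL'='TRUE');"
# For option 2, compare the declared column types against the data files:
beeline -u "jdbc:hive2://node:10000/hive_instance" \
  -e "DESCRIBE FORMATTED hive_instance.my_table;"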
03-03-2020 04:44 AM
Get all the classes listed in the error (ClassNotFoundException) and add the corresponding jars to spark.driver.extraClassPath and spark.executor.extraClassPath. It worked for us once all the jars corresponding to the classes in the error were added.
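A sketch of passing those jars at submit time; the jar paths, main class, and application jar below are placeholders:

# Put the jars containing the missing classes on both driver and executor
# classpaths (multiple jars can be separated with ':'):
spark-submit \
  --conf spark.driver.extraClassPath=/opt/jars/missing-dep.jar \
  --conf spark.executor.extraClassPath=/opt/jars/missing-dep.jar \
  --class com.example.Main \
  my-app.jar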
11-15-2017 07:52 AM
Use a shell action in your workflow, echo the required parameters in your shell executable, and reference them as ${wf:actionData('shell_action_name')['shell_variable']} in the downstream Hive action (as a hiveconf) or in the Sqoop action.
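A minimal sketch of the shell executable, assuming the shell action declares <capture-output/> in the workflow; the variable name is a placeholder:

#!/bin/bash
# Oozie's <capture-output/> reads key=value pairs from stdout and exposes
# them via wf:actionData in later actions:
echo "shell_variable=$(date +%Y%m%d)"
# Downstream usage: ${wf:actionData('shell_action_name')['shell_variable']}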
11-15-2017 07:46 AM
1) Use the following config in the coordinator XML:
<configuration>
  <property>
    <name>coord_id</name>
    <value>${coord:actionId()}</value>
  </property>
</configuration>
2) Reference it in the workflow XML, much like ${wf:id()} gives the workflow ID:
Coordinator ID: ${coord_id}
04-21-2017 03:10 AM
The issue seems to be resolved, once our admin found out why HiveServer2 was restarting frequently.
04-10-2017
03:32 PM
Actually it is different,in order to mask details i have pasted it as jdbc:hive2://node:10000/hive_instance. hive-site.xml was added at <job-xml> hive2 action as well in the <file>. Batch sql fails after some of the queries in it being executed succesfully. I am using hive2 as action.
04-09-2017 03:16 AM
Recently our Cloudera cluster was upgraded from CDH 5.7.1 to CDH 5.9.1. Prior to the upgrade it was working flawlessly.
04-08-2017 01:56 AM
I have the following Sqoop job to import data from Oracle into a Hive instance:

sqoop import \
  -Dhadoop.security.credential.provider.path=${keystore_custom} \
  -Dmapred.job.queue.name=${queue} \
  --connect jdbc:oracle:thin:@${hostname}:${port}:${custom_db} \
  --username ${username} \
  --password-alias ${custom_alias} \
  --query 'SELECT * FROM table C where $CONDITIONS' \
  --hive-import \
  --delete-target-dir \
  --null-string '\\N' \
  --null-non-string '\\N' \
  --hive-drop-import-delims \
  --hive-table hive_instance.table \
  --target-dir /user/hive_instance/table -m 1;

The above Sqoop job works normally when triggered from the edge node. But when the Kerberos ticket is refreshed using kinit -R or kinit user@realm -k -t /home/user/keytab/user.keytab, the credential provider does not work, failing with the following error:

Warning: /opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/04/08 08:42:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.9.1
Error resolving password from the credential providers.

When I remove --password-alias and replace it with --password, I temporarily lose write permission to HDFS. So once kinit is executed I lose my write permission to the HDFS filesystem. The issue does not occur when tried from a new session without the Kerberos ticket being refreshed.
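One check worth trying right after the ticket refresh is whether the session can still reach the credential provider at all; the jceks path below is a placeholder for ${keystore_custom}:

# Run right after kinit -R; if this fails, the problem is provider access,
# not Sqoop itself (path is a placeholder):
hadoop credential list -provider jceks://hdfs/user/me/keystore.jceks
# And confirm which principal the refreshed ticket actually belongs to:
klist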
04-07-2017 08:54 AM
Getting the following error when a Hive batch SQL is triggered via Oozie. It works fine when run from the edge node.

HS2 may be unavailable, check server status
Error: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (state=08S01,code=0)
Closing: 0: jdbc:hive2://node:10000/hive_instance
HS2 may be unavailable, check server status
Error: Error while cleaning up the server resources (state=,code=0)
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
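A simple probe to check HS2 reachability from a worker node (where the Oozie launcher actually runs, unlike the edge node), using the same URL as in the error; add the Kerberos principal to the URL if HS2 is kerberized:

# Trivial connectivity check against HS2:
beeline -u "jdbc:hive2://node:10000/hive_instance" -e "SELECT 1;"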