Member since: 03-28-2016
Posts: 194
Kudos Received: 18
Solutions: 0
01-24-2017
03:15 PM
Yes, Ranger is implemented for Hive. How do I solve this without disabling Ranger?
01-24-2017
02:45 PM
Team, I am getting the below error while executing "show tables"; the command completes with OK but returns no tables:

0: jdbc:hive2://rwlp508.rw.discoverfinancial.> show tables;
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: We are setting the hadoop caller context from HIVE_SSN_ID:6805bcb1-84bb-4c18-a394-4407e14bf3f4 to hive_20170124090139_016247db-7df1-4563-af40-0b3256f66efe
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO parse.ParseDriver: Parsing command: show tables
17/01/24 09:01:39 INFO parse.ParseDriver: Parse Completed
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=parse start=1485266499788 end=1485266499789 duration=1 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO metastore.HiveMetaStore: 6: get_database: default
17/01/24 09:01:39 INFO HiveMetaStore.audit: ugi=hive/rw***.COM ip=unknown-ip-addr cmd=get_database: default
17/01/24 09:01:39 INFO ql.Driver: Semantic Analysis Completed
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1485266499789 end=1485266499800 duration=11 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO exec.ListSinkOperator: Initializing operator OP[17]
17/01/24 09:01:39 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=doAuthorization from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=doAuthorization start=1485266499801 end=1485266499802 duration=1 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=compile start=1485266499787 end=1485266499802 duration=15 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: We are resetting the hadoop caller context to HIVE_SSN_ID:6805bcb1-84bb-4c18-a394-4407e14bf3f4
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: Setting caller context to query id hive_20170124090139_016247db-7df1-4563-af40-0b3256f66efe
17/01/24 09:01:39 INFO ql.Driver: Starting command(queryId=hive_20170124090139_016247db-7df1-4563-af40-0b3256f66efe): show tables
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=PreHook.org.apache.hadoop.hive.ql.security.authorization.plugin.DisallowTransformHook from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=PreHook.org.apache.hadoop.hive.ql.security.authorization.plugin.DisallowTransformHook start=1485266499804 end=1485266499804 duration=0 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1485266499803 end=1485266499804 duration=1 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
17/01/24 09:01:39 INFO metastore.HiveMetaStore: 8: get_database: default
17/01/24 09:01:39 INFO HiveMetaStore.audit: ugi=hive/rwl***.COM ip=unknown-ip-addr cmd=get_database: default
17/01/24 09:01:39 INFO metastore.HiveMetaStore: 8: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/01/24 09:01:39 INFO metastore.ObjectStore: ObjectStore, initialize called
17/01/24 09:01:39 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is ORACLE
17/01/24 09:01:39 INFO metastore.ObjectStore: Initialized ObjectStore
17/01/24 09:01:39 INFO metadata.HiveUtils: Adding metastore authorization provider: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
17/01/24 09:01:39 INFO metastore.HiveMetaStore: 8: get_tables: db=default pat=.*
17/01/24 09:01:39 ERROR authorizer.RangerHiveAuthorizer: filterListCmdObjects: Internal error: null RangerAccessResult object received back from isAccessAllowed()!
[the ERROR line above is repeated 19 times in the original output]
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=runTasks start=1485266499804 end=1485266499862 duration=58 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO hooks.ATSHook: Created ATS Hook
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=PostHook.org.apache.hadoop.hive.ql.hooks.ATSHook from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=PostHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1485266499862 end=1485266499863 duration=1 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: Resetting the caller context to HIVE_SSN_ID:6805bcb1-84bb-4c18-a394-4407e14bf3f4
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1485266499803 end=1485266499863 duration=60 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO ql.Driver: OK
17/01/24 09:01:39 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1485266499863 end=1485266499863 duration=0 from=org.apache.hadoop.hive.ql.Driver>
17/01/24 09:01:39 INFO log.PerfLogger: </PERFLOG method=Driver.run start=1485266499803 end=1485266499864 duration=61 from=org.apache.hadoop.hive.ql.Driver>
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
No rows selected (0.159 seconds)
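The log above shows the query itself completing (ql.Driver: OK) while RangerHiveAuthorizer.filterListCmdObjects errors on every table, so Ranger filters all of them out of the listing. A null RangerAccessResult from isAccessAllowed() usually means the Hive plugin has no usable policies in memory, for example because it cannot download them from Ranger Admin. A minimal diagnostic sketch, assuming default HDP paths and a hypothetical Hive repository name "clustername_hive":

# Hypothetical repository name and Ranger Admin URL -- substitute your own.
REPO=clustername_hive
RANGER_ADMIN=http://rangeradmin.example.com:6080

# 1) Does HiveServer2 have a non-empty downloaded policy cache?
ls -l /etc/ranger/${REPO}/policycache/

# 2) Can the HiveServer2 host fetch policies from Ranger Admin at all?
curl -s "${RANGER_ADMIN}/service/plugins/policies/download/${REPO}" | head

# 3) Any policy-refresh errors from the plugin in the HiveServer2 log?
grep -iE "PolicyRefresher|RangerAdmin" /var/log/hive/hiveserver2.log | tail

If the cache is missing or the download fails, fixing the plugin's connectivity or credentials to Ranger Admin and restarting HiveServer2 normally restores the table listing without disabling Ranger.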
Labels:
- Apache Hive
- Apache Ranger
01-11-2017
12:27 PM
Team, I am getting the below error when I run a SELECT query:

Error: java.io.IOException: java.io.IOException: ORC does not support type conversion from STRING to CHAR (state=,code=0)

Please let me know the solution.
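This error typically shows up when a column's type in the metastore was changed (for example, STRING altered to CHAR(n)) after the ORC files were written; older Hive ORC readers cannot convert between the two on read. A hedged workaround sketch, where table t and column c are hypothetical stand-ins:

# Option 1: put the metastore type back in line with what the ORC files hold.
beeline -u "jdbc:hive2://<hs2-host>:10000" -e \
  "ALTER TABLE t CHANGE c c STRING;"           # hypothetical table/column names

# Option 2: rewrite the data into a new table that really stores CHAR.
beeline -u "jdbc:hive2://<hs2-host>:10000" -e \
  "CREATE TABLE t_char STORED AS ORC AS
   SELECT CAST(c AS CHAR(10)) AS c FROM t;"    # CHAR(10) is illustrative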
Labels:
- Apache Hive
11-29-2016
08:32 AM
Hi Team, I need to enable the Oozie error log, which was not done before. Can you let me know the process for doing it? I see a few jobs in the RUNNING state under Coordinator Jobs; let me know how to enable the error log without affecting those jobs.
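Oozie's logging is controlled by oozie-log4j.properties (in Ambari: Oozie > Configs > Advanced oozie-log4j), and recent Oozie releases ship a separate "oozieError" appender that writes errors to oozie-error.log. Enabling it only needs a config change plus an Oozie server restart; coordinator and workflow state is persisted in the Oozie database, so RUNNING coordinators resume where they left off after the restart. A sketch, assuming HDP default paths (the appender and log-file names are assumptions to verify against your version):

# Check whether an error appender is already defined (default HDP path assumed).
grep -n "oozieError" /etc/oozie/conf/oozie-log4j.properties

# After enabling the appender (via Ambari or the file above), restart only
# the Oozie server; job state lives in the Oozie DB and is unaffected.
sudo -u oozie /usr/hdp/current/oozie-server/bin/oozied.sh stop
sudo -u oozie /usr/hdp/current/oozie-server/bin/oozied.sh start

# Confirm the error log is now being written.
ls -l /var/log/oozie/oozie-error.log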
Labels:
- Apache Oozie
10-19-2016
07:58 AM
Hi Eyad Garelnabi, thanks for your time. One quick question: you said that with more than 100 nodes, a local LDAP/KDC is better. In that case, will users be created on the local machines? That is, will users be created on the Linux machines and then handled by LDAP? Please correct me if I am wrong.
10-18-2016
04:17 PM
Team, I need your help understanding AD / LDAP / Kerberos integration with Hadoop. Please help me understand: 1) What is the use of having LDAP between AD, Hadoop, and Kerberos in the integration? 2) What are the advantages and disadvantages of integrating AD, Hadoop, and Kerberos without LDAP? 3) What is the difference between implementing a local MIT KDC and a direct AD setup? Could you also point me to a doc explaining the integration of a Hadoop cluster with Active Directory and Kerberos?
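On question 1, the piece that usually makes this click: with an LDAP client such as SSSD on every node, users do not need local /etc/passwd entries at all; the OS resolves identities and groups from AD/LDAP, while Kerberos (AD's KDC or a local MIT KDC) handles only authentication. A quick sanity check, with user jdoe and realm EXAMPLE.COM as hypothetical placeholders:

# Identity: is the user resolved from AD/LDAP rather than a local account?
getent passwd jdoe     # answered by SSSD/LDAP, not /etc/passwd
id jdoe                # shows AD/LDAP group memberships

# Authentication: can the user obtain a Kerberos ticket from the KDC/AD?
kinit jdoe@EXAMPLE.COM # hypothetical realm
klist                  # lists the granted TGT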
Labels:
- Hortonworks Data Platform (HDP)
09-20-2016
02:43 PM
Hi Team, I am getting the below error when I Sqoop from Oracle, but when I rerun the same job without any change, it runs fine. Why does it fail the first time? Please help.

Error:

Warning: /usr/hdp/2.3.2.0-2950/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/09/15 01:04:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.3.2.0-2950
16/09/15 01:04:11 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/09/15 01:04:11 INFO manager.SqlManager: Using default fetchSize of 1000
16/09/15 01:04:11 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hdp/2.3.2.0-2950/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/09/15 01:05:13 ERROR manager.SqlManager: Error executing statement: java.sql.SQLRecoverableException: IO Error: Connection reset
java.sql.SQLRecoverableException: IO Error: Connection reset
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:752)
    at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:662)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:560)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:233)
    at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:325)
    at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
    at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
    at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
    at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
    at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
    at org.apache.sqoop.manager.SqlManager.getColumnTypesForQuery(SqlManager.java:234)
    at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:304)
    at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1845)
    at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
    at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
Caused by: java.net.SocketException: Connection reset
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at oracle.net.ns.DataPacket.send(DataPacket.java:209)
    at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:215)
    at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:302)
    at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
    at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
    at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
    at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
    at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
    at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
    at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:433)
    at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:950)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:639)
    ... 24 more
16/09/15 01:05:13 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
    at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1651)
    at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
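A connection reset during Oracle logon that disappears on an unchanged rerun is very often the JVM entropy-starvation problem: the first connection blocks reading /dev/random to seed SecureRandom, Oracle times out the half-open session, and by the rerun the entropy pool has refilled. A commonly used workaround is to point the JVM at the non-blocking /dev/urandom; a hedged sketch, with the connection string, user, and table as placeholders:

# The /dev/../dev/urandom spelling works around an old JDK path-parsing quirk.
export HADOOP_OPTS="-Djava.security.egd=file:/dev/../dev/urandom"

sqoop import \
  -D mapred.child.java.opts="-Djava.security.egd=file:/dev/../dev/urandom" \
  --connect "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL" \
  --username scott -P \
  --table EMP          # all connection details here are placeholders

If it still fails intermittently, the next things worth checking are Oracle's SQLNET.INBOUND_CONNECT_TIMEOUT and any firewall that drops idle or slow-to-authenticate sessions between the Sqoop host and the database.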
Labels:
- Apache Hive
- Apache Sqoop
08-24-2016
08:06 AM
2 Kudos
Hi Team, I am getting the below error while running Hive on Tez. Please help.

ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 3, vertexId=vertex_1471822483769_2700_1_02, diagnostics=[Task failed, taskId=task_1471822483769_2700_1_02_000014, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.expandAndRehashImpl(BytesBytesMultiHashMap.java:749)
    at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.expandAndRehashToTarget(BytesBytesMultiHashMap.java:567)
    at org.apache.hadoop.hive.ql.exec.persistence.HybridHashTableContainer$HashPartition.getHashMapFromDisk(HybridHashTableContainer.java:150)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.reloadHashTable(MapJoinOperator.java:592)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.continueProcess(MapJoinOperator.java:556)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:500)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.close(MapRecordProcessor.java:344)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:162)
], TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.expandAndRehashImpl(BytesBytesMultiHashMap.java:749)
    at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.expandAndRehashToTarget(BytesBytesMultiHashMap.java:567)
    at org.apache.hadoop.hive.ql.exec.persistence.HybridHashTableContainer$HashPartition.getHashMapFromDisk(HybridHashTableContainer.java:150)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.reloadHashTable(MapJoinOperator.java:592)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.continueProcess(MapJoinOperator.java:556)
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:500)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
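The OutOfMemoryError is thrown inside MapJoinOperator while it rebuilds a hybrid-hash-join hash table, i.e. the small-table hash map does not fit in the Tez container heap. The usual mitigations are to give the containers more heap or to stop Hive from converting this join to a map join. A hedged sketch; the numbers are purely illustrative and must fit within your YARN container limits:

# More container heap, and a lower threshold for map-join conversion.
beeline -u "jdbc:hive2://<hs2-host>:10000" \
  --hiveconf hive.tez.container.size=4096 \
  --hiveconf hive.tez.java.opts="-Xmx3276m" \
  --hiveconf hive.auto.convert.join.noconditionaltask.size=268435456 \
  -f query.sql

# If it still spills over, fall back to a plain shuffle join.
beeline -u "jdbc:hive2://<hs2-host>:10000" \
  --hiveconf hive.auto.convert.join=false \
  -f query.sql

As a rule of thumb, hive.tez.java.opts is kept at roughly 80% of hive.tez.container.size (which is in MB).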
Labels:
- Apache Hive
- Apache Tez
07-18-2016
10:51 AM
Hi Kashif, I tried with the partition details, but I did not get any output; it just said "0 rows affected". Is there anything else I need to do? Please help.
07-13-2016
04:32 PM
Error: FAILED: SemanticException: table is partitioned and partition specification is needed
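Hive raises this SemanticException when a statement writes to a partitioned table without saying which partition. A minimal sketch, with table sales and partition column dt as hypothetical names:

# Name the target partition explicitly, e.g. for a load:
beeline -u "jdbc:hive2://<hs2-host>:10000" -e \
  "LOAD DATA INPATH '/tmp/sales_2016_07_13'
   INTO TABLE sales PARTITION (dt='2016-07-13');"

# For inserts, either name the partition or allow dynamic partitioning:
beeline -u "jdbc:hive2://<hs2-host>:10000" -e \
  "SET hive.exec.dynamic.partition=true;
   SET hive.exec.dynamic.partition.mode=nonstrict;
   INSERT INTO TABLE sales PARTITION (dt)
   SELECT id, amount, dt FROM staging_sales;"

SHOW PARTITIONS sales lists the partition values that already exist, which also helps explain a "0 rows affected" result against a partition that holds no data.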