Member since 01-12-2016
12 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1259 | 01-09-2018 02:46 AM |
05-31-2018 10:18 AM
The JVM arguments are as follows:

VM Arguments:
jvm_args: -Xmx1024m -Dhdp.version=2.6.3.0-235 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.3.0-235 -Dhadoop.log.dir=/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.3.0-235/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MaxGCPauseMillis=1000 -XX:InitiatingHeapOccupancyPercent=45 -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 -XX:G1ReservePercent=10 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/log/hive
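Purely as an illustrative sketch (not part of the configuration above): since -verbose:gc and the PrintGC flags are already enabled, the GC output could also be redirected to a rotating log file for later correlation with the crashes. The log path below is an assumption.

```
# Sketch only: extra HotSpot flags to write GC output to a rotating file (path is a placeholder)
-Xloggc:/log/hive/hiveserver2-gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M
```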
05-31-2018 10:12 AM
Hi all, I am Jaehyung's colleague. The core dumps are as follows:

CoreDump #1

Stack: [0x00007f43dabec000,0x00007f43daced000], sp=0x00007f43dace7240, free space=1004k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x651649] InstanceKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0xc9
V [libjvm.so+0x661140] InstanceMirrorKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0x20
V [libjvm.so+0x5c7845] G1ParScanThreadState::copy_to_survivor_space(InCSetState, oopDesc*, markOopDesc*)+0x4c5
V [libjvm.so+0x5aa7b1] G1ParCopyClosure<(G1Barrier)2, (G1Mark)0>::do_oop(oopDesc**)+0x51
V [libjvm.so+0x5aa0e4] G1KlassScanClosure::do_klass(Klass*)+0x34
V [libjvm.so+0x46bc7c] ClassLoaderData::oops_do(OopClosure*, KlassClosure*, bool)+0x8c
V [libjvm.so+0x46afc8] ClassLoaderDataGraph::roots_cld_do(CLDClosure*, CLDClosure*)+0x38
V [libjvm.so+0x5ce4de] G1RootProcessor::process_java_roots(OopClosure*, CLDClosure*, CLDClosure*, CLDClosure*, CodeBlobClosure*, G1GCPhaseTimes*, unsigned int)+0x6e
V [libjvm.so+0x5ced81] G1RootProcessor::evacuate_roots(OopClosure*, OopClosure*, CLDClosure*, CLDClosure*, bool, unsigned int)+0x561
V [libjvm.so+0x5ae1a8] G1ParTask::work(unsigned int)+0x3b8
V [libjvm.so+0xaed0ff] GangWorker::loop()+0xcf
V [libjvm.so+0x92a728] java_start(Thread*)+0x108

Heap:
garbage-first heap total 67108864K, used 41487682K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000)
region size 32768K, 682 young (22347776K), 17 survivors (557056K)
Metaspace used 144974K, capacity 149252K, committed 149440K, reserved 149504K

GC Heap History (10 events):
Event: 1095307.920 GC heap after Heap after GC invocations=9809 (full 0): garbage-first heap total 67108864K, used 19507107K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 12 young (393216K), 12 survivors (393216K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K }
Event: 1095337.303 GC heap before {Heap before GC invocations=9809 (full 0): garbage-first heap total 67108864K, used 41461667K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 682 young (22347776K), 12 survivors (393216K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K
Event: 1095337.411 GC heap after Heap after GC invocations=9810 (full 0): garbage-first heap total 67108864K, used 19527369K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 13 young (425984K), 13 survivors (425984K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K }
Event: 1095368.891 GC heap before {Heap before GC invocations=9810 (full 0): garbage-first heap total 67108864K, used 41449161K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 682 young (22347776K), 13 survivors (425984K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K
Event: 1095368.988 GC heap after Heap after GC invocations=9811 (full 0): garbage-first heap total 67108864K, used 19582126K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 14 young (458752K), 14 survivors (458752K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K }
Event: 1095559.738 GC heap before {Heap before GC invocations=9811 (full 0): garbage-first heap total 67108864K, used 41471150K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 682 young (22347776K), 14 survivors (458752K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K
Event: 1095559.885 GC heap after Heap after GC invocations=9812 (full 0): garbage-first heap total 67108864K, used 19671570K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 17 young (557056K), 17 survivors (557056K) Metaspace used 144921K, capacity 149223K, committed 149440K, reserved 149504K }
Event: 1095746.491 GC heap before {Heap before GC invocations=9812 (full 0): garbage-first heap total 67108864K, used 41462290K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 682 young (22347776K), 17 survivors (557056K) Metaspace used 144938K, capacity 149244K, committed 149440K, reserved 149504K
Event: 1095746.629 GC heap after Heap after GC invocations=9813 (full 0): garbage-first heap total 67108864K, used 19696962K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 17 young (557056K), 17 survivors (557056K) Metaspace used 144938K, capacity 149244K, committed 149440K, reserved 149504K }
Event: 1095881.378 GC heap before {Heap before GC invocations=9813 (full 0): garbage-first heap total 67108864K, used 41487682K [0x00007f33d2000000, 0x00007f33d4004000, 0x00007f43d2000000) region size 32768K, 682 young (22347776K), 17 survivors (557056K) Metaspace used 144974K, capacity 149252K, committed 149440K

--------

CoreDump #2

Stack: [0x00007f7aa0973000,0x00007f7aa0a74000], sp=0x00007f7aa0a6e1c0, free space=1004k

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x651649] InstanceKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0xc9
V [libjvm.so+0x661140] InstanceMirrorKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0x20
V [libjvm.so+0x5c7845] G1ParScanThreadState::copy_to_survivor_space(InCSetState, oopDesc*, markOopDesc*)+0x4c5
V [libjvm.so+0x5aa7b1] G1ParCopyClosure<(G1Barrier)2, (G1Mark)0>::do_oop(oopDesc**)+0x51
V [libjvm.so+0x5aa0e4] G1KlassScanClosure::do_klass(Klass*)+0x34
V [libjvm.so+0x46bc7c] ClassLoaderData::oops_do(OopClosure*, KlassClosure*, bool)+0x8c
V [libjvm.so+0x46afc8] ClassLoaderDataGraph::roots_cld_do(CLDClosure*, CLDClosure*)+0x38
V [libjvm.so+0x5ce4de] G1RootProcessor::process_java_roots(OopClosure*, CLDClosure*, CLDClosure*, CLDClosure*, CodeBlobClosure*, G1GCPhaseTimes*, unsigned int)+0x6e
V [libjvm.so+0x5ced81] G1RootProcessor::evacuate_roots(OopClosure*, OopClosure*, CLDClosure*, CLDClosure*, bool, unsigned int)+0x561
V [libjvm.so+0x5ae1a8] G1ParTask::work(unsigned int)+0x3b8
V [libjvm.so+0xaed0ff] GangWorker::loop()+0xcf
V [libjvm.so+0x92a728] java_start(Thread*)+0x108

Heap:
garbage-first heap total 67108864K, used 44362593K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000)
region size 32768K, 682 young (22347776K), 5 survivors (163840K)
Metaspace used 150062K, capacity 155003K, committed 155360K, reserved 155648K

GC Heap History (10 events):
Event: 2031681.264 GC heap after Heap after GC invocations=15747 (full 0): garbage-first heap total 67108864K, used 22165995K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 4 young (131072K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K }
Event: 2031720.454 GC heap before {Heap before GC invocations=15747 (full 0): garbage-first heap total 67108864K, used 44382699K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 682 young (22347776K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K
Event: 2031720.538 GC heap after Heap after GC invocations=15748 (full 0): garbage-first heap total 67108864K, used 22175438K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 5 young (163840K), 5 survivors (163840K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K }
Event: 2031754.375 GC heap before {Heap before GC invocations=15748 (full 0): garbage-first heap total 67108864K, used 44359374K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 682 young (22347776K), 5 survivors (163840K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K
Event: 2031754.461 GC heap after Heap after GC invocations=15749 (full 0): garbage-first heap total 67108864K, used 22158646K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 4 young (131072K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K }
Event: 2031795.794 GC heap before {Heap before GC invocations=15749 (full 0): garbage-first heap total 67108864K, used 44375350K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 682 young (22347776K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K
Event: 2031795.892 GC heap after Heap after GC invocations=15750 (full 0): garbage-first heap total 67108864K, used 22165753K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 4 young (131072K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K }
Event: 2031844.619 GC heap before {Heap before GC invocations=15750 (full 0): garbage-first heap total 67108864K, used 44382457K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 682 young (22347776K), 4 survivors (131072K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K
Event: 2031844.716 GC heap after Heap after GC invocations=15751 (full 0): garbage-first heap total 67108864K, used 22178657K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 5 young (163840K), 5 survivors (163840K) Metaspace used 150034K, capacity 155003K, committed 155360K, reserved 155648K }
Event: 2031889.871 GC heap before {Heap before GC invocations=15751 (full 0): garbage-first heap total 67108864K, used 44362593K [0x00007f6a7e000000, 0x00007f6a80004000, 0x00007f7a7e000000) region size 32768K, 682 young (22347776K), 5 survivors (163840K) Metaspace used 150062K, capacity 155003K, committed 155360K, reserved 155648K
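As a side note, if the original hs_err crash files are still available, the sections quoted above can be pulled out quickly. A hedged one-liner (the file name is a placeholder):

```
# Sketch only: extract the native frames and GC heap history from a HotSpot crash file
grep -A 20 -E "Native frames|GC Heap History" hs_err_pid12345.log
```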
01-09-2018 02:46 AM
1 Kudo
I found the exact Tez version: 0.7.0.2.6.2.0-205.
01-09-2018 12:52 AM
Hi all, I downloaded HDP-2.6.3.0-235 and checked that the Tez version is 0.7.0. However, the HDP-CHANGES.txt file lists TEZ-3491, whose "Affects Version/s" is 0.7.1 (https://issues.apache.org/jira/browse/TEZ-3491). So shouldn't the Tez version included in this HDP release actually be 0.7.1 or later? Thanks in advance. Park
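For illustration only (not from the original post), two hedged ways to confirm which Tez build is actually installed on a node; the paths assume a standard HDP layout:

```
# Sketch only: show the Tez package version selected by HDP
hdp-select status | grep tez

# Sketch only: the build number is embedded in the jar names (path is an assumption)
ls /usr/hdp/current/tez-client/tez-api-*.jar
```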
12-19-2016 11:27 AM
The current value is the default: innodb_lock_wait_timeout = 50.
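A hedged sketch of how the running value can be checked, and raised without editing my.cnf; the 300-second figure simply echoes the recommendation quoted in the related question below:

```
-- Sketch only: check the value currently in effect
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Sketch only: raise it for new connections without a mysqld restart (dynamic variable in MySQL 5.6)
SET GLOBAL innodb_lock_wait_timeout = 300;
```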
12-19-2016 11:16 AM
Hi all, I got a 'Lock wait timeout exceeded' error in the Hive metastore, as follows. What causes this problem?
Versions: HDP (2.1.2.1), Hive (1.2.1.2.3), MySQL (5.6.27)
After googling, I found the following recommendation: add two values to my.cnf and restart mysqld. Is this resolution right? Is it enough to set ONLY the 'innodb_lock_wait_timeout' value?
innodb_lock_wait_timeout = 300
transaction-isolation = READ-COMMITTED
Could the timeout be caused by a deadlock? Is there any Hive configuration that should be changed?
NestedThrowablesStackTrace:
Insert of object "org.apache.hadoop.hive.metastore.model.MPartition@2a3d337" using statement "INSERT INTO `PARTITIONS` (`PART_ID`,`LAST_ACCESS_TIME`,`TBL_ID`,`SD_ID`,`CREATE_TIME`,`PART_NAME`) VALUES (?,?,?,?,?,?)" failed : Lock wait timeout exceeded; try restarting transaction
org.datanucleus.exceptions.NucleusDataStoreException: Insert of object "org.apache.hadoop.hive.metastore.model.MPartition@2a3d337" using statement "INSERT INTO `PARTITIONS` (`PART_ID`,`LAST_ACCESS_TIME`,`TBL_ID`,`SD_ID`,`CREATE_TIME`,`PART_NAME`) VALUES (?,?,?,?,?,?)" failed : Lock wait timeout exceeded; try restarting transaction
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:505)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:2065)
at org.datanucleus.ExecutionContextImpl.persistObjects(ExecutionContextImpl.java:2005)
at org.datanucleus.ExecutionContextThreadedImpl.persistObjects(ExecutionContextThreadedImpl.java:231)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistentAll(JDOPersistenceManager.java:776)
at org.apache.hadoop.hive.metastore.ObjectStore.addPartitions(ObjectStore.java:1368)
at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy0.addPartitions(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_core(HiveMetaStore.java:2181)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_partitions_req(HiveMetaStore.java:2215)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy3.add_partitions_req(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9632)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_partitions_req.getResult(ThriftHiveMetastore.java:9616)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:681)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:676)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:676)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:998)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3847)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3783)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2447)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2594)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2545)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1901)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2113)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2049)
at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2034)
at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:205)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:410)
... 36 more

Best Regards,
Park
Labels: Apache Hive
09-08-2016 12:48 PM
1 Kudo
Hi all,
I run Spark SQL queries through a Thrift Server. As I understand it, if multiple SQL queries are submitted through the Thrift Server, each query runs sequentially.
If many users want to query a table on a Spark cluster over YARN at the same time, how can these queries be run concurrently? The queries do not update the table; they only read it.
My thought is that, because each Thrift Server has its own dedicated set of executors, running multiple Thrift Servers would let multiple queries be processed concurrently.
Does anyone have ideas about this situation? Thanks in advance. Park.
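For illustration only, a hedged sketch of the multi-Thrift-Server idea mentioned above, plus one related knob (Spark's FAIR scheduler mode) that is often brought up in this context; the port numbers are assumptions:

```
# Sketch only: enable FAIR scheduling between jobs inside a single Thrift Server
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --conf spark.scheduler.mode=FAIR \
  --hiveconf hive.server2.thrift.port=10015

# Sketch only: a second, independent Thrift Server (with its own executors) on another port
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --hiveconf hive.server2.thrift.port=10016
```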
Labels: Apache Spark
05-23-2016 04:59 AM
Thanks for your reply in advance. https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_Ambari_Users_Guide/content/_modify_hdfs_configurations.html
Is there any impact on Hadoop operations after the "config.sh delete ... hdfs-site "dfs.namenode.rpc-address"" command runs?
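For context, a hedged sketch of the Ambari configs.sh call the question refers to (the linked guide documents this script); the Ambari host, credentials, and cluster name below are placeholders:

```
# Sketch only: delete dfs.namenode.rpc-address from hdfs-site via Ambari's configs.sh
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  delete ambari.example.com MyCluster hdfs-site dfs.namenode.rpc-address
```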
05-23-2016 02:47 AM
1 Kudo
Hi all,
I ran into a problem when rebalancing HDFS, as follows.
Ambari: 2.1.2.1
Does anyone have an idea about what causes this?
Also, where is the 'start-balancer.sh' script located? (A sketch of the balancer invocation follows the log output below.)
.....
16/05/23 11:01:21 INFO balancer.Balancer: Need to move 14.26 TB to make the cluster balanced.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): Unknown method isUpgradeFinalized called on org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol protocol.
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:575)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
. Exiting ...
May 23, 2016 11:01:21 AM 0 0 B -1 B -1 B
May 23, 2016 11:01:21 AM Balancing took 1.618 seconds
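For reference, a hedged sketch (not from the original post) of where the balancer helper usually lives on an HDP node and how to invoke the balancer directly; the sbin path is an assumption about the HDP 2.x layout:

```
# Sketch only: the helper script typically sits under the Hadoop client's sbin directory
ls /usr/hdp/current/hadoop-client/sbin/start-balancer.sh

# Sketch only: run the balancer directly with a 10% utilization threshold
hdfs balancer -threshold 10
```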
Thanks in advance
Park
Labels: Apache Ambari
01-12-2016 10:02 AM
Hello,
In HDP-2.3.2.0-2950 there are UIs for the capacity scheduler in Ambari: YARN > Configs > Scheduler, and the YARN Queue Manager menu at the top.
Is there any UI for the fair scheduler? If not, how can I set up the fair scheduler? Is it enough to set both the yarn.resourcemanager.scheduler.class value and fair-scheduler.xml?
Regards, Park
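For illustration only, a hedged sketch of the yarn-site.xml changes the question refers to; yarn.resourcemanager.scheduler.class and yarn.scheduler.fair.allocation.file are the standard Hadoop property names, and the allocation-file path is an assumption:

```
<!-- Sketch only: switch the ResourceManager to the FairScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<!-- Sketch only: point it at a fair-scheduler.xml allocation file (path is a placeholder) -->
<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/etc/hadoop/conf/fair-scheduler.xml</value>
</property>
```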
Labels: Apache YARN