Member since: 07-31-2013

Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1543 | 07-09-2019 12:53 AM |
| | 9292 | 06-23-2019 08:37 PM |
| | 8050 | 06-18-2019 11:28 PM |
| | 8676 | 05-23-2019 08:46 PM |
| | 3473 | 05-20-2019 01:14 AM |
05-21-2019
10:41 AM
Hi, I am trying to download the Sqoop Informix JDBC/ODBC driver. Can you please point me to where I can download that jar? Thanks
05-21-2019
03:36 AM
I have a similar problem; can you please help?

Logs of HMaster:

2019-05-21 15:16:39,745 WARN [main-EventThread] coordination.SplitLogManagerCoordination: Error splitting /hbase/splitWAL/WALs%2F10.136.107.153%2C16020%2C1558423482915-splitting%2F10.136.107.153%252C16020%252C1558423482915.default.1558426329243
2019-05-21 15:16:39,745 WARN [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.SplitLogManager: error while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] installed = 3 but only 0 done
2019-05-21 15:16:39,747 ERROR [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 10.136.107.153,16020,1558423482915, will retry
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:357)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] Task = installed = 3 done = 0 error = 3
    at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:391)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:364)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:286)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:213)
    ... 4 more
2019-05-21 15:16:39,748 FATAL [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.HMaster: Master server abort: loaded coprocessors are: []
2019-05-21 15:16:39,748 FATAL [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.HMaster: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 10.136.107.153,16020,1558423482915, will retry
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:357)
    at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] Task = installed = 3 done = 0 error = 3

Logs of HRegionServer:

2019-05-21 15:16:38,669 INFO [RS_LOG_REPLAY_OPS-DIGPUNPERHEL02:16020-0-Writer-1] wal.WALSplitter: Creating writer path=hdfs://10.136.107.59:9000/hbase/data/dev2/observation/8baa93c0a9ddc9ab4ebfead1a50d85b2/recovered.edits/0000000000002354365.temp region=8baa93c0a9ddc9ab4ebfead1a50d85b2
2019-05-21 15:16:38,701 WARN [Thread-101] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/data/dev2/observation/8baa93c0a9ddc9ab4ebfead1a50d85b2/recovered.edits/0000000000002354365.temp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
05-21-2019
03:30 AM
Mr. Harsh, would you please have a look at my reply? Thanks
05-20-2019
04:18 AM
Thanks for this. I think we can summarize it as follows:

* If only external Hive tables are used to process S3 data, the technical issues regarding consistency and scalable metadata handling would be resolved.
* If external and internal Hive tables are used in combination to process S3 data, the technical issues regarding consistency, scalable metadata handling, and data locality would be resolved.
* If Spark alone is used on top of S3, the technical issues regarding consistency and scalable metadata handling would be resolved through in-memory processing, since Spark keeps transient data in memory, only reading the initial data from S3 and writing the result back.
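To make the first bullet concrete, here is a minimal, hypothetical sketch of creating an external Hive table over S3 through JDBC. The HiveServer2 URL, table name, schema, and bucket path are all placeholder assumptions, the Hive JDBC driver must be on the classpath, and the s3a connector must already be configured on the cluster:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExternalS3Table {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint; adjust host, port, and database.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hs2.example.com:10000/default");
             Statement st = conn.createStatement()) {
            // External table: Hive manages only the metadata; the data itself stays in S3.
            st.execute("CREATE EXTERNAL TABLE IF NOT EXISTS events (id BIGINT, payload STRING) "
                     + "STORED AS PARQUET LOCATION 's3a://my-bucket/events/'");
        }
    }
}
```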
05-19-2019
08:48 PM
- Do you observe this intermittency from only specific client/gateway hosts?
- Does your cluster apply firewall rules between the cluster hosts?

One probable reason behind the intermittent 'Connection refused' from KMS could be that it is frequently (auto)restarting. Check its process stdout messages and service logs to confirm whether a kill is causing it to be restarted by the CM Agent supervisor.
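If it helps to correlate the refusals with restart windows, here is a minimal probe sketch (not part of any Cloudera tooling) that repeatedly attempts a TCP connection to the KMS and timestamps each refusal. The host and port are placeholders; adjust them to your KMS address, as the port varies by version and configuration:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.time.LocalDateTime;

public class KmsProbe {
    public static void main(String[] args) throws InterruptedException {
        String host = args.length > 0 ? args[0] : "kms.example.com";           // placeholder host
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 16000;        // placeholder port
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2000);
                System.out.println(LocalDateTime.now() + " connect OK");
            } catch (IOException e) {
                // A burst of refusals here would line up with a KMS restart window.
                System.out.println(LocalDateTime.now() + " REFUSED: " + e.getMessage());
            }
            Thread.sleep(5000);
        }
    }
}
```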
05-15-2019
06:52 PM
1 Kudo
The Disk Balancer sub-system is local to each DataNode and can be triggered on distinct hosts in parallel. The only time you should receive that exception is if the targeted DataNode's hdfs-site.xml does not carry the property that enables the disk balancer, or when the DataNode is mid-shutdown/restart. How have you configured the disk balancer for your cluster? Did you follow the configuration approach presented at https://blog.cloudera.com/blog/2016/10/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/? What are your CDH and CM versions?
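As a quick sanity check, a sketch like the following reads the effective DataNode configuration and reports whether the enabling property (dfs.disk.balancer.enabled in Apache Hadoop) is set. It assumes the Hadoop client jars and the node's hdfs-site.xml are on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class DiskBalancerCheck {
    public static void main(String[] args) {
        // HdfsConfiguration pulls in hdfs-default.xml and hdfs-site.xml from the classpath.
        Configuration conf = new HdfsConfiguration();
        boolean enabled = conf.getBoolean("dfs.disk.balancer.enabled", false);
        System.out.println("dfs.disk.balancer.enabled = " + enabled);
    }
}
```

If this prints false against the targeted DataNode's configuration, that would match the exception you are seeing.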
05-15-2019
07:17 AM
Before the "Failed open" message is this block (including only the first line of the Java stack trace):

2019-05-14 15:55:53,356 INFO org.apache.hadoop.hbase.regionserver.HRegion: Replaying edits from hdfs://athos/hbase/data/default/deveng_v500/ab693aebe203bc8781f1a9f1c0a1d045/recovered.edits/0000000000094270192
2019-05-14 15:55:53,383 INFO org.apache.hadoop.hbase.regionserver.HRegion: Replaying edits from hdfs://athos/hbase/data/default/deveng_v500/ab693aebe203bc8781f1a9f1c0a1d045/recovered.edits/0000000000094270299
2019-05-14 15:55:53,722 INFO org.apache.hadoop.hbase.regionserver.HRegion: Replaying edits from hdfs://athos/hbase/data/default/deveng_v500/ab693aebe203bc8781f1a9f1c0a1d045/recovered.edits/0000000000094270330
2019-05-14 15:55:53,903 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for tomcat (auth:SIMPLE)
2019-05-14 15:55:53,904 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from 10.190.158.151 port: 60648 with unknown version info
2019-05-14 15:55:54,614 ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=deveng_v500,\x00\x00\x1C\xAB\x92\xBC\xD8\x02,1544486155414.ab693aebe203bc8781f1a9f1c0a1d045., starting to roll back the global memstore size.
java.lang.IllegalArgumentException: offset (8) + length (2) exceed the capacity of the array: 0
at org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:631)
.............
05-12-2019
07:47 PM
Thank you for sharing that output. The jar does appear to carry one set of classes in the right package directory, but it also carries another set under a different directory. Perhaps there is a versioning/work-in-progress issue here, where the incorrect build is the one that ends up running. Can you try building your jar again from a clean working directory? If the right driver class runs, you should not see the following log:

> 19/05/11 02:43:49 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
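For reference, here is a minimal driver sketch (the class and job names are placeholders) showing Job#setJarByClass, which is what ties the submitted jar to your driver class and avoids that warning:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "my-job"); // placeholder job name
        // Tells the framework which jar to ship to the cluster; without this
        // (or Job#setJar(String)) you get "No job jar file set" and your
        // classes may not be found at runtime.
        job.setJarByClass(MyDriver.class);
        // ... set mapper/reducer classes and input/output paths here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```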
05-09-2019
02:39 AM
1 Kudo
Spark running on YARN uses the temporary storage presented to it by the NodeManagers where its containers run. These directory path lists are configured via Cloudera Manager -> YARN -> Configuration -> "NodeManager Local Directories" and "NodeManager Log Directories". You can change their values to point to your new, larger volume, and Spark will cease to use your root partition. FWIW, the same applies to HDFS if you use it. Also see: https://www.cloudera.com/documentation/enterprise/release-notes/topics/hardware_requirements_guide.html
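Under the hood, those CM fields map to the yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs properties. Here is a small sketch that prints the values a NodeManager would actually pick up, assuming the Hadoop/YARN client jars and the node's yarn-site.xml are on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NmDirsCheck {
    public static void main(String[] args) {
        // YarnConfiguration loads yarn-default.xml and yarn-site.xml from the classpath.
        Configuration conf = new YarnConfiguration();
        System.out.println(YarnConfiguration.NM_LOCAL_DIRS + " = " + conf.get(YarnConfiguration.NM_LOCAL_DIRS));
        System.out.println(YarnConfiguration.NM_LOG_DIRS + " = " + conf.get(YarnConfiguration.NM_LOG_DIRS));
    }
}
```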
05-09-2019
02:09 AM
Quoted from the documentation on using Avro files, at https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_avro_usage.html#topic_26_2:

"""
Hive (…) To enable Snappy compression on output [avro] files, run the following before writing to the table:
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
"""

Please try this out. You are missing only the second property mentioned here, which appears specific to Avro serialization in Hive. Avro's default compression is deflate, which explains the behaviour you observe without it.
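If you drive Hive programmatically, a hypothetical JDBC session applying both properties before the write might look like this; the HiveServer2 URL and table names are placeholders, and the Hive JDBC driver must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AvroSnappyWrite {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 endpoint; adjust host, port, and database.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hs2.example.com:10000/default");
             Statement st = conn.createStatement()) {
            st.execute("SET hive.exec.compress.output=true");
            st.execute("SET avro.output.codec=snappy");  // without this, Avro falls back to deflate
            st.execute("INSERT OVERWRITE TABLE my_avro_table SELECT * FROM source_table"); // placeholder tables
        }
    }
}
```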