Member since 04-27-2016
61 Posts · 61 Kudos Received · 3 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5153 | 09-19-2016 05:42 PM
 | 1935 | 06-11-2016 06:41 AM
 | 4668 | 06-10-2016 05:17 PM
06-06-2016 10:20 PM
Thank you, Sujitha. That's the same reason.
06-06-2016 10:20 PM
@Ali Bajwa Thanks, Ali. These errors were from before we sat with you; doing everything you suggested worked like a charm. Appreciate your response.
06-04-2016 09:56 PM
When you run a query that involves one or more joins, Hive needs a few settings adjusted for MapReduce to perform optimally. Try the advanced configs below (a minimal per-session sketch follows the list):
1. hive.exec.parallel=true
2. hive.auto.convert.join=false
3. hive.exec.compress.output=true
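As a minimal sketch, these can also be passed when launching the query from the shell; the query file name my_join_query.sql is hypothetical:

# hypothetical invocation: apply the three properties to this run only
hive --hiveconf hive.exec.parallel=true \
     --hiveconf hive.auto.convert.join=false \
     --hiveconf hive.exec.compress.output=true \
     -f my_join_query.sql

Alternatively, run SET hive.exec.parallel=true; (and likewise the other two) inside the Hive session before submitting the query.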
06-03-2016 10:56 PM
1 Kudo
Does Hortonworks provide native JSON support for HBase, or a nested JSON data model? If so, are there any resources? If this question is already answered somewhere, can you please point me to it?
Labels:
- Apache HBase
05-23-2016 10:34 PM
Can anybody help me identify the root cause of the runtime errors shown in the attached picture? This comes up when running the trucking demo.
Labels:
- Apache NiFi
- Apache Storm
05-03-2016 08:50 PM

import os, re

root = "/usr/hdp"                   # assumption: the directory the script scans for versions
versionRegex = re.compile(r"[-.]")  # assumption: splits version strings like 2.4.0.0-169

def printVersions():
    result = {}
    for f in os.listdir(root):
        # skip non-version entries; adding "docker" to this list is the fix
        if f not in [".", "..", "current", "share", "lost+found", "docker"]:
            result[tuple(map(int, versionRegex.split(f)))] = f
    keys = result.keys()
    ....

This fixed my issue. It happened to me when I was restarting HBase to deploy a service through Ambari and the HBase client wouldn't install; the error mentioned "docker" in the printVersions function, instead of "usr". Thanks!
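For context, this function appears to come from HDP's hdp-select script. Assuming the default install path /usr/bin/hdp-select (an assumption; verify on your machine), the fix can be applied by editing the script directly:

# assumption: hdp-select lives at /usr/bin/hdp-select; back it up first
cp /usr/bin/hdp-select /usr/bin/hdp-select.bak
# then add "docker" to the exclusion list inside printVersions()
vi /usr/bin/hdp-select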
04-29-2016
11:20 PM
HI ryan, Yeah i did that, also had to increase my node's memory and remove the SNN from safe mode. Thanks!
... View more
04-29-2016 10:37 PM
This is the ulimit result:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 46763
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 46763
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
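Note that open files is only 1024, which is quite low for Hadoop daemons; exhausted file descriptors are one common cause of DFSClient pipeline errors like those in the next post. A minimal check-and-raise sketch (32768 is an assumed value, not from this thread; persistent limits belong in /etc/security/limits.conf):

ulimit -n          # print the current open-files soft limit (1024 here)
ulimit -n 32768    # raise it for this shell session (assumed value)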
04-29-2016 10:04 PM

2016-04-29 20:57:08,371 INFO blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(451)) - Number of failed storage changes from 0 to 0
2016-04-29 20:57:08,405 WARN hdfs.DFSClient (DFSOutputStream.java:run(857)) - DFSOutputStream ResponseProcessor exception for block BP-1014530610-10.0.2.15-1456769265896:blk_1073786547_45771
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:749)
2016-04-29 20:57:08,407 INFO provider.BaseAuditHandler (BaseAuditHandler.java:logStatus(312)) - Audit Status Log: name=hdfs.async.multi_dest.batch.hdfs, interval=03.058 seconds, events=1, succcessCount=1, totalEvents=9406, totalSuccessCount=9403, totalDeferredCount=3
2016-04-29 20:57:08,407 INFO queue.AuditFileSpool (AuditFileSpool.java:stop(321)) - Stop called, queueName=hdfs.async.multi_dest.batch, consumer=hdfs.async.multi_dest.batch.hdfs
2016-04-29 20:57:08,407 INFO provider.BaseAuditHandler (BaseAuditHandler.java:logStatus(312)) - Audit Status Log: name=hdfs.async.multi_dest.batch, finalDestination=hdfs.async.multi_dest.batch.hdfs, interval=21.059 seconds, events=30, succcessCount=9, stashedCount=3, totalEvents=1057848, totalSuccessCount=9295, totalStashedCount=3
2016-04-29 20:57:08,407 INFO queue.AuditFileSpool (AuditFileSpool.java:runDoAs(877)) - Caught exception in consumer thread. Shutdown might be in progress
2016-04-29 20:57:08,407 INFO queue.AuditBatchQueue (AuditBatchQueue.java:runDoAs(373)) - Exiting consumerThread.run() method. name=hdfs.async.multi_dest.batch
2016-04-29 20:57:08,439 ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(954)) - Failed to close inode 96024
java.io.IOException: All datanodes DatanodeInfoWithStorage[10.0.2.15:50010,DS-c63e2550-a18a-4035-8bb3-7f8f2b4dd607,DISK] are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1146)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)
2016-04-29 20:57:08,565 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sandbox.hortonworks.com/10.0.2.15
************************************************************/
[root@sandbox ~]#
04-29-2016 09:51 PM
Looks like a connection exception. What does this mean?