<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Question regarding &amp;quot;http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/&amp;quot; in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120556#M47147</link>
    <description>&lt;P&gt;The HDFS services are currently failing; please see the logs below.&lt;/P&gt;&lt;PRE&gt;0.1024069 secs] 91101K-&amp;gt;9945K(245760K), 0.1024744 secs] [Times: user=0.32 sys=0.00, real=0.10 secs] 
Heap
 par new generation   total 92160K, used 35183K [0x00000000f0600000, 0x00000000f6a00000, 0x00000000f6a00000)
  eden space 81920K,  34% used [0x00000000f0600000, 0x00000000f218e640, 0x00000000f5600000)
  from space 10240K,  68% used [0x00000000f5600000, 0x00000000f5ccd8e8, 0x00000000f6000000)
  to   space 10240K,   0% used [0x00000000f6000000, 0x00000000f6000000, 0x00000000f6a00000)
 concurrent mark-sweep generation total 153600K, used 2978K [0x00000000f6a00000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21376K, capacity 21678K, committed 21884K, reserved 1069056K
  class space    used 2556K, capacity 2654K, committed 2688K, reserved 1048576K
==&amp;gt; /var/log/hadoop/hdfs/gc.log-201611240749 &amp;lt;==
Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 9064548k(2520228k free), swap 5119996k(5119996k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=262144000 -XX:MaxHeapSize=262144000 -XX:MaxNewSize=104857600 -XX:MaxTenuringThreshold=6 -XX:NewSize=52428800 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2016-11-24T07:49:27.202+0000: 1.402: [GC (Allocation Failure) 2016-11-24T07:49:27.203+0000: 1.402: [ParNew: 81920K-&amp;gt;9191K(92160K), 0.0138715 secs] 81920K-&amp;gt;9191K(245760K), 0.0139731 secs] [Times: user=0.03 sys=0.01, real=0.02 secs] 
2016-11-24T07:49:27.694+0000: 1.893: [GC (Allocation Failure) 2016-11-24T07:49:27.694+0000: 1.893: [ParNew: 91111K-&amp;gt;7299K(92160K), 0.0262494 secs] 91111K-&amp;gt;10275K(245760K), 0.0263202 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 92160K, used 35518K [0x00000000f0600000, 0x00000000f6a00000, 0x00000000f6a00000)
  eden space 81920K,  34% used [0x00000000f0600000, 0x00000000f218ecd8, 0x00000000f5600000)
  from space 10240K,  71% used [0x00000000f5600000, 0x00000000f5d20d30, 0x00000000f6000000)
  to   space 10240K,   0% used [0x00000000f6000000, 0x00000000f6000000, 0x00000000f6a00000)
 concurrent mark-sweep generation total 153600K, used 2976K [0x00000000f6a00000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21335K, capacity 21622K, committed 21884K, reserved 1069056K
  class space    used 2555K, capacity 2654K, committed 2688K, reserved 1048576K&lt;/PRE&gt;</description>
    <pubDate>Thu, 24 Nov 2016 15:50:35 GMT</pubDate>
    <dc:creator>bibhas_burman</dc:creator>
    <dc:date>2016-11-24T15:50:35Z</dc:date>
    <item>
      <title>Regarding "http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/"</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120556#M47147</link>
      <description>&lt;P&gt;The HDFS services are currently failing; please see the logs below.&lt;/P&gt;&lt;PRE&gt;0.1024069 secs] 91101K-&amp;gt;9945K(245760K), 0.1024744 secs] [Times: user=0.32 sys=0.00, real=0.10 secs] 
Heap
 par new generation   total 92160K, used 35183K [0x00000000f0600000, 0x00000000f6a00000, 0x00000000f6a00000)
  eden space 81920K,  34% used [0x00000000f0600000, 0x00000000f218e640, 0x00000000f5600000)
  from space 10240K,  68% used [0x00000000f5600000, 0x00000000f5ccd8e8, 0x00000000f6000000)
  to   space 10240K,   0% used [0x00000000f6000000, 0x00000000f6000000, 0x00000000f6a00000)
 concurrent mark-sweep generation total 153600K, used 2978K [0x00000000f6a00000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21376K, capacity 21678K, committed 21884K, reserved 1069056K
  class space    used 2556K, capacity 2654K, committed 2688K, reserved 1048576K
==&amp;gt; /var/log/hadoop/hdfs/gc.log-201611240749 &amp;lt;==
Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 9064548k(2520228k free), swap 5119996k(5119996k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=262144000 -XX:MaxHeapSize=262144000 -XX:MaxNewSize=104857600 -XX:MaxTenuringThreshold=6 -XX:NewSize=52428800 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2016-11-24T07:49:27.202+0000: 1.402: [GC (Allocation Failure) 2016-11-24T07:49:27.203+0000: 1.402: [ParNew: 81920K-&amp;gt;9191K(92160K), 0.0138715 secs] 81920K-&amp;gt;9191K(245760K), 0.0139731 secs] [Times: user=0.03 sys=0.01, real=0.02 secs] 
2016-11-24T07:49:27.694+0000: 1.893: [GC (Allocation Failure) 2016-11-24T07:49:27.694+0000: 1.893: [ParNew: 91111K-&amp;gt;7299K(92160K), 0.0262494 secs] 91111K-&amp;gt;10275K(245760K), 0.0263202 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] 
Heap
 par new generation   total 92160K, used 35518K [0x00000000f0600000, 0x00000000f6a00000, 0x00000000f6a00000)
  eden space 81920K,  34% used [0x00000000f0600000, 0x00000000f218ecd8, 0x00000000f5600000)
  from space 10240K,  71% used [0x00000000f5600000, 0x00000000f5d20d30, 0x00000000f6000000)
  to   space 10240K,   0% used [0x00000000f6000000, 0x00000000f6000000, 0x00000000f6a00000)
 concurrent mark-sweep generation total 153600K, used 2976K [0x00000000f6a00000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 21335K, capacity 21622K, committed 21884K, reserved 1069056K
  class space    used 2555K, capacity 2654K, committed 2688K, reserved 1048576K&lt;/PRE&gt;</description>
      <pubDate>Thu, 24 Nov 2016 15:50:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120556#M47147</guid>
      <dc:creator>bibhas_burman</dc:creator>
      <dc:date>2016-11-24T15:50:35Z</dc:date>
    </item>
    <item>
      <title>Re: Regarding "http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/"</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120557#M47148</link>
      <description>&lt;P&gt;@&lt;A href="https://community.hortonworks.com/users/13580/bibhasburman.html"&gt;Bibhas Burman&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You seem to have run out of disk space! Can you run these commands and check the output, assuming you are root:&lt;/P&gt;&lt;PRE&gt;# su - hdfs
$ hdfs dfsadmin -report&lt;/PRE&gt;&lt;P&gt;or:&lt;/P&gt;&lt;PRE&gt;hdfs dfs -du -h /&lt;/PRE&gt;</description>
      <pubDate>Thu, 24 Nov 2016 16:02:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120557#M47148</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2016-11-24T16:02:35Z</dc:date>
    </item>
    <item>
      <title>Re: Regarding "http://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-server-log-data/"</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120558#M47149</link>
      <description>&lt;OL&gt;
&lt;LI&gt;Running $ hdfs dfsadmin -report, I get the following:&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;16/11/24 08:12:07 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.getStats over null. Not retrying because try once and fail.
java.net.ConnectException: Call From sandbox.hortonworks.com/172.17.0.2 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  &lt;A href="http://wiki.apache.org/hadoop/ConnectionRefused" target="_blank"&gt;http://wiki.apache.org/hadoop/ConnectionRefused&lt;/A&gt;
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
        at org.apache.hadoop.ipc.Client.call(Client.java:1496)
        at org.apache.hadoop.ipc.Client.call(Client.java:1396)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at com.sun.proxy.$Proxy10.getFsStats(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getStats(ClientNamenodeProtocolTranslatorPB.java:657)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
        at com.sun.proxy.$Proxy11.getStats(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.callGetStats(DFSClient.java:2535)
        at org.apache.hadoop.hdfs.DFSClient.getDiskStatus(DFSClient.java:2545)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getStatus(DistributedFileSystem.java:1231)
        at org.apache.hadoop.fs.FileSystem.getStatus(FileSystem.java:2335)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.report(DFSAdmin.java:457)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1914)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2107)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
        at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
        at org.apache.hadoop.ipc.Client.call(Client.java:1449)
        ... 21 more
report: Call From sandbox.hortonworks.com/172.17.0.2 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  &lt;A href="http://wiki.apache.org/hadoop/ConnectionRefused" target="_blank"&gt;http://wiki.apache.org/hadoop/ConnectionRefused&lt;/A&gt;
[hdfs@sandbox ~]$&lt;/P&gt;</description>
      <pubDate>Thu, 24 Nov 2016 22:02:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Regarding-quot-http-hortonworks-com-hadoop-tutorial-how-to/m-p/120558#M47149</guid>
      <dc:creator>bibhas_burman</dc:creator>
      <dc:date>2016-11-24T22:02:26Z</dc:date>
    </item>
  </channel>
</rss>