<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Running &quot;terasort&quot; on brand new CDH 5.6.0 Cluster is very very slow in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/363166#M24295</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;A href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/14059" target="_blank"&gt;ozzielabrat,&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am also facing the same issue. What is the solution for it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
    <pubDate>Mon, 06 Feb 2023 13:52:15 GMT</pubDate>
    <dc:creator>Sam5986</dc:creator>
    <dc:date>2023-02-06T13:52:15Z</dc:date>
    <item>
      <title>Running "terasort" on brand new CDH 5.6.0 Cluster is very very slow</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/39311#M24292</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was wondering if anyone could assist with a question relating to very slow performance of "terasort" on a brand new CDH 5.6.0 cluster?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) &amp;nbsp;Running "teragen" takes about 15 minutes for a 100GB file:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;$ time hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar teragen 1000000000 /benchmarks/teragen-100gb-test-3&lt;/PRE&gt;&lt;DIV class="line number21 index20 alt2"&gt;&lt;PRE&gt;&amp;nbsp;&lt;/PRE&gt;&lt;PRE&gt;      16/04/04 08:05:13 INFO mapreduce.Job: Counters: 24&lt;BR /&gt;       File System Counters&lt;BR /&gt;       FILE: Number of bytes read=276326&lt;BR /&gt;       FILE: Number of bytes written=550410&lt;BR /&gt;       FILE: Number of read operations=0&lt;BR /&gt;       FILE: Number of large read operations=0&lt;BR /&gt;       FILE: Number of write operations=0&lt;BR /&gt;       HDFS: Number of bytes read=0&lt;BR /&gt;       HDFS: Number of bytes written=100000000000&lt;BR /&gt;       HDFS: Number of read operations=4&lt;BR /&gt;       HDFS: Number of large read operations=0&lt;BR /&gt;       HDFS: Number of write operations=3&lt;BR /&gt;       Map-Reduce Framework&lt;BR /&gt;       Map input records=1000000000&lt;BR /&gt;       Map output records=1000000000&lt;BR /&gt;       Input split bytes=83&lt;BR /&gt;       Spilled Records=0&lt;BR /&gt;       Failed Shuffles=0&lt;BR /&gt;       Merged Map outputs=0&lt;BR /&gt;       GC time elapsed (ms)=8211&lt;BR /&gt;       CPU time spent (ms)=0&lt;BR /&gt;       Physical memory (bytes) snapshot=0&lt;BR /&gt;       Virtual memory (bytes) snapshot=0&lt;BR /&gt;       Total committed heap usage (bytes)=261619712&lt;BR /&gt;       org.apache.hadoop.examples.terasort.TeraGen$Counters&lt;BR /&gt;       CHECKSUM=2147523228284173905&lt;BR /&gt;       File Input Format Counters &lt;BR /&gt;      
 Bytes Read=0&lt;BR /&gt;       File Output Format Counters &lt;BR /&gt;       Bytes Written=100000000000&lt;/PRE&gt;&lt;PRE&gt;      real 15m 0.469s&lt;BR /&gt;      user 17m 35.634s&lt;BR /&gt;      sys 0m 46.904s&lt;/PRE&gt;&lt;/DIV&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) &amp;nbsp;Running "terasort" takes 85+ minutes:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;There are lots of similar looping log file entries, as per below; I hit "&amp;lt;ctrl&amp;gt; + c" to kill the terasort.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Has anyone experienced similar slowness on CDH 5.6.0?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Perhaps there are some MapReduce or YARN parameters I can look at that might speed things up?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;$ time hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar terasort /benchmarks/teragen-100gb-test-3 /benchmarks/terasort&lt;/P&gt;&lt;P&gt;....&lt;/P&gt;&lt;P&gt;....&lt;/P&gt;&lt;P&gt;16/04/04 09:32:48 INFO mapred.MapTask: Starting flush of map output&lt;BR /&gt;16/04/04 09:32:48 INFO mapred.MapTask: Spilling map output&lt;BR /&gt;16/04/04 09:32:48 INFO mapred.MapTask: bufstart = 75355282; bufend = 34888038; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:48 INFO mapred.MapTask: kvstart = 18838816(75355264); kvend = 16313708(65254832); length = 2525109/6553600&lt;BR /&gt;16/04/04 09:32:49 INFO mapred.MapTask: Finished spill 1&lt;BR /&gt;16/04/04 09:32:49 INFO mapred.Merger: Merging 2 sorted segments&lt;BR /&gt;16/04/04 09:32:49 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 139586394 bytes&lt;BR /&gt;16/04/04 09:32:51 INFO mapred.LocalJobRunner: map &amp;gt; sort &amp;gt;&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.Task: Task:attempt_local1770220147_0001_m_000742_0 is done. 
And is in the process of committing&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.LocalJobRunner: map &amp;gt; sort&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.Task: Task 'attempt_local1770220147_0001_m_000742_0' done.&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.LocalJobRunner: Finishing task: attempt_local1770220147_0001_m_000742_0&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.LocalJobRunner: Starting task: attempt_local1770220147_0001_m_000743_0&lt;BR /&gt;16/04/04 09:32:52 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: Processing split: hdfs://{obfuscated}/benchmarks/teragen-100gb-test-3/part-m-00000:939524096+134217728&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: soft limit at 83886080&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: kvstart = 26214396; length = 6553600&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: Spilling map output&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: bufstart = 0; bufend = 72511698; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 23370804(93483216); length = 2843593/6553600&lt;BR /&gt;16/04/04 09:32:52 INFO mapred.MapTask: (EQUATOR) 75355282 kvi 18838816(75355264)&lt;BR /&gt;16/04/04 09:32:54 INFO mapred.MapTask: Finished spill 0&lt;BR /&gt;16/04/04 09:32:54 INFO mapred.MapTask: (RESET) equator 75355282 kv 18838816(75355264) kvi 18127932(72511728)&lt;BR /&gt;16/04/04 09:32:55 INFO mapred.LocalJobRunner:&lt;BR /&gt;16/04/04 09:32:55 
INFO mapred.MapTask: Starting flush of map output&lt;BR /&gt;16/04/04 09:32:55 INFO mapred.MapTask: Spilling map output&lt;BR /&gt;16/04/04 09:32:55 INFO mapred.MapTask: bufstart = 75355282; bufend = 34888140; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:55 INFO mapred.MapTask: kvstart = 18838816(75355264); kvend = 16313704(65254816); length = 2525113/6553600&lt;BR /&gt;16/04/04 09:32:56 INFO mapred.MapTask: Finished spill 1&lt;BR /&gt;16/04/04 09:32:56 INFO mapred.Merger: Merging 2 sorted segments&lt;BR /&gt;16/04/04 09:32:56 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 139586498 bytes&lt;BR /&gt;16/04/04 09:32:58 INFO mapred.LocalJobRunner: map &amp;gt; sort &amp;gt;&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.Task: Task:attempt_local1770220147_0001_m_000743_0 is done. And is in the process of committing&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.LocalJobRunner: map &amp;gt; sort&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.Task: Task 'attempt_local1770220147_0001_m_000743_0' done.&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local1770220147_0001_m_000743_0&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.LocalJobRunner: Starting task: attempt_local1770220147_0001_m_000744_0&lt;BR /&gt;16/04/04 09:32:59 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: Processing split: hdfs://{obfuscated}/benchmarks/teragen-100gb-test-3/part-m-00000:805306368+134217728&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: soft limit at 83886080&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: kvstart = 26214396; 
length = 6553600&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: Spilling map output&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: bufstart = 0; bufend = 72511698; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 23370804(93483216); length = 2843593/6553600&lt;BR /&gt;16/04/04 09:32:59 INFO mapred.MapTask: (EQUATOR) 75355282 kvi 18838816(75355264)&lt;BR /&gt;16/04/04 09:33:01 INFO mapred.MapTask: Finished spill 0&lt;BR /&gt;16/04/04 09:33:01 INFO mapred.MapTask: (RESET) equator 75355282 kv 18838816(75355264) kvi 18127932(72511728)&lt;BR /&gt;16/04/04 09:33:02 INFO mapred.LocalJobRunner:&lt;BR /&gt;16/04/04 09:33:02 INFO mapred.MapTask: Starting flush of map output&lt;BR /&gt;16/04/04 09:33:02 INFO mapred.MapTask: Spilling map output&lt;BR /&gt;16/04/04 09:33:02 INFO mapred.MapTask: bufstart = 75355282; bufend = 34888038; bufvoid = 104857600&lt;BR /&gt;16/04/04 09:33:02 INFO mapred.MapTask: kvstart = 18838816(75355264); kvend = 16313708(65254832); length = 2525109/6553600&lt;BR /&gt;16/04/04 09:33:03 INFO mapred.MapTask: Finished spill 1&lt;BR /&gt;16/04/04 09:33:03 INFO mapred.Merger: Merging 2 sorted segments&lt;BR /&gt;16/04/04 09:33:03 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 139586394 bytes&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: map &amp;gt; sort &amp;gt;&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.Task: Task:attempt_local1770220147_0001_m_000744_0 is done. 
And is in the process of committing&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: map &amp;gt; sort&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.Task: Task 'attempt_local1770220147_0001_m_000744_0' done.&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: Finishing task: attempt_local1770220147_0001_m_000744_0&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: map task executor complete.&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: Waiting for reduce tasks&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.LocalJobRunner: Starting task: attempt_local1770220147_0001_r_000000_0&lt;BR /&gt;16/04/04 09:33:05 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]&lt;BR /&gt;16/04/04 09:33:05 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@3d467064&lt;BR /&gt;16/04/04 09:33:05 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=180197776, maxSingleShuffleLimit=45049444, mergeThreshold=118930536, ioSortFactor=10, memToMemMergeOutputsThreshold=10&lt;BR /&gt;16/04/04 09:33:05 INFO reduce.EventFetcher: attempt_local1770220147_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events&lt;BR /&gt;16/04/04 09:33:05 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000315_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:05 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000315_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:07 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000315_0&lt;BR /&gt;16/04/04 09:33:07 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000143_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR 
/&gt;16/04/04 09:33:07 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000143_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000143_0&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000535_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000535_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000535_0&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000704_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:08 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000704_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000704_0&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000142_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000142_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000142_0&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000534_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit 
(45049444)&lt;BR /&gt;16/04/04 09:33:09 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000534_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000534_0&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000705_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000705_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000705_0&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000316_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:10 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000316_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000316_0&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000702_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000702_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000702_0&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000317_0: Shuffling to disk since 139586410 is greater than 
maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:11 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000317_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:11 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000702_0 succeeded at 133119.98 MB/s) Aggregated copy rate(9 of 745 at 1198080.12 MB/s)&lt;BR /&gt;16/04/04 09:33:12 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000317_0&lt;BR /&gt;16/04/04 09:33:12 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000145_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:12 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000145_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000145_0&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000703_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000703_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000703_0&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000533_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:13 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000533_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.OnDiskMapOutput: Read 
139586518 bytes from map-output for attempt_local1770220147_0001_m_000533_0&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000144_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000144_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000144_0&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000538_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000538_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:14 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000144_0 succeeded at 133119.98 MB/s) Aggregated copy rate(14 of 745 at 1863680.00 MB/s)&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000538_0&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000146_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000146_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:15 INFO mapreduce.Job: map 100% reduce 1%&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000146_0&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000318_0: Shuffling to disk since 139586514 is 
greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:15 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000318_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:16 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000318_0&lt;BR /&gt;16/04/04 09:33:16 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000701_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:16 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000701_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:17 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000701_0&lt;BR /&gt;16/04/04 09:33:17 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000319_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:17 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000319_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:17 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000701_0 succeeded at 133119.98 MB/s) Aggregated copy rate(18 of 745 at 2396160.25 MB/s)&lt;BR /&gt;16/04/04 09:33:18 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000319_0&lt;BR /&gt;16/04/04 09:33:18 INFO reduce.MergeThread: OnDiskMerger - Thread to merge on-disk map-outputs: Starting merge with 10 segments, while ignoring 9 segments&lt;BR /&gt;16/04/04 09:33:18 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum&lt;BR /&gt;16/04/04 09:33:18 INFO reduce.MergeManagerImpl: OnDiskMerger: We have 10 map outputs on disk. 
Triggering merge...&lt;BR /&gt;16/04/04 09:33:18 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000699_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:18 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000699_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:18 INFO mapred.Merger: Merging 10 sorted segments&lt;BR /&gt;16/04/04 09:33:18 INFO mapred.Merger: Down to the last merge-pass, with 10 segments left of total size: 1395863970 bytes&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000699_0&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000148_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000148_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000148_0&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000320_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:19 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000320_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:20 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000148_0 succeeded at 133119.98 MB/s) Aggregated copy rate(21 of 745 at 2795520.00 MB/s)&lt;BR /&gt;16/04/04 09:33:20 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000320_0&lt;BR /&gt;16/04/04 09:33:20 INFO 
reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000537_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000537_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:21 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000537_0&lt;BR /&gt;16/04/04 09:33:21 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000147_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:21 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000147_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000147_0&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000536_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000536_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000536_0&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000700_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000700_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000700_0&lt;BR 
/&gt;16/04/04 09:33:22 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000321_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:22 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000321_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:23 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000700_0 succeeded at 133120.08 MB/s) Aggregated copy rate(26 of 745 at 3461120.00 MB/s)&lt;BR /&gt;16/04/04 09:33:24 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000321_0&lt;BR /&gt;16/04/04 09:33:24 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000309_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:24 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000309_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000309_0&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000529_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000529_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000529_0&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.MergeThread: OnDiskMerger - Thread to merge on-disk map-outputs: Starting merge with 10 segments, while ignoring 9 segments&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000149_0: Shuffling to disk since 
139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:25 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000149_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000149_0&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000528_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000528_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000528_0&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000698_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:26 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000698_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:26 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000528_0 succeeded at 133119.98 MB/s) Aggregated copy rate(31 of 745 at 4126720.25 MB/s)&lt;BR /&gt;16/04/04 09:33:27 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000698_0&lt;BR /&gt;16/04/04 09:33:27 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000696_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:27 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000696_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:27 INFO 
reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000696_0&lt;BR /&gt;16/04/04 09:33:27 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000151_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:27 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000151_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000151_0&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000527_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000527_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000527_0&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000310_0: Shuffling to disk since 139586410 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:28 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000310_0 decomp: 139586410 len: 139586414 to DISK&lt;BR /&gt;16/04/04 09:33:29 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000527_0 succeeded at 133119.98 MB/s) Aggregated copy rate(35 of 745 at 4659200.00 MB/s)&lt;BR /&gt;16/04/04 09:33:30 INFO mapreduce.Job: map 100% reduce 2%&lt;BR /&gt;16/04/04 09:33:30 INFO reduce.OnDiskMapOutput: Read 139586414 bytes from map-output for attempt_local1770220147_0001_m_000310_0&lt;BR /&gt;16/04/04 09:33:30 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000697_0: Shuffling 
to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:30 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000697_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:31 INFO reduce.OnDiskMapOutput: Read 139586518 bytes from map-output for attempt_local1770220147_0001_m_000697_0&lt;BR /&gt;16/04/04 09:33:31 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_m_000311_0: Shuffling to disk since 139586514 is greater than maxSingleShuffleLimit (45049444)&lt;BR /&gt;16/04/04 09:33:31 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1770220147_0001_m_000311_0 decomp: 139586514 len: 139586518 to DISK&lt;BR /&gt;16/04/04 09:33:32 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000697_0 succeeded at 133120.08 MB/s) Aggregated copy rate(37 of 745 at 4925440.00 MB/s)&lt;BR /&gt;16/04/04 09:33:35 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000697_0 succeeded at 133120.08 MB/s) Aggregated copy rate(37 of 745 at 4925440.00 MB/s)&lt;BR /&gt;16/04/04 09:33:38 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000697_0 succeeded at 133120.08 MB/s) Aggregated copy rate(37 of 745 at 4925440.00 MB/s)&lt;BR /&gt;16/04/04 09:33:39 INFO reduce.MergeManagerImpl: attempt_local1770220147_0001_r_000000_0 Finished merging 10 map output files on disk of total-size 1406769340. Local output file is /tmp/hadoop-hdfs/mapred/local/localRunner/hdfs/jobcache/job_local1770220147_0001/attempt_local1770220147_0001_r_000000_0/tmp/hadoop-hdfs/mapred/local/localRunner/hdfs/jobcache/job_local1770220147_0001/attempt_local1770220147_0001_r_000000_0/output/map_142.out.merged of size 1395864086&lt;BR /&gt;16/04/04 09:33:39 INFO reduce.MergeManagerImpl: OnDiskMerger: We have 10 map outputs on disk. 
Triggering merge...&lt;BR /&gt;16/04/04 09:33:39 INFO mapred.Merger: Merging 10 sorted segments&lt;BR /&gt;16/04/04 09:33:39 INFO mapred.Merger: Down to the last merge-pass, with 10 segments left of total size: 1395863970 bytes&lt;BR /&gt;16/04/04 09:33:41 INFO mapred.LocalJobRunner: reduce &amp;gt; copy task(attempt_local1770220147_0001_m_000697_0 succeeded at 133120.08 MB/s) Aggregated copy rate(37 of 745 at 4925440.00 MB/s)&lt;BR /&gt;^C&lt;BR /&gt;real 86m 6.353s&lt;BR /&gt;user 126m 44.561s&lt;BR /&gt;sys 7m 38.514s&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Damion.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:12:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/39311#M24292</guid>
      <dc:creator>ozzielabrat</dc:creator>
      <dc:date>2022-09-16T10:12:11Z</dc:date>
    </item>
    <item>
      <title>Re: Running "terasort" on brand new CDH 5.6.0 Cluster is very very slow</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/39341#M24293</link>
      <description>&lt;P&gt;I've resolved this issue... thanks to anyone who may have been interested!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Damion.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Apr 2016 04:25:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/39341#M24293</guid>
      <dc:creator>ozzielabrat</dc:creator>
      <dc:date>2016-04-05T04:25:49Z</dc:date>
    </item>
    <item>
      <title>Re: Running "terasort" on brand new CDH 5.6.0 Cluster is very very slow</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/48612#M24294</link>
      <description>Hi Damion, can you tell us how you were able to solve this issue?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Suri</description>
      <pubDate>Sat, 17 Dec 2016 19:10:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/48612#M24294</guid>
      <dc:creator>SuriNuthalapati</dc:creator>
      <dc:date>2016-12-17T19:10:25Z</dc:date>
    </item>
    <item>
      <title>Re: Running "terasort" on brand new CDH 5.6.0 Cluster is very very slow</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/363166#M24295</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;A href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/14059" target="_blank"&gt;ozzielabrat,&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am also facing the same issue. What is the solution for it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 06 Feb 2023 13:52:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Running-quot-terasort-quot-on-brand-new-CDH-5-6-0-Cluster-is/m-p/363166#M24295</guid>
      <dc:creator>Sam5986</dc:creator>
      <dc:date>2023-02-06T13:52:15Z</dc:date>
    </item>
  </channel>
</rss>

