<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: select count query taking more time in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/select-count-query-taking-more-time/m-p/225397#M187258</link>
    <description>&lt;P&gt;@sindhu&lt;/P&gt;&lt;P&gt;Can you please help with this?&lt;/P&gt;&lt;P&gt;I tried to run ANALYZE; it took around 217 seconds. There are around 38 lakh (3.8 million) records.&lt;/P&gt;&lt;PRE&gt;analyze table schema.table compute statistics for columns;
Query ID = ec2-user_20171218003632_b89c66b2-2484-41b3-8d11-d0559e2b3ff7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=&amp;lt;number&amp;gt;
In order to set a constant number of reducers:
  set mapreduce.job.reduces=&amp;lt;number&amp;gt;
Starting Job = job_1513235262783_0081, Tracking URL = &lt;A href="http://ip-192-168-180-54.ca-central-1.compute.internal:8088/proxy/application_1513235262783_0081/" target="_blank"&gt;http://ip-192-168-180-54.ca-central-1.compute.internal:8088/proxy/application_1513235262783_0081/&lt;/A&gt;
Kill Command = /usr/hdp/2.6.2.14-5/hadoop/bin/hadoop job  -kill job_1513235262783_0081
Hadoop job information for Stage-0: number of mappers: 3; number of reducers: 1
2017-12-18 00:36:41,824 Stage-0 map = 0%,  reduce = 0%
2017-12-18 00:37:42,572 Stage-0 map = 0%,  reduce = 0%, Cumulative CPU 189.89 sec
2017-12-18 00:38:32,943 Stage-0 map = 33%,  reduce = 0%, Cumulative CPU 352.13 sec
2017-12-18 00:38:37,056 Stage-0 map = 67%,  reduce = 0%, Cumulative CPU 359.55 sec
2017-12-18 00:38:43,223 Stage-0 map = 67%,  reduce = 22%, Cumulative CPU 366.29 sec
2017-12-18 00:39:43,898 Stage-0 map = 67%,  reduce = 22%, Cumulative CPU 428.53 sec
2017-12-18 00:40:06,476 Stage-0 map = 100%,  reduce = 22%, Cumulative CPU 454.93 sec
2017-12-18 00:40:07,503 Stage-0 map = 100%,  reduce = 67%, Cumulative CPU 455.44 sec
2017-12-18 00:40:08,526 Stage-0 map = 100%,  reduce = 100%, Cumulative CPU 457.46 sec
MapReduce Total cumulative CPU time: 7 minutes 37 seconds 460 msec
Ended Job = job_1513235262783_0081
MapReduce Jobs Launched:
Stage-Stage-0: Map: 3  Reduce: 1   Cumulative CPU: 457.46 sec   HDFS Read: 688303 HDFS Write: 2067 SUCCESS
Total MapReduce CPU Time Spent: 7 minutes 37 seconds 460 msec
OK
Time taken: 216.908 seconds&lt;/PRE&gt;&lt;P&gt;I tried to run the count query after the analyze; it still takes about 5 minutes. Can you please help to tune Hive so it works faster? We use the MapReduce engine.&lt;/P&gt;&lt;PRE&gt;Query ID = ec2-user_20171218004512_6bef2ddb-d981-42f8-b2e5-c42a9ad80bfd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=&amp;lt;number&amp;gt;
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=&amp;lt;number&amp;gt;
In order to set a constant number of reducers:
  set mapreduce.job.reduces=&amp;lt;number&amp;gt;
Starting Job = job_1513235262783_0082, Tracking URL = &lt;A href="http://ip-192-168-180-54.ca-central-1.compute.internal:8088/proxy/application_1513235262783_0082/" target="_blank"&gt;http://ip-192-168-180-54.ca-central-1.compute.internal:8088/proxy/application_1513235262783_0082/&lt;/A&gt;
Kill Command = /usr/hdp/2.6.2.14-5/hadoop/bin/hadoop job  -kill job_1513235262783_0082
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2017-12-18 00:45:22,022 Stage-1 map = 0%,  reduce = 0%
2017-12-18 00:46:22,804 Stage-1 map = 0%,  reduce = 0%, Cumulative CPU 189.83 sec
2017-12-18 00:46:43,363 Stage-1 map = 33%,  reduce = 0%, Cumulative CPU 251.69 sec
2017-12-18 00:46:46,436 Stage-1 map = 67%,  reduce = 0%, Cumulative CPU 259.47 sec
2017-12-18 00:46:54,666 Stage-1 map = 67%,  reduce = 22%, Cumulative CPU 269.17 sec
2017-12-18 00:47:49,101 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 328.79 sec
MapReduce Total cumulative CPU time: 5 minutes 28 seconds 790 msec
Ended Job = job_1513235262783_0082
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3  Reduce: 1   Cumulative CPU: 328.79 sec   HDFS Read: 330169 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 5 minutes 28 seconds 790 msec
OK
3778700
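
-- Possible tuning to try (a suggestion, not part of the log above): with
-- basic table statistics in place, Hive can answer SELECT COUNT(*) directly
-- from the metastore instead of launching a MapReduce job.
set hive.stats.autogather=true;
set hive.compute.query.using.stats=true;
-- "compute statistics for columns" gathers only column stats; the basic
-- (row-count) form must also be run for stats-based answering to kick in:
analyze table schema.table compute statistics;
select count(*) from schema.table;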


&lt;/PRE&gt;</description>
    <pubDate>Mon, 18 Dec 2017 13:44:25 GMT</pubDate>
    <dc:creator>ashneesharma88</dc:creator>
    <dc:date>2017-12-18T13:44:25Z</dc:date>
  </channel>
</rss>

