Number of reducers not working in Hive
Labels:
- Apache Hive
Expert Contributor
Created 10-13-2018 05:13 AM
I have set the number of reducers to 2, but Hive is still executing with 1. Can anybody help with this?
set hive.exec.reducers.max=2
Hive (default)> insert overwrite directory '/input123456'
              > select count(*) from partitioned_user;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201810122125_0003, Tracking URL = http://ubuntu:50030/jobdetails.jsp?jobid=job_201810122125_0003
Kill Command = /home/naresh/Work1/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201810122125_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-10-12 21:36:24,774 Stage-1 map = 0%, reduce = 0%
2018-10-12 21:36:32,825 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 4.12 sec
2018-10-12 21:36:41,919 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 4.12 sec
2018-10-12 21:36:42,926 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.38 sec
MapReduce Total cumulative CPU time: 6 seconds 380 msec
Ended Job = job_201810122125_0003
Moving data to: /input123456
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1  Cumulative CPU: 6.38 sec  HDFS Read: 354134  HDFS Write: 5  SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 380 msec
OK
_c0
Time taken: 37.199 seconds
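Note the line "Number of reduce tasks determined at compile time: 1" in the log: for a global count(*), Hive fixes the final-stage reducer count when it compiles the query, so hive.exec.reducers.max has nothing left to cap. A minimal sketch of where the settings do take effect, on a query with a grouping key (the country column is assumed for illustration; it is not in the original post):

    -- hive.exec.reducers.max only caps the count Hive estimates from
    -- input size (via hive.exec.reducers.bytes.per.reducer); it never
    -- raises the count, and it cannot override a stage whose
    -- parallelism is fixed at compile time.
    set hive.exec.reducers.max=2;

    -- On this Hadoop 1.x setup, mapred.reduce.tasks forces a constant
    -- reducer count for stages that are not fixed at compile time
    -- (a global count(*) stays at 1 either way).
    set mapred.reduce.tasks=2;

    -- A GROUP BY query can genuinely use both reducers, because rows
    -- are partitioned across reducers by the grouping key.
    -- The column name "country" is hypothetical:
    select country, count(*)
    from partitioned_user
    group by country;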
1 REPLY
Expert Contributor
Created 10-13-2018 09:00 AM
In a count(*) query, the final aggregation vertex must always be a single task: it fetches the partial counts from all the mappers and sums them up. Because this final merge cannot be split across reducers, Hive fixes the reducer count for that stage at 1 regardless of hive.exec.reducers.max.
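You can confirm this from the plan rather than the run-time log by asking Hive to explain the query; a minimal sketch, assuming the same partitioned_user table from the question:

    -- The plan shows a map-side Group By Operator computing a partial
    -- count(), then a Reduce Output Operator feeding a single reducer
    -- that merges the per-mapper partial counts. That single merge
    -- task is why the log reports "number of reducers: 1" no matter
    -- what hive.exec.reducers.max is set to.
    explain select count(*) from partitioned_user;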
If my response helped with your query, please accept the answer; it might help others in the community.
