Member since: 09-17-2014
Posts: 93
Kudos Received: 5
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 26041 | 03-02-2015 12:47 PM
 | 1349 | 02-03-2015 01:24 PM
 | 2956 | 12-12-2014 08:19 AM
 | 2700 | 11-07-2014 01:55 PM
 | 1526 | 10-13-2014 06:47 PM
11-14-2015
01:16 PM
Hi, I am running my query on CDH 5.2, where we are processing TBs of data on a 182-node cluster, when I got this error. After the error the job remained stuck for 16 hours:
[communication thread] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Error reading the stream java.io.IOException: No such process
Any solution?
08-10-2015
06:00 PM
Hi, after upgrading Cloudera from CDH 4.7 to CDH 5.3, I am getting warnings in the log files and the job is taking too long to run. The log file says:
Logs for container_1439220480088_1797_01_098459
2015-08-10 17:09:01,493 INFO [main] ExecReducer: ExecReducer: processing 8000000 rows: used memory = 1615166528
2015-08-10 17:09:01,729 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7228000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:02,829 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7328000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:04,257 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7428000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:05,321 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7528000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:06,375 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7628000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:07,528 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7728000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:08,608 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7828000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:10,014 INFO [main] org.apache.hadoop.hive.ql.exec.CommonJoinOperator: table 0 has 7928000 rows for join key [GTS1019783444, 96916743]
2015-08-10 17:09:10,370 INFO [main] org.apache.hadoop.mapred.FileInputFormat: Total input paths to process : 1
2015-08-10 17:22:37,994 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:22:37,995 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS hdfs://burrito/tmp/hive-hive/hive_2015-08-10_16-27-13_378_7079430883879413208-2563/_tmp.-mr-10011/000210_0
2015-08-10 17:22:37,995 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS hdfs://burrito/tmp/hive-hive/hive_2015-08-10_16-27-13_378_7079430883879413208-2563/_task_tmp.-mr-10011/_tmp.000210_0
2015-08-10 17:22:37,995 INFO [main] org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS hdfs://burrito/tmp/hive-hive/hive_2015-08-10_16-27-13_378_7079430883879413208-2563/_tmp.-mr-10011/000210_0
2015-08-10 17:22:38,007 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ll06208 (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
2015-08-10 17:22:38,008 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
2015-08-10 17:22:38,008 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ll06208 (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
2015-08-10 17:22:39,550 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:24:26,047 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:24:27,314 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:26:13,775 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:26:14,980 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:26:17,268 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:26:18,546 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:28:04,456 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:28:05,591 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:29:47,636 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:29:48,862 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:31:38,030 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2219689
2015-08-10 17:31:39,202 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1997720
2015-08-10 17:31:41,646 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:31:42,975 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:33:27,669 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:33:28,780 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:35:22,326 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:35:23,513 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:35:25,843 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:35:27,364 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:37:09,083 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:37:10,221 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:38:52,160 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:38:53,297 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:38:56,067 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:38:57,276 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:40:37,203 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:40:38,284 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:42:21,486 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:42:22,619 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:42:25,450 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:42:26,592 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:44:05,253 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:44:06,300 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:44:09,373 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:44:10,449 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:45:09,747 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2221411
2015-08-10 17:45:10,928 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1999269
2015-08-10 17:45:13,343 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2217969
2015-08-10 17:45:14,892 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1996172
2015-08-10 17:45:36,757 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2212827
2015-08-10 17:45:37,964 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1991544
2015-08-10 17:45:40,952 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2209412
2015-08-10 17:45:42,168 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1988470
2015-08-10 17:47:39,324 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2206008
2015-08-10 17:47:40,514 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1985407
2015-08-10 17:47:42,937 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2202614
2015-08-10 17:47:44,282 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1982352
2015-08-10 17:49:37,457 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2199231
2015-08-10 17:49:38,568 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1979307
2015-08-10 17:49:40,906 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2193000
2015-08-10 17:49:42,289 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1973700
2015-08-10 17:51:34,530 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2194175
2015-08-10 17:51:35,589 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1974757
2015-08-10 17:51:35,698 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1985000
2015-08-10 17:51:36,759 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1786500
2015-08-10 17:51:36,765 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1787000
2015-08-10 17:51:37,733 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1608300
2015-08-10 17:51:37,741 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1609000
2015-08-10 17:51:38,725 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1448100
2015-08-10 17:53:35,455 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1857000
2015-08-10 17:53:36,462 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1671300
2015-08-10 17:53:36,470 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1672000
2015-08-10 17:53:37,324 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1504800
2015-08-10 17:53:37,326 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1505000
2015-08-10 17:53:38,216 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1354500
2015-08-10 17:53:38,221 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1355000
2015-08-10 17:53:39,075 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1219500
2015-08-10 17:55:32,230 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1438000
2015-08-10 17:55:32,896 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1294200
2015-08-10 17:55:32,905 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1295000
2015-08-10 17:55:33,706 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1165500
2015-08-10 18:03:33,839 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2180806
2015-08-10 18:03:35,043 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1962725
2015-08-10 18:03:37,302 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2179146
2015-08-10 18:03:38,807 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1961231
2015-08-10 18:05:35,500 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2177489
2015-08-10 18:05:37,084 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1959740
2015-08-10 18:07:33,699 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2174182
2015-08-10 18:07:34,891 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1956763
2015-08-10 18:07:37,331 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2174182
2015-08-10 18:07:38,553 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1956763
2015-08-10 18:09:35,064 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2172533
2015-08-10 18:09:36,202 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1955279
2015-08-10 18:11:33,227 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2170886
2015-08-10 18:11:34,272 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1953797
2015-08-10 18:11:36,666 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2169241
2015-08-10 18:11:37,788 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1952316
2015-08-10 18:12:19,845 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2167599
2015-08-10 18:12:20,881 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1950839
2015-08-10 18:13:12,302 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2169241
2015-08-10 18:13:13,363 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1952316
2015-08-10 18:13:49,780 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2170886
2015-08-10 18:13:50,770 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1953797
2015-08-10 18:15:03,609 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2169241
2015-08-10 18:15:04,647 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1952316
2015-08-10 18:16:18,790 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2169241
2015-08-10 18:16:19,926 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1952316
2015-08-10 18:17:20,496 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2169241
2015-08-10 18:17:21,699 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1952316
2015-08-10 18:17:24,235 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2167599
2015-08-10 18:17:25,437 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1950839
2015-08-10 18:17:26,479 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2022000
2015-08-10 18:17:27,777 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1819800
2015-08-10 18:18:46,974 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2165959
2015-08-10 18:18:48,161 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1949363
2015-08-10 18:21:05,432 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2165959
2015-08-10 18:21:06,665 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1949363
2015-08-10 18:21:09,047 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2164322
2015-08-10 18:21:10,340 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1947889
2015-08-10 18:23:09,278 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2162688
2015-08-10 18:23:10,475 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1946419
2015-08-10 18:23:13,043 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2162688
2015-08-10 18:23:14,254 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1946419
2015-08-10 18:25:11,939 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2161055
2015-08-10 18:25:13,071 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1944949
2015-08-10 18:25:15,533 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2161055
2015-08-10 18:25:16,923 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1944949
2015-08-10 18:27:11,191 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2159426
2015-08-10 18:27:12,204 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1943483
2015-08-10 18:27:14,747 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2159426
2015-08-10 18:27:15,740 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1943483
2015-08-10 18:29:07,332 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 18:29:08,375 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 18:29:11,100 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 18:29:12,178 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 18:29:14,875 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2153000
2015-08-10 18:29:16,131 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937700
2015-08-10 18:31:22,622 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 18:31:23,953 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 18:31:26,301 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 18:31:27,495 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 18:33:22,484 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:33:23,563 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:33:25,982 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:33:27,271 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:33:30,156 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:33:31,290 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:35:25,078 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:35:26,315 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:35:28,997 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:35:30,443 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:35:33,388 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:35:34,637 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:37:34,774 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:37:35,805 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:37:38,213 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:37:39,538 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:37:42,343 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 18:37:43,538 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 18:39:49,468 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 18:39:50,603 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 18:39:53,115 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 18:39:54,312 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 18:39:57,254 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:39:58,395 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:41:51,850 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:41:52,877 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:41:55,514 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:41:56,542 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:41:59,171 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:42:00,401 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:43:49,574 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 18:43:50,623 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 18:43:53,142 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 18:43:54,543 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 18:43:57,198 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 18:43:58,264 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 18:44:01,070 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 18:44:02,358 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 18:45:37,432 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 18:45:38,470 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 18:45:53,464 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 18:45:54,467 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 18:45:56,948 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 18:45:58,025 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 18:46:01,066 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:46:02,034 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:47:47,983 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:47:49,062 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:47:51,594 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 18:47:52,801 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 18:47:55,458 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 18:47:56,848 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 18:49:30,346 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 18:49:31,447 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 18:50:47,874 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:50:49,146 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:51:29,149 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:51:30,324 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:51:33,221 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 18:51:34,312 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 18:53:11,710 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:53:12,769 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:53:15,327 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:53:16,627 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:53:19,150 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 18:53:20,335 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 18:54:53,635 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:54:54,633 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:55:37,037 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:55:38,191 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:55:41,938 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:55:43,048 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:56:12,034 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 18:56:13,076 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 18:57:38,853 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 18:57:39,946 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 18:57:40,559 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1996000
2015-08-10 18:57:41,603 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1796400
2015-08-10 18:57:41,609 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1797000
2015-08-10 18:57:42,644 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1617300
2015-08-10 18:59:36,845 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 1974000
2015-08-10 18:59:38,391 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1776600
2015-08-10 19:02:15,053 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 19:02:16,042 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 19:02:34,882 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 19:02:36,601 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 19:02:39,317 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 19:02:40,432 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 19:03:59,537 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2157798
2015-08-10 19:04:00,566 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1942018
2015-08-10 19:04:02,996 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 19:04:04,059 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 19:05:51,908 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 19:05:52,926 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 19:05:55,441 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2156173
2015-08-10 19:05:56,477 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1940555
2015-08-10 19:07:49,107 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 19:07:50,192 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 19:07:52,746 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 19:07:53,800 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 19:09:56,338 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 19:09:57,334 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 19:09:59,971 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 19:10:01,028 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 19:11:54,096 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2154551
2015-08-10 19:11:55,288 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1939095
2015-08-10 19:11:57,835 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 19:11:59,107 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 19:12:02,007 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 19:12:03,236 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 19:13:54,021 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 19:13:55,153 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 19:13:57,500 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2152931
2015-08-10 19:13:58,584 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1937637
2015-08-10 19:14:01,367 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 19:14:02,439 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 19:14:05,399 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 19:14:06,443 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 19:14:09,182 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 19:14:10,396 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 19:14:13,048 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2151313
2015-08-10 19:14:14,204 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1936181
2015-08-10 19:14:17,036 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 19:14:18,102 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 19:14:20,777 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 19:14:22,042 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 19:14:24,710 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 19:14:25,792 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 19:14:28,705 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 19:14:29,755 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 19:14:32,429 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2149698
2015-08-10 19:14:33,501 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1934728
2015-08-10 19:14:36,282 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 19:14:37,356 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 19:14:40,239 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 19:14:41,407 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 19:14:44,078 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 19:14:45,146 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 19:14:47,884 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 19:14:48,955 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 19:14:51,565 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2148086
2015-08-10 19:14:52,964 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1933277
2015-08-10 19:14:55,532 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:14:56,613 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:14:59,524 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:15:00,597 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:15:03,238 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:15:04,526 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:15:07,295 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:15:08,366 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:15:11,184 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:15:12,216 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:15:14,946 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2146475
2015-08-10 19:15:16,036 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1931827
2015-08-10 19:15:18,718 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:19,784 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:22,606 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:23,851 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:26,496 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:27,593 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:30,543 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:31,605 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:34,226 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:35,326 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:38,224 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2144868
2015-08-10 19:15:39,337 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1930381
2015-08-10 19:15:42,025 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:15:43,202 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:15:46,033 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:15:47,136 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:15:50,001 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:15:51,097 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:15:53,753 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:15:54,829 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:15:57,631 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:15:58,836 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:16:01,475 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:16:02,554 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:16:05,295 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2143262
2015-08-10 19:16:06,413 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1928935
2015-08-10 19:16:09,026 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:10,179 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:12,917 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:13,988 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:16,622 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:17,839 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:20,505 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:21,571 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:24,272 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:25,290 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:27,817 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:29,070 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:31,859 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2141659
2015-08-10 19:16:32,903 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1927493
2015-08-10 19:16:35,517 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:36,586 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:39,335 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:40,393 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:43,209 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:44,288 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:46,966 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:48,064 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:50,662 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:51,962 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:54,559 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:55,656 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:16:58,488 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:16:59,671 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:17:02,108 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2140058
2015-08-10 19:17:03,178 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Table flushed: new size = 1926052
2015-08-10 19:17:05,828 INFO [main] org.apache.hadoop.hive.ql.exec.GroupByOperator: Hash Tbl flush: #hash table = 2138460
07-20-2015
12:12 PM
Hi, this is a Spotfire issue. I had numerous calls with the TIBCO team, and they said it is an issue on their side. Tableau works fine in this scenario.
03-19-2015
03:26 PM
I set up Flume streaming for Twitter, and to view the table I added the Hive SerDe as seen in the picture, but it gave me an error. Is this an issue with Sentry?
03-02-2015
12:47 PM
1 Kudo
I was able to solve the problem: instead of keeping the file in /user/<home-directory>, I put the script file in /user/<home-directory>/oozie-oozi and it worked.
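For anyone hitting the same thing, a minimal sketch of the workaround described above; the script name test.sh and the use of $USER for the HDFS home directory are assumptions for illustration (the original post only gives /user/<home-directory>):

```bash
# Sketch only: place the shell script in the oozie-oozi subdirectory of the
# HDFS home directory instead of the home directory itself, then rerun the
# workflow from Hue. Script name and home-directory path are assumptions.
SCRIPT=test.sh
HDFS_HOME="/user/${USER}"   # the post writes this as /user/<home-directory>

hdfs dfs -mkdir -p "${HDFS_HOME}/oozie-oozi"
hdfs dfs -put -f "${SCRIPT}" "${HDFS_HOME}/oozie-oozi/"
hdfs dfs -ls "${HDFS_HOME}/oozie-oozi"
```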
03-02-2015
11:52 AM
Hi Romain, after enabling ACLs in HDFS the permissions worked as expected. When ACLs were not enabled, I was still able to create folders in the directory even though it had 700 (rwx------) access.
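For reference, a minimal sketch of enabling and using HDFS ACLs; the property goes into hdfs-site.xml (or the corresponding Cloudera Manager HDFS setting), and the paths and user names below are made-up examples:

```bash
# ACL support must be switched on before setfacl/getfacl will work:
#   dfs.namenode.acls.enabled = true   (hdfs-site.xml; restart HDFS afterwards)

# Restrict a directory to its owner, then inspect its permissions and ACLs.
hdfs dfs -mkdir -p /user/alice/private     # 'alice' and 'bob' are example users
hdfs dfs -chmod 700 /user/alice/private
hdfs dfs -getfacl /user/alice/private

# Grant a single extra user read/execute access through an ACL entry.
hdfs dfs -setfacl -m user:bob:r-x /user/alice/private
hdfs dfs -getfacl /user/alice/private
```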
03-02-2015
11:50 AM
Hi, I am using CDH 5.2 on RHEL 6.3. I want to run a shell script using Oozie from Hue, but I am getting an error like this:
java.io.IOException: Cannot run program "test.sh" (in directory "/apps/yarn/nm/usercache/tsingh12/appcache/application_1425085556881_0042/container_1425085556881_0042_01_000002"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.oozie.action.hadoop.ShellMain.execute(ShellMain.java:93)
at org.apache.oozie.action.hadoop.ShellMain.run(ShellMain.java:55)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
at org.apache.oozie.action.hadoop.ShellMain.main(ShellMain.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:186)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 17 more
02-23-2015
07:49 AM
Did you enable Sentry using a policy file or using the Sentry service configuration? If you did not enable it, it will give an error like this. Also note that the prerequisite for Sentry is Kerberos.
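As a rough illustration of the first option, a sketch of what a file-based policy (sentry-provider.ini) can look like; the group, role, server and privilege strings are assumptions and should be checked against the Sentry documentation for your release:

```bash
# Illustrative only: group/role/database names and the privilege syntax are
# assumptions, not taken from this thread.
cat > sentry-provider.ini <<'EOF'
[groups]
# OS/LDAP group -> Sentry role
analysts = analyst_role

[roles]
# role -> privileges
analyst_role = server=server1->db=default->table=*->action=select
EOF

# The policy file is then uploaded to HDFS and referenced from the
# Hive/Sentry configuration for policy-file authorization.
hdfs dfs -put -f sentry-provider.ini /user/hive/sentry/sentry-provider.ini
```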
02-23-2015
07:45 AM
Hi, I am using CDH 5.2 on RHEL 6.5. I did not enable ACLs in HDFS on one cluster, but on the secure cluster (with Kerberos and Sentry) I did enable them. The thing is, whenever I restrict the permissions of a file or folder on the unsecured cluster, it still lets me create files and folders from another username, but on the secured cluster, when I set restrictive permissions, it will not let me enter any other user's file or folder. Is this because of the ACLs, or does one need Kerberos to make the File Browser permissions work as expected?
02-16-2015
05:32 PM
I added these jar files. Does CDH 5.3 have better support for Spark than CDH 5.2?
02-16-2015
05:25 PM
I tried that, but it gave me an error like this: Failed to initialize compiler: object scala.runtime in compiler mirror not found Note that as of 2.8 scala does not assume use of the java classpath. For the old behavior pass -usejavacp to scala, or if using a Settings object programatically, settings.usejavacp.value = true.
02-12-2015
08:51 AM
I have been trying to use spark sql in CDH 5.2 using scala in spark -shell. I wanted to test out spark sql. I was trying a simple select statement in scala:- import org.apache.spark._ import org.apache.spark.sql._ import org.apache.spark.sql.hive._ val sparkConf = new SparkConf().setAppName("HiveFromSpark") val sc = new SparkContext(sparkConf) val hiveContext = new HiveContext(sc) import hiveContext.sql println("Result of 'SELECT *': ") sql("SELECT * FROM sample_07 limit 10").collect().foreach(println) sc.stop() The hive context gave me an error like:- scala> val hiveContext = new HiveContext(sc) error: bad symbolic reference. A signature in HiveContext.class refers to term hive in package org.apache.hadoop which is not available. It may be completely missing from the current classpath, or the version on the classpath might be incompatible with the version used when compiling HiveContext.class. error: while compiling: <console> during phase: erasure library version: version 2.10.4 compiler version: version 2.10.4 reconstructed args: last tree to typer: This(class $iwC) symbol: class $iwC (flags: ) symbol definition: class $iwC extends Serializable tpe: $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type symbol owners: class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $read -> package $line31 context owners: class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $iwC -> class $read -> package $line31 == Enclosing template or block == ClassDef( // class $iwC extends Serializable 0 "$iwC" [] Template( // val <local $iwC>: <notype>, tree.tpe=$iwC "java.lang.Object", "scala.Serializable" // parents ValDef( private "_" <tpt> <empty> ) // 5 statements DefDef( // def <init>(arg$outer: $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type): $iwC <method> <triedcooking> "<init>" [] // 1 parameter list ValDef( // $outer: $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type <param> "$outer" <tpt> // tree.tpe=$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type <empty> ) <tpt> // tree.tpe=$iwC Block( // tree.tpe=Unit Apply( // def <init>(): Object in class Object, tree.tpe=Object $iwC.super."<init>" // def <init>(): Object in class Object, tree.tpe=()Object Nil ) () ) ) ValDef( // private[this] val hiveContext: org.apache.spark.sql.hive.HiveContext private <local> <triedcooking> "hiveContext " <tpt> // tree.tpe=org.apache.spark.sql.hive.HiveContext Apply( // def <init>(sc: org.apache.spark.SparkContext): org.apache.spark.sql.hive.HiveContext in class HiveContext, tree.tpe=org.apache.spark.sql.hive.HiveContext new org.apache.spark.sql.hive.HiveContext."<init>" // def <init>(sc: org.apache.spark.SparkContext): org.apache.spark.sql.hive.HiveContext in class HiveContext, tree.tpe=(sc: org.apache.spark.SparkContext)org.apache.spark.sql.hive.HiveContext Apply( // val sc(): org.apache.spark.SparkContext, tree.tpe=org.apache.spark.SparkContext $iwC.this.$line31$$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$$outer().$VAL3().$iw().$iw().$iw().$iw().$iw().$iw().$iw().$iw().$iw().$iw()."sc" // val sc(): org.apache.spark.SparkContext, tree.tpe=()org.apache.spark.SparkContext Nil ) ) ) DefDef( // val hiveContext(): org.apache.spark.sql.hive.HiveContext <method> <stable> <accessor> "hiveContext" [] List(Nil) <tpt> // tree.tpe=org.apache.spark.sql.hive.HiveContext $iwC.this."hiveContext " // private[this] val hiveContext: 
org.apache.spark.sql.hive.HiveContext, tree.tpe=org.apache.spark.sql.hive.HiveContext ) ValDef( // protected val $outer: $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type protected <synthetic> <paramaccessor> <triedcooking> "$outer " <tpt> // tree.tpe=$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type <empty> ) DefDef( // val $outer(): $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type <method> <synthetic> <stable> <expandedname> <triedcooking> "$line31$$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$$outer" [] List(Nil) <tpt> // tree.tpe=Any $iwC.this."$outer " // protected val $outer: $iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type, tree.tpe=$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.$iwC.type ) ) ) == Expanded type of tree == ThisType(class $iwC) uncaught exception during compilation: scala.reflect.internal.Types$TypeError scala.reflect.internal.Types$TypeError: bad symbolic reference. A signature in HiveContext.class refers to term conf in value org.apache.hadoop.hive which is not available. It may be completely missing from the current classpath, or the version on the classpath might be incompatible with the version used when compiling HiveContext.class. That entry seems to have slain the compiler. Shall I replay your session? I can re-run each line except the last one. [y/n]Replaying: import org.apache.spark._ error:
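The "bad symbolic reference ... term hive in package org.apache.hadoop which is not available" part points at the Hive client classes missing from the spark-shell classpath. A rough sketch of one way to put them there, assuming a parcel-based CDH install and a Spark build that includes Hive support; the paths are assumptions and may differ on your cluster:

```bash
# Assumed parcel locations; adjust for your install.
HIVE_LIB=/opt/cloudera/parcels/CDH/lib/hive/lib
HIVE_CONF=/etc/hive/conf

# Put the Hive client jars and hive-site.xml on the spark-shell classpath.
spark-shell \
  --jars "$(ls ${HIVE_LIB}/*.jar | tr '\n' ',' | sed 's/,$//')" \
  --driver-class-path "${HIVE_CONF}"
```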
02-10-2015
11:05 AM
I am trying to integrate Cloudera Navigator with LDAP/AD using CDH 5.2 on RHEL 6.3. Even after configuring it from Cloudera Management configuration ---> Navigator metastore ---> Default group ---> External Configuration, giving all the details of the AD server, and restarting it, when I click on the Administration link in Cloudera Navigator it gives me an error like this: Cloudera Navigator must be configured with LDAP or Active Directory for the administration page to function. Is there any configuration that is missing? I followed this URL: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cn_sg_external_auth.html
02-04-2015
12:48 PM
I am trying to download the Solr examples from the Hue administrator page. It gives an error like this: Could not create instance directory. Check if [indexer] solr_zk_ensemble is correct in Hue config and look at the Solr error logs for more info. The error logs do not show any error.
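For context, the setting named in that error lives in Hue's configuration; a sketch of what the section can look like, with placeholder ZooKeeper hosts (in a Cloudera Manager deployment this would normally go into Hue's safety-valve snippet rather than being edited by hand):

```bash
# Illustrative hue.ini fragment written to a hypothetical scratch file;
# the ZooKeeper hosts and /solr chroot are placeholders and must match
# the ensemble that Solr (SolrCloud) actually uses.
cat >> hue_indexer_snippet.ini <<'EOF'
[indexer]
  # ZooKeeper ensemble used by Solr, including the chroot
  solr_zk_ensemble=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr
EOF
```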
02-03-2015
01:24 PM
I was able to resolve this. I changed a property in krb5.conf: dns_lookup_kdc = true. I was getting an error while creating the initial KDC credentials; by changing this property from false to true I was able to install Spark on the secure, Kerberized cluster with Sentry.
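For context, a sketch of where that property sits in /etc/krb5.conf; the realm and hostnames below are placeholders, not values from this thread:

```bash
# /etc/krb5.conf, [libdefaults] section -- illustrative values only:
#
#   [libdefaults]
#     default_realm  = EXAMPLE.COM
#     dns_lookup_kdc = true    # locate KDCs via DNS SRV records
#
# After the change, confirm that initial credentials can be obtained:
kinit someuser@EXAMPLE.COM
klist
```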
02-03-2015
07:46 AM
Grant the required permissions, using the SQL GRANT syntax, on the role that the user is part of. After that you should be able to create the table.
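A minimal sketch of what that looks like with Sentry-style GRANT statements issued through Beeline; the role, group, database and JDBC URL are illustrative assumptions:

```bash
# Role, group, database and connection string below are placeholders.
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default;principal=hive/hiveserver2.example.com@EXAMPLE.COM" <<'EOF'
CREATE ROLE etl_role;
GRANT ALL ON DATABASE default TO ROLE etl_role;
GRANT ROLE etl_role TO GROUP etl_users;
SHOW CURRENT ROLES;
EOF
```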
... View more
01-30-2015
09:04 AM
I got the same error. Restart the Cloudera Management Service and it should start working.
... View more
01-30-2015
09:03 AM
I got the same error and solved it by restarting the Cloudera Management Service; after that it worked.
... View more
01-29-2015
01:20 PM
What if I want to connect a BI tool such as Spotfire or Tableau, using LDAP user authentication through the Cloudera Hive connector, to a Kerberized cluster? Is there any workaround for that? Is there any way to do it in 5.2 or 5.3, the way Impala supports it?
... View more
01-27-2015
11:23 AM
Sqoop in CDH 5 works fine with a local Linux user in a Kerberized environment, but when I run Sqoop as an LDAP user it says no rules are applied to the user principal. Does anything else need to be configured for an LDAP user?
... View more
01-23-2015
10:33 AM
Also, when I try to run a simple script that just loads the data and dumps it in the Hue editor, it gives me an error like:

Could not find job job_1419349593786_0009. Job job_1419349593786_0009 could not be found: {"RemoteException":{"exception":"NotFoundException","message":"java.lang.Exception: job, job_1419349593786_0009, is not found","javaClassName":"org.apache.hadoop.yarn.webapp.NotFoundException"}} (error 404)

and the log files say:

[23/Jan/2015 10:19:47 -0800] access INFO 10.22.161.67 tsingh12 - "GET /pig/watch/0000001-141223104749248-oozie-oozi-W HTTP/1.1"
[23/Jan/2015 10:19:45 -0800] api ERROR An error happen while watching the demo running: Could not find job job_1419349593786_0009.
[23/Jan/2015 10:19:45 -0800] kerberos_ DEBUG handle_response(): returning <Response [404]>
[23/Jan/2015 10:19:45 -0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 404 response
[23/Jan/2015 10:19:45 -0800] kerberos_ DEBUG handle_other(): Handling: 404

Any suggestions? Kerberos is actually giving us a lot of problems, and we need to deploy it in a production environment. Also, Sqoop2 does not support Kerberos yet in CDH 5.3; any workaround would be good to have.
... View more
01-23-2015
09:07 AM
I am using CDH 5.2 on RHEL 6.5 I was trying to enter into pig shell,grunt in kerberized environment when it gives me an error like and after that says no rules applied on the principle :- 2015-01-23 11:59:37,527 [main] ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. Failed to create DataStorage 2015-01-23 11:59:37,527 [main] WARN org.apache.pig.Main - There is no log file to write to. 2015-01-23 11:59:37,527 [main] ERROR org.apache.pig.Main - java.lang.RuntimeException: Failed to create DataStorage at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:75) at org.apache.pig.backend.hadoop.datastorage.HDataStorage.<init>(HDataStorage.java:58) at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:215) at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.init(HExecutionEngine.java:122) at org.apache.pig.impl.PigContext.connect(PigContext.java:301) at org.apache.pig.PigServer.<init>(PigServer.java:220) at org.apache.pig.PigServer.<init>(PigServer.java:205) at org.apache.pig.tools.grunt.Grunt.<init>(Grunt.java:47) at org.apache.pig.Main.run(Main.java:538) at org.apache.pig.Main.main(Main.java:156) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: java.io.IOException: failure to login at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:782) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:734) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169) at org.apache.pig.backend.hadoop.datastorage.HDataStorage.init(HDataStorage.java:72) ... 14 more
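For what it is worth, the first thing I check before starting grunt is that the shell session actually has a valid ticket for a principal the cluster knows how to map (the principal below is a placeholder for my own user):

# placeholders: use your own principal or keytab
kinit tsingh12@CDH5.XXX.COM
# confirm a valid krbtgt ticket is present
klist
# then start the grunt shell
pig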
... View more
01-22-2015
08:50 AM
It says either LDAP or Kerberos. I am using LDAP for user authentication and Kerberos for service authentication. I did the same for Impala and it works fine, but when I configure Hive for LDAP it gives me an error (trace in the previous reply).
... View more
01-22-2015
07:59 AM
Hi, I am using CDH 5.2 on RHEL 6.5. I am trying to install Spark in YARN mode in a Kerberized environment, but it fails on the third step, when it tries to upload the jars after creating the history server and user directory.

+ echo 'Using /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand as conf dir'
+ echo 'Using scripts/control.sh as process script'
+ export COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ chmod u+x /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/scripts/control.sh
+ exec /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/scripts/control.sh upload_jar
Thu Jan 22 10:41:50 EST 2015
Thu Jan 22 10:41:50 EST 2015: Detected CDH_VERSION of [5]
Thu Jan 22 10:41:50 EST 2015: Uploading Spark assembly jar to '/user/spark/share/lib/spark-assembly.jar' on CDH 5 cluster
+ export SCM_KERBEROS_PRINCIPAL=spark/itsusmpl00512.xxx.com@CDH5.xxx.COM
+ SCM_KERBEROS_PRINCIPAL=spark/itsusmpl00512.xxx.com@CDH5.xxx.COM
+ acquire_kerberos_tgt spark_on_yarn.keytab
+ '[' -z spark_on_yarn.keytab ']'
+ '[' -n spark/itsusmpl00512.xxx.com@CDH5.xxx.COM ']'
+ '[' -d /usr/kerberos/bin ']'
+ which kinit
+ '[' 0 -ne 0 ']'
++ id -u
+ export KRB5CCNAME=/var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/krb5cc_481
+ KRB5CCNAME=/var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/krb5cc_481
+ echo 'using spark/itsusmpl00512.jnj.com@CDH5.JNJ.COM as Kerberos principal'
+ echo 'using /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/krb5cc_481 as Kerberos ticket cache'
+ kinit -c /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/krb5cc_481 -kt /var/run/cloudera-scm-agent/process/295-spark_on_yarn-SPARK_YARN_HISTORY_SERVER-SparkUploadJarCommand/spark_on_yarn.keytab spark/itsusmpl00512.xxx.com@CDH5.xxx.COM
kinit: Cannot resolve network address for KDC in realm "CDH5.xxx.COM" while getting initial credentials
+ '[' 1 -ne 0 ']'
+ echo 'kinit was not successful.'
+ exit 1
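The "Cannot resolve network address for KDC" message makes me look at /etc/krb5.conf on that host: either the realm needs an explicit kdc entry in [realms], or dns_lookup_kdc has to be enabled so the KDC can be discovered through DNS SRV records. Roughly along these lines (the KDC hostname below is a placeholder):

[libdefaults]
  default_realm = CDH5.xxx.COM
  # either enable DNS lookup of the KDC ...
  dns_lookup_kdc = true

[realms]
  # ... or list the KDC explicitly (placeholder host)
  CDH5.xxx.COM = {
    kdc = kdc-host.xxx.com
    admin_server = kdc-host.xxx.com
  }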
... View more
01-08-2015
09:12 AM
Hi, I am trying to enable LDAP on CDH 5.2 by adding this property in hive-site.xml using safety valve in configuration of cloudera manager. <property> <name>hive.server2.authentication</name> <value>LDAP</value> </property> <property> <name>hive.server2.authentication.ldap.url</name> <value>ldap://itsusranadc10.na.xxx.com:3268</value> </property> Hiveserver2 gives error on restart 12:05:38.554 PM ERROR org.apache.thrift.transport.TSaslTransport SASL negotiation failure javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212) at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:262) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:347) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:216) at org.apache.sentry.binding.metastore.SentryHiveMetaStoreClient.<init>(SentryHiveMetaStoreClient.java:54) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1422) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:63) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:73) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2462) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2481) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:291) at org.apache.hive.service.cli.session.SessionManager.applyAuthorizationConfigPolicy(SessionManager.java:113) at org.apache.hive.service.cli.session.SessionManager.init(SessionManager.java:74) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.cli.CLIService.init(CLIService.java:111) at org.apache.hive.service.CompositeService.init(CompositeService.java:59) at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:68) at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:100) at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:149) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121) at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193) ... 36 more
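For readability, the same safety-valve snippet formatted, plus the optional baseDN property shown purely as an illustration (the DN below is made up; the URL is just my environment's AD global catalog):

<property>
  <name>hive.server2.authentication</name>
  <value>LDAP</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.url</name>
  <value>ldap://itsusranadc10.na.xxx.com:3268</value>
</property>
<!-- optional, illustration only: search base for user lookups -->
<property>
  <name>hive.server2.authentication.ldap.baseDN</name>
  <value>ou=people,dc=example,dc=com</value>
</property>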
... View more
01-08-2015
08:00 AM
I meant LDAP and Kerberos on the same HiveServer2 instance. Is there any way to do that?
... View more
01-05-2015
10:42 AM
Apparently hdfs is not allowed to run MR jobs, but another user, chdfs, which is an admin user with its own principal in the KDC, can run the same command after I do kinit chdfs and enter the password when prompted. What if I want to run the MR job as my own username by impersonating chdfs, rather than creating my own principal? When I try that, it gives me the same error as above, saying no ticket was found. Am I doing this correctly, or am I missing a step?
... View more
01-05-2015
08:10 AM
LDAP can be enabled with Sentry; for LDAPS you will also need an SSL key and certificate, which is the next step (for security purposes). Secondly, to enable Sentry with Hive/Impala you need an admin user: check the allowed users property in the Impala and Sentry configurations. That tells you which users are allowed to bypass Sentry authorization, so that you can use one of those admin users to grant roles and privileges to everyone else. Switch to that admin user and go into the Impala shell; I tried it with su impala and then impala-shell, as sketched below. Grant the roles and privileges using SQL statements. Take a look at this URL: http://www.cloudera.com/content/cloudera/en/documentation/core/v5-2-x/topics/sg_hive_sql.html
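Roughly the sequence (impala is just the admin user in my case; check your own allowed-users list):

# switch to a user that is in the Sentry admin list
su - impala
impala-shell
-- inside impala-shell, verify you really have admin rights before granting anything:
SHOW ROLES;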
... View more
01-05-2015
07:30 AM
Hi, I am using CDH 5.2 on RHEL 6.3 with Sentry and Kerberos enabled. I want to access HiveServer2 via Beeline after configuring LDAP. I understand that HiveServer2 does not support both LDAP and Kerberos in the same server instance. How can I do that?
... View more
12-22-2014
02:08 PM
Hi, I want to execute simple word count example .I have made a java code with name MRV1_1.jar and given i/o file names but it gives me an error . I am using CDH5.2 on RHEL 6.3. Kerberos is enabled,i am impersonating hdfs on my username. I did klist and kinit and it gave me following:- [tsingh12@itsusmpl00512:/root]# #-> kinit hdfs Password for hdfs@CDH5.XXX.XXX: [tsingh12@itsusmpl00512:/root]# #-> klist Ticket cache: FILE:/tmp/krb5cc_38157 Default principal: hdfs@CDH5.XXX.COM Valid starting Expires Service principal 12/22/14 17:06:00 12/23/14 17:06:00 krbtgt/CDH5.XXX.COM@CDH5.XXX.COM renew until 12/29/14 17:06:00 [tsingh12@itsusmpl00512:/root]# But when i run a job it says invalid principal. #-> hadoop jar MRV_1_1.jar /user/tsingh12/Count.txt /user/tsingh12/output/Count Exception in thread "main" java.lang.RuntimeException: java.io.IOException: failure to login at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:660) at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:436) at com.jnj.runJob.WordCount.main(WordCount.java:44) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: java.io.IOException: failure to login at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:782) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:734) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169) at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:656) ... 
7 more Caused by: javax.security.auth.login.LoginException: java.lang.IllegalArgumentException: Illegal principal name hdfs@CDH5.XXX.XXX at org.apache.hadoop.security.User.<init>(User.java:50) at org.apache.hadoop.security.User.<init>(User.java:43) at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186) at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703) at javax.security.auth.login.LoginContext.login(LoginContext.java:576) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:757) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:734) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:607) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753) at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169) at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:656) at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:436) at com.jnj.runJob.WordCount.main(WordCount.java:44) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to hdfs@CDH5.XXX.XXX at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389) at org.apache.hadoop.security.User.<init>(User.java:48) ... 28 more at javax.security.auth.login.LoginContext.invoke(LoginContext.java:872) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186) at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703) at javax.security.auth.login.LoginContext.login(LoginContext.java:576) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:757) ... 15 more Any suggestions what could be wrong?
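One thing I notice: klist shows the principal as hdfs@CDH5.XXX.COM, but the exception complains about hdfs@CDH5.XXX.XXX and then says "No rules applied". If the ticket's realm really does differ from the cluster's default realm, I believe the fix is an extra hadoop.security.auth_to_local mapping rule (realm below is a placeholder), e.g. via the HDFS core-site safety valve:

<property>
  <name>hadoop.security.auth_to_local</name>
  <!-- maps any user@CDH5.XXX.COM (placeholder realm) to its short name;
       DEFAULT keeps the standard behaviour for the default realm -->
  <value>
    RULE:[1:$1@$0](.*@CDH5.XXX.COM)s/@.*//
    DEFAULT
  </value>
</property>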
... View more