Member since: 08-15-2016
Posts: 33
Kudos Received: 6
Solutions: 4
My Accepted Solutions
Views | Posted
---|---
9260 | 02-21-2017 10:56 AM
2468 | 01-18-2017 07:06 AM
3378 | 04-07-2016 04:15 PM
4094 | 04-04-2016 05:03 PM
05-25-2017
12:34 PM
Tim - See the next post for the TEXT PLAN. Please let me know if you figure out what's causing the error message.
05-25-2017
12:29 PM
Estimated Per-Host Requirements: Memory=628.99MB VCores=3

PLAN-ROOT SINK
|
66:EXCHANGE [UNPARTITIONED]
|  hosts=10 per-host-mem=unavailable
|  tuple-ids=20,42N row-size=210B cardinality=26977
|
32:HASH JOIN [LEFT OUTER JOIN, PARTITIONED]
|  hash predicates: campaign = campaign, carrier = carrier, market = market, sessiontype = sessiontype
|  hosts=10 per-host-mem=292.69KB
|  tuple-ids=20,42N row-size=210B cardinality=26977
|
|--65:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=42 row-size=101B cardinality=26977
|  |
|  64:AGGREGATE [FINALIZE]
|  |  output: sum:merge(CASE WHEN carrier_count = 2 THEN samples ELSE 0 END), sum:merge(CAST(samples AS FLOAT)), sum:merge(CASE WHEN carrier_count = 3 THEN samples ELSE 0 END), sum:merge(CASE WHEN carrier_count > 1 THEN samples ELSE 0 END), sum:merge(CASE WHEN carrier_count > 1 THEN sum_total_bandwidth ELSE 0 END)
|  |  group by: campaign, market, carrier, sessiontype
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=42 row-size=101B cardinality=26977
|  |
|  63:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=42 row-size=101B cardinality=26977
|  |
|  31:AGGREGATE [STREAMING]
|  |  output: sum(CASE WHEN carrier_count = 2 THEN samples ELSE 0 END), sum(CAST(samples AS FLOAT)), sum(CASE WHEN carrier_count = 3 THEN samples ELSE 0 END), sum(CASE WHEN carrier_count > 1 THEN samples ELSE 0 END), sum(CASE WHEN carrier_count > 1 THEN sum_total_bandwidth ELSE 0 END)
|  |  group by: campaign, market, carrier, sessiontype
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=42 row-size=101B cardinality=26977
|  |
|  16:UNION
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=40 row-size=78B cardinality=26977
|  |
|  |--62:AGGREGATE [FINALIZE]
|  |  |  output: sum:merge(ltebwcum), count:merge(*)
|  |  |  group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode
|  |  |  having: tech_mode = 'LTECA'
|  |  |  hosts=10 per-host-mem=10.00MB
|  |  |  tuple-ids=38 row-size=94B cardinality=13400
|  |  |
|  |  61:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]
|  |  |  hosts=10 per-host-mem=0B
|  |  |  tuple-ids=38 row-size=94B cardinality=13400
|  |  |
|  |  30:AGGREGATE [STREAMING]
|  |  |  output: sum(ltebwcum), count(*)
|  |  |  group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 4 WHEN l_pdschbytes_scc2 > 0 THEN 3 WHEN l_pdschbytes_scc1 > 0 THEN 2 WHEN L_pdschbytes > 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END
|  |  |  hosts=10 per-host-mem=10.00MB
|  |  |  tuple-ids=38 row-size=94B cardinality=13400
|  |  |
|  |  29:HASH JOIN [INNER JOIN, PARTITIONED]
|  |  |  hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename
|  |  |  other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) <= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) >= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))
|  |  |  runtime filters: RF015 <- campaign, RF017 <- a.market, RF016 <- carrier, RF018 <- filename
|  |  |  hosts=10 per-host-mem=1018.71KB
|  |  |  tuple-ids=31,32,34 row-size=468B cardinality=13400
|  |  |
|  |  |--60:EXCHANGE [HASH(campaign,carrier,a.market,filename)]
|  |  |  |  hosts=10 per-host-mem=0B
|  |  |  |  tuple-ids=32,34 row-size=281B cardinality=33804
|  |  |  |
|  |  |  28:HASH JOIN [INNER JOIN, BROADCAST]
|  |  |  |  hash predicates: a.market = market
|  |  |  |  hosts=10 per-host-mem=17.62KB
|  |  |  |  tuple-ids=32,34 row-size=281B cardinality=33804
|  |  |  |
|  |  |  |--58:EXCHANGE [BROADCAST]
|  |  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  |  tuple-ids=34 row-size=29B cardinality=569
|  |  |  |  |
|  |  |  |  57:AGGREGATE [FINALIZE]
|  |  |  |  |  group by: market
|  |  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  |  tuple-ids=34 row-size=29B cardinality=569
|  |  |  |  |
|  |  |  |  56:EXCHANGE [HASH(market)]
|  |  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  |  tuple-ids=34 row-size=29B cardinality=569
|  |  |  |  |
|  |  |  |  27:AGGREGATE [STREAMING]
|  |  |  |  |  group by: market
|  |  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  |  tuple-ids=34 row-size=29B cardinality=569
|  |  |  |  |
|  |  |  |  26:SCAN HDFS [mobistat.allstats_packet, RANDOM]
|  |  |  |     partitions=1/1 files=6 size=9.32MB
|  |  |  |     predicates: bbdo_approved = 1, campaign = '17D1'
|  |  |  |     table stats: 51137 rows total
|  |  |  |     column stats: all
|  |  |  |     hosts=5 per-host-mem=32.00MB
|  |  |  |     tuple-ids=33 row-size=53B cardinality=1065
|  |  |  |
|  |  |  25:SCAN HDFS [mobistat.cdr_packet a, RANDOM]
|  |  |     partitions=4680/10328 files=4681 size=4.70GB
|  |  |     predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) > 0.999
|  |  |     table stats: 58699689 rows total
|  |  |     column stats: all
|  |  |     hosts=10 per-host-mem=304.00MB
|  |  |     tuple-ids=32 row-size=252B cardinality=2888511
|  |  |
|  |  59:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]
|  |  |  hosts=10 per-host-mem=0B
|  |  |  tuple-ids=31 row-size=187B cardinality=23267425
|  |  |
|  |  24:SCAN HDFS [mobistat.psr_packet_cdma a, RANDOM]
|  |     partitions=2332/3707 files=2332 size=45.71GB
|  |     predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END IS NOT NULL
|  |     runtime filters: RF015 -> a.campaign, RF017 -> a.market, RF016 -> a.carrier, RF018 -> a.filename
|  |     table stats: 358488531 rows total
|  |     column stats: all
|  |     hosts=10 per-host-mem=608.00MB
|  |     tuple-ids=31 row-size=187B cardinality=23267425
|  |
|  55:AGGREGATE [FINALIZE]
|  |  output: sum:merge(ltebwcum), count:merge(*)
|  |  group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode
|  |  having: tech_mode = 'LTECA'
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=29 row-size=94B cardinality=13577
|  |
|  54:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=29 row-size=94B cardinality=13577
|  |
|  23:AGGREGATE [STREAMING]
|  |  output: sum(ltebwcum), count(*)
|  |  group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 4 WHEN l_pdschbytes_scc2 > 0 THEN 3 WHEN l_pdschbytes_scc1 > 0 THEN 2 WHEN L_pdschbytes > 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=29 row-size=94B cardinality=13577
|  |
|  22:HASH JOIN [INNER JOIN, PARTITIONED]
|  |  hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename
|  |  other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) <= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) >= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))
|  |  hosts=10 per-host-mem=1018.71KB
|  |  tuple-ids=22,23,25 row-size=468B cardinality=13577
|  |
|  |--53:EXCHANGE [HASH(campaign,carrier,a.market,filename)]
|  |  |  hosts=10 per-host-mem=0B
|  |  |  tuple-ids=23,25 row-size=281B cardinality=33804
|  |  |
|  |  21:HASH JOIN [INNER JOIN, BROADCAST]
|  |  |  hash predicates: a.market = market
|  |  |  hosts=10 per-host-mem=17.62KB
|  |  |  tuple-ids=23,25 row-size=281B cardinality=33804
|  |  |
|  |  |--51:EXCHANGE [BROADCAST]
|  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  tuple-ids=25 row-size=29B cardinality=569
|  |  |  |
|  |  |  50:AGGREGATE [FINALIZE]
|  |  |  |  group by: market
|  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  tuple-ids=25 row-size=29B cardinality=569
|  |  |  |
|  |  |  49:EXCHANGE [HASH(market)]
|  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  tuple-ids=25 row-size=29B cardinality=569
|  |  |  |
|  |  |  20:AGGREGATE [STREAMING]
|  |  |  |  group by: market
|  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  tuple-ids=25 row-size=29B cardinality=569
|  |  |  |
|  |  |  19:SCAN HDFS [mobistat.allstats_packet, RANDOM]
|  |  |     partitions=1/1 files=6 size=9.32MB
|  |  |     predicates: bbdo_approved = 1, campaign = '17D1'
|  |  |     table stats: 51137 rows total
|  |  |     column stats: all
|  |  |     hosts=5 per-host-mem=32.00MB
|  |  |     tuple-ids=24 row-size=53B cardinality=1065
|  |  |
|  |  18:SCAN HDFS [mobistat.cdr_packet a, RANDOM]
|  |     partitions=4680/10328 files=4681 size=4.70GB
|  |     predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) > 0.999
|  |     table stats: 58699689 rows total
|  |     column stats: all
|  |     hosts=10 per-host-mem=304.00MB
|  |     tuple-ids=23 row-size=252B cardinality=2888511
|  |
|  52:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=22 row-size=187B cardinality=23574499
|  |
|  17:SCAN HDFS [mobistat.psr_packet_gsm a, RANDOM]
|     partitions=2336/6581 files=2336 size=48.59GB
|     predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END IS NOT NULL
|     table stats: 636668107 rows total
|     column stats: all
|     hosts=10 per-host-mem=608.00MB
|     tuple-ids=22 row-size=187B cardinality=23574499
|
48:AGGREGATE [FINALIZE]
|  output: sum:merge(carrier_count * samples), sum:merge(samples), sum:merge(sum_256qam), sum:merge(sum_total_frames), sum:merge(sum_total_bandwidth), sum:merge(sum_4tx_samples)
|  group by: campaign, market, carrier, sessiontype
|  hosts=10 per-host-mem=10.00MB
|  tuple-ids=20 row-size=109B cardinality=26977
|
47:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]
|  hosts=10 per-host-mem=0B
|  tuple-ids=20 row-size=109B cardinality=26977
|
15:AGGREGATE [STREAMING]
|  output: sum(carrier_count * samples), sum(samples), sum(sum_256qam), sum(sum_total_frames), sum(sum_total_bandwidth), sum(sum_4tx_samples)
|  group by: campaign, market, carrier, sessiontype
|  hosts=10 per-host-mem=10.00MB
|  tuple-ids=20 row-size=109B cardinality=26977
|
00:UNION
|  hosts=10 per-host-mem=0B
|  tuple-ids=18 row-size=102B cardinality=26977
|
|--46:AGGREGATE [FINALIZE]
|  |  output: sum:merge(l_dlnum256qam), sum:merge(CAST(total_frames AS FLOAT)), sum:merge(ltebwcum), sum:merge(CASE WHEN l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count:merge(*)
|  |  group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=16 row-size=118B cardinality=13400
|  |
|  45:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=16 row-size=118B cardinality=13400
|  |
|  14:AGGREGATE [STREAMING]
|  |  output: sum(l_dlnum256qam), sum(CAST((l_dlnum256qam + l_dlnum64qam + l_dlnum16qam + l_dlnumqpsk) AS FLOAT)), sum(ltebwcum), sum(CASE WHEN a.l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count(*)
|  |  group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 4 WHEN l_pdschbytes_scc2 > 0 THEN 3 WHEN l_pdschbytes_scc1 > 0 THEN 2 WHEN L_pdschbytes > 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END
|  |  hosts=10 per-host-mem=10.00MB
|  |  tuple-ids=16 row-size=118B cardinality=13400
|  |
|  13:HASH JOIN [INNER JOIN, PARTITIONED]
|  |  hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename
|  |  other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) <= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) >= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))
|  |  runtime filters: RF005 <- campaign, RF006 <- carrier, RF007 <- a.market, RF008 <- filename
|  |  hosts=10 per-host-mem=1018.71KB
|  |  tuple-ids=9,10,12 row-size=488B cardinality=13400
|  |
|  |--44:EXCHANGE [HASH(campaign,carrier,a.market,filename)]
|  |  |  hosts=10 per-host-mem=0B
|  |  |  tuple-ids=10,12 row-size=281B cardinality=33804
|  |  |
|  |  12:HASH JOIN [INNER JOIN, BROADCAST]
|  |  |  hash predicates: a.market = market
|  |  |  hosts=10 per-host-mem=17.62KB
|  |  |  tuple-ids=10,12 row-size=281B cardinality=33804
|  |  |
|  |  |--42:EXCHANGE [BROADCAST]
|  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  tuple-ids=12 row-size=29B cardinality=569
|  |  |  |
|  |  |  41:AGGREGATE [FINALIZE]
|  |  |  |  group by: market
|  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  tuple-ids=12 row-size=29B cardinality=569
|  |  |  |
|  |  |  40:EXCHANGE [HASH(market)]
|  |  |  |  hosts=5 per-host-mem=0B
|  |  |  |  tuple-ids=12 row-size=29B cardinality=569
|  |  |  |
|  |  |  11:AGGREGATE [STREAMING]
|  |  |  |  group by: market
|  |  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  |  tuple-ids=12 row-size=29B cardinality=569
|  |  |  |
|  |  |  10:SCAN HDFS [mobistat.allstats_packet, RANDOM]
|  |  |     partitions=1/1 files=6 size=9.32MB
|  |  |     predicates: bbdo_approved = 1, campaign = '17D1'
|  |  |     table stats: 51137 rows total
|  |  |     column stats: all
|  |  |     hosts=5 per-host-mem=32.00MB
|  |  |     tuple-ids=11 row-size=53B cardinality=1065
|  |  |
|  |  09:SCAN HDFS [mobistat.cdr_packet a, RANDOM]
|  |     partitions=4680/10328 files=4681 size=4.70GB
|  |     predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) > 0.999
|  |     table stats: 58699689 rows total
|  |     column stats: all
|  |     hosts=10 per-host-mem=304.00MB
|  |     tuple-ids=10 row-size=252B cardinality=2888511
|  |
|  43:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=9 row-size=207B cardinality=23267425
|  |
|  08:SCAN HDFS [mobistat.psr_packet_cdma a, RANDOM]
|     partitions=2332/3707 files=2332 size=45.71GB
|     predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END IS NOT NULL
|     runtime filters: RF005 -> a.campaign, RF006 -> a.carrier, RF007 -> a.market, RF008 -> a.filename
|     table stats: 358488531 rows total
|     column stats: all
|     hosts=10 per-host-mem=608.00MB
|     tuple-ids=9 row-size=207B cardinality=23267425
|
39:AGGREGATE [FINALIZE]
|  output: sum:merge(l_dlnum256qam), sum:merge(CAST(total_frames AS FLOAT)), sum:merge(ltebwcum), sum:merge(CASE WHEN l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count:merge(*)
|  group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode
|  hosts=10 per-host-mem=10.00MB
|  tuple-ids=7 row-size=118B cardinality=13577
|
38:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]
|  hosts=10 per-host-mem=0B
|  tuple-ids=7 row-size=118B cardinality=13577
|
07:AGGREGATE [STREAMING]
|  output: sum(l_dlnum256qam), sum(CAST((l_dlnum256qam + l_dlnum64qam + l_dlnum16qam + l_dlnumqpsk) AS FLOAT)), sum(ltebwcum), sum(CASE WHEN a.l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count(*)
|  group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 4 WHEN l_pdschbytes_scc2 > 0 THEN 3 WHEN l_pdschbytes_scc1 > 0 THEN 2 WHEN L_pdschbytes > 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END
|  hosts=10 per-host-mem=10.00MB
|  tuple-ids=7 row-size=118B cardinality=13577
|
06:HASH JOIN [INNER JOIN, PARTITIONED]
|  hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename
|  other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) <= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) >= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))
|  runtime filters: RF000 <- campaign, RF001 <- carrier
|  hosts=10 per-host-mem=1018.71KB
|  tuple-ids=0,1,3 row-size=488B cardinality=13577
|
|--37:EXCHANGE [HASH(campaign,carrier,a.market,filename)]
|  |  hosts=10 per-host-mem=0B
|  |  tuple-ids=1,3 row-size=281B cardinality=33804
|  |
|  05:HASH JOIN [INNER JOIN, BROADCAST]
|  |  hash predicates: a.market = market
|  |  hosts=10 per-host-mem=17.62KB
|  |  tuple-ids=1,3 row-size=281B cardinality=33804
|  |
|  |--35:EXCHANGE [BROADCAST]
|  |  |  hosts=5 per-host-mem=0B
|  |  |  tuple-ids=3 row-size=29B cardinality=569
|  |  |
|  |  34:AGGREGATE [FINALIZE]
|  |  |  group by: market
|  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  tuple-ids=3 row-size=29B cardinality=569
|  |  |
|  |  33:EXCHANGE [HASH(market)]
|  |  |  hosts=5 per-host-mem=0B
|  |  |  tuple-ids=3 row-size=29B cardinality=569
|  |  |
|  |  04:AGGREGATE [STREAMING]
|  |  |  group by: market
|  |  |  hosts=5 per-host-mem=10.00MB
|  |  |  tuple-ids=3 row-size=29B cardinality=569
|  |  |
|  |  03:SCAN HDFS [mobistat.allstats_packet, RANDOM]
|  |     partitions=1/1 files=6 size=9.32MB
|  |     predicates: bbdo_approved = 1, campaign = '17D1'
|  |     table stats: 51137 rows total
|  |     column stats: all
|  |     hosts=5 per-host-mem=32.00MB
|  |     tuple-ids=2 row-size=53B cardinality=1065
|  |
|  02:SCAN HDFS [mobistat.cdr_packet a, RANDOM]
|     partitions=4680/10328 files=4681 size=4.70GB
|     predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) > 0.999
|     table stats: 58699689 rows total
|     column stats: all
|     hosts=10 per-host-mem=304.00MB
|     tuple-ids=1 row-size=252B cardinality=2888511
|
36:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]
|  hosts=10 per-host-mem=0B
|  tuple-ids=0 row-size=207B cardinality=23574499
|
01:SCAN HDFS [mobistat.psr_packet_gsm a, RANDOM]
   partitions=2336/6581 files=2336 size=48.59GB
   predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 > 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 > 0 THEN 'LTECA' WHEN L_pdschbytes > 0 THEN 'LTE' ELSE NULL END IS NOT NULL
   runtime filters: RF000 -> a.campaign, RF001 -> a.carrier
   table stats: 636668107 rows total
   column stats: all
   hosts=10 per-host-mem=608.00MB
   tuple-ids=0 row-size=207B cardinality=23574499
05-25-2017
09:38 AM
Hey guys - I am using CDH 5.10.1 and noticed the exact same error. In our case, the required mem_limit was 686 MB and we gave it 3 GB. No other query was running on the coordinator at the time, so it is quite confusing that this error comes up. Please let me know if any of you have figured out a solution to this problem.
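For anyone reproducing this, here is roughly how we pinned the limit and checked what was actually granted. This is only a sketch: the 3gb value and the table name are illustrative, not our real query.

-- Set the per-query memory limit explicitly for this session (impala-shell).
SET MEM_LIMIT=3gb;
-- Re-run the failing query (placeholder shown), then inspect the profile;
-- the profile shows the mem_limit the query was actually admitted with.
SELECT count(*) FROM some_table;
PROFILE;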
05-23-2017
01:54 PM
1 Kudo
We are using CDH 5.10.1 and notice frequent exceptions in Hue when running Impala queries in a Notebook: "Results expired. Rerun queries to get results." Any idea why this is happening, and what can be done to resolve it? Our Hive Metastore is on PostgreSQL.
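The knob we are experimenting with is Hue's Impala query timeout in hue.ini; whether it fully explains the expirations is our assumption, and the value below is illustrative.

[impala]
  # If query_timeout_s > 0, Hue lets Impala expire queries that sit idle for
  # this many seconds; once expired, the result set is gone and Hue reports
  # the results as expired. 0 disables the Hue-side timeout (illustrative
  # setting; idle queries then hold resources until closed manually).
  query_timeout_s=0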
Labels:
- Apache Impala
- Cloudera Hue
04-18-2017
02:40 PM
We are running Impala v2.7.0-cdh5.10.1 (876895d) and are facing a similar issue. While running alter table <table-name> recover partitions; we get the error "Error communicating with impalad: TSocket read 0 bytes". After multiple attempts, we were finally able to recover the partitions on the table. At that point we ran compute stats <table-name>; and received this error message: CatalogException: Table was modified during stats computation. Although the two errors may be unrelated, they seem to happen only on the two tables with over 10,000 partition files.
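For reference, the statement sequence looks like the sketch below (the table name is a placeholder). On the large tables we are considering incremental stats, on the assumption that smaller per-statement metadata updates are less likely to hit the timeout and the concurrent-modification error.

-- What we ran, with a placeholder table name:
ALTER TABLE big_table RECOVER PARTITIONS;
COMPUTE STATS big_table;

-- What we may try next on the 10,000+ partition tables (assumption: smaller
-- incremental metadata updates avoid the two errors above):
COMPUTE INCREMENTAL STATS big_table;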
Labels:
- Apache Impala
02-21-2017
12:34 PM
That was my first instinct too: that we are not passing hive-site.xml properly. We had the same issue back when we were using CDH 5.7.2, and passing hive-site.xml from HDFS via --files in SPARK OPTIONS fixed it. Here in CDH 5.10, we are passing hive-site.xml via --files in SPARK OPTIONS, and we are also attaching it under the new FILE field on the Oozie Spark action. If it is still hive-site.xml, I am not sure why it would be considered missing when it is in SPARK OPTIONS. Any suggestions on how to troubleshoot this?
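For clarity, the relevant part of the workflow now looks roughly like the fragment below. The paths are placeholders and the element layout is from memory of what Hue generates, so treat it as a sketch rather than the exact XML.

<action name="etl-spark">
    <spark xmlns="uri:oozie:spark-action:0.1">
        ...
        <!-- spark-submit takes the plural --files flag -->
        <spark-opts>--files hdfs:///user/oozie/conf/hive-site.xml</spark-opts>
        <!-- What the FILE field in Hue's editor serializes to, as far as we can tell -->
        <file>hdfs:///user/oozie/conf/hive-site.xml#hive-site.xml</file>
    </spark>
    ...
</action>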
02-21-2017
10:56 AM
1 Kudo
On our cluster, the Hive Metastore is on an external MySQL database, so the Oozie spark-submit should be looking for a MySQL driver to connect to the metastore over the thrift connection. However, for some reason it is looking for the Derby jar. Upon investigating, I noticed that the Derby jar is not being passed with the spark-submit. I resolved the issue by attaching the Derby jar to the SPARK ACTION, as sketched below. Bizarre, but this seems to resolve it. I am going to dig into the source code to figure out the underlying problem, and will update this post if and when I find something.
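Concretely, the workaround amounts to shipping the Derby jar with the action, something like the fragment below; the jar path and version are placeholders from our parcel layout.

<!-- Attach the Derby jar to the Spark action so it lands on the container
     classpath; path and version are placeholders. -->
<file>hdfs:///user/oozie/libs/derby-10.11.1.1.jar</file>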
02-21-2017
08:28 AM
1 Kudo
We are running CDH 5.10 with Impala and Spark. Our ETL uses HiveSQLContext and has no issues executing via spark-submit. However, when we try to use the same ETL jar from an Oozie Spark action, we get the error message below. We had the same issue with CDH 5.7.2, at which point we passed hive-site.xml as a --files argument in the Spark action on Oozie and it worked fine. However, the same does not seem to work with CDH 5.10. I did notice that the Oozie Spark action was updated to have a FILE field, so I removed --files and attached hive-site.xml in the FILE field. This did not help either. Any suggestions on how to resolve this in CDH 5.10?

17/02/21 16:16:05 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
17/02/21 16:16:05 WARN Hive: Failed to register all functions.
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3230)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3249)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3474)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:225)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:209)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:332)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:293)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:268)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:529)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:220)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:210)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:442)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at MobistatPSRETL$.main(MobistatPSRETL.scala:90)
at MobistatPSRETL.main(MobistatPSRETL.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:552)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
... 32 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:781)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:326)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:195)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:679)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5995)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:203)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 37 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:281)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:239)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:292)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1069)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:359)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:768)
... 66 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("org.apache.derby.jdbc.EmbeddedDriver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:237)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:110)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:82)
... 84 more
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("org.apache.derby.jdbc.EmbeddedDriver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.datasource.AbstractDataSourceFactory.loadDriver(AbstractDataSourceFactory.java:58)
at org.datanucleus.store.rdbms.datasource.BoneCPDataSourceFactory.makePooledDataSource(BoneCPDataSourceFactory.java:61)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:217)
... 86 more
17/02/21 16:16:05 ERROR ApplicationMaster: User class threw exception: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:556)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:204)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:220)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:210)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:442)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:272)
at org.apache.spark.sql.SQLContext$$anonfun$4.apply(SQLContext.scala:271)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:271)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at MobistatPSRETL$.main(MobistatPSRETL.scala:90)
at MobistatPSRETL.main(MobistatPSRETL.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:552)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:214)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:332)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:293)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:268)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:529)
... 21 more
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3230)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3249)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3474)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:225)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:209)
... 25 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
... 32 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:781)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:326)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:195)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:679)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5995)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:203)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 37 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:281)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:239)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:292)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1069)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:359)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:768)
... 66 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("org.apache.derby.jdbc.EmbeddedDriver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:237)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:110)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:82)
... 84 more
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("org.apache.derby.jdbc.EmbeddedDriver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.datasource.AbstractDataSourceFactory.loadDriver(AbstractDataSourceFactory.java:58)
at org.datanucleus.store.rdbms.datasource.BoneCPDataSourceFactory.makePooledDataSource(BoneCPDataSourceFactory.java:61)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:217)
... 86 more
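For comparison, the direct spark-submit invocation that works outside Oozie looks roughly like this; the jar name and hive-site.xml path are placeholders, and the class name is from the stack trace above.

# Works when launched directly; the same jar fails under the Oozie Spark action.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class MobistatPSRETL \
  --files hdfs:///user/etl/conf/hive-site.xml \
  mobistat-psr-etl.jar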
Labels:
- Apache Hive
- Apache Oozie
- Apache Spark
02-08-2017
07:31 AM
We got the ODBC connection working with Kerberos. However, JDBC has issues identifying the Kerberos principal. We thought about investigating the JDBC connector source code, but other issues took priority. I have heard from other big data engineers through meetups that JDBC only works with username/password, for which we would need to set up LDAP authentication for Hive and Impala. Before that, we would need to set up cross-realm trust between LDAP/Active Directory and Kerberos, which proves to be a pain by itself.
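For anyone who wants to reproduce the JDBC side, the minimal test we used is sketched below, assuming the Apache Hive JDBC driver against impalad's HiveServer2-compatible port (21050). Host, realm, and principal are placeholders, and a valid Kerberos ticket (kinit) is assumed in the calling environment.

import java.sql.Connection;
import java.sql.DriverManager;

public class KerberosJdbcTest {
    public static void main(String[] args) throws Exception {
        // The ;principal= part tells the driver to authenticate with Kerberos
        // against the service principal of the impalad we connect to.
        String url = "jdbc:hive2://impalad-host.example.com:21050/default;"
                   + "principal=impala/impalad-host.example.com@EXAMPLE.COM";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}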
01-18-2017
07:06 AM
Thanks! We wrote a UDF to handle this date conversion, and it worked out well. Thanks, Krishna
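For anyone facing the same conversion, a minimal sketch of such a UDF is below (classic Hive-style Java UDF, which Impala can also register). The class name, input pattern, and NULL-on-parse-failure behavior are illustrative, not our exact production code.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.hadoop.hive.ql.exec.UDF;

public class ToTimestamp extends UDF {
    // Patterns are illustrative; SimpleDateFormat is not thread-safe,
    // so keep the formats as instance fields rather than statics.
    private final SimpleDateFormat in  = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
    private final SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public String evaluate(String raw) {
        if (raw == null) return null;
        try {
            // Normalize to the default format that unix_timestamp() accepts.
            return out.format(in.parse(raw));
        } catch (ParseException e) {
            return null; // map unparseable input to NULL, like the built-ins do
        }
    }
}

It gets registered in Impala with something along the lines of create function udfs.totimestamp(string) returns string location 'hdfs:///path/to/udfs.jar' symbol='ToTimestamp'; (the jar path is a placeholder).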