Member since: 12-06-2016
Posts: 136
Kudos Received: 12
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1471 | 01-18-2018 12:56 PM |
04-04-2019
04:03 PM
Hi all! I am also getting this error.
... View more
10-12-2018
01:39 PM
Hi all!
We are running PySpark code with the NiFi ExecuteSparkInteractive processor, but in YARN we see a huge number of generated livy-session applications (about 103024).
Why does this happen?
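For diagnosis, here is a minimal sketch (the Livy host/port is a placeholder) that lists the sessions through the standard Livy REST API (GET /sessions) and deletes the ones that are no longer working, so you can see which states the livy-session-* applications are stuck in:

import requests

LIVY_URL = "http://livy-host:8998"   # placeholder; point this at your Livy server

# List the sessions Livy knows about and their states.
resp = requests.get(f"{LIVY_URL}/sessions", params={"size": 1000})
resp.raise_for_status()
sessions = resp.json().get("sessions", [])
print("total sessions reported:", resp.json().get("total"))

# Clean up sessions that are only sitting idle.
for s in sessions:
    if s.get("state") == "idle":
        print(f"deleting idle session {s['id']}")
        requests.delete(f"{LIVY_URL}/sessions/{s['id']}")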
... View more
Labels:
- Apache NiFi
- Apache Spark
- Apache YARN
10-04-2018
06:44 AM
Hi! This was an Express Upgrade with "Skip all Service Check failures" selected. Documentation: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/upgrade-hdf.html
... View more
10-03-2018
04:40 PM
Hi! Thank you, I will check it and report back with an answer.
... View more
10-03-2018
03:08 PM
Hi! I have a problem during the NiFi upgrade process:
[root@serv12 ~]# grep -i Ranger /var/lib/ambari-agent/data/output-869.txt
2018-10-03 17:49:28,216 - Ranger admin not installed
... View more
Labels:
- Apache NiFi
10-03-2018
02:34 PM
Hi! I have the same problem, but Ranger is not installed:
[root@serv12 ~]# grep -i Ranger /var/lib/ambari-agent/data/output-869.txt
2018-10-03 17:49:28,216 - Ranger admin not installed
... View more
09-25-2018
11:32 AM
How should I respond to these GC failure notifications? Logs for container_e152_1537807360391_1057_01_000081:
2018-09-25 12:22:15 Starting to run new task attempt: attempt_1537807360391_1057_2_04_000182_0
1.148: [GC (Allocation Failure) [PSYoungGen: 134807K->24906K(204288K)] 134807K->24994K(2068480K), 0.0255463 secs] [Times: user=0.21 sys=0.02, real=0.03 secs]
1.411: [GC (Metadata GC Threshold) [PSYoungGen: 74769K->22331K(379904K)] 74857K->22427K(2244096K), 0.0233332 secs] [Times: user=0.12 sys=0.03, real=0.02 secs]
1.435: [Full GC (Metadata GC Threshold) [PSYoungGen: 22331K->0K(379904K)] [ParOldGen: 96K->21509K(264192K)] 22427K->21509K(644096K), [Metaspace: 20929K->20929K(1069056K)], 0.0289288 secs] [Times: user=0.32 sys=0.03, real=0.03 secs]
6.755: [GC (Allocation Failure) [PSYoungGen: 237172K->28660K(379904K)] 258681K->95006K(644096K), 0.0204362 secs] [Times: user=0.08 sys=0.24, real=0.02 secs]
15.663: [GC (Allocation Failure) [PSYoungGen: 213057K->28659K(547328K)] 279403K->193109K(811520K), 0.0390040 secs] [Times: user=0.14 sys=0.50, real=0.04 secs]
15.702: [Full GC (Ergonomics) [PSYoungGen: 28659K->0K(547328K)] [ParOldGen: 164449K->172137K(616960K)] 193109K->172137K(1164288K), [Metaspace: 32575K->32575K(1079296K)], 0.0628115 secs] [Times: user=0.96 sys=0.16, real=0.06 secs]
33.575: [GC (Allocation Failure) [PSYoungGen: 280808K->28661K(569344K)] 5743954K->5559068K(6478848K), 0.0278063 secs] [Times: user=0.21 sys=0.32, real=0.03 secs]
34.210: [GC (Allocation Failure) [PSYoungGen: 490957K->82416K(791552K)] 6021364K->5612831K(6701056K), 0.0320689 secs] [Times: user=0.57 sys=0.16, real=0.03 secs]
34.243: [Full GC (Ergonomics) [PSYoungGen: 82416K->0K(791552K)] [ParOldGen: 5530414K->5603149K(6443008K)] 5612831K->5603149K(7234560K), [Metaspace: 33968K->33968K(1079296K)], 0.9965374 secs] [Times: user=26.00 sys=0.29, real=0.99 secs]
36.154: [GC (Allocation Failure) [PSYoungGen: 459509K->36576K(683520K)] 6062659K->5639733K(7126528K), 0.0232837 secs] [Times: user=0.69 sys=0.02, real=0.02 secs]
36.746: [GC (Allocation Failure) [PSYoungGen: 466467K->13984K(783360K)] 6069625K->5617149K(7226368K), 0.0202414 secs] [Times: user=0.54 sys=0.01, real=0.02 secs]
37.123: [GC (Allocation Failure) [PSYoungGen: 321932K->14048K(784896K)] 5925097K->5617213K(7227904K), 0.0258958 secs] [Times: user=0.69 sys=0.01, real=0.02 secs]
37.723: [GC (Allocation Failure) [PSYoungGen: 566423K->15086K(787968K)] 6169588K->5618252K(7230976K), 0.0238336 secs] [Times: user=0.58 sys=0.00, real=0.02 secs]
38.036: [GC (Allocation Failure) [PSYoungGen: 259899K->14478K(790016K)] 5863064K->5617644K(7233024K), 0.0251177 secs] [Times: user=0.55 sys=0.01, real=0.03 secs]
38.293: [GC (Allocation Failure) [PSYoungGen: 222330K->1806K(792576K)] 5825496K->5617860K(7235584K), 0.0262013 secs] [Times: user=0.57 sys=0.02, real=0.02 secs]
38.624: [GC (Allocation Failure) [PSYoungGen: 270228K->1248K(793088K)] 5886282K->5617856K(7236096K), 0.0277400 secs] [Times: user=0.56 sys=0.00, real=0.03 secs]
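For what it is worth, a small sketch that summarizes the pauses from log lines in the format shown above (the path /tmp/container_gc.log is a placeholder for wherever you save the container log); it just counts events and totals the real pause seconds per GC cause:

import re
from collections import defaultdict

# Matches lines like: "1.148: [GC (Allocation Failure) ... ] [Times: ... real=0.03 secs]"
pattern = re.compile(r"\[(GC|Full GC) \(([^)]+)\).*?real=([0-9.]+) secs\]")

totals = defaultdict(lambda: [0, 0.0])   # cause -> [event count, total real seconds]
with open("/tmp/container_gc.log") as f:  # placeholder path to the saved container log
    for line in f:
        m = pattern.search(line)
        if m:
            cause = f"{m.group(1)} ({m.group(2)})"
            totals[cause][0] += 1
            totals[cause][1] += float(m.group(3))

for cause, (count, secs) in sorted(totals.items()):
    print(f"{cause}: {count} events, {secs:.2f}s real pause time")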
... View more
Labels:
- Apache Hive
07-31-2018
10:39 AM
Hi! Try one of these changes:

Zeppelin | NIFI ExecuteSparkInteractive
---|---
preds = spark.sql(‘select * from sandbox.CHURN_PRP_D_PREDS_S ’) | preds = spark.sql("select * from sandbox.CHURN_PRP_D_PREDS_S")
dir = ‘/user/CHURN_PRP_D’ | dir = "hdfs:/user/CHURN_PRP_D"
date_df = spark.sql("select to_date('{}') as score_date".format(date_calc)) | ttt = spark.createDataFrame([(date_calc,)], ["t"]); date_df = ttt.select(to_date(ttt.t))
sqlContext.registerDataFrameAsTable(train, "train") | train.registerTempTable("train")
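Putting the right-hand column together, a sketch of how the same steps might look when submitted through ExecuteSparkInteractive/Livy. The table and path names are only the examples from above, spark is assumed to be the session Livy provides, and date_calc/train are placeholders for the real values in the flow:

from pyspark.sql.functions import to_date

# Plain ASCII quotes only; curly quotes pasted from Zeppelin break the code in Livy.
preds = spark.sql("select * from sandbox.CHURN_PRP_D_PREDS_S")

# Fully qualified HDFS path instead of a bare path.
dir = "hdfs:/user/CHURN_PRP_D"

# Build score_date from a Python value instead of formatting it into SQL text.
date_calc = "2018-07-31"                         # placeholder date
ttt = spark.createDataFrame([(date_calc,)], ["t"])
date_df = ttt.select(to_date(ttt.t).alias("score_date"))

# registerTempTable on the DataFrame instead of sqlContext.registerDataFrameAsTable.
train = preds                                    # placeholder for the real training DataFrame
train.registerTempTable("train")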
... View more
06-21-2018
07:23 AM
Thank you!!!
In my case I need to route "CSV" content, for example:
Id|date_col|name
1|2018:06:21 08:40:00|Ukraine
2|2018:06:21 08:15:00|USA
If date_col is less than 25 minutes older than now(), route to rule1.
If date_col is 25 minutes or more older than now(), route to rule2.
now() = 2018:06:21 08:50:00
Result:
2018:06:21 08:40:00 -> rule1
2018:06:21 08:15:00 -> rule2
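The intended rule, written out in plain Python just to make the comparison explicit (the timestamp format and the fixed now() value are taken from the example above; in NiFi itself this would typically be done with Expression Language, as discussed below, or a scripted processor):

from datetime import datetime, timedelta

now = datetime.strptime("2018:06:21 08:50:00", "%Y:%m:%d %H:%M:%S")  # fixed now() from the example

for line in ["1|2018:06:21 08:40:00|Ukraine", "2|2018:06:21 08:15:00|USA"]:
    _id, date_col, name = line.split("|")
    ts = datetime.strptime(date_col, "%Y:%m:%d %H:%M:%S")
    # Rows newer than 25 minutes go to rule1, the rest to rule2.
    rule = "rule1" if now - ts < timedelta(minutes=25) else "rule2"
    print(date_col, "->", rule)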
... View more
06-20-2018
07:41 PM
I mean applying Expression Language to the content: now(), gt, ge.
... View more
Labels:
- Apache NiFi
06-11-2018
01:45 PM
Hi! One more question: how can I control the count of "livy-session-" applications?
... View more
06-11-2018
11:46 AM
The problem was in "sqlContext.registerDataFrameAsTable". How can I investigate similar (HTTP) errors?
... View more
06-06-2018
03:48 PM
Do you have an example of running the Zeppelin REST API?
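Not an official answer, but a minimal sketch of calling the Zeppelin notebook REST API with Python requests (the host, port and note id are placeholders); Zeppelin exposes GET /api/notebook to list notes and POST /api/notebook/job/{noteId} to run all paragraphs of a note:

import requests

ZEPPELIN_URL = "http://zeppelin-host:9995"    # placeholder Zeppelin address

# List all notebooks (id and name).
notes = requests.get(f"{ZEPPELIN_URL}/api/notebook").json()
for note in notes.get("body", []):
    print(note["id"], note.get("name"))

# Run all paragraphs of one notebook asynchronously.
note_id = "2DEMO1234"                         # placeholder note id taken from the listing above
resp = requests.post(f"{ZEPPELIN_URL}/api/notebook/job/{note_id}")
print(resp.status_code, resp.text)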
... View more
06-06-2018
02:00 PM
Labels:
- Apache NiFi
- Apache Zeppelin
06-06-2018
12:38 PM
CODE: executesparkinteractive-code.txt
ERROR:
2018-06-06 15:32:13,566 ERROR [Timer-Driven Process Thread-5] o.a.n.p.livy.ExecuteSparkInteractive ExecuteSparkInteractive[id=aeb74038-5333-13d2-0000-00001ea7e32e] ExecuteSparkInteractive[id=aeb74038-5333-13d2-0000-00001ea7e32e] failed to process session due to java.lang.RuntimeException: Failed : HTTP error code : 400 : Bad Request: {}
java.lang.RuntimeException: Failed : HTTP error code : 400 : Bad Request
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.readJSONObjectFromUrlPOST(ExecuteSparkInteractive.java:282)
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.submitAndHandleJob(ExecuteSparkInteractive.java:234)
at org.apache.nifi.processors.livy.ExecuteSparkInteractive.onTrigger(ExecuteSparkInteractive.java:197)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
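To narrow down whether the 400 comes from Livy itself or from the processor, here is a sketch that submits the same code straight to Livy with Python requests (host and code are placeholders); POST /sessions creates a session and POST /sessions/{id}/statements submits code, both standard Livy REST endpoints:

import json, time
import requests

LIVY_URL = "http://livy-host:8998"            # placeholder Livy address
headers = {"Content-Type": "application/json"}

# Create a PySpark session, the same kind ExecuteSparkInteractive uses.
sess = requests.post(f"{LIVY_URL}/sessions", headers=headers,
                     data=json.dumps({"kind": "pyspark"})).json()
sid = sess["id"]

# Wait until the session is idle before submitting code.
while requests.get(f"{LIVY_URL}/sessions/{sid}").json()["state"] != "idle":
    time.sleep(2)

# Submit the flowfile body as a statement; comparing this response with the
# processor error helps show whether Livy rejects the request body itself.
code = "print(spark.version)"                 # placeholder for the real code
stmt = requests.post(f"{LIVY_URL}/sessions/{sid}/statements", headers=headers,
                     data=json.dumps({"code": code}))
print(stmt.status_code, stmt.text)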
... View more
Labels:
- Apache NiFi
- Apache Spark
05-31-2018
08:03 PM
1 Kudo
Thank you! But what should I do with a processor, for example FetchFTP, when after importing a change from DEV to PROD (via version control) the Password property shows "No value set"? I can add the password as a variable (on the process group), but that is not secure.
... View more
05-31-2018
09:02 AM
This is needed so the flow works correctly across two environments, for example dev and prod. NiFi 1.5.
... View more
Labels:
- Apache NiFi
04-23-2018
09:57 AM
---------------------------
ACID Transactions - ON
hive.support.concurrency=true
-------------------------------------
Table ODS_C1.CALL_HISTORYSS is not transactional.
Data is being inserted into table ODS_C1.CALL_HISTORYSS, but not into partition hday='2017-01-28'.
Executing command: select count(*) from ODS_C1.CALL_HISTORYSS where hday='2017-01-28'; <<<<<<<<<<<<< problem select
Error: Error while processing statement: FAILED: Error in acquiring locks: Lock acquisition for LockRequest(component:[LockComponent(type:SHARED_READ, level:TABLE, dbname:ods_c1, tablename:call_historyss, operationType:SELECT)], txnid:0, user:hive, hostname:ks-dmp01.kyivstar.ua, agentInfo:hive_20180423093534_533cc06f-dd4d-4bea-987b-af0e9c5ed468) timed out after 5515859ms. LockResponse(lockid:15373078, state:WAITING) (state=42000,code=10)
java.sql.SQLException: Error while processing statement: FAILED: Error in acquiring locks: Lock acquisition for LockRequest(component:[LockComponent(type:SHARED_READ, level:TABLE,
dbname:ods_c1, tablename:call_historyss, operationType:SELECT)], txnid:0, user:hive, hostname:ks-dmp01.kyivstar.ua, agentInfo:hive_20180423093534_533cc06f-dd4d-4bea-987b-af0e9c5ed468)
timed out after 5515859ms. LockResponse(lockid:15373078, state:WAITING)
at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:354)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:245)
at org.apache.hive.beeline.Commands.execute(Commands.java:859)
at org.apache.hive.beeline.Commands.sql(Commands.java:729)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1000)
at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:730)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:779)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
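One way to see which transaction is holding the lock is to ask Hive directly. A sketch that shells out to beeline (the JDBC URL is a placeholder) and runs SHOW LOCKS for the table from the error above:

import subprocess

JDBC_URL = "jdbc:hive2://hiveserver-host:10000/default"   # placeholder HiveServer2 URL

# SHOW LOCKS ... EXTENDED lists the current lock holders and waiters for the table,
# which usually points at the query or compaction blocking the SHARED_READ lock.
subprocess.run([
    "beeline", "-u", JDBC_URL, "-n", "hive",
    "-e", "SHOW LOCKS ods_c1.call_historyss EXTENDED;"
], check=True)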
... View more
Labels:
- Apache Hive
03-20-2018
03:56 PM
load-gc.png hang-reducer-task.png In the container log I see many GC notifications.
... View more
Labels:
- Apache Hive
03-16-2018
04:00 PM
Hi! I found these tables! Thank you!
... View more
03-15-2018
02:42 PM
Thank you! But I do not see these tables in the Postgres database:
postgres=# \connect ambari
You are now connected to database "ambari" as user "postgres".
ambari=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+------------------------
ambari | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | ambari=CTc/postgres
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
ranger | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | rangerdba=CTc/postgres
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
ambari=# \dt[S+]
List of relations
Schema | Name | Type | Owner | Size | Description
------------+-------------------------+-------+----------+------------+-------------
pg_catalog | pg_aggregate | table | postgres | 40 kB |
pg_catalog | pg_am | table | postgres | 40 kB |
pg_catalog | pg_amop | table | postgres | 64 kB |
pg_catalog | pg_amproc | table | postgres | 56 kB |
pg_catalog | pg_attrdef | table | postgres | 64 kB |
pg_catalog | pg_attribute | table | postgres | 784 kB |
pg_catalog | pg_auth_members | table | postgres | 0 bytes |
pg_catalog | pg_authid | table | postgres | 40 kB |
pg_catalog | pg_cast | table | postgres | 48 kB |
pg_catalog | pg_class | table | postgres | 192 kB |
pg_catalog | pg_collation | table | postgres | 256 kB |
pg_catalog | pg_constraint | table | postgres | 120 kB |
pg_catalog | pg_conversion | table | postgres | 56 kB |
pg_catalog | pg_database | table | postgres | 8192 bytes |
pg_catalog | pg_db_role_setting | table | postgres | 16 kB |
pg_catalog | pg_default_acl | table | postgres | 0 bytes |
pg_catalog | pg_depend | table | postgres | 560 kB |
pg_catalog | pg_description | table | postgres | 280 kB |
pg_catalog | pg_enum | table | postgres | 0 bytes |
pg_catalog | pg_extension | table | postgres | 40 kB |
pg_catalog | pg_foreign_data_wrapper | table | postgres | 0 bytes |
pg_catalog | pg_foreign_server | table | postgres | 0 bytes |
pg_catalog | pg_foreign_table | table | postgres | 0 bytes |
pg_catalog | pg_index | table | postgres | 104 kB |
pg_catalog | pg_inherits | table | postgres | 0 bytes |
pg_catalog | pg_language | table | postgres | 40 kB |
pg_catalog | pg_largeobject | table | postgres | 0 bytes |
pg_catalog | pg_largeobject_metadata | table | postgres | 0 bytes |
pg_catalog | pg_namespace | table | postgres | 40 kB |
pg_catalog | pg_opclass | table | postgres | 48 kB |
pg_catalog | pg_operator | table | postgres | 144 kB |
pg_catalog | pg_opfamily | table | postgres | 48 kB |
pg_catalog | pg_pltemplate | table | postgres | 40 kB |
pg_catalog | pg_proc | table | postgres | 536 kB |
pg_catalog | pg_range | table | postgres | 40 kB |
pg_catalog | pg_rewrite | table | postgres | 496 kB |
pg_catalog | pg_seclabel | table | postgres | 8192 bytes |
pg_catalog | pg_shdepend | table | postgres | 48 kB |
pg_catalog | pg_shdescription | table | postgres | 48 kB |
pg_catalog | pg_shseclabel | table | postgres | 0 bytes |
pg_catalog | pg_statistic | table | postgres | 872 kB |
pg_catalog | pg_tablespace | table | postgres | 40 kB |
pg_catalog | pg_trigger | table | postgres | 160 kB |
pg_catalog | pg_ts_config | table | postgres | 40 kB |
pg_catalog | pg_ts_config_map | table | postgres | 48 kB |
pg_catalog | pg_ts_dict | table | postgres | 40 kB |
pg_catalog | pg_ts_parser | table | postgres | 40 kB |
pg_catalog | pg_ts_template | table | postgres | 40 kB |
pg_catalog | pg_type | table | postgres | 160 kB |
pg_catalog | pg_user_mapping | table | postgres | 0 bytes |
(50 rows)
ambari=#
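A guess as to why the listing above only shows pg_catalog tables: the Ambari tables usually live in a separate ambari schema that is not on the postgres user's search_path. A sketch that lists tables by schema with psycopg2 (the connection settings are placeholders, and the "ambari" schema name is only the usual default, worth verifying):

import psycopg2

# Placeholder connection settings for the Ambari database.
conn = psycopg2.connect(host="localhost", dbname="ambari",
                        user="ambari", password="bigdata")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
        ORDER BY table_schema, table_name
    """)
    for schema, table in cur.fetchall():
        print(f"{schema}.{table}")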
... View more
03-15-2018
11:29 AM
In my case, Ambari is installed on a Postgres database.
... View more
Labels:
- Apache Ambari
03-11-2018
07:14 PM
Hi! Thank you, it works. One more question: how do I add a user mapping to a "sub-queue"? For example, a mapping like this is not working: yarn.scheduler.capacity.queue-mappings=u:airflow:aggregate.airflow
My current capacity-scheduler configuration:
yarn.scheduler.capacity.maximum-am-resource-percent=0.6
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=10
yarn.scheduler.capacity.root.default.maximum-capacity=40
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=0.7
yarn.scheduler.capacity.root.queues=aggregate,default,llap,load,reports
yarn.scheduler.capacity.root.aggregate.queues=airflow <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
yarn.scheduler.capacity.queue-mappings=u:airflow:aggregate.airflow <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<,
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.aggregate.acl_submit_applications=*
yarn.scheduler.capacity.root.aggregate.capacity=35
yarn.scheduler.capacity.root.aggregate.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.aggregate.maximum-applications=20000
yarn.scheduler.capacity.root.aggregate.maximum-capacity=60
yarn.scheduler.capacity.root.aggregate.minimum-user-limit-percent=10
yarn.scheduler.capacity.root.aggregate.state=RUNNING
yarn.scheduler.capacity.root.aggregate.user-limit-factor=0.7
yarn.scheduler.capacity.root.aggregate.airflow.acl_submit_applications=*
yarn.scheduler.capacity.root.aggregate.airflow.capacity=100
yarn.scheduler.capacity.root.aggregate.airflow.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.aggregate.airflow.maximum-applications=20000
yarn.scheduler.capacity.root.aggregate.airflow.maximum-capacity=100
yarn.scheduler.capacity.root.aggregate.airflow.minimum-user-limit-percent=10
yarn.scheduler.capacity.root.aggregate.airflow.state=RUNNING
yarn.scheduler.capacity.root.aggregate.airflow.user-limit-factor=0.5
yarn.scheduler.capacity.root.default.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.default.maximum-applications=20000
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=10
yarn.scheduler.capacity.root.default.priority=0
yarn.scheduler.capacity.root.llap.acl_administer_queue=*
yarn.scheduler.capacity.root.llap.acl_submit_applications=*
yarn.scheduler.capacity.root.llap.capacity=0
yarn.scheduler.capacity.root.llap.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.llap.maximum-capacity=0
yarn.scheduler.capacity.root.llap.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.llap.ordering-policy=fifo
yarn.scheduler.capacity.root.llap.priority=0
yarn.scheduler.capacity.root.llap.state=STOPPED
yarn.scheduler.capacity.root.llap.user-limit-factor=1
yarn.scheduler.capacity.root.load.acl_submit_applications=*
yarn.scheduler.capacity.root.load.capacity=45
yarn.scheduler.capacity.root.load.maximum-am-resource-percent=1
yarn.scheduler.capacity.root.load.maximum-applications=20000
yarn.scheduler.capacity.root.load.maximum-capacity=80
yarn.scheduler.capacity.root.load.minimum-user-limit-percent=10
yarn.scheduler.capacity.root.load.ordering-policy=fair
yarn.scheduler.capacity.root.load.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.load.priority=1
yarn.scheduler.capacity.root.load.state=RUNNING
yarn.scheduler.capacity.root.load.user-limit-factor=0.7
yarn.scheduler.capacity.root.ordering-policy=priority-utilization
yarn.scheduler.capacity.root.priority=0
yarn.scheduler.capacity.root.reports.acl_submit_applications=*
yarn.scheduler.capacity.root.reports.capacity=10
yarn.scheduler.capacity.root.reports.maximum-am-resource-percent=0.4
yarn.scheduler.capacity.root.reports.maximum-capacity=10
yarn.scheduler.capacity.root.reports.minimum-user-limit-percent=5
yarn.scheduler.capacity.root.reports.priority=0
yarn.scheduler.capacity.root.reports.state=RUNNING
yarn.scheduler.capacity.root.reports.user-limit-factor=0.5
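Not a definitive answer, but two things that may help. First, on many Hadoop 2.x releases the queue-mappings target is expected to be the leaf queue name rather than a parent.leaf path (i.e. u:airflow:airflow instead of u:airflow:aggregate.airflow); treat that as an assumption to verify against the docs for your HDP version. Second, you can check how the ResourceManager actually resolved the hierarchy via its scheduler REST endpoint; a sketch with Python requests (the RM address is a placeholder) that walks GET /ws/v1/cluster/scheduler and prints the leaf queues a mapping can target:

import requests

RM_URL = "http://resourcemanager-host:8088"   # placeholder ResourceManager address

# The scheduler endpoint returns the full CapacityScheduler queue tree.
info = requests.get(f"{RM_URL}/ws/v1/cluster/scheduler").json()
root = info["scheduler"]["schedulerInfo"]

def walk(queue, path):
    name = queue.get("queueName", "root")
    full = f"{path}.{name}" if path else name
    children = (queue.get("queues") or {}).get("queue", [])
    if children:
        for child in children:
            walk(child, full)
    else:
        print("leaf queue:", full, "state:", queue.get("state"))

walk(root, "")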
... View more
03-07-2018
04:43 PM
In the log I see the following errors:
[yarn@ks-dmp03 ~]$ grep -i ERROR /tmp/application_1518080001967_51536.log | grep -v HADOOP_ROOT_LOGGER | grep -v Dtez.root.logger | awk '{print $1 $3 $4 $5 $6 $7 $8 $9 $10 $11 $12 $13 $14 $15 $16}' | sort -u
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000034_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000035_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000036_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000037_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000038_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000039_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000040_0accordingtoscheduler.Taskreported
******************************
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000107_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000108_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000109_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000110_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000111_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000112_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000114_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000115_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000116_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000117_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000118_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000119_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000120_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000121_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000122_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TaskSchedulerEventHandlerThread]|rm.TaskSchedulerEventHandler|:Nocontainerallocatedtotask:attempt_1518080001967_51536_3_02_000124_0accordingtoscheduler.Taskreported
2018-03-07[ERROR][TezChild]|tez.ReduceRecordProcessor|:Hiterrorwhileclosingoperators-failingtree
2018-03-07[ERROR][TezChild]|tez.TezProcessor|:java.lang.InterruptedException
[yarn@ks-dmp03 ~]$
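To see why the scheduler had no container for these attempts, it may also help to look at the application report itself. A sketch that pulls the app's resource usage and diagnostics from the ResourceManager REST API (the RM address is a placeholder; the application id is the one from the log above):

import requests

RM_URL = "http://resourcemanager-host:8088"            # placeholder ResourceManager address
APP_ID = "application_1518080001967_51536"             # application id from the log above

app = requests.get(f"{RM_URL}/ws/v1/cluster/apps/{APP_ID}").json()["app"]
# Allocated vs. running containers and the diagnostics string show whether
# the queue was simply out of capacity when the attempts were reported.
for key in ("state", "finalStatus", "queue", "allocatedMB", "allocatedVCores",
            "runningContainers", "diagnostics"):
    print(key, "=", app.get(key))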
... View more
03-07-2018
04:31 PM
Vertex dependency in root stage
Map 1 <- Map 3 (BROADCAST_EDGE)
Reducer 2 <- Map 1 (SIMPLE_EDGE)
Stage-3
Stats-Aggr Operator
Stage-0
Move Operator
partition:{"hday":"2018-03-06","src":"OTHER_DETAIL"}
table:{"name:":"agg_dpi.agg_dpi_traffic_daily_l","input format:":"org.apache.hadoop.hive.ql.io.orc.OrcInputFormat","output format:":"org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat","serde:":"org.apache.hadoop.hive.ql.io.orc.OrcSerde"}
Stage-2
Dependency Collection{}
Stage-1
Reducer 2
File Output Operator [FS_15]
compressed:true
Statistics:Num rows: 419376708 Data size: 435732400043 Basic stats: COMPLETE Column stats: NONE
table:{"name:":"agg_dpi.agg_dpi_traffic_daily_l","input format:":"org.apache.hadoop.hive.ql.io.orc.OrcInputFormat","output format:":"org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat","serde:":"org.apache.hadoop.hive.ql.io.orc.OrcSerde"}
Select Operator [SEL_13]
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17","_col18"]
Statistics:Num rows: 419376708 Data size: 435732400043 Basic stats: COMPLETE Column stats: NONE
Group By Operator [GBY_12]
| aggregations:["sum(VALUE._col0)","sum(VALUE._col1)","sum(VALUE._col2)","sum(VALUE._col3)","sum(VALUE._col4)","sum(VALUE._col5)","sum(VALUE._col6)","sum(VALUE._col7)","count(DISTINCT KEY._col5:0._col0)","count(DISTINCT KEY._col5:1._col0)","count(DISTINCT KEY._col5:2._col0)","count(DISTINCT KEY._col5:3._col0)"]
| keys:KEY._col0 (type: varchar(16)), KEY._col1 (type: decimal(10,0)), KEY._col2 (type: decimal(10,0)), KEY._col3 (type: decimal(1,0)), KEY._col4 (type: string)
| outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16"]
| Statistics:Num rows: 419376708 Data size: 435732400043 Basic stats: COMPLETE Column stats: NONE
|<-Map 1 [SIMPLE_EDGE]
Reduce Output Operator [RS_11]
key expressions:_col0 (type: varchar(16)), _col1 (type: decimal(10,0)), _col2 (type: decimal(10,0)), _col3 (type: decimal(1,0)), _col4 (type: string), _col5 (type: decimal(20,0)), _col6 (type: decimal(20,0)), _col7 (type: decimal(20,0)), _col8 (type: decimal(20,0))
Map-reduce partition columns:_col0 (type: varchar(16)), _col1 (type: decimal(10,0)), _col2 (type: decimal(10,0)), _col3 (type: decimal(1,0)), _col4 (type: string)
sort order:+++++++++
Statistics:Num rows: 838753416 Data size: 871464800087 Basic stats: COMPLETE Column stats: NONE
value expressions:_col9 (type: decimal(31,0)), _col10 (type: decimal(31,0)), _col11 (type: decimal(31,0)), _col12 (type: decimal(31,0)), _col13 (type: decimal(21,0)), _col14 (type: decimal(21,0)), _col15 (type: decimal(21,0)), _col16 (type: decimal(21,0))
Group By Operator [GBY_10]
aggregations:["sum(CASE WHEN ((_col5 = 2)) THEN ((_col6 + _col7)) ELSE (0) END)","sum(CASE WHEN ((_col5 = 1)) THEN ((_col6 + _col7)) ELSE (0) END)","sum(CASE WHEN ((_col5 = 0)) THEN ((_col6 + _col7)) ELSE (0) END)","sum((_col6 + _col7))","sum(CASE WHEN ((_col5 = 2)) THEN ((_col8 - _col9)) ELSE (0) END)","sum(CASE WHEN ((_col5 = 1)) THEN ((_col8 - _col9)) ELSE (0) END)","sum(CASE WHEN ((_col5 = 0)) THEN ((_col8 - _col9)) ELSE (0) END)","sum((_col8 - _col9))","count(DISTINCT CASE WHEN ((_col5 = 2)) THEN (_col10) END)","count(DISTINCT CASE WHEN ((_col5 = 1)) THEN (_col10) END)","count(DISTINCT CASE WHEN ((_col5 = 0)) THEN (_col10) END)","count(DISTINCT _col10)"]
keys:_col0 (type: varchar(16)), _col1 (type: decimal(10,0)), _col2 (type: decimal(10,0)), _col4 (type: decimal(1,0)), _col3 (type: string), CASE WHEN ((_col5 = 2)) THEN (_col10) END (type: decimal(20,0)), CASE WHEN ((_col5 = 1)) THEN (_col10) END (type: decimal(20,0)), CASE WHEN ((_col5 = 0)) THEN (_col10) END (type: decimal(20,0)), _col10 (type: decimal(20,0))
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10","_col11","_col12","_col13","_col14","_col15","_col16","_col17","_col18","_col19","_col20"]
Statistics:Num rows: 838753416 Data size: 871464800087 Basic stats: COMPLETE Column stats: NONE
Select Operator [SEL_8]
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10"]
Statistics:Num rows: 838753416 Data size: 871464800087 Basic stats: COMPLETE Column stats: NONE
Map Join Operator [MAPJOIN_19]
| condition map:[{"":"Left Outer Join0 to 1"}]
| HybridGraceHashJoin:true
| keys:{"Map 3":"_col1 (type: string)","Map 1":"_col10 (type: string)"}
| outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col11"]
| Statistics:Num rows: 838753416 Data size: 871464800087 Basic stats: COMPLETE Column stats: NONE
|<-Map 3 [BROADCAST_EDGE] vectorized
| Reduce Output Operator [RS_22]
| key expressions:_col1 (type: string)
| Map-reduce partition columns:_col1 (type: string)
| sort order:+
| Statistics:Num rows: 65114 Data size: 3581293 Basic stats: COMPLETE Column stats: NONE
| value expressions:_col0 (type: varchar(40))
| Select Operator [OP_21]
| outputColumnNames:["_col0","_col1"]
| Statistics:Num rows: 65114 Data size: 3581293 Basic stats: COMPLETE Column stats: NONE
| TableScan [TS_3]
|alias:dim_dpi_ip_service
|Statistics:Num rows: 65114 Data size: 3581293 Basic stats: COMPLETE Column stats: NONE
|<-Select Operator [SEL_2]
outputColumnNames:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col9","_col10"]
Statistics:Num rows: 762503089 Data size: 792240710181 Basic stats: COMPLETE Column stats: NONE
TableScan [TS_0]
ACID table:true
alias:dpi_other_detail_ufdr
Statistics:Num rows: 762503089 Data size: 792240710181 Basic stats: COMPLETE Column stats: NONE
Time taken: 3.479 seconds, Fetched: 70 row(s)
hive>
... View more
03-07-2018
04:30 PM
[yarn@ks-dmp03 ~]$ hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAp
Logging initialized using configuration in file:/etc/hive/2.6.3.0-235/0/hive-log
hive> explain WITH DPI AS (SELECT SERVICE_NAME, trim(IP_ADDRESS) AS IP_ADDR FROM OTHER_SOURCES.DIM_DPI_IP_SERVICE), DSD AS (SELECT MSISDN, NVL(PROT_TYPE,0) AS PROTOCOL_ID, NVL(PROT_CATEGORY,0) AS PROTOCOL_CATEGORY_ID, TETHERING_FLAG, RAT, NETWORK_UL_TRAFFIC, NETWORK_DL_TRAFFIC, END_TIME_N, BEGIN_TIME_N, SID, trim(SERVER_IP) AS SERV_IP FROM OTHER_SOURCES.DPI_OTHER_DETAIL_UFDR WHERE HDAY="2018-03-06"), JN AS (SELECT DSD.MSISDN, DSD.PROTOCOL_ID, DSD.PROTOCOL_CATEGORY_ID, NVL(DPI.SERVICE_NAME,"Other") AS CUSTOMIZED_CATEGORY, DSD.TETHERING_FLAG, DSD.RAT, DSD.NETWORK_UL_TRAFFIC, DSD.NETWORK_DL_TRAFFIC, DSD.END_TIME_N, DSD.BEGIN_TIME_N, DSD.SID FROM DSD LEFT JOIN DPI ON DSD.SERV_IP = DPI.IP_ADDR) INSERT INTO AGG_DPI.AGG_DPI_TRAFFIC_DAILY_l PARTITION(HDAY="2018-03-06",SRC="OTHER_DETAIL") SELECT JN.MSISDN, JN.PROTOCOL_ID, JN.PROTOCOL_CATEGORY_ID, JN.CUSTOMIZED_CATEGORY, JN.TETHERING_FLAG, ROUND(SUM(CASE WHEN JN.RAT = 2 THEN JN.NETWORK_UL_TRAFFIC+JN.NETWORK_DL_TRAFFIC ELSE 0 END)/ 1024, 2) AS DATA_VOL_2G_AMT, ROUND(SUM(CASE WHEN JN.RAT = 1 THEN JN.NETWORK_UL_TRAFFIC+JN.NETWORK_DL_TRAFFIC ELSE 0 END)/ 1024, 2) AS DATA_VOL_3G_AMT, ROUND(SUM(CASE WHEN JN.RAT = 0 THEN JN.NETWORK_UL_TRAFFIC+JN.NETWORK_DL_TRAFFIC ELSE 0 END)/ 1024, 2) AS DATA_VOL_0_AMT, ROUND(SUM(JN.NETWORK_UL_TRAFFIC+JN.NETWORK_DL_TRAFFIC)/ 1024, 2) DATA_VOL_TOTAL_AMT, SUM(CASE WHEN JN.RAT = 2 THEN END_TIME_N - BEGIN_TIME_N ELSE 0 END) DATA_SESSIONS_DUR_2G, SUM(CASE WHEN JN.RAT = 1 THEN END_TIME_N - BEGIN_TIME_N ELSE 0 END) DATA_SESSIONS_DUR_3G, SUM(CASE WHEN JN.RAT = 0 THEN END_TIME_N - BEGIN_TIME_N ELSE 0 END) DATA_SESSIONS_DUR_0, SUM(END_TIME_N - BEGIN_TIME_N) DATA_SESSIONS_DUR, COUNT(DISTINCT CASE WHEN JN.RAT = 2 THEN JN.SID END) AS DATA_SESSIONS_2G_CNT, COUNT(DISTINCT CASE WHEN JN.RAT = 1 THEN JN.SID END) AS DATA_SESSIONS_3G_CNT, COUNT(DISTINCT CASE WHEN JN.RAT = 0 THEN JN.SID END) AS DATA_SESSIONS_0_CNT, COUNT(DISTINCT JN.SID) AS DATA_SESSIONS_CNT, 1 AS LOAD_ID, CURRENT_TIMESTAMP AS INS_DT FROM JN GROUP BY JN.MSISDN, JN.PROTOCOL_ID, JN.PROTOCOL_CATEGORY_ID, JN.TETHERING_FLAG, JN.CUSTOMIZED_CATEGORY;
OK
Plan not optimized by CBO due to missing statistics. Please check log for more details.
... View more
03-07-2018
04:25 PM
The application log is very big:
[yarn@serv03 ~]$ ls -hl /tmp/application_1518080001967_51536.log
-rw-r--r-- 1 yarn hadoop 89M Mar 7 18:10 /tmp/application_1518080001967_51536.log
... View more
03-07-2018
02:04 PM
Problem container log: container-e144-1518080001967-51536-01-000047.txt
... View more
Labels:
- Apache Hive