Member since: 11-12-2015
Posts: 90
Kudos Received: 1
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5735 | 06-09-2017 01:52 PM
 | 13429 | 02-24-2017 02:32 PM
 | 11396 | 11-30-2016 02:48 PM
 | 3830 | 03-02-2016 11:14 AM
 | 4560 | 12-16-2015 07:11 AM
02-24-2017
01:42 PM
This is the result:

/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/parquet/bin/parquet-tools cat /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0
File /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0 does not exist

But the file exists:

sudo -u hdfs hdfs dfs -ls /user/spot/flow/hive/y=2017/m=02/d=24/h=20
Found 12 items
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:05 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:10 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_1
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:55 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_10
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 18:00 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_11
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:15 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_2
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:20 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_3
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:25 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_4
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:30 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_5
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:35 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_6
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:40 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_7
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:45 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_8
-rwxr-xr-x 3 spot supergroup 440 2017-02-24 17:50 /user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0_copy_9
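One thing that may be worth ruling out (an assumption, not a confirmed diagnosis): invoked directly from the parcel's bin directory, parquet-tools may resolve a bare path against the local filesystem rather than HDFS, so "does not exist" can simply be a local stat failing. Passing a full hdfs://... URI, or running the tool via `hadoop jar`, avoids the ambiguity. A trivial Python sketch of why a plain local path check misses HDFS-only files:

```python
import os

# The path from the post above exists in HDFS, but a local filesystem
# lookup (which is what a bare path may fall back to) reports it missing.
p = "/user/spot/flow/hive/y=2017/m=02/d=24/h=20/000000_0"
print(os.path.exists(p))  # False on any machine without that local directory
```
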
02-24-2017
01:08 PM
This is the result:

show partitions spotdb.flow;
OK
y=2017/m=02/d=23/h=20
y=2017/m=02/d=23/h=21
y=2017/m=02/d=23/h=22
y=2017/m=02/d=23/h=23
y=2017/m=02/d=24/h=00
y=2017/m=02/d=24/h=01
y=2017/m=02/d=24/h=02
y=2017/m=02/d=24/h=03
y=2017/m=02/d=24/h=04
y=2017/m=02/d=24/h=05
y=2017/m=02/d=24/h=06
y=2017/m=02/d=24/h=07
y=2017/m=02/d=24/h=08
y=2017/m=02/d=24/h=09
y=2017/m=02/d=24/h=10
y=2017/m=02/d=24/h=11
y=2017/m=02/d=24/h=12
y=2017/m=02/d=24/h=13
y=2017/m=02/d=24/h=14
y=2017/m=02/d=24/h=15
y=2017/m=02/d=24/h=16
y=2017/m=02/d=24/h=17
y=2017/m=02/d=24/h=18
y=2017/m=02/d=24/h=19
y=2017/m=02/d=24/h=20

As I can see, it recognizes the partitions.
02-24-2017
12:53 PM
Hello, I have a table pointing to a location in HDFS, like this:

# col_name data_type comment
treceived string
unix_tstamp bigint
tryear int
trmonth int
trday int
trhour int
trminute int
trsec int
tdur float
sip string
dip string
sport int
dport int
proto string
flag string
fwd int
stos int
ipkt bigint
ibyt bigint
opkt bigint
obyt bigint
input int
output int
sas int
das int
dtos int
dir int
rip string
# Partition Information
# col_name data_type comment
y int
m int
d int
h int
# Detailed Table Information
Database: spotdb
Owner: spot
CreateTime: Thu Feb 23 16:41:20 CLST 2017
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://HDFS-namenode:8020/user/spot/flow/hive
Table Type: EXTERNAL_TABLE
Table Parameters:
EXTERNAL TRUE
avro.schema.literal {\n \"type\": \"record\"\n , \"name\": \"FlowRecord\"\n , \"namespace\" : \"com.cloudera.accelerators.flows.avro\"\n , \"fields\": [\n {\"name\": \"treceived\", \"type\":[\"string\", \"null\"]}\n , {\"name\": \"unix_tstamp\", \"type\":[\"long\", \"null\"]}\n , {\"name\": \"tryear\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"trmonth\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"trday\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"trhour\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"trminute\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"trsec\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"tdur\", \"type\":[\"float\", \"null\"]}\n , {\"name\": \"sip\", \"type\":[\"string\", \"null\"]}\n , {\"name\": \"sport\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"dip\", \"type\":[\"string\", \"null\"]}\n , {\"name\": \"dport\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"proto\", \"type\":[\"string\", \"null\"]}\n , {\"name\": \"flag\", \"type\":[\"string\", \"null\"]}\n , {\"name\": \"fwd\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"stos\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"ipkt\", \"type\":[\"bigint\", \"null\"]}\n , {\"name\": \"ibytt\", \"type\":[\"bigint\", \"null\"]}\n , {\"name\": \"opkt\", \"type\":[\"bigint\", \"null\"]}\n , {\"name\": \"obyt\", \"type\":[\"bigint\", \"null\"]}\n , {\"name\": \"input\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"output\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"sas\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"das\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"dtos\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"dir\", \"type\":[\"int\", \"null\"]}\n , {\"name\": \"rip\", \"type\":[\"string\", \"null\"]}\n ]\n}
transient_lastDdlTime 1487878880
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim ,
serialization.format ,

But when I select from that table, it says it's empty. I checked that HDFS location and it has Parquet files:

sudo -u hdfs hdfs dfs -ls /user/spot/flow/hive/y=2017/m=02/d=23/h=23
Found 12 items
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:05 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:10 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_1
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:55 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_10
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 21:00 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_11
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:15 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_2
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:20 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_3
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:25 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_4
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:30 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_5
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:35 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_6
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:40 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_7
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:45 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_8
-rwxr-xr-x 3 spot supergroup 440 2017-02-23 20:50 /user/spot/flow/hive/y=2017/m=02/d=23/h=23/000000_0_copy_9

What did I do wrong?

Regards,
Joaquín Silva
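One detail worth double-checking in the table metadata above: Avro's primitive type names are null, boolean, int, long, float, double, bytes and string, so "bigint" in the avro.schema.literal is not a valid Avro type name (Hive's BIGINT column type maps to Avro long). A minimal Python sketch, using a trimmed and hypothetical excerpt of that schema literal, to lint field types:

```python
import json

# Valid Avro primitive type names, per the Avro specification.
AVRO_PRIMITIVES = {"null", "boolean", "int", "long", "float",
                   "double", "bytes", "string"}

# Trimmed, hypothetical excerpt of the avro.schema.literal above.
schema = json.loads("""
{"type": "record", "name": "FlowRecord", "fields": [
  {"name": "treceived", "type": ["string", "null"]},
  {"name": "ipkt",      "type": ["bigint", "null"]}
]}
""")

# Collect fields whose union contains a name that is not an Avro primitive.
bad = [f["name"] for f in schema["fields"]
       if any(t not in AVRO_PRIMITIVES for t in f["type"])]
print(bad)  # ['ipkt']
```
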
Labels:
- Apache Hive
- HDFS
02-21-2017
08:21 AM
Hello, I have a string like "0010011" and I want to convert it to an int; it's the inverse of the function bin(a int). I think cast("0010011" as binary) doesn't exist. How can I do that?

Regards,
Joaquín Silva
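For what it's worth, both Hive and Impala provide conv(expr, from_base, to_base), which is effectively the inverse of bin(): conv('0010011', 2, 10) parses the string as base-2 and returns '19'. The same logic, sketched in Python for checking expected values:

```python
# Parse a base-2 string, the same conversion conv('0010011', 2, 10) performs.
bits = "0010011"
value = int(bits, 2)
print(value)  # 19
```
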
Labels:
- Apache Impala
11-30-2016
02:48 PM
Solved. I had to install the JCE Policy files.
11-30-2016
01:32 PM
Hello, I'm having this error when I try to start HDFS after enabling Kerberos:

java.io.IOException: Login failure for hdfs/levante.akainix.local@lebeche.akainix.local from keytab hdfs.keytab: javax.security.auth.login.LoginException: No supported encryption types listed in default_tkt_enctypes

The encryption type that I'm using is aes256-cts-hmac-sha1-96, as shown in the /etc/krb5.conf file:

default_tgs_enctypes = aes256-cts-hmac-sha1-96
default_tkt_enctypes = aes256-cts-hmac-sha1-96
permitted_enctypes = aes256-cts-hmac-sha1-96

Another thing: the node that contains the KDC started correctly, but the rest of them showed the error.

Thanks,
Joaquín
Labels:
- Cloudera Manager
- Kerberos
11-26-2016
06:57 AM
No one stopped or uninstalled the agent manually, because I'm the only one who manages that server. What I did that day was reinstall a MySQL server; I don't know if that is related to this issue. Trying to run cloudera-scm-agent, it seems it was uninstalled:

Failed to start cloudera-scm-agent.service: Unit cloudera-scm-agent.service failed to load: No such file or directory.

So I reinstalled the agent and now it's working. Thanks.
11-24-2016
07:00 AM
Hello, one of the Cloudera Agents shut down with this error:

[15/Nov/2016 19:48:05 +0000] 14910 MainThread agent INFO Stopping agent...
[15/Nov/2016 19:48:05 +0000] 14910 MainThread agent INFO No extant cgroups; unmounting any cgroup roots
[15/Nov/2016 19:48:05 +0000] 14910 MainThread agent INFO 10 processes are being managed; Supervisor will continue to run.
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPING
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('totoro.akainix.local', 9000)) shut down
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Stopped thread '_TimeoutMonitor'.
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPED
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPING
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('totoro.akainix.local', 9000)) already shut down
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE No thread running for None.
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPED
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus EXITING
[15/Nov/2016 19:48:05 +0000] 14910 MainThread _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus EXITED
[15/Nov/2016 19:48:05 +0000] 14910 MainThread agent INFO Cleaning up daemon
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent INFO Stopping agent...
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent INFO No extant cgroups; unmounting any cgroup roots
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent ERROR Shutdown callback failed.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.1-py2.7.egg/cmf/agent.py", line 2777, in stop
f()
File "/usr/lib64/python2.7/asyncore.py", line 409, in close
self.socket.close()
File "/usr/lib64/python2.7/asyncore.py", line 636, in close
os.close(self.fd)
OSError: [Errno 9] Bad file descriptor
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent INFO 10 processes are being managed; Supervisor will continue to run.
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent ERROR Shutdown callback failed.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.1-py2.7.egg/cmf/agent.py", line 2777, in stop
f()
File "/usr/lib64/python2.7/asyncore.py", line 409, in close
self.socket.close()
File "/usr/lib64/python2.7/asyncore.py", line 636, in close
os.close(self.fd)
OSError: [Errno 9] Bad file descriptor
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPING
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('totoro.akainix.local', 9000)) already shut down
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE No thread running for None.
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPED
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPING
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('totoro.akainix.local', 9000)) already shut down
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE No thread running for None.
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus STOPPED
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus EXITING
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 _cplogging INFO [15/Nov/2016:19:48:05] ENGINE Bus EXITED
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent ERROR Shutdown callback failed.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.8.1-py2.7.egg/cmf/agent.py", line 2777, in stop
f()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/pyinotify-0.9.3-py2.7.egg/pyinotify.py", line 1424, in stop
self._pollobj.unregister(self._fd)
KeyError: 15
[15/Nov/2016 19:48:05 +0000] 14910 Dummy-14 agent INFO Cleaning up daemon

And I can't restart the agent because it seems it was uninstalled. All the services work fine; only the Cloudera agent stopped working. Please help, I'm very lost with this.

Regards,
Joaquín
Labels:
- Cloudera Manager
07-11-2016
08:15 AM
Hello, I'm trying to run a Spark submit, but I get this error:
WARN scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 70, totoro.akainix.local): java.lang.AbstractMethodError
at org.apache.spark.Logging$class.log(Logging.scala:51)
at org.apache.spark.streaming.twitter.TwitterReceiver.log(TwitterInputDStream.scala:60)
at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
at org.apache.spark.streaming.twitter.TwitterReceiver.logInfo(TwitterInputDStream.scala:60)
at org.apache.spark.streaming.twitter.TwitterReceiver.onStart(TwitterInputDStream.scala:93)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:148)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:130)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:575)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:565)
at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR cluster.YarnScheduler: Lost executor 18 on totoro.akainix.local: Container marked as failed: container_1468247436212_0003_01_000019 on host: totoro.akainix.local. Exit status: 50. Diagnostics: Exception from container-launch.
Container id: container_1468247436212_0003_01_000019
Exit code: 50
Stack trace: ExitCodeException exitCode=50:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 50
I'm using Spark 1.6.0 on YARN, and this is the tutorial that I'm following.
Please help me, I'm completely lost.
EDIT: here is more info about the error:
16/07/11 12:32:23 ERROR executor.Executor: Exception in task 0.0 in stage 3.0 (TID 72)
java.lang.AbstractMethodError
    at org.apache.spark.Logging$class.log(Logging.scala:51)
    at org.apache.spark.streaming.twitter.TwitterReceiver.log(TwitterInputDStream.scala:60)
    at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
    at org.apache.spark.streaming.twitter.TwitterReceiver.logInfo(TwitterInputDStream.scala:60)
    at org.apache.spark.streaming.twitter.TwitterReceiver.onStart(TwitterInputDStream.scala:93)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:148)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:130)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:575)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverTrackerEndpoint$$anonfun$9.apply(ReceiverTracker.scala:565)
    at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
    at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2003)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/07/11 12:32:23 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.AbstractMethodError
    (same stack trace as above)
Labels:
- Apache Spark
- Apache YARN