Member since: 10-09-2014
Posts: 43
Kudos Received: 13
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2397 | 01-14-2016 05:56 AM
 | 1963 | 11-11-2014 07:05 AM
03-15-2018
08:00 PM
Hi, I am trying to create a NiFi cluster using an Ambari blueprint, but I didn't find a sample blueprint template. Has anyone done this and can help with a sample blueprint template? Thanks
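For what it's worth, here is a minimal sketch of what registering a NiFi-only blueprint might look like; the blueprint name, host group name, cardinality, and the NIFI_MASTER/stack values are assumptions that would need to match your installed HDF management pack and stack version:
# Hedged sketch: register a minimal HDF blueprint containing NiFi (names/versions are assumptions)
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{
        "Blueprints": { "blueprint_name": "nifi-blueprint", "stack_name": "HDF", "stack_version": "2.0" },
        "host_groups": [
          { "name": "nifi_nodes",
            "components": [ { "name": "NIFI_MASTER" }, { "name": "ZOOKEEPER_SERVER" }, { "name": "METRICS_MONITOR" } ],
            "cardinality": "3" }
        ]
      }' \
  http://AMBARI_HOST:8080/api/v1/blueprints/nifi-blueprint
The cluster itself would then be created by POSTing a cluster creation template that maps concrete hosts to the nifi_nodes host group.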
... View more
Labels:
- Apache Ambari
- Apache NiFi
- Schema Registry
02-15-2017
03:12 PM
2 Kudos
Hi, We have been using HDInsight on Azure, and most of the script actions run on that cluster go through Ambari. Now we have a NiFi cluster set up using Ambari, so I need to know how I can run custom scripts on the cluster nodes through Ambari. Thanks
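In case it helps frame the question: as far as I understand, Ambari only executes actions that are registered with the server (custom action definitions under /var/lib/ambari-server/resources/custom_actions), so running an arbitrary script usually means registering it as a custom action first and then triggering it through the requests API. A hedged sketch, assuming an action named run_my_script has already been registered and the host names are placeholders:
# Hedged sketch: trigger a registered custom action on specific hosts (action name and hosts are assumptions)
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{
        "RequestInfo": { "context": "Run custom script", "action": "run_my_script" },
        "Requests/resource_filters": [ { "hosts": "node1.example.com,node2.example.com" } ]
      }' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/requests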
... View more
Labels:
- Apache Ambari
- Apache NiFi
11-29-2016
09:10 PM
Hi, I am trying to install NiFi on Red Hat 6.8 with the following steps:
wget http://public-repo-1.hortonworks.com/HDF/2.0.1.0/HDF-2.0.1.0-12.tar.gz
tar -zxf HDF-2.0.1.0-12.tar.gz
cd HDF-2.0.1.0-12/nifi
bin/nifi.sh start
After this I see the following messages in the logs:
[root@abc000732 nifi]# tailf logs/nifi-bootstrap.log
2016-11-29 20:06:03,579 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
2016-11-29 20:07:11,636 INFO [main] o.a.n.b.NotificationServiceManager Successfully loaded the following 0 services: []
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STARTED
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STOPPED
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_DIED
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/HDF-2.0.1.0/nifi
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Command: java -classpath /opt/HDF-2.0.1.0/nifi/./conf:/opt/HDF-2.0.1.0/nifi/./lib/nifi-documentation-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/logback-classic-1.1.3.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-api-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-nar-utils-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/slf4j-api-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-properties-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/commons-lang3-3.4.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-properties-loader-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-framework-api-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/logback-core-1.1.3.jar:/opt/HDF-2.0.1.0/nifi/./lib/jul-to-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/log4j-over-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/jcl-over-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/bcprov-jdk15on-1.54.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-runtime-1.0.0.2.0.1.0-12.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -XX:+UseG1GC -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/HDF-2.0.1.0/nifi/./conf/nifi.properties -Dnifi.bootstrap.listen.port=19169 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/HDF-2.0.1.0/nifi/logs org.apache.nifi.NiFi
2016-11-29 20:07:12,102 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 52062
2016-11-29 20:07:14,687 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
[root@abc000732 nifi]# tailf logs/nifi-app.log
2016-11-29 20:07:13,939 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-standard-services-api-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-standard-services-api-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,941 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-enrich-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-enrich-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,944 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-elasticsearch-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-elasticsearch-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,949 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-standard-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-standard-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,950 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-avro-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-avro-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,951 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-amqp-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-amqp-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,961 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-hive-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-hive-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,962 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-riemann-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-riemann-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,963 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-scripting-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-scripting-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,965 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/
I have Oracle Java (1.8.0_60) installed on this VM, iptables is off, and SELinux is disabled. Does anyone know what's wrong with this? The same steps work on CentOS 6.8.
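Not an answer, but for anyone debugging the same "NiFi never started" symptom, a hedged checklist that can help narrow it down (file names assume a default HDF layout):
# Hedged troubleshooting sketch for "NiFi never started. Will not restart NiFi"
grep -iE "error|exception" logs/nifi-app.log | tail -n 20   # the real failure usually lands here, not in nifi-bootstrap.log
java -version                                               # confirm which JVM the bootstrap actually picks up
grep "^java=" conf/bootstrap.conf                           # NiFi launches whatever this points to (defaults to "java" on PATH)
grep "^run.as=" conf/bootstrap.conf                         # a run.as user without permissions on the install dir can also fail quietly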
... View more
Labels:
- Apache NiFi
09-14-2016
08:46 PM
After correcting the values for hbase.regionserver.global.memstore.size and hfile.block.cache.size, everything looks good.
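For anyone hitting the same crash loop, a hedged note on why the earlier value likely failed: HBase region servers refuse to start if the combined upper limits for the memstore and the block cache leave too little of the heap free, so the two fractions need to be balanced together, for example:
# Hedged rule of thumb: memstore fraction + block cache fraction should stay at or below roughly 0.8
hbase.regionserver.global.memstore.size = 0.4
hfile.block.cache.size = 0.4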
... View more
09-13-2016
01:42 PM
1 Kudo
I am trying to update HBase properties through the Ambari API and was following this document. Here are the steps I followed:
1. Dumped the existing config into newconfig.json:
curl -u "admin:admin" -G "https://myhbase.net/api/v1/clusters/myhbase/configurations?type=hbase-site&tag=TOPOLOGY_RESOLVED" | jq --arg newtag $(echo version$(date +%s%N)) '.items[] | del(.href, .version, .Config) | .tag |= $newtag | {"Clusters": {"desired_config": .}}' > newconfig.json
2. Modified the property from `0.4` to `0.6` in newconfig.json (and the version number):
"hbase.regionserver.global.memstore.size": "0.6",
3. Applied the modified config:
cat newconfig.json | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase"
4. Restarted HBase.
Stop:
echo '{"RequestInfo": {"context" :"Stopping the Hbase service"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase/services/HBASE"
Start:
echo '{"RequestInfo": {"context" :"Restarting the Hbase service"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase/services/HBASE"
But after the restart the HBase master and region servers went down and got stuck in a restart loop, where they kept getting restarted. Does anyone know what I am doing wrong here? Is there a better way to do this through the Ambari API?
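As an aside, a possibly simpler route for single-property changes is the config helper script that ships with the Ambari server; a hedged sketch (host, cluster name, and credentials are placeholders), which creates the new config version for you so only the HBase restart remains:
# Hedged sketch: update one hbase-site property with Ambari's bundled configs.sh helper
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set AMBARI_HOST CLUSTER_NAME hbase-site \
  "hbase.regionserver.global.memstore.size" "0.6"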
... View more
Labels:
07-05-2016
12:41 PM
I do have the azure-storage package installed:
root@sbd-docker:~# pip show azure-storage
---
Name: azure-storage
Version: 0.20.0
Location: /usr/local/lib/python2.7/dist-packages
Requires: azure-nspkg, requests, python-dateutil, azure-common
root@sbd-docker:~#
Is this what you mean?
... View more
07-05-2016
12:38 PM
I tried setting HADOOP_HOME and the log4j property you mentioned. Now it looks like this: https://gist.github.com/anonymous/6502365d31d68bc29bc2afac15b01158. spark-shell trace: https://gist.github.com/anonymous/57014be445e1c8526fdaba561739ba44
... View more
07-01-2016
05:34 PM
I do have `/usr/hdp/current/hadoop-client/hadoop-azure.jar` present on the node
... View more
07-01-2016
02:34 PM
Hi, We have an HDInsight cluster running in Azure, but it doesn't allow spinning up an edge/gateway node at cluster creation time. So I was creating this edge/gateway node by installing:
echo 'deb http://private-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.2.0 HDP main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14 HDP-UTILS main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/azurecore/ trusty main' >> /etc/apt/sources.list.d/azure-public-trusty.list
gpg --keyserver pgp.mit.edu --recv-keys B9733A7A07513CAD
gpg -a --export 07513CAD | apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys B02C46DF417A0893
gpg -a --export 417A0893 | apt-key add -
apt-get -y install openjdk-7-jdk
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
apt-get -y install hadoop hadoop-hdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl libhdfs0 liblzo2-2 liblzo2-dev hadoop-lzo phoenix hive hive-hcatalog tez mysql-connector-java* oozie oozie-client sqoop flume flume-agent spark
After installing all packages and copying config files from a cluster node, I am able to run hadoop fs commands and YARN jobs. But Spark doesn't work smoothly yet; the following packages are present on the edge/gateway node, with the Spark config copied from the cluster:
root@sbd-docker:~/ubuntu# dpkg -l | grep spark
ii spark 1.6.1.2.4.2.0-258 all spark is a virtual package that brings spark-2-4-2-0-258 as a dependency.
ii spark-2-4-2-0-258 1.6.1.2.4.2.0-258 all Lightning-Fast Cluster Computing
ii spark-2-4-2-0-258-master 1.6.1.2.4.2.0-258 all Server for Spark master
ii spark-2-4-2-0-258-python 1.6.1.2.4.2.0-258 all Python client for Spark
ii spark-2-4-2-0-258-worker 1.6.1.2.4.2.0-258 all Server for Spark worker
ii spark-2-4-2-0-258-yarn-shuffle 1.6.1.2.4.2.0-258 all Spark Yarn Shuffle jar
root@sbd-docker:~/ubuntu#
spark-shell gives me the following error:
root@sbd-docker:~/ubuntu# spark-shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-examples-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/01 14:35:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 14:35:29 INFO SecurityManager: Changing view acls to: root
16/07/01 14:35:29 INFO SecurityManager: Changing modify acls to: root
16/07/01 14:35:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 14:35:29 INFO HttpServer: Starting HTTP Server
16/07/01 14:35:29 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 14:35:29 INFO AbstractConnector: Started SocketConnector@0.0.0.0:47325
16/07/01 14:35:29 INFO Utils: Successfully started service 'HTTP class server' on port 47325.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.1
/_/
Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.7.0_101)
Type in expressions to have them evaluated.
Type :help for more information.
16/07/01 14:35:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 14:35:37 INFO SecurityManager: Changing view acls to: root
16/07/01 14:35:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 14:35:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 14:35:37 INFO Utils: Successfully started service 'sparkDriver' on port 37810.
16/07/01 14:35:39 INFO Slf4jLogger: Slf4jLogger started
16/07/01 14:35:39 INFO Remoting: Starting remoting
16/07/01 14:35:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.8.17.5:45089]
16/07/01 14:35:39 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 45089.
16/07/01 14:35:39 INFO SparkEnv: Registering MapOutputTracker
16/07/01 14:35:39 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 14:35:39 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-0de66eed-5a2e-4c6b-a78c-f1719dce3b1d
16/07/01 14:35:39 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
16/07/01 14:35:39 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 14:35:40 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 14:35:40 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/01 14:35:40 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 14:35:40 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.8.17.5:4040
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
16/07/01 14:35:41 INFO TimelineClientImpl: Timeline service address: http://hn0-haspar.pbed5jwkixfebdxr1by2u30lzf.cx.internal.cloudapp.net:8188/ws/v1/timeline/
16/07/01 14:35:41 INFO AbstractService: Service org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl failed in state STARTED; cause: java.io.IOException: No FileSystem for scheme: wasb
java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/07/01 14:35:41 INFO AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
16/07/01 14:35:41 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 14:35:41 INFO SparkUI: Stopped Spark web UI at http://10.8.17.5:4040
16/07/01 14:35:41 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
16/07/01 14:35:41 INFO YarnClientSchedulerBackend: Stopped
16/07/01 14:35:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 14:35:41 INFO MemoryStore: MemoryStore cleared
16/07/01 14:35:41 INFO BlockManager: BlockManager stopped
16/07/01 14:35:41 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 14:35:41 WARN MetricsSystem: Stopping a MetricsSystem that is not running
16/07/01 14:35:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 14:35:41 INFO SparkContext: Successfully stopped SparkContext
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
^
scala>
Anyone know what I am missing here?
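For context, the "No FileSystem for scheme: wasb" error means the Azure filesystem classes are not on Spark's classpath, even though hadoop-azure.jar exists on disk. A hedged sketch of one way to wire it in on an edge node (the jar paths and azure-storage version are assumptions and should be checked locally):
# Hedged sketch: expose the WASB driver jars to the Spark driver and executors
AZURE_JARS=/usr/hdp/current/hadoop-client/hadoop-azure.jar:/usr/hdp/current/hadoop-client/lib/azure-storage-2.2.0.jar
spark-shell \
  --conf spark.driver.extraClassPath=$AZURE_JARS \
  --conf spark.executor.extraClassPath=$AZURE_JARS
# core-site.xml must also map the scheme, e.g. fs.wasb.impl=org.apache.hadoop.fs.azure.NativeAzureFileSystem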
... View more
Labels:
- Apache Spark
- Apache YARN
06-23-2016
12:33 PM
1 Kudo
Thanks @Chris Nauroth for the explanation. At present we have NameNode HA and we are putting data from Flume into this cluster, with hdfs://mycluster/flume configured as the destination in the Flume sink. What is the correct way to put data into the default HDFS storage (WASB) from Flume and make it accessible from hadoop fs -ls /? I'd appreciate help with this.
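To make the question concrete, this is roughly what I understand the sink change would look like if WASB is the default filesystem; the agent/sink names and container/account are placeholders, not our real config:
# Hedged sketch of a Flume HDFS sink pointed at the default WASB storage (names are placeholders)
agent.sinks.wasbSink.type = hdfs
agent.sinks.wasbSink.hdfs.path = wasb://CONTAINER@ACCOUNT.blob.core.windows.net/flume/events
# or a scheme-less path such as /flume/events, which resolves against fs.defaultFS (WASB on HDInsight)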
... View more
06-22-2016
08:57 PM
1 Kudo
We have an HDInsight cluster set up in Azure.
When I do hadoop fs -ls / it shows me:
drwxr-xr-x - root supergroup 0 2016-06-17 20:56 /HdiNotebooks
drwxr-xr-x - root supergroup 0 2016-06-17 21:00 /HdiSamples
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /ams
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /amshbase
drwxrwxrwx - yarn hadoop 0 2016-06-17 20:48 /app-logs
drwxr-xr-x - yarn hadoop 0 2016-06-17 20:48 /atshistory
drwxr-xr-x - sshuser supergroup 0 2016-06-21 18:38 /data
drwxr-xr-x - root supergroup 0 2016-06-17 20:59 /example
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /hdp
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /hive
drwxr-xr-x - mapred supergroup 0 2016-06-17 20:48 /mapred
drwx------ - sshuser supergroup 0 2016-06-20 14:22 /mapreducestaging
drwxrwxrwx - mapred hadoop 0 2016-06-17 20:48 /mr-history
drwxr-xr-x - sshuser supergroup 0 2016-06-20 19:20 /sqoop
drwxrwxrwx - hdfs supergroup 0 2016-06-17 20:48 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /user
But hadoop fs -ls hdfs://mycluster/ shows the following result:
root@hn0-haspar:~# hadoop fs -ls hdfs://mycluster/
Found 3 items
drwxr-xr-x - root hdfs 0 2016-06-21 18:48 hdfs://mycluster/data
drwx-wx-wx - root hdfs 0 2016-06-17 20:57 hdfs://mycluster/tmp
drwx------ - root hdfs 0 2016-06-22 17:24 hdfs://mycluster/user
I don't know where these different directories are coming from. The cluster has an HA configuration.
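A hedged way to see why the two listings differ: hadoop fs -ls / resolves against fs.defaultFS (WASB on HDInsight), while hdfs://mycluster/ explicitly addresses the cluster-local HDFS nameservice, so they are two separate filesystems:
# Hedged sketch: confirm which filesystem each command is talking to
hdfs getconf -confKey fs.defaultFS          # expected to print a wasb:// URI on HDInsight
hadoop fs -ls wasb:///                      # should match "hadoop fs -ls /"
hadoop fs -ls hdfs://mycluster/             # the local HDFS, hence the different directories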
... View more
Labels:
03-11-2016
01:14 PM
I am trying to run the sample hadoop example jobs on a new cluster with CDH 5.6.0. I am running the following command:
/usr/bin/yarn jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 16 100
Number of Maps = 16
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
16/03/11 15:24:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm73
16/03/11 15:24:19 INFO input.FileInputFormat: Total input paths to process : 16
16/03/11 15:24:19 INFO mapreduce.JobSubmitter: number of splits:16
16/03/11 15:24:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1457718893017_0004
16/03/11 15:24:20 INFO impl.YarnClientImpl: Submitted application application_1457718893017_0004
16/03/11 15:24:20 INFO mapreduce.Job: The url to track the job: http://hmn002.dev.abc.com:8088/proxy/application_1457718893017_0004/
16/03/11 15:24:20 INFO mapreduce.Job: Running job: job_1457718893017_0004
16/03/11 15:24:25 INFO mapreduce.Job: Job job_1457718893017_0004 running in uber mode : false
16/03/11 15:24:25 INFO mapreduce.Job: map 0% reduce 0%
16/03/11 15:24:27 INFO mapreduce.Job: Task Id : attempt_1457718893017_0004_m_000004_0, Status : FAILED
Exception from container-launch.
Container id: container_1457718893017_0004_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/03/11 15:24:27 INFO mapreduce.Job: Task Id : attempt_1457718893017_0004_m_000007_0, Status : FAILED
Exception from container-launch.
Container id: container_1457718893017_0004_01_000009
Exit code: 1
Stack trace: ExitCodeException exitCode=1: (same stack trace as above)
Container exited with a non-zero exit code 1
It's failing with the above error. When I checked, many people point to the hadoop classpath, but I have verified the classpath is correct in Cloudera Manager. This is another log I found from one of the tasks:
Log Type: stderr
Log Upload Time: Fri Mar 11 20:24:44 +0000 2016
Log Length: 108
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Log Type: stdout
Log Upload Time: Fri Mar 11 20:24:44 +0000 2016
Log Length: 90
Error occurred during initialization of VM
Could not reserve enough space for object heap
Anyone know what's wrong here?
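The stdout snippet ("Could not reserve enough space for object heap") suggests the task JVM's heap request is larger than what the node can actually reserve. A hedged sketch of overriding the task memory settings just for this job to test that theory (values are illustrative, not recommendations):
# Hedged sketch: run the pi example with smaller, explicit task heap/container sizes
/usr/bin/yarn jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi \
  -D mapreduce.map.memory.mb=1024 -D mapreduce.map.java.opts=-Xmx768m \
  -D mapreduce.reduce.memory.mb=1024 -D mapreduce.reduce.java.opts=-Xmx768m \
  16 100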
... View more
01-14-2016
06:05 AM
I want to change my email for the Cloudera community; how do I do this?
... View more
01-14-2016
05:56 AM
Looks like it's working after adding -D mapred.task.timeout=60000000
... View more
01-13-2016
02:04 PM
Hi, I am trying to back up some data from our ancient hadoop cluster, which is on cdh3u5, to an S3 bucket with distcp. But my job is failing because some of the tasks are getting killed multiple times with the following message:
Task attempt_201404090636_336528_m_000112_0 failed to report status for 600 seconds. Killing!
Task attempt_201404090636_336528_m_000112_1 failed to report status for 600 seconds. Killing!
Task attempt_201404090636_336528_m_000112_2 failed to report status for 600 seconds. Killing!
I was trying to distcp a directory which has about ~1200 files, each 3MB - 5MB in size. We have 80 datanodes in the cluster. Any help with this please. Thanks, roy
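For reference, this is roughly the shape of the command that ended up working once the task timeout was raised (bucket name, paths, and credentials are placeholders; on cdh3u5 the S3 scheme would typically be s3n://):
# Hedged sketch: distcp to S3 with a raised task timeout so slow S3 uploads are not killed at 600s
hadoop distcp -D mapred.task.timeout=60000000 \
  hdfs://namenode:8020/path/to/source \
  s3n://ACCESS_KEY:SECRET_KEY@my-bucket/backup/path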
... View more
Labels:
- Apache Hadoop
11-11-2014
07:05 AM
1 Kudo
This link should clear your confusion: https://www.linkedin.com/pulse/article/20140706112523-176301000-yarn-resource-allocation
... View more