Member since 10-24-2016 · 19 Posts · 2 Kudos Received · 1 Solution
11-05-2016 09:36 PM
1 Kudo
*WORKED*

{code}
# 1:
su - hdfs
hdfs dfs -put /usr/hdp/2.4.3.0-227/hadoop/mapreduce.tar.gz /hdp/apps/2.4.3.0-227/mapreduce/

# 3:
su - atlas
cp /usr/hdp/2.4.3.0-227/etc/atlas/conf.dist/client.properties /etc/atlas/conf/
{code}
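For anyone landing here later, a quick way to verify the fixes above took effect (a sketch, assuming the same HDP 2.4.3.0-227 paths):

{code}
# The tarball should now be visible in HDFS:
su - hdfs -c "hdfs dfs -ls /hdp/apps/2.4.3.0-227/mapreduce/mapreduce.tar.gz"
# And the Atlas client properties should exist in the active conf dir:
ls -l /etc/atlas/conf/client.properties
{code}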
11-01-2016 04:18 AM
I am still having this issue.

Env:
HDInsight: Spark 1.6 on Linux (HDI 3.5.1000.0)
HDP version: 2.5.1.0-56
Spark: spark-assembly-1.6.2.2.5.1.0-56-hadoop2.7.3.2.5.1.0-56.jar

Issue:

{code}
11/01/2016 04:06:31 [INFO] [ExecHelper] [] [] [] [] [20] [] [] [] Executing Command :[
  /opt/lib/spark-1.6.2-bin-hadoop2.7/bin/spark-submit,
  --name, AJ-21-for-Romo-MsSQL-HDFS,
  --class, com.bigdlabs.romo.tool.RomoMain,
  --master, yarn-client,
  --num-executors, 1, --driver-memory, 1g, --executor-memory, 1g, --executor-cores, 1,
  --driver-java-options="-XX:MaxPermSize=256m",
  --jars, /opt/conflux/dependencylibs/spark1.6/jersey-server-1.9.jar,/opt/conflux/dependencylibs/spark1.6/datanucleus-api-jdo-3.2.6.jar,/opt/conflux/dependencylibs/spark1.6/sqljdbc4.jar,/opt/conflux/dependencylibs/spark1.6/datanucleus-rdbms-3.2.9.jar,/opt/conflux/dependencylibs/spark1.6/microsoft-log4j-etwappender-1.0.jar,/opt/conflux/dependencylibs/spark1.6/datanucleus-core-3.2.10.jar,
  /opt/conflux/lib/romolib/romo-0.0.1-SNAPSHOT.jar,
  --STATS_REST_ENDPOINT=http://apervi-azr-conflux-test2.apervi.com:8080/workflowmanager,
  sourcerdbms=sourcerdbms, sourcerdbms.name=RDBMS-1, sourcerdbms.url=jdbc:sqlserver://....., sourcerdbms.table=sanity_test,
  sourcerdbms.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver, sourcerdbms.query=RDBMS-Src-21-RDBMS-1.sql,
  sourcerdbms.infields=id,name,logindate, sourcerdbms.infieldstypes=java.lang.Integer,java.lang.String,java.util.Date,
  sourcerdbms.outfields=id,name,logindate, sourcerdbms.outfieldstypes=java.lang.Integer,java.lang.String,java.util.Date,
  sourcerdbms.parallelism=2, sourcerdbms.retain.fields=id,name,logindate, sourcerdbms.wfitemstatusid=40,
  filesink=filesink, filesink.name=Delimited_File-1, filesink.source=RDBMS-1, filesink.filetype=text,
  filesink.fsurl=hdfs://.....:8020, filesink.path=/user/conflux/output/mssql-hdfs.out,
  filesink.delimiter=,, filesink.quote=", filesink.quotemode=MINIMAL, filesink.compression,
  filesink.infields=id,name,logindate, filesink.infieldstypes=java.lang.Integer,java.lang.String,java.util.Date,
  filesink.writefields=id,name,logindate, filesink.writefieldstypes=java.lang.Integer,java.lang.String,java.util.Date,
  filesink.replace=true, filesink.writeheader=true, filesink.singlefile=true,
  filesink.retain.fields=id,name,logindate, filesink.wfitemstatusid=39]
{code}

Logs:

{code}
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT>2016-11-01 04:06:34,771 - WARN [main:FileSystem@2731] - Cannot load filesystem
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT>java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3a.S3AFileSystem could not be instantiated
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2723)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2742)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2759)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2783)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:433)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:441)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:423)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.yarn.client.api.impl.FileSystemTimelineWriter.<init>(FileSystemTimelineWriter.java:122)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.createTimelineWriter(TimelineClientImpl.java:317)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:309)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:199)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at com.bigdlabs.romo.tool.RomoMain.plan(RomoMain.java:318)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at com.bigdlabs.romo.tool.RomoMain.execute(RomoMain.java:257)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at com.bigdlabs.romo.tool.RomoMain.main(RomoMain.java:471)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at java.lang.reflect.Method.invoke(Method.java:498)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:731)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
11/01/2016 04:06:34 [INFO] [StreamGobbler] [] [] [] [] [20] [] [] [] OUTPUT>Caused by: java.lang.NoClassDefFoundError: com/amazonaws/AmazonClientException
{code}
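The root cause is the last line: hadoop-aws registers org.apache.hadoop.fs.s3a.S3AFileSystem via the ServiceLoader, but the AWS SDK jar that provides com.amazonaws.AmazonClientException is not on the classpath. A rough way to confirm (a sketch; the search paths are assumptions based on the env above):

{code}
# Look for the AWS SDK anywhere the Spark/Hadoop classpath could pick it up:
find /usr/hdp /opt/lib/spark-1.6.2-bin-hadoop2.7 -name 'aws-java-sdk*.jar' 2>/dev/null
# If nothing is found, either add a matching aws-java-sdk jar to --jars / the
# Hadoop classpath, or, since S3A is not used by this job, remove the
# hadoop-aws jar so the ServiceLoader stops trying to instantiate S3AFileSystem.
{code}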
10-26-2016 10:36 PM
@Todd Wilson @Artem Ervits

*Sandbox*
{code}
[root@sandbox hdp]# grep -r "YarnShuffleService" *
Binary file 2.4.0.0-169/spark/lib/spark-1.6.0.2.4.0.0-169-yarn-shuffle.jar matches
Binary file 2.4.0.0-169/spark/lib/spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar matches
{code}

*HDP2.4 Cluster* - installed using the Ambari automated install
{code}
[root@node09 hdp]# grep -r "YarnShuffleService" *
Binary file 2.4.3.0-227/spark/lib/spark-assembly-1.6.2.2.4.3.0-227-hadoop2.7.1.2.4.3.0-227.jar matches
{code}

Some libs are missing.

Sandbox:
{code}
[root@sandbox hdp]# find . -name *.jar | wc -l
4402
{code}

Cluster node:
{code}
[root@node09 hdp]# find . -name *.jar | wc -l
1675
{code}

Part of the difference could be that some services are not on yet in the cluster; I get that. But the missing yarn-shuffle jar scares me. Why wouldn't it install?

STEPS:

#1 Manual: copied the jar over from the HDP2.4 sandbox to the HDP2.4 cluster.
{code}
cp ~/spark-1.6.0.2.4.0.0-169-yarn-shuffle.jar .
mv spark-1.6.0.2.4.0.0-169-yarn-shuffle.jar spark-1.6.2.2.4.3.0-227-yarn-shuffle.jar
{code}

#2: Added the following properties to the spark-defaults.conf file associated with the Spark installation (for general Spark applications, this file typically resides at $SPARK_HOME/conf/spark-defaults.conf); see the shell sketch after this post:
Set spark.dynamicAllocation.enabled to true
Set spark.shuffle.service.enabled to true

#3: Manually restarted all components. It sort of worked briefly, but then went down again. "Background Operations Running" showed it as up, yet Ambari -> Hosts -> Summary shows it down. The logs complain about the same missing class.
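For completeness, step #2 as shell (a minimal sketch: the conf path is an assumed HDP client-layout default, and appending without checking can duplicate existing entries):

{code}
# Assumed location; adjust to your $SPARK_HOME/conf/spark-defaults.conf
SPARK_CONF=/usr/hdp/current/spark-client/conf/spark-defaults.conf
cat >> "$SPARK_CONF" <<'EOF'
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled   true
EOF
{code}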
10-26-2016 09:11 PM
@Todd Wilson Here is the file: yarn-yarn-nodemanager-node09examplecomlog.txt. Please check the attached.

{code}
2016-10-26 16:08:03,700 FATAL containermanager.AuxServices (AuxServices.java:serviceInit(145)) - Failed to initialize spark_shuffle
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2240)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:121)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:245)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:292)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:547)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:595)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2208)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2232)
    ... 10 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2114)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2206)
{code}
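Since the trace fails while initializing the spark_shuffle aux service, one way to cross-check the NodeManager side (a sketch; the property names are the standard YARN/Spark shuffle ones, and the paths are assumed HDP defaults):

{code}
# The aux-service wiring the stack trace is trying to resolve:
grep -A1 "yarn.nodemanager.aux-services" /etc/hadoop/conf/yarn-site.xml
# Expected, per Spark's dynamic-allocation setup:
#   yarn.nodemanager.aux-services                      includes spark_shuffle
#   yarn.nodemanager.aux-services.spark_shuffle.class  = org.apache.spark.network.yarn.YarnShuffleService
# The yarn-shuffle jar must also be on the NodeManager's classpath:
find /usr/hdp -name '*yarn-shuffle*.jar' 2>/dev/null
{code}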
10-26-2016 03:37 PM
Please see my firewall and other info below. Still no luck: I cannot get my NodeManager or Hive up, and those are must-haves for me.
10-26-2016 03:16 PM
One more puzzle: safe mode sometimes turns on automatically.

{code}
hdfs dfsadmin -fs hdfs://node09.example.com:8020 -safemode get
Safe mode is ON
{code}

I manually turned it off:

{code}
[hdfs@node09 ~]$ hdfs dfsadmin -fs hdfs://node09.example.com:8020 -safemode leave
{code}

Then:
-- Restarted NameNode
-- Started NodeManager: still goes down after some time
-- Started HiveServer2: still doesn't come up. It never has so far.
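When safe mode keeps re-enabling itself, the NameNode usually has a reason, often missing or under-replicated blocks or too few live DataNodes. A sketch of what to check before forcing leave again (standard HDFS admin commands):

{code}
su - hdfs -c "hdfs dfsadmin -fs hdfs://node09.example.com:8020 -safemode get"
# Summarize block health; persistent safe mode often shows up here as
# missing or corrupt blocks:
su - hdfs -c "hdfs fsck / | tail -20"
# Live/dead DataNode counts:
su - hdfs -c "hdfs dfsadmin -report | head -40"
{code}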
10-26-2016 02:47 PM
@Artem Ervits may I know which service runs on 8042? I looked at this reference to see which one it is, but can't find 8042: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_HDP_Reference_Guide/content/accumulo-ports.html

ip6tables was running; I just turned it off now. iptables has been off for a couple of days.

{code}
[root@~]# service ip6tables status
ip6tables: Firewall is not running.
[root@~]# service iptables status
iptables: Firewall is not running.
{code}

Tried restarting NodeManager. Still the same error:

{code}
2016-10-26 09:45:38,614 script_alert.py:119 - [Alert][yarn_nodemanager_health] Failed with result CRITICAL: ['Connection failed to http://node09.example.com:8042/ws/v1/node/info (Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/alerts/alert_nodemanager_health.py", line 171, in execute
    url_response = urllib2.urlopen(query, timeout=connection_timeout)
  File "/usr/lib64/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib64/python2.6/urllib2.py", line 391, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.6/urllib2.py", line 409, in _open
    '_open', req)
  File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.6/urllib2.py", line 1190, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.6/urllib2.py", line 1165, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 111] Connection refused>
)']
{code}
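For reference, 8042 is the default NodeManager web UI port (yarn.nodemanager.webapp.address), which is why the health alert probes it. A quick local check with standard Linux tools (a sketch):

{code}
# Is anything actually listening on 8042 on this host?
netstat -tlnp | grep 8042
# Confirm the configured webapp address:
grep -A1 "yarn.nodemanager.webapp.address" /etc/hadoop/conf/yarn-site.xml
{code}

"Connection refused" with nothing listening means the NodeManager process itself is down (consistent with the YarnShuffleService failure above), not a firewall problem.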
10-26-2016 01:56 PM
Sorry @Todd Wilson. I could not find a dedicated NodeManager log, so I added the Ambari Agent and Server logs below. Please see my two replies to Artem's request.
10-26-2016 01:26 PM
@Todd Wilson Please see my NodeManager and NodeManager+HiveServer2 logs.
10-26-2016 01:25 PM
#2: Sequence:
1. Truncate Ambari Server and Agent logs
2. Start NodeManager
3. Then start HiveServer2

Logs attached: ambari-serverloghive.txt, ambari-agentloghive.txt
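The truncation step as shell, for reproducibility (a sketch; the log paths are typical Ambari defaults and may differ on your hosts):

{code}
truncate -s 0 /var/log/ambari-server/ambari-server.log
truncate -s 0 /var/log/ambari-agent/ambari-agent.log
# then start NodeManager, then HiveServer2, and collect the fresh logs
{code}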