Member since: 12-15-2016
Posts: 54
Kudos Received: 4
Solutions: 1
10-30-2017 04:31 PM
Actually, I did exactly that: I killed the app and resubmitted it with the same config, and it took 3 GB again. I'll give it another shot and get back to you with feedback as soon as possible.
10-30-2017 04:20 PM
Hi @Gour Saha, is it somehow possible to allocate just 512 MB? These jobs aren't "expensive" enough to need 3-4 GB of RAM. Thank you 🙂
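For reference, whether a 512 MB container is even attainable depends on the scheduler minimum and on Spark's memory overhead; a hedged sketch of the knobs involved (values are illustrative, and lowering the minimum affects the whole cluster):

# In yarn-site.xml (or Ambari -> YARN -> Settings), the container floor:
#   yarn.scheduler.minimum-allocation-mb = 512     <- illustrative value
# YARN rounds every request up to a multiple of this minimum, and Spark
# adds max(384 MB, 10%) of overhead, so a 512m executor (512 + 384 = 896 MB)
# would still land in a 1024 MB container. Reaching exactly 512 MB would
# also mean shrinking both numbers, e.g.:
#   spark-submit ... --executor-memory 384m \
#     --conf spark.yarn.executor.memoryOverhead=128   # 384 + 128 = 512 MB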
10-30-2017 01:35 PM
Hi, as the title says. Let's say I'm submitting a Spark job like this:
spark-submit --class streaming.test --master yarn --deploy-mode cluster --name some_name --executor-memory 512m --executor-cores 1 --driver-memory 512m some.jar
The job is submitted and runs, as you can see here: screenshot-6.jpg. But although I gave the job 512 MB of RAM, YARN allocated 3 GB, and this happens for every Spark job I submit. Can someone point out where I'm going wrong? UPDATE: I have 3 RMs, and yarn.scheduler.minimum-allocation-mb is set to 1024. Is the 3 GB because of 1024 * (number of RMs)?
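A likely explanation, for reference: the 3 GB is more plausibly container rounding than anything tied to the number of RMs. A rough sketch of the arithmetic, assuming Spark's default memory overhead of max(384 MB, 10% of the request) and the default of 2 executors in YARN cluster mode:

# driver (AM) container: 512 MB requested + 384 MB overhead = 896 MB
#   -> rounded up to yarn.scheduler.minimum-allocation-mb (1024) = 1024 MB
# each executor container: 512 + 384 = 896 MB -> 1024 MB
# with the default spark.executor.instances = 2:
#   1024 (AM) + 2 * 1024 (executors) = 3072 MB, i.e. the ~3 GB YARN shows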
Labels:
- Apache Hadoop
- Apache Spark
- Apache YARN
10-16-2017 02:15 PM
1 Kudo
I was short on time, so I stood up a Phoenix server on another node, and it created all the SYSTEM tables it needed. On the other hand, I didn't know about that .py file. Does it really create the SYSTEM tables? I'd rather not run it now that everything is working, or would running it simply do nothing? If so, I can mark your answer as accepted. Also, thank you for your time 🙂
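For context: Phoenix creates its SYSTEM tables lazily, on the first client connection that finds them missing, which is consistent with the new server "creating" them; re-running a client against a healthy cluster should just find them and change nothing. A minimal sketch, with paths and the ZooKeeper quorum assumed from an HDP-style layout:

# First connection bootstraps SYSTEM.CATALOG, SYSTEM.FUNCTION, etc. if absent
# (the host name zk1 and the paths here are illustrative):
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181:/hbase-unsecure
# Verify afterwards from the HBase shell:
echo "list" | hbase shell      # SYSTEM.CATALOG should now be listed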
10-15-2017 07:57 PM
I cannot find anything. Any little hint on how to recreate the SYSTEM tables would help; please, I'm literally going nuts 😞
10-15-2017 07:06 PM
Hi @Ted Yu, I'm on HDP 2.6.0. So, when I type "list" in the shell or in Zeppelin, I get ZERO tables as a result. When I try to create a table, I get this: org.apache.phoenix.exception.PhoenixIOException: Table 'SYSTEM.CATALOG' was not found, got: AFM_49_CLICKS_ANTIFRAUDD.
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1303)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1268)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1464)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2190)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:872)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:194)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:580)
at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.TableNotFoundException: Table 'SYSTEM.CATALOG' was not found, got: AFM_49_CLICKS_ANTIFRAUDD1.
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1284)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1165)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1149)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1106)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:941)
at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:83)
at org.apache.hadoop.hbase.client.HTable.getRegionLocation(HTable.java:504)
at org.apache.hadoop.hbase.client.HTable.getKeysAndRegionsInRange(HTable.java:720)
at org.apache.hadoop.hbase.client.HTable.getKeysAndRegionsInRange(HTable.java:690)
at org.apache.hadoop.hbase.client.HTable.getStartKeysInRange(HTable.java:1757)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1712)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1692)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1286)
... 25 more
The same table does get created, because when I run "list" it's there, but it isn't registered in SYSTEM.CATALOG, since no such table exists in HBase. When I try to run a SELECT query on the same table, this is what I get: org.apache.phoenix.exception.PhoenixIOException: Table 'SYSTEM.CATALOG' was not found, got: AFM_49_CLICKS_ANTIFRAUDD.
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1303)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1268)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1493)
at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:514)
at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:437)
at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:429)
at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:425)
at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:535)
at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:365)
at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:213)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:397)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
10-15-2017 02:23 PM
Hi! I'm trying to recreate my SYSTEM tables, since I deleted them from HDFS. I cleaned everything under /apps/hbase on HDFS, ran $ hbase zkCli clean -cleanAll, deleted everything in ZooKeeper under /hbase-unsecure, and then started the HBase master. When I tried to query a newly created table in Phoenix, I got this: org.apache.phoenix.exception.PhoenixIOException: Table 'SYSTEM.CATALOG' was not found, got: AFM_49_CLICKS_ANTIFRAUDD. In my ZooKeeper I have this: $ ls /hbase-unsecure/table
[hbase:meta, hbase:namespace, AFM_49_CLICKS_ANTIFRAUDD]
So I don't know how to recreate all of the SYSTEM tables in my HBase. I tried many things with hbck options (-repair, -fix/fixAssignments, and many others), but nothing really happened. Can someone tell me how to recreate these tables? Is that even possible? How can I continue using Phoenix? Please don't suggest deleting the znode from ZooKeeper and the like, because I've tried everything and nothing creates the SYSTEM tables on HDFS.
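One point worth noting for anyone hitting this: the SYSTEM tables are created by Phoenix clients on connection, not by the HBase master, so after a wipe they only reappear once a Phoenix client connects cleanly. A hedged set of state checks, with paths matching the HDP-style setup described above (the rootdir location is an assumption):

# HBase's view of existing tables:
echo "list" | hbase shell
# On-disk table directories under the HBase root (default HDP rootdir assumed):
hdfs dfs -ls /apps/hbase/data/data/default
# ZooKeeper's table registrations (inside the zkcli prompt):
#   hbase zkcli
#   ls /hbase-unsecure/table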
Labels:
- Apache Hadoop
- Apache HBase
- Apache Phoenix
06-12-2017 12:00 PM
1 Kudo
Hello! I'm using the new HDP 2.6 and Ambari, with YARN, MapReduce, Spark2, Hadoop, etc. installed on it. I'm trying to enter the Spark shell with --master yarn, but I constantly get the error below:
bin/spark-shell --master yarn --deploy-mode client
Warning: Ignoring non-spark config property: spark-executor.memory=4g
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/06/12 13:38:38 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (8192+819 MB) is above the max threshold (8192 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:334)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:168)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
at org.apache.spark.sql.SparkSession$Builder$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96)
at $line3.$read$iw$iw.<init>(<console>:15)
at $line3.$read$iw.<init>(<console>:42)
at $line3.$read.<init>(<console>:44)
at $line3.$read$.<init>(<console>:48)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at org.apache.spark.repl.SparkILoop$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
at org.apache.spark.repl.SparkILoop$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105)
at scala.tools.nsc.interpreter.ILoop$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
at scala.tools.nsc.interpreter.ILoop$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at org.apache.spark.repl.Main$.doMain(Main.scala:69)
at org.apache.spark.repl.Main$.main(Main.scala:52)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:745)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I also tried with this command: bin/spark-shell --conf spark-executor.memory=4g --conf spark.executor.cores=2 --master yarn --deploy-mode client but I still get exactly the same error. My YARN resource settings, and the apps that succeeded in the Ambari service check, are in the attached screenshots. Can someone tell me what I'm doing wrong here? I'm going nuts; I've been trying to fix this for a week already and I can't take it anymore. Please, someone 😞 @Wynner @Matt Clarke @Jay SenSharma
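A likely culprit, for reference: the very first warning ("Ignoring non-spark config property: spark-executor.memory=4g") shows the 4 GB override is being dropped because the property name is misspelled; the real Spark property is spark.executor.memory, with a dot. With the override ignored, Spark falls back to an 8 GB executor configured elsewhere (spark-defaults.conf, presumably), and 8192 MB plus the ~10% overhead (819 MB) exceeds the 8192 MB yarn.scheduler.maximum-allocation-mb, which is exactly the error above. A sketch of the corrected invocation:

# Correct property name: 4096 MB + 410 MB overhead = ~4.4 GB, well under
# the cluster's 8192 MB maximum:
bin/spark-shell --conf spark.executor.memory=4g --conf spark.executor.cores=2 \
  --master yarn --deploy-mode client
# Equivalent, using the dedicated flags:
bin/spark-shell --executor-memory 4g --executor-cores 2 --master yarn --deploy-mode client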
Labels:
- Apache Spark
- Apache YARN