Member since: 08-09-2022
Posts: 7
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1721 | 11-04-2022 10:18 AM
11-04-2022 10:18 AM
Quick update: I ran spark-shell and got better errors indicating wrong config settings.

$ spark-shell --master yarn
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/11/04 16:59:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/11/04 16:59:49 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024 MB), offHeap memory (0) MB, overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
    at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:368)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:199)
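The check that fails here is executor memory (1024 MB) plus overhead (384 MB) against the cluster's 1024 MB maximum allocation. Below is a minimal yarn-site.xml sketch of the kind of change that clears that threshold; the 4096 MB values are illustrative assumptions rather than figures from this cluster, and in CDP these properties are normally set through Cloudera Manager rather than by editing the file by hand.

<!-- Sketch only: raise YARN's per-container and per-node memory limits so that
     executor memory (1024 MB) + overhead (384 MB) fits under the maximum allocation.
     The 4096 MB values are illustrative assumptions, not values from this post. -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>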
11-04-2022 05:09 AM
Running the following from Hive:

SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.support.concurrency=true;
INSERT INTO hello_acid partition (load_date='2016-03-01') VALUES (1, 1);

I am getting this error:

SQL Error [30041] [42000]: Error while processing statement: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session e73c125d-e40a-49fb-b2e0-28c28ab1ff60_0: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?

Hive has Spark configured as a dependency, and I have these settings in hive-site.xml:

<property><name>hive.spark.client.connect.timeout</name><value>30000ms</value></property>
<property><name>hive.spark.client.server.connect.timeout</name><value>300000ms</value></property>

Some other settings (Java Configuration Options for HiveServer2):

-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:MaxPermSize=512M -XX:+UseParNewGC -XX:-UseGCOverheadLimit

I do not see any applications in the Hadoop web UI. The logs are below. From what I can tell, Hive does not recognize where the Spark libraries are. I believe this should have been addressed by setting the Hive dependency on Spark. What am I missing? Thanks.

2022-11-04 02:46:21,069 ERROR org.apache.hive.spark.client.SparkClientImpl: [HiveServer2-Background-Pool: Thread-57]: Error while waiting for Remote Spark Driver to connect back to HiveServer2.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-common-4.1.17.Final.jar:4.1.17.Final]
    at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:103) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:100) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:77) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:131) [hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:132) [hive-exec-2.1.1-cdh6 ...
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at org.apache.hive.spark.client.SparkClientImpl$2.run(SparkClientImpl.java:495) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 1 more
2022-11-04 02:46:21,088 ERROR org.apache.hadoop.hive.ql.exec.spark.SparkTask: [HiveServer2-Background-Pool: Thread-57]: Failed to execute Spark task "Stage-1"
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session e73c125d-e40a-49fb-b2e0-28c28ab1ff60_0: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:286) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:135) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:132) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ..
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.lang.RuntimeException: Error while waiting for Remote Spark Driver to connect back to HiveServer2.
    at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:124) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:100) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:77) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:131) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 22 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-common-4.1.17.Final.jar:4.1.17.Final]
    at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:103) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 22 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-common-4.1.17.Final.jar:4.1.17.Final]
    at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:103) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:100) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:77) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:131) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 22 more
Caused by: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at org.apache.hive.spark.client.SparkClientImpl$2.run(SparkClientImpl.java:495) ~[hive-exec-2.1.1-cdh6.2.1.jar:2.1.1-cdh6.2.1]
    ... 1 more
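If it helps anyone hitting the same error: the spark-submit output that would explain "exit code 1 and error ?" is not surfaced here, so the Spark-side properties that Hive on Spark hands to spark-submit are worth checking. Below is a minimal hive-site.xml sketch of those properties; the memory values are illustrative assumptions, not values from this cluster. As the quick update above (11-04-2022 10:18 AM) shows, in this case running spark-shell --master yarn directly revealed that the real cause was the YARN memory thresholds, not missing Spark libraries.

<!-- Sketch only: core Hive-on-Spark properties. The memory figures are assumptions
     for illustration and must fit under yarn.scheduler.maximum-allocation-mb. -->
<property><name>hive.execution.engine</name><value>spark</value></property>
<property><name>spark.master</name><value>yarn</value></property>
<property><name>spark.executor.memory</name><value>512m</value></property>
<property><name>spark.driver.memory</name><value>512m</value></property>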
Labels:
- Apache Hive
- Apache Spark
09-21-2022 05:11 AM
Hi, I tried that, but I am getting a message that only DataNodes can be deleted.
09-20-2022 06:58 AM
I accidentally added a SecondaryNameNode role in the HDFS configuration in CDP 7. I tried removing the role and the hosts, but both operations are failing:
1. Removal of hosts (Hosts -> Decommission) fails with "Failed to refresh NameNode."
2. Removing it from HDFS (HDFS -> Instances -> select the SecondaryNameNode role -> Decommission) fails with "Only DataNode roles can be decommissioned."
Now the whole cluster is messed up; I can neither add nor remove hosts. Is there a way to fix this without recreating the cluster?
Labels:
- Apache Hadoop
- HDFS
09-08-2022 04:56 AM
Thanks Chethan. The solution here https://community.cloudera.com/t5/Support-Questions/Invalid-resource-request-requested-resource-type-yarn-io-gpu/m-p/243629#M205427 is apparently no longer valid. I switched back to the Capacity Scheduler, increased yarn.nodemanager.resource.memory-mb, and everything seems to be OK now.
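For reference, the change described above boils down to two YARN settings. A rough yarn-site.xml equivalent follows; the memory value is an illustrative assumption, and in CDP 7 these are set through the Cloudera Manager YARN configuration rather than by editing the file directly.

<!-- Sketch only: point YARN back at the Capacity Scheduler and raise the
     per-NodeManager memory. The 8192 MB figure is an assumption for illustration. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>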
09-04-2022 03:45 AM
Hi, I switched to the Fair Scheduler; however, I still cannot start the Resource Manager. I'm using CDP 7.4.4. Following the steps listed here https://community.cloudera.com/t5/Support-Questions/Unable-to-start-Node-Manager/td-p/285976, I made the changes to yarn-site.xml via the UI and verified the deployed files. I can see from the Resource Manager log that the Fair Scheduler allocation file is loaded:

2022-09-04 10:39:43,377 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService: Loading allocation file file:/run/cloudera-scm-agent/process/1546336134-yarn-RESOURCEMANAGER/fair-scheduler.xml

[root@10-222-53-95 1546336089-yarn-RESOURCEMANAGER]# cat /run/cloudera-scm-agent/process/1546336134-yarn-RESOURCEMANAGER/fair-scheduler.xml
<?xml version="1.0"?>
<allocations>
  <queue name="sample_queue">
    <minResources>10000 mb,0vcores</minResources>
    <maxResources>90000 mb,0vcores</maxResources>
    <maxRunningApps>50</maxRunningApps>
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <queue name="sample_sub_queue">
      <aclSubmitApps>charlie</aclSubmitApps>
      <minResources>5000 mb,0vcores</minResources>
    </queue>
    <queue name="sample_reversable_queue">
      <resevravation></resevravation>
    </queue>
  </queue>
  <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
  <queueMaxResourcesDefault>5000 mb,0vcores</queueMaxResourcesDefault>
  <!-- Queue 'secondary_group_queue' is a parent queue and may have user queues under it -->
  <queue name="secondary_group_queue" type="parent">
    <weight>3.0</weight>
    <maxChildResources>4096 mb,4vcores</maxChildResources>
  </queue>
  <user name="sample_user">
    <maxRunningApps>30</maxRunningApps>
  </user>
  <userMaxAppsDefault>5</userMaxAppsDefault>
  <queuePlacementPolicy>
    <rule name="specified" />
    <rule name="primaryGroup" create="false" />
    <rule name="nestedUserQueue">
      <rule name="secondaryGroupExistingQueue" create="false" />
    </rule>
    <rule name="default" queue="sample_queue"/>
  </queuePlacementPolicy>

2022-09-04 10:39:43,583 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x1000fdc05e46ceb
2022-09-04 10:39:43,583 INFO org.apache.hadoop.service.AbstractService: Service ResourceManager failed in state INITED
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler not instance of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AutoCreatedQueueDeletionPolicy.init(AutoCreatedQueueDeletionPolicy.java:69)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager.updateSchedulingMonitors(SchedulingMonitorManager.java:93)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager.initialize(SchedulingMonitorManager.java:123)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1517)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:853)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1271)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:328)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1558)
2022-09-04 10:39:43,584 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
2022-09-04 10:39:43,584 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
2022-09-04 10:39:43,584 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler not instance of org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AutoCreatedQueueDeletionPolicy.init(AutoCreatedQueueDeletionPolicy.java:69)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager.updateSchedulingMonitors(SchedulingMonitorManager.java:93)
    at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager.initialize(SchedulingMonitorManager.java:123)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1517)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:853)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1271)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:328)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1558)
2022-09-04 10:39:43,585 INFO org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
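Reading the stack trace, the initialization fails inside AutoCreatedQueueDeletionPolicy, a scheduling-monitor policy that only works with the Capacity Scheduler, which SchedulingMonitorManager still tries to set up while the Fair Scheduler is active. Below is a hedged yarn-site.xml sketch of the settings that appear to be involved; whether disabling the scheduling monitor is acceptable for a given workload is an assumption, and in CDP these would go into a Cloudera Manager safety valve rather than the raw file.

<!-- Sketch only, based on the stack trace above: keep the Fair Scheduler but stop the
     scheduling monitor from instantiating CapacityScheduler-only policies such as
     AutoCreatedQueueDeletionPolicy (alternatively, remove that policy from
     yarn.resourcemanager.scheduler.monitor.policies). -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>false</value>
</property>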
Labels:
- Cloudera Data Platform (CDP)