Member since: 09-20-2018
Posts: 366
Kudos Received: 0
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 3067 | 05-14-2019 10:47 AM
09-03-2019
10:55 AM
Hi, could you please look into the Spark History Server logs and check for any errors? Also, grep for the application ID in the Application History; that will show whether the application is still in progress or has completed. Thanks, AKR
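For example, a minimal sketch of that check from the shell, assuming the History Server writes its logs under the default CDH directory /var/log/spark/ and using a hypothetical ID application_1234567890_0001 (adjust both to your environment):
grep -iR error /var/log/spark/                        # look for errors in the History Server logs
grep -R application_1234567890_0001 /var/log/spark/   # trace a specific application through the logs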
08-26-2019
10:29 AM
Hi, It is very difficult to identify which in-progress files are still active. Please look for the RUNNING jobs in the RM Web UI and remove the in-progress files that are not associated with any job in the RUNNING state. To check the RUNNING jobs from the RM Web UI, please follow these steps (a command-line alternative is sketched below):
1. Log in to Cloudera Manager.
2. Choose YARN as the service.
3. Click Web UI.
4. Choose Resource Manager Web UI.
5. A new screen will be displayed showing the list of all applications.
6. On the left-hand side, under the Applications link, click the "Running" link.
7. The "Running" page shows the in-progress jobs that are still active.
8. Remove the in-progress files that do not belong to any job listed in the RUNNING state.
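If you have shell access to a gateway host, a minimal command-line sketch of the same check, assuming the YARN client is configured there:
yarn application -list -appStates RUNNING   # lists only the applications currently in the RUNNING state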
08-24-2019
06:54 AM
Hi, Please check the logs for java.net.BindException: Address already in use. If it is seen, it clearly indicates that the port is already in use, i.e. some other process is bound to it. Possible causes:
1. A "port scanner" or some other service is occupying the port.
2. The accumulator is still running on those ports from an earlier run.
3. Several services may be running on this port.
Please identify which service or application is running on this port and try to stop it.
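For example, a minimal sketch for finding the offending process, assuming root access and a hypothetical port 18080 (substitute the port reported in the BindException):
netstat -tulpn | grep 18080   # shows the PID and process name bound to the port
lsof -i :18080                # equivalent check with lsof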
07-31-2019
07:08 AM
Hi Alex, Did you check in the Oozie configuration or in the Oozie logs whether the event logs are being written to some other path apart from the path that was configured in CM? Thanks, AKR
07-26-2019
09:59 AM
Hi Pal, Can you grep for the particular application ID in the /user/spark/applicationHistory folder to confirm whether the job has completed successfully or is still in the .inprogress state? Thanks, AKR
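For example, a minimal sketch of that check, assuming a hypothetical ID application_1234567890_0001:
hdfs dfs -ls /user/spark/applicationHistory | grep application_1234567890_0001   # a name ending in .inprogress means the job has not completed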
07-19-2019
10:28 AM
Hi, This error happens when the Spark executor memory is too small for Spark to start. Please refer to the upstream JIRA for more details: https://issues.apache.org/jira/browse/SPARK-12759 Thanks, AKR
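For example, a minimal sketch of raising the executor memory at submit time; 2g is an assumed value and your_app.py is a hypothetical application, size both to your workload:
spark-submit --executor-memory 2g --driver-memory 2g your_app.py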
07-17-2019
09:09 AM
Any "manually" killed application not showing up in the history server. In resource manager I am not able to browse the tasks. We care using Cloudera 5.14.X Any application killed by yarn does show up in history server and able to browse tasks in resource manager.
07-12-2019
11:43 AM
> Key: SPARK-23476
> URL: https://issues.apache.org/jira/browse/SPARK-23476
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell
> Affects Versions: 2.3.0
> Reporter: Gabor Somogyi
> Priority: Minor
>
> If spark is run with "spark.authenticate=true", then it will fail to start in local mode.
> {noformat}
> 17/02/03 12:09:39 ERROR spark.SparkContext: Error initializing SparkContext.
> java.lang.IllegalArgumentException: Error: a secret key must be specified via the spark.authenticate.secret config
> at org.apache.spark.SecurityManager.generateSecretKey(SecurityManager.scala:401)
> at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:221)
> at org.apache.spark.SparkEnv$.create(SparkEnv.scala:258)
> at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:199)
> at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:290)
> ...
> {noformat}
> It can be confusing when authentication is turned on by default in a cluster, and one tries to start spark in local mode for a simple test.
> *Workaround*: If {{spark.authenticate=true}} is specified as a cluster wide config, then the following has to be added
> {{--conf "spark.authenticate=false" --conf "spark.shuffle.service.enabled=false" --conf "spark.dynamicAllocation.enabled=false" --conf "spark.network.crypto.enabled=false" --conf "spark.authenticate.enableSaslEncryption=false"}}
> in the spark-submit command.
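A minimal sketch of the workaround quoted above applied to a local-mode run; local[2] and your_app.py are placeholders:
spark-submit --master local[2] \
  --conf "spark.authenticate=false" \
  --conf "spark.shuffle.service.enabled=false" \
  --conf "spark.dynamicAllocation.enabled=false" \
  --conf "spark.network.crypto.enabled=false" \
  --conf "spark.authenticate.enableSaslEncryption=false" \
  your_app.py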
06-15-2019
12:55 AM
@diebestetest wrote: Hi, could you please share the entire console logs for further analysis? Thanks, Arun
Sorry, not familiar with the topic.
06-12-2019
08:07 AM
After making a small change to the location of the jar, we got it working. The steps were as follows; we added the HBase jars to the executor classpath via the following setting:
1. Signed in to Cloudera Manager.
2. Went to the Spark on YARN service.
3. Went to the Configuration tab.
4. Typed "defaults" in the search box.
5. Selected Gateway in the scope.
6. Added the entry: spark.executor.extraClassPath=/hdfs03/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar
Note: we had to use the hdfs directory path.
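An equivalent per-job sketch of the same setting passed at submit time instead of through Cloudera Manager; your_app.py is a hypothetical application:
spark-submit --conf spark.executor.extraClassPath=/hdfs03/parcels/CDH/lib/hbase/lib/htrace-core-3.2.0-incubating.jar your_app.py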