Member since: 03-02-2017
Posts: 31
Kudos Received: 1
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 3811 | 09-08-2017 05:49 AM |
|  | 2672 | 06-05-2017 04:29 AM |
|  | 2963 | 06-05-2017 04:24 AM |
|  | 2348 | 03-11-2017 06:18 PM |
|  | 8191 | 03-03-2017 01:54 AM |
04-10-2018
02:10 AM
Hi, the issue was related to a customized environment and it is fixed now. Regards, Shafi
09-20-2017
03:33 AM
Hi, I was using the 5.12 Express edition. It was not easy to start the services up after installation: the configurations needed to be changed, including the High Availability configuration and the JournalNodes. I updated the Java heap size for the HDFS instances; only the NameNode, Secondary NameNode and JournalNode roles were retained. Regards, Shafi
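For reference, a minimal sketch of the kind of heap overrides I mean. In Cloudera Manager the same values are set in each HDFS role's Java heap size field; the hadoop-env.sh form below is only the non-Cloudera-Manager equivalent, and the 4 GB figure is an illustrative assumption, not a recommendation:

```sh
# hadoop-env.sh -- illustrative heap overrides for the HDFS daemons
# (values are assumptions; size them to the memory available on your nodes)
export HADOOP_NAMENODE_OPTS="-Xms4g -Xmx4g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xms4g -Xmx4g ${HADOOP_SECONDARYNAMENODE_OPTS}"
```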
09-13-2017
06:50 AM
Hi, yes, through Oozie in a Kerberized environment it shows the Spark job as FAILED. The Application Master tries two attempts and fails, but in the YARN logs I actually see the Spark DataFrame output and the message SUCCEEDED. Regards, Shafi
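For anyone verifying the same behaviour, the real outcome can be checked in the aggregated container logs like this (the application ID is a placeholder):

```sh
# Pull the aggregated YARN logs and look for the AM's reported final status
# (replace <application_id> with the real ID from the ResourceManager UI)
yarn logs -applicationId <application_id> | grep -i "final app status"
```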
09-08-2017
05:49 AM
Hi, I added it as a dependent project to a Spark project.
09-06-2017
12:56 PM
When I submit a Spark job, it is shown as FAILED in YARN, but the detailed logs show that it actually succeeds. Why does the Application Master fail after making its attempts? I have reduced the number of attempts to 1.
User: dev
Name: com.example
Application Type: SPARK
Application Tags:
State: FAILED
FinalStatus: FAILED
Started: Wed Sep 06 09:12:15 -0500 2017
Elapsed: 21sec
Tracking URL: History
Diagnostics: Application application_1504705933896_0004 failed 1 times due to AM Container for appattempt_1504705933896_0004_000001 exited with exitCode: 0
Log Type: stderr
Log Upload Time: Wed Sep 06 09:12:38 -0500 2017
Log Length: 119115
Showing 4096 bytes of 119115 total.
heduler: ResultStage 3 (foreachPartition at HBaseContext.scala:216) finished in 0.142 s
17/09/06 09:12:36 INFO scheduler.DAGScheduler: Job 3 finished: foreachPartition at HBaseContext.scala:216, took 0.157439 s
17/09/06 09:12:36 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15e029e15d2ff0e
17/09/06 09:12:36 INFO zookeeper.ZooKeeper: Session: 0x15e029e15d2ff0e closed
17/09/06 09:12:36 INFO zookeeper.ClientCnxn: EventThread shut down
|COL1|COL2|COL3|
+----------+-------------+-------------+--------------+--------------------------+-------------------------+------------+
| Data is printed here
17/09/06 09:12:18 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
17/09/06 09:12:18 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1504705933896_0004_000001
17/09/06 09:12:19 INFO spark.SecurityManager: Changing view acls to: yarn,dev
17/09/06 09:12:19 INFO spark.SecurityManager: Changing modify acls to: yarn,dev
17/09/06 09:12:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, dev); users with modify permissions: Set(yarn, dev)
17/09/06 09:12:19 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
17/09/06 09:12:19 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
17/09/06 09:12:36 INFO ingestion: mysamplespark job executed successfully
17/09/06 09:12:36 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
17/09/06 09:12:36 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/09/06 09:12:36 INFO ui.SparkUI: Stopped Spark web UI at http://10.6.0.10:43467
17/09/06 09:12:37 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/09/06 09:12:37 INFO storage.MemoryStore: MemoryStore cleared
17/09/06 09:12:37 INFO storage.BlockManager: BlockManager stopped
17/09/06 09:12:37 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/09/06 09:12:37 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/09/06 09:12:37 INFO spark.SparkContext: Successfully stopped SparkContext
17/09/06 09:12:37 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
17/09/06 09:12:37 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/09/06 09:12:37 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1504705933896_0004
17/09/06 09:12:37 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/09/06 09:12:37 INFO util.ShutdownHookManager: Shutdown hook called
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data1/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-de7ee7f3-5e8f-49a2-b99f-37e1c0a0122c
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data0/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-b0ebe098-f97c-49c8-bc0e-317af738619c
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data8/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/container_1504705933896_0004_01_000001/tmp/spark-afaac501-7ff7-42e9-a175-6f3ab2da8465
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data6/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-0508bde0-58ff-42ce-8cca-b15e07551e05
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data4/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-7fd2da4a-1c79-426b-bbbe-16c9c7b1aeaf
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data9/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-c1b3d37f-e394-4049-9874-62a2504c4d6b
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data5/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-ce45d5fd-970b-41cb-9678-855b86254285
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data2/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-53786e4f-f263-4876-926a-ed7b98822930
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data3/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-c7ab8793-95d6-4c21-9eac-a7407375005f
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data7/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-4669faae-b4d1-4ae8-8751-194b476eb6dd
17/09/06 09:12:37 INFO util.ShutdownHookManager: Deleting directory /data8/yarn/nm/usercache/dev/appcache/application_1504705933896_0004/spark-c267166f-0c4b-483d-a7e1-6621afb3eb90
17/09/06 09:12:37 INFO Remoting: Remoting shut down
17/09/06 09:12:37 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
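For reference, the attempt limit I mention can be set like this (a sketch only; the class and jar names are placeholders, and the same --conf can also be passed through the Oozie Spark action's spark-opts):

```sh
# Sketch: limit the Spark-on-YARN ApplicationMaster to a single attempt
# (class and jar names are placeholders)
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=1 \
  --class com.example.Main \
  mysamplespark.jar
```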
Labels:
- Apache Spark
08-25-2017
03:57 AM
Hi, I have a Scala program with a main class and a main function. It is not a Spark project; it is a utility project. How can I schedule the Scala program as an Oozie action? Regards, Shafi
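For reference, what I had in mind is something like Oozie's java action (a sketch only; the workflow name, main class and properties below are placeholders), but I am not sure whether this is the recommended approach:

```xml
<!-- Sketch: run a plain Scala main class through Oozie's java action.
     The workflow name, main class and arguments are placeholders;
     the utility jar and the Scala library jar would go in the workflow's lib/ directory. -->
<workflow-app name="scala-utility-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="run-utility"/>
  <action name="run-utility">
    <java>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <main-class>com.example.UtilityMain</main-class>
      <arg>${inputDir}</arg>
    </java>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Utility action failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="end"/>
</workflow-app>
```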
Labels:
- Apache Oozie
08-25-2017
03:55 AM
Dear all, I am facing this issue in Cloudera 5.9, 5.11 and 5.12. When I submit a Spark job through Oozie, this exception comes up: /var/run/cloudera-scm-agent/process/578-yarn-NODEMANAGER/log4j.properties (permission denied). The Spark project is built with Scala 2.10.6 and Java 1.8. Regards, Shafi
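What I am currently trying as a workaround (an assumption on my side, not a confirmed fix) is shipping my own log4j.properties with the job so that the NodeManager's process-directory copy is not needed; the equivalent flags would go into the Oozie Spark action's spark-opts:

```sh
# Workaround being tested (assumption, not a confirmed fix);
# paths, class and jar names are placeholders
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files /path/to/log4j.properties \
  --class com.example.Main \
  myapp.jar
```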
Labels:
- Apache Oozie
- Apache Spark
08-07-2017
10:04 PM
Spark 2 is not yet supported, but we can install it separately. You will have two submit commands after installation: one for 1.6.0 (the default) and one for the Spark 2 version 2.0.
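To illustrate the two commands (the class and jar names are placeholders):

```sh
# Default CDH Spark (1.6.0)
spark-submit --master yarn --class com.example.Main myapp.jar

# Separately installed Spark 2 service
spark2-submit --master yarn --class com.example.Main myapp.jar
```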
08-07-2017
03:21 AM
Hi, I resolved this issue by configuring proper Java heap memory for the NameNode and configuring HA on the NameNode. Regards, Shafi
08-07-2017
03:17 AM
Hi, check whether High Availability is configured properly or not. Regards, Shafi
06-05-2017
04:29 AM
It was due to a rogue node. I had to delete that node from the cluster. This is fine now.
06-05-2017
04:24 AM
Hi, I had to delete that node from Azure. Then Cloudera was not able to find it, and I was able to stop and delete the service. Earlier I had only stopped that node in Azure, and somehow Cloudera was still able to attach to it. Anyhow, that node was of no use, so I deleted it. I was able to move the Activity Monitor to a different host by creating the database for it. The decommissioning and deletion of the host from the cluster is also done. The parcel distribution is also fine now.
06-01-2017
04:00 AM
The screenshot is given below. The highlighted service's host is not available in the cluster. The service is still in the started condition and it is not getting stopped. I am not able to move the Activity Monitor to another role instance either; there is no option. Can you assist me in resolving this? Do I need to create the am and nav databases on a different host? At present I want to stop the service, delete this host, and add this service to a new role instance.
06-01-2017
03:56 AM
Hi, recently I resized the disks on the node and restarted the cluster. My Parcels screen looks like the one below. I have to install Spark 2 and Kafka; can you please suggest how to resolve this issue? The parcels have been in the activating state for 2 days.
Labels:
- Cloudera Manager
05-30-2017
12:44 AM
Hi, I have tried that. The service is not getting stopped. Even Inspect Hosts on that node results in an error. So when I want to decommission the node, it says the Cloudera Manager Activity Monitor is running and to stop it first. When I try to stop the Activity Monitor on that node, it times out after 90 seconds.
05-28-2017
12:48 PM
Hi, I have a 4-node cluster, and the node running the Activity Monitor role of the Cloudera Management Service has gone down. In the Cloudera Management Service I am not getting an option to move this role to another host, which is frustrating. In the configuration page I have also changed the property to a different host, but it still does not take effect. I need to decommission this host and add another node. How do I resolve this issue? I want to move the Activity Monitor role instance to another host, and the wizard is not showing any option either.
Labels:
- Cloudera Manager
05-03-2017
08:15 AM
Hi, I have only a master node and data nodes in my Cloudera cluster. I want to add a new host and assign it as a gateway node, which the team will connect to and work on. Please help with documentation links and an optimal hardware configuration for the gateway node.
Labels:
- Cloudera Manager
- Manual Installation
04-13-2017
05:22 AM
1 Kudo
I have installed Cloudera Enterprise Data Hub in Azure. The CDH version is 5.10.0, and the Spark version I have there is 1.6.0. Can I upgrade my Spark to a stable version such as Spark 2.0 or 2.1? It is a parcel-based installation; can I upgrade the parcels? Please share any related documentation or links.
Labels:
- Apache Spark
- Cloudera Manager
03-21-2017
11:41 PM
Hi, I am removing the Hive service from Cloudera Manager, and I am getting the following notification from Hue: "Missing Required Value: HiveService". Is Hive tightly integrated with Hue? How do I resolve this? I am not seeing a "none" option.
Labels:
- Apache Hive
- Cloudera Hue
03-11-2017
06:18 PM
Hi, I opted for Cloudera Enterprise Data Hub from the Azure Marketplace. Once it was installed, I customized it. Azure installed the cluster in a VNet with subnets, and I am able to get it working. However, the option above is not working, so I am using this one now.
03-04-2017
09:29 AM
Thank you so much. I have tried the installation on Google Cloud; it works like a charm and installs in an hour, no issues at all. There is something with the Azure platform: the installation is not straightforward. I tried Path B as well as a manual installation and am getting a lot of problems. Is there any reference link for installing the latest Cloudera on the Azure platform? Please share the link. Otherwise, there are no issues at all on Google Cloud.
03-03-2017
05:57 PM
Hi, I have tried all possible combinations to install the Cloudera Express edition, and every time I get stuck at parcel distribution. I am installing on Azure using Ubuntu 14.04 LTS. Port 7191 and the other parcel configuration are in place, there is no firewall, and all the mentioned ports are working. The heartbeat issue has been a frustration for the last two days. The agent logs are fine; only the server log has one exception. Please help resolve this at the earliest, as I need to choose between opting for the Enterprise edition and another distribution.
Labels:
- Cloudera Manager
03-03-2017
01:54 AM
Hi, I have resolved this by a) keeping /etc/hosts with only localhost entries, and b) updating the listening host and IP in the config.ini of all nodes. It worked.
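For anyone hitting the same issue, the agent settings I mean are in /etc/cloudera-scm-agent/config.ini on every node; the hostnames and IP below are illustrative placeholders:

```ini
# /etc/cloudera-scm-agent/config.ini (illustrative values only)
[General]
# Hostname of the Cloudera Manager server
server_host=cm-server.example.com
# Report an explicit hostname/IP instead of relying on reverse DNS
listening_hostname=node1.example.com
listening_ip=10.0.0.5
```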
03-02-2017
10:35 PM
I have the same problem. Can you help with what you did to resolve this?
03-02-2017
05:47 PM
Hi, I am installing the latest Cloudera Express edition on Azure using Ubuntu Trusty 14.04 LTS, and I am getting this error. /etc/hosts has public IPs defined in it. The dig command is able to do a forward lookup, but the reverse lookup is not working properly. All the ports are open in the Azure portal. I kill the process on port 9001 every time I retry the installation. The SCM agent is waiting for a heartbeat. This is frustrating. I have followed and cross-checked various links in this forum, and nothing is working. Kindly suggest how to resolve this.
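For reference, these are the checks I am running to verify forward and reverse resolution (the IP below is a placeholder):

```sh
# Forward lookup of this host's FQDN (should return the private IP)
host "$(hostname -f)"
# Reverse lookup of the private IP (should return the same FQDN)
dig -x 10.0.0.5 +short
# Python 2 check similar to what the Cloudera agent performs
python -c 'import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())'
```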
Labels:
- Cloudera Manager