Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3432 | 10-13-2017 09:42 PM |
 | 6184 | 09-14-2017 11:15 AM |
 | 3176 | 09-13-2017 10:35 PM |
 | 5100 | 09-13-2017 10:25 PM |
 | 5733 | 09-13-2017 10:05 PM |
01-03-2017
09:30 AM
1 Kudo
Those scripts, master and slave, are for running a standalone Spark cluster. They are not needed if you are running Spark on YARN, and the preference is to run Spark on YARN so you don't have to divvy up the cluster resources yourself. The history script is for the Spark History Server; you can start and stop that through CM. My best guess as to why the UI doesn't show up when you start the slave is that the master was no longer running. Were you able to launch the spark-shell afterwards but not access the UI?
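For reference, a rough sketch of the two modes (the paths and master URL below are assumptions based on a typical parcel install, not taken from your cluster):

```
# Standalone mode only: the master/worker daemons have to be started by hand
/opt/cloudera/parcels/CDH/lib/spark/sbin/start-master.sh
/opt/cloudera/parcels/CDH/lib/spark/sbin/start-slave.sh spark://master-host:7077

# On YARN there is nothing to start; just point the tools at YARN
spark-shell --master yarn
spark-submit --master yarn --class org.apache.spark.examples.SparkPi /path/to/spark-examples.jar 10
```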
01-02-2017
11:03 PM
1 Kudo
The 'K' needs to be capitalized.
01-02-2017
10:58 PM
This line:

TwitterAgent.sources.Twitter.consumerkey = XXX

should be:

TwitterAgent.sources.Twitter.consumerKey = XXX
01-02-2017
10:31 PM
It thinks the credentials are missing. How did you set the OAuth consumer and token settings? Please mask the actual values, but share the files and/or the method by which they are provided to Flume.
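For comparison, a quick sanity check you could run (the conf path and agent/source names below are assumptions based on the usual TwitterAgent example, not from your setup; keep the real values masked):

```
# Assumed conf path and agent/source names -- check that all four OAuth properties are present and spelled correctly
grep -i 'TwitterAgent.sources.Twitter' /etc/flume-ng/conf/flume.conf

# Expect to see something like (values masked):
#   TwitterAgent.sources.Twitter.consumerKey = XXXX
#   TwitterAgent.sources.Twitter.consumerSecret = XXXX
#   TwitterAgent.sources.Twitter.accessToken = XXXX
#   TwitterAgent.sources.Twitter.accessTokenSecret = XXXX
```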
01-02-2017
04:48 PM
4 Kudos
Hi Shilpa, I am assuming that you selected the Spark service from the base CDH 5.9 parcels/packages and didn't fetch it separately, so you are not running a standalone Spark cluster alongside the Hadoop cluster. In any case, the Spark Gateway, like the other Gateways (HDFS, YARN, HBase, etc.), only installs the libraries, binaries, and configuration files required to use the command-line tools, in this case spark-submit and spark-shell (maybe the pyspark files as well, not 100% sure on that). There is no service to start, stop, or monitor. That is why you can't start it and why it shows gray instead of green.

Given the assumption above, the Gateway automatically sets up the tools to run in YARN mode, meaning the Spark application runs in YARN instead of a standalone Spark cluster. Find the ResourceManager UI; you will see the application listed there and can access the AM and worker logs. From there you will be directed to the Spark History Server UI (so you could go there directly if you wanted, but it is good to get used to the RM interface).
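If you want to get at the same information from the command line, something like this works too (the application id, jar path, and class below are placeholders):

```
# List the YARN applications and pull the logs for one of them (application id is a placeholder)
yarn application -list
yarn logs -applicationId application_1483000000000_0001

# Or submit explicitly to YARN and then watch it in the ResourceManager UI
spark-submit --master yarn --deploy-mode client \
  --class org.apache.spark.examples.SparkPi /path/to/spark-examples.jar 10
```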
01-02-2017
04:41 PM
2 Kudos
Is that log snippet from the AM or task logs? Have you checked the other? I have seen this message on the AM log side when something has terminated a task and the task log contains the detail as to why. Is pre-emption enabled? This message will show up on the AM side when tasks are killed by YARN once pre-emption kicks in; pre-emption works at the container level, and there is always the risk that an AM container is killed in the process. You could run the driver on the client you launch from (--deploy-mode client) if you aren't already. Turn the logging up to DEBUG, either in the Spark Gateway configuration or by passing your own log4j.properties to the application. Also check whether spark.dynamicAllocation.enabled is on, as I have seen SIGTERM (signal 15) messages when it trims down the containers; Cloudera also does not recommend having it turned on.
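Putting those suggestions together, a sketch of the submit command (the class, jar, and log4j.properties path are placeholders):

```
# Placeholders for class/jar/log4j path -- run the driver on the launching client, ship a DEBUG
# log4j.properties to the executors, and turn off dynamic allocation to see if the SIGTERMs stop
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.dynamicAllocation.enabled=false \
  --files /path/to/log4j.properties \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --class com.example.MyApp /path/to/myapp.jar
```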
01-02-2017
01:19 PM
1 Kudo
Yes. Go through your process. The access it grants is generally less risky, and it is the correct way to install Hadoop/CDH. https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_cm_users_principals.html
12-31-2016
10:36 PM
What do you see in the Hive CLI or beeline? It is possible the HDFS dirs/files were created outside of Hive and the metadata doesn't exist.
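A quick way to compare the two sides (the database, table, and warehouse path below are placeholders for whatever you are actually using):

```
# Placeholders for database/table/path -- compare what is sitting in HDFS with what the metastore knows about
hdfs dfs -ls /user/hive/warehouse/mydb.db
beeline -u jdbc:hive2://localhost:10000 -e 'SHOW TABLES IN mydb; DESCRIBE FORMATTED mydb.mytable;'
```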
12-31-2016
10:29 PM
The KDC and admin server settings in krb5.conf should point to actual DC hostnames and not just the domain name. Also check the logs directly on the hosts; I have had better luck sifting through the logs there versus the CM role logs. It is failing to authenticate, but there is no clear info on why. Try logging in using the impala keytab generated in the running process directory: kinit -kt /path/to/impala.keytab.
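Roughly like this (the numbered process directory, realm, and principal below are placeholders; the directory name on your host will differ):

```
# Placeholders for process dir, realm, and principal -- find the keytab CM generated for the running
# Impala role, confirm the principals inside it, and try authenticating with it directly
ls -d /var/run/cloudera-scm-agent/process/*IMPALAD*
klist -kt /var/run/cloudera-scm-agent/process/1234-impala-IMPALAD/impala.keytab
kinit -kt /var/run/cloudera-scm-agent/process/1234-impala-IMPALAD/impala.keytab impala/$(hostname -f)@EXAMPLE.COM
```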