Member since: 02-10-2015
Posts: 84
Kudos Received: 2
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 13244 | 06-04-2015 06:09 PM
 | 7257 | 05-22-2015 06:59 AM
 | 5881 | 05-13-2015 03:19 PM
 | 2376 | 05-11-2015 05:22 AM
05-14-2015
04:39 AM
Thank you for the explanation! BTW, where do you see the 'Comments'? I don't have that! << The comment on the setting in CM should have explained it for you: "Maximum size in bytes for the Java Process heap memory. Passed to Java -Xmx." >>
05-13-2015
03:19 PM
Basically, I have to instantiate these steps via a CM API Python script. To add the History Server:
1. Go to the Spark service.
2. Click the Instances tab.
3. Click the Add Role Instances button.
4. Select a host in the column under History Server, then click OK.
5. Click Continue.
6. Check the checkbox next to the History Server role.
7. Select Actions for Selected > Start and click Start.
8. Click Close when the action completes.
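A rough, untested cm_api sketch of those same steps (pip install cm-api). The CM host, credentials, cluster name 'cluster', service name 'spark0', and role name 'spark0-SHS-1' are all assumptions from my setup; adjust for yours.

```python
# Untested sketch: add and start a Spark History Server role via cm_api.
# Host, credentials, cluster/service/role names are assumptions.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
spark = api.get_cluster("cluster").get_service("spark0")

# Pick the host that should run the History Server; get_all_hosts()
# shows the hostId values CM expects.
host_id = api.get_all_hosts()[0].hostId

# Create the role, then start just that role.
spark.create_role("spark0-SHS-1", "SPARK_HISTORY_SERVER", host_id)
cmd = spark.start_roles("spark0-SHS-1")[0]
cmd.wait()  # block until the start command completes
```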
05-13-2015
02:14 PM
1 Kudo
I have been using CM API Python scripts to add Hadoop services to a CDH cluster. I would like to add the Spark History Server role by calling a script. Could you please provide me with some samples/links/docs for creating it? Thank you!
05-13-2015
02:08 PM
Actually, here is what I have deployed/configured for Spark:

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
scm=> select * from services where service_id = 24;
 service_id | optimistic_lock_version |  name  | service_type | cluster_id | maintenance_count | display_name | generation
------------+-------------------------+--------+--------------+------------+-------------------+--------------+------------
         24 |                      34 | spark0 | SPARK        |         25 |                 0 | spark0       |          1
(1 row)

scm=> select role_type, configured_status, host_id from roles where service_id = 24;
  role_type   | configured_status | host_id
--------------+-------------------+---------
 SPARK_WORKER | RUNNING           |       1
 GATEWAY      | NA                |       4
 GATEWAY      | NA                |       5
 GATEWAY      | NA                |       6
 GATEWAY      | NA                |       3
 GATEWAY      | NA                |       1
 GATEWAY      | NA                |       2
 SPARK_WORKER | RUNNING           |       2
 SPARK_WORKER | RUNNING           |       6
 SPARK_WORKER | RUNNING           |       8
 SPARK_WORKER | RUNNING           |       5
 SPARK_WORKER | RUNNING           |       7
 SPARK_WORKER | RUNNING           |       3
 SPARK_MASTER | RUNNING           |       4
(14 rows)
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>

That tells me that the 'Spark History Server' role is not installed. Do I have to install it, and if so, how? Thank you!
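Side note: the same role inventory can be pulled through the cm_api client instead of querying the scm database directly. A minimal sketch; the connection details and the 'spark0' service name are assumptions:

```python
# Minimal sketch: list Spark role types and states via the CM API
# instead of querying the scm database. Connection details assumed.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
spark = api.get_cluster("cluster").get_service("spark0")

for role in spark.get_all_roles():
    print("%-20s %-10s %s" % (role.type, role.roleState, role.hostRef.hostId))
```

If SPARK_HISTORY_SERVER never shows up in that listing, the role simply hasn't been created yet.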
05-13-2015
01:25 PM
I understand the Spark History Server is an independent module (not related to YARN's Job History Server). I have deployed CDH 5.4 (via parcels), but the Spark History Server is not there!
<Q1> How do I install the Spark History Server? Via parcels or via RPMs?
<Q2> Is there any special configuration for deploying the Spark History Server?
<Q3> What port does the Spark History Server run on?
<Q4> So far I have deployed 1 Spark Master (Master Web UI) and several Spark Workers. What other 'services' could be deployed? For instance, YARN has: ResourceManager Web UI, HistoryServer Web UI, Dynamic Resource Pools.
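(Regarding <Q3>: I believe the CDH default for the History Server web UI is port 18088, while upstream Spark defaults to 18080. A throwaway probe like the one below can confirm it once the role is up; the host name is a placeholder.)

```python
# Throwaway probe for the History Server UI; 18088 is the CDH default
# (18080 upstream). The host name is a placeholder, not a real host.
import urllib2

try:
    urllib2.urlopen("http://shs-host.example.com:18088", timeout=5)
    print("History Server UI is answering on 18088")
except Exception as e:
    print("not reachable: %s" % e)
```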
Labels:
- Apache Spark
05-13-2015
09:48 AM
I have reset the values of YARN's Java Heap Size of NodeManager and Java Heap Size of ResourceManager via CM, then restarted the cluster.
<Q1> In what file (xml, sh, map, py) do these parameters live? (After the cluster restarted, the only files updated under /etc/hadoop/conf.cloudera.yarn were topology.map and topology.py; files such as core-site.xml and mapred-site.xml haven't changed!) [BTW, when I execute ps aux | grep resourcemanager (or ps aux | grep nodemanager), are the -Xms and -Xmx flags what tell me the Java Heap Size?]
<Q2> How do you determine the best value for these parameters (Java Heap Size)? Is it a trial-and-error scenario, or are there best practices to follow?
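For the record, those heap values can also be read and changed through the cm_api client rather than hunting for an XML file (CM hands them straight to the daemon command line as -Xmx; they never land in /etc/hadoop/conf). A sketch, assuming a YARN service named 'yarn' and the CDH 5 config key node_manager_java_heapsize:

```python
# Sketch: inspect/update the NodeManager heap via the CM API. Service
# and config-key names are assumptions based on CDH 5 defaults.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
yarn = api.get_cluster("cluster").get_service("yarn")

for group in yarn.get_all_role_config_groups():
    if group.roleType == "NODEMANAGER":
        # Summary view only lists values overridden from the default,
        # so this prints None until the heap has been changed once.
        print(group.get_config().get("node_manager_java_heapsize"))
        # Set to 1 GiB (value is in bytes); restart the NodeManagers
        # afterwards for the new -Xmx to take effect.
        group.update_config({"node_manager_java_heapsize": "1073741824"})
```

The ResourceManager heap should work the same way via resource_manager_java_heapsize on the RESOURCEMANAGER config group.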
Labels:
- Apache Hadoop
- Apache YARN
- Cloudera Manager
05-12-2015
12:34 PM
Thank you for the clarification!
05-11-2015
05:22 AM
After I commented out the portion of my script that creates the RMAN role and reran the script, it worked!
05-10-2015
08:18 PM
Hi JM, thank you again! The issue (under-replicated & corrupt blocks) started when I added 2 new nodes to an existing CDH 5.4 cluster. I went and selectively removed and restored files back into HDFS, and HDFS is now HEALTHY. However, I haven't pinpointed the root cause! I have opened another thread listing more details about the corrupted-blocks issue. I'll close this one and continue with the other one. Thanks for all your help. Happy Mother's Day 🙂
05-10-2015
07:49 PM
Hi JM, that worked!! I am now able to start up HBase with all its servers! Thanks for your assistance and persistence 🙂 Cheers, TS