Member since: 07-15-2019
Posts: 12
Kudos Received: 7
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1311 | 08-30-2017 06:18 AM |
08-30-2017 06:18 AM
1 Kudo
You can restart after you set all the parameters, but it might be difficult to troubleshoot if some of the services fail to start.
08-29-2017 08:58 AM
3 Kudos
You should be able to drop the table using Phoenix sqlline (see https://phoenix.apache.org/language/#drop_table), e.g.: sqlline.py hostname.com /location/of/file/drop_table.sql
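As a rough sketch, the file passed to sqlline.py would just contain the DROP statement; the schema/table name below is a placeholder, not from your environment:

```sql
-- Contents of /location/of/file/drop_table.sql (table name is hypothetical)
DROP TABLE IF EXISTS MY_SCHEMA.MY_TABLE;
```

sqlline.py then takes the ZooKeeper quorum host and the script path, as in the command above.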
08-29-2017 08:47 AM
This link gives a good overview of how YARN works and of the scheduling algorithms (capacity and fair scheduler) that the ResourceManager uses: https://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/ A tutorial on configuring the YARN capacity scheduler through Ambari is available at https://hortonworks.com/hadoop-tutorial/configuring-yarn-capacity-scheduler-ambari/ Does this help?
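For a concrete feel, here is a minimal capacity-scheduler sketch; the queue names and percentages below are made-up examples, not recommendations:

```xml
<!-- capacity-scheduler.xml: two hypothetical queues splitting cluster capacity -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,analytics</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>30</value>
</property>
```

The tutorial linked above walks through setting this kind of property from Ambari rather than by editing the file directly.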
08-29-2017 08:16 AM
@uri ben-ari You need to look at the services that are failing and check their log files to understand why they fail, then fix those issues. You can always revert to the last working config from the Ambari UI. You can also restart all of the required components from the Ambari UI under Add Services -> Restart All affected.
08-29-2017 08:07 AM
1 Kudo
@Rohit Khose YARN is the resource negotiator for your cluster. Spark (like other Hadoop applications) requests the resources specified by the user from YARN and uses them if they are available. You can enable Spark dynamic allocation so the Spark application can scale executors up or down depending on need: https://spark.apache.org/docs/1.6.1/configuration.html#dynamic-allocation
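As a minimal sketch of what enabling it could look like in spark-defaults.conf (the executor bounds are example values; dynamic allocation on YARN also needs the external shuffle service enabled on the NodeManagers):

```
# Example spark-defaults.conf entries (values are illustrative)
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   10
```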
07-12-2017 02:08 AM
1 Kudo
You can look into turning on the `spark.dynamicAllocation.enabled` setting; it releases any unused executors back to the cluster and requests them again when they are needed (https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation). Alternatively, after you have completed your analysis, you can restart the Spark interpreter in Zeppelin; due to lazy evaluation, Zeppelin will only start the Spark context when you need it.
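If you go the dynamic-allocation route, here is a sketch of the properties you could add to Zeppelin's Spark interpreter settings (the executor count and timeout are example values):

```
# Spark interpreter properties in Zeppelin (values are illustrative)
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.initialExecutors     2
spark.dynamicAllocation.executorIdleTimeout  60s
```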
07-11-2017 02:20 AM
1 Kudo
@Krishna Kumar You don't need to stop the Spark context when you are using Zeppelin. Can you remove the sc.stop() line and try again?
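For illustration, a minimal Zeppelin %spark paragraph; the sample RDD is hypothetical, and the point is only that the interpreter's `sc` is reused and never stopped:

```scala
// Zeppelin's Spark interpreter already provides `sc`; do not create or stop your own.
val nums = sc.parallelize(1 to 100)   // hypothetical sample data
println(nums.sum())                   // runs on the shared SparkContext
// No sc.stop() here: stopping the context breaks subsequent paragraphs.
```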