Member since: 01-01-2016 · Posts: 12 · Kudos Received: 2 · Solutions: 0
05-08-2019 06:57 PM · 1 Kudo
@jbowles Yes, it is advisable to clean up the NodeManager local directories when changing the LCE (Linux Container Executor) setting. Please see https://www.cloudera.com/documentation/enterprise/5-10-x/topics/cdh_sg_other_hadoop_security.html#topic_18_3

Important: Configuration changes to the Linux container executor could result in local NodeManager directories (such as usercache) being left with incorrect permissions. To avoid this, when making changes using either Cloudera Manager or the command line, first manually remove the existing NodeManager local directories from all configured local directories (yarn.nodemanager.local-dirs), and let the NodeManager recreate the directory structure.
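As a hedged sketch of that cleanup (the paths below are illustrative only; substitute the actual directories listed in your yarn.nodemanager.local-dirs setting):

    # Stop the NodeManager role on the host first (via Cloudera Manager or your init system).
    # Then remove the existing NodeManager local directory contents for each configured
    # local dir -- the path /data/yarn/nm is an assumed example, not your real value.
    rm -rf /data/yarn/nm/usercache /data/yarn/nm/filecache /data/yarn/nm/nmPrivate
    # Restart the NodeManager; it recreates the directory structure with correct permissions.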
09-07-2016 03:35 PM · 1 Kudo
Hi, CM will continually retry client config deployment. This is particularly helpful if a host is temporarily unavailable and comes online later: it makes it easier for the administrator to reason about the state of client configs, since you don't have to worry about re-executing the command on the few random hosts that weren't operational at the time of the deploy. So the retries are intended. Ideally, the deploy scripts should be so simple that they can't really fail.

If you're just debugging your changes, you can stop the CM agent (service cloudera-scm-agent stop) on that host to stop the wiping/retry logic and make it easier to debug.

Thanks, Darren
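For reference, a minimal sketch of that debugging workflow (service names follow the standard Cloudera Manager agent packaging; adjust for your init system):

    # Stop the CM agent so it stops retrying / wiping the client config on this host
    service cloudera-scm-agent stop
    # ... inspect or hand-edit the deployed client configuration while debugging ...
    # Restore normal behaviour when done
    service cloudera-scm-agent start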
08-26-2016 09:18 AM
It has always been documented in "Known Issues": https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ki.html

Generally speaking, there aren't differences. Not supported != different. However, some pieces aren't shipped, like the Thrift server and SparkR. Usually differences crop up when upstream introduces a breaking change and it can't be followed in a minor release. For example, the default in CDH is for the "legacy" memory config parameters to be active, so that the default memory configuration doesn't change in 1.6. Sometimes it relates to other things in the platform that can't change; for example, I think the Akka version is (or was) different because other components in Hadoop needed a different version.

The biggest example of this, IMHO, is Spark Streaming + Kafka. Spark 1.x doesn't support Kafka 0.9+, but CDH 5.7+ had to move to it to get security features. So CDH Spark 1.6 will actually only work with 0.9+, because the Kafka differences are mutually incompatible. Good in that you can use a recent Kafka, but, a difference! Most of it, though, are warnings about incompatibilities between what Spark happens to support and what CDH ships in other components.
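As an illustration of the legacy memory default mentioned above (a hedged sketch: spark.memory.useLegacyMode and spark.storage.memoryFraction are the upstream Spark 1.6 knobs, the application class and jar names are made up, and whether you need to set the flag explicitly depends on your CDH defaults):

    # Upstream Spark 1.6 switches to the unified memory manager by default; CDH keeps
    # the legacy parameters active so memory behaviour doesn't change on upgrade.
    # To request the same behaviour explicitly on a submit:
    spark-submit \
      --conf spark.memory.useLegacyMode=true \
      --conf spark.storage.memoryFraction=0.6 \
      --class com.example.MyApp myapp.jar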
06-01-2016 07:36 PM
Hi, I also encountered the same issue, and I also had to declare the spark_on_yarn dependency. How can I fix or work around this? Could you give an example solution? Thanks!