Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1635 | 07-09-2019 12:53 AM
 | 10120 | 06-23-2019 08:37 PM
 | 8310 | 06-18-2019 11:28 PM
 | 9058 | 05-23-2019 08:46 PM
 | 3682 | 05-20-2019 01:14 AM
07-21-2014
12:13 PM
1 Kudo
If it's of any help, the error originates from the client's values. The client here is Hue, which is sending the workflow's jobtracker value as the MR1 JobTracker URL instead of the MR2 ResourceManager one. Since the Oozie server is configured to talk only to the YARN service, it rejects the client (Hue) because the JT location it passed is not in the current whitelist of RMs. You should look further into why Hue is still sending the MR1 properties instead of the MR2 ones. Perhaps the values are embedded in the WF application, or perhaps the Hue service has not been configured to also talk to the YARN service (via its CM Hue Configuration page, just as you switched Oozie to talk to YARN). The Oozie server isn't at fault here.
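As an illustration, the workflow's job.properties (or the equivalent Hue-submitted configuration) should carry the ResourceManager address rather than the JobTracker one. The host names below are placeholders; 8021 is the default MR1 JobTracker port and 8032 the default YARN RM port:

# MR1-style value that a YARN-only Oozie server will reject:
#   jobTracker=jt-host.example.com:8021
# MR2/YARN value it expects instead (ResourceManager address):
jobTracker=rm-host.example.com:8032
nameNode=hdfs://nn-host.example.com:8020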
07-20-2014
09:30 AM
5 Kudos
With CDH4 and CDH5 there is no longer a 'HADOOP_HOME' env-var; it has been renamed to 'HADOOP_PREFIX', which for a default parcel environment can be set to /opt/cloudera/parcels/CDH/lib/hadoop.
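For example, with the default parcel layout (adjust the path if your parcels are installed elsewhere):

~> export HADOOP_PREFIX=/opt/cloudera/parcels/CDH/lib/hadoop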
07-20-2014
09:02 AM
Have you also added a KT_RENEWER role to Hue after enabling security, as the documentation describes (http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.8.2/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_enable_hue_sec_s11.html)? This is necessary to allow Hue to talk to the JobTracker Thrift plugin with Kerberos+SASL, and will remove such exceptions.
07-20-2014
08:56 AM
1 Kudo
The CM API does support fetching metrics of a given activity, as described at http://cloudera.github.io/cm_api/apidocs/v6/path__clusters_-clusterName-_services_-serviceName-_activities_-activityId-_metrics.html.
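A quick sketch with curl (the CM host, credentials, cluster/service names, and activity id below are placeholders; 7180 is the default CM port):

~> curl -u admin:admin 'http://cm-host.example.com:7180/api/v6/clusters/Cluster1/services/mapreduce1/activities/<activityId>/metrics'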
07-20-2014
08:35 AM
3 Kudos
Leftover /tmp/scm* directories from earlier attempts may cause this issue. Look for such directories on the hanging hosts, remove them, and then retry the installation from the UI.
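For example, on each hanging host:

~> ls -ld /tmp/scm*
~> sudo rm -rf /tmp/scm*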
07-20-2014
08:20 AM
1 Kudo
How large are your Parquet input files? If you are copying the files around, have you made sure to follow the block-size preservation method described at http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_parquet.html? Within Hive, you can run "set dfs.blocksize=1g;" before issuing the queries that create the files.
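For illustration (paths and table names are placeholders, and the CTAS syntax assumes a Hive version with native Parquet support): when copying existing files, preserve their block size with distcp's -pb flag, and when writing new files from Hive, raise the block size for the session first:

~> hadoop distcp -pb hdfs://nn1:8020/src/parquet hdfs://nn2:8020/dest/parquet
hive> set dfs.blocksize=1g;
hive> create table parquet_copy stored as parquet as select * from source_table;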
07-20-2014
07:42 AM
2 Kudos
Your local Hive CLI JVM heap size is insufficient even for building and submitting the job. Please try raising it as below and retrying:

~> export HADOOP_CLIENT_OPTS="-Xmx2g"
~> hive -e "select count(station_id) from aws_new;"
07-20-2014
07:41 AM
CDH Hive is based on Apache Hive and does support .hiverc for the Hive CLI. If you are asking about Beeline instead, support for such a file loader is coming in a future CDH5 release.
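As an illustration, a ~/.hiverc picked up automatically by the Hive CLI might look like this (the jar path is a placeholder):

set hive.cli.print.header=true;
add jar /path/to/custom-udfs.jar;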
07-20-2014
07:40 AM
The group lookup for user 'anon1' is done on the HS2 host by default. Can you ensure that the HS2 Unix host has the same groups set up for 'anon1' as the impalad hosts do (which appear to work)?
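For example, run the same check on the HS2 host and on a working impalad host and compare the output; the group lists should match:

~> id -Gn anon1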
07-20-2014
06:50 AM
Flume provides and supports a built-in JMS source that can work with external systems. This is documented further at http://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#jms-source.
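A minimal agent sketch, adapted from the linked guide's ActiveMQ example (broker URL and destination name are placeholders; channel sizing and a sink definition are omitted for brevity):

agent.sources = jms-src
agent.channels = mem-ch
agent.sources.jms-src.type = jms
agent.sources.jms-src.channels = mem-ch
agent.sources.jms-src.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
agent.sources.jms-src.connectionFactory = GenericConnectionFactory
agent.sources.jms-src.providerURL = tcp://mq-host.example.com:61616
agent.sources.jms-src.destinationName = BUSINESS_DATA
agent.sources.jms-src.destinationType = QUEUE
agent.channels.mem-ch.type = memory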