Member since: 01-27-2017
Posts: 28
Kudos Received: 10
Solutions: 3

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15747 | 06-13-2017 12:21 PM |
| | 2928 | 06-13-2017 12:13 PM |
| | 11403 | 04-06-2017 02:13 PM |
08-13-2019
12:43 AM
This can be a case-by-case issue. In some cases we need to remove the contents of the /opt/cloudera/parcels/CDH/lib/hadoop/etc/hadoop directory and replace them with the contents from another node in the same cluster, typically when the node was added from a different cluster. The directory may not have been cleared before the server was added to the new cluster, and the existing files from the old cluster may not work as expected because the parameters in the configuration files vary from cluster to cluster. So we need to forcefully remove the contents and copy them over manually.
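A minimal sketch of that cleanup, which backs up the stale configuration and copies the directory from a healthy node in the same cluster; the hostname good-node and the backup suffix are placeholders:

# Hypothetical example: refresh the Hadoop config directory on a node added from another cluster
CONF_DIR=/opt/cloudera/parcels/CDH/lib/hadoop/etc/hadoop
# Keep a backup of the stale configuration before removing it
mv "$CONF_DIR" "${CONF_DIR}.bak.$(date +%F)"
# Copy the directory from a node that already has the correct configuration (good-node is a placeholder)
scp -r good-node:"$CONF_DIR" "$CONF_DIR"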
04-25-2018
09:07 AM
Did anyone find a solution for this issue? I have a similar one. We are using CM 5.11 and installed Spark2 separately on another server. When we enable KMS on the cluster, Spark2 throws an Unknown Host exception.
12-07-2017
12:24 AM
Hi @Borg, thank you for your answer. I have tried your suggestion, but unfortunately it made no difference compared to what I tried previously. It simply ignores it and still creates a single RDD that exceeds the 2G limit. Is there any way to force it? Thanks
08-24-2017
12:38 PM
Hi desind, I see that you have the map and reduce cluster-wide memory set to 4G and 4G respectively. However, the parameter you will need to change is PIG_HEAPSIZE=X; I would suggest increasing this and running the job again. For reference on changing properties [1]:

Pig Properties

Pig supports a number of Java properties that you can use to customize Pig behavior. You can retrieve a list of the properties using the help properties command. All of these properties are optional; none are required.

To specify Pig properties use one of these mechanisms:

- The pig.properties file (add the directory that contains the pig.properties file to the classpath)
- The -D command line option and a Pig property (pig -Dpig.tmpfilecompression=true)
- The -P command line option and a properties file (pig -P mypig.properties)
- The set command (set pig.exec.nocombiner true)

Note: The properties file uses the standard Java property file format.

The following precedence order is supported: pig.properties < -D Pig property < -P properties file < set command. This means that if the same property is provided using the -D command line option as well as the -P command line option and a properties file, the value of the property in the properties file will take precedence.

To specify Hadoop properties you can use the same mechanisms:

- The hadoop-site.xml file (add the directory that contains the hadoop-site.xml file to the classpath)
- The -D command line option and a Hadoop property (pig -Dmapreduce.task.profile=true)
- The -P command line option and a property file (pig -P property_file)
- The set command (set mapred.map.tasks.speculative.execution false)

The same precedence holds: hadoop-site.xml < -D Hadoop property < -P properties_file < set command. Hadoop properties are not interpreted by Pig but are passed directly to Hadoop. Any Hadoop property can be passed this way.

All properties that Pig collects, including Hadoop properties, are available to any UDF via the UDFContext object. To get access to the properties, you can call the getJobConf method.

[1] https://pig.apache.org/docs/r0.9.1/cmds.html#help

Thanks, Jordan
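A minimal shell sketch of the suggestions above (the heap size and the property mechanisms); the 4096 MB value and the script/property file names are placeholders:

# PIG_HEAPSIZE sets the heap of the Pig client JVM, in MB (4096 is only an example)
export PIG_HEAPSIZE=4096
# Pass a single Pig property on the command line with -D
pig -Dpig.tmpfilecompression=true myscript.pig
# Load a set of properties from a file with -P
pig -P mypig.properties myscript.pig
# Hadoop properties are passed the same way and forwarded to Hadoop unchanged
pig -Dmapreduce.task.profile=true myscript.pig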
06-14-2017
12:24 AM
Thanks Borg, it happened after deleting a huge amount of data from HDFS. To resolve it for the meantime, I deleted all snapshots and created a new one. It's working OK. Waiting for 5.11.1 🙂 Many thanks, Alon
06-13-2017
12:36 PM
@mbigelow Try searching for "How to install CM over an existing CDH Cluster"
05-22-2017
11:20 AM
2 Kudos
Hi munna143, You may have to recreate the hash-file for the parcel by completing the steps below:

1. Download the manually deleted parcel and add it to the parcel repo location /opt/cloudera/parcel-repo.
2. Create the hash-file based on the parcel:
   $ sha1sum /opt/cloudera/parcel-repo/CDH-parcel-file.parcel | cut -d ' ' -f 1 > /opt/cloudera/parcel-repo/CDH-parcel-file.parcel.sha
3. Set the ownership of the files to cloudera-scm:
   $ chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/CDH-parcel-file.parcel /opt/cloudera/parcel-repo/CDH-parcel-file.parcel.sha
4. Delete the parcel through Cloudera Manager from the Parcels page.

Thanks, Jordan
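A sketch of steps 2 and 3 with a hypothetical parcel filename (substitute whatever parcel is actually in your repo); the last command only confirms that the .sha file contains a 40-character SHA-1:

# Hypothetical parcel filename; replace with the parcel present in /opt/cloudera/parcel-repo
PARCEL=/opt/cloudera/parcel-repo/CDH-5.11.0-1.cdh5.11.0.p0.34-el7.parcel
# Step 2: write the bare SHA-1 of the parcel into the matching .sha file
sha1sum "$PARCEL" | cut -d ' ' -f 1 > "${PARCEL}.sha"
# Step 3: make both files owned by the cloudera-scm user
chown cloudera-scm:cloudera-scm "$PARCEL" "${PARCEL}.sha"
# Sanity check: a SHA-1 hash is 40 hexadecimal characters
awk '{ print length($1) }' "${PARCEL}.sha"    # expect: 40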
04-06-2017
09:36 PM
Hi Jordan, Yes, Cloudera also recommended increasing the heap size, and after I did it a couple of weeks ago I have not seen any more crashes. It is rather surprising, though, that the default configuration causes crashes. That raises the question of how optimal, or even acceptable, the other parameters are and how to tune them. Thank you, Igor
03-30-2017
01:47 PM
Hi Ravi, This error is typical of a scenario where the proxy/Hue has been configured such that it times out the connection according to its own internal timeout settings before the destination server has been able to respond to a request. I would suggest reviewing the configuration to ensure that your timeout values are sufficient to allow the destination server to respond (or reach its own internal timeout). To determine whether increasing Hue's configured timeouts is appropriate, check the following.

For older versions of Cloudera Manager this must be set as a safety valve:

1. From Cloudera Manager, click Hue service > Configuration.
2. Enter "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini" in the search field.
3. Add the following:

   [impala]
   query_timeout_s=600
   [beeswax]
   server_conn_timeout=600

For newer versions of Cloudera Manager, this can be configured using the configuration options for Hue in Cloudera Manager: Hue service > Configuration > HiveServer2 and Impala Thrift Connection Timeout.