Member since
08-01-2013
187
Posts
9
Kudos Received
8
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1116 | 09-01-2016 09:26 PM |
| | 1044 | 10-14-2015 07:31 AM |
| | 1056 | 06-21-2015 06:02 PM |
| | 1886 | 02-26-2015 04:36 PM |
| | 2909 | 02-18-2015 12:18 AM |
04-21-2019
10:00 PM
1 Kudo
Hi Roberto,

Thank you for asking!

1) Please provide the name and the date/time of creation of the cluster. Likewise, if the failure occurs on termination, please provide the same details for the time the termination was attempted.
2) Do you repeatedly see the same cluster creation failure on Azure?
3) For Azure-service-related issues with Altus clusters, please gather the following information from the Azure Portal:
- Log in to https://portal.azure.com
- Navigate to Resource Groups
- Select the Resource Group of the cluster in question
- In the left-hand column under Overview, select Activity Log
- Filter Event Severity to Error and Critical
- Review the Operation Names with Failed or Critical statuses, click any related items, select JSON, and copy the contents into the filed support case.

Or, if you prefer, we can coordinate a WebEx session to gather this information.

Regards, Daisuke
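Once the Activity Log JSON has been copied out of the portal, the Error/Critical entries can be pulled out with a short script like the sketch below. Note this is only an illustration: the `level` and `operationName` field names are assumptions about the exported JSON shape, so adjust them to match what your export actually contains.

```python
import json

def failed_entries(activity_log_json):
    """Return Activity Log entries whose severity is Error or Critical.

    `activity_log_json` is the JSON string copied from the Azure Portal.
    The "level" field name is an assumption; adjust to your export.
    """
    entries = json.loads(activity_log_json)
    return [e for e in entries if e.get("level") in ("Error", "Critical")]

# Hypothetical export for demonstration:
sample = json.dumps([
    {"operationName": "Create VM", "level": "Error", "status": "Failed"},
    {"operationName": "Create VM", "level": "Informational", "status": "Succeeded"},
])
print([e["operationName"] for e in failed_entries(sample)])
```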
10-31-2018
06:08 PM
Thank you for uploading the files! Okay, let us investigate to identify what is going on. Daisuke
10-29-2018
06:42 PM
ashish5ahu, Thank you for posting the question. I am assuming you are using Workload XM with Altus, correct? If so, look at the bottom-right of the page for the job whose Analytics tab you are trying to view, copy the CRN, and reply back with it. With the CRN, we can diagnose what's going on. Regards, Daisuke
09-05-2017
07:04 AM
2 Kudos
Symptoms A Spark job fails with INTERNAL_FAILURE. In the Workload Analytics (WA) page of the failed job, the following message is reported: org.apache.spark.SparkException: Application application_1503474791091_0002 finished with failed status
Diagnosis Because Telemetry Publisher did not retrieve the application log due to a known bug, we have to examine the application logs (application_1503474791091_0002) directly; they are stored in the user's S3 bucket. If the following exception is found, it indicates that the application failed to resolve a dependency on the Hadoop classpath:

17/08/24 13:13:33 INFO ApplicationMaster: Preparing Local resources
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.tracing.TraceUtils.wrapHadoopConf(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/htrace/core/HTraceConfiguration;
at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:687)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:671)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)

This most likely occurred because the jar was built against another Hadoop distribution's repository, for example EMR (Amazon Elastic MapReduce).
Solution To resolve the issue, rebuild the application against the CDH repository (https://repository.cloudera.com/artifactory/cloudera-repos/) using Maven or sbt. An example using Maven is documented here: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh5_maven_repo.html
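As a sketch, the relevant part of a pom.xml pointing at the CDH repository looks like the fragment below. The artifact and version shown are illustrative only; pick the Hadoop artifact and CDH version matching your cluster per the Maven repository documentation linked above.

```xml
<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependencies>
  <!-- Mark Hadoop as "provided" so the cluster's own jars are used at runtime
       instead of bundling a possibly mismatched copy into your application. -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0-cdh5.12.0</version> <!-- illustrative; match your CDH release -->
    <scope>provided</scope>
  </dependency>
</dependencies>
```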
06-21-2017
02:11 PM
3 Kudos
Question Where does Workload Analytics ingest and analyze user workloads?
Answer Workload Analytics runs as part of Cloudera Altus, in an environment operated by Cloudera. Telemetry Publisher, part of the Cloudera Manager installation, sends the user's workload data to that environment as soon as a job ends, where it is analyzed. The results then show up in the Cloudera Altus UI. https://www.cloudera.com/documentation/altus/topics/wa_overview.html
06-21-2017
12:57 PM
1 Kudo
Question Is it possible to tune the Health Check thresholds that Workload Analytics uses, based on the user's requirements?
Answer No, it is not possible to tune the thresholds. Health Check uses predefined thresholds, which are described in the documentation: https://www.cloudera.com/documentation/altus/topics/wa_analyze_jobs.html
09-04-2016
06:23 PM
Hi, Are you sure the blocks still exist on the DataNode hosts even after rebooting the instances? By default, they should be located under /dfs/dn (or /dfs/dn1, /dfs/dn2, ... when multiple data directories are configured).
09-01-2016
09:26 PM
1 Kudo
A custom jar that HBase uses cannot be distributed across the hosts by CM automatically; you have to place it on each host yourself. HTH. -- Sent from my mobile
08-24-2016
10:39 PM
Do you find any related errors in the PostgreSQL logs located under /var/lib/cloudera-scm-server-db/data/pg_log/?
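As a hedged example, ERROR/FATAL lines in those logs can be surfaced with grep. The path above is the embedded-PostgreSQL default; the demo below runs against a generated sample file so it is self-contained, and on a real CM server host you would point LOG_DIR at the actual pg_log directory (real log lines may carry a timestamp prefix, so adjust the pattern accordingly).

```shell
# Demo directory with a sample log file; on the CM server host use:
#   LOG_DIR=/var/lib/cloudera-scm-server-db/data/pg_log
LOG_DIR=$(mktemp -d)
cat > "$LOG_DIR/postgresql-demo.log" <<'EOF'
LOG:  database system is ready to accept connections
ERROR:  relation "hosts" does not exist
FATAL:  password authentication failed for user "scm"
EOF

# Surface ERROR/FATAL lines across all log files in the directory:
grep -hE '^(ERROR|FATAL)' "$LOG_DIR"/*.log
```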
08-24-2016
08:04 PM
Hi, That WARN message was introduced by an improvement in HDFS-9260. However, because our backport missed the sorting step, the WARN appears. It has already been addressed in https://github.com/cloudera/hadoop-common/commit/95c7d8fbe122de617d11b6e4ea7d101803d0bd12 and the fix is available in CDH releases from 5.7.2 onwards and also in the 5.8.x series.
08-23-2016
09:33 PM
Even if you hit a single-host crash, the corresponding blocks are replicated on the other hosts; that is, the HFile at the HDFS level should be safe. Or are you running the cluster in standalone mode? Thanks, Dice. -- Sent from my mobile
04-13-2016
02:01 AM
Are you able to see the host listed on the 'Hosts' page? Is the latest heartbeat arriving within 15 seconds? Also, what are the OS version and the name of the distribution?
04-12-2016
10:14 PM
Both HBASE-14533 and HBASE-14196 are included in CDH 5.6. Can you upload 1) the client log, 2) the Thrift server log, and 3) the output of the 'hbase version' command from your console? Dice.
04-12-2016
08:27 PM
You can change the parcel directory from /opt/cloudera/parcels (the default) to a directory of your choice per the following guide, under "Configuring the Host Parcel Directory". Before following the steps, please shut down the cluster to be safe. http://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_parcels.html?scroll=cmug_topic_7_11_5_unique_1__section_srx_xyx_bm_unique_1 Otherwise, you need to add more disk space to /opt.
04-05-2016
02:47 AM
1 Kudo
Hi, Unfortunately, it's hard-coded on the CM server side. I'm wondering why it takes longer than that particular timeout limit (90 seconds). What do you see in the CM agent log on the target host?
12-08-2015
11:23 PM
What is the i18n (locale) configuration on those target hosts?
12-07-2015
05:29 AM
To enable Kerberos in your CM-managed CDH cluster, please follow the document below: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_sg_authentication.html As you've already noticed, files under /etc take effect only for the client programs. HTH. -- Sent from my mobile
12-03-2015
03:41 PM
If you are still hitting the same stack trace, those indications won't be relevant. Can you upload the Reports Manager log, please? -- Sent from my mobile
12-02-2015
08:58 PM
Run 'hadoop version' as well.
12-02-2015
02:47 AM
Also, please upload the contents of /var/log/cloudera-manager-installer/.
12-02-2015
02:44 AM
Hi, I'm unsure whether your /etc/hosts is the real one, but ensure that you meet all the requirements under "Networking and Security Requirements" in the following guide: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_ig_cm_requirements.html Also, judging from "Because agent not running as root", it looks like you're enabling single user mode. Is this correct? Have you followed the guide below? http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/install_singleuser_reqts.html
12-01-2015
07:11 PM
Hi, Could you please elaborate on where you get stuck? What are you looking at now?
12-01-2015
04:27 PM
Is the issue still occurring? Just in case, can you run 'rpm -qa | grep cloudera' on the CM server host?
12-01-2015
02:01 AM
Is your CM version higher than 5.4? And what is the CDH version, by the way? Dice.
11-30-2015
02:05 AM
Hi hawkphil, What did you actually do in the cluster before you faced the error? Which version are you running? Dice.
11-27-2015
11:09 PM
1. No, a snapshot is just a metadata operation. 2. Once you make a particular directory snapshottable, the blocks belonging to the files underneath are never deleted while a snapshot still references them.
10-14-2015
07:31 AM
1 Kudo
Hi, Please note that Cloudera Search is included in CDH 5. As the CDH 5.4.3 parcel appears to be activated already, you can simply add it via "Add Services" from the CM home page.
06-21-2015
06:02 PM
Are you using Cloudera Manager 5.4 or higher? If you're still on 5.2 or 5.3, the Kafka CSD needs to be downloaded per http://www.cloudera.com/content/cloudera/en/documentation/cloudera-kafka/latest/topics/kafka_installing.html, after which you will see the parcels become available.
06-07-2015
05:38 AM
Hi, What change did you make to the cluster before seeing the message "Space free in the cluster: 0 B"? How did you verify that free space is not the issue? Can you also verify that the DataNodes are up? Are there actual block files in the DataNodes' local directories?