Member since
11-12-2015
90
Posts
1
Kudos Received
8
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5735 | 06-09-2017 01:52 PM
 | 13429 | 02-24-2017 02:32 PM
 | 11396 | 11-30-2016 02:48 PM
 | 3830 | 03-02-2016 11:14 AM
 | 4560 | 12-16-2015 07:11 AM
05-08-2019
10:21 AM
I'm having the same issue. This is my agent configuration:

[General]
server_host=cloudera-1
server_port=7182
max_collection_wait_seconds=10.0
metrics_url_timeout_seconds=30.0
task_metrics_timeout_seconds=5.0
monitored_nodev_filesystem_types=nfs,nfs4,tmpfs
local_filesystem_whitelist=ext2,ext3,ext4,xfs
impala_profile_bundle_max_bytes=1073741824
stacks_log_bundle_max_bytes=1073741824
stacks_log_max_uncompressed_file_size_bytes=5242880
orphan_process_dir_staleness_threshold=5184000
orphan_process_dir_refresh_interval=3600
scm_debug=INFO
dns_resolution_collection_interval_seconds=60
dns_resolution_collection_timeout_seconds=30

This is my Cloudera Server config (link to the image): https://imgur.com/SIciDFr

Regards, Silva
01-10-2019
08:01 AM
But I need to know which specific queries spill to disk, generating the scratch files. Is it possible to get that kind of information?
01-09-2019
11:59 AM
Hello, a simple question: how can I know which queries generate scratch files? I'm inspecting the impalad logs and I couldn't find any information about scratch file generation. Regards, Silva
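One way to look for spilling queries is through Cloudera Manager's Impala query endpoint, which appears elsewhere in this thread and accepts attribute filters. This is only a sketch: the attribute name `memory_spilled`, the host, and the port are assumptions to verify against your CM version's API documentation.

```python
# Hypothetical sketch: build a CM API URL that lists only Impala queries
# that spilled to disk. `memory_spilled` is an assumed filter attribute.
from urllib.parse import urlencode

def spilled_queries_url(host, cluster="cluster", service="impala",
                        api_version="v18", since="2019-01-01T00:00:00"):
    """Build an impalaQueries URL filtered to queries that spilled."""
    base = (f"http://{host}:7180/api/{api_version}/clusters/{cluster}"
            f"/services/{service}/impalaQueries")
    params = urlencode({"from": since, "filter": "(memory_spilled > 0)"})
    return f"{base}?{params}"

print(spilled_queries_url("cloudera-1"))
```

The same filter syntax should work from the Impala Queries page in the CM UI, which may be quicker for a one-off check.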
Labels:
- Apache Impala
07-09-2018
01:33 PM
I'm using this one: /api/v18/clusters/cluster/services/impala/impalaQueries?from=2018-05-31T0%3A0%3A0&filter=(user=userX) Also, I solved this issue by restarting the Monitoring Service. Regards,
05-31-2018
03:14 PM
Hello, I was using the CM API and I think I reached the maximum number of requests. What is the maximum number of requests, and how can I increase this value? /api/v7/clusters/cluster/services/impala/impalaQueries?from=2018-05-31T0%3A0%3A0 {
"queries" : [ ],
"warnings" : [ "Impala query scan limit reached. Last end time considered is 2018-05-31T16:21:46.409Z" ]
}

Regards, Joaquin
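A sketch of one way to work around the scan limit, assuming the warning keeps the format shown above: re-issue the request with `from` advanced to the reported "last end time considered", looping until the response carries no warning. Extracting that timestamp:

```python
# Hypothetical sketch: pull the resume timestamp out of the scan-limit
# warning so the next request's `from` parameter can continue where the
# previous scan stopped. The warning wording is taken from the response above.
import re

def next_from(warnings):
    """Return the 'last end time considered' timestamp, or None."""
    for w in warnings:
        m = re.search(r"Last end time considered is (\S+)", w)
        if m:
            return m.group(1)
    return None

print(next_from(["Impala query scan limit reached. "
                 "Last end time considered is 2018-05-31T16:21:46.409Z"]))
```

Each iteration then requests `...?from=<timestamp>` until `warnings` comes back empty, accumulating the `queries` arrays along the way.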
Labels:
- Apache Impala
- Cloudera Manager
08-10-2017
08:20 AM
If you want to use spark2-shell and spark2-submit, you don't have to set those ENV variables. I set them because I wanted to point the current spark-shell/spark-submit to Spark 2. This should be done on every node where you want to use the shell and/or spark-submit. I forgot to add the changes I made for spark-submit. In these files:

/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/bin/spark-submit
/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/spark/bin/spark-submit

add this ENV variable:

SPARK_HOME=/opt/cloudera/parcels/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904/lib/spark2
06-09-2017
01:52 PM
1 Kudo
@saranvisa Thanks, that worked. Also, in order to point to the new Spark, I had to change some symbolic links and ENV variables:

export SPARK_DIST_CLASSPATH=$(hadoop classpath)
ln -sf /opt/cloudera/parcels/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904/lib/spark2/bin/spark-shell /etc/alternatives/spark-shell
export SPARK_HOME=/opt/cloudera/parcels/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904/lib/spark2
06-09-2017
09:06 AM
I forgot to mention that Spark 1.6 came with CDH 5.8. I don't know how CDH installed Spark.
06-09-2017
08:33 AM
Hello, I want to remove Spark 1.6 in order to install Spark 2.1. But when I try to remove it with this command:

sudo yum remove spark-core spark-master spark-worker spark-history-server spark-python

(source: https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cdh_ig_cdh_comp_uninstall.html)

the packages are not found. What should I do to remove Spark 1.6 from my cluster? Also, in a previous step I deleted it from my services. I'm using CDH 5.8.0.
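A sketch of a quick check, assuming a parcel-based install: when CDH is deployed via parcels (Cloudera Manager's default), Spark 1.6 ships inside the CDH parcel rather than as separate yum packages, which would explain why `yum remove` finds nothing. Verifying which install method is in use:

```shell
# Hypothetical check: look for Spark RPMs (package install) and for a
# parcel directory (parcel install). Each line falls back to a message
# instead of failing if nothing is found.
rpm -qa 2>/dev/null | grep -i spark || echo "no Spark RPM packages installed"
ls /opt/cloudera/parcels/ 2>/dev/null || echo "no parcel directory (package install?)"
```

If it is a parcel install, the Spark service is removed through Cloudera Manager (as you already did) rather than through yum, so there may be nothing left to uninstall at the package level.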
Labels:
- Apache Spark
- Cloudera Manager