Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 854 | 10-19-2023 04:36 PM |
|  | 4405 | 12-08-2018 06:56 PM |
|  | 5514 | 10-05-2018 06:28 AM |
|  | 20014 | 04-19-2018 02:27 AM |
|  | 20036 | 04-18-2018 09:40 AM |
05-11-2018
02:51 PM
Hi @Tim Armstrong, hope you are doing well. It would be nice if we had a metric for the part of daemons_memory_limit actually used by the Impala daemon at a given time. Then when a query fails on memory, I can investigate the memory usage, which will help me understand when to increase the limit; secondly, I can learn the trend and usage over time and plan my increase. Currently I only see the resident memory per node, but that memory isn't what the queries use, so it's a difficult task to investigate Impala's behaviour once a query fails on memory. Yes, I have a metric for the total memory used per node, but I have different roles on the node, so it's hard to track this issue.
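Until such a metric exists, one workaround is to pull a memory time series from the Cloudera Manager API and size the limit from the observed peak. A minimal sketch, assuming a response shaped like CM's `/timeseries` endpoint; the metric name in the comment and the 20% headroom rule are assumptions, not Cloudera guidance:

```python
import json

# Hypothetical sample shaped like a Cloudera Manager /timeseries response.
# On a real cluster this JSON would come from something like
#   GET /api/v11/timeseries?query=SELECT <impala memory metric> ...
# (the exact metric name depends on your CM version -- check its metric list).
sample = json.loads("""
{"items": [{"timeSeries": [{"data": [
  {"timestamp": "2018-05-11T10:00:00Z", "value": 21474836480},
  {"timestamp": "2018-05-11T11:00:00Z", "value": 32212254720},
  {"timestamp": "2018-05-11T12:00:00Z", "value": 25769803776}
]}]}]}
""")

points = sample["items"][0]["timeSeries"][0]["data"]
peak = max(p["value"] for p in points)          # highest observed usage, bytes
# Rule of thumb (an assumption, not Cloudera guidance): keep ~20% headroom
# above the observed peak when sizing the daemon memory limit.
suggested_limit = peak + peak // 5
print(peak, suggested_limit)
```

Plotting the same series over days would also give the usage trend the post asks for.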
05-09-2018
07:44 PM
Thanks all, and especially @GeKas. Just to update: I was able to solve the issue. It was a leftover of enabling Kerberos on the cluster; I had installed the Oracle JDK, which installed java1.7_cloudera. Once I removed this package from the node, the LZO error was gone.
04-18-2018
06:55 AM
@dpugazhe Generally the / mount on Linux servers is small. Could you share the output of the df -h command on your Linux box? I would suggest you change the location of the parcels and logs. For example, if you have a larger mount on your Linux box called /xxxx, change /var/lib and /var/log to /xxxx/hadoop/lib and /xxxx/hadoop/log, and do the same for the parcels. As you are using Cloudera Manager, these changes can be done quickly:
1. Stop Cloudera Manager services.
2. Move the old logs to the new partition.
3. Delete the old logs.
4. Start Cloudera Manager services.
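The move step can be sketched like this. A minimal sketch using /tmp/demo stand-ins for the real mounts; on a real host the source and target would be the actual /var/log and /xxxx paths, with Cloudera Manager services stopped first:

```python
import os
import shutil

# Demo paths standing in for /var/log/hadoop and /xxxx/hadoop/log.
shutil.rmtree("/tmp/demo", ignore_errors=True)   # clean slate for the demo
old = "/tmp/demo/var/log/hadoop"
new = "/tmp/demo/xxxx/hadoop/log"

os.makedirs(old)
with open(os.path.join(old, "sample.log"), "w") as f:
    f.write("existing log data\n")               # stand-in for existing logs

os.makedirs(os.path.dirname(new))
shutil.move(old, new)                            # move old logs to the new partition
```

After the move, the new locations still have to be set in the Cloudera Manager role configuration before restarting the services.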
04-14-2018
07:53 AM
Sir, thank you very much, it's working.
04-13-2018
05:01 PM
It depends on which CDH version you are upgrading from. You need to look at the services your cluster includes and whether their versions changed, for example Spark, HDFS, YARN, and so on. For example, I upgraded my cluster from 5.5.4 to 5.13.0 and only had to care about the Spark jobs, since the Spark version changed and the job dependencies needed updating, plus some minor changes we made in Hive table refreshes. I would recommend going to the latest major version minus one, so use 5.13, and take the latest minor release, so I recommend 5.13.3.
12-29-2017
07:35 PM
Why not compact the historical data? For example, compact the daily files into one file for everything older than now-14 days: a compaction job that runs daily and compacts the data from before 2 weeks ago. This way you can make sure you are not impacting data freshness.
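The daily compaction idea can be sketched locally like this. On a real cluster it would be a Hive or Spark job over HDFS partitions; the `part-YYYY-MM-DD.txt` naming and local filesystem are assumptions for illustration:

```python
import os
import shutil
from datetime import date, timedelta

def compact_old_days(src_dir, out_path, today, keep_days=14):
    """Merge per-day files older than `keep_days` into one compacted file,
    leaving the fresh (still-changing) days untouched."""
    cutoff = today - timedelta(days=keep_days)
    merged = []
    with open(out_path, "w") as out:
        for name in sorted(os.listdir(src_dir)):
            if not name.startswith("part-"):
                continue
            day = date.fromisoformat(name[5:15])   # "part-2018-04-01.txt"
            if day < cutoff:
                path = os.path.join(src_dir, name)
                with open(path) as f:
                    shutil.copyfileobj(f, out)     # append into compacted file
                os.remove(path)                    # drop the small daily file
                merged.append(name)
    return merged
```

Scheduling this once a day keeps the recent two weeks untouched while the long tail collapses into a single file per run.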
11-05-2017
06:52 AM
@mbigelow Hi mbigelow, I tried to use LIKE in the CM API with no success. I have one like this: curl -u 'xxxx':'xxxx' 'http://CM_server.domain.com:7180/api/v11/clusters/cluster/services/impala/impalaQueries?from=2017-10-10T00:00:00&to2017-10-11T00:00:00&limit=1000&filter=statement RLIKE ".*fawzea.*"' >>f.json Can you help?
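Two things stand out in that command: `to2017-10-11` is missing the `=` after `to`, and the unencoded spaces and quotes in the `filter` parameter can break the request. A minimal sketch of building the same URL with proper percent-encoding (host, credentials, and filter are the placeholders from the post):

```python
from urllib.parse import urlencode

# Placeholders from the post; substitute the real CM host and time range.
base = ("http://CM_server.domain.com:7180"
        "/api/v11/clusters/cluster/services/impala/impalaQueries")
params = {
    "from": "2017-10-10T00:00:00",
    "to": "2017-10-11T00:00:00",      # note the '=' the original URL lacks
    "limit": 1000,
    "filter": 'statement RLIKE ".*fawzea.*"',
}
url = base + "?" + urlencode(params)  # encodes spaces, quotes, and colons
print(url)
```

The resulting URL can then be passed to curl in single quotes exactly as before.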
11-02-2017
01:57 AM
I think you are missing this, which was mentioned here:
[desktop]
use_new_editor=true
Hope it helps.
10-29-2017
04:06 AM
The problem was the limit on subdirectories under a specific directory: when checking the container folder, I see there are 32,000 directories, which is the limit. Now I am looking into why the retention is not deleting these files, given that I have the following configuration:
Log Aggregation Retention Period: 7 days
Job History Files Cleaner Interval: 1 day
Log Retain Duration: 3 hours
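To watch how close a directory is getting to that cap, a small sketch; that the path to point it at is the YARN log-aggregation container directory is an assumption from the context:

```python
import os

# ext3 caps a directory at roughly 32,000 subdirectories, which is the
# limit hit here. Point this at the log-aggregation container directory.
def count_subdirs(path):
    """Return the number of immediate subdirectories of `path`."""
    return sum(1 for entry in os.scandir(path) if entry.is_dir())
```

Running this periodically (and alerting well below 32,000) gives warning before job submissions start failing.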
10-19-2017
07:25 AM
Also, is there a way to confirm the CSD file is properly deployed? Also, I don't see Scala 2.11 libraries under /opt/cloudera/parcels/CDH/jars, only Scala 2.10 libraries. I heard that Scala 2.10 and 2.11 are both installed with CDH 5.7 and later. Shouldn't Scala 2.11 be available? Is this also a cause for the Spark2 service not appearing? I did all the steps as mentioned and they all completed successfully; the Spark2 parcel is activated now. Regards, Hitesh
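One quick check is to look for the CSD jar and the Scala library jars on disk. A sketch, assuming the default paths mentioned in the post (they may differ per host):

```python
import glob
import os

def check_csd_and_scala(csd_dir="/opt/cloudera/csd",
                        jars_dir="/opt/cloudera/parcels/CDH/jars"):
    """Return (deployed CSD jars, Scala library jars found in the parcel)."""
    csd_jars = glob.glob(os.path.join(csd_dir, "*.jar"))
    scala_jars = glob.glob(os.path.join(jars_dir, "scala-library-*.jar"))
    return csd_jars, scala_jars
```

An empty first list suggests the CSD jar never landed in the CSD directory; the second list shows which Scala library versions the parcel actually ships.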