Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 854 | 10-19-2023 04:36 PM |
| | 4396 | 12-08-2018 06:56 PM |
| | 5512 | 10-05-2018 06:28 AM |
| | 19998 | 04-19-2018 02:27 AM |
| | 20020 | 04-18-2018 09:40 AM |
09-04-2019
10:31 AM
Hi, check the total number of applications in the application history path. If the number of files is large, try increasing the heap size and see whether that helps. Also check the Spark history server logs for any errors. Thanks, AKR
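As a sketch of the heap-size change suggested above, assuming the history server is configured via `spark-env.sh` and its daemon heap via `SPARK_DAEMON_MEMORY` (in Cloudera Manager the equivalent is the History Server's Java heap size setting; the path and the 4g value below are illustrative):

```shell
# spark-env.sh (location varies by install, e.g. /etc/spark/conf/spark-env.sh)
# Raise the Spark History Server daemon heap so it can load a large
# application history directory; 4g is an example value.
export SPARK_DAEMON_MEMORY=4g
```

Restart the history server after changing this so the new heap takes effect.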
08-25-2019
03:18 AM
I am getting the below error when trying to access the file browser via Hue: Cannot access: /user/user02/. StandbyException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error (error 403). Please advise.
07-19-2019
10:28 AM
Hi, this error occurs when the Spark executor memory is too small for Spark to start. Please refer to the upstream JIRA for more details: https://issues.apache.org/jira/browse/SPARK-12759 Thanks, AKR
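For context, the executor heap is set at submit time; a minimal sketch of the relevant `spark-submit` flags (the application name and memory sizes are illustrative, not from the original post):

```shell
# Give each executor a larger heap so the JVM can start;
# SPARK-12759 manifests when --executor-memory is set too small.
spark-submit \
  --executor-memory 2g \
  --driver-memory 1g \
  your_app.py   # placeholder for your application
```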
05-29-2019
06:02 AM
I'm using Cloudera Manager. Once I added the node, I needed to provide the URL for the Cloudera Manager packages; once that step finished, an automatic step kicked off to distribute the parcels.
05-06-2019
08:14 AM
Worked for me too. Thank you.
03-04-2019
06:53 PM
vmem checks have been disabled in CDH almost since their introduction. The vmem check is not stable and is highly dependent on the Linux version and distro. If you run CDH, you are already running with it disabled. Wilfred
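For reference, the check described above is governed by a standard NodeManager property; a minimal `yarn-site.xml` fragment that disables it explicitly (this is the stock Hadoop property name, shown here only to make the setting concrete):

```xml
<!-- yarn-site.xml: disable the NodeManager virtual-memory check -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```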
01-29-2019
12:31 AM
We solved the issue; it turned out to be a ulimit-related problem. We raised the user limits under /etc/security/limits.d/, then created a file at /etc/systemd/system/cloudera-scm-agent.service.d/override.conf to override the service-level limits. Finally, instead of rebooting, we raised the live value directly: echo "65536" > /sys/fs/cgroup/pids/system.slice/cloudera-scm-agent.service/pids.max
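The steps above can be sketched as follows. The original post did not show the file contents, so the limits.d entries, the user name, and the specific `LimitNOFILE`/`TasksMax` directives are assumptions to be adapted to your environment; only the final cgroup `echo` is verbatim from the post:

```shell
# 1. Raise per-user limits (example file and values)
cat > /etc/security/limits.d/99-cloudera.conf <<'EOF'
cloudera-scm  soft  nofile  65536
cloudera-scm  hard  nofile  65536
EOF

# 2. systemd drop-in to override service-level limits
mkdir -p /etc/systemd/system/cloudera-scm-agent.service.d
cat > /etc/systemd/system/cloudera-scm-agent.service.d/override.conf <<'EOF'
[Service]
LimitNOFILE=65536
TasksMax=65536
EOF
systemctl daemon-reload

# 3. Apply the pids limit to the running service instead of rebooting
echo "65536" > /sys/fs/cgroup/pids/system.slice/cloudera-scm-agent.service/pids.max
```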
01-08-2019
03:00 PM
Hi Fawze, this is not a disk space issue. There is sufficient space on these large drives. Thanks
12-06-2018
01:50 PM
To add: sizing a datanode heap is similar to sizing a namenode heap; the recommendation is 1 GB of heap per 1 million blocks. Since a block can be as small as 1 byte or as large as 128 MB, the heap requirement is the same either way.
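The 1 GB per 1 million blocks rule of thumb can be turned into a quick calculation. This is just a sketch of the arithmetic; `recommended_heap_gb` is a hypothetical helper, not part of any HDFS tooling:

```python
import math

def recommended_heap_gb(num_blocks, gb_per_million_blocks=1.0):
    """Rule of thumb: ~1 GB of namenode/datanode heap per 1M blocks,
    with a floor of 1 GB. Block *size* does not matter, only the count."""
    return max(1.0, math.ceil(num_blocks / 1_000_000 * gb_per_million_blocks))

# A node tracking 5 million blocks needs roughly 5 GB of heap,
# whether those blocks are 1 byte or 128 MB each.
print(recommended_heap_gb(5_000_000))
```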
11-22-2018
08:20 AM
@anrama You can use the filter `filter=status!=Running`
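A sketch of how that filter might be passed to the Cloudera Manager API as a query parameter. The host, port, API version, and endpoint below are illustrative placeholders, not from the original post; the point is only that `status!=Running` must be URL-encoded when sent:

```python
from urllib.parse import urlencode

# Placeholder base URL for a CM API endpoint that accepts a filter parameter
base = "http://cm-host:7180/api/v19/clusters/Cluster1/services/yarn/yarnApplications"

# urlencode escapes the "!" and "=" inside the filter expression
query = urlencode({"filter": "status!=Running"})
url = f"{base}?{query}"
print(url)
```

The resulting query string carries `status!=Running` in percent-encoded form, which is what an HTTP client should actually send.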