Member since: 03-05-2016
Posts: 21
Kudos Received: 4
Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 8805 | 08-19-2016 06:03 AM |
| | 12015 | 04-16-2016 01:24 PM |
06-29-2017
01:34 PM
Works, I will accept this as the solution.
06-29-2017
04:24 AM
Deleting the instances and adding them again did not solve the problem. The issue still exists.
06-27-2017
02:13 PM
I have a problem starting the Impala Catalog Server on a CDH 5.8.0 (parcels) test system.
The error message is the following:
Can't open /var/run/cloudera-scm-agent/process/148-impala-CATALOGSERVER/supervisor.conf: Permission denied.
The permissions for this file are:
-rw------- 1 root root 2970 Jun 27 23:01 supervisor.conf
All other files and directories in the /var/run/cloudera-scm-agent/process/148-impala-CATALOGSERVER folder have owner impala and group impala.
When I change the permissions, they are overwritten on the next start.
What is wrong here?
And what needs to be changed? I recently installed a different service and changed the sudoers file via visudo, but I have not changed the Impala configuration.
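The only check I have done so far is to confirm which user actually hits the denial, e.g. (just a sketch, reusing the process directory from the error above):

```bash
# Sketch: try the read as the impala user and inspect the directory ownership.
sudo -u impala cat /var/run/cloudera-scm-agent/process/148-impala-CATALOGSERVER/supervisor.conf
ls -ld /var/run/cloudera-scm-agent/process/148-impala-CATALOGSERVER
```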
Thanks for the advice.
Labels:
- Apache Impala
- Cloudera Manager
08-19-2016
06:03 AM
I solved it myself by installing Anaconda again as root in the directory /opt/anaconda and adjusting the environment variables accordingly. This solved my issue. Cheers
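For completeness, this is roughly what I mean by adjusting the environment variables, as a minimal sketch assuming the default layout under /opt/anaconda (the exact values may differ on other setups):

```bash
# Sketch: point the shell and pyspark at the Anaconda install in /opt/anaconda.
export PATH=/opt/anaconda/bin:$PATH
export PYSPARK_PYTHON=/opt/anaconda/bin/python
export PYSPARK_DRIVER_PYTHON=/opt/anaconda/bin/ipython   # or jupyter, depending on preference
```

These can go into the user's shell profile or into spark-env.sh.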
08-19-2016
01:37 AM
Hi Community, I have installed Anaconda on CentOS 6 to use IPython and Jupyter notebooks together with Spark. When I run pyspark I get the following error:

java.io.IOException: Cannot run program "/home/hadoop/anaconda/bin/": error=13, Permission denied

This is weird, because I start pyspark on the console as user hadoop, and Anaconda is in that user's home directory. I've also set the permissions so that the user hadoop should be able to execute this:

drwxr-xr-x 3 hadoop hadoop 4096 Aug 16 23:46 bin

Anaconda in general is running, and I'm able to execute pyspark using sudo pyspark (but that's not a solution, as ipython is not available for root). Question: what needs to be set so that the user hadoop is able to run pyspark using Anaconda? Thanks!!!
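The only hint I can see is that the path in the error ends in the bin/ directory rather than in a concrete binary, so this is what I have checked so far (just a sketch):

```bash
# Sketch: check what pyspark is configured to execute and whether user hadoop can run it.
echo "PYSPARK_PYTHON=$PYSPARK_PYTHON PYSPARK_DRIVER_PYTHON=$PYSPARK_DRIVER_PYTHON"
/home/hadoop/anaconda/bin/python --version   # run as user hadoop; error=13 here would confirm a permission issue
ls -ld /home/hadoop /home/hadoop/anaconda    # every directory on the path needs execute permission for hadoop
```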
Labels:
- Apache Hadoop
- Apache Spark
04-16-2016
01:24 PM
1 Kudo
So I've fixed it by adjusting the following YARN settings:
yarn.scheduler.maximum-allocation-mb = 8 GiB
mapreduce.map.memory.mb = 4 GiB
mapreduce.reduce.memory.mb = 4 GiB
With that, the test example runs with the following command:
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
Thanks for the comments.
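I set the values through Cloudera Manager, but for quick testing the per-job memory settings can also be passed on the command line; as far as I know the pi example runs through ToolRunner, so generic -D options are accepted (a sketch, with 4 GiB written as 4096 MB):

```bash
# Sketch: per-job memory overrides passed directly to the pi example.
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi \
  -D mapreduce.map.memory.mb=4096 \
  -D mapreduce.reduce.memory.mb=4096 \
  10 100
```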
04-14-2016
01:45 PM
The "cluster" is pseudo distributed with one node on CentOs6 and I've updated the settings according to your recommendation: mapreduce.map.memory.mb and mapreduce.reduce.memory.mb = 240 MiB Deployed the configuration, restarted the serivces but the result is the same, the job runs forever. I would really need to test something urgently and I'm lost. The health of the services is good exept "under replicated blocks" I will follow up on this. Thanks for any hints.
04-13-2016
02:25 PM
No success, the job is still running forever 😞 I have updated the memory setting from 0 GiB to 1 GiB, and this memory is also available on the node, but the job will not start. I'm lost. I have not altered the
04-13-2016
01:51 PM
Thanks, I will try it. The first parameter, mapreduce.map.memory.mb, was set to 0 GiB; maybe this is the problem.
04-10-2016
12:32 PM
Hi experts, I need help. I have installed CDH 5.7.0 on CentOS 6 and all services are up and running. However, when I test the installation with the simple pi example using the following command, no MapReduce job gets executed:

sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100

The job is scheduled but nothing happens... so the job runs forever. What can I do to find out what is wrong with my installation? Thanks & Regards
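So far the only idea I have is to check whether the job is stuck waiting for containers, e.g. (just a sketch):

```bash
# Sketch: list submitted applications and the resources the single node is offering.
sudo -u hdfs yarn application -list -appStates ALL
sudo -u hdfs yarn node -list -all
```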
Labels: