Member since: 08-08-2013
339 Posts · 132 Kudos Received · 27 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14900 | 01-18-2018 08:38 AM |
| | 1591 | 05-11-2017 06:50 PM |
| | 9238 | 04-28-2017 11:00 AM |
| | 3457 | 04-12-2017 01:36 AM |
| | 2856 | 02-14-2017 05:11 AM |
12-17-2015
08:07 PM
1 Kudo
Hi,
I am running Ranger 0.4 with local Linux usersync. On the Linux boxes I have defined two additional groups called "hadoop-users" and "hadoop-admins". After restarting ranger-usersync, only the group "hadoop-users" is visible in the Ranger Admin web UI; "hadoop-admins" is missing. What is going on there? Thanks and regards, Gerd
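One possible cause to rule out (an assumption, not confirmed here): Ranger's Unix usersync tends to pick up groups through the memberships of the users it syncs, so a group with no members may never show up in the Admin UI. A minimal sketch that checks for memberless groups by parsing `/etc/group` format; the group names and members below are made-up sample data:

```shell
# Sketch: list local groups that have no members (field 4 of /etc/group is empty).
# Hypothesis to verify: a memberless group may be skipped by Ranger usersync.
# The sample data below is invented for illustration.
cat > /tmp/group_sample <<'EOF'
hadoop-users:x:1001:alice,bob
hadoop-admins:x:1002:
EOF

awk -F: '$4 == "" { print $1 }' /tmp/group_sample   # → hadoop-admins
```

On a real box the same `awk` filter can be run against `/etc/group` directly to see whether the missing group is in fact empty.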
Labels:
- Apache Ranger
12-17-2015
11:18 AM
Hi Robert, thanks for answering. Ambari was running as root. "Was", because in the meantime I did a reinstall from scratch due to time pressure for delivering the cluster... unfortunately. The new installation worked fine, so I guess the problem was caused by "Disable Kerberos", which perhaps ran in a corrupt manner.
12-13-2015
10:05 AM
Hello @mahadev, no, the logs aren't that verbose. I just see:

12 Dec 2015 20:38:51,803 ERROR [ambari-action-scheduler] ClusterImpl:2382 - ServiceComponentHost lookup exception
12 Dec 2015 20:38:51,810 INFO [Server Action Executor Worker 1179] KerberosServerAction:327 - Processing identities...
12 Dec 2015 20:38:52,032 INFO [Server Action Executor Worker 1179] KerberosServerAction:429 - Processing identities completed.
12 Dec 2015 20:38:52,839 ERROR [ambari-action-scheduler] ClusterImpl:2382 - ServiceComponentHost lookup exception
12 Dec 2015 20:38:52,847 INFO [Server Action Executor Worker 1180] KerberosServerAction:327 - Processing identities...
12 Dec 2015 20:38:52,848 INFO [Server Action Executor Worker 1180] CreateKeytabFilesServerAction:170 - Creating keytab file for HTTP/deala01875.domain@HDP.SIT on host deala01875.comain

Afterwards the keytab files are being created. I then ran a "Disable Kerberos", and in the Ambari logs I can see that it tries to delete all the principals, but somehow the creation fails. I have tried the whole procedure several times. Regards..
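To separate the failing step from the noise, it can help to count and inspect the ERROR lines around the KerberosServerAction entries. A sketch over a shortened sample of the excerpt above; the real file is usually `/var/log/ambari-server/ambari-server.log`, but the path may vary by installation:

```shell
# Sketch: filter an Ambari server log for error lines.
# The sample below is a shortened copy of the log excerpt from the post.
cat > /tmp/ambari-server.log <<'EOF'
12 Dec 2015 20:38:51,803 ERROR [ambari-action-scheduler] ClusterImpl:2382 - ServiceComponentHost lookup exception
12 Dec 2015 20:38:51,810 INFO [Server Action Executor Worker 1179] KerberosServerAction:327 - Processing identities...
12 Dec 2015 20:38:52,032 INFO [Server Action Executor Worker 1179] KerberosServerAction:429 - Processing identities completed.
EOF

grep -c 'ERROR' /tmp/ambari-server.log   # → 1
grep 'ERROR' /tmp/ambari-server.log
```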
12-12-2015
08:40 PM
1 Kudo
Hi, I am using Ambari 2.0.1 and MIT Kerberos. After running through the Enable Kerberos wizard, the services fail to start. After some searching I found that no principals are being created in the KDC: "listprincs" shows only the previously (manually) created admin/admin@REALM principal, but none of the further principals expected from enabling Kerberos via the wizard. This is the first time I have seen this strange behaviour; several other kerberized clusters did not have this problem. Why doesn't the Ambari wizard create principals in the KDC, while showing no errors during the wizard run? Thanks in advance...
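One way to make the gap concrete is to diff the KDC's actual principal list (the "listprincs" output mentioned above) against the principals the wizard was expected to create. A minimal sketch; all principal names besides admin/admin@REALM are invented examples, and on a real KDC the first file would come from `kadmin.local -q listprincs`:

```shell
# Sketch: compare actual KDC principals against the expected set.
# kdc_princs mirrors the post's observation (only the admin principal exists);
# expected_princs is a made-up sample of what the wizard should have created.
cat > /tmp/kdc_princs <<'EOF'
admin/admin@REALM
EOF

cat > /tmp/expected_princs <<'EOF'
admin/admin@REALM
nn/host1@REALM
HTTP/host1@REALM
EOF

# Principals expected but missing from the KDC:
comm -13 <(sort /tmp/kdc_princs) <(sort /tmp/expected_princs)
# → HTTP/host1@REALM and nn/host1@REALM
```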
Labels:
- Apache Ambari
07-03-2014
12:16 AM
1 Kudo
Hi, to read data in Avro format from Hive you have to use an Avro SerDe. A good starting point may be http://www.michael-noll.com/blog/2013/07/04/using-avro-in-mapreduce-jobs-with-hadoop-pig-hive/ But this is not related to this topic, since the Solr sink puts data into Solr. I'd suggest using just an HDFS sink to put your data on HDFS and creating a Hive table (external or not) on top of it afterwards. You do not need Solr and/or Morphlines for this. Best, Gerd
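A minimal sketch of the suggested approach: DDL for an external Hive table over Avro files already landed on HDFS by the sink. The table name, HDFS locations, and schema file are hypothetical examples; the SerDe and I/O format class names are Hive's standard Avro classes. Actually running it requires a Hive/HDFS installation, hence the commented-out call:

```shell
# Sketch: generate DDL for an external Hive table over Avro data on HDFS.
# Table name, LOCATION, and avro.schema.url are made-up examples.
cat > /tmp/create_avro_table.hql <<'EOF'
CREATE EXTERNAL TABLE events
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/data/events'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/events.avsc');
EOF

# hive -f /tmp/create_avro_table.hql   # needs a running Hive/HDFS cluster
grep -c 'AvroSerDe' /tmp/create_avro_table.hql   # → 1
```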
06-25-2014
10:23 AM
Hi Darren, thanks for answering. I assumed that, since "Client Override" is in the name of the property 😉 Did I get it right that there are three possibilities?
1) If I submit a job without setting a property explicitly, the "Gateway" settings are used (from /etc/hadoop/conf/xyz).
2) If I specify a property explicitly, that value is used (when the client override setting is empty).
3) If I specify a property and the TaskTracker...client override property is also set, the override setting wins?
Thanks in advance, Gerd
06-25-2014
04:31 AM
So, in the end: why can I set "MapReduce Child Java Maximum Heap Size (Client Override)" in the "TaskTracker" section of the mapreduce1 service configuration in CM at all, if any submitted job uses the mapred.child.java.opts from /etc/hadoop/conf/mapred-site.xml? That file is generated from the settings of the "Gateway (Default)" section in the configuration pane of the MapReduce service. When is the child heap size setting from the TaskTracker section applied, and to whom? Currently I really have no clue, since jobs submitted on the shell or via Hive receive the settings from the standard config directory /etc/hadoop/conf. Best, Gerd
06-25-2014
02:28 AM
Hi, after some further tests it seems like I got it 😉
a) The configuration in the "Gateway..." section is written into the mapred-site.xml of the client configuration and thereby deployed via "Deploy Client Configuration" to the corresponding directory under /etc/hadoop/conf (via update-alternatives).
b) If I submit the example M/R job, it uses that config from /etc/hadoop/conf.
c) The settings in the "TaskTracker..." section are written to the mapred-site.xml of the run directory while restarting the corresponding service, and therefore the modifications in this section are never considered for submitted jobs.
Is this correct?
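The split described in a) and c) can be made visible by grepping the same property in both locations. A sketch with two made-up mapred-site.xml fragments standing in for the deployed client config and the service run-directory config; the values are sample numbers chosen to illustrate a mismatch:

```shell
# Sketch: the same property can live in two places with different values:
# the deployed client config (/etc/hadoop/conf/mapred-site.xml) and the
# service run-directory copy. Both fragments below are invented samples.
cat > /tmp/client_mapred-site.xml <<'EOF'
<property><name>mapred.child.java.opts</name><value>-Xmx145171557</value></property>
EOF

cat > /tmp/tasktracker_mapred-site.xml <<'EOF'
<property><name>mapred.child.java.opts</name><value>-Xmx232783872</value></property>
EOF

grep -o -e '-Xmx[0-9]*' /tmp/client_mapred-site.xml        # → -Xmx145171557
grep -o -e '-Xmx[0-9]*' /tmp/tasktracker_mapred-site.xml   # → -Xmx232783872
```

A submitted job reads the first file, which is why changes made only in the TaskTracker section never reach it.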
06-25-2014
12:56 AM
Hi, I recently tried to modify the heap size for M/R tasks by setting the property "MapReduce Child Java Maximum Heap Size (Client Override)" under "TaskTracker" => "Resource Management". I set the value to 222 MiB. After any config change I re-deploy the configuration and restart the services. If I submit e.g. the "pi" job from hadoop-mapreduce-examples.jar and look into the job's XML configuration, I can see that "mapred.child.java.opts" is set to "-Xmx145171557". This is the part I don't understand, since I'd expect mapred.child.java.opts to be set to my configured value of 222 MiB. The size 145171557 matches exactly the configured value for "MapReduce Child Java Maximum Heap Size" of the default Gateway role, but there is no node serving the Gateway role. Why is the heap size value of the Gateway role applied to the configuration of a submitted job, rather than the value set explicitly under "TaskTracker"? Am I missing the forest for the trees? Any help highly appreciated, regards...Gerd...
=== Info: I tested this behaviour on a 10-node CDH4.5 cluster as well as in a CDH4 Quickstart VM, so I assume it is a basic misunderstanding somewhere...
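A quick sanity check of the numbers above: 222 MiB converted to bytes does not match the observed -Xmx value of 145171557, which confirms the override never reached the job configuration:

```shell
# Sketch: convert the configured 222 MiB to bytes and compare with the
# -Xmx value observed in the submitted job's configuration.
configured=$((222 * 1024 * 1024))
observed=145171557
echo "$configured"   # → 232783872
[ "$configured" -ne "$observed" ] && echo "override not applied"   # → override not applied
```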
Labels:
- Apache Hadoop
- Gateway
- MapReduce
- Quickstart VM
05-16-2014
11:34 AM
Sorry for bothering 😉 The issue has been solved; the error was the result of many people working concurrently in the same folder. Given that, the message "file not found" did make sense....