Member since: 09-29-2016
Posts: 19
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1166 | 10-10-2016 03:29 PM |
10-14-2016
11:21 PM
@Roman Glova Thanks for your answers. As I expected, based on your advice, my localhost was not obfuscated due to port forwarding. I had to (writing it out for people like me who might find this post with the same problem):
- Specify a public IP in the Vagrantfile and reload
- Restart ambari-server
- Install the Kerberos client as described here: https://help.ubuntu.com/lts/serverguide/kerberos.html
- kinit using a user previously added with add_princ on the Kerberos server
- klist to check that the credentials have been generated
And it's magic, I could log into the web UI. kdestroy will destroy the credentials when you're done.
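Put together, a minimal sketch of the sequence on an Ubuntu host (the realm and principal are placeholders for mine):

```
# Install the Kerberos client tools (Ubuntu package, per the guide linked above)
sudo apt-get install krb5-user

# Obtain a ticket for a principal previously created with add_principal
kinit myuser@EXAMPLE.COM

# Check that the ticket landed in the credential cache
klist

# ...browse the web UIs, then wipe the cache when done
kdestroy
```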
10-14-2016
10:44 PM
@Roman Glova Thanks again for your reply. I tried what you suggested but it's not working. Thinking about it and trying to figure out how it works, I suspect it's because I'm using Vagrant and redirecting the machine's local ports to host ports. I thought the forwarding was transparent to the running VM, but perhaps the web UI expects credentials from the active user of the host system rather than from the hosted (Vagrant) system? Is that right? In that case, I suppose I have to install the Kerberos client locally and do some more configuration? Am I right?
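A minimal sketch of the host-side check I have in mind (the forwarded ResourceManager port 8088 and the principal are assumptions from my setup):

```
# Run on the HOST, not inside the VM: the browser uses the host user's
# Kerberos ticket cache, so SPNEGO can only succeed if a ticket exists here
kinit myuser@EXAMPLE.COM
klist

# curl can do the SPNEGO handshake too; "-u :" pulls the identity from the cache
curl --negotiate -u : http://localhost:8088/cluster
```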
10-14-2016
10:18 PM
Hi @Roman Glova,
Thanks for your answer. Can you be more precise about how to get those credentials, please? I'm connecting to the Ambari web UI using the generic admin account; which account do you mean Kerberos checks credentials for when I use an Ambari quick link to a service's web UI? Regards.
10-14-2016
09:52 PM
Hello, following the documentation about enabling SPNEGO authentication for Hadoop step by step, I end up getting 401 authentication errors when trying to display the web UIs of the affected services (HDFS, YARN, MapReduce):

HTTP ERROR 401
Problem accessing /cluster. Reason: Authentication required

It's more than obvious I made a noob mistake configuring SPNEGO or Kerberos. Anyway, all services start with no alert, and I am able to run the "Spark tour in 5 seconds" tutorial in Zeppelin, which involves downloading a file, moving it onto HDFS, and a Spark read from HDFS to count empty lines. Needless to say, I of course double-checked the configuration changes I made, and most of the time copy/pasted from the Ambari documentation after reading on external sources what each setting was meant to do, so... Can anyone help me get out of trouble? Regards.
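For the record, a minimal sketch of how the error reproduces outside the browser (host and port are from my setup; adjust to yours):

```
# An unauthenticated request to a SPNEGO-protected UI should answer 401
# with a "WWW-Authenticate: Negotiate" challenge header
curl -i http://master1.localdomain:8088/cluster
```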
10-14-2016
06:47 PM
Hi @nshetty, thanks for your answer; I feel better knowing I did this step right, since I'm stuck at a step further on; I won't roll back these parameter additions. Regards.
10-14-2016
01:33 PM
Hi, I'm trying to set up HTTP authentication, but the documentation is unclear about adding properties. In my Ambari setup, the properties described in the documentation are missing from the HDFS 'Advanced core-site' section, apart from hadoop.http.authentication.simple.anonymous.allowed, yet the documentation asks to add them if not present. As the "Add Property" button is only available in the web UI for Custom sections, it's unclear to me whether I have to manually update the XML config files for 'Advanced core-site' (and I don't know how to do that in a simple way) or whether I can use the 'Custom core-site' section instead and save via the web UI. I looked for an answer in the documentation on adding Ambari services but I'm a bit lost; can anyone help? Regards.
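For reference, a minimal sketch of setting such a property outside the web UI with the configs.sh helper that ships with the Ambari server (the cluster name MyCluster, the admin/admin credentials, and the example key are assumptions; adjust to your setup):

```
# configs.sh lives on the Ambari server host; "set" writes the property into
# the core-site config type (whether or not the key already exists) and
# creates a new config version that Ambari pushes out on service restart
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set localhost MyCluster core-site \
  "hadoop.http.authentication.type" "kerberos"
```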
Labels:
- Apache Hadoop
10-10-2016
03:29 PM
Hi, after a long, long, long way searching the web for answers to help me debug YARN not starting, I wondered whether all this mess could be a network configuration problem. The fact is I wasn't even able to find where the right startup logs for YARN were; the only symptom I got was the web UI being unavailable. So I reinstalled the whole cluster, but this time putting more services on the slave VMs, hoping another component would fail to start and give me explicit logs. It worked.

The explanation: whatever configuration you use in the Vagrantfile, Vagrant adds to /etc/hosts a line mapping the hostname you defined for the machine to the localhost address, and several components of the stack, at one step or another of the setup process, use the first IP found in the hosts file rather than the hostname. This explains why everything worked on a single node and why there was so much mess in multi-node: some services or logs were accessed through the 127.0.0.1 address but were actually on other machines.

The workaround is to provision an inline shell command like this: "sudo sed -i'' '/^127.0.0.1\\t#{hostname}\\t#{name}$/d' /etc/hosts" in the Vagrantfile, then install and set up the Ambari server. Trying to correct it afterwards means checking every configuration file of the stack by hand. This ticket can be closed.
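For anyone hitting the same thing on an already-provisioned VM, a minimal sketch of checking and cleaning the entry by hand (hostnames are from my setup; adjust the pattern to yours):

```
# Vagrant appends a loopback mapping for the VM's hostname, e.g.
# "127.0.0.1  master1.localdomain  master1", which wins name resolution
grep '^127\.0\.0\.1' /etc/hosts

# Drop that line so the hostname resolves to the private-network IP instead
sudo sed -i '/^127\.0\.0\.1[[:space:]].*master1/d' /etc/hosts

# Confirm the hostname now resolves to the routable address (192.168.0.50 here)
getent hosts master1.localdomain
```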
10-03-2016
03:11 PM
Please note each VM can ping the others, and master1 has port 8080 bound to host port 8080 so I can use the UI.
10-03-2016
03:09 PM
Hi mates, I'm new to Ambari, testing it for my company as, maybe, our future Hadoop stack management and monitoring platform. Everything worked fine on a single node (I tested several Zeppelin tutorials), but the mess comes when installing on 2 Vagrant-driven VMs ("master1" and "agent1"; I ping-tested them together, they can communicate). Please note the master1 VM has 24 GB of RAM and 4 vCPUs, the agent1 VM has 16 GB of RAM and 1 vCPU, and both VMs run on the same Linux workstation. master1 has both the client and the server installed; agent1 has the client installed. I installed HDFS 2.7.3, YARN 2.7.3, MapReduce2 2.7.3, ZooKeeper 3.4.6, and Ambari Metrics 0.1.0 on this "mini" cluster. The YARN ResourceManager, which worked perfectly on a single node, doesn't anymore. I crawled the web but haven't figured the problem out so far. Can anyone help? Regards. Details below:

Installation: I got several warnings during installation:
On master1: App Timeline Server Start, History Server Start, ResourceManager Start, SNameNode Start, Metrics Collector Start, NodeManager Start, Grafana Start
On agent1: Check ZooKeeper, Check HDFS, NodeManager Start, Check Ambari Metrics, Check YARN, Check MapReduce2

Log files: First of all, the log file paths given for the warnings in the "Install, Start and Test" section don't exist:
Master: /var/lib/ambari-agent/data/output-*.txt => * ranging from 37 to 43, but the last existing file is 36
Agent: /var/lib/ambari-agent/data/output-*.txt => * ranging from 4 to 48, but the last existing file is 36

Run: HDFS, MapReduce2, ZooKeeper and Ambari Metrics are reported as running fine by Ambari, but not the YARN ResourceManager, which is reported as stopped after several minutes; in fact its logs have looked like this since it started:
2016-10-03 14:52:56,572 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master1.localdomain/192.168.0.50:2181. Will not attempt to authenticate using SASL (unknown error)
2016-10-03 14:52:56,573 INFO zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established to master1.localdomain/192.168.0.50:2181, initiating session
2016-10-03 14:52:56,574 INFO zookeeper.ClientCnxn (ClientCnxn.java:run(1142)) - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-03 14:52:56,675 INFO recovery.ZKRMStateStore (ZKRMStateStore.java:runWithRetries(1227)) - Exception while executing a ZK operation.
2016-10-03 14:51:16,158 INFO recovery.ZKRMStateStore (ZKRMStateStore.java:runWithRetries(1230)) - Retrying operation on ZK. Retry no. 695
2016-10-03 14:51:16,584 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server master1.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-10-03 14:51:16,584 INFO zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established to master1.localdomain/127.0.0.1:2181, initiating session
2016-10-03 14:51:16,585 INFO zookeeper.ClientCnxn (ClientCnxn.java:run(1142)) - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2016-10-03 14:51:16,910 INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server agent1.localdomain/192.168.0.51:2181. Will not attempt to authenticate using SASL (unknown error)
2016-10-03 14:51:16,911 INFO zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(864)) - Socket connection established to agent1.localdomain/192.168.0.51:2181, initiating session
2016-10-03 14:51:16,912 INFO zookeeper.ClientCnxn (ClientCnxn.java:run(1142)) - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
[root@master1 yarn]#

Logs on the ZooKeeper side:

2016-10-03 14:51:16,056 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.0.50:34266
2016-10-03 14:51:16,056 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-10-03 14:51:16,056 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.0.50:34266 (no session established for client)
2016-10-03 14:51:16,584 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /127.0.0.1:39082
2016-10-03 14:51:16,585 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-10-03 14:51:16,585 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:39082 (no session established for client)
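In case it helps, a minimal sketch of the four-letter-word checks that can be run against ZooKeeper to confirm what the warnings above say (host and port are from my setup):

```
# "ruok" answers "imok" only if the ZooKeeper server is actually serving;
# no reply matches the "ZooKeeperServer not running" warnings above
echo ruok | nc master1.localdomain 2181

# "stat" prints the server mode (standalone/leader/follower) and connected clients
echo stat | nc master1.localdomain 2181
```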
Labels:
- Apache YARN
- Cloudera Manager