
Apache Metron Single Node installation problems

Contributor

Hi,

I am trying to set up a single node via Vagrant, but no matter what I try it keeps throwing errors. Can you please guide me toward a solution?

(attached screenshot: error-while-vagrantup.png)

TASK [libselinux-python : Install libselinux-python] ***************************

fatal: [node1]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'result.rc == 0' failed. The error was: ERROR! error while evaluating conditional (result.rc == 0): ERROR! 'dict object' has no attribute 'rc'"}

After this I tried vagrant provision, which got past that task but produced a new kind of error.

TASK [yum-update : Yum Update Packages] ****************************************

fatal: [node1]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'result.rc == 0' failed. The error was: ERROR! error while evaluating conditional (result.rc == 0): ERROR! 'dict object' has no attribute 'rc'"}
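
One way to narrow this down (a sketch only; the quick-dev-platform path and the node name node1 are assumptions based on the rest of this thread, adjust them to your setup) is to confirm the guest is reachable and that yum itself works inside it, since a missing 'rc' in the registered result usually means the yum module crashed before it could return anything:

  # Sketch: run the failing steps by hand inside the guest
  cd incubator-metron/metron-deployment/vagrant/quick-dev-platform
  vagrant status node1
  vagrant ssh node1 -c "sudo yum -y install libselinux-python"
  vagrant ssh node1 -c "sudo yum -y update"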
1 ACCEPTED SOLUTION

Contributor

Thanks to all. This problem was resolved when I increased the RAM to 64 GB.


Contributor

Hi,

After fixing the versions I have run into this problem now.

I am getting the following error while trying to install the latest Apache Metron using quick-dev-platform:

TASK [ambari_config : Start the ambari cluster - no wait] **********************
changed: [node1]
TASK [ambari_config : Start the ambari cluster - wait] *************************
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
PLAY RECAP *********************************************************************
node1                      : ok=15   changed=1    unreachable=0    failed=1  
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

The following is the output of platform-info.sh on my machine:

./metron-deployment/scripts/platform-info.sh
Metron 0.2.0BETA
fatal: Not a git repository (or any of the parent directories): .git
--
ansible 2.0.0.2
  config file = 
  configured module search path = Default w/o overrides
--
Vagrant 1.8.1
--
Python 2.7.11
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T21:41:47+05:00)
Maven home: /usr/share/apache-maven
Java version: 1.8.0_45, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_45/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-642.el6.x86_64", arch: "amd64", family: "unix"
--
Linux CentOS17 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Any help will be appreciated.

Super Collaborator

Hey @Farrukh Anjum, my guess is that you might be running out of memory on your Linux box, because I have faced a similar situation on my Dell Vostro with Ubuntu 14.04. Can you dump the contents of /proc/meminfo and /proc/cpuinfo on your box?
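
Something like the following is enough (a quick sketch; run it on the host, and optionally inside the guest via vagrant ssh):

  # Host-side summary of memory and CPUs (the full files are /proc/meminfo and /proc/cpuinfo)
  grep -E 'MemTotal|MemFree|SwapTotal|SwapFree' /proc/meminfo
  grep -c ^processor /proc/cpuinfo
  # Same check inside the guest, if it is up
  vagrant ssh node1 -c "grep -E 'MemTotal|MemFree' /proc/meminfo && grep -c ^processor /proc/cpuinfo"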

Also, you can add ansible.verbose = "vvv" in the 'Vagrantfile' in the incubator-metron/metron-deployment/vagrant/quick-dev-platform folder and re-run 'run.sh'. This will increase the verbosity and indicate what could be going wrong. Note that you might have to change 'up' to 'provision' in run.sh if you are running the same command again (see the sketch after the snippet below).

  if ansibleSkipTags != '' or ansibleTags != ''
    config.vm.provision :ansible do |ansible|
        ansible.playbook = "../../playbooks/metron_full_install.yml"
        ansible.sudo = true
        ansible.tags = ansibleTags.split(",") if ansibleTags != ''
        ansible.skip_tags = ansibleSkipTags.split(",") if ansibleSkipTags != ''
        ansible.inventory_path = "../../inventory/full-dev-platform"
        ansible.verbose = "vvv"   # added line: increases Ansible output verbosity
    end
  end
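
The run.sh change mentioned above amounts to something like this (a sketch; check your copy of the script, the exact line may differ):

  # metron-deployment/vagrant/quick-dev-platform/run.sh (sketch)
  # vagrant up          # first run: boots the VM and provisions it
  vagrant provision     # subsequent runs: re-run only the Ansible provisioning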

Contributor

Hi @asubramanian

I enabled verbose mode and it is now giving the following details:

<node1> SSH: EXEC scp -C -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /root/incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r /tmp/tmpCymgCH '[node1]:/home/vagrant/.ansible/tmp/ansible-tmp-1473230983.26-243852837830618/yum'
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /root/incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r -tt node1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-zggucclqgkinuvvngryguppbglygrvng; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python -tt /home/vagrant/.ansible/tmp/ansible-tmp-1473230983.26-243852837830618/yum; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1473230983.26-243852837830618/" > /dev/null 2>&1'"'"'"'"'"'"'"'"''"'"''
fatal: [node1]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'result.rc == 0' failed. The error was: ERROR! error while evaluating conditional (result.rc == 0): ERROR! 'dict object' has no attribute 'rc'"}
PLAY RECAP *********************************************************************
node1                      : ok=34   changed=6    unreachable=0    failed=1  
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

How can I fix this?
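
One check worth doing at this point (a sketch; an rc-less result often means the module process was killed rather than that yum returned an error) is whether the guest is low on memory while yum runs:

  # Sketch: look for memory pressure and OOM kills inside the guest
  vagrant ssh node1 -c "free -m"
  vagrant ssh node1 -c "dmesg | grep -i 'out of memory' | tail -n 20"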

Contributor

Hi @asubramanian

It is quite strange that this problem is now coming back. The following is the error log you asked for.

TASK [ambari_config : Start the ambari cluster - wait] *************************
task path: /root/incubator-metron/metron-deployment/roles/ambari_config/tasks/start_hdp.yml:34
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /root/incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r -tt node1 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255 )" )'
<node1> PUT /tmp/tmpgcNEKd TO /home/vagrant/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255/ambari_cluster_state
<node1> SSH: EXEC scp -C -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /root/incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r /tmp/tmpgcNEKd '[node1]:/home/vagrant/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255/ambari_cluster_state'
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /root/incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r -tt node1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-rhyjdnwebywftmalgheffabpduagxrfz; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255/ambari_cluster_state; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1473293699.31-125101223690255/" > /dev/null 2>&1'"'"'"'"'"'"'"'"''"'"''
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"blueprint_name": null, "blueprint_var": null, "cluster_name": "metron_cluster", "cluster_state": "started", "configurations": null, "host": "node1", "password": "admin", "port": 8080, "username": "admin", "wait_for_complete": true}, "module_name": "ambari_cluster_state"}, "msg": "Ambari client exception occurred: No JSON object could be decoded"}
PLAY RECAP *********************************************************************
node1                      : ok=15   changed=1    unreachable=0    failed=1  
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
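
A sketch of how to check whether Ambari is actually up and answering its REST API at this point (the admin/admin credentials, host node1 and port 8080 are taken from the module_args in the failure above):

  # If Ambari is healthy this returns JSON; an empty reply or an HTML error page
  # would explain the "No JSON object could be decoded" message
  curl -s -u admin:admin http://node1:8080/api/v1/clusters
  # Check the Ambari server process itself inside the guest
  vagrant ssh node1 -c "sudo ambari-server status"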


Explorer

I am having this same problem doing the install. Re-running vagrant provision does not get me past the error during the libselinux-python task. I can tell the VM is running because I can see the login screen. Are there any other solutions that fixed this?

Super Collaborator

Hey @Earl Hinkle, please post the output of incubator-metron/metron-deployment/scripts/platform-info.sh

Explorer

So I went through, uninstalled and reinstalled everything, and tried the quick deploy versus the full deploy. I got further, but the system still does not come completely up. I see the Ambari server starts and I can see the login page, but it is still failing. Below is the error. If I go to node1:8080 I do see the Ambari login.

Task [ambari_config : Start the ambari cluster - no wait]
changed: [node1]
Task [ambari_config : Start the ambari cluster - wait]
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "Request failed with status FAILED"}

Here is the output of the command requested.

METRON-616: Added support for float and long literals in Stellar
closes apache/incubator-metron#392
--
--
ansible 2.0.0.2
  config file = /incubator-metron/metron-deployment/vagrant/quick-dev-platform/ansible.cfg
  configured module search path = ../../extra_modules
--
Vagrant 1.8.1
--
Python 2.7.12
--
Apache Maven 3.3.9
Maven home: /usr/share/maven
Java version: 1.8.0_112, vendor: Oracle Corporation
Java home: /usr/local/java/jdk1.8.0_112/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.4.0-53-generic", arch: "amd64", family: "unix"
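
A sketch of how to see which Ambari request actually failed (the cluster name metron_cluster and admin/admin credentials are the playbook defaults seen earlier in the thread; the field names are from the Ambari v1 REST API and may need adjusting for your version):

  # List the requests Ambari ran (service starts show up here) and their status
  curl -s -u admin:admin \
    'http://node1:8080/api/v1/clusters/metron_cluster/requests?fields=Requests/request_status,Requests/request_context'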

Explorer

Below is the output when changing verbose to vvv

TASK [ambari_config : Start the ambari cluster - wait] *************************
task path: /incubator-metron/metron-deployment/roles/ambari_config/tasks/start_hdp.yml:34
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r -tt node1 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1482284789.32-204398118579253 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1482284789.32-204398118579253 )" )'
<node1> PUT /tmp/tmp8gjt4U TO /home/vagrant/.ansible/tmp/ansible-tmp-1482284789.32-204398118579253/ambari_cluster_state
<node1> SSH: EXEC sftp -b - -C -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r '[node1]'
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /incubator-metron/metron-deployment/vagrant/quick-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%p-%r -tt node1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-pxqzgtoqufpebexymsvjohxeysursuva; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1482284789.32-204398118579253/ambari_cluster_state; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1482284789.32-204398118579253/" > /dev/null 2>&1'"'"'"'"'"'"'"'"''"'"''
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"blueprint_name": null, "blueprint_var": null, "cluster_name": "metron_cluster", "cluster_state": "started", "configurations": null, "host": "node1", "password": "admin", "port": 8080, "username": "admin", "wait_for_complete": true}, "module_name": "ambari_cluster_state"}, "msg": "Request failed with status FAILED"}
PLAY RECAP *********************************************************************
node1                      : ok=19   changed=3    unreachable=0    failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
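
Since the module only reports "Request failed with status FAILED", the concrete failure has to be pulled out of Ambari itself. A sketch (the request id placeholder below is hypothetical; use an id returned by a requests listing like the one sketched earlier in the thread, and the task fields may need adjusting for your Ambari version):

  # Show the per-task status and stderr for one failed request
  curl -s -u admin:admin \
    'http://node1:8080/api/v1/clusters/metron_cluster/requests/<request-id>/tasks?fields=Tasks/status,Tasks/stderr'
  # The Ambari UI at http://node1:8080 (Background Operations view) shows the same information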