Member since: 10-09-2014
Posts: 43
Kudos Received: 13
Solutions: 2

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1673 | 01-14-2016 05:56 AM
 | 1598 | 11-11-2014 07:05 AM
03-15-2018
08:00 PM
Hi, I am trying to create a NiFi cluster using an Ambari blueprint, but I didn't find a sample blueprint template. Has anyone done this who can share a sample blueprint template? Thanks
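A minimal sketch of what a blueprint-driven NiFi install can look like through the Ambari REST API; the stack version, host-group layout, and the NIFI_MASTER/ZOOKEEPER_SERVER component names are assumptions that would need to match the HDF management pack actually installed:
# Sketch: register a blueprint, then create the cluster from it (names, versions, and hosts are assumptions)
cat > nifi-blueprint.json <<'EOF'
{
  "Blueprints": { "stack_name": "HDF", "stack_version": "2.0" },
  "host_groups": [
    { "name": "nifi-hg-1", "cardinality": "2",
      "components": [ { "name": "NIFI_MASTER" }, { "name": "ZOOKEEPER_SERVER" } ] }
  ]
}
EOF
cat > nifi-cluster.json <<'EOF'
{
  "blueprint": "nifi-blueprint",
  "default_password": "changeme",
  "host_groups": [
    { "name": "nifi-hg-1",
      "hosts": [ { "fqdn": "nifi-node1.example.com" }, { "fqdn": "nifi-node2.example.com" } ] }
  ]
}
EOF
curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @nifi-blueprint.json http://ambari-host:8080/api/v1/blueprints/nifi-blueprint
curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @nifi-cluster.json http://ambari-host:8080/api/v1/clusters/test-nifi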
05-16-2017
01:24 PM
Hi, I am trying to add a new host to an Ambari cluster through Python. I want to perform the equivalent of
curl -i -H "X-Requested-By: ambari" -u username:password -X POST -d @test.json http://dev-nifi01-ambari1:80/api/v1/clusters/test-nifi/hosts/hostname
with the following code:
import os
import urllib2
import base64
import json
username = "admin"
password = "admin"
hostname = os.getenv('HOSTNAME')
url = "http://dev-nifi01-ambari1:80/api/v1/clusters/test-nifi/hosts/" + hostname
data = {"blueprint" : "recommended","host_group" : "nifi-hg-1"}
req = urllib2.Request(url, data=json.dumps(data))
base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
req.add_header('X-Requested-By','ambari')
req.add_header("Authorization", "Basic %s" % base64string)
res = urllib2.urlopen(req)
But I am getting the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
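One hedged way to narrow this down is to replay exactly the payload the script builds with curl and look at the error body Ambari returns; it is also worth checking whether os.getenv('HOSTNAME') really resolves to the fully qualified name Ambari expects (the hostname -f below is an assumption about what that should be):
# Replay the same payload the Python script sends and show Ambari's error body (sketch)
HOST_FQDN=$(hostname -f)
curl -i -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"blueprint":"recommended","host_group":"nifi-hg-1"}' \
  "http://dev-nifi01-ambari1:80/api/v1/clusters/test-nifi/hosts/${HOST_FQDN}"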
02-15-2017
03:12 PM
2 Kudos
Hi, we have been using HDInsight on Azure and have seen that most script actions on the cluster are run through Ambari. Now we have a NiFi cluster set up using Ambari, so I need to know how I can run custom scripts on the cluster nodes through Ambari. Thanks
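For reference, a hedged sketch of how Ambari triggers commands on cluster nodes through its REST API: you POST a request whose resource filters name the service, component, and hosts. This only runs commands Ambari already knows about (for example RESTART, or a custom command defined by the service); arbitrary shell scripts would first have to be packaged as an Ambari custom action or service. The cluster, service, and component names below are assumptions:
# Sketch: ask Ambari to run a known command on selected hosts
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"RequestInfo":{"context":"Restart NiFi on selected nodes","command":"RESTART"},
       "Requests/resource_filters":[{"service_name":"NIFI","component_name":"NIFI_MASTER","hosts":"node1,node2"}]}' \
  http://ambari-host:8080/api/v1/clusters/test-nifi/requests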
11-29-2016
09:10 PM
Hi, I am trying to install NiFi on Red Hat 6.8 with the following steps:
wget http://public-repo-1.hortonworks.com/HDF/2.0.1.0/HDF-2.0.1.0-12.tar.gz
tar -zxf HDF-2.0.1.0-12.tar.gz
cd HDF-2.0.1.0-12/nifi
bin/nifi.sh start
After this I see the following messages in the logs:
[root@abc000732 nifi]# tailf logs/nifi-bootstrap.log
2016-11-29 20:06:03,579 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
2016-11-29 20:07:11,636 INFO [main] o.a.n.b.NotificationServiceManager Successfully loaded the following 0 services: []
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STARTED
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_STOPPED
2016-11-29 20:07:11,639 INFO [main] org.apache.nifi.bootstrap.RunNiFi Registered no Notification Services for Notification Type NIFI_DIED
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Starting Apache NiFi...
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Working Directory: /opt/HDF-2.0.1.0/nifi
2016-11-29 20:07:11,657 INFO [main] org.apache.nifi.bootstrap.Command Command: java -classpath /opt/HDF-2.0.1.0/nifi/./conf:/opt/HDF-2.0.1.0/nifi/./lib/nifi-documentation-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/logback-classic-1.1.3.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-api-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-nar-utils-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/slf4j-api-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-properties-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/commons-lang3-3.4.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-properties-loader-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-framework-api-1.0.0.2.0.1.0-12.jar:/opt/HDF-2.0.1.0/nifi/./lib/logback-core-1.1.3.jar:/opt/HDF-2.0.1.0/nifi/./lib/jul-to-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/log4j-over-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/jcl-over-slf4j-1.7.12.jar:/opt/HDF-2.0.1.0/nifi/./lib/bcprov-jdk15on-1.54.jar:/opt/HDF-2.0.1.0/nifi/./lib/nifi-runtime-1.0.0.2.0.1.0-12.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -XX:+UseG1GC -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.properties.file.path=/opt/HDF-2.0.1.0/nifi/./conf/nifi.properties -Dnifi.bootstrap.listen.port=19169 -Dapp=NiFi -Dorg.apache.nifi.bootstrap.config.log.dir=/opt/HDF-2.0.1.0/nifi/logs org.apache.nifi.NiFi
2016-11-29 20:07:12,102 INFO [NiFi Bootstrap Command Listener] org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for Bootstrap requests on port 52062
2016-11-29 20:07:14,687 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi never started. Will not restart NiFi
[root@abc000732 nifi]# tailf logs/nifi-app.log
2016-11-29 20:07:13,939 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-standard-services-api-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-standard-services-api-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,941 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-enrich-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-enrich-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,944 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-elasticsearch-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-elasticsearch-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,949 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-standard-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-standard-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,950 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-avro-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-avro-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,951 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-amqp-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-amqp-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,961 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-hive-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-hive-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,962 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-riemann-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-riemann-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,963 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/./work/nar/extensions/nifi-scripting-nar-1.0.0.2.0.1.0-12.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-scripting-nar-1.0.0.2.0.1.0-12.nar-unpacked]
2016-11-29 20:07:13,965 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/HDF-2.0.1.0/nifi/
I have Oracle Java (1.8.0_60) installed on this VM, iptables is off, and SELinux is disabled. Does anyone know what's wrong here? The same steps work on CentOS 6.8.
11-18-2016
06:52 PM
I am trying to install a 2-node NiFi cluster using Ambari. I was following this doc. At the time of deployment of the HDF components, it's failing. More specifically, the following tasks are failing:
Infra Solr Instance Install
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 110, in <module>
InfraSolr().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr.py", line 34, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-infra-solr-client' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-202.txt
2016-11-18 18:38:52,578 - Group['hadoop'] {}
2016-11-18 18:38:52,580 - Group['nifi'] {}
2016-11-18 18:38:52,580 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:52,581 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:52,581 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:52,582 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:52,582 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-18 18:38:52,583 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'nifi']}
2016-11-18 18:38:52,583 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-18 18:38:52,585 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-18 18:38:52,589 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-18 18:38:52,598 - Initializing 2 repositories
2016-11-18 18:38:52,600 - Repository['HDF-2.0'] {'base_url': 'http://public-repo-1.hortonworks.com/HDF/centos7/2.x/updates/2.0.1.0', 'action': ['create'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDF', 'mirror_list': None}
2016-11-18 18:38:52,607 - File['/etc/yum.repos.d/HDF.repo'] {'content': '[HDF-2.0]\nname=HDF-2.0\nbaseurl=http://public-repo-1.hortonworks.com/HDF/centos7/2.x/updates/2.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-11-18 18:38:52,608 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-11-18 18:38:52,610 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-11-18 18:38:52,610 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:53,347 - Skipping installation of existing package unzip
2016-11-18 18:38:53,347 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:53,356 - Skipping installation of existing package curl
2016-11-18 18:38:53,357 - Package['hdf-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:53,365 - Skipping installation of existing package hdf-select
2016-11-18 18:38:53,499 - Package['ambari-infra-solr-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:53,562 - Installing package ambari-infra-solr-client ('/usr/bin/yum -d 0 -e 0 -y install ambari-infra-solr-client')
Command failed after 1 tries
Infra Solr Client Install
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr_client.py", line 51, in <module>
InfraSolrClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/AMBARI_INFRA/0.1.0/package/scripts/infra_solr_client.py", line 29, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-infra-solr-client' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-203.txt
2016-11-18 18:38:55,368 - Group['hadoop'] {}
2016-11-18 18:38:55,369 - Group['nifi'] {}
2016-11-18 18:38:55,369 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:55,369 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:55,370 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:55,370 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-18 18:38:55,371 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-18 18:38:55,371 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'nifi']}
2016-11-18 18:38:55,372 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-18 18:38:55,373 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-18 18:38:55,376 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-18 18:38:55,387 - Initializing 2 repositories
2016-11-18 18:38:55,389 - Repository['HDF-2.0'] {'base_url': 'http://public-repo-1.hortonworks.com/HDF/centos7/2.x/updates/2.0.1.0', 'action': ['create'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDF', 'mirror_list': None}
2016-11-18 18:38:55,394 - File['/etc/yum.repos.d/HDF.repo'] {'content': '[HDF-2.0]\nname=HDF-2.0\nbaseurl=http://public-repo-1.hortonworks.com/HDF/centos7/2.x/updates/2.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-11-18 18:38:55,395 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-11-18 18:38:55,397 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-11-18 18:38:55,397 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:55,457 - Skipping installation of existing package unzip
2016-11-18 18:38:55,457 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:55,465 - Skipping installation of existing package curl
2016-11-18 18:38:55,465 - Package['hdf-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:55,473 - Skipping installation of existing package hdf-select
2016-11-18 18:38:55,625 - Package['ambari-infra-solr-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-18 18:38:55,688 - Installing package ambari-infra-solr-client ('/usr/bin/yum -d 0 -e 0 -y install ambari-infra-solr-client')
Command failed after 1 tries
I also tried to install these packages manually, but got:
[root@hdf1 ~]# yum -y install ambari-infra-solr-client
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No package ambari-infra-solr-client available.
Error: Nothing to do
[root@hdf1 ~]#
I do have the following repos on all three VMs:
/etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7
path=/
enabled=1
/etc/yum.repos.d/HDF.repo
[HDF-2.0]
name=HDF-2.0
baseurl=http://public-repo-1.hortonworks.com/HDF/centos7/2.x/updates/2.0.1.0
path=/
enabled=1
Does anyone know what's wrong here?
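A possible cause, offered as an assumption to verify: the ambari-infra-solr-client package is shipped in the Ambari repository rather than in the HDF or HDP-UTILS repositories, and neither repo file above points at an Ambari repo. A sketch of adding it (the Ambari version in the URL is an assumption and should match the installed Ambari):
cat > /etc/yum.repos.d/ambari.repo <<'EOF'
[ambari-2.4.1.0]
name=ambari-2.4.1.0
baseurl=http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.1.0
enabled=1
gpgcheck=0
EOF
yum clean all
yum -y install ambari-infra-solr-client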
09-14-2016
08:46 PM
After correcting values for hbase.regionserver.global.memstore.size and hfile.block.cache.size everything looks good.
09-13-2016
01:42 PM
1 Kudo
I am trying to update HBase properties through the Ambari API and was following this document. Here are the steps I followed:
Dumped the existing config into newconfig.json:
curl -u "admin:admin" -G "https://myhbase.net/api/v1/clusters/myhbase/configurations?type=hbase-site&tag=TOPOLOGY_RESOLVED" | jq --arg newtag $(echo version$(date +%s%N)) '.items[] | del(.href, .version, .Config) | .tag |= $newtag | {"Clusters": {"desired_config": .}}' > newconfig.json
Then modified the property from `0.4` to `0.6` in newconfig.json, and also the version number:
"hbase.regionserver.global.memstore.size": "0.6",
Then applied the modified config:
cat newconfig.json | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase"
Then restarted HBase.
Stop:
echo '{"RequestInfo": {"context" :"Stopping the Hbase service"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase/services/HBASE"
Start:
echo '{"RequestInfo": {"context" :"Restarting the Hbase service"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' | curl -u "admin:admin" -H "X-Requested-By: ambari" -X PUT -d "@-" "https://myhbase.net/api/v1/clusters/myhbase/services/HBASE"
But after the restart, the HBase master and region servers went down and got stuck in a restart loop. Does anyone know what I am doing wrong here? Is there a better way to do this through the Ambari API?
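A hedged note on the symptom: the memstore and block-cache fractions must leave enough heap for the rest of the RegionServer (HBase refuses to start when hbase.regionserver.global.memstore.size plus hfile.block.cache.size exceeds roughly 0.8), which is consistent with the fix noted in the 09-14-2016 reply above. A quick sketch to confirm which hbase-site version Ambari actually considers desired after the PUT:
# Show the hbase-site tag the cluster now points at (sketch)
curl -s -u admin:admin "https://myhbase.net/api/v1/clusters/myhbase?fields=Clusters/desired_configs/hbase-site"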
08-31-2016
02:41 PM
That's what I am doing:
Going to Ambari as the admin user, choosing Add Host, and providing the IP, SSH username, and private key for the SSH user.
08-31-2016
02:30 PM
I am able to SSH from one of the head nodes to this new host. If I am providing the SSH user and SSH key, why do I need password-less SSH?
08-29-2016
09:04 PM
Hi, I am trying to add a new node to the cluster from Ambari. I am providing the host IP, username, and SSH key, and getting this error in Ambari:
==========================
Creating target directory...
==========================
Command start time 2016-08-29 20:54:24
chmod: cannot access '/var/lib/ambari-agent/data': No such file or directory
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:24
==========================
Copying common functions script...
==========================
Command start time 2016-08-29 20:54:24
scp /usr/lib/python2.6/site-packages/ambari_commons
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:24
==========================
Copying OS type check script...
==========================
Command start time 2016-08-29 20:54:24
scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:25
==========================
Running OS type check...
==========================
Command start time 2016-08-29 20:54:25
Cluster primary/cluster OS family is ubuntu14 and local/current OS family is ubuntu14
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:25
==========================
Checking 'sudo' package on remote host...
==========================
Command start time 2016-08-29 20:54:25
sudo install
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:26
==========================
Copying repo file to 'tmp' folder...
==========================
Command start time 2016-08-29 20:54:26
/etc/apt/sources.list.d/ambari.list: No such file or directory
scp /etc/apt/sources.list.d/ambari.list
host=10.8.17.132, exitcode=1
Command end time 2016-08-29 20:54:26
==========================
Moving file to repo dir...
==========================
Command start time 2016-08-29 20:54:26
mv: cannot stat '/var/lib/ambari-agent/tmp/ambari1472504066.list': No such file or directory
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=1
Command end time 2016-08-29 20:54:26
==========================
Changing permissions for ambari.repo...
==========================
Command start time 2016-08-29 20:54:26
chmod: cannot access '/etc/apt/sources.list.d/ambari.list': No such file or directory
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=1
Command end time 2016-08-29 20:54:27
==========================
Update apt cache of repository...
==========================
Command start time 2016-08-29 20:54:27
Reading package lists... 0%
Reading package lists... 0%
Reading package lists... 20%
Reading package lists... Done
Connection to 10.8.17.132 closed.
SSH command execution finished
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:27
==========================
Copying setup script file...
==========================
Command start time 2016-08-29 20:54:27
scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=10.8.17.132, exitcode=0
Command end time 2016-08-29 20:54:28
ERROR: Bootstrap of host 10.8.17.132 fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Execute of '<bound method BootstrapDefault.copyNeededFiles of <BootstrapDefault(Thread-1, started daemon 140149792454400)>>' failed
STDOUT: Try to execute '<bound method BootstrapDefault.copyNeededFiles of <BootstrapDefault(Thread-1, started daemon 140149792454400)>>'
Does anyone know what I am missing here? This is on Ambari 2.2.1.12 & HDP 2.4.2.4-5. Thanks
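A hedged reading of the log above: bootstrap copies /etc/apt/sources.list.d/ambari.list from the Ambari server to the new host, and that scp step is the one failing with "No such file or directory", so the file may simply be missing on the server. A sketch to check and recreate it there (the repo URL and version are assumptions to adjust to the installed Ambari 2.2.1):
# On the Ambari server host (sketch)
ls -l /etc/apt/sources.list.d/ambari.list
# If it is missing, recreate it so bootstrap can copy it to new hosts:
echo "deb http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.2.1.0 Ambari main" > /etc/apt/sources.list.d/ambari.list
apt-get update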
08-09-2016
10:41 AM
I have this jar file /usr/hdp/2.4.2.0-258/hadoop/src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/WindowsAzureTableSink.java
I also copied it to /usr/hdp/2.4.2.0-258/spark/lib/ and /usr/hdp/2.4.2.0-258/hadoop/lib/
08-08-2016
09:00 PM
2 Kudos
I am getting the following error in an HDInsight cluster while submitting a Spark job:
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.WasbAzureIaasSink
So I am wondering which jar file has the `org.apache.hadoop.metrics2.sink.WasbAzureIaasSink` class?
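A generic way to answer this kind of question, offered as a sketch: scan the jars already on the node for the class name (the directories searched below are assumptions; widen them as needed):
# List jars that contain the WasbAzureIaasSink class (sketch)
for j in /usr/hdp/current/hadoop-client/*.jar /usr/hdp/current/hadoop-client/lib/*.jar /usr/hdp/current/spark-client/lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/hadoop/metrics2/sink/WasbAzureIaasSink' && echo "$j"
done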
07-18-2016
06:30 PM
How do I use /usr/hdp/current/flume-server/etc/init.d/flume-agent with the service command to start/stop the Flume agent? Do I need to copy it to /etc/init.d/?
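One hedged approach (a sketch, not HDP-specific guidance): the service command only looks in /etc/init.d, so the packaged script can be linked or copied there and registered with the init system:
# Make the packaged Flume init script visible to the service command (sketch)
ln -s /usr/hdp/current/flume-server/etc/init.d/flume-agent /etc/init.d/flume-agent
chkconfig --add flume-agent        # RHEL/CentOS
# update-rc.d flume-agent defaults # Ubuntu/Debian equivalent
service flume-agent start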
07-18-2016
12:26 PM
I was following https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/installing_flume.html to install Flume on one of the cluster nodes.
apt-get install flume-agent #This installs init scripts
Even after installing the flume-agent package, it doesn't install the /etc/init.d/flume-ng script. Does anyone know what's wrong here? Thanks
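A quick sketch of how one might check what the package actually laid down before digging further (package name taken from the command above):
# List the files the flume-agent package installed and look for init scripts (sketch)
dpkg -L flume-agent | grep -i init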
07-05-2016
12:41 PM
I do have the azure-storage package installed:
root@sbd-docker:~# pip show azure-storage
---
Name: azure-storage
Version: 0.20.0
Location: /usr/local/lib/python2.7/dist-packages
Requires: azure-nspkg, requests, python-dateutil, azure-common
root@sbd-docker:~#
Is this what you mean?
07-05-2016
12:38 PM
I tried setting the HADOOP_HOME and log4j property you mentioned. Now it looks like this: https://gist.github.com/anonymous/6502365d31d68bc29bc2afac15b01158. spark-shell trace: https://gist.github.com/anonymous/57014be445e1c8526fdaba561739ba44
07-01-2016
05:34 PM
I do have `/usr/hdp/current/hadoop-client/hadoop-azure.jar` present on the node
07-01-2016
02:34 PM
Hi, we have an HDInsight cluster running in Azure, but it doesn't allow spinning up an edge/gateway node at the time of cluster creation. So I was creating this edge/gateway node by installing the packages myself:
echo 'deb http://private-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.2.0 HDP main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14 HDP-UTILS main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/azurecore/ trusty main' >> /etc/apt/sources.list.d/azure-public-trusty.list
gpg --keyserver pgp.mit.edu --recv-keys B9733A7A07513CAD
gpg -a --export 07513CAD | apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys B02C46DF417A0893
gpg -a --export 417A0893 | apt-key add -
apt-get -y install openjdk-7-jdk
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
apt-get -y install hadoop hadoop-hdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl libhdfs0 liblzo2-2 liblzo2-dev hadoop-lzo phoenix hive hive-hcatalog tez mysql-connector-java* oozie oozie-client sqoop flume flume-agent spark
After installing all packages and copying config files from a cluster node, I am able to run hadoop fs commands and YARN jobs. But Spark doesn't work smoothly yet; the following packages are present on the edge/gateway node, with the Spark config from the cluster.
root@sbd-docker:~/ubuntu# dpkg -l | grep spark
ii spark 1.6.1.2.4.2.0-258 all spark is a virtual package that brings spark-2-4-2-0-258 as a dependency.
ii spark-2-4-2-0-258 1.6.1.2.4.2.0-258 all Lightning-Fast Cluster Computing
ii spark-2-4-2-0-258-master 1.6.1.2.4.2.0-258 all Server for Spark master
ii spark-2-4-2-0-258-python 1.6.1.2.4.2.0-258 all Python client for Spark
ii spark-2-4-2-0-258-worker 1.6.1.2.4.2.0-258 all Server for Spark worker
ii spark-2-4-2-0-258-yarn-shuffle 1.6.1.2.4.2.0-258 all Spark Yarn Shuffle jar
root@sbd-docker:~/ubuntu#
spark-shell gives me the following error:
root@sbd-docker:~/ubuntu# spark-shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-examples-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/07/01 14:35:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 14:35:29 INFO SecurityManager: Changing view acls to: root
16/07/01 14:35:29 INFO SecurityManager: Changing modify acls to: root
16/07/01 14:35:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 14:35:29 INFO HttpServer: Starting HTTP Server
16/07/01 14:35:29 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 14:35:29 INFO AbstractConnector: Started SocketConnector@0.0.0.0:47325
16/07/01 14:35:29 INFO Utils: Successfully started service 'HTTP class server' on port 47325.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.1
/_/
Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.7.0_101)
Type in expressions to have them evaluated.
Type :help for more information.
16/07/01 14:35:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 14:35:37 INFO SecurityManager: Changing view acls to: root
16/07/01 14:35:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 14:35:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 14:35:37 INFO Utils: Successfully started service 'sparkDriver' on port 37810.
16/07/01 14:35:39 INFO Slf4jLogger: Slf4jLogger started
16/07/01 14:35:39 INFO Remoting: Starting remoting
16/07/01 14:35:39 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.8.17.5:45089]
16/07/01 14:35:39 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 45089.
16/07/01 14:35:39 INFO SparkEnv: Registering MapOutputTracker
16/07/01 14:35:39 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 14:35:39 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-0de66eed-5a2e-4c6b-a78c-f1719dce3b1d
16/07/01 14:35:39 INFO MemoryStore: MemoryStore started with capacity 517.4 MB
16/07/01 14:35:39 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 14:35:40 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 14:35:40 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/01 14:35:40 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 14:35:40 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.8.17.5:4040
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
16/07/01 14:35:41 INFO TimelineClientImpl: Timeline service address: http://hn0-haspar.pbed5jwkixfebdxr1by2u30lzf.cx.internal.cloudapp.net:8188/ws/v1/timeline/
16/07/01 14:35:41 INFO AbstractService: Service org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl failed in state STARTED; cause: java.io.IOException: No FileSystem for scheme: wasb
java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/07/01 14:35:41 INFO AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
16/07/01 14:35:41 ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 14:35:41 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 14:35:41 INFO SparkUI: Stopped Spark web UI at http://10.8.17.5:4040
16/07/01 14:35:41 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
16/07/01 14:35:41 INFO YarnClientSchedulerBackend: Stopped
16/07/01 14:35:41 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 14:35:41 INFO MemoryStore: MemoryStore cleared
16/07/01 14:35:41 INFO BlockManager: BlockManager stopped
16/07/01 14:35:41 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 14:35:41 WARN MetricsSystem: Stopping a MetricsSystem that is not running
16/07/01 14:35:41 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 14:35:41 INFO SparkContext: Successfully stopped SparkContext
16/07/01 14:35:41 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
org.apache.hadoop.service.ServiceStateException: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:204)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:194)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:127)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceStart(TimelineClientImpl.java:378)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
... 54 more
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
^
scala>
Anyone know what I am missing here?
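One thing worth checking for this trace: "No FileSystem for scheme: wasb" usually means the hadoop-azure and azure-storage jars (which provide org.apache.hadoop.fs.azure.NativeAzureFileSystem, the class visible elsewhere in these logs) are not on the Spark driver/executor classpath. A minimal sketch of passing them in explicitly; the jar paths are placeholders, so point them at wherever the jars actually live on the node:
# jar paths below are placeholders, not the real locations on this cluster
spark-shell \
  --jars /path/to/hadoop-azure.jar,/path/to/azure-storage.jar \
  --conf spark.hadoop.fs.wasb.impl=org.apache.hadoop.fs.azure.NativeAzureFileSystem
If the jars are already present, the same fs.wasb.impl mapping can instead be set in the core-site.xml that spark-shell picks up.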
... View more
Labels:
06-23-2016
06:45 PM
Any idea which package installs /usr/lib/python2.7/dist-packages/hdinsight_common?
... View more
06-23-2016
04:56 PM
2 Kudos
Hi, we have an HDInsight cluster running on Azure. I was trying to create a client machine to connect to HDInsight. I followed the instructions from the Hortonworks installation guide to install all the client components, and then copied /etc/hadoop/conf from one of the HDInsight nodes to this new node. But when I try to access the cluster with hadoop fs -ls, I get the following error:
root@sbd-docker:~# hadoop fs -ls /
log4j:ERROR Could not instantiate class [com.microsoft.log4jappender.EtwAppender].
java.lang.ClassNotFoundException: com.microsoft.log4jappender.EtwAppender
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Could not instantiate appender named "ETW".
log4j:ERROR Could not instantiate class [com.microsoft.log4jappender.FilterLogAppender].
java.lang.ClassNotFoundException: com.microsoft.log4jappender.FilterLogAppender
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Could not instantiate appender named "FilterLog".
log4j:ERROR Could not instantiate class [com.microsoft.log4jappender.EtwAppender].
java.lang.ClassNotFoundException: com.microsoft.log4jappender.EtwAppender
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Could not instantiate appender named "ETW".
log4j:ERROR Could not instantiate class [com.microsoft.log4jappender.FilterLogAppender].
java.lang.ClassNotFoundException: com.microsoft.log4jappender.FilterLogAppender
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Could not instantiate appender named "AMFilterLog".
16/06/23 16:57:20 WARN impl.MetricsSystemImpl: Error creating sink 'azurefs2'
org.apache.hadoop.metrics2.impl.MetricsConfigException: Error creating plugin: org.apache.hadoop.metrics2.sink.WasbAzureIaasSink
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:203)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:528)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:500)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:479)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.fs.azure.metrics.AzureFileSystemMetricsSystem.fileSystemStarted(AzureFileSystemMetricsSystem.java:41)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1153)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:355)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.sink.WasbAzureIaasSink
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:196)
... 24 more
ls: org.apache.hadoop.fs.azure.KeyProviderException: java.io.IOException: Cannot run program "/usr/lib/python2.7/dist-packages/hdinsight_common/decrypt.sh": error=2, No such file or directory
It seems some packages are missing. Any idea which packages are missing here?
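From the trace, the copied /etc/hadoop/conf still references HDInsight-only pieces that a plain client install does not have: the com.microsoft.log4jappender classes named in log4j.properties, the WasbAzureIaasSink metrics sink, and the hdinsight_common/decrypt.sh script that the WASB key provider shells out to. A rough sketch of pulling those pieces over from a working cluster node; the host name and locations are assumptions, so locate the real paths on a head node first:
# find the Microsoft appender/metrics jars and the key-decryption scripts on a head node (host and paths are assumptions)
ssh sshuser@<headnode> 'ls /usr/lib/python2.7/dist-packages/hdinsight_common; find /usr/ -name "*log4jappender*" 2>/dev/null'
# copy them to the same locations on the client machine
scp -r sshuser@<headnode>:/usr/lib/python2.7/dist-packages/hdinsight_common /usr/lib/python2.7/dist-packages/
Alternatively, if editing the client's config is acceptable, remove the ETW/FilterLog appenders from log4j.properties and the azurefs2 sink from hadoop-metrics2.properties, and replace the key-provider entries in core-site.xml with a plain fs.azure.account.key.<account>.blob.core.windows.net value.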
... View more
06-23-2016
12:33 PM
1 Kudo
Thanks @Chris Nauroth for the explanation. At present we have NameNode HA, and we are putting data into this cluster from Flume; we are configuring hdfs://mycluster/flume as the destination in the Flume sink. What is the correct way to put data into the default HDFS storage (WASB) from Flume and make it accessible from hadoop fs -ls /? Appreciate any help with this.
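A minimal sketch of what the HDFS sink could look like if the data should land in the WASB default filesystem rather than hdfs://mycluster. The agent, sink, channel, container, and account names are placeholders, and it assumes the Flume agent runs with the cluster's client configuration (core-site.xml plus the hadoop-azure jars) on its classpath:
# agent/sink/channel names and the WASB container/account are placeholders
# a bare path resolves against fs.defaultFS, which is WASB on HDInsight
agent1.sinks.wasbSink.type = hdfs
agent1.sinks.wasbSink.hdfs.path = /flume/events
agent1.sinks.wasbSink.channel = ch1
# or spell the WASB URI out explicitly:
# agent1.sinks.wasbSink.hdfs.path = wasb://<container>@<account>.blob.core.windows.net/flume/events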
... View more
06-22-2016
08:57 PM
1 Kudo
We have an HDInsight cluster set up in Azure.
When I do hadoop fs -ls / it shows me:
drwxr-xr-x - root supergroup 0 2016-06-17 20:56 /HdiNotebooks
drwxr-xr-x - root supergroup 0 2016-06-17 21:00 /HdiSamples
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /ams
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /amshbase
drwxrwxrwx - yarn hadoop 0 2016-06-17 20:48 /app-logs
drwxr-xr-x - yarn hadoop 0 2016-06-17 20:48 /atshistory
drwxr-xr-x - sshuser supergroup 0 2016-06-21 18:38 /data
drwxr-xr-x - root supergroup 0 2016-06-17 20:59 /example
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /hdp
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /hive
drwxr-xr-x - mapred supergroup 0 2016-06-17 20:48 /mapred
drwx------ - sshuser supergroup 0 2016-06-20 14:22 /mapreducestaging
drwxrwxrwx - mapred hadoop 0 2016-06-17 20:48 /mr-history
drwxr-xr-x - sshuser supergroup 0 2016-06-20 19:20 /sqoop
drwxrwxrwx - hdfs supergroup 0 2016-06-17 20:48 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-06-17 20:48 /user
But hadoop fs -ls hdfs://mycluster/ shows the following result:
root@hn0-haspar:~# hadoop fs -ls hdfs://mycluster/
Found 3 items
drwxr-xr-x - root hdfs 0 2016-06-21 18:48 hdfs://mycluster/data
drwx-wx-wx - root hdfs 0 2016-06-17 20:57 hdfs://mycluster/tmp
drwx------ - root hdfs 0 2016-06-22 17:24 hdfs://mycluster/user
I don't know where these different directories are coming from. The cluster has an HA configuration.
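A quick way to confirm the two listings are simply two different filesystems: a bare path resolves against fs.defaultFS (WASB on an HDInsight cluster), while hdfs://mycluster/ addresses the cluster-local HDFS. The commands below are a sketch assuming WASB is indeed the default on this cluster:
hdfs getconf -confKey fs.defaultFS       # shows what a bare "/" points at
hadoop fs -ls wasb:///                   # should match the plain "hadoop fs -ls /" listing
hadoop fs -ls hdfs://mycluster/          # the local HDFS namespace from the second listing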
... View more
06-22-2016
02:45 PM
1 Kudo
Hi, we are using HDInsight on Azure; at present we log into one of the head nodes to submit jobs to the cluster. So I am wondering if it is possible to have a client/gateway node for job submission? We are spinning up HDInsight using a template. Thanks
... View more
04-08-2016
10:50 AM
Hi, I was trying to integrate YARN as a resource manager for Impala. I tried to do it through Cloudera Manager by adding the Impala Llama ApplicationMaster to the Impala service and configuring Impala to use the YARN ResourceManager, by setting mapreduce.framework.name = yarn, yarn.nodemanager.linux-container-executor.resources-handler.class=true, yarn.nodemanager.container-executor.class=true, and Enable Cgroup-based Resource Management = True (for all hosts). But after this configuration YARN jobs started failing, so I am looking for a document that can guide me on how to do this correctly. Thanks
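For reference, those two yarn.nodemanager.* properties normally hold class names rather than booleans when cgroups and the LinuxContainerExecutor are in play; a sketch of the usual values, to be verified against the Llama/CDH documentation for your exact version before applying:
# typical values only; confirm against the docs for your CDH release
yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class = org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler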
... View more
04-07-2016
08:46 AM
Hi, I have added the Impala Llama ApplicationMaster to the Impala service and configured Impala to use the YARN ResourceManager, with mapreduce.framework.name=yarn, yarn.nodemanager.linux-container-executor.resources-handler.class=true, yarn.nodemanager.container-executor.class=true, and Enable Cgroup-based Resource Management = True (for all hosts). But after this config, jobs on YARN are failing with: Can't create directory /yarn/nm/usercache/abc/appcache/application-* Has anyone seen this? Any suggestion on how to correct it?
... View more
03-11-2016
01:14 PM
I am trying to run the sample Hadoop example jobs on the new cluster with CDH 5.6.0. I am running the following command:
/usr/bin/yarn jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 16 100
Number of Maps = 16
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
16/03/11 15:24:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm73
16/03/11 15:24:19 INFO input.FileInputFormat: Total input paths to process : 16
16/03/11 15:24:19 INFO mapreduce.JobSubmitter: number of splits:16
16/03/11 15:24:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1457718893017_0004
16/03/11 15:24:20 INFO impl.YarnClientImpl: Submitted application application_1457718893017_0004
16/03/11 15:24:20 INFO mapreduce.Job: The url to track the job: http://hmn002.dev.abc.com:8088/proxy/application_1457718893017_0004/
16/03/11 15:24:20 INFO mapreduce.Job: Running job: job_1457718893017_0004
16/03/11 15:24:25 INFO mapreduce.Job: Job job_1457718893017_0004 running in uber mode : false
16/03/11 15:24:25 INFO mapreduce.Job: map 0% reduce 0%
16/03/11 15:24:27 INFO mapreduce.Job: Task Id : attempt_1457718893017_0004_m_000004_0, Status : FAILED
Exception from container-launch.
Container id: container_1457718893017_0004_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/03/11 15:24:27 INFO mapreduce.Job: Task Id : attempt_1457718893017_0004_m_000007_0, Status : FAILED
Exception from container-launch.
Container id: container_1457718893017_0004_01_000009
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
It is failing with the above error. When I checked, many people point to the hadoop classpath, but I have verified that the classpath is also correct in Cloudera Manager. This is another log I found from one of the tasks:
Log Type: stderr
Log Upload Time: Fri Mar 11 20:24:44 +0000 2016
Log Length: 108
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Log Type: stdout
Log Upload Time: Fri Mar 11 20:24:44 +0000 2016
Log Length: 90
Error occurred during initialization of VM
Could not reserve enough space for object heap
Anyone know what is wrong here?
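The stdout log ("Could not reserve enough space for object heap") points at the task JVM heap in mapreduce.*.java.opts not fitting inside the memory the container and host can actually provide, rather than a classpath problem. A minimal sketch of re-running the example with explicit, matching container and heap sizes; the numbers are assumptions to tune against your NodeManager memory settings:
# memory numbers below are assumptions; adjust to your NodeManager limits
/usr/bin/yarn jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi \
  -D mapreduce.map.memory.mb=1024 -D mapreduce.map.java.opts=-Xmx800m \
  -D mapreduce.reduce.memory.mb=1024 -D mapreduce.reduce.java.opts=-Xmx800m \
  16 100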
... View more
02-08-2016
01:42 PM
I am trying to import Vertica data into Hadoop/Hive/Impala. I am on Vertica v6.0.1-7 & CDH 5.4, and I have Sqoop 1.99.5-cdh5.4.0 (sqoop2) installed through CM. I tried to create a link using the following command:
sqoop:000> create link --cid 4
Creating link for connector with id 4
Please fill following values to create new link object
Name: vertica
Link configuration
JDBC Driver Class: com.vertica.jdbc.Driver
JDBC Connection String: jdbc:vertica://10.1.1.1:5433/PROD
Username: user
Password: *******
JDBC Connection Properties:
There are currently 0 values in the map:
entry#
After this it gets stuck here; if I hit Enter it starts over again. Any idea how I can create this import using sqoop2? Also, does anyone know if I can do this import in one command line the way sqoop1 does? Thanks
... View more
02-08-2016
07:12 AM
1 Kudo
This got resolved by the following command:
sqoop import -m 1 --driver com.vertica.jdbc.Driver --connect "jdbc:vertica://10.10.10.10:5433/MYDB" --password dbpassword --username dbusername --target-dir "/user/my/hdfs/dir" --verbose --query 'SELECT * FROM ORDER_V2 LIMIT 10 WHERE $CONDITIONS;'
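One caveat for anyone reusing this: Vertica expects the WHERE clause before LIMIT, and Sqoop substitutes its split conditions into $CONDITIONS, so if the command above gives a syntax error, a variant with the clauses reordered (a sketch, reusing the same placeholder connection details) would be:
# same placeholder host, database, credentials, and target dir as the command above
sqoop import -m 1 --driver com.vertica.jdbc.Driver \
  --connect "jdbc:vertica://10.10.10.10:5433/MYDB" \
  --username dbusername --password dbpassword \
  --target-dir /user/my/hdfs/dir --verbose \
  --query 'SELECT * FROM ORDER_V2 WHERE $CONDITIONS LIMIT 10'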
... View more