Member since: 11-22-2016
Posts: 50
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3885 | 01-17-2017 02:54 PM |
05-24-2016
11:23 AM
I've built an HDP cluster on version 2.2.x. Now I want to install Apache NiFi on this cluster, but the HDF installation guide shows that my HDP version is not compatible with HDF 1.2. So I want to know: which NiFi version or HDF version supports HDP 2.2.x? Thanks in advance.
05-24-2016
06:23 AM
Mr. Dyer, your article inspired me! This is such an inspirational post. I'm a newbie to the Hadoop world, and you made my day. May God bless you with more technologies.
05-23-2016
01:09 PM
So, can apache-flume-1.4 still bring in the data, or should I upgrade my Flume to 1.6 or higher?
05-23-2016
12:59 PM
Hello team,
I'm a programming enthusiast. I have downloaded the Twitter stream before, but now I'm not able to do so. I'm using apache-flume-1.4 on Hadoop 2.3.0 and CDH 5.0.0.
No matter how many times I've tried, it throws the same error. The command I run is:
hadoop@ubuntu:~/hadoop/apache-flume-1.4.0-cdh5.0.0-bin$ ./bin/flume-ng agent -n TwitterAgent -c conf -f /home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/conf/local.conf Dflume.root.logger=DEBUG,console -n TwitterAgent
Info: Sourcing environment configuration script /home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/bin/hadoop) for HDFS access
Info: Excluding /home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/home/hadoop/hadoop/hbase-0.96.1.1-cdh5.0.0/bin/hbase) for HBASE access
Info: Excluding /home/hadoop/hadoop/hbase-0.96.1.1-cdh5.0.0/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop/hbase-0.96.1.1-cdh5.0.0/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
+ exec /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Xms100m -Xmx200m -Dcom.sun.management.jmxremote -cp '/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/conf:/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/lib/*:/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/lib/flume-sources-1.0-SNAPSHOT.jar:/home/hadoop/hadoop/hadoop-2.3.0-cdh5.0.0/etc/hadoop:/home/ha.....
And the .conf file is as follows:
TwitterAgent.sources= Twitter
TwitterAgent.channels= MemChannel
TwitterAgent.sinks=HDFS
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels=MemChannel
TwitterAgent.sources.Twitter.consumerKey=Pw63cpjptT59uT6w
TwitterAgent.sources.Twitter.consumerSecret= n8awrhKf7S576DcILPk5Ddfp1LQUU
TwitterAgent.sources.Twitter.accessToken=163543326-s0Rqm5y4UC2WV7HPOuiOE9fPZZ56eWO95P
TwitterAgent.sources.Twitter.accessTokenSecret= CLwyJJ1jY4atf7iaiaR96Z1PmVvKF0iOXsP8E
TwitterAgent.sources.Twitter.keywords= hadoop,election,sports, cricket,Big data,Trump
TwitterAgent.sinks.HDFS.channel=MemChannel
TwitterAgent.sinks.HDFS.type=hdfs
TwitterAgent.sinks.HDFS.hdfs.path=hdfs://localhost:9000/tweety
TwitterAgent.sinks.HDFS.hdfs.fileType=DataStream
TwitterAgent.sinks.HDFS.hdfs.writeformat=Text
TwitterAgent.sinks.HDFS.hdfs.batchSize=1000
TwitterAgent.sinks.HDFS.hdfs.rollSize=0
TwitterAgent.sinks.HDFS.hdfs.rollCount=10000
TwitterAgent.sinks.HDFS.hdfs.rollInterval=600
TwitterAgent.channels.MemChannel.type=memory
TwitterAgent.channels.MemChannel.capacity=10000
TwitterAgent.channels.MemChannel.transactionCapacity=100
And the flume-env.sh file is as follows:
# Environment variables can be set here.
JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"
# Note that the Flume conf directory is always included in the classpath.
FLUME_CLASSPATH="/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/lib/flume-sources-1.0-SNAPSHOT.jar"
And the .bashrc file:
export FLUME_HOME="/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin"
export PATH="$FLUME_HOME/bin:$PATH"
export FLUME_CLASSPATH="$CLASSPATH:/home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/lib/flume-sources-1.0-SNAPSHOT.jar "
Please tell me which part I'm getting wrong. Any valuable suggestion is much appreciated. Thanks in advance.
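For reference, the flume.root.logger option is normally passed as a JVM system property with a leading "-D". A minimal sketch of the same launch command with that flag spelled out (same paths and agent name as above; only the -D differs, so treat it as a guess at the intended command rather than a confirmed fix):

# Same agent launch, with the logger level passed as -Dflume.root.logger
./bin/flume-ng agent -n TwitterAgent -c conf \
  -f /home/hadoop/hadoop/apache-flume-1.4.0-cdh5.0.0-bin/conf/local.conf \
  -Dflume.root.logger=DEBUG,console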
Labels:
- Apache Flume
- Apache Hadoop
05-03-2016
06:58 AM
And what can you say about the Oozie server? Why is it shutting down on its own?
04-19-2016
12:08 PM
Hi, everyone. I have a problem: while viewing the cluster summary in the Ambari server, the Ambari Metrics service constantly throws this warning and aborts midway. Below is the log generated by the service while aborting. stderr:
Command aborted. Aborted by user
stdout:
2016-04-19 12:28:16,359 - Group['hadoop'] {}
2016-04-19 12:28:16,360 - Group['users'] {}
2016-04-19 12:28:16,361 - Group['knox'] {}
2016-04-19 12:28:16,361 - Group['spark'] {}
2016-04-19 12:28:16,361 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-19 12:28:16,362 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,363 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-19 12:28:16,363 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,364 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,365 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,366 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,367 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,367 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,368 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,369 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-19 12:28:16,369 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,370 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,371 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-19 12:28:16,372 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,372 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,373 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,374 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-19 12:28:16,375 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-19 12:28:16,376 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-04-19 12:28:16,384 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-04-19 12:28:16,384 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-04-19 12:28:16,391 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-19 12:28:16,393 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-04-19 12:28:16,401 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-04-19 12:28:16,402 - Group['hdfs'] {'ignore_failures': False}
2016-04-19 12:28:16,402 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-04-19 12:28:16,404 - Directory['/etc/hadoop'] {'mode': 0755}
2016-04-19 12:28:16,406 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-04-19 12:28:16,420 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-04-19 12:28:16,461 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,464 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,465 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,475 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-04-19 12:28:16,482 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-04-19 12:28:16,664 - Directory['/etc/ams-hbase/conf'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True}
2016-04-19 12:28:16,666 - Directory['/var/lib/ambari-metrics-collector/hbase-tmp'] {'owner': 'ams', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,667 - Directory['/var/lib/ambari-metrics-collector/hbase-tmp/local/jars'] {'owner': 'ams', 'cd_access': 'a', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
2016-04-19 12:28:16,668 - XmlConfig['hbase-site.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ams-hbase/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-04-19 12:28:16,682 - Generating config: /etc/ams-hbase/conf/hbase-site.xml
2016-04-19 12:28:16,682 - File['/etc/ams-hbase/conf/hbase-site.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,711 - XmlConfig['hbase-policy.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ams-hbase/conf', 'configuration_attributes': {}, 'configurations': {'security.admin.protocol.acl': '*', 'security.masterregion.protocol.acl': '*', 'security.client.protocol.acl': '*'}}
2016-04-19 12:28:16,721 - Generating config: /etc/ams-hbase/conf/hbase-policy.xml
2016-04-19 12:28:16,722 - File['/etc/ams-hbase/conf/hbase-policy.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,735 - File['/etc/ams-hbase/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams'}
2016-04-19 12:28:16,741 - File['/etc/ams-hbase/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties.j2'), 'owner': 'ams', 'group': 'hadoop'}
2016-04-19 12:28:16,742 - TemplateConfig['/etc/ams-hbase/conf/regionservers'] {'owner': 'ams', 'template_tag': None}
2016-04-19 12:28:16,744 - File['/etc/ams-hbase/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'ams', 'group': None, 'mode': None}
2016-04-19 12:28:16,745 - Directory['/var/run/ambari-metrics-collector/'] {'owner': 'ams', 'recursive': True}
2016-04-19 12:28:16,745 - Directory['/var/log/ambari-metrics-collector'] {'owner': 'ams', 'recursive': True}
2016-04-19 12:28:16,745 - Directory['/var/lib/ambari-metrics-collector/hbase'] {'owner': 'ams', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,746 - File['/var/run/ambari-metrics-collector//distributed_mode'] {'owner': 'ams', 'action': ['delete']}
2016-04-19 12:28:16,746 - File['/etc/ams-hbase/conf/log4j.properties'] {'content': ..., 'owner': 'ams', 'group': 'hadoop', 'mode': 0644}
2016-04-19 12:28:16,747 - Directory['/etc/ams-hbase/conf'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True}
2016-04-19 12:28:16,747 - Directory['/var/lib/ambari-metrics-collector/hbase-tmp'] {'owner': 'ams', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,748 - Directory['/var/lib/ambari-metrics-collector/hbase-tmp/local/jars'] {'owner': 'ams', 'cd_access': 'a', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
2016-04-19 12:28:16,748 - XmlConfig['hbase-site.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ams-hbase/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-04-19 12:28:16,759 - Generating config: /etc/ams-hbase/conf/hbase-site.xml
2016-04-19 12:28:16,759 - File['/etc/ams-hbase/conf/hbase-site.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,792 - XmlConfig['hbase-policy.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ams-hbase/conf', 'configuration_attributes': {}, 'configurations': {'security.admin.protocol.acl': '*', 'security.masterregion.protocol.acl': '*', 'security.client.protocol.acl': '*'}}
2016-04-19 12:28:16,803 - Generating config: /etc/ams-hbase/conf/hbase-policy.xml
2016-04-19 12:28:16,804 - File['/etc/ams-hbase/conf/hbase-policy.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,818 - File['/etc/ams-hbase/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams'}
2016-04-19 12:28:16,822 - File['/etc/ams-hbase/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties.j2'), 'owner': 'ams', 'group': 'hadoop'}
2016-04-19 12:28:16,822 - TemplateConfig['/etc/ams-hbase/conf/regionservers'] {'owner': 'ams', 'template_tag': None}
2016-04-19 12:28:16,824 - File['/etc/ams-hbase/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'ams', 'group': None, 'mode': None}
2016-04-19 12:28:16,825 - Directory['/var/run/ambari-metrics-collector/'] {'owner': 'ams', 'recursive': True}
2016-04-19 12:28:16,825 - Directory['/var/log/ambari-metrics-collector'] {'owner': 'ams', 'recursive': True}
2016-04-19 12:28:16,826 - File['/etc/ams-hbase/conf/log4j.properties'] {'content': ..., 'owner': 'ams', 'group': 'hadoop', 'mode': 0644}
2016-04-19 12:28:16,827 - Directory['/etc/ambari-metrics-collector/conf'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True}
2016-04-19 12:28:16,827 - Directory['/var/lib/ambari-metrics-collector/checkpoint'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,827 - XmlConfig['ams-site.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ambari-metrics-collector/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-04-19 12:28:16,837 - Generating config: /etc/ambari-metrics-collector/conf/ams-site.xml
2016-04-19 12:28:16,838 - File['/etc/ambari-metrics-collector/conf/ams-site.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,871 - XmlConfig['hbase-site.xml'] {'owner': 'ams', 'group': 'hadoop', 'conf_dir': '/etc/ambari-metrics-collector/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-04-19 12:28:16,881 - Generating config: /etc/ambari-metrics-collector/conf/hbase-site.xml
2016-04-19 12:28:16,881 - File['/etc/ambari-metrics-collector/conf/hbase-site.xml'] {'owner': 'ams', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-19 12:28:16,910 - File['/etc/ambari-metrics-collector/conf/log4j.properties'] {'content': ..., 'owner': 'ams', 'group': 'hadoop', 'mode': 0644}
2016-04-19 12:28:16,914 - File['/etc/ambari-metrics-collector/conf/ams-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams'}
2016-04-19 12:28:16,915 - Directory['/var/log/ambari-metrics-collector'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,915 - Directory['/var/run/ambari-metrics-collector'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-04-19 12:28:16,916 - File['/usr/lib/ams-hbase/bin/hadoop'] {'owner': 'ams', 'mode': 0755}
2016-04-19 12:28:16,916 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-04-19 12:28:16,918 - File['/etc/security/limits.d/ams.conf'] {'content': Template('ams.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-04-19 12:28:16,919 - Execute['ambari-sudo.sh rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*.tmp'] {}
2016-04-19 12:28:16,929 - Execute['/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf start'] {'user': 'ams'}
Command aborted. Aborted by user
2. Oozie is shutting down on its own; I have to start it manually every two minutes. Any useful suggestion would be helpful. Thanks in advance.
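Since the output above stops right after the collector start command, the collector's own log usually shows the underlying failure. A minimal sketch for checking it on the AMS host (the log directory appears in the output above; the exact log file name and the Oozie log path are assumptions and may differ on your install):

# Look at the Ambari Metrics Collector log for the real error
tail -n 200 /var/log/ambari-metrics-collector/ambari-metrics-collector.log
# Confirm whether the collector process is still running
ps -ef | grep -i [a]mbari-metrics-collector
# For the Oozie restarts, check the Oozie server log as well (default path assumed)
tail -n 200 /var/log/oozie/oozie.log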
Labels:
- Apache Ambari
04-18-2016
12:34 PM
What about the second node?
04-17-2016
05:45 AM
I'm trying to set up a 4-node cluster using HDP 2.3.x and Ambari 2.1.x. At the final stage of installation, i.e., the Install, Start and Test phase, I received the following errors and warnings. 1. Failure on the 4th node: stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 34, in install
self.install_packages(env, params.exclude_packages)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 392, in install_packages
Package(name)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 45, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy-devel' returned 1. Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
Requires: snappy(x86-64) = 1.0.5-1.el6
Installed: snappy-1.1.0-1.el6.x86_64 (@anaconda-RedHatEnterpriseLinux-201507020259.x86_64/6.7)
snappy(x86-64) = 1.1.0-1.el6
Available: snappy-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
snappy(x86-64) = 1.0.5-1.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
stdout:
2016-04-17 06:05:11,465 - Group['hadoop'] {}
2016-04-17 06:05:11,466 - Group['users'] {}
2016-04-17 06:05:11,466 - Group['knox'] {}
2016-04-17 06:05:11,466 - Group['spark'] {}
2016-04-17 06:05:11,467 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,468 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,468 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,469 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,470 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,471 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,472 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,473 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,473 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,474 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,475 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,475 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,476 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,477 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,478 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,478 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,479 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,480 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,481 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-17 06:05:11,483 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-04-17 06:05:11,492 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-04-17 06:05:11,492 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-04-17 06:05:11,499 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-17 06:05:11,501 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-04-17 06:05:11,509 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-04-17 06:05:11,510 - Group['hdfs'] {'ignore_failures': False}
2016-04-17 06:05:11,510 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-04-17 06:05:11,511 - Directory['/etc/hadoop'] {'mode': 0755}
2016-04-17 06:05:11,533 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-04-17 06:05:11,534 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-04-17 06:05:11,552 - Repository['HDP-2.2'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.8.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-04-17 06:05:11,561 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2016-04-17 06:05:11,562 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-04-17 06:05:11,565 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2016-04-17 06:05:11,565 - Package['unzip'] {}
2016-04-17 06:05:11,737 - Skipping installation of existing package unzip
2016-04-17 06:05:11,737 - Package['curl'] {}
2016-04-17 06:05:11,756 - Skipping installation of existing package curl
2016-04-17 06:05:11,756 - Package['hdp-select'] {}
2016-04-17 06:05:11,773 - Skipping installation of existing package hdp-select
2016-04-17 06:05:11,954 - Package['hadoop_2_2_*'] {}
2016-04-17 06:05:12,119 - Skipping installation of existing package hadoop_2_2_*
2016-04-17 06:05:12,120 - Package['snappy'] {}
2016-04-17 06:05:12,138 - Skipping installation of existing package snappy
2016-04-17 06:05:12,139 - Package['snappy-devel'] {}
2016-04-17 06:05:12,157 - Installing package snappy-devel ('/usr/bin/yum -d 0 -e 0 -y install snappy-devel')
It gave a warning at the top: "! Data Node Install". 2. Warnings on the 2nd node: stderr:
None
stdout:
2016-04-17 06:05:11,468 - Group['hadoop'] {}
2016-04-17 06:05:11,469 - Group['users'] {}
2016-04-17 06:05:11,469 - Group['knox'] {}
2016-04-17 06:05:11,469 - Group['spark'] {}
2016-04-17 06:05:11,470 - User['oozie'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,470 - User['hive'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,471 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,472 - User['flume'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,472 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,473 - User['knox'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,474 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,475 - User['spark'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,475 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,476 - User['hbase'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,477 - User['tez'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,477 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,478 - User['kafka'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,479 - User['falcon'] {'gid': 'hadoop', 'groups': ['users']}
2016-04-17 06:05:11,479 - User['sqoop'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,480 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,481 - User['hcat'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,481 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-04-17 06:05:11,482 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-17 06:05:11,484 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-04-17 06:05:11,489 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-04-17 06:05:11,490 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-04-17 06:05:11,491 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-04-17 06:05:11,493 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-04-17 06:05:11,499 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-04-17 06:05:11,500 - Group['hdfs'] {'ignore_failures': False}
2016-04-17 06:05:11,500 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2016-04-17 06:05:11,501 - Directory['/etc/hadoop'] {'mode': 0755}
2016-04-17 06:05:11,525 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-04-17 06:05:11,525 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-04-17 06:05:11,541 - Repository['HDP-2.2'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.8.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-04-17 06:05:11,552 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2016-04-17 06:05:11,553 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-04-17 06:05:11,557 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2016-04-17 06:05:11,557 - Package['unzip'] {}
2016-04-17 06:05:11,707 - Skipping installation of existing package unzip
2016-04-17 06:05:11,708 - Package['curl'] {}
2016-04-17 06:05:11,725 - Skipping installation of existing package curl
2016-04-17 06:05:11,725 - Package['hdp-select'] {}
2016-04-17 06:05:11,743 - Skipping installation of existing package hdp-select
2016-04-17 06:05:11,931 - Package['falcon_2_2_*'] {}
2016-04-17 06:05:12,076 - Installing package falcon_2_2_* ('/usr/bin/yum -d 0 -e 0 -y install 'falcon_2_2_*'')
2016-04-17 06:08:11,638 - Execute['ambari-sudo.sh -H -E touch /var/lib/ambari-agent/data/hdp-select-set-all.performed ; ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.2 | tail -1`'] {'not_if': 'test -f /var/lib/ambari-agent/data/hdp-select-set-all.performed', 'only_if': 'ls -d /usr/hdp/2.2*'}
2016-04-17 06:08:11,641 - Skipping Execute['ambari-sudo.sh -H -E touch /var/lib/ambari-agent/data/hdp-select-set-all.performed ; ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.2 | tail -1`'] due to not_if
2016-04-17 06:08:11,641 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'owner': 'hdfs', 'only_if': 'ls /usr/hdp/current/hadoop-client/conf', 'configurations': ...}
2016-04-17 06:08:11,668 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-04-17 06:08:11,668 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-04-17 06:08:11,687 - Can only link configs for HDP-2.3 and higher.
The title of the warning is "Falcon Server Install". In the earlier "Configurations" step, I faced a warning that I didn't attend to, since I was told the DB can be changed after the cluster is deployed. Is this one of the reasons behind the failures? Please share your thoughts on resolving this issue, and post any useful links if available. Thanks in advance, Karthik.
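Regarding failure 1, the yum error above says snappy-devel-1.0.5 from HDP-UTILS requires snappy 1.0.5, while snappy 1.1.0 is already installed from the OS image. A hedged sketch of one common way to clear that kind of version conflict on the failing node (run as root, and verify the package versions on your system first; this is a suggestion, not a confirmed fix):

# Remove the newer OS-provided snappy, then install the matching 1.0.5 packages from HDP-UTILS
yum remove -y snappy
yum install -y snappy-1.0.5-1.el6 snappy-devel-1.0.5-1.el6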
04-16-2016
06:29 PM
Thank you, minovic. You are very helpful.
04-16-2016
01:53 PM
Thanks for your reassuring reply. By your answer, do you mean I can change the DB after deploying my cluster?