Member since: 06-03-2016
Posts: 66
Kudos Received: 21
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3297 | 12-03-2016 08:51 AM
| 1767 | 09-15-2016 06:39 AM
| 1972 | 09-12-2016 01:20 PM
| 2278 | 09-11-2016 07:04 AM
| 1889 | 09-09-2016 12:19 PM
03-13-2019
12:47 PM
I am getting the same issue. I did try the above command, but I am still getting the error. I was trying to install Ambari 2.7.3.0 and HDP 3.1.0. Logs:
2019-03-13 18:08:02,234 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,248 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,350 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,355 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,355 - Group['hdfs'] {}
2019-03-13 18:08:02,356 - Group['hadoop'] {}
2019-03-13 18:08:02,356 - Group['users'] {}
2019-03-13 18:08:02,357 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,357 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,358 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-03-13 18:08:02,358 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-03-13 18:08:02,359 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-13 18:08:02,360 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-13 18:08:02,363 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-13 18:08:02,363 - Group['hdfs'] {}
2019-03-13 18:08:02,363 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-03-13 18:08:02,364 - FS Type: HDFS
2019-03-13 18:08:02,364 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-13 18:08:02,375 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,375 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-13 18:08:02,388 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-13 18:08:02,392 - Skipping Execute[('setenforce', '0')] due to not_if
2019-03-13 18:08:02,392 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Changing owner for /var/run/hadoop from 1014 to root
2019-03-13 18:08:02,394 - Changing group for /var/run/hadoop from 1001 to root
2019-03-13 18:08:02,394 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-03-13 18:08:02,394 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-13 18:08:02,397 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,398 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,401 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-13 18:08:02,410 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,412 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,415 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-13 18:08:02,417 - Skipping unlimited key JCE policy check and setup since it is not required
2019-03-13 18:08:02,612 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,613 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,628 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,640 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-13 18:08:02,643 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-13 18:08:02,644 - XmlConfig['hadoop-policy.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,650 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml
2019-03-13 18:08:02,650 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,656 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,662 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml
2019-03-13 18:08:02,662 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,666 - Directory['/usr/hdp/3.1.0.0-78/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-13 18:08:02,666 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,672 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml
2019-03-13 18:08:02,672 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,676 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,682 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml
2019-03-13 18:08:02,682 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,686 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,692 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml
2019-03-13 18:08:02,692 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,722 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'xml_include_file': None, 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,728 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml
2019-03-13 18:08:02,728 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,746 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,749 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,749 - Changing group for /hadoop/hdfs/namenode from 1002 to hadoop
2019-03-13 18:08:02,750 - Directory['/data/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json
2019-03-13 18:08:02,757 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json'] {'content': Template('input.config-hdfs.json.j2'), 'mode': 0644}
2019-03-13 18:08:02,757 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-13 18:08:02,760 - Called service start with upgrade_type: None
2019-03-13 18:08:02,760 - Ranger Hdfs plugin is not enabled
2019-03-13 18:08:02,761 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-13 18:08:02,761 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - /data/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Directory['/data/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Options for start command are:
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-13 18:08:02,762 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-13 18:08:02,762 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:02,771 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-13 18:08:02,771 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.0.0-78/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:04,854 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201903131801 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5591480k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:01:42.815+0530: 0.636: [GC (Allocation Failure) 2019-03-13T18:01:42.815+0530: 0.636: [ParNew: 104960K->8510K(118016K), 0.0072988 secs] 104960K->8510K(1035520K), 0.0073751 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2019-03-13T18:01:43.416+0530: 1.237: [GC (Allocation Failure) 2019-03-13T18:01:43.416+0530: 1.237: [ParNew: 113470K->10636K(118016K), 0.0296314 secs] 113470K->13333K(1035520K), 0.0296978 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
2019-03-13T18:01:43.448+0530: 1.269: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 16041K(1035520K), 0.0012481 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:01:43.449+0530: 1.270: [CMS-concurrent-mark-start]
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-preclean-start]
2019-03-13T18:01:43.453+0530: 1.273: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:01:40.037+0530: 2.235: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2019-03-13T18:01:45.154+0530: 7.352: [CMS-concurrent-abortable-preclean: 1.299/5.117 secs] [Times: user=2.38 sys=0.02, real=5.11 secs]
2019-03-13T18:01:45.155+0530: 7.353: [GC (CMS Final Remark) [YG occupancy: 48001 K (184320 K)]2019-03-13T18:01:45.155+0530: 7.353: [Rescan (parallel) , 0.0028621 secs]2019-03-13T18:01:45.158+0530: 7.356: [weak refs processing, 0.0000235 secs]2019-03-13T18:01:45.158+0530: 7.356: [class unloading, 0.0026645 secs]2019-03-13T18:01:45.160+0530: 7.358: [scrub symbol table, 0.0028774 secs]2019-03-13T18:01:45.163+0530: 7.361: [scrub string table, 0.0005686 secs][1 CMS-remark: 3414K(843776K)] 51415K(1028096K), 0.0094373 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2019-03-13T18:01:45.164+0530: 7.362: [CMS-concurrent-sweep-start]
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-reset-start]
2019-03-13T18:01:45.167+0530: 7.365: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
Heap
par new generation total 118016K, used 85405K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c49046a8, 0x00000000c6680000)
from space 13056K, 81% used [0x00000000c6680000, 0x00000000c70e3040, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2687K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25931K, capacity 26396K, committed 26660K, reserved 1073152K
class space used 3050K, capacity 3224K, committed 3268K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.log <==
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Storage not yet initialized
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:3136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
... 35 more
2019-03-13 18:07:57,425 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:58,426 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:59,426 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(235)) - Problem connecting to server: D-9033.kpit.com/10.10.167.157:8020
==> /var/log/hadoop/hdfs/gc.log-201903131804 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5549728k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:04:21.675+0530: 0.586: [GC (Allocation Failure) 2019-03-13T18:04:21.675+0530: 0.587: [ParNew: 104960K->8519K(118016K), 0.0063297 secs] 104960K->8519K(1035520K), 0.0064241 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2019-03-13T18:04:22.244+0530: 1.156: [GC (Allocation Failure) 2019-03-13T18:04:22.244+0530: 1.156: [ParNew: 113479K->10589K(118016K), 0.0206730 secs] 113479K->13287K(1035520K), 0.0207371 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2019-03-13T18:04:22.267+0530: 1.178: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 15328K(1035520K), 0.0021081 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.269+0530: 1.181: [CMS-concurrent-mark-start]
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-mark: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-preclean-start]
2019-03-13T18:04:22.273+0530: 1.185: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.273+0530: 1.185: [GC (CMS Final Remark) [YG occupancy: 12630 K (118016 K)]2019-03-13T18:04:22.273+0530: 1.185: [Rescan (parallel) , 0.0101631 secs]2019-03-13T18:04:22.283+0530: 1.195: [weak refs processing, 0.0000278 secs]2019-03-13T18:04:22.283+0530: 1.195: [class unloading, 0.0021329 secs]2019-03-13T18:04:22.285+0530: 1.197: [scrub symbol table, 0.0018142 secs]2019-03-13T18:04:22.287+0530: 1.199: [scrub string table, 0.0004759 secs][1 CMS-remark: 2697K(917504K)] 15328K(1035520K), 0.0150536 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2019-03-13T18:04:22.288+0530: 1.200: [CMS-concurrent-sweep-start]
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01 sys=0.01, real=0.00 secs]
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-reset-start]
2019-03-13T18:04:22.291+0530: 1.203: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
par new generation total 118016K, used 85513K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c492acf0, 0x00000000c6680000)
from space 13056K, 81% used [0x00000000c6680000, 0x00000000c70d77e0, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25959K, capacity 26396K, committed 26820K, reserved 1073152K
class space used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.2 <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903131808 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5498248k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:08:03.428+0530: 0.585: [GC (Allocation Failure) 2019-03-13T18:08:03.428+0530: 0.585: [ParNew: 104960K->8531K(118016K), 0.0072916 secs] 104960K->8531K(1035520K), 0.0073681 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.000+0530: 1.157: [GC (Allocation Failure) 2019-03-13T18:08:04.000+0530: 1.157: [ParNew: 113491K->10252K(118016K), 0.0362179 secs] 113491K->12950K(1035520K), 0.0362798 secs] [Times: user=0.12 sys=0.00, real=0.04 secs]
2019-03-13T18:08:04.038+0530: 1.195: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 14990K(1035520K), 0.0014137 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.039+0530: 1.196: [CMS-concurrent-mark-start]
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-preclean-start]
2019-03-13T18:08:04.043+0530: 1.200: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.043+0530: 1.200: [GC (CMS Final Remark) [YG occupancy: 12293 K (118016 K)]2019-03-13T18:08:04.043+0530: 1.200: [Rescan (parallel) , 0.0020385 secs]2019-03-13T18:08:04.045+0530: 1.202: [weak refs processing, 0.0001378 secs]2019-03-13T18:08:04.045+0530: 1.202: [class unloading, 0.0021708 secs]2019-03-13T18:08:04.047+0530: 1.204: [scrub symbol table, 0.0018312 secs]2019-03-13T18:08:04.049+0530: 1.206: [scrub string table, 0.0004769 secs][1 CMS-remark: 2697K(917504K)] 14990K(1035520K), 0.0070740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2019-03-13T18:08:04.050+0530: 1.207: [CMS-concurrent-sweep-start]
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-reset-start]
2019-03-13T18:08:04.054+0530: 1.211: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
Heap
par new generation total 118016K, used 85171K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c4929988, 0x00000000c6680000)
from space 13056K, 78% used [0x00000000c6680000, 0x00000000c70832b0, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25960K, capacity 26396K, committed 26820K, reserved 1073152K
class space used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.1 <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.log <==
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,298 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@22295ec4{/,null,UNAVAILABLE}{/hdfs}
2019-03-13 18:08:04,300 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@f316aeb{HTTP/1.1,[http/1.1]}{D-9033.kpit.com:50070}
2019-03-13 18:08:04,300 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@27216cd{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-03-13 18:08:04,300 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@3d9c13b5{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-03-13 18:08:04,301 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-03-13 18:08:04,302 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-13 18:08:04,302 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-03-13 18:08:04,303 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-03-13 18:08:04,303 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,304 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-03-13 18:08:04,304 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at D-9033.kpit.com/10.10.167.157
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.out <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Please help
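For reference, the root cause visible in the log above is the java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied) raised while the NameNode runs as the hdfs user. Below is a minimal diagnostic sketch, not a verified fix: it uses only standard coreutils and assumes the NameNode metadata directories are the two paths Ambari reports above (/hadoop/hdfs/namenode and /data/hadoop/hdfs/namenode) and that hdfs:hadoop is the intended owner, as in the Directory resources in the same log.
# Check who owns the NameNode metadata directories and their VERSION files
ls -ld /hadoop/hdfs/namenode /data/hadoop/hdfs/namenode
ls -l /hadoop/hdfs/namenode/current/VERSION /data/hadoop/hdfs/namenode/current/VERSION
# If anything under these directories is not owned by hdfs:hadoop,
# restore the ownership so the NameNode (running as hdfs) can read VERSION
chown -R hdfs:hadoop /hadoop/hdfs/namenode /data/hadoop/hdfs/namenode
chmod 755 /hadoop/hdfs/namenode /data/hadoop/hdfs/namenode
After that, retry the NameNode start from Ambari; if the error persists, the permissions of the parent directories (/data/hadoop/hdfs and /data/hadoop) are worth checking the same way.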
02-05-2018
07:30 AM
Hello All, I have installed Ambari and it is running successfully on a VM, but when I try to open ipaddress:8080 I cannot access it. When I run the command netstat -nl | head, the output is:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 :::5901 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
tcp6 0 0 :::8081 :::* LISTEN
It is listening on the tcp6 protocol. How can I make it listen on tcp (IPv4) so that I can access it across the network? Any suggestions? Thanks, Mohan V
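A note on the tcp6 output: on most Linux systems a socket listed as tcp6 with a :::8080 local address is dual-stack and still accepts plain IPv4 connections, so the tcp6 entry by itself is usually not what blocks access from other machines. Below is a small checking sketch, not a confirmed fix: it assumes a CentOS 7 style host with firewalld, and the java.net.preferIPv4Stack line is only an optional, assumption-labelled tweak, not a documented Ambari requirement.
# Confirm the server is listening and reachable locally
ss -ltn | grep 8080
curl -I http://localhost:8080
# Check whether the firewall is blocking port 8080 from other hosts (firewalld assumed)
firewall-cmd --state
firewall-cmd --list-ports
firewall-cmd --permanent --add-port=8080/tcp && firewall-cmd --reload
# Optional: make the Ambari JVM prefer IPv4 sockets
# (assumption: AMBARI_JVM_ARGS is defined in /var/lib/ambari-server/ambari-env.sh on your version)
# export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djava.net.preferIPv4Stack=true"
If curl works locally but other hosts still cannot connect, the firewall (or a network/NAT setting on the VM) is the usual culprit rather than the tcp6 listener.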
Labels: Apache Ambari
02-17-2017
12:08 PM
Hello All, I am trying to configure Hive on my local single-node cluster. It installed successfully and I started it, but after one or two minutes it goes down without any error logs. I tried to tail the logs but could not find them anywhere on my local system. Please suggest a fix. Thank you, Mohan.V
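Since the question is mainly about where the Hive logs went, here is a minimal sketch for locating them. It assumes a default hive-log4j configuration, where processes started from the shell write to ${java.io.tmpdir}/${user.name}/hive.log, and an Ambari-managed layout where service logs usually sit under /var/log/hive; paths can differ per install, so treat these as places to look rather than guaranteed locations.
# Default log4j location for Hive processes started from the shell
ls -l /tmp/$USER/hive.log
tail -n 100 /tmp/$USER/hive.log
# Typical Ambari-managed location for HiveServer2 / metastore logs
ls -l /var/log/hive/
# Run the metastore in the foreground so any startup failure prints to the console
hive --service metastore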
Labels: Apache Hadoop, Apache Hive
12-16-2016
07:01 AM
I am using NiFi to collect live tweets. The NiFi version is 0.7.1 and the Kafka version is 0.9.0.2.3. Kafka is running properly, and I have given NiFi the correct Kafka endpoints. When I start the GetTwitter processor it collects a bunch of tweets and they are queued, but in PutKafka the tweets do not go through once they are queued. After 60 or 80 tweets it stops processing the queued tweets. I have tried this on all six single-node clusters available to me and also on a multi-node cluster (7 nodes), and I get the same issue on every cluster. Please help me. Mohan.V
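One thing worth ruling out when PutKafka stalls after the first batch is whether the broker itself keeps accepting and serving messages. Below is a small sketch using the stock Kafka console tools, offered as a diagnostic only: it assumes an HDP-style layout under /usr/hdp/current/kafka-broker and the default HDP broker port 6667, and tweets_topic and the <broker-host>/<zk-host> names are placeholders.
cd /usr/hdp/current/kafka-broker
# Produce a few test messages directly to the broker NiFi is configured against
bin/kafka-console-producer.sh --broker-list <broker-host>:6667 --topic tweets_topic
# In another terminal, confirm they arrive (Kafka 0.9 console consumer via ZooKeeper)
bin/kafka-console-consumer.sh --zookeeper <zk-host>:2181 --topic tweets_topic --from-beginning
If the console tools also stall after a while, the problem is on the Kafka side (broker logs, disk); if they keep flowing, the PutKafka processor configuration and the connection back pressure settings in NiFi are the next things to look at.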
Labels: Apache NiFi
12-03-2016
08:51 AM
Thanks for the suggestion, jss. But it did not solve the issue completely. I moved those files into a temp directory and tried to start the server again, but it then gave another error:
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
When I checked the logs, I found that the current version of the database is not compatible with the server. I then tried these steps:
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.0/ambari.repo
yum install ambari-server -y
ambari-server setup -y
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.1/ambari.repo
yum upgrade ambari-server -y
ambari-server upgrade
ambari-server start
After running these commands the Ambari server did start, but then something surprising happened. I had removed Ambari completely and was trying to reinstall it, yet when I opened the Ambari UI it was again pointing to the same host that I had removed previously, showing it with heartbeat lost. I then realized the Ambari agent was not installed, so I installed and started it:
yum -y install ambari-agent
ambari-agent start
Then, when I tried to start the services, it did not work. I checked on the command line whether those services still existed, for example by entering zookeeper, but the command was not found because the service is not installed on my host. So I started removing the services that were sitting on the host in a dead state, using this command:
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://localhost:8080/api/v1/clusters/hostname/services/servicename
But it did not work; I got this error message:
"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
Then I edited the Ambari server properties file and added this line:
vi /etc/ambari-server/conf/ambari.properties
api.csrfPrevention.enabled=false
ambari-server restart
When I retried after that, it worked. But when I tried to remove Hive it did not work, because MySQL is running on my machine. This command did work:
curl -u admin:admin -X DELETE -H 'X-Requested-By:admin' http://localhost:8080/api/v1/clusters/mycluster/hosts/host/host_components/MYSQL_SERVER
Then, when I tried to add the services back starting with ZooKeeper, it again gave me an error like:
resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/zookeeper-client/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/zookeeper-client/con
I checked the directories and found that these links were pointing back to the same directories. So I ran these commands to solve the issue:
rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf
And it worked. In the end I successfully reinstalled Ambari as well as Hadoop on my machine. Thank you.
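A side note on the DELETE calls above: Ambari only deletes a service cleanly when it is in the INSTALLED (stopped) state, and the CSRF message usually means the X-Requested-By header did not reach the server intact (for example when the command is pasted with curly quotes); with those two points, the api.csrfPrevention.enabled=false workaround is often unnecessary. Below is a hedged sketch of that sequence, reusing the same localhost:8080, cluster, and service placeholders from the post; adjust them to your environment.
# Stop the service first (puts it into the INSTALLED state)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/hostname/services/servicename
# Then delete it
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://localhost:8080/api/v1/clusters/hostname/services/servicename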
12-02-2016
07:45 PM
I am trying to install Ambari on my local CentOS 7 machine. I have followed the Hortonworks document step by step. When I run the command ambari-server start, it gives me the error below:
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
I checked the /var/log/ambari-server/ambari-server.out file; it contains:
[EL Warning]: metadata: 2016-12-02 12:53:02.301--ServerSession(799570413)--The reference column name [resource_type_id] mapped on the element [field permissions] does not correspond to a valid id or basic field/column on the mapping reference. Will use referenced column name as provided.
I also checked the logs in the /var/log/ambari-server/ambari-server.log file; it contains:
02 Dec 2016 12:53:00,195 INFO [main] ControllerModule:185 - Detected POSTGRES as the database type from the JDBC URL
02 Dec 2016 12:53:00,643 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.AlertScriptDispatcher
02 Dec 2016 12:53:00,647 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.EmailDispatcher
02 Dec 2016 12:53:00,684 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.SNMPDispatcher
02 Dec 2016 12:53:01,911 INFO [main] AmbariServer:705 - Getting the controller
02 Dec 2016 12:53:02,614 INFO [main] StackManager:107 - Initializing the stack manager...
02 Dec 2016 12:53:02,614 INFO [main] StackManager:267 - Validating stack directory /var/lib/ambari-server/resources/stacks ...
02 Dec 2016 12:53:02,614 INFO [main] StackManager:243 - Validating common services directory /var/lib/ambari-server/resources/common-services ...
02 Dec 2016 12:53:02,888 ERROR [main] AmbariServer:717 - Failed to run the Ambari Server
com.google.inject.ProvisionException: Guice provision errors: 1) Error injecting constructor, org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:105)
while locating org.apache.ambari.server.stack.StackManager annotated with interface com.google.inject.assistedinject.Assisted
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:242)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:124)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:138)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:138)
while locating org.apache.ambari.server.controller.AmbariServer 1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:710)
Caused by: org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.ServiceDirectory.parseMetaInfoFile(ServiceDirectory.java:209)
at org.apache.ambari.server.stack.CommonServiceDirectory.parsePath(CommonServiceDirectory.java:71)
at org.apache.ambari.server.stack.ServiceDirectory.<init>(ServiceDirectory.java:106)
at org.apache.ambari.server.stack.CommonServiceDirectory.<init>(CommonServiceDirectory.java:43)
at org.apache.ambari.server.stack.StackManager.parseCommonServicesDirectory(StackManager.java:301)
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:115)
at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy25.create(Unknown Source)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:246)
at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke(<generated>)
at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)
at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)
at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
... 2 more
Please suggest a fix. Mohan.V
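The Guice error above points at a leftover stack definition: /var/lib/ambari-server/resources/common-services/HAWQ/2.0.0 exists but has no readable metainfo.xml, so the stack manager refuses to start. Below is a minimal sketch for confirming and clearing it; moving the directory aside (rather than deleting it) is my own assumption, chosen so it can be restored if HAWQ support is actually wanted.
# Confirm the directory is there and metainfo.xml is missing or unreadable
ls -l /var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/
# Move the incomplete HAWQ service definition out of the way and restart
mv /var/lib/ambari-server/resources/common-services/HAWQ /tmp/HAWQ.common-services.bak
ambari-server restart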
Labels: Apache Ambari
12-01-2016
08:28 AM
Thanks for the reply, jss. I have already tried everything you suggested, but I am still getting the same issue. When I start the DataNode through the Ambari UI, the following error occurs:
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
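All of the '/dev/null: Permission denied' lines above suggest /dev/null itself has lost its normal permissions or has been replaced by a regular file, which would be consistent with the earlier permission changes. Below is a short sketch for checking and recreating it; mknod with major 1, minor 3 is the standard character device for /dev/null on Linux, but treat this as a sketch and compare the ls -l output against a healthy box first.
# A healthy /dev/null looks like: crw-rw-rw- 1 root root 1, 3 ...
ls -l /dev/null
# If it is a regular file or has the wrong permissions, recreate it as root
rm -f /dev/null
mknod -m 666 /dev/null c 1 3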
12-01-2016
07:48 AM
I changed the permissions of the files above, using another cluster as a reference. Then I tried the command hdfs datanode again and got the following error in the logs:
16/12/01 13:13:22 INFO datanode.DataNode: Shutdown complete.
16/12/01 13:13:22 FATAL datanode.DataNode: Exception in secureMain
java.io.IOException: the path component: '/var/lib/hadoop-hdfs' is owned by a user who is not root and not you. Your effective user id is 0; the path is owned by user id 1005, and its permissions are 0751. Please fix this or select a different socket path.
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:965)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:931)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1134)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:430)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
16/12/01 13:13:22 INFO util.ExitUtil: Exiting with status 1
16/12/01 13:13:22 INFO datanode.DataNode: SHUTDOWN_MSG:
I changed the owner of hadoop-hdfs to root, but I am still getting the same issue. Any suggestions?
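The domain-socket check above fires because hdfs datanode was run as root (effective uid 0) while /var/lib/hadoop-hdfs is owned by uid 1005, which is normally the hdfs user; changing the directory's owner to root works against the Ambari layout rather than fixing the problem. A hedged sketch of the alternative, starting the process as hdfs the way Ambari does, is below; the paths are the HDP 2.3 ones already shown in this thread and may differ on other stacks.
# Confirm uid 1005 is the hdfs user and restore the expected ownership
id hdfs
chown hdfs:hadoop /var/lib/hadoop-hdfs
# Start the DataNode as hdfs instead of root
su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'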
12-01-2016
06:47 AM
Thanks for the reply, Kuldeep.
I tried what you have suggested.
I got the following output.
16/12/01 11:27:49 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
16/12/01 11:27:49 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
16/12/01 11:27:49 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010
16/12/01 11:27:49 INFO datanode.DataNode: Balancing bandwith is 6250000 bytes/s
16/12/01 11:27:49 INFO datanode.DataNode: Number threads for balancing is 5
16/12/01 11:27:49 INFO datanode.DataNode: Shutdown complete.
16/12/01 11:27:49 FATAL datanode.DataNode: Exception in secureMain
java.io.IOException: the path component: '/' is world-writable. Its permissions are 0777. Please fix this or select a different socket path.
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:965)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:931)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1134)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:430)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
16/12/01 11:27:49 INFO util.ExitUtil: Exiting with status 1
16/12/01 11:27:49 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at d-9539.kpit.com/10.10.167.160
When I googled that error, this page http://grokbase.com/t/cloudera/scm-users/143a6q05g6/data-node-failed-to-start suggested changing the permissions of / (root). I did that, but the DataNode still did not start; in fact it now gives the error below:
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
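The first run failed because / was world-writable (0777), and the /dev/null failures afterwards suggest the permission change applied next went too far. The expected state is / owned by root:root with mode 755, and /dev/null as a world-writable character device; below is a small verification sketch using only coreutils, offered as a sanity check rather than a complete fix.
# The root filesystem should be 755 root root; do NOT use -R here
stat -c '%a %U %G %n' /
chmod 755 /
# /dev/null should be a character device with mode 666 (crw-rw-rw-)
ls -l /dev/null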
11-30-2016
02:08 PM
The DataNode is not starting, and it is not writing any error logs to the log file. Error output:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 167, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 62, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 72, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-hostname-out
In /var/log/hadoop/hdfs/hadoop-hdfs-datanode.log:
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
2016-05-04 17:42:04,139 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-05-04 17:42:04,140 INFO datanode.DataNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at FQDN/IP
When I start the DataNode through Ambari, I do not see any logs in the DataNode log file. In /data/log/hadoop/hdfs/hadoop-hdfs-datanode-hostname-out:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0 file size (blocks, -f) unlimited
pending signals (-i) 63785
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63785
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
/data/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9539.out: line 2: syntax error near unexpected token `('
/data/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9539.out: line 2: `core file size (blocks, -c) unlimited'
Please suggest a fix. Mohan.V
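Since the .out file above only contains ulimit output, the actual failure reason normally sits in the matching .log file in the same directory (the start command logs to /data/log/hadoop/hdfs). A short sketch for pulling it out is below; the exact file name includes the hostname, so the wildcard is an assumption.
# The .out file holds ulimits; the .log file holds the real stack trace
ls -lt /data/log/hadoop/hdfs/
tail -n 200 /data/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
The ulimit output also shows open files at 1024, which is far below what HDFS normally needs; raising nofile for the hdfs user in /etc/security/limits.d/hdfs.conf (the file Ambari manages elsewhere in these logs) is worth checking as well.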
Labels: