Member since: 06-03-2016
Posts: 66
Kudos Received: 21
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2283 | 12-03-2016 08:51 AM |
| | 894 | 09-15-2016 06:39 AM |
| | 1219 | 09-12-2016 01:20 PM |
| | 1082 | 09-11-2016 07:04 AM |
| | 1069 | 09-09-2016 12:19 PM |
03-13-2019
12:47 PM
I am getting the same issue. I tried the command above but am still getting the error. I was trying to install Ambari 2.7.3.0 and HDP 3.1.0. Logs:
2019-03-13 18:08:02,234 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,248 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,350 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,355 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,355 - Group['hdfs'] {}
2019-03-13 18:08:02,356 - Group['hadoop'] {}
2019-03-13 18:08:02,356 - Group['users'] {}
2019-03-13 18:08:02,357 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,357 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,358 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-03-13 18:08:02,358 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-03-13 18:08:02,359 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-13 18:08:02,360 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-13 18:08:02,363 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-13 18:08:02,363 - Group['hdfs'] {}
2019-03-13 18:08:02,363 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-03-13 18:08:02,364 - FS Type: HDFS
2019-03-13 18:08:02,364 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-13 18:08:02,375 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,375 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-13 18:08:02,388 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-13 18:08:02,392 - Skipping Execute[('setenforce', '0')] due to not_if
2019-03-13 18:08:02,392 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Changing owner for /var/run/hadoop from 1014 to root
2019-03-13 18:08:02,394 - Changing group for /var/run/hadoop from 1001 to root
2019-03-13 18:08:02,394 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-03-13 18:08:02,394 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-13 18:08:02,397 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,398 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,401 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-13 18:08:02,410 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,412 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,415 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-13 18:08:02,417 - Skipping unlimited key JCE policy check and setup since it is not required
2019-03-13 18:08:02,612 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,613 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,628 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,640 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-13 18:08:02,643 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-13 18:08:02,644 - XmlConfig['hadoop-policy.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,650 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml
2019-03-13 18:08:02,650 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,656 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,662 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml
2019-03-13 18:08:02,662 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,666 - Directory['/usr/hdp/3.1.0.0-78/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-13 18:08:02,666 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,672 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml
2019-03-13 18:08:02,672 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,676 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,682 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml
2019-03-13 18:08:02,682 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,686 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,692 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml
2019-03-13 18:08:02,692 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,722 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'xml_include_file': None, 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,728 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml
2019-03-13 18:08:02,728 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,746 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,749 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,749 - Changing group for /hadoop/hdfs/namenode from 1002 to hadoop
2019-03-13 18:08:02,750 - Directory['/data/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json
2019-03-13 18:08:02,757 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json'] {'content': Template('input.config-hdfs.json.j2'), 'mode': 0644}
2019-03-13 18:08:02,757 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-13 18:08:02,760 - Called service start with upgrade_type: None
2019-03-13 18:08:02,760 - Ranger Hdfs plugin is not enabled
2019-03-13 18:08:02,761 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-13 18:08:02,761 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - /data/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Directory['/data/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Options for start command are:
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-13 18:08:02,762 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-13 18:08:02,762 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:02,771 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-13 18:08:02,771 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.0.0-78/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:04,854 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201903131801 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5591480k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:01:42.815+0530: 0.636: [GC (Allocation Failure) 2019-03-13T18:01:42.815+0530: 0.636: [ParNew: 104960K->8510K(118016K), 0.0072988 secs] 104960K->8510K(1035520K), 0.0073751 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2019-03-13T18:01:43.416+0530: 1.237: [GC (Allocation Failure) 2019-03-13T18:01:43.416+0530: 1.237: [ParNew: 113470K->10636K(118016K), 0.0296314 secs] 113470K->13333K(1035520K), 0.0296978 secs] [Times: user=0.08 sys=0.00, real=0.03 secs]
2019-03-13T18:01:43.448+0530: 1.269: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 16041K(1035520K), 0.0012481 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:01:43.449+0530: 1.270: [CMS-concurrent-mark-start]
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-preclean-start]
2019-03-13T18:01:43.453+0530: 1.273: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:01:40.037+0530: 2.235: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2019-03-13T18:01:45.154+0530: 7.352: [CMS-concurrent-abortable-preclean: 1.299/5.117 secs] [Times: user=2.38 sys=0.02, real=5.11 secs]
2019-03-13T18:01:45.155+0530: 7.353: [GC (CMS Final Remark) [YG occupancy: 48001 K (184320 K)]2019-03-13T18:01:45.155+0530: 7.353: [Rescan (parallel) , 0.0028621 secs]2019-03-13T18:01:45.158+0530: 7.356: [weak refs processing, 0.0000235 secs]2019-03-13T18:01:45.158+0530: 7.356: [class unloading, 0.0026645 secs]2019-03-13T18:01:45.160+0530: 7.358: [scrub symbol table, 0.0028774 secs]2019-03-13T18:01:45.163+0530: 7.361: [scrub string table, 0.0005686 secs][1 CMS-remark: 3414K(843776K)] 51415K(1028096K), 0.0094373 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2019-03-13T18:01:45.164+0530: 7.362: [CMS-concurrent-sweep-start]
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-reset-start]
2019-03-13T18:01:45.167+0530: 7.365: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
Heap
par new generation total 118016K, used 85405K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c49046a8, 0x00000000c6680000)
from space 13056K, 81% used [0x00000000c6680000, 0x00000000c70e3040, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2687K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25931K, capacity 26396K, committed 26660K, reserved 1073152K
class space used 3050K, capacity 3224K, committed 3268K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.log <==
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Storage not yet initialized
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:3136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
... 35 more
2019-03-13 18:07:57,425 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:58,426 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:59,426 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 INFO ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(235)) - Problem connecting to server: D-9033.kpit.com/10.10.167.157:8020
==> /var/log/hadoop/hdfs/gc.log-201903131804 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5549728k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:04:21.675+0530: 0.586: [GC (Allocation Failure) 2019-03-13T18:04:21.675+0530: 0.587: [ParNew: 104960K->8519K(118016K), 0.0063297 secs] 104960K->8519K(1035520K), 0.0064241 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2019-03-13T18:04:22.244+0530: 1.156: [GC (Allocation Failure) 2019-03-13T18:04:22.244+0530: 1.156: [ParNew: 113479K->10589K(118016K), 0.0206730 secs] 113479K->13287K(1035520K), 0.0207371 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
2019-03-13T18:04:22.267+0530: 1.178: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 15328K(1035520K), 0.0021081 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.269+0530: 1.181: [CMS-concurrent-mark-start]
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-mark: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-preclean-start]
2019-03-13T18:04:22.273+0530: 1.185: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:04:22.273+0530: 1.185: [GC (CMS Final Remark) [YG occupancy: 12630 K (118016 K)]2019-03-13T18:04:22.273+0530: 1.185: [Rescan (parallel) , 0.0101631 secs]2019-03-13T18:04:22.283+0530: 1.195: [weak refs processing, 0.0000278 secs]2019-03-13T18:04:22.283+0530: 1.195: [class unloading, 0.0021329 secs]2019-03-13T18:04:22.285+0530: 1.197: [scrub symbol table, 0.0018142 secs]2019-03-13T18:04:22.287+0530: 1.199: [scrub string table, 0.0004759 secs][1 CMS-remark: 2697K(917504K)] 15328K(1035520K), 0.0150536 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2019-03-13T18:04:22.288+0530: 1.200: [CMS-concurrent-sweep-start]
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01 sys=0.01, real=0.00 secs]
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-reset-start]
2019-03-13T18:04:22.291+0530: 1.203: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
par new generation total 118016K, used 85513K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c492acf0, 0x00000000c6680000)
from space 13056K, 81% used [0x00000000c6680000, 0x00000000c70d77e0, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25959K, capacity 26396K, committed 26820K, reserved 1073152K
class space used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.2 <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903131808 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5498248k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-13T18:08:03.428+0530: 0.585: [GC (Allocation Failure) 2019-03-13T18:08:03.428+0530: 0.585: [ParNew: 104960K->8531K(118016K), 0.0072916 secs] 104960K->8531K(1035520K), 0.0073681 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.000+0530: 1.157: [GC (Allocation Failure) 2019-03-13T18:08:04.000+0530: 1.157: [ParNew: 113491K->10252K(118016K), 0.0362179 secs] 113491K->12950K(1035520K), 0.0362798 secs] [Times: user=0.12 sys=0.00, real=0.04 secs]
2019-03-13T18:08:04.038+0530: 1.195: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 14990K(1035520K), 0.0014137 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.039+0530: 1.196: [CMS-concurrent-mark-start]
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-preclean-start]
2019-03-13T18:08:04.043+0530: 1.200: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.043+0530: 1.200: [GC (CMS Final Remark) [YG occupancy: 12293 K (118016 K)]2019-03-13T18:08:04.043+0530: 1.200: [Rescan (parallel) , 0.0020385 secs]2019-03-13T18:08:04.045+0530: 1.202: [weak refs processing, 0.0001378 secs]2019-03-13T18:08:04.045+0530: 1.202: [class unloading, 0.0021708 secs]2019-03-13T18:08:04.047+0530: 1.204: [scrub symbol table, 0.0018312 secs]2019-03-13T18:08:04.049+0530: 1.206: [scrub string table, 0.0004769 secs][1 CMS-remark: 2697K(917504K)] 14990K(1035520K), 0.0070740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2019-03-13T18:08:04.050+0530: 1.207: [CMS-concurrent-sweep-start]
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-reset-start]
2019-03-13T18:08:04.054+0530: 1.211: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
Heap
par new generation total 118016K, used 85171K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 71% used [0x00000000c0000000, 0x00000000c4929988, 0x00000000c6680000)
from space 13056K, 78% used [0x00000000c6680000, 0x00000000c70832b0, 0x00000000c7340000)
to space 13056K, 0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 25960K, capacity 26396K, committed 26820K, reserved 1073152K
class space used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.1 <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.log <==
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,298 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@22295ec4{/,null,UNAVAILABLE}{/hdfs}
2019-03-13 18:08:04,300 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@f316aeb{HTTP/1.1,[http/1.1]}{D-9033.kpit.com:50070}
2019-03-13 18:08:04,300 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@27216cd{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-03-13 18:08:04,300 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@3d9c13b5{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-03-13 18:08:04,301 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-03-13 18:08:04,302 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-13 18:08:04,302 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-03-13 18:08:04,303 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-03-13 18:08:04,303 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,304 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-03-13 18:08:04,304 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at D-9033.kpit.com/10.10.167.157
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.out <==
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63705
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Please help
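The start command above dies on /data/hadoop/hdfs/namenode/current/VERSION (Permission denied) while the NameNode runs as hdfs, so a first check is whether both configured name directories are readable by hdfs. A minimal sketch, assuming the hdfs:hadoop ownership shown in the Directory[...] resources earlier in the output; the recursive chown is an assumption about the fix, not something the log confirms:
# check who currently owns the metadata directories and the VERSION files
ls -l /hadoop/hdfs/namenode/current/VERSION /data/hadoop/hdfs/namenode/current/VERSION
# restore the ownership Ambari expects (hdfs:hadoop), then retry the same start command Ambari uses
chown -R hdfs:hadoop /hadoop/hdfs/namenode /data/hadoop/hdfs/namenode
ambari-sudo.sh su hdfs -l -s /bin/bash -c '/usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start namenode'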
09-11-2018
01:58 PM
@Shu I'm still having the same issue. I checked the firewalls and they are off, and I have downloaded the JDBC driver for SQL Server and put it into the C:\Program Files\Java\jre1.8.0_171\lib\ext directory. In PutDatabaseRecord:
URL: jdbc:sqlserver://hjcorpsql-04:1433;databaseName=Test1;user=MyorganizationName\pbiuser;password=Secure@99;
Class Name: com.microsoft.sqlserver.jdbc.SQLServerDriver
Driver Location: E:/Software/sqljdbc_6.0/enu/jre8/sqljdbc42.jar
untitled5.png
I'm running NiFi on a production system, and the SQL Server is SQL Server Enterprise Edition 2016 on Windows Server 2012 R2. Please help me; I have been stuck here for the last week.
09-06-2018
08:00 PM
Thanks for the reply @Shu. I tried what you suggested but got stuck. I'll share screenshots of what I have done; please correct me where I made a mistake.
GetMongo processor:
PutDatabaseRecord processor: I have set Record Reader - AvroReader, Statement Type - INSERT, Database Connection Pooling Service - DBCPConnectionPool, Schema Name - dbo, Table Name - Locations; the rest are left as they are.
AvroReader:
AvroSchemaRegistry:
DBCPConnectionPool:
Then I created the upstream connections and started the flow, but I am getting an authentication error, which is strange because I use the same user ID and credentials in other applications to get data from SQL Server. The error is: Can not create pooling connection factory (login failed for user pbiuser). Can you please help me with this? I have tried every possibility with that user ID and password, but it didn't work. My SQL Server and NiFi server are both on the same machine.
09-05-2018
04:17 AM
Wow, that was quick and clear. Thanks for the reply @Shu. I will get back to you once I try this.
09-05-2018
12:35 AM
Hello All, I am completely new to NiFi, but I find it very interesting to learn. I am trying to migrate MongoDB data into SQL Server using NiFi. Is this possible? If yes, please suggest the steps I need to follow. Thanks, Mohan V
Labels:
- Apache NiFi
02-05-2018
07:30 AM
Hello All, I have installed Ambari and it is running successfully on a VM, but when I check ipaddress:8080 it is not accessible. When I run the command netstat -nl | head, the output is:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 :::5901 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
tcp6 0 0 :::8081 :::* LISTEN
It is listening on the tcp6 protocol. How can I make it listen on tcp (IPv4) so that I can access it throughout the network? Any suggestions? Thanks, Mohan V
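A tcp6 listener on :::8080 normally accepts IPv4 connections as well, so the firewall is usually the first thing to rule out. A minimal sketch, assuming CentOS 7 with firewalld and the stock Ambari 2.x file /var/lib/ambari-server/ambari-env.sh (both assumptions; adjust to your host):
# confirm the server answers locally
curl -I http://localhost:8080
# check whether firewalld is blocking remote access, and open the port if so
systemctl status firewalld
firewall-cmd --permanent --add-port=8080/tcp && firewall-cmd --reload
# optionally force the Ambari JVM onto IPv4 (assumed file and variable)
echo 'export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djava.net.preferIPv4Stack=true"' >> /var/lib/ambari-server/ambari-env.sh
ambari-server restart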
Labels:
- Apache Ambari
06-07-2017
10:33 AM
Hello All, I just installed NiFi on my Windows system and am trying to get tweets using GetTwitter, with all the keys set in its properties. Before loading the JSON-formatted tweets into a file, I am trying to merge them into a single JSON file named Tweets.json using the MergeContent processor. MergeContent properties: After that, when I try to put the merged tweets into a file using the PutFile processor, I am unable to get the tweets into my file. PutFile properties: I am able to get tweets from GetTwitter and they are merged by MergeContent, but they are not being stored into Tweets.json by PutFile. Can anyone suggest what exactly I am doing wrong? Please help. Mohan V
Labels:
- Apache NiFi
02-17-2017
12:08 PM
Hello All, I am trying to configure Hive on my local single-node cluster. It installed successfully and I have started it, but after one or two minutes it goes down without any error logs. I tried to tail the logs but could not find them anywhere on my local system. Please suggest. Thank you, Mohan.V
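If HiveServer2 is dying without anything obvious on screen, its own log files usually say why. A minimal sketch, assuming the default HDP log location /var/log/hive and config directory /etc/hive/conf (both assumptions; your hive-log4j settings may point elsewhere):
# see which log files Hive is actually writing
ls -l /var/log/hive/
grep -r "hive.log.dir" /etc/hive/conf/
# look at the last lines before the process dies
tail -n 100 /var/log/hive/hiveserver2.log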
Labels:
- Apache Hadoop
- Apache Hive
12-16-2016
07:01 AM
I am using NiFi to get live tweets. The NiFi version is 0.7.1 and the Kafka version is 0.9.0.2.3. Kafka is running properly and I have given the proper Kafka endpoints to NiFi. When I start the NiFi GetTwitter processor it collects a bunch of tweets and they are queued, but in PutKafka the tweets do not move on after they are queued; after 60 or 80 tweets it is unable to process the tweets that remain in the queue. I have tried this on all the single-node clusters available to me (6) and also on a multi-node cluster (7 nodes), and I get the same issue on all of them. Please help me. Mohan.V
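To separate a NiFi problem from a broker problem, it can help to produce and consume on the same topic with Kafka's own console tools. A minimal sketch, assuming the HDP Kafka install path, the default HDP broker port 6667, and a topic named tweets (all assumptions; substitute your own values):
cd /usr/hdp/current/kafka-broker/bin
# push a test message straight to the broker
echo "test-message" | ./kafka-console-producer.sh --broker-list <broker-host>:6667 --topic tweets
# read the topic from the beginning to confirm messages are stored and served
./kafka-console-consumer.sh --zookeeper <zk-host>:2181 --topic tweets --from-beginning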
Labels:
- Apache NiFi
12-03-2016
08:51 AM
Thanks for the suggestion, jss, but it didn't solve the issue completely. I moved those files into a temp directory and tried to start the server again, but now it gave another error:
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
When I checked the logs I found that the current version of the DB is not compatible with the server, so I tried these steps:
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.0/ambari.repo
yum install ambari-server -y
ambari-server setup -y
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.2.1.1/ambari.repo
yum upgrade ambari-server -y
ambari-server upgrade
ambari-server start
After I ran these commands the Ambari server did start, but then something surprising happened. I had removed Ambari completely and was trying to reinstall it, yet when I opened the Ambari UI it was again pointing to the same host I had removed previously, showing a lost heartbeat. I then realised that the Ambari agent was not installed, so I installed and started it:
yum -y install ambari-agent
ambari-agent start
When I then tried to start the services, it didn't work. I checked on the command line whether the services still existed by running zookeeper, but the command was not found, because the service is not installed on my host. So I started removing the services that were stuck in a dead state on the host, using this command:
curl -u admin:admin -H “X-Requested-By: Ambari” -X DELETE http://localhost:8080/api/v1/clusters/hostname/services/servicename
But it didn't work; I got an error message:
"message" : "CSRF protection is turned on. X-Requested-By HTTP header is required."
Then I edited the Ambari server properties file and added this line:
vi /etc/ambari-server/conf/ambari.properties
api.csrfPrevention.enabled=false
ambari-server restart
When I retried, this time it worked. But when I tried to remove Hive it didn't work, because MySQL is running on my machine. This command did work:
curl -u admin:admin -X DELETE -H 'X-Requested-By:admin' http://localhost:8080/api/v1/clusters/mycluster/hosts/host/host_components/MYSQL_SERVER
Then, when I tried to add the services back starting with ZooKeeper, I got an error like:
resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/zookeeper-client/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/zookeeper-client/conf
I checked the directories and found that these links were pointing back to the same directories, so I ran these commands to fix it:
rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf
And it worked. In the end I successfully reinstalled Ambari as well as Hadoop on my machine. Thank you.
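For reference, a cleaned-up form of the two DELETE calls, with <cluster>, <host>, and <service> as placeholders rather than values from this post: the curly quotes around the X-Requested-By header in the first call above (most likely introduced by the forum editor) would keep the shell from passing the header at all, which on its own produces the CSRF error, so plain ASCII quotes may be enough even without disabling CSRF protection.
# delete a service (straight quotes so the shell actually passes the header)
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://localhost:8080/api/v1/clusters/<cluster>/services/<service>
# delete a single host component, e.g. the MYSQL_SERVER example above
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://localhost:8080/api/v1/clusters/<cluster>/hosts/<host>/host_components/MYSQL_SERVER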
12-02-2016
07:45 PM
Trying to install Ambari on my local CentOS 7 machine. I have followed the Hortonworks document step by step, but when I run the command ambari-server start it gives me the error below:
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 255. Check /var/log/ambari-server/ambari-server.out for more information.
I have checked the /var/log/ambari-server/ambari-server.out file; it contains:
[EL Warning]: metadata: 2016-12-02 12:53:02.301--ServerSession(799570413)--The reference column name [resource_type_id] mapped on the element [field permissions] does not correspond to a valid id or basic field/column on the mapping reference. Will use referenced column name as provided.
I have also checked the logs in the /var/log/ambari-server/ambari-server.log file; it contains:
02 Dec 2016 12:53:00,195 INFO [main] ControllerModule:185 - Detected POSTGRES as the database type from the JDBC URL
02 Dec 2016 12:53:00,643 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.AlertScriptDispatcher
02 Dec 2016 12:53:00,647 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.EmailDispatcher
02 Dec 2016 12:53:00,684 INFO [main] ControllerModule:558 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.SNMPDispatcher
02 Dec 2016 12:53:01,911 INFO [main] AmbariServer:705 - Getting the controller
02 Dec 2016 12:53:02,614 INFO [main] StackManager:107 - Initializing the stack manager...
02 Dec 2016 12:53:02,614 INFO [main] StackManager:267 - Validating stack directory /var/lib/ambari-server/resources/stacks ...
02 Dec 2016 12:53:02,614 INFO [main] StackManager:243 - Validating common services directory /var/lib/ambari-server/resources/common-services ...
02 Dec 2016 12:53:02,888 ERROR [main] AmbariServer:717 - Failed to run the Ambari Server
com.google.inject.ProvisionException: Guice provision errors: 1) Error injecting constructor, org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:105)
while locating org.apache.ambari.server.stack.StackManager annotated with interface com.google.inject.assistedinject.Assisted
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:242)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:124)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
for field at org.apache.ambari.server.controller.AmbariServer.ambariMetaInfo(AmbariServer.java:138)
at org.apache.ambari.server.controller.AmbariServer.class(AmbariServer.java:138)
while locating org.apache.ambari.server.controller.AmbariServer 1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:710)
Caused by: org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.ServiceDirectory.parseMetaInfoFile(ServiceDirectory.java:209)
at org.apache.ambari.server.stack.CommonServiceDirectory.parsePath(CommonServiceDirectory.java:71)
at org.apache.ambari.server.stack.ServiceDirectory.<init>(ServiceDirectory.java:106)
at org.apache.ambari.server.stack.CommonServiceDirectory.<init>(CommonServiceDirectory.java:43)
at org.apache.ambari.server.stack.StackManager.parseCommonServicesDirectory(StackManager.java:301)
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:115)
at org.apache.ambari.server.stack.StackManager$$FastClassByGuice$$33e4ffe0.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
at com.google.inject.assistedinject.FactoryProvider2.invoke(FactoryProvider2.java:632)
at com.sun.proxy.$Proxy25.create(Unknown Source)
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:246)
at org.apache.ambari.server.api.services.AmbariMetaInfo$$FastClassByGuice$$202844bc.invoke(<generated>)
at com.google.inject.internal.cglib.reflect.$FastMethod.invoke(FastMethod.java:53)
at com.google.inject.internal.SingleMethodInjector$1.invoke(SingleMethodInjector.java:56)
at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:90)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
... 2 more
Please suggest. Mohan.V
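The exception names a stack definition directory that is missing its metainfo.xml. A minimal check, with the path copied from the error above; moving a leftover empty HAWQ directory aside is only an assumption about the root cause, so keep a backup and treat this as a sketch rather than a confirmed fix:
ls -l /var/lib/ambari-server/resources/common-services/HAWQ/2.0.0/
# if the directory turns out to be empty (no metainfo.xml), move it aside and retry
mv /var/lib/ambari-server/resources/common-services/HAWQ /var/lib/ambari-server/resources/common-services/HAWQ.bak
ambari-server start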
Labels:
- Apache Ambari
12-01-2016
08:28 AM
Thanks for the reply, jss. I have already tried everything you suggested, but I am still getting the same issue. When I start the DataNode through the Ambari UI, the following error occurs:
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
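Every line of that failure is the shell being refused write access to /dev/null, so checking that device is a quick way to narrow things down. A minimal sketch, assuming a standard Linux /dev/null (a character device, major 1 minor 3, mode 0666):
ls -l /dev/null            # should show crw-rw-rw- ... 1, 3 ... /dev/null
chmod 0666 /dev/null       # restore world-writable permissions if they were changed
# only if /dev/null has been replaced by a regular file, recreate the device node
rm -f /dev/null && mknod -m 0666 /dev/null c 1 3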
12-01-2016
07:48 AM
I changed the permissions of the files above by reference to another cluster, then tried the hdfs datanode command again and got the following error in the logs:
16/12/01 13:13:22 INFO datanode.DataNode: Shutdown complete.
16/12/01 13:13:22 FATAL datanode.DataNode: Exception in secureMain
java.io.IOException: the path component: '/var/lib/hadoop-hdfs' is owned by a user who is not root and not you. Your effective user id is 0; the path is owned by user id 1005, and its permissions are 0751. Please fix this or select a different socket path.
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:965)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:931)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1134)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:430)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
16/12/01 13:13:22 INFO util.ExitUtil: Exiting with status 1
16/12/01 13:13:22 INFO datanode.DataNode: SHUTDOWN_MSG:
I changed the owner of hadoop-hdfs to root, but I am still getting the same issue. Any suggestions?
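The DomainSocket check requires every component of the socket path to be owned by root or by the user actually running the DataNode, and here the daemon was started as root (effective uid 0) against a directory owned by uid 1005. A minimal sketch, assuming the usual HDP ownership of /var/lib/hadoop-hdfs (hdfs:hadoop) and starting the DataNode as hdfs, mirroring the start command Ambari itself uses elsewhere in this thread:
chown hdfs:hadoop /var/lib/hadoop-hdfs
chmod 0751 /var/lib/hadoop-hdfs
# start the DataNode as the hdfs user instead of running "hdfs datanode" as root
su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'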
12-01-2016
06:47 AM
Thanks for the reply, Kuldeep. I tried what you suggested and got the following output:
16/12/01 11:27:49 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
16/12/01 11:27:49 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
16/12/01 11:27:49 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010
16/12/01 11:27:49 INFO datanode.DataNode: Balancing bandwith is 6250000 bytes/s
16/12/01 11:27:49 INFO datanode.DataNode: Number threads for balancing is 5
16/12/01 11:27:49 INFO datanode.DataNode: Shutdown complete.
16/12/01 11:27:49 FATAL datanode.DataNode: Exception in secureMain
java.io.IOException: the path component: '/' is world-writable. Its permissions are 0777. Please fix this or select a different socket path.
at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:965)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:931)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1134)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:430)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
16/12/01 11:27:49 INFO util.ExitUtil: Exiting with status 1
16/12/01 11:27:49 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at d-9539.kpit.com/10.10.167.160
I googled that error, and http://grokbase.com/t/cloudera/scm-users/143a6q05g6/data-node-failed-to-start suggested changing the permissions of / (root). When I did that, the DataNode still did not start; in fact it now gives the error below:
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. /etc/profile: line 45: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
/etc/profile: line 70: /dev/null: Permission denied
-bash: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-.out
/usr/hdp/2.3.4.7-4//hadoop-hdfs/bin/hdfs.distro: line 30: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/conf/hadoop-env.sh: line 100: /dev/null: Permission denied
ls: write error: Broken pipe
/usr/hdp/2.3.4.7-4/hadoop/libexec/hadoop-config.sh: line 155: /dev/null: Permission denied
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh: line 187: /dev/null: Permission denied
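Two problems are visible above: the original FATAL means the parent directory of the DataNode's domain socket (dfs.domain.socket.path) must not be world-writable, and the new /dev/null failures suggest the permission change on / went further than intended. A minimal recovery sketch follows, assuming the chmod was applied recursively and that the usual HDP socket location /var/lib/hadoop-hdfs/dn_socket is (or can be) used; verify both assumptions against this cluster before running anything.
# Restore sane permissions on / and recreate /dev/null (run as root)
chmod 755 /
rm -f /dev/null && mknod -m 666 /dev/null c 1 3
# Give the domain socket a parent directory that is not world-writable and
# point dfs.domain.socket.path (hdfs-site.xml, via Ambari) at a file inside it
mkdir -p /var/lib/hadoop-hdfs
chown hdfs:hadoop /var/lib/hadoop-hdfs
chmod 750 /var/lib/hadoop-hdfs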
11-30-2016
02:08 PM
The DataNode is not starting, and it is not giving any error logs in its log file. Error logs:-
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 167, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 62, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 72, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /data/log/hadoop/hdfs/hadoop-hdfs-datanode-hostname-out
In /var/log/hadoop/hdfs/hadoop-hdfs-datanode.log:
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2411)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2345)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2526)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2550)
2016-05-04 17:42:04,139 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-05-04 17:42:04,140 INFO datanode.DataNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at FQDN/IP
When I start the DataNode through Ambari, I don't see any logs in the DataNode log file. In /data/log/hadoop/hdfs/hadoop-hdfs-datanode-hostname-out:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63785
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63785
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
/data/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9539.out: line 2: syntax error near unexpected token `('
/data/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9539.out: line 2: `core file size (blocks, -c) unlimited'
Please advise. Mohan.V
10-11-2016
04:28 AM
Thanks for the reply, Ayub Pathan.
1. The command used to create the topic:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
The topic was created successfully.
2. When Kerberos was enabled I did not have permission to create topics in Kafka, so I disabled Kerberos on the cluster without any issues, and now I am able to create the topic.
3. No, Ranger is not enabled in my cluster. I have already gone through the link you mentioned, but I did not find a solution there, as it never clearly explains how the issue was solved.
Please advise.
Mohan.V
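For reference, the topic can be sanity-checked with --describe against the same ZooKeeper quorum (localhost:2181 is assumed here from the create command above):
# Show the topic's partition and replica assignment
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test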
10-10-2016
03:11 PM
Hi all, I am trying to produce messages but I am getting the errors below.
[2016-10-10 20:22:10,947] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test11 (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,049] WARN Error while fetching metadata [{TopicMetadata for topic test11 ->
No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,051] WARN Error while fetching metadata [{TopicMetadata for topic test11 ->
No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,051] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test11 (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,153] WARN Error while fetching metadata [{TopicMetadata for topic test11 -> No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,154] ERROR Failed to send requests for topics test11 with correlation ids in [0,8] (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,155] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
I have disabled Kerberos, but it looks like Kafka is still trying to authorize requests through Kerberos. Please advise. Thank you. Mohan.V
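A few hedged checks, assuming an HDP broker managed by Ambari (the paths and property names below are the usual HDP defaults, not confirmed from this cluster):
# 1. Is an authorizer still configured? With SimpleAclAuthorizer set and
#    allow.everyone.if.no.acl.found=false, anonymous producers are denied.
grep -E 'authorizer.class.name|allow.everyone.if.no.acl.found' /usr/hdp/current/kafka-broker/config/server.properties
# 2. Are the listeners still SASL? After disabling Kerberos they should be plain PLAINTEXT again.
grep -E '^listeners|security.inter.broker.protocol' /usr/hdp/current/kafka-broker/config/server.properties
# 3. List any ACLs left over from the Kerberized period
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --list --authorizer-properties zookeeper.connect=localhost:2181 --topic test11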
Labels:
- Apache Kafka
09-22-2016
10:54 AM
Thanks for your valuable suggestion, Pierre Villard. It's done.
09-22-2016
07:26 AM
2 Kudos
Hi all, I am trying to get tweets using NiFi and store them in a local file. I have used the following configuration for the GetTwitter processor:
Twitter Endpoint :- Filter Endpoint
given all twitter keys
Languages :- en
Terms to Filter On :- facebook,wipro,google
I am trying to write the tweets to a file using PutFile, with the following configuration:
Directory:- /root/tweets
Conflict Resolution Strategy:- fail
Create Missing Directories :-true
Maximum File Count :- 1
Last Modified Time :-
Permissions:-rw-r--r--
Owner:-root
Group:- root
I am able to get tweets, but each tweet ends up in its own JSON file. If I increase the Maximum File Count to 20, it creates 20 JSON files, each containing only one tweet. What I want is to store all the tweets in a single JSON file. In https://community.hortonworks.com/questions/42149/using-nifi-to-collect-tweets-into-one-large-file.html they mention using the MergeContent processor, but I don't understand exactly how to use MergeContent, as I am completely new to NiFi. Should it go before or after PutFile? Please help. Mohan.V
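For reference, MergeContent belongs between GetTwitter and PutFile (GetTwitter -> MergeContent -> PutFile), so that many small tweet FlowFiles are binned into one larger FlowFile before it is written to disk. A minimal property sketch, with the numeric values chosen purely for illustration:
Merge Strategy :- Bin-Packing Algorithm
Merge Format :- Binary Concatenation
Minimum Number of Entries :- 20 (illustrative value)
Maximum Number of Entries :- 1000 (illustrative value)
Max Bin Age :- 5 min (illustrative value)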
Labels:
- Apache NiFi
09-20-2016
05:17 AM
Santhoshi G, please refer to this link; it may have the solution you need: https://community.hortonworks.com/questions/15495/amabari-server-212-setup-error-while-creating-data.html
09-15-2016
06:39 AM
2 Kudos
I got it on my own. I think the problem was the mismatched jar versions used in my script. When I used matching versions of the elephant-bird jars, as suggested by @gkeys, it worked fine. Script:-
REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted =foreach twitter generate (chararray)$0#'created_at' as created_at,(chararray)$0#'id' as id,(chararray)$0#'id_str' as id_str,(chararray)$0#'text' as text,(chararray)$0#'source' as source,com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'entities') as entities,(boolean)$0#'favorited' as favorited,(long)$0#'favorite_count' as favorite_count,(long)$0#'retweet_count' as retweet_count,(boolean)$0#'retweeted' as retweeted,com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'place') as place;
dump extracted;
And it worked fine.
09-12-2016
01:38 PM
@gkeys, please take a look at this and suggest where I am going wrong: https://community.hortonworks.com/questions/56017/pig-to-elasesticsearch-stringindexoutofboundsexcep.html
09-12-2016
01:35 PM
Thanks @gkeys. I followed the doc and it worked. As I said, you are the best. :):)
09-12-2016
01:20 PM
1 Kudo
Thanks for your reply, Artem Ervits. I think the problem was the mismatched jar versions used in my script. When I used matching versions of the elephant-bird jars, as suggested by @gkeys, it worked fine. Script:-
REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted = foreach twitter generate (chararray)$0#'created_at' as created_at,(chararray)$0#'id' as id,(chararray)$0#'id_str' as id_str,(chararray)$0#'text' as text,(chararray)$0#'source' as source,com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'entities') as entities,(boolean)$0#'favorited' as favorited,(long)$0#'favorite_count' as favorite_count,(long)$0#'retweet_count' as retweet_count,(boolean)$0#'retweeted' as retweeted,com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'place') as place;
dump extracted;
And it worked fine.
09-12-2016
06:41 AM
2 Kudos
I am trying to store the pig output by using elephant bird LzoJsonStorage() But it didn't worked. sample:- {"in_reply_to_user_id_str":null,"coordinates":null,"text":"\u0627\u0627\u0627\u0627\u0627\u062d \u0627\u0644\u0627\u062c\u0648\u0627\u0621 \u0628\u062a\u0627\u0639\u062a \u0633\u0643\u0633 \u062d\u0627\u0627\u0627\u0627\u0627\u0631\u0631 \u0645\u0646\u0648 \u0627\u0644\u0641\u062d\u0644 \u0627\u0644\u0644\u064a \u064a\u0628\u064a \u0627\u0633\u0648\u064a \u0644\u0647 \u0641\u0648\u0644\u0648 \u064a\u0633\u0648\u064a \u0631\u062a\u0648\u064a\u062a","created_at":"Thu Apr 12 17:38:47 +0000 2012","favorited":false,"contributors":null,"in_reply_to_screen_name":null,"source":"\u003Ca href=\"http:\/\/blackberry.com\/twitter\" rel=\"nofollow\"\u003ETwitter for BlackBerry\u00ae\u003C\/a\u003E","retweet_count":0,"in_reply_to_user_id":null,"in_reply_to_status_id":null,"id_str":"190494185374220289","entities":{"hashtags":[],"user_mentions":[],"urls":[]},"geo":null,"retweeted":false,"place":null,"truncated":false,"in_reply_to_status_id_str":null,"user":{"created_at":"Tue Apr 10 11:43:10 +0000 2012","notifications":null,"profile_use_background_image":true,"profile_background_image_url_https":"https:\/\/si0.twimg.com\/images\/themes\/theme1\/bg.png","url":null,"contributors_enabled":false,"geo_enabled":false,"profile_text_color":"333333","followers_count":5,"profile_image_url_https":"https:\/\/si0.twimg.com\/profile_images\/2084863527\/Screen-120409-224940_normal.jpg","profile_image_url":"http:\/\/a0.twimg.com\/profile_images\/2084863527\/Screen-120409-224940_normal.jpg","listed_count":0,"profile_background_image_url":"http:\/\/a0.twimg.com\/images\/themes\/theme1\/bg.png","description":"\u0627\u0628\u064a \u0633\u0643\u0633 \u0631\u0631\u0648\u0639\u0647 \u0645\u0639 \u0641\u062d\u0644 ","screen_name":"Ga7bah_sex","profile_link_color":"0084B4","location":"\u0627\u0631\u0636 \u0627\u0644\u0633\u0643\u0633","default_profile":true,"show_all_inline_media":false,"is_translator":false,"statuses_count":5,"profile_background_color":"C0DEED","id_str":"550121247","follow_request_sent":null,"lang":"ar","profile_background_tile":false,"protected":false,"profile_sidebar_fill_color":"DDEEF6","name":"\u0642\u062d\u0628\u0647 \u0648\u0627\u0628\u064a \u0632\u0628 ","default_profile_image":false,"time_zone":null,"friends_count":8,"id":550121247,"following":null,"verified":false,"utc_offset":null,"favourites_count":0,"profile_sidebar_border_color":"C0DEED"},"id":190494185374220289} Script:- REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
REGISTER google-collections-1.0.jar
REGISTER hadoop-lzo-0.4.14.jar
REGISTER piggybank-0.12.0.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted = foreach twitter generate (chararray)$0#'created_at' as created_at,(chararray)$0#'id' as id,(chararray)$0#'id_str' as id_str,(chararray)$0#'text' as text,(chararray)$0#'source' as source;
STORE extracted into 'tweets' using com.twitter.elephantbird.pig.store.LzoJsonStorage();
Error while trying to store in tweets:-
java.lang.Exception: java.lang.RuntimeException: native-lzo library not available
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzoCodec.createCompressor(LzoCodec.java:165)
at com.hadoop.compression.lzo.LzopCodec.createOutputStream(LzopCodec.java:50)
at com.twitter.elephantbird.util.LzoUtils.getIndexedLzoOutputStream(LzoUtils.java:75)
at com.twitter.elephantbird.mapreduce.output.LzoTextOutputFormat.getRecordWriter(LzoTextOutputFormat.java:24)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:81)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at
From the error, I guess the native LZO library is not available. Please suggest how I can resolve this. Mohan.V
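A hedged sketch for getting the native LZO codec onto the node (the package names below are the usual HDP ones and are an assumption; adjust for your repo/OS):
# Install the LZO libraries and the hadoop-lzo native bindings
yum install -y lzo lzo-devel hadooplzo hadooplzo-native
# Verify the native shared objects are where the Hadoop client looks for them
ls /usr/hdp/current/hadoop-client/lib/native/ | grep -i -E 'gplcompression|lzo'
# Also confirm io.compression.codecs in core-site.xml includes
# com.hadoop.compression.lzo.LzoCodec and com.hadoop.compression.lzo.LzopCodec,
# then rerun the Pig script.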
Labels:
- Apache Hadoop
- Apache Pig
09-12-2016
04:12 AM
Thank you, gkeys... you are the best!
09-11-2016
12:37 PM
I would like to know: how can we consume Kafka topic messages using Pig? Which jar files does it require? Any suggestions? Mohan.V
Labels:
- Apache Hadoop
- Apache Kafka
- Apache Pig
09-11-2016
08:43 AM
1 Kudo
I have been facing this issue from long time. I tried to solve this but i couldn't. I need some experts advice to solve this. I am trying to load a sample tweets json file. sample.json;- {"filter_level":"low","retweeted":false,"in_reply_to_screen_name":"FilmFan","truncated":false,"lang":"en","in_reply_to_status_id_str":null,"id":689085590822891521,"in_reply_to_user_id_str":"6048122","timestamp_ms":"1453125782100","in_reply_to_status_id":null,"created_at":"Mon Jan 18 14:03:02 +0000 2016","favorite_count":0,"place":null,"coordinates":null,"text":"@filmfan hey its time for you guys follow @acadgild To #AchieveMore and participate in contest Win Rs.500 worth vouchers","contributors":null,"geo":null,"entities":{"symbols":[],"urls":[],"hashtags":[{"text":"AchieveMore","indices":[56,68]}],"user_mentions":[{"id":6048122,"name":"Tanya","indices":[0,8],"screen_name":"FilmFan","id_str":"6048122"},{"id":2649945906,"name":"ACADGILD","indices":[42,51],"screen_name":"acadgild","id_str":"2649945906"}]},"is_quote_status":false,"source":"<a href=\"https://about.twitter.com/products/tweetdeck\" rel=\"nofollow\">TweetDeck<\/a>","favorited":false,"in_reply_to_user_id":6048122,"retweet_count":0,"id_str":"689085590822891521","user":{"location":"India ","default_profile":false,"profile_background_tile":false,"statuses_count":86548,"lang":"en","profile_link_color":"94D487","profile_banner_url":"https://pbs.twimg.com/profile_banners/197865769/1436198000","id":197865769,"following":null,"protected":false,"favourites_count":1002,"profile_text_color":"000000","verified":false,"description":"Proud Indian, Digital Marketing Consultant,Traveler, Foodie, Adventurer, Data Architect, Movie Lover, Namo Fan","contributors_enabled":false,"profile_sidebar_border_color":"000000","name":"Bahubali","profile_background_color":"000000","created_at":"Sat Oct 02 17:41:02 +0000 2010","default_profile_image":false,"followers_count":4467,"profile_image_url_https":"https://pbs.twimg.com/profile_images/664486535040000000/GOjDUiuK_normal.jpg","geo_enabled":true,"profile_background_image_url":"http://abs.twimg.com/images/themes/theme1/bg.png","profile_background_image_url_https":"https://abs.twimg.com/images/themes/theme1/bg.png","follow_request_sent":null,"url":null,"utc_offset":19800,"time_zone":"Chennai","notifications":null,"profile_use_background_image":false,"friends_count":810,"profile_sidebar_fill_color":"000000","screen_name":"Ashok_Uppuluri","id_str":"197865769","profile_image_url":"http://pbs.twimg.com/profile_images/664486535040000000/GOjDUiuK_normal.jpg","listed_count":50,"is_translator":false}}
I have tried to load this JSON file using elephant-bird. Script:-
REGISTER json-simple-1.1.1.jar
REGISTER elephant-bird-2.2.3.jar
REGISTER guava-11.0.2.jar
REGISTER avro-1.7.7.jar
REGISTER piggybank-0.12.0.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
B = foreach twitter generate (chararray)$0#'created_at' as created_at,(chararray)$0#'id' as id,(chararray)$0#'id_str' as id_str,(chararray)$0#'text' as text,(chararray)$0#'source' as source,com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'entities') as entities,(boolean)$0#'favorited' as favorited;
describe B;
OUTPUT:- B: {created_at: chararray,id: chararray,id_str: chararray,text: chararray,source: chararray,entitis: map[chararray],favorited: boolean}
But when I tried to DUMP B the follwoing error has occured ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias B I am providing the complete logs here. 2016-09-11 14:07:57,184 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2016-09-11 14:07:57,184 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2016-09-11 14:07:57,194 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-09-11 14:07:57,194 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2016-09-11 14:07:57,194 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2016-09-11 14:07:57,199 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2016-09-11 14:07:57,199 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2016-09-11 14:07:57,199 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2016-09-11 14:07:57,199 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: /tmp/1473583077199-0
2016-09-11 14:07:57,206 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2016-09-11 14:07:57,207 [JobControl] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-09-11 14:07:57,208 [JobControl] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-09-11 14:07:57,211 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2016-09-11 14:07:57,211 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2016-09-11 14:07:57,212 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2016-09-11 14:07:57,216 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local360376249_0009
2016-09-11 14:07:57,267 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
2016-09-11 14:07:57,267 [Thread-214] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2016-09-11 14:07:57,270 [Thread-214] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2016-09-11 14:07:57,270 [Thread-214] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2016-09-11 14:07:57,270 [Thread-214] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputCommitter
2016-09-11 14:07:57,271 [Thread-214] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2016-09-11 14:07:57,272 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local360376249_0009_m_000000_0
2016-09-11 14:07:57,277 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2016-09-11 14:07:57,277 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2016-09-11 14:07:57,277 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2016-09-11 14:07:57,278 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :1
Total Length = 2416
Input split[0]:
Length = 2416
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
2016-09-11 14:07:57,282 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed file:/root/PIG/PIG/sample.json:0+2416
2016-09-11 14:07:57,282 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2016-09-11 14:07:57,282 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2016-09-11 14:07:57,288 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2016-09-11 14:07:57,290 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: twitter[20,10],B[21,4] C: R:
2016-09-11 14:07:57,291 [Thread-214] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2016-09-11 14:07:57,296 [Thread-214] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local360376249_0009
java.lang.Exception: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.Counter, but class was expected
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.Counter, but class was expected
at com.twitter.elephantbird.pig.util.PigCounterHelper.incrCounter(PigCounterHelper.java:55)
at com.twitter.elephantbird.pig.load.LzoBaseLoadFunc.incrCounter(LzoBaseLoadFunc.java:70)
at com.twitter.elephantbird.pig.load.JsonLoader.getNext(JsonLoader.java:130)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-09-11 14:07:57,467 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local360376249_0009
2016-09-11 14:07:57,467 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases B,twitter
2016-09-11 14:07:57,467 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: twitter[20,10],B[21,4] C: R:
2016-09-11 14:07:57,468 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-09-11 14:07:57,468 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2016-09-11 14:07:57,468 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local360376249_0009 has failed! Stop running all dependent jobs
2016-09-11 14:07:57,468 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-09-11 14:07:57,469 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-09-11 14:07:57,469 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2016-09-11 14:07:57,469 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2016-09-11 14:07:57,470 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion      PigVersion          UserId    StartedAt              FinishedAt             Features
2.7.1.2.3.4.7-4    0.15.0.2.3.4.7-4    root      2016-09-11 14:07:57    2016-09-11 14:07:57    UNKNOWN
Failed!
Failed Jobs:
JobId                      Alias        Feature     Message                 Outputs
job_local360376249_0009    B,twitter    MAP_ONLY    Message: Job failed!    file:/tmp/temp252944192/tmp-470484503,
Input(s):
Failed to read data from "file:///root/PIG/PIG/sample.json"
Output(s):
Failed to produce result in "file:/tmp/temp252944192/tmp-470484503"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local360376249_0009
Please also clarify how to use the jar files and which versions to use. There is so much confusion for me: some say use elephant-bird, and some say use Avro. Please help. Mohan.V
Labels:
- Apache Hadoop
- Apache Pig
09-11-2016
07:04 AM
1 Kudo
I think I got it on my own. As gkeys said, I made it too complex. In the end I realized that I don't need the third step (the grouping), and the data is now successfully stored into HBase. Here is the script:-
data = load 'sample.txt' using JsonLoader('pattern:chararray, tweets: bag {(tweet::created_at: chararray,tweet::id: chararray,tweet::user_id: chararray,tweet::text: chararray)}');
A = FILTER data BY pattern =='google_*';
STORE A into 'hbase://tablename' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('tweets:tweets');
09-10-2016
02:24 PM
Thank you for your valuable suggestions, gkeys. I didn't expect it would become such a complex script. As I said, I am just a beginner in Pig, so please suggest how to solve this.