Member since: 05-31-2017 · Posts: 10 · Kudos Received: 0 · Solutions: 0
11-15-2017 05:15 PM
Hi All, after enabling Kerberos the Ambari services are not coming up. Only the ZooKeeper service comes up, and it goes down again because all of the other services fail. Ambari is able to regenerate the keytabs and stop the services successfully, but it fails to start them. We are using AWS openAD as the service for the Kerberos/AD integration. Every start fails with:

Caused by: KrbException: Identifier doesn't match expected value (906)

The full log is attached in my post below; the key excerpts are:

stderr:
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1.
starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out

==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.log <==
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2671)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: javax.security.auth.login.LoginException: Pre-authentication information was invalid (24)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1088)
... 10 more
Caused by: KrbException: Pre-authentication information was invalid (24)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:76)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 23 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.ASRep.init(ASRep.java:64)
at sun.security.krb5.internal.ASRep.<init>(ASRep.java:59)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
... 26 more
2017-11-15 16:09:34,310 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-11-15 16:09:34,312 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ip-192-168-0-37.eu-west-1.compute.internal/192.168.0.37
************************************************************/

==> /var/log/hadoop/hdfs/jsvc.err <==
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1

Command failed after 1 tries
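Error (24) is KDC_ERR_PREAUTH_FAILED, i.e. the KDC rejected the key presented from the keytab (commonly a stale keytab/key version number after regeneration, or an encryption-type mismatch with AD), and (906) is the Java Kerberos client failing to parse the KDC's reply as the message type it expected. A first isolation step is to test the regenerated keytab outside Hadoop entirely. A minimal sketch, assuming the MIT Kerberos client tools (klist/kinit) are installed; the keytab path, principal name, and realm below are hypothetical examples, not values from this cluster:

# Hypothetical diagnostic sketch: check whether the regenerated keytab can
# authenticate on its own, using the MIT Kerberos CLI tools (klist/kinit).
# The keytab path, principal, and realm are examples, not from this cluster.
import subprocess

KEYTAB = "/etc/security/keytabs/dn.service.keytab"                       # assumed path
PRINCIPAL = "dn/ip-192-168-0-37.eu-west-1.compute.internal@EXAMPLE.COM"  # assumed realm

def run(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, universal_newlines=True)
    out, _ = proc.communicate()
    return proc.returncode, out

# 1. List the keytab entries and their key version numbers (KVNOs).
rc, out = run(["klist", "-kt", KEYTAB])
print(out)

# 2. Attempt a real AS exchange against the KDC with that keytab.
rc, out = run(["kinit", "-kt", KEYTAB, PRINCIPAL])
if rc != 0:
    # Reproducing the failure here keeps Hadoop out of the picture and points
    # at the keytab/KDC side (stale KVNO, enctype mismatch, wrong realm, ...).
    print("kinit failed:\n" + out)
else:
    print("kinit succeeded; the keytab itself works")
    run(["kdestroy"])

If kinit fails the same way, comparing the encryption types stored in the keytab (klist -kte) against what the AD side issues is usually the next step.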
11-15-2017 05:12 PM
Hi All, after enabling Kerberos the Ambari services are not coming up. Only the ZooKeeper service comes up, and it goes down again because all of the other services fail. Ambari is able to regenerate the keytabs and stop the services successfully, but it fails to start them. Every start fails with:

Caused by: KrbException: Identifier doesn't match expected value (906)

Log file attached below:

stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 177, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 64, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 271, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out
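(For context: the Python traceback above is not itself the failure. Ambari's Execute resource shells out to the start command and raises ExecutionFailed whenever that command exits non-zero; roughly the following pattern, shown here as a simplified sketch, not Ambari's actual shell.py:)

# Rough sketch of what the traceback above amounts to: Ambari's shell helper
# runs the daemon command and raises ExecutionFailed on a non-zero exit code.
# Simplified illustration only, not Ambari's real implementation.
import subprocess

class ExecutionFailed(Exception):
    def __init__(self, message, code, out, err):
        super(ExecutionFailed, self).__init__(message)
        self.code, self.out, self.err = code, out, err

def checked_call(command):
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise ExecutionFailed("Execution of %r returned %d." % (command, proc.returncode),
                              proc.returncode, out, err)
    return out

# So ExecutionFailed merely relays hadoop-daemon.sh's exit code 1; the real
# error is the Kerberos login failure in the datanode log further below.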
stdout:
2017-11-15 16:09:32,516 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-15 16:09:32,619 - Stack Feature Version Info: stack_version=2.5, version=2.5.5.0-157, current_cluster_version=2.5.5.0-157 -> 2.5.5.0-157
2017-11-15 16:09:32,620 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-15 16:09:32,621 - Group['livy'] {}
2017-11-15 16:09:32,622 - Group['spark'] {}
2017-11-15 16:09:32,622 - Group['hadoop'] {}
2017-11-15 16:09:32,622 - Group['users'] {}
2017-11-15 16:09:32,622 - Group['knox'] {}
2017-11-15 16:09:32,623 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,623 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,624 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,624 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,625 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,625 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-15 16:09:32,626 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,626 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,627 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-15 16:09:32,627 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,628 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,628 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,629 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,629 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,630 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,630 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-15 16:09:32,631 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-15 16:09:32,632 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-15 16:09:32,635 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-11-15 16:09:32,636 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-15 16:09:32,636 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-15 16:09:32,637 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-15 16:09:32,641 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-11-15 16:09:32,643 - Group['hdfs'] {}
2017-11-15 16:09:32,643 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-11-15 16:09:32,644 - FS Type:
2017-11-15 16:09:32,644 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-15 16:09:32,656 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2017-11-15 16:09:32,656 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-15 16:09:32,667 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-15 16:09:32,672 - Skipping Execute[('setenforce', '0')] due to not_if
2017-11-15 16:09:32,672 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-15 16:09:32,674 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-15 16:09:32,674 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-15 16:09:32,678 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2017-11-15 16:09:32,679 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2017-11-15 16:09:32,685 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-15 16:09:32,694 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-15 16:09:32,694 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-15 16:09:32,695 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-15 16:09:32,699 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-11-15 16:09:32,702 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-15 16:09:32,704 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
2017-11-15 16:09:32,705 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -jar /var/lib/ambari-agent/tools/jcepolicyinfo.jar -tu'] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/jdk64/jdk1.8.0_112'}}
Unlimited Key JCE Policy: true
2017-11-15 16:09:32,873 - The unlimited key JCE policy is required, and appears to have been installed.
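(Aside: the unlimited-key JCE probe that Ambari just ran can be repeated by hand when checking other hosts; a minimal sketch reusing the jar and JAVA_HOME paths exactly as they appear in this log:)

# Re-run the unlimited-key JCE probe that Ambari executed above; the jar and
# JAVA_HOME paths are the ones this log shows for this host.
import os, subprocess

JAVA_HOME = "/usr/jdk64/jdk1.8.0_112"
subprocess.call([os.path.join(JAVA_HOME, "bin", "java"),
                 "-jar", "/var/lib/ambari-agent/tools/jcepolicyinfo.jar", "-tu"],
                env=dict(os.environ, JAVA_HOME=JAVA_HOME))
# Prints "Unlimited Key JCE Policy: true" when unlimited-strength crypto is
# available, which AES-256 Kerberos keys require on this JDK.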
2017-11-15 16:09:33,035 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-15 16:09:33,039 - Stack Feature Version Info: stack_version=2.5, version=2.5.5.0-157, current_cluster_version=2.5.5.0-157 -> 2.5.5.0-157
2017-11-15 16:09:33,041 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-15 16:09:33,047 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-11-15 16:09:33,079 - checked_call returned (0, '2.5.5.0-157', '')
2017-11-15 16:09:33,083 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-11-15 16:09:33,088 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-11-15 16:09:33,089 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-15 16:09:33,097 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-11-15 16:09:33,097 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,105 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-15 16:09:33,112 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-11-15 16:09:33,113 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,118 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-15 16:09:33,119 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-11-15 16:09:33,125 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-11-15 16:09:33,126 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,131 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-15 16:09:33,137 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-11-15 16:09:33,138 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,144 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'configurations': ...}
2017-11-15 16:09:33,151 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-15 16:09:33,151 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,195 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-15 16:09:33,203 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-15 16:09:33,203 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-15 16:09:33,226 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'root'}
2017-11-15 16:09:33,227 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2017-11-15 16:09:33,227 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2017-11-15 16:09:33,230 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/tmp', '/datanode1', '/datanode2', '/proc/sys/fs/binfmt_misc'].
2017-11-15 16:09:33,230 - Mount point for directory /datanode1 is /datanode1
2017-11-15 16:09:33,230 - Mount point for directory /datanode2 is /datanode2
2017-11-15 16:09:33,230 - Mount point for directory /datanode1 is /datanode1
2017-11-15 16:09:33,230 - Forcefully ensuring existence and permissions of the directory: /datanode1
2017-11-15 16:09:33,231 - Directory['/datanode1'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0755, 'owner': 'hdfs'}
2017-11-15 16:09:33,231 - Mount point for directory /datanode2 is /datanode2
2017-11-15 16:09:33,231 - Forcefully ensuring existence and permissions of the directory: /datanode2
2017-11-15 16:09:33,231 - Directory['/datanode2'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0755, 'owner': 'hdfs'}
2017-11-15 16:09:33,234 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/tmp', '/datanode1', '/datanode2', '/proc/sys/fs/binfmt_misc'].
2017-11-15 16:09:33,234 - Mount point for directory /datanode1 is /datanode1
2017-11-15 16:09:33,234 - Mount point for directory /datanode2 is /datanode2
2017-11-15 16:09:33,235 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/datanode2,/datanode2\n/datanode1,/datanode1\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-15 16:09:33,236 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-11-15 16:09:33,236 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-11-15 16:09:33,236 - Changing group for /var/run/hadoop from 0 to hadoop
2017-11-15 16:09:33,237 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-11-15 16:09:33,237 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-11-15 16:09:33,238 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2017-11-15 16:09:33,248 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2017-11-15 16:09:33,248 - Execute['ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2017-11-15 16:09:37,296 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'root'}
==> /var/log/hadoop/hdfs/gc.log-201711151019 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(62720916k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:19:03.555+0000: 1.286: [GC (Allocation Failure) 2017-11-15T10:19:03.555+0000: 1.286: [ParNew: 163840K->13709K(184320K), 0.0097530 secs] 163840K->13709K(1028096K), 0.0098261 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2017-11-15T10:19:05.565+0000: 3.296: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 142966K(1028096K), 0.0094206 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-11-15T10:19:05.574+0000: 3.306: [CMS-concurrent-mark-start]
2017-11-15T10:19:05.582+0000: 3.314: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
2017-11-15T10:19:05.582+0000: 3.314: [CMS-concurrent-preclean-start]
2017-11-15T10:19:05.584+0000: 3.315: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:19:05.584+0000: 3.315: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 184320K, used 144604K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 79% used [0x00000000c0000000, 0x00000000c7fd3fb0, 0x00000000ca000000)
from space 20480K, 66% used [0x00000000cb400000, 0x00000000cc163438, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26596K, capacity 26956K, committed 27272K, reserved 1073152K
class space used 3219K, capacity 3328K, committed 3376K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out.2 <==
ulimit -a for secure datanode user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201711151030 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(60902568k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:30:51.116+0000: 1.326: [GC (Allocation Failure) 2017-11-15T10:30:51.116+0000: 1.326: [ParNew: 163840K->13675K(184320K), 0.0099165 secs] 163840K->13675K(1028096K), 0.0099936 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2017-11-15T10:30:53.126+0000: 3.335: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 143623K(1028096K), 0.0094666 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2017-11-15T10:30:53.135+0000: 3.345: [CMS-concurrent-mark-start]
2017-11-15T10:30:53.144+0000: 3.353: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.01 sys=0.01, real=0.01 secs]
2017-11-15T10:30:53.144+0000: 3.353: [CMS-concurrent-preclean-start]
2017-11-15T10:30:53.145+0000: 3.355: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:30:53.145+0000: 3.355: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 184320K, used 145262K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 80% used [0x00000000c0000000, 0x00000000c8080a78, 0x00000000ca000000)
from space 20480K, 66% used [0x00000000cb400000, 0x00000000cc15adb0, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26598K, capacity 26956K, committed 27272K, reserved 1073152K
class space used 3219K, capacity 3328K, committed 3376K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out.1 <==
ulimit -a for secure datanode user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201711151021 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(61788236k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:21:21.831+0000: 1.306: [GC (Allocation Failure) 2017-11-15T10:21:21.831+0000: 1.306: [ParNew: 163840K->13669K(184320K), 0.0111553 secs] 163840K->13669K(1028096K), 0.0112318 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2017-11-15T10:21:23.842+0000: 3.317: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 155282K(1028096K), 0.0111901 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-11-15T10:21:23.854+0000: 3.329: [CMS-concurrent-mark-start]
2017-11-15T10:21:23.861+0000: 3.336: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.01 sys=0.01, real=0.01 secs]
2017-11-15T10:21:23.861+0000: 3.336: [CMS-concurrent-preclean-start]
2017-11-15T10:21:23.863+0000: 3.338: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:21:23.863+0000: 3.338: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 184320K, used 156921K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 87% used [0x00000000c0000000, 0x00000000c8be4c88, 0x00000000ca000000)
from space 20480K, 66% used [0x00000000cb400000, 0x00000000cc1597b8, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 27715K, capacity 28012K, committed 28268K, reserved 1075200K
class space used 3340K, capacity 3424K, committed 3504K, reserved 1048576K
==> /var/log/hadoop/hdfs/jsvc.out <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out.4 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.log <==
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2671)
at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: javax.security.auth.login.LoginException: Pre-authentication information was invalid (24)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1088)
... 10 more
Caused by: KrbException: Pre-authentication information was invalid (24)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:76)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:316)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:361)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:776)
... 23 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.ASRep.init(ASRep.java:64)
at sun.security.krb5.internal.ASRep.<init>(ASRep.java:59)
at sun.security.krb5.KrbAsRep.<init>(KrbAsRep.java:60)
... 26 more
2017-11-15 16:09:34,310 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2017-11-15 16:09:34,312 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ip-192-168-0-37.eu-west-1.compute.internal/192.168.0.37
************************************************************/
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out <==
ulimit -a for secure datanode user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201711151025 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(61757988k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:25:43.400+0000: 1.336: [GC (Allocation Failure) 2017-11-15T10:25:43.400+0000: 1.336: [ParNew: 163840K->13681K(184320K), 0.0131699 secs] 163840K->13681K(1028096K), 0.0132582 secs] [Times: user=0.05 sys=0.01, real=0.02 secs]
2017-11-15T10:25:45.413+0000: 3.348: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 142884K(1028096K), 0.0126406 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
2017-11-15T10:25:45.425+0000: 3.361: [CMS-concurrent-mark-start]
2017-11-15T10:25:45.433+0000: 3.369: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2017-11-15T10:25:45.433+0000: 3.369: [CMS-concurrent-preclean-start]
2017-11-15T10:25:45.435+0000: 3.370: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:25:45.435+0000: 3.370: [CMS-concurrent-abortable-preclean-start]
Heap
par new generation total 184320K, used 147472K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 81% used [0x00000000c0000000, 0x00000000c82a78b8, 0x00000000ca000000)
from space 20480K, 66% used [0x00000000cb400000, 0x00000000cc15c770, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26588K, capacity 26892K, committed 27272K, reserved 1073152K
class space used 3218K, capacity 3328K, committed 3376K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201711151015 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(62723240k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:15:34.350+0000: 1.611: [GC (Allocation Failure) 2017-11-15T10:15:34.350+0000: 1.611: [ParNew: 163840K->13715K(184320K), 0.0101096 secs] 163840K->13715K(1028096K), 0.0101994 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2017-11-15T10:15:36.360+0000: 3.621: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 132237K(1028096K), 0.0085585 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2017-11-15T10:15:36.369+0000: 3.630: [CMS-concurrent-mark-start]
2017-11-15T10:15:36.377+0000: 3.638: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.00 sys=0.01, real=0.00 secs]
2017-11-15T10:15:36.377+0000: 3.638: [CMS-concurrent-preclean-start]
2017-11-15T10:15:36.378+0000: 3.639: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-11-15T10:15:36.378+0000: 3.639: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2017-11-15T10:15:41.402+0000: 8.664: [CMS-concurrent-abortable-preclean: 1.359/5.024 secs] [Times: user=1.55 sys=0.00, real=5.02 secs]
2017-11-15T10:15:41.403+0000: 8.664: [GC (CMS Final Remark) [YG occupancy: 142997 K (184320 K)]2017-11-15T10:15:41.403+0000: 8.664: [Rescan (parallel) , 0.0089383 secs]2017-11-15T10:15:41.412+0000: 8.673: [weak refs processing, 0.0000171 secs]2017-11-15T10:15:41.412+0000: 8.673: [class unloading, 0.0031303 secs]2017-11-15T10:15:41.415+0000: 8.676: [scrub symbol table, 0.0022981 secs]2017-11-15T10:15:41.417+0000: 8.678: [scrub string table, 0.0004424 secs][1 CMS-remark: 0K(843776K)] 142997K(1028096K), 0.0153866 secs] [Times: user=0.04 sys=0.00, real=0.02 secs]
2017-11-15T10:15:41.418+0000: 8.679: [CMS-concurrent-sweep-start]
2017-11-15T10:15:41.418+0000: 8.679: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:15:41.418+0000: 8.679: [CMS-concurrent-reset-start]
2017-11-15T10:15:41.420+0000: 8.681: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
par new generation total 184320K, used 144635K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 79% used [0x00000000c0000000, 0x00000000c7fda088, 0x00000000ca000000)
from space 20480K, 66% used [0x00000000cb400000, 0x00000000cc164e00, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 0K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 26591K, capacity 26892K, committed 27272K, reserved 1073152K
class space used 3221K, capacity 3328K, committed 3376K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201711151057 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 66093268k(60441804k free), swap 8388604k(8388604k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2017-11-15T10:57:48.921+0000: 1.284: [GC (Allocation Failure) 2017-11-15T10:57:48.921+0000: 1.284: [ParNew: 163840K->13679K(184320K), 0.0097478 secs] 163840K->13679K(1028096K), 0.0098245 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2017-11-15T10:57:50.930+0000: 3.293: [GC (CMS Initial Mark) [1 CMS-initial-mark: 0K(843776K)] 175294K(1028096K), 0.0115010 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2017-11-15T10:57:50.942+0000: 3.304: [CMS-concurrent-mark-start]
2017-11-15T10:57:50.950+0000: 3.312: [CMS-concurrent-mark: 0.008/0.008 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-11-15T10:57:50.950+0000: 3.312: [CMS-concurrent-preclean-start]
2017-11-15T10:57:50.951+0000: 3.314: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:57:50.951+0000: 3.314: [CMS-concurrent-abortable-preclean-start]
2017-11-15T10:57:53.891+0000: 6.254: [GC (Allocation Failure) 2017-11-15T10:57:53.891+0000: 6.254: [ParNew: 177519K->16269K(184320K), 0.0253375 secs] 177519K->20980K(1028096K), 0.0254111 secs] [Times: user=0.07 sys=0.00, real=0.03 secs]
CMS: abort preclean due to time 2017-11-15T10:57:55.993+0000: 8.356: [CMS-concurrent-abortable-preclean: 1.291/5.042 secs] [Times: user=1.51 sys=0.02, real=5.05 secs]
2017-11-15T10:57:55.994+0000: 8.356: [GC (CMS Final Remark) [YG occupancy: 28286 K (184320 K)]2017-11-15T10:57:55.995+0000: 8.358: [Rescan (parallel) , 0.0019891 secs]2017-11-15T10:57:55.997+0000: 8.360: [weak refs processing, 0.0000165 secs]2017-11-15T10:57:55.997+0000: 8.360: [class unloading, 0.0034043 secs]2017-11-15T10:57:56.000+0000: 8.363: [scrub symbol table, 0.0027747 secs]2017-11-15T10:57:56.003+0000: 8.366: [scrub string table, 0.0006049 secs][1 CMS-remark: 4710K(843776K)] 32996K(1028096K), 0.0107307 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-11-15T10:57:56.004+0000: 8.367: [CMS-concurrent-sweep-start]
2017-11-15T10:57:56.007+0000: 8.369: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2017-11-15T10:57:56.007+0000: 8.369: [CMS-concurrent-reset-start]
2017-11-15T10:57:56.009+0000: 8.371: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-11-15T11:13:09.456+0000: 921.818: [GC (Allocation Failure) 2017-11-15T11:13:09.456+0000: 921.818: [ParNew: 180109K->11327K(184320K), 0.0103137 secs] 184820K->23879K(1028096K), 0.0103927 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2017-11-15T11:55:19.882+0000: 3452.245: [GC (Allocation Failure) 2017-11-15T11:55:19.882+0000: 3452.245: [ParNew: 175167K->3596K(184320K), 0.0048509 secs] 187719K->16148K(1028096K), 0.0049374 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2017-11-15T12:38:23.471+0000: 6035.834: [GC (Allocation Failure) 2017-11-15T12:38:23.471+0000: 6035.834: [ParNew: 167436K->3474K(184320K), 0.0050320 secs] 179988K->16026K(1028096K), 0.0051199 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-11-15T13:18:41.111+0000: 8453.474: [GC (Allocation Failure) 2017-11-15T13:18:41.111+0000: 8453.474: [ParNew: 167314K->5373K(184320K), 0.0053833 secs] 179866K->17925K(1028096K), 0.0054606 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2017-11-15T14:05:18.787+0000: 11251.149: [GC (Allocation Failure) 2017-11-15T14:05:18.787+0000: 11251.149: [ParNew: 169213K->4612K(184320K), 0.0050841 secs] 181765K->17164K(1028096K), 0.0051535 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
Heap
par new generation total 184320K, used 112248K [0x00000000c0000000, 0x00000000cc800000, 0x00000000cc800000)
eden space 163840K, 65% used [0x00000000c0000000, 0x00000000c691cfc8, 0x00000000ca000000)
from space 20480K, 22% used [0x00000000cb400000, 0x00000000cb881180, 0x00000000cc800000)
to space 20480K, 0% used [0x00000000ca000000, 0x00000000ca000000, 0x00000000cb400000)
concurrent mark-sweep generation total 843776K, used 12552K [0x00000000cc800000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 36546K, capacity 36972K, committed 37268K, reserved 1083392K
class space used 4028K, capacity 4135K, committed 4244K, reserved 1048576K
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
2017-11-15 13:00:47,701 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
2017-11-15 13:02:47,688 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for yarn (auth:SIMPLE)
==> /var/log/hadoop/hdfs/jsvc.err <==
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1
Initializing secure datanode resources
Opened streaming server at /0.0.0.0:1019
Successfully obtained privileged resources (streaming port = ServerSocket[addr=/0.0.0.0,localport=1019] ) (http listener port = 1022)
Opened info server at /0.0.0.0:1022
Starting regular datanode initialization
Service exit with a return value of 1
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-192-168-0-37.eu-west-1.compute.internal.out.5 <==
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257533
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Command failed after 1 tries
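For what it's worth, the stack trace above dies inside the Kerberos login (UserGroupInformation.loginUserFromKeytab), so the ulimit and jsvc output is incidental. A minimal sketch to reproduce the login outside Hadoop, assuming HDP's default DataNode keytab path and principal naming on this host (adjust if your Kerberos descriptor differs):
# List the key versions and enctypes Ambari wrote into the keytab:
klist -kte /etc/security/keytabs/dn.service.keytab
# Attempt the same login the secure DataNode performs at startup:
kinit -kt /etc/security/keytabs/dn.service.keytab dn/ip-192-168-0-37.eu-west-1.compute.internal@TECHNIPFMC.COM
If kinit reproduces KrbException 906 ("Identifier doesn't match expected value"), one commonly suggested cause with AD-backed KDCs is a KDC reply truncated or mangled over UDP; adding udp_preference_limit = 1 under [libdefaults] in /etc/krb5.conf forces TCP and is a safe experiment.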
11-14-2017
06:22 AM
Hi Saumil, after running this command I am getting the error below:
/usr/bin/kinit -kt /etc/security/keytabs/rm.service.keytab rm/ip-192-168-0-50.eu-west-1.compute.internal@TECHNIPFMC.COM;
kinit: Preauthentication failed while getting initial credentials
I ran it as both the root user and the yarn user; both fail with the same error.
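A hedged next check (a sketch, not a confirmed fix): with a keytab, "Preauthentication failed" usually means the key in the keytab no longer matches the account on the KDC side, e.g. a stale key version after keys were regenerated in AD, or an enctype the account does not allow. The key versions and enctypes actually stored in the keytab can be inspected with:
# Show KVNOs and encryption types for each principal in the keytab:
klist -kte /etc/security/keytabs/rm.service.keytab
If those look stale, regenerating the keytabs from Ambari (Admin > Kerberos > Regenerate Keytabs) and re-running the kinit above is the usual first step.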
11-13-2017
06:31 PM
Hi All, after enabling Kerberos the YARN service is not able to start.
[root@ip-192-168-0-50 keytabs]# ll /etc/security/keytabs
total 56
-r--------. 1 root root 588 Nov 13 13:50 activity-explorer.headless.keytab
-r--r-----. 1 hbase hadoop 383 Nov 13 13:50 hbase.headless.keytab
-r--------. 1 hbase hadoop 528 Nov 13 13:50 hbase.service.keytab
-r--------. 1 hdfs hadoop 378 Nov 13 13:50 hdfs.headless.keytab
-r--------. 1 hue hue 518 Nov 13 13:50 hue.service.keytab
-rw-r-----. 1 ambari-qa hadoop 383 Nov 11 11:51 kerberos.service_check.111117.keytab
-r--------. 1 storm hadoop 533 Nov 13 13:50 nimbus.service.keytab
-r--------. 1 hdfs hadoop 513 Nov 13 13:50 nn.service.keytab
-r--------. 1 yarn hadoop 513 Nov 13 13:50 rm.service.keytab
-r--r-----. 1 ambari-qa hadoop 403 Nov 13 13:50 smokeuser.headless.keytab
-r--------. 1 spark hadoop 383 Nov 13 13:50 spark.headless.keytab
-r--r-----. 1 root hadoop 523 Nov 13 13:50 spnego.service.keytab
-r--------. 1 storm hadoop 383 Nov 13 13:50 storm.headless.keytab
-r--------. 1 zookeeper hadoop 548 Nov 13 13:50 zk.service.keytab
[root@ip-192-168-0-50 keytabs]#
It is failing with the error below:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 304, in <module>
Resourcemanager().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 124, in start
self.wait_for_dfs_directories_created(params.entity_groupfs_store_dir, params.entity_groupfs_active_dir)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 254, in wait_for_dfs_directories_created
user=params.yarn_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/rm.service.keytab rm/ip-192-168-0-50.eu-west-1.compute.internal@TECHNIPFMC.COM;' returned 1. kinit: Preauthentication failed while getting initial credentials
2017-11-13 18:23:25,071 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-13 18:23:25,181 - Stack Feature Version Info: stack_version=2.5, version=2.5.5.0-157, current_cluster_version=2.5.5.0-157 -> 2.5.5.0-157
2017-11-13 18:23:25,182 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-13 18:23:25,183 - Group['livy'] {}
2017-11-13 18:23:25,191 - Group['spark'] {}
2017-11-13 18:23:25,191 - Group['hue'] {}
2017-11-13 18:23:25,191 - Group['hadoop'] {}
2017-11-13 18:23:25,192 - Group['users'] {}
2017-11-13 18:23:25,192 - Group['knox'] {}
2017-11-13 18:23:25,192 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,193 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,194 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,194 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,195 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-13 18:23:25,195 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,196 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,197 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-13 18:23:25,197 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,198 - User['hue'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,198 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,199 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,200 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,200 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,201 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,202 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-13 18:23:25,202 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-13 18:23:25,204 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-13 18:23:25,208 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-11-13 18:23:25,208 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-13 18:23:25,211 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-13 18:23:25,212 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-13 18:23:25,216 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-11-13 18:23:25,216 - Group['hdfs'] {}
2017-11-13 18:23:25,216 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-11-13 18:23:25,217 - FS Type:
2017-11-13 18:23:25,217 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-13 18:23:25,230 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2017-11-13 18:23:25,230 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-13 18:23:25,242 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-13 18:23:25,254 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-13 18:23:25,255 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-13 18:23:25,256 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-13 18:23:25,259 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2017-11-13 18:23:25,261 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2017-11-13 18:23:25,267 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-13 18:23:25,281 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-13 18:23:25,282 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-13 18:23:25,283 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-13 18:23:25,286 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-11-13 18:23:25,289 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-13 18:23:25,292 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
2017-11-13 18:23:25,293 - Execute['/usr/jdk64/jdk1.8.0_112/bin/java -jar /var/lib/ambari-agent/tools/jcepolicyinfo.jar -tu'] {'logoutput': True, 'environment': {'JAVA_HOME': '/usr/jdk64/jdk1.8.0_112'}}
Unlimited Key JCE Policy: true
2017-11-13 18:23:25,473 - The unlimited key JCE policy is required, and appears to have been installed.
2017-11-13 18:23:25,685 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-13 18:23:25,686 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-11-13 18:23:25,705 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.5.0-157')
2017-11-13 18:23:25,706 - Stack Feature Version Info: stack_version=2.5, version=2.5.5.0-157, current_cluster_version=2.5.5.0-157 -> 2.5.5.0-157
2017-11-13 18:23:25,708 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-13 18:23:25,716 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-13 18:23:25,718 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:25,718 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:25,718 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-13 18:23:25,719 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:25,719 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:25,720 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:25,720 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-13 18:23:25,721 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-11-13 18:23:25,721 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-13 18:23:25,728 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-13 18:23:25,729 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-13 18:23:25,755 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-13 18:23:25,762 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-13 18:23:25,762 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-13 18:23:25,806 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-13 18:23:25,813 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-13 18:23:25,813 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-13 18:23:25,848 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 513 to yarn
2017-11-13 18:23:25,848 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-13 18:23:25,855 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-11-13 18:23:25,855 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-13 18:23:25,946 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-13 18:23:25,953 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-13 18:23:25,953 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-13 18:23:25,965 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 510 to yarn
2017-11-13 18:23:25,965 - Directory['/etc/hadoop/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-13 18:23:25,965 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'}
2017-11-13 18:23:25,966 - File['/var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log'] {'owner': 'yarn', 'group': 'hadoop'}
2017-11-13 18:23:25,968 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
2017-11-13 18:23:25,970 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
2017-11-13 18:23:25,974 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
2017-11-13 18:23:25,975 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] because contents don't match
2017-11-13 18:23:25,975 - File['/usr/hdp/current/hadoop-yarn-resourcemanager/bin/container-executor'] {'group': 'hadoop', 'mode': 06050}
2017-11-13 18:23:25,977 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg'] {'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644}
2017-11-13 18:23:25,977 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-13 18:23:25,979 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'mode': 0755}
2017-11-13 18:23:25,980 - File['/usr/hdp/current/hadoop-client/sbin/task-controller'] {'owner': 'root', 'group': 'hadoop', 'mode': 06050}
2017-11-13 18:23:25,982 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'root', 'group': 'hadoop', 'mode': 0644}
2017-11-13 18:23:25,983 - File['/usr/hdp/current/hadoop-client/conf/yarn_jaas.conf'] {'content': Template('yarn_jaas.conf.j2'), 'owner': 'yarn', 'group': 'hadoop'}
2017-11-13 18:23:25,984 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-13 18:23:25,990 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-13 18:23:25,991 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-13 18:23:26,025 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 512 to mapred
2017-11-13 18:23:26,025 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-13 18:23:26,032 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-13 18:23:26,032 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-13 18:23:26,044 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 512 to hdfs
2017-11-13 18:23:26,044 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-13 18:23:26,051 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-11-13 18:23:26,051 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-13 18:23:26,056 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-13 18:23:26,057 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-11-13 18:23:26,063 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-11-13 18:23:26,064 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-13 18:23:26,069 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-11-13 18:23:26,075 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-11-13 18:23:26,076 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-11-13 18:23:26,081 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2017-11-13 18:23:26,082 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2017-11-13 18:23:26,082 - Verifying DFS directories where ATS stores time line data for active and completed applications.
2017-11-13 18:23:26,082 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/rm.service.keytab rm/ip-192-168-0-50.eu-west-1.compute.internal@TECHNIPFMC.COM;'] {'user': 'yarn'}
Command failed after 1 tries
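To see exactly why the KDC rejects this request, the failing step can be re-run by hand with client-side Kerberos tracing enabled (a sketch, assuming MIT krb5 1.9 or later, which honours KRB5_TRACE):
# Reproduce the exact command Ambari runs, as the yarn user, with tracing:
sudo -u yarn env KRB5_TRACE=/dev/stdout /usr/bin/kinit -kt /etc/security/keytabs/rm.service.keytab rm/ip-192-168-0-50.eu-west-1.compute.internal@TECHNIPFMC.COM
The trace prints each AS exchange, the enctypes offered, and the KDC's reply, which narrows the failure down to the key itself, the enctype list, or the transport.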
11-09-2017
08:49 AM
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 260, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 60, in start
create_metastore_schema()
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 371, in create_metastore_schema
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.5.0-157/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.5.0-157/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://ip-192-168-0-42.eu-west-1.compute.internal/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Connecting to jdbc:mysql://ip-192-168-0-42.eu-west-1.compute.internal/hive?createDatabaseIfNotExist=true
Connected to: MySQL (version 5.1.73)
Driver: MySQL-AB JDBC Driver (version mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ))
Transaction isolation: TRANSACTION_READ_COMMITTED
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> !autocommit on
Autocommit status: true
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */
No rows affected (0.003 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET NAMES utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40103 SET TIME_ZONE='+00:00' */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `BUCKETING_COLS` ( `SD_ID` bigint(20) NOT NULL, `BUCKET_COL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `INTEGER_IDX` int(11) NOT NULL, PRIMARY KEY (`SD_ID`,`INTEGER_IDX`), KEY `BUCKETING_COLS_N49` (`SD_ID`), CONSTRAINT `BUCKETING_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `CDS` ( `CD_ID` bigint(20) NOT NULL, PRIMARY KEY (`CD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `COLUMNS_V2` ( `CD_ID` bigint(20) NOT NULL, `COMMENT` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `COLUMN_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TYPE_NAME` varchar(4000) DEFAULT NULL, `INTEGER_IDX` int(11) NOT NULL, PRIMARY KEY (`CD_ID`,`COLUMN_NAME`), KEY `COLUMNS_V2_N49` (`CD_ID`), CONSTRAINT `COLUMNS_V2_FK1` FOREIGN KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `DATABASE_PARAMS` ( `DB_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(180) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`DB_ID`,`PARAM_KEY`), KEY `DATABASE_PARAMS_N49` (`DB_ID`), CONSTRAINT `DATABASE_PARAMS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `DBS` ( `DB_ID` bigint(20) NOT NULL, `DESC` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `DB_LOCATION_URI` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `OWNER_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `OWNER_TYPE` varchar(10) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`DB_ID`), UNIQUE KEY `UNIQUE_DATABASE` (`NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `DB_PRIVS` ( `DB_GRANT_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `DB_ID` bigint(20) DEFAULT NULL, `GRANT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `DB_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`DB_GRANT_ID`), UNIQUE KEY `DBPRIVILEGEINDEX` (`DB_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`DB_PRIV`,`GRANTOR`,`GRANTOR_TYPE`), KEY `DB_PRIVS_N49` (`DB_ID`), CONSTRAINT `DB_PRIVS_FK1` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client = @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client = utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `GLOBAL_PRIVS` ( `USER_GRANT_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `GRANT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER SET lati
n1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_NAME` varchar(128) CHARACTER SET
latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_TYPE` varchar(128) CHARACTER
SET latin1 COLLATE latin1_bin DEFAULT NULL, `USER_PRIV` varchar(128) CHARACTER S
ET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`USER_GRANT_ID`), UNIQUE
KEY `GLOBALPRIVILEGEINDEX` (`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`USER_PRIV`,`GRAN
TOR`,`GRANTOR_TYPE`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `IDXS`
( `INDEX_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `DEFERRED_REB
UILD` bit(1) NOT NULL, `INDEX_HANDLER_CLASS` varchar(4000) CHARACTER SET latin1
COLLATE latin1_bin DEFAULT NULL, `INDEX_NAME` varchar(128) CHARACTER SET latin1
COLLATE latin1_bin DEFAULT NULL, `INDEX_TBL_ID` bigint(20) DEFAULT NULL, `LAST_A
CCESS_TIME` int(11) NOT NULL, `ORIG_TBL_ID` bigint(20) DEFAULT NULL, `SD_ID` big
int(20) DEFAULT NULL, PRIMARY KEY (`INDEX_ID`), UNIQUE KEY `UNIQUEINDEX` (`INDEX
_NAME`,`ORIG_TBL_ID`), KEY `IDXS_N51` (`SD_ID`), KEY `IDXS_N50` (`INDEX_TBL_ID`)
, KEY `IDXS_N49` (`ORIG_TBL_ID`), CONSTRAINT `IDXS_FK1` FOREIGN KEY (`ORIG_TBL_I
D`) REFERENCES `TBLS` (`TBL_ID`), CONSTRAINT `IDXS_FK2` FOREIGN KEY (`SD_ID`) RE
FERENCES `SDS` (`SD_ID`), CONSTRAINT `IDXS_FK3` FOREIGN KEY (`INDEX_TBL_ID`) REF
ERENCES `TBLS` (`TBL_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `INDEX
_PARAMS` ( `INDEX_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(256) CHARACTER SE
T latin1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER SET
latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`INDEX_ID`,`PARAM_KEY`), KE
Y `INDEX_PARAMS_N49` (`INDEX_ID`), CONSTRAINT `INDEX_PARAMS_FK1` FOREIGN KEY (`I
NDEX_ID`) REFERENCES `IDXS` (`INDEX_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `NUCLE
US_TABLES` ( `CLASS_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin N
OT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT N
ULL, `TYPE` varchar(4) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `OWNER`
varchar(2) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `VERSION` varchar(
20) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `INTERFACE_NAME` varchar(2
55) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`CLASS_NA
ME`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PARTI
TIONS` ( `PART_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `LAST_AC
CESS_TIME` int(11) NOT NULL, `PART_NAME` varchar(767) CHARACTER SET latin1 COLLA
TE latin1_bin DEFAULT NULL, `SD_ID` bigint(20) DEFAULT NULL, `TBL_ID` bigint(20)
DEFAULT NULL, PRIMARY KEY (`PART_ID`), UNIQUE KEY `UNIQUEPARTITION` (`PART_NAME
`,`TBL_ID`), KEY `PARTITIONS_N49` (`TBL_ID`), KEY `PARTITIONS_N50` (`SD_ID`), CO
NSTRAINT `PARTITIONS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`), C
ONSTRAINT `PARTITIONS_FK2` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`) ) EN
GINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PARTI
TION_EVENTS` ( `PART_NAME_ID` bigint(20) NOT NULL, `DB_NAME` varchar(128) CHARAC
TER SET latin1 COLLATE latin1_bin DEFAULT NULL, `EVENT_TIME` bigint(20) NOT NULL
, `EVENT_TYPE` int(11) NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET lat
in1 COLLATE latin1_bin DEFAULT NULL, `TBL_NAME` varchar(128) CHARACTER SET latin
1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`PART_NAME_ID`), KEY `PARTITIONE
VENTINDEX` (`PARTITION_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PARTI
TION_KEYS` ( `TBL_ID` bigint(20) NOT NULL, `PKEY_COMMENT` varchar(4000) CHARACTE
R SET latin1 COLLATE latin1_bin DEFAULT NULL, `PKEY_NAME` varchar(128) CHARACTER
SET latin1 COLLATE latin1_bin NOT NULL, `PKEY_TYPE` varchar(767) CHARACTER SET
latin1 COLLATE latin1_bin NOT NULL, `INTEGER_IDX` int(11) NOT NULL, PRIMARY KEY
(`TBL_ID`,`PKEY_NAME`), KEY `PARTITION_KEYS_N49` (`TBL_ID`), CONSTRAINT `PARTITI
ON_KEYS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`) ) ENGINE=InnoDB
DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PARTI
TION_KEY_VALS` ( `PART_ID` bigint(20) NOT NULL, `PART_KEY_VAL` varchar(256) CHAR
ACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `INTEGER_IDX` int(11) NOT NULL
, PRIMARY KEY (`PART_ID`,`INTEGER_IDX`), KEY `PARTITION_KEY_VALS_N49` (`PART_ID`
), CONSTRAINT `PARTITION_KEY_VALS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTI
TIONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PARTI
TION_PARAMS` ( `PART_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(256) CHARACTER
SET latin1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER S
ET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`PART_ID`,`PARAM_KEY`),
KEY `PARTITION_PARAMS_N49` (`PART_ID`), CONSTRAINT `PARTITION_PARAMS_FK1` FOREIG
N KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CH
ARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PART_
COL_PRIVS` ( `PART_COLUMN_GRANT_ID` bigint(20) NOT NULL, `COLUMN_NAME` varchar(1
000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `CREATE_TIME` int(11)
NOT NULL, `GRANT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER
SET latin1 COLLATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACT
ER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PART_ID` bigint(20) DEFAULT NULL
, `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT
NULL, `PRINCIPAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFA
ULT NULL, `PART_COL_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin D
EFAULT NULL, PRIMARY KEY (`PART_COLUMN_GRANT_ID`), KEY `PART_COL_PRIVS_N49` (`PA
RT_ID`), KEY `PARTITIONCOLUMNPRIVILEGEINDEX` (`PART_ID`,`COLUMN_NAME`,`PRINCIPAL
_NAME`,`PRINCIPAL_TYPE`,`PART_COL_PRIV`,`GRANTOR`,`GRANTOR_TYPE`), CONSTRAINT `P
ART_COL_PRIVS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`) )
ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PART_
PRIVS` ( `PART_GRANT_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `G
RANT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER SET latin1 C
OLLATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1
COLLATE latin1_bin DEFAULT NULL, `PART_ID` bigint(20) DEFAULT NULL, `PRINCIPAL_
NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCI
PAL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PA
RT_PRIV` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIM
ARY KEY (`PART_GRANT_ID`), KEY `PARTPRIVILEGEINDEX` (`PART_ID`,`PRINCIPAL_NAME`,
`PRINCIPAL_TYPE`,`PART_PRIV`,`GRANTOR`,`GRANTOR_TYPE`), KEY `PART_PRIVS_N49` (`P
ART_ID`), CONSTRAINT `PART_PRIVS_FK1` FOREIGN KEY (`PART_ID`) REFERENCES `PARTIT
IONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `ROLES
` ( `ROLE_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `OWNER_NAME`
varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `ROLE_NAME` v
archar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`
ROLE_ID`), UNIQUE KEY `ROLEENTITYINDEX` (`ROLE_NAME`) ) ENGINE=InnoDB DEFAULT CH
ARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `ROLE_
MAP` ( `ROLE_GRANT_ID` bigint(20) NOT NULL, `ADD_TIME` int(11) NOT NULL, `GRANT_
OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER SET latin1 COLLAT
E latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 COLL
ATE latin1_bin DEFAULT NULL, `PRINCIPAL_NAME` varchar(128) CHARACTER SET latin1
COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_TYPE` varchar(128) CHARACTER SET lat
in1 COLLATE latin1_bin DEFAULT NULL, `ROLE_ID` bigint(20) DEFAULT NULL, PRIMARY
KEY (`ROLE_GRANT_ID`), UNIQUE KEY `USERROLEMAPINDEX` (`PRINCIPAL_NAME`,`ROLE_ID`
,`GRANTOR`,`GRANTOR_TYPE`), KEY `ROLE_MAP_N49` (`ROLE_ID`), CONSTRAINT `ROLE_MAP
_FK1` FOREIGN KEY (`ROLE_ID`) REFERENCES `ROLES` (`ROLE_ID`) ) ENGINE=InnoDB DEF
AULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SDS`
( `SD_ID` bigint(20) NOT NULL, `CD_ID` bigint(20) DEFAULT NULL, `INPUT_FORMAT` v
archar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `IS_COMPRESSE
D` bit(1) NOT NULL, `IS_STOREDASSUBDIRECTORIES` bit(1) NOT NULL, `LOCATION` varc
har(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `NUM_BUCKETS` in
t(11) NOT NULL, `OUTPUT_FORMAT` varchar(4000) CHARACTER SET latin1 COLLATE latin
1_bin DEFAULT NULL, `SERDE_ID` bigint(20) DEFAULT NULL, PRIMARY KEY (`SD_ID`), K
EY `SDS_N49` (`SERDE_ID`), KEY `SDS_N50` (`CD_ID`), CONSTRAINT `SDS_FK1` FOREIGN
KEY (`SERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`), CONSTRAINT `SDS_FK2` FOREIGN
KEY (`CD_ID`) REFERENCES `CDS` (`CD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SD_PA
RAMS` ( `SD_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(256) CHARACTER SET lati
n1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER SET latin1
COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`SD_ID`,`PARAM_KEY`), KEY `SD_PAR
AMS_N49` (`SD_ID`), CONSTRAINT `SD_PARAMS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES
`SDS` (`SD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SEQUE
NCE_TABLE` ( `SEQUENCE_NAME` varchar(255) CHARACTER SET latin1 COLLATE latin1_bi
n NOT NULL, `NEXT_VAL` bigint(20) NOT NULL, PRIMARY KEY (`SEQUENCE_NAME`) ) ENGI
NE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SERDE
S` ( `SERDE_ID` bigint(20) NOT NULL, `NAME` varchar(128) CHARACTER SET latin1 CO
LLATE latin1_bin DEFAULT NULL, `SLIB` varchar(4000) CHARACTER SET latin1 COLLATE
latin1_bin DEFAULT NULL, PRIMARY KEY (`SERDE_ID`) ) ENGINE=InnoDB DEFAULT CHARS
ET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SERDE
_PARAMS` ( `SERDE_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(256) CHARACTER SE
T latin1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER SET
latin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`SERDE_ID`,`PARAM_KEY`), KE
Y `SERDE_PARAMS_N49` (`SERDE_ID`), CONSTRAINT `SERDE_PARAMS_FK1` FOREIGN KEY (`S
ERDE_ID`) REFERENCES `SERDES` (`SERDE_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin
1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SKEWE
D_COL_NAMES` ( `SD_ID` bigint(20) NOT NULL, `SKEWED_COL_NAME` varchar(256) CHARA
CTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `INTEGER_IDX` int(11) NOT NULL,
PRIMARY KEY (`SD_ID`,`INTEGER_IDX`), KEY `SKEWED_COL_NAMES_N49` (`SD_ID`), CONS
TRAINT `SKEWED_COL_NAMES_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`) )
ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SKEWE
D_COL_VALUE_LOC_MAP` ( `SD_ID` bigint(20) NOT NULL, `STRING_LIST_ID_KID` bigint(
20) NOT NULL, `LOCATION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin D
EFAULT NULL, PRIMARY KEY (`SD_ID`,`STRING_LIST_ID_KID`), KEY `SKEWED_COL_VALUE_L
OC_MAP_N49` (`STRING_LIST_ID_KID`), KEY `SKEWED_COL_VALUE_LOC_MAP_N50` (`SD_ID`)
, CONSTRAINT `SKEWED_COL_VALUE_LOC_MAP_FK2` FOREIGN KEY (`STRING_LIST_ID_KID`) R
EFERENCES `SKEWED_STRING_LIST` (`STRING_LIST_ID`), CONSTRAINT `SKEWED_COL_VALUE_
LOC_MAP_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`) ) ENGINE=InnoDB DE
FAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SKEWE
D_STRING_LIST` ( `STRING_LIST_ID` bigint(20) NOT NULL, PRIMARY KEY (`STRING_LIST
_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SKEWE
D_STRING_LIST_VALUES` ( `STRING_LIST_ID` bigint(20) NOT NULL, `STRING_LIST_VALUE
` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `INTEGER_ID
X` int(11) NOT NULL, PRIMARY KEY (`STRING_LIST_ID`,`INTEGER_IDX`), KEY `SKEWED_S
TRING_LIST_VALUES_N49` (`STRING_LIST_ID`), CONSTRAINT `SKEWED_STRING_LIST_VALUES
_FK1` FOREIGN KEY (`STRING_LIST_ID`) REFERENCES `SKEWED_STRING_LIST` (`STRING_LI
ST_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SKEWE
D_VALUES` ( `SD_ID_OID` bigint(20) NOT NULL, `STRING_LIST_ID_EID` bigint(20) NOT
NULL, `INTEGER_IDX` int(11) NOT NULL, PRIMARY KEY (`SD_ID_OID`,`INTEGER_IDX`),
KEY `SKEWED_VALUES_N50` (`SD_ID_OID`), KEY `SKEWED_VALUES_N49` (`STRING_LIST_ID_
EID`), CONSTRAINT `SKEWED_VALUES_FK2` FOREIGN KEY (`STRING_LIST_ID_EID`) REFEREN
CES `SKEWED_STRING_LIST` (`STRING_LIST_ID`), CONSTRAINT `SKEWED_VALUES_FK1` FORE
IGN KEY (`SD_ID_OID`) REFERENCES `SDS` (`SD_ID`) ) ENGINE=InnoDB DEFAULT CHARSET
=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `SORT_
COLS` ( `SD_ID` bigint(20) NOT NULL, `COLUMN_NAME` varchar(1000) CHARACTER SET l
atin1 COLLATE latin1_bin DEFAULT NULL, `ORDER` int(11) NOT NULL, `INTEGER_IDX` i
nt(11) NOT NULL, PRIMARY KEY (`SD_ID`,`INTEGER_IDX`), KEY `SORT_COLS_N49` (`SD_I
D`), CONSTRAINT `SORT_COLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `TABLE
_PARAMS` ( `TBL_ID` bigint(20) NOT NULL, `PARAM_KEY` varchar(256) CHARACTER SET
latin1 COLLATE latin1_bin NOT NULL, `PARAM_VALUE` varchar(4000) CHARACTER SET la
tin1 COLLATE latin1_bin DEFAULT NULL, PRIMARY KEY (`TBL_ID`,`PARAM_KEY`), KEY `T
ABLE_PARAMS_N49` (`TBL_ID`), CONSTRAINT `TABLE_PARAMS_FK1` FOREIGN KEY (`TBL_ID`
) REFERENCES `TBLS` (`TBL_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `TBLS`
( `TBL_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `DB_ID` bigint(
20) DEFAULT NULL, `LAST_ACCESS_TIME` int(11) NOT NULL, `OWNER` varchar(767) CHAR
ACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `RETENTION` int(11) NOT NULL,
`SD_ID` bigint(20) DEFAULT NULL, `TBL_NAME` varchar(128) CHARACTER SET latin1 CO
LLATE latin1_bin DEFAULT NULL, `TBL_TYPE` varchar(128) CHARACTER SET latin1 COLL
ATE latin1_bin DEFAULT NULL, `VIEW_EXPANDED_TEXT` mediumtext, `VIEW_ORIGINAL_TEX
T` mediumtext, PRIMARY KEY (`TBL_ID`), UNIQUE KEY `UNIQUETABLE` (`TBL_NAME`,`DB_
ID`), KEY `TBLS_N50` (`SD_ID`), KEY `TBLS_N49` (`DB_ID`), CONSTRAINT `TBLS_FK1`
FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`), CONSTRAINT `TBLS_FK2` FOREIGN
KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `TBL_C
OL_PRIVS` ( `TBL_COLUMN_GRANT_ID` bigint(20) NOT NULL, `COLUMN_NAME` varchar(100
0) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `CREATE_TIME` int(11) N
OT NULL, `GRANT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER S
ET latin1 COLLATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER
SET latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_NAME` varchar(128) CHARA
CTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_TYPE` varchar(128) C
HARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `TBL_COL_PRIV` varchar(128)
CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `TBL_ID` bigint(20) DEFAU
LT NULL, PRIMARY KEY (`TBL_COLUMN_GRANT_ID`), KEY `TABLECOLUMNPRIVILEGEINDEX` (`
TBL_ID`,`COLUMN_NAME`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`TBL_COL_PRIV`,`GRANTOR`
,`GRANTOR_TYPE`), KEY `TBL_COL_PRIVS_N49` (`TBL_ID`), CONSTRAINT `TBL_COL_PRIVS_
FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`) ) ENGINE=InnoDB DEFAULT
CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET @saved_cs_client
= @@character_set_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= utf8 */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `TBL_P
RIVS` ( `TBL_GRANT_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `GRA
NT_OPTION` smallint(6) NOT NULL, `GRANTOR` varchar(128) CHARACTER SET latin1 COL
LATE latin1_bin DEFAULT NULL, `GRANTOR_TYPE` varchar(128) CHARACTER SET latin1 C
OLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_NAME` varchar(128) CHARACTER SET lati
n1 COLLATE latin1_bin DEFAULT NULL, `PRINCIPAL_TYPE` varchar(128) CHARACTER SET
latin1 COLLATE latin1_bin DEFAULT NULL, `TBL_PRIV` varchar(128) CHARACTER SET la
tin1 COLLATE latin1_bin DEFAULT NULL, `TBL_ID` bigint(20) DEFAULT NULL, PRIMARY
KEY (`TBL_GRANT_ID`), KEY `TBL_PRIVS_N49` (`TBL_ID`), KEY `TABLEPRIVILEGEINDEX`
(`TBL_ID`,`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`TBL_PRIV`,`GRANTOR`,`GRANTOR_TYPE`)
, CONSTRAINT `TBL_PRIVS_FK1` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> /*!40101 SET character_set_client
= @saved_cs_client */
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `TAB_C
OL_STATS` ( `CS_ID` bigint(20) NOT NULL, `DB_NAME` varchar(128) CHARACTER SET la
tin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1
COLLATE latin1_bin NOT NULL, `COLUMN_NAME` varchar(1000) CHARACTER SET latin1 C
OLLATE latin1_bin NOT NULL, `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLL
ATE latin1_bin NOT NULL, `TBL_ID` bigint(20) NOT NULL, `LONG_LOW_VALUE` bigint(2
0), `LONG_HIGH_VALUE` bigint(20), `DOUBLE_HIGH_VALUE` double(53,4), `DOUBLE_LOW_
VALUE` double(53,4), `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1
COLLATE latin1_bin, `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1
COLLATE latin1_bin, `NUM_NULLS` bigint(20) NOT NULL, `NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4), `MAX_COL_LEN` bigint(20), `NUM_TRUES` bigint(20), `
NUM_FALSES` bigint(20), `LAST_ANALYZED` bigint(20) NOT NULL, PRIMARY KEY (`CS_ID
`), CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL
_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.001 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE TABLE IF NOT EXISTS `PART_
COL_STATS` ( `CS_ID` bigint(20) NOT NULL, `DB_NAME` varchar(128) CHARACTER SET l
atin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin
1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin
1 COLLATE latin1_bin NOT NULL, `COLUMN_NAME` varchar(1000) CHARACTER SET latin1
COLLATE latin1_bin NOT NULL, `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COL
LATE latin1_bin NOT NULL, `PART_ID` bigint(20) NOT NULL, `LONG_LOW_VALUE` bigint
(20), `LONG_HIGH_VALUE` bigint(20), `DOUBLE_HIGH_VALUE` double(53,4), `DOUBLE_LO
W_VALUE` double(53,4), `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin
1 COLLATE latin1_bin, `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin
1 COLLATE latin1_bin, `NUM_NULLS` bigint(20) NOT NULL, `NUM_DISTINCTS` bigint(20
), `AVG_COL_LEN` double(53,4), `MAX_COL_LEN` bigint(20), `NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20), `LAST_ANALYZED` bigint(20) NOT NULL, PRIMARY KEY (`CS_
ID`), CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITI
ONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0 seconds)
0: jdbc:mysql://ip-192-168-0-42.eu-west-1.com> CREATE INDEX PCS_STATS_IDX ON PAR
T_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE
Error: Duplicate key name 'PCS_STATS_IDX' (state=42000,code=1061)
Closing: 0: jdbc:mysql://ip-192-168-0-42.eu-west-1.compute.internal/hive?createDatabaseIfNotExist=true
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:304)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.io.IOException: Schema script failed, errorcode 2
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:410)
at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:367)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:300)
... 8 more
*** schemaTool failed ***
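The script aborts at CREATE INDEX PCS_STATS_IDX because a previous, partially completed initialization already created that index; the Hive schema scripts are not idempotent for indexes on MySQL, so every rerun of schematool against a half-initialized database fails at the same point. A recovery sketch, assuming the metastore database is named hive and holds no data worth keeping (i.e. a fresh install; the schematool path is the usual HDP 2.5 location and may differ on your system):
mysql> DROP DATABASE hive;
mysql> CREATE DATABASE hive;
mysql> exit
# re-run schema initialization against the now-empty database
/usr/hdp/current/hive-metastore/bin/schematool -dbType mysql -initSchema -verbose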
Tags: Data Processing, Hive
Labels: Apache Hive
11-07-2017
05:55 PM
Hi Jay, thanks for your reply. I tried this, but no luck:
[root@ip-192-168-0-42 etc]# service mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]
[root@ip-192-168-0-42 etc]# mysql -urootmysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> UPDATE user SET Password=PASSWORD('root@123') where USER='root';
ERROR 1046 (3D000): No database selected
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> UPDATE user SET Password=PASSWORD('root@123') where USER='root';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 0 Changed: 0 Warnings: 0
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
[root@ip-192-168-0-42 etc]# mysql -uroot -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Still not able to log in as root.
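Note that the UPDATE above reported Rows matched: 0, so no password was actually changed: either the session was not a real root session (the `mysql -urootmysql` login may have matched an anonymous account) or there is no 'root' row for that host in mysql.user. A sketch of the usual recovery path, assuming a sysvinit-era MySQL 5.1 as in the log (the password is a placeholder):
service mysqld stop
mysqld_safe --skip-grant-tables &           # start with grant checks disabled
mysql -uroot
mysql> SELECT User, Host FROM mysql.user;   -- confirm which root rows actually exist
mysql> UPDATE mysql.user SET Password=PASSWORD('root@123') WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> exit
service mysqld restart                      # back to normal mode
mysql -uroot -p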
11-07-2017
04:23 PM
[ec2-user@ip-192-168-0-42 etc]$ mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
[ec2-user@ip-192-168-0-42 etc]$
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.5.0-157/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.5.0-157/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://ip-192-168-0-42.eu-west-1.compute.internal/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : null, message from server: "Host 'ip-192-168-0-50.eu-west-1.compute.internal' is not allowed to connect to this MySQL server"
SQL Error code: 1130
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.sql.SQLException: null, message from server: "Host 'ip-192-168-0-50.eu-west-1.compute.internal' is not allowed to connect to this MySQL server"
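SQL error 1130 means mysqld has no account entry at all for the connecting host, so the metastore host is rejected before authentication even starts. A sketch of the usual fix, run as root on the MySQL host (the password is a placeholder; narrow the host pattern to taste):
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'ip-192-168-0-50.eu-west-1.compute.internal' IDENTIFIED BY 'hivepassword';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hivepassword';   -- or allow any host
mysql> FLUSH PRIVILEGES;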
Tags: Metastore
06-01-2017
02:37 PM
Thanks Vineet. Regards, Priyaranjan
05-31-2017
09:47 AM
2017-05-31 09:32:01,998 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hawq' returned 1.
Transaction check error:
file /usr/bin from install of thrift-0.9.1-1.el6.x86_64 conflicts with file from package filesystem-2.4.30-3.8.amzn1.x86_64
file /usr/lib from install of thrift-0.9.1-1.el6.x86_64 conflicts with file from package filesystem-2.4.30-3.8.amzn1.x86_64
file /usr/lib64 from install of thrift-0.9.1-1.el6.x86_64 conflicts with file from package filesystem-2.4.30-3.8.amzn1.x86_64
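The el6 thrift RPM claims /usr/bin, /usr/lib and /usr/lib64 itself, which Amazon Linux's own filesystem package already owns, so yum refuses the transaction. One workaround sketch, assuming the conflict really is only this directory ownership and not genuine file clashes (if your rpm lacks --replacefiles, --force is the blunter equivalent):
yumdownloader thrift-0.9.1-1.el6.x86_64          # from the yum-utils package
rpm -ivh --replacefiles thrift-0.9.1-1.el6.x86_64.rpm
/usr/bin/yum -d 0 -e 0 -y install hawq           # then retry the HAWQ install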
Tags: hawq