Member since
02-29-2016
37
Posts
48
Kudos Received
2
Solutions
My Accepted Solutions
04-29-2018
12:50 PM
3 Kudos
Issue: This issue has been observed on an HDP cluster running the NiFi service from HDF 3.0.1, managed by Ambari 2.6.1.5. HDF 3.0.1 is supported only with Ambari 2.5.1. Therefore, when you upgrade Ambari from 2.5.x to 2.6.y, HDF services such as NiFi will not start and will fail with the error shown in the stack trace below. To handle this, upgrade the HDF management pack (mpack) from 3.0.1 to 3.0.2 after the Ambari upgrade is complete. This additional step is required to get HDF services working with Ambari 2.6.y, and it is independent of any HDF upgrade you may have planned separately.
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 309, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 177, in start
self.configure(env, is_starting = True)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 112, in configure
PropertiesFile(params.nifi_config_dir + '/nifi.properties', properties = params.nifi_properties, mode = 0600, owner = params.nifi_user, group = params.nifi_group)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/properties_file.py", line 54, in action_create
mode = self.resource.mode
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 123, in action_create
content = self._get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 160, in _get_content
return content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 52, in __call__
return self.get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 144, in get_content
rendered = self.template.render(self.context)
File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "<template>", line 3, in top-level template code
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 144, in get_content
rendered = self.template.render(self.context)
File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "<template>", line 1, in top-level template code
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'availableServices' was not found in configurations dictionary!
stdout:
2018-04-27 11:49:37,580 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.1.0-129 -> 2.6.1.0-129
2018-04-27 11:49:37,592 - Using hadoop conf dir: /usr/hdp/2.6.1.0-129/hadoop/conf
2018-04-27 11:49:37,722 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.1.0-129 -> 2.6.1.0-129
2018-04-27 11:49:37,726 - Using hadoop conf dir: /usr/hdp/2.6.1.0-129/hadoop/conf
2018-04-27 11:49:37,726 - Group['kms'] {}
2018-04-27 11:49:37,727 - Group['livy'] {}
2018-04-27 11:49:37,728 - Group['spark'] {}
2018-04-27 11:49:37,728 - Group['ranger'] {}
2018-04-27 11:49:37,728 - Group['sg_hdp_hdfsadmins'] {}
2018-04-27 11:49:37,730 - Group['zeppelin'] {}
2018-04-27 11:49:37,730 - Group['hadoop'] {}
2018-04-27 11:49:37,730 - Group['nifi'] {}
2018-04-27 11:49:37,730 - Group['users'] {}
2018-04-27 11:49:37,730 - Group['knox'] {}
2018-04-27 11:49:37,731 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,732 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,733 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,734 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,735 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-27 11:49:37,735 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,736 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-27 11:49:37,737 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger'], 'uid': None}
2018-04-27 11:49:37,738 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-27 11:49:37,738 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-04-27 11:49:37,739 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,740 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,741 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,742 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,742 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-04-27 11:49:37,743 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,744 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['sg_hdp_hdfsadmins'], 'uid': None}
2018-04-27 11:49:37,744 - Modifying user hdfs
2018-04-27 11:49:37,756 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,757 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,758 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,759 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,759 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,760 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-04-27 11:49:37,761 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-27 11:49:37,762 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-04-27 11:49:37,770 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-04-27 11:49:37,770 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-04-27 11:49:37,771 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-27 11:49:37,772 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-27 11:49:37,772 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-04-27 11:49:37,782 - call returned (0, '1018')
2018-04-27 11:49:37,782 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-04-27 11:49:37,790 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] due to not_if
2018-04-27 11:49:37,790 - Group['hdfs'] {}
2018-04-27 11:49:37,791 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['sg_hdp_hdfsadmins', 'hadoop', 'hdfs', u'hdfs']}
2018-04-27 11:49:37,791 - Modifying user hdfs
2018-04-27 11:49:37,802 - FS Type:
2018-04-27 11:49:37,802 - Directory['/etc/hadoop'] {'mode': 0755}
2018-04-27 11:49:37,812 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2018-04-27 11:49:37,812 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-04-27 11:49:37,823 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-04-27 11:49:37,832 - Skipping Execute[('setenforce', '0')] due to not_if
2018-04-27 11:49:37,832 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-04-27 11:49:37,834 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-04-27 11:49:37,834 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-04-27 11:49:37,837 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2018-04-27 11:49:37,838 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2018-04-27 11:49:37,842 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-04-27 11:49:37,849 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-04-27 11:49:37,849 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-04-27 11:49:37,850 - File['/usr/hdp/2.6.1.0-129/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-04-27 11:49:37,853 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-04-27 11:49:37,859 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-04-27 11:49:37,873 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.
2018-04-27 11:49:38,078 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.1.0-129 -> 2.6.1.0-129
2018-04-27 11:49:38,118 - Using hadoop conf dir: /usr/hdp/2.6.1.0-129/hadoop/conf
2018-04-27 11:49:38,119 - Directory['/var/run/nifi'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,121 - Directory['/var/log/nifi'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,122 - Directory['/var/lib/nifi'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,189 - Directory['/var/lib/nifi/database_repository'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,189 - Directory['/flowfile_repo'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,197 - Directory['/prov_repo1'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,198 - Directory['/usr/hdf/current/nifi/conf'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,199 - Directory['/var/lib/nifi/conf'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,199 - Directory['/var/lib/nifi/state/local'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,200 - Directory['/usr/hdf/current/nifi/lib'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,201 - Directory['{{nifi_content_repo_dir_default}}'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,201 - Directory['/cont_repo2'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True}
2018-04-27 11:49:38,230 - Directory['/cont_repo1'] {'owner': 'nifi', 'group': 'nifi', 'create_parents': True, 'recursive_ownership': True}
2018-04-27 11:49:38,258 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-04-27 11:49:38,261 - File['/etc/security/limits.d/nifi.conf'] {'content': Template('nifi.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-04-27 11:49:38,271 - PropertiesFile['/usr/hdf/current/nifi/conf/nifi.properties'] {'owner': 'nifi', 'group': 'nifi', 'mode': 0600, 'properties': ...}
2018-04-27 11:49:38,274 - Generating properties file: /usr/hdf/current/nifi/conf/nifi.properties
2018-04-27 11:49:38,274 - File['/usr/hdf/current/nifi/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0600}
2018-04-27 11:49:38,350 - Skipping stack-select on NIFI because it does not exist in the stack-select package structure.
Command failed after 1 tries
Resolution: SSH into the Ambari server node and follow the mpack upgrade steps described in the official documentation. For the support matrix of HDF 3.0.2, please visit the corresponding link; for the support matrix of HDF 3.0.1, please visit its link. For any other info, please visit docs.hortonworks.com
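For reference, the mpack upgrade on the Ambari server typically looks like the hedged sketch below. The tarball path is an assumption based on the bundle name used elsewhere in this article; take the exact download URL and file name from the official HDF 3.0.2 documentation.
# Sketch only: upgrade the HDF mpack from 3.0.1 to 3.0.2 on the Ambari server node.
# The tarball location is a placeholder; use the file you downloaded per the official doc.
ambari-server upgrade-mpack --mpack=/tmp/hdf-ambari-mpack-3.0.2.0-76.tar.gz --verbose
ambari-server restart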
04-29-2018
12:07 PM
2 Kudos
Issue: When you upgrade HDF to 3.0.2, the NiFi service may fail to start via Ambari because of the OutOfMemoryError shown in the stack trace below.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 309, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 177, in start
self.configure(env, is_starting = True)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 150, in configure
params.nifi_flow_config_dir, params.nifi_sensitive_props_key, is_starting)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 304, in encrypt_sensitive_properties
Execute(encrypt_config_script_prefix, user=nifi_user,logoutput=False)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk /var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/files/nifi-toolkit-1.2.0.3.0.2.0-76/bin/encrypt-config.sh -v -b /usr/hdf/current/nifi/conf/bootstrap.conf -n /usr/hdf/current/nifi/conf/nifi.properties -f /var/lib/nifi/conf/flow.xml.gz -s '[PROTECTED]' -l /usr/hdf/current/nifi/conf/login-identity-providers.xml -p '[PROTECTED]'' returned 1. 2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of login-identity-providers.xml
2018/04/27 17:28:26 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source login-identity-providers.xml and destination login-identity-providers.xml are identical [/usr/hdf/current/nifi/conf/login-identity-providers.xml] so the original will be overwritten
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of nifi.properties
2018/04/27 17:28:26 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source nifi.properties and destination nifi.properties are identical [/usr/hdf/current/nifi/conf/nifi.properties] so the original will be overwritten
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of flow.xml.gz
2018/04/27 17:28:26 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source flow.xml.gz and destination flow.xml.gz are identical [/var/lib/nifi/conf/flow.xml.gz] so the original will be overwritten
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: bootstrap.conf: /usr/hdf/current/nifi/conf/bootstrap.conf
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) login-identity-providers.xml: /usr/hdf/current/nifi/conf/login-identity-providers.xml
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) login-identity-providers.xml: /usr/hdf/current/nifi/conf/login-identity-providers.xml
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2018/04/27 17:28:26 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 133 properties from /usr/hdf/current/nifi/conf/nifi.properties
2018/04/27 17:28:27 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 133 properties from /usr/hdf/current/nifi/conf/nifi.properties
2018/04/27 17:28:27 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Loaded NiFiProperties instance with 133 properties
2018/04/27 17:28:27 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Loaded LoginIdentityProviders content (104 lines)
2018/04/27 17:28:27 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: No encrypted password property elements found in login-identity-providers.xml
2018/04/27 17:28:27 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: No unencrypted password property elements found in login-identity-providers.xml
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
at java.lang.StringBuilder.append(StringBuilder.java:190)
at org.apache.commons.io.output.StringBuilderWriter.write(StringBuilderWriter.java:143)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2370)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2348)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2325)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2273)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:1041)
at org.apache.commons.io.IOUtils$toString.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.apache.nifi.properties.ConfigEncryptionTool$_loadFlowXml_closure2$_closure19.doCall(ConfigEncryptionTool.groovy:488)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019)
at groovy.lang.Closure.call(Closure.java:426)
at groovy.lang.Closure.call(Closure.java:442)
at org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
at org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1754)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
Root cause: By default, the maximum heap configured in the encrypt-config.sh file is not sufficient to run the tool when the service is started via Ambari. If you are upgrading to HDF 3.0.2 or later, a maximum heap of 1 GB must be set for the NiFi service to start successfully from Ambari.
Resolution:
1. SSH into the Ambari server node.
2. Locate the script: find / -name encrypt-config.sh
3. Open the encrypt-config.sh file belonging to the correct HDF version using vi. For example, in this scenario the version is 3.0.2, so the script is found at /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.2.0-76/common-services/NIFI/1.0.0/package/files/nifi-toolkit-1.2.0.3.0.2.0-76/bin/encrypt-config.sh
4. Increase the maximum heap to 1024m (-Xmx1024m) as shown below and save the change (:wq!):
umask 0077
"${JAVA}" -cp "${CLASSPATH}" -Xms128m -Xmx1024m org.apache.nifi.properties.ConfigEncryptionTool "$@"
return $?
}
init
run "$@"
5. Restart ambari-server and ambari-agent.
6. Restart the NiFi service from the Ambari UI; it should now start successfully.
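If you prefer not to edit the file in vi, a hedged one-liner sketch with sed is shown below; back up the script first, and note that the SCRIPT path simply reuses the location from step 3 above.
# Sketch only: raise the max heap passed to ConfigEncryptionTool to 1024m in place.
SCRIPT=/var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.2.0-76/common-services/NIFI/1.0.0/package/files/nifi-toolkit-1.2.0.3.0.2.0-76/bin/encrypt-config.sh
cp "$SCRIPT" "$SCRIPT.bak"                      # keep a backup before editing
sed -i 's/-Xmx[0-9]*m/-Xmx1024m/' "$SCRIPT"     # replace the existing -Xmx value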
11-27-2017
02:39 PM
Description: Once you have a Hadoop cluster that uses Kerberos for authentication, do the following in Ambari to configure Knox to work with that cluster.
1) Grant proxy privileges for Knox in core-site.xml:
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
2) Grant proxy privileges for Knox in webhcat-site.xml:
webhcat.proxyuser.knox.groups=*
webhcat.proxyuser.knox.hosts=*
3) Grant proxy privileges for Knox in oozie-site.xml:
oozie.service.ProxyUserService.proxyuser.knox.groups=*
oozie.service.ProxyUserService.proxyuser.knox.hosts=*
4) Update hive-site.xml and set the following properties on the HiveServer2 hosts. Some of the properties may already be present in hive-site.xml; ensure that the values match the ones below:
hive.server2.allow.user.substitution=true
hive.server2.transport.mode=http
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.path=cliservice
NOTE: The properties shown above allow all users and all hostnames. Please consider security and update the values per your architecture. For more info please visit this link
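Once these settings are in place and the services have been restarted, a quick smoke test through the Knox gateway can look like the hedged sketch below. The host name, port, topology name (default) and credentials are assumptions; substitute the values for your environment.
# Sketch only: list the HDFS root through Knox (host, port, topology and credentials are placeholders).
curl -ik -u myuser:mypassword 'https://knox-host.example.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'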
07-20-2017
09:57 PM
1 Kudo
Issue: Running the Sqoop export below on a client machine might lead to the reported error.
sqoop export -Dmapreduce.map.log.level=DEBUG -Dteradata.db.output.fastload.socket.port=8678 \
-Dteradata.db.output.method=internal.fastload \
--connect jdbc:teradata://mrt1.openstacklocal/Database=testdb \
--connection-manager org.apache.sqoop.teradata.TeradataConnManager --username COOXP -P \
--export-dir /user/karthick/fastload_test \
--table test_table --input-fields-terminated-by ',' \
--null-string '\N' --null-non-string '\N' --num-mappers 5
INFO mapreduce.Job: Task Id : attempt_1497611885470_0832_m_000000_0, Status : FAILED
Error: com.teradata.connector.common.exception.ConnectorException: java.net.SocketException:
Socket is not connected
at java.net.Socket.getInputStream(Socket.java:905)
at com.teradata.connector.teradata.TeradataInternalFastloadOutputFormat.getRecordWriter(TeradataInternalFastloadOutputFormat.java:356)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
at com.teradata.connector.teradata.TeradataInternalFastloadOutputFormat.getRecordWriter(TeradataInternalFastloadOutputFormat.java:478)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Root cause: The Fastload protocol requires ports 8678 through 65535 to be open on all nodes in the HDP cluster, including the client machine, whether or not it is part of the cluster. These ports should be open for both inbound and outbound traffic. On the Teradata server, only the default port 1025 needs to be open. If the full range cannot be opened, the job needs to be run on a specific port:
a) To use only a single port within this range, explicitly set the port with the property -Dteradata.db.output.fastload.socket.port=8678 in the Sqoop statement, as shown above.
Resolution: The Fastload protocol works slightly differently from batch mode or a standard Sqoop export, which is why the same command succeeds when run in batch mode or as a standard export. For Fastload it is mandatory that the ports above are open before running the Sqoop job.
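On nodes running firewalld (for example, RHEL/CentOS 7), the port range can be opened with the hedged sketch below; adapt it to whatever firewall your environment actually uses, and repeat it on every cluster node and on the client machine.
# Sketch only, assuming firewalld: open the Fastload port range on a node.
firewall-cmd --permanent --add-port=8678-65535/tcp
firewall-cmd --reload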
07-15-2017
12:56 AM
Issue: The NodeManager fails to start with the following error in the NodeManager log:
FATAL nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(540)) - Error starting NodeManager java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.6.0.2.4.0.0-169 in java.library.path, no leveldbjni-1.6.0.2.4.0.0-169 in java.library.path, no leveldbjni in java.library.path, No such file or directory]
Resolution:
a) If you are using Ambari, the library files reported as missing in the above error should by default be present under /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir. You can confirm this location by running "ps -ef | grep -i nodemanager" on any host in the cluster where the NodeManager runs successfully and looking for the "-Djava.io.tmpdir" property. If you use a location other than the default, make sure the library files are present there, or copy them from another host.
b) You can also remove noexec from /tmp as shown below and then start the NodeManager via Ambari:
cat /etc/fstab
mount -o remount,exec /tmp
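Before remounting, the two quick checks below (a sketch based on the commands already referenced above) confirm which tmpdir the NodeManager uses and whether /tmp is currently mounted noexec.
# Sketch only: inspect the running NodeManager's java.io.tmpdir and the /tmp mount options.
ps -ef | grep -i nodemanager | tr ' ' '\n' | grep 'java.io.tmpdir'
mount | grep ' /tmp '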
07-14-2017
11:57 PM
Issue: The ResourceManager UI page appears truncated (see screenshot-2017-07-12-000551.png), and the ResourceManager log shows a NullPointerException like the one below.
2017-06-22 11:18:05,507 ERROR webapp.Dispatcher (Dispatcher.java:service(171)) - error handling URI: /cluster/scheduler
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:162)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178)
at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:614)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:294)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:573)
at org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:95)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.jetty.security.SslSocketConnector$SslConnection.run(SslSocketConnector.java:713)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:155)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:155)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
at org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:294)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
at org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:454)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
at org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
at org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
at org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$TD._(Hamlet.java:845)
at org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:56)
at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
at org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.scheduler(RmController.java:82)
... 52 more
Resolution: You are hitting the Apache bug YARN-4624.
Workaround: Apply the workaround below and the RM UI page will render in full without truncation. Enable the node labels on the "default" queue (even if their capacity is set to 0) by adding the lines below to the capacity-scheduler configuration via Ambari. The label names compute and storage are the ones used in this cluster; substitute your own.
yarn.scheduler.capacity.root.default.accessible-node-labels=compute,storage
yarn.scheduler.capacity.root.default.accessible-node-labels.compute.capacity=0
yarn.scheduler.capacity.root.default.accessible-node-labels.compute.maximum-capacity=100
yarn.scheduler.capacity.root.default.accessible-node-labels.storage.capacity=0
yarn.scheduler.capacity.root.default.accessible-node-labels.storage.maximum-capacity=100
yarn.scheduler.capacity.root.default.default-node-label-expression=compute
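Applying the change through Ambari prompts the required restart/refresh. From the command line, the equivalent queue refresh and a check of the labels defined on the cluster look like the hedged sketch below (assuming the standard YARN CLI shipped with HDP 2.x).
# Sketch only: refresh the scheduler queues and list the node labels known to the cluster.
yarn rmadmin -refreshQueues
yarn cluster --list-node-labels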
07-14-2017
10:51 PM
Issue: Sometimes Ambari throws an alert reporting that all of the DataNodes (for example, all 50 of them) are stale even though they are active, which can be confusing. First, I suggest going through the HCC article on how to identify stale DataNodes.
Resolution: Increase the value of the property below from the default of 30000 milliseconds to 60000 or 180000 milliseconds:
dfs.namenode.stale.datanode.interval=60000 or 180000
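To see the last-contact times that drive the staleness calculation, the hedged sketch below uses the standard HDFS admin report (run it as the hdfs user).
# Sketch only: show each DataNode and its last heartbeat contact time as reported by the NameNode.
sudo -u hdfs hdfs dfsadmin -report | grep -iE 'Name:|Last contact'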
06-29-2017
07:18 PM
Issue: If a table has a large number of partitions, on the order of 2800 or more, and you want the WebHCat API call below to return all partitions of the table in JSON format, you will end up with the reported error.
API: curl --negotiate -u: http://mrt1.openstacklocal:50111/templeton/v1/ddl/database/<db name>/table/<table name>/partition > /tmp/partition_json
Resolution: The error is not caused by the number of partitions in the table itself. WebHCat JSON responses are limited to 1 MB in size; responses over this limit must be stored into HDFS using the provided options instead of being returned directly.
Workaround: You can use HCatalog directly if your DDL command returns results greater than 1 MB:
/usr/hdp/<hdp version>/hive/bin/hcat -e "use <dbname>; show partitions <tablename>; " -D hive.ddl.output.format=json -D hive.format=json -D hive.metastore.token.signature=hcat -D proxy.user.name=<username>
Reference: https://cwiki.apache.org/confluence/display/Hive/WebHCat+UsingWebHCat
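In addition to the workaround above, a quick partition count (hedged sketch below; the database and table names are placeholders) tells you ahead of time whether the response is likely to exceed the 1 MB limit.
# Sketch only: count the partitions of a table before calling the WebHCat DDL endpoint.
hive -e "use <dbname>; show partitions <tablename>;" | wc -l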
06-29-2017
12:09 AM
1 Kudo
select * from t1 where col1 = 'aaa' limit 2;
If even a simple query like the one above takes unexpectedly long, make sure to verify the number of files in each partition of the table. If there are too many small files, on the order of KBs, under each partition, then most of the time is spent just opening ORC files. Check whether it is expected to have so many files in each partition, or whether it is possible to merge them before loading the data. These are the parameters to consider:
hive.merge.tezfiles=true
hive.merge.mapredfiles=true
hive.merge.mapfiles=true
hive.merge.orcfile.stripe.level=true
In such a situation, checking the number of files in the partitions should be the first step; verifying anything else should come second.
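A quick way to check the file counts, and to compact an existing ORC partition in place, is sketched below; the warehouse path and partition spec are assumptions to adapt to your own table.
# Sketch only: count files per partition (path is a placeholder) and merge an existing ORC partition.
hdfs dfs -count /apps/hive/warehouse/<dbname>.db/t1/*
hive -e "ALTER TABLE t1 PARTITION (<partition_spec>) CONCATENATE;"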
06-26-2017
08:22 PM
4 Kudos
Issue: If you see the error below while running a 'select count(*)' or 'analyze table ... compute statistics' on a regex table, it indicates the table was created with the old SerDe. Prior to Hive 0.10, 'RegexSerDe' was part of the 'hive-contrib' library; from Hive 0.10 onwards the SerDe is part of 'hive-serde-<version>.jar'.
ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1496529203645_0883_2_00, diagnostics=[Task failed, taskId=task_1496529203645_0883_2_00_000010, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Map operator initialization failed
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:262)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:350)
at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:385)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:224)
... 15 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.RegexSerDe not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2114)
at org.apache.hadoop.hive.ql.plan.PartitionDesc.getDeserializer(PartitionDesc.java:143)
at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:316)
... 17 more
Resolution: Create the table with the current SerDe 'org.apache.hadoop.hive.serde2.RegexSerDe' instead of 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'. You can also run ALTER TABLE to modify the SerDe on an existing table, as below:
ALTER TABLE <TABLENAME> SET SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe';
Hope you like the article!
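For a brand-new table, the DDL with the supported SerDe typically looks like this hedged sketch; the table name, columns, and regex are placeholders rather than the table from this article.
-- Sketch only: create a regex-backed table using the built-in SerDe (names and regex are placeholders).
CREATE TABLE access_log (host STRING, request STRING, status STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES ("input.regex" = "(\\S+) (\\S+) (\\d+)")
STORED AS TEXTFILE;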