Member since: 04-05-2016
Posts: 188
Kudos Received: 19
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 209 | 10-30-2017 07:05 AM |
| | 308 | 10-12-2017 07:03 AM |
| | 933 | 10-12-2017 06:59 AM |
| | 1776 | 03-01-2017 09:56 AM |
| | 6253 | 01-26-2017 11:52 AM |
04-12-2019
12:17 AM
I just installed HDF 3.1.2 and NiFi is not starting because of the error below. I have changed the hard-coded memory definition in the encrypt script, but it still fails. Can anyone help?

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 231, in <module>
    Master().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 978, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 152, in start
    self.configure(env, is_starting = True)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 122, in configure
    params.stack_support_encrypt_authorizers)
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi_toolkit_util.py", line 454, in encrypt_sensitive_properties
    Execute(encrypt_config_command, user=nifi_user, logoutput=False, environment=environment)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/var/lib/ambari-agent/tmp/nifi-toolkit-1.5.0.3.1.2.0-7/bin/encrypt-config.sh -v -b /usr/hdf/current/nifi/conf/bootstrap.conf -n /usr/hdf/current/nifi/conf/nifi.properties -f /var/lib/nifi/conf/flow.xml.gz -s '[PROTECTED]' -p '[PROTECTED]'' returned 4.
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of nifi.properties
2019/04/12 02:03:50 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source nifi.properties and destination nifi.properties are identical [/usr/hdf/current/nifi/conf/nifi.properties] so the original will be overwritten
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of flow.xml.gz
2019/04/12 02:03:50 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source flow.xml.gz and destination flow.xml.gz are identical [/var/lib/nifi/conf/flow.xml.gz] so the original will be overwritten
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: bootstrap.conf: /usr/hdf/current/nifi/conf/bootstrap.conf
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) login-identity-providers.xml: null
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) login-identity-providers.xml: null
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) authorizers.xml: null
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) authorizers.xml: null
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2019/04/12 02:03:50 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 145 properties from /usr/hdf/current/nifi/conf/nifi.properties
2019/04/12 02:03:51 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 145 properties from /usr/hdf/current/nifi/conf/nifi.properties
2019/04/12 02:03:51 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Loaded NiFiProperties instance with 145 properties
2019/04/12 02:03:53 ERROR [main] org.apache.nifi.properties.ConfigEncryptionTool: Encountered an error
javax.crypto.BadPaddingException: pad block corrupted at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source) at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source) at javax.crypto.Cipher.doFinal(Cipher.java:2164) at javax.crypto.Cipher$doFinal$2.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at org.apache.nifi.properties.ConfigEncryptionTool.decryptFlowElement(ConfigEncryptionTool.groovy:636) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at
org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:384) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:69) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:52) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:154) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:190) at org.apache.nifi.properties.ConfigEncryptionTool$_migrateFlowXmlContent_closure4.doCall(ConfigEncryptionTool.groovy:731) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325) at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1019) at groovy.lang.Closure.call(Closure.java:426) at groovy.lang.Closure.call(Closure.java:442) at org.codehaus.groovy.runtime.StringGroovyMethods.getReplacement(StringGroovyMethods.java:1543) at org.codehaus.groovy.runtime.StringGroovyMethods.replaceAll(StringGroovyMethods.java:2580) at org.codehaus.groovy.runtime.StringGroovyMethods.replaceAll(StringGroovyMethods.java:2506) at org.codehaus.groovy.runtime.dgm$1127.invoke(Unknown Source) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:274) at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:56) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133) at org.apache.nifi.properties.ConfigEncryptionTool.migrateFlowXmlContent(ConfigEncryptionTool.groovy:730) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:210) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:71) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.apache.nifi.properties.ConfigEncryptionTool.main(ConfigEncryptionTool.groovy:1427) at org.apache.nifi.properties.ConfigEncryptionTool$main.call(Unknown Source) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at org.apache.nifi.toolkit.encryptconfig.LegacyMode.run(LegacyMode.groovy:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSite.invoke(PogoMetaMethodSite.java:169) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:71) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) at org.apache.nifi.toolkit.encryptconfig.EncryptConfigMain.main(EncryptConfigMain.groovy:109) pad block corrupted
usage: org.apache.nifi.properties.ConfigEncryptionTool [-h] [-v] [-n <file>] [-o <file>] [-l <file>] [-i <file>] [-a <file>] [-u <file>] [-f <file>] [-g <file>] [-b <file>] [-k <keyhex>] [-e <keyhex>] [-p <password>] [-w <password>] [-r] [-m] [-x] [-s <password|keyhex>] [-A <algorithm>] [-P <algorithm>]
This tool reads from a nifi.properties and/or login-identity-providers.xml file with plain sensitive configuration values, prompts the user for a master key, and encrypts each value. It will replace the plain value with the protected value in the same file (or write to a new file if specified). It can also be used to migrate already-encrypted values in those files or in flow.xml.gz to be encrypted with a new key.
 -h,--help                                   Show usage information (this message)
 -v,--verbose                                Sets verbose mode (default false)
 -n,--niFiProperties <file>                  The nifi.properties file containing unprotected config values (will be overwritten unless -o is specified)
 -o,--outputNiFiProperties <file>            The destination nifi.properties file containing protected config values (will not modify input nifi.properties)
 -l,--loginIdentityProviders <file>          The login-identity-providers.xml file containing unprotected config values (will be overwritten unless -i is specified)
 -i,--outputLoginIdentityProviders <file>    The destination login-identity-providers.xml file containing protected config values (will not modify input login-identity-providers.xml)
 -a,--authorizers <file>                     The authorizers.xml file containing unprotected config values (will be overwritten unless -u is specified)
 -u,--outputAuthorizers <file>               The destination authorizers.xml file containing protected config values (will not modify input authorizers.xml)
 -f,--flowXml <file>                         The flow.xml.gz file currently protected with old password (will be overwritten unless -g is specified)
 -g,--outputFlowXml <file>                   The destination flow.xml.gz file containing protected config values (will not modify input flow.xml.gz)
 -b,--bootstrapConf <file>                   The bootstrap.conf file to persist master key
 -k,--key <keyhex>                           The raw hexadecimal key to use to encrypt the sensitive properties
 -e,--oldKey <keyhex>                        The old raw hexadecimal key to use during key migration
 -p,--password <password>                    The password from which to derive the key to use to encrypt the sensitive properties
 -w,--oldPassword <password>                 The old password from which to derive the key during migration
 -r,--useRawKey                              If provided, the secure console will prompt for the raw key value in hexadecimal form
 -m,--migrate                                If provided, the nifi.properties and/or login-identity-providers.xml sensitive properties will be re-encrypted with a new key
 -x,--encryptFlowXmlOnly                     If provided, the properties in flow.xml.gz will be re-encrypted with a new key but the nifi.properties and/or login-identity-providers.xml files will not be modified
 -s,--propsKey <password|keyhex>             The password or key to use to encrypt the sensitive processor properties in flow.xml.gz
 -A,--newFlowAlgorithm <algorithm>           The algorithm to use to encrypt the sensitive processor properties in flow.xml.gz
 -P,--newFlowProvider <algorithm>            The security provider to use to encrypt the sensitive processor properties in flow.xml.gz
Java home: /usr/java/latest/ NiFi Toolkit home: /var/lib/ambari-agent/tmp/nifi-toolkit-1.5.0.3.1.2.0-7 2019-04-12 02:03:40,971 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.0.0.0-453 -> 3.0.0.0-453 2019-04-12 02:03:41,547 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.0.0.0-453 -> 3.0.0.0-453 User Group mapping (user_group) is missing in the hostLevelParams 2019-04-12 02:03:41,565 - Group['hadoop'] {} 2019-04-12 02:03:41,570 - Group['nifi'] {} 2019-04-12 02:03:41,571 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-12 02:03:41,576 - call['/var/lib/ambari-agent/tmp/changeUid.sh infra-solr'] {} 2019-04-12 02:03:41,602 - call returned (0, '1005') 2019-04-12 02:03:41,604 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1005} 2019-04-12 02:03:41,608 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-12 02:03:41,612 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {} 2019-04-12 02:03:41,638 - call returned (0, '1006') 2019-04-12 02:03:41,640 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1006} 2019-04-12 02:03:41,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-12 02:03:41,648 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {} 2019-04-12 02:03:41,673 - call returned (0, '1007') 2019-04-12 02:03:41,674 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1007} 2019-04-12 02:03:41,678 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None} 2019-04-12 02:03:41,681 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-12 02:03:41,685 - call['/var/lib/ambari-agent/tmp/changeUid.sh nifi'] {} 2019-04-12 02:03:41,710 - call returned (0, '1010') 2019-04-12 02:03:41,712 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010} 2019-04-12 02:03:41,715 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2019-04-12 02:03:41,721 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2019-04-12 02:03:41,738 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2019-04-12 02:03:41,806 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2019-04-12 02:03:41,828 - Skipping Execute[('setenforce', '0')] due to not_if 2019-04-12 02:03:42,959 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.0.0.0-453 -> 3.0.0.0-453 2019-04-12 02:03:43,104 - File['/usr/hdf/current/nifi/bin/nifi-env.sh'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0755} 2019-04-12 02:03:43,107 - Execute['export JAVA_HOME=/usr/java/latest/;/usr/hdf/current/nifi/bin/nifi.sh stop >> /var/log/nifi/nifi-setup.log'] {'user': 'nifi'} 2019-04-12 02:03:47,567 - Pid file /var/run/nifi/nifi.pid is empty or does not exist 2019-04-12 02:03:47,577 - Directory['/var/run/nifi'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:47,581 - Directory['/var/lib/nifi'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,130 - Directory['/var/lib/nifi/database_repository'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,132 - Directory['/var/lib/nifi/flowfile_repository'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,151 - Directory['/var/lib/nifi/provenance_repository'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,155 - Directory['/usr/hdf/current/nifi/conf'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,157 - Directory['/var/lib/nifi/conf'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,157 - Directory['/var/lib/nifi/state/local'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,159 - Directory['/usr/hdf/current/nifi/lib'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,164 - Directory['/var/lib/nifi/content_repository'] {'owner': 'nifi', 'create_parents': True, 'group': 'nifi', 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,296 - Directory['/var/lib/nifi/content_repository'] {'owner': 'nifi', 'group': 'nifi', 'create_parents': True, 'recursive_ownership': True, 'cd_access': 'a'} 2019-04-12 02:03:48,552 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'} 2019-04-12 02:03:48,560 - File['/etc/security/limits.d/nifi.conf'] {'content': Template('nifi.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644} 2019-04-12 02:03:48,562 - PropertiesFile['/usr/hdf/current/nifi/conf/nifi.properties'] {'owner': 'nifi', 'group': 'nifi', 'mode': 0600, 'properties': ...} 2019-04-12 02:03:48,570 - Generating properties file: /usr/hdf/current/nifi/conf/nifi.properties 2019-04-12 02:03:48,570 - File['/usr/hdf/current/nifi/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0600} 2019-04-12 02:03:48,722 - Writing File['/usr/hdf/current/nifi/conf/nifi.properties'] because contents don't match 2019-04-12 02:03:48,730 - File['/usr/hdf/current/nifi/conf/bootstrap.conf'] {'owner': 'nifi', 'content': 
InlineTemplate(...), 'group': 'nifi', 'mode': 0600} 2019-04-12 02:03:48,738 - File['/usr/hdf/current/nifi/conf/logback.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0400} 2019-04-12 02:03:48,743 - File['/usr/hdf/current/nifi/conf/state-management.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0400} 2019-04-12 02:03:48,756 - File['/usr/hdf/current/nifi/conf/authorizers.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0600} 2019-04-12 02:03:48,765 - File['/usr/hdf/current/nifi/conf/login-identity-providers.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0600} 2019-04-12 02:03:48,769 - File['/usr/hdf/current/nifi/bin/nifi-env.sh'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0755} 2019-04-12 02:03:48,772 - File['/usr/hdf/current/nifi/conf/bootstrap-notification-services.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi', 'mode': 0400} 2019-04-12 02:03:48,773 - Encrypting NiFi sensitive configuration properties 2019-04-12 02:03:48,774 - File['/var/lib/ambari-agent/tmp/nifi-toolkit-1.5.0.3.1.2.0-7/bin/encrypt-config.sh'] {'mode': 0755} 2019-04-12 02:03:48,786 - Execute[('/var/lib/ambari-agent/tmp/nifi-toolkit-1.5.0.3.1.2.0-7/bin/encrypt-config.sh', '-v', '-b', u'/usr/hdf/current/nifi/conf/bootstrap.conf', '-n', u'/usr/hdf/current/nifi/conf/nifi.properties', '-f', u'/var/lib/nifi/conf/flow.xml.gz', '-s', [PROTECTED], '-p', [PROTECTED])] {'environment': {'JAVA_OPTS': u'-Xms128m -Xmx256m', 'JAVA_HOME': u'/usr/java/latest/'}, 'logoutput': False, 'user': 'nifi'}
Command failed after 1 tries
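To narrow down whether the failure comes from the toolkit itself or from the Ambari wrapper, the same command can be re-run by hand. Below is a minimal sketch assuming the paths and JAVA_OPTS shown in the log above; the values in angle brackets are placeholders for the master key password (-p) and the sensitive properties key (-s) configured in Ambari, not the real secrets.

```bash
# Mirror the environment Ambari sets, then run the toolkit as the nifi user.
sudo -u nifi env \
  JAVA_HOME=/usr/java/latest/ \
  JAVA_OPTS="-Xms128m -Xmx256m" \
  /var/lib/ambari-agent/tmp/nifi-toolkit-1.5.0.3.1.2.0-7/bin/encrypt-config.sh \
    -v \
    -b /usr/hdf/current/nifi/conf/bootstrap.conf \
    -n /usr/hdf/current/nifi/conf/nifi.properties \
    -f /var/lib/nifi/conf/flow.xml.gz \
    -s '<sensitive-props-key>' \
    -p '<master-key-password>'
```

The stack trace shows the "pad block corrupted" error being thrown while decrypting the existing flow.xml.gz (decryptFlowElement), which usually means the sensitive properties key being used does not match the key the existing flow.xml.gz was originally encrypted with.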
02-26-2019
05:53 PM
I have a large HBase table (60 TB) that needs to be exported to another HDP cluster. After about an hour (45-50 minutes) of exporting, I get errors relating to HDFS block issues. How can I move this table to the new cluster? I use HDP 2.6.0.3-8 and the HBase version is 1.1.2.
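One approach that often works better than the Export MapReduce job for tables of this size is a snapshot export, which copies HFiles directly and can be throttled. A minimal sketch, assuming hypothetical table/snapshot names, a destination NameNode called dest-nn, and the HDP default HBase root directory; mapper count and bandwidth are example values:

```bash
# 1. Snapshot the table on the source cluster.
echo "snapshot 'my_table', 'my_table_snap'" | hbase shell

# 2. Copy the snapshot metadata and HFiles to the destination cluster,
#    throttled (MB/s) so the long-running copy does not destabilise DataNodes.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table_snap \
  -copy-to hdfs://dest-nn:8020/apps/hbase/data \
  -mappers 16 \
  -bandwidth 100

# 3. On the destination cluster, materialise the table from the snapshot.
echo "clone_snapshot 'my_table_snap', 'my_table'" | hbase shell
```

Because the copy runs as an ordinary MapReduce job, an interrupted run can simply be re-executed (ExportSnapshot supports -overwrite), which helps when HDFS block errors cut long transfers short.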
11-19-2018
01:04 PM
@Geoffrey Shelton Okot Thanks for your response. My cluster has 11 nodes (3 master and 8 worker nodes). Yes, I ran the balancer with a threshold of 5, and I see it has still been running since Friday morning... Disk usage on my DataNode:
/dev/sdb 5.4T 5.1T 17M 100% /grid/1
/dev/sdc 5.4T 5.1T 263M 100% /grid/2
/dev/sdd 5.4T 5.1T 912M 100% /grid/3
/dev/sde 5.4T 5.1T 283M 100% /grid/4
/dev/sdf 5.4T 5.1T 95M 100% /grid/5
/dev/sdg 5.4T 5.1T 388M 100% /grid/6
/dev/sdh 5.4T 5.1T 22G 100% /grid/7
/dev/sdi 5.4T 5.1T 694M 100% /grid/8
/dev/sdj 5.4T 5.1T 843M 100% /grid/9
/dev/sdk 5.4T 5.1T 36M 100% /grid/10
/dev/sdl 5.4T 5.1T 120M 100% /grid/11
/dev/sda 5.4T 5.1T 802M 100% /grid/0
Tail of the balancer output log:
18/11/19 12:12:02 INFO balancer.Dispatcher: Successfully moved blk_1107025919_33285238 with size=134217728 from nodeg:50010:DISK to nodeh:50010:DISK through nodeg:50010
18/11/19 12:12:02 INFO balancer.Dispatcher: Start moving blk_1107022998_33282317 with size=134217728 from nodeg:50010:DISK to nodeh:50010:DISK through nodeg:50010
18/11/19 12:12:07 INFO balancer.Dispatcher: Successfully moved blk_1107025997_33285316 with size=134217728 from nodeg:50010:DISK to nodeh:50010:DISK through nodeg:50010
18/11/19 12:12:07 INFO balancer.Dispatcher: Start moving blk_1107022634_33281953 with size=134217728 from nodeg:50010:DISK to nodeh:50010:DISK through nodej:50010
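If the balancer is progressing but far too slowly for roughly 5 TB per disk, the usual first knob is the per-DataNode balancer bandwidth. A minimal sketch (the bandwidth value, in bytes per second, is only an example, not a recommendation):

```bash
# Allow each DataNode to spend up to ~100 MB/s on balancing traffic.
hdfs dfsadmin -setBalancerBandwidth 104857600

# Re-run the balancer with the same 5% threshold used above.
hdfs balancer -threshold 5
```

Raising dfs.datanode.balance.max.concurrent.moves in hdfs-site.xml can also help, at the cost of more load on the DataNodes.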
11-16-2018
11:30 AM
Hi, I am having some issues with rebalancing HDFS on my HDP cluster (running 2.6). There is a node whose data directories are 100% full. I used the HDFS balancer in Ambari and also ran the balancer command (hdfs balancer), but I have not seen any change on that node... What's the way forward?
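To see how unevenly data is spread before and after balancing, the per-DataNode figures can be pulled from the NameNode; a quick sketch using standard commands (the threshold value is only an example):

```bash
# Configured capacity, DFS used and DFS remaining for every DataNode.
hdfs dfsadmin -report

# Run the balancer with an explicit threshold (allowed deviation, in percent,
# from the average cluster utilisation); the default is 10.
hdfs balancer -threshold 10
```

Note that in this Hadoop line the balancer only evens data out between DataNodes; it does not redistribute blocks between the disks of a single DataNode, so a node whose disks are all full needs blocks moved off the node itself.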
11-12-2018
12:20 PM
Thanks @Isaac Arnault. I checked the documentation. However, since the document says NiFi, Storm, and Kafka cannot co-exist on the same node and the example given was a 19-node cluster, I wanted to know how I can achieve this with a 3-node cluster. The general production guidelines for service distribution are:
NiFi, Storm, and Kafka should not be located on the same node or virtual machine.
NiFi, Storm, and Kafka must have a dedicated ZooKeeper cluster with at least three nodes.
If the HDF SAM is being used in an HDP cluster, the SAM should not be installed on the same node as the Storm worker node.
11-12-2018
12:14 PM
Thank you @Geoffrey Shelton Okot. I am trying to upgrade to HDF 3.2.
11-12-2018
11:58 AM
I need to map YARN queues to analytic Phoenix queries. That is, when user1 runs
"select col1, col2 from table1"
user1 runs it in yarn-queue1, and that way we can make sure adequate resources are given to the queue to avoid contention with lower-priority jobs. However, I have tested with the parameters yarn.scheduler.capacity.queue-mappings-override.enable=true and yarn.scheduler.capacity.queue-mappings=u:user1:queue1,g:group1:queue2, but I am not able to see the queries executed in YARN. How do I make sure these queries are catered for? N.B.: We use SQuirreL to connect to Phoenix.
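For reference, here is a sketch of how those settings would sit in the Capacity Scheduler configuration (queue names and capacities are invented examples); note that the mappings only affect applications that are actually submitted to YARN:

```properties
# Example queues plus the user/group-to-queue mappings referenced above.
yarn.scheduler.capacity.root.queues=default,queue1,queue2
yarn.scheduler.capacity.root.default.capacity=40
yarn.scheduler.capacity.root.queue1.capacity=40
yarn.scheduler.capacity.root.queue2.capacity=20

# u:<user>:<queue> and g:<group>:<queue> entries, evaluated left to right.
yarn.scheduler.capacity.queue-mappings=u:user1:queue1,g:group1:queue2
yarn.scheduler.capacity.queue-mappings-override.enable=true
```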
11-07-2018
05:01 AM
Hi @Bryan Bende @Matt Clarke, I am currently running HDF 2.1.4 on a single node as a cluster (256 GB RAM) in production. We now have two more nodes, making three, and want to install HDF 3.2. Could you please guide me on how to make this work? I am limited to 3 nodes...
09-19-2018
07:47 AM
Hi, I run HDP 2.6 and I need clarification on the disk usage. When I run the "hdfs dfs -du -h" command on the HDFS directory, it reports a surprisingly large size of 28.1 TB, but when I drill down to each day and sum it all up, it's just over 8 TB of data. Why is there such a huge difference?
28.1 T /in/feed/type
98.8 G /in/feed/type/day=20180701
112 G /in/feed/type/day=20180702
104.4 G /in/feed/type/day=20180703
....
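A sketch of commands for cross-checking the numbers (the path comes from the listing above). One common cause of a roughly threefold gap is that one figure includes the HDFS replication factor (3 by default) while the other does not; another is extra data under the directory outside the day= partitions:

```bash
# Summarised size of the whole directory, then the per-subdirectory breakdown.
hdfs dfs -du -s -h /in/feed/type
hdfs dfs -du -h /in/feed/type

# Content summary: directory/file counts, quotas and space consumed.
hdfs dfs -count -q /in/feed/type

# Total size and average block replication reported by fsck for the path.
hdfs fsck /in/feed/type | grep -iE 'total size|replication'
```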
Tags: Hadoop Core, HDFS
09-12-2018
01:13 PM
@rabbit s Reducing the memory settings for the Spark executors will reduce the total memory consumed, which should eventually allow more jobs (new threads) to be spawned...
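As a concrete illustration of the knobs being referred to, a hedged spark-submit sketch; the application name and all values are placeholders, not recommendations:

```bash
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 4g \
  --executor-cores 2 \
  --driver-memory 2g \
  my_job.py
```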
09-12-2018
09:56 AM
@Jay Kumar SenSharma Yes, I see an "hs_err_pid*" file. Please find the log attached. Also, the 2.22 TB is the unutilised RAM. Attachment: hs-err-pid931.txt
09-12-2018
09:15 AM
Hi, I am unable to launch more Spark jobs on my cluster due to the error message below. I still have 2.22 TB free according to the YARN UI. I run HDP 2.6.
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
What's the way forward? @Jay Kumar SenSharma
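"Cannot create GC thread. Out of system resources." generally points at an operating-system limit on threads or processes rather than a lack of free RAM, which is consistent with YARN still showing plenty of memory. A quick sketch of the limits worth checking on the submitting host (standard Linux commands, nothing cluster-specific):

```bash
# Max user processes/threads in the current shell for the submitting user.
ulimit -u

# Limits configured per user on this host.
grep -R nproc /etc/security/limits.conf /etc/security/limits.d/ 2>/dev/null

# System-wide ceilings and current thread count.
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max
ps -eLf | wc -l
```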
08-12-2018
09:42 PM
I find that the ListFile processor fails to pick up new files for a particular feed. That feed has a lot of small files, totalling about 500k files per day. When I delete the old processor and recreate it, it starts working again. What could be the problem? BTW, I use HDF 2.1.4... Thank you.
08-08-2018
07:29 AM
Thanks @Bryan Bende. I will try it and report back...
08-07-2018
06:02 AM
We currently use HDF-2.1.4.0-5 and would like more AD users to be able to log in to NiFi and create data flows themselves. Can you guide me on how to implement a username/password policy for each user? Currently, the AD users have been synced as we do in HDP 2.6, but I'm not sure how we'll get them to log in to NiFi with their AD accounts. @Bryan Bende @Matt Clarke
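For HDF/NiFi, AD logins are normally enabled by adding an ldap-provider to login-identity-providers.xml (managed through the NiFi configuration in Ambari) and pointing nifi.properties at it; NiFi must also be running over HTTPS for username/password login to be offered. A hedged sketch in which every URL, DN and filter is a placeholder for your AD environment, showing only the most commonly changed properties:

```xml
<!-- login-identity-providers.xml (placeholder values) -->
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=nifi-bind,ou=service accounts,dc=example,dc=com</property>
    <property name="Manager Password">********</property>
    <property name="Url">ldap://ad.example.com:389</property>
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>
```

In nifi.properties, nifi.security.user.login.identity.provider is then set to ldap-provider, and each AD user still needs access policies granted in NiFi (or via Ranger) before they can create flows.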
07-23-2018
12:54 PM
Hi, I created an external table on HAR files, but I'm not able to see meaningful records. Is this supported in Hive, and what SerDe should be used? I currently run HDP 2.6.
06-20-2018
09:50 AM
I am test-loading into a Phoenix table with an index on 3 of the columns. It started loading fine but then failed with the error message "org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 100 actions: org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed to build index for unexpected reason!". Please see the error attached. I use HDP 2.6.0.
06-13-2018
02:51 PM
@Denise O Regan Did you manage to resolve this?
06-08-2018
06:42 AM
Has anyone successfully deployed HDP on Isilon? I am having some errors... I'm using HDP 2.6.
07 Jun 2018 14:45:22,450 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ams_metrics_collector_process for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert yarn_resourcemanager_webui for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert oozie_server_status for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert yarn_app_timeline_server_webui for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ams_metrics_collector_autostart for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert hive_webhcat_server_status for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert nodemanager_health_summary for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert smartsense_gateway_status for an invalid cluster named cluster
07 Jun 2018 14:45:22,451 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ams_metrics_collector_hbase_master_process for an invalid cluster named cluster
06-05-2018
09:41 AM
This worked for me. Thanks @Anish Gupta
06-01-2018
05:42 AM
Hi, is there any way to reduce the polling interval for ListSFTP in NiFi? I use HDF 2.1.4.
05-07-2018
08:02 AM
I implemented log rotation on my HDP cluster (2.6.0) with Ambari 2.5.0.3. I could not restart any service with Ambari after changing the log4j properties. I got the error message below...
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py", line 40, in <module>
BeforeStartHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py", line 33, in hook
setup_hadoop()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py", line 93, in setup_hadoop
content=InlineTemplate(params.log4j_props)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 123, in action_create
content = self._get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 160, in _get_content
return content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 52, in __call__
return self.get_content()
File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 144, in get_content
rendered = self.template.render(self.context)
File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "<template>", line 101, in top-level template code
File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 371, in getattr
return getattr(obj, attribute)
ambari_jinja2.exceptions.UndefinedError: 'hadoop_log' is undefined
2018-04-19 07:15:51,479 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-04-19 07:15:51,488 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
Command failed after 1 tries
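The key line is "ambari_jinja2.exceptions.UndefinedError: 'hadoop_log' is undefined", meaning the edited log4j template now references a Jinja variable that Ambari does not provide. Rotation can usually be expressed with log4j's own settings instead of new template variables; a hedged sketch of the relevant RollingFileAppender (RFA) properties with example values:

```properties
# hdfs log4j.properties - rotation for the daemon log appender (example values)
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```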
03-29-2018
11:14 AM
I am encountering an issue and my NiFi cluster won't start. The error message is "Cluster is still in the process of voting on the appropriate Data Flow." I have tried using the last flow.xml.gz file in the archive directory, but it still gives the same error...
2018-03-29 13:03:48,739 INFO [NiFi Web Server-46] o.a.n.w.a.c.IllegalClusterStateExceptionMapper org.apache.nifi.cluster.manager.exception.IllegalClusterStateException: Cluster is still in the process of voting on the appropriate Data Flow.. Returning Conflict response.
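The message relates to NiFi's cluster flow election: after a restart the cluster waits until enough nodes have connected, or a timeout elapses, before electing the flow to use, and the UI returns this conflict until the election completes. A hedged sketch of the nifi.properties values that govern this (the values shown are examples):

```properties
# How long the cluster waits for nodes to join before electing a flow,
# and how many connected nodes end the election early.
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
```

If only some nodes are starting, the election can sit at this state for the full wait time, so it is worth checking nifi-app.log on each node to see which ones actually joined.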
03-23-2018
09:09 AM
Thanks @russ stevenson. From the logs, the issue was due to an incompatibility. I see EMC's compatibility chart refers to the Ambari version; they must mean the Hortonworks (HDP) version.
03-22-2018
02:11 PM
I am trying to add Isilon storage (using Add Host) to our HDP cluster. However, I encounter the error "Registering with the server... Registration with the server failed." Has anyone come across this? I'm using HDP 2.6 with Ambari 2.5.3.
02-27-2018
02:31 PM
@Prakash Punj Did you get the Access tab to work in Ranger? I would be glad if you could post the solution to this issue... Thank you.
01-29-2018
10:21 AM
Thanks @Aditya Sirna. I have checked using the sharelib command and the error persists... I also ran the sharelibupdate command before relaunching. Output below:
oozie admin -oozie http://{oozie-server}:11000/oozie -shareliblist
[Available ShareLib]
hive
distcp
mapreduce-streaming
spark
oozie
hcatalog
hive2
sqoop
hbase
pig
oozie admin -oozie http://{oozie-server}:11000/oozie -sharelibupdate
[ShareLib update status]
sharelibDirOld = hdfs://{name-node}:8020/user/oozie/share/lib/lib_20160610105311
host = http://{oozie-server}:11000/oozie
sharelibDirNew = hdfs://{name-node}:8020/user/oozie/share/lib/lib_20160610105311
status = Successful
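If the workflow still fails to find sharelib classes even though the listing above looks healthy, it is worth confirming that the job itself requests the system libpath. A hedged sketch of job.properties entries (host names and paths are placeholders, not values from this cluster):

```properties
nameNode=hdfs://<name-node>:8020
jobTracker=<resource-manager>:8050
oozie.use.system.libpath=true
# Optional extra jars for the action, outside the system sharelib:
# oozie.libpath=${nameNode}/user/${user.name}/myapp/lib
```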