Member since: 02-01-2018
Posts: 17
Kudos Received: 0
Solutions: 0
01-10-2019
11:53 AM
Dear Jay, thank you for your reply; as one of your followers, I was expecting you would respond. I have tried the steps you suggested, but the problem is still there. Yes, I am using a proxy, and yum install was working fine through this proxy for updates, the MySQL server, and NTP. It now produces this error even though I have made no change to the system other than setting up a local repository under /var/www/htmo/repo and creating .repo files for the offline repositories. It looks like there is some URL resolution issue through the proxy, but I am unable to find it. As I already mentioned, when I use an IP address with direct internet access (no proxy), the same local URLs are resolved and accessible to the yum command. I hope you can understand my problem.
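One thing worth trying for exactly this symptom (browser and wget work, yum through the proxy fails): exempt the local repository from the proxy on a per-repo basis with yum's `proxy=_none_` repo option. A minimal sketch; the repo id and stanza values other than the baseurl (taken from the error above) are illustrative:

```ini
[local-ambari]
name=local-ambari
baseurl=http://hdf03.ufm/repo/ambari/centos7/2.7.3.0-139/
enabled=1
gpgcheck=0
# Tell yum NOT to use the configured proxy for this one repository;
# proxy=_none_ is a standard per-repo option in yum.conf(5).
proxy=_none_
```

Alternatively, adding the local host to the `no_proxy` environment variable has a similar effect for tools that honor it.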
01-10-2019
07:01 AM
I am trying to install HDP 3.1.0.1 with Ambari using an offline repository and have completed all prerequisites, including: hosts entries, hostname, firewall disabled, iptables, etc.; Java update; MySQL installation with the connector; HTTP service installation; repository creation for offline HDP, HDP-UTILS, and Ambari; and modification of the .repo files to point to the local URLs. After all the above prerequisites I can see the web folders from a browser and can wget files, but when I try to run yum install ambari-server I get the following error:
*********************************************************************************************************************************************
[root@hdf03 ~]# yum install ambari-server
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.excellmedia.net
 * extras: centos.excellmedia.net
 * updates: centos.excellmedia.net
ambari-2.7.3.0 | 790 B 00:00:00
http://hdf03.ufm/repo/ambari/centos7/2.7.3.0-139/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for ambari-2.7.3.0: Damaged repomd.xml file
Trying other mirror.
One of the configured repositories failed (ambari Version - ambari-2.7.3.0),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
 1. Contact the upstream for the repository and get them to fix the problem.
 2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=ambari-2.7.3.0 ...
 4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable ambari-2.7.3.0
or
subscription-manager repos --disable=ambari-2.7.3.0
 5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=ambari-2.7.3.0.skip_if_unavailable=true
failure: repodata/repomd.xml from ambari-2.7.3.0: [Errno 256] No more mirrors to try.
http://hdf03.ufm/repo/ambari/centos7/2.7.3.0-139/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for ambari-2.7.3.0: Damaged repomd.xml file
************************************************************************************************************************************
Can anybody tell me where I am missing something or doing something wrong? In another installation, when I was connected to the open internet, the same offline-repository setup worked fine. What I have learned from previous installations is that the local repository URLs are not resolved during installation, even though the same URLs are accessible from a browser.
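For what it's worth, a "Damaged repomd.xml" from yum usually means the bytes yum received were not the XML the server holds, often an HTML error page injected by a proxy. One quick check is to validate a fetched copy as XML; this sketch uses a locally created sample file as a stand-in for one downloaded with wget:

```shell
# Stand-in for a metadata file fetched with, e.g.:
#   wget http://hdf03.ufm/repo/ambari/centos7/2.7.3.0-139/repodata/repomd.xml
cat > repomd.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo"></repomd>
EOF

# Validate that the file really is well-formed XML; a proxy's HTML
# error page fails this check, which yum then reports as "damaged".
if python3 -c 'import xml.etree.ElementTree as ET; ET.parse("repomd.xml")' 2>/dev/null; then
    echo "repomd.xml is well-formed XML"
else
    echo "repomd.xml is damaged (possibly a proxy error page)"
fi
```

If the copy fetched through the proxy fails this check while a direct fetch passes, the proxy is the culprit.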
11-16-2018
10:01 AM
Dear @Jay Kumar SenSharma, thank you very much for your reply, which worked perfectly in my case; my NiFi is up and running now. This is a great help indeed.
11-15-2018
09:10 AM
I am using NiFi to put data into HDFS and Hive. At some point I needed to restart NiFi, but it is not starting again, failing with a "Java Heap Space" error. Can anybody help me resolve this issue? The NiFi configurations are OK and it was running fine before. Below is the output of the log:
**********************************************************************************************************************************************
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 982, in restart
self.status(env)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 156, in status
check_process_status(status_params.nifi_node_pid_file)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
raise ComponentIsNotRunning()
ComponentIsNotRunning
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 278, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 993, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 142, in start
self.configure(env, is_starting = True)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 110, in configure
self.write_configurations(params, is_starting)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 227, in write_configurations
params.stack_support_encrypt_authorizers, params.stack_version_buildnum)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi_toolkit_util.py", line 393, in encrypt_sensitive_properties
Execute(encrypt_config_command, user=nifi_user, logoutput=False, environment=environment)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/var/lib/ambari-agent/tmp/nifi-toolkit-1.7.0.3.2.0.0-520/bin/encrypt-config.sh -v -b /usr/hdf/current/nifi/conf/bootstrap.conf -n /usr/hdf/current/nifi/conf/nifi.properties -f /var/lib/nifi/conf/flow.xml.gz -s '[PROTECTED]' -a /usr/hdf/current/nifi/conf/authorizers.xml -p '[PROTECTED]'' returned 255.
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of nifi.properties
2018/11/15 13:48:31 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source nifi.properties and destination nifi.properties are identical [/usr/hdf/current/nifi/conf/nifi.properties] so the original will be overwritten
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of authorizers.xml
2018/11/15 13:48:31 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source authorizers.xml and destination authorizers.xml are identical [/usr/hdf/current/nifi/conf/authorizers.xml] so the original will be overwritten
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Handling encryption of flow.xml.gz
2018/11/15 13:48:31 WARN [main] org.apache.nifi.properties.ConfigEncryptionTool: The source flow.xml.gz and destination flow.xml.gz are identical [/var/lib/nifi/conf/flow.xml.gz] so the original will be overwritten
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: bootstrap.conf: /usr/hdf/current/nifi/conf/bootstrap.conf
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) nifi.properties: /usr/hdf/current/nifi/conf/nifi.properties
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) login-identity-providers.xml: null
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) login-identity-providers.xml: null
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) authorizers.xml: /usr/hdf/current/nifi/conf/authorizers.xml
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) authorizers.xml: /usr/hdf/current/nifi/conf/authorizers.xml
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (src) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: (dest) flow.xml.gz: /var/lib/nifi/conf/flow.xml.gz
2018/11/15 13:48:31 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 154 properties from /usr/hdf/current/nifi/conf/nifi.properties
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.NiFiPropertiesLoader: Loaded 154 properties from /usr/hdf/current/nifi/conf/nifi.properties
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.ProtectedNiFiProperties: There are 1 protected properties of 5 sensitive properties (25%)
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Loaded NiFiProperties instance with 153 properties
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: Loaded authorizers content (74 lines)
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: No encrypted password property elements found in authorizers.xml
2018/11/15 13:48:32 INFO [main] org.apache.nifi.properties.ConfigEncryptionTool: No unencrypted password property elements found in authorizers.xml
2018/11/15 13:48:32 ERROR [main] org.apache.nifi.toolkit.encryptconfig.EncryptConfigMain:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
at java.lang.StringBuilder.append(StringBuilder.java:190)
at org.apache.commons.io.output.StringBuilderWriter.write(StringBuilderWriter.java:142)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2538)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2516)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2493)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2441)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:1084)
at org.apache.commons.io.IOUtils$toString.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133)
at org.apache.nifi.properties.ConfigEncryptionTool$_loadFlowXml_closure3$_closure29.doCall(ConfigEncryptionTool.groovy:666)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at groovy.lang.Closure.call(Closure.java:414)
at groovy.lang.Closure.call(Closure.java:430)
at org.codehaus.groovy.runtime.IOGroovyMethods.withCloseable(IOGroovyMethods.java:1622)
at org.codehaus.groovy.runtime.NioGroovyMethods.withCloseable(NioGroovyMethods.java:1759)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.runtime.metaclass.ReflectionMetaMethod.invoke(ReflectionMetaMethod.java:54)
Java heap space
usage: org.apache.nifi.toolkit.encryptconfig.EncryptConfigMain [-h] [options]
This tool enables easy encryption and decryption of configuration files for NiFi and its sub-projects. Unprotected files can be input to this tool to be
protected by a key in a manner that is understood by NiFi. Protected files, along with a key, can be input to this tool to be unprotected, for troubleshooting
or automation purposes.
-h,--help Show usage information (this message)
--nifiRegistry Specifies to target NiFi Registry. When this flag is not included, NiFi is the target.
When targeting NiFi:
-h,--help Show usage information (this message)
-v,--verbose Sets verbose mode (default false)
-n,--niFiProperties <file> The nifi.properties file containing unprotected config values (will be overwritten unless -o is specified)
-o,--outputNiFiProperties <file> The destination nifi.properties file containing protected config values (will not modify input nifi.properties)
-l,--loginIdentityProviders <file> The login-identity-providers.xml file containing unprotected config values (will be overwritten unless -i is
specified)
-i,--outputLoginIdentityProviders <file> The destination login-identity-providers.xml file containing protected config values (will not modify input
login-identity-providers.xml)
-a,--authorizers <file> The authorizers.xml file containing unprotected config values (will be overwritten unless -u is specified)
-u,--outputAuthorizers <file> The destination authorizers.xml file containing protected config values (will not modify input authorizers.xml)
-f,--flowXml <file> The flow.xml.gz file currently protected with old password (will be overwritten unless -g is specified)
-g,--outputFlowXml <file> The destination flow.xml.gz file containing protected config values (will not modify input flow.xml.gz)
-b,--bootstrapConf <file> The bootstrap.conf file to persist master key
-k,--key <keyhex> The raw hexadecimal key to use to encrypt the sensitive properties
-e,--oldKey <keyhex> The old raw hexadecimal key to use during key migration
-p,--password <password> The password from which to derive the key to use to encrypt the sensitive properties
-w,--oldPassword <password> The old password from which to derive the key during migration
-r,--useRawKey If provided, the secure console will prompt for the raw key value in hexadecimal form
-m,--migrate If provided, the nifi.properties and/or login-identity-providers.xml sensitive properties will be re-encrypted with
a new key
-x,--encryptFlowXmlOnly If provided, the properties in flow.xml.gz will be re-encrypted with a new key but the nifi.properties and/or
login-identity-providers.xml files will not be modified
-s,--propsKey <password|keyhex> The password or key to use to encrypt the sensitive processor properties in flow.xml.gz
-A,--newFlowAlgorithm <algorithm> The algorithm to use to encrypt the sensitive processor properties in flow.xml.gz
-P,--newFlowProvider <algorithm> The security provider to use to encrypt the sensitive processor properties in flow.xml.gz
-c,--translateCli Translates the nifi.properties file to a format suitable for the NiFi CLI tool
When targeting NiFi Registry using the --nifiRegistry flag:
-h,--help Show usage information (this message)
-v,--verbose Sets verbose mode (default false)
-p,--password <password> Protect the files using a password-derived key. If an argument is not provided to this flag, interactive mode will
be triggered to prompt the user to enter the password.
-k,--key <keyhex> Protect the files using a raw hexadecimal key. If an argument is not provided to this flag, interactive mode will be
triggered to prompt the user to enter the key.
--oldPassword <password> If the input files are already protected using a password-derived key, this specifies the old password so that the
files can be unprotected before re-protecting.
--oldKey <keyhex> If the input files are already protected using a key, this specifies the raw hexadecimal key so that the files can
be unprotected before re-protecting.
-b,--bootstrapConf <file> The bootstrap.conf file containing no master key or an existing master key. If a new password or key is specified
(using -p or -k) and no output bootstrap.conf file is specified, then this file will be overwritten to persist the
new master key.
-B,--outputBootstrapConf <file> The destination bootstrap.conf file to persist master key. If specified, the input bootstrap.conf will not be
modified.
-r,--nifiRegistryProperties <file> The nifi-registry.properties file containing unprotected config values, overwritten if no output file specified.
-R,--outputNifiRegistryProperties <file> The destination nifi-registry.properties file containing protected config values.
-a,--authorizersXml <file> The authorizers.xml file containing unprotected config values, overwritten if no output file specified.
-A,--outputAuthorizersXml <file> The destination authorizers.xml file containing protected config values.
-i,--identityProvidersXml <file> The identity-providers.xml file containing unprotected config values, overwritten if no output file specified.
-I,--outputIdentityProvidersXml <file> The destination identity-providers.xml file containing protected config values.
--decrypt Can be used with -r to decrypt a previously encrypted NiFi Registry Properties file. Decrypted content is printed to
STDOUT.
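For context: the OutOfMemoryError above is thrown while encrypt-config.sh reads flow.xml.gz into memory, and the toolkit's default JVM heap is small. A sketch of one common workaround, assuming (not confirmed here) that the toolkit launch script honors JAVA_OPTS; the env-file name and heap sizes are illustrative:

```shell
# Persist a larger heap for the NiFi toolkit in a small env file
# (file name and -Xms/-Xmx values are illustrative, not from this thread).
cat > nifi-toolkit-env.sh <<'EOF'
export JAVA_OPTS="-Xms1g -Xmx4g"
EOF
. ./nifi-toolkit-env.sh
echo "JAVA_OPTS=$JAVA_OPTS"

# Then re-run the failing command from the log, e.g.:
#   /var/lib/ambari-agent/tmp/nifi-toolkit-1.7.0.3.2.0.0-520/bin/encrypt-config.sh ...
```

If Ambari invokes the toolkit itself, the heap override has to be visible to the agent's environment rather than an interactive shell.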
Labels:
- Apache NiFi
10-04-2018
10:36 AM
I am installing Ambari with HDF 3.2.0, and there is an error on deployment of the selected services, as below. The error is "Cannot find a valid baseurl for repo: HDF-3.2-repo-1", and during this error it creates the file /etc/yum.repos.d/ambari-hdf-1.repo, which has empty baseurl entries as below:
*******************************************************************************************************************************
[root@ufm yum.repos.d]# cat ambari-hdf-1.repo
[HDF-3.2-repo-1]
name=HDF-3.2-repo-1
baseurl=
path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=
path=/
enabled=1
gpgcheck=0
[root@ufm yum.repos.d]#
*******************************************************************************************************************************
The deployment error is below. Can anybody guide me on where the mistake is?
*******************************************************************************************************************************
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 749, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install hdf-select', exited with code '1', message: '
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: HDF-3.2-repo-1
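The empty baseurl= lines in the generated ambari-hdf-1.repo are the direct cause of "Cannot find a valid baseurl": Ambari wrote the file without a repository URL, which typically means no base URL was saved for this OS in the cluster's repository settings (the version/repository page in the Ambari UI). As a rough sanity check, a populated stanza should look like the following; the baseurl shown is a hypothetical local-mirror path, not taken from this thread:

```ini
[HDF-3.2-repo-1]
name=HDF-3.2-repo-1
# Hypothetical local mirror URL -- substitute your actual repo path,
# and make sure the same URL is entered in Ambari's repository settings
# so Ambari regenerates this file with a non-empty baseurl.
baseurl=http://your-local-repo-host/repo/HDF/centos7/3.2.0.0/
path=/
enabled=1
gpgcheck=0
```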
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
09-14-2018
02:30 PM
Dear Jay, thanks for your response. I was using a proxy for internet access; I have tried both ways, with the proxy and with the proxy disabled, but the results are the same. When I use wget on the host where the single node is configured, it is successful, as below:
*********************************************************************************************************************************************
[root@ufm yum.repos.d]# wget http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml
--2018-09-14 17:43:23-- http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml
Resolving ufm.hdp03.uat (ufm.hdp03.uat)... 10.200.40.160, 10.200.41.61
Connecting to ufm.hdp03.uat (ufm.hdp03.uat)|10.200.40.160|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2988 (2.9K) [text/xml]
Saving to: 'repomd.xml'
100%[=============================================================================================================================>] 2,988 --.-K/s in 0s
2018-09-14 17:43:23 (155 MB/s) - 'repomd.xml' saved [2988/2988]
*********************************************************************************************************************************************
It is clear that the local host is responding properly as a web server when downloading the above file, but when I try to run "yum install hdp-select -y" it produces the error below:
**********************************************************************************************************************************************
[root@ufm yum.repos.d]# yum install hdp-select -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: centos.excellmedia.net
* c7-media:
* extras: centos.excellmedia.net
* updates: centos.excellmedia.net
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
One of the configured repositories failed (HDP Version - HDP-3.0),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=HDP-3.0 ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable HDP-3.0
or
subscription-manager repos --disable=HDP-3.0
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=HDP-3.0.skip_if_unavailable=true
failure: repodata/repomd.xml from HDP-3.0: [Errno 256] No more mirrors to try.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
*******************************************************************************************************************************************************************************
Some are saying this is due to a DNS lookup failure, but when I cat the /etc/resolv.conf file there is only a single entry, like this:
*********************************************************************************************************
[root@ufm yum.repos.d]# cat /etc/resolv.conf
# Generated by NetworkManager
search hdp03.uat
[root@ufm yum.repos.d]#
*********************************************************************************************************
Even when I add 8.8.8.8 here, it changes nothing. Can you guide me further on what I am missing, or on the way forward?
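When every mirror attempt returns 503 while a plain wget of the same URL succeeds, the usual suspect is not DNS but a proxy that yum uses and that cannot reach the local hostname. A small check, writing its finding to a file (the paths are the standard yum configuration locations):

```shell
# Look for a proxy setting anywhere yum reads configuration from;
# if one exists, the local repo host likely needs to bypass it
# (proxy=_none_ in the repo stanza, or no_proxy in the environment).
{
  grep -hi '^proxy' /etc/yum.conf /etc/yum.repos.d/*.repo 2>/dev/null \
    || echo "no proxy configured for yum"
} > yum-proxy-check.txt
cat yum-proxy-check.txt

# Then confirm the repo answers when any proxy is bypassed, e.g.:
#   curl -I --noproxy '*' http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml
```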
09-14-2018
02:44 AM
I am trying to install HDP 3.0 on a single node using a local repository and am stuck on the Install and Test stage, where it produces error 503 (HTTP Service Unavailable). The local repositories are properly created and are accessible through a web browser, as in the attached image. All services are running properly, but when I try to run Install and Test, the installation errors out as in the attached image. Can somebody help me figure out where I am wrong and how to resolve this issue? The error log is given below:
************************************************************************************************************************************
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 749, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install hdp-select', exited with code '1', message: 'http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
Trying other mirror.
One of the configured repositories failed (HDP-3.0-GPL-repo-1),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=HDP-3.0-GPL-repo-1 ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable HDP-3.0-GPL-repo-1
or
subscription-manager repos --disable=HDP-3.0-GPL-repo-1
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=HDP-3.0-GPL-repo-1.skip_if_unavailable=true
failure: repodata/repomd.xml from HDP-3.0-GPL-repo-1: [Errno 256] No more mirrors to try.
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/repodata/repomd.xml: [Errno 14] HTTP Error 503 - Service Unavailable
'
stdout:
2018-09-13 16:34:39,667 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2018-09-13 16:34:39,673 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-09-13 16:34:39,674 - Group['livy'] {}
2018-09-13 16:34:39,676 - Group['spark'] {}
2018-09-13 16:34:39,676 - Group['hdfs'] {}
2018-09-13 16:34:39,676 - Group['hadoop'] {}
2018-09-13 16:34:39,676 - Group['users'] {}
2018-09-13 16:34:39,677 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,678 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,680 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,681 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,682 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,683 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-09-13 16:34:39,684 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2018-09-13 16:34:39,685 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,687 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2018-09-13 16:34:39,688 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-09-13 16:34:39,689 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-09-13 16:34:39,690 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,691 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,692 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-13 16:34:39,693 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-09-13 16:34:39,695 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-09-13 16:34:39,707 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-09-13 16:34:39,708 - Group['hdfs'] {}
2018-09-13 16:34:39,709 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-09-13 16:34:39,710 - FS Type: HDFS
2018-09-13 16:34:39,711 - Directory['/etc/hadoop'] {'mode': 0755}
2018-09-13 16:34:39,711 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-09-13 16:34:39,732 - Repository['HDP-3.0-repo-1'] {'append_to_file': False, 'base_url': 'http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-09-13 16:34:39,742 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-09-13 16:34:39,743 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-09-13 16:34:39,743 - Repository['HDP-3.0-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-09-13 16:34:39,747 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-09-13 16:34:39,747 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-09-13 16:34:39,748 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://ufm.hdp03.uat/repo/HDP-UTILS/centos7/1.1.0.22/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-09-13 16:34:39,751 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP-GPL/centos7/3.0.0.0-1634/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://ufm.hdp03.uat/repo/HDP-UTILS/centos7/1.1.0.22/\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-09-13 16:34:39,752 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-09-13 16:34:39,752 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-09-13 16:34:39,987 - Skipping installation of existing package unzip
2018-09-13 16:34:39,987 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-09-13 16:34:40,129 - Skipping installation of existing package curl
2018-09-13 16:34:40,129 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-09-13 16:34:40,269 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
2018-09-13 16:34:41,186 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
Command failed after 1 tries
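For what it's worth, a repeated HTTP 503 on a *local* repository URL (as in the log above) often means the requests are being routed through a corporate proxy that cannot reach the internal hostname. yum supports disabling the proxy per repository with `proxy=_none_`. A hedged sketch of what the generated repo entry could look like with that option added (repo id and baseurl copied from the log above; adjust to your environment):

```ini
# /etc/yum.repos.d/ambari-hdp-1.repo (fragment)
# Assumption: a global proxy in /etc/yum.conf or the environment is
# intercepting requests to the local repo host; proxy=_none_ bypasses it
# for this repository only.
[HDP-3.0-repo-1]
name=HDP-3.0-repo-1
baseurl=http://ufm.hdp03.uat/repo/HDP/centos7/3.0.0.0-1634/
enabled=1
gpgcheck=0
proxy=_none_
```

Alternatively, adding the local repo host to the `no_proxy` environment variable achieves a similar effect for tools that honor it.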
Labels: Apache Ambari, Apache Hadoop
05-14-2018
11:23 AM
Dear @Shu, it worked perfectly. The concept was simple but great, thank you!!
05-11-2018
10:39 AM
I am working to clean CSV files and am stuck due to additional commas in the headers. Can anybody guide me on how to replace the header line with a custom header?
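In NiFi this is commonly handled with a ReplaceText processor restricted to the first line, but the underlying idea can be sketched in plain Python (the file contents and header names below are invented for illustration):

```python
def replace_header(lines, custom_header):
    """Return the CSV lines with the first (broken) header line replaced."""
    return [custom_header] + lines[1:]

# Broken header with stray commas, followed by data rows.
raw = ["name,,age,,city", "alice,30,NYC", "bob,25,LA"]
fixed = replace_header(raw, "name,age,city")
print(fixed[0])  # name,age,city
```

The same replacement can then be wired into whatever reads and rewrites the file.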
05-09-2018
12:27 PM
Thanks, it worked perfectly!!
05-09-2018
10:37 AM
I am using the NiFi ReplaceText processor to delete some strings in a CSV file, but somehow it is not working. Below is the search string: (,Minutes.*\n|CellID|=|"|LogicRNCID|CGI|CellIndex|GCELL:LABEL) Below is a snapshot: Can anyone tell me how to specify multiple strings in the search box, and how to delete them via the Replacement Value?
Labels: Apache NiFi
05-04-2018
12:25 PM
Thanks, that makes it clear to me, and the idea of NiFi Registry and version-controlled flows is also good.
05-04-2018
10:41 AM
Dear Matt, thanks for the detailed answer; let me try to explain again. Currently I am using NiFi on a Windows machine, where I create a process in the browser GUI and it runs fine. As I keep building processes in the GUI, I am curious about a mechanism to run a finalized process independent of the development environment (the GUI), so that finished processes can run uninterrupted on whichever platform, Windows or Linux. During process building there are many deletions, starts, stops, and other activities in the GUI.
05-03-2018
07:32 AM
As shown in the picture, I am working to clean CSV files and need to design a process that will: 1. Remove the first two lines of the CSV file. 2. Assign custom headers to all CSV fields. 3. Further split a column into multiple columns based on a delimited field (/) and assign headers to the new fields.
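The three steps above can be sketched in plain Python to make the intended transformation concrete (the sample data, column layout, and header names are all invented, since the actual picture is not available):

```python
import csv
import io

# Invented sample: two junk lines, then rows whose middle column
# packs city/state/country into one '/'-delimited field.
raw = """junk line 1
junk line 2
alice,NYC/NY/USA,30
bob,LA/CA/USA,25
"""

rows = list(csv.reader(io.StringIO(raw)))[2:]         # 1. drop the first two lines
header = ["name", "city", "state", "country", "age"]  # 2. custom headers
out = [header]
for name, location, age in rows:                      # 3. split the '/' column
    city, state, country = location.split("/")
    out.append([name, city, state, country, age])

print(out[1])  # ['alice', 'NYC', 'NY', 'USA', '30']
```

In NiFi the equivalent is typically a chain of ReplaceText (or a record-based processor) steps, but the sketch shows what the end-to-end transformation needs to produce.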
Labels: Apache NiFi
05-02-2018
08:55 PM
I have built a process in NiFi for Windows that runs ListSFTP against a Linux machine, FetchSFTP for the listed files, and PutSFTP to another Linux server. This process is running fine and all files are received on the target server. Now I want to run this process as a cron job on a Linux machine, independent of the NiFi instance running in a browser on a Windows machine. Can anybody guide me on how to configure the Linux machine to run this process as a job?
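Worth noting: a NiFi flow keeps running on the NiFi host even with the browser closed (the GUI is only an editor), so the flow is already independent of the browser. If a plain cron-driven transfer on Linux is preferred instead, a hedged configuration sketch could look like the following (hostnames, users, paths, and key file are all placeholders, not from the original post):

```shell
# /etc/cron.d/csv-transfer -- run the transfer every 15 minutes:
#   */15 * * * *  nifi  /opt/scripts/transfer.sh

# /opt/scripts/transfer.sh:
#!/bin/sh
# Pull new files from the source host, then push them to the target host.
# Assumes passwordless key-based SSH is already set up for both hosts.
sftp -i ~/.ssh/id_rsa user@source.example.com <<'EOF'
get /data/outgoing/*.csv /tmp/staging/
EOF
sftp -i ~/.ssh/id_rsa user@target.example.com <<'EOF'
put /tmp/staging/*.csv /data/incoming/
EOF
```

This is a sketch of the approach, not a drop-in script; real use would add error handling and a marker for already-transferred files (which is what ListSFTP tracks for you in NiFi).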
Labels: Apache NiFi, Apache Phoenix