Member since: 06-21-2017
Posts: 18
Kudos Received: 0
Solutions: 0
01-18-2019
06:34 PM
I always get stuck at: https://demo.hortonworks.com:8443/gateway/knoxsso/knoxauth/login.html?originalUrl=http%3A%2F%2Fdemo.hortonworks.com%3A8080%2F%23%2Flogin?redirected=true
01-18-2019
06:31 PM
The document is at: https://community.hortonworks.com/articles/151939/hdp-securitygovernance-demo-kit.html

After I edited C:\Windows\System32\drivers\etc\hosts and added "my_IP demo.hortonworks.com", I can now reach the login page. But after entering admin/BadPass#1, I am redirected back to the login page again. I am still looking into this; if anyone has any ideas, please let me know. Thanks.
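As a quick sanity check (just my own suggestion, assuming the hosts entry is the only name resolution in play and curl is available on the workstation), it may be worth confirming that demo.hortonworks.com really resolves to the instance and that both the Knox SSO form and the Ambari port answer:

# Confirm the hosts-file entry resolves to the instance's IP:
ping demo.hortonworks.com

# Confirm the Knox SSO login page and the Ambari port both respond:
curl -vk "https://demo.hortonworks.com:8443/gateway/knoxsso/knoxauth/login.html"
curl -v "http://demo.hortonworks.com:8080/"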
01-17-2019
06:03 PM
I launched an instance from this AMI in AWS: HDP 3.1.0 with Hortoniabank demo including GDPR scenarios plus Knox SSO/NiFi v6 - 1/11/2019 - ami-0b705846bc9b97e11. How do I access/log in to the instance? I tried to access Ambari as follows but failed: http://<IP>:8080 and https://<IP>:8443. Are there any introductory documents on how to explore this demo AMI? Thanks.
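In case it helps while this is pending (my own sketch, assuming 8080/8443 are the right ports and you have SSH access to the instance), a quick way to see which endpoint is actually listening is to probe the ports directly:

# From your workstation (substitute the instance's public IP for <IP>):
nc -vz -w 5 <IP> 8080
nc -vz -w 5 <IP> 8443

# Or on the instance itself, list what is listening on those ports:
sudo netstat -tlnp | grep -E ':(8080|8443)'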
11-15-2018
08:32 PM
I tried to add security to one of my HDF clusters, but failed. So I started with a single local NiFi node, but it still failed. Here are my main steps, following the links listed below:

1. Installed NiFi 1.5
2. Installed NiFi Toolkit 1.5
3. Ran the toolkit: ./tls-toolkit.sh standalone -n 'localhost' -C 'CN=ML,OU=NIFI' -O -o ../security_output
4. Copied the generated keystore, truststore, and nifi properties to the nifi/config folder
5. Imported the generated certificate into the Chrome browser
6. Modified authorizers.xml as attached
7. Did the required restarts

Now when I enter the URL below in the browser, I see the following error:

https://localhost:9443/nifi/
Insufficient Permissions
- home
Unknown user with identity 'CN=ML, OU=NIFI'. Contact the system administrator.

authorizers.xml
-------------------------------------------
<userGroupProvider>
  <identifier>file-user-group-provider</identifier>
  <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
  <property name="Users File">./conf/users.xml</property>
  <property name="Legacy Authorized Users File"></property>
  <property name="Initial User Identity 1">CN=ML,OU=NIFI</property>
</userGroupProvider>
<accessPolicyProvider>
  <identifier>file-access-policy-provider</identifier>
  <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
  <property name="User Group Provider">file-user-group-provider</property>
  <property name="Authorizations File">./conf/authorizations.xml</property>
  <property name="Initial Admin Identity">CN=ML,OU=NIFI</property>
  <property name="Legacy Authorized Users File"></property>
  <property name="Node Identity 1"></property>
</accessPolicyProvider>
-------------------------------------------

Generated users.xml
-------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
  <groups/>
  <users>
    <user identifier="10375150-f717-3891-afda-e009d1f1184b" identity="CN=ML,OU=NIFI"/>
  </users>
</tenants>
-------------------------------------------

Generated authorizations.xml: see attached image
nifi-app.log: see attached image
nifi.properties: see attached image

Some links I referred to:
https://community.hortonworks.com/content/kbentry/58233/using-the-tls-toolkit-to-simplify-security.html
https://pierrevillard.com/2016/11/29/apache-nifi-1-1-0-secured-cluster-setup/
https://lists.apache.org/thread.html/%3CCAG6AKAEXOJtDRw07L=quzn+EMO7N5=n0_B8tBC-w6Edd2vptYw@mail.gmail.com%3E

Here is what I tried:
- Both NiFi 1.5 and NiFi 1.8
- AWS instance
- Local virtual machine (Ubuntu 18.04)

This should be straightforward, but I just can't figure out what I did wrong or what I missed. I have been stuck here for days. Your help is really appreciated. Thanks a lot.
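One possible cause worth checking (just a guess, based on the space after the comma in the error message): NiFi compares user identities as exact strings, and the browser certificate is being reported as 'CN=ML, OU=NIFI' while users.xml and the Initial Admin Identity use 'CN=ML,OU=NIFI' without the space. A rough sketch of how to realign them, assuming a standalone node with the default file-based providers:

# Stop NiFi and remove the generated files so they are rebuilt from authorizers.xml:
./bin/nifi.sh stop
rm ./conf/users.xml ./conf/authorizations.xml

# In authorizers.xml, make both identities match the DN exactly as reported,
# including the space after the comma:
#   <property name="Initial User Identity 1">CN=ML, OU=NIFI</property>
#   <property name="Initial Admin Identity">CN=ML, OU=NIFI</property>

./bin/nifi.sh start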
Labels:
- Cloudera DataFlow (CDF)
05-14-2018
07:37 PM
For your reference, here is the link to bdutil: https://github.com/GoogleCloudPlatform/bdutil
05-14-2018
07:36 PM
I am creating an HDP cluster in Google Cloud Platform (GCP) using bdutil, but it failed with the error messages below.

Command used:
./bdutil -e platforms/hdp/ambari_env.sh -P hdp -a deploy

Messages:
......................
Mon May 14 19:08:55 UTC 2018: Invoking on workers: ./ambari-setup.sh
..Mon May 14 19:08:55 UTC 2018: Invoking on master: ./ambari-setup.sh
.Mon May 14 19:08:55 UTC 2018: Waiting on async 'ssh' jobs to finish. Might take a while...
Mon May 14 19:09:40 UTC 2018: Exited 1 : gcloud --project=migration-test-200817 --quiet --verbosity=info compute ssh hdp-a-m --command=sudo su -l -c "cd ${PWD} && ./ambari-setup.sh" 2>>ambari-setup_deploy.stderr 1>>ambari-setup_deploy.stdout --ssh-flag=-tt --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-east1-b
Mon May 14 19:09:40 UTC 2018: Fetching on-VM logs from hdp-a-m
INFO: Display format "default".
Mon May 14 19:09:45 UTC 2018: Exited 1 : gcloud --project=migration-test-200817 --quiet --verbosity=info compute ssh hdp-a-w-1 --command=sudo su -l -c "cd ${PWD} && ./ambari-setup.sh" 2>>ambari-setup_deploy.stderr 1>>ambari-setup_deploy.stdout --ssh-flag=-tt --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-east1-b
Mon May 14 19:09:45 UTC 2018: Fetching on-VM logs from hdp-a-w-1
Mon May 14 19:09:45 UTC 2018: Exited 1 : gcloud --project=migration-test-200817 --quiet --verbosity=info compute ssh hdp-a-w-0 --command=sudo su -l -c "cd ${PWD} && ./ambari-setup.sh" 2>>ambari-setup_deploy.stderr 1>>ambari-setup_deploy.stdout --ssh-flag=-tt --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-east1-b
Mon May 14 19:09:45 UTC 2018: Fetching on-VM logs from hdp-a-w-0
INFO: Display format "default".
INFO: Display format "default".
Mon May 14 19:09:46 UTC 2018: Command failed: wait ${SUBPROC} on line 326.
Mon May 14 19:09:46 UTC 2018: Exit code of failed command: 1
Mon May 14 19:09:46 UTC 2018: Detailed debug info available in file: /tmp/bdutil-20180514-190713-KRH/debuginfo.txt
Mon May 14 19:09:46 UTC 2018: Check console output for error messages and/or retry your command.

Here are the details from /tmp/bdutil-20180514-190713-KRH/debuginfo.txt:
hdp-a-m: which: no apt-get in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
hdp-a-m: mkdir: cannot create directory `/hadoop/hdfs': No such file or directory

Here is the cluster configuration I am setting up:
CONFIGBUCKET='mybucket'
PROJECT='myproject'
GCE_IMAGE=''
GCE_IMAGE_PROJECT='centos-cloud'
GCE_IMAGE_FAMILY='centos-6'
GCE_ZONE='us-east1-b'
GCE_NETWORK='default'
GCE_TAGS='bdutil'
PREEMPTIBLE_FRACTION=0.0
PREFIX='hdp-a'
NUM_WORKERS=2
MASTER_HOSTNAME='hdp-a-m'
WORKERS='hdp-a-w-0 hdp-a-w-1'
BDUTIL_GCS_STAGING_DIR='gs://tsy-hdp/bdutil-staging/hdp-a-m'

Thanks for any help.
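For digging further, a diagnostic sketch only (not a known fix; the -a delete form simply mirrors the -a deploy command above and should be checked against the bdutil README):

# Look at the per-VM logs that bdutil already collected:
less /tmp/bdutil-20180514-190713-KRH/debuginfo.txt

# SSH to the master and check what ambari-setup.sh found on the VM:
gcloud compute ssh hdp-a-m --zone=us-east1-b
# On the VM:
cat /etc/redhat-release   # should report CentOS 6, per GCE_IMAGE_FAMILY above
ls -ld /hadoop            # the failing mkdir tries to create /hadoop/hdfs under this

# Tear down the partial cluster before retrying the deploy:
./bdutil -e platforms/hdp/ambari_env.sh -P hdp -a delete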
Labels:
- Hortonworks Data Platform (HDP)
04-24-2018
03:25 PM
I followed the link but failed with the following message:

Downloading packages:
warning: /var/cache/yum/x86_64/7Server/mysql-connectors-community/packages/mysql-community-release-el7-5.noarch.rpm: V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Public key for mysql-community-release-el7-5.noarch.rpm is not installed
mysql-community-release-el7-5.noarch.rpm | 6.0 kB 00:00
Retrieving key from file:/etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
GPG key retrieval failed: [Errno 14] curl#37 - "Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql"

Thanks for any help.
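In case it is useful, a hedged workaround sketch (the key URL is an assumption and should be verified against MySQL's documentation): the error says yum was told to verify against /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql, but that file does not exist yet, so importing MySQL's public key first, or skipping the GPG check for just the release package, may get past it:

# Option 1: import the MySQL GPG key before installing (verify the URL first):
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql

# Option 2: install the release package once without the GPG check:
sudo yum install --nogpgcheck mysql-community-release-el7-5.noarch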
03-21-2018
10:03 PM
I am testing on two buckets: one has default encryption set to None, and the other has default encryption set to AWS-KMS. The IAM role should let this be figured out automatically, just as it is when I use the "aws s3" command. With your setup, I would have to set both fs.s3a.server-side-encryption-algorithm (with value "SSE-KMS") and fs.s3a.server-side-encryption.key. With that setup, both buckets would use the same encryption, which is not what I want. Also, I would need to put the server-side encryption key in configuration, which is not safe. I want HDP/Hadoop to use the IAM role and pick the right server-side encryption algorithm and key for each bucket automatically, based on the IAM role setup. Thanks.
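For what it is worth, S3A releases newer than the Hadoop 2.7.3 in this cluster support per-bucket configuration (fs.s3a.bucket.<bucket>.*), which would at least keep the two buckets' encryption settings separate, even though the key still has to be configured rather than inferred from the IAM role. A sketch of the idea, with a placeholder KMS key ARN and an illustrative snippet file (check the hadoop-aws release notes for the exact minimum version):

# Per-bucket S3A overrides; only the encrypted bucket gets SSE-KMS settings.
# Write an illustrative snippet whose properties would then go into the custom
# core-site via Ambari (key ARN below is a placeholder):
cat <<'EOF' >> /tmp/core-site-s3a-per-bucket.xml
<property>
  <name>fs.s3a.bucket.encryption-bucket.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value>
</property>
<property>
  <name>fs.s3a.bucket.encryption-bucket.server-side-encryption.key</name>
  <value>arn:aws:kms:us-east-2:111122223333:key/EXAMPLE-KEY-ID</value>
</property>
EOF
# The unencrypted bucket gets no such overrides, so its writes stay unencrypted.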
03-21-2018
06:34 PM
Thanks for your suggestion. I ran hadoop distcp and got a similar error while writing to the encryption bucket:

hadoop distcp temp/testfile.txt s3a://encryption-bucket/data-sources-reports/media_mix_model/test/
18/03/21 14:24:26 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[temp/testfile.txt], targetPath=s3a://encryption-bucket/data-sources-reports/media_mix_model/test, targetPathExists=true, filtersFile='null', verboseLog=false}
18/03/21 14:24:26 INFO client.RMProxy: Connecting to ResourceManager at ip-10-0-0-97.us-east-2.compute.internal/10.0.0.97:8050
18/03/21 14:24:26 INFO client.AHSProxy: Connecting to Application History server at ip-10-0-0-97.us-east-2.compute.internal/10.0.0.97:10200
18/03/21 14:24:27 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; dirCnt = 0
18/03/21 14:24:27 INFO tools.SimpleCopyListing: Build file listing completed.
18/03/21 14:24:27 INFO tools.DistCp: Number of paths in the copy list: 1
18/03/21 14:24:27 INFO tools.DistCp: Number of paths in the copy list: 1
18/03/21 14:24:27 INFO client.RMProxy: Connecting to ResourceManager at ip-10-0-0-97.us-east-2.compute.internal/10.0.0.97:8050
18/03/21 14:24:27 INFO client.AHSProxy: Connecting to Application History server at ip-10-0-0-97.us-east-2.compute.internal/10.0.0.97:10200
18/03/21 14:24:27 INFO mapreduce.JobSubmitter: number of splits:1
18/03/21 14:24:27 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521586586857_0006
18/03/21 14:24:27 INFO impl.YarnClientImpl: Submitted application application_1521586586857_0006
18/03/21 14:24:27 INFO mapreduce.Job: The url to track the job: http://ip-10-0-0-97.us-east-2.compute.internal:8088/proxy/application_1521586586857_0006/
18/03/21 14:24:27 INFO tools.DistCp: DistCp job-id: job_1521586586857_0006
18/03/21 14:24:27 INFO mapreduce.Job: Running job: job_1521586586857_0006
18/03/21 14:24:33 INFO mapreduce.Job: Job job_1521586586857_0006 running in uber mode : false
18/03/21 14:24:33 INFO mapreduce.Job: map 0% reduce 0%
18/03/21 14:24:43 INFO mapreduce.Job: map 100% reduce 0%
18/03/21 14:24:45 INFO mapreduce.Job: Task Id : attempt_1521586586857_0006_m_000000_0, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://ip-10-0-0-75.us-east-2.compute.internal:8020/user/mli/temp/testfile.txt --> s3a://encryption-bucket/test/testfile.txt
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://ip-10-0-0-75.us-east-2.compute.internal:8020/user/mli/temp/testfile.txt to s3a://encryption-bucket/data-sources-reports/media_mix_model/test/testfile.txt
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296)
... 10 more
Caused by: org.apache.hadoop.fs.s3a.AWSClientIOException: saving output on data-sources-reports/media_mix_model/test/.distcp.tmp.attempt_1521586586857_0006_m_000000_0: com.amazonaws.AmazonClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: dd9f32656c595bd40176556bd2f65a68 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: encryption-bucket, key: data-sources-reports/media_mix_model/test/.distcp.tmp.attempt_1521586586857_0006_m_000000_0): Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: dd9f32656c595bd40176556bd2f65a68 in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: encryption-bucket, key: test/.distcp.tmp.attempt_1521586586857_0006_m_000000_0)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:144)
at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:121)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:260)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:183)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:123)
at ........
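One way to test whether the missing SSE-KMS settings are the whole story (a sketch only; the KMS key ARN is a placeholder, and whether this cluster's S3A honors SSE-KMS at all is part of what the test would show) is to pass them for a single distcp run via generic -D options:

hadoop distcp \
  -Dfs.s3a.server-side-encryption-algorithm=SSE-KMS \
  -Dfs.s3a.server-side-encryption.key=arn:aws:kms:us-east-2:111122223333:key/EXAMPLE-KEY-ID \
  temp/testfile.txt \
  s3a://encryption-bucket/data-sources-reports/media_mix_model/test/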
03-21-2018
04:03 PM
I added this setting to the custom core-site using Ambari for HDFS:

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>org.apache.hadoop.fs.s3a.SharedInstanceProfileCredentialsProvider</value>
</property>

following these links:
https://community.hortonworks.com/questions/138691/how-to-configure-hdp26-to-use-s3.html
https://hadoop.apache.org/docs/r2.8.0/hadoop-aws/tools/hadoop-aws/index.html#S3A

But the issue remains. Here are my versions: HDP 2.6.1, Hadoop (HDFS/YARN) 2.7.3.

Two questions:
1. Will Hadoop 2.8.0 solve this issue with the above setting?
2. When will HDP support Hadoop 2.8 or a later version?

Thanks.
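A small check that may help narrow this down (my own sketch): confirm the values the client configuration actually resolves after the Ambari change, and which hadoop-aws build is on the classpath:

# Effective client-side values for the S3A settings discussed above:
hdfs getconf -confKey fs.s3a.aws.credentials.provider
hdfs getconf -confKey fs.s3a.server-side-encryption-algorithm

# Hadoop build and the hadoop-aws jar actually installed (path varies by HDP layout):
hadoop version
find /usr/hdp -name 'hadoop-aws*.jar' 2>/dev/null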
03-20-2018
10:43 PM
I tried "hdfs dfs" command and got similar error when writing to the encryption bucket: hdfs dfs -put testfile.txt s3a://encryption_bucket/hdfs_testfile_hdpA.txt
put: saving output on hdfs_testfile_hdpA.txt._COPYING_: com.amazonaws.AmazonClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: a54112d682d10f89c1f9c1e49968bb0f in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: clients, key: hdfs_testfile_hdpA.txt._COPYING_): Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: a54112d682d10f89c1f9c1e49968bb0f in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: clients, key: hdfs_testfile_hdpA.txt._COPYING_)
03-20-2018
05:20 PM
I set up an HDP cluster on AWS EC2 instances, with an IAM role set up for S3 access. I tested this setup on two S3 buckets, one with AWS-KMS encryption and the other without encryption.

I can write to both buckets with the "aws s3" command successfully:

aws s3 cp testfile.txt s3://no_encryption_bucket/testfile.txt
aws s3 cp testfile.txt s3://encryption_bucket/testfile.txt

These work just as expected: the first file is not encrypted and the second is encrypted with AWS-KMS.

But with the "hadoop fs" command I can only write to the unencrypted bucket:

hadoop fs -copyFromLocal testfile.txt s3a://no_encryption_bucket/hadoop_testfile_hdpA.txt

This command runs successfully and the resulting file is not encrypted, as expected.

hadoop fs -copyFromLocal testfile.txt s3a://encryption_bucket/hadoop_testfile_hdpA.txt

This command fails with the error:

copyFromLocal: saving output on encryption_bucket/test/hadoop_testfile_hdpA.txt._COPYING_: com.amazonaws.AmazonClientException: Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: 41d5f50238eefddc6de740d997ddc23e in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: encryption_bucket, key: test/hadoop_testfile_hdpA.txt._COPYING_): Unable to verify integrity of data upload. Client calculated content hash (contentMD5: pkYbRKamqFtjgaN8gTsaCw== in base 64) didn't match hash (etag: 41d5f50238eefddc6de740d997ddc23e in hex) calculated by Amazon S3. You may need to delete the data stored in Amazon S3. (metadata.contentMD5: pkYbRKamqFtjgaN8gTsaCw==, md5DigestStream: null, bucketName: encryption_bucket, key: test/hadoop_testfile_hdpA.txt._COPYING_)

It seems HDP gets fs.s3a.access.key and fs.s3a.secret.key from the IAM role, but it does not figure out fs.s3a.server-side-encryption-algorithm and fs.s3a.server-side-encryption.key from the IAM role. Thanks for any help or any clue.
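One thing that might be worth checking from the instance itself (a sketch; assumes the installed AWS CLI version supports these calls): confirm which role the instance profile actually resolves to, and read the bucket's default-encryption settings, since the KMS key ARN shown there is what S3A would need if the algorithm and key end up having to be configured explicitly:

# Which identity the instance profile resolves to:
aws sts get-caller-identity

# The encrypted bucket's default-encryption configuration (includes the KMS key ARN):
aws s3api get-bucket-encryption --bucket encryption_bucket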
Labels:
- Cloudera Navigator Encrypt
12-13-2017
10:41 PM
It seems that for instances in the same region and same subnet, I need to add the MiNiFi instance's private IP to the NiFi security group, in addition to the public IP. Hope this helps others in the same situation.
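For anyone scripting the same fix, a rough sketch of the corresponding rule (the security-group ID is a placeholder; the port should match the NiFi HTTP/site-to-site port in use, 9090 in the logs in the post below, and the private IP is the one the hostname resolves to in-region):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9090 \
  --cidr 172.31.1.161/32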
12-13-2017
07:00 PM
I set up an HDF/HDP cluster on one AWS instance and MiNiFi on another AWS instance. I also set up the Site to Site properties in the NiFi configuration as follows:

# Site to Site properties
nifi.remote.input.host=ec2-********.compute.amazonaws.com
nifi.remote.input.secure=false
nifi.remote.input.socket.port=1026
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec

Then I created a MiNiFi template to tail a log file and send it to NiFi, converted the template to config.yml, and restarted MiNiFi. Everything worked as expected when the NiFi instance and the MiNiFi instance were in different regions, but it failed when they were in the same region, with the following WARN/ERROR messages:

***********************
2017-12-13 17:11:47,853 WARN [Timer-Driven Process Thread-1] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from http://ec2-********.us-west-2.compute.amazonaws.com:9090/nifi-api due to org.apache.http.conn.ConnectTimeoutException: Connect to ec2-********.us-west-2.compute.amazonaws.com:9090 [ec2-********.us-west-2.compute.amazonaws.com/172.31.1.161] failed: connect timed out
2017-12-13 17:11:47,853 WARN [Timer-Driven Process Thread-5] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from http://ec2-********.us-west-2.compute.amazonaws.com:9090/nifi-api due to org.apache.http.conn.ConnectTimeoutException: Connect to ec2-********.us-west-2.compute.amazonaws.com:9090 [ec2-********.us-west-2.compute.amazonaws.com/172.31.1.161] failed: connect timed out
2017-12-13 17:11:47,854 ERROR [Timer-Driven Process Thread-5] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=minifiRemote,targets=http://ec2-********.us-west-2.compute.amazonaws.com:9090/nifi] failed to communicate with http://ec2-********.us-west-2.compute.amazonaws.com:9090/nifi due to org.apache.http.conn.ConnectTimeoutException: Connect to ec2-********.us-west-2.compute.amazonaws.com:9090 [ec2-********.us-west-2.compute.amazonaws.com/172.31.1.161] failed: connect timed out
*******************************

This is very strange to me, since I originally thought a connection within the same region should be easier than one across regions. Thanks for any ideas or help.
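A quick way to confirm where the timeout happens, run from the MiNiFi instance (my own sketch; note the public hostname is resolving to the private IP 172.31.1.161 from inside the region, so the security group must allow that private address, which matches the resolution posted above):

# Is the NiFi HTTP port reachable at all from the MiNiFi host?
nc -vz -w 5 ec2-********.us-west-2.compute.amazonaws.com 9090

# Does the REST endpoint the site-to-site client calls respond?
curl -v --max-time 10 http://ec2-********.us-west-2.compute.amazonaws.com:9090/nifi-api/site-to-site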
Labels:
- Apache MiNiFi
- Apache NiFi
06-22-2017
01:57 AM
I set up an HDP 2.6 cluster using Hortonworks Data Cloud (HDCloud) for AWS and tried to install R on the cluster, but failed. Below are the error messages from installing R on the master node. Thanks for any help.

sudo yum install R R-devel libcurl-devel openssl-devel
...........................................................
---> Package libcurl-devel.x86_64 0:7.51.0-4.73.amzn1 will be installed
http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/repodata/4a066ab87ef4593992315a707eee5184dc6fce29d982f0f6e7b61399d33779ed-filelists.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
https://access.redhat.com/articles/1320623
If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.
One of the configured repositories failed (HDP-UTILS-1.1.0.21),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable HDP-UTILS-1.1.0.21
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=HDP-UTILS-1.1.0.21.skip_if_unavailable=true
failure: repodata/4a066ab87ef4593992315a707eee5184dc6fce29d982f0f6e7b61399d33779ed-filelists.sqlite.bz2 from HDP-UTILS-1.1.0.21: [Errno 256] No more mirrors to try.
http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/repodata/4a066ab87ef4593992315a707eee5184dc6fce29d982f0f6e7b61399d33779ed-filelists.sqlite.bz2: [Errno 14] HTTP Error 404 - Not Found
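In case it helps, a rough workaround sketch (assuming the problem is only the HDP-UTILS repo metadata 404 and not the other repos): clear cached metadata and retry, or skip that repo for this one install:

# Clear possibly-stale repo metadata and retry:
sudo yum clean all
sudo yum install R R-devel libcurl-devel openssl-devel

# If the HDP-UTILS metadata really is missing upstream, skip that repo for this install
# (repo id taken from the error output above):
sudo yum install --disablerepo=HDP-UTILS-1.1.0.21 R R-devel libcurl-devel openssl-devel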
Labels:
- Hortonworks Data Platform (HDP)