Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1999 | 06-15-2020 05:23 AM |
| | 16468 | 01-30-2020 08:04 PM |
| | 2147 | 07-07-2019 09:06 PM |
| | 8346 | 01-27-2018 10:17 PM |
| | 4733 | 12-31-2017 10:12 PM |
12-02-2019
02:06 PM
@mike_bronson7 You can change the ownership of the HDFS directory to airflow:hadoop. Please run the -chown command on the directory in question; it should be something like /users/airflow/xxx. Please let me know how it goes.
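For illustration, a dry-run sketch of that chown step. The path /user/airflow here is an assumption; substitute the actual directory from your setup.

```shell
# Dry-run sketch: drop the leading "echo" to actually run against HDFS.
# TARGET is an assumed path; use the real directory in question.
TARGET=/user/airflow
echo hdfs dfs -chown -R airflow:hadoop "$TARGET"   # hand ownership to airflow:hadoop recursively
echo hdfs dfs -ls -d "$TARGET"                     # verify the new owner and group afterwards
```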
11-27-2019
07:30 AM
Dear Shelton, it is very strange that some application or other process deleted the files (there were 4 files, 2 of which were the md5 checksums). What worries us most about this case is that we do not actually know the real reason the files were deleted, and we cannot go back and see what happened.
11-26-2019
02:09 PM
OK, so on second thought I think it is better to use the latest supported version, RHEL 7.6, instead of 7.5. RHEL 7.6 supports HDP 2.6.4 and up (including the latest HDP version), and it supports Ambari versions 2.6.2.2, 2.7.3, and 2.7.4. And since our current RHEL version is 7.2 and we want to add machines running RHEL 7.6, it is fully recommended to upgrade 7.2 to 7.6.
11-20-2019
12:37 PM
Thank you so much. By the way, I posted a new question about mixing RHEL 7.2 with 7.5; I would be very happy if you could answer that question as well.
11-19-2019
01:09 PM
Thank you so much. By the way, can I get your advice on another thread? https://community.cloudera.com/t5/Support-Questions/schema-registry-service-failed-to-start-due-schemas-topic/td-p/283403
11-18-2019
11:09 PM
@mike_bronson7 The latest command you posted has a typo again: the "R" is missing at the end of the component name in the command below. >>curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE" It should end with SPARK2_THRIFTSERVER. Please try it and post the new error, if any.
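For reference, a corrected version of that call with the URL built from variables (host, cluster, and component names are taken from the command above), printed as a dry run:

```shell
# Corrected component name: SPARK2_THRIFTSERVER (trailing "R" restored).
AMBARI_HOST=node02
CLUSTER=HDP
TARGET_HOST=node01
COMPONENT=SPARK2_THRIFTSERVER
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/hosts/${TARGET_HOST}/host_components/${COMPONENT}"
echo "$URL"
# Uncomment to actually issue the DELETE against Ambari:
# curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "$URL"
```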
11-15-2019
06:10 AM
@mike_bronson7 You just need to back up /hadoop/hdfs/namenode/current from the active namenode. Also, if you take the backup a week before the activity and, say, your first cluster keeps serving client requests, you will lose the data written after the backup. So the best approach is to run saveNamespace and take the backup right when you start the activity, with clients frozen so they are not accessing the cluster.
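A dry-run sketch of that sequence. The namenode metadata directory matches the post; /backup is a hypothetical destination directory.

```shell
# Dry-run sketch: drop the leading "echo" to actually run the commands.
# Freeze client access to the cluster before starting.
NN_DIR=/hadoop/hdfs/namenode            # from the post above
STAMP=$(date +%Y%m%d)                   # date-stamp for the archive name
echo hdfs dfsadmin -safemode enter      # block namespace changes during the snapshot
echo hdfs dfsadmin -saveNamespace       # merge pending edits into a fresh fsimage
echo tar czf "/backup/nn-current-${STAMP}.tar.gz" -C "$NN_DIR" current
echo hdfs dfsadmin -safemode leave      # resume normal operation
```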
11-12-2019
08:09 PM
1. Did the job fail due to the above reason? If "NO", is the error displayed in the logs for all Spark jobs, or just for this one?
11-03-2019
06:47 AM
@mike_bronson7 Yes, it's possible to deploy HDF using Ambari blueprints. If you compare an HDP and an HDF blueprint, you will notice a difference only in the components section. Deploy HDF 1 using a blueprint. Deploy HDF 2 using a blueprint. Deploy HDF 3 using a blueprint. The links above show that it is possible.
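The blueprint flow itself is the same as for HDP: register the blueprint, then POST a cluster-creation template. A dry-run sketch, where the Ambari host, blueprint name, cluster name, and JSON file names are all assumptions:

```shell
# Dry-run sketch: drop the leading "echo" to actually call the Ambari REST API.
# hdf_blueprint.json and hdf_cluster.json are hypothetical file names.
AMBARI=http://ambari-host:8080
BP_NAME=hdf-blueprint
# 1) Register the HDF blueprint with Ambari:
echo curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @hdf_blueprint.json "${AMBARI}/api/v1/blueprints/${BP_NAME}"
# 2) Create the cluster from the blueprint via a cluster-creation template:
echo curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @hdf_cluster.json "${AMBARI}/api/v1/clusters/HDF"
```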