Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1917 | 06-15-2020 05:23 AM |
| | 15459 | 01-30-2020 08:04 PM |
| | 2071 | 07-07-2019 09:06 PM |
| | 8104 | 01-27-2018 10:17 PM |
| | 4569 | 12-31-2017 10:12 PM |
11-20-2019
12:37 PM
Thank you so much. By the way, I posted a new question about mixing RHEL 7.2 with 7.5; I would be very happy if you could answer that question as well.
11-19-2019
01:09 PM
Thank you so much. By the way, can I get your advice on another thread? https://community.cloudera.com/t5/Support-Questions/schema-registry-service-failed-to-start-due-schemas-topic/td-p/283403
11-18-2019
11:09 PM
@mike_bronson7 The latest command you posted has a typo again: the "R" is missing at the end of the command below. >>curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVE" Please try again and post the new error, if any.
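For reference, a sketch of the corrected call with the trailing "R" restored; the host names, cluster name, and credentials are taken from this thread, so adjust them for your environment:

```bash
# Same DELETE request, with the component name corrected to SPARK2_THRIFTSERVER
curl -u admin:admin \
  -H "X-Requested-By: ambari" \
  -X DELETE \
  "http://node02:8080/api/v1/clusters/HDP/hosts/node01/host_components/SPARK2_THRIFTSERVER"
```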
11-15-2019
06:10 AM
@mike_bronson7 You just need to back up /hadoop/hdfs/namenode/current from the active namenode. Also, if you take the backup a week before the activity and your first cluster keeps serving client requests in the meantime, you will lose the data written after the backup. So the best approach is to run saveNamespace and take the backup at the time of the activity, with clients frozen so they are not accessing the cluster.
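A minimal sketch of that sequence on the active namenode, assuming the metadata directory mentioned in this thread (/hadoop/hdfs/namenode/current); the /backup destination is a hypothetical path:

```bash
# Run as the hdfs user on the active namenode
hdfs dfsadmin -safemode enter        # stop new writes
hdfs dfsadmin -saveNamespace         # flush edits into a fresh fsimage
# /backup is a placeholder; copy the metadata somewhere outside the cluster
tar czf /backup/namenode-current-$(date +%F).tar.gz /hadoop/hdfs/namenode/current
hdfs dfsadmin -safemode leave        # resume normal operation
```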
11-12-2019
08:09 PM
1. Did the job fail due to the above reason? If "NO", does this error appear in the logs of all Spark jobs or only in this one?
11-03-2019
06:47 AM
@mike_bronson7 Yes, it's possible to deploy HDF using Ambari blueprints. If you compare an HDP blueprint with an HDF blueprint, you will notice a difference only in the components section. The links below show that it is possible:
- Deploy HDF 1 using a blueprint
- Deploy HDF 2 using a blueprint
- Deploy HDF 3 using a blueprint
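For context, a hedged sketch of how a blueprint deployment is typically submitted through the Ambari REST API; the blueprint name, file names, and ambari-host below are hypothetical placeholders:

```bash
# Register the HDF blueprint definition with Ambari
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @hdf-blueprint.json \
  "http://ambari-host:8080/api/v1/blueprints/hdf-blueprint"

# Create the cluster from that blueprint using a host-mapping (cluster creation) template
curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
  -d @hdf-cluster-template.json \
  "http://ambari-host:8080/api/v1/clusters/HDF"
```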
11-02-2019
09:39 AM
First, thank you for your answer. The reason I ask this question is that the blueprint JSON file contains the Log Search configuration, as in the following example:

}, {
  "zookeeper-logsearch-conf" : {
    "properties_attributes" : { },
    "properties" : {
      "component_mappings" : "ZOOKEEPER_SERVER:zookeeper",
      "content" : "\n{\n \"input\":[\n {\n \"type\":\"zookeeper\",\n \"rowtype\":\"service\",\n \"path\":\"{{default('/configurations/zookeeper-env/zk_log_dir', '/var/log/zookeeper')}}/zookeeper*.log\"\n }\n ],\n \"filter\":[\n {\n \"filter\":\"grok\",\n \"conditions\":{\n \"fields\":{\"type\":[\"zookeeper\"]}\n },\n \"log4j_format\":\"%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\",\n \"multiline_pattern\":\"^(%{TIMESTAMP_ISO8601:logtime})\",\n \"message_pattern\":\"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}-%{SPACE}%{LOGLEVEL:level}%{SPACE}\\\\[%{DATA:thread_name}\\\\@%{INT:line_number}\\\\]%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}\",\n \"post_map_values\": {\n \"logtime\": {\n \"map_date\":{\n \"target_date_pattern\":\"yyyy-MM-dd HH:mm:ss,SSS\"\n }\n }\n }\n }\n ]\n}",
      "service_name" : "Zookeeper"
    }
  }
},

Can we get advice on how to remove the Log Search configuration entries from the blueprint JSON file?
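If it helps, a hedged sketch of one way to strip those entries with jq, assuming the usual Ambari-exported layout where "configurations" is a top-level array of single-key objects; the file names are placeholders:

```bash
# Drop every *-logsearch-conf entry from the blueprint's configurations array
jq '.configurations |= map(select((keys[0] | endswith("-logsearch-conf")) | not))' \
  blueprint.json > blueprint-no-logsearch.json
```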
10-27-2019
04:02 AM
May I return to my first question? As long as we used Red Hat 7.2, everything was OK; after each scratch installation we never saw this. But once we moved to Red Hat 7.5, every cluster we created ended up with corrupted files. Any hint why?
10-11-2019
04:45 AM
Hi, Did you check which process is running on this port and, if required, try killing it? Did that solve the issue? Thanks, AKR
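A quick sketch of how that check usually looks; the port number 8080 below is only a placeholder, substitute the port from your error:

```bash
# Find the process listening on the port
netstat -tlnp | grep :8080     # alternatively: lsof -i :8080
# If it is a stale or unwanted process, stop it by PID
kill <pid>                     # use kill -9 <pid> only if it ignores the normal signal
```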
10-11-2019
04:30 AM
Hi, Adding to the above reply: if there are too many old files in the SHS folder, the cleaner may not work as expected. So the ideal approach is to manually delete the .inprogress files that are too old. Thanks, AKR
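A hedged sketch of that manual cleanup, assuming the event logs live in /spark2-history on HDFS (check spark.history.fs.logDirectory for the actual path in your setup):

```bash
# List stale in-progress event logs in the Spark History Server directory
hdfs dfs -ls /spark2-history | grep '\.inprogress$'

# After confirming a file is old and its application is no longer running, remove it
hdfs dfs -rm /spark2-history/<application_id>.inprogress
```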