Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2195 | 06-15-2020 05:23 AM |
| | 18896 | 01-30-2020 08:04 PM |
| | 2353 | 07-07-2019 09:06 PM |
11-18-2019 10:02 AM
We have two SPARK2_THRIFTSERVER components, on node01 and node03, in our Ambari server.
We want to delete the SPARK2_THRIFTSERVER component from both nodes.
We tried the following API call, but without success. Any idea where we went wrong?
component_name: SPARK2_THRIFTSERVER
The REST API call (trying to delete the Thrift Server on one of the nodes):
curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://node02:8080/api/v1/clusters/HDP/hosts/node01/SPARK2_THRIFTSERVER
* About to connect() to node02 port 8080 (#0)
* Connected to node02 (45.3.23.4) port 8080 (#0)
* Server auth using Basic with user 'admin'
> DELETE /api/v1/clusters/HDP/hosts/node01/SPARK2_THRIFTSERVER HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.29.0
> Accept: */*
> X-Requested-By: ambari
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
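For reference, a likely cause of the 404: Ambari's REST path for per-host components normally includes a host_components segment, which the URL above omits. A minimal sketch, using the names from the post (verify the endpoint against your Ambari version):

```shell
# The DELETE URL needs "host_components" between the host and the component name.
AMBARI="http://node02:8080"
CLUSTER="HDP"
HOST="node01"
COMP="SPARK2_THRIFTSERVER"
URL="$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/$COMP"
echo "$URL"
# With the corrected URL, the same curl form from the post should work:
#   curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "$URL"
# (Stop the component first; Ambari refuses to delete a running component.)
```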
Labels:
- Apache Ambari
- Apache Spark
11-15-2019 05:24 AM
Since we have two "current" directories, /hadoop/hdfs/namenode/current (where the fsimage exists) and /hadoop/hdfs/journal/hdfsha/current/, do you mean to back up both of them? Second, for how long should the backups be kept, for example one week or more?
11-15-2019 04:37 AM
About option one: I guess you do not mean backing up the metadata by copying it with scp or rsync. Maybe you mean there is a dedicated backup tool, like barman for PostgreSQL? Do you know of a tool for this option? On each NameNode we have the following directories: /hadoop/hdfs/namenode/current (where the fsimage exists) and /hadoop/hdfs/journal/hdfsha/current/. Do you mean to back up only these directories, let's say every day?
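A daily backup of those two directories could be sketched as below. This is only an illustration, not an official procedure: the metadata paths are taken from the post, but here they are simulated under a temp directory so the example runs anywhere, and the backup destination and retention period are placeholders:

```shell
# Simulate the two metadata directories from the post under a temp dir.
BASE=$(mktemp -d)
mkdir -p "$BASE/hadoop/hdfs/namenode/current" "$BASE/hadoop/hdfs/journal/hdfsha/current"
echo fsimage > "$BASE/hadoop/hdfs/namenode/current/fsimage_000"

DEST="$BASE/backup"
mkdir -p "$DEST"
STAMP=$(date +%Y%m%d)
# Daily archive of both metadata directories (date-stamped file name).
tar czf "$DEST/nn-meta-$STAMP.tar.gz" -C "$BASE" \
  hadoop/hdfs/namenode/current hadoop/hdfs/journal/hdfsha/current
# Retention: delete archives older than 7 days.
find "$DEST" -name 'nn-meta-*.tar.gz' -mtime +7 -delete
ls "$DEST"
```

On a real cluster the same tar/find pair would point at the actual /hadoop/hdfs paths and a backup volume, typically driven by a daily cron entry.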
11-15-2019 02:40 AM
OK. Can you summarize all the options to recover the NameNode (including out-of-the-box options)?
11-15-2019 12:45 AM
Hi all,
I want to ask this important question.
Let's say we have the following HDP cluster:
3 master machines (active/standby NameNode, active/standby ResourceManager)
3 DataNode machines
- each DataNode machine has 4 disks for HDFS (not including the OS disk)
3 Kafka machines
- each Kafka machine has one 10 TB disk (not including the OS disk)
Now we want to install the whole cluster from scratch, including HDP and Ambari,
but preserve the data on the DataNode machines and the Kafka topic data, as follows:
we unmount the disks on the DataNode machines and the Kafka machines.
For example, on a DataNode machine (note: /etc/fstab is already configured):
umount /grid/data1
umount /grid/data2
...
In the second, from-scratch installation we install the whole cluster (via blueprint), but without the DataNode HDFS disks and the Kafka topic disks (a from-scratch installation means a fresh new Linux OS).
After the installation we mount all the disks on the DataNode machines and the Kafka machines (where all the topics are stored).
For example, on a DataNode machine (note: /etc/fstab is already configured):
mount /grid/data1
mount /grid/data2
...
To complete the picture, we then need to restart HDFS, YARN, and Kafka.
So, could this scenario work?
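The remount step described above can be sketched as a dry-run loop. The /grid/dataN mount points are from the post; the function only echoes the commands, so nothing is mounted until the echo is removed:

```shell
# Dry-run sketch of remounting the data disks after the fresh install.
# Assumes the /etc/fstab entries survived (or were restored) on each node.
remount_data_disks() {
  for i in 1 2 3 4; do
    # Echo instead of executing; drop the echo once /etc/fstab is verified.
    echo "mount /grid/data$i"
  done
}
remount_data_disks
```

The same loop with umount would cover the pre-reinstall step on each DataNode; the Kafka nodes need the analogous single-disk mount.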
Labels:
- Apache Ambari
- Apache Kafka
11-02-2019 04:57 PM
You mentioned the HDF kit. Until now we have worked with HDP and Ambari. Is HDF the same concept as HDP? (Does it include blueprints, in case we want to automate the installation process?)
11-02-2019 09:39 AM
First, thank you for your answer. The reason I ask this question is that the blueprint JSON file contains the logsearch configuration, as in the following example:

},
{
  "zookeeper-logsearch-conf" : {
    "properties_attributes" : { },
    "properties" : {
      "component_mappings" : "ZOOKEEPER_SERVER:zookeeper",
      "content" : "\n{\n \"input\":[\n {\n \"type\":\"zookeeper\",\n \"rowtype\":\"service\",\n \"path\":\"{{default('/configurations/zookeeper-env/zk_log_dir', '/var/log/zookeeper')}}/zookeeper*.log\"\n }\n ],\n \"filter\":[\n {\n \"filter\":\"grok\",\n \"conditions\":{\n \"fields\":{\"type\":[\"zookeeper\"]}\n },\n \"log4j_format\":\"%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\",\n \"multiline_pattern\":\"^(%{TIMESTAMP_ISO8601:logtime})\",\n \"message_pattern\":\"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}-%{SPACE}%{LOGLEVEL:level}%{SPACE}\\\\[%{DATA:thread_name}\\\\@%{INT:line_number}\\\\]%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}\",\n \"post_map_values\": {\n \"logtime\": {\n \"map_date\":{\n \"target_date_pattern\":\"yyyy-MM-dd HH:mm:ss,SSS\"\n }\n }\n }\n }\n ]\n}",
      "service_name" : "Zookeeper"
    }
  }
},

Can we get advice on how to remove the logsearch configuration tags from the blueprint JSON file?
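Not an official procedure, just a sketch of filtering the *-logsearch-conf entries out of a blueprint. It assumes the blueprint's "configurations" array is a list of single-key objects, as in the snippet above; the file names and the tiny sample blueprint here are placeholders:

```shell
# Placeholder input: a tiny blueprint with one logsearch entry and one normal one.
cat > blueprint.json <<'EOF'
{"configurations":[{"zookeeper-logsearch-conf":{"properties":{}}},{"core-site":{"properties":{}}}],"host_groups":[]}
EOF

# Drop every configuration object whose key ends in "-logsearch-conf".
python3 - <<'PYEOF'
import json

with open("blueprint.json") as f:
    bp = json.load(f)

bp["configurations"] = [
    c for c in bp.get("configurations", [])
    if not any(k.endswith("-logsearch-conf") for k in c)
]

with open("blueprint-clean.json", "w") as f:
    json.dump(bp, f, indent=2)
PYEOF

cat blueprint-clean.json
```

The same filter applies to a real exported blueprint; if any Log Search components are listed under host_groups, those entries would need to be removed as well.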
11-01-2019 03:22 AM
Hi all,
On Ambari version 2.6.2 we have the following logsearch files:
find / -name "*-logsearch-conf.xml"
/var/lib/ambari-server/resources/common-services/ACCUMULO/1.6.1.2.2.0/configuration/accumulo-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/AMBARI_INFRA/0.1.0/configuration/infra-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/ATLAS/0.1.0.2.3/configuration/atlas-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/KAFKA/0.8.1/configuration/kafka-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/configuration/knox-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/LOGSEARCH/0.5.0/configuration/logfeeder-custom-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/RANGER/0.4.0/configuration/ranger-logsearch-conf.xml
...
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/configuration/hive-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/configuration/kafka-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/configuration/knox-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/LOGSEARCH/0.5.0/configuration/logfeeder-custom-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/configuration/oozie-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/configuration/ranger-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/configuration/ranger-kms-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/configuration/spark-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/configuration/spark2-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/configuration/storm-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/configuration/yarn-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/configuration-mapred/mapred-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/configuration/zeppelin-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/configuration/zeppelin-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/configuration/zookeeper-logsearch-conf.xml
-----------------
Now we have installed the latest Ambari version, 2.7.4, on another machine:
rpm -qa | grep -i ambari
ambari-agent-2.7.4.0-118.x86_64
ambari-server-2.7.4.0-118.x86_64
My repo:
more ambari.repo
[ambari-2.7.4.0]
name=ambari-2.7.4.0
baseurl=http://master5.sys53.com/ambari/centos7/2.7.4.0-118
enabled=1
gpgcheck=0
But on this latest Ambari version we have only the following:
find / -name "*-logsearch-conf.xml"
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/STORM/configuration/storm-logsearch-conf.xml
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/STORM/configuration/storm-logsearch-conf.xml
Is this a mistake in Ambari 2.7.4?
Why are these logsearch configuration files missing from Ambari 2.7.4?
Labels:
- Apache Ambari
10-30-2019 02:22 PM
Just to quote what you said: "some challenges including container management, scheduling, network configuration and security, and performance". So I understand that you think containers can have a negative effect on performance; the question is whether that effect is very minor or potentially major. As I mentioned, we have two choices: install a Kafka cluster from Confluent with ZooKeeper and Schema Registry, or install Kafka using Docker with ZooKeeper and Schema Registry from Confluent. A third choice is to install a Kafka cluster from the HDF kit (Kafka + ZooKeeper + Schema Registry). Please give your professional opinion: which of these three options gives the best Kafka cluster (focusing on performance, in a production environment)?
10-30-2019 12:27 PM
First, I just want to say thank you for the whole explanation. For now we cannot work with Kubernetes (for some internal reasons), so the option is to work with Docker. Based on that, do you think a Kafka cluster running in Docker will perform worse than a Kafka cluster without Docker?