Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2059 | 06-15-2020 05:23 AM |
| | 17057 | 01-30-2020 08:04 PM |
| | 2220 | 07-07-2019 09:06 PM |
| | 8557 | 01-27-2018 10:17 PM |
| | 4838 | 12-31-2017 10:12 PM |
11-13-2019
05:16 AM
We have two NameNode machines (part of an HDP cluster managed by Ambari).
After an electricity failure, we noticed the following: on one NameNode the fsimage_xxxx files are missing, while on the second NameNode they still exist.
Is it possible to re-create them on the faulty NameNode?
Example on the bad node (no output):
ls /hadoop/hdfs/namenode/current | grep fsimage_
On the good NameNode:
ls /hadoop/hdfs/namenode/current | grep fsimage_
fsimage_0000000000044556627
fsimage_0000000000044556627.md5
fsimage_0000000000044577059
fsimage_0000000000044577059.md5
The status for now is that the NameNode service does not start successfully from Ambari, and the logs on the faulty NameNode show:
ERROR namenode.NameNode (NameNode.java:main(1783)) - Failed to start namenode. java.io.FileNotFoundException: No valid image files found
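If the cluster runs NameNode HA, the usual supported recovery is to stop the faulty NameNode and run `hdfs namenode -bootstrapStandby` on it, which pulls the latest fsimage over from the healthy NameNode. As an illustration of the manual alternative (copying the fsimage files and their .md5 checksums from the good node's current/ directory, e.g. after an scp), here is a minimal Python sketch; the function name and the source path are assumptions of mine, not part of the original post, and on a real cluster ownership (hdfs:hadoop) must also be restored before restarting:

```python
import shutil
from pathlib import Path

def copy_fsimage(src_dir: str, dst_dir: str) -> list[str]:
    """Copy fsimage_* files (and their .md5 checksums) from a healthy
    NameNode's current/ directory into the faulty NameNode's directory.
    Hypothetical helper -- run it only while the faulty NameNode is stopped."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.glob("fsimage_*")):
        shutil.copy2(f, dst / f.name)  # copy2 preserves timestamps/permissions
        copied.append(f.name)
    return copied

# Example (destination path from the post; the source dir is wherever the
# good NameNode's files were scp'd to -- a hypothetical staging path here):
# copy_fsimage("/tmp/good-nn-current", "/hadoop/hdfs/namenode/current")
```

After the copy, the NameNode can be started again from Ambari; bootstrapStandby is still the safer route when HA is configured.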
11-12-2019
07:08 AM
Running the following fsck (hdfs fsck / -files -blocks -locations | grep blk_xxxxxx_xxxxxx) as the hdfs user:
su hdfs
hdfs fsck / -files -blocks -locations | grep blk_1081495827_7755233
we do not get any results, so I guess that means blk_xxxxx_xxxx does not exist in the HDFS file system. What next?
11-12-2019
01:55 AM
Please send me the fsck CLI command that you want me to run.
11-12-2019
01:52 AM
We also did the following:
su hdfs
hadoop fsck / -files -blocks > /tmp/file
and we did not find the block blk_1081495827_7755233 in the file /tmp/file. So what is the reason that the block was removed?
11-12-2019
01:34 AM
Hi,
1. All DataNodes are up and running fine.
2. I do not see corrupted blocks or under-replicated blocks.
3. We ran fsck and HDFS is healthy.
Any other possibilities?
11-12-2019
12:42 AM
We have a Spark cluster with the following details (all machines are Linux Red Hat machines):
2 NameNode machines, 2 ResourceManager machines, 8 DataNode machines (HDFS file system)
We are running a Spark Streaming application.
From the YARN logs we can see the following errors, for example:
yarn logs -applicationId application_xxxxxxxx -log_files ALL
2019-11-08T10:12:20.040 ERROR [][][] [org.apache.spark.scheduler.LiveListenerBus] Listener EventLoggingListener threw an exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-484874736-172.2.45.23-8478399929292:blk_1081495827_7755233 does not exist or is not under Construction
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6721)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6789)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:931)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:979)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
We can see that block `blk_1081495827_7755233` "does not exist or is not under Construction", but what could be the reasons that YARN complains about this?
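This error typically surfaces when a client (here Spark's event-log writer, via EventLoggingListener) tries to recover a write pipeline for a block that the NameNode no longer considers under construction, e.g. because the file was already closed or deleted. To investigate, it helps to pull the exact block-pool and block ID out of the RemoteException line so they can be fed to `hdfs fsck`, as done in the later replies. A minimal sketch (the helper name and regex are my own, not any Hadoop API):

```python
import re

# Pull "BP-...:blk_N_M" out of a "does not exist or is not under
# Construction" RemoteException line.
BLOCK_RE = re.compile(r"(BP-[\w.-]+):(blk_\d+_\d+)")

def extract_block(log_line: str):
    """Return (block_pool_id, block_id) from the log line, or None."""
    m = BLOCK_RE.search(log_line)
    return (m.group(1), m.group(2)) if m else None

line = ("org.apache.hadoop.ipc.RemoteException(java.io.IOException): "
        "BP-484874736-172.2.45.23-8478399929292:blk_1081495827_7755233 "
        "does not exist or is not under Construction")
print(extract_block(line))
```

The extracted block ID can then be checked with the fsck command used later in this thread (hdfs fsck / -files -blocks -locations | grep blk_...).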
Labels: Apache Spark, Apache YARN
11-02-2019
04:57 PM
You mentioned the HDF kit. Until now we have worked with HDP and Ambari. Is HDF the same concept as HDP? (Including the blueprint, in case we want to automate the installation process?)
11-02-2019
09:39 AM
First, thank you for your answer. The reason I ask this question is that the blueprint JSON file contains the logsearch configuration, as in the following example:
},
{
  "zookeeper-logsearch-conf" : {
    "properties_attributes" : { },
    "properties" : {
      "component_mappings" : "ZOOKEEPER_SERVER:zookeeper",
      "content" : "\n{\n \"input\":[\n {\n \"type\":\"zookeeper\",\n \"rowtype\":\"service\",\n \"path\":\"{{default('/configurations/zookeeper-env/zk_log_dir', '/var/log/zookeeper')}}/zookeeper*.log\"\n }\n ],\n \"filter\":[\n {\n \"filter\":\"grok\",\n \"conditions\":{\n \"fields\":{\"type\":[\"zookeeper\"]}\n },\n \"log4j_format\":\"%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n\",\n \"multiline_pattern\":\"^(%{TIMESTAMP_ISO8601:logtime})\",\n \"message_pattern\":\"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}-%{SPACE}%{LOGLEVEL:level}%{SPACE}\\\\[%{DATA:thread_name}\\\\@%{INT:line_number}\\\\]%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}\",\n \"post_map_values\": {\n \"logtime\": {\n \"map_date\":{\n \"target_date_pattern\":\"yyyy-MM-dd HH:mm:ss,SSS\"\n }\n }\n }\n }\n ]\n}",
      "service_name" : "Zookeeper"
    }
  }
},
Can we get advice on how to remove the logsearch configuration tags from the blueprint JSON file?
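Since an Ambari blueprint keeps each config type as a single-key object inside the top-level "configurations" array, the *-logsearch-conf entries can be stripped programmatically rather than by hand. A minimal sketch under that assumption (the function name and the toy blueprint are illustrative, not from the original post):

```python
import json

def strip_logsearch(blueprint: dict) -> dict:
    """Drop every entry in the blueprint's "configurations" list whose
    config type ends with "-logsearch-conf" (e.g. zookeeper-logsearch-conf)."""
    confs = blueprint.get("configurations", [])
    blueprint["configurations"] = [
        c for c in confs
        if not any(key.endswith("-logsearch-conf") for key in c)
    ]
    return blueprint

# Hypothetical minimal blueprint for illustration
bp = {
    "configurations": [
        {"zookeeper-logsearch-conf": {"properties": {}}},
        {"core-site": {"properties": {"fs.defaultFS": "hdfs://mycluster"}}},
    ],
    "host_groups": [],
}
print(json.dumps(strip_logsearch(bp)["configurations"]))
```

In practice you would load the exported blueprint with json.load, run the filter, and write it back before registering it with Ambari.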
11-01-2019
03:22 AM
Hi all,
On Ambari version 2.6.2 we have the following logsearch files:
find / -name "*-logsearch-conf.xml"
/var/lib/ambari-server/resources/common-services/ACCUMULO/1.6.1.2.2.0/configuration/accumulo-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/AMBARI_INFRA/0.1.0/configuration/infra-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/configuration/ams-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/ATLAS/0.1.0.2.3/configuration/atlas-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/FALCON/0.5.0.2.1/configuration/falcon-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/FLUME/1.4.0.2.0/configuration/flume-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/KAFKA/0.8.1/configuration/kafka-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/configuration/knox-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/LOGSEARCH/0.5.0/configuration/logfeeder-custom-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/OOZIE/4.0.0.2.0/configuration/oozie-logsearch-conf.xml
/var/lib/ambari-server/resources/common-services/RANGER/0.4.0/configuration/ranger-logsearch-conf.xml
.
.
.
/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/configuration/hive-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/configuration/kafka-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/KNOX/0.5.0.2.2/configuration/knox-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/LOGSEARCH/0.5.0/configuration/logfeeder-custom-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/configuration/oozie-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/configuration/ranger-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/RANGER_KMS/0.5.0.2.3/configuration/ranger-kms-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/configuration/spark-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/configuration/spark2-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/STORM/0.9.1/configuration/storm-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/configuration/yarn-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/configuration-mapred/mapred-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/configuration/zeppelin-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/configuration/zeppelin-logsearch-conf.xml
/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/configuration/zookeeper-logsearch-conf.xml
-----------------
Now we installed the latest Ambari version, 2.7.4, on another machine:
rpm -qa | grep -i ambari
ambari-agent-2.7.4.0-118.x86_64
ambari-server-2.7.4.0-118.x86_64
My repo:
more ambari.repo
[ambari-2.7.4.0]
name=ambari-2.7.4.0
baseurl=http://master5.sys53.com/ambari/centos7/2.7.4.0-118
enabled=1
gpgcheck=0
But on the latest Ambari version we have only these:
find / -name "*-logsearch-conf.xml"
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/STORM/configuration/storm-logsearch-conf.xml
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/STORM/configuration/storm-logsearch-conf.xml
Is this a mistake in Ambari version 2.7.4? Why are these logsearch configuration files missing from Ambari 2.7.4?
Labels: Apache Ambari
10-30-2019
02:22 PM
Just to copy what you said: "some challenges including container management, scheduling, network configuration and security, and performance". So I understand that you think containers can have negative aspects regarding performance. The question is whether the effect on performance is very minor or major. As I mentioned, we have three choices:
1. Install a Kafka cluster from Confluent, with ZooKeeper and Schema Registry.
2. Install Kafka using Docker, with ZooKeeper and Schema Registry from Confluent.
3. Install a Kafka cluster from the HDF kit (with Kafka + ZooKeeper + Schema Registry).
Please give your professional opinion: which of these three options gives the best Kafka cluster (when focusing on the performance side / production environment)?