Member since: 03-14-2016
Posts: 4721
Kudos Received: 1110
Solutions: 874
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1673 | 04-27-2020 03:48 AM |
 | 3285 | 04-26-2020 06:18 PM |
 | 2621 | 04-26-2020 06:05 PM |
 | 2016 | 04-13-2020 08:53 PM |
 | 3034 | 03-31-2020 02:10 AM |
09-12-2018
09:55 AM
@Jonathan Sneep Wonderful article!
09-12-2018
09:26 AM
1 Kudo
@Joshua Adeleke The message you shared normally indicates that your process does not have enough memory (it is not about disk space). Can you please share the exact job you are running and where exactly you see this message? For example, do you see an "hs_err_pid*" file created on the problematic host? Where exactly do you see this message, and can you please share the complete message? Also, please let us know the size of your Spark executor memory. Can you please try reducing the Spark executor memory and then try again? "Unable to create new threads" usually means we need to reduce the heap a bit.
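Since "unable to create new native thread" is usually bounded by the OS per-user process limit rather than heap size alone, a quick hedged check on the problematic host might look like the sketch below (standard Linux commands; the /var/log path for hs_err files is an assumption — the JVM usually writes them to the process working directory):

```shell
# Hedged sketch: "unable to create new thread" is often limited by the
# per-user process/thread cap, not disk space. Check the limits first.
ulimit -u                                 # max user processes (threads count here)
cat /proc/sys/kernel/threads-max 2>/dev/null   # system-wide thread ceiling (Linux)
ls /var/log/hs_err_pid* 2>/dev/null || echo "no hs_err_pid files in /var/log"
```

If `ulimit -u` is low for the service user, raising it (or lowering the executor heap so more native memory is left for thread stacks) are the usual first moves.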
09-12-2018
07:42 AM
1 Kudo
@Takefumi Oide The Recover Host feature was added in the UI as part of JIRA AMBARI-21929: https://issues.apache.org/jira/browse/AMBARI-21929. UI task for Host Recovery: https://github.com/apache/ambari/commit/bb4645f7c0e8242a397b192cabd326bef0d99700 As per the fix: this action will completely re-install all components on the host, and should only be used when restoring a host using replacement hardware. A host cannot be recovered unless every host component is in the Stopped, Install Failed, or Init state.
09-12-2018
03:12 AM
@Maxim Neaga
Can you please check whether you have the Hive clients installed on the Atlas node? Also, can you please let us know whether you have set up Ranger? If yes, have you added the proper policies/permissions?
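A quick way to check for the Hive client on the Atlas host is the sketch below; the `/usr/hdp/current` symlink layout is an HDP assumption:

```shell
# Hedged check: on HDP hosts the Hive client is normally linked at
# /usr/hdp/current/hive-client; absence suggests it is not installed here.
ls /usr/hdp/current/hive-client 2>/dev/null \
  || echo "hive client not installed on this host"
```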
09-11-2018
12:08 PM
@Ankita Ghate
Are you passing the "--security-protocol" option while running the producer? Can you please share the exact command which you are using to start the producer?
Example: # /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <BROKER_1>:6667,<BROKER_2>:6667 --topic test --security-protocol PLAINTEXTSASL
Please refer to the NOTE in the following doc: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/secure-kafka-produce-events.html which says:
Add "--security-protocol SASL_PLAINTEXT" to the kafka-console-producer.sh runtime options.
09-11-2018
07:17 AM
1 Kudo
@Takefumi Oide Also, please have a look at the default values for "MAX_METRIC_ROW_CACHE_SIZE" (maxRowCacheSize, default 10000), "TimelineMetricsCache.MAX_RECS_PER_NAME_DEFAULT", and "METRICS_SEND_INTERVAL" (default 59000 milliseconds, i.e. about 1 minute): https://github.com/apache/ambari/blob/release-2.7.0/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/cache/TimelineMetricsCache.java#L41-L42 https://github.com/apache/ambari/blob/release-2.7.0/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java#L140-L143
09-11-2018
07:09 AM
1 Kudo
@Takefumi Oide The sink uses small caches, and there are settings such as "maxRowCacheSize" and "sendInterval" which you can find inside "Advanced hadoop-metrics2.properties" in the Ambari UI, or in the relevant sink properties file. Reference link: https://github.com/apache/ambari/blob/release-2.7.0/ambari-metrics/ambari-metrics-common/src/main/java/org/apache/hadoop/metrics2/sink/timeline/AbstractTimelineMetricsSink.java#L311-L357 Before the data is posted to the AMS collector, it is cached for a short time until the cache (bounded by "maxRowCacheSize") fills up; the "sendInterval" can also be seen in the code: https://github.com/apache/ambari/blob/release-2.7.0/ambari-metrics/ambari-metrics-hadoop-sink/src/main/java/org/apache/hadoop/metrics2/sink/timeline/HadoopTimelineMetricsSink.java#L140-L142
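For illustration, tuning these settings might look like the fragment below. The property names ("maxRowCacheSize", "sendInterval") come from the sink code linked above, but the exact key prefix depends on your hadoop-metrics2.properties layout, so treat this as a hedged sketch rather than copy-paste config:

```
# Hypothetical fragment for hadoop-metrics2.properties (prefix is an assumption)
*.sink.timeline.maxRowCacheSize=10000
*.sink.timeline.sendInterval=59000
```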
09-11-2018
06:44 AM
1 Kudo
@Takefumi Oide Yes, "HadoopTimelineMetricsSink" is the sink code running inside components like the DataNode, NameNode, NodeManager, ResourceManager, etc. It reads "/etc/hadoop/conf/hadoop-metrics2.properties", and based on the info available in that file the components know where the Metrics Collector should be running and on which port (default 6188); they then start emitting data to the Metrics Collector. If the Metrics Collector is down, we will see "Connection refused" messages in the component logs, but the sink will keep doing its job until the collector comes back online and becomes available. The logging is suppressed (after 20 attempts) to avoid duplicate messages in the component logs:
WARN timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:putMetrics(356)) - Unable to send metrics to collector by address:http://XXX.example.com:6188/ws/v1/timeline/metrics
INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
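To confirm whether a sink host can actually reach the collector, a hedged connectivity check could look like the sketch below; the hostname and the default port 6188 are assumptions for your environment:

```shell
# Hedged sketch: probe the Metrics Collector endpoint. A refused connection
# here matches the "Connection refused" messages seen in component logs.
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://localhost:6188/ws/v1/timeline/metrics" || echo "collector unreachable"
```

An HTTP status in the 2xx range means the collector is up; "000" plus "collector unreachable" means nothing is listening on that host/port.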
09-11-2018
05:06 AM
@Ankita Ghate We see the error cause as the following:
Caused by: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
This error in Kafka indicates that while configuring the truststore you might have forgotten to set the "ssl.truststore.password" property. Can you please check your Kafka configs to see whether you have set up the truststore properly, as described in the reference doc: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_security/content/ch_wire-kafka.html
ssl.keystore.location = /var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password = test1234
ssl.key.password = test1234
ssl.truststore.location = /var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password = test1234
The values may vary based on your requirements, but you need to make sure that ssl.truststore.password is correctly defined.
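One quick way to catch a truststore/password mismatch before restarting Kafka is keytool, which ships with the JDK. The sketch below builds a throwaway keystore just to demonstrate the check; the path and password are illustrative only, and in practice you would point keytool at your real ssl.truststore.location with your real ssl.truststore.password:

```shell
# Hedged sketch: verify that a JKS store opens with the configured password.
command -v keytool >/dev/null || { echo "keytool (JDK) not found"; exit 0; }
rm -f /tmp/demo-truststore.jks
# Create a demo store (stands in for your real truststore here)
keytool -genkeypair -alias demo -keyalg RSA -keystore /tmp/demo-truststore.jks \
  -storepass test1234 -keypass test1234 -dname "CN=demo" -validity 1 -storetype JKS
# If the password were wrong or missing, this listing would fail
keytool -list -keystore /tmp/demo-truststore.jks -storepass test1234 -storetype JKS
rm -f /tmp/demo-truststore.jks
```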
09-07-2018
01:12 AM
@Andrew Mills
Regarding your query: "What version of HDP/Ambari do I need to be on in order to use the latest version of Python?" As per the support matrix for Ambari 2.6.2.x, the following Python versions are supported: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation/content/mmsr_software_reqs.html
Python
--------
For SLES 12 --> Python 2.7.x
For CentOS 7, Ubuntu 14, Ubuntu 16, and Debian 9 --> Python 2.7.x
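To verify what the Ambari agents would actually pick up, a hedged check of the interpreter on PATH is shown below; "python" as the binary name is an assumption (on some hosts it is "python2"):

```shell
# Hedged sketch: print the Python version found on PATH, then compare it
# against the 2.7.x line the Ambari 2.6.2 support matrix calls for.
python --version 2>&1 || python2 --version 2>&1 \
  || echo "no python/python2 on PATH"
```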