Some of the Ambari metrics display "No Data Available".
I am getting the error message below in ambari-metrics-collector.log:
Caused by: java.io.IOException: maxStamp is smaller than minStamp
at org.apache.hadoop.hbase.io.TimeRange.<init>(TimeRange.java:80)
at org.apache.phoenix.util.ScanUtil.intersectTimeRange(ScanUtil.java:861)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:267)
... 57 more
2018-03-07 04:16:56,736 WARN org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
java.lang.RuntimeException: java.io.IOException: maxStamp is smaller than minStamp
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:273)
at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:216)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:281)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:186)
at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.getAggregateMetricRecords(PhoenixHBaseAccessor.java:1030)
at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.HBaseTimelineMetricStore.getTimelineMetrics(HBaseTimelineMetricStore.java:256)
at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TimelineWebServices.getTimelineMetrics(TimelineWebServices.java:359)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
I restarted the Metrics Monitors and the Metrics Collector, but the issue persists.
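
For reference, the restart was done through the Ambari UI; a roughly equivalent way to script it against the Ambari REST API is sketched below (the Ambari host, cluster name, and credentials are placeholders for my environment):

import requests

# Placeholders: adjust the Ambari host, cluster name, and credentials for your environment.
AMBARI = "http://ambari-server.example.com:8080"
CLUSTER = "mycluster"
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}  # required by Ambari for PUT/POST/DELETE

SERVICE_URL = f"{AMBARI}/api/v1/clusters/{CLUSTER}/services/AMBARI_METRICS"

def set_service_state(state, context):
    """Ask Ambari to move AMBARI_METRICS to the given state (INSTALLED = stopped, STARTED = running)."""
    body = {
        "RequestInfo": {"context": context},
        "Body": {"ServiceInfo": {"state": state}},
    }
    resp = requests.put(SERVICE_URL, json=body, headers=HEADERS, auth=AUTH)
    resp.raise_for_status()
    return resp

# Stop, then start, the Ambari Metrics service (Collector + Monitors).
set_service_state("INSTALLED", "Stop AMBARI_METRICS via REST")
set_service_state("STARTED", "Start AMBARI_METRICS via REST")
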
Any ideas?