Member since
10-04-2016
240
Posts
278
Kudos Received
43
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 230 | 01-16-2018 03:38 PM
 | 1719 | 11-13-2017 05:45 PM
 | 597 | 11-13-2017 12:30 AM
 | 379 | 10-27-2017 03:58 AM
 | 16874 | 10-19-2017 03:17 AM
08-21-2019
08:47 PM
2 Kudos
When using Ambari Metrics in Distributed Mode after an HDP-3/Ambari-2.7 upgrade, HBase metrics are not emitted due to an issue that will likely be fixed after Ambari-2.7.4. You will see messages like the following in the AMS Collector log:

Error:
2019-06-10 02:42:59,215 INFO timeline timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20

Debug-level logging shows this:

2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: Trying to find live collector host from : exp5.lab.com,exp4.lab.com
2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: Requesting live collector nodes : http://exp5.lab.com,exp4.lab.com:6188/ws/v1/timeline/metrics/livenodes
2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://exp5.lab.com,exp4.lab.com:6188/ws/v1/timeline/metrics/livenodes
2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: java.net.UnknownHostException: exp5.lab.com,exp4.lab.com
2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: Collector exp5.lab.com,exp4.lab.com is not longer live. Removing it from list of know live collector hosts : []
2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: No live collectors from configuration.

Basically, the sink is incorrectly parsing hostnames when there is more than one Metrics Collector. In the meantime, there is a very easy workaround. Add

*.sink.timeline.zookeeper.quorum=<ZK_QUORUM_ADDRESS>

Example: *.sink.timeline.zookeeper.quorum=zk_host1:2181,zk_host2:2181,zk_host3:2181

to the following files on the Ambari Server host:

/var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2

Restart Ambari Server for the changes to take effect, and soon you will see metrics on the Grafana HBase dashboards.
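As a rough sketch (assuming the default template locations above; the zk_host values are placeholders for your own ZooKeeper quorum), the workaround can be applied on the Ambari Server host like this:

# Append the ZooKeeper quorum (placeholder hosts) to both HBase metrics sink templates
TPL_DIR=/var/lib/ambari-server/resources/stacks/HDP/3.0/services/HBASE/package/templates
echo "*.sink.timeline.zookeeper.quorum=zk_host1:2181,zk_host2:2181,zk_host3:2181" \
  | tee -a "$TPL_DIR/hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2" \
           "$TPL_DIR/hadoop-metrics2-hbase.properties-GANGLIA-RS.j2" > /dev/null
# Restart Ambari Server so the updated templates are picked up
ambari-server restart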
... View more
08-21-2019
07:34 PM
1 Kudo
In HDP-2.6/Ambari-2.6, it was not mandatory to enable HiveServer2 metrics explicitly; all metrics were emitted without defining any configs. In HDP-3/Ambari-2.7, you will see errors like the following in the AMS Collector log:

Error:
2019-06-10 02:42:59,215 INFO timeline timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20

Debug-level logging shows this:

2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: Trying to find live collector host from : exp5.lab.com,exp4.lab.com
2019-06-14 20:35:29,538 DEBUG main timeline.HadoopTimelineMetricsSink: Requesting live collector nodes : http://exp5.lab.com,exp4.lab.com:6188/ws/v1/timeline/metrics/livenodes
2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://exp5.lab.com,exp4.lab.com:6188/ws/v1/timeline/metrics/livenodes
2019-06-14 20:35:29,557 DEBUG main timeline.HadoopTimelineMetricsSink: java.net.UnknownHostException: exp5.lab.com,exp4.lab.com
2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: Collector exp5.lab.com,exp4.lab.com is not longer live. Removing it from list of know live collector hosts : []
2019-06-14 20:35:29,558 DEBUG main timeline.HadoopTimelineMetricsSink: No live collectors from configuration.

You need to ensure the following properties exist. If they do not, first add them in the respective custom section via Ambari > Hive > Configs. Next, if you are using Ambari Metrics with more than one Collector, you need to make one more change due to a bug that will likely be fixed after Ambari-2.7.4. Add

*.sink.timeline.zookeeper.quorum=<ZK_QUORUM_ADDRESS>

Example: *.sink.timeline.zookeeper.quorum=zk_host1:2181,zk_host2:2181,zk_host3:2181

to all four files under /var/lib/ambari-server/resources/stacks/HDP/3.0/services/HIVE/package/templates/ on the Ambari Server host. Restart Ambari Server and Hive for the changes to take effect. The metrics will then be emitted and you should see data on your Grafana dashboards.
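A similar sketch for the Hive side, assuming the four metrics sink templates follow the hadoop-metrics2-*.j2 naming under the path above (verify the file names on your Ambari Server host; the quorum value is a placeholder):

# Append the ZooKeeper quorum to every Hive/LLAP metrics sink template
HIVE_TPL=/var/lib/ambari-server/resources/stacks/HDP/3.0/services/HIVE/package/templates
for f in "$HIVE_TPL"/hadoop-metrics2-*.j2; do
  echo "*.sink.timeline.zookeeper.quorum=zk_host1:2181,zk_host2:2181,zk_host3:2181" >> "$f"
done
# Restart Ambari Server, then restart Hive from Ambari
ambari-server restart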
... View more
05-23-2019
01:26 PM
@Maurice Knopp We do not have any planned dates yet. However, if you are an Enterprise Support customer, you can request a hotfix and you will be provided a patch jar that is easy to replace on all machines running Tez.
... View more
05-23-2019
03:41 AM
@Maurice Knopp We recently saw that TEZ-3894 only fixes the issue partially. If your job ends up spinning up multiple mappers, you are likely to hit a variant of TEZ-3894, although on the surface it appears to be the same. For a permanent fix, you may want to get a patch for https://issues.apache.org/jira/browse/TEZ-4057
... View more
02-15-2019
08:52 PM
@Mahesh Balakrishnan Since there can be only one accepted answer 😞 , I am sharing 25 bounty points with you. Thanks for the guidance.
... View more
02-15-2019
08:51 PM
@Kuldeep Kulkarni Thanks, that got me past the initial problem, and then I ran into the following error:

resource_management.core.exceptions.Fail: Cannot find /usr/hdp/current/oozie-client/doc. Possible reason is that /etc/yum.conf contains tsflags=nodocs which prevents this folder from being installed along with oozie-client package. If this is the case, please fix /etc/yum.conf and re-install the package.

This was resolved by @Mahesh Balakrishnan's suggestion to remove and reinstall the packages.
... View more
02-15-2019
07:27 PM
@Kuldeep Kulkarni, @amarnath reddy pappu Any pointers? TIA.
... View more
02-15-2019
07:26 PM
HDP-2.6.5, RHEL 7, Kerberized, Ambari-2.6 upgraded to Ambari-2.7.3.

The Oozie Service Check fails:

STDERR
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 139, in <module>
OozieServiceCheck().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 52, in service_check
OozieServiceCheckDefault.oozie_smoke_shell_file(smoke_test_file_name, prepare_hdfs_file_name)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 70, in oozie_smoke_shell_file
raise Fail(format(NO_DOCS_FOLDER_MESSAGE))
NameError: global name 'Fail' is not defined

STDOUT
2019-02-15 19:02:10,245 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.74-2 -> 2.6.5.74-2
2019-02-15 19:02:10,250 - Using hadoop conf dir: /usr/hdp/2.6.5.74-2/hadoop/conf
2019-02-15 19:02:10,275 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://ambariServer.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2019-02-15 19:02:10,277 - Not downloading the file from http://ambariServer:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2019-02-15 19:02:11,274 - File['/var/lib/ambari-agent/tmp/oozieSmoke2.sh'] {'content': StaticFile('oozieSmoke2.sh'), 'mode': 0755}
2019-02-15 19:02:11,277 - File['/var/lib/ambari-agent/tmp/prepareOozieHdfsDirectories.sh'] {'content': StaticFile('prepareOozieHdfsDirectories.sh'), 'mode': 0755}
Command failed after 1 tries

I have noticed that hosts with the Oozie Server and Clients do not have the following directories:

/usr/hdp/current/oozie-client/doc/
/usr/hdp/current/oozie-server/doc/

So, I have tried removing tsflags=nodocs from /etc/yum.conf and reinstalling the Oozie client (roughly as sketched below), however, it still does not create the above folders.
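The reinstall attempt looked roughly like this (the package glob is approximate; adjust it to the exact oozie packages installed on the host):

# Drop the nodocs flag so documentation files can be installed
sed -i '/^tsflags=nodocs/d' /etc/yum.conf
# Reinstall the Oozie packages so that the /doc directories get laid down
yum -y reinstall 'oozie_*'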
... View more
09-24-2018
08:44 PM
@kkanchu Thanks for pointing that out! I have updated the article.
... View more
09-05-2018
09:05 PM
2 Kudos
If you have started using Hive LLAP, you will have noticed that by default it is configured to use log4j2. The default configuration makes use of advanced log4j2 features such as rolling logs over based on both time interval and size. Over time, a lot of old log files accumulate; with log4j1 you would typically compress those files manually, or add extra jars and change the configuration to achieve the same. With log4j2, a simple configuration change ensures that every time a log file is rolled over, it is compressed for optimal use of storage space.

To automatically compress the rolled-over log files, update the filePattern line in the default configuration to:

appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}-%i.gz

The -%i ensures that, in the rare scenario where logging is heavy and the threshold size is reached more than once in the specified interval, the previously rolled-over file is not overwritten. The .gz ensures that files are compressed using gzip.

To understand the finer details of log4j2 appenders, you may check out the official documentation. You can make the same change to the llap-cli log settings.
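Once the new filePattern takes effect, rolled-over files should show up gzipped. A quick sanity check on an LLAP or HiveServer2 node (this assumes hive.log.dir resolves to /var/log/hive; adjust the path for your cluster):

# List compressed rollovers produced by the new "-%i.gz" file pattern
ls -lh /var/log/hive/*.log.*-*.gz 2>/dev/null || echo "No compressed rollovers yet"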
... View more
08-15-2018
04:12 PM
@prashanth ramesh I see the beeline example you have given above uses the --hiveconf approach.
When appending the credential path to the JDBC string, are you getting the exact same error as shown above?

beeline -u "jdbc:hive2://hs2_hostname:port/default;principal=my/principal@REALM?hadoop.security.credential.provider.path=jceks://hdfs@hostname/path/to/jceks"
... View more
05-11-2018
04:45 AM
I am trying to compile and build hadoop source on MacOS.
Here is the complete error trace:

[exec] Scanning dependencies of target rpc_obj
[exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/rpc_connection_impl.cc.o
[exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/rpc_engine.cc.o
[exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/namenode_tracker.cc.o
[exec] [ 33%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/request.cc.o
[exec] [ 33%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/sasl_protocol.cc.o
[exec] [ 34%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/sasl_engine.cc.o
[exec] [ 34%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/cyrus_sasl_engine.cc.o
[exec] [ 34%] Built target rpc_obj
[exec] Scanning dependencies of target rpc
[exec] [ 35%] Linking CXX static library librpc.a
[exec] [ 35%] Built target rpc
[exec] [ 35%] Building CXX object main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/ioservice_impl.cc.o
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:30: error: no type named 'function' in namespace 'std'
[exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;
[exec] ~~~~~^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:38: error: expected ')'
[exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:24: note: to match this '('
[exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:10: error: no member named 'function' in namespace 'std'
[exec] std::function<void(void)> typeEraser = func;
[exec] ~~~~~^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:28: error: expected '(' for function-style cast or type construction
[exec] std::function<void(void)> typeEraser = func;
[exec] ~~~~^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:31: error: use of undeclared identifier 'typeEraser'
[exec] std::function<void(void)> typeEraser = func;
[exec] ^
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:45:54: error: non-virtual member function marked 'override' hides virtual member function
[exec] void PostTask(std::function<void(void)> asyncTask) override;
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:16: note: hidden overloaded virtual function 'hdfs::IoService::PostTask' declared here: type mismatch at 1st parameter ('int' vs 'std::function<void ()>')
[exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:34:14: error: allocating an object of abstract class type 'hdfs::IoServiceImpl'
[exec] return new IoServiceImpl();
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:16: note: unimplemented pure virtual method 'PostTask' in 'IoServiceImpl'
[exec] virtual void PostTask(std::function<void(void)> asyncTask) = 0;
[exec] ^
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:61:
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:2143:9: error: field type 'hdfs::IoServiceImpl' is an abstract class
[exec] _T2 __second_;
[exec] ^
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:2302:15: note: in instantiation of template class 'std::__1::__libcpp_compressed_pair_imp<std::__1::allocator<hdfs::IoServiceImpl>, hdfs::IoServiceImpl, 1>' requested here
[exec] : private __libcpp_compressed_pair_imp<_T1, _T2>
[exec] ^
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:3816:36: note: in instantiation of template class 'std::__1::__compressed_pair<std::__1::allocator<hdfs::IoServiceImpl>, hdfs::IoServiceImpl>' requested here
[exec] __compressed_pair<_Alloc, _Tp> __data_;
[exec] ^
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4444:26: note: in instantiation of template class 'std::__1::__shared_ptr_emplace<hdfs::IoServiceImpl, std::__1::allocator<hdfs::IoServiceImpl> >' requested here
[exec] ::new(__hold2.get()) _CntrlBlk(__a2, _VSTD::forward<_Args>(__args)...);
[exec] ^
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4810:29: note: in instantiation of function template specialization 'std::__1::shared_ptr<hdfs::IoServiceImpl>::make_shared<>' requested here
[exec] return shared_ptr<_Tp>::make_shared(_VSTD::forward<_Args>(__args)...);
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:38:15: note: in instantiation of function template specialization 'std::__1::make_shared<hdfs::IoServiceImpl>' requested here
[exec] return std::make_shared<IoServiceImpl>();
[exec] ^
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
[exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:61:
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4447:28: error: assigning to 'std::__1::__shared_weak_count *' from incompatible type 'pointer' (aka 'std::__1::__shared_ptr_emplace<hdfs::IoServiceImpl, std::__1::allocator<hdfs::IoServiceImpl> > *')
[exec] __r.__cntrl_ = __hold2.release();
[exec] ~~~~~~~~^~~~~~~~~
[exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4810:29: note: in instantiation of function template specialization 'std::__1::shared_ptr<hdfs::IoServiceImpl>::make_shared<>' requested here
[exec] return shared_ptr<_Tp>::make_shared(_VSTD::forward<_Args>(__args)...);
[exec] ^
[exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:38:15: note: in instantiation of function template specialization 'std::__1::make_shared<hdfs::IoServiceImpl>' requested here
[exec] return std::make_shared<IoServiceImpl>();
[exec] ^
[exec] 9 errors generated.
[exec] make[2]: *** [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/ioservice_impl.cc.o] Error 1
[exec] make[1]: *** [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
[exec] make: *** [all] Error 2
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Native Client ................... FAILURE [ 29.737 s]
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS-RBF ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] Apache Hadoop YARN ................................. SKIPPED
[INFO] Apache Hadoop YARN API ............................. SKIPPED
[INFO] Apache Hadoop YARN Common .......................... SKIPPED
[INFO] Apache Hadoop YARN Registry ........................ SKIPPED
[INFO] Apache Hadoop YARN Server .......................... SKIPPED
[INFO] Apache Hadoop YARN Server Common ................... SKIPPED
[INFO] Apache Hadoop YARN NodeManager ..................... SKIPPED
[INFO] Apache Hadoop YARN Web Proxy ....................... SKIPPED
[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SKIPPED
[INFO] Apache Hadoop YARN Timeline Service ................ SKIPPED
[INFO] Apache Hadoop YARN ResourceManager ................. SKIPPED
[INFO] Apache Hadoop YARN Server Tests .................... SKIPPED
[INFO] Apache Hadoop YARN Client .......................... SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .............. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Common .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Client .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Servers ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Server 1.2 SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase tests ..... SKIPPED
[INFO] Apache Hadoop YARN Router .......................... SKIPPED
[INFO] Apache Hadoop YARN Applications .................... SKIPPED
[INFO] Apache Hadoop YARN DistributedShell ................ SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SKIPPED
[INFO] Apache Hadoop MapReduce Client ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Core ....................... SKIPPED
[INFO] Apache Hadoop MapReduce Common ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Shuffle .................... SKIPPED
[INFO] Apache Hadoop MapReduce App ........................ SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer .............. SKIPPED
[INFO] Apache Hadoop MapReduce JobClient .................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop YARN Services ........................ SKIPPED
[INFO] Apache Hadoop YARN Services Core ................... SKIPPED
[INFO] Apache Hadoop YARN Services API .................... SKIPPED
[INFO] Apache Hadoop YARN Site ............................ SKIPPED
[INFO] Apache Hadoop YARN UI .............................. SKIPPED
[INFO] Apache Hadoop YARN Project ......................... SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SKIPPED
[INFO] Apache Hadoop MapReduce NativeTask ................. SKIPPED
[INFO] Apache Hadoop MapReduce Uploader ................... SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] Apache Hadoop MapReduce ............................ SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Archive Logs ......................... SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED
[INFO] Apache Hadoop Kafka Library support ................ SKIPPED
[INFO] Apache Hadoop Azure support ........................ SKIPPED
[INFO] Apache Hadoop Aliyun OSS support ................... SKIPPED
[INFO] Apache Hadoop Client Aggregator .................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Resource Estimator Service ........... SKIPPED
[INFO] Apache Hadoop Azure Data Lake support .............. SKIPPED
[INFO] Apache Hadoop Image Generation Tool ................ SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Client API ........................... SKIPPED
[INFO] Apache Hadoop Client Runtime ....................... SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants .......... SKIPPED
[INFO] Apache Hadoop Client Test Minicluster .............. SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31.310 s
[INFO] Finished at: 2018-05-11T00:27:30-04:00
[INFO] Final Memory: 67M/557M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 2
[ERROR] around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 2
around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: exec returned: 2
around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 20 more
Caused by: /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml:9: exec returned: 2
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:646)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:672)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:498)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:327)
... 22 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
I have done the following steps:
1. Installed Xcode and the Command Line Developer Tools
2. Installed Protobuf 2.5.0
3. Followed this HCC article for all other installations: https://community.hortonworks.com/articles/36832/setting-hadoop-development-environment-on-mac-os-x.html
4. Cloned github.com/apache/hadoop
5. cd /.../hadoop
6. mvn clean package -DskipTests
7. mvn package -Pdist -Pnative -Dtar -DskipTests

It appears that the C++ code is failing to compile. I would appreciate any help.
... View more
03-21-2018
09:04 PM
Repo Description

This script triggers service checks for components which are not in maintenance mode. Once the service checks are triggered, you can monitor their progress from the Ambari UI.

It can be used with the following arguments:
-u <ambariAdminUser> -p <ambariAdminPassword> -s <all|comma-separated list>

Optional arguments (to be used when Ambari SSL is enabled):
[-t <ambariServerHost or IP address>] [-n <ambariServerPort>] [-c <empty or path to cert file>]

When using -c:
- Mention -c at the end of the command, as shown in the examples.
- Mention the path to the CA certs file (preferably) or the Ambari certs file (if self-signed) to trust the HTTPS connectivity to Ambari. If neither path is available, you may use -c without any path.

If not specified, the default value for -t is localhost, and the default for -n is 8080 (when SSL is disabled) or 8443 (when SSL is enabled).

For the KERBEROS service check, the script will prompt for an option to skip the check. You may skip the check for KERBEROS if you do not have the KDC admin principal and password. If executing this script on the Ambari Server host, you do not need to specify the -t and -p options.

Example: Trigger service checks for all components
sh ambari-service-check.sh -u admin -p admin -s all

Example: Trigger service checks only for HIVE, HDFS, and KNOX
sh ambari-service-check.sh -u admin -p admin -s hive,hdfs,knox

Example: Trigger service checks only for HIVE, HDFS, and KNOX when SSL is enabled
sh ambari-service-check.sh -u admin -p admin -s hive,hdfs,knox -c

Example: Trigger service checks only for HIVE, HDFS, and KNOX when SSL is enabled and you want to specify a cert file
sh ambari-service-check.sh -u admin -p admin -s hive,hdfs,knox -c /path/to/cert/file

Repo Info
Github Repo URL: https://github.com/dineshchitlangia/Ambari-Service-Check
Github account name: dineshchitlangia
Repo name: Ambari-Service-Check
... View more
03-08-2018
05:04 AM
@Daniel Kozlowski This feature is not supported in any version of HDP so far. Here is the doc from the latest HDP-2.6.4; refer to section 4.2.4, "Using the Note Toolbar". It says: "Schedule the execution of all paragraphs using CRON syntax. This feature is not currently operational. If you need to schedule Spark jobs, consider using Oozie Spark action." We recently opened ZEPPELIN-3271 to provide a way to disable this feature and avoid the risk.
... View more
01-16-2018
03:38 PM
1 Kudo
@Rajesh K There is no harm in starting up both services and turning off maintenance mode. Regarding your Atlas service crashing every time after startup, it could indicate multiple problems; the most common one is an out-of-memory error. Could you check the logs and share the error stack trace?
... View more
12-01-2017
06:06 PM
2 Kudos
When running a custom Java application that connects to Hive via JDBC, after migration to HDP-2.6.x the application fails to start with a NoClassDefFoundError or ClassNotFoundException related to a Hive class, such as:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hive/service/cli/thrift/TCLIService$Iface
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
Root Cause

Prior to HDP-2.6.x, hive-jdbc.jar is a symlink that points to the "standalone" JDBC jar (the one intended for non-Hadoop apps, such as a generic application that needs JDBC access to a database). For example, in HDP 2.5.0:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar

From HDP-2.6.x onwards, hive-jdbc.jar instead points to the "hadoop env" JDBC driver, which depends on many other Hadoop JARs. For example, in HDP 2.6.2:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.6.2.0-205.jar

or in HDP-2.6.3:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.6.3.0-235.jar

Does this mean the HDP stack no longer includes a standalone JAR? No. The standalone jar has been moved to this path: /usr/hdp/current/hive-client/jdbc

There are two ways to solve this:

1. Change the custom Java application's classpath to use the hive-jdbc-*-standalone.jar explicitly. As noted above, the standalone jar is now available in a different path. For example, in HDP-2.6.2:

/usr/hdp/current/hive-client/jdbc/hive-jdbc-1.2.1000.2.6.2.0-205-standalone.jar

In HDP-2.6.3:

/usr/hdp/current/hive-client/jdbc/hive-jdbc-1.2.1000.2.6.3.0-235-standalone.jar

2. If the custom Java application uses other Hadoop components/JARs, add the following to its HADOOP_CLASSPATH:

/usr/hdp/current/hive-client/lib/hive-metastore-*.jar:/usr/hdp/current/hive-client/lib/hive-common-*.jar:/usr/hdp/current/hive-client/lib/hive-cli-*.jar:/usr/hdp/current/hive-client/lib/hive-exec-*.jar:/usr/hdp/current/hive-client/lib/hive-service.jar:/usr/hdp/current/hive-client/lib/libfb303-*.jar:/usr/hdp/current/hive-client/lib/libthrift-*.jar:/usr/hdp/current/hadoop-client/lib/log4j*.jar:/usr/hdp/current/hadoop-client/lib/slf4j-api-*.jar:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-*.jar:/usr/hdp/current/hadoop-client/lib/commons-logging-*.jar
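As an illustration of option 1 (the application jar and class names below are placeholders, not part of HDP), a non-Hadoop client application can be launched against the relocated standalone driver like this:

# Resolve the versioned standalone driver under its new location (assumes a single versioned jar is present)
STANDALONE_JAR=$(ls /usr/hdp/current/hive-client/jdbc/hive-jdbc-*-standalone.jar)
# Launch the custom application (placeholder names) with the standalone driver on the classpath
java -cp "myapp.jar:${STANDALONE_JAR}" com.example.MyJdbcApp "jdbc:hive2://hs2_hostname:10000/default"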
... View more
11-20-2017
06:25 AM
2 Kudos
You need to save the new data to a temp table, then read from that table and overwrite into the Hive table.

cdc_data.write.mode("overwrite").saveAsTable("temp_table")

Then you can overwrite the rows in your target table:

val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("senty_audit.temptable")
... View more
11-16-2017
03:58 PM
2 Kudos
Description

During an HDP upgrade, the Hive Metastore restart step fails with the message "ValueError: time data '2017-05-10 19:08:30' does not match format '%Y-%m-%d %H:%M:%S.%f'". Following is the stack trace:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
    HiveMetastore().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 841, in restart
    self.pre_upgrade_restart(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 118, in pre_upgrade_restart
    self.upgrade_schema(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 150, in upgrade_schema
    status_params.tmp_dir)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/security_commons.py", line 242, in cached_kinit_executor
    if (now - datetime.strptime(last_run_time, "%Y-%m-%d %H:%M:%S.%f") > timedelta(minutes=expiration_time)):
  File "/usr/lib64/python2.6/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data '2017-05-10 19:08:30' does not match format '%Y-%m-%d %H:%M:%S.%f'

Root cause

During the upgrade, data is read from a file such as *_tmp.txt under the /var/lib/ambari-agent/tmp/kinit_executor_cache directory. This issue occurs if that file has not been updated and points to an older date.

Solution

1. Log in to the Hive Metastore host.
2. Move the *_tmp.txt files:
mv /var/lib/ambari-agent/tmp/kinit_executor_cache/*_tmp.txt /tmp
3. Retry the Restart Hive Metastore step from the Ambari Upgrade screen.
... View more
11-13-2017
05:45 PM
2 Kudos
This is usually a problem with multiline JSON files. Use the following single-line form in your JSON file:

{"colors":[{"color":"black","category":"hue","type":"primary","code":{"rgba":[255,255,255,1],"hex":"#000"}},{"color":"white","category":"value","code":{"rgba":[0,0,0,1],"hex":"#FFF"}},{"color":"red","category":"hue","type":"primary","code":{"rgba":[255,0,0,1],"hex":"#FF0"}},{"color":"blue","category":"hue","type":"primary","code":{"rgba":[0,0,255,1],"hex":"#00F"}},{"color":"yellow","category":"hue","type":"primary","code":{"rgba":[255,255,0,1],"hex":"#FF0"}},{"color":"green","category":"hue","type":"secondary","code":{"rgba":[0,255,0,1],"hex":"#0F0"}}]}
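If your source file is pretty-printed (multiline) JSON, you can collapse it to the single-line form shown above before reading it, for example with jq (this assumes jq is installed; the file names are placeholders):

# Compact multiline JSON into a single line per document
jq -c . colors.json > colors_single_line.json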
... View more
11-13-2017
06:14 AM
1 Kudo
You need to do that step. It is the one that configures the proxy settings for your Ambari principal.
... View more
11-13-2017
12:30 AM
2 Kudos
@Mike Bit This appears to be a config issue. Check whether you have followed all the steps listed here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-views/content/configuring_pig_view.html
... View more
11-13-2017
12:14 AM
@Mike Bit Can you share your Pig View configuration and HDP version?
... View more
11-09-2017
03:23 AM
2 Kudos
During an upgrade, if the NameNode restart times out, it may not appear to be a problem: the request times out in the Ambari UI but the restart process continues to run in the background. However, this can lead to inconsistencies in the Ambari database and cause further issues at the Finalize Upgrade step.

Note: This article is only useful up to Ambari-2.5.x, and the steps must be performed before starting the upgrade process. From Ambari-2.6 onwards it is a one-step change: you only need to update the ambari.properties file instead of making all the XML changes listed below. For Ambari-2.6 onwards, refer to this document: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-upgrade/content/preparing_to_upgrade_ambari_and_hdp.html

Caution: The steps described below are a hack! It is not recommended to modify upgrade XML files to suit our needs. Please exercise caution and weigh your risks before following the steps.

You can increase the timeout for the NameNode restart using the following steps before you start the upgrade process:

Step 1: Locate the upgrade file on the Ambari Server host. If you are upgrading from HDP-2.5 to HDP-2.6, then:

/var/lib/ambari-server/resources/stacks/HDP/2.5/upgrades/nonrolling-upgrade-2.6.xml [Express Upgrade]
/var/lib/ambari-server/resources/stacks/HDP/2.5/upgrades/upgrade-2.6.xml [Rolling Upgrade]

Step 2: Change

<service name="HDFS">
<component name="NAMENODE">
<upgrade>
<task xsi:type="restart-task"/>
</upgrade>
</component>
to:

<service name="HDFS">
<component name="NAMENODE">
<upgrade>
<task xsi:type="restart-task" timeout-config="upgrade.parameter.nn-restart.timeout"/>
</upgrade>
</component>
Step 3: Add this to ambari.properties, where XXXXXX is the timeout in seconds:

upgrade.parameter.nn-restart.timeout=XXXXXX

Step 4: Restart Ambari Server (a sketch of Steps 3 and 4 follows below).

Step 5: Now you can move on to your upgrade process.
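A minimal sketch of Steps 3 and 4 on the Ambari Server host (7200 seconds is only an illustrative value; size it to your NameNode's actual startup time):

# Step 3: add the timeout property referenced by the modified upgrade XML
echo "upgrade.parameter.nn-restart.timeout=7200" >> /etc/ambari-server/conf/ambari.properties
# Step 4: restart Ambari Server so the new property is read
ambari-server restart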
... View more
11-09-2017
01:10 AM
2 Kudos
For this article, I am using Ambari-2.5.2.0 and upgrading HDP-2.5.3 to HDP-2.6.2.

If you have a large HBase cluster, it can take a long time to take an HBase snapshot. This is one of the steps Ambari performs for you as part of the upgrade. However, for a large cluster this step can lead to a timeout in the Ambari UI and may result in further inconsistencies just before the Finalize Upgrade step. To overcome this, many people have started taking a manual HBase snapshot before the upgrade. However, not many have found a way to force Ambari to skip this step and save some time, instead of waiting for it to time out in order to proceed to the next step during the upgrade.

Here is how you can skip the HBase snapshot step altogether, in case you want to perform it manually before the upgrade (see the sketch after the steps):

Caution: The steps described below are a hack! It is not recommended to modify upgrade XML files to suit our needs. Please exercise caution and weigh your risks before following the steps. The following steps must be performed before starting the upgrade.

Step 1: Locate the upgrade XML file on the Ambari Server host:

/var/lib/ambari-server/resources/stacks/HDP/2.5/upgrades/upgrade-2.6.xml (for Rolling Upgrade)
/var/lib/ambari-server/resources/stacks/HDP/2.5/upgrades/nonrolling-upgrade-2.6.xml (for Express Upgrade)

Step 2: Comment out the following piece of code in the upgrade XML file and save it:

<execute-stage service="HBASE" component="HBASE_MASTER" title="Snapshot HBASE">
<task xsi:type="execute" hosts="master">
<script>scripts/hbase_upgrade.py</script>
<function>take_snapshot</function>
</task>
</execute-stage>

Step 3: Restart Ambari Server for it to pick up the changes.

Step 4: Now you can start your upgrade.
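If you do skip Ambari's snapshot step, remember to take the snapshot manually before starting the upgrade, for example per table from the HBase shell (the table and snapshot names below are placeholders):

# Take a pre-upgrade snapshot of an HBase table (run as a user with HBase admin rights)
echo "snapshot 'my_table', 'my_table_pre_upgrade'" | hbase shell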
... View more
11-07-2017
07:23 PM
2 Kudos
@Arjun kumar

1. Deleting or retaining old snapshots depends on your use case; it all comes down to whether or not you want to keep a copy of each day's data.

2. To snapshot any directory, that directory must be snapshottable. As an admin user, you do that using:
hdfs dfsadmin -allowSnapshot <path>
Once you have allowed snapshots for a certain path, you can create a snapshot using:
hdfs dfs -createSnapshot <path> [<snapshotName>]
The snapshot name is an optional argument. When it is omitted, a default name is generated from a timestamp with the format "'s'yyyyMMdd-HHmmss.SSS", e.g. "s20130412-151029.033". Read here for more commands and options related to HDFS snapshots.

3. Yes. Basically, you create an executable script file with your snapshotting instructions and schedule it using crontab (a minimal sketch follows below). Click here for an example of using crontab. Alternatively, you can also use this repo to achieve periodic snapshots.

Caution: Snapshotting increases the total number of file system objects in your cluster. If unmonitored, this could lead to various performance issues with the NameNode. Use the snapshot feature with caution, and always plan to get rid of snapshots that are no longer required.
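A minimal sketch of point 3, assuming /data/important has already been made snapshottable; adjust the path and schedule to your needs:

#!/bin/bash
# daily-snapshot.sh: create a dated snapshot of a snapshottable HDFS directory.
# Schedule it via crontab, e.g.:  0 1 * * * /usr/local/bin/daily-snapshot.sh
hdfs dfs -createSnapshot /data/important "s$(date +%Y%m%d)"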
... View more
10-27-2017
04:00 AM
5 Kudos
HDFS per-user metrics aren't emitted by default. Kindly exercise caution before enabling them, and make sure to refer to the details of the client and service port numbers below. To use the HDFS - Users dashboard in your Grafana instance, as well as to view HDFS metrics per user, you will need to add these custom properties to your configuration.

Step-by-step guide

Presumption for this guide: this is an HA environment with dfs.internal.nameservices=nnha and dfs.ha.namenodes.nnha=nn1,nn2.

1. In Ambari, HDFS > Configs > Advanced > Custom hdfs-site, add the following properties:

dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn1=<namenodehost1>:8050
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn2=<namenodehost2>:8050
ipc.8020.callqueue.impl=org.apache.hadoop.ipc.FairCallQueue
ipc.8020.backoff.enable=true
ipc.8020.scheduler.impl=org.apache.hadoop.ipc.DecayRpcScheduler
ipc.8020.scheduler.priority.levels=3
ipc.8020.decay-scheduler.backoff.responsetime.enable=true
ipc.8020.decay-scheduler.backoff.responsetime.thresholds=10,20,30

If you have already enabled the Service RPC port, you can skip adding the first two lines about servicerpc-address. Replace 8020 with your NameNode RPC port if it is different. DO NOT replace it with the Service RPC port or the DataNode Lifeline port.

2. After this change you may see issues such as both NameNodes showing as Active, or both as Standby, in Ambari. To avoid this issue:
a. Stop the ZKFC on both NameNodes.
b. Run the following commands from one of the NameNode hosts as the hdfs user:
su - hdfs
hdfs zkfc -formatZK
c. Restart all ZKFC.

3. Restart HDFS, and you should see the metrics being emitted.

4. After a few minutes, you should also be able to use the HDFS - Users dashboard in Grafana.

Things to ensure:
- Client port: 8020 (if different, replace it with the appropriate port in all keys)
- Service port: 8021 (if different, replace it with the appropriate port in the first value)
- namenodehost1 and namenodehost2: need to be replaced with actual values from the cluster and must be FQDNs.
- dfs.internal.nameservices: needs to be replaced with the actual value from the cluster.

Example:
dfs.namenode.servicerpc-address.nnha.nn1=<namenodehost1>:8050
dfs.namenode.servicerpc-address.nnha.nn2=<namenodehost2>:8050

* For more than 2 NameNodes in your HA environment, please add one additional line for each additional NameNode:
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nnX=<namenodehostX>:8021

Adapted from this wiki, which describes how to enable per-user HDFS metrics for a non-HA environment.

Note: This article has been validated against Ambari-2.5.2 and HDP-2.6.2. It will not work in older versions of Ambari due to this bug: https://issues.apache.org/jira/browse/AMBARI-21640
... View more
10-27-2017
03:58 AM
2 Kudos
The article you mentioned only talks about the non-HA scenario. For an HA scenario:

1. You must add one line per NameNode. For example, if you have two NameNodes nn1 and nn2 and dfs.internal.nameservices=nnha, then:
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn1=<namenodehost1>:8021
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn2=<namenodehost2>:8021

2. Stop all ZKFC, and then from any NameNode host run the command: hdfs zkfc -formatZK

3. Restart ZKFC and HDFS.

4. You will then be able to see the metrics in Grafana after a few minutes.
... View more
10-23-2017
06:13 AM
@Turing nix - If my answer helped you, kindly consider accepting it so as to mark this post resolved.
... View more