Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3373 | 05-03-2017 05:13 PM
 | 2801 | 05-02-2017 08:38 AM
 | 3082 | 05-02-2017 08:13 AM
 | 3011 | 04-10-2017 10:51 PM
 | 1527 | 03-28-2017 02:27 AM
11-06-2016 01:50 PM
Also make sure to open endpoints in the Azure portal for the ports you're using, so they are reachable from outside.
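Once the endpoint is open, a quick sanity check from outside the cluster confirms the port is actually reachable. A minimal sketch, assuming a hypothetical public host name and the HiveServer2 port 10000:

```python
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical public address of the Azure VM and an example port to test.
print(port_open("mycluster.cloudapp.net", 10000))
```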
11-06-2016 01:24 PM
I should have realized that you're on Azure and using WASB. I am not 100% sure, but see if you can take the external address from the fs.defaultFS property and add it to the Hive view. You need to fill out the following properties in your Hive view with their associated values; this example is for an HA cluster, and if you also have Kerberos, you need to add those properties as well. Keep in mind you need the public address for each URL, not the ones that say "internal" in their name. I have example REST API calls to create Ambari views in this script: https://github.com/dbist/ambari-chef (go to recipes, then the ambari_views_setup.rb file). The properties in those curl calls are the ones a view needs in order to function:

"webhdfs.client.failover.proxy.provider" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"webhdfs.ha.namenode.http-address.nn1" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.http-address.nn2" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.https-address.nn1" : null,
"webhdfs.ha.namenode.https-address.nn2" : null,
"webhdfs.ha.namenode.rpc-address.nn1" : "u1201.ambari.apache.org:8020",
"webhdfs.ha.namenode.rpc-address.nn2" : "u1202.ambari.apache.org:8020",
"webhdfs.ha.namenodes.list" : "nn1,nn2",
"webhdfs.nameservices" : "hacluster",
"webhdfs.url" : "webhdfs://hacluster",
"hive.host" : "u1203.ambari.apache.org",
"hive.http.path" : "cliservice",
"hive.http.port" : "10001",
"hive.metastore.warehouse.dir" : "/apps/hive/warehouse",
"hive.port" : "10000",
"hive.transport.mode" : "binary",
"yarn.ats.url" : "http://u1202.ambari.apache.org:8188",
"yarn.resourcemanager.url" : "u1202.ambari.apache.org:8088"
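For reference, a view instance with these properties can also be created through the Ambari REST API, which is what the chef recipe above wraps in curl. A minimal sketch with Python requests, assuming admin/admin credentials, a Hive view version of 1.5.0, and an instance name of MYHIVE, all of which you would adjust for your cluster:

```python
import requests

AMBARI = "http://ambari.example.com:8080"   # assumed Ambari server URL
AUTH = ("admin", "admin")                   # assumed credentials

# Start from a couple of entries and add the remaining webhdfs.*, hive.*,
# and yarn.* properties from the list above.
view_properties = {
    "webhdfs.url": "webhdfs://hacluster",
    "webhdfs.nameservices": "hacluster",
    "hive.host": "u1203.ambari.apache.org",
    "hive.port": "10000",
}

# Create a Hive view instance named MYHIVE. The view version 1.5.0 is an
# assumption; check GET /api/v1/views/HIVE for the version your Ambari ships.
resp = requests.post(
    f"{AMBARI}/api/v1/views/HIVE/versions/1.5.0/instances/MYHIVE",
    auth=AUTH,
    headers={"X-Requested-By": "ambari"},
    json={"ViewInstanceInfo": {"label": "Hive View", "properties": view_properties}},
)
resp.raise_for_status()
```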
11-05-2016 11:39 PM
1 Kudo
This issue is fixed in HDP 2.5, which also comes with Tez 0.7.0. Sometimes we backport critical fixes: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/patch_tez.html
11-05-2016 11:13 PM
If you intend to use the 2.4 release, please try registering the 2.4.3 version and upgrading the cluster to that, then try adding Storm again. Storm in HDP 2.4 is version 0.10.x, so you're definitely pulling something incorrect. Go to the Ambari admin page and make sure you have the correct repos set up.
11-05-2016 09:33 PM
Look in hdfs-site.xml, under the Advanced config section of the HDFS service in Ambari.
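If you prefer to read the value directly from a cluster node rather than the Ambari UI, the same file lives on disk. A minimal sketch, assuming the standard /etc/hadoop/conf path and a property name given only as an example:

```python
import xml.etree.ElementTree as ET

def hdfs_site_property(name, path="/etc/hadoop/conf/hdfs-site.xml"):
    """Return the value of a property from hdfs-site.xml, or None if absent."""
    root = ET.parse(path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Example lookup; substitute the property you are after.
print(hdfs_site_property("dfs.namenode.http-address"))
```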
11-05-2016 05:07 PM
In your Hive view, make sure the webhdfs.url property is set to the correct NameNode URL, i.e. webhdfs://fqdn:8020, or, if your cluster is HA, webhdfs://nameservices, where nameservices is the value of the dfs.nameservices property.
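To confirm the NameNode you point the view at actually answers WebHDFS calls, you can hit its REST endpoint directly. A minimal sketch with Python requests, assuming a hypothetical host name, the default NameNode HTTP port 50070, and the hive user; in an HA cluster you would test each NameNode:

```python
import requests

# Hypothetical NameNode host; 50070 is the default NameNode HTTP port.
url = "http://nn1.example.com:50070/webhdfs/v1/"

# LISTSTATUS on the root path is a cheap way to verify WebHDFS is reachable.
resp = requests.get(url, params={"op": "LISTSTATUS", "user.name": "hive"}, timeout=10)
resp.raise_for_status()
print(resp.json()["FileStatuses"]["FileStatus"][:3])
```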
11-05-2016 02:25 AM
1 Kudo
Even though we no longer support Ganglia in our stack, the Ganglia metrics sink in Hadoop has not been removed, so you can still leverage it: https://wiki.apache.org/hadoop/GangliaMetrics Another option is DropWizard: https://community.hortonworks.com/repos/56568/dropwizard-metrics-reporter-for-apache-hadoop-metr.html AMS also exposes a REST API, so you can build your own sink; I tried a similar approach with Apache NiFi and Graphite.
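For the build-your-own-sink route, the AMS collector accepts metrics over its timeline REST endpoint. A minimal sketch with Python requests, assuming the collector listens on the default port 6188 and using hypothetical metric, app, and host names:

```python
import time
import requests

COLLECTOR = "http://ams-collector.example.com:6188"  # assumed AMS collector host/port

now_ms = int(time.time() * 1000)
payload = {
    "metrics": [
        {
            "metricname": "custom.queue.depth",   # hypothetical metric name
            "appid": "myapp",                     # hypothetical app id
            "hostname": "worker1.example.com",    # host the metric is reported from
            "timestamp": now_ms,
            "starttime": now_ms,
            "metrics": {str(now_ms): 42.0},       # timestamp(ms) -> value
        }
    ]
}

resp = requests.post(f"{COLLECTOR}/ws/v1/timeline/metrics", json=payload, timeout=10)
resp.raise_for_status()
```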
11-05-2016 01:24 AM
1 Kudo
Hadoop clusters spanning multiple data centers are not supported and can lead to unsatisfactory results.
11-04-2016 05:18 PM
@emaxwell I just heard from engineering: in the case of Storm, Kerberos is required for Ranger authorization.
11-04-2016 10:41 AM
@AT can you provide a solution? Otherwise this is not really useful.