Member since: 09-15-2015
Posts: 457
Kudos Received: 507
Solutions: 90
My Accepted Solutions
Views | Posted
---|---
15665 | 11-01-2016 08:16 AM
11081 | 11-01-2016 07:45 AM
8568 | 10-25-2016 09:50 AM
1918 | 10-21-2016 03:50 AM
3827 | 10-14-2016 03:12 PM
12-10-2015
05:56 AM
Agreed, the Sandbox should work on Windows 10 as long as you have VMware Player or VirtualBox installed.
12-09-2015
10:22 PM
Could you please validate your endpoints and make sure all the hostnames are correct? If your environment is kerberized, your ODBC driver configuration file should contain a DSN similar to the following:

Description=Hortonworks Hive ODBC Driver DSN
Driver=/usr/lib/hive/lib/native/Linux-i386-32/libhortonworkshiveodbc32.so
DriverUnicodeEncoding=1
HOST=hive.example.com
PORT=10000
Schema=default
FastSQLPrepare=0
UseNativeQuery=0
# HiveServerType=2 targets HiveServer2
HiveServerType=2
# AuthMech=1 selects Kerberos authentication
AuthMech=1
# KrbHostFQDN is the FQDN of the HiveServer2 host (as used in the hive/_HOST principal), not the KDC
KrbHostFQDN=hive.example.com
KrbServiceName=hive
KrbRealm=EXAMPLE.COM
UID=test
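To sanity-check a DSN like the one above, a quick probe via pyodbc can help. A minimal sketch: the DSN name "HortonworksHive" is hypothetical (use whatever name the DSN is registered under in odbc.ini), and a valid Kerberos ticket from kinit is assumed since AuthMech=1 selects Kerberos:

```python
import pyodbc

# Hypothetical DSN name -- use the name the DSN is registered under in odbc.ini.
# Assumes a Kerberos ticket was obtained via `kinit` beforehand (AuthMech=1).
conn = pyodbc.connect("DSN=HortonworksHive", autocommit=True)
cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
for row in cursor.fetchall():
    print(row)
conn.close()
```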
Could you share your ODBC configuration?
12-09-2015
09:25 PM
4 Kudos
This article is a follow-up to my original article about visualizing a cluster and its services/components (https://community.hortonworks.com/articles/2010/visualizing-hdp-cluster-service-allocation.html). In this first part I am focusing on a new feature that enables users to build and plan a cluster using a drag-and-drop Web UI.

Build a Cluster

Until now, visualizing a cluster and its service allocation meant either exporting the information from Ambari or writing a JSON file that outlines the details of the nodes. Planning and deploying a cluster should be easier, right? So let me introduce: Build a Cluster 🙂 This simple Web UI is built around drag-and-drop and allows the creation of a new cluster by simply dragging Hadoop components from the elements list onto individual nodes. Let's go over the different features.

User Interface

The UI is divided into two sections, Elements & Settings and Cluster:
- Elements & Settings (left): Contains the available services and components of the environment (remember, these can be edited by simply importing a different environment) as well as cluster settings (HDP version, cluster name, security enabled yes/no). Additionally, this section provides action buttons to finalize the cluster and add nodes.
- Cluster (right): The current cluster with all its components. Elements can be dragged from the elements list and dropped onto individual nodes. Nodes can be edited or removed.

Nodes

Note: The data structure of nodes has changed in this version. A node no longer has to represent a single physical machine; a node in this app can now represent many physical machines that all share the same components.

Adding nodes: The number of nodes is currently limited to 1000. New nodes can be added by pressing the "+ Node" button in the elements section.

Editing the hostname and cardinality: Simply click on the hostname or cardinality.

Hostname syntax: Hostnames allow some special syntax, which automatically generates multiple hostnames (only if cardinality is set > 1); a small generator sketch follows at the end of this section:
- #{x} => number with leading zeros
- {x} => number without leading zeros
- x => defines the start of the counter

Examples:
1) datanode#{0}.example.com (cardinality = 2) => datanode1.example.com, datanode2.example.com
2) datanode#{0}.example.com (cardinality = 30) => datanode01.example.com, datanode02.example.com, ...
3) datanode{100}.example.com (cardinality = 20) => datanode100.example.com, datanode101.example.com, ...
4) datanode.example.com (cardinality = 2) => datanode.example.com1, datanode.example.com2

Adding components to a node: Select a service in the Elements list; this brings up the list of components of that service. Then select and drag a component to any of the nodes.

Removing a component from a node: Drag the component from the node and drop it outside the node or over the "Trash" area inside the elements section.

Finalize a Cluster

When your cluster is finished, press the "Finalize" button inside the elements section. This converts the built cluster into the same data format as any exported or JSON-specified cluster. Additionally, this imports (or basically transfers) the new cluster to the main "Cluster" page. Finalizing a cluster also regenerates the Ambari Blueprint (read more in the next section). Note: You can press the button multiple times while you're developing a new cluster 🙂 This might be helpful, e.g. if you want to see different views (service, component, list) on the Cluster page during development.
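Here is a minimal sketch of how the hostname expansion described above could work. This is my own illustrative Python, not the app's actual code; it assumes #{x} pads with leading zeros to the width of the largest generated index, and that a start value of 0 begins counting at 1, as the examples suggest:

```python
import re

def expand_hostnames(pattern, cardinality):
    """Expand a pattern such as 'datanode#{0}.example.com' into
    `cardinality` hostnames. Illustrative only -- not the actual
    ambari-node-view implementation."""
    match = re.search(r"#?\{(\d+)\}", pattern)
    if match is None:
        # No placeholder: append the counter at the end (example 4).
        return [pattern + str(i) for i in range(1, cardinality + 1)]
    start = int(match.group(1))
    first = start if start > 0 else 1            # assumption: {0} starts counting at 1
    numbers = range(first, first + cardinality)
    if match.group(0).startswith("#"):
        width = len(str(first + cardinality - 1))  # pad with leading zeros
        return [pattern.replace(match.group(0), str(n).zfill(width)) for n in numbers]
    return [pattern.replace(match.group(0), str(n)) for n in numbers]

print(expand_hostnames("datanode#{0}.example.com", 30)[:2])
# ['datanode01.example.com', 'datanode02.example.com']
```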
Generating Ambari Blueprints

In this section I am focusing on another new feature that greatly simplifies the creation of Ambari Blueprints. The blueprint section contains the actual blueprint (left) as well as the cluster creation template, or hostgroup mapping (right). Hitting the "Copy" button in the upper-right corner copies the respective content to the clipboard (this might not work in all browsers).

Cluster Configuration

No blueprint is complete without configuration! The Cluster Configuration page provides the necessary functionality to add general or host-group-specific configurations to the blueprint or the cluster creation template (hostgroup mapping). I have seen plenty of blueprints with typos in their configuration section, e.g. a dfs.blcksize entry instead of dfs.blocksize. This is why a typeahead feature for the config location and name was added: simply start typing and the app will come up with suggestions.

HDFS HA & YARN HA configuration (automation)

A nice little gimmick that has been added to this application is the automatic config generation for HDFS HA and YARN HA clusters. Whenever the app recognizes a specific set of service components in the cluster (e.g. 2 NameNodes, 3 JournalNodes, etc.), it automatically generates the necessary configuration for HDFS or YARN High Availability.

Project & setup: https://github.com/mr-jstraub/ambari-node-view

I hope you enjoy these new features and find them useful. Looking forward to your feedback and feature requests 🙂
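Once finalized, the generated blueprint and hostgroup mapping can be registered with Ambari through its REST API. A minimal sketch; the Ambari host, credentials, blueprint/cluster names, and file names are all placeholders:

```python
import json
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"   # placeholder host
AUTH = ("admin", "admin")                          # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}             # header required by Ambari's API

# Register the blueprint produced by the app.
with open("my-blueprint.json") as f:
    blueprint = json.load(f)
resp = requests.post(f"{AMBARI}/blueprints/my-blueprint",
                     auth=AUTH, headers=HEADERS, data=json.dumps(blueprint))
resp.raise_for_status()

# Instantiate a cluster from it using the cluster creation template (hostgroup mapping).
with open("my-hostmapping.json") as f:
    template = json.load(f)
resp = requests.post(f"{AMBARI}/clusters/mycluster",
                     auth=AUTH, headers=HEADERS, data=json.dumps(template))
resp.raise_for_status()
```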
12-09-2015
07:59 PM
1 Kudo
I have looked at the HS2 and RM logs and haven't seen any errors in there. We might need to turn the log level up to DEBUG for Hive and the RM to get more details.
12-09-2015
07:47 PM
The ZNode is automatically recreated by the HBase Master during startup if it does not exist. There might be times when you run into a corrupted HBase ZNode; in that case you basically have to shut down HBase, remove the old ZNode, and restart HBase again. Is your environment kerberized? Could you please shut down HBase, open the HBase Master log, and restart the HBase Master? During startup, monitor the log and look out for errors and Zookeeper-related entries.
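If the ZNode really is corrupted, it can be removed while HBase is fully stopped. A hedged sketch using the kazoo Python client; the quorum address is a placeholder, and the znode path depends on your zookeeper.znode.parent setting (HDP typically uses /hbase-unsecure or /hbase-secure, plain HBase defaults to /hbase), so verify yours first:

```python
from kazoo.client import KazooClient

# Placeholder Zookeeper quorum; adjust to your environment.
zk = KazooClient(hosts="zk1.example.com:2181")
zk.start()

# Check zookeeper.znode.parent before picking this path!
znode = "/hbase-unsecure"
if zk.exists(znode):
    zk.delete(znode, recursive=True)   # only do this while HBase is fully stopped
zk.stop()
```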
12-09-2015
06:53 AM
Our current Sandbox does not have a folder /home/user/hue; do you mean /user/hue? You can upload files by using the Upload button in the upper-right corner (screenshot above). These files will be uploaded to the current folder; for example, if you are in /user/hue, all files will end up in that folder after you upload them. The uploaded files are located in HDFS, so it is not possible to access them directly via SCP.
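Since the uploaded files live in HDFS rather than on the local filesystem, they can be listed or fetched through WebHDFS instead of SCP. A minimal sketch; the sandbox hostname is an assumption, and 50070 is the default NameNode HTTP port on HDP 2.x:

```python
import requests

# Assumed sandbox address; WebHDFS listens on the NameNode HTTP port.
url = "http://sandbox.hortonworks.com:50070/webhdfs/v1/user/hue"
resp = requests.get(url, params={"op": "LISTSTATUS", "user.name": "hue"})
resp.raise_for_status()

# Print the name and type (FILE/DIRECTORY) of each entry in /user/hue.
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["type"])
```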
12-08-2015
01:26 PM
Thanks for confirming! The Znode makes a huge difference 🙂
12-08-2015
08:41 AM
1 Kudo
Awesome, I am glad you were able to fix it 🙂 I guess you are not using a separate znode for Solr in your Zookeeper environment, right? So basically all your Solr content is placed in the root directory of Zookeeper.
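For anyone reading along: a dedicated chroot znode keeps the Solr content out of Zookeeper's root. A small sketch with the kazoo Python client; the host and path are placeholders, and Solr would then be pointed at zk1.example.com:2181/solr:

```python
from kazoo.client import KazooClient

# Placeholder quorum; adjust to your environment.
zk = KazooClient(hosts="zk1.example.com:2181")
zk.start()

# Create the chroot znode Solr will use (zk connect string becomes host:2181/solr).
zk.ensure_path("/solr")
zk.stop()
```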
12-08-2015
06:59 AM
1 Kudo
Thanks for your question 🙂 I am seeing the same error message with the HDFS SolrCloud audit. Basically, the audit client is not picking up the collection name from the configuration, and setting "ranger.audit.solr.collection.name" to "ranger_audits" has no effect. I will follow up on this issue and see what I can find out. Just to make sure, did you follow this guide (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_Ranger_Install_Guide/content/ch_install_solr.html) to set up your SolrCloud for Ranger audits? Does your audit configuration for Hive look something like this?

xasecure.audit.destination.solr.zookeepers=master01.example.com:2181/solr
xasecure.audit.destination.solr.urls={{ranger_audit_solr_urls}}
Audit to SOLR=true

You can work around this issue by setting the following configuration:

HDFS (ranger-hdfs-audit):
xasecure.audit.destination.solr.zookeepers=NONE

Ranger (ranger-admin-site):
ranger.audit.solr.urls=http://solrNode01.example.com:8983/solr/ranger_audits

This way you bypass Zookeeper and write your audit log directly to one of the Solr nodes.
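To verify that audit events actually arrive once the workaround is in place, you can query the ranger_audits collection directly. A minimal sketch, reusing the placeholder Solr node from the config above:

```python
import requests

# Placeholder Solr node, matching ranger.audit.solr.urls above.
resp = requests.get(
    "http://solrNode01.example.com:8983/solr/ranger_audits/select",
    params={"q": "*:*", "rows": 5, "wt": "json"},
)
resp.raise_for_status()
print("audit docs indexed:", resp.json()["response"]["numFound"])
```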
12-08-2015
05:49 AM
Very common question, thanks for sharing!