Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2864 | 08-20-2018 08:26 PM
| 1219 | 08-15-2018 01:59 PM
| 1554 | 08-13-2018 02:20 PM
| 2687 | 07-23-2018 04:37 PM
| 3344 | 07-19-2018 12:52 PM
10-27-2016
01:25 AM
Go to the Ranger admin page. Click on Settings, select Users, then select the user to delete and hit the red Delete button. Be aware that the next time a sync runs, the user will show up in the list again. If you want to remove the user permanently, I recommend removing the user from the OU group in AD/LDAP.
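For scripted cleanup, Ranger also exposes a user-management REST API. The sketch below is an assumption from memory, not a verified recipe: the endpoint path and the forceDelete parameter should be checked against your Ranger version, and the host, credentials, and user name are placeholders.

```shell
# Hypothetical sketch: delete a Ranger-internal user over REST.
# Verify the endpoint against your Ranger version before relying on it.
ranger_host="ranger.example.com:6080"
user="olduser"
cmd="curl -u admin:password -X DELETE \"http://${ranger_host}/service/xusers/users/userName/${user}?forceDelete=true\""
echo "$cmd"
```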
10-26-2016
08:45 PM
I have created an Atlas entity. I want to retrieve that entity's GUID via the REST API. The post here https://community.hortonworks.com/questions/42114/atlas-rest-api.html walks through getting all GUIDs for a type. I want to know if I can get the GUID for a specific entity via the REST API.
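Roughly what I am after, sketched with curl. The lookup-by-attribute endpoint shape is my assumption about the Atlas v1 API of that era, and the host, type, and qualified name are placeholders:

```shell
# Sketch: look up one entity by type + qualifiedName, then read its GUID
# from the JSON response. Endpoint shape is an assumption; names are placeholders.
atlas_host="atlas.example.com:21000"
qn="default.my_table@mycluster"
url="http://${atlas_host}/api/atlas/entities?type=hive_table&property=qualifiedName&value=${qn}"
echo "$url"
# e.g. fetch and pretty-print:  curl -u admin:admin "$url" | python -m json.tool
```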
Labels:
- Apache Atlas
10-26-2016
08:11 PM
1 Kudo
@Ashnee Sharma Take a look at this HCC post: https://community.hortonworks.com/questions/7165/how-to-copy-hdfs-file-to-aws-s3-bucket-hadoop-dist.html It outlines options to move HDFS data to S3.
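A minimal sketch of one of those options, hadoop distcp straight to an s3a URL. The bucket, paths, and credentials are placeholders:

```shell
# Copy an HDFS directory to S3 in parallel with distcp over the s3a connector.
# Credentials can also live in core-site.xml instead of -D options.
src="hdfs:///apps/data/logs"
dst="s3a://my-bucket/logs"
cmd="hadoop distcp -Dfs.s3a.access.key=MY_KEY -Dfs.s3a.secret.key=MY_SECRET $src $dst"
echo "$cmd"
```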
10-26-2016
08:09 PM
1 Kudo
@srinivas reddy gaddam Using Ambari to kerberize your cluster is very straightforward. Please follow the how-to guide here: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-security/content/ch_configuring_amb_hdp_for_kerberos.html
10-20-2016
09:00 PM
@Amit Dass Yes, it is a key-value store, similar to Apache HBase. Think of it as a map. To get a "column" you need to know how to find it, and that is where rowkey design comes in: the rowkey plus the column name is the "map" key that tells you where the column physically resides.
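To make the map analogy concrete, here is the shape of a point lookup in the HBase shell. The table, row key, and column below are made up for illustration:

```shell
# A cell is addressed by (rowkey, columnfamily:qualifier) -- exactly a map key.
# 'users', 'user123', and 'info:email' are hypothetical names.
table="users"; row="user123"; col="info:email"
# In the hbase shell, the point lookup would read:
echo "get '${table}', '${row}', '${col}'"
```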
10-20-2016
08:56 PM
1 Kudo
I completely agree with Josh. Hive is best suited for EDW-type querying. HBase is a key-value store, so you need to know your questions before designing the PDM. Both use HDFS as the underlying storage. If you know which queries will be run and have a defined access-path model, Phoenix/HBase will give you the lowest latency. If you are looking for general BI queries and can't define the access paths up front, Hive is the way to go.
10-20-2016
03:30 PM
1 Kudo
HDP 2.5 GA'd Phoenix Query Server, which makes connecting to Phoenix much easier. This article walks through the steps to connect to Phoenix Query Server via DbVisualizer.

1. Grab the Phoenix thin JDBC driver onto your desktop. On HDP 2.5 the thin driver ships with the Phoenix client on the cluster nodes.
2. Start up DbVisualizer. From the top menu bar, select Tools -> Driver Manager.
3. Populate the fields. Name: for the Apache Phoenix thin client I used phoenixthin. URL Format: jdbc:phoenix:thin:url=<scheme>://<server-hostname>:<port>[...]
4. Click on the folder icon and locate the Phoenix thin JDBC driver you downloaded to your desktop. Once selected, click OK, and the driver is loaded.
5. Now let's connect to Phoenix. Click the icon that creates a new DB connection, then select "Use Wizard". Enter a connection name; I used phoenix-QPS.
6. Select the driver you loaded in the previous steps. I named my driver phoenixthin.
7. Next, enter your user ID and password. For this example I used the root user ID.
8. A database connection has now been created. Go to the connection alias shown in the left pane, right-click, and select "Connect".

You are now connected! Start having fun: open up some namespaces and run a select * against your tables. It is clear that connecting to Phoenix is much easier now, thanks to the community building Phoenix Query Server. Happy Phoenix-ing!
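For reference, the filled-in thin-client URL looks like the sketch below. The hostname is a placeholder, port 8765 is the Phoenix Query Server default, and the PROTOBUF serialization flag matches the HDP 2.5-era thin client as far as I recall, so double-check it against your release:

```shell
# Example Phoenix Query Server thin JDBC URL (host is a placeholder).
host="pqs.example.com"
url="jdbc:phoenix:thin:url=http://${host}:8765;serialization=PROTOBUF"
echo "$url"
# From a cluster node, the same endpoint can be exercised with sqlline-thin,
# e.g.:  /usr/hdp/current/phoenix-client/bin/sqlline-thin.py "http://${host}:8765"
```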
10-19-2016
05:35 AM
Have you looked at this article? https://community.hortonworks.com/content/kbentry/61188/enable-jmx-metrics-on-hadoop-using-jmxterm.html
10-18-2016
01:54 PM
3 Kudos
@Ankit Jindal Consider LLAP, which comes with HDP 2.5, or Apache HAWQ, which is also known as HDB. Both are fast SQL engines, faster than Impala, and both run on YARN. LLAP has terabyte scale; HDB is virtually limitless, since you add nodes based on your usage. Lastly, for known query patterns or access patterns, Apache Phoenix should be considered: using the primary row key along with a secondary index is simply fast. Here is an article on how to use Phoenix secondary indexes: https://community.hortonworks.com/content/kbentry/61705/art-of-phoenix-secondary-indexes.html
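As a flavor of what that article covers, a covered secondary index in Phoenix looks like the hypothetical DDL below; the table and column names are made up:

```shell
# Hypothetical Phoenix DDL: a covered index so point queries on EMAIL
# are served from the index instead of a full scan of USERS.
sql="CREATE INDEX email_idx ON users (email) INCLUDE (name)"
echo "$sql"
```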
10-17-2016
02:27 AM
@Atif Mohammad Can you please try using the external DNS or IP with port 9995? Reply back if that works. If you are able to hit Zeppelin directly using the external DNS or IP with port 9995, then you need to change the Ambari config to point to the correct DNS, or use Route 53.
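A quick way to test reachability from your machine is sketched below; the hostname is a placeholder for your external DNS name or IP:

```shell
# Print the HTTP status Zeppelin returns on port 9995 (placeholder host).
host="zeppelin.example.com"
check="curl -s -o /dev/null -w '%{http_code}' http://${host}:9995/"
echo "$check"
```

A 200 means Zeppelin itself is fine and the problem is in the Ambari-configured address.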