Member since
08-23-2016
261
Posts
201
Kudos Received
106
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1756 | 01-26-2018 07:28 PM
 | 1402 | 11-29-2017 04:02 PM
 | 35337 | 11-29-2017 03:56 PM
 | 3517 | 11-28-2017 01:01 AM
 | 955 | 11-22-2017 04:08 PM
05-16-2017
06:53 PM
3 Kudos
Hi @PJ The easiest and least intrusive way is to use Hortonworks Data Flow (powered by Apache NiFi) to quickly build a data flow that queries Cassandra and sends the results to HDFS. HDF/NiFi includes Cassandra processors that make the integration simple. Take a look at this article about ingesting data into Hadoop from an RDBMS; you would use the QueryCassandra processor instead: https://community.hortonworks.com/articles/87686/rdbms-to-hive-using-nifi-small-medium-tables.html https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-cassandra-nar/1.2.0/org.apache.nifi.processors.cassandra.QueryCassandra/ As always, if you find this post useful, don't forget to accept the answer.
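To make the flow concrete: QueryCassandra can emit its result set as JSON, which a downstream processor (e.g. PutHDFS) then writes out. A minimal Python sketch of that serialization step, using hypothetical rows and column names purely for illustration:

```python
import json

def rows_to_json_lines(rows):
    """Serialize query-result rows (a list of dicts) to newline-delimited
    JSON, roughly the shape a JSON-output query result takes before being
    written to HDFS."""
    return "\n".join(json.dumps(row, sort_keys=True) for row in rows)

# Hypothetical result set from a CQL query such as: SELECT id, name FROM users
rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]
print(rows_to_json_lines(rows))
```

This is only a sketch of the data shape, not the NiFi processor itself; in the real flow, NiFi handles the query, serialization, and HDFS write for you.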
05-11-2017
05:31 AM
1 Kudo
Hi @Anil Reddy If you are like me and dislike regex, one trick you can try is to use the SplitContent processor first. Change the config dropdown to use Text instead of Hexadecimal, and use the byte sequence of your pair delimiter, &. This simplifies the regex if you still want to use ExtractText. Or you can explore chaining another SplitContent processor on the = to get the field and value tokens separately. Hopefully you can avoid the regex there. As always, if you find this post helpful, please accept the answer.
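The splitting those two chained SplitContent passes perform can be sketched in plain Python (the sample content and delimiters are illustrative assumptions, not from the original question):

```python
def split_pairs(content, pair_delim="&", kv_delim="="):
    """Mimic two chained SplitContent passes: first split the content on
    the pair delimiter, then split each pair into field and value tokens."""
    result = {}
    for pair in content.split(pair_delim):
        if kv_delim in pair:
            field, value = pair.split(kv_delim, 1)
            result[field] = value
    return result

print(split_pairs("user=anil&topic=nifi&rated=5"))
# → {'user': 'anil', 'topic': 'nifi', 'rated': '5'}
```

No regex needed at any step, which is the point of the SplitContent approach.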
05-10-2017
02:56 PM
1 Kudo
@Roberto Sancho Flume is included in the HDP repo and is usually installed on an edge node. The installation instructions can be found here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/installing_flume.html As always, if you find this post useful, don't forget to accept the answer.
05-08-2017
07:53 PM
2 Kudos
Hi @Stefan Schuster I'm not sure why they aren't on the site; perhaps they were re-organized. I believe the one you are looking for is still available here: https://github.com/hortonworks/data-tutorials/blob/1f3893c64bbf5ffeae4f1a5cbf1bd667dcea6b06/tutorials/hdp/hdp-2.6/hadoop-tutorial-getting-started-with-hdp/tutorial-8.md As always, if you find this post useful, don't forget to accept the answer.
05-08-2017
07:46 PM
1 Kudo
@Roberto Sancho Assuming the box meets the other prerequisites (OS, etc.), the client tools can be installed on the same box, though it isn't typical.
05-08-2017
03:13 PM
1 Kudo
@Roberto Sancho The HDP client tools, including Sqoop and Flume, would typically be installed on HDP edge nodes, and not usually in an HDF cluster. As always, if you find this post useful, don't forget to accept the answer.
05-04-2017
02:30 AM
1 Kudo
@Rakesh Maheshwari Can you confirm the version of the Sandbox you downloaded? Also, can you show a screenshot of the Ambari menu, including the user you are logging in with? If not admin, that user should have the view permissions assigned. With the admin user, click on the dropdown (in my screenshot, I'd click on Admin) -> Manage Ambari -> Views -> Hive -> Hive View 2.0, then scroll down to Permissions and make sure the users you require are added. As always, if you find this post useful, don't forget to accept the answer, and/or upvote.
05-03-2017
11:22 PM
3 Kudos
Hi @Aref Asvadi The Sandbox archive link is a little easy to miss, but it is right on the Sandbox download page: https://hortonworks.com/downloads/ Scroll down past the current Sandbox, and right past the Sandbox in the cloud section, and you should see an expandable/clickable section for Hortonworks Sandbox Archive. Expand the section and download the version of your choice! As always, if you find this post useful, don't forget to accept and/or upvote the answer.
05-03-2017
04:08 PM
2 Kudos
@Dinesh Das Remember that Apache Phoenix is a SQL skin over HBase. The underlying database is HBase, but it is accessed via Phoenix if one wishes to use SQL. Here are a couple of good links that can help explain further: https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html https://hortonworks.com/hadoop-tutorial/introduction-apache-hbase-concepts-apache-phoenix-new-backup-restore-utility-hbase/ As always, if you find this post useful, don't forget to accept and/or upvote the answer.
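The "SQL skin" idea can be sketched conceptually: Phoenix maps each SQL row onto HBase cells, with the primary-key columns forming the rowkey and the remaining columns becoming qualifiers in a column family. A toy Python illustration of that mapping (the "0" family name, the separator, and the string encoding are simplifications; Phoenix's real encoding is binary):

```python
def row_to_hbase_cells(row, pk, family="0"):
    """Toy illustration of the Phoenix mapping: primary-key columns are
    concatenated into the HBase rowkey; every other column becomes a
    qualifier under a column family."""
    rowkey = "|".join(str(row[col]) for col in pk)
    cells = {f"{family}:{col}": str(val)
             for col, val in row.items() if col not in pk}
    return rowkey, cells

# Hypothetical row from: CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR)
rowkey, cells = row_to_hbase_cells({"id": 1, "name": "alice"}, pk=["id"])
print(rowkey, cells)
# → 1 {'0:name': 'alice'}
```

This is only a mental model, not Phoenix's actual storage format, but it shows why HBase remains the underlying database while Phoenix supplies the SQL layer.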
04-06-2017
03:48 PM
As always, if you find this post useful, don't forget to accept and/or upvote the answer.