Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3644 | 05-03-2017 05:13 PM |
| | 2999 | 05-02-2017 08:38 AM |
| | 3253 | 05-02-2017 08:13 AM |
| | 3212 | 04-10-2017 10:51 PM |
| | 1677 | 03-28-2017 02:27 AM |
05-03-2016
01:55 PM
1 Kudo
Take a look at this https://github.com/hortonworks-gallery/ambari-vnc-service
05-03-2016
12:31 PM
You read the string, convert it to a JSON object, and read it. Look at my example of how I read a key to build the row key; you will do the same after calling a Get.
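In Python terms (the linked example is Java), the read path after a Get is just: decode the cell bytes, parse the JSON, and pull out the field you need. A minimal sketch, where a literal byte string stands in for the value the HBase client returns from `Result.getValue`, and the field names (`user_id`, `city`) are made up for illustration:

```python
import json

# Stand-in for the bytes returned by a Get (Result.getValue(cf, qualifier));
# in the real example the whole JSON document lives in one cell.
cell_bytes = b'{"user_id": "42", "name": "alice", "city": "nyc"}'

# Decode and parse the stored string back into an object...
doc = json.loads(cell_bytes.decode("utf-8"))

# ...then read fields the same way the row key was derived on the write path.
row_key = doc["user_id"]
print(row_key, doc["city"])  # -> 42 nyc
```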
05-03-2016
11:57 AM
You can treat each JSON document as a string; in HBase you would then do the following, where `line` is a JSON string: `p.addColumn(CF, Bytes.toBytes("json"), Bytes.toBytes(line));` Full example here: https://github.com/dbist/workshops/blob/master/hbase/HBaseJsonLoad/src/main/java/com/hortonworks/hbase/Application.java
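The write path can be sketched in Python (the linked Application.java does the equivalent with Put/addColumn): each input line is a complete JSON document, one field is parsed out to serve as the row key, and the untouched line is stored as a single cell. A plain dict stands in for the HBase table here, and the `id` field and `load_line` helper are illustrative only:

```python
import json

def load_line(table, line):
    """Store one JSON line under a row key parsed from the document itself.

    `table` is a dict standing in for an HBase Table; in the Java example
    the same step is a Put with the whole line as the cell value.
    """
    doc = json.loads(line)           # parse only to derive the key...
    row_key = doc["id"]              # hypothetical key field
    table[row_key] = {"cf:json": line}  # ...but store the full document as-is
    return row_key

table = {}
load_line(table, '{"id": "row1", "payload": "hello"}')
print(table["row1"]["cf:json"])  # -> {"id": "row1", "payload": "hello"}
```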
05-03-2016
02:39 AM
I happen to agree with @Ravi Mutyala. NiFi is great for bringing logs into Hadoop and tracking them at the edge; there are built-in parsers and filters, so you'll feel right at home with NiFi. Here are some NiFi templates, including ones for working with logs: https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
05-03-2016
02:30 AM
2 Kudos
I'm a long-time user of Apache Bigtop; my experience with Hadoop and Bigtop predates Ambari. I started using Bigtop with version 0.3, and I remember pulling the bigtop.repo file to install Hadoop, Pig, and Hive for some quick development. Bigtop makes that convenient and easy. It has matured since then, and there are now multiple ways to deploy it. You can still pull the repo and install manually, but there are better ways now with Vagrant and Docker. I won't rehash how to deploy Bigtop using Docker, as it was beautifully described here. Admittedly, I'm running it on a Mac and was not able to provision a cluster using Docker; I did not try on non-OSX. This post is about Vagrant. Let's get started:

1. Install VirtualBox and Vagrant.
2. Download the 1.1.0 release: `wget http://www.apache.org/dist/bigtop/bigtop-1.1.0/bigtop-1.1.0-project.tar.gz`
3. Uncompress the tarball: `tar -xvzf bigtop-1.1.0-project.tar.gz`
4. Change into the Vagrant provisioner directory: `cd bigtop-1.1.0/bigtop-deploy/vm/vagrant-puppet-vm`
5. Here you can review the README, but to keep it short, edit vagrantconfig.yaml for any additional customization, like changing VM memory, OS, number of CPUs, components (e.g. hadoop, spark, tez, hama, solr), and also the number of VMs you'd like to provision. This last part is the killer feature: you can provision a sandbox with multiple nodes, not just a single VM. The same is true of the Docker provisioner, but I can't confirm that for you; feel free to read the README in bigtop-1.1.0/bigtop-deploy/vm/vagrant-puppet-docker for that approach.
6. Start provisioning your custom sandbox: `vagrant up`
7. Wait 5-10 minutes, then use standard Vagrant commands to interact with your custom sandbox: `vagrant ssh bigtop1`
8. Now just create your local user and off you go: `sudo -u hdfs hdfs dfs -mkdir /user/vagrant` followed by `sudo -u hdfs hdfs dfs -chown -R vagrant:hdfs /user/vagrant`
9. For your convenience, add the Bigtop machine(s) to /etc/hosts.

Now, you're probably wondering why you would use Bigtop over the regular Sandbox. Well, the Sandbox has been getting pretty resource-heavy and ships with a lot of components. I like to provision a small cluster with just a few components, like hadoop, spark, yarn, and pig; Bigtop makes this possible and runs easily within a memory-strapped VM. One downside is that with the latest release, Spark is at 1.5.0 while the Hortonworks Sandbox is at 1.6.0, and the story is the same with other components. There are version gaps, but if you can look past them, you have a quick way to prototype without much fuss! This is by no means meant to steal thunder from the excellent Ambari quick start guide; it's meant to demonstrate yet another approach from a rich ecosystem of Hadoop tools.
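For reference, the customization knobs mentioned above live in vagrantconfig.yaml. A sketch of the kind of settings it exposes is below; the exact key names vary between Bigtop releases, so treat these as illustrative and check the README in your copy before editing:

```yaml
# Illustrative vagrantconfig.yaml fragment -- verify key names against
# the README shipped in bigtop-deploy/vm/vagrant-puppet-vm.
memory_size: 4096          # RAM per VM, in MB
number_cpus: 1             # vCPUs per VM
num_instances: 3           # how many VMs to provision (the multi-node feature)
components: [hadoop, yarn, spark, pig]   # which stack components to install
run_smoke_tests: false
```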
05-02-2016
11:17 PM
Scott, can you confirm whether you're having an issue with Hive or with the Hive view? Please run your SQL against the Hive CLI or Beeline. I have a working example here: https://github.com/dbist/workshops/blob/master/hive/JSON/JSON_SERDE.txt Another thing to keep in mind: the switch to Hive 1.2 introduced a check for reserved keywords. In my example, `user` is a keyword and I turn the check off (hive.support.sql11.reserved.keywords=false). Confirm your schema does not include any reserved words.
05-02-2016
11:08 PM
Were you able to overcome this problem, or do you still need help?
05-02-2016
11:05 PM
Are you still having issues? Can you post the error messages from the Hive logs directory?
05-02-2016
07:52 PM
2 Kudos
How does context pass from paragraph to paragraph? Think of a Hive context shared with Spark, then Phoenix, etc. Also, is context sharing enabled for multi-user setups?
Labels:
- Apache Spark
- Apache Zeppelin