Member since: 06-05-2019
Posts: 128
Kudos Received: 133
Solutions: 11

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1792 | 12-17-2016 08:30 PM
 | 1334 | 08-08-2016 07:20 PM
 | 2375 | 08-08-2016 03:13 PM
 | 2473 | 08-04-2016 02:49 PM
 | 2279 | 08-03-2016 06:29 PM
05-03-2016
08:23 PM
Following George Vetticaden's article Apache Metron TP1 Deep Dive: the chart depicts how Apache Metron would look on a 10-node cluster. Where are the ZooKeeper nodes (I'd assume we'd want 3 to achieve quorum)? Does "ZS" on Node 2 stand for ZooKeeper Server (and if so, is there only one ZooKeeper instance)?
Labels: Apache Metron
05-02-2016
09:01 PM
2 Kudos
@Amol Y I tried the tutorial you referenced. I found that if I ran "%sh hadoop fs -put ~/Hortonworks /tmp" twice, the first run successfully populated /tmp/Hortonworks, but the second run failed with "Process exited with an error: 1 (Exit value: 1)" because /tmp/Hortonworks already existed. If I first delete the folder with "%sh hadoop fs -rm -r -f /tmp/Hortonworks", I am able to re-run "%sh hadoop fs -put ~/Hortonworks /tmp" successfully.
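For completeness, the working sequence as a single Zeppelin shell paragraph (same paths as above; the final -ls is only there to confirm the copy landed):

```
%sh
# Remove the previous copy if it exists; -put fails when the target path is already present
hadoop fs -rm -r -f /tmp/Hortonworks

# Copy the local folder into HDFS
hadoop fs -put ~/Hortonworks /tmp

# Confirm the contents
hadoop fs -ls /tmp/Hortonworks
```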
05-02-2016
07:52 PM
Hi @simran kaur This error indicates that the jars are not available on the classpath. Are you submitting with oozie.use.system.libpath=true? Could you post the launcher log?
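For reference, a minimal sketch of how that property is usually set in the job's properties file; the hosts and workflow path here are placeholders, not taken from your job:

```
# job.properties (sketch)
nameNode=hdfs://<namenode-host>:8020
jobTracker=<resourcemanager-host>:8050
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/my-workflow
```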
04-29-2016
10:46 PM
Hi @ganne! Try "Stop All" and then "Start All" in Ambari. Some of the services rely on each other, so they must be started in a certain order. If you stop everything and then start everything, Ambari will make sure everything starts in the right order. Let me know if that doesn't work.
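If the UI buttons aren't cooperating, the same stop/start can be driven through the Ambari REST API. A sketch, assuming default admin credentials, the default port 8080, and a cluster named Sandbox (adjust host, credentials and cluster name for your setup):

```
# Stop all services (set the desired state to INSTALLED)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop All Services"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services

# Start all services (set the desired state to STARTED)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services
```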
04-14-2016
03:52 PM
Hi @Michael M, good question. I think my understanding of the MQ queue was incorrect: I assumed that after data is read it still exists on the queue, when in fact it is gone. The Flume agent is set to use the memory channel rather than the file channel, so if the agent crashes, whatever has been ingested from the source but not yet delivered is lost. That may be the wrong approach, because once the agent reads a message off the queue, that message is no longer available for consumption. So if the agent crashes while using the memory channel, that data is lost, right? And multiple Flume agents reading from the same queue won't step on each other for the same reason, right?
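For durability, switching the channel type from memory to file keeps events on disk across an agent crash. A minimal sketch; the agent, channel, source and sink names and the directories are made up for illustration:

```
# flume-conf.properties (sketch): file channel instead of memory channel
agent1.channels = ch1
agent1.channels.ch1.type = file
# Both directories must exist and be writable by the Flume user
agent1.channels.ch1.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.ch1.dataDirs = /var/lib/flume/data

# Wire the existing source and sink to the durable channel
agent1.sources.src1.channels = ch1
agent1.sinks.sink1.channel = ch1
```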
04-13-2016
09:37 PM
How would I go about configuring multiple Flume agents to fetch data from an MQ messaging broker so that they don't deliver duplicate messages to their sinks?
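For context, a sketch of what one such agent's JMS source configuration might look like; the context factory, broker URL, destination and names are placeholders. With a queue (as opposed to a topic) destination, the broker delivers each message to only one consumer, so several agents reading the same queue should not receive duplicates:

```
# flume-conf.properties (sketch): JMS source reading from a queue on an MQ broker
agent1.sources = jmsSrc
agent1.sources.jmsSrc.type = jms
agent1.sources.jmsSrc.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
agent1.sources.jmsSrc.connectionFactory = ConnectionFactory
agent1.sources.jmsSrc.providerURL = tcp://mq-broker-host:61616
agent1.sources.jmsSrc.destinationType = QUEUE
agent1.sources.jmsSrc.destinationName = INGEST.QUEUE
agent1.sources.jmsSrc.channels = ch1
```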
Labels: Apache Flume
03-29-2016
10:31 PM
Hi @Santiago Goro After some digging, I found http://gethue.com/how-to-deploy-hue-on-hdp/. It seems the sandbox ships with version 2.6.1 (can you verify you have 2.6.1 by viewing /usr/lib/hue/VERSION?). Can you try the tutorial at http://gethue.com/hadoop-hue-3-on-hdp-installation-tutorial/ and let me know if that works?
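A quick way to check the installed version, using the path mentioned above:

```
cat /usr/lib/hue/VERSION
```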
03-29-2016
02:56 AM
Hi @Santiago Goro Are you using HDP 2.4? Can you validate that Hue has been started? https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/start_hue.html
03-23-2016
05:03 PM
@DIALLO Sory Referring to your screenshot, were you able to resolve the failed host on the Confirm Hosts step during the Ambari setup?
03-21-2016
09:42 PM
1 Kudo
While running the latest Sandbox (HDP 2.4 on Hortonworks Sandbox), I noticed HDFS had 500+ under-replicated blocks (via Ambari). Opening /etc/hadoop/conf/hdfs-site.xml, dfs.replication=3 (the default, per http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml). Does anyone know why the Sandbox uses an HDFS replication factor of 3, aside from the fact that it's the HDFS default? I'd assume most Sandbox users are running a single virtual machine representing one node. If that's the case, dfs.replication should be 1 in the Sandbox to prevent under-replicated blocks. Is my assumption incorrect?
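For anyone hitting the same warning, a sketch of how to inspect the situation and bring existing files down to a single replica on a one-node Sandbox, using standard HDFS tooling (run as a user with HDFS rights, e.g. hdfs):

```
# Report under-replicated blocks
hdfs fsck / | grep -i "under replicated"

# Reduce the replication factor of existing files to 1 (recursive for a directory path, -w waits for completion)
hdfs dfs -setrep -w 1 /

# New files pick up dfs.replication from hdfs-site.xml, so also set it to 1 there
# (via Ambari: HDFS > Configs > dfs.replication) and restart HDFS
```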
Labels: Apache Hadoop