Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3662 | 05-03-2017 05:13 PM |
| | 3018 | 05-02-2017 08:38 AM |
| | 3280 | 05-02-2017 08:13 AM |
| | 3223 | 04-10-2017 10:51 PM |
| | 1690 | 03-28-2017 02:27 AM |
02-08-2016
07:28 PM
@Daniel Vielvoye It's not yet supported, but in general you can refer to this API to add metrics: https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
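For illustration, here is a minimal sketch of pushing a custom metric to the Metrics Collector with curl, assuming the default collector port 6188; the host, metric name, app id, and timestamps below are placeholders, so adjust them for your cluster.
# Placeholder collector host, metric name, and epoch-millisecond timestamps
curl -X POST -H "Content-Type: application/json" \
  http://metrics-collector.example.com:6188/ws/v1/timeline/metrics \
  -d '{"metrics":[{"metricname":"custom.app.metric","appid":"myapp","hostname":"host1.example.com","starttime":1454958000000,"metrics":{"1454958000000":42}}]}'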
02-08-2016
07:24 PM
Do you have the directory /opt/lucidworks-hdpsearch? Please refer to the link I provided to install Solr. On a non-Sandbox environment you have to install it yourself. Just follow the directions in the link; they have been tested many times on the latest Sandbox.
yum install -y lucidworks-hdpsearch
sudo -u hdfs hadoop fs -mkdir /user/solr
sudo -u hdfs hadoop fs -chown solr /user/solr
02-08-2016
07:02 PM
Use the full class name including the package name, i.e. "com.mypackage.MyClass". Please refer to http://storm.apache.org/documentation/Running-topologies-on-a-production-cluster.html @keerthana gajarajakumar
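For example, a minimal sketch of submitting a topology with the fully qualified main class via the storm CLI; the jar path, class, and topology name below are hypothetical placeholders.
# Hypothetical jar path, main class, and topology name
storm jar target/my-topology-1.0.jar com.mypackage.MyClass my-topology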
02-08-2016
07:01 PM
@zblanco @Rafael Coss It makes sense to add the Java classpath to your tutorial, as it's a common gotcha for new users. Please refer to Ali Bajwa's tutorial, where he addressed it with JAVA_HOME.
02-08-2016
06:57 PM
2 Kudos
@keerthana gajarajakumar You need to add the Java classpath. Refer to the updated steps here: https://community.hortonworks.com/content/kbentry/1282/sample-hdfnifi-flow-to-push-tweets-into-solrbanana.html
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
/opt/lucidworks-hdpsearch/solr/bin/solr start -c -z localhost:2181
/opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets \
-d data_driven_schema_configs \
-s 1 \
-rf 1
02-08-2016
06:32 PM
The cost of SAN-backed disk compared to typical direct-attached disk would be prohibitive. @Sunile Manjee
02-08-2016
06:21 PM
@Sunile Manjee SAN is terrible for Hadoop; go with direct-attached storage or Isilon NAS. SAN suffers from the noisy-neighbor problem, and since it's a shared pool of storage outside the NameNode's control, blocks can move around and you lose data locality. Latency can also be an issue. As a final thought, direct-attached storage gets its redundancy from having many disks, so you can tolerate failures by adding more disks. A quick search led to this: http://hortonworks.com/blog/thinking-about-the-hdfs-vs-other-storage-technologies/ and more: http://www.infoworld.com/article/2609694/application-development/never--ever-do-this-to-hadoop.html Here's our official doc: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_cluster-planning-guide/content/hardware-for-slave.1.html One more thing to mention: the noisy-neighbor problem also works in reverse, so Hadoop will affect the other applications running on your SAN.
02-08-2016
05:59 PM
@Ram D Apply patches on minor versions, roll them out one host at a time, and enable HA on your cluster.
02-08-2016
05:02 PM
@Gerd Koenig The only advice I have for you, since this is a unique use case, is to try it and then post an article :).
02-08-2016
04:42 PM
@Gerd Koenig Are you following steps similar to this guide? You should be able to see the table from both places: https://community.hortonworks.com/content/kbentry/14806/working-with-hbase-and-hive-wip.html
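As an illustration (not taken from that guide), here is a minimal sketch of registering an existing HBase table in Hive with the HBaseStorageHandler; the table name and column mapping below are hypothetical.
# Hypothetical HBase table 'tweets' with column family 'cf'; adjust the mapping to your schema
hive -e "CREATE EXTERNAL TABLE hbase_tweets (rowkey STRING, text STRING)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:text')
  TBLPROPERTIES ('hbase.table.name' = 'tweets');"
Once the external table is created, the same data should show up from both the Hive and HBase sides.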