Member since: 08-02-2019
Posts: 131
Kudos Received: 92
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2665 | 12-03-2018 09:33 PM
 | 3390 | 04-11-2018 02:26 PM
 | 1994 | 05-09-2017 09:35 PM
 | 835 | 03-31-2017 12:59 PM
 | 1612 | 11-21-2016 08:58 PM
10-11-2016
04:43 PM
@Aaron Harris Check that the enrichment and parser configs for squid are installed, using zk_load_configs.sh with the -m DUMP option. For example, on quick dev run this command; the squid parser and enrichment config lines appear at the end of the output:

[vagrant@node1 ~]$ /usr/metron/0.2.0BETA/bin/zk_load_configs.sh -i /usr/metron/0.2.0BETA/config/zookeeper/ -m DUMP -z localhost:2181 | grep -i squid | grep Config
log4j:WARN No appenders could be found for logger (org.apache.curator.framework.imps.CuratorFrameworkImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
PARSER Config: squid
ENRICHMENT Config: squid

If they are not listed, check the ZooKeeper config directory:

[vagrant@node1 ~]$ ls /usr/metron/0.2.0BETA/config/zookeeper/enrichments/
bro.json snort.json squid.json websphere.json yaf.json

Then push the configs to ZooKeeper:

/usr/metron/0.2.0BETA/bin/zk_load_configs.sh -i /usr/metron/0.2.0BETA/config/zookeeper/ -m PUSH -z localhost:2181

You will then probably need to restart the enrichment topology: from Ambari, go to the Storm UI, click into the enrichment topology, and click the Kill button. If you are using quick dev, monit should restart it automatically.
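The first check above can be scripted. This is a minimal sketch (not from the original post): a hypothetical helper that verifies both the PARSER and ENRICHMENT config lines for a sensor appear in the DUMP output.

```shell
# Hypothetical helper: check that a sensor's PARSER and ENRICHMENT configs
# both appear in the output of zk_load_configs.sh -m DUMP.
check_sensor_configs() {
  local sensor="$1" dump="$2"
  echo "$dump" | grep -q "^PARSER Config: $sensor" \
    || { echo "missing PARSER config for $sensor"; return 1; }
  echo "$dump" | grep -q "^ENRICHMENT Config: $sensor" \
    || { echo "missing ENRICHMENT config for $sensor"; return 1; }
  echo "$sensor configs present"
}

# In practice the dump text would come from:
#   /usr/metron/0.2.0BETA/bin/zk_load_configs.sh -i ... -m DUMP -z localhost:2181
dump="PARSER Config: squid
ENRICHMENT Config: squid"
check_sensor_configs squid "$dump"
```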
10-11-2016
02:47 PM
Check out these articles:
https://community.hortonworks.com/articles/57350/how-to-start-atlas-on-hortonworks-sandbox-for-hdp.html
https://community.hortonworks.com/articles/57315/how-to-get-atlas-up-and-running-in-hdp-25-sandbox.html
10-11-2016
02:30 PM
1 Kudo
Is it possible to use Cloudbreak and Ambari blueprints to deploy an EMC Isilon cluster using OpenStack? Can you post an example? For example, can you specify in an Ambari blueprint the location of the NameNode and DataNode but have Ambari skip the install?
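For context, an Ambari blueprint pins components to host groups like the sketch below. This fragment is illustrative only (names and stack version are hypothetical), and it shows the placement part of the question; whether Ambari can then skip the actual NameNode/DataNode install for Isilon is exactly the open question here.

```json
{
  "Blueprints": {
    "blueprint_name": "isilon-test",
    "stack_name": "HDP",
    "stack_version": "2.5"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [ { "name": "NAMENODE" }, { "name": "RESOURCEMANAGER" } ]
    },
    {
      "name": "workers",
      "cardinality": "3",
      "components": [ { "name": "DATANODE" }, { "name": "NODEMANAGER" } ]
    }
  ]
}
```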
Labels:
- Hortonworks Cloudbreak
09-29-2016
01:46 PM
4 Kudos
@Breandán Mac Parland You can use the PutHDFS processor:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/
This article might help as well:
https://community.hortonworks.com/questions/25232/adding-user-defined-attributes-to-nifi-flowfiles.html
09-26-2016
04:33 PM
Glad you got it working.
09-26-2016
04:29 PM
@Jay Zhou and @Georgios Gkekas Also check out this article on how to use the artifacts in the Hortonworks repository from Maven. It is written for streaming applications but should translate to other Spark applications:
https://community.hortonworks.com/articles/30430/a-maven-pomxml-for-java-based-sparkstreaming-appli.html
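As a rough sketch of the idea (the repository URL and the version string below are illustrative, not taken from the article), the pom.xml adds the Hortonworks repository and then depends on HDP-built Spark artifacts from it:

```xml
<!-- Hedged sketch: the exact artifact versions for your HDP release
     come from the article linked above. -->
<repositories>
  <repository>
    <id>hortonworks</id>
    <url>http://repo.hortonworks.com/content/repositories/releases/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <!-- illustrative HDP-suffixed version string -->
    <version>1.6.2.2.5.0.0-1245</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```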
09-23-2016
07:23 PM
There may be a compatibility issue with your Sqoop workflow definition XML. Could you post it? What HDP version are you using? Here is the Sqoop action definition for Oozie 4.2:
http://oozie.apache.org/docs/4.2.0/DG_SqoopActionExtension.html
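For reference, a Sqoop action in an Oozie workflow looks roughly like the sketch below, per the spec linked above. The connect string, paths, and schema versions here are placeholders, not values from the original question.

```xml
<!-- Hedged sketch of an Oozie workflow with a Sqoop action; adjust the
     schema versions to match your HDP/Oozie release. -->
<workflow-app name="sqoop-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="sqoop-node"/>
  <action name="sqoop-node">
    <sqoop xmlns="uri:oozie:sqoop-action:0.4">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <command>import --connect jdbc:mysql://db.example.com/mydb --table mytable --target-dir /user/foo/mytable</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Sqoop failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```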
09-23-2016
12:48 PM
3 Kudos
@Shankar P In AWS you can really exercise the latest release. With the sandbox you are limited to the resources of your laptop, but in the cloud you can easily spin up test clusters in different configurations. Start by creating a Hortonworks cloud controller host; it has a GUI where you can fill out a form and spin up a new cluster. Check out this article for an example of how to use the Cloudbreak controller on AWS:
https://community.hortonworks.com/articles/54226/how-to-use-hortonworks-cloud-to-provision-a-cluste.html
Also check out this page:
http://hortonworks.github.io/hdp-aws/
09-19-2016
05:41 PM
1 Kudo
It looks like it was interrupted. Maybe you ran out of resources on the sandbox host?

2016-09-16 18:09:44,369 [AMRM Callback Handler Thread] INFO impl.AMRMClientAsyncImpl - Interrupted while waiting for queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
09-09-2016
04:30 PM
I think there is definitely a clash of versions. The reflection error below indicates a version mismatch when the client is creating a session:

Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.util.StringUtils.toUpperCase(Ljava/lang/String;)Ljava/lang/String;
at org.apache.hadoop.security.SaslPropertiesResolver.setConf(SaslPropertiesResolver.java:69)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.security.SaslPropertiesResolver.getInstance(SaslPropertiesResolver.java:58)
... 54 more
16/09/07 14:21:36 INFO metastore: Trying to connect to metastore with URI thrift://xxxxxxxxxxxxxxxxxxx:9083
Exception in thread "main"

Check the contents of the jars to make sure they are all compatible. For example, what are the contents of target/YOUR_JAR-1.0.0-SNAPSHOT.jar?
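One way to spot such a clash is to scan a dependency listing (e.g. the output of `mvn dependency:tree`) for the same Hadoop artifact at more than one version. This is a hypothetical sketch with made-up example data, not output from the original poster's build:

```shell
# Hypothetical helper: flag an artifact that appears at multiple versions
# in a Maven-style dependency listing (the usual cause of NoSuchMethodError
# at runtime).
find_version_clash() {
  local artifact="$1" deps="$2"
  local versions
  versions=$(echo "$deps" | grep "$artifact" \
    | sed 's/.*:jar:\([^:]*\).*/\1/' | sort -u)
  if [ "$(echo "$versions" | wc -l)" -gt 1 ]; then
    echo "clash: $artifact pulled in at versions:" $versions
  else
    echo "ok: single version of $artifact"
  fi
}

# Example listing (invented for illustration):
deps="org.apache.hadoop:hadoop-common:jar:2.7.1:compile
org.apache.hadoop:hadoop-common:jar:2.2.0:compile"
find_version_clash "hadoop-common" "$deps"
```

You can also list the classes bundled in the assembly jar itself (`jar tf target/YOUR_JAR-1.0.0-SNAPSHOT.jar`) to see whether an old copy of org.apache.hadoop.util.StringUtils was shaded in.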