Member since: 07-30-2019
Posts: 333
Kudos Received: 357
Solutions: 76
My Accepted Solutions
Title | Views | Posted
---|---|---
| 10114 | 02-17-2017 10:58 PM
| 2405 | 02-16-2017 07:55 PM
| 8130 | 12-21-2016 06:24 PM
| 1803 | 12-20-2016 01:29 PM
| 1275 | 12-16-2016 01:21 PM
10-10-2018
12:17 PM
@Alex Coast The 5-minute retention of bulletins is a hard-coded value that cannot be edited by the end user. It is normal to see the occasional bulletin from some NiFi processors, for example a PutSFTP that fails because of a filename conflict or a network issue but succeeds on retry. A continuous problem would produce non-stop bulletins, which would be easy to notice. Take a look at my response further up on using the "SiteToSiteBulletinReportingTask" if you are looking to retain bulletin info longer, manipulate it, route it, store it somewhere, etc. Thank you, Matt. If you found this answer addressed your question, please take a moment to log in and click the "ACCEPT" link.
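Alternatively, the bulletin board is also exposed through NiFi's REST API, so a simple scheduled poll can capture bulletins before the 5-minute window expires. This is only a rough sketch (the host, port, output path, and use of jq are assumptions for illustration; a secured instance would additionally need authentication):
# poll the bulletin board and append any returned bulletins to a local file
curl -s "http://localhost:8080/nifi-api/flow/bulletin-board?limit=100" \
  | jq -c '.bulletinBoard.bulletins[]' >> /var/log/nifi-bulletins.jsonl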
02-02-2016
02:52 PM
@Jonas Straub has this been resolved? Can you post your solution or accept the best answer? :)
01-21-2016
05:08 AM
1 Kudo
Thanks for reporting it and for providing the stack traces. Very helpful. I've filed an Apache NiFi JIRA for it: https://issues.apache.org/jira/browse/NIFI-1417
01-07-2016
01:37 PM
Performance really isn't slow when executing the query. This is interesting. I figured that because the query utilized the CBO in the tutorial I linked in the original question, it would still work now. I guess my thinking was incorrect?
03-08-2016
02:30 AM
Hey guys, the tutorial mentioned above has been updated and is also compatible with the latest Sandbox, HDP 2.4. It addresses the issue of permissions. Here is the link: http://hortonworks.com/hadoop-tutorial/how-to-process-data-with-apache-hive/ When you have a chance, can you go through the tutorial on our new Sandbox?
01-21-2016
08:44 PM
3 Kudos
Today I spoke with Robert Molina from Hortonworks and possibly found what is creating all those alerts! The sandbox is intended to be run on a desktop with a NAT network interface. I set it up on a dedicated headless server with a bridged adapter. It looks like the sandbox has a problem with that, which causes some of the service configs to not function properly! As a result, some services work but report network connection alerts! After some config changes, the related alerts weren't there anymore. So always use a VM for what, and how, it was intended to be used. Thanks to the Hortonworks team and Robert, who wanted to get to the bottom of this. Conclusion: if you want, like me, to test drive Hortonworks on a headless server, start from scratch and build it yourself! That's what every sysadmin should do anyway... That's what I'll do this weekend... P
01-04-2016
02:07 PM
Dear Grace,
We can start with this template and improve it:
#!/bin/bash
kinit ......
hdfs dfs -rm -r hdfs://....
sqoop import --connect "jdbc:sqlserver://....:1433;username=.....;password=….;database=....DB" --table ..... \
  -m 1 --where "...... > 0"
CR=$?
if [ $CR -ne 0 ]; then
  echo 'Sqoop job failed'
  exit 1
fi
hdfs dfs -cat hdfs://...../* > export_fs_table.txt
CR=$?
if [ $CR -ne 0 ]; then
  echo 'hdfs cat failed'
  exit 1
fi
while IFS=',' read -r id tablename nbr flag; do
  sqoop import --connect "jdbc:sqlserver://......:1433;username=......;password=......;database=.......DB" --table $tablename
  CR=$?
  if [ $CR -ne 0 ]; then
    echo 'sqoop import failed for '$tablename
    exit 1
  fi
done < export_fs_table.txt
Kind regards
12-17-2015
03:05 PM
1 Kudo
When running Solr in clustered mode (SolrCloud), it has a runtime dependency on ZooKeeper, where it stores configs, coordinates leader election, tracks replica allocation, etc. All in all, there's a whole tree of ZK nodes created, with sub-nodes. Deploying SolrCloud into a Hadoop cluster usually means re-using the centralized ZK quorum already maintained by HDP. Unfortunately, if not explicitly taken care of, SolrCloud will happily dump all its ZK content in the ZK root, which really complicates things for an admin down the line. If you need to clean up your ZK first, take a look at this how-to. The solution is to put all SolrCloud ZK entries under their own ZK node (e.g. /solr). Here's how one does it:
su - zookeeper
cd /usr/hdp/current/zookeeper-client/bin/
# point it at a ZK quorum (or just a single ZK server is fine, e.g. localhost)
./zkCli.sh -server lake02:2181,lake03:2181,lake04:2181
# in zk shell now
# note the empty brackets are _required_
create /solr []
# verify the zk node has been created, must not complain the node doesn't exist
ls /solr
quit
# back in the OS shell
# start SolrCloud and tell it which ZK node to use
su - solr
cd /opt/lucidworks-hdpsearch/solr/bin/
# note how we add '/solr' to a ZK quorum address.
# it must be added to the _last_ ZK node address
# this keeps things organized and doesn't pollute root ZK tree with Solr artifacts
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr
# alternatively, if you have multiple IPs on your Hadoop nodes and have
# issues accessing Solr UI and dashboards, try binding it to an address explicitly:
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr -h $HOSTNAME
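As a quick sanity check (a sketch, reusing the same hostnames as above), you can reconnect with zkCli.sh once SolrCloud is up and confirm its znodes landed under /solr rather than in the ZK root:
cd /usr/hdp/current/zookeeper-client/bin/
./zkCli.sh -server lake02:2181,lake03:2181,lake04:2181
# in zk shell now
ls /solr
# expect Solr's own entries here, e.g. live_nodes, collections, overseer
ls /
# the root should contain /solr but no other Solr artifacts
quit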
11-23-2015
01:08 AM
When you start NiFi, the bootstrap process is constantly monitoring the NiFi process. If it notices that the NiFi process doesn't exist anymore, it will produce a NIFI_DIED notification and then restart NiFi.
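If you want to be told when that NIFI_DIED event fires, the bootstrap can invoke notification services configured in conf/bootstrap.conf. A minimal sketch (the "email-notification" id and file path mirror the defaults shipped in bootstrap.conf and bootstrap-notification-services.xml; adjust them to your install):
# conf/bootstrap.conf (excerpt)
notification.services.file=./conf/bootstrap-notification-services.xml
notification.max.attempts=5
# fire this service when the bootstrap detects the NiFi process has died (NIFI_DIED)
nifi.dead.notification.services=email-notification
# optionally notify on normal start/stop as well
nifi.start.notification.services=email-notification
nifi.stop.notification.services=email-notification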
10-03-2016
01:26 PM
3 Kudos
@Wael Emam It's never recommended to have a different OS for hosts in the same cluster. Even if you manage to bypass this for the Ambari agent install, you will still face issues while deploying services and running applications on that host. It's better to re-install and use the same OS version. Still, you can check: https://community.hortonworks.com/questions/18479/how-to-register-host-with-different-os-to-ambari.html