Member since
04-27-2016
218
Posts
133
Kudos Received
25
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2645 | 08-31-2017 03:34 PM
 | 5522 | 02-08-2017 03:17 AM
 | 2187 | 01-24-2017 03:37 AM
 | 8551 | 01-19-2017 03:57 AM
 | 4490 | 01-17-2017 09:51 PM
03-10-2017
06:46 AM
1 Kudo
I think the easiest way is to create a new cloud-xyz subproject in your fork. The simplest implementation is our cloud-mock implementation, which is used for testing but implements all of the requirements without third parties.
02-21-2019
10:39 AM
I think you have to check the log on the ZooKeeper side. My advice is to increase the values of the "nifi.zookeeper.connect.timeout" and "nifi.zookeeper.session.timeout" settings. Also check the network connection between the ZooKeeper servers and the NiFi servers; network latency can cause this issue.
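As a sketch of what that change looks like, the two timeouts live in nifi.properties and take a duration string. The values below are assumptions for illustration only, not a recommendation; tune them for your own environment:

```properties
# nifi.properties -- example values only (assumed; defaults are shorter)
# Give NiFi more time to establish and hold a ZooKeeper session
# on a slow or congested network:
nifi.zookeeper.connect.timeout=10 secs
nifi.zookeeper.session.timeout=10 secs
```

Restart NiFi after editing nifi.properties for the new timeouts to take effect.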
02-06-2017
04:07 PM
@Benjamin Hopp What reason is NiFi giving in the nifi-app.log for the node disconnections? Rather than restarting the node that disconnects, did you try just clicking the reconnect icon in the cluster UI? Verify that your nodes do not have trouble communicating with each other. Make sure there are no firewalls between the nodes affecting communications to the HTTP/HTTPS ports:
nifi.web.http.host=nifi-ambari-08.openstacklocal
nifi.web.http.port=8090
nifi.web.https.host=nifi-ambari-08.openstacklocal
nifi.web.https.port=9091
or node communication port:
nifi.cluster.node.address=nifi-ambari-08.openstacklocal
nifi.cluster.node.protocol.port=9088
Make sure both your nodes are properly configured to talk to ZooKeeper and neither has issues communicating with it:
nifi.zookeeper.connect.string=nifi-ambari-09.openstacklocal:2181,nifi-ambari-07.openstacklocal:2181,nifi-ambari-08.openstacklocal:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.session.timeout=3 sec
All of the above settings are in the nifi.properties file. Thanks, Matt
01-09-2019
05:05 AM
@milind pandit I didn't set any password for the sandbox, so which password should I enter? Please help!
01-18-2017
12:26 AM
Sorry, it worked!!! I just had to change the command chown atlas:hadoop /etc/atlas/conf/solr/*
to chown -R atlas:hadoop /etc/atlas/conf/solr/* so that ownership is applied recursively.
Thanks a lot for your help.
01-05-2017
05:50 PM
@Josh Elser Do I need to perform this on all the nodes? For now I am OK if I lose the data, as it's a test cluster.
12-29-2016
01:30 AM
2 Kudos
@milind pandit Storm core has abstractions for bolts to save and retrieve the state of their operations. There is a default in-memory state implementation, and also a Redis-backed implementation that provides state persistence. Currently the only kind of State implementation that is supported is KeyValueState, which provides a key-value mapping. Bolts that require their state to be managed and persisted by the framework should implement the IStatefulBolt interface, or extend BaseStatefulBolt and implement the void initState(T state) method. Please see the following link for details: http://storm.apache.org/releases/2.0.0-SNAPSHOT/State-checkpointing.html
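To make the contract concrete without pulling in Storm itself, here is a minimal self-contained sketch. ToyKeyValueState below is an illustrative stand-in, not Storm's real org.apache.storm.state API; it only demonstrates the snapshot/restore behavior that a stateful bolt relies on the framework for:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the key-value state abstraction described above.
// Storm's real KeyValueState is persisted by the framework at checkpoint
// barriers; this toy version just shows the snapshot/restore contract.
public class ToyKeyValueState<K, V> {
    private final Map<K, V> state = new HashMap<>();

    public void put(K key, V value) { state.put(key, value); }

    public V get(K key, V defaultValue) { return state.getOrDefault(key, defaultValue); }

    // Analogue of a checkpoint: capture the current state for persistence.
    public Map<K, V> snapshot() { return new HashMap<>(state); }

    // Analogue of initState(T state) on restart: rebuild from the last snapshot.
    public void restore(Map<K, V> saved) {
        state.clear();
        state.putAll(saved);
    }

    public static void main(String[] args) {
        ToyKeyValueState<String, Long> counts = new ToyKeyValueState<>();
        counts.put("words", 42L);
        Map<String, Long> checkpoint = counts.snapshot(); // framework persists this

        ToyKeyValueState<String, Long> recovered = new ToyKeyValueState<>();
        recovered.restore(checkpoint);                    // initState(...) analogue
        System.out.println(recovered.get("words", 0L));   // prints 42
    }
}
```

In real Storm the bolt never calls snapshot or restore itself; the framework invokes them around checkpoint barriers, and the bolt only receives the recovered state through initState.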
02-01-2017
03:21 AM
1 Kudo
@Vaibhav Kumar
The recommendations from my colleagues are valid: you have strings in the header row of your CSV documents. You can certainly filter by some known entity, but there's a more advanced version of the CSV Pig loader called CSVExcelStorage. It is part of the Piggybank library that comes bundled with HDP, hence the register command. You can pass different control parameters to it. The Mortar blog is an excellent source of information on working with Pig: http://help.mortardata.com/technologies/pig/csv
grunt> register /usr/hdp/current/pig-client/piggybank.jar;
grunt> a = load 'BJsales.csv' using org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'NOCHANGE', 'SKIP_INPUT_HEADER') as (Num:int,time:int,BJsales:float);
grunt> describe a;
a: {Num: int,time: int,BJsales: float}
grunt> b = limit a 5;
grunt> dump b;
Output:
(1,1,200.1)
(2,2,199.5)
(3,3,199.4)
(4,4,198.9)
(5,5,199.0)
Notice I am not filtering any relation; I'm telling the loader to skip the header outright. It saves a few keystrokes and doesn't waste any cycles processing anything extra.
12-23-2016
10:43 AM
2 Kudos
I have installed HDP 2.5 and Atlas 0.7.0.2.5, and I don't need HBase to use Atlas. Currently I'm using Solr (Ambari Infra) as the index engine and BerkeleyJE as the storage engine. The real dependency between Atlas and HBase is the AuditRepository, which by default uses HBase. It is not easy to change, but investigating the source code I found a special property, atlas.EntityAuditRepository.impl, that you have to set to the value org.apache.atlas.repository.audit.InMemoryEntityAuditRepository (it is case sensitive, so copy & paste the name of the property and the value exactly). @Chad Woodhead, add the above property as the screenshot shows, restart the Atlas services, and you will have it: Atlas without HBase 🙂
And now some details about how I found that property: in this link from GitHub you can see why Atlas by default needs HBase, and in this link from GitHub you can find the available values that you can use to configure the audit. Don't worry about the rest of the properties related to HBase; Atlas will use the value of atlas.graph.storage.hbase.table to create the table in the storage backend that you choose (BerkeleyJE or Cassandra). With these properties my Atlas services work very well. I hope this information is helpful and lets you avoid installing HBase in your clusters.
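A sketch of the relevant lines in atlas-application.properties for a setup like the one described above. The audit property is the one confirmed in the post; the storage-backend lines and the directory path are assumptions for illustration and will differ per install:

```properties
# atlas-application.properties -- example only
# Route entity audits to the in-memory repository instead of HBase
# (case sensitive, as noted above):
atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.InMemoryEntityAuditRepository

# Assumed values for an HBase-free BerkeleyJE + Solr setup; adjust the
# path and backend names for your environment:
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley
```

Note that with InMemoryEntityAuditRepository the audit trail is not persisted across Atlas restarts, which is the trade-off for dropping the HBase dependency.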
12-23-2016
06:34 AM
I have logged in as admin. After creating a new user, I am unable to grant cluster permission. After clicking Roles on the left side I don't see any option to define permissions for the new user. As per your image I can see that option when clicking on each entry in the Views, but after enabling the checkbox there is no save option, and it is not taking effect on its own.