Member since: 07-10-2017
Posts: 78
Kudos Received: 6
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4440 | 10-17-2017 12:17 PM
 | 7015 | 09-13-2017 12:36 PM
 | 5397 | 07-14-2017 09:57 AM
 | 3521 | 07-13-2017 12:52 PM
10-16-2017
12:25 PM
Hi, if a processor fails and routes the flowfile to the failure relationship, is there an "error" attribute? If some processors do set one, how can I tell which ones? For example, for PutHDFS I don't see anything about it in the documentation (doc: PutHDFS). Is there another way to get the reason for the failure attached to the flowfile? Thanks, Michel
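One hedged way to check this yourself: route the failure relationship to a LogAttribute processor, or to an ExecuteScript processor with a small Jython body like the sketch below (the `session`, `log`, and `REL_SUCCESS` variables are standard ExecuteScript bindings; everything else is illustrative):

```python
# Illustrative Jython body for NiFi's ExecuteScript processor, wired to
# a processor's failure relationship: it logs every attribute of the
# failed flowfile so you can see whether an "error"-style attribute
# is present for that processor.
flowFile = session.get()
if flowFile is not None:
    for entry in flowFile.getAttributes().entrySet():
        log.info('attribute {} = {}'.format(entry.getKey(), entry.getValue()))
    session.transfer(flowFile, REL_SUCCESS)
```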
Labels:
- Apache NiFi
09-21-2017
07:44 AM
@nallen Is pcap_replay installed as a service by default with HCP 1.2? If not, how can I install it manually? Thanks
09-16-2017
12:18 PM
Hi @n c, You are welcome! 🙂 I don't think there are other objects in Hive (but I'm not sure). There are UDFs, though: for those you need to export the jar you use for the UDF from your first cluster and register it on the new one. May I ask you to accept my answer? 🙂 Thanks! Michel
09-15-2017
01:20 PM
Hi, I saw that it's possible to use the pycapa script to capture data and send it to Kafka. Do you know if there's an easy way to directly ingest a pcap file that was generated by another system? For example, a program that reads the pcap file and sends it to Kafka? Or some other way to do it? Thanks, Michel
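A minimal sketch of the "program that reads the pcap file and sends it to Kafka" idea, assuming the kafka-python and dpkt packages are available; the broker address, topic name, and file path are placeholders, and Metron may expect a specific key/value encoding on its pcap topic, so treat this only as the general shape:

```python
import dpkt
from kafka import KafkaProducer

# Placeholder broker and topic names.
producer = KafkaProducer(bootstrap_servers='kafka-broker:6667')

with open('capture.pcap', 'rb') as f:
    # dpkt.pcap.Reader yields (timestamp, raw_packet_bytes) per packet.
    for ts, raw_packet in dpkt.pcap.Reader(f):
        producer.send('pcap', value=raw_packet)

producer.flush()
```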
Labels:
- Apache Metron
09-14-2017
09:29 AM
Hi @n c, You don't have to copy the metadata. Copy the folder structure with all the data to the new cluster, recreate the table, and don't forget to compute statistics on the table. That recreates most of the metadata the CBO needs to work well. There's no need for an export tool, because you can copy the data directly from HDFS. 🙂 Michel
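A hedged sketch of that flow, driven from Python purely for illustration; the NameNode addresses, paths, table name, and the stand-in DDL are all placeholders (in practice you would paste the real output of SHOW CREATE TABLE from the source cluster):

```python
import subprocess

# 1. Copy the table's warehouse directory between clusters.
subprocess.run(['hadoop', 'distcp',
                'hdfs://source-nn:8020/apps/hive/warehouse/mydb.db/mytable',
                'hdfs://target-nn:8020/apps/hive/warehouse/mydb.db/mytable'],
               check=True)

# 2. Recreate the table and compute statistics so the CBO has metadata.
statements = [
    "CREATE TABLE mydb.mytable (id INT, name STRING)",  # placeholder DDL
    "ANALYZE TABLE mydb.mytable COMPUTE STATISTICS",
    "ANALYZE TABLE mydb.mytable COMPUTE STATISTICS FOR COLUMNS",
]
for stmt in statements:
    subprocess.run(['beeline', '-u', 'jdbc:hive2://target-host:10000',
                    '-e', stmt], check=True)
```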
09-13-2017
12:36 PM
Hi @n c, You can also copy the HDFS files, re-create the Hive table on the target, and then compute statistics to rebuild all the metadata. You can get the create statement of the table with the following query: show create table xxxxx
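As a small illustration of capturing that create statement from a script (a sketch only; the HiveServer2 host and table name are placeholders):

```python
import subprocess

# Run SHOW CREATE TABLE through beeline and capture the DDL text.
result = subprocess.run(
    ['beeline', '-u', 'jdbc:hive2://source-host:10000', '--silent=true',
     '-e', 'SHOW CREATE TABLE mydb.mytable'],
    capture_output=True, text=True, check=True)
print(result.stdout)
```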
09-13-2017
12:28 PM
Hi @Ashnee Sharma, Can you please provide more context? What exactly is the issue? Michel
09-11-2017
09:09 AM
Hi @Mukesh Burman, I saw in your code that you refer to MongoDB. Did you add all the necessary libraries for MongoDB? Michel
08-08-2017
09:02 AM
Hi @Gevorg KHACHATURYAN, Do you have a correct value defined for "zookeeper.znode.parent" in hbase-site.xml? Normally it should be something like "/hbase-unsecure" or "/hbase-secure", depending on whether you have Kerberos or not. Michel
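One hedged way to confirm which parent znode HBase actually registered, assuming the kazoo Python package (the ZooKeeper quorum address is a placeholder):

```python
from kazoo.client import KazooClient

# List the ZooKeeper root znodes: an HDP HBase typically registers
# under /hbase-unsecure (no Kerberos) or /hbase-secure (Kerberized).
zk = KazooClient(hosts='zk-host:2181')
zk.start()
print([znode for znode in zk.get_children('/') if 'hbase' in znode])
zk.stop()
```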
08-03-2017
08:34 AM
Hello, I installed HDP 2.6.1 with Druid backed by MySQL, and then HDF (NiFi + SAM), on CentOS 7.2. All the Druid components work fine except Superset. When I try to start it, I get the following error:
resource_management.core.exceptions.ExecutionFailed: Execution of 'source /etc/superset/conf/superset-env.sh ; /usr/hdp/current/druid-superset/bin/superset db upgrade' returned 1. /usr/hdp/2.6.1.0-129/superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead.
spec.loader.load_module(spec.name)
/usr/hdp/2.6.1.0-129/superset/lib/python3.4/importlib/_bootstrap.py:1161: ExtDeprecationWarning: Importing flask.ext.script is deprecated, use flask_script instead.
spec.loader.load_module(spec.name)
And also:
File "/usr/hdp/2.6.1.0-129/superset/lib/python3.4/site-packages/sqlalchemy/engine/url.py", line 60, in __init__
self.port = int(port)
ValueError: invalid literal for int() with base 10: ''
Any idea how to make it work? Or what kind of mistake did I make? 🙂 The full log from Ambari is attached: superset-error.txt. Thanks, Michel
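The ValueError above comes from SQLAlchemy converting the port portion of the database URI with int(port), so a likely cause is an empty port field in Superset's metadata-database configuration. A minimal sketch reproducing the failure with the pre-1.4 SQLAlchemy URL constructor (credentials and hostnames are placeholders):

```python
from sqlalchemy.engine.url import URL

# Works: an explicit port for the Superset metadata database.
good = URL(drivername='mysql', username='superset', password='secret',
           host='db-host', port=3306, database='superset')

# Fails like the log above: an empty-string port hits int('') inside
# URL.__init__ and raises ValueError: invalid literal for int() ...
try:
    bad = URL(drivername='mysql', username='superset', password='secret',
              host='db-host', port='', database='superset')
except ValueError as exc:
    print(exc)
```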
Labels: