Member since: 08-08-2013
Posts: 339
Kudos Received: 132
Solutions: 27

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14767 | 01-18-2018 08:38 AM
 | 1547 | 05-11-2017 06:50 PM
 | 9096 | 04-28-2017 11:00 AM
 | 3409 | 04-12-2017 01:36 AM
 | 2805 | 02-14-2017 05:11 AM
10-19-2017
08:35 AM
Hi, I set up HDF (in particular NiFi & Ranger) to fetch users & groups from AD and to authenticate against AD. Defining policies in Ranger for NiFi based on AD users works as expected after logging in to NiFi with AD credentials. The only thing that is not working are the policies that grant access based on AD groups. There is this article from almost a year ago. Does it still apply, @Bryan Bende? In other words, are NiFi policies based on AD group membership still not working? Thanks in advance.
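For anyone hitting the same issue later: newer NiFi releases (1.4.0 and up, so check which NiFi version your HDF ships) include an LdapUserGroupProvider that can sync groups from AD into NiFi, which is what group-based Ranger/NiFi policies need. A minimal sketch of its configuration in authorizers.xml, where the AD URL and all DNs are placeholders for your environment:

```
<userGroupProvider>
    <identifier>ldap-user-group-provider</identifier>
    <class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <property name="Manager DN">cn=nifi,ou=services,dc=example,dc=com</property>
    <property name="Manager Password">password</property>
    <property name="Url">ldap://ad.example.com:389</property>
    <!-- where to find users and groups in the directory -->
    <property name="User Search Base">ou=users,dc=example,dc=com</property>
    <property name="User Object Class">person</property>
    <property name="Group Search Base">ou=groups,dc=example,dc=com</property>
    <property name="Group Object Class">group</property>
    <!-- AD stores membership on the group object in the "member" attribute -->
    <property name="Group Member Attribute">member</property>
    <!-- how often users/groups are re-synced from AD -->
    <property name="Sync Interval">30 mins</property>
</userGroupProvider>
```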
Labels:
- Apache NiFi
- Apache Ranger
08-14-2017
08:21 AM
Hi @Lucky_Luke, the script "kafka-topics.sh" with the parameter "--describe" is what you are looking for. To get the details for a certain topic, e.g. "test-topic", you would call (adjust the ZooKeeper connect string according to your environment):
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper sandbox.hortonworks.com:2181/kafka --describe --topic test-topic
The output contains (amongst others) the number of partitions, the leader broker for each partition, and the in-sync replicas. The topic-level configuration properties are listed under "Configs:". If this is blank, the default (broker-wide) settings apply and you should check your broker config file (or the corresponding Ambari section) for the property "log.retention.hours", assuming you mean the retention time by "TTL". HTH, Gerd
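If you want to check or override the retention on the topic level (instead of broker-wide), kafka-configs.sh can do that; a short sketch, reusing the sandbox ZooKeeper connect string from above:

```
# show topic-level overrides for test-topic (blank output means broker defaults apply)
/usr/hdp/current/kafka-broker/bin/kafka-configs.sh \
  --zookeeper sandbox.hortonworks.com:2181/kafka \
  --entity-type topics --entity-name test-topic --describe

# set a topic-level retention of 24 hours (retention.ms overrides log.retention.hours)
/usr/hdp/current/kafka-broker/bin/kafka-configs.sh \
  --zookeeper sandbox.hortonworks.com:2181/kafka \
  --entity-type topics --entity-name test-topic \
  --alter --add-config retention.ms=86400000
```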
08-02-2017
03:00 PM
Hello @Alexandru Anghel, many thanks... works brilliantly!
07-29-2017
06:11 PM
Hi, I am setting up HDF 3.0 via Blueprint. The installation of the services works fine, but starting NiFi fails because it expects a password for decrypting flow.xml.gz under the /nifi directory (calling the encrypt-config tool). Two questions here: 1.) Which properties need to be provided in the blueprint so that NiFi starts successfully without asking for a password, which obviously cannot be provided interactively? 2.) From where does NiFi populate the subdirectories under /nifi at startup? Before re-deploying the blueprint I deleted the whole /nifi directory, just to ensure the main error is not caused by some old/previous files, but on starting up NiFi this folder gets recreated, including subdirectories. Any other hint on getting the services to also start up successfully when applying the blueprint is highly appreciated 😉
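For reference, a rough sketch of the blueprint fragment that supplies the encrypt-config master password; the property name nifi.security.encrypt.configuration.password comes from the HDF Ambari stack, but the config type nifi-ambari-config and the exact nesting are assumptions here, so please verify against your HDF version:

```
{
  "configurations": [
    {
      "nifi-ambari-config": {
        "properties": {
          "nifi.security.encrypt.configuration.password": "changemeplease12"
        }
      }
    }
  ]
}
```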
07-06-2017
07:02 AM
Hi @Robin Dong, if you try standalone mode, there is no configuration via REST at all, hence you do NOT need any curl command to provide the connector config to your worker. In standalone mode you pass the connector config as a second command-line parameter when starting your worker; see here for an example of how to start the standalone setup including the connector config, and see the sketch below. Maybe it is worth posting both configurations, the standalone worker as well as the distributed one. If you start the distributed worker, at the end of the command-line output you will find the URL of the REST interface. Can you paste that terminal output as well? Do you execute the curl command on the same node where you started the Connect worker, or is it on a remote host where maybe the AWS network/security settings prevent you from talking to the REST interface? Regards, Gerd
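A minimal sketch of starting a standalone worker together with a connector config, assuming the sample properties files shipped under the HDP Kafka conf directory:

```
# standalone mode: worker config first, connector config(s) as further arguments - no REST call needed
/usr/hdp/current/kafka-broker/bin/connect-standalone.sh \
  /usr/hdp/current/kafka-broker/conf/connect-standalone.properties \
  /usr/hdp/current/kafka-broker/conf/connect-file-source.properties
```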
07-03-2017
07:03 PM
Hi @Robin Dong, port 8083 is the default port of the Kafka Connect worker if started in distributed mode, which is the case in the URL you are referring to. You can change this port in the properties file you provide as parameter to the connect-distributed.sh command-line call (the property is called rest.port, see here). In distributed mode you have to use the REST API to configure your connectors; that's the only option (see the sketch below). You can of course also start investigating Connect by using standalone mode first. Then you do not need a REST call to configure your connector; you can just provide the connector.properties file as an additional parameter to the connect-standalone.sh script when starting the Connect worker (ref. here). Please try to replace 'localhost' with the FQDN of the host where the Connect worker was started, and of course check whether that start was successful by looking at the listening ports, e.g. netstat -tulpn | grep 8083 HTH, Gerd --------- If you find this post useful, any vote/reward is highly appreciated 😉
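For the distributed case, a short sketch of registering and listing a connector via the REST API; the hostname, connector name and file/topic values are placeholders:

```
# register a file source connector with the distributed worker
curl -X POST -H "Content-Type: application/json" \
  http://connect-host.example.com:8083/connectors \
  -d '{"name":"my-file-source","config":{"connector.class":"FileStreamSource","tasks.max":"1","file":"/tmp/input.txt","topic":"connect-test"}}'

# list the registered connectors to verify
curl http://connect-host.example.com:8083/connectors
```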
06-27-2017
07:12 AM
1 Kudo
Hi @mel mendoza, maybe it is worth checking Flume to ingest multiple files into Kafka. Alternatively you can use HDF (particularly NiFi) to do so.
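A minimal sketch of such a Flume agent, using the spooling-directory source and the Kafka sink; the directory, broker list and topic are placeholders, and the kafka.* sink property names assume Flume 1.7+ (older releases used brokerList/topic instead):

```
# agent "a1": spool a directory of files into a Kafka topic
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /data/incoming
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = broker1.example.com:6667
a1.sinks.k1.kafka.topic = ingest-topic
a1.sinks.k1.channel = c1
```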
06-23-2017
06:47 AM
1 Kudo
Hello @mel mendoza, Kafka is basically not a file-based system, but an event-based one. If you want to process files with Spark Streaming via Kafka, you have a 2-step approach: first ingest into Kafka, then consume the events from Kafka with Spark Streaming. To ingest into Kafka you can e.g. use Kafka Connect with the file source (check /usr/hdp/current/kafka-broker/conf/connect-file-source.properties). It works like a "tail -f" on that file and streams any incoming data from the file to the Kafka topic. Afterwards you consume the events from that Kafka topic with your Spark Streaming job. HTH, Gerd
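For orientation, a sketch of what such a file source config typically contains; the file and topic values here are placeholders, but the shipped sample file looks very similar:

```
# connector name and implementation
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
# file to "tail" and the target topic
file=/tmp/input.txt
topic=connect-test
```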
05-11-2017
06:50 PM
Hi @Zhao Chaofeng, you can configure SSL without Kerberos; there is no dependency. Just check these: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/ch_wire-kafka.html http://kafka.apache.org/documentation.html#security_ssl
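In short, the broker side boils down to a few server.properties entries; a sketch with placeholder host/paths/passwords, assuming the keystore and truststore files were already created as described in the linked SSL guide:

```
# enable an SSL listener next to (or instead of) the PLAINTEXT one
listeners=PLAINTEXT://broker1.example.com:6667,SSL://broker1.example.com:6668
ssl.keystore.location=/etc/kafka/ssl/server.keystore.jks
ssl.keystore.password=keystorepass
ssl.key.password=keypass
ssl.truststore.location=/etc/kafka/ssl/server.truststore.jks
ssl.truststore.password=truststorepass
```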
05-11-2017
06:48 PM
Hi @Sebastian Carroll, why not copy over the content of the previous log.dirs to the new ones before restarting Kafka? This should reduce/remove the takeover of partitions.
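A sketch of that copy step, assuming the old and new data directories are /kafka-logs-old and /kafka-logs-new and the broker is stopped while copying:

```
# preserve ownership/permissions while copying the partition directories
rsync -a /kafka-logs-old/ /kafka-logs-new/
chown -R kafka:kafka /kafka-logs-new
```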