Member since 09-17-2015
10 Posts
10 Kudos Received
0 Solutions
03-23-2017
06:51 PM
2 Kudos
Currently, NiFi supports encrypting/decrypting data through the EncryptContent processor, but the pre/post state of the data would still be stored in plaintext in the content repository. In general, transparent disk encryption/OS-level data encryption is recommended in conjunction with strict OS-level/POSIX access controls. There is a current effort to provide encrypted implementations of the flowfile (attribute), content, and provenance repositories. As Dan mentioned, a combination of encrypted payload and plaintext metadata for routing can work very well if the payload does not need to be processed/transformed inside NiFi.
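For reference, once an encrypted content repository implementation lands, enabling it would be a matter of nifi.properties configuration. The following is a sketch only: the class names and property keys are assumptions based on the direction of that effort, and must be verified against the Admin Guide for your NiFi version.

```
# nifi.properties sketch -- property names are assumptions, verify against your NiFi version
nifi.content.repository.implementation=org.apache.nifi.controller.repository.crypto.EncryptedFileSystemRepository
nifi.content.repository.encryption.key.provider.implementation=org.apache.nifi.security.kms.StaticKeyProvider
nifi.content.repository.encryption.key.id=Key1
# hex-encoded key material; a static key in the properties file is for testing only
nifi.content.repository.encryption.key=<hex-key-material>
```

Until then, full-disk encryption plus restrictive POSIX permissions on the repository directories remains the recommended approach.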
05-11-2017
05:02 PM
1 Kudo
Yes, go for Druid! I want to start with a disclaimer: I am a Druid committer. First, I want to point out that, as an engineer, I don't believe there is a single query engine that is always better than all other solutions; it is all relative to the use case you want to solve. Now, why Druid and not OpenTSDB for real-time stream applications? The keyword in this use case is real-time streaming. The reasons are simple:
- Druid has native ingestion and indexing support for almost all of the rising real-time stream processing technologies (e.g. Kafka, RabbitMQ, Spark, Storm, Flink, Apex, and the list goes on). This integration is production-tested at very large scale (e.g. Yahoo/Flurry or Metamarkets), where we see more than 1 million events per second through real-time ingestion.
- Druid supports the lambda architecture out of the box.
- Druid can ingest data directly from Kafka with an exactly-once delivery guarantee.
In my opinion, those are the key elements to look for when building a real-time streaming application. To my limited knowledge, I am not aware of any integration or production use cases combining real-time streams and OpenTSDB.
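To make the Kafka point concrete, here is a minimal sketch of a supervisor spec for Druid's Kafka indexing service. The datasource, topic, column names, and broker address are all made up for illustration, and the exact field layout differs across Druid versions, so treat this as a shape to check against the ingestion docs for your release rather than a working spec.

```
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "events",
    "timestampSpec": { "column": "ts", "format": "iso" },
    "dimensionsSpec": { "dimensions": ["user", "page"] },
    "granularitySpec": { "segmentGranularity": "HOUR", "queryGranularity": "MINUTE" }
  },
  "ioConfig": {
    "topic": "events",
    "consumerProperties": { "bootstrap.servers": "kafka01:9092" },
    "taskCount": 1
  }
}
```

Submitting a spec like this to the overlord is what gives you the continuous, exactly-once ingestion from Kafka described above.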
10-23-2015
01:54 PM
Specifically, Knox supports TAM (now ISAM) via pre-authenticated headers. You can find out more here: http://knox.apache.org/books/knox-0-6-0/user-guide.html#Preauthenticated+SSO+Provider
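For context, enabling the provider in a Knox topology looks roughly like this. The header name SM_USER is only an illustration of what a TAM/ISAM deployment might inject; the provider and parameter names follow the user guide linked above, but double-check them there for your Knox version.

```
<provider>
    <role>federation</role>
    <name>HeaderPreAuth</name>
    <enabled>true</enabled>
    <param>
        <name>preauth.custom.header</name>
        <value>SM_USER</value>
    </param>
</provider>
```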
03-22-2018
01:48 PM
Hi! In this case, both clusters will be configured to use a DellEMC Isilon. The Installation Guide has a config step to remove the "-${cluster-name}" suffix. How should I proceed in this case? Is it possible for both clusters to be in the same AD domain (realm)? From the Installation Guide "Isilon-OneFS-With-Hadoop-and-Hortonworks-for-Kerberos-Installation-Guide": "Click the General tab and configure the Apache Ambari user principals as shown in the next table. Remove -${cluster-name} from the default value and change to a value as shown in the Required value column so that it matches the service account names (users) that you created during the initial configuration of your Isilon OneFS cluster for use with Ambari and Hortonworks."
10-23-2015
03:50 PM
Falcon automatically adds this property to Oozie jobs in secure (Kerberized) clusters; the user does not have to add it to each job separately. When defining a cluster entity, please make sure to add the following cluster property: <property name="dfs.namenode.kerberos.principal" value="nn/$my.internal@EXAMPLE.COM"/>
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configuring_for_secure_clusters_falcon.html
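For illustration, the property goes in the properties section of the cluster entity XML, roughly as below. The cluster name, colo, and principal value are placeholders, and the interface definitions a real entity requires are elided here; see the linked documentation for the full entity definition.

```
<cluster name="primaryCluster" colo="default" xmlns="uri:falcon:cluster:0.1">
    <!-- interfaces (readonly, write, execute, workflow, messaging) omitted for brevity -->
    <properties>
        <property name="dfs.namenode.kerberos.principal" value="nn/$my.internal@EXAMPLE.COM"/>
    </properties>
</cluster>
```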
09-29-2015
02:59 PM
1 Kudo
Once you set up the one-way trust and the Windows workstation is in the domain, you don't need a separate Kerberos client requesting tickets. You can refer to http://hortonworks.com/blog/enabling-kerberos-hdp-active-directory-integration/ (slightly older, but it should still work).