Member since 06-13-2016

- Posts: 76
- Kudos Received: 13
- Solutions: 6

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2746 | 08-09-2017 06:54 PM |
| | 4050 | 05-03-2017 02:25 PM |
| | 5238 | 03-28-2017 01:56 PM |
| | 5342 | 09-26-2016 09:05 PM |
| | 3556 | 09-22-2016 03:49 AM |
			
    
	
		
		
09-06-2017 06:23 PM

Hello,

I have a Kerberos-enabled cluster and am trying to enable SASL/PLAIN as well on the same broker. SASL (GSSAPI) works fine. These are the steps I took:

1) Added PlainLoginModule to kafka_jaas.conf (all other sections were already there because of Kerberos):

KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="{{kafka_keytab_path}}"
storeKey=true
useTicketCache=false
serviceName="{{kafka_bare_jaas_principal}}"
principal="{{kafka_jaas_principal}}";
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="{{kafka_bare_jaas_principal}}";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="{{kafka_keytab_path}}"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="{{kafka_jaas_principal}}";
};

I've also validated that -Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_jaas.conf is being loaded (ps -ef | grep kafka_jaas.conf).

2) Created a kafka_plain_jaas_client.conf:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice-secret";
};

3) Updated server.properties:

sasl.enabled.mechanisms=GSSAPI,PLAIN
advertised.listeners=PLAINTEXTSASL://ip-123-0-0-12.ec2.internal:6667

4) Updated producer.properties:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

5) Restarted Kafka.

When I use the old kafka_client_jaas that references com.sun.security.auth.module.Krb5LoginModule, everything still works, but with the new client JAAS that uses PlainLoginModule I get:

kafka@ip-170-0-0-12:/usr/hdp/current/kafka-broker/bin$ /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list ip-170-0-0-12.ec2.internal:6667 --topic ssl_plain_test -producer.config /usr/hdp/current/kafka-broker/conf/producer.properties --security-protocol PLAINTEXTSASL
d
[2017-09-06 18:13:56,982] WARN Error while fetching metadata with correlation id 0 : {ssl_plain_test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2017-09-06 18:13:57,183] WARN Error while fetching metadata with correlation id 1 : {ssl_plain_test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2017-09-06 18:13:57,284] WARN Error while fetching metadata with correlation id 2 : {ssl_plain_test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2017-09-06 18:13:57,385] WARN Error while fetching metadata with correlation id 3 : {ssl_plain_test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2017-09-06 18:13:57,485] WARN Error while fetching metadata with correlation id 4 : {ssl_plain_test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

I edited /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh to point to my client JAAS:

export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_plain_jaas_client.conf"

Any ideas? Thanks!
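
For reference, a minimal sketch of the broker-side settings usually involved when GSSAPI and PLAIN coexist on one listener. Assumptions: the inter-broker link stays on GSSAPI, the PLAINTEXTSASL alias for SASL_PLAINTEXT from the post is kept, and the host/port are the placeholders from step 3 - this is not a verified fix for the error above.

# server.properties (sketch): one SASL listener, two client mechanisms
listeners=PLAINTEXTSASL://ip-123-0-0-12.ec2.internal:6667
advertised.listeners=PLAINTEXTSASL://ip-123-0-0-12.ec2.internal:6667
security.inter.broker.protocol=PLAINTEXTSASL
# brokers keep talking Kerberos to each other; clients may pick GSSAPI or PLAIN
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI,PLAIN

sasl.mechanism.inter.broker.protocol defaults to GSSAPI, so enabling PLAIN for clients should not disturb broker-to-broker traffic.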
						
					
Labels:
- Apache Ambari
- Apache Kafka
    
	
		
		
08-09-2017 06:54 PM
1 Kudo

Hey Eyad,

One option is to use the XML file as the starting point/ingestion trigger. Once GetFile/FetchFile picks it up, you can pass it to EvaluateXPath to read/parse the XML and turn the values into flowfile attributes. Once you have the attributes, you should have everything you need to prep the file (fetch the file, create the table, PutHDFS, etc.). We do something similar for our ingestion, but we use a SQL database that holds all the metadata; once we detect a file, we query MySQL to pull in the same kind of info you have in your XML file. A rough sketch of the EvaluateXPath step is shown below.
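
A minimal sketch of how EvaluateXPath could be configured for this; the XML element names (/ingest/sourcePath, /ingest/targetTable) are made up for illustration and would need to match the real metadata file:

# EvaluateXPath processor (sketch, hypothetical XML layout)
Destination    = flowfile-attribute
Return Type    = string
# dynamic properties: attribute name = XPath into the metadata file
source.path    = /ingest/sourcePath/text()
target.table   = /ingest/targetTable/text()

Downstream processors (FetchFile, PutHDFS, etc.) can then reference ${source.path} and ${target.table} in their own properties via the NiFi Expression Language.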
						
					
    
	
		
		
07-07-2017 07:42 PM

Hi,

Getting the below error during NiFi startup:

Exception in thread "main" java.net.BindException: Cannot assign requested address (Bind failed)
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.bind(ServerSocket.java:329)
at org.apache.nifi.bootstrap.NiFiListener.start(NiFiListener.java:38)
at org.apache.nifi.bootstrap.RunNiFi.start(RunNiFi.java:1022)
at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:216)

I've left nifi.properties at the defaults and verified the ports are not in use:

nifi.web.http.host=
nifi.web.http.port=8080

Any ideas?
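
Worth noting that the failure is in the bootstrap's NiFiListener rather than the web server, and "Cannot assign requested address" usually means the address being bound is not assignable on this host (commonly a hostname/loopback resolution problem) rather than a busy port. A few quick checks; nothing NiFi-specific is assumed beyond the default nifi.properties:

# does localhost resolve to a local loopback address?
getent hosts localhost
ping -c1 localhost

# does the machine's own hostname resolve to an address configured on this host?
hostname -f
getent hosts "$(hostname -f)"

# confirm nothing else is listening on the web port
netstat -tlnp | grep 8080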
						
					
Labels:
- Apache NiFi
    
	
		
		
06-30-2017 05:05 AM

Hi - Is it possible to set up a hierarchy of tags that can be searched (either via the API or the UI) through the parent tag?

For example, if I have four tags (company1, company2, company3, company4) and a parent tag of vendors, is there a way for me to search on vendors and have it return the entities tagged with company1-4?

I have created tags that inherit from others but can't see this relationship in the UI.
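
For anyone testing this from the API side, Atlas's basic search accepts a classification name, so one quick probe is to tag an entity with a child tag and then search by the parent. A sketch, with host and credentials as placeholders; whether entities carrying only the child tags come back when searching the parent depends on the Atlas version in use:

curl -u admin:admin "http://atlas-host:21000/api/atlas/v2/search/basic?classification=vendors&limit=25"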
						
					
Labels:
- Apache Atlas
    
	
		
		
06-30-2017 04:59 AM

Hi - I upgraded from HDF 2.x -> 3.0 (ran into an issue with the upgrade, so I wiped everything 2.x related and installed 3.0). I'm getting the below error in the app log during startup. Everything was done through Ambari.

2017-06-29 20:11:31,391 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Error creating bean with name 'niFiWebApiSecurityConfiguration': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire method: public void org.apache.nifi.web.NiFiWebApiSecurityConfiguration.setJwtAuthenticationProvider(org.apache.nifi.web.security.jwt.JwtAuthenticationProvider); nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jwtAuthenticationProvider' defined in class path resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 'jwtService' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jwtService' defined in class path resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 'keyService' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'keyService' defined in class path resource [nifi-administration-context.xml]: Cannot resolve reference to bean 'keyTransactionBuilder' while setting bean property 'transactionBuilder'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'keyTransactionBuilder' defined in class path resource [nifi-administration-context.xml]: Cannot resolve reference to bean 'keyDataSource' while setting bean property 'dataSource'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'keyDataSource': FactoryBean threw exception on object creation; nested exception is org.h2.jdbc.JdbcSQLException: Error while creating file "/data/1/nifi/database_repository_rock" [90062-176]

I do see:

2017-06-29 20:11:10,586 - Creating directory Directory['/data/1/nifi/database_repository'] since it doesn't exist.
2017-06-29 20:11:10,586 - Changing owner for /data/1/nifi/database_repository_rock from 0 to nifi
2017-06-29 20:11:10,586 - Changing group for /data/1/nifi/database_repository_rock from 0 to nifi

Permissions and everything look to be set correctly. I have tried completely removing /data/1/* and clearing other remnants of the previous install.

Any ideas or places I should look? Only happens
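
One observation rather than a confirmed fix: the H2 error is for /data/1/nifi/database_repository_rock while Ambari is creating /data/1/nifi/database_repository, so it may be worth comparing the database directory Ambari manages with what the NiFi process actually reads at startup. The property name below is the standard NiFi setting; the conf path is a guess for an HDF install and may differ:

# nifi.properties - the H2 database location NiFi tries to create at startup
nifi.database.directory=/data/1/nifi/database_repository

# compare against what the running service actually loaded (adjust the conf path for your install)
grep -i database.directory /usr/hdf/current/nifi/conf/nifi.properties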
						
					
Labels:
- Apache NiFi
    
	
		
		
06-30-2017 04:47 AM

Hi Ashutosh,

I would use the Hortonworks "contact us" page for questions around certification and the options you may have. Someone should be in touch with you:

https://hortonworks.com/marketo-contact-training

Thanks,
Matt
						
					
    
	
		
		
05-03-2017 02:25 PM

Looks like it was just that Hive needed to be restarted (no restart prompt was shown); none of the items below made any difference.

ranger.usersync.ldap.username.caseconversion=lower
ranger.usersync.ldap.groupname.caseconversion=lower

This is only used for usersync - how Ranger imports your users and groups. It doesn't affect how your username or group appears in audit.

Please verify the auth_to_local rules on the host where Hive is running, usually in /usr/hdp/<Version>/hadoop/conf. You can also try copying/linking core-site.xml to /etc/hive/conf/conf.server and /etc/hive/conf.

This didn't make any difference either; I believe it's because Hive uses /usr/hdp/current/hadoop-client/conf/.
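
For anyone landing here with the same case-sensitivity issue, a sketch of what a lowercasing auth_to_local rule can look like in core-site.xml; the CORP.AD realm is taken from the related question below, and the trailing /L lowercase flag is a standard Hadoop rule modifier - verify against the rules already in place before changing anything:

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@CORP\.AD)s/@.*///L
    DEFAULT
  </value>
</property>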
						
					
    
	
		
		
05-03-2017 03:46 AM

Hello,

When I run Hive commands, the Ranger audit picks up my user name with capitals, e.g. "John.Doe". When I run HDFS commands, it is lower case, "john.doe".

My principal is John.Doe@CORP.AD, and we have auth_to_local rules to convert this to all lower case (john.doe). In Ranger we are also doing usersync case conversion to lower, so if we use user policies, only HDFS will work (I appear as john.doe in users, and since Hive comes in as "John.Doe", user policies don't get applied).

Example: CREATE TABLE test.permtest (field1 int); - the location of this folder is /data/2017

[john.doe@edge1 ~]$ hdfs dfs -ls /data/2017/
drwxr-xr-x   - John.Doe hdfs          0 2017-05-02 20:43 /data/2017/permtest

As you can see from the above, the table gets created with the HDFS permissions owned by John.Doe.

Now when I run HDFS commands, the owner comes up as expected (john.doe, lower case):

[john.doe@edge1 ~]$ hdfs dfs -mkdir /data/2017/permtest1
drwxr-xr-x   - John.Doe hdfs          0 2017-05-02 20:43 /data/2017/permtest
drwxr-xr-x   - john.doe hdfs          0 2017-05-02 20:44 /data/2017/permtest1

The John.Doe vs. john.doe value is what gets passed to Ranger for authorization, and this is a problem: Ranger usersync brings over "john.doe", so any Hive policies won't work.

Any ideas?
						
					
Labels:
- Apache Hadoop
- Apache Hive
- Apache Ranger
    
	
		
		
03-28-2017 02:18 PM

1) Locally... I would also change the GenerateFlowFile processor to have a file size of, say, 1 KB to start and scale up from there. It looks like you've left it at the default of 0 (30,000 flow files but 0 bytes). A sketch of the settings is below.

2) You need to make sure all route paths are taken care of. If you put your cursor over the yellow exclamation mark it will highlight the error; in your case you need to handle the failure route (send it to a funnel or another processor).

3) Once CompressContent is completed
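
A quick sketch of the GenerateFlowFile settings the first point refers to; the property names are as they appear in NiFi, and the values are just starting points to scale up from:

# GenerateFlowFile processor (sketch)
File Size     = 1 KB
Batch Size    = 1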
						
					
    
	
		
		
03-28-2017 02:00 PM

Also, is the cluster Kerberized? Do you have Ranger policies for Hive?
						
					