Member since 06-16-2017
21 Posts | 3 Kudos Received | 0 Solutions

08-24-2017 03:59 PM
Thanks for the update, Sam. I appreciate it.

08-23-2017 03:08 PM
Hello. We recently upgraded from HDF 2.1.4 to HDF 3.0.1.0 with Ambari 2.5.1. The Ranger version is 0.7.0. We followed the instructions for the upgrade in this document: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1/bk_ambari-upgrade/bk_ambari-upgrade.pdf

While trying to restart Ambari, the server failed to start due to the database check:

    Starting ambari-server
    Ambari Server running with administrator privileges.
    Organizing resource files at /var/lib/ambari-server/resources...
    Ambari database consistency check started...
    Server PID at: /var/run/ambari-server/ambari-server.pid
    Server out at: /var/log/ambari-server/ambari-server.out
    Server log at: /var/log/ambari-server/ambari-server.log
    Waiting for server start.............
    DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip. You may try --auto-fix-database flag to attempt to fix issues automatically. If you use this "--skip-database-check" option, do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See /var/log/ambari-server/ambari-server-check-database.log for more details on the consistency issues.
    ERROR: Exiting with exit code -1.
    REASON: Ambari Server java process has stopped. Please check the logs for more information.

This error was in /var/log/ambari-server/ambari-server-check-database.log:

    ERROR - Required config(s): atlas-tagsync-ssl is(are) not available for service RANGER with service config version XX in cluster CLUSTERNAME

The default value in Ranger for ranger.tagsync.source.atlasrest.ssl.config.filename is /etc/ranger/tagsync/conf/atlas-tagsync-ssl.xml. It is a required value in Ambari, but the file does not exist in the location specified. I tried copying an atlas-tagsync-ssl.xml file from a different location on the server; it did not change the outcome.

I managed to resolve the problem by following these steps (collected as a shell sequence below):

1. Start Ambari and skip the DB check: ambari-server start --skip-database-check
2. After Ambari starts, add this configuration parameter for Ranger under Ranger > Advanced > Custom atlas-tagsync-ssl: ranger.tagsync.source.atlas = false
3. Restart Ranger.
4. Restart Ambari without the --skip-database-check option.

Ambari started without incident.

I found the setting above in this documentation: https://cwiki.apache.org/confluence/display/RANGER/Tag+Synchronizer+Installation+and+Configuration

The documentation above indicates the ranger.tagsync.source.atlas value is true by default. We are not using Atlas, nor are we using tags in this environment. We did not have this value set when we were running HDF 2.1.4, so it may be that this setting was introduced with HDF 3.0 and Ambari 2.5.1 during the upgrade.
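For anyone who wants to replay the workaround, here it is as a shell sequence. The Ranger property itself has to be added through the Ambari UI, so that part is only a comment; the start/restart commands are the standard ambari-server ones shown in the log above.

    # Workaround sketch: skip the DB consistency check long enough to fix the config.
    ambari-server start --skip-database-check

    # Manual step in the Ambari UI (no CLI equivalent shown here):
    #   Ranger > Advanced > Custom atlas-tagsync-ssl
    #   add property: ranger.tagsync.source.atlas = false
    # then restart Ranger from Ambari.

    # Finally, bounce Ambari without the skip flag so the check runs clean.
    ambari-server restart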
I wanted to report this in case others are experiencing the same issue, or in case this is a bug.

Thanks,
Kirk DeMumbrane

Labels: Apache Ambari, Apache Atlas, Apache Ranger

08-16-2017 04:12 PM
Hello Sriharsha. Has any progress been made on the post that I submitted yesterday? Thanks in advance.

08-14-2017 06:27 PM
Thanks. I gave it a try. The SQL script now runs correctly. However, when I try to add a new schema I see this error in the registry.log file:

    ERROR [13:21:06.461] [dw-27 - POST /api/v1/schemaregistry/schemas] c.h.r.s.w.SchemaRegistryResource - Error encountered while adding schema info [SchemaMetadata{type='avro', schemaGroup='sales-nxt-email', name='test', description='test', compatibility=BACKWARD, evolve=true}]
    com.hortonworks.registries.storage.exception.StorageException: org.postgresql.util.PSQLException: ERROR: null value in column "validationLevel" violates not-null constraint
      Detail: Failing row contains (2, avro, sales-nxt-email, test, BACKWARD, null, test, t, 1502734866442).
      at com.hortonworks.registries.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor$QueryExecution.executeUpdate(AbstractQueryExecutor.java:225)
      at com.hortonworks.registries.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor.executeUpdate(AbstractQueryExecutor.java:182)
      at com.hortonworks.registries.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insertOrUpdateWithUniqueId(PostgresqlExecutor.java:182)
      at com.hortonworks.registries.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insert(PostgresqlExecutor.java:80)
      at com.hortonworks.registries.storage.impl.jdbc.JdbcStorageManager.add(JdbcStorageManager.java:66)
      at com.hortonworks.registries.schemaregistry.DefaultSchemaRegistry.addSchemaMetadata(DefaultSchemaRegistry.java:168)
      at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.lambda$addSchemaInfo$1(SchemaRegistryResource.java:380)
      at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.handleLeaderAction(SchemaRegistryResource.java:158)
      at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.addSchemaInfo(SchemaRegistryResource.java:371)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)

Looking at the create_tables.sql script for the schema_metadata_info table, it appears there is a new column called "validationLevel" that is not in the original script that was used to install Schema Registry. Should this column allow nulls?

    -- New script
    CREATE TABLE IF NOT EXISTS schema_metadata_info (
      "id"              SERIAL UNIQUE NOT NULL,
      "type"            VARCHAR(255) NOT NULL,
      "schemaGroup"     VARCHAR(255) NOT NULL,
      "name"            VARCHAR(255) NOT NULL,
      "compatibility"   VARCHAR(255) NOT NULL,
      "validationLevel" VARCHAR(255) NOT NULL, -- added in 0.3.1, table should be altered to add this column from earlier versions.
      "description"     TEXT,
      "evolve"          BOOLEAN NOT NULL,
      "timestamp"       BIGINT NOT NULL,
      PRIMARY KEY ("name"),
      UNIQUE ("id")
    );

    -- Script which was originally used to install Schema Registry
    CREATE TABLE IF NOT EXISTS schema_metadata_info (
      "id"            SERIAL PRIMARY KEY,
      "type"          VARCHAR(256) NOT NULL,
      "schemaGroup"   VARCHAR(256) NOT NULL,
      "name"          VARCHAR(256) NOT NULL,
      "compatibility" VARCHAR(256) NOT NULL,
      "description"   TEXT,
      "evolve"        BOOLEAN NOT NULL,
      "timestamp"     BIGINT NOT NULL,
      UNIQUE ("id", "name")
    );
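In case it helps anyone on an upgraded database: the comment in the new script says the table "should be altered to add this column from earlier versions", so I assume the intended migration looks roughly like the sketch below. The database name, user, and the 'ALL' backfill value are my guesses, not something from the official docs; confirm the real default before running anything like this.

    # Hypothetical migration sketch for a pre-0.3.1 schema_metadata_info table:
    # add the column, backfill existing rows, then enforce NOT NULL.
    psql -U registry -d schema_registry <<'SQL'
    ALTER TABLE schema_metadata_info ADD COLUMN "validationLevel" VARCHAR(255);
    UPDATE schema_metadata_info SET "validationLevel" = 'ALL';  -- assumed placeholder value
    ALTER TABLE schema_metadata_info ALTER COLUMN "validationLevel" SET NOT NULL;
    SQL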

08-11-2017 07:40 PM
Hello Sriharsha,

I am trying to run the drop-create command, and there appears to be a problem with the create_tables.sql script.

Error:

    Exception in thread "main" org.postgresql.util.PSQLException: ERROR: multiple primary keys for table "schema_metadata_info" are not allowed
      Position: 555
      at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
      at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
      at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
      at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
      at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
      at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:303)
      at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:289)
      at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:266)
      at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:262)
      at com.hortonworks.registries.storage.tool.SQLScriptRunner.runScript(SQLScriptRunner.java:98)
      at com.hortonworks.registries.storage.tool.TablesInitializer.doExecute(TablesInitializer.java:198)
      at com.hortonworks.registries.storage.tool.TablesInitializer.doExecuteCreate(TablesInitializer.java:175)
      at com.hortonworks.registries.storage.tool.TablesInitializer.main(TablesInitializer.java:162)

The CREATE TABLE command below declares "id" as a SERIAL PRIMARY KEY and also has a PRIMARY KEY clause at the bottom declaring "name" a primary key. Can you correct the script with the proper primary key and repost it on GitHub?

    CREATE TABLE IF NOT EXISTS schema_metadata_info (
      "id"              SERIAL PRIMARY KEY,
      "type"            VARCHAR(255) NOT NULL,
      "schemaGroup"     VARCHAR(255) NOT NULL,
      "name"            VARCHAR(255) NOT NULL,
      "compatibility"   VARCHAR(255) NOT NULL,
      "validationLevel" VARCHAR(255) NOT NULL, -- added in 0.3.1, table should be altered to add this column from earlier versions.
      "description"     TEXT,
      "evolve"          BOOLEAN NOT NULL,
      "timestamp"       BIGINT NOT NULL,
      UNIQUE ("id"),
      PRIMARY KEY ("name")
    );
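For reference, a corrected sketch that avoids the double primary key, assuming "name" is the intended sole primary key (this matches the revised script quoted in the 08-14 reply above). The psql wrapper and database/user names are placeholders.

    # Corrected sketch: demote "id" to a plain UNIQUE column so the table has
    # exactly one PRIMARY KEY, which is what PostgreSQL requires.
    psql -U registry -d schema_registry <<'SQL'
    CREATE TABLE IF NOT EXISTS schema_metadata_info (
      "id"              SERIAL UNIQUE NOT NULL,
      "type"            VARCHAR(255) NOT NULL,
      "schemaGroup"     VARCHAR(255) NOT NULL,
      "name"            VARCHAR(255) NOT NULL,
      "compatibility"   VARCHAR(255) NOT NULL,
      "validationLevel" VARCHAR(255) NOT NULL,
      "description"     TEXT,
      "evolve"          BOOLEAN NOT NULL,
      "timestamp"       BIGINT NOT NULL,
      PRIMARY KEY ("name")
    );
    SQL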
Thanks,
Kirk

08-10-2017 02:05 PM
Thanks, this does work for us. Our port is different, but I was able to see the API reference.

08-10-2017 01:53 PM
Hello. I work with Dave Holtzhouser. I am the system admin who has access to the PostgreSQL database. Here are the results of the SQL query you requested above:

    id | type | schemaGroup         | name                        | compatibility | description                                                                                               | evolve | timestamp
    ---+------+---------------------+-----------------------------+---------------+-----------------------------------------------------------------------------------------------------------+--------+--------------
     1 | avro | Kafka               | Test                        | BACKWARD      | TEst                                                                                                      | t      | 1498680663869
     2 | avro | Kafka               | RoutingSlip                 | BACKWARD      | An implementation of the Routing Slip EIP (http://www.dummyurl.com/patterns/messaging/RoutingTable.html)  | t      | 1498756496668
     3 | avro | Kafka               | RoutingSlip                 | BACKWARD      | An implementation of the Routing Slip EIP (http://www.dummyurl.com/patterns/messaging/RoutingTable.html)  | t      | 1498756511480
     4 | avro | Kafka               | EmailAddressMsg             | BACKWARD      | An email address                                                                                          | t      | 1500669984537
     5 | avro | Kafka               | EmailMessageMsg             | BACKWARD      | An email message                                                                                          | t      | 1500670028183
     6 | avro | truck-sensors-kafka | raw-truck_events_avro       | BACKWARD      | Raw Geo events from trucks in Kafka Topic                                                                 | t      | 1501266679367
     7 | avro | Kafka               | MMS_Sales_email_dev         | BACKWARD      | Email Test Schema                                                                                         | t      | 1501269422220
     8 | avro | Kafka               | MMS_Sales_CarCompany_Emails | BACKWARD      | CarCompany Email Topic                                                                                    | t      | 1501281176150

08-09-2017 03:44 PM
Hello Sarah. Any updates on when the link will be working? Thanks.

08-09-2017 03:39 PM
Hello. I am trying to use the tls-toolkit.sh utility to create some client certificates. We are running HDF-2.1.4.0, which is NiFi version 1.1.0. We have a two-node cluster with the Certificate Authority installed on one of the two servers. We are running the commands below as root. I am using this as a reference:

https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.4/bk_administration/content/client.html

I am running the command from:

    /var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/files/nifi-toolkit-$version

The command is the following:

    tls-toolkit.sh client -c servername.domain.com -D "CN=admin, OU=NIFI" -t nifi -p 10443 -T pkcs12

When I run this command I get an error like this:

    tls-toolkit.sh: JAVA_HOME not set; results may vary
    2017/08/09 10:08:18 INFO [main] org.apache.nifi.toolkit.tls.commandLine.BaseCommandLine: Command line argument --keyStoreType=pkcs12 only applies to keystore, recommended truststore type of JKS unaffected.
    2017/08/09 10:08:19 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: Requesting new certificate from servername.domain.com:10443
    2017/08/09 10:08:19 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer: Requesting certificate with dn CN=admin,OU=NIFI.maritz.com from servername.domain.com:10443
    Service client error: Received response code 500 with payload
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
    <title>Error 500 </title>
    </head>
    <body>
    <h2>HTTP ERROR: 500</h2>
    <p>Problem accessing /. Reason:
    <pre>  javax.servlet.ServletException: Server error</pre></p>
    <hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.9.v20160517</a><hr/>
    </body>
    </html>

In the /var/log/nifi/nifi-ca.std.out file I see this:

    2017/08/09 13:29:31 WARN [qtp1653844940-8] org.eclipse.jetty.server.HttpChannel: https://servername.domain.com:10443/
    javax.servlet.ServletException: Server error
      at org.apache.nifi.toolkit.tls.service.server.TlsCertificateAuthorityServiceHandler.handle(TlsCertificateAuthorityServiceHandler.java:99)
      at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
      at org.eclipse.jetty.server.Server.handle(Server.java:524)
      at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
      at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
      at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
      at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
      at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
      at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
      at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
      at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
      at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
      at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
      at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
      at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
      at java.lang.Thread.run(Thread.java:745)

Any suggestions on what it might be looking for?
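One thing I have not ruled out yet: the very first line of the output warns that JAVA_HOME is not set, so I plan to re-run with it exported. This is only a sketch; the JDK path below is an assumed example, and the rest is the same command as above.

    # Re-run with JAVA_HOME set, since the toolkit warned it was missing.
    # The JDK path is an assumed example -- point it at your actual install.
    export JAVA_HOME=/usr/lib/jvm/java
    cd /var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/files/nifi-toolkit-$version
    ./tls-toolkit.sh client -c servername.domain.com -D "CN=admin, OU=NIFI" -t nifi -p 10443 -T pkcs12

It may also be worth double-checking that the token passed with -t matches the token the CA server was started with, since the toolkit uses that shared token to authenticate the signing request; I am guessing a mismatch could surface as this kind of 500 from the CA.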
Thanks in advance,
Kirk

Labels: Apache Ambari, Apache NiFi

07-10-2017 07:45 PM
Thank you very much. I was able to get your suggestions to work. I have one other question. I have set up permissions at the /data/Process-Group/{uuid} level. The developer has created multiple Process Groups under the group where I applied the permissions. Will these permissions propagate to those additional Process Groups, or will I have to configure each of them as well? They have not run the flow completely yet, which is why I am asking.