Member since 02-17-2015
40 Posts | 25 Kudos Received | 3 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 3273 | 01-31-2017 04:47 AM |
| | 3679 | 07-26-2016 05:46 PM |
| | 9040 | 05-02-2016 10:12 AM |

11-25-2020 04:22 PM
1 Kudo

Today I ran into this same issue, but the solutions in this post didn't resolve the problem. I found that each time you start cloudera-scm-server (sudo systemctl start cloudera-scm-server), it simply adds the entries we are instructed to delete back to the database.

The following did resolve the problem. Edit /var/lib/cloudera-scm-server/certmanager/cm_init.txt and change the top three lines from true to false, as follows:

setsettings AGENT_TLS false
setsettings WEB_TLS false
setsettings NEED_AGENT_VALIDATION false

Then stop and start cloudera-scm-server. This time you will see the entries back in the database, but set to false. On the database server you can run the following to confirm they are now false:

select * from CONFIGS where ATTR='web_tls';
select * from CONFIGS where ATTR='agent_tls';
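Put together as a shell sketch (the cm_init.txt path and setting names are the ones from the steps above; the psql invocation is an assumption for a PostgreSQL-backed "scm" database, so adjust it to your environment):

```bash
# 1. Flip the three TLS-related flags from true to false.
sudo sed -i \
  -e 's/^setsettings AGENT_TLS true$/setsettings AGENT_TLS false/' \
  -e 's/^setsettings WEB_TLS true$/setsettings WEB_TLS false/' \
  -e 's/^setsettings NEED_AGENT_VALIDATION true$/setsettings NEED_AGENT_VALIDATION false/' \
  /var/lib/cloudera-scm-server/certmanager/cm_init.txt

# 2. Restart Cloudera Manager so it re-creates the CONFIGS rows with the new values.
sudo systemctl stop cloudera-scm-server
sudo systemctl start cloudera-scm-server

# 3. Confirm the rows are now false (connection details below are an assumption).
psql -U scm -d scm -c "select * from CONFIGS where ATTR='web_tls';"
psql -U scm -d scm -c "select * from CONFIGS where ATTR='agent_tls';"
```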
02-19-2020 10:49 PM

With newer versions of Spark, the sqlContext is not loaded by default; you have to create it explicitly:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@6179af64

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> sqlContext.sql("describe mytable")
res2: org.apache.spark.sql.DataFrame = [col_name: string, data_type: string ... 1 more field]

I'm working with Spark 2.3.2.
10-22-2019 07:50 PM

Hi @Jonas Straub, following your article I created a collection with the curl command below and got a 401 error:

curl --negotiate -u : 'http://myhost:8983/solr/admin/collections?action=CREATE&name=col&numShards=1&replicationFactor=1&collection.configName=_default&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":31818},
  "failure":{
    "myhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://myhost:8983/solr: Expected mime type application/octet-stream but got text/html.
      <html>
      <head>
      <meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/>
      <title>Error 401 Authentication required</title>
      </head>
      <body>
      <h2>HTTP ERROR 401</h2>
      <p>Problem accessing /solr/admin/cores. Reason:
      <pre>    Authentication required</pre>
      </p>
      </body>
      </html>"}
}

While debugging the Solr source code, I found that this exception is returned by coreContainer.getZKController().getOverseerCollectionQueue().offer(Utils.toJson(m), timeout), so I suspect Solr is not authenticating to ZooKeeper. When I replace the Kerberized ZooKeeper with a non-Kerberos ZooKeeper, the collection is created successfully. How can I solve the problem with a Kerberized ZooKeeper?
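For reference, the request written out with plain ASCII dashes and a fresh Kerberos ticket first (the principal and realm below are placeholders; the URL is the one from the post):

```bash
# Obtain a Kerberos ticket; substitute your own principal and realm.
kinit solradmin@EXAMPLE.COM

# SPNEGO-authenticated request: --negotiate plus an empty user after -u
# is what tells curl to use the Kerberos ticket.
curl --negotiate -u : \
  'http://myhost:8983/solr/admin/collections?action=CREATE&name=col&numShards=1&replicationFactor=1&collection.configName=_default&wt=json'
```

If the 401 persists with a valid ticket, the failure is on the Solr/ZooKeeper side, which is what the question above is about.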
06-01-2018 01:31 PM

@vvinaga From the logs it looks like it cannot talk to the HDFS NameNode. Could you check whether HDFS is configured correctly to use Kerberos?
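For example, a quick check from a cluster node might look like this (a sketch; the expected values assume a Kerberized cluster):

```bash
# Print the effective security settings from the client configuration.
hdfs getconf -confKey hadoop.security.authentication   # "kerberos" on a secured cluster
hdfs getconf -confKey hadoop.security.authorization    # "true" on a secured cluster

# Verify a valid Kerberos ticket is present, then try a simple NameNode call.
klist
hdfs dfs -ls /
```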
01-31-2017 04:47 AM

When I use the fully qualified domain name (with a '.' in it), the repo works fine!

parcelRepositories: ["http://localrepo.cdh-cluster.internal/parcels/cdh5/", "http://localrepo.cdh-cluster.internal/parcels/spark2/"]
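A quick way to sanity-check that a repo URL is reachable from the cluster hosts is to fetch its parcel manifest (a sketch; the URLs are the ones from the config above):

```bash
# Each parcel repository directory serves a manifest.json that
# Cloudera Manager reads to discover the available parcels.
curl -sSf http://localrepo.cdh-cluster.internal/parcels/cdh5/manifest.json | head -n 20
curl -sSf http://localrepo.cdh-cluster.internal/parcels/spark2/manifest.json | head -n 20
```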
05-02-2016 01:18 PM

The issue was a common one: insufficient permissions on directories. Thank you everyone!
04-29-2016 09:15 AM
1 Kudo

You can locate them through Ambari. When you (re)start a service, you can click through Operations > Operation > Tasks and inspect the commands.

If you look closely, the script executed to restart the NodeManager (at 08:53:13,592 in the task output) is /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh. This file is shipped with the distribution. Before it is executed, users are created and configuration is pushed.

The preparation of these steps happens on the Ambari server. You can search for the Python scripts there, for example the NodeManager scripts in /var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/. A shell sketch for locating these files follows below.

If you change one of these files, don't forget to restart the ambari-server, because the files are cached. Also note that after an ambari-server upgrade such changes will be reverted.

Hope this helps.
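The sketch referred to above (the stack version directory under common-services may differ on your cluster):

```bash
# The daemon script shipped with the HDP distribution (run by the Ambari agent).
ls -l /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh

# The Python command scripts Ambari uses to prepare and run the restart.
find /var/lib/ambari-server/resources/common-services/YARN -name '*.py' | head

# After editing any of these cached scripts, restart the Ambari server.
sudo ambari-server restart
```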
07-27-2016 11:47 AM

Hi @Junichi Oda,

We have the same error in the Ranger log, even when the group names are filled:

ERROR LdapUserGroupBuilder [UnixUserSyncThread] - sink.addOrUpdateUser failed with exception: org/apache/commons/httpclient/URIException, for user: userX, groups: [groupX, groupY]

I have inspected the source code of ranger-0.6, which is part of HDP-2.4.3.0, our current version of the stack. Interestingly enough, all calls to the remote server inside LdapUserGroupBuilder.addOrUpdateUser(user, groups) are wrapped in a try-catch(Exception e); these are addUser, addUserGroupInfo and delXUserGroupInfo. But we don't see those in the log. The call to addOrUpdateUser itself is wrapped in a try-catch(Throwable t), so it looks like it is an Error rather than an Exception!

I found the RANGER-804 ticket referring to missing classes. I copied the missing jars into /usr/hdp/current/ranger-usersync/lib from another folder (a short shell sketch follows below). The code now runs, but at the moment I get a certificate (PKI) error because we use LDAPS; still, this might get you further.

Greetings, Alexander
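The sketch mentioned above (the jar name and the source path in the cp command are purely illustrative; substitute whatever the find command turns up on your installation):

```bash
# Check whether the commons-httpclient classes are present on the usersync classpath.
ls /usr/hdp/current/ranger-usersync/lib | grep -i httpclient \
  || echo "no commons-httpclient jar found"

# Look for a copy elsewhere in the stack, then copy it over.
find /usr/hdp -name 'commons-httpclient*.jar' 2>/dev/null
sudo cp /path/from/find/output/commons-httpclient-3.1.jar /usr/hdp/current/ranger-usersync/lib/

# Restart Ranger Usersync (for example from Ambari) so the new jar is picked up.
```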
02-18-2015 01:16 AM
1 Kudo

Resetting authorized_proxy_user_config to its default (hue=*) still works.