Member since 03-21-2016

Posts: 233
Kudos Received: 62
Solutions: 33
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1283 | 12-04-2020 07:46 AM |
|  | 1579 | 11-01-2019 12:19 PM |
|  | 2199 | 11-01-2019 09:07 AM |
|  | 3146 | 10-30-2019 06:10 AM |
|  | 1974 | 10-28-2019 10:03 AM |
		
10-31-2019 09:47 AM

Thank you very much. It helped me find the error: in the end, my provider had the time on the DC set differently from the host. After synchronizing them, it works. Greetings.
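Time skew between a DC and a host commonly breaks Kerberos authentication, which by default tolerates only about 5 minutes of clock offset. A quick, hedged way to check synchronization on a host, assuming ntpd is in use (with chrony, `chronyc tracking` is the equivalent):

# List the NTP peers the host syncs against; the offset column shows drift in ms
ntpq -p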
		
10-28-2019 10:03 AM

If clusterusers is a group, then you need a space separator between the users and the groups in the ACL config. Something like:

yarn.scheduler.capacity.root.default.acl_submit_applications=yarn,ambari-qa clusterusers
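For reference, the Capacity Scheduler ACL value format is a comma-separated user list, a single space, then a comma-separated group list ("user1,user2 group1,group2"). After editing the config, refresh the queues and verify from a client; the standard Hadoop CLI commands for this are:

# Push the updated capacity-scheduler.xml to the ResourceManager
yarn rmadmin -refreshQueues

# Show which queue ACLs (SUBMIT_APPLICATIONS, ADMINISTER_QUEUE) apply to the current user
mapred queue -showacls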
		
10-24-2019 09:42 AM

@rguruvannagari Thanks for the quick reply; I was able to start the process based on your inputs. While running the Spark application I am getting the issue below. Please help me fix it.

19/10/24 16:36:09 INFO ContextHandler: Started o.s.j.s.ServletContextHandler@4a0df195{/history,null,AVAILABLE,@Spark}
19/10/24 16:36:09 INFO HistoryServer: Bound HistoryServer to 0.0.0.0, and started at http://hadoop02.prod.phenom.local:18081
[murali.kumpatla@hadoop02 spark2]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/10/24 16:37:20 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2498)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
  at $line3.$read$$iw$$iw.<init>(<console>:15)
  at $line3.$read$$iw.<init>(<console>:43)
  at $line3.$read.<init>(<console>:45)
  at $line3.$read$.<init>(<console>:49)
  at $line3.$read$.<clinit>(<console>)
  at $line3.$eval$.$print$lzycompute(<console>:7)
  at $line3.$eval$.$print(<console>:6)
  at $line3.$eval.$print(<console>)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
  at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
  at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
  at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
  at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
  at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
  at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
  at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
  at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
  at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
  at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
  at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
  at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
  at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:88)
  at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:88)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:88)
  at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:88)
  at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:88)
  at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
  at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:87)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:170)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:158)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:158)
  at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
  at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
  at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:158)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:226)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:206)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:194)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:206)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:241)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:141)
  at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:141)
  at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
  at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:141)
  at org.apache.spark.repl.Main$.doMain(Main.scala:76)
  at org.apache.spark.repl.Main$.main(Main.scala:56)
  at org.apache.spark.repl.Main.main(Main.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/10/24 16:37:20 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
19/10/24 16:37:20 WARN MetricsSystem: Stopping a MetricsSystem that is not running
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2498)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
  ... 62 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____
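When spark-shell fails with "Yarn application has already ended", the root cause is usually in the YARN application attempt's own logs rather than the driver output. A hedged sketch of how to pull them (the application id below is a placeholder; take the real one from the ResourceManager UI or the driver log):

# List recent applications to find the id of the failed attempt
yarn application -list -appStates FAILED,KILLED

# Fetch the aggregated logs for that application
yarn logs -applicationId application_1571905800000_0001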
		
10-22-2019 12:06 PM

Check the config property hadoop.http.authentication.type. If it is set to kerberos, then accessing the UIs requires Kerberos credentials on the client. This defaults to kerberos in HDP 3.x when the cluster is Kerberized.

If you want to disable Kerberos auth for the web UIs, change the following properties under Ambari > HDFS > Configs > core-site:

hadoop.http.authentication.type=simple
hadoop.http.authentication.simple.anonymous.allowed=true
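If SPNEGO auth is kept enabled instead, the web UIs can still be reached from a host holding a valid Kerberos ticket. A minimal sketch, where the principal, hostname, and port are examples and curl must be built with GSS/negotiate support:

# Obtain a ticket, then let curl negotiate SPNEGO with the NameNode UI
kinit user@EXAMPLE.COM
curl --negotiate -u : http://namenode.example.com:50070/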
		
07-16-2018 06:23 AM

Thank you @rguruvannagari. I just enabled SolrCloud and restarted, and I don't see the audit error anymore.
		
04-16-2017 06:00 PM

@rguruvannagari Thank you, it works. I followed the Hortonworks "Hello World" tutorial; it didn't mention that.
		
04-13-2017 12:27 PM

Yes, this node was part of one of the old HDP installations. However, we have now uninstalled that and moved to 2.5.3, a more stable release. I took the following steps (sketched in shell below):

1) Deleted the old 2.3.4 and current folders under /usr/hdp
2) Restarted the Ambari agent
3) Added the new host again and resolved the host-check issues (pre-existing old 2.3.4 packages, users, and folders, which I removed)
4) The node was added successfully, but I had to install a new RPM, python-argparse
5) Added the DataNode, NodeManager, and clients on the new node successfully

Through Ambari I can now see this node added successfully with the required services.
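A hedged sketch of the cleanup commands on the node being re-added, assuming the stale stack directories match the versions named above (verify the paths before deleting anything):

# Remove the stale HDP 2.3.4 install and the leftover 'current' symlink directory
rm -rf /usr/hdp/2.3.4* /usr/hdp/current

# Restart the Ambari agent so it re-registers with a clean state
ambari-agent restart

# Dependency the add-host wizard flagged as missing
yum install -y python-argparse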
		
04-18-2017 10:01 AM

Finally, this problem was solved by changing the hostname format: the hostname must not contain an underscore ("_") character.
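Underscores are invalid in DNS hostnames per RFC 952/RFC 1123, which is why many Hadoop components reject them. A quick, hedged way to check a host's FQDN for compliance:

# Succeeds only if every label is alphanumeric-plus-hyphen and does not start or end with a hyphen
hostname -f | grep -qE '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*$' \
  && echo "hostname OK" || echo "hostname invalid"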
		
05-03-2017 05:47 PM

Hello, I hit the same error, and ambari-server.log shows:

ERROR [ambari-client-thread-25] HostImpl:1374 - Config inconsistency exists: unknown configType
		
04-17-2017 07:16 PM

Also, we had to change another default configuration for Ambari Metrics: by default, the HBase temporary directory is created inside /usr/hdp/, which it should not be.
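A hedged example of the change, assuming the stock Ambari Metrics embedded-HBase setup (the property lives under Ambari > Ambari Metrics > Configs, in ams-hbase-site; the target path below is an example):

# Move the AMS HBase temp dir out of /usr/hdp
hbase.tmp.dir=/var/lib/ambari-metrics-collector/hbase-tmp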