Member since 04-11-2017
15 Posts | 1 Kudos Received | 0 Solutions

06-11-2018 09:57 AM

csguna, here is the connection to ZooKeeper. It looks like our created topics ciovInput_v3 and ciovInput_v1 are there. Thanks.

# bin/kafka-topics.sh --describe --zookeeper localhost:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
	Topic: __consumer_offsets Partition: 0 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 1 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 2 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 3 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 4 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 5 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 6 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 7 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 8 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 9 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 10 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 11 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 12 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 13 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 14 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 15 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 16 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 17 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 18 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 19 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 20 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 21 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 22 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 23 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 24 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 25 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 26 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 27 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 28 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 29 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 30 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 31 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 32 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 33 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 34 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 35 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 36 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 37 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 38 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 39 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 40 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 41 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 42 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 43 Leader: 256 Replicas: 257,255,256 Isr: 256
	Topic: __consumer_offsets Partition: 44 Leader: 256 Replicas: 255,256,257 Isr: 256
	Topic: __consumer_offsets Partition: 45 Leader: 256 Replicas: 256,255,257 Isr: 256
	Topic: __consumer_offsets Partition: 46 Leader: 256 Replicas: 257,256,255 Isr: 256
	Topic: __consumer_offsets Partition: 47 Leader: 256 Replicas: 255,257,256 Isr: 256
	Topic: __consumer_offsets Partition: 48 Leader: 256 Replicas: 256,257,255 Isr: 256
	Topic: __consumer_offsets Partition: 49 Leader: 256 Replicas: 257,255,256 Isr: 256
Topic:ciovGSInput PartitionCount:1 ReplicationFactor:3 Configs:
	Topic: ciovGSInput Partition: 0 Leader: 256 Replicas: 257,255,256 Isr: 256
Topic:ciovInput PartitionCount:1 ReplicationFactor:1 Configs:
	Topic: ciovInput Partition: 0 Leader: 256 Replicas: 256 Isr: 256
Topic:ciovInput_v1 PartitionCount:1 ReplicationFactor:1 Configs:
	Topic: ciovInput_v1 Partition: 0 Leader: 256 Replicas: 256 Isr: 256
Topic:ciovInput_v3 PartitionCount:1 ReplicationFactor:1 Configs:
	Topic: ciovInput_v3 Partition: 0 Leader: 255 Replicas: 255 Isr: 255
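A side note on trimming this output: kafka-topics.sh accepts a --topic filter with --describe, so the check can be limited to the topics in question rather than the whole cluster (same command as above, just filtered):

# Describe only the topics of interest; --topic is the standard kafka-topics.sh filter.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic ciovInput_v1
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic ciovInput_v3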
						
					
06-06-2018 01:40 PM

Does anybody know where the server.properties configuration file for Kafka is? I checked the file /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/etc/kafka/conf.dist/server.properties and noticed that its timestamp never changes, even after I changed advertised.host.name from the Cloudera GUI for Kafka, so it does not look like the configuration file the running instance actually uses. Any suggestions? Thanks
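For what it's worth, on a Cloudera-managed cluster the effective broker configuration is usually the copy the agent generates per process run, not anything under the parcel's conf.dist (which is only a template). A minimal way to look, assuming the usual /run/cloudera-scm-agent/process layout; the numeric prefix and the kafka.properties file name below are assumptions to verify on the host:

# List the most recently generated broker process directories:
ls -lt /run/cloudera-scm-agent/process/ | grep -i KAFKA_BROKER
# Inspect the generated config (directory name is illustrative; the prefix changes per restart):
cat /run/cloudera-scm-agent/process/1234-kafka-KAFKA_BROKER/kafka.properties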
						
					
06-05-2018 05:57 PM

Hello, I have Kafka version KAFKA-3.0.0-1.3.0.0.p0.40 running with 3 instances. When I run with one Kafka instance, I get the error:

Group coordinator one_kafka_hostname:9092 (id: 2147483391 rack: null) is unavailable or invalid, will attempt rediscovery

Later, I changed advertised.host.name from its default empty value to localhost and ran again; now I get the following repeated error with an incrementing correlation id:

[Producer clientId=producer-1] Error while fetching metadata with correlation id 362 : {ciovInput_v1=LEADER_NOT_AVAILABLE}

Does anybody have any suggestions to resolve it? Many thanks for the help in advance.

Labels: Apache Kafka
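A sanity check that may help narrow this down: LEADER_NOT_AVAILABLE often means the client cannot reach the broker under the name the broker advertises (and advertising localhost will only ever work for clients on the broker host itself). A sketch, reusing the hostname and topic from the errors above:

# From the producer's machine: confirm the advertised name resolves and the port answers.
getent hosts one_kafka_hostname
nc -vz one_kafka_hostname 9092
# Smoke-test the topic end to end with the console clients shipped in the parcel:
bin/kafka-console-producer.sh --broker-list one_kafka_hostname:9092 --topic ciovInput_v1
bin/kafka-console-consumer.sh --bootstrap-server one_kafka_hostname:9092 --topic ciovInput_v1 --from-beginning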
						
					
02-15-2018 12:15 PM

ben, thanks. The log file is way too big; I just checked again and found the following error:

Can't open /run/cloudera-scm-agent/process/1431-hive-HIVEMETASTORE/supervisor.conf: Permission denied.

All files in the directory /run/cloudera-scm-agent/process/ are owned by hive except this one:

-rw-------  1 root root  3430 Feb 15 14:30 supervisor.conf

I have two Hive servers; the one that works has the same permission issue, but it does not prevent it from running successfully. Here is more detail from the log file:

+ exec /opt/cloudera/parcels/CDH-5.7.6-1.cdh5.7.6.p0.6/lib/hive/bin/hive --config /run/cloudera-scm-agent/process/1431-hive-HIVEMETASTORE --service metastore -p 9083
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.6-1.cdh5.7.6.p0.6/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/02/15 14:30:16 ERROR conf.Configuration: error parsing conf core-default.xml
javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2541)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:982)
	at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1032)
	at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1433)
	at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
	at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
	at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
	at org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool.main(MapreduceDependencyClasspathTool.java:70)
Exception in thread "main" java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2659)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:982)
	at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1032)
	at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1433)
	at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
	at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
	at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
	at org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool.main(MapreduceDependencyClasspathTool.java:70)
Caused by: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2541)
	... 9 more
18/02/15 14:30:17 ERROR conf.Configuration: error parsing conf core-default.xml
javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2541)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1144)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1116)
	at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:525)
	at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:543)
	at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:437)
	at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:2652)
	at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:2611)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:74)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:58)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6083)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Exception in thread "main" java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2659)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1144)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1116)
	at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:525)
	at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:543)
	at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:437)
	at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:2652)
	at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:2611)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:74)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:58)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6083)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2541)
	... 18 more
+ date

It looks like the following log files do not have any information: /var/log/hive/hadoop-cmf-hive-HIVESERVER2-cahive-master01.log.out and /var/log/hive/hadoop-cmf-hive-HIVESERVER2-cahive-master01.log.out

Also, I tried to start the server from the command line and it doesn't work:

$ sudo service hive-server2 start
Redirecting to /bin/systemctl start hive-server2.service
Failed to start hive-server2.service: Unit not found.

Thanks
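One thought on the repeated "Feature 'http://apache.org/xml/features/xinclude' is not recognized" errors above: that message usually appears when an old Xerces jar on the classpath shadows the JDK's built-in XML parser, which fits the stray-JDK/classpath theme in this thread. A hedged way to hunt for such a jar; the search paths are just the usual suspects, not confirmed locations:

# Look for stray Xerces implementations that could shadow the JDK parser:
find /usr/lib /opt/cloudera -name 'xercesImpl*.jar' 2>/dev/null
# Check whether the generated process environment pins a parser via system properties:
grep -r 'javax.xml.parsers' /run/cloudera-scm-agent/process/1431-hive-HIVEMETASTORE/ 2>/dev/null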
02-14-2018 03:21 PM

Hi csguna, thanks for your help. I checked the log file and found the problem was related to an OpenJDK installed by another engineer; after I removed it, it works fine. However, I have one remaining problem: I am unable to start Hive on one machine and am not sure what the cause could be. Here is the error message in stderr.log. Many thanks.

18/02/14 14:29:09 ERROR conf.Configuration: error parsing conf core-default.xml
javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown Source)
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2541)
	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
	at org.apache.hadoop.conf.Configuration.get(Configuration.java:982)
	at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1032)
	at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1433)
	at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
	at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
	at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
	at org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool.main(MapreduceDependencyClasspathTool.java:70)
Exception in thread "main" java.lang.RuntimeException: javax.xml.parsers.ParserConfigurationException: Feature 'http://apache.org/xml/features/xinclude' is not recognized.
	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2659)    
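Since a stray OpenJDK already caused trouble once, it may be worth confirming which JDK the failing host actually resolves before digging deeper; none of the commands below are CDH-specific, and the package query assumes a RHEL-family host:

# Which Java does this host resolve, and are any OpenJDK packages still installed?
java -version
readlink -f "$(command -v java)"
alternatives --display java    # 'update-alternatives --display java' on Debian-family systems
rpm -qa | grep -i openjdk      # RHEL/CentOS package check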
						
					
02-13-2018 05:57 PM

I noticed that the directory /run/cloudera-scm-agent/process is empty; I am not sure why no directories are created there.
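The directories under /run/cloudera-scm-agent/process are created by the Cloudera Manager agent when it launches role processes, so an empty directory usually points at the agent itself. A minimal check, assuming a systemd host and the standard agent service and log locations:

# Is the agent running, and what does it report?
sudo systemctl status cloudera-scm-agent
tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
# If it is wedged, restart it and watch the log while CM retries the role start:
sudo systemctl restart cloudera-scm-agent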
						
					
02-13-2018 05:33 PM

Hello, after I rebooted the computer without shutting down all Cloudera services first, the Cloudera Management Service and all other services can no longer be stopped or restarted (a restart requires a stop first). When I try to stop a service I get the following:

Problem accessing /cmf/process/1282/logs. Reason:
http://cahive-master01:9000/process/1282-cloudera-mgmt-ALERTPUBLISHER/files/logs/stdout.log
The server declined access to the page or resource.

Problem accessing /cmf/process/1334/logs. Reason:
http://cahive-master01:9000/process/1334-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/files/logs/stderr.log
The server declined access to the page or resource.

The existing solution I found from other Cloudera users is to delete all nodes and then add them back. Is there a known cause for this problem, and a better solution? Thanks

Labels: Cloudera Manager
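Before deleting and re-adding nodes, it may be enough to restart the CM agents: after an unclean reboot the agent's supervisord can lose track of the old process IDs that those /cmf/process URLs refer to. A sketch for each affected host; hard_restart (which also restarts the agent's supervisord) should be part of the standard agent init script, but is worth confirming on this CM version:

# Restart the agent together with its supervisord, which owns the per-process dirs:
sudo service cloudera-scm-agent hard_restart
# Then watch the agent re-register with the CM server:
tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log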
						
					
06-12-2017 07:06 PM

I went to the machine's Status page and clicked Actions: only two options are clickable, "Initialize" and "Enter Maintenance Mode"; the "Start/Stop" and other options cannot be clicked. I clicked "Initialize", but I am still unable to start ZooKeeper. Thanks
						
					
06-12-2017 05:32 PM

Hi Elias,

I just reset the MySQL password to get Oozie working; thanks for the link.

There is one more issue since then: I cannot start the ZooKeeper servers. I get the following error when I try to start the 3 ZooKeepers:

"Starting these new ZooKeeper Servers may cause the existing ZooKeeper Datastore to be lost. Try again after restarting any existing ZooKeeper Servers with outdated configurations. If you do not want to preserve the existing Datastore, you can start each ZooKeeper Server from its respective Status page."

It is OK not to preserve the existing datastore, but when I go to the host's Status page I cannot find a link or any place that lets me start the ZooKeeper server manually. Do you have any suggestions? Thanks
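In case it helps while sorting out the Status page: once a ZooKeeper server does come up, its health can be confirmed directly with the standard four-letter-word commands, independently of Cloudera Manager. A minimal sketch, assuming the default client port 2181 and that nc is installed:

# 'imok' from ruok and a 'Mode:' line from stat mean the server is up and serving:
echo ruok | nc localhost 2181
echo stat | nc localhost 2181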
						
					
06-12-2017 02:57 PM

Elias, it now works with your solution; echo -n did the trick. Thanks so much.

I have another issue with Oozie: I may have deleted the Oozie role from Hadoop when decommissioning a server. Now I need to add the role back, and it asks me for the database username and password. Where do I find my previous username and password for Oozie? Or does it matter if I create a new set of database username and password for Oozie?
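If creating fresh credentials turns out to be acceptable, a new Oozie database user can be set up in MySQL along the lines below and then entered in the Add Role wizard; the database name, user name, and password here are illustrative placeholders, and the Oozie database would need re-initializing afterwards if the old one is gone:

# Create (or recreate) an Oozie database and user; substitute real credentials.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS oozie DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie_password';
FLUSH PRIVILEGES;
SQL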
						
					