Member since 04-15-2018

32 Posts | 0 Kudos Received | 0 Solutions
			
    
	
		
		
05-20-2019 09:29 AM

Hi aks,

While installing CDH you will be asked to choose between public and custom repositories. The installation downloads files from the public repository onto your local machines to complete the setup, so for a new setup you need to select the public repository.
			
    
	
		
		
05-20-2019 09:04 AM

This is quite a common issue when installing CDH, but it is easy to resolve. It is an SSH configuration problem: edit the /etc/ssh/sshd_config file and change the following parameter:

PermitRootLogin yes

Then restart the SSH service:

sudo service ssh restart

Now select root to log in (screenshot not shown) and click Continue. Your issue will be resolved.

Thanks,
Solomonchinni
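The sshd_config change above can be scripted. Here is a minimal sketch, run against a local stand-in file so it does not require root; on a real host the target file is /etc/ssh/sshd_config, and GNU sed is assumed:

```shell
# Create a stand-in for /etc/ssh/sshd_config with the default commented-out value.
printf '#PermitRootLogin prohibit-password\n' > sshd_config.demo

# Uncomment-or-replace the PermitRootLogin line, as described in the post above.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' sshd_config.demo

# Show the resulting line.
grep '^PermitRootLogin' sshd_config.demo
```

On a real host, follow this with `sudo service ssh restart` so the change takes effect.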
			
    
	
		
		
05-20-2019 08:19 AM

I'm running Ubuntu 18.04 LTS on my machine and I'm trying to install CDH 6.1. While installing the agents I got an error message (screenshot not shown).

Please help me find a solution.

Thank you.

Regards,
solomonchinni
		
			
				
						
Labels: Cloudera Manager
    
	
		
		
09-27-2018 02:19 AM

I followed your suggestion: I added a new user to the supergroup and granted permissions using ACLs. The user name is 'perl'. I ran a new job:

$ sudo -u perl hadoop distcp /solomon/data/data.txt /solomon

When I run this job, the application stays in pending status.
			
    
	
		
		
09-20-2018 10:53 PM

The database is completely empty.
			
    
	
		
		
09-15-2018 03:21 AM

Unfortunately, Hive stopped. When I checked the Hive instances, HiveServer2, the Hive Metastore Server, and the Oozie server were all down. When I checked the Hive metastore database and the Oozie database, no schema tables were found.

The Hive log file showed the following:

3:01:24.277 PM WARN Query [main]: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables"
	at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:485)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3380)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
	at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
	at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
	at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
	at org.datanucleus.store.query.Query.execute(Query.java:1654)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
	at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:7341)
	at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:7320)
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7274)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7258)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy12.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:662)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:710)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:509)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6487)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6482)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6732)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6659)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
3:01:24.291 PM ERROR HiveMetaStore [main]: MetaException(message:Version information not found in metastore.)
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7282)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7258)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy12.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:662)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:710)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:509)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6487)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6482)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6732)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6659)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
3:01:24.291 PM ERROR HiveMetaStore [main]: Metastore Thrift Server threw an exception...
MetaException(message:Version information not found in metastore. )
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7282)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7258)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy12.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:662)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:710)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:509)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6487)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6482)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6732)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6659)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
3:01:24.292 PM INFO HiveMetaStore [Thread-2]: Shutting down hive metastore.

The Oozie log file showed the following:

3:01:06.120 PM FATAL Services SERVER[chinni] Runtime Exception during Services Load. Check your list of 'oozie.services' or 'oozie.services.ext'
3:01:06.126 PM FATAL Services SERVER[chinni] E0103: Could not load service classes, Cannot create PoolableConnectionFactory (Table 'oozie.VALIDATE_CONN' doesn't exist)
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot create PoolableConnectionFactory (Table 'oozie.VALIDATE_CONN' doesn't exist)
	at org.apache.oozie.service.Services.loadServices(Services.java:309)
	at org.apache.oozie.service.Services.init(Services.java:213)
	at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
	at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
	at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)
	at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)
	at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)
	at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
	at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
	at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
	at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
	at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
	at org.apache.catalina.core.StandardService.start(StandardService.java:525)
	at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
	at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
	at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: <openjpa-2.4.1-r422266:1730418 fatal general error> org.apache.openjpa.persistence.PersistenceException: Cannot create PoolableConnectionFactory (Table 'oozie.VALIDATE_CONN' doesn't exist)
	at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:106)
	at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)

How can I bring the Hive and Oozie instances back to active status?

Thanks and regards,
solomonchinni
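The DataNucleus error above names one possible workaround itself. A hypothetical hive-site.xml fragment enabling it might look like the following; the property name is taken verbatim from the error text, auto-creating tables is generally discouraged in production, and re-initializing the metastore schema with Hive's schematool utility is the more common fix:

```xml
<!-- Hypothetical hive-site.xml fragment: lets DataNucleus create missing
     metastore tables (such as VERSION) on startup, per the error message
     above. Treat this as a diagnostic aid, not a permanent fix. -->
<property>
  <name>datanucleus.autoCreateTables</name>
  <value>true</value>
</property>
```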
						
					
Labels: Apache Hive, Apache Oozie
    
	
		
		
09-14-2018 11:28 AM
					
I have created the /user/history/done and /tmp/log directories. Finally, I checked the log file, and it showed that everything is fine:

3:11:28.555 PM INFO AggregatedLogDeletionService  aggregated log deletion finished.
3:11:57.677 PM INFO JobHistory  History Cleaner started
3:11:57.682 PM INFO JobHistory  History Cleaner complete
3:14:27.676 PM INFO JobHistory  Starting scan to move intermediate done files
3:14:32.823 PM ERROR JobHistoryServer  RECEIVED SIGNAL 15: SIGTERM
3:14:32.826 PM INFO MetricsSystemImpl  Stopping JobHistoryServer metrics system...
3:14:32.826 PM INFO MetricsSystemImpl  JobHistoryServer metrics system stopped.
3:14:32.827 PM INFO MetricsSystemImpl  JobHistoryServer metrics system shutdown complete.
3:14:32.827 PM INFO Server  Stopping server on 10033
3:14:32.827 PM INFO Server  Stopping IPC Server listener on 10033
3:14:32.827 PM INFO Server  Stopping IPC Server Responder
3:14:32.827 PM INFO Server  Stopping server on 10020
3:14:32.828 PM INFO Server  Stopping IPC Server listener on 10020
3:14:32.828 PM INFO Server  Stopping IPC Server Responder
3:14:32.832 PM INFO log  Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@chinni:19888
3:14:32.932 PM INFO JobHistory  Stopping JobHistory
3:14:32.932 PM INFO JobHistory  Stopping History Cleaner/Move To Done
3:14:32.934 PM ERROR AbstractDelegationTokenSecretManager  ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
3:14:32.934 PM INFO JobHistoryServer  SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobHistoryServer at chinni/127.0.0.1
************************************************************/
3:14:34.969 PM INFO JobHistoryServer  STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobHistoryServer
STARTUP_MSG: user = mapred
STARTUP_MSG: host = chinni/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0-cdh5.15.0
STARTUP_MSG: classpath = [long CDH-5.15.0 parcel classpath omitted]
STARTUP_MSG: build = http://github.com/cloudera/hadoop -r e3cb23a1cb2b89d074171b44e71f207c3d6ffa50 ; compiled by 'jenkins' on 2018-05-24T11:25Z
STARTUP_MSG: java = 1.7.0_67
************************************************************/
3:14:34.992 PM INFO JobHistoryServer  registered UNIX signal handlers for [TERM, HUP, INT]
3:14:35.831 PM INFO MetricsConfig  loaded properties from hadoop-metrics2.properties
3:14:35.880 PM INFO MetricsSystemImpl  Scheduled snapshot period at 10 second(s).
3:14:35.881 PM INFO MetricsSystemImpl  JobHistoryServer metrics system started
3:14:35.889 PM INFO JobHistory  JobHistory Init
3:14:36.369 PM INFO JobHistoryUtils  Default file system [hdfs://chinni:8020]
3:14:36.503 PM INFO HistoryFileManager  Initializing Existing Jobs...
3:14:36.510 PM INFO HistoryFileManager  Found 0 directories to load
3:14:36.510 PM INFO HistoryFileManager  Existing job initialization finished. 0.0% of cache is occupied.
3:14:36.512 PM INFO CachedHistoryStorage  CachedHistoryStorage Init
3:14:36.538 PM INFO Server  Starting Socket Reader #1 for port 10033
3:14:36.750 PM INFO HttpServer2  Jetty bound to port 19888
3:14:37.104 PM INFO log  Started HttpServer2$SelectChannelConnectorWithSafeStartup@chinni:19888
3:14:37.105 PM INFO WebApps  Web app /jobhistory started at 19888
3:14:37.380 PM INFO Server  Starting Socket Reader #1 for port 10020
3:14:37.388 PM INFO HistoryClientService  Instantiated HistoryClientService at chinni/127.0.0.1:10020
3:14:37.393 PM INFO RMProxy  Connecting to ResourceManager at chinni/127.0.0.1:8032
3:14:37.474 PM INFO AggregatedLogDeletionService  aggregated log deletion started.
3:14:37.653 PM INFO AggregatedLogDeletionService  aggregated log deletion finished.
3:15:06.674 PM INFO JobHistory  History Cleaner started
3:15:06.679 PM INFO JobHistory  History Cleaner complete
3:17:36.673 PM INFO JobHistory  Starting scan to move intermediate done files
[the "Starting scan to move intermediate done files" entry then repeats every three minutes from 3:17 PM through 11:52 PM, interrupted only by two JvmPauseMonitor warnings about JVM pauses of roughly 3.0 s and 2.4 s with no GCs detected]
Thanks a lot! It's great that your reply matches exactly what I had already done before I saw it.
		09-14-2018
		03:20 AM
Thanks @Harsh J, your answer really helped me. I'm fine with looking through large logs, and I found the root cause of the JobHistory Server failure. The log showed me the following:

2:57:45.615 PM  INFO   MetricsSystemImpl  Stopping JobHistoryServer metrics system...
2:57:45.615 PM  INFO   MetricsSystemImpl  JobHistoryServer metrics system stopped.
2:57:45.615 PM  INFO   MetricsSystemImpl  JobHistoryServer metrics system shutdown complete.
2:57:45.615 PM  FATAL  JobHistoryServer   Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://chinni:8020/user/history/done]
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:680)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:616)
	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:577)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:154)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:229)
	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:239)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/user":hdfs:supergroup:drwxrwxr-x

Then I checked the HDFS root directory and the YARN configuration to find out which system user the JobHistory Server runs as. There was no /tmp directory and no mapred user at all. So I created the mapred user and the /tmp and /tmp/log directories (taking the paths from the log data), and then granted the mapred user permission on the /tmp directory. After that, the restart command brought my JobHistory Server back up.

All of this happened because restoring a snapshot of the root directory had failed.

Thanks.
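For reference, a minimal sketch of the HDFS commands that repair this kind of permission failure. The /user/history path comes from the error above; the mapred:hadoop ownership and the 1777 mode are assumptions based on CDH defaults, so adjust them to your cluster:

```shell
# Run on a cluster node as a user that can sudo to the hdfs superuser.
# Create the JobHistory "done" directory the server failed to create:
sudo -u hdfs hdfs dfs -mkdir -p /user/history/done
# Hand ownership to the mapred user (mapred:hadoop is the usual CDH default):
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history
# Writable with the sticky bit set, matching how CDH sets up history dirs:
sudo -u hdfs hdfs dfs -chmod -R 1777 /user/history
```

After this, restarting the JobHistory Server from Cloudera Manager should let it create and use the done directory.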
		09-13-2018
		12:32 PM
YARN is not working.
I'm not able to run any MapReduce jobs, Hive queries, distcp, or benchmarking jobs.
   
   
 1) ResourceManager is down 
 2) NodeManager is down 
 3) JobHistory Server is down  
   
Restarting them does not bring them back to an active status.
Any help is appreciated.
   
 Thanks and regards  
 solomonchinni. 
   
   
Labels:
- Apache Hive
- Apache YARN
- MapReduce
		09-12-2018
		01:12 PM
I'm using Cloudera Quickstart VM 13.0 on my machine.
While trying to copy data within the cluster, I got a permission-denied message because the hdfs user owns the directories I was accessing.

But distcp cannot simply be run as the default hdfs user, because hdfs is a banned (blacklisted) user for MapReduce jobs; at the same time, a Cloudera install makes hdfs the superuser of the distributed file system.

I used ACLs to grant my user permissions on the directories in question and re-ran the same distcp command, but I still get permission denied.
Is there a better way to copy the data?

Thanks and regards
solomonchinni
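For what it's worth, here is a sketch of the ACL-based approach, assuming a regular user named cloudera and example paths /source/data and /target (the user name and both paths are hypothetical). Note that dfs.namenode.acls.enabled must be true for setfacl to work, and the ACLs must cover every directory distcp reads from and writes to:

```shell
# Grant the cloudera user read/execute on the whole source tree
# (run as the hdfs superuser, which owns the directories):
sudo -u hdfs hdfs dfs -setfacl -R -m user:cloudera:r-x /source/data
# Grant write access on the target, plus a default ACL so files
# created by the distcp job inherit the permission:
sudo -u hdfs hdfs dfs -setfacl -R -m user:cloudera:rwx /target
sudo -u hdfs hdfs dfs -setfacl -m default:user:cloudera:rwx /target
# Then run distcp as the regular user rather than as hdfs:
sudo -u cloudera hadoop distcp /source/data /target/data
```

If permission denied persists, check which directory the error names; a missing execute bit (or ACL) on any parent directory along the path is enough to block the job.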
Labels:
- Cloudera Manager
- HDFS
- MapReduce
- Quickstart VM