Posts: 128
Kudos Received: 15
Solutions: 8

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 3458 | 01-13-2015 09:09 AM |
|  | 5491 | 05-28-2014 09:28 AM |
|  | 2457 | 04-22-2014 01:24 PM |
|  | 2282 | 03-31-2014 09:07 AM |
|  | 69351 | 02-07-2014 08:40 AM |

02-18-2021 09:05 PM

Hi @pawski,

Here is a solution for comparing CDH cluster configurations managed by different Cloudera Managers:
https://community.cloudera.com/t5/Support-Questions/Compare-settings-in-different-clusters/m-p/280536/highlight/true#M208909

Thanks,
Salim
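
Not from the linked thread itself, but for a flavor of how such a comparison is often scripted: the Cloudera Manager API can export each cluster's template, and the two exports can be diffed. A minimal sketch, assuming CM API v12 or later; the hosts, credentials, and cluster names are placeholders:

```bash
# Export each cluster's template from its own Cloudera Manager, then diff.
curl -s -u admin:admin \
  "http://cm-host-a:7180/api/v12/clusters/ClusterA/export" > clusterA.json
curl -s -u admin:admin \
  "http://cm-host-b:7180/api/v12/clusters/ClusterB/export" > clusterB.json

# Pretty-print both exports so the diff lines up field by field
diff <(python -m json.tool clusterA.json) <(python -m json.tool clusterB.json)
```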

01-06-2021 10:04 AM

hdfs, yarn, hive, etc. are system users; they will not have any passwords by default, but you can su to them from root. If you really want to set passwords anyway, the `passwd hdfs` command will prompt you to set a new one, but I don't see a reason why anyone would want to do that for system users.
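
For illustration, run as root ('hdfs' below stands in for any of the service accounts):

```bash
# Switch to a passwordless system user (works from root with no password prompt)
su - hdfs

# If you insist on giving it a password anyway (not recommended for service accounts)
passwd hdfs
```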

10-24-2020 05:49 AM

I can connect with "beeline -u jdbc:hive2://":

```
[20:28 hadoop@Cavin-Y7000 hive]$ beeline -u jdbc:hive2://
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://
20/10/24 20:30:27 [main]: WARN conf.HiveConf: HiveConf of name hive.server2.connection.host does not exist
Hive Session ID = 4977083b-4d07-4ff0-930f-7afb9e214933
20/10/24 20:30:28 [main]: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
20/10/24 20:30:29 [main]: WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:30 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
20/10/24 20:30:31 [main]: WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.2 by Apache Hive
0: jdbc:hive2://>
```

but when I use "beeline -u jdbc:hive2://localhost:10000/default" I got this error:

```
[20:26 hadoop@Cavin-Y7000 hive]$ ./bin/beeline -u jdbc:hive2://localhost:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
20/10/24 20:28:38 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: Invalid status 16 (state=08S01,code=0)
Beeline version 3.1.2 by Apache Hive
```

This has been very confusing for me for several days. I also cannot just use "jdbc:hive2://" within Java code; it gives me the same error as using "beeline -u jdbc:hive2://localhost:10000/default" on the command line.
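
For what it's worth, the embedded URL ("jdbc:hive2://" with no host) starts a HiveServer2 instance inside the Beeline JVM itself, so its succeeding doesn't prove the standalone HiveServer2 on port 10000 is up or configured the same way. A couple of checks one might run (the hive-site.xml path is assumed from the log output above, and the transport-mode hint is a guess, not a confirmed diagnosis):

```bash
# Is anything actually listening on port 10000?
ss -tlnp | grep 10000

# Binary vs HTTP transport: a client/server mismatch here is one known
# cause of Thrift "Invalid status" errors (an assumption in this case)
grep -A1 "hive.server2.transport.mode" /usr/local/hive/conf/hive-site.xml
```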

02-27-2019 11:29 AM

That's odd that the VM is read-only... Are you making the change in CM, in the Flume logging safety valve?

-pd

01-12-2018 09:18 AM (2 Kudos)

You can use the PURGE option to delete the data files as well as the partition metadata, but it works only on INTERNAL/MANAGED tables:

```
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec PURGE;
```

External tables take a two-step process: alter table drop partition, then remove the files:

```
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec;
hadoop fs -rm -r <partition file path>
```
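
As a concrete sketch of both cases (the table names, partition column, and path below are made up for illustration; adjust the beeline URL to your environment):

```bash
# Managed table: PURGE drops the partition metadata AND its data files in one step
beeline -u jdbc:hive2://localhost:10000 -e \
  "ALTER TABLE sales DROP IF EXISTS PARTITION (dt='2018-01-01') PURGE;"

# External table: drop the partition metadata, then remove the files yourself
beeline -u jdbc:hive2://localhost:10000 -e \
  "ALTER TABLE sales_ext DROP IF EXISTS PARTITION (dt='2018-01-01');"
hadoop fs -rm -r /data/sales_ext/dt=2018-01-01
```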

09-09-2017 12:28 AM

How do I point the NameNode to a particular txid?

07-04-2017 12:54 PM (1 Kudo)

You mentioned that you still need to fix the 'Under-Replicated Blocks'. This is what I found with Google to fix it:

```
# become the HDFS superuser
$ su - <$hdfs_user>

# collect the paths of all files with under-replicated blocks
$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files

# reset each file's replication factor to 3
$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done
```
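
A small variation on the same loop (same assumptions as above): hadoop fs -setrep accepts a -w flag that blocks until the new replicas are actually in place, which makes the fix verifiable but can take a long time on large files:

```bash
# Re-replicate each affected file and wait for replication to complete
while read -r hdfsfile; do
  echo "Fixing $hdfsfile:"
  hadoop fs -setrep -w 3 "$hdfsfile"
done < /tmp/under_replicated_files
```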

01-21-2016 08:16 AM

Change the download path when you go through the initial setup of CDH, where it asks you to specify the parcel repository.

10-28-2015 11:03 AM

Sorry, the error message looks like this:

```
Driver Version: V2.5.12.1005

Running connectivity tests...

Attempting connection
Failed to establish connection
SQLSTATE: HY000[Cloudera][HiveODBC] (34) Error from Hive: ETIMEDOUT.

TESTS COMPLETED WITH ERROR
```