Member since 09-02-2016

523 Posts · 89 Kudos Received · 42 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2724 | 08-28-2018 02:00 AM |
| | 2696 | 07-31-2018 06:55 AM |
| | 5685 | 07-26-2018 03:02 AM |
| | 2981 | 07-19-2018 02:30 AM |
| | 6464 | 05-21-2018 03:42 AM |
07-06-2018 02:25 PM · 1 Kudo

During install, if SELinux is enabled, the Hadoop directories created under /var/lib (hbase, hive, impala, sqoop, zookeeper, etc.) apparently end up with permissions set to 000 instead of 755, and owned by root instead of the service accounts. This prevents those roles from starting. I ended up having to chmod 755 and chown all 15 or so of these directories, after which the install completed successfully.
07-05-2018 08:34 PM · 1 Kudo

@Rod No, it is unsupported (as of writing) in both CDH5 and CDH6. See https://www.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_600_unsupported_features.html#spark: "Spark SQL CLI is not supported".
07-03-2018 07:16 AM

The problem was probably caused by a table with over 1 million small files in HDFS. When the INVALIDATE METADATA command was run in Impala, the metadata refresh took over 15 minutes. After compacting the table, the problem disappeared.
06-20-2018 01:48 PM

@Fawze You said that any role can be moved. How is that done for roles like the HDFS Balancer or the JobHistory Server? As far as I can see, these roles can't be added or moved via Cloudera Manager.
06-14-2018 08:25 AM

Could you perform a hard stop and start of the cloudera-scm-agent and let me know whether the change is reflected or not?
06-14-2018 07:02 AM

I am facing the same problem. The MySQL server is in a remote location; they have created a user there, but I cannot connect with that user from the Hadoop cluster. How can I solve this?
06-11-2018 09:54 AM

@Abu I think you can only download what Hue can show. So if the download_row_limit attribute in hue.ini is 100000, the result of your query will be truncated to 100000 rows, and that is the number of lines you can download. You can change the attribute in hue.ini or via a Configuration Snippet in Cloudera Manager.
06-09-2018 09:45 PM

It is a recommendation based on the fact that active and standby are merely states of the NameNode, not different daemons. The NameNode doesn't check that its own hardware matches the other NameNodes', if that's what you are worried about.
05-21-2018 03:42 AM · 1 Kudo

@sim6 I hope you have more than 3 DataNodes. Generally, two types of "missing data" issues can occur, for many reasons:

a. ReplicaNotFoundException
b. BlockMissingException

If your issue is a BlockMissingException and you have backup data in your DR environment, then you are fine; otherwise it might be a problem. For a ReplicaNotFoundException, please make sure all your DataNodes are healthy and in a commissioned state. In fact, the NameNode is supposed to handle this automatically whenever that data is accessed; if it doesn't, an HDFS rebalance or a NameNode restart may fix the issue, but you don't need to try those options unless a user reports a problem with the particular data. In your case no one has reported it yet and you found it yourself, so you can ignore it for now.
05-15-2018 10:53 AM

```
[cloudera@quickstart ~]$ sqoop import --connect jdbc:mysql://192.168.186.128:3306/myfirsttutorial --username cloudera --password cloudera --table mytable --target-dir=/user/cloudera/myfirst-m
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/05/15 10:47:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
18/05/15 10:47:00 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/05/15 10:47:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/05/15 10:47:00 INFO tool.CodeGenTool: Beginning code generation
18/05/15 10:47:01 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Access denied for user 'cloudera'@'quickstart.cloudera' (using password: YES)
java.sql.SQLException: Access denied for user 'cloudera'@'quickstart.cloudera' (using password: YES)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:996)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:870)
	at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:4332)
	at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1258)
	at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2234)
	at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2265)
	at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2064)
	at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:790)
	at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
	at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:395)
	at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:325)
	at java.sql.DriverManager.getConnection(DriverManager.java:571)
	at java.sql.DriverManager.getConnection(DriverManager.java:215)
	at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:904)
	at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
	at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
	at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:289)
	at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:260)
	at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:246)
	at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:327)
	at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1858)
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1657)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:106)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:494)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
18/05/15 10:47:01 ERROR tool.ImportTool: Import failed: java.io.IOException: No columns to generate for ClassWriter
	at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1663)
	at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:106)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:494)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
```

I have just started with Cloudera and I am stuck here. Can anybody help?
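The root cause in the log above is the MySQL error, not Sqoop itself: MySQL denies `'cloudera'@'quickstart.cloudera'`, meaning no account/grant matches that user for the host MySQL sees the connection coming from. A common fix (hedged: user, host, and database names are taken from the log, and the `GRANT ... IDENTIFIED BY` form is pre-MySQL-8 syntax) is to add a grant for that exact user@host pair on the MySQL server. The sketch below only prints the SQL a DBA would run; it does not connect anywhere.

```shell
#!/usr/bin/env bash
# Print the grant statements that would resolve
# "Access denied for user 'cloudera'@'quickstart.cloudera'".
set -euo pipefail

DB=myfirsttutorial            # database name from the jdbc URL in the log
USER=cloudera                 # user from the error message
CLIENT_HOST=quickstart.cloudera   # host MySQL reported in the error

# Pre-MySQL-8 syntax; replace <password> with the real password.
SQL_TEXT="GRANT ALL PRIVILEGES ON $DB.* TO '$USER'@'$CLIENT_HOST' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;"

printf '%s\n' "$SQL_TEXT"
```

Run the printed statements on the MySQL server (e.g. via the `mysql` client as an admin user), then retry the `sqoop import`.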