Member since 10-03-2020

236 Posts | 15 Kudos Received | 18 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1715 | 11-11-2024 09:31 AM |
| | 2084 | 08-28-2023 02:13 AM |
| | 2547 | 12-15-2021 05:26 PM |
| | 2315 | 10-22-2021 10:09 AM |
| | 6166 | 10-20-2021 08:44 AM |
09-11-2021 10:00 PM | 1 Kudo
Hi @DanHosier,

Here is a possible solution to bind the NameNode HTTP server to localhost. Add the following property to the service-side advanced hdfs-site.xml and restart HDFS.

HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml:

<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>127.0.0.1</value>
</property>

The property is then added into /var/run/cloudera-scm-agent/process/<Latest process of NN>/hdfs-site.xml:

# grep -C2 "dfs.namenode.http-bind-host" hdfs-site.xml
</property>
<property>
<name>dfs.namenode.http-bind-host</name>
<value>127.0.0.1</value>
</property>

Then test with curl:

# curl `hostname -f`:9870
curl: (7) Failed connect to xxxx.xxxx.xxxx.com:9870; Connection refused

# curl localhost:9870
<!-- ...Apache license header omitted... -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="0;url=dfshealth.html" />
<title>Hadoop Administration</title>
</head>
</html>

Now the web UI is served only on the NameNode's localhost. However, you will see this alert in CM, because the Service Monitor can no longer reach the NameNode web UI:

NameNode summary: xxxx.xxxx.xxxx.com (Availability: Unknown, Health: Bad). This health test is bad because the Service Monitor did not find an active NameNode.

So this solution has a side effect on the Service Monitor, but HDFS itself runs fine.

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
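To confirm the new binding took effect, you could also check which address the HTTP port is listening on (a quick sketch; port 9870 matches the curl test above, and ss comes from the iproute2 package):

```
# Expect the listener on 127.0.0.1:9870 (loopback) instead of 0.0.0.0:9870
ss -tlnp | grep 9870
```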
						
					
09-10-2021 10:48 PM
Hi @Ben621,

This community post should answer your question:
https://community.cloudera.com/t5/Support-Questions/How-are-the-primary-keys-in-Phoenix-are-converted-as-row/td-p/147232

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
						
					
09-10-2021 10:30 PM
Hi @clouderaskme,

Creating two folders with the same name in the same directory is not allowed.

Test:

# sudo -u hdfs hdfs dfs -mkdir /folder1
# sudo -u hdfs hdfs dfs -mkdir /folder1/subfolder1
# sudo -u hdfs hdfs dfs -mkdir /folder1/subfolder1
mkdir: `/folder1/subfolder1': File exists

So if you see two subfolders under folder1 with the same name, one of the names may contain special characters.

Can you log into the terminal, run the following hdfs command to check, and show us the output?

hdfs dfs -ls /folder1 | cat -A

Regards,
Will
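For illustration (a hypothetical reproduction; a trailing space is the kind of invisible character that causes this), cat -A prints a $ at each line end, so a trailing space in the name becomes visible:

```
# Create a second "subfolder1" whose name ends in a space (hypothetical)
sudo -u hdfs hdfs dfs -mkdir "/folder1/subfolder1 "
sudo -u hdfs hdfs dfs -ls /folder1 | cat -A
# The two entries would then show as:
#   .../folder1/subfolder1$
#   .../folder1/subfolder1 $
```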
						
					
09-10-2021 10:04 PM | 1 Kudo
Hi @Sudheend,

Pseudo-distributed mode means each of the separate processes runs on the same server, rather than on multiple servers in a cluster.

In CDH 6/7, "start-hbase.sh" no longer exists, and "service hbase-master start/stop" no longer works either. Instead, Cloudera Manager uses multiple scripts to do this. You can see how an HBase role is started in CM by expanding the steps and checking stderr.log of the running commands.

So the best way is to use Cloudera Manager to install the HBase service; you can choose the same host for the Master role and the RegionServer role. Then you can stop/start the HBase roles via CM > HBase > Instances > select role > Actions > Stop/Start.

Then use jps or ps -ef to check the running processes. For example:

# jps
7251 DataNode
8019 NodeManager
7253 NameNode
15238 HMaster
11384 HRegionServer
16105 Jps
7085 QuorumPeerMain

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
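If jps is not on the PATH (it ships with the JDK, not the JRE), plain ps can confirm the same thing; a minimal sketch:

```
# Match the HBase daemon main classes in the process table
ps -ef | grep -E 'HMaster|HRegionServer' | grep -v grep
```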
						
					
09-10-2021 07:07 AM
Hi @ighack,

If you mean the current RS heap is 50 or 80 megabytes, that is usually not enough. A good range is 16 GB to 31 GB for most cases. If you really don't have enough resources on the RS nodes, at least keep the RS heap at the 4 GB default; if you still see many long GC pauses, you will have to increase it.

Refer to the link below to install Phoenix and validate the installation:
https://docs.cloudera.com/documentation/enterprise/latest/topics/phoenix_installation.html#concept_ofv_k4n_c3b

If you installed following the steps above, then on any of the CDH nodes find the JDBC jar:

find / -name "phoenix-*client.jar"

and follow this guide:
https://docs.cloudera.com/runtime/7.2.10/phoenix-access-data/topics/phoenix-orchestrating-sql.html

Your JDBC URL syntax should look like:

jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path

Regards,
Will
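As a filled-in illustration (the hostnames, port, and znode below are hypothetical; take the real quorum from your hbase-site.xml and zookeeper.znode.parent):

```
# Hypothetical quorum of three ZooKeeper hosts, default client port, /hbase znode
jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase
```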
						
					
09-09-2021 06:58 AM
Hi @ighack,

Please search for the keyword "JvmPauseMonitor" in that RegionServer log to see if there are GC pauses, and determine whether each pause is GC or non-GC.

- For a GC pause, the general step is to increase the heap size. Please go through the setting "Java Heap Size of HBase RegionServer in Bytes" in CM > HBase > Configuration; if the current setting is small, please increase it. Please check this KB for the heap size tuning concepts:
https://community.cloudera.com/t5/Community-Articles/Tuning-Hbase-for-optimized-performance-Part-1/ta-p/248137

- For a non-GC pause, you will need to check whether the kernel is blocking the process due to:
  - hardware issues
  - blocking I/O
  - page allocation failures under heavy memory utilization
  - etc.

  Look into the kernel messages for clues:
  - dmesg
  - /var/log/messages

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
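A quick way to tell the two cases apart (the log path below is an assumption; use your RegionServer's actual log file):

```
# Pull pause reports plus the line that follows each (log path is hypothetical)
grep -A1 "JvmPauseMonitor" /var/log/hbase/hbase-REGIONSERVER-host1.log
# A GC pause is followed by a GC-pool line, e.g.:
#   Detected pause in JVM or host machine (eg GC): pause of approximately 14000ms
#   GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=14200ms
# A non-GC pause instead reports "No GCs detected".
```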
						
					
09-05-2021 09:17 PM
Hi @shean,

It looks like the HDFS path name is wrong; you should use a double "/" after "hdfs:":

hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5

Please check your command or configurations for this wrong path setting and correct it.

Regards,
Will

If the answer helps, please accept as solution and click thumbs up.
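To verify the corrected URI from a node with an HDFS client (a sketch; the listing succeeds only if the path exists):

```
# The double slash supplies the authority (namenode host:port) in the URI
hdfs dfs -ls hdfs://localnode2:8020/user/hue/oozie/workspaces/hue-oozie-1630559728.5
```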
						
					
09-04-2021 04:49 AM
Hi @Sainath90,

Do you mean to install the HDP sandbox (Hortonworks Data Platform for Hadoop) on your Mac? Please refer to the tutorial below:
https://www.cloudera.com/tutorials/getting-started-with-hdp-sandbox.html

It has VirtualBox / VMware / Docker versions to choose from. I would recommend the Docker version, as it is easy to deploy / remove / stop / start. I have installed the Docker version successfully on my Mac with 16 GB RAM; mine is an Intel core, not M1, but I believe Docker will work on M1 as well. Please note the prerequisite: a minimum of 10 GB RAM dedicated to the virtual machine.

Below is the Docker version tutorial:
https://www.cloudera.com/tutorials/sandbox-deployment-and-install-guide/3.html

Thanks,
Will

If the answer helps, please accept as solution and click thumbs up.
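Once the Docker sandbox is deployed, routine stop/start is plain Docker; a sketch, assuming the container names my install ended up with (verify yours with docker ps -a):

```
docker ps -a                             # list the sandbox containers
docker stop sandbox-hdp sandbox-proxy    # shut the sandbox down
docker start sandbox-hdp sandbox-proxy   # bring it back up
```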
						
					
09-03-2021 12:36 AM | 3 Kudos
Hello @AnuradhaV,

Thanks for raising your question in the community. We usually don't suggest putting files with special characters in the name into HDFS. If you have to do this, you should replace the special characters with their URL encoding. For example:

| Character | Encoded as |
|---|---|
| # | %23 |
| ? | %3F |
| = | %3D |
| ; | %3B |

A full list of URL-encoded characters:
https://www.degraeve.com/reference/urlencoding.php

To help you test it out, I created these files locally and then put them into HDFS:

# ls
900-0314-Slide#2.vsi  abc.html?C=S;O=A
# hdfs dfs -put 900-0314-Slide%232.vsi /tmp/
# hdfs dfs -put abc.html%3FC%3DS%3BO%3DA /tmp/
# hdfs dfs -ls /tmp
Found 2 items
-rw-r--r--   3 hdfs   supergroup          0 2021-09-03 07:07 /tmp/900-0314-Slide#2.vsi
-rw-r--r--   3 hdfs   supergroup          0 2021-09-03 07:15 /tmp/abc.html?C=S;O=A

Thanks,
Will

If the answer helps, please accept as solution and click thumbs up.
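If you would rather not encode names by hand, the shell can generate the encoded form for you (a sketch assuming Python 3 is available; note quote() leaves '/' unencoded by default):

```
# URL-encode a filename before passing it to hdfs dfs -put
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' '900-0314-Slide#2.vsi'
# prints: 900-0314-Slide%232.vsi
```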
						
					
08-31-2021 10:45 PM
Hello @npr20202,

Could you specify which log you see this error in, and which jobs are failing because of it? What are the CM and CDH versions?

Please ensure that KMS, Key Trustee Server, and Key HSM are in good health. Please also check the KTS log; if the error appears there, please share the full error stack.

Thanks,
Will
						
					