Member since 03-01-2016

104 Posts
97 Kudos Received
3 Solutions

My Accepted Solutions

| Title | Views | Posted | 
|---|---|---|
| | 2202 | 06-03-2018 09:22 PM |
| | 33744 | 05-21-2018 10:31 PM |
| | 2897 | 10-19-2016 07:13 AM |
			
    
	
		
		
12-22-2016 03:28 PM

Consider increasing network capacity to overcome the challenges caused by non-local blocks.

- Create a host configuration group of DataNodes dedicated to HBase, disable the HDFS balancer on that group, and allow only the HBase balancer. Follow this URL Host_Config_Groups to create host config groups.

A few temporary workarounds can also be applied if the problem is severe and needs immediate attention:

- Disable the HDFS balancer permanently on the cluster and run it manually on an as-needed basis. (Please open a support case and discuss the situation before implementing this workaround.)
- If the performance issue needs to be addressed after the HDFS balancer has run, a major compaction can be initiated manually; for performance reasons, run major compaction during off-peak hours such as weekends (a minimal sketch of these commands appears at the end of this answer). This article Compaction_Best_Practices is a recommended read here.
- Schedule major compaction after the scheduled balancer run, rather than the other way around.
- Although HDFS has introduced the "favored nodes" feature, the HBase APIs are not yet equipped to choose specific nodes while writing data.

Please note that these are expert-level configurations and procedures; if you are unsure of their implications, it is always recommended to open a support case with us.

Refer to the following Apache JIRAs to track the progress of the region block pinning implementation:

- https://issues.apache.org/jira/browse/HBASE-13021
- https://issues.apache.org/jira/browse/HDFS-6133
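
A minimal sketch of the "run the balancer manually, then compact" workaround; the threshold value and table name below are placeholders, not taken from this post:

```
# Run the HDFS balancer on demand (off-peak) instead of on a schedule;
# the 10% threshold here is only an example value.
hdfs balancer -threshold 10

# Afterwards, restore HBase data locality by major-compacting the affected table
# ('my_table' is a hypothetical table name).
echo "major_compact 'my_table'" | hbase shell
```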
						
					
			
    
	
		
		
12-22-2016 11:53 AM
4 Kudos

/hbase, /hbase-unsecure, /hbase-secure - The root znode for HBase. Older versions used just /hbase, while newer ones distinguish whether the cluster is secured or unsecured.

/hbase-unsecure/hbaseid - UUID of the cluster. Also stored in the /hbase/hbase.id file in HDFS.

/hbase-unsecure/master - Contains the hostname of the active master server. Written during master server startup.

/hbase-unsecure/backup-masters - All standby master servers are registered here.

/hbase-unsecure/meta-region-server - Registers the hostname of the region server which holds the meta table.

/hbase-unsecure/rs - Acts as the root node under which all region servers list themselves when they start. It is used to track server failures. Each znode inside is ephemeral, and its name is the server name of the region server.

/hbase-unsecure/splitWAL - The parent znode for all log-splitting-related coordination.

/hbase-unsecure/balancer - Status of the load balancer (enabled/disabled) on the cluster.

/hbase-unsecure/region-in-transition - List of regions in transition.

/hbase-unsecure/table-lock - Read/write locks on tables (not on the regions inside) during activities such as create/delete/alter table, column add/delete/modification, and so on.

/hbase-unsecure/acl - Used for synchronizing the changes made to the _acl_ table by the grant/revoke commands. Each table will have a sub-znode (/hbase/acl/tableName) containing the ACLs of the table.
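
A quick way to inspect these znodes from the command line (a sketch; the ZooKeeper hostname below is a placeholder, and the zkCli.sh location may differ on your installation):

```
# Connect to a ZooKeeper server in the quorum
zkCli.sh -server <zk-host>:2181

# Inside the ZooKeeper shell, list and read the HBase znodes
ls /hbase-unsecure
get /hbase-unsecure/master
```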
						
					
    
	
		
		
10-21-2016 02:01 PM

SYMPTOM: Immediately after exporting HDFS directories via NFS, some of the directories start throwing permission-denied errors to authorized users added in Ranger policies.

ROOT CAUSE: NFS honors neither Ranger policies nor HDFS ACLs. If a directory has HDFS permission bits such as 000 and access is controlled entirely via Ranger, that directory is not exported at all. Messages such as the following appear in the NFS gateway logs:

2016-07-27 17:35:19,071 INFO mount.RpcProgramMountd (RpcProgramMountd.java:mnt(127)) - Path /test1 is not shared.
2016-07-27 17:35:37,297 INFO mount.RpcProgramMountd (RpcProgramMountd.java:mnt(127)) - Path /test2 is not shared.
2016-07-27 17:39:34,581 INFO mount.RpcProgramMountd (RpcProgramMountd.java:mnt(144)) - Giving handle (fileId:12345) to client for export /

Even if the directory does get exported because some permission bits are set, the effective permissions come only from HDFS, not from the Ranger policies.
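
A hedged illustration of the behavior described above, using the /test1 path from the log; granting POSIX bits alongside the Ranger policy is one possible workaround, not a recommendation from the original post:

```
# A directory controlled only through Ranger typically shows no POSIX bits (d---------)
hdfs dfs -ls -d /test1

# Granting POSIX permission bits in addition to the Ranger policy lets the NFS export work
hdfs dfs -chmod 750 /test1
```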
						
					
    
	
		
		
10-21-2016 02:01 PM

SYMPTOMS: Errors such as "KeeperErrorCode = NoAuth for /config/topics"

ROOT CAUSE: Errors such as the above are reported when an ordinary user tries to create or delete a topic, because only the process owner of the Kafka service (such as root) can write to the ZooKeeper znodes, i.e. /config/topics. Ranger policies are not enforced when a non-privileged user creates a topic because the kafka-topics.sh script talks directly to ZooKeeper to create the topic: it adds entries into the ZooKeeper nodes, and the watchers on the broker side monitor them and create the topics accordingly. Because this process goes through ZooKeeper, authorization cannot be done through the Ranger plugin.

NEXT STEPS: If you want to allow users to create topics, there is a script called kafka-acls.sh which can allow or deny users on topics, among many other options (a hedged usage sketch follows below). The details are elaborated in the document mentioned below:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_secure-kafka-ambari/content/ch_secure-kafka-auth-cli.html
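
A minimal sketch of kafka-acls.sh usage; the ZooKeeper host, principal, and topic name are placeholders, and the exact operations to grant should be taken from the document linked above:

```
# Allow the (hypothetical) user alice to perform all operations on a (hypothetical) topic
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<zk-host>:2181 \
  --add --allow-principal User:alice \
  --operation All --topic testtopic
```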
						
					
    
	
		
		
10-20-2016 03:30 PM
1 Kudo

Linux ACLs are implemented such that default ACLs set on a parent directory are automatically inherited by child directories, and the umask has no influence on this behavior. HDFS ACLs take a slightly different approach: they also take into account the umask set in hdfs-site.xml via the parameter "fs.permissions.umask-mode" and enforce ACLs on child folders based on both, with the umask taking precedence. Let's try to reproduce this case:

[gaurav@test ~]$ hdfs dfs -mkdir /tmp/acltest 
[gaurav@test ~]$ hdfs dfs -setfacl -m default:mask::rwx /tmp/acltest 
[gaurav@test ~]$ hdfs dfs -setfacl -m mask::rwx /tmp/acltest 
[gaurav@test ~]$ hdfs dfs -setfacl -m default:user:adam:rwx /tmp/acltest 
[gaurav@test ~]$ hdfs dfs -setfacl -m user:adam:rwx /tmp/acltest 

Let's see what ACLs are applied:

[gaurav@test ~]$ hdfs dfs -getfacl /tmp/acltest 
# file: /tmp/acltest 
# owner: gaurav
# group: hdfs 
user::rwx 
user:adam:rwx 
group::r-x 
mask::rwx 
other::r-x 
default:user::rwx 
default:user:adam:rwx 
default:group::r-x 
default:mask::rwx 
default:other::r-x 

Let's create a child directory now and see the inherited ACLs:

[gaurav@test ~]$ hdfs dfs -mkdir /tmp/acltest/subdir1 
[gaurav@test ~]$ hdfs dfs -getfacl /tmp/acltest/subdir1 
# file: /tmp/acltest/subdir1 
# owner: gaurav
# group: hdfs 
user::rwx 
user:adam:rwx #effective:r-x 
group::r-x 
mask::r-x 
other::r-x 
default:user::rwx 
default:user:adam:rwx 
default:group::r-x 
default:mask::rwx 

In our example, the umask was set to 022, and hence the effective ACL for user adam on the child directory turned out to be r-x.

REFERENCE: https://issues.apache.org/jira/browse/HDFS-6962
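
To double-check the umask that produced this behavior, the configured value can be read back from the client configuration (a quick verification step, not part of the original reproduction):

```
# Prints the umask applied by HDFS (022 in this example),
# which is what reduces the inherited mask from rwx to r-x.
hdfs getconf -confKey fs.permissions.umask-mode
```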
						
					
    
	
		
		
10-19-2016 07:21 AM

Please paste the output of tail -100f /namenode/log/ when you restart the NameNode.
						
					
    
	
		
		
10-19-2016 07:13 AM
1 Kudo

If you would like to use the maximum of the cluster capacity when it is available, you need to set the user-limit-factor (ULF) to 2, 3, or 4, depending on your queue capacity. For example, if your queue capacity is 25% of the total cluster capacity, you can set the ULF to at most 4, which means a single user can utilize 400% of the queue's capacity, i.e. the full cluster.

Condition: the queue's maximum capacity must be greater than its capacity (say, 50% capacity and 100% maximum capacity) for the parameter above to take effect. A sketch of the relevant properties follows below.
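
A minimal sketch of the properties involved, assuming a hypothetical queue named "dev" with 25% capacity (the queue name and ResourceManager host are placeholders):

```
# capacity-scheduler.xml (or Ambari > YARN > Scheduler), hypothetical "dev" queue:
#   yarn.scheduler.capacity.root.dev.capacity          = 25
#   yarn.scheduler.capacity.root.dev.maximum-capacity  = 100
#   yarn.scheduler.capacity.root.dev.user-limit-factor = 4

# Verify the effective values from the ResourceManager scheduler REST API
curl -s http://<rm-host>:8088/ws/v1/cluster/scheduler
```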
						
					
    
	
		
		
10-18-2016 06:11 AM

Please check that there are no whitespace characters before or after the link. Also, please paste the link here.
						
					
    
	
		
		
10-18-2016 06:02 AM

@ashnee please check hdfs-site.xml and confirm that you have a valid entry for dfs.namenode.https-address.

===
configuration parameter 'dfs.namenode.https-address' was not found in configurations dictionary!
===
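
A quick way to check the value from the command line (the expected host:port depends on your cluster):

```
# Prints the configured HTTPS address of the NameNode, if it is set
hdfs getconf -confKey dfs.namenode.https-address
```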
						
					
    
	
		
		
06-21-2016 03:14 PM

What this widget is showing comes from the "Memory Node" configuration, which you have set to 7 GB. That is the total usable memory that can be allocated to YARN containers. You can increase it up to 11 GB. I hope I understood your question correctly.
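
For reference, my assumption is that this widget reflects yarn.nodemanager.resource.memory-mb; a quick way to check its current value on a node (the config path below is the usual HDP location and may differ on your cluster):

```
# Show the per-node memory available to YARN containers (assumed to back the widget)
grep -A1 "yarn.nodemanager.resource.memory-mb" /etc/hadoop/conf/yarn-site.xml
```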
						
					