Member since 09-17-2015

70 Posts · 79 Kudos Received · 20 Solutions

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 2897 | 02-27-2018 08:03 AM |
| | 2556 | 02-27-2018 08:00 AM |
| | 3472 | 10-09-2016 07:59 PM |
| | 1305 | 10-03-2016 07:27 AM |
| | 1291 | 06-17-2016 03:30 PM |
			
    
	
		
		
04-24-2017 07:33 AM

@Sami Ahmad I tried your config, and the ResourceManager's logs show the error "Illegal capacity of -1.0 for queue root.Support". Support is defined in root.queues, but its capacity is undefined. If you fix that (add yarn.scheduler.capacity.root.Support.capacity), the next problem will be that the sum of the capacities of any queue's child queues must be 100%. In your case, root.Marketing + root.Engineering = 200%. Try the following:

yarn.scheduler.capacity.root.Engineering.capacity=40
yarn.scheduler.capacity.root.Marketing.capacity=40
yarn.scheduler.capacity.root.Support.capacity=20

or any other three numbers that add up to 100. If you fix this too, your config will be valid.

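For reference, here is a minimal capacity-scheduler.xml sketch with the three queues declared together (the queue names come from your setup; the property names are standard CapacityScheduler configuration):

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>Engineering,Marketing,Support</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Engineering.capacity</name>
  <value>40</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Marketing.capacity</name>
  <value>40</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.Support.capacity</name>
  <value>20</value>
</property>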
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-20-2016 03:20 PM · 7 Kudos
		
					
This article will go over the concepts of security in an HBase cluster. More specifically, we will concentrate on ACL-based security and how to apply it at different levels of granularity in an HBase model. From an overall security perspective, an access control list, or ACL, is a list of permissions associated with an object. ACLs focus on the access-rules pattern.

ACL logic

HBase access control lists are granted at different levels of data abstraction and cover different types of operations.

HBase data layout

Before we go further, let us clear up the hierarchical elements that compose HBase data storage:

- CELL: All values written to HBase are stored in what is known as a CELL (a cell can also be referred to as a KeyValue). Cells are identified by a multidimensional key {row, column family, qualifier, timestamp}. For example: CELL => Rowkey1,CF1,Q11,TS1
- COLUMN FAMILY: A column family groups together arbitrary cells.
- TABLE: All cells belong to a column family and are organized into a table.
- NAMESPACE: Tables in turn belong to namespaces. This can be thought of as database-to-table logic. With this in mind, a table's fully qualified name is Table => Namespace:Table (the default namespace can be omitted).

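As a quick illustration of this layout in the HBase shell (the namespace, table, and value names below are invented for the example):

create_namespace 'ns1'
create 'ns1:t1', 'cf1'                     # fully qualified name: namespace:table
put 'ns1:t1', 'Rowkey1', 'cf1:Q11', 'v1'   # writes one cell
scan 'ns1:t1'                              # shows {row, column=cf1:Q11, timestamp, value}
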
HBase scopes

Permissions are evaluated starting at the widest scope and working toward the narrowest scope:

- Global
- Namespace
- Table
- Column Family (CF)
- Column Qualifier (Q)
- Cell

For example, a permission granted at the table level dominates grants made at the column family level.

Permissions

HBase can give granular access rights depending on each scope. Permissions are zero or more letters from the set RWXCA:

- Superuser: a special user that has unlimited access
- Read (R): read right on the given scope
- Write (W): write right on the given scope
- Execute (X): coprocessor execution on the given scope
- Create (C): can create and delete tables on the given scope
- Admin (A): right to perform cluster admin operations, for example granting rights

Combining access rights and scopes creates a complete matrix of access patterns and roles. In order to avoid complex, conflicting rules, it can often be useful to build access patterns up from roles and responsibilities, as in the table below.
   
| Role | Responsibilities |
|---|---|
| Superuser | Usually this role should be reserved solely for the HBase user. |
| Admin | (A) Operational role: performs cluster-wide operations like balancing and assigning regions. (C) DBA-type role: creates and drops tables and namespaces. |
| Namespace Admin | (A) Manages a specific namespace from an operations perspective: can take snapshots, do splits, etc. (C) From a DBA perspective: can create tables and give access. |
| Table Admin | (A) Operational role: can manage splits, compactions, etc. (C) Can create snapshots, restore a table, etc. |
| Power User | (RWX) Can use the table by writing or reading data, and possibly use coprocessors. |
| Consumer | (R) Can only read and consume data. |
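To make this concrete, the role table above might translate into grants like the following (the user and namespace names are invented for the example; the syntax is the HBase shell grant command used later in this article):

grant 'ops_admin', 'A'                     # Admin: cluster-wide operations
grant 'ns_owner', 'CA', '@sales'           # Namespace Admin on namespace "sales"
grant 'app_user', 'RWX', 'sales:orders'    # Power User on one table
grant 'analyst', 'R', 'sales:orders'       # Consumer: read-only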
  
Some actions need a mix of these permissions to be performed:

- CheckAndPut / CheckAndDelete: these actions need RW permissions
- Increment / Append: only require W permissions

A complete list of the ACL matrix can be found here: http://hbase.apache.org/book.html#appendix_acl_matrix

Setting up

In order to set up HBase ACLs, you will need to modify hbase-site.xml with the following properties:

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.security.exec.permission.checks</name>
  <value>true</value>
</property>

In Ambari this is much easier: just enable security and Ambari will automatically set all these configurations for you.

Applying ACLs

Now that we have restarted our HBase cluster and set up the ACL feature, we can start setting up rules. For simplicity we will use two users: hbase and testuser. hbase is the superuser for our cluster and will let us set the rights accordingly.

Namespace

As the hbase user, we create an 'acl' namespace:

hbase(main):001:10> create_namespace 'acl'
0 row(s) in 0.3180 seconds

As testuser, we try to create a table in this new namespace:

hbase(main):001:0> create 'atest','cf'
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=testuser, scope=default, params=[namespace=default,table=default:atest,family=cf], action=CREATE)

We are not allowed to create a table in this namespace. The superuser hbase will give the rights to testuser:

hbase(main):001:10> grant 'testuser','C','@acl'
0 row(s) in 0.3360 seconds

We can now run the previous command as testuser:

hbase(main):002:0> create 'atest','cf'
0 row(s) in 2.3360 seconds

We will now open this table to another user, testuser2:

hbase(main):002:0> grant 'testuser2','R','@acl'
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=testuser, scope=acl, params=[namespace=acl], action=ADMIN)

Notice we can't grant rights to other users, as we are missing Admin permissions. We can fix this with our hbase superuser:

hbase(main):002:20> grant 'testuser','A','@acl'
0 row(s) in 0.460 seconds

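To double-check what ended up granted, the shell's user_permission command lists the current ACLs (its output and namespace support vary a bit across HBase versions, so treat this as a sketch):

hbase(main):003:0> user_permission '@acl'     # permissions granted on the acl namespace
hbase(main):004:0> user_permission 'atest'    # permissions granted on the atest table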
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
03-27-2019 12:46 AM
		
					
							 The link is invalid. JOAO's link is valid now. 
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
06-09-2016 12:17 PM · 4 Kudos
		
					
I'm getting the same error in the HDP 2.4 sandbox: if I use %hive on Zeppelin, aggregate functions are not working.

%hive
select count(*) from health_table

java.lang.NullPointerException 
at org.apache.zeppelin.hive.HiveInterpreter.getConnection(HiveInterpreter.java:184)
at org.apache.zeppelin.hive.HiveInterpreter.getStatement(HiveInterpreter.java:204)
at org.apache.zeppelin.hive.HiveInterpreter.executeSql(HiveInterpreter.java:233)
at org.apache.zeppelin.hive.HiveInterpreter.interpret(HiveInterpreter.java:328)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:295)
at org.apache.zeppelin.scheduler.Job.run(Job.java:171)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

The issue was resolved when I used %sql instead. I know your issue is not related to the HDP 2.4 sandbox, but maybe this comment will help someone using %hive on the HDP 2.4 sandbox.

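For reference, the working paragraph simply swaps the interpreter binding:

%sql
select count(*) from health_table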
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-28-2017 04:37 PM
		
					
Thanks for the tutorial. I was able to set up the debugging in standalone mode. A few questions: I tried many other ports, but it worked only on 7777; with the others it said "no transports initialized". Also, is there any way to control whether I run in debug mode or regular mode? Sorry if it's a very beginner's question. Thanks again.

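For context (this is general JVM behavior, not something specific to the tutorial): remote debugging is usually enabled by passing a JDWP agent option to the process, so switching between debug and regular mode is typically just a matter of including or omitting that option, and the port is whatever you put in address=, e.g.:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777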
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-14-2016 07:04 PM
		
					
Thank you, Neeraj.
						
					