Member since 03-15-2016

Posts: 35
Kudos Received: 13
Solutions: 3

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 4703 | 11-06-2016 02:18 PM |
|  | 5582 | 04-18-2016 10:54 AM |
|  | 2517 | 03-31-2016 11:46 AM |
11-06-2016 02:18 PM

1 Kudo
@Geoffrey Shelton Okot Here's how I got it working: the existing nodes had the DataNode directory set to /hadoopdisk, while the new nodes had /newdisk (using config groups in Ambari, properties can be overridden selectively). However, I later reverted the DataNode directory on all the nodes to /hadoopdisk, and that's when I started getting the error log above. The resolution was to remove the unused /newdisk directories from the new DataNodes. I'm not sure why this caused any issue in the first place, since the DataNode property pointed to /hadoopdisk only. It seems the old DataNode property somehow kept causing the issue (in spite of being reverted) for as long as the unused directory existed. As soon as it was removed, voila!
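For reference, a rough sketch of the cleanup (paths are from my cluster; verify that the directory is no longer listed in dfs.datanode.data.dir and contains no blocks you still need before deleting anything):

```bash
# Run on each new DataNode, with the DataNode process stopped.

# Directory still configured in dfs.datanode.data.dir -- leave it untouched:
ls /hadoopdisk/hadoop/hdfs/data/current

# Leftover directory from the old config -- this is what had to go:
ls /newdisk/hadoop/hdfs/data/current
rm -rf /newdisk/hadoop/hdfs/data
```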
11-04-2016 09:27 AM
It's resolved now.
11-03-2016 11:00 AM
I have already checked that the ClusterIDs for the NN and DN match. I have also tried deleting the DataNode data folder and recreating it using 'hdfs datanode'.
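For completeness, this is roughly how I compared the ClusterIDs (the metadata paths are assumptions from my layout; adjust to your dfs.namenode.name.dir and dfs.datanode.data.dir values):

```bash
# On the NameNode host:
grep clusterID /hadoop/hdfs/namenode/current/VERSION

# On the failing DataNode host:
grep clusterID /hadoopdisk/hadoop/hdfs/data/current/VERSION

# The two clusterID values must be identical for the DataNode to register.
```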
11-03-2016 10:56 AM
Using Ambari on a perfectly running and working HDP cluster (v2.4), I tried to add a new host with the DataNode service. However, on starting, the new DataNode runs for only 2-3 seconds and then stops. Ambari also shows the "DataNodes Live" widget as 3/4.

PS: I have used the same DataNode directories setting on the three existing nodes as well as on the newly added node.

Following are the logs from the DataNode that is not starting:

[root@node08 data]# tail -50 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-node08.log
2016-11-03 06:29:01,769 INFO  ipc.Server (Server.java:run(906)) - IPC Server Responder: starting
2016-11-03 06:29:01,769 INFO  ipc.Server (Server.java:run(746)) - IPC Server listener on 8010: starting
2016-11-03 06:29:02,001 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on /hadoopdisk/hadoop/hdfs/data/in_use.lock acquired by nodename 9074@node08.int.xyz.com
2016-11-03 06:29:02,048 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1435709756-10.131.138.24-1461308845727
2016-11-03 06:29:02,049 INFO  common.Storage (Storage.java:lock(675)) - Locking is disabled for /hadoopdisk/hadoop/hdfs/data/current/BP-1435709756-10.131.138.24-1461308845727
2016-11-03 06:29:02,051 INFO  datanode.DataNode (DataNode.java:initStorage(1402)) - Setting up storage: nsid=1525277556;bpid=BP-1435709756-10.131.138.24-1461308845727;lv=-56;nsInfo=lv=-63;cid=CID-95c88273-9764-4b48-8453-8cbc07cffc8b;nsid=1525277556;c=0;bpid=BP-1435709756-10.131.138.24-1461308845727;dnuuid=c06d42e7-c0be-458c-a494-015e472b3b49
2016-11-03 06:29:02,065 INFO  common.Storage (DataStorage.java:addStorageLocations(379)) - Storage directory [DISK]file:/hadoopdisk/hadoop/hdfs/data/ has already been used.
2016-11-03 06:29:02,100 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage directories for bpid BP-1435709756-10.131.138.24-1461308845727
2016-11-03 06:29:02,101 WARN  common.Storage (BlockPoolSliceStorage.java:loadBpStorageDirectories(219)) - Failed to analyze storage directories for block pool BP-1435709756-10.131.138.24-1461308845727
java.io.IOException: BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /hadoopdisk/hadoop/hdfs/data/current/BP-1435709756-10.131.138.24-1461308845727
        at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:210)
        at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:242)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:394)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:476)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1399)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1364)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:224)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:821)
        at java.lang.Thread.run(Thread.java:745)
2016-11-03 06:29:02,103 WARN  common.Storage (DataStorage.java:addStorageLocations(397)) - Failed to add storage for block pool: BP-1435709756-10.131.138.24-1461308845727 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used block storage: /hadoopdisk/hadoop/hdfs/data/current/BP-1435709756-10.131.138.24-1461308845727
2016-11-03 06:29:02,104 FATAL datanode.DataNode (BPServiceActor.java:run(833)) - Initialization failed for Block pool <registering> (Datanode Uuid c06d42e7-c0be-458c-a494-015e472b3b49) service to node04.int.xyz.com/10.131.138.27:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:477)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1399)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1364)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:224)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:821)
        at java.lang.Thread.run(Thread.java:745)
2016-11-03 06:29:02,104 FATAL datanode.DataNode (BPServiceActor.java:run(833)) - Initialization failed for Block pool <registering> (Datanode Uuid c06d42e7-c0be-458c-a494-015e472b3b49) service to node03.int.xyz.com/10.131.138.24:8020. Exiting.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid volume failure  config value: 1
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:285)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1412)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1364)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:224)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:821)
        at java.lang.Thread.run(Thread.java:745)
2016-11-03 06:29:02,104 WARN  datanode.DataNode (BPServiceActor.java:run(854)) - Ending block pool service for: Block pool <registering> (Datanode Uuid c06d42e7-c0be-458c-a494-015e472b3b49) service to node04.int.xyz.com/10.131.138.27:8020
2016-11-03 06:29:02,104 WARN  datanode.DataNode (BPServiceActor.java:run(854)) - Ending block pool service for: Block pool <registering> (Datanode Uuid c06d42e7-c0be-458c-a494-015e472b3b49) service to node03.int.xyz.com/10.131.138.24:8020
2016-11-03 06:29:02,208 INFO  datanode.DataNode (BlockPoolManager.java:remove(103)) - Removed Block pool <registering> (Datanode Uuid c06d42e7-c0be-458c-a494-015e472b3b49)
2016-11-03 06:29:04,208 WARN  datanode.DataNode (DataNode.java:secureMain(2540)) - Exiting Datanode
2016-11-03 06:29:04,212 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 0
2016-11-03 06:29:04,214 INFO  datanode.DataNode (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node08.int.xyz.com/10.131.137.96
************************************************************/ 
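A quick way to inspect what the DataNode is tripping over, based on the paths in the log above (diagnostic sketch only):

```bash
# List the block pool directories under the configured data dir; the log
# complains that BP-1435709756-10.131.138.24-1461308845727 is already in use.
ls -l /hadoopdisk/hadoop/hdfs/data/current/

# Check the storage metadata recorded for this directory:
cat /hadoopdisk/hadoop/hdfs/data/current/VERSION
```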
Labels:
- Apache Hadoop
10-25-2016 05:53 AM
After creating a table in Hive, is it possible to modify the "comment" shown highlighted in the attached screenshot? Also, for a table created without any comment field, is it possible to add a comment value?
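In other words, I am looking for something along these lines via the Hive CLI (the table name is just a placeholder; I am not sure whether this is supported after table creation):

```bash
# Set or modify the table-level comment by updating its TBLPROPERTIES:
hive -e "ALTER TABLE employee SET TBLPROPERTIES ('comment' = 'Employee master table');"
```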
Labels:
- Apache Atlas
07-19-2016 04:43 AM
@Vadim Can you please point me to the steps for how to achieve this, i.e. associating business metadata with technical metadata? For example, how can I associate the tag "This is employee table" with the technical metadata HIVE_TABLE Employee? Similarly, I also need to have a data dictionary/definition for each column. Is this done through APIs after ingestion? How can it be achieved?
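To make the question concrete, I imagine it looks something like this via the REST API (host, credentials, GUID and trait name below are all placeholders, and I am not sure this is the right endpoint or payload):

```bash
# Guess: attach a trait (tag) to an existing hive_table entity by its GUID.
curl -u admin:admin -X POST -H 'Content-Type: application/json' \
  -d '{"typeName": "EmployeeTableInfo", "values": {}}' \
  "http://atlas-host:21000/api/atlas/entities/<entity-guid>/traits"
```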
05-12-2016 12:33 PM
@Ana Gillan @Sagar Shimpi Thanks, that gave a partial resolution: the Ranger Hive plugin applies only to HiveServer2 and not to the CLI. But for the Hive table file mentioned below, how is user mktg1 able to query it using the Hive CLI?

[hive@sandbox ~]$ hadoop fs -ls /apps/hive/warehouse/xademo.db/customer_details/acct.txt
----------   3 hive hdfs       1532 2016-03-14 14:52 /apps/hive/warehouse/xademo.db/customer_details/acct.txt

[mktg1@sandbox ~]$ hive
hive> use xademo;
OK
Time taken: 1.737 seconds
hive> select * from customer_details limit 10;
OK
PHONE_NUM       PLAN    REC_DATE        STAUS   BALANCE IMEI    REGION
5553947406      6290    20130328        31      0       012565003040464 R06
7622112093      2316    20120625        21      28      359896046017644 R02
5092111043      6389    20120610        21      293     012974008373781 R06
9392254909      4002    20110611        21      178     357004045763373 R04
7783343634      2276    20121214        31      0       354643051707734 R02
5534292073      6389    20120223        31      83      359896040168211 R06
9227087403      4096    20081010        31      35      356927012514661 R04
9226203167      4060    20060527        21      450     010589003666377 R04
9221154050      4107    20100811        31      3       358665019197977 R04
Time taken: 6.467 seconds, Fetched: 10 row(s) 
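Since the Hive CLI reads the warehouse files straight from HDFS (bypassing HiveServer2 and hence the Ranger Hive plugin), the effective permissions can be checked directly (same paths as above):

```bash
# Access from the CLI is decided by HDFS permissions / the Ranger HDFS
# plugin, not the Hive plugin, so check those:
hdfs dfs -ls -d /apps/hive/warehouse/xademo.db/customer_details
hdfs dfs -getfacl /apps/hive/warehouse/xademo.db/customer_details/acct.txt
```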
05-11-2016 12:06 PM

1 Kudo
Steps done:
- Disabled "HDFS Global Allow".
- Created a new policy for the Marketing group (Read/Execute enabled) on /apps/hive/warehouse/xademo.db/customer_details

PS: The policy sync was successful, as verified in Ranger -> Audit -> Plugins.

Problem: a user from a different group (e.g. user it1 from the IT group) was freely able to drop the Hive table "customer_details".

Troubleshooting done so far:

hadoop fs -ls /apps/hive/warehouse/xademo.db
drwxrwxrwx   - hive hdfs          0 2016-03-14 14:52 /apps/hive/warehouse/xademo.db/customer_details

It seems HDFS permissions are taking precedence over Ranger policies?
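If it is indeed the wide-open HDFS permissions letting users through, one approach used in Ranger-centric setups is to lock down the POSIX bits and let Ranger policies grant access instead (a sketch only; run as the hdfs superuser, and only after confirming the Ranger policies cover all legitimate users):

```bash
# Remove the world-writable permissions so the Ranger HDFS plugin's
# policies, rather than drwxrwxrwx, decide who gets in:
sudo -u hdfs hdfs dfs -chmod -R 000 /apps/hive/warehouse/xademo.db/customer_details
```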
Labels:
- Apache Hadoop
- Apache Ranger
05-10-2016 04:00 AM
@Sagar Shimpi The username/password are correct. How can I log in to Ranger from the CLI? usersync.log is below; I didn't find xa-portal.log.

PS: So far this is an HDP sandbox setup (with OpenLDAP). I am not using OpenLDAP for domain login.

[root@sandbox ~]# tail -f /usr/hdp/current/ranger-usersync/logs/usersync.log
09 May 2016 09:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.updateSink() completed with user count: 2
09 May 2016 09:53:04  INFO UserGroupSync [UnixUserSyncThread] - End: update user/group from source==>sink
09 May 2016 10:53:04  INFO UserGroupSync [UnixUserSyncThread] - Begin: update user/group from source==>sink
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder updateSink started
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with --  ldapUrl: ldap://localhost:389,  ldapBindDn: cn=Manager,dc=my-domain,dc=com,  ldapBindPassword: ***** ,  ldapAuthenticationMechanism: simple,  searchBase: dc=my-domain,dc=com,  userSearchBase: ou=users,dc=my-domain,dc=com,  userSearchScope: 2,  userObjectClass: person,  userSearchFilter: ,  extendedUserSearchFilter: (objectclass=person),  userNameAttribute: uid,  userSearchAttributes: [uid, ismemberof, memberof],  userGroupNameAttributeSet: [ismemberof, memberof],  pagedResultsEnabled: true,  pagedResultsSize: 500,  groupSearchEnabled: false,  groupSearchBase: dc=my-domain,dc=com,  groupSearchScope: 2,  groupObjectClass: groupofnames,  groupSearchFilter: *,  extendedGroupSearchFilter: (&(objectclass=groupofnames)(*)(member={0})),  extendedAllGroupsSearchFilter: (&(objectclass=groupofnames)(*)),  groupMemberAttributeName: member,  groupNameAttribute: cn,  groupUserMapSyncEnabled: false,  ldapReferral: ignore
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - Updating user count: 1, userName: atewari, groupList: []
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - Updating user count: 2, userName: sbansal, groupList: []
09 May 2016 10:53:04  INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.updateSink() completed with user count: 2
09 May 2016 10:53:04  INFO UserGroupSync [UnixUserSyncThread] - End: update user/group from source==>sink 
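To clarify what I mean by logging in from the CLI, I am after something like this (sandbox host, port and credentials assumed; the endpoint may differ across Ranger versions):

```bash
# Try to verify the Ranger admin credentials by calling the admin REST API:
curl -u admin:admin "http://sandbox.hortonworks.com:6080/service/public/api/policy"
```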