Member since: 06-02-2017
39 Posts | 4 Kudos Received | 3 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1876 | 09-04-2018 09:07 PM |
|  | 12946 | 08-30-2018 04:57 PM |
|  | 5158 | 08-14-2018 03:49 PM |
08-23-2018 11:51 PM
Thanks guys. I got the whitelist filter mentioned by @Phil Zampino and updated it to fit my needs. Knox then allowed my requests.
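For reference, a minimal blueprint-style sketch of this kind of whitelist override (the gateway-site property name gateway.dispatch.whitelist comes from the Knox dispatch-whitelisting docs; the exact regex below is illustrative, not the one used here):

{
    "gateway-site": {
        "properties": {
            "gateway.dispatch.whitelist": "^https?:\\/\\/(myhost\\.com|localhost|127\\.0\\.0\\.1)(:[0-9]+)?\\/.*$"
        }
    }
}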
08-21-2018 07:36 PM
Hi, I am using HDP 3.0 and an Ambari 2.7 blueprint. WebHDFS via Knox fails with:

2018-08-21 19:26:33,035 ERROR knox.gateway (GatewayDispatchFilter.java:isDispatchAllowed(155)) - The dispatch to http://myhost.com:50070/webhdfs/v1/user was disallowed because it fails the dispatch whitelist validation. See documentation for dispatch whitelisting.

I have verified that WebHDFS works without Knox:

curl -vvv http://myhost.com:50070/webhdfs/v1/user/?op=LISTSTATUS

Also, the Ambari, Zeppelin, and Ranger UIs work fine via Knox. The Knox settings are:

gateway.dispatch.whitelist: DEFAULT
gateway.dispatch.whitelist.services: DATANODE,HBASEUI,HDFSUI,JOBHISTORYUI,NODEUI,RESOURCEMANAGER,WEBHBASE,WEBHDFS,YARNUI

WebHDFS via Knox worked for me on HDP 2.6. Any idea? I appreciate any help.
08-16-2018 03:49 AM
It turned out that hbase_java_io_tmpdir is enough. The HBase region server was not starting because our security settings disabled the yarn-ats user, whose UID fell outside the allowed range. After creating this user with a UID in the desired range, TS reader v2 worked.
08-15-2018 05:10 PM
Thanks @Jay Kumar SenSharma. This setting worked:

"yarn-hbase-env": {
    "properties": {
        "hbase_java_io_tmpdir": "/u01/tmp"
    }
}

It made TS reader v2 start. Progress! However, after some time it stopped again due to the issue described in this post: the YARN HBase region server cannot start (though the YARN HBase master does). I guess it is still related to the region server using /tmp. In the Ambari UI, I cannot find a place to set yarn_hbase_java_io_tmpdir. Any idea?
08-14-2018 11:47 PM
Looks like this is also caused by a noexec /tmp, as in this post:
Suppressed: java.lang.UnsatisfiedLinkError: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644869269367588546881.so: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644869269367588546881.so: failed to map segment from shared object: Operation not permitted
                at java.lang.ClassLoader$NativeLibrary.load(Native Method)
                at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
                at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
                at java.lang.Runtime.load0(Runtime.java:809)
                at java.lang.System.load(System.java:1086)
                at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:263)
                at java.security.AccessController.doPrivileged(Native Method)
                at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:255)
                at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:233)
                ... 27 more
Not sure what setting can be used to make YARN not use netty.
08-14-2018 07:06 PM
Hi, I am using HDP 3.0 and an Ambari 2.7 blueprint to install my cluster. YARN's Timeline Service v2 reader cannot create its ZooKeeper node:

    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:168)
    at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
    at java.lang.Thread.run(Thread.java:745)
2018-08-14 17:59:55,827 INFO  [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=6, started=4142 ms ago, cancelled=false, msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-unsecure/meta-region-server, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at null

I observed that YARN is using Timeline Service 1.5 and the Timeline Service reader v2. I am not sure if this is expected. My blueprint uses:

{
    "name": "APP_TIMELINE_SERVER"
},

Adding an HBase client per this post did not help. Any idea? Thanks.
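If it helps anyone hitting the same thing: in HDP 3.0 the Timeline Service v2 reader is a separate Ambari component, so a blueprint would presumably need it listed alongside the v1.5 server. A sketch, assuming the HDP 3.0 stack's component name TIMELINE_READER:

{
    "name": "APP_TIMELINE_SERVER"
},
{
    "name": "TIMELINE_READER"
}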
Labels: Hortonworks Data Platform (HDP)
08-14-2018 03:49 PM
This problem is resolved by adding hbase.netty.nativetransport = false. Do you foresee any issue with this fix? @Akhil S Naik Thanks.

[UPDATE 8/15/2018] This setting works and may be better than the one above:

{
    "hbase-env": {
        "properties": {
            "hbase_java_io_tmpdir": "/u01/tmp"
        }
    }
}
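Expressed as a blueprint config block, the first workaround would presumably look like this (assuming hbase.netty.nativetransport belongs in hbase-site, alongside the other hbase.* properties):

{
    "hbase-site": {
        "properties": {
            "hbase.netty.nativetransport": "false"
        }
    }
}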
08-14-2018 04:08 AM
Thanks Akhil S Naik. OS: Oracle Linux 7; memory: 314 GB; deployment tool: Ambari 2.7 blueprint. Is HDP 3.0 using a netty build with bug 6678 fixed? 6678 seems to be just a warning, but it failed HBase in my case. Any solution? Thanks.
08-13-2018 11:33 PM
Hi, I installed a Hadoop cluster using HDP 3.0, but HBase does not work due to this error:

Caused by: java.lang.UnsatisfiedLinkError: failed to load the required native library
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:81)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.<clinit>(EpollEventLoop.java:55)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:134)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:35)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:47)
at org.apache.hbase.thirdparty.io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:104)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:91)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:68)
at org.apache.hadoop.hbase.util.NettyEventLoopGroupConfig.<init>(NettyEventLoopGroupConfig.java:61)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupNetty(HRegionServer.java:673)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:532)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:472)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2923)
... 5 more
Caused by: java.lang.UnsatisfiedLinkError: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644257058602762792223.so: /tmp/liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_644257058602762792223.so: failed to map segment from shared object: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:36)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:243)
at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:187)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:207)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.<clinit>(Native.java:65)
at org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)
... 24 more
Looks like it is caused by the noexec permission on /tmp. To make HBase not use /tmp, I added:

{
    "hbase-site": {
        "properties": {
            "hbase.tmp.dir": "/u01/tmp"
        }
    }
}

But this does not work and HBase still uses /tmp. Any idea? I appreciate any help.
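One note on why hbase.tmp.dir does not help here: netty extracts its native epoll library into the JVM's java.io.tmpdir, which hbase.tmp.dir does not change. Redirecting the JVM temp directory instead, via the hbase_java_io_tmpdir property in hbase-env that the updates above settle on, would look roughly like:

{
    "hbase-env": {
        "properties": {
            "hbase_java_io_tmpdir": "/u01/tmp"
        }
    }
}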
08-08-2018 08:42 PM
After reading the Ambari source code, the problem is solved. I needed to specify the following in the blueprint:

"cluster-env": {
    "properties": {
        "dfs_ha_initial_namenode_active": "%HOSTGROUP::master_host_group%",
        "dfs_ha_initial_namenode_standby": "%HOSTGROUP::master2_host_group%"
    }
}
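For completeness, the %HOSTGROUP::...% placeholders must match host group names defined in the same blueprint. A trimmed sketch (the group names and component lists here are illustrative):

{
    "host_groups": [
        {
            "name": "master_host_group",
            "components": [ { "name": "NAMENODE" }, { "name": "ZKFC" } ]
        },
        {
            "name": "master2_host_group",
            "components": [ { "name": "NAMENODE" }, { "name": "ZKFC" } ]
        }
    ]
}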