Member since: 10-20-2016
Posts: 106
Kudos Received: 0
Solutions: 0
01-21-2020
12:04 AM
@Shelton I have created the sample DB and table and granted the access in Ranger, but I am still not able to access the table. Attaching the Ranger policy screenshot; the user has all the permissions. Note: we have integrated Hive with LDAP.

select * from test;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
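The "Caused by" frame in the HiveServer2 trace posted in this thread points at SemanticAnalyzer.checkResultsCache, so one thing sometimes worth trying (an assumption on my part, not a confirmed fix for this cluster) is to disable the Hive query results cache for the session and rerun the query. The JDBC URL and user below are placeholders, not values from this thread:

```shell
# Hedged sketch: disable the query results cache for one session and retry the select.
# "jdbc:hive2://your-hs2-host:10000/default" and "your_user" are placeholders.
beeline -u "jdbc:hive2://your-hs2-host:10000/default" -n your_user \
  -e "set hive.query.results.cache.enabled=false;" \
  -e "select * from test;"
```

If the query then succeeds, the cache can be left disabled in hive-site.xml while the underlying NullPointerException is investigated.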
01-20-2020
04:33 AM
@lyubomirangelo Still getting the same error.

select * from asop.test;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

Full trace from the Hive server:

2020-01-20T07:27:50,724 INFO [fb06c5bc-0ca8-4f8f-93d8-76bd188d1e4c HiveServer2-Handler-Pool: Thread-115]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-115
2020-01-20T07:27:50,724 WARN [HiveServer2-Handler-Pool: Thread-115]: thrift.ThriftCLIService (:()) - Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:262) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.operation.Operation.run(Operation.java:247) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:541) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:527) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:562) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) ~[hive-service-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.checkResultsCache(SemanticAnalyzer.java:15019) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12315) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:358) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1863) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1810) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
    at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1805) ~[hive-exec-3.1.0.3.0.1.0-187.jar:3.1.0.3.0.1.0-187]
01-20-2020
03:36 AM
Hi Team,
I am unable to access (select) the tables (both external and internal) in Hive; permissions are managed by Ranger.
Please find the error:

0: jdbc:hive2://w0lxqhdp03:2181,w0lxq> select * from pcr_project;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
0: jdbc:hive2://w0lxqhdp03:2181,w0lxq> explain select * from pcr_project;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
0: jdbc:hive2://w0lxqhdp03:2181,w0lxq>
Labels:
- Apache Hive
- Apache Ranger
01-17-2020
02:07 AM
@ramineni Please find the netcat output:

[hdfs@w0lxdhdp01 ~]$ nc -v w0lxqhdp01 8020
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.49.70.13:8020.
01-16-2020
11:46 PM
@GangWar Both the source and destination clusters are not kerberized yet. Please find the full stack trace.

[hdfs@w0lxthdp01 ~]$ HADOOP_ROOT_LOGGER=DEBUG,console hdfs dfs -ls hdfs://w0lxqhdp01
20/01/17 02:45:31 DEBUG util.Shell: setsid exited with exit code 0
20/01/17 02:45:31 DEBUG conf.Configuration: parsing URL jar:file:/usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar!/core-default.xml
20/01/17 02:45:31 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@66480dd7
20/01/17 02:45:31 DEBUG conf.Configuration: parsing URL file:/etc/hadoop/3.0.1.0-187/0/core-site.xml
20/01/17 02:45:31 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@1877ab81
20/01/17 02:45:31 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
20/01/17 02:45:31 DEBUG security.Groups: Creating new Groups object
20/01/17 02:45:31 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
20/01/17 02:45:31 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
20/01/17 02:45:31 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
20/01/17 02:45:31 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
20/01/17 02:45:31 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
20/01/17 02:45:31 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
20/01/17 02:45:31 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
20/01/17 02:45:31 DEBUG security.UserGroupInformation: hadoop login
20/01/17 02:45:31 DEBUG security.UserGroupInformation: hadoop login commit
20/01/17 02:45:31 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hdfs
20/01/17 02:45:31 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hdfs" with name hdfs
20/01/17 02:45:31 DEBUG security.UserGroupInformation: User entry: "hdfs"
20/01/17 02:45:31 DEBUG security.UserGroupInformation: UGI loginUser:hdfs (auth:SIMPLE)
20/01/17 02:45:31 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
20/01/17 02:45:31 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
20/01/17 02:45:31 DEBUG fs.FileSystem: Loading filesystems
20/01/17 02:45:31 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /usr/hdp/3.0.1.0-187/hadoop/hadoop-common-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:31 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /usr/hdp/3.0.1.0-187/hadoop-hdfs/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:32 DEBUG gcs.GoogleHadoopFileSystemBase: GHFS version: 1.9.0.3.0.1.0-187
20/01/17 02:45:32 DEBUG fs.FileSystem: gs:// = class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/gcs-connector-1.9.0.3.0.1.0-187-shaded.jar
20/01/17 02:45:32 DEBUG fs.FileSystem: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /usr/hdp/3.0.1.0-187/hadoop-mapreduce/hadoop-aws-3.1.1.3.0.1.0-187.jar
20/01/17 02:45:32 DEBUG fs.FileSystem: Looking for FS supporting hdfs
20/01/17 02:45:32 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
20/01/17 02:45:32 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
20/01/17 02:45:32 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = true
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
20/01/17 02:45:32 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/lib/hadoop-hdfs/dn_socket
20/01/17 02:45:32 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
20/01/17 02:45:32 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
20/01/17 02:45:32 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@51f116b8
20/01/17 02:45:32 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@bdfd7f3: starting with interruptCheckPeriodMs = 60000
20/01/17 02:45:32 DEBUG shortcircuit.DomainSocketFactory: The short-circuit local reads feature is enabled.
20/01/17 02:45:32 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
20/01/17 02:45:32 DEBUG ipc.Client: The ping interval is 60000 ms.
20/01/17 02:45:32 DEBUG ipc.Client: Connecting to w0lxqhdp01/10.49.70.13:8020
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: starting, having connections 1
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
20/01/17 02:45:32 DEBUG ipc.Client: closing ipc connection to w0lxqhdp01/10.49.70.13:8020: Connection reset by peer
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:554)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1802)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1167)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1063)
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: closed
20/01/17 02:45:32 DEBUG ipc.Client: IPC Client (1603198149) connection to w0lxqhdp01/10.49.70.13:8020 from hdfs: stopped, remaining connections 0
20/01/17 02:45:32 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
java.io.IOException: DestHost:destPort w0lxqhdp01:8020 , LocalHost:localPort w0lxthdp01.ifc.org/10.49.194.14:0. Failed on local exception: java.io.IOException: Connection reset by peer
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
    at org.apache.hadoop.ipc.Client.call(Client.java:1443)
    at org.apache.hadoop.ipc.Client.call(Client.java:1353)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1654)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:65)
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:270)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:149)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2067)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:353)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:554)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1802)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1167)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1063)
ls: DestHost:destPort w0lxqhdp01:8020 , LocalHost:localPort w0lxthdp01.ifc.org/10.49.194.14:0. Failed on local exception: java.io.IOException: Connection reset by peer
20/01/17 02:45:32 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@29e495ff
20/01/17 02:45:32 DEBUG ipc.Client: Stopping client
20/01/17 02:45:32 DEBUG util.ShutdownHookManager: Completed shutdown in 0.003 seconds; Timeouts: 0
20/01/17 02:45:32 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.
[hdfs@w0lxthdp01 ~]$
01-14-2020
05:05 AM
Hi Team,
I am unable to list the files and directories in the remote cluster using the NameNode hostname, even though the cluster is not Kerberos enabled. Kindly help us fix this issue.
Note: the same works fine from Dev to Test, but not from Dev to QA.
Working case:

[hdfs@ ~]$ hdfs dfs -ls hdfs://namenode/
Found 17 items
drwxr-xr-x   - hdfs   hdfs     0 2019-06-03 12:25 hdfs://namenode/app
drwxrwxrwt   - yarn   hadoop   0 2019-10-15 06:39 hdfs://namenode/app-logs
drwxr-xr-x   - hdfs   hdfs     0 2019-05-29 22:13 hdfs://namenode/apps
drwxr-xr-x   - yarn   hadoop   0 2019-05-29 22:05 hdfs://namenode/ats
drwxr-xr-x   - hdfs   hdfs     0 2019-05-29 22:05 hdfs://namenode/atsv2
drwxrwxr-x+  - nifi   hadoop   0 2019-06-03 12:00 hdfs://namenode/data
drwxr-xr-x   - hdfs   hdfs     0 2019-05-29 22:05 hdfs://namenode/hdp
drwxr-xr-x   - hive   hdfs     0 2019-12-31 07:24 hdfs://namenode/home
drwx------   - livy   hdfs     0 2019-05-29 22:06 hdfs://namenode/livy2-recovery
drwxr-xr-x   - mapred hdfs     0 2019-05-29 22:05 hdfs://namenode/mapred
drwxrwxrwx   - mapred hadoop   0 2019-05-29 22:05 hdfs://namenode/mr-history
drwxr-xr-x   - hdfs   hdfs     0 2019-05-29 22:05 hdfs://namenode/ranger
drwxrwxrwx   - spark  hadoop   0 2020-01-14 08:03 hdfs://namenode/spark2-history
drwxrwxrwx   - hdfs   hdfs     0 2019-12-06 13:52 hdfs://namenode/system
drwxrwxrwx   - hdfs   hdfs     0 2019-09-30 05:21 hdfs://namenode/tmp
drwxrwxr-x   - hdfs   hdfs     0 2019-10-14 04:57 hdfs://namenode/user
drwxr-xr-x   - hdfs   hdfs     0 2019-05-29 22:06 hdfs://namenode/warehouse

Failing case:

[hdfs@ ~]$ hdfs dfs -ls hdfs://namenode/
ls: DestHost:destPort namenode:8020 , LocalHost:localPort namenode/10.49.194.171:0. Failed on local exception: java.io.IOException: Connection reset by peer
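Since the same command works from Dev to Test but fails from Dev to QA, one comparison worth making (a hedged guess, not a confirmed diagnosis) is the RPC/security settings on each cluster: a "Connection reset by peer" on the very first RPC often means the server side hung up during connection setup, e.g. because of a mismatch in protection or authentication settings.

```shell
# Hedged diagnostic sketch: run on a node of each cluster and compare the output.
hdfs getconf -confKey hadoop.security.authentication   # expect "simple" on both if neither is kerberized
hdfs getconf -confKey hadoop.rpc.protection            # should agree between client and server clusters
hdfs getconf -confKey fs.defaultFS                     # confirm the QA NameNode really listens on port 8020
```

Any key that differs between the Dev→Test pair (working) and the Dev→QA pair (failing) is a good first suspect.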
Labels:
- Apache Hadoop
01-06-2020
06:26 AM
@Shelton Do you have any idea on this? I am getting an UnknownHostException while running the query from Spark SQL.

Time taken: 0.216 seconds, Fetched 318 row(s)
20/01/06 09:16:49 INFO SparkSQLCLIDriver: Time taken: 0.216 seconds, Fetched 318 row(s)
spark-sql> select * from snapshot_table_list;
20/01/06 09:16:57 INFO ContextCleaner: Cleaned accumulator 0
20/01/06 09:16:57 INFO ContextCleaner: Cleaned accumulator 1
20/01/06 09:16:57 INFO ContextCleaner: Cleaned accumulator 2
20/01/06 09:16:58 INFO HiveMetastoreCatalog: Inferring case-sensitive schema for table project.snapshot_table_list_ext (inference mode: INFER_AND_SAVE)
20/01/06 09:16:58 INFO deprecation: No unit for dfs.client.datanode-restart.timeout(30) assuming SECONDS
20/01/06 09:16:58 ERROR SparkSQLDriver: Failed in [select * from snapshot_table_list]
java.lang.IllegalArgumentException: java.net.UnknownHostException: datalakedev
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:445)
    at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:353)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
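"datalakedev" looks like an HDFS HA nameservice (likely referenced by the table's LOCATION) rather than a resolvable hostname, so the node running spark-sql may simply lack that nameservice definition in its hdfs-site.xml. That is a hypothesis, not a confirmed cause; a hedged way to check it:

```shell
# Hedged diagnostic sketch: see which filesystem the table location points at,
# then check whether this client knows that nameservice.
spark-sql -e "describe formatted project.snapshot_table_list_ext" | grep -i location
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.datalakedev   # errors out if "datalakedev" is undefined on this client
```

If the nameservice is missing, copying the dfs.nameservices / dfs.ha.namenodes.* / dfs.namenode.rpc-address.* entries from the cluster that owns "datalakedev" into the client's hdfs-site.xml is the usual direction to explore.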
01-06-2020
02:41 AM
@Shelton Any update on this? It looks like it is failing to load a native library (leveldbjni):

java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-4657625312215122883.8 (Permission denied)]

Can we install it externally?
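leveldbjni normally does not need a separate install: it ships inside the Hadoop/YARN jars, and the native .so is extracted into java.io.tmpdir at runtime. The "Permission denied" on the extracted file suggests inspecting the extraction directory itself (a hypothesis, not a confirmed cause):

```shell
# Hedged diagnostic sketch: check ownership and permissions of the directory the
# JVM extracts the leveldbjni library into, as reported in the error above.
ls -ld /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
ls -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir | head
```

If the service user cannot write to or execute from that directory, fixing its ownership/permissions is likely more productive than installing leveldbjni externally.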
01-02-2020
06:06 AM
@Shelton As checked, /tmp does not have noexec enabled. Please provide an alternate solution for this.

/dev/mapper/rootvg-tmp on /tmp type xfs (rw,relatime,attr2,inode64,noquota)
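Note that the failing path in the earlier UnsatisfiedLinkError is under /var/lib/ambari-agent/tmp, not /tmp, so /tmp's mount options may not be the relevant ones. Two hedged things to try (the replacement tmpdir path below is a placeholder, not a recommendation for this specific cluster):

```shell
# Check the mount that actually backs the extraction directory for noexec.
findmnt -no OPTIONS -T /var/lib/ambari-agent/tmp

# Or point the JVM temp dir at a writable, exec-permitted location instead.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=/path/to/exec-allowed/tmpdir"
```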
01-02-2020
03:54 AM
@Shelton Could you please look into the above issue?