Member since: 02-02-2021
Posts: 116
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1309 | 08-13-2021 09:44 AM |
| | 5982 | 04-27-2021 04:23 PM |
| | 2333 | 04-26-2021 10:47 AM |
| | 1527 | 03-29-2021 06:01 PM |
| | 4200 | 03-17-2021 04:53 PM |
09-08-2021
03:10 PM
Hi experts, I ran a Hive query using Tez via Beeline to join tables and got the error below.

```
2021-09-08T17:07:55,932 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Query ID = hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Total jobs = 1
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Launching Job 1 out of 1
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: QueueName: null nonDefaultUser: false defaultQueuePool: null hasInitialSessions: false
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: Created new tez session for queue: null with session id: 1b689cf2-9a2e-4afc-96a7-bdeef34ed887
2021-09-08T17:07:55,946 INFO [HiveServer2-Background-Pool: Thread-140] ql.Context: New scratch dir is hdfs://sunny/tmp/hive/hive/334e90cf-525e-47f2-bf12-b227417647c2/hive_2021-09-08_17-07-55_686_3502860413990358095-7
2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] exec.Task: Tez session hasn't been created yet. Opening session
2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionState: User of session id 1b689cf2-9a2e-4afc-96a7-bdeef34ed887 is hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Localizing resource because it does not exist: file:/usr/bgtp/current/ext/hive to dest: hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Looks like another thread or process is writing the same file
2021-09-08T17:07:55,953 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Waiting for the file hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive (5 attempts, with 5000ms interval)
2021-09-08T17:07:55,978 INFO [ATS Logger 0] hooks.ATSHook: ATS domain created:hive_334e90cf-525e-47f2-bf12-b227417647c2(anonymous,hive,anonymous,hive)
2021-09-08T17:07:55,980 INFO [ATS Logger 0] hooks.ATSHook: Received pre-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Could not find the jar that was being uploaded
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] exec.Task: Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive. Failing because I am unlikely to write too.
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1028) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:471) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:247) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) ~[hive-exec-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:20,968 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook
2021-09-08T17:08:20,969 INFO [ATS Logger 0] hooks.ATSHook: Received post-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,969 ERROR [HiveServer2-Background-Pool: Thread-140] ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
2021-09-08T17:08:20,969 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Completed executing command(queryId=hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527); Time taken: 25.04 seconds
2021-09-08T17:08:20,984 ERROR [HiveServer2-Background-Pool: Thread-140] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,480 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] operation.OperationManager: Closing operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=3ebe86bb-7347-4350-950e-0e202a1b6f9b]
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] exec.ListSinkOperator: Closing operator LIST_SINK[35]
2021-09-08T17:08:26,508 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
```

Any help is much appreciated. Thanks,
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
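The failure above means Hive waited for another writer to finish localizing file:/usr/bgtp/current/ext/hive into the Tez session staging directory, then gave up ("Previous writer likely failed to write ... Failing because I am unlikely to write too"). A common remediation (an assumption on my part, not something stated in this thread) is that a stale, partially written file was left behind in `_tez_session_dir`, for example after an earlier HiveServer2 crash, and can simply be cleared so the next session re-localizes the resource. A sketch, using the paths from the log and assuming you can run the HDFS client as the `hive` user:

```shell
# Inspect the Tez session staging area referenced in the log.
hdfs dfs -ls /tmp/hive/hive/_tez_session_dir/

# Remove the stale per-session directory; it is recreated the next
# time a Tez session opens. -skipTrash avoids needing a trash dir.
hdfs dfs -rm -r -skipTrash \
  /tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887
```

If the problem recurs on every query, it is worth checking HDFS health and whether the source path `/usr/bgtp/current/ext/hive` actually exists on the HiveServer2 host.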
08-13-2021
09:44 AM
OK, never mind: it was a firewall issue. Everything is working now. Thanks,
08-13-2021
09:11 AM
Hi experts, we recently changed the IP address of our Ambari server in our dev environment. The cluster appears to be up and working properly; however, Ambari is not recognizing which NameNode is active and which is standby. Also, some users are unable to access the Ambari Hive view. This is the error message when trying to access the Hive view via Ambari: USER HOME Check Message: test01.dmicorp.com:50070: No route to host (Host unreachable) Any help is much appreciated. Thanks,
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
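When Ambari cannot tell which NameNode is active after an address change, it can help to ask HDFS directly and to test raw reachability of the NameNode web port, since "No route to host" is a network-level failure rather than a Hadoop one. A sketch; the service IDs `nn1`/`nn2` are placeholders for whatever `dfs.ha.namenodes.<nameservice>` defines in your cluster:

```shell
# Ask HDFS itself which NameNode is active/standby, bypassing Ambari.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# "No route to host" points at connectivity: check the NameNode web UI
# port from the Ambari host (prints the HTTP status code on success).
curl -s -o /dev/null -w '%{http_code}\n' http://test01.dmicorp.com:50070/
```

The follow-up post above notes the root cause was a firewall, which is exactly what a failing connectivity check like this would surface.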
08-12-2021
06:23 AM
Thanks it worked.
08-11-2021
10:15 AM
Hi experts, as the root user I am trying to delete a directory in HDFS that root created. However, the delete fails with "Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x". Why does it report permission denied on "/user" when I am trying to delete the directory "/tmp/root/testdirectory"? The full output is below.

```
[root@test02 ~]# hdfs dfs -ls /tmp/root/
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Found 2 items
drwxrwxrwx   - root hdfs          0 2021-08-09 20:35 /tmp/root/testdirectory
-rw-r--r--   3 root hdfs          0 2021-08-10 13:54 /tmp/root/test
[root@test02 ~]# hdfs dfs -rmr /tmp/root/testdirectory
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
rmr: DEPRECATED: Please use '-rm -r' instead.
21/08/11 12:08:30 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2498)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2471)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1243)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1240)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1257)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1232)
    at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:147)
    at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
    at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
    at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
    at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
    at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:327)
    at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:299)
    at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:281)
    at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:265)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:317)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:380)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
    at org.apache.hadoop.ipc.Client.call(Client.java:1495)
    at org.apache.hadoop.ipc.Client.call(Client.java:1394)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:587)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2496)
    ... 21 more
rmr: Failed to move to trash: hdfs://test/tmp/root/testdirectory: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
[root@test02 ~]#
```

Any help is much appreciated. Thanks,
Labels:
- Apache Hadoop
- HDFS
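The reason the delete touches `/user` at all: with HDFS trash enabled, `-rm -r` does not delete the path directly but first moves it into the calling user's trash directory under `/user/<user>/.Trash` (visible in the `TrashPolicyDefault.moveToTrash` frame above). Creating `/user/root/.Trash/...` requires WRITE on `/user`, which is owned `hdfs:hdfs` with mode `drwxr-xr-x`, so root is denied. A small sketch of the path involved, plus two common ways around it (the exact commands are illustrative):

```shell
# With trash enabled, "-rm -r PATH" first tries to move PATH here,
# so the failing inode is /user, not the target directory itself.
user=root
path=/tmp/root/testdirectory
trash_dir="/user/$user/.Trash/Current$(dirname "$path")"
echo "$trash_dir"   # prints /user/root/.Trash/Current/tmp/root

# Option 1: bypass trash entirely (no write under /user needed):
#   hdfs dfs -rm -r -skipTrash /tmp/root/testdirectory
# Option 2: have the HDFS superuser create root's home directory once:
#   sudo -u hdfs hdfs dfs -mkdir -p /user/root
#   sudo -u hdfs hdfs dfs -chown root:root /user/root
```

Note the computed path matches the `Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root` warning in the log.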
08-09-2021
10:19 AM
Hi experts, we are trying to copy Hive tables from one cluster to another to do some testing. What is the proper way of doing this? Is it possible to distcp the table data at the HDFS level first and then run a Hive query to have those tables recognized by Hive on the target cluster? Any help is much appreciated. Thanks,
Labels:
- Apache Hadoop
- Apache Hive
- HDFS
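One supported route for this is Hive's EXPORT/IMPORT, which writes both the data files and the table metadata to an HDFS directory that distcp can then carry across clusters, so nothing has to be re-registered by hand. A sketch; the hosts, ports, database, table, and paths below are placeholders, not values from this thread:

```shell
# 1) On the source cluster: export data + metadata to an HDFS dir.
beeline -u jdbc:hive2://source-hs2:10000 \
  -e "EXPORT TABLE mydb.mytable TO '/tmp/export/mytable';"

# 2) Copy the export directory between clusters.
hadoop distcp hdfs://source-nn:8020/tmp/export/mytable \
              hdfs://target-nn:8020/tmp/export/mytable

# 3) On the target cluster: import, which recreates the table.
beeline -u jdbc:hive2://target-hs2:10000 \
  -e "IMPORT TABLE mydb.mytable FROM '/tmp/export/mytable';"
```

Plain distcp of the warehouse files also works, but then you must recreate the table DDL on the target yourself and, for partitioned tables, run `MSCK REPAIR TABLE` to register the copied partitions with the metastore.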
06-03-2021
03:54 PM
Also, SQuirreL does seem to connect to the dev cluster; it just times out when running a query such as "show databases". If SQuirreL stays connected long enough, I noticed the query will eventually return results instead of timing out. Per Cloudera's documentation (https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html#concept_jsw_bnc_rp), a minimum of 4 dedicated cores is recommended for HS2 and 4 for the Hive metastore. The server that hosts both HS2 and the metastore has only 8 cores in total. Could this be a reason for the performance issue? Any help on this is much appreciated. Thanks,
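To separate a SQuirreL/JDBC-client problem from a genuinely slow HiveServer2, it can help to time the same kind of call from Beeline on the dev cluster, since Beeline uses the same Thrift interface. A sketch; the host and port are placeholders for the dev HS2 endpoint:

```shell
# Time a simple metadata query against the dev HiveServer2.
time beeline -u jdbc:hive2://dev-hs2:10000 -e "show databases;"
# If this is also slow, the bottleneck is HiveServer2/metastore (e.g.
# an overloaded external MySQL or too few cores), not SQuirreL itself.
```

The timeouts in the SQuirreL log above come from `DatabaseMetaData.getTables(...)`, which SQuirreL calls eagerly at connect time to populate its schema cache, so a metastore that answers slowly hurts SQuirreL much more than a plain Beeline session that runs no metadata calls until asked.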
05-31-2021
09:05 PM
Yeah, we currently have 2 HS2 instances. For some reason production works fine with SQuirreL, while dev times out even on simple queries such as "show databases". Beeline works fine on the dev cluster. The only difference I can think of is that the dev cluster uses an external MySQL server, whereas on the production cluster MySQL is installed on one of the nodes. Am I missing some SQuirreL drivers or something? I wonder why it is only SQuirreL that has issues running queries against our dev HiveServer2. Any help is much appreciated. Thanks,
05-28-2021
06:44 PM
Most of the time I get this timeout, even after restarting Hive. This happens on our dev cluster; I am able to use Beeline to connect to Hive there, and our production cluster does not have this issue. From the SQuirreL logs I see this:

```
2021-05-28 20:38:10,136 [Thread-1] WARN net.sourceforge.squirrel_sql.fw.sql.SQLDatabaseMetaData - DatabaseMetaData.getTables(...) threw an error when called with tableNamePattern = null. Trying tableNamePattern %. The error was: java.sql.SQLException: java.net.SocketTimeoutException: Read timed out
2021-05-28 20:38:40,145 [Thread-1] ERROR net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo - failed to load table names
java.sql.SQLException: java.net.SocketTimeoutException: Read timed out
    at org.apache.hive.jdbc.HiveDatabaseMetaData.getTables(HiveDatabaseMetaData.java:656)
    at net.sourceforge.squirrel_sql.fw.sql.SQLDatabaseMetaData.getTables(SQLDatabaseMetaData.java:1008)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.privateLoadTables(SchemaInfo.java:1212)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.loadTables(SchemaInfo.java:412)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.privateLoadAll(SchemaInfo.java:303)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.initialLoad(SchemaInfo.java:179)
    at net.sourceforge.squirrel_sql.client.session.Session$1.run(Session.java:261)
    at net.sourceforge.squirrel_sql.fw.util.TaskExecuter.run(TaskExecuter.java:82)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
    at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
    at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
    at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_GetTables(TCLIService.java:321)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.GetTables(TCLIService.java:308)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1363)
    at com.sun.proxy.$Proxy5.GetTables(Unknown Source)
    at org.apache.hive.jdbc.HiveDatabaseMetaData.getTables(HiveDatabaseMetaData.java:654)
    ... 8 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method)
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351)
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
    ... 27 more
```
05-28-2021
04:25 PM
Even after I increased the HiveServer2 heap memory and restarted Hive, I don't see any connections, and SQuirreL still has issues connecting to the dev cluster. Our prod cluster works fine. Any help is much appreciated. Thanks,