Member since: 02-02-2021
Posts: 116
Kudos Received: 2
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 745 | 08-13-2021 09:44 AM
 | 3692 | 04-27-2021 04:23 PM
 | 1368 | 04-26-2021 10:47 AM
 | 923 | 03-29-2021 06:01 PM
 | 2744 | 03-17-2021 04:53 PM
09-08-2021
03:10 PM
Hi experts, I ran a Hive query using Tez via Beeline to join tables and got the error below.

2021-09-08T17:07:55,932 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Query ID = hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Total jobs = 1
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Launching Job 1 out of 1
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: QueueName: null nonDefaultUser: false defaultQueuePool: null hasInitialSessions: false
2021-09-08T17:07:55,933 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionPoolManager: Created new tez session for queue: null with session id: 1b689cf2-9a2e-4afc-96a7-bdeef34ed887
2021-09-08T17:07:55,946 INFO [HiveServer2-Background-Pool: Thread-140] ql.Context: New scratch dir is hdfs://sunny/tmp/hive/hive/334e90cf-525e-47f2-bf12-b227417647c2/hive_2021-09-08_17-07-55_686_3502860413990358095-7
2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] exec.Task: Tez session hasn't been created yet. Opening session
2021-09-08T17:07:55,949 INFO [HiveServer2-Background-Pool: Thread-140] tez.TezSessionState: User of session id 1b689cf2-9a2e-4afc-96a7-bdeef34ed887 is hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Localizing resource because it does not exist: file:/usr/bgtp/current/ext/hive to dest: hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive
2021-09-08T17:07:55,952 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Looks like another thread or process is writing the same file
2021-09-08T17:07:55,953 INFO [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Waiting for the file hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive (5 attempts, with 5000ms interval)
2021-09-08T17:07:55,978 INFO [ATS Logger 0] hooks.ATSHook: ATS domain created:hive_334e90cf-525e-47f2-bf12-b227417647c2(anonymous,hive,anonymous,hive)
2021-09-08T17:07:55,980 INFO [ATS Logger 0] hooks.ATSHook: Received pre-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] tez.DagUtils: Could not find the jar that was being uploaded
2021-09-08T17:08:20,967 ERROR [HiveServer2-Background-Pool: Thread-140] exec.Task: Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive. Failing because I am unlikely to write too.
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1028) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:471) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:247) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:703) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:196) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:303) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:168) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232) ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255) ~[hive-service-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:20,968 INFO [HiveServer2-Background-Pool: Thread-140] hooks.ATSHook: Created ATS Hook
2021-09-08T17:08:20,969 INFO [ATS Logger 0] hooks.ATSHook: Received post-hook notification for :hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527
2021-09-08T17:08:20,969 ERROR [HiveServer2-Background-Pool: Thread-140] ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
2021-09-08T17:08:20,969 INFO [HiveServer2-Background-Pool: Thread-140] ql.Driver: Completed executing command(queryId=hive_20210908170755_9492c1e6-50ee-48da-8353-e49138d8b527); Time taken: 25.04 seconds
2021-09-08T17:08:20,984 ERROR [HiveServer2-Background-Pool: Thread-140] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380) ~[hive-service-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257) ~[hive-service-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) ~[hive-service-2.3.6.jar:2.3.6]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348) ~[hive-service-2.3.6.jar:2.3.6]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926) ~[hadoop-common-2.10.1.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362) ~[hive-service-2.3.6.jar:2.3.6]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,452 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,476 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,477 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,480 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Updating thread name to 334e90cf-525e-47f2-bf12-b227417647c2 HiveServer2-Handler-Pool: Thread-63
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] operation.OperationManager: Closing operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=3ebe86bb-7347-4350-950e-0e202a1b6f9b]
2021-09-08T17:08:26,481 INFO [c5f4fd3b-f20e-4fcb-bcd6-245bb07a3c58 HiveServer2-Handler-Pool: Thread-63] exec.ListSinkOperator: Closing operator LIST_SINK[35]
2021-09-08T17:08:26,508 INFO [HiveServer2-Handler-Pool: Thread-63] session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: Thread-63

Any help is much appreciated. Thanks,
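From the log, HiveServer2 tried to localize the resource file:/usr/bgtp/current/ext/hive into the Tez session directory on HDFS, saw what looked like a partial upload from a previous writer, retried five times, and gave up. A diagnostic sketch, assuming shell access on the HiveServer2 host (paths are taken from the log above; the session-directory name changes for every session):

# Does the local resource HiveServer2 wants to upload actually exist on the HS2 host?
ls -ld /usr/bgtp/current/ext/hive

# Look for stale or zero-length leftovers from earlier failed uploads
hdfs dfs -ls -R /tmp/hive/hive/_tez_session_dir/

# If stale entries are found, remove them so the next session can re-upload cleanly
# (the directory name below is the one from this log; a new session will use a new one)
hdfs dfs -rm -r -skipTrash /tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887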
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
08-31-2021
08:27 AM
Hi experts, In my current cluster, some datanodes have only 2 disks and some have 3. Is it OK to have a different number of disks per node while specifying 3 disks in the datanode configs? Also, is it OK if some disks are 2 TB and some are 3 TB? Any advice is greatly appreciated. Thanks,
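For what it's worth, HDFS itself does not require every datanode to have the same number or size of disks; each datanode only uses the directories listed in its own dfs.datanode.data.dir. A minimal hdfs-site.xml sketch for the 3-disk nodes (the /data mount points are hypothetical; in Ambari the per-node difference would typically be expressed with a config group for the 2-disk nodes):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/hdfs/data,/data2/hdfs/data,/data3/hdfs/data</value>
</property>
<!-- With mixed 2 TB and 3 TB disks, the space-aware policy keeps volumes evenly filled -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>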
Labels:
- Apache Ambari
- Apache Hadoop
- HDFS
08-13-2021
09:44 AM
OK, never mind, it was a firewall issue. Everything is working now. Thanks,
08-13-2021
09:11 AM
Hi experts, We recently changed the IP address of our Ambari server in our dev environment. The cluster seems to be up and running and working properly; however, Ambari is not recognizing which NameNode is active and which is standby. Also, some users are unable to access the Ambari Hive View. This is the error message when trying to access the Hive View via Ambari:

USER HOME Check Message: test01.dmicorp.com:50070: No route to host (Host unreachable)

Any help is much appreciated. Thanks,
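"No route to host" usually means the Ambari server is still resolving the old address (a stale DNS or /etc/hosts entry) or a firewall is blocking the NameNode web port. A quick check from the Ambari host, using the hostname and port from the error above:

# Does the hostname resolve to the new IP?
getent hosts test01.dmicorp.com

# Is the NameNode web UI reachable? This JMX bean also reports active/standby state,
# which is what Ambari reads to label the NameNodes
curl -s "http://test01.dmicorp.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"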
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
08-12-2021
06:23 AM
Thanks, it worked.
08-11-2021
10:15 AM
Hi experts, As the root user, I am trying to delete a directory in HDFS that was created by root. However, when I try to delete it, I get "Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x". Why does it report permission denied on "/user" when I am trying to delete the directory "/tmp/root/testdirectory"? The full error is below.

[root@test02 ~]# hdfs dfs -ls /tmp/root/
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Found 2 items
drwxrwxrwx - root hdfs 0 2021-08-09 20:35 /tmp/root/testdirectory
-rw-r--r-- 3 root hdfs 0 2021-08-10 13:54 /tmp/root/test
[root@test02 ~]# hdfs dfs -rmr /tmp/root/testdirectory
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
rmr: DEPRECATED: Please use '-rm -r' instead.
21/08/11 12:08:30 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://test/user/root/.Trash/Current/tmp/root
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2498)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2471)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1243)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1240)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1257)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1232)
at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:147)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:109)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:327)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:299)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:281)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:265)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:317)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:380)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:351)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:251)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1756)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1740)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1699)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3007)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1132)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:659)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
at org.apache.hadoop.ipc.Client.call(Client.java:1495)
at org.apache.hadoop.ipc.Client.call(Client.java:1394)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:587)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2496)
... 21 more
rmr: Failed to move to trash: hdfs://test/tmp/root/testdirectory: Permission denied: user=root, access=WRITE, inode="/user":hdfs:hdfs:drwxr-xr-x
[root@test02 ~]#

Any help is much appreciated. Thanks,
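The stack trace shows why /user is involved: -rm first tries to move the directory into root's trash, and creating hdfs://test/user/root/.Trash requires making /user/root, which root cannot do because /user is owned by hdfs and is mode 755. So the delete itself is not denied; the trash-directory mkdir is. Two common ways around it (a sketch, assuming you can run commands as the hdfs superuser):

# Option 1: delete without going through the trash
hdfs dfs -rm -r -skipTrash /tmp/root/testdirectory

# Option 2: give root a home directory so trash-based deletes work from now on
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root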
Labels:
- Apache Hadoop
- HDFS
08-09-2021
10:19 AM
Hi experts, We are trying to copy Hive tables from one cluster to another to do some testing. What is the proper way of doing this? Is it possible to distcp the Hive tables at the HDFS level first and then run a Hive query to have those tables recognized by Hive on the target cluster? Any help is much appreciated. Thanks,
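One approach that keeps data and metadata together is Hive's EXPORT/IMPORT, with distcp moving the export directory between clusters. A rough sketch (table name, paths, and JDBC URLs below are hypothetical placeholders); for large partitioned tables, another common option is to distcp the warehouse data directly, re-create the DDL on the target, and run MSCK REPAIR TABLE to register the partitions:

# 1) On the source cluster: write the table's data plus metadata to an export dir
beeline -u "jdbc:hive2://src-hs2:10000/default" -e "EXPORT TABLE mydb.mytable TO '/tmp/export/mytable';"

# 2) Copy the export dir across clusters
hadoop distcp hdfs://src-nn:8020/tmp/export/mytable hdfs://dst-nn:8020/tmp/export/mytable

# 3) On the target cluster: re-create the table from the export
beeline -u "jdbc:hive2://dst-hs2:10000/default" -e "IMPORT TABLE mydb.mytable FROM '/tmp/export/mytable';"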
Labels:
- Apache Hadoop
- Apache Hive
- HDFS
06-28-2021
11:31 AM
@Scharan Thanks for the response. So I added this in the metainfo.xml:

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      ...
      <quickLinksConfigurations-dir>quicklinks</quickLinksConfigurations-dir>
      <quickLinksConfigurations>
        <quickLinksConfiguration>
          <fileName>quicklinks.json</fileName>
          <default>true</default>
        </quickLinksConfiguration>
      </quickLinksConfigurations>
    </service>
  </services>
</metainfo>

And this is the quicklinks.json file:

{
  "name": "default",
  "description": "default quick links configuration",
  "configuration": {
    "protocol": {
      "type": "https",
      "checks": [
        {
          "property": "dfs.http.policy",
          "desired": "HTTPS_ONLY",
          "site": "hdfs-site"
        }
      ]
    },
    "links": [
      {
        "name": "namenode_ui",
        "label": "NameNode UI",
        "url": "%@://%@:%@",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_logs",
        "label": "NameNode Logs",
        "url": "%@://%@:%@/logs",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_jmx",
        "label": "NameNode JMX",
        "url": "%@://%@:%@/jmx",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "Thread Stacks",
        "label": "Thread Stacks",
        "url": "%@://%@:%@/stacks",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      }
    ]
  }
}

I have restarted ambari-server; however, I still do not see the quick links in the Ambari UI. Any help is much appreciated. Thanks,
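A few things worth verifying, since a misplaced or malformed quicklinks.json tends to fail silently: the quicklinks directory must sit next to the metainfo.xml that references it, the JSON must parse cleanly, and the browser cache can hide the change after the restart. A quick check on the Ambari server host (the stack name and version below are placeholders for your own stack path):

# quicklinks/ must live alongside the service's metainfo.xml, e.g.:
ls /var/lib/ambari-server/resources/stacks/<STACK>/<VERSION>/services/HDFS/quicklinks/quicklinks.json

# Confirm the JSON actually parses
python -m json.tool /var/lib/ambari-server/resources/stacks/<STACK>/<VERSION>/services/HDFS/quicklinks/quicklinks.json

# Then restart Ambari and hard-refresh the UI
ambari-server restart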
06-25-2021
04:33 PM
Hi experts, I have deployed a new cluster, and our dev and prod clusters currently have quick links for HDFS. How do I add a quick link for HDFS in Ambari? Which metainfo.xml file do I modify to add the quick links, and where is that file located? Thanks,
Labels:
- Apache Ambari
- Apache Hadoop
06-03-2021
03:54 PM
Also, SQuirreL seems to be connecting to the dev cluster; it just times out when running a query such as "show databases". If SQuirreL stays connected for a long time, I noticed that the query will eventually return results instead of timing out. Per the Cloudera docs (https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html#concept_jsw_bnc_rp), a minimum of 4 dedicated cores is recommended for HS2 and 4 more for the Hive Metastore. The server that hosts HS2 and the Metastore has a total of only 8 cores. Could this be a reason for the performance issue? Any help on this is much appreciated. Thanks,
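One way to tell whether the slowness is in SQuirreL/JDBC or on the server side is to time the same statement from Beeline on the HiveServer2 host (host and port below are placeholders): if Beeline is equally slow, the bottleneck is HS2 or the Metastore rather than the client, and the undersized core count could well matter.

# Time the exact statement that times out in SQuirreL
time beeline -u "jdbc:hive2://hs2-host:10000/default" -e "show databases;"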