Member since: 05-22-2016
Posts: 32
Kudos Received: 4
Solutions: 0
07-16-2018 01:21 PM
Also, if I run the distcp command on this, I simply get a "snapshot not found" error. Providing the details below (this is a different path):

2018-07-14 08:14:30 INFO Initiating replication for the directory: /sas/data/prod/ifrs9/data
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
18/07/14 08:14:34 WARN tools.OptionsParser: -delete and -diff are mutually exclusive. The -delete option will be ignored.
18/07/14 08:14:35 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=true, overwrite=false, skipCRC=true, blocking=true, numListstatusThreads=0, maxMaps=100, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[REPLICATION, BLOCKSIZE, USER, GROUP, PERMISSION], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[hdfs://HDP05/sas/data/prod/ifrs9/data], targetPath=hdfs://HDP39/sas/data/prod/ifrs9/data, targetPathExists=true, filtersFile='null'}
18/07/14 08:14:36 INFO impl.TimelineClientImpl: Timeline service address: http://<<RMhost>>:8188/ws/v1/timeline/
18/07/14 08:14:36 WARN retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport over <<HDP39NNhost>>/1<<HDP39 NN>>:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.SnapshotException): Cannot find the snapshot of directory /sas/data/prod/ifrs9/data with name snapshot_sasdataprodifrs9data_201803021147
    at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.getSnapshotByName(DirectorySnapshottableFeature.java:285)
    at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.computeDiff(DirectorySnapshottableFeature.java:257)
    at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:372)
    at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.getSnapshotDiffReport(FSDirSnapshotOp.java:155)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:7688)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getSnapshotDiffReport(NameNodeRpcServer.java:1792)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolServerSideTranslatorPB.java:1149)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)
    at org.apache.hadoop.ipc.Client.call(Client.java:1455)
    at org.apache.hadoop.ipc.Client.call(Client.java:1392)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.getSnapshotDiffReport(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getSnapshotDiffReport(ClientNamenodeProtocolTranslatorPB.java:1107)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getSnapshotDiffReport(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getSnapshotDiffReport(DFSClient.java:2791)
    at org.apache.hadoop.hdfs.DistributedFileSystem$38.doCall(DistributedFileSystem.java:1816)
    at org.apache.hadoop.hdfs.DistributedFileSystem$38.doCall(DistributedFileSystem.java:1812)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:1812)
    at org.apache.hadoop.tools.DistCpSync.checkNoChange(DistCpSync.java:252)
    at org.apache.hadoop.tools.DistCpSync.preSyncCheck(DistCpSync.java:92)
    at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:124)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:179)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:127)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:461)
18/07/14 08:14:36 WARN tools.DistCp: Failed to compute snapshot diff on hdfs://HDP39/sas/data/prod/ifrs9/data
org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot find the snapshot of directory /sas/data/prod/ifrs9/data with name snapshot_sasdataprodifrs9data_201803021147
    at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.getSnapshotByName(DirectorySnapshottableFeature.java:285)
    at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.computeDiff(DirectorySnapshottableFeature.java:257)
    at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.diff(SnapshotManager.java:372)
    at org.apache.hadoop.hdfs.server.namenode.FSDirSnapshotOp.getSnapshotDiffReport(FSDirSnapshotOp.java:155)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getSnapshotDiffReport(FSNamesystem.java:7688)
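For reference, an incremental distcp with -update -diff only succeeds when the "from" snapshot (here snapshot_sasdataprodifrs9data_201803021147) still exists on both source and target and the target has not changed since it was taken. A minimal sketch of re-establishing a common baseline, using hypothetical snapshot names (s_base, s_new):

# full copy once, then take matching snapshots on both sides
$ hdfs dfs -createSnapshot hdfs://HDP05/sas/data/prod/ifrs9/data s_base
$ hadoop distcp -update hdfs://HDP05/sas/data/prod/ifrs9/data hdfs://HDP39/sas/data/prod/ifrs9/data
$ hdfs dfs -createSnapshot hdfs://HDP39/sas/data/prod/ifrs9/data s_base

# subsequent incremental runs diff against the shared baseline
$ hdfs dfs -createSnapshot hdfs://HDP05/sas/data/prod/ifrs9/data s_new
$ hadoop distcp -update -diff s_base s_new hdfs://HDP05/sas/data/prod/ifrs9/data hdfs://HDP39/sas/data/prod/ifrs9/data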
07-16-2018 01:01 PM
Sure, I am able to run HDFS commands against PROD (HDP05), like a list, from the DR (HDP39) cluster, but snapshotDiff fails. Here is the stack trace. The user hdp05-drmin is a superuser on both clusters.

Below is the snapshot command that fails:

[hdp05-drmin ~]$ hdfs snapshotDiff hdfs://HDP05/data/IFRS9/prod snapshot_dataIFRS9prod_201807140834 snapshot_dataIFRS9prod_201807140821
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://HDP05/data/IFRS9/prod, expected: hdfs://HDP39
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:651)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:108)
at org.apache.hadoop.hdfs.DistributedFileSystem$38.doCall(DistributedFileSystem.java:1816)
at org.apache.hadoop.hdfs.DistributedFileSystem$38.doCall(DistributedFileSystem.java:1812)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getSnapshotDiffReport(DistributedFileSystem.java:1812)
at org.apache.hadoop.hdfs.tools.snapshot.SnapshotDiff.run(SnapshotDiff.java:88)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.snapshot.SnapshotDiff.main(SnapshotDiff.java:100)

Below, the same list command works fine:

[hdp05-drmin ~]$ hdfs dfs -ls hdfs://HDP05/data/IFRS9/prod
Found 9 items
drwxr-xr-x - HDP05-ss-uk HDP05-ss-uk-group 0 2018-07-13 13:03 hdfs://HDP05/data/IFRS9/prod/DisAgg
drwxr-xr-x - HDP05-ss-uk HDP05-ss-uk-group 0 2017-10-11 13:06 hdfs://HDP05/data/IFRS9/prod/IFRS9
drwxr-xr-x - HDP05-ss-uk HDP05-ss-uk-group 0 2018-07-03 15:25 hdfs://HDP05/data/IFRS9/prod/LIVE
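Worth noting: hdfs snapshotDiff builds its FileSystem from fs.defaultFS in the local client configuration, which on the DR node is hdfs://HDP39, hence the "Wrong FS" error even though the path itself is valid. Since the tool runs through ToolRunner (visible in the stack trace), one possible workaround, assuming the Hadoop generic options are honored on this version, is to override the default FS for the single invocation and pass a scheme-less path:

$ hdfs snapshotDiff -D fs.defaultFS=hdfs://HDP05 /data/IFRS9/prod snapshot_dataIFRS9prod_201807140834 snapshot_dataIFRS9prod_201807140821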
07-16-2018 12:29 PM
@Jitendra Yadav, do you have any insights on this?
07-16-2018 12:11 PM
Guys, there is a challenge I am facing: when I run snapshotDiff from a remote cluster, it fails with a "snapshot not found" error even though the snapshot is available. Do we have any solution for this? We built a DR cluster and run distcp from the DR side to utilize the DR resources instead of overloading PROD. Any suggestions on how this can be achieved?
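One quick sanity check before anything else: list the snapshots that actually exist on each side, since "snapshot not found" usually means the name differs or the snapshot was deleted on one cluster. The .snapshot listing accepts full URIs, just like the -ls command shown elsewhere in this thread:

$ hdfs dfs -ls hdfs://HDP05/sas/data/prod/ifrs9/data/.snapshot
$ hdfs dfs -ls hdfs://HDP39/sas/data/prod/ifrs9/data/.snapshot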
Tags: HDFS
Labels: Apache Hadoop
09-20-2016 03:13 PM
Just adding my two cents, as I faced the same issue today and was able to resolve it. After downloading the JAR file, I copied it to the Linux box using WinSCP with the transfer setting left at its default instead of binary, which caused the "could not load DB drivers" error. After I copied it again in binary mode, it worked fine. Please try it.
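In the same spirit, a quick way to tell whether a JAR survived the transfer intact (the driver file name below is hypothetical):

# compare against the checksum computed on the source machine
$ md5sum mysql-connector-java-5.1.38-bin.jar
# a text-mode transfer corrupts the zip structure, which this test flags immediately
$ unzip -t mysql-connector-java-5.1.38-bin.jar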
07-19-2016 01:58 PM
Hey Vishal, where did you add these properties?
06-20-2016 05:10 PM
Just after this discussion I noticed I am using Ambari 1.7, where there is no Spark installation.
06-16-2016 10:09 PM
Hi guys, for some reason I installed Spark on my cluster manually, and then realized this can also be done through Ambari. Since Spark is already running, I want to add that component to Ambari. How can we do this without uninstalling Spark?
Labels: Apache Ambari
06-16-2016 11:09 AM
🙂 Not sure what solution you are suggesting. Instead of exiting and restarting the Hive CLI with the command, I would again have to edit this file every time to get a new scratchdir and restart the Hive CLI.
06-16-2016 10:50 AM
Hope you got my point: basically, I have multiple encryption zones (EZs), so each time, depending on the table, I have to change the scratch directory. Within the Hive CLI, if I set the scratch dir it is not taken as the temp directory, so every time I have to exit and start the Hive shell with a new scratchdir via --hiveconf.
06-15-2016 06:23 PM
1 Kudo
My cluster has KMS, so whenever I run a load or a data manipulation in Hive I get an error saying that intermediate files can't be moved from a non-EZ to an EZ. I set the scratchdir in the Hive shell, but it still uses /tmp/hive/... instead of the latest scratch dir. As a workaround, launching the Hive shell with hive --hiveconf hive.exec.scratchdir works fine. Please suggest what the issue is. I am using Hive 0.14.0.2.2.4.10-3.
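For context, the workaround mentioned above looks like the following, with a hypothetical scratch path inside the target encryption zone. Setting hive.exec.scratchdir from within a running session does not help here because the session's scratch directory is created at startup:

$ hive --hiveconf hive.exec.scratchdir=/data/ez1/tmp/hive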
Tags: Data Processing, Hive
Labels: Apache Hive
05-27-2016 10:05 AM
Thanks for the info, but it looks like this is not informative for debugging the issue. I got an update: we can't change the logger to debug mode once HiveServer2 has started; it seems this feature is available only from HDP 2.3 onwards. Anyway, thanks for looking into this.
05-26-2016 10:01 AM
Hi team, is anyone aware of how to get into debug mode for the Beeline command shell? In Hive we use hive -hiveconf hive.root.logger=DEBUG,console; do we have anything similar for Beeline? The same option is not working in Beeline. I don't want to restart HiveServer2, as this is production and a stop/start will impact other users.
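For comparison, the CLI trick referenced above, plus a note on why it does not carry over: Beeline is a thin JDBC client, so the DEBUG output in question is produced by the HiveServer2 JVM, not by the client. Beeline's own verbose flag only surfaces JDBC-level chatter (host placeholder below is illustrative):

# client-side DEBUG works for the Hive CLI because it runs queries in-process
$ hive -hiveconf hive.root.logger=DEBUG,console
# beeline can be made chattier, but this is not the server's DEBUG log
$ beeline --verbose=true -u jdbc:hive2://<hs2-host>:10000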
Tags: beeline, Hadoop Core
Labels: Apache Hive
05-22-2016 10:52 PM
Can you please check if you are using KMS? In those scenarios you can't copy data from one EZ to another EZ, which gives the same kind of error. To avoid it, set the scratch directory, which will resolve your issue.
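If it helps to confirm the setup, the encryption zones on a cluster can be listed directly (requires HDFS superuser/KMS privileges):

$ hdfs crypto -listZones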
05-22-2016 10:46 PM
Hi guys, I need help. I am currently using HDP 2.2. When I run join queries in the Hive shell with the auto-convert mapjoin configuration set to true, I get results, but the same query fails when I run it in Beeline. When I turn off mapjoin, it works fine. Does anyone know what the reason could be?
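A minimal way to narrow this down, assuming the map-side join conversion is the culprit: toggle it per-session in Beeline and compare the two runs.

# inside the beeline session, disable the automatic mapjoin conversion and re-run the join
0: jdbc:hive2://...> set hive.auto.convert.join=false;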
Labels: Apache Hive