Created 08-27-2020 03:14 AM
After upgrading to CDH 6.3.3, one MapReduce jar started failing with the error below:
Error: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/regionserver/UnexpectedStateException
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:469)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/regionserver/UnexpectedStateException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1600)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:1154)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2971)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3305)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42190)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
... 3 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.UnexpectedStateException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 11 more
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:99)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:89)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:349)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344)
at org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242)
at org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58)
at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:387)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:361)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
While recompiling the code with CDH 6.3.3 compatible dependencies, I had to change the import below:
//import org.apache.hadoop.hbase.regionserver.UnexpectedStateException;
import org.apache.hadoop.hbase.exceptions.UnexpectedStateException;
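For completeness, here is one way to confirm where the class now lives in the CDH 6 HBase jars (just a sketch; /opt/cloudera/parcels/CDH/lib/hbase/lib is an assumed parcel path and may differ on your cluster):

# Scan the parcel's HBase jars for either the old or the new package of the class
for j in /opt/cloudera/parcels/CDH/lib/hbase/lib/*.jar; do
  if unzip -l "$j" 2>/dev/null | grep -q 'UnexpectedStateException'; then
    echo "$j:"
    unzip -l "$j" | grep 'UnexpectedStateException'
  fi
done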
However, at execution time the jar still fails with the NoClassDefFoundError.
I am not sure why the runtime is still pointing to the CDH 5.12 libraries.
I have rebuilt the Maven jar multiple times.
I also checked the classpath, and it points to the CDH 6.3.3 jars.
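A few additional checks that may narrow down where the 5.12 classes come from at run time (a sketch; the job jar name below is a placeholder, not taken from this thread):

# Does the rebuilt job jar itself bundle HBase classes (which could shadow the cluster jars)?
unzip -l your-mr-job.jar | grep -i 'org/apache/hadoop/hbase'

# Which HBase version does Maven actually resolve into the build?
mvn dependency:tree -Dincludes=org.apache.hbase

# Which HBase jars does the cluster add to the MapReduce classpath?
hbase mapredcp | tr ':' '\n' | grep hbase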
Created 08-27-2020 05:08 AM
Hello @Suyog1981,
thank you for reaching out to the Community. I understand that your issue is:
after upgrading from CDH 5.12 to CDH 6.3.3, an MR2 job that connects to HBase is failing, and it seems that the runtime is still pointing to CDH 5.12.
Can you please check whether any of the links under /etc/alternatives or /var/lib/alternatives are still pointing to CDH 5.12 paths on the node where the container of the MR job is failing? E.g. use:
grep CDH-5.12 * | awk -F ':' '{print $1}'
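For example, something along these lines should surface any leftover CDH 5.12 references (a sketch assuming the standard alternatives locations):

# Symlinks under /etc/alternatives that still resolve to CDH 5.12 paths
ls -l /etc/alternatives | grep 'CDH-5.12'

# Alternatives state files that still mention CDH 5.12 paths
grep -l 'CDH-5.12' /var/lib/alternatives/* 2>/dev/null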
Thank you:
Ferenc
Ferenc Erdelyi, Technical Solutions Manager