Member since: 12-11-2016
Posts: 5
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2257 | 01-01-2017 04:18 AM |
06-30-2017
07:14 AM
Hi, we are facing errors while trying to import data from an Oracle database configured with wallets. The following sqoop command is being executed:

sqoop import --libjars /usr/local/sqoop/lib/oraclepki.jar -Djavax.net.ssl.trustStore=/home/hadoop/app/Wallets/client_wallet/ewallet.p12 -Djavax.net.ssl.trustStoreType=PKCS12 -Djavax.net.ssl.trustStorePassword=WalletPasswd123 --connect jdbc:oracle:thin:@testssl --username neon_main --password bullet --table APP_INSTANCE --verbose

Here, testssl is the wallet name configured in listener.ora. The error thrown is:

17/06/30 12:38:01 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin
17/06/30 12:38:01 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
17/06/30 12:38:01 INFO manager.SqlManager: Using default fetchSize of 1000
17/06/30 12:38:01 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@1623b78d
17/06/30 12:38:01 INFO tool.CodeGenTool: Beginning code generation
17/06/30 12:38:01 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM APP_INSTANCE t WHERE 1=0
17/06/30 12:38:01 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM APP_INSTANCE t WHERE 1=0
17/06/30 12:38:01 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@testssl, using username: neon_main
17/06/30 12:38:01 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.
17/06/30 12:38:02 ERROR manager.SqlManager: Error executing statement: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:254)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:327)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:744)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:767)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:270)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:241)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:227)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1833)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1645)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:439)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:454)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:693)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:251)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340)
... 25 more
Caused by: oracle.net.ns.NetException: The method specified in wallet_location is not supported. Location: /home/hadoop/app/Wallets/client_wallet
at oracle.net.nt.CustomSSLSocketFactory.getSSLSocketFactory(CustomSSLSocketFactory.java:219)
at oracle.net.nt.TcpsNTAdapter.connect(TcpsNTAdapter.java:119)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:405)
... 30 more

The steps followed to configure SSL with the Oracle wallet are described below. We configured SSL in Oracle using the instructions in the following link (the same instructions were followed for RHEL): https://drive.google.com/open?id=0B5rPtj-mVRvPbEhFVlBNa0ZyRTg
1) Created wallets for the server and the client.
2) Exchanged the wallets between the server and the client.
3) Verified that we can connect to Oracle over SSL through the JDBC thin client.
The contents of sqlnet.ora, tnsnames.ora, and listener.ora, together with the Java program used for the JDBC test and its associated files, are attached (jdbc-test.zip). Note: the IPs were truncated before attaching the file. I am able to successfully connect to the database using the JDBC client. Any help would be much appreciated.
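For reference, a minimal standalone check of the wallet/SSL setup through the JDBC thin driver might look like the sketch below. This is not the attached jdbc-test program; the wallet path, password, TNS alias, and credentials are taken from the sqoop command above, and the oracle.net.tns_admin location is an assumption.

```java
// Minimal JDBC-over-TCPS check (a sketch, not the attached program).
// Assumes the Oracle JDBC driver jar is on the classpath and that the
// "testssl" alias resolves via tnsnames.ora.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WalletConnectTest {
    public static void main(String[] args) throws Exception {
        // Same trust-store properties that are passed to sqoop via -D
        System.setProperty("javax.net.ssl.trustStore",
                "/home/hadoop/app/Wallets/client_wallet/ewallet.p12");
        System.setProperty("javax.net.ssl.trustStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.trustStorePassword", "WalletPasswd123");
        // oracle.net.tns_admin must point at the directory holding tnsnames.ora;
        // the client wallet directory is assumed here.
        System.setProperty("oracle.net.tns_admin",
                "/home/hadoop/app/Wallets/client_wallet");

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@testssl", "neon_main", "bullet");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println("Connected over TCPS, got: " + rs.getInt(1));
            }
        }
    }
}
```

If a standalone test like this succeeds while the sqoop command fails against the same wallet, comparing the JVM system properties and the Oracle PKI jars actually visible to the Sqoop process is a reasonable next step.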
Labels:
- Apache Sqoop
05-25-2017
06:07 AM
Does it require a restart after changing the time and the iptables status?
05-04-2017
08:48 AM
Almost all region servers in our production cluster are dying with a lease expired exception followed by a YouAreDeadException. The HBase version in use is 1.1.2. The logs are given below:

2017-04-28 11:25:10,793 ERROR [sync.2] wal.FSHLog: Error syncing, request close of wal
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hbase/data/WALs/node5,16020,1487446061404-splitting/node5%2C16020%2C1487446061404.default.1493358867035 (inode 4423489): File is not open for writing. [Lease. Holder: DFSClient_NONMAPREDUCE_1688735303_1, pendingcreates: 3]
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3445)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3347)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:759)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:515)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy19.getAdditionalDatanode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:443)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy20.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy21.getAdditionalDatanode(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1010)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1165)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)
2017-04-28 11:25:10,793 WARN [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":41472,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"162.168.6.235:60011","starttimems":1493358869320,"queuetimems":0,"class":"HRegionServer","responsesize":416,"method":"Scan"}
2017-04-28 11:25:10,793 WARN [B.defaultRpcServer.handler=16,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":41495,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"162.168.6.234:48579","starttimems":1493358869298,"queuetimems":0,"class":"HRegionServer","responsesize":416,"method":"Scan"}
2017-04-28 11:25:10,795 WARN [B.defaultRpcServer.handler=2,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":41695,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"162.168.6.235:60011","starttimems":1493358869100,"queuetimems":0,"class":"HRegionServer","responsesize":416,"method":"Scan"}
2017-04-28 11:25:10,802 WARN [sync.4] hdfs.DFSClient: Error while syncing
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hbase/data/WALs/node5,16020,1487446061404-splitting/node5%2C16020%2C1487446061404.default.1493358867035 (inode 4423489): File is not open for writing. [Lease. Holder: DFSClient_NONMAPREDUCE_1688735303_1, pendingcreates: 3]
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3445)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3347)
2017-04-28 11:25:10,965 FATAL [regionserver/node5/162.168.6.235:16020] regionserver.HRegionServer: ABORTING region server node5,16020,1487446061404: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing node5,16020,1487446061404 as dead server
at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:413)
at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:318)
at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:295)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:744)
2017-04-28 11:25:10,951 WARN [B.defaultRpcServer.handler=29,queue=2,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":41653,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"162.168.6.235:59900","starttimems":1493358869298,"queuetimems":0,"class":"HRegionServer","responsesize":416,"method":"Scan"}
2017-04-28 11:25:10,951 WARN [B.defaultRpcServer.handler=19,queue=1,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":41820,"call":"Get(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest)","client":"162.168.6.229:52499","starttimems":1493358869131,"queuetimems":0,"class":"HRegionServer","responsesize":3369,"method":"Get"}
2017-04-28 11:25:10,965 FATAL [regionserver/node5/162.168.6.235:16020] regionserver.HRegionServer: ABORTING region server node5,16020,1487446061404: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing node5,16020,1487446061404 as dead server
at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:413)
at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:318)
at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:295)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:744)
org.apache.hadoop.hbase.YouAreDeadException: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing node5,16020,1487446061404 as dead server
at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:413)
at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:318)
at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:295)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:744)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:325)
at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1141)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:949)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.YouAreDeadException): org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected; currently processing node5,16020,1487446061404 as dead server
at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:413)
at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:318)
at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerReport(MasterRpcServices.java:295)
at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8617)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
Tags:
- Ambari
- Data Processing
- HBase
- regionserver
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
Labels:
- Apache Ambari
- Apache HBase
01-01-2017
04:18 AM
1 Kudo
We got this exception after migrating a table from another cluster. The root cause was a duplicate region created with the same start key and end key.
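Purely as an illustration (not part of the original answer), a small check along these lines can list a table's regions with the HBase 1.1 Java client and flag pairs that share both the start key and the end key; the table name below is a placeholder.

```java
// Hypothetical sketch: list the regions of a table and flag pairs that share
// both start and end keys (the situation described above). HBase 1.1.x API.
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class FindDuplicateRegions {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // "MIGRATED_TABLE" is a placeholder for the table that was moved over.
            List<HRegionInfo> regions =
                    admin.getTableRegions(TableName.valueOf("MIGRATED_TABLE"));
            for (int i = 0; i < regions.size(); i++) {
                for (int j = i + 1; j < regions.size(); j++) {
                    HRegionInfo a = regions.get(i);
                    HRegionInfo b = regions.get(j);
                    if (Bytes.equals(a.getStartKey(), b.getStartKey())
                            && Bytes.equals(a.getEndKey(), b.getEndKey())) {
                        System.out.println("Duplicate key range: "
                                + a.getRegionNameAsString() + " and "
                                + b.getRegionNameAsString());
                    }
                }
            }
        }
    }
}
```

The hbase hbck tool also reports overlapping regions; the snippet above only makes the specific start/end-key comparison explicit.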