Member since: 05-12-2016
Posts: 9
Kudos Received: 7
Solutions: 1

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 13436 | 05-12-2016 06:06 AM |
05-25-2016 02:41 AM

Hi Ben,

The Oracle DB is hosted in a VM on my laptop. I have disabled the Windows firewall on both the host and the guest Windows machines. Note: the DB IP address changes because of DHCP.

I think the server is listening, since I'm able to telnet to port 1521:

[bigdata@node01 yum.repos.d]$ telnet 10.7.48.236 1521
Trying 10.7.48.236...
Connected to 10.7.48.236.
Escape character is '^]'.

I have also installed the netcat RPM, and I got no output from the command you told me:

[bigdata@node01 ~]$ nc -z 10.7.48.236 1521
[bigdata@node01 ~]$

After disabling the host computer's firewall, Sqoop was able to connect, in that it showed me errors like wrong instance and table not found. After I corrected some mistakes in my syntax, Sqoop is able to connect, check that the table exists, and launch the MR job, but then it fails again with the following connection error:

[admin@node01 bigdata]$ sqoop import --hive-import --create-hive-table --hive-home /user/hive/warehouse --connect jdbc:oracle:thin:@10.7.48.236:1521:xe --table TEST --username SYSTEM --password qwerty -m 1
Warning: /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/05/25 11:37:34 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
16/05/25 11:37:34 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/05/25 11:37:34 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
16/05/25 11:37:34 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
16/05/25 11:37:35 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/05/25 11:37:35 INFO manager.SqlManager: Using default fetchSize of 1000
16/05/25 11:37:35 INFO tool.CodeGenTool: Beginning code generation
16/05/25 11:37:46 INFO manager.OracleManager: Time zone has been set to GMT
16/05/25 11:37:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM TEST t WHERE 1=0
16/05/25 11:37:47 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-admin/compile/d50ad448789b3035b2559cddbbca4411/TEST.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/05/25 11:37:48 ERROR orm.CompilationManager: Could not make directory: /home/bigdata/.
16/05/25 11:37:48 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-admin/compile/d50ad448789b3035b2559cddbbca4411/TEST.jar
16/05/25 11:37:48 INFO manager.OracleManager: Time zone has been set to GMT
16/05/25 11:37:48 INFO manager.OracleManager: Time zone has been set to GMT
16/05/25 11:37:48 INFO mapreduce.ImportJobBase: Beginning import of TEST
16/05/25 11:37:49 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/05/25 11:37:49 INFO manager.OracleManager: Time zone has been set to GMT
16/05/25 11:37:49 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/05/25 11:37:50 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 413 for admin on ha-hdfs:nameservice1
16/05/25 11:37:50 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 413 for admin)
16/05/25 11:37:50 WARN token.Token: Cannot find class for token kind kms-dt
16/05/25 11:37:50 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: kms-dt, Service: 192.168.0.1:16000, Ident: 00 05 61 64 6d 69 6e 04 79 61 72 6e 00 8a 01 54 e7 46 ff a9 8a 01 55 0b 53 83 a9 8f 9c 07
16/05/25 11:37:50 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm50
16/05/25 11:37:52 INFO db.DBInputFormat: Using read commited transaction isolation
16/05/25 11:37:53 INFO mapreduce.JobSubmitter: number of splits:1
16/05/25 11:37:53 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1464086974400_0037
16/05/25 11:37:53 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 413 for admin)
16/05/25 11:37:53 WARN token.Token: Cannot find class for token kind kms-dt
16/05/25 11:37:53 WARN token.Token: Cannot find class for token kind kms-dt
Kind: kms-dt, Service: 192.168.0.1:16000, Ident: 00 05 61 64 6d 69 6e 04 79 61 72 6e 00 8a 01 54 e7 46 ff a9 8a 01 55 0b 53 83 a9 8f 9c 07
16/05/25 11:37:54 INFO impl.YarnClientImpl: Submitted application application_1464086974400_0037
16/05/25 11:37:54 INFO mapreduce.Job: The url to track the job: http://node03.giss.com:8088/proxy/application_1464086974400_0037/
16/05/25 11:37:54 INFO mapreduce.Job: Running job: job_1464086974400_0037
16/05/25 11:38:05 INFO mapreduce.Job: Job job_1464086974400_0037 running in uber mode : false
16/05/25 11:38:05 INFO mapreduce.Job: map 0% reduce 0%
16/05/25 11:38:14 INFO mapreduce.Job: Task Id : attempt_1464086974400_0037_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:220)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:165)
... 9 more
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:254)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
... 10 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:439)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:454)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:693)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:251)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340)
... 18 more
Caused by: java.net.ConnectException: Network is unreachable
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:149)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:405)
... 23 more
16/05/25 11:38:25 INFO mapreduce.Job: Task Id : attempt_1464086974400_0037_m_000000_1, Status : FAILED
16/05/25 11:38:30 INFO mapreduce.Job: Task Id : attempt_1464086974400_0037_m_000000_2, Status : FAILED
(both retries fail with the same "The Network Adapter could not establish the connection" / "Network is unreachable" stack trace shown above)
16/05/25 11:38:36 INFO mapreduce.Job: map 100% reduce 0%
16/05/25 11:38:36 INFO mapreduce.Job: Job job_1464086974400_0037 failed with state FAILED due to: Task failed task_1464086974400_0037_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/05/25 11:38:37 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=23722
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=23722
Total vcore-seconds taken by all map tasks=23722
Total megabyte-seconds taken by all map tasks=24291328
16/05/25 11:38:37 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
16/05/25 11:38:37 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 47.1854 seconds (0 bytes/sec)
16/05/25 11:38:37 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/05/25 11:38:37 INFO mapreduce.ImportJobBase: Retrieved 0 records.
16/05/25 11:38:37 ERROR tool.ImportTool: Error during import: Import job failed!
[admin@node01 bigdata]$
Any idea what is going on? Thank you in advance.
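Edit: since telnet succeeded only from node01 but YARN may schedule the map task on any NodeManager, a quick loop like the one below would confirm whether every worker can actually reach the listener. This is a minimal sketch; the node02 through node06 hostnames are assumptions about my cluster, and it assumes passwordless SSH and nc on each node.

# Check Oracle listener reachability from every cluster node,
# not just the node where the Sqoop client runs.
for host in node01 node02 node03 node04 node05 node06; do
    echo -n "$host: "
    ssh "$host" "nc -z -w 5 10.7.48.236 1521 && echo open || echo unreachable"
done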
05-24-2016 08:05 AM

Hello,

I have a six-node cluster running Cloudera CDH 5.7 (installed with parcels). I have a small Oracle database just for testing purposes, and I wanted to test Sqoop by importing a table, but I always get connection errors.

I've installed the connector in the proper location:

[bigdata@node01 ~]$ ls -lha /var/lib/sqoop/lib
total 2.7M
drwxr-xr-x 2 sqoop sqoop 23 May 24 11:25 .
drwxr-xr-x 3 sqoop sqoop 86 May 24 11:25 ..
-rwxr-xr-x 1 sqoop sqoop 2.7M May 23 11:24 ojdbc6.jar

I also tried installing it at "/var/lib/sqoop", but the error is the same whenever I try any Sqoop statement, for example:

sqoop import --hive-import --connect jdbc:oracle:thin:@//10.7.48.240:1521/orcl --table test --username <username> --password <password> --verbose

I have also tried other syntax such as jdbc:oracle:thin:@10.7.48.240:1521:orcl. I checked the network and it seems good, and the firewalld service and SELinux are down on the Oracle host.

[bigdata@node01 ~]$ ping -c3 10.7.48.240
PING 10.7.48.240 (10.7.48.240) 56(84) bytes of data.
64 bytes from 10.7.48.240: icmp_seq=1 ttl=62 time=1.32 ms
64 bytes from 10.7.48.240: icmp_seq=2 ttl=62 time=0.930 ms
64 bytes from 10.7.48.240: icmp_seq=3 ttl=62 time=0.791 ms
--- 10.7.48.240 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.791/1.016/1.329/0.230 ms

I always get the following error:

[bigdata@node01 ~]$ sqoop import --hive-import --connect jdbc:oracle:thin:@//10.7.48.240:1521/orcl --table test --username bigdata --password qwerty --verbose
Warning: /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/05/24 17:00:53 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
16/05/24 17:00:53 DEBUG tool.BaseSqoopTool: Enabled debug logging.
16/05/24 17:00:53 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/05/24 17:00:53 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
16/05/24 17:00:53 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
16/05/24 17:00:53 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
16/05/24 17:00:53 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
16/05/24 17:00:53 DEBUG sqoop.ConnFactory: Trying ManagerFactory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
16/05/24 17:00:53 DEBUG oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop can be called by Sqoop!
16/05/24 17:00:53 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
16/05/24 17:00:53 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
16/05/24 17:00:53 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:oracle:thin:@
16/05/24 17:00:53 DEBUG manager.OracleManager$ConnCache: Instantiated new connection cache.
16/05/24 17:00:53 INFO manager.SqlManager: Using default fetchSize of 1000
16/05/24 17:00:53 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.OracleManager@530f4c5e
16/05/24 17:00:53 INFO tool.CodeGenTool: Beginning code generation
16/05/24 17:00:53 DEBUG manager.OracleManager: Using column names query: SELECT t.* FROM test t WHERE 1=0
16/05/24 17:00:53 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM test t WHERE 1=0
16/05/24 17:00:53 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@//10.7.48.240:1521/orcl, using username: bigdata
16/05/24 17:00:53 DEBUG manager.OracleManager: No connection paramenters specified. Using regular API for making connection.
16/05/24 17:00:53 ERROR manager.SqlManager: Error executing statement: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:254)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:528)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:327)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:289)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:260)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:246)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:327)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1846)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1646)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:439)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:454)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:693)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:251)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1140)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:340)
... 25 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:149)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:405)
... 30 more
16/05/24 17:00:53 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1652)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

Can anyone help with this?
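Edit: a trivial query over the same JDBC URL can separate the connectivity problem from the import logic itself; "Connection refused" at that stage means the listener is not reachable from this host at all. A minimal sketch (credentials are placeholders, and it assumes Sqoop 1's eval tool on this CDH version):

# Run a single statement through the same JDBC URL; -P prompts for the password.
sqoop eval --connect jdbc:oracle:thin:@//10.7.48.240:1521/orcl \
  --username bigdata -P \
  --query "SELECT 1 FROM dual"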
05-12-2016 06:06 AM
4 Kudos
I finally solved it. The problem was that I had no role at all, and roles created by Hue didn't work for some reason. So I created an admin role via beeline:

1. Obtained a hive ticket:
kinit -k -t run/cloudera-scm-agent/process/218-hive-HIVESERVER2/hive.keytab hive/node01.test.com@TEST.COM
2. Connected to Hive with beeline using the hive keytab:
beeline> !connect jdbc:hive2://node01.test.com:10000/default;principal=hive/node01.test.com@TEST.COM
3. Created the role admin:
beeline> CREATE ROLE admin;
4. Granted privileges to the admin role:
GRANT ALL ON SERVER server1 TO ROLE admin WITH GRANT OPTION;
5. Assigned the role to a group:
GRANT ROLE admin TO GROUP administrators;

After these steps, all users within the group administrators are allowed to manage Hive privileges.
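In case it helps anyone, the result can be double-checked from the same connection with Sentry's SHOW statements. A minimal sketch, using the role and group names from the steps above:

# Verify the role exists, is granted to the group, and carries the server grant.
beeline -u "jdbc:hive2://node01.test.com:10000/default;principal=hive/node01.test.com@TEST.COM" \
  -e "SHOW ROLES; SHOW ROLE GRANT GROUP administrators; SHOW GRANT ROLE admin;"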
05-12-2016 12:36 AM
1 Kudo
Hello!

I have a six-node Cloudera 5.7 kerberized cluster, and I'm trying to manage Sentry roles with Hue, but when I try to manage them I can only see the databases. I have enabled Sentry with Cloudera Manager as shown in this documentation: http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_sentry_service.html

If I try to run a query I get the following error:

Your query has the following error(s):
Error while compiling statement: FAILED: SemanticException No valid privileges User hive does not have privileges for SWITCHDATABASE The required privileges: Server=server1->Db=*->Table=+->Column=*->action=insert;Server=server1->Db=*->Table=+->Column=*->action=select;

I tried granting SELECT and ALL permissions to the user group in the database, and the error persists. I am also still unable to grant permissions on tables. Any clue as to what may be happening? Thank you in advance!

Here is the configuration of my cluster (the kind of grant I tried is sketched after the configs below).

sentry-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
<property>
<name>sentry.service.server.rpc-address</name>
<value>node01.test.com</value>
</property>
<property>
<name>sentry.service.server.rpc-port</name>
<value>8038</value>
</property>
<property>
<name>sentry.service.server.principal</name>
<value>sentry/_HOST@test.COM</value>
</property>
<property>
<name>sentry.service.security.mode</name>
<value>kerberos</value>
</property>
<property>
<name>sentry.service.admin.group</name>
<value>hive,impala,hue</value>
</property>
<property>
<name>sentry.service.allow.connect</name>
<value>hive,impala,hue,hdfs</value>
</property>
<property>
<name>sentry.store.group.mapping</name>
<value>org.apache.sentry.provider.common.HadoopGroupMappingService</value>
</property>
<property>
<name>sentry.service.server.keytab</name>
<value>sentry.keytab</value>
</property>
<property>
<name>sentry.store.jdbc.url</name>
<value>jdbc:mysql://node01.test.com:3306/sentry?useUnicode=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
<name>sentry.store.jdbc.driver</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>sentry.store.jdbc.user</name>
<value>sentry</value>
</property>
<property>
<name>sentry.store.jdbc.password</name>
<value>********</value>
</property>
<property>
<name>cloudera.navigator.client.config</name>
<value>{{CMF_CONF_DIR}}/navigator.client.properties</value>
</property>
<property>
<name>hadoop.security.credential.provider.path</name>
<value>localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks</value>
</property>
</configuration>

hive-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://node01.test.com:9083</value>
</property>
<property>
<name>hive.metastore.client.socket.timeout</name>
<value>300</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.warehouse.subdir.inherit.perms</name>
<value>true</value>
</property>
<property>
<name>hive.log.explain.output</name>
<value>false</value>
</property>
<property>
<name>hive.auto.convert.join</name>
<value>true</value>
</property>
<property>
<name>hive.auto.convert.join.noconditionaltask.size</name>
<value>20971520</value>
</property>
<property>
<name>hive.optimize.bucketmapjoin.sortedmerge</name>
<value>false</value>
</property>
<property>
<name>hive.smbjoin.cache.rows</name>
<value>10000</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>-1</value>
</property>
<property>
<name>hive.exec.reducers.bytes.per.reducer</name>
<value>67108864</value>
</property>
<property>
<name>hive.exec.copyfile.maxsize</name>
<value>33554432</value>
</property>
<property>
<name>hive.exec.reducers.max</name>
<value>1099</value>
</property>
<property>
<name>hive.vectorized.groupby.checkinterval</name>
<value>4096</value>
</property>
<property>
<name>hive.vectorized.groupby.flush.percent</name>
<value>0.1</value>
</property>
<property>
<name>hive.compute.query.using.stats</name>
<value>false</value>
</property>
<property>
<name>hive.vectorized.execution.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.vectorized.execution.reduce.enabled</name>
<value>false</value>
</property>
<property>
<name>hive.merge.mapfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.mapredfiles</name>
<value>false</value>
</property>
<property>
<name>hive.cbo.enable</name>
<value>false</value>
</property>
<property>
<name>hive.fetch.task.conversion</name>
<value>minimal</value>
</property>
<property>
<name>hive.fetch.task.conversion.threshold</name>
<value>268435456</value>
</property>
<property>
<name>hive.limit.pushdown.memory.usage</name>
<value>0.1</value>
</property>
<property>
<name>hive.merge.sparkfiles</name>
<value>true</value>
</property>
<property>
<name>hive.merge.smallfiles.avgsize</name>
<value>16777216</value>
</property>
<property>
<name>hive.merge.size.per.task</name>
<value>268435456</value>
</property>
<property>
<name>hive.optimize.reducededuplication</name>
<value>true</value>
</property>
<property>
<name>hive.optimize.reducededuplication.min.reducer</name>
<value>4</value>
</property>
<property>
<name>hive.map.aggr</name>
<value>true</value>
</property>
<property>
<name>hive.map.aggr.hash.percentmemory</name>
<value>0.5</value>
</property>
<property>
<name>hive.optimize.sort.dynamic.partition</name>
<value>false</value>
</property>
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
<property>
<name>spark.executor.memory</name>
<value>912680550</value>
</property>
<property>
<name>spark.driver.memory</name>
<value>966367641</value>
</property>
<property>
<name>spark.executor.cores</name>
<value>1</value>
</property>
<property>
<name>spark.yarn.driver.memoryOverhead</name>
<value>102</value>
</property>
<property>
<name>spark.yarn.executor.memoryOverhead</name>
<value>153</value>
</property>
<property>
<name>spark.dynamicAllocation.enabled</name>
<value>true</value>
</property>
<property>
<name>spark.dynamicAllocation.initialExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.minExecutors</name>
<value>1</value>
</property>
<property>
<name>spark.dynamicAllocation.maxExecutors</name>
<value>2147483647</value>
</property>
<property>
<name>hive.metastore.execute.setugi</name>
<value>true</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>node01.test.com,node02.test.com,node03.test.com</value>
</property>
<property>
<name>hive.zookeeper.client.port</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>node01.test.com,node02.test.com,node03.test.com</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hive.zookeeper.namespace</name>
<value>hive_zookeeper_namespace_hive</value>
</property>
<property>
<name>hive.cluster.delegation.token.store.class</name>
<value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
</property>
<property>
<name>hive.server2.thrift.min.worker.threads</name>
<value>5</value>
</property>
<property>
<name>hive.server2.thrift.max.worker.threads</name>
<value>100</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.entity.capture.input.URI</name>
<value>true</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<property>
<name>hive.server2.session.check.interval</name>
<value>900000</value>
</property>
<property>
<name>hive.server2.idle.session.timeout</name>
<value>43200000</value>
</property>
<property>
<name>hive.server2.idle.session.timeout_check_operation</name>
<value>true</value>
</property>
<property>
<name>hive.server2.idle.operation.timeout</name>
<value>21600000</value>
</property>
<property>
<name>hive.server2.webui.host</name>
<value>0.0.0.0</value>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
</property>
<property>
<name>hive.server2.webui.max.threads</name>
<value>50</value>
</property>
<property>
<name>hive.server2.webui.use.ssl</name>
<value>false</value>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>{{HIVE_HBASE_JAR}}</value>
</property>
<property>
<name>hive.metastore.sasl.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hive.metastore.kerberos.principal</name>
<value>hive/_HOST@TEST.COM</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.principal</name>
<value>hive/_HOST@TEST.COM</value>
</property>
<property>
<name>hive.server2.authentication.kerberos.keytab</name>
<value>hive.keytab</value>
</property>
<property>
<name>hive.server2.webui.use.spnego</name>
<value>true</value>
</property>
<property>
<name>hive.server2.webui.spnego.keytab</name>
<value>hive.keytab</value>
</property>
<property>
<name>hive.server2.webui.spnego.principal</name>
<value>HTTP/node01.test.com@TEST.COM</value>
</property>
<property>
<name>cloudera.navigator.client.config</name>
<value>{{CMF_CONF_DIR}}/navigator.client.properties</value>
</property>
<property>
<name>hive.metastore.event.listeners</name>
<value>com.cloudera.navigator.audit.hive.HiveMetaStoreEventListener</value>
</property>
<property>
<name>hive.server2.session.hook</name>
<value>org.apache.sentry.binding.hive.HiveAuthzBindingSessionHook</value>
</property>
<property>
<name>hive.sentry.conf.url</name>
<value>file:///{{CMF_CONF_DIR}}/sentry-site.xml</value>
</property>
<property>
<name>hive.metastore.filter.hook</name>
<value>org.apache.sentry.binding.metastore.SentryMetaStoreFilterHook</value>
</property>
<property>
<name>hive.exec.post.hooks</name>
<value>com.cloudera.navigator.audit.hive.HiveExecHookContext,org.apache.hadoop.hive.ql.hooks.LineageLogger</value>
</property>
<property>
<name>hive.security.authorization.task.factory</name>
<value>org.apache.sentry.binding.hive.SentryHiveAuthorizationTaskFactoryImpl</value>
</property>
<property>
<name>spark.shuffle.service.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.service.metrics.file.location</name>
<value>/var/log/hive/metrics-hiveserver2/metrics.log</value>
</property>
<property>
<name>hive.server2.metrics.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.service.metrics.file.frequency</name>
<value>30000</value>
</property>
</configuration>
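For reference, this is the equivalent Sentry SQL of what I tried to grant through Hue, as a minimal sketch; the role name analyst and group name usergroup are illustrative, not the real names from my cluster:

# Sentry grants go to a role, and the role is then granted to a group.
beeline -u "jdbc:hive2://node01.test.com:10000/default;principal=hive/node01.test.com@TEST.COM" -e "
CREATE ROLE analyst;
GRANT SELECT ON DATABASE default TO ROLE analyst;
GRANT ROLE analyst TO GROUP usergroup;"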
Labels:
- Apache Hive
- Apache Sentry
- Cloudera Hue