Member since
11-04-2016
87
Posts
9
Kudos Received
3
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6503 | 11-15-2016 07:16 PM |
| | 2937 | 11-14-2016 08:05 PM |
| | 4105 | 11-08-2016 05:00 PM |
03-27-2017
06:57 PM
@Sonu Sahi Please let me know what technologies are available for Spark, R, Python. Thanks, Marcia
03-27-2017
06:46 PM
@Sonu Sahi Ok... If I would like users to use HiveQL, what are my options if I disable the Hive CLI? What are the differences between the Hive CLI and Beeline? Can I connect via Spark? RStudio? Python? Thanks, Marcia
03-27-2017
06:24 PM
Hello, I'm trying to use Ranger to set up column-level permissions for users. I am able to apply table-level permissions by changing HDFS policies. But when I create Hive column-level policies and then use the Hive CLI, the permissions are not enforced. Please let me know what I am doing wrong and what I should be doing. Thanks, Marcy
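One detail worth noting (a known Ranger behavior, not stated in the thread): the Ranger Hive plugin runs inside HiveServer2, so Hive column-level policies are only enforced for clients that connect through HiveServer2, such as Beeline; the legacy Hive CLI talks to the metastore and HDFS directly and bypasses them. A minimal Beeline connection sketch, with placeholder host, port, and user:

```shell
# Placeholder host and user; HiveServer2's default binary-transport port is 10000.
# Connections on this path pass through the Ranger Hive plugin in HiveServer2.
beeline -u "jdbc:hive2://hs2-host:10000/default" -n your_user
```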
Labels:
- Apache Ranger
03-14-2017
01:44 PM
I mean yes, I do see this error with a single-table import... it appears it's because of that one column, project_id... Must a table have a key column?
03-14-2017
01:24 PM
Now my Hive is stuck, and will not go past this message: `Logging initialized using configuration in jar:file:/usr/hdp/2.5.3.0-37/hive/lib/hive-common-1.2.1000.2.5.3.0-37.jar!/hive-log4j.properties`
03-14-2017
01:05 PM
Thanks, but I'm using DB2. Also, I can import one table just fine... I just cannot import all tables...
03-14-2017
01:04 PM
I tried this schema change, and I still get the same error. It appears that there is a problem with the first column (PROJECT_ID):

```
17/03/14 09:01:15 INFO mapreduce.Job: Task Id : attempt_1489428784285_0011_m_000000_0, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-206, SQLSTATE=42703, SQLERRMC=PROJECT_ID, DRIVER=4.11.77
        at com.ibm.db2.jcc.am.gd.a(gd.java:676)
        at com.ibm.db2.jcc.am.gd.a(gd.java:60)
        at com.ibm.db2.jcc.am.gd.a(gd.java:127)
        at com.ibm.db2.jcc.am.jn.c(jn.java:2561)
        at com.ibm.db2.jcc.am.jn.d(jn.java:2549)
        at com.ibm.db2.jcc.am.jn.a(jn.java:2025)
        at com.ibm.db2.jcc.am.kn.a(kn.java:6836)
        at com.ibm.db2.jcc.t4.cb.g(cb.java:140)
        at com.ibm.db2.jcc.t4.cb.a(cb.java:40)
        at com.ibm.db2.jcc.t4.q.a(q.java:32)
        at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
        at com.ibm.db2.jcc.am.jn.ib(jn.java:1996)
        at com.ibm.db2.jcc.am.kn.sc(kn.java:3058)
        at com.ibm.db2.jcc.am.kn.b(kn.java:3841)
        at com.ibm.db2.jcc.am.kn.fc(kn.java:702)
        at com.ibm.db2.jcc.am.kn.executeQuery(kn.java:672)
        at org.apache.sqoop.mapreduce.db.DBRecordReader.executeQuery(DBRecordReader.java:111)
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:235)
        ... 12 more
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/03/14 09:01:20 INFO mapreduce.Job: Task Id : attempt_1489428784285_0011_m_000000_1, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-206, SQLSTATE=42703, SQLERRMC=PROJECT_ID, DRIVER=4.11.77
17/03/14 09:01:26 INFO mapreduce.Job: Task Id : attempt_1489428784285_0011_m_000000_2, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
Caused by: com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-206, SQLSTATE=42703, SQLERRMC=PROJECT_ID, DRIVER=4.11.77
17/03/14 09:01:34 INFO mapreduce.Job:  map 100% reduce 0%
17/03/14 09:01:35 INFO mapreduce.Job: Job job_1489428784285_0011 failed with state FAILED due to: Task failed task_1489428784285_0011_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/03/14 09:01:35 INFO mapreduce.Job: Counters: 8
        Job Counters
                Failed map tasks=4
                Launched map tasks=4
                Other local map tasks=4
                Total time spent by all maps in occupied slots (ms)=18518
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=18518
                Total vcore-milliseconds taken by all map tasks=18518
                Total megabyte-milliseconds taken by all map tasks=47406080
17/03/14 09:01:35 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/03/14 09:01:35 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 38.022 seconds (0 bytes/sec)
17/03/14 09:01:35 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/03/14 09:01:35 INFO mapreduce.ImportJobBase: Retrieved 0 records.
17/03/14 09:01:35 ERROR tool.ImportAllTablesTool: Error during import: Import job failed!
```
03-13-2017
07:19 PM
So my new schema should be called TESTDB, and all the tables in SCHEMA should have aliases within the new schema, TESTDB? Also, which do I use: alias or synonym? Does the username, SCHEMA, affect the settings as well? I'm thinking of doing the following in DB2:
create schema TESTDB;
create alias TESTDB.RC_SUM for originalSchema.RC_SUM;
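For many tables, the alias DDL above can be generated rather than typed by hand. A minimal sketch (helper name and table list are hypothetical; in practice the list would come from the DB2 catalog, e.g. `SELECT TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = 'SCHEMA'`):

```python
def alias_ddl(tables, src_schema, dst_schema):
    """Return a DB2 SQL script that creates dst_schema and, for each
    table name given, an alias dst_schema.T pointing at src_schema.T."""
    stmts = [f"CREATE SCHEMA {dst_schema};"]
    for table in tables:
        stmts.append(f"CREATE ALIAS {dst_schema}.{table} FOR {src_schema}.{table};")
    return "\n".join(stmts)

# Example with made-up table names:
print(alias_ddl(["RC_SUM", "PROJECTS"], "SCHEMA", "TESTDB"))
```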
03-13-2017
07:11 PM
Hello, this sounds like an excellent idea! Please let me know how I should do this...
03-13-2017
07:03 PM
I think the problem is regarding import-all-tables and the schema... I wish to import all the tables within one schema... is this possible using import-all-tables? Or must I go table by table?
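For reference, one way to scope an import to a single schema is through the JDBC URL rather than Sqoop options: the IBM DB2 JDBC driver accepts a `currentSchema` property, which sets the default schema for unqualified table names. A hedged sketch (host, port, database, user, and paths are all placeholders, and whether `import-all-tables` honors it for table discovery can vary by Sqoop and connector version):

```shell
# All connection details below are placeholders.
sqoop import-all-tables \
  --connect "jdbc:db2://db2-host:50000/MYDB:currentSchema=TESTDB;" \
  --username db2user -P \
  --warehouse-dir /user/marcia/testdb
```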