Hive compactor

Master Collaborator

Hi,

After compaction I received this error during an UPDATE command in Hive:

subtypes: 1
subtypes: 2
subtypes: 3
subtypes: 4
subtypes: 5
subtypes: 6
fieldNames: "operation"
fieldNames: "originalTransaction"
fieldNames: "bucket"
fieldNames: "rowId"
fieldNames: "currentTransaction"
fieldNames: "row"
, kind: INT
, kind: LONG
, kind: INT
, kind: LONG
, kind: LONG
, kind: STRUCT
subtypes: 7
subtypes: 8
subtypes: 9
subtypes: 10
subtypes: 11
subtypes: 12
subtypes: 13
subtypes: 14
subtypes: 15
subtypes: 16
subtypes: 17
subtypes: 18
subtypes: 19
subtypes: 20
subtypes: 21
subtypes: 22
subtypes: 23
subtypes: 24
fieldNames: "_col1"
fieldNames: "_col2"
fieldNames: "_col3"
fieldNames: "_col4"
fieldNames: "_col5"
fieldNames: "_col6"
fieldNames: "_col7"
fieldNames: "_col8"
fieldNames: "_col9"
fieldNames: "_col10"
fieldNames: "_col11"
fieldNames: "_col12"
fieldNames: "_col13"
fieldNames: "_col14"
fieldNames: "_col15"
fieldNames: "_col16"
fieldNames: "_col17"
fieldNames: "_col18"
, kind: INT
, kind: STRING
, kind: STRING
, kind: STRING
, kind: STRING
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: DOUBLE
, kind: DOUBLE
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: DOUBLE
, kind: STRING
] schemaTypes [kind: STRUCT
subtypes: 1
subtypes: 2
subtypes: 3
subtypes: 4
subtypes: 5
subtypes: 6
fieldNames: "operation"
fieldNames: "originalTransaction"
fieldNames: "bucket"
fieldNames: "rowId"
fieldNames: "currentTransaction"
fieldNames: "row"
, kind: INT
, kind: LONG
, kind: INT
, kind: LONG
, kind: LONG
, kind: STRUCT
subtypes: 7
subtypes: 8
subtypes: 9
subtypes: 10
subtypes: 11
subtypes: 12
subtypes: 13
subtypes: 14
subtypes: 15
subtypes: 16
subtypes: 17
subtypes: 18
subtypes: 19
subtypes: 20
subtypes: 21
subtypes: 22
subtypes: 23
subtypes: 24
fieldNames: "_col1"
fieldNames: "_col2"
fieldNames: "_col3"
fieldNames: "_col4"
fieldNames: "_col5"
fieldNames: "_col6"
fieldNames: "_col7"
fieldNames: "_col8"
fieldNames: "_col9"
fieldNames: "_col10"
fieldNames: "_col11"
fieldNames: "_col12"
fieldNames: "_col13"
fieldNames: "_col14"
fieldNames: "_col15"
fieldNames: "_col16"
fieldNames: "_col17"
fieldNames: "_col18"
, kind: INT
, kind: STRING
, kind: STRING
, kind: STRING
, kind: STRING
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: DOUBLE
, kind: DOUBLE
, kind: INT
, kind: INT
, kind: DOUBLE
, kind: DOUBLE
, kind: STRING
] innerStructSubtype -1
	at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2056)
	at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2482)
	at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2062)
	at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2482)
	at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:219)
	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:598)
	at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:179)
	at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:476)
	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1406)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:569)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:548)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
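
The stack trace above comes from the compactor's MapReduce task (CompactorMR) failing while it merges the ORC base and delta files. For context, this is a minimal sketch of the kind of statements involved on an ACID table; the table name, column names and values are hypothetical, not taken from the post:

	-- ACID/transactional support has to be enabled for UPDATE to work at all
	SET hive.support.concurrency=true;
	SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

	-- the UPDATE that writes delta files for the transactional table
	UPDATE sales_fact SET amount = amount * 1.1 WHERE region_id = 5;

	-- request a major compaction instead of waiting for the automatic one,
	-- then check its state
	ALTER TABLE sales_fact COMPACT 'major';
	SHOW COMPACTIONS;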
1 ACCEPTED SOLUTION


Could you please share the table structure? Even when we run compaction on an ORC ACID Hive table, there is still a bug in open state; see below:

https://issues.apache.org/jira/browse/HIVE-14017
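
To share the structure, the standard Hive commands below should be enough; the table name is a placeholder:

	-- full DDL, including bucketing and TBLPROPERTIES such as 'transactional' and 'orc.compress'
	SHOW CREATE TABLE sales_fact;
	DESCRIBE FORMATTED sales_fact;

	-- current and past compaction requests and their states
	SHOW COMPACTIONS;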


6 REPLIES


@Roberto Sancho, which data file format and Hive version are you using?

Master Collaborator

ORC format with Snappy compression and buckets.
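
A table of that shape would be declared roughly as below; the name, columns and bucket count are made up for illustration:

	CREATE TABLE sales_fact (
	  region_id INT,
	  amount    DOUBLE
	)
	CLUSTERED BY (region_id) INTO 8 BUCKETS
	STORED AS ORC
	TBLPROPERTIES (
	  'transactional' = 'true',
	  'orc.compress'  = 'SNAPPY'
	);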


@Roberto Sancho

Did you alter the table after creation and add any extra columns?
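
In other words, was there any schema evolution after data had already been written, for example something like the following (hypothetical column name):

	-- adds a column that the existing ORC base/delta files do not contain
	ALTER TABLE sales_fact ADD COLUMNS (discount DOUBLE);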


Could you please share the table structure? Even when we run compaction on an ORC ACID Hive table, there is still a bug in open state; see below:

https://issues.apache.org/jira/browse/HIVE-14017

Master Collaborator

Yes, it is ORC with Snappy compression and ACID, so compaction is not possible...?


You are right, it's not possible yet...