Support Questions

Unable to access CLOBs/BLOBs larger than 2 GB in a single cell in Hive.


1) The data is stored in Hadoop in ORC format.

2) CONTENT is one of the columns in the table; it is a CLOB in Oracle and a STRING in Hive. The following query does not work, failing with the error message below:
SELECT * FROM database.table WHERE length(CONTENT) = 142418026;

3) From my investigation, if the CLOB is larger than about 500 MB, the data is not displayed in Hive.

4) I also tried Spark RDDs, but that did not work either. Kindly let me know if there is any other procedure we can follow.

Hive Error:

Task with the most failures (4):
-----
Task ID: task_1485019777366_228562_m_000040
-----
Diagnostic Messages for this Task:
Error: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext
    at org.apache.hadoop.mapred.MapTask.runOldMapper
    at org.apache.hadoop.mapred.YarnChild.main
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance
    at java.lang.reflect.Constructor.newInstance
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader
    ... 11 more
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
    at OrcProto$StringStatistics$1.parsePartialFrom
    at OrcProto$ColumnStatistics$1.parsePartialFrom
    at OrcProto$RowIndexEntry$1.parsePartialFrom
    at OrcProto$RowIndex$1.parsePartialFrom
    at OrcProto$RowIndex.parseFrom
    at VectorizedOrcInputFormat$VectorizedOrcRecordReader.<init>
    ... 16 more



As for >2 GB blobs: as far as I know, neither Hive STRING nor BINARY can handle them. But that is just from a quick search, so Hive experts, please add your thoughts.
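A plausible reason for that 2 GB ceiling (my own reasoning, not something from the Hive docs): Hive materializes a STRING or BINARY cell into a Java byte array, and Java array lengths are capped at Integer.MAX_VALUE. A quick check of the numbers, using the cell length from the query above:

```python
INT_MAX = 2**31 - 1      # Java's Integer.MAX_VALUE, the largest possible array length
cell = 142_418_026       # length(CONTENT) from the failing query
two_gb = 2 * 1024**3     # 2 GiB in bytes

print(INT_MAX)           # 2147483647, just under 2 GiB
print(cell < INT_MAX)    # this particular ~136 MB cell would fit in one Java array
print(two_gb > INT_MAX)  # but any single value of 2 GiB or more cannot
```

So even if the file format and the readers cooperated, a single >2 GiB value could not be held in one in-memory cell on the JVM.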

Please note that the "InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit." part of your stack trace tells you that you hit a limit of Protocol Buffers, not a Hive field-type limitation. That could explain the 500 MB limit you found in your investigation. In the Hive code, in the ORC input stream implementation, I could see that a 1 GB protobuf limit is set, but that applies to the whole message, and the blob is only a part of it.
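The limit the exception refers to works roughly like this. Below is a simplified Python model of the size accounting in protobuf's CodedInputStream (not the actual protobuf code); the 64 MB default is what older protobuf-java versions used, and the class and method names in the sketch are my own stand-ins:

```python
class SizeLimitExceeded(Exception):
    """Stand-in for com.google.protobuf.InvalidProtocolBufferException."""

class CodedInput:
    """Toy model of protobuf's per-stream size limit.

    The real reader counts every byte consumed against one per-stream
    limit; once the running total passes it, parsing aborts no matter
    which field the bytes belong to, so one huge cell can sink the
    whole message.
    """
    DEFAULT_SIZE_LIMIT = 64 * 1024 * 1024  # older protobuf-java default

    def __init__(self, data: bytes, size_limit: int = DEFAULT_SIZE_LIMIT):
        self.data = data
        self.pos = 0
        self.size_limit = size_limit

    def set_size_limit(self, limit: int) -> None:
        # Mirrors CodedInputStream.setSizeLimit(), which the error suggests.
        self.size_limit = limit

    def read_bytes(self, n: int) -> bytes:
        if self.pos + n > self.size_limit:
            raise SizeLimitExceeded(
                "Protocol message was too large. May be malicious.")
        chunk = self.data[self.pos:self.pos + n]
        self.pos += n
        return chunk

# A 100 MB field blows past the default limit...
reader = CodedInput(b"\x00" * (100 * 1024 * 1024))
try:
    reader.read_bytes(100 * 1024 * 1024)
    ok = True
except SizeLimitExceeded:
    ok = False
# ...but succeeds once the limit is raised, which is what the ORC reader
# would have to do internally (it sets roughly 1 GB, as noted above).
reader2 = CodedInput(b"\x00" * (100 * 1024 * 1024))
reader2.set_size_limit(1024 * 1024 * 1024)
data = reader2.read_bytes(100 * 1024 * 1024)
```

This is why the failure shows up at a blob size well below the Hive type limits: the limit is enforced on the whole serialized message, so the usable size of any one cell inside it is smaller still.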