Created on 09-29-2017 02:46 PM - edited 09-16-2022 05:19 AM
Hi All,
I am testing Druid with Hive 2 and managed to push sample data (1.5M aggregated rows) as described, but when I try to build a dataset with 192M rows I get an ArrayIndexOutOfBoundsException in io.DruidRecordWriter. I searched the Hive Jira and found that this bug has already been resolved, but as far as I understand the fix is not included in HDP 2.6.2, which I am using. Is there any workaround?
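For reference, the load is a CREATE TABLE AS SELECT into a table stored by the Druid storage handler, roughly like the sketch below (table and column names and the granularity values are placeholders, not my actual schema):

-- Sketch only: CTAS from an existing Hive table into a Druid-backed table.
CREATE TABLE druid_sales
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES (
  "druid.segment.granularity" = "DAY",   -- one Druid segment per day
  "druid.query.granularity" = "HOUR"     -- rows rolled up to the hour
)
AS
SELECT
  CAST(event_ts AS timestamp) AS `__time`,  -- Druid requires the timestamp column __time
  region,                                   -- dimensions
  product,
  amount                                    -- metric
FROM sales_agg;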
Thanks
Created 09-29-2017 02:54 PM
The only fix is to apply the patch and replace the hive-druid-handler jar; it is only one jar that needs to be replaced. Otherwise, HDP 2.6.3 will have the fix if you want to wait. Sorry
Created 10-02-2017 07:58 AM
I replaced every hive-druid-handler*.jar in my cluster with the new version, but now I am getting an "org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException: cache" error. I think I will have to wait for HDP 2.6.3 and hope it works.
Created 10-02-2017 01:58 PM
Can you please paste the error stack from the log?
Created 10-02-2017 03:04 PM
Thanks @Slim. I'm attaching one task's log. hive-20171002172650-1672cd6f-dec7-46ca-bb45-f52491.txt.
Created 10-02-2017 03:27 PM
It seems like you are compiling/building against the wrong Druid version. Can you please explain how you are building this? Are you using Druid 0.10.1 or a previous release?
Created 10-04-2017 06:29 AM
I am using HDP 2.6.2 and the installed version of Druid is 0.9.2. Is there any way to upgrade Druid while bypassing Ambari?
Created 10-04-2017 06:39 AM