I have a default Database Catalog in Cloudera Data Warehouse (CDW) for an environment, and I have created a Virtual Warehouse with that Database Catalog. I am trying to use a newly created UDF with this Virtual Warehouse, but whenever I execute the UDF I get the following error.
When I look at the Virtual Warehouse, an alert symbol appears; hovering over it shows the message "Hive server service is not ready. Service endpoint may not be reachable!"
I collected the diagnostic bundle and searched for the issue. In the HiveServer log I found a Java core dump:
# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x0000000000093c9e, pid=1, tid=0x00007f7bf5bc2700
# JRE version: OpenJDK Runtime Environment (8.0_312-b07) (build 1.8.0_312-b07)
# Java VM: OpenJDK 64-Bit Server VM (25.312-b07 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000093c9e
# Core dump written. Default location: /usr/lib/core or core.1
# An error report file with more information is saved as:
So my question is: what could be causing the core dump here, or could there be a different reason for HiveServer not starting? And is there a way to retrieve the error report at the /tmp/hs_err_pid1.log location so I can analyze it?
Do you know how I can access hs_err_pid1.log? I looked for it on the compute nodes and on some service nodes, but did not find it in the /tmp directory. If the file is present inside some containers, how do I locate which containers to access?
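Since CDW Virtual Warehouses run on Kubernetes, the HiveServer process lives inside a pod rather than directly on a node, which is likely why the file is not visible under /tmp on the hosts. Below is a minimal sketch of how one might locate the pod and copy the error log out, assuming you have kubectl access to the CDW cluster; the namespace and pod names in angle brackets are placeholders, not actual CDW names.

```shell
# Sketch: locate the HiveServer pod and copy out the JVM fatal-error log.
# Assumes kubectl access to the underlying CDW Kubernetes cluster.
# <vw-namespace> and <hiveserver-pod> are placeholders -- discover the
# real values with the list commands below.

# List namespaces; CDW typically isolates each Virtual Warehouse in its own.
kubectl get namespaces

# List pods in the warehouse namespace to find the HiveServer pod.
kubectl get pods -n <vw-namespace>

# The crash report names the file from the pid (pid=1 -> hs_err_pid1.log).
# Check whether it exists inside the container.
kubectl exec -n <vw-namespace> <hiveserver-pod> -- ls -l /tmp/hs_err_pid1.log

# Copy it to the local machine for analysis.
kubectl cp <vw-namespace>/<hiveserver-pod>:/tmp/hs_err_pid1.log ./hs_err_pid1.log
```

One caveat: if the pod crash-loops and the container restarts, anything written to /tmp inside the container is lost with it, so the log may only be retrievable while the failed container still exists (or if the path is backed by a persistent volume).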
@sandipkumar, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.