Member since: 11-02-2018
Posts: 16
Kudos Received: 0
Solutions: 0
08-22-2019
09:37 AM
I have been having this problem intermittently as well. Did you find a solution? In my case, I found the following error in the container log files:

java.lang.NullPointerException at com.hortonworks.spark.sql.hive.llap.HiveWarehouseDataReader.close(HiveWarehouseDataReader.java:105)

Looking at HiveWarehouseDataReader.java line 105, it is simply:

columnarBatch.close();

So I thought this might be missing a null check, and after adding the null check, the problem is solved:

if (columnarBatch != null) {
    columnarBatch.close();
}

Since Hortonworks has not been responsive to a previous pull request I created for the spark_llap project, I'm not going to bother creating one for this. If Hortonworks/Cloudera still care about their code, they should be able to make the fix.
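For reference, the defensive close can be sketched as below. The null check mirrors the one-line fix described above; everything else (the stub class, the setter) is illustrative scaffolding, not the actual Hortonworks source.

```java
// Illustrative sketch of the null-guarded close described above.
// ColumnarBatchStub stands in for Spark's ColumnarBatch; it is not the real class.
class ColumnarBatchStub {
    boolean closed = false;
    void close() { closed = true; }
}

class DataReaderSketch {
    // May remain null if no batch was ever materialized, which is what
    // triggered the NullPointerException in HiveWarehouseDataReader.close().
    private ColumnarBatchStub columnarBatch;

    void setBatch(ColumnarBatchStub batch) { columnarBatch = batch; }

    void close() {
        // The fix: only close the batch if it was actually created.
        if (columnarBatch != null) {
            columnarBatch.close();
        }
    }
}
```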
05-16-2019
12:01 AM
Did this issue get resolved? I'm using HDP 3.1 with Ranger 1.2.0, and I have the correct Unlimited JCE installed, but I still get this error when using the Test Connection button.
04-25-2019
07:33 PM
I finally realized this rounding is not done in Hive, but in the Zeppelin UI where I run my SELECT query, which displays a rounded result. There is an open bug for this issue: https://issues.apache.org/jira/browse/ZEPPELIN-1434 The rounding happens only when the value is displayed in the UI, so the underlying data is correct.
01-29-2019
06:03 PM
Just to be sure, I upgraded the cluster to HDP 3.1.0.0-78, and the issue is still there. Interestingly, if I add a non-numeric character, the value is saved correctly without rounding.
01-29-2019
06:00 PM
I have posted the question on StackOverflow but have no solution yet: https://stackoverflow.com/questions/54333105/hive-insert-to-string-column-rounds-the-numeric-string I wonder if there is something in my setup that causes this.
01-27-2019
03:43 PM
In my case, I get a different error:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 53, in remove_stack_version
self.check_no_symlink_to_version(structured_output, version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 94, in check_no_symlink_to_version
raise Fail("{0} contains symlink to version for remove! {1}".format(stack_root_current, version))
resource_management.core.exceptions.Fail: /usr/hdp/current/ contains symlink to version for remove! 3.0.1.0-187
Do you have any solution for this error?
01-25-2019
07:47 PM
Thanks Geoffrey. I copied the backup of ambari.properties back to the expected location and ran the upgrade command again, and it worked this time.
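For anyone hitting the same AttributeError, the restore step can be sketched as below. The .rpmsave path is the one the Ambari package scripts print when backing up the properties file; the guard makes the copy a harmless no-op if the backup is absent.

```shell
# Restore the ambari.properties backup that the package upgrade created,
# then re-run the server upgrade.
PROPS=/etc/ambari-server/conf/ambari.properties
if [ -f "${PROPS}.rpmsave" ]; then
    cp "${PROPS}.rpmsave" "${PROPS}"
fi
# ambari-server upgrade   # re-run once the properties file is back in place
```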
01-25-2019
07:45 PM
When upgrading Ambari from 2.7.1 to 2.7.3, I get these messages, with no mention of failure or success:
Preparing to unpack .../ambari-server_2.7.3.0-139_amd64.deb ...
Backing up Ambari properties: /etc/ambari-server/conf/ambari.properties -> /etc/ambari-server/conf/ambari.properties.rpmsave
Backing up Ambari properties: /var/lib/ambari-server/ambari-env.sh -> /var/lib/ambari-server/ambari-env.sh.rpmsave
Backing up JAAS login file: /etc/ambari-server/conf/krb5JAASLogin.conf -> /etc/ambari-server/conf/krb5JAASLogin.conf.rpmsave
Backing up stacks directory: /var/lib/ambari-server/resources/stacks -> /var/lib/ambari-server/resources/stacks_25_01_19_17_08.old
Backing up common-services directory: /var/lib/ambari-server/resources/common-services -> /var/lib/ambari-server/resources/common-services_25_01_19_17_08.old
Backing up mpacks directory: /var/lib/ambari-server/resources/mpacks -> /var/lib/ambari-server/resources/mpacks_25_01_19_17_08.old
Backing up Ambari view jars: /var/lib/ambari-server/resources/views/*.jar -> /var/lib/ambari-server/resources/views/backups/
Backing up Ambari server jar: /usr/lib/ambari-server/ambari-server-2.7.1.0.169.jar -> /usr/lib/ambari-server-backups/
Unpacking ambari-server (2.7.3.0-139) over (2.7.1.0-169) ...
insserv: warning: script 'K01hst' missing LSB tags and overrides
insserv: warning: script 'hst' missing LSB tags and overrides
grep: Invalid content of \{\}
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu21.15) ...
Setting up ambari-server (2.7.3.0-139) ...
insserv: warning: script 'K01hst' missing LSB tags and overrides
insserv: warning: script 'hst' missing LSB tags and overrides
grep: Invalid content of \{\}
and the subsequent "ambari-server upgrade" fails with this error:
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Original file ambari-env.sh kept
WARNING: server.jdbc.database_name property isn't set in ambari.properties . Setting it to default value - ambari
INFO: Fixing database objects owner
ERROR: Unexpected AttributeError: 'NoneType' object has no attribute 'title'
For more info run ambari-server with -v or --verbose option
Any pointers/guidance to fix this would be appreciated.
Labels:
- Apache Ambari
01-23-2019
06:42 PM
UPDATE:
The issue occurs even with a plain Hive SQL INSERT:
INSERT INTO test_ids SELECT "12345678901234567890"
and the result is:
12345678901234567000
Original problem:
I'm using Hive 3.0 streaming to write string columns that contain long numeric IDs. Interestingly, these numbers are being rounded, so when stored, the last few digits are saved as 0. This is the famous JSON/JavaScript problem with large numbers, but my data is a string. Here is sample Scala code to reproduce:
import shadehive.org.apache.hadoop.hive.conf.HiveConf
import org.apache.hive.streaming.HiveStreamingConnection
import org.apache.hive.streaming.StrictJsonWriter
import org.apache.hive.streaming.StrictDelimitedInputWriter
import org.junit.Assert
val hiveConf = new HiveConf()
hiveConf.set("hive.metastore.uris", "thrift://localhost:9083")
hiveConf.set("metastore.catalog.default", "hive")
hiveConf.setVar(HiveConf.ConfVars.HIVE_CLASSLOADER_SHADE_PREFIX, "shadehive")
val writer = StrictJsonWriter.newBuilder()
.build()
val connection = HiveStreamingConnection.newBuilder()
.withDatabase("default")
.withTable("test_ids")
.withRecordWriter(writer)
.withHiveConf(hiveConf)
.withAgentInfo("(my_test)")
.connect()
connection.beginTransaction()
val rec1 = "{\"id\" : \"12345678901234567890\"}"
connection.write(rec1.getBytes())
val rec2 = "{\"id\" : \"a12345678901234567890\"}"
connection.write(rec2.getBytes())
connection.commitTransaction()
connection.close();
The value 12345678901234567890 gets stored as 12345678901234567000, but the second value, starting with the character 'a', is saved correctly. The Hive table is created as follows:
CREATE TABLE `test_ids`(`id` STRING)
Am I doing anything wrong? Or is there a workaround for this issue?
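To see why the last digits turn into zeros: 12345678901234567890 needs more than the 53 mantissa bits a double has, so any component that parses the string as a double will round it. A minimal sketch of the effect (the class and method names below are mine, purely for illustration):

```java
import java.math.BigDecimal;

// Demonstrates the precision loss: a 20-digit numeric string pushed through
// a double keeps only ~15-17 significant digits, so the tail becomes zeros.
class DoublePrecisionDemo {
    static String roundTripThroughDouble(String numericString) {
        double d = Double.parseDouble(numericString);  // rounds to nearest double
        return BigDecimal.valueOf(d).toPlainString();  // print without scientific notation
    }
}
```

roundTripThroughDouble("12345678901234567890") comes back with its last digits zeroed, matching the stored 12345678901234567000 above, whereas a string starting with 'a' cannot be parsed as a number at all, which is consistent with it being stored unchanged.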
Labels:
- Apache Hive
01-11-2019
10:44 PM
Thanks. Based on your feedback, I followed the instructions on this page to use the main HBase service, and the alert is gone.
01-11-2019
06:26 PM
Let me clarify: the YARN service does start and everything seems to be working, but this critical alert keeps appearing in the Ambari UI.
01-10-2019
06:03 PM
This did not make any difference. I still get this critical alert: The HBase application reported a 'STARTED' state.
01-10-2019
05:21 PM
@Guillaume Roger Thanks for providing a solution. Is this safe to do on a running cluster? Will it cause any loss of data?
11-12-2018
06:41 PM
I see the same Ambari alert: "The HBase application reported a 'STARTED' state". Why does Ambari alert on a "STARTED" state?
11-02-2018
11:20 PM
Did you find a solution for this?