Member since: 05-31-2016
Posts: 23
Kudos Received: 4
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2875 | 10-25-2017 01:35 PM
02-16-2017
04:23 PM
Thanks! It worked! I put the property in Ambari under custom-hive-site, restarted the affected services, and there's no more noise.
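For reference, the exact property the poster used is not shown above. A hedged sketch of the kind of knob involved (`hive.timedout.txn.reaper.interval` controls how often the AcidHouseKeeperService transaction-timeout reaper runs; whether it is the right key here is an assumption, so verify it against your Hive/HDP release):

```
# custom-hive-site (Ambari) -- assumed property, verify for your release:
# slows the transaction-timeout reaper down from "every few milliseconds"
hive.timedout.txn.reaper.interval=300s
```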
02-16-2017
01:04 PM
Hello, I just upgraded to HDP 2.5.3 from HDP 2.3.6 and am experiencing lots of problems. One, for instance: the 3 Hive metastores that I'm running for HA's sake are printing this to metastore.log like crazy, every few milliseconds: 2017-02-16 13:58:01,341 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2017-02-16 13:58:01,345 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2017-02-16 13:58:01,345 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2017-02-16 13:58:01,350 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2017-02-16 13:58:01,350 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2017-02-16 13:58:01,354 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2017-02-16 13:58:01,354 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2017-02-16 13:58:01,359 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
I cannot find anything pointing to the cause of this problem. Any hint?
02-16-2017
10:56 AM
@Arif Hossain, did you manage to fix the compaction problem? How? I have the same problem on a new partition after upgrading to HDP 2.5.3 from 2.3.6, and it's not a permission problem as in @Benjamin Hopp's case.
01-19-2017
09:19 AM
Hello @Santhosh B Gowda, we fixed it by deleting the whole /storm path in ZooKeeper plus /var/hadoop/storm on the Nimbus hosts, and then redeploying the topologies. The only drawback is that we had to stop all the topologies for a few minutes, so there was some minor downtime. Thanks for the help!
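For anyone hitting the same thing, the cleanup described above can be sketched roughly as follows. The zkCli.sh path and the ZooKeeper address are assumptions for a typical HDP layout, and everything here is destructive, so stop the topologies and the Storm daemons first:

```shell
# Stop all topologies and the Nimbus/Supervisor daemons before any of this.

# 1. Remove Storm's state from ZooKeeper (client path and zk-host:2181 are
#    assumptions; adapt them to your cluster):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181 rmr /storm

# 2. On each Nimbus host, clear the local Storm state directory mentioned above:
rm -rf /var/hadoop/storm/*

# 3. Restart Storm and redeploy the topologies.
```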
01-18-2017
09:02 AM
1 Kudo
Hello, yesterday we upgraded our Ambari installation from 2.2.2.0 to 2.4.2.0. Ambari is managing an HDP 2.3.6 cluster. After the upgrade (following all the instructions at http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-upgrade/content/upgrade_ambari.html), Storm Nimbus crashes on start with this exception: 2017-01-17 19:39:29.871 b.s.zookeeper [INFO] Accepting leadership, all active topology found localy.
2017-01-17 19:39:29.928 b.s.d.nimbus [INFO] Starting Nimbus server...
2017-01-17 19:39:30.860 b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:961) ~[clojure-1.6.0.jar:?]
at clojure.lang.Numbers.isZero(Numbers.java:90) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$partition_fixed.invoke(util.clj:900) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:624) ~[clojure-1.6.0.jar:?]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.applyTo(RestFn.java:132) ~[clojure-1.6.0.jar:?]
at clojure.core$apply.invoke(core.clj:626) ~[clojure-1.6.0.jar:?]
at clojure.core$partial$fn__4228.doInvoke(core.clj:2468) ~[clojure-1.6.0.jar:?]
at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val$iter__1807__1811$fn__1812.invoke(util.clj:305) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
at clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.6.0.jar:?]
at clojure.lang.RT.next(RT.java:598) ~[clojure-1.6.0.jar:?]
at clojure.core$next.invoke(core.clj:64) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6086.invoke(protocols.clj:146) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6057$G__6052__6066.invoke(protocols.clj:19) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) ~[clojure-1.6.0.jar:?]
at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
at backtype.storm.util$map_val.invoke(util.clj:304) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.daemon.nimbus$compute_executors.invoke(nimbus.clj:491) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.daemon.nimbus$compute_executor__GT_component.invoke(nimbus.clj:502) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.daemon.nimbus$read_topology_details.invoke(nimbus.clj:394) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.daemon.nimbus$mk_assignments$iter__7809__7813$fn__7814.invoke(nimbus.clj:722) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.6.0.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.6.0.jar:?]
at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.6.0.jar:?]
at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6078.invoke(protocols.clj:54) ~[clojure-1.6.0.jar:?]
at clojure.core.protocols$fn__6031$G__6026__6044.invoke(protocols.clj:13) ~[clojure-1.6.0.jar:?]
at clojure.core$reduce.invoke(core.clj:6289) ~[clojure-1.6.0.jar:?]
at clojure.core$into.invoke(core.clj:6341) ~[clojure-1.6.0.jar:?]
at backtype.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:721) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.6.0.jar:?]
at backtype.storm.daemon.nimbus$fn__8060$exec_fn__3866__auto____8061$fn__8068$fn__8069.invoke(nimbus.clj:1112) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.daemon.nimbus$fn__8060$exec_fn__3866__auto____8061$fn__8068.invoke(nimbus.clj:1111) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.timer$schedule_recurring$this__2489.invoke(timer.clj:102) ~[storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.timer$mk_timer$fn__2472$fn__2473.invoke(timer.clj:50) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.timer$mk_timer$fn__2472.invoke(timer.clj:42) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
2017-01-17 19:39:30.873 b.s.util [ERROR] Halting process: ("Error when processing an event")
java.lang.RuntimeException: ("Error when processing an event")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:336) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.6.0.jar:?]
at backtype.storm.daemon.nimbus$nimbus_data$fn__7411.invoke(nimbus.clj:118) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.timer$mk_timer$fn__2472$fn__2473.invoke(timer.clj:71) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at backtype.storm.timer$mk_timer$fn__2472.invoke(timer.clj:42) [storm-core-0.10.0.2.3.6.0-3796.jar:0.10.0.2.3.6.0-3796]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_40]
2017-01-17 19:39:30.876 b.s.d.nimbus [INFO] Shutting down master
Why is this happening? The Storm version is 0.10.0.2.3. Any hint on how to debug this issue more thoroughly? Right now we cannot deploy new topologies.
Labels: Apache Ambari, Apache Storm
08-16-2016
09:44 AM
Thanks for your answer @Constantin Stanca, even if I'm sorry to hear it 😞 Well, we'll update to the latest 2.3 and wait eagerly for 2.5. Do you have a rough estimate of when 2.5 will be released as stable?
08-11-2016
08:06 AM
2 Kudos
Hello, we are currently running HDP-2.3.4.0-3485 with Hive ACID and, after upgrading our staging env to HDP-2.4.2.0-258, we can no longer ALTER tables with partitions. Here is an example of a query and the error: ALTER TABLE clicks_new CHANGE COLUMN billing billing STRUCT<
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Changing from type struct<cost:float,originalCost:float,currency:string,currencyExchangeRate:float,bid:struct<id:string,amount:float,originalAmount:float,currency:string,currencyExchangeRate:float>,revenue:float,revenueCurrency:string,publisherRevenue:float> to struct<cost:float,originalCost:float,currency:string,currencyExchangeRate:float,bid:struct<id:string,amount:float,originalAmount:float,currency:string,currencyExchangeRate:float>,revenue:float,revenueCurrency:string,publisherRevenue:float,revenueCurrencyExchangeRate:float> is not supported for column billing. SerDe may be incompatible (state=08S01,code=1) This used to work with HDP-2.3. I know this was disabled on purpose and will come back with HDP-2.5 and Hive 2.0, but my question is: is there some setting to re-enable the old behavior (with all its limits), as in HDP-2.3?
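For what it's worth, the setting usually mentioned for this is the metastore guard against incompatible column type changes. The property below is an assumption about what applies in this HDP version, so verify it against your Hive/HDP release before relying on it:

```
# Assumed property (verify for your release): relaxes the metastore check
# that rejects incompatible column type changes such as the STRUCT change above.
hive.metastore.disallow.incompatible.col.type.changes=false
```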
06-01-2016
09:41 AM
After stopping the Hive MySQL server from the Ambari UI and issuing curl -u admin:admin -X DELETE -H 'X-Requested-By:admin' http://server:8080/api/v1/clusters/$NAME/hosts/$FQDN/host_components/MYSQL_SERVER
I successfully removed the Hive MySQL instance from Ambari management, and I guess future Hive restarts will no longer touch MySQL. Thank you @Alejandro Fernandez and all the others too.
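For anyone repeating this, the sequence can be sketched as below. `$NAME`, `$FQDN` and the credentials are placeholders, and the PUT that stops the component first (by setting its state to INSTALLED) is an addition based on the standard Ambari REST API rather than the post itself (the poster stopped it from the UI), so verify it against your Ambari version:

```shell
# Stop the component first (state INSTALLED = stopped in the Ambari API) --
# an assumption; the original post used the Ambari UI for this step.
curl -u admin:admin -H 'X-Requested-By:admin' -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  "http://server:8080/api/v1/clusters/$NAME/hosts/$FQDN/host_components/MYSQL_SERVER"

# Then remove it from Ambari management, as in the post above.
curl -u admin:admin -H 'X-Requested-By:admin' -X DELETE \
  "http://server:8080/api/v1/clusters/$NAME/hosts/$FQDN/host_components/MYSQL_SERVER"
```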
06-01-2016
07:25 AM
Thank you for your answers. To clarify: I don't want to touch Ambari's own DB (where Ambari stores its configs); I want to change the DB where Hive stores its metadata. Actually, that's what I did: as @jeff and @emaxwell said, I changed the "Existing MySQL" option in the Hive configuration and pointed it to my own DB, but since Ambari was already managing the mysql server, it restarted the mysqld daemon. So I guess @Alejandro Fernandez's answer is the right one, with the `curl -X DELETE` operation. To sum it up: we were using the Hive MySQL DB instance installed by default by Ambari, alongside other HDP services on one of our masters. I wanted to make that MySQL installation highly available, so I installed another mysql on another master, made it a slave of the original instance, and put a virtual, floating IP dedicated to the MySQL service. Then I changed the Hive MySQL address in the Hive configuration to use the new VIP (which at that moment was pointing to the original mysql instance) and applied the new Hive config. That's when Ambari decided to restart my original MySQL instance (and the VIP consequently moved to the MySQL slave). Hope it's clearer now 🙂
05-31-2016
07:37 PM
Hello, we have an Ambari 2.3 installation with Hive using a local MySQL installation as its database. Now we have implemented an HA solution for MySQL: MasterHA for the master, which is a bunch of scripts plus a daemon that monitors whether mysql is alive and moves its floating IP to another slave (a slave promotion) in case of master failure. While making the changes (when I changed the MySQL IP in Ambari), Ambari restarted the mysqld instance, triggering the master failover, which by the way worked well 🙂 So my question is: to avoid interference between Ambari and MasterHA, how can I tell Ambari that it shouldn't manage the MySQL server, in a running Ambari installation? Thanks!
Labels: Apache Ambari, Apache Hive