Member since: 06-27-2016
Posts: 9
Kudos Received: 3
Solutions: 0
12-02-2016
06:37 PM
1 Kudo
Do you have a workaround? E.g. Hue or Falcon or something?
12-02-2016
04:05 PM
1 Kudo
Is there an update on that? I'm facing the same issue: HDP 2.5, HA HDFS, Oozie workflow view.
12-01-2016
04:36 PM
OK, it's not part of the HDP stack right now, but what is the definition of an HDP product?
All the services HDP is built of are Apache projects, and so is Kudu.
And how do you handle efficient updates/deletes combined with huge joins on the analytics side?
HBase is not an option for me.
12-01-2016
10:03 AM
Hi, I'm really interested in Kudu, but the installation seems to be a bit different on an HDP (Ambari) installation.
Has anyone installed and tested Kudu on Hortonworks yet?
Is there official documentation on how to install Kudu on HDP?
Or any information about experience with Kudu on Hortonworks? Thanks in advance.
- Tags:
- Hadoop Core
- hdp-2.5.0
07-11-2016
07:31 AM
As written above, I made all the changes you mentioned, but it does not work.
I set:
set hive.support.concurrency=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=1;
set hive.enforce.bucketing=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
The strange thing is that I cannot save the settings via Ambari, only through the console.
07-11-2016
07:29 AM
I restarted the cluster, reran the queries, and tried everything else, but it did not work.
Always the same error.
07-07-2016
02:28 PM
Thank you for your answers, that really helps. I'm a bit further now. Right now:
A cron'd Python script on the NameNode writes the Kafka stream to HDFS every 5 minutes (external table, JSON).
Every hour another script executes an INSERT OVERWRITE that moves the data from the external table into an ORC partitioned and clustered table.
This table should be the BI table for real-time analysis.
My next plan would be to change the first script to update/insert directly into the Hive table, so that I can eliminate the second script.
Thanks for any suggestions.
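The hourly step described above could look roughly like this. This is only a minimal sketch; the database, table, and column names are hypothetical and not taken from the original post:

```sql
-- Hypothetical hourly move from the JSON-backed external table
-- into the partitioned, clustered ORC target table.
-- Requires hive.exec.dynamic.partition.mode=nonstrict for the
-- dynamic partition column.
INSERT OVERWRITE TABLE bi.events_orc PARTITION (dt)
SELECT
  event_id,
  payload,
  event_time,
  to_date(event_time) AS dt   -- dynamic partition column must come last
FROM staging.events_json_ext;
```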
07-07-2016
10:34 AM
Hi community, I had a running and working HDP 2.3 installation.
Metastore/Ambari DB: local PostgreSQL 9.3 on the NameNode/HiveServer host.
Now I wanted to enable the ACID features. First I tried to activate them completely through Ambari, but that does not work, because the option to save the settings was not available after setting hive_txn_acid=true.
That's why I tried it manually:
1. Set:
set hive.support.concurrency=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=1;
set hive.enforce.bucketing=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
2. Restarted the Hive services.
3. Rechecked the settings -> all applied.
4. Via the Hive CLI: update staging.test_hiveacid set data="updateddata1" where id=5;
...
Nothing happens. Checking the metastore log, I can see the following:
2016-07-07 10:04:45,218 ERROR [pool-3-thread-4]: txn.TxnHandler (TxnHandler.java:getDbConn(984)) - There is a problem with a connection from the pool, retrying(rc=6): Timed out waiting for a free available connection.(SQLState=08001,ErrorCode=0)
java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88)
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90)
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131)
at org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:978)
at org.apache.hadoop.hive.metastore.txn.TxnHandler.getOpenTxns(TxnHandler.java:240)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_open_txns(HiveMetaStore.java:5558)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy4.get_open_txns(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_open_txns.getResult(ThriftHiveMetastore.java:11560)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_open_txns.getResult(ThriftHiveMetastore.java:11545)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any idea? Thanks for the help.
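Independent of the connection-pool error above, UPDATE in Hive only works against tables created for ACID. A minimal sketch of such a table follows; the column list is assumed from the UPDATE statement above and has not been checked against the actual definition of staging.test_hiveacid:

```sql
-- An ACID-capable table in Hive 1.x must be bucketed, stored as ORC,
-- and marked transactional at creation time.
CREATE TABLE staging.test_hiveacid (
  id INT,
  data STRING
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```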
06-27-2016
08:17 PM
1 Kudo
Hi, we would like to implement a pretty basic streaming data pipeline into our Hadoop cluster. App events are already sent to a Kafka topic.
The perfect solution would be to stream the data (JSON) directly into a Hive table, so that the BI team can do its analyses on that information in near real time. I researched a bit but did not find any best-practice solution for this case.
We use Hortonworks HDP with the basic tech stack such as Flume, Spark, ... Here are my questions: What is the best practice for an event stream to BI? Is there an example that fits this case?
Thanks in advance
KF2