Member since: 09-25-2015
Posts: 356
Kudos Received: 382
Solutions: 62
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2439 | 11-03-2017 09:16 PM
 | 1917 | 10-17-2017 09:48 PM
 | 3814 | 09-18-2017 08:33 PM
 | 4510 | 08-04-2017 04:14 PM
 | 3458 | 05-19-2017 06:53 AM
10-16-2015
04:43 PM
Were the tables you are expecting created from the Hive View? I ran into issues with the tables not getting refreshed. Can you kill the session and launch a new one?
10-15-2015
04:25 PM
1 Kudo
Personally I haven't set up LDAP over SSL, but here are the properties you can set in hive-site.xml:
hive.server2.authentication = LDAP
hive.server2.authentication.ldap.url = <LDAP URL>
hive.server2.authentication.ldap.baseDN = <LDAP Base DN>
hive.server2.use.SSL = true
hive.server2.keystore.path = <KEYSTORE FILE PATH>
hive.server2.keystore.password = <KEYSTORE PASSWORD>
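As a sketch, the same settings expressed as hive-site.xml entries would look like the following (the URL, base DN, and keystore values are placeholders you would substitute for your environment):

```xml
<!-- Illustrative hive-site.xml fragment; all values below are placeholders. -->
<property>
  <name>hive.server2.authentication</name>
  <value>LDAP</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.baseDN</name>
  <value>ou=people,dc=example,dc=com</value>
</property>
<property>
  <name>hive.server2.use.SSL</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.keystore.path</name>
  <value>/etc/hive/conf/hiveserver2.jks</value>
</property>
<property>
  <name>hive.server2.keystore.password</name>
  <value>changeit</value>
</property>
```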
10-15-2015
01:24 PM
Updated the answer to include the URL for HTTP mode as well as secure HTTP mode. Beyond this there are other modes, such as SSL over HTTP, LDAP, and LDAP over HTTP; for each one the URL is configured a little differently.
10-15-2015
12:53 AM
It's hard to give a generic answer on how to achieve high availability without knowing the topology, the data and form of ingestion, and where and how it is written at the destination. In many cases, if the data at the source is still available even after the agent gets killed, then upon restarting the agent the checkpointing on the file channel will let it recover from the point where it failed. Some topologies start multiple Flume agents for availability; of course that introduces data redundancy, but that's acceptable in some cases.
10-14-2015
08:08 PM
1 Kudo
I don't think there is built-in HA in Flume. If you are worried about losing events because of the Flume agent going down, you can use the File Channel, which uses checkpointing. This makes sure that no events are lost while the agent is down, and it can resume sending events to the sink from where it left off. If instead you are worried about the destination sink your agent is writing to going down, you can use the Failover Sink Processor.
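As a sketch, a Failover Sink Processor is configured as a sink group in the agent's properties file; the agent name (a1) and sink names (k1, k2) below are placeholders, and the higher-priority sink receives events until it fails:

```
# Group two sinks and fail over between them by priority (illustrative names).
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# Backoff ceiling (ms) before a failed sink is retried.
a1.sinkgroups.g1.processor.maxpenalty = 10000
```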
10-13-2015
11:41 PM
4 Kudos
According to the Hadoop YARN API, here is the breakdown of the container ID string: "The format is container_e*epoch*_*clusterTimestamp*_*appId*_*attemptId*_*containerId*. When epoch is larger than 0 (e.g. container_e17_1410901177871_0001_01_000005). *epoch* is increased when RM restarts or fails over. When epoch is 0, epoch is omitted (e.g. container_1410901177871_0001_01_000005)." To add more context, this was added by YARN-2562 in Hadoop 2.6.0 as an improvement for readability of the container ID string. So it is expected behavior.
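To illustrate the format, here is a small parser sketch (a hypothetical helper, not part of YARN) that splits a container ID string into its components; note how the epoch field is optional, since it is omitted when it is 0:

```python
import re

# Matches both forms: with an "e<epoch>_" segment (epoch > 0, Hadoop 2.6.0+,
# per YARN-2562) and without it (epoch 0).
CONTAINER_ID_RE = re.compile(
    r"container_(?:e(?P<epoch>\d+)_)?"
    r"(?P<clusterTimestamp>\d+)_(?P<appId>\d+)_"
    r"(?P<attemptId>\d+)_(?P<containerId>\d+)$"
)

def parse_container_id(s):
    """Parse a YARN container ID string into a dict of its fields."""
    m = CONTAINER_ID_RE.match(s)
    if not m:
        raise ValueError("not a container ID: %s" % s)
    fields = m.groupdict()
    # Epoch defaults to 0 when the "e<N>_" segment is absent.
    fields["epoch"] = int(fields["epoch"]) if fields["epoch"] else 0
    return fields

# With epoch (the RM has restarted or failed over 17 times):
print(parse_container_id("container_e17_1410901177871_0001_01_000005")["epoch"])  # 17
# Without epoch (epoch 0 is omitted from the string):
print(parse_container_id("container_1410901177871_0001_01_000005")["epoch"])      # 0
```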
10-13-2015
07:44 PM
Yes, that's right.
10-13-2015
07:01 PM
We tested this formally with Windows 2008. For the HDP 2.3 ODBC driver we tested with Windows 2012, with limited testing on Windows 8. It's very likely that the ODBC driver for HDP 2.2 is compatible with Windows 8 as well.
10-13-2015
06:36 PM
1 Kudo
http://hortonworks.com/products/releases/hdp-2-2/#add_ons
10-13-2015
03:56 PM
4 Kudos
Can you try the following connection URL (note the / after the <ZOOKEEPER QUORUM>)?

jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver

The above is for binary mode. For HTTP mode:

jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver;transportMode=http;httpPath=cliservice

For secure environments you will additionally have to add the Hive principal, e.g.:

jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver;principal=hive/_HOST@EXAMPLE.COM;transportMode=http;httpPath=cliservice
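Since the URL pieces stack up differently per mode, a small builder sketch can make the pattern explicit. This is a hypothetical helper (not a Hive API); the quorum, namespace, and principal values are all placeholders:

```python
def hive_zk_jdbc_url(zk_quorum, namespace="hiveserver",
                     http_mode=False, principal=None):
    """Assemble a HiveServer2 JDBC URL using ZooKeeper service discovery.

    zk_quorum is a comma-separated host:port list; principal is only
    needed on secure (Kerberos) clusters.
    """
    url = ("jdbc:hive2://%s/;serviceDiscoveryMode=zooKeeper"
           ";zooKeeperNamespace=%s" % (zk_quorum, namespace))
    if principal:
        # Secure clusters add the HiveServer2 Kerberos principal.
        url += ";principal=%s" % principal
    if http_mode:
        # HTTP transport mode adds the transport and path parameters.
        url += ";transportMode=http;httpPath=cliservice"
    return url

print(hive_zk_jdbc_url("zk1:2181,zk2:2181,zk3:2181"))
print(hive_zk_jdbc_url("zk1:2181,zk2:2181,zk3:2181", http_mode=True,
                       principal="hive/_HOST@EXAMPLE.COM"))
```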