Member since: 11-01-2019
Posts: 28
Kudos Received: 3
Solutions: 0
12-13-2021
08:26 AM
1 Kudo
I believe Hive is vulnerable as well; it's running 2.10.
09-01-2021
06:51 AM
Ok, I think I understand. I CAN install secure versions of these components, but that would be separate from Ambari and would sacrifice that level of control and maintenance. In order to get Ambari and these more secure components, I'll need to reach out to Cloudera for a private hotfix version or to upgrade off of HDP. Thank you for the clarification.
08-31-2021
05:45 AM
Hi @Shifu Thanks for the response! Regarding something you posted: "You can either install a component or you can upgrade to the next available HDP 3.X version but I can see you are in the latest 3.1.5 version." If I installed a later version of ZooKeeper (for example), would Ambari recognize that later version in its management? Or would it exist in parallel with the version of ZooKeeper packaged with 3.1.5?
The big security issues I currently see are the ones I listed in the original question. Is there a contact form?
- Grafana is running v6.4.2, which has a major security issue that was patched in later releases: https://grafana.com/blog/2020/06/03/grafana-6.7.4-and-7.0.2-released-with-important-security-fix/
- Infra Solr is running Solr 7.7, which has an RCE vulnerability. This was patched in Solr 8.3, which is not part of Ambari 2.7.5's Infra Solr.
- The packaged ZooKeeper is 3.4.6, but SSL support was only added in 3.5.5.
08-25-2021
07:56 AM
The components in HDP 3.1.5 are outdated and lack key security functionality.
- Grafana is running v6.4.2, which has a major security issue that was patched in later releases: https://grafana.com/blog/2020/06/03/grafana-6.7.4-and-7.0.2-released-with-important-security-fix/
- Infra Solr is running Solr 7.7, which has an RCE vulnerability. This was patched in Solr 8.3, which is not part of Infra Solr.
- The packaged ZooKeeper is 3.4.6, but SSL support was only added in 3.5.5 (see the sketch below).
I saw some questions talking about "Patch Upgrades", but is there a guide to upgrading individual components in a cluster via Ambari (or by other means)?
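For context, the SSL support added in ZooKeeper 3.5.5 is configured roughly like this per the upstream docs (keystore paths and the port are placeholders), none of which is possible on the bundled 3.4.6:

# zoo.cfg (ZooKeeper 3.5.5+): open a TLS client port via the Netty connection factory
secureClientPort=2281
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory

# Keystore/truststore are passed as JVM system properties, e.g. in the server's
# JVMFLAGS (paths and passwords below are placeholders):
# -Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
# -Dzookeeper.ssl.keyStore.password=changeit
# -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
# -Dzookeeper.ssl.trustStore.password=changeit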
04-13-2020
08:05 AM
@stevenmatison Thanks for responding! I did think this was a file permissions issue at the start, but I ran some tests.
Test 1: I chown'd/chmod'd the underlying files to match ORC files that Presto could read from (those not written by PutHive3Streaming). That didn't work.
Test 2: I ran NiFi's SelectHive3QL (which supports inserts). This wrote the data with file permissions and ownership similar to the other processor, and Presto is able to read that data.
Were you able to get it to work? Additionally, here's a snippet of PutHive3Streaming (minus specifics like table, paths, and DBs), using an AvroReader to write.
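For reference, Test 1 used commands along these lines (the path, owner, and mode are placeholders matching whatever the readable ORC files had):

# Match the owner/group and permissions of ORC files Presto can already read
hdfs dfs -chown -R hive:hadoop /warehouse/tablespace/managed/hive/mydb.db/mytable
hdfs dfs -chmod -R 770 /warehouse/tablespace/managed/hive/mydb.db/mytable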
04-10-2020
07:55 AM
What version of NiFi? Is it the Apache standalone or the HDF version? Also, are you saying you have other NiFi nodes that work but the new one doesn't? What does the UI say? Are there errors in nifi-app.log?
04-09-2020
08:36 AM
So the title basically states it: I'm currently running into an issue when leveraging Presto to read from a Hive 3 environment if the table is populated with ORC data by NiFi's PutHive3Streaming processor.
Presto is able to read ORC ACID tables in Hive 3 when they are populated via the command line or other NiFi processors. I attempted to write data using PutHive3Streaming from later versions of NiFi (1.11.4) to no avail.
Error:
io.prestosql.spi.PrestoException: Error opening Hive split hdfs://path/to/bucket (offset=0, length=29205493): rowsInRowGroup must be greater than zero
Versions: NiFi HDF 1.9, PrestoSQL 331/332
Labels: Apache NiFi
04-09-2020
08:26 AM
So Presto now supports ACID tables, but only for Hive 3. However, the subdirectory exception comes from a configuration on the Presto client side. In the hive.properties file in Presto's catalog directory, add "hive.recursive-directories=true".
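For reference, a minimal sketch of that catalog file (assuming the catalog is named "hive"; the metastore host is a placeholder):

# etc/catalog/hive.properties on the Presto coordinator and workers
connector.name=hive-hadoop2
hive.metastore.uri=thrift://metastore-host:9083
# Let Presto read data written into subdirectories (e.g. by Hive ACID/streaming writes)
hive.recursive-directories=true

Restart Presto after editing the catalog file so the property takes effect.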
04-09-2020
08:24 AM
@Ellyly No permanent solution to this collision-type issue. The workaround I used was to split the table into several smaller tables so that no collisions occur. Not a great solution, but it worked for my need.
11-13-2019
06:45 AM
1 Kudo
If I can ask, why are you deleting? If you want to go through NiFi, it sounds like an automated process. If it's a data retention issue, HBase has a way to prune on a timer: the TTL setting.
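For example, TTL can be set per column family from the HBase shell (table and family names here are hypothetical):

# Expire cells in column family 'cf' of table 'events' after 7 days (TTL is in seconds)
alter 'events', { NAME => 'cf', TTL => 604800 }
# Verify the setting (TTL shows as FOREVER when unset)
describe 'events'

HBase then drops expired cells automatically during major compactions, so no delete job is needed.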