Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1998 | 04-03-2024 06:39 AM |
| | 3167 | 01-12-2024 08:19 AM |
| | 1725 | 12-07-2023 01:49 PM |
| | 2504 | 08-02-2023 07:30 AM |
| | 3514 | 03-29-2023 01:22 PM |
09-23-2016 08:28 PM
This works: https://community.hortonworks.com/content/kbentry/54947/reading-opendata-json-and-storing-into-phoenix-tab.html
09-23-2016 07:01 PM
1 Kudo
Scala is native for Spark and also runs on Flink. It also leverages your Java skills. I did Scala first, and now I am learning a little Python for TensorFlow and sentiment analysis.
09-23-2016 06:52 PM
HDF 2.0 and NiFi 1.0.0 use JDK 1.8. JDK 1.7 is deprecated by Oracle, so for your own installs, make sure you pick the latest JDK 1.8.
09-22-2016 08:59 PM
The Spark Livy API will be supported eventually. In the meantime, use site-to-site to trigger Spark Streaming, or use Kafka to trigger it, as in the sketch below.
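Here is a minimal sketch of the Kafka-triggered route, assuming the spark-streaming-kafka 0.8 direct-stream connector of that era; the broker address, topic name, and the per-message handling are placeholders, with NiFi's PublishKafka writing the trigger messages:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaTriggeredStreaming {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("nifi-kafka-trigger"), Seconds(10))

    // Broker and topic are placeholders; point them at the topic that
    // NiFi's PublishKafka processor writes to.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:6667")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("nifi-events"))

    // Every message NiFi publishes kicks off the streaming work here.
    stream.map(_._2).foreachRDD { rdd =>
      rdd.foreach(msg => println(s"triggered by: $msg"))
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```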
09-22-2016 01:47 PM
1 Kudo
They work great together. Use Kafka to distribute data once something is ingested and processed in NiFi: http://hortonworks.com/hadoop-tutorial/realtime-event-processing-nifi-kafka-storm/ Kafka is great for connecting NiFi to Storm, Flink, Spark, and other processors in Hadoop: https://blogs.apache.org/nifi/entry/integrating_apache_nifi_with_apache
09-21-2016 04:25 PM
1 Kudo
Try:

/usr/hdp/current/phoenix-client/bin/sqlline.py myserver:2181:/hbase-unsecure

Check your table and table space. Make sure the table is there. Is there a schema? Then try the same query:

0: jdbc:phoenix:coolserverhortonworks> !tables
+------------+--------------+--------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | INDEX_STATE | IMMUTABLE_ROWS | SALT_BUCKETS | M |
+------------+--------------+--------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---+
| | SYSTEM | CATALOG | SYSTEM TABLE | | | | | | false | null | f |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | | | | false | null | f |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | | | | false | null | f |
| | SYSTEM | STATS | SYSTEM TABLE | | | | | | false | null | f |
| | | PHILLYCRIME | TABLE | | | | | | false | null | f |
| | | PRICES | TABLE | | | | | | false | null | f |
| | | TABLE1 | TABLE | | | | | | false | null | f |
+------------+--------------+--------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+---+
0: jdbc:phoenix:coolhortonworks>
https://phoenix.apache.org/phoenix_spark.html
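For the Spark side of that page, here is a minimal read sketch using the phoenix-spark DataSource; TABLE1 is one of the tables from the !tables listing above, and the zkUrl should match the quorum you passed to sqlline.py:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PhoenixSparkRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("phoenix-read"))
    val sqlContext = new SQLContext(sc)

    // Load a Phoenix table as a DataFrame via the phoenix-spark DataSource.
    val df = sqlContext.read
      .format("org.apache.phoenix.spark")
      .option("table", "TABLE1")
      .option("zkUrl", "myserver:2181")
      .load()

    df.show()
  }
}
```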
09-21-2016 01:54 PM
Once you add that management pack to your Ambari, you have to delete your Ambari setup: go back to a previous backup, or uninstall, delete everything, and reinstall. You have to keep these two clusters separate.
09-21-2016 01:51 PM
Java library + Spark => magic: https://github.com/gmallard/packed-decimal You could also have that in a dataflow in NiFi:
1. Get the file via NiFi GetFile.
2. Run the packed-decimal Java class via ExecuteStreamCommand, or call out to a Java or Spark program via Kafka/JMS.
3. Insert the results, or save them as ORC.
4. Create a Hive table on top.
A Spark sketch of steps 2 through 4 is below.
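Here is that rough Spark sketch, with a hand-rolled COMP-3 decoder standing in for the library call; the record length, paths, and scale are hypothetical and would come from your copybook:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PackedDecimalToOrc {
  // Minimal packed-decimal (COMP-3) decode: each nibble is one decimal
  // digit, and the final nibble is the sign (0xD means negative).
  def decodePacked(bytes: Array[Byte], scale: Int): BigDecimal = {
    val nibbles = bytes.flatMap(b => Array((b >> 4) & 0x0F, b & 0x0F))
    val sign = if (nibbles.last == 0x0D) -1 else 1
    val digits = nibbles.dropRight(1).mkString
    BigDecimal(digits) * sign / BigDecimal(10).pow(scale)
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("packed-decimal"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Fixed-length binary records; the 8-byte record length is a
    // placeholder for whatever your copybook defines.
    val amounts = sc.binaryRecords("hdfs:///landing/mainframe.dat", 8)
      .map(rec => decodePacked(rec, scale = 2).toString)

    // Save as ORC (on Spark 1.x this needs Hive support), then create a
    // Hive external table over the output directory.
    amounts.toDF("amount").write.orc("hdfs:///warehouse/amounts_orc")
  }
}
```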
09-20-2016 01:47 PM
1 Kudo
Doing a search, you can find a number of YARN-certified ETL tools, including Informatica, MicroStrategy, and Talend: http://hortonworks.com/partners/certified/yarn-ready/
09-20-2016 01:31 PM
Option 1) It looks like you can write your own custom processor that does this using their library: http://omid.incubator.apache.org/quickstart.html
Option 2) If you don't want to add a custom processor, you could have a Spark, Flink, or Storm program make the Omid client call and push to NiFi with site-to-site or Kafka. You must check for failures and implement retry; see the sketch below.
Option 3) Tephra is used by Apache Phoenix as well to add cross-row and cross-table transaction support with full ACID semantics, so you could use a JDBC connection from NiFi to get the data.
Option 4) Use CQRS / event sourcing instead of old-style two-phase commit, which has heavy overhead and limits scalability.
Option 5) http://trafodion.apache.org/faq.html with NiFi.
Option 6) Look at some HBase material: http://www.slideshare.net/HBaseCon/operations-session-6-49043532
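For Option 2, here is a minimal push-to-NiFi sketch using the nifi-site-to-site-client library; the NiFi URL, input port name, and payload are hypothetical:

```scala
import scala.collection.JavaConverters._
import org.apache.nifi.remote.TransferDirection
import org.apache.nifi.remote.client.SiteToSiteClient

object PushToNiFi {
  def main(args: Array[String]): Unit = {
    // The URL and port name are placeholders; the port must exist as an
    // input port on the NiFi canvas.
    val client = new SiteToSiteClient.Builder()
      .url("http://nifi-host:8080/nifi")
      .portName("from-omid-client")
      .build()
    try {
      val txn = client.createTransaction(TransferDirection.SEND)
      val attrs = Map("source" -> "omid-client").asJava
      txn.send("result-payload".getBytes("UTF-8"), attrs)
      txn.confirm()   // verify the transfer with the remote side
      txn.complete()  // commit; on failure, handle the error and retry
    } finally {
      client.close()
    }
  }
}
```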