03-20-2018
05:47 PM
7 Kudos
Integrating lucene-geo-gazetteer For Geo Parsing with Apache NiFi
lucene-geo-gazetteer is a very cool tool, built on Apache Tika, Apache Lucene and Apache OpenNLP, that builds a fast index of geographic data from the GeoNames all-countries dataset. It then provides a REST API that we can easily integrate into a flow.
I have connected this to a NiFi flow to enhance and enrich Twitter data with geographic information.
Example NiFi Flow To Convert Twitter Locations Into Geo Information
Downloading the Countries Data and Building the Geo Indexes
Calling the Local Geo Server
Example JSON Data Returned
Let's pull out the fields we want after the split.
Let's build a new JSON file of just the fields we like, including the new geo ones.
Example JSON Processed
{"msg":"RT @pauljauregui: Cybersecurity Startups Struggle - https://t.co/wADHLyUEEB #CyberSecurity #AI #IoT #IIoT #IndustrialIoT #DataSecurity #Sec…","unixtime":"1516754942404","friends_count":"4293","sentiment":"NEGATIVE","geolongitude":"-98.5","hashtags":"[\"CyberSecurity\",\"AI\",\"IoT\",\"IIoT\",\"IndustrialIoT\",\"DataSecurity\"]","listed_count":"520","tweet_id":"955965632402485248","user_name":"Lee Weiden","favourites_count":"12454","source":"<a href=\"http://twitter.com\" rel=\"nofollow\">Twitter Web Client</a>","vadersentiment":"Compound -0.3182 Negative 0.161 Neutral 0.839 Positive 0.0 \n","placename":"United States","media_url":"[]","sentiment2":"Negative\n","retweet_count":"0","user_mentions_name":"[]","geo":"","urls":"[]","countryCode":"US","user_url":"","place":"","timestamp":"1516754942404","geolatitude":"39.76","coordinates":"","handle":"LeeWeiden","profile_image_url":"http://xxx.xxxx.com/profile_images/777401884629803009/dUOFoLnt_normal.jpg","time_zone":"Eastern Time (US & Canada)","ext_media":"[]","statuses_count":"186127","followers_count":"1461","location":"United States","time":"Wed Jan 24 00:49:02 +0000 2018","user_mentions":"[]","user_description":"Drivers, Entrepreneur, Family, Faith, Fun, Fitness, Technology, CRM, Social, Mobility & Customer Experience."}
Test The API
http://localhost:8765/api/search?s=Hightstown&s=New+Jersey
Build the Index From All Countries Dataset
./src/main/bin/lucene-geo-gazetteer -i geoIndex -b allCountries.txt
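The build step above assumes allCountries.txt is already present; it can be fetched from GeoNames first (download URL as published on geonames.org):
# download and unpack the GeoNames all-countries dump (a large file)
curl -O http://download.geonames.org/export/dump/allCountries.zip
unzip allCountries.zip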
Run the REST Server
./src/main/bin/lucene-geo-gazetteer -server
Mar 20, 2018 12:33:35 PM org.apache.catalina.core.StandardContext setPath
WARNING: A context path must either be an empty string or start with a '/' and do not end with a '/'. The path [/] does not meet these criteria and has been changed to []
Starting Embedded Tomcat on port : 8765
Mar 20, 2018 12:33:35 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-8765"]
Mar 20, 2018 12:33:35 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Mar 20, 2018 12:33:35 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Mar 20, 2018 12:33:35 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/8.0.28
Mar 20, 2018 12:33:35 PM org.apache.cxf.transport.servlet.CXFNonSpringServlet loadBusNoConfig
INFO: Load the bus without application context
Mar 20, 2018 12:33:36 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.apache.cxf.bus.spring.BusApplicationContext@293eb4d9: display name [org.apache.cxf.bus.spring.BusApplicationContext@293eb4d9]; startup date [Tue Mar 20 12:33:36 EDT 2018]; root of context hierarchy
Mar 20, 2018 12:33:36 PM org.apache.cxf.bus.spring.BusApplicationContext getConfigResources
INFO: No cxf.xml configuration file detected, relying on defaults.
Mar 20, 2018 12:33:36 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
INFO: Bean factory for application context [org.apache.cxf.bus.spring.BusApplicationContext@293eb4d9]: org.springframework.beans.factory.support.DefaultListableBeanFactory@3d3d719b
Mar 20, 2018 12:33:36 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3d3d719b: defining beans [cxf,org.apache.cxf.bus.spring.BusApplicationListener,org.apache.cxf.bus.spring.BusWiringBeanFactoryPostProcessor,org.apache.cxf.bus.spring.Jsr250BeanPostProcessor,org.apache.cxf.bus.spring.BusExtensionPostProcessor,org.apache.cxf.resource.ResourceManager,org.apache.cxf.configuration.Configurer,org.apache.cxf.binding.BindingFactoryManager,org.apache.cxf.transport.DestinationFactoryManager,org.apache.cxf.transport.ConduitInitiatorManager,org.apache.cxf.wsdl.WSDLManager,org.apache.cxf.phase.PhaseManager,org.apache.cxf.workqueue.WorkQueueManager,org.apache.cxf.buslifecycle.BusLifeCycleManager,org.apache.cxf.endpoint.ServerRegistry,org.apache.cxf.endpoint.ServerLifeCycleManager,org.apache.cxf.endpoint.ClientLifeCycleManager,org.apache.cxf.transports.http.QueryHandlerRegistry,org.apache.cxf.endpoint.EndpointResolverRegistry,org.apache.cxf.headers.HeaderManager,org.apache.cxf.catalog.OASISCatalogManager,org.apache.cxf.endpoint.ServiceContractResolverRegistry,org.apache.cxf.jaxrs.JAXRSBindingFactory]; root of factory hierarchy
Mar 20, 2018 12:33:36 PM org.apache.cxf.transport.servlet.AbstractCXFServlet replaceDestinationFactory
INFO: Replaced the http destination factory with servlet transport factory
Mar 20, 2018 12:33:36 PM edu.usc.ir.geo.gazetteer.api.SearchResource <init>
INFO: Initialising searcher from index /Volumes/seagate/opensourcecomputervision/lucene-geo-gazetteer/src/main/bin/../../../geoIndex
Example Call
http://localhost:8765/api/search?s=Hightstown&s=New+Jersey
Example Results
{"Hightstown":[{"name":"Hightstown","countryCode":"US","admin1Code":"NJ","admin2Code":"021","latitude":40.26955,"longitude":-74.52321}],"New Jersey":[{"name":"New Jersey","countryCode":"US","admin1Code":"NJ","admin2Code":"","latitude":40.16706,"longitude":-74.49987}]}
Example NiFi Flow
example-geo.xml
Example Schema
{
"type": "record",
"name": "twitter",
"fields": [
{
"name": "msg",
"type": "string"
},
{
"name": "unixtime",
"type": "string"
},
{
"name": "friends_count",
"type": "string"
},
{
"name": "sentiment",
"type": "string"
},
{
"name": "geolongitude",
"type": "string"
},
{
"name": "hashtags",
"type": "string"
},
{
"name": "listed_count",
"type": "string"
},
{
"name": "tweet_id",
"type": "string"
},
{
"name": "user_name",
"type": "string"
},
{
"name": "favourites_count",
"type": "string"
},
{
"name": "source",
"type": "string"
},
{
"name": "vadersentiment",
"type": "string"
},
{
"name": "placename",
"type": "string"
},
{
"name": "media_url",
"type": "string"
},
{
"name": "sentiment2",
"type": "string"
},
{
"name": "retweet_count",
"type": "string"
},
{
"name": "user_mentions_name",
"type": "string"
},
{
"name": "geo",
"type": "string"
},
{
"name": "urls",
"type": "string"
},
{
"name": "countryCode",
"type": "string"
},
{
"name": "user_url",
"type": "string"
},
{
"name": "place",
"type": "string",
"doc": "Type inferred from '\"\"'"
},
{
"name": "timestamp",
"type": "string",
"doc": "Type inferred from '\"1516754942404\"'"
},
{
"name": "geolatitude",
"type": "string",
"doc": "Type inferred from '\"39.76\"'"
},
{
"name": "coordinates",
"type": "string",
"doc": "Type inferred from '\"\"'"
},
{
"name": "handle",
"type": "string",
"doc": "Type inferred from '\"LeeWeiden\"'"
},
{
"name": "profile_image_url",
"type": "string",
"doc": "Type inferred from '\"http://xxx.xxx.com/profile_images/777401884629803009/dUOFoLnt_normal.jpg\"'"
},
{
"name": "time_zone",
"type": "string",
"doc": "Type inferred from '\"Eastern Time (US & Canada)\"'"
},
{
"name": "ext_media",
"type": "string",
"doc": "Type inferred from '\"[]\"'"
},
{
"name": "statuses_count",
"type": "string",
"doc": "Type inferred from '\"186127\"'"
},
{
"name": "followers_count",
"type": "string",
"doc": "Type inferred from '\"1461\"'"
},
{
"name": "location",
"type": "string",
"doc": "Type inferred from '\"United States\"'"
},
{
"name": "time",
"type": "string",
"doc": "Type inferred from '\"Wed Jan 24 00:49:02 +0000 2018\"'"
},
{
"name": "user_mentions",
"type": "string",
"doc": "Type inferred from '\"[]\"'"
},
{
"name": "user_description",
"type": "string",
"doc": "Type inferred from '\"Drivers, Entrepreneur, Family, Faith, Fun, Fitness, Technology, CRM, Social, Mobility & Customer Experience.\"'"
}
]
}
References
https://wiki.apache.org/tika/GeoTopicParser
https://github.com/chrismattmann/lucene-geo-gazetteer
http://www.geonames.org/
04-02-2018
03:46 PM
I don't know what was wrong back on March 16, but I tried today and it is working fine.
03-16-2018
12:02 PM
Does Attunity work with CSV, JSON, XML and other file formats?
03-15-2018
01:30 PM
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_spark-component-guide/content/spark-dataframe-api.html
http://spark.apache.org/docs/2.2.0/
http://spark.apache.org/docs/2.2.0/rdd-programming-guide.html
http://spark.apache.org/docs/2.2.0/quick-start.html
https://community.hortonworks.com/articles/151164/how-to-submit-spark-application-through-livy-rest.html
curl -H "Content-Type: application/json" -H "X-Requested-By: admin" -X POST -d '{"file": "/apps/example.jar","className": "com.dataflowdeveloper.example.Links"}' http://server:8999/batches
curl -H "Content-Type: application/json" -H "X-Requested-By: admin" -X POST -d '{"file": "hdfs://server:8020/apps/example_2.11-1.0.jar","className": "com.dataflowdeveloper.example.Links"}' http://server:8999/batches
FYI:
18/03/14 11:54:54 INFO LineBufferedStream: stdout: 18/03/14 11:54:54 INFO Client: Source and destination file systems are the same. Not copying hdfs://server:8020/opt/demo/example.jar
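Once a batch has been submitted, Livy's REST API can also be polled for its state; a minimal sketch (the batch id 0 is illustrative):
# check the state of a submitted Livy batch (replace 0 with the id returned by the POST)
curl -H "X-Requested-By: admin" http://server:8999/batches/0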
03-16-2018
05:42 AM
Thanks for the reply. Can you share any specifics about "various of docker"? And how much memory do you mean when you say a lot of memory?
03-11-2018
03:02 PM
2 Kudos
Extracting Text or HTML from PDF, Excel and Word Documents via Apache NiFi
This version has been tested with HDF 3.1 and Apache NiFi 1.5. The processor uses Apache Tika 1.17 and is an unsupported open source community processor that I have written. A user posted asking about HTML output; I took a look, it was easy, so I added an option for that.
Apache NiFi Flow
You must download or build the nifi-extracttextprocessor NAR and put it in your lib directory; then you can add the processor.
Select html or text
Here is the autogenerated documentation:
You can see we set the output mime.type to text/html.
Apache NiFi Example Flow to Read a File and Convert to HTML
Source and JUnit in Eclipse
Example Output HTML
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta name="pdf:PDFVersion" content="1.3"/>
<meta name="X-Parsed-By" content="org.apache.tika.parser.DefaultParser"/>
<meta name="X-Parsed-By" content="org.apache.tika.parser.pdf.PDFParser"/>
<meta name="xmp:CreatorTool" content="Rave (http://www.nevrona.com/rave)"/>
<meta name="access_permission:modify_annotations" content="true"/>
<meta name="access_permission:can_print_degraded" content="true"/>
<meta name="meta:creation-date" content="2006-03-01T07:28:26Z"/>
<meta name="created" content="Wed Mar 01 02:28:26 EST 2006"/>
<meta name="access_permission:extract_for_accessibility" content="true"/><meta name="access_permission:assemble_document" content="true"/><meta name="xmpTPg:NPages" content="2"/><meta name="Creation-Date" content="2006-03-01T07:28:26Z"/><meta name="dcterms:created" content="2006-03-01T07:28:26Z"/><meta name="dc:format" content="application/pdf; version=1.3"/><meta name="access_permission:extract_content" content="true"/><meta name="access_permission:can_print" content="true"/><meta name="pdf:docinfo:creator_tool" content="Rave (http://www.nevrona.com/rave)"/><meta name="access_permission:fill_in_form" content="true"/><meta name="pdf:encrypted" content="false"/><meta name="producer" content="Nevrona Designs"/><meta name="access_permission:can_modify" content="true"/><meta name="pdf:docinfo:producer" content="Nevrona Designs"/><meta name="pdf:docinfo:created" content="2006-03-01T07:28:26Z"/>
<meta name="Content-Type" content="application/pdf"/>
<title></title></head>
<body>
<div class="page"><p/><p>
A Simple PDF File
This is a small demonstration .pdf file -</p><p> just for use in the Virtual Mechanics tutorials. More text. And moretext. And more text. And more text. And more text.
</p><p> And more text. And more text. And more text. And more text. And moretext. And more text. Boring, zzzzz. And more text. And more text. Andmore text. And more text. And more text. And more text. And more text.And more text. And more text.</p><p> And more text. And more text. And more text. And more text. And moretext. And more text. And more text. Even more. Continued on page 2 ...</p><p/></div>
<div class="page"><p/><p>
Simple PDF File 2...continued from page 1. Yet more text. And more text. And more text.And more text. And more text. And more text. And more text. And moretext. Oh, how boring typing this stuff. But not as boring as watching paint dry. And more text. And more text. And more text. And more text.Boring. More, a little more text. The end, and just as well.
</p><p/></div></body></html>
Source Code: https://github.com/tspannhw/nifi-extracttext-processor
NAR Release: https://github.com/tspannhw/nifi-extracttext-processor/releases/tag/html
Resources:
See Part 1: https://community.hortonworks.com/articles/81694/extracttext-nifi-custom-processor-powered-by-apach.html
https://community.hortonworks.com/articles/76924/data-processing-pipeline-parsing-pdfs-and-identify.html
https://community.hortonworks.com/articles/163776/parsing-any-document-with-apache-nifi-15-with-apac.html
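To sanity-check the extraction outside NiFi, the standalone Tika app can produce similar XHTML from the command line (the jar and file names here are illustrative; adjust to your download):
# run Tika's HTML extraction directly against a sample document
java -jar tika-app-1.17.jar --html simple.pdf > simple.html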
03-11-2018
01:43 AM
3 Kudos
Big Data DevOps: Part 2: Schemas, Schemas, Schemas. Know Your Records, Know Your DataTypes, Know Your Fields, Know Your Data.
Since we can process records in Apache NiFi, Streaming Analytics Manager, Apache Kafka and any other tool that can work with a schema, we have a real need for a Schema Registry. I have mentioned them before. One important thing is to be able to automate the management of schemas. Today we will be listing and exporting them for backup and migration purposes. We will also cover how to upload new schemas and new versions of schemas. Backing up schemas with Apache NiFi 1.5+ is easy.
Backup All Schemas
GetHTTP: get the list of schemas from the Schema Registry via GET
SplitJson: turn the list into individual records
EvaluateJsonPath: get the schema name
InvokeHTTP: get the schema body
EvaluateJsonPath: turn the schema text into a separate flow file
Rename and save both the full JSON record from the registry and the schema text on its own
NiFi Flow
Initial Call to List All Schemas
Get The Schema Name
Example Schema with Text
An Example of JSON Schema Text
Build a New Flow File from The Schema Text JSON
Get the Latest Version of the Schema Text For This Schema By Name
The List Returned
Swagger Documentation for SR
Example Flow: backup-schema.xml
Schema List JSON Formatting
"entities" : [ {
"schemaMetadata" : {
"type" : "avro",
"schemaGroup" : "Kafka",
"name" : "adsb",
"description" : "adsb",
"compatibility" : "BACKWARD",
"validationLevel" : "ALL",
"evolve" : true
},
"id" : 3,
"timestamp" : 1520460239420 Get Schema List REST URL (GET) http://server:7788/api/v1/schemaregistry/schemas Get Schema Body REST URL (GET) http://server:7788/api/v1/schemaregistry/schemas/${schema}/versions/latest?branch=MASTER See: https://community.hortonworks.com/articles/177301/big-data-devops-apache-nifi-flow-versioning-and-au.html If you wish you can use the Confluent style API against SR and against Confluent Schema Registry. it is slighty different, but easy to change our REST calls to process this. Swagger Docs http://YourHWXRegistry:7788/swagger#!/4._Confluent_Schema_Registry_compatible_API/getSubjects Hortonworks Schema Registry from HDF 3.1 https://community.hortonworks.com/articles/171893/hdf-31-executing-apache-spark-via-executesparkinte-1.html
03-12-2018
01:37 PM
For now, you can use this NiFi flow to do schema registry stuff: https://community.hortonworks.com/articles/177349/big-data-devops-apache-nifi-hwx-schema-registry-sc.html
03-08-2018
08:18 PM
2 Kudos
This is for people preparing to attend my talk on Deep Learning at DataWorks Summit Berlin 2018 (https://dataworkssummit.com/berlin-2018/#agenda) on Thursday, April 19, 2018 at 11:50 AM Berlin time. This example requires Apache NiFi 1.5 or newer. This is part 2 of https://community.hortonworks.com/articles/155435/using-the-new-mxnet-model-server.html
Our flow that receives the JSON files from the server does some minimal processing. We add some metadata fields and infer an Avro schema from the JSON file (we only need to do this once in development, and then you can delete that box from your flow). As you can see, I can easily push that data to HDFS as a Parquet file. This is useful if you do not wish to install Apache MXNet on your HDF, HDP or related nodes. You can instead install Apache MXNet plus MMS on a cloud or edge server and call it via HTTP from Apache NiFi for processing.
Local Apache NiFi Flow To Call Our SSD Predict and SqueezeNet Predict REST Services
Cluster Receiving The Two Remote Ports
Server Apache NiFi Flow
Example SqueezeNet JSON Data Processed by Apache NiFi
Set the Schema and Mime Type
Storage Settings For Apache Parquet Files on HDFS
SSD MMS Logs
SqueezeNet MMS Logs
Schemas Used
An Example Prediction returned; as you can see, you get the coordinates for drawing a box.
To Store Apache Parquet Files:
hdfs dfs -mkdir /ssdpredict
hdfs dfs -chmod 755 /ssdpredict
Inside one of the files stored by Apache NiFi in HDFS, as you can see, there is an embedded Apache Avro schema in JSON format built by the Avro Parquet MR tool version 1.8.2:
parquet.avro.schema�{"type":"record","name":"ssdpredict","fields":[{"name":"prediction","type":{"type":"array","items":{"type":"array","items":["string","int"]}},"doc":"Type inferred from '[[\"person\",385,329,466,498],[\"bicycle\",96,386,274,498]]'"}]}writer.model.nameavroIparquet-mr version 1.8.2 (build c6522788629e590a53eb79874b95f6c3ff11f16c)sPAR1
Example File
-rw-r--r-- 3 nifi hdfs 688 2018-03-08 18:32 /ssdpredict/201801081202602.jpg.parquet.avro
Apache NiFi Flow File: apache-mxnet-cluster-processing.xml
Reference: http://parquet.apache.org/documentation/latest/
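As a quick check of that embedded schema, parquet-tools (if it is available on your machine) can print the footer schema of one of the stored files:
# copy one Parquet file out of HDFS and print its schema
hdfs dfs -get /ssdpredict/201801081202602.jpg.parquet.avro /tmp/
parquet-tools schema /tmp/201801081202602.jpg.parquet.avro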
03-07-2018
09:26 PM
3 Kudos
Ingest All The Things Series: Flight Data Via Radio
I am using the FlightAware Pro Stick Plus ADS-B USB Receiver with Built-in Filter on a Mac; I should hook this up to one of my Raspberry Pis and add a longer antenna outside. You need a good antenna, a good location and nothing blocking your signal. It also depends on what air traffic is nearby. For a proof of concept, it's pretty cool to see air data flowing through a cheap USB stick into a computer, stored in a file and loaded into Apache NiFi to send on for data processing. There is a web server you can run to see the planes on a map, which is pretty cool, but I want to just ingest the data for processing.
My Equipment
If you wish to watch the data flash by in a command-line interface, you can run with the interactive flag and watch all the updates. We are dumping the data as it streams as raw text into a file. A snippet of it tailed in Apache NiFi is shown below.
We are also ingesting ADS-B data that is provided by an open data REST API (https://public-api.adsbexchange.com..) at https://www.adsbexchange.com/. Like everything else, we may want to add a schema to parse it into records.
Our ingest flow: I am getting the REST data from the ADS-B Exchange REST API, tailing the raw text dump from dump1090, and reading the aircraft history JSON files produced by dump1090 as well. For further processing, I send the aircraft history JSON files to my server cluster, which sends them on to a cloud-hosted MongoDB database, thanks to a free tier from mLab. And our data quickly arrives as JSON in Mongo.
The main install is from the dump1090 GitHub repository and is pretty straightforward.
Installation on OSX
brew update
brew install librtlsdr pkg-config
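The make step below assumes the dump1090 sources have already been checked out (repository from the Resources list at the end):
# fetch the dump1090 sources before building
git clone https://github.com/mutability/dump1090.git
cd dump1090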
make
Running
./run2.sh >> raw.txt 2>&1
run2.sh
./dump1090 --net --lat 40.265887 --lon -74.534610 --modeac --mlat --write-json-every 1 --json-location-accuracy 2 --write-json /volumes/seagate/projects/dump1090/data
I have entered my local latitude and longitude above. I also write to a local directory that we will read from in Apache NiFi.
Example Data
{ "now" : 1507344034.5,
"messages" : 1448,
"aircraft" : [
{"hex":"a6cb48","lat":40.169403,"lon":-74.526123,"nucp":7,"seen_pos":6.1,"altitude":33000,"mlat":[],"tisb":[],"messages":9,"seen":4.9,"rssi":-6.1},
{"hex":"a668e2","altitude":17250,"mlat":[],"tisb":[],"messages":31,"seen":4.2,"rssi":-7.9},
{"hex":"a8bcdd","flight":"NKS710 ","lat":40.205841,"lon":-74.491150,"nucp":7,"seen_pos":1.5,"altitude":9875,"vert_rate":0,"track":45,"speed":369,"category":"A0","mlat":[],"tisb":[],"messages":17,"seen":1.5,"rssi":-5.0},
{"hex":"a54cd9","mlat":[],"tisb":[],"messages":44,"seen":94.4,"rssi":-7.2},
{"hex":"a678c3","mlat":[],"tisb":[],"messages":60,"seen":133.2,"rssi":-7.1},
{"hex":"a1ff83","mlat":[],"tisb":[],"messages":47,"seen":212.3,"rssi":-7.9},
{"hex":"a24ce0","mlat":[],"tisb":[],"messages":160,"seen":276.3,"rssi":-6.2}
]
}
The same data ends up in dump1090's history files:
cat /usr/local/var/dump1090-mut-data/history_75.json
{ "now" : 1507344034.5,
"messages" : 1448,
"aircraft" : [
{"hex":"a6cb48","lat":40.169403,"lon":-74.526123,"nucp":7,"seen_pos":6.1,"altitude":33000,"mlat":[],"tisb":[],"messages":9,"seen":4.9,"rssi":-6.1},
{"hex":"a668e2","altitude":17250,"mlat":[],"tisb":[],"messages":31,"seen":4.2,"rssi":-7.9},
{"hex":"a8bcdd","flight":"NKS710 ","lat":40.205841,"lon":-74.491150,"nucp":7,"seen_pos":1.5,"altitude":9875,"vert_rate":0,"track":45,"speed":369,"category":"A0","mlat":[],"tisb":[],"messages":17,"seen":1.5,"rssi":-5.0},
{"hex":"a54cd9","mlat":[],"tisb":[],"messages":44,"seen":94.4,"rssi":-7.2},
{"hex":"a678c3","mlat":[],"tisb":[],"messages":60,"seen":133.2,"rssi":-7.1},
{"hex":"a1ff83","mlat":[],"tisb":[],"messages":47,"seen":212.3,"rssi":-7.9},
{"hex":"a24ce0","mlat":[],"tisb":[],"messages":160,"seen":276.3,"rssi":-6.2}
]
}
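That history file can be trimmed with a quick jq filter (jq assumed installed) to just the aircraft that carry a decoded position:
# keep only aircraft records that have lat/lon
jq '.aircraft[] | select(.lat != null) | {hex, flight, lat, lon, altitude}' /usr/local/var/dump1090-mut-data/history_75.json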
There is also an open data API available at https://www.adsbexchange.com/data/#. So I grabbed this via the REST API: https://public-api.adsbexchange.com/VirtualRadar/AircraftList.json, again using my latitude and longitude.
Alternative Approach For Ingestion:
@Hellmar Becker has a really well-developed example and presentation on how he is processing this data. See the Apache NiFi code, Python, setup scripts and presentation here: https://github.com/hellmarbecker/plt-airt-2000. My example is with a different USB stick and a different continent.
Resources:
http://realadsb.com/
http://realadsb.com/piaware.html
https://github.com/mutability/dump1090.git
https://www.dzombak.com/blog/2017/01/Monitoring-aircraft-via-ADS-B-on-OS-X.html
https://www.faa.gov/nextgen/programs/adsb/
https://community.hortonworks.com/articles/177232/apache-deep-learning-101-processing-apache-mxnet-m.html
https://www.dzombak.com/blog/2017/08/Quick-ADS-B-monitoring-on-OS-X.html
https://github.com/fredpalmer/flightaware
https://walac.github.io/pyusb/
http://www.stuffaboutcode.com/2015/09/read-piaware-flight-data-with-python.html
https://github.com/hellmarbecker/plt-airt-2000
https://gist.github.com/fasiha/c123a9c6b6c78df7597bb45e0fed808f