08-31-2018
02:34 PM
1 Kudo
IoT Edge Processing with Deep Learning on HDF 3.2 and HDP 3.0 - Part 2
For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140
See Pre-Work: https://community.hortonworks.com/articles/203638/ingesting-multiple-iot-devices-with-apache-nifi-17.html
See Part 1: https://community.hortonworks.com/articles/215079/iot-edge-processing-with-deep-learning-on-hdf-32-a.html
Step By Step Processing
Step 1: Install Apache NiFi (One or More Nodes or Clusters)
Choose: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/installing-hdf/content/install-ambari.html
or
docker pull hortonworks/nifi
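A hypothetical way to run the pulled image (the container name and port mapping below are placeholders; check the image's documentation for the exact ports it exposes):
docker run -d --name nifi -p 8080:8080 hortonworks/nifi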
Apache NiFi Configuration for IoT
https://community.hortonworks.com/articles/67756/ingesting-log-data-using-minifi-nifi.html
You will need to set nifi.remote.input.host and nifi.remote.input.socket.port in conf/nifi.properties or via the Ambari settings.
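For example, a minimal conf/nifi.properties fragment (the hostname and port below are placeholders; pick a free port on your gateway):
# host and port that remote MiniFi/NiFi instances use for Site-to-Site
nifi.remote.input.host=nifi-gateway.example.com
nifi.remote.input.socket.port=10000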
Step 2: Install Apache NiFi - MiniFi on Your Device(s)
Download MiniFi (https://nifi.apache.org/minifi/download.html)
You can choose Java or C++. For your first usage, I recommend the Java edition unless your device is too small.
You can also install on a RHEL or Debian Linux machine or OSX.
Download MiniFi Toolkit (https://nifi.apache.org/minifi/minifi-toolkit.html)
Resources:
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Versioncpp-0.5.0
https://cwiki.apache.org/confluence/display/MINIFI/Release+Notes#ReleaseNotes-Version0.5.0
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_release-notes/content/ch_hdf_relnotes.html#centos7
https://community.hortonworks.com/articles/108947/minifi-for-ble-bluetooth-low-energy-beacon-data-in.html
https://community.hortonworks.com/content/kbentry/107379/minifi-for-image-capture-and-ingestion-from-raspbe.html
Step 3: Install Apache MXNet (On MiniFi Devices and NiFi Nodes - optional)
https://mxnet.incubator.apache.org/install/index.html?platform=Devices&language=Python&processor=CPU
Install build tools and build from scratch
Walk through install: https://community.hortonworks.com/articles/176932/apache-deep-learning-101-using-apache-mxnet-on-the.html
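If you use a pip-based install rather than building from source, a quick sanity check looks something like this (a sketch; the exact package and wheel availability vary by device and OS):
pip install mxnet
python -c "import mxnet as mx; print(mx.__version__)"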
Resources and Source
https://github.com/tspannhw/StrataNYC2018
rainbow-processing.xml
rainbow-gateway-processing.xml
display-images-server.xml
rainbowminifi.xml
08-29-2018
03:43 PM
3 Kudos
IoT Edge Processing with Apache NiFi and MiniFi and Multiple Deep Learning Libraries Series
For: https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68140
In preparation for my talk on utilizing edge devices for deep learning, IoT sensor reading and big data processing, I have updated my environment to the latest and greatest tools available. With the upgrade of HDF to 3.2, I can now use Apache NiFi 1.7 and MiniFi 0.5 for IoT data ingestion, simple event processing, conversion, data processing, data flow and storage. The architecture diagram above shows the basic flow we are utilizing.
IoT Step by Step
1. Raspberry Pi with the latest patches, Python, GPS software, USB camera, sensor libraries, Java 8, MiniFi 0.5, TensorFlow and Apache MXNet installed.
2. The MiniFi flow pushes JSON and JPEGs over HTTP(S) / Site-to-Site to an Apache NiFi gateway server.
3. Option: NiFi can push to a central NiFi cloud cluster and/or a Kafka cluster, both of which run on HDF 3.2 environments.
4. The Apache NiFi cluster pushes to Hive, HDFS, a Dockerized API running in HDP 3.0 and third-party APIs.
5. NiFi and Kafka integrate with Schema Registry for our tabular data, including the rainbow and gps JSON data.
SQL Tables in Hive
I stream my data into Apache ORC files stored in HDP 3.0 HDFS directories and build external tables on them.
CREATE EXTERNAL TABLE IF NOT EXISTS rainbow (tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE)
STORED AS ORC LOCATION '/rainbow';
CREATE EXTERNAL TABLE IF NOT EXISTS gps (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, time STRING, climb STRING, epc STRING)
STORED AS ORC LOCATION '/gps';
For my processing needs I also have Hive 3 ACID tables for general table usage and updates.
create table rainbowacid(tempf DOUBLE, cputemp DOUBLE, pressure DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, temp DOUBLE, diskfree STRING, altitude DOUBLE, ts STRING,
tempf2 DOUBLE, memory DOUBLE) STORED AS ORC
TBLPROPERTIES ('transactional'='true');
CREATE TABLE IF NOT EXISTS gpsacid (speed STRING, diskfree STRING, altitude STRING, ts STRING, cputemp DOUBLE, latitude STRING, track STRING, memory DOUBLE, host STRING, uniqueid STRING, ipaddress STRING, epd STRING, utc STRING, epx STRING, epy STRING, epv STRING, ept STRING, eps STRING, longitude STRING, mode STRING, time STRING, climb STRING, epc STRING) STORED AS ORC
TBLPROPERTIES ('transactional'='true');
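Because these tables are transactional, row-level DML such as UPDATE now works. A hypothetical example via beeline (the JDBC URL is a placeholder for your HiveServer2):
beeline -u "jdbc:hive2://localhost:10000/default" -e "UPDATE rainbowacid SET tempf2 = tempf WHERE tempf2 IS NULL;"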
Then I load my initial data:
insert into rainbowacid
select * from rainbow;
insert into gpsacid
select * from gps;
Hive 3.x Updates
%jdbc(hive)
CREATE TABLE Persons_default (
ID Int NOT NULL,
Name String NOT NULL,
Age Int,
Creator String DEFAULT CURRENT_USER(),
CreateDate Date DEFAULT CURRENT_DATE()
);
One of the cool new features in Hive 3 is that you can now have column defaults, which are helpful for standard values you often want, like the current date or current user. This gives us even more relational-style features in Hive. Another very interesting feature is materialized views, which give you clean and fast subqueries. Here is a cool example:
CREATE MATERIALIZED VIEW mv1
AS
SELECT dest,origin,count(*)
FROM flights_hdfs
GROUP BY dest,origin;
References:
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/hive-overview/content/hive_whats_new_in_this_release_hive.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/using-hiveql/content/hive_3_internals.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/hive-overview/content/hive-apache-hive-3-architecturural-overview.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/materialized-view/content/hive_create_materialized_view.html
07-27-2018
01:27 PM
Are you running on AWS or another cloud provider? They often block ports. Is your domain set up properly? Is it available in DNS? Are the machines able to communicate over other ports? Is your /etc/hosts setup correct? Is there an external firewall between them? Can you SSH between them? A few quick checks are sketched below.
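Hypothetical checks to run from one node (hostname and port are placeholders for your other node and the port in question):
ping node2.example.com
nc -vz node2.example.com 8080
ssh node2.example.com hostname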
07-26-2018
01:46 PM
1 Kudo
You need to upgrade in pieces. HDP 2.3 is very, very old. The best option is to set up a new HDP 3.0 cluster and distcp your data over. Otherwise you are doing Ambari 2.0 -> 2.1 -> 2.3 -> 2.4 and HDP 2.3 -> 2.4 -> 2.6, possibly more jumps. Then Ambari 2.6 to Ambari 2.7, then HDP 2.6.5 -> HDP 3.0.
07-26-2018
11:56 AM
If you add -ot json to the end, your output will be in JSON format, which you can parse with your favorite tool; again, I was thinking to call it from NiFi and process the output. Perhaps this is NiFi-ception all over again. By default the output is nice, easy-to-read text, which is perfect if a person is watching it. It's really cool to be able to start and stop process groups remotely via a command line command.
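For example, hypothetical start/stop invocations (the URL and process group id are placeholders):
./bin/cli.sh nifi pg-start -u http://localhost:8080 --processGroupId f10700ba-3d5e-30a8-ea5d-33c59771d4f1
./bin/cli.sh nifi pg-stop -u http://localhost:8080 --processGroupId f10700ba-3d5e-30a8-ea5d-33c59771d4f1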
07-24-2018
09:02 PM
6 Kudos
HDF DevOps
It's become enough of an ask that I had to post an answer. The ask is something like this: "What's with all this UI stuff? I want DevOps, automation, command line, et al." So did I, in 2002. It's nice to see everything and have a nice diagram on a website without any extra tools or SSH. Okay, that didn't convince anyone, so here is a proper DevOps solution for you.
Option 1: REST
The full documentation for the NiFi REST API is here: https://nifi.apache.org/docs/nifi-docs/rest-api/
The following are some examples I have accessed via curl (if you have security, you will need to account for that; see the specifications and the secured example after the list).
curl http://hw13125.local:8080/nifi-api/resources
curl http://hw13125.local:8080/nifi-api/tenants/user-groups
curl http://hw13125.local:8080/nifi-api/tenants/users
curl http://hw13125.local:8080/nifi-api/flow/about
curl http://hw13125.local:8080/nifi-api/flow/banners
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/bulletin-board
curl http://hw13125.local:8080/nifi-api/flow/cluster/summary
curl http://hw13125.local:8080/nifi-api/flow/config
{"flowConfiguration":{"supportsManagedAuthorizer":false,"supportsConfigurableAuthorizer":false,"supportsConfigurableUsersAndGroups":false,"autoRefreshIntervalSeconds":30,"currentTime":"16:12:51 EDT","timeOffset":-14400000,"defaultBackPressureObjectThreshold":10000,"defaultBackPressureDataSizeThreshold":"1 GB"}}%
curl http://hw13125.local:8080/nifi-api/flow/controller/bulletins
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/history\?offset\=1\&count\=100
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/prioritizers
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/processor-types
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/registries
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/reporting-tasks
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/search-results\?q\=mxnet
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/status
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/templates
curl http://hw13125.local:8080/nifi-api/system-diagnostics
curl http://hw13125.local:8080/nifi-api/flow/controller/bulletins
curl http://hw13125.local:8080/nifi-api/flow/status
curl http://hw13125.local:8080/nifi-api/flow/cluster/summary
curl http://hw13125.local:8080/nifi-api/site-to-site
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/process-groups/root
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/process-groups/root/controller-services
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/process-groups/root/status
curl http://princeton1.field.hortonworks.com:8080/nifi-api/flow/process-groups/7a01d441-0164-1000-ec7a-54109819f084
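If your instance is secured with a login identity provider, you will first need a token from the access endpoint; a hedged sketch (hostname, port and credentials are placeholders):
token=$(curl -s -k -X POST -d 'username=admin&password=changeme' https://nifi.example.com:9443/nifi-api/access/token)
curl -s -k -H "Authorization: Bearer $token" https://nifi.example.com:9443/nifi-api/flow/status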
Option 2: Python: http://nipyapi.readthedocs.io/en/latest/readme.html Now in version 0.91.
This library is awesome, very easy to use and I love it. See here for a deep dive: https://community.hortonworks.com/articles/177301/big-data-devops-apache-nifi-flow-versioning-and-au.html
Option 3: Forget about it, just use Ambari, NiFi, Cloudbreak and DPS. Problem solved. WebGUIs are killer.
Option 4: The New NiFi Toolkit CLI
Let's examine the New NiFi CLI. I am using the version for Apache NiFi 1.7.
To install the CLI, you need to download Apache NiFi Toolkit (https://github.com/apache/nifi/tree/master/nifi-toolkit/nifi-toolkit-cli)
(https://www.apache.org/dyn/closer.lua?path=/nifi/1.7.1/nifi-toolkit-1.7.1-bin.zip)
Once you unzip it, you can run it one of two ways. Run it with no parameters and you will get an interactive console.
Now you can type help to see a nice list of commands. I think of this like the Spark shell or Apache Zeppelin: you can experiment, find out what you want, and then use that single command with your automation suite. The toolkit lets you automate a number of actions in Apache NiFi and its registry.
Below are a number of non-interactive commands:
./bin/cli.sh nifi pg-list -u http://hw13125.local:8080 -ot json
./bin/cli.sh registry list-buckets -u http://localhost:18080
./bin/cli.sh nifi pg-status -u http://hw13125.local:8080 --processGroupId f10700ba-3d5e-30a8-ea5d-33c59771d4f1
./bin/cli.sh nifi pg-get-services -u http://hw13125.local:8080 --processGroupId f10700ba-3d5e-30a8-ea5d-33c59771d4f1
./bin/cli.sh registry list-flows -bucketId 36cb79a4-f735-4f77-ba55-606718a9c3c9 -u http://localhost:18080
./bin/cli.sh registry list-buckets -u http://princeton1.field.hortonworks.com:18080/
./bin/cli.sh registry list-flows -u http://princeton1.field.hortonworks.com:18080/ -bucketIdentifier 36cb79a4-f735-4f77-ba55-606718a9c3c9
#   Name                 Id                                     Description
-   ------------------   ------------------------------------   -------------------------
1   NiFi 1.7 Features    54b37ad8-274b-4d9d-a09c-0ee2816f271c   NiFi 1.7
2   Rainbow Processing   5ebc2183-954e-4887-a28c-9d0ee54a02ed   server rainbow processing
./bin/cli.sh registry export-flow-version -u http://princeton1.field.hortonworks.com:18080/ -f 5ebc2183-954e-4887-a28c-9d0ee54a02ed -o rainbow.json -ot json
How to Backup Registry
You can run this from the interactive command line or as one-off commands. You would capture the list of buckets, use it to get the flows, and then use the list of flows to get the versions. This could easily be a for-loop in shell, Python, Go or the automation scripting tool of your choice (I would probably do this in NiFi); see the sketch after the commands below.
registry list-buckets -u http://localhost:18080
registry list-flows -u http://localhost:18080 -b 36cb79a4-f735-4f77-ba55-606718a9c3c9
registry export-flow-version -f 5ebc2183-954e-4887-a28c-9d0ee54a02ed -o rainbow.json -ot json
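A minimal sketch of that loop in shell, assuming jq is installed and that the -ot json output exposes an identifier field on each bucket and flow (verify against your version's output):
REG=http://localhost:18080
for bucket in $(./bin/cli.sh registry list-buckets -u $REG -ot json | jq -r '.[].identifier'); do
  for flow in $(./bin/cli.sh registry list-flows -u $REG -b $bucket -ot json | jq -r '.[].identifier'); do
    # export the latest version of each flow to its own JSON file
    ./bin/cli.sh registry export-flow-version -u $REG -f $flow -o backup-$flow.json -ot json
  done
done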
List What’s Running
nifi pg-list -u http://princeton1.field.hortonworks.com:8080
You will get a list of all the Processor Groups.
An Example Processor Group List from HDF NiFi Server in the Cloud
List of Commands
commands:
demo quick-import
nifi current-user
nifi get-root-id
nifi list-reg-clients
nifi create-reg-client
nifi update-reg-client
nifi get-reg-client-id
nifi pg-import
nifi pg-start
nifi pg-stop
nifi pg-get-vars
nifi pg-set-var
nifi pg-get-version
nifi pg-change-version
nifi pg-get-all-versions
nifi pg-list
nifi pg-status
nifi pg-get-services
nifi pg-enable-services
nifi pg-disable-services
registry current-user
registry list-buckets
registry create-bucket
registry delete-bucket
registry list-flows
registry create-flow
registry delete-flow
registry list-flow-versions
registry export-flow-version
registry import-flow-version
registry sync-flow-versions
registry transfer-flow-version
session keys
session show
session get
session set
session remove
session clear
exit
help
Transfer Between Servers (NiFi Registries)
registry transfer-flow-version
Transfers a version of a flow directly from one flow to another, without needing
to export/import. If --sourceProps is not specified, the source flow is assumed
to be in the same registry as the destination flow. If --sourceFlowVersion is
not specified, then the latest version will be transferred.
usage: transfer-flow-version
-f,--flowIdentifier <arg> A flow identifier
-h,--help Help
-kp,--keyPasswd <arg> The key password of the keystore being used
-ks,--keystore <arg> A keystore to use for TLS/SSL connections
-ksp,--keystorePasswd <arg> The password of the keystore being used
-kst,--keystoreType <arg> The type of key store being used (JKS or
PKCS12)
-ot,--outputType <arg> The type of output to produce (json or
simple)
-p,--properties <arg> A properties file to load arguments from,
command line values will override anything
in the properties file, must contain full
path to file
-pe,--proxiedEntity <arg> The identity of an entity to proxy
-sf,--sourceFlowIdentifier <arg> A flow identifier from the source registry
-sfv,--sourceFlowVersion <arg> A version of a flow from the source registry
-sp,--sourceProps <arg> A properties file to load for the source
-ts,--truststore <arg> A truststore to use for TLS/SSL connections
-tsp,--truststorePasswd <arg> The password of the truststore being used
-tst,--truststoreType <arg> The type of trust store being used (JKS or
PKCS12)
-u,--baseUrl <arg> The URL to execute the command against
-verbose,--verbose Indicates that verbose output should be
provided
An Example List of My Local Apache NiFi Flows
NIFI TOOLKIT Flow Analyzer
bin/flow-analyzer.sh
To run this with my massive number of flows, I edited flow-analyzer.sh and upped the Java memory as below:
${JAVA_OPTS:--Xms2G -Xmx2G}
The rest of this article is a big command line dump; it seems a huge text list is the way to go:
➜ nifi-toolkit-1.7.0 bin/flow-analyzer.sh /Volumes/seagate/apps/nifi-1.7.0/conf/flow.xml.gz
Using flow=/Volumes/seagate/apps/nifi-1.7.0/conf/flow.xml.gz
Total Bytes Utilized by System=519 GB
Max Back Pressure Size=1 GB
Min Back Pressure Size=1 GB
Average Back Pressure Size=0.990458015 GB
Max Flowfile Queue Size=10000
Min Flowfile Queue Size=10000
Avg Flowfile Queue Size=9904.580152672
bin/file-manager.sh
usage: org.apache.nifi.toolkit.admin.filemanager.FileManagerTool [-b <arg>] [-c <arg>] [-d <arg>] [-h] [-i <arg>] [-m] [-o <arg>] [-r <arg>] [-t <arg>] [-v] [-x]
This tool is used to perform backup, install and restore activities for a NiFi node.
-b,--backupDir <arg> Backup NiFi Directory (used with backup or restore operation)
-c,--nifiCurrentDir <arg> Current NiFi Installation Directory (used optionally with install or restore operation)
-d,--nifiInstallDir <arg> NiFi Installation Directory (used with install or restore operation)
-h,--help Print help info (optional)
-i,--installFile <arg> NiFi Install File
-m,--moveRepositories Allow repositories to be moved to new/restored nifi directory from existing installation, if available (used optionally with
install or restore operation)
-o,--operation <arg> File operation (install | backup | restore)
-r,--nifiRollbackDir <arg> NiFi Installation Directory (used with install or restore operation)
-t,--bootstrapConf <arg> Current NiFi Bootstrap Configuration File (optional)
-v,--verbose Set mode to verbose (optional, default is false)
-x,--overwriteConfigs Overwrite existing configuration directory with upgrade changes (used optionally with install or restore operation)
Java home: /Library/Java/Home
NiFi Toolkit home: /Volumes/seagate/apps/nifi-toolkit-1.7.0
Backups
nifi-toolkit-1.7.0 bin/file-manager.sh -o backup -b /Volumes/seagate/backupsNIFI/ -c /Volumes/seagate/apps/nifi-1.7.0 -v
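Per the usage above, restore is the mirror image of backup; a hypothetical invocation (directories are placeholders, and the exact flag combination should be checked against the usage text):
bin/file-manager.sh -o restore -b /Volumes/seagate/backupsNIFI/ -d /Volumes/seagate/apps/nifi-1.7.0 -v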
➜ nifi-toolkit-1.7.0 bin/notify.sh
usage: org.apache.nifi.toolkit.admin.notify.NotificationTool [-b <arg>] [-d <arg>] [-h] [-l <arg>] [-m <arg>] [-p <arg>] [-v]
This tool is used to send notifications (bulletins) to a NiFi cluster.
-b,--bootstrapConf <arg> Existing Bootstrap Configuration file
-d,--nifiInstallDir <arg> NiFi Installation Directory
-h,--help Print help info
-l,--level <arg> Level for notification bulletin INFO,WARN,ERROR
-m,--message <arg> Notification message for nifi instance or cluster
-p,--proxyDn <arg> User or Proxy DN that has permission to send a notification. User must have view and modify privileges to 'access the controller'
in NiFi
-v,--verbose Set mode to verbose (default is false)
Java home: /Library/Java/Home
NiFi Toolkit home: /Volumes/seagate/apps/nifi-toolkit-1.7.0
nifi-toolkit-1.7.0 bin/s2s.sh
Must specify either Port Name or Port Identifier to build Site-to-Site client
s2s is a command line tool that can either read a list of DataPackets from stdin to send over site-to-site or write the received DataPackets to stdout
The s2s cli input/output format is a JSON list of DataPackets. They can have the following formats:
[{"attributes":{"key":"value"},"data":"aGVsbG8gbmlmaQ=="}]
Where data is the base64 encoded value of the FlowFile content (always used for received data) or
[{"attributes":{"key":"value"},"dataFile":"/Volumes/seagate/apps/nifi-toolkit-1.7.0/EXAMPLE"}]
Where dataFile is a file to read the FlowFile content from
Example usage to send a FlowFile with the contents of "hey nifi" to a local unsecured NiFi over http with an input port named input:
echo '[{"data":"aGV5IG5pZmk="}]' | bin/s2s.sh -n input -p http
usage: s2s
--batchCount <arg> Number of flow files in a batch
--batchDuration <arg> Duration of a batch
--batchSize <arg> Size of flow files in a batch
-c,--compression Use compression
-d,--direction <arg> Direction (valid directions: SEND, RECEIVE) (default: SEND)
-h,--help Show help message and exit
-i,--portIdentifier <arg> Port id
--keyStore <arg> Keystore
--keyStorePassword <arg> Keystore password
--keyStoreType <arg> Keystore type (default: JKS)
-n,--portName <arg> Port name
--needClientAuth Need client auth
-p,--transportProtocol <arg> Site to site transport protocol (default: RAW)
--peerPersistenceFile <arg> File to write peer information to so it can be recovered on restart
--penalization <arg> Penalization period
--proxyHost <arg> Proxy hostname
--proxyPassword <arg> Proxy password
--proxyPort <arg> Proxy port
--proxyUsername <arg> Proxy username
--timeout <arg> Timeout
--trustStore <arg> Truststore
--trustStorePassword <arg> Truststore password
--trustStoreType <arg> Truststore type (default: JKS)
-u,--url <arg> NiFI URL to connect to (default: http://localhost:8080/nifi)
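The receive direction works the same way; a hypothetical example that writes received DataPackets to stdout (the port name is a placeholder for an output port on your flow):
bin/s2s.sh -n output -d RECEIVE -p http > received.json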
I can see this being used for integration testing.
Notify
Send a bulletin to a NiFi server:
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_administration/content/notify.html
notify.sh
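A hypothetical invocation using the options shown above (the install directory and bootstrap paths are placeholders):
bin/notify.sh -d /Volumes/seagate/apps/nifi-1.7.0 -b /Volumes/seagate/apps/nifi-1.7.0/conf/bootstrap.conf -l INFO -m "Test bulletin from the toolkit"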
References:
https://community.hortonworks.com/articles/183217/devops-backing-up-apache-nifi-registry-flows.html
For Hadoop Friends https://community.hortonworks.com/articles/108610/hadoop-devops-better-together.html
https://community.hortonworks.com/articles/177349/big-data-devops-apache-nifi-hwx-schema-registry-sc.html
https://community.hortonworks.com/articles/177301/big-data-devops-apache-nifi-flow-versioning-and-au.html
https://community.hortonworks.com/articles/161761/new-features-in-apache-nifi-15-apache-nifi-registr.html
https://community.hortonworks.com/articles/191658/devops-tips-using-the-apache-nifi-toolkit-with-apa.html
https://community.hortonworks.com/articles/191546/automated-provisioning-of-hdp-for-data-governance.html
https://community.hortonworks.com/articles/202559/distributed-pricing-engine-using-dockerized-spark.html
https://github.com/tspannhw/BackupRegistry
07-24-2018
05:47 PM
I uninstalled everything and started clean, and it worked with no problem. There have been updates since my initial issue. I recommend upgrading to the latest release, cleaning out old directories, logs and temporary files, rebooting, restarting the Ambari agent and restarting Ambari.
07-24-2018
05:45 PM
Apache NiFi is great, and the scheduling, UI, logging and provenance are excellent. 500 tables is going to take some resources; if you have them, Matt's process will work. Make sure you have 5-10 or more HDF 3.1 NiFi nodes with 128GB RAM and 16-32 core CPUs. Also make sure Hive has enough resources and you have enough beefy data nodes. Are you using LLAP with dedicated LLAP nodes? Are you using Apache Hive ACID tables yet? Those should update quicker.