Posts: 1973
Kudos Received: 1225
Solutions: 124

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1843 | 04-03-2024 06:39 AM |
|  | 2875 | 01-12-2024 08:19 AM |
|  | 1585 | 12-07-2023 01:49 PM |
|  | 2349 | 08-02-2023 07:30 AM |
|  | 3241 | 03-29-2023 01:22 PM |
11-20-2018
09:49 AM
I am getting the below error:

2018-11-12 16:37:28,189 WARN [pool-2-thread-1] o.a.n.m.b.c.i.PullHttpChangeIngestor Hit an exception while trying to pull java.net.ConnectException: Failed to connect to /127.0.0.1:10080

JIRA ticket for the same issue: http://mail-archives.apache.org/mod_mbox/nifi-commits/201805.mbox/%3CJIRA.13157686.1525721467000.15178.1525721820467@Atlassian.JIRA%3E

How do I resolve this issue? Please help.
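For context, this warning means the PullHttpChangeIngestor is polling for flow configuration at 127.0.0.1:10080 and nothing is answering there. Below is a minimal Python sketch to confirm whether that port is reachable before digging into MiNiFi itself; the host and port are taken from the error above, so adjust them to whatever your pull-HTTP settings actually point at.

```python
import socket

# Host/port taken from the ConnectException above; change them if your
# pull-HTTP configuration points at a different config server.
HOST, PORT = "127.0.0.1", 10080

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_open(HOST, PORT):
        print("The config endpoint is reachable; check the pull path and query settings instead.")
    else:
        print("Nothing is listening on %s:%d - start the config server or fix the pull host/port." % (HOST, PORT))
```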
03-31-2018
05:30 PM
1 Kudo
DevOps: Backups Part 2 - Apache NiFi Registry

REST API docs: https://nifi.apache.org/docs/nifi-registry-docs/rest-api/index.html
GitHub: https://github.com/tspannhw/BackupRegistry
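As a minimal illustration of the REST-driven backup idea, here is a hedged Python sketch that lists the Registry buckets and saves the JSON to disk. The Registry URL is a placeholder for your own instance; only the standard /nifi-registry-api/buckets endpoint from the docs linked above is assumed, not anything specific to the BackupRegistry scripts.

```python
import json
import urllib.request

# Placeholder Registry URL - point this at your own NiFi Registry instance.
REGISTRY = "http://localhost:18080"

def backup_buckets(outfile="buckets-backup.json"):
    """Fetch all buckets from the Registry REST API and write them to a file."""
    url = REGISTRY + "/nifi-registry-api/buckets"
    with urllib.request.urlopen(url) as resp:
        buckets = json.loads(resp.read().decode("utf-8"))
    with open(outfile, "w") as f:
        json.dump(buckets, f, indent=2)
    print("Saved %d buckets to %s" % (len(buckets), outfile))

if __name__ == "__main__":
    backup_buckets()
```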
03-30-2018
01:14 PM
2 Kudos
http://iotfusion.net/session/enterprise-iiot-edge-processing-with-apache-nifi-minifi-and-deep-learning/

Join me live in Philly on April 5, 2018. Sending data to an advanced analytics platform like https://www.zoomdata.com/ is child's play: I added it as a live stream via REST while sending the JSON to be converted to AVRO and then ORC for storage and Hive queries. As part of the ingest we store the images and make a current image for display.

See:
https://community.hortonworks.com/articles/182850/vision-thing.html
https://community.hortonworks.com/content/kbentry/182984/vision-thing-part-2-processing-capturing-and-displ.html

An overview of the flow: ingest both Apache MXNet + SenseHat data plus multiple rows of TensorFlow data. If it is TensorFlow JSON data, split it into individual JSON records; this is how to split an array of JSON that doesn't have a top-level element (a small sketch of the split follows below).

Reference: https://community.hortonworks.com/articles/182850/vision-thing.html

GitHub:
https://github.com/tspannhw/OpenSourceComputerVision
https://github.com/tspannhw/ApacheDeepLearning101
https://github.com/tspannhw/nifi-tensorflow-processor
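To make the "split an array of JSON that doesn't have a top element" step concrete, here is a small Python sketch of the same transformation the flow performs. The sample records are made up; in the actual flow this split happens inside NiFi before the AVRO/ORC conversion.

```python
import json

# Made-up TensorFlow-style payload: a bare JSON array with no wrapping element.
payload = '[{"label": "cat", "score": 0.98}, {"label": "dog", "score": 0.01}]'

def split_records(raw):
    """Split a top-level JSON array into individual one-record JSON strings."""
    records = json.loads(raw)                 # parse the whole array
    return [json.dumps(rec) for rec in records]

for record in split_records(payload):
    print(record)   # each line is one record, ready to convert to AVRO and then ORC
```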
05-22-2018
02:53 PM
Found how to do it.
03-29-2018
09:06 PM
This is awesome.
03-28-2018
10:31 PM
6 Kudos
Vision Thing: Part 2: Processing, Capturing and Displaying Live Image Feeds with Apache NiFi, MiniFi, OpenCV, Python, MXNet

As part of processing live webcam images from devices, I want to display the last one ingested to see what's going on. Those same images run through Apache MXNet, NVidia TensorRT and OpenCV for analysis. I wanted to list all the images I have stored on my NiFi server, sent by the Jetson TX1 device. So I went old school and had Apache NiFi serve up some simple CGI scripts that list the images and wrap them in HTML.

index.sh

```sh
ls /opt/demo/images2/ | /opt/demo/buildpage.sh
```

buildpage.sh

```sh
#!/bin/sh
echo '<html><head><meta http-equiv="refresh" content="60"> <title>NiFi List Images</title> </head> <body> <p> <br> <b>List Images</b> <br><br>'
sed 's/^.*/<a target="_new" href="http:\/\/princeton1\.field\.hortonworks\.com\:9099\?img_name\=&">&<\/a><br\/>/'
echo '</body></html>'
```

This works and is triggered in NiFi by HTTP request calls. For each image listed, I use Apache NiFi to display that image. To serve images, you need to pass in ?img_name=, which NiFi translates into the following attribute: http.query.param.img_name. Here is one image served. (A quick client-side request sketch appears at the end of this article.)

Running the List Page Website

I have three separate HTTP request handlers with three separate ports. One shows a web page with the current image, one returns the current image, and the last lists the images.

Ingest from the NVidia Jetson TX1, sending the images for processing and sending the deep learning analysis elsewhere.

Our Combined Schema

Store the images and make a copy called current.jpg, overwriting the existing one.

Creating an Apache Hive Table for Jetson TX1 Updated Data

```sql
%jdbc(hive)
CREATE EXTERNAL TABLE IF NOT EXISTS jetsonscan (filename STRING, top1pct STRING, top5 STRING, top4 STRING,
  top3 STRING, top2 STRING, top1 STRING, y STRING, host STRING, h STRING, top2pct STRING, cputemp DOUBLE,
  endtime STRING, ipaddress STRING, imagefilename STRING, top3pct STRING, uuid STRING, facedetect STRING,
  diskfree STRING, cvfilename STRING, ts STRING, top4pct STRING, gputempf STRING, gputemp STRING,
  top5pct STRING, w STRING, memory DOUBLE, imagenet STRING, x STRING, cvface STRING, runtime STRING,
  cputempf STRING)
STORED AS ORC
LOCATION '/jetsonscan'
```

We have added two fields for the OpenCV results. I am using OpenCV to find faces:

Face [[357 62 61 61]]

Make sure you turn your image grayscale before trying the OpenCV Haar Cascade frontal face detector. You also need to install the cascade XML file.
The image capture and analysis script:

```python
# 2017 load pictures and analyze
# https://github.com/tspannhw/mxnet_rpi/blob/master/analyze.py
import time
import sys
import datetime
import subprocess
import urllib2
import os
import traceback
import math
import random, string
import base64
import json
import mxnet as mx
import inception_predict
import numpy as np
import cv2
import socket
import psutil
from time import sleep
from string import Template
from time import gmtime, strftime

# Time
start = time.time()
currenttime = strftime("%Y-%m-%d %H:%M:%S", gmtime())
host = os.uname()[1]
cpu = psutil.cpu_percent(interval=1)

# SoC temperature, disk and memory stats
if 1 == 1:
    f = open('/sys/class/thermal/thermal_zone0/temp', 'r')
    l = f.readline()
    ctemp = 1.0 * float(l) / 1000
usage = psutil.disk_usage("/")
mem = psutil.virtual_memory()
diskrootfree = "{:.1f} MB".format(float(usage.free) / 1024 / 1024)
mempercent = mem.percent

external_IP_and_port = ('198.41.0.4', 53)  # a.root-servers.net
socket_family = socket.AF_INET

def IP_address():
    try:
        s = socket.socket(socket_family, socket.SOCK_DGRAM)
        s.connect(external_IP_and_port)
        answer = s.getsockname()
        s.close()
        return answer[0] if answer else None
    except socket.error:
        return None

ipaddress = IP_address()

# OpenCV Haar Cascade face detector setup
face_cascade_path = '/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(os.path.expanduser(face_cascade_path))
scale_factor = 1.1
min_neighbors = 3
min_size = (30, 30)
cap = cv2.VideoCapture(0)
packet_size = 3000

def randomword(length):
    return ''.join(random.choice(string.lowercase) for i in range(length))

# while True:
# Create unique image name
uniqueid = 'mxnet_uuid_{0}_{1}'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))
ret, frame = cap.read()
imgdir = 'images/'
filename = 'tx1_image_{0}_{1}.jpg'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))
cv2.imwrite(imgdir + filename, frame)

# Run inception prediction on image
try:
    topn = inception_predict.predict_from_local_file(imgdir + filename, N=5)
except:
    errorcondition = "true"

# CPU Temp
f = open("/sys/devices/virtual/thermal/thermal_zone1/temp", "r")
cputemp = str(f.readline())
cputemp = cputemp.replace('\n', '')
cputemp = cputemp.strip()
cputemp = str(round(float(cputemp)) / 1000)
cputempf = str(round(9.0 / 5.0 * float(cputemp) + 32))
f.close()

# GPU Temp
f = open("/sys/devices/virtual/thermal/thermal_zone2/temp", "r")
gputemp = str(f.readline())
gputemp = gputemp.replace('\n', '')
gputemp = gputemp.strip()
gputemp = str(round(float(gputemp)) / 1000)
gputempf = str(round(9.0 / 5.0 * float(gputemp) + 32))
f.close()

# NVidia Face Detect
p = os.popen('/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference/build/aarch64/bin/facedetect.sh ' + filename).read()
face = p.replace('\n', '|')
face = face.strip()

# NVidia ImageNet Classify
p2 = os.popen('/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/jetson-inference/build/aarch64/bin/runclassify.sh ' + filename).read()
imagenet = p2.replace('\n', '|')
imagenet = imagenet.strip()

# Top 5 MXNet predictions
top1 = str(topn[0][1])
top1pct = str(round(topn[0][0], 3) * 100)
top2 = str(topn[1][1])
top2pct = str(round(topn[1][0], 3) * 100)
top3 = str(topn[2][1])
top3pct = str(round(topn[2][0], 3) * 100)
top4 = str(topn[3][1])
top4pct = str(round(topn[3][0], 3) * 100)
top5 = str(topn[4][1])
top5pct = str(round(topn[4][0], 3) * 100)

# OpenCV face detection: convert to grayscale first, then run the Haar Cascade
infname = "/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/" + filename
flags = cv2.CASCADE_SCALE_IMAGE
image = cv2.imread(imgdir + filename)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=scale_factor, minNeighbors=min_neighbors, minSize=min_size, flags=flags)

# Create face images
x = 0
y = 0
w = 0
h = 0
outfilename = filename
outfname = filename
cvface = ''
cvfilename = ''
for (x1, y1, w1, h1) in faces:
    # Use the detected coordinates for the rectangle
    x = x1
    y = y1
    w = w1
    h = h1
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 0), 2)
    outfname = "/media/nvidia/96ed93f9-7c40-4999-85ba-3eb24262d0a5/images/%s.faces.jpg" % os.path.basename(infname)
    cv2.imwrite(os.path.expanduser(outfname), image)
    cvfilename += outfname
    cvface += 'Face {0}'.format(faces)
    outfilename = outfname

endtime = strftime("%Y-%m-%d %H:%M:%S", gmtime())
end = time.time()

# Build one JSON row for NiFi ingest
row = { 'uuid': uniqueid, 'top1pct': top1pct, 'top1': top1, 'top2pct': top2pct, 'top2': top2, 'top3pct': top3pct, 'top3': top3, 'top4pct': top4pct, 'top4': top4, 'top5pct': top5pct, 'top5': top5, 'gputemp': gputemp, 'imagefilename': filename, 'gputempf': gputempf, 'cputempf': cputempf, 'runtime': str(round(end - start)), 'facedetect': face, 'imagenet': imagenet, 'ts': currenttime, 'endtime': endtime, 'host': host, 'memory': mempercent, 'diskfree': diskrootfree, 'cputemp': round(ctemp, 2), 'ipaddress': ipaddress, 'x': str(x), 'y': str(y), 'w': str(w), 'h': str(h), 'filename': outfname, 'cvface': cvface, 'cvfilename': cvfilename }
json_string = json.dumps(row)
print(json_string)
```
See Part 1: https://community.hortonworks.com/content/kbentry/182850/vision-thing.html

You can find all the Python, shell scripts and HTML on GitHub: https://github.com/tspannhw/OpenSourceComputerVision/
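To round out the serving side, here is a hedged Python sketch of how a client would request a single image from the image-serving handler described above. The hostname and port come from the buildpage.sh link in this article, and the image name is a hypothetical example; swap in your own NiFi HandleHttpRequest host, port and a filename from the list page.

```python
import urllib.parse
import urllib.request

# Host/port taken from the buildpage.sh link above; replace with your own
# NiFi HandleHttpRequest host and port.
BASE = "http://princeton1.field.hortonworks.com:9099"

def fetch_image(img_name, outfile):
    """Request one image; NiFi sees the name as the http.query.param.img_name attribute."""
    url = "%s/?img_name=%s" % (BASE, urllib.parse.quote(img_name))
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(outfile, "wb") as f:
        f.write(data)
    print("wrote %d bytes to %s" % (len(data), outfile))

if __name__ == "__main__":
    # Hypothetical image name - use one of the names shown on the list page.
    fetch_image("tx1_image_abc_20180328221500.jpg", "latest.jpg")
```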
03-27-2018
08:22 PM
6 Kudos
Open Source Computer Vision with TensorFlow, Apache MiniFi, Apache NiFi, OpenCV, Apache Tika and Python

In preparation for this talk, I am releasing some articles detailing how to work with images. In this one I use my custom web camera processor to ingest webcam images via Apache NiFi on an OSX laptop equipped with two webcams. This lets you ingest as many images as you need for security, motion detection or fun.

Custom processor: https://github.com/tspannhw/GetWebCamera

My processor is a thin wrapper around an awesome Java library: http://webcam-capture.sarxos.pl/
See: https://github.com/sarxos/webcam-capture/blob/master/webcam-capture/src/example/java/TakePictureExample.java

To use the processor, clone my GitHub project or download the prebuilt NAR, install it to your nifi/lib directory and restart your NiFi server. It is easy to use: add the processor at the start of your workflow to ingest images from your webcam. The processor takes two properties: imagefilename and cameraname. Imagefilename is the name you want the newly created image to have. Cameraname is the name of the camera if you have more than one; you can put in a partial name to match. My OSX machine has the built-in camera and one in my display. When I am working in locked mode I want it to grab the display version, so I enter the text "Display". (A rough Python/OpenCV equivalent of the capture step appears after the references below.)

Combining it with the custom TensorFlow processor makes a really nice workflow: https://community.hortonworks.com/articles/178498/integrating-tensorflow-16-image-labelling-with-hdf.html

The easiest Java 8 code ever: wrap any of your Java goodness and make it a processor today! It is even easier to test. When you use the Maven archetype it builds the full directory, a valid empty processor and a unit test. Apache NiFi builds the documentation for me; add some comments in your code and blammo.

Another awesome option is to use the MiniFi C++ agent's GetUSBCamera processor to ingest your webcam images: https://github.com/apache/nifi-minifi-cpp/blob/master/PROCESSORS.md#getusbcamera

Installable release: https://github.com/tspannhw/GetWebCamera/releases/tag/1.0

References
https://community.hortonworks.com/articles/103863/using-an-asus-tinkerboard-with-tensorflow-and-pyth.html
https://community.hortonworks.com/articles/118132/minifi-capturing-converting-tensorflow-inception-t.html
https://github.com/tspannhw/rpi-noir-screen
https://community.hortonworks.com/articles/77988/ingest-remote-camera-images-from-raspberry-pi-via.html
https://community.hortonworks.com/articles/107379/minifi-for-image-capture-and-ingestion-from-raspbe.html
https://community.hortonworks.com/articles/58265/analyzing-images-in-hdf-20-using-tensorflow.html
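As a rough non-JVM illustration of what the processor does, here is a Python/OpenCV sketch of the same single-frame capture. This is not the processor's Java code; the camera index and filename are placeholders.

```python
import cv2

def capture_image(imagefilename="capture.jpg", camera_index=0):
    """Grab one frame from a webcam and write it to disk, like the processor's
    imagefilename property. camera_index selects the device (0 is usually the
    built-in camera; an external or display camera is often 1)."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ret, frame = cap.read()
        if not ret:
            raise RuntimeError("Could not read from camera %d" % camera_index)
        cv2.imwrite(imagefilename, frame)
        return imagefilename
    finally:
        cap.release()

if __name__ == "__main__":
    print("Saved", capture_image("webcam_test.jpg"))
```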
03-23-2018
05:51 PM
8 Kudos
Integrating Apache OpenNLP 1.8.4 into Apache NiFi 1.5 for Real-Time Natural Language Processing of Live Data Streams

This is an update to the existing processor. This one seems to work better and faster.

Versions
Apache OpenNLP 1.8.4 with name, location and date processing. I also improved the output format and added date parsing.

Example Output
nlp_location_1: China
nlp_name_1: Andrew Turner

Release
https://github.com/tspannhw/nifi-nlp-processor/releases/tag/2.0

Installation
Download the NAR here: https://github.com/tspannhw/nifi-nlp-processor/releases/tag/2.0
Install the NAR file to /usr/hdf/current/nifi/lib/
Create a model directory with permissions for the nifi user
Download the models (see below; a Python alternative to wget is sketched at the end of this post)
Restart Apache NiFi via Ambari

Download Models

```sh
wget http://opennlp.sourceforge.net/models-1.5/en-ner-date.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-location.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-money.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-organization.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-percentage.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-person.bin
wget http://opennlp.sourceforge.net/models-1.5/en-ner-time.bin
wget http://opennlp.sourceforge.net/models-1.5/en-chunker.bin
wget http://opennlp.sourceforge.net/models-1.5/en-parser-chunking.bin
wget http://opennlp.sourceforge.net/models-1.5/en-token.bin
wget http://opennlp.sourceforge.net/models-1.5/en-sent.bin
wget http://opennlp.sourceforge.net/models-1.5/en-pos-maxent.bin
wget http://opennlp.sourceforge.net/models-1.5/en-pos-perceptron.bin
```

Resources:
https://community.hortonworks.com/articles/76240/using-opennlp-for-identifying-names-from-text.html
https://community.hortonworks.com/articles/163776/parsing-any-document-with-apache-nifi-15-with-apac.html
https://community.hortonworks.com/articles/76924/data-processing-pipeline-parsing-pdfs-and-identify.html
https://community.hortonworks.com/articles/80418/open-nlp-example-apache-nifi-processor.html
https://community.hortonworks.com/articles/76935/using-sentiment-analysis-and-nlp-tools-with-hdp-25.html
https://community.hortonworks.com/articles/142686/real-time-ingesting-and-transforming-sensor-and-so.html
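If wget is not available on the box, here is a small Python alternative that fetches the same models listed above into a model directory. The directory path is a placeholder; the URLs are exactly the ones from this post.

```python
import os
import urllib.request

MODEL_DIR = "/opt/nifi/models"   # placeholder - use the model directory you created
BASE = "http://opennlp.sourceforge.net/models-1.5/"
MODELS = [
    "en-ner-date.bin", "en-ner-location.bin", "en-ner-money.bin",
    "en-ner-organization.bin", "en-ner-percentage.bin", "en-ner-person.bin",
    "en-ner-time.bin", "en-chunker.bin", "en-parser-chunking.bin",
    "en-token.bin", "en-sent.bin", "en-pos-maxent.bin", "en-pos-perceptron.bin",
]

os.makedirs(MODEL_DIR, exist_ok=True)
for name in MODELS:
    dest = os.path.join(MODEL_DIR, name)
    print("downloading", BASE + name)
    urllib.request.urlretrieve(BASE + name, dest)   # same downloads as the wget list above
```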
03-23-2018
03:02 PM
6 Kudos
Integrating TensorFlow 1.6 Image Labelling with HDF 3.1 and Apache NiFi 1.5
This is a community Apache NiFi custom processor that I wrote with help from Simon Ball. Starting from his original version, I converted the results to a different style and incorporated changes from Google's TensorFlow Label_Image Java example.
Images Supported as Flow Files
JPG, PNG, GIF
Versions
Updated TensorFlowProcessor to TF 1.6.
Summary
I added more tests, did more code cleanup, and changed the top 5 results returned to use cleaner naming. This processor has been tested in a few environments; please give it a try and let me know your results. The performance and stability seem quite good, hopefully enough for your use cases. You can also use the MiniFi C++ Agent's built-in C++ TensorFlow support if you wish to have fast local TensorFlow at the edge.
Video
JUnit Test for TensorFlow Processor
Here is another flow where I send an image to two Apache MXNet Model Servers and the local TensorFlow processor.
This is an example of the attributes returned by the processor. You pass in the image as a flowfile and it adds attributes without changing the contents.
In Apache NiFi, we can monitor how long the tasks are taking. After the first run the time of the processor goes down as we have cached the labels and the TensorFlow pre-built graph.
This is an example of an actual flow. I use this to grab images from Twitter, download them and run TensorFlow on them.
Installation
Download NAR here: https://github.com/tspannhw/nifi-tensorflow-processor/releases/tag/1.6
Install nar file to /usr/hdf/current/nifi/lib/
Create a model directory
wget https://raw.githubusercontent.com/tspannhw/nifi-tensorflow-processor/master/nifi-tensorflow-processors/src/test/resources/models/imagenet_comp_graph_label_strings.txt
wget https://github.com/tspannhw/nifi-tensorflow-processor/blob/master/nifi-tensorflow-processors/src/test/resources/models/tensorflow_inception_graph.pb?raw=true
Restart Apache NiFi via Ambari
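If you want to sanity-check the downloaded graph and label file outside NiFi, here is a hedged Python sketch in the spirit of Google's label_image example. The input/output node names ("input"/"output"), the 224x224 input size and the mean value of 117 are assumptions taken from that example, not something this processor documents; the file and image paths are placeholders.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PB = "models/tensorflow_inception_graph.pb"             # graph downloaded above
LABELS_TXT = "models/imagenet_comp_graph_label_strings.txt"   # labels downloaded above

def label_image(image_path, top_n=5):
    # Load the frozen Inception graph.
    graph_def = tf.compat.v1.GraphDef()
    with open(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")

    # Preprocess: 224x224 RGB with mean 117 subtracted (assumed from label_image).
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) - 117.0, 0)

    with tf.compat.v1.Session(graph=graph) as sess:
        scores = sess.run(graph.get_tensor_by_name("output:0"),
                          feed_dict={graph.get_tensor_by_name("input:0"): batch})[0]

    labels = [line.strip() for line in open(LABELS_TXT)]
    for i in scores.argsort()[::-1][:top_n]:
        # Guard against any off-by-one between label count and score vector length.
        name = labels[i] if i < len(labels) else "label_%d" % i
        print("%s: %.3f" % (name, scores[i]))

if __name__ == "__main__":
    label_image("example.jpg")   # placeholder image path
```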
References
https://community.hortonworks.com/articles/116803/building-a-custom-processor-in-apache-nifi-12-for.html
https://community.hortonworks.com/articles/178196/integrating-lucene-geo-gazetteer-for-geo-parsing-w.html
https://github.com/tspannhw/nifi-tensorflow-processor
https://community.hortonworks.com/articles/80339/iot-capturing-photos-and-analyzing-the-image-with.html
https://community.hortonworks.com/articles/104649/using-cloudbreak-recipes-to-deploy-anaconda-and-te.html
https://github.com/aymericdamien/TensorFlow-Examples
https://community.hortonworks.com/articles/142686/real-time-ingesting-and-transforming-sensor-and-so.html
https://community.hortonworks.com/articles/118132/minifi-capturing-converting-tensorflow-inception-t.html
https://community.hortonworks.com/articles/130814/sensors-and-image-capture-and-deep-learning-analys.html
https://community.hortonworks.com/articles/103863/using-an-asus-tinkerboard-with-tensorflow-and-pyth.html
https://community.hortonworks.com/articles/83872/data-lake-30-containerization-erasure-coding-gpu-p.html
10-30-2018
08:40 PM
I have updated "Advanced nifi-bootstrap-env" config on Ambari as below and restarted NIFI service. But still I don't see any metrics coming up on http://nifi1:7071/metrics Am I missing anything ?