03-07-2017
03:51 PM
6 Kudos
Use Case

I want to hide text messages inside images and, when the images arrive somewhere else, extract those messages. The LSB-Steganography library lets you hide text in images, binaries in images, and images in images. I was interested in hiding text messages in images; after watching https://en.wikipedia.org/wiki/Turn:_Washington's_Spies I thought secret messages were cool. Using the library, I take an image and some text and hide the text inside it. The library produces a new PNG image that carries the message, and a second script extracts the text. The images look the same to my eyes. A future test would be to run a deep learning library or image analysis tool on the images to see if they spot the modified bits; they should be able to. A future NiFi tool would be one that spots hidden messages in images. It's a fun exercise to use NiFi, and it seems possible that messages encoded in images were passing through Niagara Files back in the NSA days.

Step 1: Hide Text (ExecuteStreamCommand)
Step 2: Fetch File
Step 3: UnHide Text (ExecuteStreamCommand)

The left image is the original and the right PNG is the output image with the text embedded. The size on disk increases noticeably.
The Python source code is on GitHub and referenced below.

hide.sh

wget $1 -O img.jpg
python hidetext.py img.jpg "$2"

hidetext.py

import sys
import cv
from LSBSteg import LSBSteg

imagename = sys.argv[1]
textstring = sys.argv[2]
carrier = cv.LoadImage(imagename)
steg = LSBSteg(carrier)
steg.hideText(textstring)
# Save a new PNG that contains the hidden data
steg.saveImage(imagename + ".png")

unhide.sh

python unhidetext.py $1

unhidetext.py

import sys
import cv
from LSBSteg import LSBSteg

imagename = sys.argv[1]
im = cv.LoadImage(imagename)
steg = LSBSteg(im)
print steg.unhideText()

For installation, you need to download the LSB-Steganography script and install OpenCV: pip install cv
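The cv.LoadImage API above is from the old OpenCV 1.x Python bindings. As a self-contained sketch of the same least-significant-bit idea (using NumPy directly, not the LSBSteg library; the helper names are mine, not from the library):

```python
import numpy as np

def hide_text(img, text):
    """Hide UTF-8 text in the least significant bits of a copy of img.
    A 32-bit big-endian length header precedes the message bits."""
    data = text.encode("utf-8")
    payload = len(data).to_bytes(4, "big") + data
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = img.flatten()  # flatten() returns a copy, so img is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def unhide_text(img):
    """Read the length header, then decode that many bytes of LSBs."""
    flat = img.flatten()
    n = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    return np.packbits(flat[32:32 + 8 * n] & 1).tobytes().decode("utf-8")

# Round trip on a random uint8 "image"
img = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
stego = hide_text(img, "attack at dawn")
recovered = unhide_text(stego)
print(recovered)
```

Since only the lowest bit of each byte changes, the stego image differs from the original by at most 1 per channel value, which is why the two look identical to the eye.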
Reference: https://en.wikipedia.org/wiki/Steganography https://github.com/tspannhw/spy https://github.com/RobinDavid/LSB-Steganography
05-30-2017
01:42 PM
What are your settings for MinIO? You must be running a MinIO server and have permission to access it. You also need to set your access key, secret key, host base, and host bucket.
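For example, if you are pointing s3cmd at MinIO, the relevant entries in ~/.s3cfg look roughly like this (the hostname, port, and keys are placeholders, not values from this thread):

```ini
# ~/.s3cfg -- substitute your own MinIO server and credentials
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = minio.example.com:9000
host_bucket = minio.example.com:9000
use_https = False
```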
03-02-2017
05:25 PM
3 Kudos
1. Host a Web Page (index.html) via HTTP GET with 200 OK Status
2. Receive POST from that page via AJAX with browser data
3. Extract Content and Attributes
4. Build a JSON file of HTTP data
5. Store it
To accept location in a phone or modern browser you must be running SSL.
So I added that for this HTTP Request.
Use openssl to create your 2048-bit RSA X.509 certificate, PKCS12 keystore, and JKS keystore; import the trust store, and import the certificate into your browser.
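A hedged sketch of those openssl steps (the file names, CN, and changeit password are placeholders; the keytool conversion to JKS requires a JDK):

```shell
# Self-signed 2048-bit RSA key + X.509 certificate, valid one year
openssl req -x509 -newkey rsa:2048 -keyout nifi-key.pem -out nifi-cert.pem \
  -days 365 -nodes -subj "/CN=localhost"

# Bundle key and certificate into a PKCS12 keystore
openssl pkcs12 -export -in nifi-cert.pem -inkey nifi-key.pem \
  -out nifi-keystore.p12 -name nifi -passout pass:changeit

# With a JDK installed, convert to a JKS keystore for NiFi:
#   keytool -importkeystore -srckeystore nifi-keystore.p12 -srcstoretype PKCS12 \
#     -srcstorepass changeit -destkeystore nifi-keystore.jks -deststorepass changeit
```

The browser will still warn on a self-signed certificate until you import the certificate into its trust store.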
Your web page can be any web page, just POST back via AJAX or Form Submit.
<html>
<head>
<title>NiFi Browser Data Acquisition</title>
</head>
<body>
<script>
// Usage
window.onload = function() {
navigator.getBattery().then(function(battery) {
console.log(battery.level);
battery.addEventListener('levelchange', function() {
console.log(this.level);
});
});
};
////////////// print these
var latitude = "";
var longitude = "";
var ips = "";
var batteryInfo = "";
var screenInfo = screen.width +","+ screen.height + "," +
screen.availWidth +","+ screen.availHeight + "," +
screen.colorDepth + "," + screen.pixelDepth;
var pluginsInfo = "";
var coresInfo = "";
/////////////
////// Set Plugins
for (var i = 0; i < 12; i++) {
if ( typeof window.navigator.plugins[i] !== 'undefined' ) {
pluginsInfo += window.navigator.plugins[i].name + ', ';
}
}
////// Set Cores
if ( window.navigator.hardwareConcurrency > 0 ) {
coresInfo = window.navigator.hardwareConcurrency + " cores";
}
/////////////
/// send the information to the server
function loadDoc() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
document.getElementById("demo").innerHTML = 'Sent.';
}
};
// /send
xhttp.open("POST", "/send", true);
xhttp.setRequestHeader("Content-type", "application/json");
xhttp.send('{"plugins":"' + pluginsInfo +
'", "screen":"' + screenInfo +
'", "cores":"' + coresInfo +
'", "battery":"' + batteryInfo +
'", "ip":"' + ips +
'", "lat":"' + latitude + '", "lng":"' + longitude + '"}')
}
////////////
function geoFindMe() {
var output = document.getElementById("out");
if (!navigator.geolocation){
output.innerHTML = "<p>Geolocation is not supported by your browser</p>";
return;
}
function success(position) {
latitude = position.coords.latitude;
longitude = position.coords.longitude;
output.innerHTML = '<p>Latitude is ' + latitude + '° <br>Longitude is ' + longitude + '°</p>';
var img = new Image();
img.src="https://maps.googleapis.com/maps/api/staticmap?center=" + latitude + "," + longitude + "&zoom=13&size=300x300&sensor=false";
output.appendChild(img);
}
function error() {
output.innerHTML = "Unable to retrieve your location";
}
output.innerHTML = "<p>Locating…</p>";
navigator.geolocation.getCurrentPosition(success, error);
}
//get the IP addresses associated with an account
function getIPs(callback){
var ip_dups = {};
//compatibility for firefox and chrome
var RTCPeerConnection = window.RTCPeerConnection
|| window.mozRTCPeerConnection
|| window.webkitRTCPeerConnection;
var useWebKit = !!window.webkitRTCPeerConnection;
//bypass naive webrtc blocking using an iframe
if(!RTCPeerConnection){
//NOTE: you need to have an iframe in the page right above the script tag
//
//<iframe id="iframe" sandbox="allow-same-origin" style="display: none"></iframe>
//<script>...getIPs called in here...
//
var win = iframe.contentWindow;
RTCPeerConnection = win.RTCPeerConnection
|| win.mozRTCPeerConnection
|| win.webkitRTCPeerConnection;
useWebKit = !!win.webkitRTCPeerConnection;
}
//minimal requirements for data connection
var mediaConstraints = {
optional: [{RtpDataChannels: true}]
};
var servers = {iceServers: [{urls: "stun:stun.services.mozilla.com"}]};
//construct a new RTCPeerConnection
var pc = new RTCPeerConnection(servers, mediaConstraints);
function handleCandidate(candidate){
//match just the IP address
var ip_regex = /([0-9]{1,3}(\.[0-9]{1,3}){3}|[a-f0-9]{1,4}(:[a-f0-9]{1,4}){7})/
var ip_addr = ip_regex.exec(candidate)[1];
//remove duplicates
if(ip_dups[ip_addr] === undefined)
callback(ip_addr);
ip_dups[ip_addr] = true;
}
//listen for candidate events
pc.onicecandidate = function(ice){
//skip non-candidate events
if(ice.candidate)
handleCandidate(ice.candidate.candidate);
};
//create a bogus data channel
pc.createDataChannel("");
//create an offer sdp
pc.createOffer(function(result){
//trigger the stun server request
pc.setLocalDescription(result, function(){}, function(){});
}, function(){});
//wait for a while to let everything done
setTimeout(function(){
//read candidate info from local description
var lines = pc.localDescription.sdp.split('\n');
lines.forEach(function(line){
if(line.indexOf('a=candidate:') === 0)
handleCandidate(line);
});
}, 1000);
}
window.addEventListener("load", function (ev) {
"use strict";
var log = document.getElementById("log");
// https://dvcs.w3.org/hg/dap/raw-file/tip/sensor-api/Overview.html
window.addEventListener("devicetemperature", function (ev) {
log.textContent += "devicetemperature " + ev.value + "\n";
}, false);
window.addEventListener("devicepressure", function (ev) {
log.textContent += "devicepressure " + ev.value + "\n";
}, false);
window.addEventListener("devicelight", function (ev) {
log.textContent += "devicelight " + ev.value + "\n";
// toy trick
log.style.color = "rgb(" + (255 - 2*ev.value) + ",0,0)";
log.style.backgroundColor = "rgb(0,0," + (2*ev.value) + ")";
}, false);
window.addEventListener("deviceproximity", function (ev) {
log.textContent += "deviceproximity " + ev.value + "\n";
// toy trick
if (ev.value < 3) navigator.vibrate([300, 100, 100]);
}, false);
window.addEventListener("devicenoise", function (ev) {
log.textContent += "devicenoise " + ev.value + "\n";
}, false);
window.addEventListener("devicehumidity", function (ev) {
log.textContent += "devicehumidity " + ev.value + "\n";
}, false);
//https://wiki.mozilla.org/Magnetic_Field_Events
window.addEventListener("devicemagneticfield", function (ev) {
log.textContent += "devicemagneticfield " + [ev.x, ev.y, ev.z] + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/default/pressure/Overview.html
window.addEventListener("atmpressure", function (ev) {
log.textContent += "atmpressure " + ev.value + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/humidity/Overview.html
window.addEventListener("humidity", function (ev) {
log.textContent += "humidity " + ev.value + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/temperature/Overview.html
window.addEventListener("temperature", function (ev) {
log.textContent += "temperature " + [ev.f, ev.c, ev.k, ev.value] + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/battery/Overview.html
try {
if (typeof navigator.getBattery === "function") {
navigator.getBattery().then(function (battery) {
log.textContent += "battery.level " + battery.level + "\n";
log.textContent += "battery.charging " + battery.charging + "\n";
batteryInfo = "battery.level=" + battery.level + "," +
"battery.charging=" + battery.charging;
log.textContent += "battery.chargingTime " + battery.chargingTime + "\n";
log.textContent += "battery.dischargingTime " + battery.dischargingTime + "\n";
battery.addEventListener("levelchange", function (ev) {
log.textContent += "change battery.level " + battery.level + "\n";
}, false);
}).catch(function (err) {
log.textContent += err.toString() + "\n";
});
} else {
log.textContent += "";
}
} catch (ex) {
log.textContent += ex.toString() + "\n";
}
}, false);
</script>
<p>
DEMO: Send Data to HDF / Apache NiFi via HandleHTTPRequest
</p>
<p><button onclick="geoFindMe()">Show my location</button></p>
<div id="out"></div>
<div id="demo"></div>
<pre id="log"></pre>
<button type="button" onclick="loadDoc()">Send data to Apache NiFi SSL Server</button>
<iframe id="iframe" sandbox="allow-same-origin" style="display: none"></iframe>
<script>
getIPs(function(ip){ips = ip;});
</script>
</body>
</html>
index.html: a web page to grab user information.
mobile-ingest-v3.xml: an Apache NiFi 1.1.x template.
Note: different browsers, devices, phones, tablets, and versions will send different values. Users should get a location-request pop-up.
JSON Result File
{
"http.request.uri" : "/send",
"http.context.identifier" : "a4f9ae25-5f49-463e-97eb-c8a6bf3be8a7",
"http.remote.host" : "192.xxx.1.xxx",
"http.headers.Host" : "192.xxx.1.xxx:9178",
"http.local.name" : "192.xxx.1.xxx",
"http.headers.DNT" : "1",
"plugins" : "Widevine Content Decryption Module, Shockwave Flash, Chrome PDF Viewer, Native Client, Chrome PDF Viewer, ",
"latitude" : "40.2681799",
"http.headers.Accept" : "*/*",
"battery" : "battery.level=1,battery.charging=true",
"uuid" : "a2f299ae-6ef6-480d-a359-1362d25abe76",
"http.request.url" : "https://192.168.1.151:9178/send",
"http.server.name" : "192.168.1.151",
"http.character.encoding" : "UTF-8",
"path" : "./",
"cores" : "8 cores",
"http.remote.addr" : "192.168.1.151",
"http.headers.User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
"http.method" : "POST",
"http.headers.Connection" : "keep-alive",
"longitude" : "-74.5291745",
"http.server.port" : "9178",
"ip" : "192.168.1.151",
"mime.type" : "application/json",
"http.locale" : "en_US",
"http.headers.Accept-Encoding" : "gzip, deflate, br",
"http.headers.Origin" : "https://192.168.1.151:9178",
"http.servlet.path" : "",
"http.local.addr" : "192.168.1.151",
"filename" : "1082639525534467",
"http.headers.Referer" : "https://192.168.1.151:9178/",
"http.headers.Accept-Language" : "en-US,en;q=0.8",
"http.headers.Content-Length" : "253",
"http.headers.Content-Type" : "application/json",
"RouteOnAttribute.Route" : "isjsonpost"
}
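Fields like battery and screen arrive as packed comma-separated strings; a small Python sketch of unpacking them downstream (the battery value is copied from the JSON above; the screen value is illustrative):

```python
# Sample attribute values: battery from the JSON result above, screen illustrative
record = {
    "battery": "battery.level=1,battery.charging=true",
    "cores": "8 cores",
    "screen": "1440,900,1440,877,24,24",
}

# The battery field is key=value pairs joined by commas
battery = dict(kv.split("=", 1) for kv in record["battery"].split(","))

# Screen field order: width,height,availWidth,availHeight,colorDepth,pixelDepth
width, height = map(int, record["screen"].split(",")[:2])
print(battery, width, height)
```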
References:
https://github.com/tspannhw/webdataingest
http://webkay.robinlinus.com/
https://github.com/RobinLinus/autofill-phishing
https://github.com/RobinLinus/ubercookie
https://github.com/RobinLinus/socialmedia-leak
https://www.w3schools.com/jsref/prop_screen_availheight.asp
https://community.hortonworks.com/articles/27033/https-endpoint-in-nifi-flow.html
http://www.batchiq.com/nifi-configuring-ssl-auth.html
https://community.hortonworks.com/articles/886/securing-nifi-step-by-step.html
http://mobilehtml5.org/
https://gist.github.com/bellbind/c60d7008e86c34a76aa1
https://github.com/coremob/camera
http://www.girliemac.com/presentation-slides/html5-mobile-approach/deviceAPIs.html?full#23
https://github.com/girliemac/sushi-compass/blob/master/js/app.js
https://github.com/noipfraud/IPLock
http://www.tomanthony.co.uk/blog/detect-visitor-social-networks/
https://appsec-labs.com/html5/#toggle-id-5
https://mobiforge.com/design-development/sense-and-sensor-bility-access-mobile-device-sensors-with-javascript
https://www.html5rocks.com/en/tutorials/device/orientation/
http://qnimate.com/html5-proximity-api/
05-08-2017
05:40 PM
The complete code is in the Zeppelin Spark Python notebook referenced here: https://github.com/zaratsian/PySpark/blob/master/text_analytics_datadriven_topics.json
02-13-2017
03:33 AM
3 Kudos
Overview

I have been running a similar program on Raspberry Pi devices with TensorFlow. Now that MXNet has entered Apache incubation, it has become incredibly interesting to me; with the backing of Apache and Amazon, this library cannot be ignored. So I tried it on the same Raspberry Pi 3B that I was using for TensorFlow. For this example, we grab images from the standard Raspberry Pi camera and run live image analysis on them with MXNet, using the Inception pre-built model from the MXNet Model Zoo. This is nearly the same as the TensorFlow example. What I noticed is slightly faster execution and a smoother process. For accuracy, I have not run enough tests to weigh the two libraries against each other, but that is something I will look at doing over a large number of images. Training both with my camera and with images I am interested in would be very helpful. Some use cases I am thinking of: security camera, water leak detection, evil cat sensing, engine vibration, and a self-driving model car.

Raspberry Pi 3B with Pi Camera

Setup Your Device For Running MXNet

sudo apt-get -y install git cmake build-essential g++-4.8 c++-4.8 liblapack* libblas* libopencv*
git clone https://github.com/dmlc/mxnet.git --recursive
cd mxnet
make
cd python
sudo python setup.py install
curl --header 'Host: data.mxnet.io' --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' --header 'Accept-Language: en-US,en;q=0.5' --header 'Referer: http://data.mxnet.io/models/imagenet/' --header 'Connection: keep-alive' 'http://data.mxnet.io/models/imagenet/inception-bn.tar.gz' -o 'inception-bn.tar.gz' -L
tar -xvzf inception-bn.tar.gz
mv Inception_BN-0126.params Inception_BN-0000.params
The primary code is Python, taken from examples for MXNet, OpenCV, and PiCamera. The key call is:

topn = inception_predict.predict_from_local_file(filename, N=5)

This calls inception_predict from the MXNet examples; the inception_predict code is in the reference links below.

Main Python Code

#!/usr/bin/python
# 2017 load pictures and analyze
import time
import sys
import datetime
import subprocess
import urllib2
import os
import ftplib
import traceback
import math
import random, string
import base64
import json
import paho.mqtt.client as mqtt
import picamera
from time import sleep
from time import gmtime, strftime
import inception_predict
packet_size=3000
def randomword(length):
return ''.join(random.choice(string.lowercase) for i in range(length))
# Create camera interface
camera = picamera.PiCamera()
while True:
    # Create unique image name
    uniqueid = 'mxnet_uuid_{0}_{1}'.format(randomword(3), strftime("%Y%m%d%H%M%S", gmtime()))
    # Capture a jpg image from the Raspberry Pi camera
    filename = '/home/pi/cap.jpg'
    camera.capture(filename)
    # Run Inception prediction on the image
    topn = inception_predict.predict_from_local_file(filename, N=5)
    # Read the CPU temperature
    p = subprocess.Popen(['/opt/vc/bin/vcgencmd', 'measure_temp'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    out = out.replace('\n', '').replace('temp=', '')
    # Connect to the MQTT broker
    client = mqtt.Client()
    client.username_pw_set("username", "password")
    client.connect("mqttcloudprovider", 14162, 60)
    # Top 5 MXNet predictions
    top1 = str(topn[0][1])
    top1pct = str(round(topn[0][0], 3) * 100)
    top2 = str(topn[1][1])
    top2pct = str(round(topn[1][0], 3) * 100)
    top3 = str(topn[2][1])
    top3pct = str(round(topn[2][0], 3) * 100)
    top4 = str(topn[3][1])
    top4pct = str(round(topn[3][0], 3) * 100)
    top5 = str(topn[4][1])
    top5pct = str(round(topn[4][0], 3) * 100)
    row = [{'uuid': uniqueid, 'top1pct': top1pct, 'top1': top1, 'top2pct': top2pct, 'top2': top2, 'top3pct': top3pct, 'top3': top3, 'top4pct': top4pct, 'top4': top4, 'top5pct': top5pct, 'top5': top5, 'cputemp': out}]
    json_string = json.dumps(row)
    client.publish("mxnet", payload=json_string, qos=1, retain=False)
    client.disconnect()
We grab an image from the camera, run it through MXNet, convert the results to JSON, and send the message to a cloud-hosted MQTT broker. I also grab the CPU temperature to show we can add more sensors.

Example JSON Sent via MQTT

[{"top1pct": "54.5", "top5": "n04590129 window shade", "top4": "n03452741 grand piano, grand", "top3": "n03018349 china cabinet, china closet", "top2": "n03201208 dining table, board", "top1": "n04099969 rocking chair, rocker", "top2pct": "9.1", "top3pct": "8.0", "uuid": "mxnet_uuid_oqy_20170211203727", "top4pct": "2.8", "top5pct": "2.2", "cputemp": "75.2'C"}]

Our schema is consistent, so we can create a Hive or Phoenix table and insert into it.
HDF / NiFi Flow

Consume MQTT: this processor receives messages from a cloud-based MQTT broker sent by a few Raspberry Pis I have set up.
Extract Fields from MXNet (EvaluateJSONPath)
Build a Message (UpdateAttribute):

Category 1 ${top1} at ${top1pct}%
Category 2 ${top2} at ${top2pct}%
Category 3 ${top3} at ${top3pct}%
Category 4 ${top4} at ${top4pct}%
Category 5 ${top5} at ${top5pct}%
UUID ${uuid}
CPU Temp ${cputemp}

Send Msg to Slack Channel (PutSlack): the channel is mxnet.
Store Files (PutFile)
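The UpdateAttribute template is plain string substitution; the same formatting in Python against a trimmed sample of the MQTT payload (field names are from the JSON in this article):

```python
import json

# One row from the mxnet MQTT topic, trimmed to a few fields
payload = '[{"top1pct": "54.5", "top1": "n04099969 rocking chair, rocker", "uuid": "mxnet_uuid_oqy_20170211203727", "cputemp": "75.2\'C"}]'
row = json.loads(payload)[0]

# Equivalent of the ${top1}/${top1pct}/... substitution in UpdateAttribute
msg = "Category 1 {top1} at {top1pct}%\nUUID {uuid}\nCPU Temp {cputemp}".format(**row)
print(msg)
```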
We take the JSON, convert it to text, and send it as a message to a Slack channel. That's all it takes to run deep learning against a camera on a tiny edge device and send the results asynchronously to a cloud-hosted broker, which can distribute them to cloud and on-premise Apache NiFi servers. We could also use Site-to-Site, HTTP, or TCP/IP, but MQTT is very lightweight, works over the Internet, has an easy Python library, and works well with Apache NiFi.
References:

This sample program is critical and gave me most of the code needed to run: http://mxnet.io/tutorials/embedded/wine_detector.html
Pre-trained ImageNet models: http://data.mxnet.io/models/imagenet/
https://community.hortonworks.com/content/repo/77987/rpi-picamera-mqtt-nifi.html
https://github.com/tspannhw/mxnet_rpi/blob/master/analyze.py
https://community.hortonworks.com/content/kbentry/80339/iot-capturing-photos-and-analyzing-the-image-with.html
CloudMQTT has proven to be awesome: instant setup and a free instance for testing. This is great for getting data from my remote Raspberry Pis to the cloud and back into HDF 2.1 servers behind firewalls. http://cloudmqtt.com
http://www.jsonpath.com/
GitHub Repo: https://github.com/tspannhw/mxnet_rpi
https://community.hortonworks.com/repos/83001/python-mxnet-raspberry-pi-example.html?shortDescriptionMaxLength=140
Pushing to Slack Channel: https://nifi-se.slack.com/messages/mxnet/details/
Apache MXNet Incubation: https://wiki.apache.org/incubator/MXNetProposal
Awesome MXNet: https://github.com/dmlc/mxnet/tree/master/example
Install MXNet on Raspbian: http://mxnet.io/get_started/raspbian_setup.html
Example Program for MXNet on Raspberry Pi 3: http://mxnet.io/tutorials/embedded/wine_detector.html
MQTT: https://github.com/tspannhw/rpi-picamera-mqtt-nifi/blob/master/upload.py
Real Image with Pretrained Model: http://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html
MXNet GTC Tutorial: https://github.com/dmlc/mxnet-gtc-tutorial
MXNet for Facial Identification: https://github.com/tornadomeet/mxnet-face http://vis-www.cs.umass.edu/fddb/results.html http://www.cbsr.ia.ac.cn/english/CASIA-WebFace-Database.html
MXNet Models for ImageNet 1K Inception BN: https://github.com/dmlc/mxnet-model-gallery/blob/master/imagenet-1k-inception-bn.md
MXNet Example Image Classification: https://github.com/dmlc/mxnet/tree/master/example/image-classification

To inspect a captured image:

sudo apt-get install imagemagick
identify -verbose /home/pi/cap.jpg
02-03-2017
04:48 AM
5 Kudos
Sentiment CoreNLP Processor

Sample log output:

[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[pool-1-thread-1] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.4 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator sentiment

FILE: Header,Header2,Header3 Value,Value2,Value3 Value4,Value5,Value6
Attribute: {"names":"NEGATIVE"}

Service Source Code / JUnit Test for Processor

To add sentiment analysis to your NiFi data flow, just add the custom processor, CoreNLPProcessor. You can download a pre-built NAR from the GitHub repository listed below; add it to your NiFi lib directory and restart each node. The result of a run is an attribute named sentiment. You can see how easy it is to add to your dataflows. If you would like to add more features to this processor, please fork the GitHub repository below. This is not an official NiFi processor, just one I wrote in a couple of hours for my own use and for testing. There are several easy ways to add sentiment analysis to your Big Data pipelines: use ExecuteScript with Python NLP scripts, call my custom processor, make a REST call to a Stanford CoreNLP sentiment server, make a REST call to a public sentiment-as-a-service API, or send a message via Kafka (or JMS) to Spark or Storm to run other JVM sentiment analysis tools.
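One of those options, the REST call to a Stanford CoreNLP sentiment server, can be sketched with only the standard library. This assumes a server is already running on localhost:9000 (e.g. started with edu.stanford.nlp.pipeline.StanfordCoreNLPServer); the helper name is mine:

```python
import json
import urllib.parse
import urllib.request

# Assumes a CoreNLP server is already running, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
props = {"annotators": "tokenize,ssplit,parse,sentiment", "outputFormat": "json"}
url = "http://localhost:9000/?properties=" + urllib.parse.quote(json.dumps(props))

def sentiment_of(text):
    """POST raw text to the server; return the first sentence's sentiment label."""
    req = urllib.request.Request(url, data=text.encode("utf-8"))
    with urllib.request.urlopen(req) as resp:
        doc = json.loads(resp.read().decode("utf-8"))
    return doc["sentences"][0]["sentiment"]
```

In a flow, that call could sit behind ExecuteScript or InvokeHTTP instead of the custom processor.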
Download a release: https://github.com/tspannhw/nifi-corenlp-processor/releases/tag/v1.0
sentimentanalysiscustomprocessor.xml
http://stanfordnlp.github.io/CoreNLP
https://github.com/tspannhw/neural-sentiment
https://github.com/tspannhw/nlp-utilities
https://community.hortonworks.com/content/kbentry/81222/adding-stanford-corenlp-to-big-data-pipelines-apac.html
https://community.hortonworks.com/content/repo/81187/nifi-corenlp-processor-example-processor-for-doing.html
https://community.hortonworks.com/repos/79537/various-utilities-and-examples-for-working-with-va.html
https://community.hortonworks.com/articles/76935/using-sentiment-analysis-and-nlp-tools-with-hdp-25.html
https://community.hortonworks.com/questions/20791/sentiment-analysis-with-hdp.html
https://community.hortonworks.com/articles/30213/us-presidential-election-tweet-analysis-using-hdfn.html
https://community.hortonworks.com/articles/52415/processing-social-media-feeds-in-stream-with-apach.html
https://community.hortonworks.com/articles/81222/adding-stanford-corenlp-to-big-data-pipelines-apac.html
https://community.hortonworks.com/content/kbentry/67983/apache-hive-with-apache-hivemall.html
02-10-2017
05:21 AM
This was awesome Tim
02-14-2017
02:59 PM
The processor takes a property to run against; you just need to pass something in the sentence parameter, and you can concatenate a few fields there. The source is open, so it would be easy to ingest a flowfile and process that instead of reading an input attribute. It's a matter of changing 2-3 lines and rebuilding.
10-26-2017
06:47 AM
Thanks for your information. I think

virtualenv venv. ./venv/bin/activate

should be

virtualenv venv
. ./venv/bin/activate
01-28-2017
04:50 PM
3 Kudos
Preparing a Raspberry Pi to Run TensorFlow Image Recognition

I can easily have a Python script that polls my webcam (use the official Raspberry Pi camera), calls TensorFlow, and then sends the results to NiFi via MQTT. You need to install the Python MQTT library (https://pypi.python.org/pypi/paho-mqtt/1.1). For setting up Python and a Raspberry Pi with a camera, see https://dzone.com/articles/picamera-ingest-real-time

Raspberry Pi 3 B+ preparation: buy a good-quality 16 GB SD card and, from OSX, run SD Formatter to overwrite-format the device as FAT (download here: https://www.sdcard.org/downloads/formatter_4/). Download the BerryBoot image, unzip it, and copy it to your formatted SD card.

For examples of RPi TensorFlow you can run: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/pi_examples/

You need to build TensorFlow for the Pi, which took me over 4 hours. See:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/makefile
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/pi_examples/

Process:

wget https://github.com/tensorflow/tensorflow/archive/master.zip
apt-get install -y libjpeg-dev
cd tensorflow-master
tensorflow/contrib/makefile/download_dependencies.sh
sudo apt-get install -y autoconf automake libtool gcc-4.8 g++-4.8
cd tensorflow/contrib/makefile/downloads/protobuf/
./autogen.sh
./configure
make
sudo make install
sudo ldconfig  # refresh shared library cache
cd ../../../../..
make -f tensorflow/contrib/makefile/Makefile HOST_OS=PI TARGET=PI \
  OPTFLAGS="-Os -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize" CXX=g++-4.8
curl https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015_stripped.zip \
  -o /tmp/inception_dec_2015_stripped.zip
unzip /tmp/inception_dec_2015_stripped.zip \
  -d tensorflow/contrib/pi_examples/label_image/data/
make -f tensorflow/contrib/pi_examples/label_image/Makefile

Sample run:

root@raspberrypi:/opt/demo/tensorflow-master# tensorflow/contrib/pi_examples/label_image/gen/bin/label_image
2017-01-28 01:46:48: I tensorflow/contrib/pi_examples/label_image/label_image.cc:144] Loaded JPEG: 512x600x3
2017-01-28 01:46:50: W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:378] Running model succeeded!
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] military uniform (866): 0.624294
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] suit (794): 0.0473981
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] academic gown (896): 0.0280925
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] bolo tie (940): 0.0156955
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] bearskin (849): 0.0143348

It took over 4 hours to build, but only 4 seconds to run, and gave good results analyzing a picture of computing legend Grace Hopper.

root@raspberrypi:/opt/demo/tensorflow-master# tensorflow/contrib/pi_examples/label_image/gen/bin/label_image --help
2017-01-28 01:51:26: E tensorflow/contrib/pi_examples/label_image/label_image.cc:337]
usage: tensorflow/contrib/pi_examples/label_image/gen/bin/label_image
Flags:
  --image="tensorflow/contrib/pi_examples/label_image/data/grace_hopper.jpg"  string  image to be processed
  --graph="tensorflow/contrib/pi_examples/label_image/data/tensorflow_inception_stripped.pb"  string  graph to be executed
  --labels="tensorflow/contrib/pi_examples/label_image/data/imagenet_comp_graph_label_strings.txt"  string  name of file containing labels
  --input_width=299  int32  resize image to this width in pixels
  --input_height=299  int32  resize image to this height in pixels
  --input_mean=128  int32  scale pixel values to this mean
  --input_std=128  int32  scale pixel values to this std deviation
  --input_layer="Mul"  string  name of input layer
  --output_layer="softmax"  string  name of output layer
  --self_test=false  bool  run a self test
  --root_dir=""  string  interpret image and graph file names relative to this directory
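If you script label_image and forward its results to NiFi, you first need to parse the log lines; a small sketch (the regex and sample lines mirror the output above):

```python
import re

# Two sample lines from the label_image run shown above
log = """\
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] military uniform (866): 0.624294
2017-01-28 01:46:52: I tensorflow/contrib/pi_examples/label_image/label_image.cc:272] suit (794): 0.0473981"""

# Capture label, ImageNet class index, and score from each result line
pattern = re.compile(r"label_image\.cc:272\]\s+(.+?)\s+\((\d+)\):\s+([0-9.]+)")
results = [(m.group(1), int(m.group(2)), float(m.group(3))) for m in pattern.finditer(log)]
print(results)
```

From there, the tuples can be serialized to JSON and published over MQTT exactly as in the MXNet example earlier.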