1973 Posts · 1225 Kudos Received · 124 Solutions
10-30-2018 07:57 PM
https://community.hortonworks.com/articles/92495/monitor-apache-nifi-with-apache-nifi.html
https://community.hortonworks.com/questions/157929/how-to-enable-apache-nifi-metrics-for-jmx.html
https://community.hortonworks.com/articles/224554/building-a-custom-apache-nifi-operations-dashboard.html
10-24-2018 04:06 PM
https://community.hortonworks.com/articles/101679/iot-ingesting-gps-data-from-raspberry-pi-zero-wire.html
10-24-2018 04:06 PM
1 Kudo
If you search HCC, you can find my articles on using Python with GPS to ingest data with MiNiFi.
10-18-2018 09:27 PM
2 Kudos
Simple Apache NiFi Operations Dashboard - Part 2
Part 1: https://community.cloudera.com/t5/Community-Articles/Building-a-Custom-Apache-NiFi-Operations-Dashboard-Part-1/ta-p/249060
To access data to display in our dashboard, we use some Spring Boot 2.0.6 / Java 8 microservices to call Apache Hive 3.1.0 tables in HDP 3.0 on Hadoop 3.1. We host our web site and make REST calls to Apache NiFi, our microservices, YARN, and other APIs. As you can see, we can easily incorporate data from HDP 3 (Apache Hive 3.1.0) into Spring Boot Java applications without much trouble. You can see the Maven build script below (all code is in GitHub).
Our motivation is to put all this data somewhere and show it in a dashboard that can use REST APIs for data access and updates. We may choose to use Apache NiFi for all the REST APIs, or implement only some of them there; we are still exploring. We may also change the backend to HBase 2.0, Phoenix, Druid, or a combination. We will see.
Spring Boot 2.0.6 Loading JSON Output
Spring Boot Microservices and UI: https://github.com/tspannhw/operations-dashboard
To start, I have a simple web page that calls one of the REST APIs. The microservice can be run on YARN 3.1, Kubernetes, Cloud Foundry, OpenShift, or any machine that can run a simple Java 8 jar. We can have this HTML as part of a larger dashboard or host it anywhere.
For Parsing the Monitoring Data
We have schemas for Metrics, Status, and Bulletins. Now that the monitoring data is in Apache Hive, I can query it with ease in Apache Zeppelin (or any JDBC/ODBC tool).
Apache Zeppelin Screens
We have a lot of reporting tasks for monitoring NiFi. Since we read from NiFi and send to NiFi, it would be nice to have a dedicated reporting cluster.
Just Show Me Bulletins for MonitorMemory (you can see that in Reporting Tasks)
NiFi Query To Limit Which Bulletins We Are Storing in Hive (for now, just grab errors)
Spring Boot Code for REST APIs
Metrics REST API Results
Bulletin REST API Results
Metrics Home Page
Run The Microservice
java -Xms512m -Xmx2048m -Dhdp.version=3.0.0 -Djava.net.preferIPv4Stack=true -jar target/operations-0.0.1-SNAPSHOT.jar
Maven POM
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.dataflowdeveloper</groupId>
<artifactId>operations</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>operations</name>
<description>Apache Hive Operations Spring Boot</description>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.5.RELEASE</version>
<relativePath/>
</parent>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>3.1.0</version>
<exclusions>
<exclusion>
<groupId>org.eclipse.jetty.aggregate</groupId>
<artifactId>*</artifactId>
</exclusion>
<exclusion>
<artifactId>servlet-api</artifactId>
<groupId>javax.servlet</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.restdocs</groupId>
<artifactId>spring-restdocs-mockmvc</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-releases</id>
<url>https://repo.spring.io/libs-release</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>spring-releases</id>
<url>https://repo.spring.io/libs-release</url>
</pluginRepository>
</pluginRepositories>
</project>
With some help from the Internet, we have a simple JavaScript snippet to read the Spring Boot /metrics REST API and fill in some values.
HTML and JavaScript (see src/main/resources/static/index.html)
<h1>Metrics</h1>
<div id="output" name="output" style="align: center; overflow:auto; height:400px; width:800px" class="white-frame">
<ul id="metrics"></ul>
</div>
<script language="javascript">
var myList = document.querySelector('ul');
var myRequest = new Request('./metrics/');
fetch(myRequest).then(function (response) {
  return response.json();
}).then(function (data) {
  for (var i = 0; i < data.length; i++) {
    var listItem = document.createElement('li');
    listItem.innerHTML = '<strong>Timestamp: ' + data[i].timestamp + '</strong>' +
      ' Flow Files Received: ' + data[i].flowfilesreceivedlast5minutes +
      ' JVM Heap Usage: ' + data[i].jvmheap_usage +
      ' Threads Waiting: ' + data[i].jvmthread_statestimed_waiting +
      ' Thread Count: ' + data[i].jvmthread_count +
      ' Total Task Duration: ' + data[i].totaltaskdurationnanoseconds +
      ' Bytes Read Last 5 min: ' + data[i].bytesreadlast5minutes +
      ' Flow Files Queued: ' + data[i].flowfilesqueued +
      ' Bytes Queued: ' + data[i].bytesqueued;
    myList.appendChild(listItem);
  }
});
</script>
Resources
https://github.com/tspannhw/operations-dashboard
https://community.hortonworks.com/articles/177256/spring-boot-20-on-acid-integrating-rest-microservi.html
https://community.hortonworks.com/articles/207858/more-devops-for-hdf-apache-nifi-and-friends.html
https://pierrevillard.com/2017/05/16/monitoring-nifi-ambari-grafana/
Example API Calls to Spring Boot
http://localhost:8090/status/Update
http://localhost:8090/bulletin/error
http://localhost:8090/metrics/
TODO: We will add more calls directly to the REST APIs of Apache NiFi clusters for display in our dashboard.
NiFi REST APIs of Interest
/nifi-api/flow/process-groups/root/status
/nifi-api/resources
/nifi-api/flow/cluster/summary
/nifi-api/flow/process-groups/root
/nifi-api/site-to-site
/nifi-api/flow/bulletin-board
/nifi-api/flow/history?offset=1&count=100
/nifi-api/flow/search-results?q=NiFi+Operations
/nifi-api/flow/status
/nifi-api/flow/process-groups/root/controller-services
/nifi-api/system-diagnostics
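If you want to exercise these endpoints outside the browser, here is a minimal Python sketch using the requests library. It assumes the microservice is running on localhost:8090 as in the example calls above, and it reuses the field names from the JavaScript snippet.
# Minimal sketch: poll the Spring Boot /metrics/ endpoint and print a few fields.
# Assumes the microservice is running on localhost:8090 as in the example calls above.
import requests

resp = requests.get("http://localhost:8090/metrics/", timeout=10)
resp.raise_for_status()

for row in resp.json():
    # Field names follow the JavaScript snippet above.
    print(row.get("timestamp"),
          "flowfiles queued:", row.get("flowfilesqueued"),
          "bytes queued:", row.get("bytesqueued"),
          "jvm heap usage:", row.get("jvmheap_usage"))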
10-18-2018 08:38 PM
4 Kudos
Simple Apache NiFi Operations Dashboard
This is an evolving work in progress; please get involved, everything is open source. @milind pandit and I are working on a project to build something useful for teams: a rich at-a-glance dashboard to analyze their flows and current cluster state, and to start and stop flows.
There's a lot of data provided by Apache NiFi and related tools to aggregate, sort, categorize, search and eventually do machine learning analytics on.
There are a lot of tools that come out of the box that solve parts of these problems. Ambari Metrics, Grafana and Log Search provide a ton of data and analysis abilities. You can find all your errors easily in Log Search and see nice graphs of what is going on in Ambari Metrics and Grafana.
What is cool with Apache NiFi is that it has Site-to-Site tasks for sending all the provenance, analytics, metrics, and operational data you need to wherever you want it. That includes to Apache NiFi itself! This is Monitoring Driven Development (MDD).
Monitoring Driven Development (MDD)
MDD - https://pierrevillard.com/2018/08/29/monitoring-driven-development-with-nifi-1-7/
In this little proof-of-concept, we grab some of these feeds, process them in Apache NiFi, and then store them in Apache Hive 3 tables for analytics. We should probably push the data to HBase for aggregates and Druid for time series. We will see as this expands.
There are also other data access options including the NiFi REST API and the NiFi Python APIs.
Bootstrap Notifier
Sends a notification when NiFi starts, stops, or dies unexpectedly.
Two notification services ship out of the box:
Email notification service
HTTP notification service
It’s easy to write a custom notification service.
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#notification_services
Reporting Tasks
AmbariReportingTask (global, per process group)
MonitorDiskUsage (FlowFile, content, and provenance repositories)
MonitorMemory
MonitorActivity
See:
https://nipyapi.readthedocs.io/en/latest/readme.html
https://community.hortonworks.com/articles/177301/big-data-devops-apache-nifi-flow-versioning-and-au.html
These are especially useful for doing things like purging connections.
Purge it!
nipyapi.canvas.purge_connection(con_id)
nipyapi.canvas.purge_process_group(process_group, stop=False)
nipyapi.canvas.delete_process_group(process_group, force=True, refresh=True)
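As a concrete, hedged example, here is a small NiPyAPI script that connects to a NiFi instance and empties every queue in one process group; the NiFi API URL and the process group name are placeholders.
# Sketch: connect to NiFi and purge all connections in one process group.
# The NiFi API URL and the process group name below are placeholders.
import nipyapi

nipyapi.config.nifi_config.host = "http://localhost:8080/nifi-api"

# Look the group up by name, stop it, and purge all of its connections.
pg = nipyapi.canvas.get_process_group("Monitoring")
nipyapi.canvas.purge_process_group(pg, stop=True)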
Use Cases
Example Metrics Data
[ {
"appid" : "nifi",
"instanceid" : "7c84501d-d10c-407c-b9f3-1d80e38fe36a",
"hostname" : "#.#.hortonworks.com",
"timestamp" : 1539411679652,
"loadAverage1min" : 0.93,
"availableCores" : 16,
"FlowFilesReceivedLast5Minutes" : 14,
"BytesReceivedLast5Minutes" : 343779,
"FlowFilesSentLast5Minutes" : 0,
"BytesSentLast5Minutes" : 0,
"FlowFilesQueued" : 59952,
"BytesQueued" : 294693938,
"BytesReadLast5Minutes" : 241681,
"BytesWrittenLast5Minutes" : 398753,
"ActiveThreads" : 2,
"TotalTaskDurationSeconds" : 273,
"TotalTaskDurationNanoSeconds" : 273242860763,
"jvmuptime" : 224997,
"jvmheap_used" : 5.15272616E8,
"jvmheap_usage" : 0.9597700387239456,
"jvmnon_heap_usage" : -5.1572632E8,
"jvmthread_statesrunnable" : 11,
"jvmthread_statesblocked" : 2,
"jvmthread_statestimed_waiting" : 26,
"jvmthread_statesterminated" : 0,
"jvmthread_count" : 242,
"jvmdaemon_thread_count" : 125,
"jvmfile_descriptor_usage" : 0.0709,
"jvmgcruns" : null,
"jvmgctime" : null
} ]
Example Status Data
{
"statusId" : "a63818fe-dbd2-44b8-af53-eaa27fd9ef05",
"timestampMillis" : "2018-10-18T20:54:38.218Z",
"timestamp" : "2018-10-18T20:54:38.218Z",
"actorHostname" : "#.#.hortonworks.com",
"componentType" : "RootProcessGroup",
"componentName" : "NiFi Flow",
"parentId" : null,
"platform" : "nifi",
"application" : "NiFi Flow",
"componentId" : "7c84501d-d10c-407c-b9f3-1d80e38fe36a",
"activeThreadCount" : 1,
"flowFilesReceived" : 1,
"flowFilesSent" : 0,
"bytesReceived" : 1661,
"bytesSent" : 0,
"queuedCount" : 18,
"bytesRead" : 0,
"bytesWritten" : 1661,
"bytesTransferred" : 16610,
"flowFilesTransferred" : 10,
"inputContentSize" : 0,
"outputContentSize" : 0,
"queuedContentSize" : 623564,
"activeRemotePortCount" : null,
"inactiveRemotePortCount" : null,
"receivedContentSize" : null,
"receivedCount" : null,
"sentContentSize" : null,
"sentCount" : null,
"averageLineageDuration" : null,
"inputBytes" : null,
"inputCount" : 0,
"outputBytes" : null,
"outputCount" : 0,
"sourceId" : null,
"sourceName" : null,
"destinationId" : null,
"destinationName" : null,
"maxQueuedBytes" : null,
"maxQueuedCount" : null,
"queuedBytes" : null,
"backPressureBytesThreshold" : null,
"backPressureObjectThreshold" : null,
"isBackPressureEnabled" : null,
"processorType" : null,
"averageLineageDurationMS" : null,
"flowFilesRemoved" : null,
"invocations" : null,
"processingNanos" : null
}
Example Failure Data
[ {
"objectId" : "34c3249c-4a42-41ce-b94e-3563409ad55b",
"platform" : "nifi",
"project" : null,
"bulletinId" : 28321,
"bulletinCategory" : "Log Message",
"bulletinGroupId" : "0b69ea51-7afb-32dd-a7f4-d82b936b37f9",
"bulletinGroupName" : "Monitoring",
"bulletinLevel" : "ERROR",
"bulletinMessage" : "QueryRecord[id=d0258284-69ae-34f6-97df-fa5c82402ef3] Unable to query StandardFlowFileRecord[uuid=cd305393-f55a-40f7-8839-876d35a2ace1,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1539633295746-10, container=default, section=10], offset=95914, length=322846],offset=0,name=783936865185030,size=322846] due to Failed to read next record in stream for StandardFlowFileRecord[uuid=cd305393-f55a-40f7-8839-876d35a2ace1,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1539633295746-10, container=default, section=10], offset=95914, length=322846],offset=0,name=783936865185030,size=322846] due to -40: org.apache.nifi.processor.exception.ProcessException: Failed to read next record in stream for StandardFlowFileRecord[uuid=cd305393-f55a-40f7-8839-876d35a2ace1,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1539633295746-10, container=default, section=10], offset=95914, length=322846],offset=0,name=783936865185030,size=322846] due to -40",
"bulletinNodeAddress" : null,
"bulletinNodeId" : "91ab706b-5d92-454e-bc7a-6911d155fdca",
"bulletinSourceId" : "d0258284-69ae-34f6-97df-fa5c82402ef3",
"bulletinSourceName" : "QueryRecord",
"bulletinSourceType" : "PROCESSOR",
"bulletinTimestamp" : "2018-10-18T20:54:39.179Z"
} ]
Apache Hive 3 Tables
CREATE EXTERNAL TABLE IF NOT EXISTS failure (statusId STRING, timestampMillis BIGINT, `timestamp` STRING, actorHostname STRING, componentType STRING, componentName STRING, parentId STRING, platform STRING, `application` STRING, componentId STRING, activeThreadCount BIGINT, flowFilesReceived BIGINT, flowFilesSent BIGINT, bytesReceived BIGINT, bytesSent BIGINT, queuedCount BIGINT, bytesRead BIGINT, bytesWritten BIGINT, bytesTransferred BIGINT, flowFilesTransferred BIGINT, inputContentSize BIGINT, outputContentSize BIGINT, queuedContentSize BIGINT, activeRemotePortCount BIGINT, inactiveRemotePortCount BIGINT, receivedContentSize BIGINT, receivedCount BIGINT, sentContentSize BIGINT, sentCount BIGINT, averageLineageDuration BIGINT, inputBytes BIGINT, inputCount BIGINT, outputBytes BIGINT, outputCount BIGINT, sourceId STRING, sourceName STRING, destinationId STRING, destinationName STRING, maxQueuedBytes BIGINT, maxQueuedCount BIGINT, queuedBytes BIGINT, backPressureBytesThreshold BIGINT, backPressureObjectThreshold BIGINT, isBackPressureEnabled STRING, processorType STRING, averageLineageDurationMS BIGINT, flowFilesRemoved BIGINT, invocations BIGINT, processingNanos BIGINT) STORED AS ORC
LOCATION '/failure';
CREATE EXTERNAL TABLE IF NOT EXISTS bulletin (objectId STRING, platform STRING, project STRING, bulletinId BIGINT, bulletinCategory STRING, bulletinGroupId STRING, bulletinGroupName STRING, bulletinLevel STRING, bulletinMessage STRING, bulletinNodeAddress STRING, bulletinNodeId STRING, bulletinSourceId STRING, bulletinSourceName STRING, bulletinSourceType STRING, bulletinTimestamp STRING) STORED AS ORC
LOCATION '/error';
CREATE EXTERNAL TABLE IF NOT EXISTS memory (objectId STRING, platform STRING, project STRING, bulletinId BIGINT, bulletinCategory STRING, bulletinGroupId STRING, bulletinGroupName STRING, bulletinLevel STRING, bulletinMessage STRING, bulletinNodeAddress STRING, bulletinNodeId STRING, bulletinSourceId STRING, bulletinSourceName STRING, bulletinSourceType STRING, bulletinTimestamp STRING) STORED AS ORC
LOCATION '/memory'
;
-- backpressure
CREATE EXTERNAL TABLE IF NOT EXISTS status (statusId STRING, timestampMillis BIGINT, `timestamp` STRING, actorHostname STRING, componentType STRING, componentName STRING, parentId STRING, platform STRING, `application` STRING, componentId STRING, activeThreadCount BIGINT, flowFilesReceived BIGINT, flowFilesSent BIGINT, bytesReceived BIGINT, bytesSent BIGINT, queuedCount BIGINT, bytesRead BIGINT, bytesWritten BIGINT, bytesTransferred BIGINT, flowFilesTransferred BIGINT, inputContentSize BIGINT, outputContentSize BIGINT, queuedContentSize BIGINT, activeRemotePortCount BIGINT, inactiveRemotePortCount BIGINT, receivedContentSize BIGINT, receivedCount BIGINT, sentContentSize BIGINT, sentCount BIGINT, averageLineageDuration BIGINT, inputBytes BIGINT, inputCount BIGINT, outputBytes BIGINT, outputCount BIGINT, sourceId STRING, sourceName STRING, destinationId STRING, destinationName STRING, maxQueuedBytes BIGINT, maxQueuedCount BIGINT, queuedBytes BIGINT, backPressureBytesThreshold BIGINT, backPressureObjectThreshold BIGINT, isBackPressureEnabled STRING, processorType STRING, averageLineageDurationMS BIGINT, flowFilesRemoved BIGINT, invocations BIGINT, processingNanos BIGINT) STORED AS ORC
LOCATION '/status';
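Once these tables exist, any JDBC/ODBC client can query them. As a hedged example, here is how recent error bulletins might be pulled from Python with the PyHive client; the host, port, and user below are assumptions for your HiveServer2 setup.
# Sketch: query the bulletin table built above via HiveServer2.
# Host, port, and user are placeholders for your HDP 3 cluster.
from pyhive import hive

conn = hive.Connection(host="hiveserver2.example.com", port=10000, username="hive")
cursor = conn.cursor()
cursor.execute(
    "SELECT bulletintimestamp, bulletinsourcename, bulletinmessage "
    "FROM bulletin WHERE bulletinlevel = 'ERROR' "
    "ORDER BY bulletintimestamp DESC LIMIT 10"
)
for ts, source, message in cursor.fetchall():
    print(ts, source, message[:120])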
10-18-2018 08:28 PM
2 Kudos
As of today, there is only support for JDK 8 and JDK 9: https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.7.1 JDK 11 support is coming, but not today. It will not work yet.
10-18-2018 08:28 PM
1 Kudo
You need to use JDK 8.
10-15-2018 06:30 PM
2 Kudos
It's always safest to use the version that is part of the official HDF release, for support purposes: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/release-notes/content/component_support.html That is NiFi Registry 0.2.0. Version 0.3.0 will be in the next release, HDF 3.3, which will be out in a reasonable time frame. See these notes for your own purposes: https://cwiki.apache.org/confluence/display/NIFIREG/Migration+Guidance https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-NiFiRegistry0.3.0 There is a Ranger update and some security updates. If you are a user of the open source tool suite, feel free to upgrade; I would recommend trying it on a dev cluster first and testing for a few weeks.
10-12-2018 06:11 PM
8 Kudos
Running TensorFlow on YARN 3.1 with or without GPU
You have the option to run with or without Docker containers. If you are not using Docker containers, you will need CUDA, TensorFlow, and all your data science libraries installed on the host.
See: https://community.hortonworks.com/articles/222242/running-apache-mxnet-deep-learning-on-yarn-31-hdp.html
Tips from Wangda
Basically, GPU on YARN gives you isolation of the GPU devices. Say a node has 4 GPUs. The first task comes and asks for 1 GPU (yarn.io/gpu=1), and the YARN NodeManager gives the task GPU0. Then the second task comes and asks for 2 GPUs, and the YARN NodeManager gives it GPU1/GPU2. So from the TF perspective, you don't need to specify which GPUs to use: TF will automatically detect and consume whatever is available to the job. In this case, task 2 cannot see any GPUs other than GPU1/GPU2.
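A quick way to verify this isolation from inside a container is to list the devices TensorFlow can see. This is a sketch using the TensorFlow 1.x device_lib API, which applies to the releases current at the time of writing.
# Sketch: list the devices TensorFlow can see from inside a YARN container.
# With yarn.io/gpu isolation, only the GPUs granted to this task show up.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type, device.memory_limit)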
If you wish to run Apache MXNet deep learning programs, see this article: https://community.hortonworks.com/articles/222242/running-apache-mxnet-deep-learning-on-yarn-31-hdp.html
Installation
Install CUDA and NVIDIA libraries if you have NVIDIA cards.
Install Python 3.x
Install Docker
Install PIP
sudo yum groupinstall 'Development Tools' -y
sudo yum install cmake git pkgconfig -y
sudo yum install libpng-devel libjpeg-turbo-devel jasper-devel openexr-devel libtiff-devel libwebp-devel -y
sudo yum install libdc1394-devel libv4l-devel gstreamer-plugins-base-devel -y
sudo yum install gtk2-devel -y
sudo yum install tbb-devel eigen3-devel -y
pip3.6 install --upgrade pip
pip3.6 install tensorflow
pip3.6 install numpy -U
pip3.6 install scikit-learn -U
pip3.6 install opencv-python -U
pip3.6 install keras
pip3.6 install hdfs
git clone https://github.com/tensorflow/models/
You can see a docker example: https://github.com/hortonworks/hdp-assemblies/blob/master/tensorflow/markdown/Dockerfile.md
https://github.com/hortonworks/hdp-assemblies/blob/master/tensorflow/markdown/TensorflowOnYarnTutorial.md
Run Command for an Example Classification
yarn jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -shell_command python3.6 -shell_args "/opt/demo/DWS-DeepLearning-CrashCourse/tf.py /opt/demo/images/photo1.jpg" -container_resources memory-mb=512,vcores=1
Without Docker
-container_resources memory-mb=3072,vcores=1,yarn.io/gpu=2
With Docker (Enable it first: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/dosg_enable_gpu_for_docker_ambari_cluster.html)
-shell_env YARN_CONTAINER_RUNTIME_TYPE=docker \
-shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=<docker-image-name> \
Running a More Complex Training Job
https://github.com/hortonworks/hdp-assemblies/blob/master/tensorflow/markdown/RunTensorflowJobUsingNativeServiceSpec.md
This is the main example: https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator
yarn jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar -shell_command python3.6 -shell_args "/opt/demo/models/tutorials/image/cifar10_estimator/cifar10_main.py --data-dir=hdfs://default/tmp/cifar-10-data --job-dir=hdfs://default/tmp/cifar-10-jobdir --train-steps=10000 --eval-batch-size=16 --train-batch-size=16 --sync --num-gpus=0" -container_resources memory-mb=512,vcores=1
Example Output
[hdfs@princeton0 DWS-DeepLearning-CrashCourse]$ python3.6 tf.py
2018-10-15 02:37:23.892791: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
2018-10-15 02:37:24.181707: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
273 racer, race car, racing car 37.46013343334198%
274 sports car, sport car 25.35209059715271%
267 cab, hack, taxi, taxicab 11.118262261152267%
268 convertible 9.854312241077423%
271 minivan 3.2295159995555878%
Output Written to HDFS
hdfs dfs -ls /tfyarn
Found 1 items
-rw-r--r-- 3 root hdfs 457 2018-10-15 02:35 /tfyarn/tf_uuid_img_20181015023542.json
hdfs dfs -cat /tfyarn/tf_uuid_img_20181015023542.json
{"node_id273": "273", "humanstr273": "racer, race car, racing car", "score273": "37.46013343334198", "node_id274": "274", "humanstr274": "sports car, sport car", "score274": "25.35209059715271", "node_id267": "267", "humanstr267": "cab, hack, taxi, taxicab", "score267": "11.118262261152267", "node_id268": "268", "humanstr268": "convertible", "score268": "9.854312241077423", "node_id271": "271", "humanstr271": "minivan", "score271": "3.2295159995555878"}
Full Source Code
https://github.com/tspannhw/TensorflowOnYARN
Resources
https://www.tensorflow.org/
https://github.com/tspannhw/ApacheDeepLearning101/blob/master/yarn.sh
https://github.com/hortonworks/hdp-assemblies/
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/configuring_gpu_scheduling_and_isolation.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/dosg_enable_gpu_for_docker_ambari_cluster.html
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/data-operating-system/content/dosg_recommendations_for_running_docker_containers_on_yarn.html
https://feathercast.apache.org/2018/10/02/deep-learning-on-yarn-running-distributed-tensorflow-mxnet-caffe-xgboost-on-hadoop-clusters-wangda-tan/
https://github.com/deep-diver/CIFAR10-img-classification-tensorflow
https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/RunningDistributedCifar10TFJobs.html
https://conferences.oreilly.com/strata/strata-ny-2018/public/schedule/detail/68289
https://github.com/tspannhw/ApacheDeepLearning101/blob/master/analyzehdfs.py
https://github.com/open-source-for-science/TensorFlow-Course
https://github.com/hortonworks/hdp-assemblies/blob/master/tensorflow/markdown/RunTensorflowJobUsingHelperScript.md
Documentation
https://hadoop.apache.org/docs/r3.1.0/
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/data-operating-system/content/options_distributed_shell_gpu.html
Coming Soon
https://github.com/leftnoteasy/hadoop-1/tree/submarine/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
https://conferences.oreilly.com/strata/strata-ny/public/schedule/detail/68289
https://github.com/leftnoteasy/hadoop-1/blob/submarine/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/site/QuickStart.md
10-05-2018 05:51 PM
2 Kudos
Posting Images with Apache NiFi 1.7 and a Custom Processor
I have been using a shell script for this, since Apache NiFi did not have a good way to natively post an image to HTTP servers such as the model server for Apache MXNet. So I wrote a quick and dirty processor that posts an image there, gathers the headers, result body, status text, and status code, and returns them to you as attributes. In this example I am downloading images from the picsum.photos free photo service.
To use this new processor, download it to your lib directory and restart Apache NiFi; then you can add the PostImageProcessor.
Eclipse For Building My Processor
Configure the Post Image Processor with your URL, field name, image name, and image type.
MXNet Model Server Results
The Attribute Results From the Data
Example Results
post.header
{Server=[Werkzeug/0.14.1 Python/3.6.6], Access-Control-Allow-Origin=[*], Content-Length=[396], Date=[Fri, 05 Oct 2018 17:47:22 GMT], Content-Type=[application/json]}
post.results
{"prediction":[[{"probability":0.24173378944396973,"class":"n02281406 sulphur butterfly, sulfur butterfly"},{"probability":0.19173663854599,"class":"n02190166 fly"},{"probability":0.052654966711997986,"class":"n02280649 cabbage butterfly"},{"probability":0.05147545784711838,"class":"n03485794 handkerchief, hankie, hanky, hankey"},{"probability":0.048753462731838226,"class":"n02834397 bib"}]]}
post.status
OK
post.statuscode
200
Results from HTTP Posting an Image to MXNet Model Server
[INFO 2018-10-05 13:47:22,217 PID:88561 /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:467] Request input: data should be image with jpeg format.
[INFO 2018-10-05 13:47:22,218 PID:88561 /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:get_file_data:137] Getting file data from request.
[INFO 2018-10-05 13:47:22,262 PID:88561 /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mms/serving_frontend.py:predict_callback:510] Response is text.
[INFO 2018-10-05 13:47:22,262 PID:88561 /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mms/request_handler/flask_handler.py:jsonify:159] Jsonifying the response: {'prediction': [[{'probability': 0.24173378944396973, 'class': 'n02281406 sulphur butterfly, sulfur butterfly'}, {'probability': 0.19173663854599, 'class': 'n02190166 fly'}, {'probability': 0.052654966711997986, 'class': 'n02280649 cabbage butterfly'}, {'probability': 0.05147545784711838, 'class': 'n03485794 handkerchief, hankie, hanky, hankey'}, {'probability': 0.048753462731838226, 'class': 'n02834397 bib'}]]}
[INFO 2018-10-05 13:47:22,263 PID:88561 /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/werkzeug/_internal.py:_log:88] 127.0.0.1 - - [05/Oct/2018 13:47:22] "POST /squeezenet/predict HTTP/1.1" 200 -
Example HTTP Server
https://github.com/awslabs/mxnet-model-server
Source Code For Processor
https://github.com/tspannhw/nifi-postimage-processor
Pre-Built NAR To Install
https://github.com/tspannhw/nifi-postimage-processor/releases/tag/1.0
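To test the same endpoint without NiFi, here is a hedged Python equivalent of what the PostImageProcessor does: a multipart POST with the image in the data field, which is what the model server log above expects. The URL and file name are placeholders.
# Sketch: post an image to the MXNet Model Server squeezenet endpoint,
# mirroring what the PostImageProcessor does. URL and file are placeholders.
import requests

url = "http://localhost:8080/squeezenet/predict"
with open("photo1.jpg", "rb") as image_file:
    response = requests.post(url, files={"data": image_file})

print("post.status:", response.reason)           # e.g. OK
print("post.statuscode:", response.status_code)  # e.g. 200
print("post.results:", response.json())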