1973 Posts
1225 Kudos Received
124 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1913 | 04-03-2024 06:39 AM |
| | 3010 | 01-12-2024 08:19 AM |
| | 1642 | 12-07-2023 01:49 PM |
| | 2419 | 08-02-2023 07:30 AM |
| | 3358 | 03-29-2023 01:22 PM |
03-02-2018
11:42 PM
Which version of TensorFlow needs to be installed? Can you link to the installation documentation?
03-02-2018
11:36 PM
3 Kudos
It will be supported soon. Come to DataWorks Summit (DWS); I am also talking about Apache MXNet on HDP there.
03-02-2018
06:26 PM
Great content. I think this should be published with the full text and images here in HCC.
Using WebSockets With Apache NiFi
This simple WebSockets server and client is the "Hello World" of WebSockets: an echo service. Whatever the client sends, we send back. https://dzone.com/articles/using-websockets-with-apache-nifi?_lrsc=30fa43b3-bd3f-439d-9629-60b694497b53
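The echo pattern the article demonstrates can be sketched without NiFi at all. Here is a minimal stand-in using only the Python standard library; it uses plain TCP rather than real WebSocket framing, so the function names are illustrative and not NiFi's (or any WebSocket library's) API.

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """Send every received chunk straight back to the client."""
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

def start_echo_server(host="127.0.0.1"):
    """Start an echo server on an ephemeral port in a daemon thread.

    Returns (server, port); call server.shutdown() when done.
    """
    server = socketserver.ThreadingTCPServer((host, 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def echo_once(port, payload):
    """Connect, send a payload, and return whatever comes back."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(payload)
        return conn.recv(1024)
```

In the NiFi flow, the WebSocket controller services play this role; the sketch only shows the echo contract itself: whatever arrives goes straight back.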
03-02-2018
05:32 PM
3 Kudos
This is for people preparing to attend my talk on Deep Learning at DataWorks Summit Berlin 2018 (https://dataworkssummit.com/berlin-2018/#agenda) on Thursday, April 19, 2018 at 11:50 AM Berlin time.

Another way to work with Apache MXNet is to use an Apache Zeppelin notebook to run your Python deep learning scripts.

Apache Zeppelin Notebook

As you can see, we can format the data as a table using Apache Zeppelin's display system. Use this print statement:

```python
print("%table top1pct\ttop1\ttop2pct\ttop2\ttop3pct\ttop3\ttop4pct\ttop4\ttop5pct\ttop5\timagefilename\truntime\tuuid\n" +
      top1pct + "\t" + top1 + "\t" +
      top2pct + "\t" + top2 + "\t" +
      top3pct + "\t" + top3 + "\t" +
      top4pct + "\t" + top4 + "\t" +
      top5pct + "\t" + top5 + "\t" +
      filename + "\t" + str(round(end - start)) + "\t" + uniqueid + "\n")
```

We use the pyspark interpreter to run this Python script, but there's no Spark in it yet. The data also gets loaded into Apache Hive via Apache NiFi.

Deep Learning Models

You will need to download the pre-built Inception model files and reference them on your server:

synset.txt
Inception-BN-0000.params
Inception-BN-symbol.json

See: https://mxnet.incubator.apache.org/tutorials/embedded/wine_detector.html

```shell
curl --header 'Host: data.mxnet.io' \
     --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0' \
     --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
     --header 'Accept-Language: en-US,en;q=0.5' \
     --header 'Referer: http://data.mxnet.io/models/imagenet/' \
     --header 'Connection: keep-alive' \
     'http://data.mxnet.io/models/imagenet/inception-bn.tar.gz' -o 'inception-bn.tar.gz' -L
curl http://data.mxnet.io/models/imagenet/synset.txt -o synset.txt
```

More Models

http://data.mxnet.io/models/imagenet/

Source Code

https://github.com/tspannhw/mxnet-in-notebooks
https://github.com/tspannhw/ApacheBigData101

References

If you want to run this in DSX or Jupyter: https://community.hortonworks.com/articles/176784/deep-learning-101-using-apache-mxnet-in-dsx-notebo.html

Setup

If you need to set up Apache MXNet on HDF: https://community.hortonworks.com/articles/174227/apache-deep-learning-101-using-apache-mxnet-on-an.html

Other Articles in the Series

https://community.hortonworks.com/articles/174538/apache-deep-learning-101-using-apache-mxnet-with-h.html
https://community.hortonworks.com/articles/174399/apache-deep-learning-101-using-apache-mxnet-on-apa.html
https://community.hortonworks.com/articles/155435/using-the-new-mxnet-model-server.html
https://community.hortonworks.com/articles/171960/using-apache-mxnet-on-an-apache-nifi-15-instance-w.html
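Hand-building that tab-separated %table string is easy to get wrong, since the header and row columns must stay aligned. A small helper sketch, assuming hypothetical (label, percentage) pairs rather than the script's individual topN variables:

```python
def zeppelin_table(predictions, filename, runtime_secs, uuid):
    """Build a Zeppelin %table string from (label, percentage) pairs.

    `predictions` is a list of five (label, pct) tuples, highest first.
    Column order mirrors the script above: pct, then label, for each rank,
    followed by the image filename, runtime, and uuid columns.
    """
    header = []
    row = []
    for rank, (label, pct) in enumerate(predictions, start=1):
        header += ["top%dpct" % rank, "top%d" % rank]
        row += [str(pct), label]
    header += ["imagefilename", "runtime", "uuid"]
    row += [filename, str(runtime_secs), uuid]
    return "%table " + "\t".join(header) + "\n" + "\t".join(row) + "\n"
```

Because the header and the row are built in the same loop, the columns cannot drift out of alignment the way hand-concatenated strings can.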
03-02-2018
04:47 PM
4 Kudos
This is for people preparing to attend my talk on Deep Learning at DataWorks Summit Berlin 2018 (https://dataworkssummit.com/berlin-2018/#agenda) on Thursday, April 19, 2018 at 11:50 AM Berlin time.

Many people are using IBM's excellent DSX platform, which uses Jupyter notebooks and the ever-popular Kubernetes, so I wanted to try out Apache MXNet in this environment. It's great. Create a new notebook or reuse an existing one. For Python, the default is Jupyter; Zeppelin is now also supported. I am using Python 2.7 with DSX Desktop on an OSX workstation, which supports Apache MXNet. My local Apache MXNet and MXNet Python installations worked well with DSX.

I needed OpenCV for this example, and I was able to install it right inside IBM DSX:

```
!pip install --user opencv-python
```

It is very easy to start a notebook and add your code, and you get nice syntax coloring. I uploaded the precompiled model. We can check our list of Python libraries with:

```
!pip list --isolated --format=columns
```

It is very easy to run your Apache MXNet code right in a notebook, and easy to share it with other data scientists and engineers in your group and beyond.

IBM DSX Assets

You will need to download the pre-built Inception model files and add them as assets:

synset.txt
Inception-BN-0000.params
Inception-BN-symbol.json

See: https://mxnet.incubator.apache.org/tutorials/embedded/wine_detector.html

```shell
curl --header 'Host: data.mxnet.io' \
     --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0' \
     --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
     --header 'Accept-Language: en-US,en;q=0.5' \
     --header 'Referer: http://data.mxnet.io/models/imagenet/' \
     --header 'Connection: keep-alive' \
     'http://data.mxnet.io/models/imagenet/inception-bn.tar.gz' -o 'inception-bn.tar.gz' -L
curl http://data.mxnet.io/models/imagenet/synset.txt -o synset.txt
```

More Models

http://data.mxnet.io/models/imagenet/

Source Code

https://github.com/tspannhw/mxnet-in-notebooks
https://github.com/tspannhw/ApacheBigData101
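Once synset.txt is available as an asset, the model's numeric class index has to be turned into a human-readable label. A minimal loader sketch, assuming the usual ImageNet synset format of one "wordnet-id label" pair per line (e.g. "n01440764 tench, Tinca tinca"):

```python
def load_synset(path):
    """Parse an ImageNet synset.txt into a list of human-readable labels.

    Each line looks like "n01440764 tench, Tinca tinca": a WordNet id,
    a space, then the label text. The list index matches the model's
    class index, so labels[i] names class i.
    """
    labels = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Drop the leading WordNet id, keep only the label text.
            labels.append(line.split(" ", 1)[1] if " " in line else line)
    return labels
```

The top-N percentages printed by the classification script would then be paired with `labels[index]` for each predicted class index.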
02-28-2018
02:14 PM
1 Kudo
Some examples of ETL/ELT:

https://community.hortonworks.com/articles/155527/ingesting-golden-gate-records-from-apache-kafka-an.html
https://community.hortonworks.com/articles/64122/incrementally-streaming-rdbms-data-to-your-hadoop.html
https://community.hortonworks.com/articles/149982/hl7-ingest-part-4-streaming-analytics-manager-and.html
https://community.hortonworks.com/articles/66861/nifi-etl-removing-columns-filtering-rows-changing.html
https://community.hortonworks.com/articles/82346/spark-pyspark-for-etl-to-join-text-files-with-data.html
https://community.hortonworks.com/articles/102519/a-reference-architecture-for-enterprise-data-wareh.html
https://community.hortonworks.com/articles/48843/slowly-changing-dimensions-on-hadoop-part-1.html
02-28-2018
02:09 PM
Does your user have the required permissions? See also:

https://community.hortonworks.com/questions/47197/phoenix-backup.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_data-access/content/ch_hbase_bar.html

You have to specify table names, so specify the SYSTEM tables as well.

Creating and Maintaining a Complete Backup Image

The first step in running the backup-and-restore utilities is to perform a full backup and store the data in a separate image from the source. At a minimum, you must do this to get a baseline before you can rely on incremental backups.

Important Tip: Record the backup ID that appears at the end of a successful backup. If the source cluster fails and you need to recover the dataset with a restore operation, having the backup ID readily available can save time.
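Since the tip above hinges on recording the backup ID, here is a small sketch of pulling it out of captured CLI output. The sample output line and the `backup_<digits>` id shape are assumptions made for illustration, not guaranteed HBase output; check the output of your own HBase version before relying on a pattern like this.

```python
import re

# Hypothetical sample of captured CLI output; the real output of
# `hbase backup create full ...` on your cluster may be formatted differently.
SAMPLE_OUTPUT = "Backup session backup_1516234567890 finished. Status: SUCCESS"

def extract_backup_id(cli_output):
    """Return the first token shaped like backup_<digits>, or None.

    Intended for scripting around the backup CLI so the backup ID
    gets recorded automatically instead of copied by hand.
    """
    match = re.search(r"backup_\d+", cli_output)
    return match.group(0) if match else None
```

Storing the extracted ID (for example, appending it to a log file with a timestamp) means a restore operation never has to start with a search through old terminal scrollback.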
02-28-2018
02:03 PM
For some things, Sqoop is a good choice; for others, Spark works as well.
02-28-2018
01:58 PM
1. Set Translate Field Names to true.
2. You must specify a schema registry.
3. Change your schema access strategy; it's usually not read from the header.

Where is your schema name? Where is the schema stored?
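When the schema access strategy points at a registry, the record reader looks the schema up by name, so the name configured on the reader must match the name it was registered under. A minimal Avro schema of the kind you might register; the record and field names here are placeholders, not taken from the question:

```json
{
  "type": "record",
  "name": "MyRecord",
  "namespace": "com.example",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "message", "type": ["null", "string"], "default": null}
  ]
}
```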