Member since: 06-10-2017
Posts: 8 · Kudos Received: 0 · Solutions: 0
08-26-2017 02:47 PM
@Nagesh Kumar It looks like you are installing the Python version of TensorFlow with Anaconda3 as your Python distribution. The error you are getting is a GLIBC error, and it is not uncommon; it is a general OS/Anaconda/TensorFlow compatibility issue. What OS are you running? You are likely hitting this: https://stackoverflow.com/questions/39807621/glibc-2-14-not-recognized-by-tensorflow-installation-in-redhat. A Google search yields a number of similar results, so the problem appears fairly widespread. This answer suggests a workaround: https://stackoverflow.com/questions/33655731/error-while-importing-tensorflow-in-python2-7-in-ubuntu-12-04-glibc-2-17-not-f. If you are not running Ubuntu, you should be able to adapt the specific versions in that workaround to your OS.
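Before trying the workaround, it may help to confirm the mismatch from Python itself. Here is a minimal sketch; the 2.17 threshold is an assumption taken from the glibc version typically named in these import errors, so adjust it to whatever your own traceback reports:

```python
import platform

# The prebuilt TensorFlow wheels of this era link against newer glibc
# symbols (GLIBC_2.14 / GLIBC_2.17), so "import tensorflow" fails on
# older distributions such as RHEL/CentOS 6 (glibc 2.12).
libc, version = platform.libc_ver()
print(libc, version)  # e.g. "glibc 2.12" on CentOS 6

required = (2, 17)  # assumption: the version named in the import error
if libc == "glibc" and tuple(int(p) for p in version.split(".")[:2]) < required:
    print("System glibc is older than %d.%d; the stock wheel will not "
          "import. Build TensorFlow from source, or point LD_LIBRARY_PATH "
          "at a locally installed newer glibc as the workaround describes."
          % required)
```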
08-26-2017 12:31 PM · 1 Kudo
Speed
Apache Spark – Spark is a lightning-fast cluster computing tool. It runs applications up to 100x faster in memory and 10x faster on disk than Hadoop, because it reduces the number of read/write cycles to disk and stores intermediate data in memory.
Hadoop MapReduce – MapReduce reads from and writes to disk at every stage, which slows down processing.

Difficulty
Apache Spark – Spark is easy to program, as it offers many high-level operators on the RDD (Resilient Distributed Dataset) abstraction (see the sketch after this list).
Hadoop MapReduce – In MapReduce, developers must hand-code each and every operation, which makes it difficult to work with.

Ease of Management
Apache Spark – Spark can run batch, interactive, machine-learning, and streaming workloads on the same cluster, making it a complete data analytics engine. There is no need to manage a different component for each need; installing Spark on a cluster is enough to handle all of these requirements.
Hadoop MapReduce – MapReduce provides only a batch engine, so you depend on different engines (for example Storm, Giraph, or Impala) for other requirements, and managing that many components is difficult.

For more, refer to the link below: Spark vs Hadoop
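To make the high-level-operator point concrete, here is a minimal PySpark word-count sketch; the input path is hypothetical, and the equivalent MapReduce job would need separate Mapper and Reducer classes plus a driver:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
sc = spark.sparkContext

# The whole job is a chain of high-level RDD operators; Spark keeps the
# intermediate (word, 1) pairs in memory instead of spilling each stage
# to disk the way MapReduce does.
counts = (sc.textFile("hdfs:///tmp/input.txt")         # hypothetical path
            .flatMap(lambda line: line.split())        # line -> words
            .map(lambda word: (word, 1))               # word -> (word, 1)
            .reduceByKey(lambda a, b: a + b)           # sum counts per word
            .cache())                                  # keep result in memory

print(counts.take(10))
spark.stop()
```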