Member since: 02-02-2016
Posts: 31
Kudos Received: 41
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3809 | 03-03-2016 06:05 AM |
|  | 2028 | 03-01-2016 12:30 PM |
|  | 25015 | 02-23-2016 09:19 AM |
|  | 1383 | 02-18-2016 09:12 AM |
|  | 10838 | 02-15-2016 09:49 AM |
03-01-2016
10:08 AM
1 Kudo
@Neeraj Sabharwal There is an article showing that Spark and TensorFlow can work together: https://databricks.com/blog/2016/01/25/deep-learning-with-spark-and-tensorflow.html So I wanted to know whether there are any recommended ways of installing TensorFlow with HDP.
03-01-2016
06:58 AM
4 Kudos
Here are the installation modes for TensorFlow: https://www.tensorflow.org/versions/r0.7/get_started/os_setup.html Can any of these installations be automated through Cloudbreak? If not, do you already have a recommended way of installing TensorFlow on HDP?
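For what it's worth, Cloudbreak supports custom scripts (recipes) that run during cluster provisioning, so one possible approach is a recipe that installs TensorFlow on every node. A minimal sketch, assuming a pip-based install; depending on the TensorFlow release, the package source may need to be the wheel URL from the install guide rather than the PyPI name:

```bash
#!/bin/bash
# Hypothetical Cloudbreak recipe: install TensorFlow on each HDP node via pip.
# The package source is an assumption -- older releases require the wheel URL
# from the TensorFlow install guide instead of the plain PyPI package name.
yum install -y python-pip
pip install --upgrade pip
pip install tensorflow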
Labels:
- Apache Spark
02-23-2016
09:28 AM
1 Kudo
More information: http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs
02-23-2016
09:19 AM
2 Kudos
Hi Pranshu,

You can follow the instructions in the link below:
https://community.hortonworks.com/articles/4427/fix-under-replicated-blocks-in-hdfs-manually.html

Regards,
Karthik Gopal
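For reference, a minimal sketch of the manual procedure that article describes, assuming a target replication factor of 3 (adjust to your cluster's configured factor):

```bash
# collect the paths of files with under-replicated blocks
hdfs fsck / | grep 'Under replicated' | awk -F: '{print $1}' > /tmp/under_replicated_files

# re-apply the desired replication factor (3 here) to each file
while read -r f; do
  hdfs dfs -setrep 3 "$f"
done < /tmp/under_replicated_files
```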
02-18-2016
12:46 PM
2 Kudos
You can run hdfs fsck / -list-corruptfileblocks to list the corrupt or missing blocks, and then follow the article above to fix them; hdfs fsck / -delete deletes the corrupted files outright.
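As a quick reference, a minimal sketch of that sequence (both commands need HDFS superuser privileges, and -delete is destructive):

```bash
# print the list of files with missing or corrupt blocks
hdfs fsck / -list-corruptfileblocks

# WARNING: permanently removes the corrupted files from HDFS
hdfs fsck / -delete
```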
02-18-2016
09:12 AM
2 Kudos
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hdfs_admin_tools/content/ch04.html
02-18-2016
09:12 AM
2 Kudos
Hi Rushikesh,

Hadoop jobs are data intensive, and compressing data can speed up I/O operations; MapReduce jobs are almost always I/O bound.

- Compressed data saves storage space and speeds up data transfers across the network, so capital allocated for hardware goes further.
- Reduced I/O and network load can bring significant performance improvements, so MapReduce jobs can finish faster overall.
- On the other hand, CPU utilization and processing time increase during compression and decompression.

Understanding this tradeoff is important for the MapReduce pipeline's overall performance.
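To make the tradeoff concrete, here is a minimal sketch of enabling compression for a single MapReduce job via -D overrides; the job jar, class name, and choice of the Snappy codec are assumptions for illustration:

```bash
# Compress intermediate map output (the biggest win for shuffle-heavy jobs)
# and the final job output. Assumes the job's driver uses ToolRunner so that
# -D generic options are parsed, and that Snappy is installed on the cluster.
hadoop jar my-job.jar MyJob \
  -D mapreduce.map.output.compress=true \
  -D mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  input/ output/
```

Setting these per job, rather than cluster-wide, makes it easy to measure whether the reduced I/O outweighs the extra CPU cost for a given workload.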
02-18-2016
08:04 AM
1 Kudo
HBase snapshots let you take a point-in-time snapshot of a table with minimal impact on region servers: snapshot, clone, and restore operations don't involve copying data, and exporting a snapshot to another cluster puts no load on the region servers either.
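For illustration, a minimal sketch of the snapshot lifecycle driven from the command line; the table and snapshot names are placeholders:

```bash
# take a snapshot, then clone it into a new, independent table
echo "snapshot 'mytable', 'mytable_snap'" | hbase shell
echo "clone_snapshot 'mytable_snap', 'mytable_clone'" | hbase shell

# restoring requires the table to be disabled first
echo "disable 'mytable'" | hbase shell
echo "restore_snapshot 'mytable_snap'" | hbase shell
echo "enable 'mytable'" | hbase shell

# export the snapshot to another cluster; runs as a MapReduce job,
# so it does not load the source region servers
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot mytable_snap \
  -copy-to hdfs://dest-nn.example.com:8020/hbase \
  -mappers 4
```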
02-18-2016
08:02 AM
1 Kudo
Hi Rushikesh,

I suggest you go through this article; it might be helpful:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hbase_snapshots_guide/content/ch_hbase_snapshots_chapter.html

Regards,
Karthik Gopal
02-15-2016
09:49 AM
2 Kudos
Hi Rushikesh,

You can create a daily script that uses the following option of the hadoop fs command to append to an existing file on HDFS.

Usage: hadoop fs -appendToFile <localsrc> ... <dst>

Appends a single src, or multiple srcs, from the local file system to the destination file system; it can also read input from stdin and append it to the destination file system.
hadoop fs -appendToFile localfile /user/hadoop/hadoopfile
hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile (reads the input from stdin)

Exit code: returns 0 on success and 1 on error.
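Since a daily script is mentioned, here is a minimal sketch of what it might look like; the local and HDFS paths are placeholders for illustration:

```bash
#!/bin/bash
# Append today's local log to a rolling file on HDFS.
# LOCAL_SRC and HDFS_DST are illustrative placeholders.
LOCAL_SRC=/var/log/myapp/daily.log
HDFS_DST=/user/hadoop/myapp/all_days.log

# create the destination once if it does not exist yet
hadoop fs -test -e "$HDFS_DST" || hadoop fs -touchz "$HDFS_DST"

hadoop fs -appendToFile "$LOCAL_SRC" "$HDFS_DST"
```

Scheduled via cron, for example: 0 1 * * * /opt/scripts/hdfs_daily_append.sh (the script path is likewise hypothetical).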