Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3651 | 05-03-2017 05:13 PM |
| | 3009 | 05-02-2017 08:38 AM |
| | 3264 | 05-02-2017 08:13 AM |
| | 3216 | 04-10-2017 10:51 PM |
| | 1681 | 03-28-2017 02:27 AM |
03-04-2016
06:49 PM
@rkanchu Everything SAS-related is here: http://hortonworks.com/partner/sas/
03-04-2016
04:04 PM
@Michael DURIEUX Please accept the best answer.
03-04-2016
01:10 PM
OK, you need to confirm which directories you specified for the DataNode in Ambari > HDFS > Configs.
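A minimal sketch of that check, assuming you can shell into the node; the /hadoop/hdfs/data path below is only an example, substitute whatever dfs.datanode.data.dir is set to in your configuration:

```bash
# Print the DataNode data directories configured for this node
hdfs getconf -confKey dfs.datanode.data.dir

# For each directory returned, confirm it exists, is owned by hdfs,
# and has free space (example path; adjust to your configuration)
ls -ld /hadoop/hdfs/data
df -h /hadoop/hdfs/data
```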
03-04-2016
11:31 AM
Here's an example; the file type doesn't matter, as everything is bytes. You can then ingest the CSV with Hive, Pig, or Spark: http://www.lampdev.org/programming/hadoop/apache-flume-spooldir-sink-tutorial.html
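As a rough sketch of what that tutorial sets up (the agent name, spool directory, and HDFS path here are assumptions, not anything from the thread):

```bash
# Write a minimal Flume agent config: spooldir source -> memory channel -> HDFS sink
cat > /tmp/spooldir-agent.conf <<'EOF'
agent.sources  = src1
agent.channels = ch1
agent.sinks    = sink1

agent.sources.src1.type     = spooldir
agent.sources.src1.spoolDir = /data/incoming
agent.sources.src1.channels = ch1

agent.channels.ch1.type = memory

agent.sinks.sink1.type          = hdfs
agent.sinks.sink1.channel       = ch1
agent.sinks.sink1.hdfs.path     = /landing/csv
agent.sinks.sink1.hdfs.fileType = DataStream
EOF

# Start the agent; any file dropped into /data/incoming is shipped to HDFS as-is
flume-ng agent --conf /etc/flume/conf --conf-file /tmp/spooldir-agent.conf --name agent
```

Once the files land in HDFS, you can point a Hive external table (or a Pig/Spark job) at /landing/csv.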
03-04-2016
10:56 AM
Go to the node and investigate the data directory you specified. Run the hdfs fsck / command to see if you have issues with HDFS, and post a screenshot of the main Ambari page with all widgets.
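For example (the data directory path is assumed; use the one from your config):

```bash
# Inspect the DataNode data directory on that node
ls -l /hadoop/hdfs/data/current
df -h /hadoop/hdfs/data

# Check HDFS health; the summary at the end reports corrupt, missing,
# and under-replicated blocks
hdfs fsck / | tail -n 30
```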
03-04-2016
10:36 AM
That's remaining capacity, not total capacity.
03-04-2016
10:31 AM
1 Kudo
I think you're interpreting it the wrong way around; it's the opposite: only slave 4 is not taking in data, while the other nodes are filled.
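One way to confirm how the blocks are actually spread per node (a plain sketch, nothing cluster-specific assumed):

```bash
# Per-DataNode report: compare "DFS Used%" and "Remaining" across nodes
# to see which ones are actually holding data
hdfs dfsadmin -report | grep -E 'Name:|DFS Used%|Remaining:'
```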
03-04-2016
10:28 AM
1 Kudo
Is your replication factor set to 3? Are you using a single reducer in your ingestion job? You can use the hdfs balancer to spread the data around your cluster: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#Administration_Commands
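A quick way to check both and to run the balancer (the threshold value and the path are examples):

```bash
# Configured default replication factor (normally 3)
hdfs getconf -confKey dfs.replication

# Per-file replication factor is the second column of the listing (example path)
hdfs dfs -ls /path/to/ingested/data

# Spread blocks across DataNodes; -threshold is the allowed deviation
# in disk-usage percentage between nodes
hdfs balancer -threshold 10
```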