I've been trying to read large csv files from HDFS into R using the data.table package since it's a lot faster than the rhdfs package in my experience.
I have been successful with reading entire files with the following commands:
data <- fread("/usr/bin/hadoop fs -text /path/to/the/file.csv", fill=TRUE)
Then, I would like to read in only the rows that contain the value "2MS-US". I tried to do this with grep:
data <- fread("/usr/bin/hadoop fs -text /path/to/the/file.csv | grep '2MS-US'", fill=TRUE)
This returns the correct number of rows, but the original column names are lost; they come back as "V1", "V2", etc.
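For now I can work around it by reading just the header line separately and re-applying the names with setnames(). A rough sketch of that workaround (same placeholder path as above; it assumes the grep'd rows have the same number of columns as the header):

library(data.table)

# Read only the header: nrows = 0 returns the column names with empty columns
hdr <- fread("/usr/bin/hadoop fs -text /path/to/the/file.csv", nrows = 0)

# Read the filtered rows without a header, then re-attach the names
data <- fread("/usr/bin/hadoop fs -text /path/to/the/file.csv | grep '2MS-US'",
              header = FALSE, fill = TRUE)
setnames(data, names(hdr))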
According to this thread, the issue of losing column names when piping through grep was resolved in data.table 1.9.6, but I am still experiencing it even though I am using 1.9.7. Any thoughts on this? Thanks!