Can someone point me to a good resource for "best practices" for a hadoop directory structure for storing raw data, intermediate files, output files, metadata etc in HDFS? Do you segregate different data types into different directory structures? Are the directory structures labeled per YYMMDD? What would a typical HDFS directory structure look like when setting up to store data?
It depends on the data layers in your HDFS setup. For instance, if you have a raw layer and a standard layer, the following is one common practice.
Raw is the first landing zone for data and should stay as close to the original data as possible. Standard is the staging layer where data is converted into different file formats, but no semantic changes have been made to it yet.
The structure for raw data and meta is: raw/businessarea/sourcesystem/data/date&time
The structure of the standard data/meta folders is the same: standard/businessarea/sourcesystem/data/date&time
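A quick sketch of how those paths compose in practice. The business area "sales" and source system "crm" are placeholder names, not from the original answer; the `hdfs dfs -mkdir -p` command shown in the comment is the standard way to create the partitions on a real cluster.

```shell
#!/bin/sh
# Build layered HDFS paths: <layer>/<businessarea>/<sourcesystem>/<data|meta>/<date>
BUSINESS_AREA="sales"           # placeholder business area
SOURCE_SYSTEM="crm"             # placeholder source system
LOAD_DATE="2024/01/15"          # in practice: $(date +%Y/%m/%d)

RAW_DATA="/raw/${BUSINESS_AREA}/${SOURCE_SYSTEM}/data/${LOAD_DATE}"
RAW_META="/raw/${BUSINESS_AREA}/${SOURCE_SYSTEM}/meta/${LOAD_DATE}"
STD_DATA="/standard/${BUSINESS_AREA}/${SOURCE_SYSTEM}/data/${LOAD_DATE}"

# On a real cluster you would create them with:
#   hdfs dfs -mkdir -p "$RAW_DATA" "$RAW_META" "$STD_DATA"
echo "$RAW_DATA"
echo "$STD_DATA"
```

Partitioning by date under each source keeps ingestion idempotent: re-running a day's load only touches that day's directory.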
These naming standards also make it easier to define Sentry/Ranger policies based on AD groups, since each policy can target a predictable path prefix.
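As an illustration of that mapping, one AD group per layer/area prefix lines up with plain HDFS ACLs as well (Ranger and Sentry policies target the same resource paths). The group names here are made-up examples; the real commands are shown in the comments.

```shell
#!/bin/sh
# Map hypothetical AD-synced groups to layer/business-area prefixes.
RAW_PATH="/raw/sales/crm"                      # placeholder path
STD_PATH="/standard/sales/crm"                 # placeholder path
READ_ACL="group:grp_sales_raw_read:r-x"        # read-only on raw
RW_ACL="group:grp_sales_std_rw:rwx"            # read-write on standard

# On a real cluster:
#   hdfs dfs -setfacl -R -m "$READ_ACL" "$RAW_PATH"
#   hdfs dfs -setfacl -R -m "$RW_ACL"   "$STD_PATH"
echo "setfacl -R -m $READ_ACL $RAW_PATH"
```

Because the path convention is uniform, adding a new source system only requires new directories, not new policy logic.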