@Nicola Poffo We don't have a formal model or method for data ingestion activities, but here are a few examples of how we arrange the data in HDFS:
hadoop fs -ls /basedata/ --contains the company's raw data, i.e. source data before any ETL processing
hadoop fs -ls /stage/ --all ETL tables are created here before the data is inserted into the target tables
hadoop fs -ls /target/ --final tables used for analysis
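To set up the three zones above on a cluster you would run `hadoop fs -mkdir -p` with the same paths; the sketch below mirrors the layout with plain `mkdir` under a local stand-in root (the `./hdfs_demo` root is an assumption for illustration) so it runs anywhere:

```shell
BASE=./hdfs_demo            # local stand-in for the HDFS root (assumption)

mkdir -p "$BASE/basedata"   # raw source data, no ETL applied
mkdir -p "$BASE/stage"      # ETL tables built here before loading targets
mkdir -p "$BASE/target"     # final tables used for analysis

ls "$BASE"                  # on a real cluster: hadoop fs -ls /
```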
How do we organize each individual application's data?
Example for one schema's data:
hadoop fs -ls /target/
hadoop fs -ls /target/<application/data sourcename>/<partitions>/
hadoop fs -ls /target/<application2/data sourcename>/<partitions>/
hadoop fs -ls /target/<application3/data sourcename>/<partitions>/
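Concretely, a partitioned layout under /target might look like the sketch below. The application names ("sales", "clickstream") and the date-style partition keys are made up for illustration, and local `mkdir` stands in for `hadoop fs -mkdir -p` so the sketch runs anywhere:

```shell
TARGET=./hdfs_demo/target                 # local stand-in for /target (assumption)

# Hypothetical application "sales" with daily partitions
mkdir -p "$TARGET/sales/dt=2024-01-01"
mkdir -p "$TARGET/sales/dt=2024-01-02"

# A second hypothetical application "clickstream"
mkdir -p "$TARGET/clickstream/dt=2024-01-01"

# List the tree; on the cluster: hadoop fs -ls -R /target/
find "$TARGET" -type d | sort
```

Each application (or data source) gets its own directory under /target, and its partitions sit one level below, which keeps per-application data isolated and easy to list.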
I hope this helps 🙂