Created on 04-27-2016 10:44 AM - edited 09-16-2022 03:15 AM
Hi Team,
I was reading a KB article about protecting HDFS directories, but when I tested it I was still able to delete a protected directory.
I have configured fs.protected.directories in core-site.xml with the /lowes/sampleTest directory and tested as shown below.
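For reference, this is the entry I added in core-site.xml:
<property>
  <name>fs.protected.directories</name>
  <value>/lowes/sampleTest</value>
</property>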
[root@samplehost ~]$ hadoop fs -rm -R -skipTrash /lowes/sampleTest
rm: Cannot delete non-empty protected directory /lowes/sampleTest
[root@samplehost ~]$ hadoop fs -rm -R /lowes/sampleTest
16/04/27 05:50:15 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://HDPINFHA/lowes/sampleTest' to trash at: hdfs://HDPINFHA/user/root/.Trash/Current
Do you have any suggestions on this?
Created 04-27-2016 04:03 PM
@Saurabh Kumar - Which version of HDP are you using? I see that the protected-directories feature is available in Hadoop 2.8.0.
Created 04-27-2016 07:08 PM
@Kuldeep Kulkarni: I am using HDP 2.3.4 with Hadoop 2.7.1.2.3.4.0-3485. Does that mean it is not properly supported in our HDP stack?
Created 04-27-2016 07:27 PM
@Saurabh Kumar - I just checked, and 2.3.4 has HDFS-8983 implemented in it. I will try to reproduce this and keep you posted.
Created 04-27-2016 07:30 PM
Can you please try to delete hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest ?
Created 04-29-2016 07:05 AM
@Kuldeep Kulkarni: I am able to delete it from the trash as well.
[root@samplehost ~]$ hadoop fs -rmr hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest
rmr: DEPRECATED: Please use 'rm -r' instead.
16/04/29 03:07:06 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
Deleted hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest
Created 04-29-2016 07:42 AM
You can also use HDFS snapshots to protect data from user errors: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html
Created 04-29-2016 10:33 AM
@Abdelkrim Hadjidj: Yes, you are right. Right now we are using snapshots in all our clusters. But when I saw this functionality I was curious about it, which is why I posted my concern.
Created 04-28-2017 04:10 PM
This is a good article by our intern James Medel on protecting against accidental deletion:
Some time back, we introduced the ability to create snapshots to protect important enterprise data sets from user or application errors.
HDFS snapshots are read-only point-in-time copies of the file system. Snapshots can be taken on a subtree of the file system or on the entire file system.
In this blog post we’ll walk through how to administer and use HDFS snapshots.
In an example scenario, web server logs are loaded into HDFS on a daily basis for processing and long-term storage. The logs are loaded a few times a day, and the dataset is organized into directories that each hold one day's log files in HDFS. Since the web server logs are stored only in HDFS, it's imperative that they are protected from deletion.
/data/weblogs
/data/weblogs/20130901
/data/weblogs/20130902
/data/weblogs/20130903
In order to provide data protection and recovery for the Web Server log data, snapshots are enabled for the parent directory:
hdfs dfsadmin -allowSnapshot /data/weblogs
Snapshots need to be explicitly enabled for directories. This provides system administrators with the level of granular control they need to manage data in HDP.
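To see which directories are snapshottable, or to disable snapshots again, the companion commands can be used (a brief sketch; note that all snapshots of a directory must be deleted before snapshots can be disallowed):
hdfs lsSnapshottableDir                       # list snapshottable directories visible to the current user
hdfs dfsadmin -disallowSnapshot /data/weblogs # disable snapshot creation (fails while snapshots still exist)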
The following command creates a point-in-time snapshot of the /data/weblogs directory and its subtree:
hdfs dfs -createSnapshot /data/weblogs
This will create a snapshot and give it a default name matching the timestamp at which it was created. Users can provide an optional snapshot name instead of the default. With the default name, the created snapshot path will be: /data/weblogs/.snapshot/s20130903-000941.091. Users can schedule a cron job to create snapshots at regular intervals. For example, the crontab entry 30 18 * * * rm /home/someuser/tmp/* tells the system to remove the contents of the tmp folder at 18:30 every day. Similarly, the entry 30 18 * * * hdfs dfs -createSnapshot /data/weblogs schedules a snapshot of /data/weblogs to be created each day at 18:30.
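For instance, a snapshot with an explicit name can be created as follows (the name weblogs-20130903 is illustrative):
hdfs dfs -createSnapshot /data/weblogs weblogs-20130903
# the snapshot is then addressable at /data/weblogs/.snapshot/weblogs-20130903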
To view the state of the directory at the recently created snapshot:
hdfs dfs -ls /data/weblogs/.snapshot/s20130903-000941.091
Found 3 items
drwxr-xr-x   - web hadoop          0 2013-09-01 23:59 /data/weblogs/.snapshot/s20130903-000941.091/20130901
drwxr-xr-x   - web hadoop          0 2013-09-02 00:55 /data/weblogs/.snapshot/s20130903-000941.091/20130902
drwxr-xr-x   - web hadoop          0 2013-09-03 23:57 /data/weblogs/.snapshot/s20130903-000941.091/20130903
As new data is loaded into the web logs dataset, a file or directory could be deleted erroneously. For example, an application could delete the set of logs pertaining to Sept 2nd, 2013, stored in the /data/weblogs/20130902 directory.
Since /data/weblogs has a snapshot, the snapshot protects the file blocks from being removed from the file system. A deletion only modifies the metadata to remove /data/weblogs/20130902 from the working directory.
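For illustration, the accidental deletion in this scenario would be something like:
hdfs dfs -rm -r /data/weblogs/20130902
# the directory is removed from the working tree, but its blocks remain referenced by the snapshot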
To recover from this deletion, data is restored by copying the needed data from the snapshot path:
hdfs dfs -cp /data/weblogs/.snapshot/s20130903-000941.091/20130902 /data/weblogs/
This will restore the lost set of files to the working data set:
hdfs dfs -ls /data/weblogs
Found 3 items
drwxr-xr-x   - web hadoop          0 2013-09-01 23:59 /data/weblogs/20130901
drwxr-xr-x   - web hadoop          0 2013-09-04 12:10 /data/weblogs/20130902
drwxr-xr-x   - web hadoop          0 2013-09-03 23:57 /data/weblogs/20130903
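To verify exactly what differs between the snapshot and the current state of the directory, the snapshot diff report can be used ('.' denotes the current state):
hdfs snapshotDiff /data/weblogs s20130903-000941.091 .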
Since snapshots are read-only, HDFS will also protect against user or application deletion of the snapshot data itself. The following operation will fail:
hdfs dfs -rmdir /data/weblogs/.snapshot/s20130903-000941.091/20130902
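Snapshots themselves are removed with the dedicated deleteSnapshot command rather than a regular delete, for example:
hdfs dfs -deleteSnapshot /data/weblogs s20130903-000941.091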
With HDP 2.1, you can use snapshots to protect your enterprise data from accidental deletion, corruption and errors. Download HDP to get started.