Support Questions


How to protect HDFS directories from deletion by mistake

Guru

Hi Team,

I was reading a KB article that describes how to protect HDFS directories, but when I tested it I was still able to delete a protected directory.

I have configured fs.protected.directories in core-site.xml with the /lowes/sampleTest directory and ran the tests below.

[root@samplehost ~]$ hadoop fs -rm -R -skipTrash /lowes/sampleTest

rm: Cannot delete non-empty protected directory /lowes/sampleTest

[root@samplehost ~]$ hadoop fs -rm -R /lowes/sampleTest

16/04/27 05:50:15 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.

Moved: 'hdfs://HDPINFHA/lowes/sampleTest' to trash at: hdfs://HDPINFHA/user/root/.Trash/Current
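For reference, this is how I verify what the client-side configuration actually resolves (the expected value here is just my test directory):

# Print the value the client resolves for the protected directories list
hdfs getconf -confKey fs.protected.directories

# Expected output for this test: /lowes/sampleTest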

Could you please help me understand why the protected directory can still be deleted this way?

1 ACCEPTED SOLUTION

Master Guru

@Saurabh Kumar - Which version of HDP are you using? I see that the protected directories feature is available in Hadoop 2.8.0:

https://issues.apache.org/jira/browse/HDFS-8983


8 REPLIES

Master Guru

@Saurabh Kumar - Which version of HDP are you using? I see that the protected directories feature is available in Hadoop 2.8.0:

https://issues.apache.org/jira/browse/HDFS-8983
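If it helps, a quick way to confirm the exact Hadoop build on the node you are testing from (standard CLI, nothing HDP-specific assumed):

# Print the Hadoop version and build details of the client in use
hadoop version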

Guru

@Kuldeep Kulkarni: I am using HDP 2.3.4 and Hadoop 2.7.1.2.3.4.0-3485. Does that mean this feature is not properly supported in our HDP stack?

Master Guru

@Saurabh Kumar - I just checked, and 2.3.4 does have HDFS-8983 implemented in it. I will try to reproduce the issue and keep you posted.

Master Guru
@Saurabh Kumar

Can you please try to delete hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest ?

Guru

@Kuldeep Kulkarni: I am able to delete it from the trash as well.

[root@samplehost ~]$ hadoop fs -rmr hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest

rmr: DEPRECATED: Please use 'rm -r' instead.

16/04/29 03:07:06 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.

Deleted hdfs://HDPINFHA/user/root/.Trash/Current/lowes/sampleTest


You can also use HDFS snapshots to protect data from user errors: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html
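For example, for the directory discussed above (the snapshot name is just an illustration):

# Enable snapshots on the directory, then take a named point-in-time snapshot
hdfs dfsadmin -allowSnapshot /lowes/sampleTest
hdfs dfs -createSnapshot /lowes/sampleTest before-cleanup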

Guru

@Abdelkrim Hadjidj: Yes, you are right. Right now we are using snapshots in all our clusters. But since I came across this functionality, I was curious about it, which is why I posted my question.

Contributor

This is a good article by our intern James Medel on protecting against accidental deletion:

USING HDFS SNAPSHOTS TO PROTECT IMPORTANT ENTERPRISE DATASETS

Some time back, we introduced the ability to create snapshots to protect important enterprise data sets from user or application errors.

HDFS Snapshots are read-only point-in-time copies of the file system. Snapshots can be taken on a subtree of the file system or the entire file system and are:

  • Performant and Reliable: Snapshot creation is atomic and instantaneous, no matter the size or depth of the directory subtree
  • Scalable: Snapshots do not create extra copies of blocks on the file system. Snapshots are highly optimized in memory and stored along with the NameNode’s file system namespace

In this blog post we’ll walk through how to administer and use HDFS snapshots.

ENABLE SNAPSHOTS

In an example scenario, web server logs are being loaded into HDFS on a daily basis for processing and long-term storage. The logs are loaded a few times a day, and the dataset is organized into directories in HDFS that hold the log files for each day. Since the web server logs are stored only in HDFS, it is imperative that they are protected from deletion.

/data/weblogs

/data/weblogs/20130901

/data/weblogs/20130902

/data/weblogs/20130903

In order to provide data protection and recovery for the Web Server log data, snapshots are enabled for the parent directory:

hdfs dfsadmin -allowSnapshot /data/weblogs

Snapshots need to be explicitly enabled for directories. This provides system administrators with the level of granular control they need to manage data in HDP.
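To confirm which directories currently have snapshots enabled, you can list the snapshottable directories visible to the current user (a quick check, no extra setup assumed):

# List all snapshottable directories the current user can see
hdfs lsSnapshottableDir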

TAKE POINT IN TIME SNAPSHOTS

The following command creates a point-in-time snapshot of the /data/weblogs directory and its subtree:

hdfs dfs -createSnapshot /data/weblogs

This will create a snapshot and give it a default name based on the timestamp at which it was created. Users can provide an optional snapshot name instead of the default. With the default name, the created snapshot path will be: /data/weblogs/.snapshot/s20130903-000941.091. Users can schedule a cron job to create snapshots at regular intervals. For example, the cron entry 30 18 * * * rm /home/someuser/tmp/* tells the system to remove the contents of the tmp folder at 18:30 every day. Similarly, to integrate cron with HDFS snapshots, the entry 30 18 * * * hdfs dfs -createSnapshot /data/weblogs schedules a snapshot of /data/weblogs to be taken each day at 18:30.
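A slightly more explicit variant gives each snapshot a date-based name (the naming scheme here is just an illustration; note that % must be escaped in crontab entries):

# Crontab entry (sketch): snapshot /data/weblogs at 18:30 every day,
# naming the snapshot after the current date, e.g. daily-20130903
30 18 * * * hdfs dfs -createSnapshot /data/weblogs daily-$(date +\%Y\%m\%d)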

To view the state of the directory at the recently created snapshot:

hdfs dfs -ls /data/weblogs/.snapshot/s20130903-000941.091

Found 3 items

drwxr-xr-x   - web hadoop          0 2013-09-01 23:59 /data/weblogs/.snapshot/s20130903-000941.091/20130901

drwxr-xr-x   - web hadoop          0 2013-09-02 00:55 /data/weblogs/.snapshot/s20130903-000941.091/20130902

drwxr-xr-x   - web hadoop          0 2013-09-03 23:57 /data/weblogs/.snapshot/s20130903-000941.091/20130903

RECOVER LOST DATA

As new data is loaded into the web logs dataset, there could be an erroneous deletion of a file or directory. For example, an application could delete the set of logs pertaining to Sept 2nd, 2013 stored in the /data/weblogs/20130902 directory.

Since /data/weblogs has a snapshot, the snapshot prevents the underlying file blocks from being removed from the file system. The deletion only modifies the metadata, removing /data/weblogs/20130902 from the working directory.
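Before restoring anything, it can be useful to confirm exactly what was lost by diffing the snapshot against the current state of the directory (a quick sketch; the special name "." refers to the current state, and the snapshot name is the example one used above):

# Report differences between the snapshot and the current directory state;
# paths deleted since the snapshot are listed with a "-" marker
hdfs snapshotDiff /data/weblogs s20130903-000941.091 .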

To recover from this deletion, data is restored by copying the needed data from the snapshot path:

hdfs dfs -cp /data/weblogs/.snapshot/s20130903-000941.091/20130902 /data/weblogs/

This will restore the lost set of files to the working data set:

hdfs dfs -ls /data/weblogs

Found 3 items

drwxr-xr-x   - web hadoop          0 2013-09-01 23:59 /data/weblogs/20130901

drwxr-xr-x   - web hadoop          0 2013-09-04 12:10 /data/weblogs/20130902

drwxr-xr-x   - web hadoop          0 2013-09-03 23:57 /data/weblogs/20130903
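Note that a plain copy does not keep the original modification times, as the 20130902 entry above shows. If the restored files should retain their original timestamps, ownership, and permissions, the copy can preserve them (assuming Hadoop 2.6 or later, where -cp accepts the -p[topax] options):

# Restore while preserving timestamps, ownership, permissions, ACLs, and XAttrs
hdfs dfs -cp -ptopax /data/weblogs/.snapshot/s20130903-000941.091/20130902 /data/weblogs/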

Since snapshots are read-only, HDFS will also protect against user or application deletion of the snapshot data itself. The following operation will fail:

hdfs dfs -rmdir /data/weblogs/.snapshot/s20130903-000941.091/20130902
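When a snapshot is no longer needed, the owner of the snapshottable directory removes it explicitly; this is the supported way to reclaim the space it holds (the snapshot name is the example one from above):

# Explicitly remove a snapshot that is no longer needed
hdfs dfs -deleteSnapshot /data/weblogs s20130903-000941.091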

NEXT STEPS

With HDP 2.1, you can use snapshots to protect your enterprise data from accidental deletion, corruption and errors. Download HDP to get started.