
How do I configure HDFS storage types and policies?

Solved

New Contributor

According to the Apache and our own documentation, I should use the hdfs dfsadmin -setStoragePolicy and -getStoragePolicy commands to configure and use HDFS storage types and policies. However, on my HDP 2.3.0 cluster, installed using Ambari 2.1.1, the hdfs dfsadmin command does not support the -getStoragePolicy and -setStoragePolicy options. So how do I configure storage types and policies?

1 ACCEPTED SOLUTION

Accepted Solutions

Re: How do I configure HDFS storage types and policies?

I think the command has changed; it's no longer hdfs dfsadmin.

Try this:

Set a storage policy to a file or a directory.

hdfs storagepolicies -setStoragePolicy -path <path> -policy <policy>

Get the storage policy of a file or a directory.

hdfs storagepolicies -getStoragePolicy -path <path>

Source: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html#Set_Stor...
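If you want to check which policies your cluster supports before setting one, the same tool also has a -listPolicies subcommand (this requires a running HDFS cluster, so it is shown here as a sketch rather than a tested run):

```shell
# List all block storage policies known to the cluster
# (typically includes HOT, WARM, COLD, ONE_SSD, ALL_SSD, LAZY_PERSIST)
hdfs storagepolicies -listPolicies
```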

2 REPLIES


Re: How do I configure HDFS storage types and policies?

Contributor

The following steps worked for me:

1. Create the mount points:

# mkdir /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3

# chown hdfs:hadoop /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3

(We are using this configuration for test purposes only, so no disks are actually mounted.)

2. Log in to Ambari > HDFS > Settings.

3. Add the DataNode directories under DataNode > DataNode directories:

[DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/data1,[RAMDISK]/hadoop/hdfs/data2,[ARCHIVE]/hadoop/hdfs/data3

(screenshot: 5050-sp.png)

4. Restart the HDFS service, then restart all other affected services.
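For reference, Ambari persists those tagged paths into the dfs.datanode.data.dir property in hdfs-site.xml. The generated configuration looks roughly like this (a sketch of the one relevant property; the rest of the file will differ per cluster):

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/data1,[RAMDISK]/hadoop/hdfs/data2,[ARCHIVE]/hadoop/hdfs/data3</value>
</property>
```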

5. Create a directory /cold and set the COLD storage policy on it:

# su hdfs

[hdfs@hdp-qa2-n1 ~]$ hadoop fs -mkdir /cold

[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -setStoragePolicy -path /cold -policy COLD

Set storage policy COLD on /cold

6. Run getStoragePolicy to verify:

[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -getStoragePolicy -path /cold

The storage policy of /cold:

BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
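One caveat worth adding: setting a policy only affects where newly written blocks are placed. If the directory already contains data, the existing replicas are not moved automatically; you would run the HDFS Mover tool to migrate them to storage matching the policy (again, this needs a live cluster, so it is a sketch):

```shell
# Migrate existing blocks under /cold to storage types
# satisfying the COLD policy (i.e. ARCHIVE)
hdfs mover -p /cold
```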