
Cloudbreak on OpenStack not populating local storage directories

Solved

Cloudera Employee

I'm trying to build a NameNode HA cluster on OpenStack. My instances are created with two mount points: / for all the OS-related bits and /hadoopfs/fs1 for all the HDFS/YARN data (I believe /hadoopfs/fs{1..n} is the standard layout). When I deploy my cluster and it completes, dfs.datanode.data.dir is set to /hadoopfs/fs1/hdfs/data, but all of the config groups generated during the build process have null values. As a result, the DataNode process creates its data directory in /tmp/hadoop-hdfs/dfs/data/, which is on the root filesystem instead of the 20 TB data store for the instance. What am I missing that could be causing this to happen?

From Ambari:

(screenshot: 13454-screen-shot-2017-03-09-at-121706-am.png)

From the command line:

(screenshot: 13455-screen-shot-2017-03-09-at-122032-am.png)
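For reference, the command-line check above can be scripted. This is a minimal sketch that pulls the effective value straight out of the deployed hdfs-site.xml; the /etc/hadoop/conf path is an assumption based on a typical Ambari-managed layout, and `hdfs getconf -confKey dfs.datanode.data.dir` gives the same answer if the HDFS client is on the PATH:

```shell
# Extract the effective value of an HDFS property from hdfs-site.xml.
# Usage: get_hdfs_prop <property-name> <path-to-hdfs-site.xml>
get_hdfs_prop() {
  # The <name> line is followed by the <value> line in Ambari-rendered XML;
  # grab the pair and strip the <value> tags.
  grep -A1 "<name>$1</name>" "$2" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# On an Ambari-managed host the live config is typically here (assumption):
if [ -f /etc/hadoop/conf/hdfs-site.xml ]; then
  get_hdfs_prop dfs.datanode.data.dir /etc/hadoop/conf/hdfs-site.xml
fi
```

If this prints an empty string (or nothing), the null config-group override has made it onto the host, which matches the DataNode falling back to /tmp/hadoop-hdfs/dfs/data/.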

Finally here's a copy of my blueprint:

{
  "Blueprints": {
    "blueprint_name": "ha-hdfs",
    "stack_name": "HDP",
    "stack_version": "2.5"
  },
  "host_groups": [
    {
      "name": "gateway",
      "cardinality" : "1",
      "components": [
        { "name": "HDFS_CLIENT" },
        { "name": "MAPREDUCE2_CLIENT" },
        { "name": "METRICS_COLLECTOR" },
        { "name": "METRICS_MONITOR" },
        { "name": "TEZ_CLIENT" },
        { "name": "YARN_CLIENT" },
        { "name": "ZOOKEEPER_CLIENT" }
      ]
    },
    {
      "name": "master_1",
      "cardinality" : "1",
      "components": [
        { "name": "HISTORYSERVER" },
        { "name": "JOURNALNODE" },
        { "name": "METRICS_MONITOR" },
        { "name": "NAMENODE" },
        { "name": "ZKFC" },
        { "name": "ZOOKEEPER_SERVER" }
      ]
    },
    {
      "name": "master_2",
      "cardinality" : "1",
      "components": [
        { "name": "APP_TIMELINE_SERVER" },
        { "name": "JOURNALNODE" },
        { "name": "METRICS_MONITOR" },
        { "name": "RESOURCEMANAGER" },
        { "name": "ZOOKEEPER_SERVER" }
      ]
    },
    {
      "name": "master_3",
      "cardinality" : "1",
      "components": [
        { "name": "JOURNALNODE" },
        { "name": "METRICS_MONITOR" },
        { "name": "NAMENODE" },
        { "name": "ZKFC" },
        { "name": "ZOOKEEPER_SERVER" }
      ]
    },
    {
      "name": "slave_1",
      "components": [
        { "name": "DATANODE" },
        { "name": "METRICS_MONITOR" },
        { "name": "NODEMANAGER" }
      ]
    }
  ],
  "configurations": [
    {
      "core-site": {
        "properties" : {
          "fs.defaultFS" : "hdfs://myclusterhaha",
          "ha.zookeeper.quorum" : "%HOSTGROUP::master_1%:2181,%HOSTGROUP::master_2%:2181,%HOSTGROUP::master_3%:2181"
        }
      }
    },
    {
      "hdfs-site": {
        "properties" : {
          "dfs.client.failover.proxy.provider.myclusterhaha" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
          "dfs.ha.automatic-failover.enabled" : "true",
          "dfs.ha.fencing.methods" : "shell(/bin/true)",
          "dfs.ha.namenodes.myclusterhaha" : "nn1,nn2",
          "dfs.namenode.http-address" : "%HOSTGROUP::master_1%:50070",
          "dfs.namenode.http-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:50070",
          "dfs.namenode.http-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:50070",
          "dfs.namenode.https-address" : "%HOSTGROUP::master_1%:50470",
          "dfs.namenode.https-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:50470",
          "dfs.namenode.https-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:50470",
          "dfs.namenode.rpc-address.myclusterhaha.nn1" : "%HOSTGROUP::master_1%:8020",
          "dfs.namenode.rpc-address.myclusterhaha.nn2" : "%HOSTGROUP::master_3%:8020",
          "dfs.namenode.shared.edits.dir" : "qjournal://%HOSTGROUP::master_1%:8485;%HOSTGROUP::master_2%:8485;%HOSTGROUP::master_3%:8485/myclusterhaha",
          "dfs.nameservices" : "myclusterhaha",
          "dfs.datanode.data.dir" : "/hadoopfs/fs1/hdfs/data"
        }
      }
    },
    {
      "hadoop-env": {
        "properties": {
          "hadoop_heapsize": "4096",
          "dtnode_heapsize": "8192m",
          "namenode_heapsize": "32768m"
        }
      }
    }
  ]
}

Any advice you can provide would be great.

Thanks,

Scott

1 ACCEPTED SOLUTION


Re: Cloudbreak on OpenStack not populating local storage directories

Expert Contributor

Hi,

It looks like this is a bug in Cloudbreak that occurs when you don't attach volumes to the instances. Volumes can be configured when you create a template in the UI. We'll get this fixed.
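Until a fix lands, one possible workaround (a sketch, not verified against this Cloudbreak version) is to scope the property at the host-group level in the blueprint: Ambari blueprints also accept a configurations block inside each host group, and Ambari applies those as config-group overrides for the hosts in that group. For the slave_1 group from the blueprint above, that would look roughly like:

```json
{
  "name": "slave_1",
  "components": [
    { "name": "DATANODE" },
    { "name": "METRICS_MONITOR" },
    { "name": "NODEMANAGER" }
  ],
  "configurations": [
    { "hdfs-site": { "dfs.datanode.data.dir": "/hadoopfs/fs1/hdfs/data" } }
  ]
}
```

Whether this survives Cloudbreak's blueprint processing when no volumes are attached is exactly the open question, so treat it as a mitigation to test rather than a confirmed fix.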


