
Cloudbreak default blueprint installation in OpenStack has HDFS capacity issues


I have set up Cloudbreak on OpenStack and am trying to create a cluster with the hdp-small-default blueprint. The cluster comes up, but there are issues with HDFS capacity. In Ambari, I see an HDFS Capacity Utilization alert: Capacity Used: [100%, 24576], Capacity Remaining: [0].

In the HDFS config, I see the NameNode directory set to /hadoopfs/fs1/hdfs/namenode. The NameNode is running on a host with 80 GB of space on file system /dev/vda1 mounted on /, but /hadoopfs/fs1 is mounted on /dev/vdb, which has just 9.8 GB of space. I'm not sure why it defaults the NameNode dir to /hadoopfs/fs1/hdfs/namenode, since the blueprint does not specify anything.
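To confirm the mismatch, it can help to check which device actually backs each path on the NameNode host (a quick sketch; the paths and devices are the ones from this report and will differ on other clusters):

```shell
# Which filesystem backs the NameNode dir? (here: /dev/vdb, ~9.8 GB)
df -h /hadoopfs/fs1/hdfs/namenode

# Root volume for comparison (here: /dev/vda1, ~80 GB)
df -h /

# All attached block devices and their mount points
lsblk
```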

I tried adding "properties": { "dfs.namenode.name.dir": "/grid/0/hadoop/hdfs/namenode", "dfs.datanode.data.dir": "/grid/0/hadoop/hdfs/data" } to the hdfs-site properties, both globally and in all host groups, but the default still points to /hadoopfs/fs1/hdfs/namenode.
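For reference, this is the shape of a global hdfs-site override in the blueprint's "configurations" section (a sketch using the paths tried above; whether Cloudbreak's generated config groups then override it is exactly the open question here):

```json
{
  "configurations": [
    {
      "hdfs-site": {
        "properties": {
          "dfs.namenode.name.dir": "/grid/0/hadoop/hdfs/namenode",
          "dfs.datanode.data.dir": "/grid/0/hadoop/hdfs/data"
        }
      }
    }
  ]
}
```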

Attached is the blueprint I have used (document.txt).

1 ACCEPTED SOLUTION

2 REPLIES


I am also facing the same issue -- I see that Cloudbreak creates config groups for the HDFS and YARN services even though none are mentioned in the blueprint definition. How do I tell Cloudbreak not to create config groups?

10668-screen-shot-2016-12-22-at-150758.png

10670-screen-shot-2016-12-22-at-150816.png


