cloudbreak default blueprint installation in openstack has HDFS Capacity issues.


I have set up Cloudbreak on OpenStack and am trying to create a cluster with the hdp-small-default blueprint. The cluster comes up, but there are issues with HDFS capacity. In Ambari, I see an HDFS Capacity Utilization alert: Capacity Used: [100%, 24576], Capacity Remaining: [0].

In the HDFS config, I see the NameNode directory set to /hadoopfs/fs1/hdfs/namenode. The NameNode is running on a host with 80 GB of space on file system /dev/vda1 mounted on /, but /hadoopfs/fs1 is mounted on /dev/vdb, which has only 9.8 GB of space. I am not sure why it defaults the NameNode directory to /hadoopfs/fs1/hdfs/namenode, since the blueprint does not specify anything.

I tried adding "properties": { "dfs.namenode.name.dir": "/grid/0/hadoop/hdfs/namenode", "dfs.datanode.data.dir": "/grid/0/hadoop/hdfs/data" } to the hdfs-site properties, both globally and in all host groups, but the default still points to /hadoopfs/fs1/hdfs/namenode.
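For clarity, this is roughly the shape of the override I am putting in the blueprint's top-level configurations section (only the relevant fragment is shown; the full blueprint is attached):

    "configurations": [
      {
        "hdfs-site": {
          "properties": {
            "dfs.namenode.name.dir": "/grid/0/hadoop/hdfs/namenode",
            "dfs.datanode.data.dir": "/grid/0/hadoop/hdfs/data"
          }
        }
      }
    ]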

Attached is the blueprint I have used (document.txt).

1 ACCEPTED SOLUTION

@gvenkataramanan

Please try the attached blueprint small-default-hdp-corrected.txt. The issue was the NameNode default heap size of 1024 MB; overriding it to 2048 MB resolves it. Regarding the HDFS directories being placed under /hadoopfs by default: I believe that is how Cloudbreak is designed, so that the DataNodes get separate dedicated disks.
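The override itself is just a hadoop-env entry in the blueprint's configurations section, along these lines (a sketch only; the exact value format, 2048 vs 2048m, should match what the attached corrected blueprint uses):

    "configurations": [
      {
        "hadoop-env": {
          "properties": {
            "namenode_heapsize": "2048m"
          }
        }
      }
    ]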


2 REPLIES


I am also facing the same issue. I see that Cloudbreak creates config groups for the HDFS and YARN services even though none are mentioned in the blueprint definition. How do I tell Cloudbreak not to create config groups?

Screenshots attached: screen-shot-2016-12-22-at-150758.png and screen-shot-2016-12-22-at-150816.png (showing the auto-created config groups).
