
Multiple EBS drives on EC2

Explorer

Trying to figure out how to mount multiple EBS drives using Director.

My root drives are EBS-backed, but I would like to mount a couple more EBS-backed drives.


2 REPLIES

Super Collaborator

Hi ujj,

 

Director doesn't have direct support for mounting extra EBS drives, but you could try a bootstrap script that does the work with the AWS CLI. A bootstrap script runs under sudo on each instance whose template includes it.
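
For context, here is roughly what attaching such a script to an instance template looks like in a Director client configuration file. This is a sketch, not a tested configuration: the bootstrapScript field and HOCON syntax follow the sample reference config, but the field name can vary across Director versions, and the instance type and AMI here are placeholders, so check the reference config that ships with your release.

instances {
    worker {
        type: m4.xlarge
        image: ami-2051294a
        # The script runs under sudo on each instance created from this template
        bootstrapScript: """#!/bin/sh
# volume creation and attachment commands (see the script below) go here
exit 0
"""
    }
}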

 

Here's an example script that may work. I tried the commands on a stock RHEL 7.1 instance using bash. The script installs pip (which isn't included in the AMI I used) and then the AWS CLI, and then uses the CLI to create two 10 GB EBS volumes and attach them to the current instance.

 

# Install pip (not included in this AMI) and then the AWS CLI
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli

# AWS credentials for the CLI; replace with your own (or omit these and
# rely on an instance profile, as described below)
export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=YYYYY

# Look up this instance's ID and availability zone from instance metadata
export INSTANCE_ID=$(curl -s http://instance-data/latest/meta-data/instance-id)
export AVAILABILITY_ZONE=$(curl -s http://instance-data/latest/meta-data/placement/availability-zone)

# Create two 10 GB gp2 volumes in the same availability zone as the instance
export VOL1_ID=$(aws ec2 create-volume --size 10 --region us-east-1 --availability-zone ${AVAILABILITY_ZONE} --volume-type gp2 --query "VolumeId" --output text)
export VOL2_ID=$(aws ec2 create-volume --size 10 --region us-east-1 --availability-zone ${AVAILABILITY_ZONE} --volume-type gp2 --query "VolumeId" --output text)

# Give the volumes time to become available before attaching them
sleep 30

aws ec2 attach-volume --region us-east-1 --volume-id ${VOL1_ID} --instance-id ${INSTANCE_ID} --device /dev/sdf
aws ec2 attach-volume --region us-east-1 --volume-id ${VOL2_ID} --instance-id ${INSTANCE_ID} --device /dev/sdg
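
The fixed sleep is a blunt way of letting the volumes reach the "available" state. If the AWS CLI version you install supports the ec2 wait subcommands, a more robust alternative to the sleep is:

# Block until both volumes report the "available" state
aws ec2 wait volume-available --region us-east-1 --volume-ids ${VOL1_ID} ${VOL2_ID}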

You would need to adapt this script to use your own AWS credentials and region. If the instances are in a private subnet and cannot download pip and the AWS CLI, you could use a custom AMI that has the tools pre-installed, or get the CLI onto the instances some other way. Also, if your instances are launched with an EC2 instance profile whose role has permission to work with EBS, you can omit the AWS credentials from the script and rely on the profile.
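
You can also avoid hardcoding the region: an availability zone name is the region name plus a one-letter suffix (us-east-1a is in us-east-1), so the region can be derived from the metadata the script already queries. A small sketch based on that naming convention:

# Strip the trailing zone letter, e.g. us-east-1a -> us-east-1
export REGION=${AVAILABILITY_ZONE%?}
# ...then pass --region ${REGION} to the aws commands above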

 

The new EBS volumes will be independent of the lifetime of the instances, so you will need to delete them when they are no longer needed.
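
For cleanup, the CLI can detach and delete a volume once nothing is using it. A sketch, substituting the volume ID that create-volume returned:

# Detach the volume, wait for it to become available, then delete it
aws ec2 detach-volume --region us-east-1 --volume-id ${VOL1_ID}
aws ec2 wait volume-available --region us-east-1 --volume-ids ${VOL1_ID}
aws ec2 delete-volume --region us-east-1 --volume-id ${VOL1_ID}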

Super Collaborator

Guru Medasani, a Solutions Architect here at Cloudera, has used this technique to add EBS volumes to new instances, adding further commands to format the drives and mount them. Here's an extension to the script above that handles those tasks, based on his work. He used a Red Hat Enterprise Linux 7.2 AMI (ami-2051294a), so the commands may vary for other operating systems or versions. Note that on these instance types, volumes attached as /dev/sdf and /dev/sdg show up in the OS as /dev/xvdf and /dev/xvdg, which is why the device names below differ from those in the attach-volume commands.

 

# Create filesystems on the devices
mkfs.xfs /dev/xvdf
mkfs.xfs /dev/xvdg

# Create directories for the mount points
mkdir /data1
mkdir /data2

# Add the mount points to /etc/fstab
echo "/dev/xvdf /data1 xfs defaults,noatime 0 0" >> /etc/fstab
echo "/dev/xvdg /data2 xfs defaults,noatime 0 0" >> /etc/fstab

sleep 10

# Mount all the devices
mount -a
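
To confirm the result after the script runs, you can check that both filesystems are mounted where expected:

# Verify the new filesystems and mount points
df -h /data1 /data2
lsblk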