Member since: 02-18-2014
Posts: 94
Kudos Received: 23
Solutions: 23
My Accepted Solutions
Views | Posted
---|---
3393 | 08-29-2019 07:56 AM
4137 | 07-09-2019 08:22 AM
2114 | 07-01-2019 02:21 PM
3619 | 03-19-2019 07:42 AM
2786 | 09-04-2018 05:29 AM
08-26-2016
12:44 PM
Guru Medasani, a Solutions Architect here at Cloudera, has been able to use this technique to add EBS volumes to new instances, and he added commands to format the drives and mount them. Here's an extension to the script above that handles those tasks, based on his work. He used a Red Hat Enterprise Linux 7.2 AMI (ami-2051294a), so the commands may vary for other operating systems or versions.
# Create filesystems on the devices
mkfs.xfs /dev/xvdf
mkfs.xfs /dev/xvdg
# Create directories for the mount points
mkdir /data1
mkdir /data2
# Add the mount points to /etc/fstab
echo "/dev/xvdf /data1 xfs defaults,noatime 0 0" >> /etc/fstab
echo "/dev/xvdg /data2 xfs defaults,noatime 0 0" >> /etc/fstab
# Give the new filesystems a moment before mounting
sleep 10
# Mount all the devices
mount -a
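A quick way to confirm the results, if you want it (a sketch; the device and mount-point names match the script above):
# Verify the filesystems and mount points
lsblk -f
df -h /data1 /data2
grep xvd /etc/fstab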
08-26-2016
06:37 AM
2 Kudos
Hi ProKirk, Looks like our logging could be better here. The exception is being thrown because Director asked Cloudera Manager to deploy updated client configurations after Kerberos was configured, but that deployment failed. I recommend taking a look at the Cloudera Manager instance that Director spun up to see what went wrong. Director doesn't have much visibility into the details of failed Cloudera Manager commands on its side. If you log in to Cloudera Manager, you can see its recent command history by selecting the scroll icon on the upper right, and then pressing the "All Recent Commands" button. You can also look in /var/log/cloudera-scm-server, or select Diagnostics > Logs or Diagnostics > Server Log from the Cloudera Manager navigation bar, to look at the server log directly. Let us know what you find! Hopefully there will be good information on what went wrong.
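If shell access to that instance is easier, here's a minimal sketch of checking the server log directly (the log file name shown is the usual default, and the grep pattern is just illustrative):
# Look for errors around the failed client configuration deployment
sudo tail -n 500 /var/log/cloudera-scm-server/cloudera-scm-server.log | grep -i -B2 -A5 "deploy"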
07-08-2016
09:30 AM
1 Kudo
Hi Cerno, With the SSH tunnel established to port 7189 of your Director instance, try instead connecting to http://localhost:7189/. That connects you to the local end of the tunnel, which on the other end will talk to the Director process. If that does not work, use netstat on the Director instance to verify that the Director process is listening on port 7189. If it is, then do a curl or wget on http://localhost:7189 on the Director instance itself to make sure that Director is responding to any requests. Hopefully the above steps will either get you connected or narrow down where the problem lies. Bill P.S. Just to be clear, when you use the SSH tunnel for port 7189, the security group(s) for the Director instance do not need to allow any outside access to port 7189. They only need to allow access to port 22, so that the tunnel can reach through over SSH. Since you're able to SSH in, that seems to be working.
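For reference, a minimal sketch of the tunnel and the checks described above (the user, key file, and hostname are placeholders for your environment):
# From your workstation: forward local port 7189 to the Director instance
ssh -i your-key.pem -L 7189:localhost:7189 ec2-user@<director-instance-public-dns>
# Then, still on your workstation, connect through the local end of the tunnel
curl http://localhost:7189/
# On the Director instance itself: confirm the process is listening and responding
sudo netstat -tlnp | grep 7189
curl http://localhost:7189/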
06-03-2016
02:10 PM
Hi visokoo, I recommend taking a look at the blog post by Ben Spivey that was just published today. It covers deploying a secure cluster on AWS using Director, including the use of Sentry. http://blog.cloudera.com/blog/2016/06/how-to-deploy-a-secure-enterprise-data-hub-on-aws-part-2/ As for the error you're getting when following the Director documentation, could you attach your entire configuration file, sanitized to remove any sensitive information?
06-01-2016
07:07 AM
Hi ujj, Director doesn't have direct support for mounting extra EBS drives, but you could try using a bootstrap script that does the work using the AWS CLI tool. A bootstrap script runs under sudo on each instance whose template includes the script. Here's an example script that may work. I tried the commands out on a stock RHEL 7.1 instance using bash. It installs pip (which happens to not be included in the AMI I used) and then the AWS CLI, and then uses the CLI to create two 10 GB EBS volumes and attach them to the current instance.
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli
export AWS_ACCESS_KEY_ID=XXXXX
export AWS_SECRET_ACCESS_KEY=YYYYY
export INSTANCE_ID=$(curl http://instance-data/latest/meta-data/instance-id)
export AVAILABILITY_ZONE=$(curl http://instance-data/latest/meta-data/placement/availability-zone)
export VOL1_ID=$(aws ec2 create-volume --size 10 --region us-east-1 --availability-zone ${AVAILABILITY_ZONE} --volume-type gp2 --query "VolumeId" | tr -d '"')
export VOL2_ID=$(aws ec2 create-volume --size 10 --region us-east-1 --availability-zone ${AVAILABILITY_ZONE} --volume-type gp2 --query "VolumeId" | tr -d '"')
sleep 30
aws ec2 attach-volume --region us-east-1 --volume-id ${VOL1_ID} --instance-id ${INSTANCE_ID} --device /dev/sdf
aws ec2 attach-volume --region us-east-1 --volume-id ${VOL2_ID} --instance-id ${INSTANCE_ID} --device /dev/sdg
You would need to adapt this script to use your AWS credentials and specify your correct region. If the instances are in a private subnet and cannot download and install pip and the AWS CLI, you could use a custom AMI that has the tools pre-installed, or use some other mechanism to get the CLI. Also, if your instances are spun up with an EC2 instance profile whose role has permissions to work with EBS, you should be able to omit setting the AWS credentials in the script and rely on the profile. The new EBS volumes are independent of the lifetime of the instances, so you will need to delete them when they are no longer needed.
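As a minimal sketch of that cleanup step (the volume ID is a placeholder, and the volume must be detached or its instance terminated first):
# Delete an EBS volume created by the bootstrap script once it is no longer needed
aws ec2 delete-volume --region us-east-1 --volume-id vol-0123456789abcdef0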
05-31-2016
03:38 PM
Hi RanCohen, Since CentOS 7 has no predefined alias in Director, you need to enter the URL for the image you want to use. I found that this one is accepted:
https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20160511
In the dialog for defining an instance template, try entering that URL in the editable input field for the image alias or URL, rather than picking from the associated dropdown menu. It can be typed or pasted in. Bill
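If that particular image version is ever removed, one way to find a current image URL is to list the images in the public centos-cloud project with the gcloud CLI (a sketch, assuming the Google Cloud SDK is installed):
# List CentOS 7 images in the centos-cloud project; use a name to build a URL like the one above
gcloud compute images list --project centos-cloud | grep centos-7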
07-13-2015
02:17 PM
1 Kudo
The log you sent along seems to be for a different, failed run of the Director client where it couldn't find the configuration file you specified ("./demok16.aws.simple.conf"). There should have been a log generated for the successful run that bootstrapped the cluster you were talking about. It's possible that it was overwritten by a later run, or maybe you installed the client locally via tarball (if so, it's in a logs directory under where you decompressed it). The instance type m1.xlarge does come with 4 instance store (ephemeral) volumes of 420 GB each, so what you're seeing is normal for that instance type. Director preserves those volumes and doesn't currently provide a way to resize them. (Also, Director should be naming the devices /dev/sdb through /dev/sde, unless you've configured custom configuration values for lp.ec2.ephemeral.deviceNamePrefix and lp.ec2.ephemeral.rangeStart - the log would show some more information about that.) If you don't need as much ephemeral storage, then you can try a different instance type - m1.xlarge is an older type and not recommended by AWS anyway. m3.xlarge has the same vCPU count and memory as m1.xlarge (4 vCPUs, 15 GB) but only two 40 GB SSD instance stores. It's also cheaper than m1.xlarge, according to AWS docs. The newer m4.xlarge has 4 vCPUs, 16 GB of memory, and does away with instance stores completely.
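If you want to double-check what ephemeral storage an instance actually has, a quick look from a shell on the instance is roughly this (a sketch; the metadata endpoint is the standard EC2 one, and the output varies by instance type):
# Block-device mappings the EC2 metadata service reports for this instance
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
# Block devices and sizes as the OS actually sees them
lsblk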
07-13-2015
07:22 AM
Hi Kirk, To help figure out your situation, could you please send along the application.log file that Director wrote to while creating your cluster? It should be under /var/log/cloudera-director-client. It would also help if you could send your client configuration file containing your cluster details - just be sure to remove any passwords or AWS keys before doing so. I'm particularly interested in knowing what instance type and AMI you used for your nodes. Thanks, Bill
07-02-2015
12:45 PM
Great! Glad things are working for you now.
06-25-2015
06:39 AM
Hi Ryan, I'm just following up to see if editing the init script eliminated the runlevel warnings. The error code of 3 is likely just a confirmation from the script that the server is not running. If that keeps happening, we'll want to take a look at the /var/log/cloudera-director-server/application.log file to see if there are any error messages indicating a problem. Bill
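In case it helps, a minimal sketch of those two checks (this assumes the service is named cloudera-director-server and uses the log path above):
# Check whether the server is running; an LSB init script returns 3 when it is stopped
sudo service cloudera-director-server status; echo "exit status: $?"
# Scan the server log for recent errors
sudo grep -i error /var/log/cloudera-director-server/application.log | tail -n 20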