Member since: 01-07-2019
Posts: 217
Kudos Received: 135
Solutions: 18
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1914 | 12-09-2021 09:57 PM |
 | 1858 | 10-15-2018 06:19 PM |
 | 9216 | 10-10-2018 07:03 PM |
 | 3975 | 07-24-2018 06:14 PM |
 | 1472 | 07-06-2018 06:19 PM |
01-20-2017
08:01 PM
2 Kudos
We just updated Hortonworks Data Cloud for AWS to Technical Preview #1.12. The release is packed with goodies such as:
- Support for deploying compute nodes with spot pricing.
- Support for executing node recipes: custom scripts that can be run pre- or post-cluster deployment to customize the cluster and install additional software.
- Support for HDP 2.6 (Technical Preview), which can launch two new cluster configurations, for Spark 2.1 and Druid.

To create an HDP 2.6 cluster, launch the cloud controller and, when creating a cluster, choose HDP Version: HDP 2.6 (Technical Preview), then choose one of the available cluster types.

For more details, refer to the Release Notes: http://hortonworks.github.io/hdp-aws/releasenotes/. To get started with HDCloud for AWS, visit http://hortonworks.github.io/hdp-aws/. To get started with Spark 2.1, see Vinay's blog at http://hortonworks.com/blog/try-apache-spark-2-1-zeppelin-hortonworks-data-cloud/. Have fun!
01-10-2017
05:35 PM
2 Kudos
After creating a cluster on HDCloud for AWS, you may notice that certain ports are not opened by default, so you may need to open them manually by editing the inbound rules on the security group. In this tutorial, I will show you how to open the YARN Resource Manager UI (8088) and Hive UI (10502) ports by editing the inbound rules on the master node's security group. Let's get started!

1. On AWS, from the Services menu, select EC2 to navigate to the EC2 console.
2. In the left pane, in the INSTANCES section, click Instances. Note: If you can't see your instances, check the top right corner to make sure that you are in the correct region.
3. Identify the instance corresponding to your master node; its name should be <your-cluster-name>-1-master. Select that instance to see the Description tab, which includes a link to the security group configuration.
4. Click the security group URL to open the Security Group section.
5. Select the Inbound tab.
6. Check whether 8088 and 10502 appear in the Port Range column. If not, click Edit, then Add Rule, and add a new Custom TCP Rule for port 8088 with source "0.0.0.0/0". Do the same for port 10502, then save your changes by clicking Save.
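If you'd rather script these steps than click through the console, here is a minimal sketch using boto3 (my addition, not part of the original tutorial); the region and security group ID are placeholders you would replace with your master node's values from step 4:

```python
# Sketch: open ports 8088 and 10502 on the master node's security group.
# The region and GroupId below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # use your cluster's region
for port in (8088, 10502):
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder: your master node's security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # same source range as the manual steps
        }],
    )
```

As with the manual steps, "0.0.0.0/0" opens the ports to the whole internet; narrow the CIDR range for anything beyond a quick test.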
01-03-2017
06:00 PM
Hi @Vivek Sharma, what do you mean by "AWS public cloud"? You have the option to launch HDCloud in your own custom VPC, which can be configured according to your needs. See https://aws.amazon.com/vpc/. What else do you need?
01-03-2017
05:37 PM
1 Kudo
Hi @Vivek Sharma, when you are creating a cluster, the "Instance Role" parameter allows you to configure S3 access. By default, a new S3 role is created to grant you access to S3 data in your AWS account. See "Instance Role" in Step 7 at http://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/create/index.html. In addition, there are ways to authenticate with S3 using keys or tokens: http://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-security/index.html. @Ram Venkatesh
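To illustrate the keys/tokens route, here is a minimal sketch of my own (not taken from the linked docs) of supplying temporary credentials to S3A from PySpark at runtime; the property names come from Hadoop's S3A documentation, and all the values are placeholders:

```python
# Sketch: authenticate to S3A with temporary credentials (access key,
# secret key, and session token). All values below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-temp-creds").getOrCreate()
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.aws.credentials.provider",
          "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hconf.set("fs.s3a.access.key", "ACCESS-KEY")        # placeholder
hconf.set("fs.s3a.secret.key", "SECRET-KEY")        # placeholder
hconf.set("fs.s3a.session.token", "SESSION-TOKEN")  # placeholder
```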
12-22-2016
11:06 PM
@Shyam Shaw Is your cluster running on EC2 instances? What S3 access policy did you create?
12-19-2016
04:37 PM
@stevel Do you know if using S3 is supported in Ranger?
12-16-2016
07:14 PM
@Anandha L Ranganathan I don't know if this will help, but you could try setting the parameters in the XML config files rather than at runtime: http://hortonworks.github.io/hdp-aws/s3-security/index.html#configuring-authentication
12-13-2016
11:57 PM
s3n is deprecated in newer versions of Hadoop (see https://wiki.apache.org/hadoop/AmazonS3), so it's better to use s3a. To use s3a, specify s3a:// in front of the path when accessing files. The following properties need to be configured first:

<property>
  <name>fs.s3a.access.key</name>
  <value>ACCESS-KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRET-KEY</value>
</property>
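As a quick usage sketch (my addition; the bucket and path are placeholders), once the properties above are set, an s3a:// path can be used like any other filesystem path, e.g. from PySpark:

```python
# Sketch: read a file via the s3a scheme once the fs.s3a.* credentials
# are configured. The bucket and path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-read").getOrCreate()
df = spark.read.text("s3a://my-bucket/path/to/file.txt")
df.show(5)
```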
12-13-2016
11:52 PM
S3N is deprecated in newer versions of Hadoop, so it's better to use s3a. To use s3a, specify s3a:// in front of the path. The following properties need to be configured first:

<property>
  <name>fs.s3a.access.key</name>
  <value>ACCESS-KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRET-KEY</value>
</property>
12-13-2016
11:37 PM
Thanks for answering this, @stevel. I will add a note to the docs that the authentication configuration gives you access to all the buckets a single account can reach, and that you cannot work across multiple accounts. I will not add the dangerous workaround unless you recommend that I do...