Starting with version 2.6.5 on the 2.x line and 3.0.0 on the 3.x line, HDP has added support for running workloads against Google Cloud Storage (GCS). This article talks about how to get started using GCS for workloads.
Apache Hadoop includes a ‘FileSystem’ abstraction that is used to access data on HDFS. The same abstraction can be used to talk not just to HDFS, but to any other storage layer that provides an implementation of the FileSystem abstract class.
Existing applications don’t need to change to make use of GCS; some configuration and path changes are all that is required for existing jobs to run against Cloud Storage.
The Google Cloud Storage connector provides an implementation of the FileSystem abstraction for GCS, and is available in the HDP versions mentioned above.
“gs” is the filesystem prefix used for Google Cloud Storage. e.g. gs://tmp
There are several ways in which credentials can be configured to access Cloud Storage, each with its own implications. The documentation on Configuring access to Google Cloud Storage (GCS) has details on the required permissions for service accounts, as well as steps for configuring credentials.
When running on GCE VMs, it is possible to associate a service account with the launched VMs. This can be done via the Google Cloud Console when launching VMs, or it can be configured in Cloudbreak while deploying a cluster.
This is the simplest mechanism to get credentials set up. Keep in mind that credentials configured this way are available to every user and workload running on those VMs.
A second approach is to use service account keys. This is more involved: it requires propagating keys for the service account to all nodes in the cluster, and then making changes to the relevant configuration files to point to these keys. Detailed steps are covered in the documentation referenced above, and an illustrative example is shown below.
This approach can be used on VMs deployed on GCE, or on other nodes such as on-prem deployments. It also provides more fine-grained access control: for example, it is possible to configure Hive to use specific credentials while not configuring any default credentials for the rest of the cluster.
This approach can also be used by end users to provide their own credentials when running a job - instead of relying on cluster default credentials.
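As an illustration, the key-based approach typically amounts to a couple of properties in core-site.xml (or in a service-specific site file, as described later for Hive). The property names below are the JSON-keyfile settings used by the open-source GCS connector and the key path is a placeholder; the exact property names for the connector shipped with a given HDP version are in the documentation referenced above.
google.cloud.auth.service.account.enable=true
google.cloud.auth.service.account.json.keyfile=/path/to/service-account-key.json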
Once credentials have been set up, access can be verified by running the following.
hadoop fs -ls gs://<bucket-name>
where ‘<bucket-name>’ is a GCS bucket which can be accessed using the configured service account.
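Other standard FileSystem operations work against gs:// paths in the same way; for example (the bucket, directory, and file names below are placeholders):
hadoop fs -mkdir gs://<bucket-name>/tmp
hadoop fs -put words.txt gs://<bucket-name>/tmp/
hadoop fs -cat gs://<bucket-name>/tmp/words.txt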
Some additional configuration changes may be required depending on the version of Cloudbreak and Ambari being used. These are listed below.
These changes need to be made to core-site.xml
fs.gs.working.dir=/
fs.gs.path.encoding=uri-path
fs.gs.reported.permissions=777
Make sure to restart affected services, as well as Hive and Spark after configuring credentials and making these changes.
Set the following configuration property in hive-site.xml
hive.metastore.warehouse.dir=gs://<bucket_name>/<path_to_warehouse_dir> (e.g. gs://demobucket/apps/hive/warehouse)
After this, any new tables created will have their data stored on GCS.
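For example, a table created without an explicit LOCATION (the table definition below is only illustrative) will have its data written under the configured gs:// warehouse directory:
CREATE TABLE orders (o_id int) STORED AS ORC;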
Alternatively, if the default warehouse directory is not changed, the path for a table that should reside on GCS needs to be explicitly set during table creation.
CREATE TABLE employee (e_id int) STORED AS ORC LOCATION 'gs://hivewarehouse/employee';
When running a cluster where Hive manages the data (doAs=false), it may be desirable to configure Cloud Storage credentials in a manner where only Hive has access to the data.
To do this, make the credential configuration changes mentioned earlier in hiveserver2-site and/or hiveserver2-interactive-site, rather than in core-site.xml.
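For instance, the illustrative key-file properties shown earlier could be set in hiveserver2-site / hiveserver2-interactive-site instead of core-site.xml, with the key file (the path below is a placeholder) readable only by the hive service user:
google.cloud.auth.service.account.enable=true
google.cloud.auth.service.account.json.keyfile=/path/to/hive-gcs-key.json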
For Spark, similar to Hive, this is as simple as replacing the paths used in the script with paths pointing at Google Cloud Storage locations.
As an example:
text_file = sc.textFile("gs://<bucket_name>/file")
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("gs://<bucket_name>/outfile")
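The snippet above assumes an interactive pyspark session, where sc is predefined. A minimal way to run the same logic as a standalone job (the script and application names are illustrative) is to create the SparkContext explicitly and submit it with spark-submit, e.g. spark-submit wordcount_gcs.py:
from pyspark import SparkContext

# create the context explicitly, since sc is only predefined in the interactive shell
sc = SparkContext(appName="gcs-wordcount")
text_file = sc.textFile("gs://<bucket_name>/file")
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("gs://<bucket_name>/outfile")
sc.stop()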
Accessing data stored on Google Cloud Storage requires very few changes to existing applications, thanks to the FileSystem abstraction. Cloudbreak provides a simple mechanism to launch these clusters on GCE with credentials configured; deploying clusters via Cloudbreak is covered in the Cloudbreak documentation.