Member since: 09-28-2015
Posts: 4
Kudos Received: 4
Solutions: 0
09-16-2023 08:51 AM
Hi. You need to set these three properties in hive-site.xml to get this working with Hive:

<property>
  <name>google.cloud.auth.service.account.json.keyfile</name>
  <value>/home/hadoop/keyfile.json</value>
</property>
<property>
  <name>fs.gs.reported.permissions</name>
  <value>777</value>
</property>
<property>
  <name>fs.gs.path.encoding</name>
  <value>uri-path</value>
</property>

The same properties can be added to core-site.xml in Hadoop to have it working with HDFS. On Beeline, just execute this and it should work:

INSERT OVERWRITE DIRECTORY 'gs://bucket/table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM table;

Please upvote if you found this helpful!
08-07-2018 11:12 PM
1 Kudo
Starting with version 2.6.5 on the 2.x line and 3.0.0 on the 3.x line, HDP has added support for running workloads against Google Cloud Storage (GCS). This article talks about how to get started using GCS for workloads.

The FileSystem Abstraction
Apache Hadoop includes a ‘FileSystem’ abstraction, which is used to access data on HDFS. The same abstraction can be used to talk not just to HDFS, but to any other storage layer that provides an implementation of the FileSystem abstract class.
Existing applications don’t need to change to make use of GCS. Instead, some configuration and path changes are all that is required for existing jobs to run against Cloud Storage.
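To illustrate, here is a minimal PySpark sketch (the paths are placeholders, not from this article): the same read call works against HDFS and GCS, because both go through the FileSystem abstraction.

from pyspark.sql import SparkSession

# Minimal sketch; paths are placeholders. Only the filesystem prefix in the
# path changes between the two reads - the API call is identical.
spark = SparkSession.builder.appName("fs-abstraction-demo").getOrCreate()

hdfs_lines = spark.read.text("hdfs:///tmp/sample.txt")     # data on HDFS
gcs_lines = spark.read.text("gs://demobucket/sample.txt")  # same call, data on GCS
print(hdfs_lines.count(), gcs_lines.count())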
Google Storage Connector
The Google Storage Connector provides an implementation of the FileSystem abstraction, and is available in the HDP versions mentioned above, to facilitate access to GCS.
“gs” is the filesystem prefix used for Google Cloud Storage, e.g. gs://tmp.

Setting up Credentials to Access GCS
There are several ways in which credentials can be configured to access cloud storage, each with its own implications.
Configuring access to Google Cloud Storage (GCS) has details on the required permissions for the service accounts, as well as steps for configuring credentials.

Option 1: Accessing GCS via VMs deployed on Google Compute Engine
When running on GCE VMs, it is possible to associate a service account with the launched VMs. This can be done via the Google Cloud Console when launching VMs, or it can be configured in Cloudbreak while deploying a cluster.
This is the simplest mechanism to get credentials set up.
Keep in mind: this mechanism allows any process running on the VM to access Google Cloud resources that the service account has access to (e.g. users with ssh access to the nodes would be able to access data based on the service account permissions).

Option 2: Configuring credentials via configuration files
This is a more complicated approach, which involves propagating keys for the service account to all nodes in the cluster, and then making changes to relevant configuration files to point to these keys.
Details are available here.
This approach can be used on VMs deployed on GCE, or on other nodes, such as on-prem deployments. It also provides more fine-grained access control. As an example, it’s possible to configure Hive to use specific credentials while not configuring any default credentials for the rest of the cluster.
This approach can also be used by end users to provide their own credentials when running a job, instead of relying on cluster default credentials.
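For example, from PySpark a job-level credential override could look like the sketch below. The keyfile path and bucket name are placeholders; the keyfile property is the one used in the configuration steps above, passed through Spark's spark.hadoop.* prefix.

from pyspark.sql import SparkSession

# Hedged sketch: supply a per-job service-account keyfile through the
# spark.hadoop.* prefix instead of relying on cluster-wide defaults.
# The keyfile path and bucket name below are placeholders.
spark = (
    SparkSession.builder.appName("gcs-job-credentials")
    .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
            "/home/hadoop/keyfile.json")
    .getOrCreate()
)

spark.read.text("gs://demobucket/sample.txt").show(5)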
Validating setup
Once credentials have been set up, access can be verified by running the following:
hadoop fs -ls gs://<bucket-name>
where '<bucket-name>' is a GCS bucket which can be accessed using the configured service account.
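The same check can also be done programmatically. A minimal PySpark sketch (the bucket name is a placeholder; it reaches the Hadoop FileSystem API through Spark's JVM gateway) might look like:

from pyspark.sql import SparkSession

# Hedged sketch: list a GCS bucket through the Hadoop FileSystem API, the
# programmatic equivalent of `hadoop fs -ls gs://<bucket-name>`.
# The bucket name is a placeholder.
spark = SparkSession.builder.appName("gcs-ls-check").getOrCreate()
conf = spark._jsc.hadoopConfiguration()
path = spark._jvm.org.apache.hadoop.fs.Path("gs://demobucket/")
fs = path.getFileSystem(conf)
for status in fs.listStatus(path):
    print(status.getPath().toString())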
Additional Configuration Changes
Some additional configuration changes may be required depending on the version of Cloudbreak and Ambari being used. These are listed here.
These changes need to be made to core-site.xml:
fs.gs.working.dir=/
fs.gs.path.encoding=uri-path
fs.gs.reported.permissions=777
Make sure to restart affected services, as well as Hive and Spark, after configuring credentials and making these changes.

Running Hive Jobs against GCS
Option 1: Configure the Hive Warehouse Directory so that all new tables are created on GCS
Set the following configuration property in hive-site.xml
hive.metastore.warehouse.dir=gs://<bucket_name>/<path_to_warehouse_dir>
(e.g. gs://demobucket/apps/hive/warehouse)
After this, any new tables created will have their data stored on GCS.
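As a quick sanity check, a sketch assuming a Hive-enabled Spark session and a placeholder table name: a table created without a LOCATION clause should now resolve to the GCS warehouse path.

from pyspark.sql import SparkSession

# Hedged sketch: with hive.metastore.warehouse.dir pointing at gs://..., a
# managed table created without a LOCATION clause lands under that bucket.
# The table name is a placeholder.
spark = SparkSession.builder.appName("warehouse-check").enableHiveSupport().getOrCreate()
spark.sql("CREATE TABLE IF NOT EXISTS demo_table (id INT) STORED AS ORC")
spark.sql("DESCRIBE FORMATTED demo_table") \
    .filter("col_name = 'Location'") \
    .show(truncate=False)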
Option 2: Specify an explicit path when creating tables
In this case, the path for a table needs to be explicitly set during table creation.
CREATE TABLE employee (e_id int) STORED AS ORC
LOCATION 'gs://hivewarehouse/employee';
When running a cluster where Hive manages the data (doAs=false), it may be desirable to configure Cloud Storage credentials in a manner where only Hive has access to the data.
To do this, make the credentials config changes mentioned further up in hiveserver2-site and/or hiveserver2-interactive-site.

Running Spark Jobs against GCS
Similar to Hive, this is as simple as replacing the path used in the script with a path pointing at a Google Cloud Storage location. As an example:
# Word count over a file stored on GCS; the output is written back to GCS.
text_file = sc.textFile("gs://<bucket_name>/file")
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("gs://<bucket_name>/outfile")
Things to Watch Out For
As mentioned previously, configuring credentials at the instance level gives access to all users who have access to the VM (e.g. anyone who can ssh from the Google Cloud Console into a running node).
File ownership and permissions returned by listing commands (hadoop fs -ls) do not reflect the actual owner or permissions of the file. These are controlled via Google Cloud IAM instead.

Wrapping Up
Accessing data stored on Google Cloud Storage requires very few changes to existing applications, thanks to the FileSystem abstraction. Cloudbreak provides a simple mechanism to launch these clusters on GCE with credentials configured. Deploying clusters via Cloudbreak is covered here.
05-11-2018 10:06 AM
Hi. I had the same issue, and setting hive.llap.client.consistent.splits to false made it work. However, I also had to set hive.llap.execution.mode=none; to get it working. AFAIK I am now using hive2.10 without LLAP, but with doAs.