Member since 08-17-2017 · 36 Posts · 1 Kudos Received · 0 Solutions
09-16-2023 08:51 AM
Hi,

To get this working with Hive, you need to set these three properties in hive-site.xml:

<property>
  <name>google.cloud.auth.service.account.json.keyfile</name>
  <value>/home/hadoop/keyfile.json</value>
</property>
<property>
  <name>fs.gs.reported.permissions</name>
  <value>777</value>
</property>
<property>
  <name>fs.gs.path.encoding</name>
  <value>/home/hadoop/</value>
</property>

The same properties can be added to core-site.xml on the Hadoop side to make this work with HDFS. Then, on Beeline, just execute this and it should work:

INSERT OVERWRITE DIRECTORY 'gs://bucket/table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM table;

Please upvote if you found this helpful!
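For the core-site.xml side mentioned above, a minimal sketch might look like the fragment below. The keyfile path and values are the same as in the post; the two `fs.gs.impl` entries are the standard GCS connector filesystem classes, which may already be set in your distribution:

```xml
<!-- core-site.xml: GCS connector settings (sketch; adjust paths to your environment) -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>
<property>
  <name>google.cloud.auth.service.account.json.keyfile</name>
  <value>/home/hadoop/keyfile.json</value>
</property>
<property>
  <name>fs.gs.reported.permissions</name>
  <value>777</value>
</property>
```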
11-09-2022 12:06 AM
The default page size seems to be 200 on most of the APIs. Use the query parameters pageSize and startIndex to page through the results.
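As a sketch, the paging loop looks like this in Python. The function and field names are illustrative, not from any specific API; `fetch_page` stands in for whatever HTTP call builds a request like `GET /api/resource?pageSize=200&startIndex=400`:

```python
def fetch_all(fetch_page, page_size=200):
    """Collect every item from a paged API.

    fetch_page(start_index, page_size) must return the list of items
    at that offset; a short (or empty) page signals the last page.
    """
    items = []
    start = 0
    while True:
        page = fetch_page(start, page_size)
        items.extend(page)
        if len(page) < page_size:   # last page reached
            break
        start += page_size          # advance startIndex by pageSize
    return items

# Fake in-memory "API" standing in for a real HTTP call:
data = list(range(450))

def fake_fetch(start_index, page_size):
    return data[start_index:start_index + page_size]

assert fetch_all(fake_fetch) == data
```

The same loop works for any page size; only the `startIndex` arithmetic matters.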
05-31-2018 03:31 AM
I understand that you have multiple clusters, each with Apache Ranger installed, and you want a single database engine for all of them. You can have a single database engine outside of your HDP nodes, but you need to create a different database name for each cluster. I also recommend having a different database user account to administer each database. Each Ambari and Ranger node will need access to the MySQL host on port 3306. By the way, MariaDB is also supported. The following link may help you; see point number 5: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/ranger_admin_settings.html
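As a sketch, the per-cluster setup on the shared MySQL/MariaDB host could look like this (database names, user names, and passwords are made up for illustration; substitute your own naming convention):

```sql
-- One database and one admin account per cluster, all on the same engine.
CREATE DATABASE ranger_cluster1;
CREATE DATABASE ranger_cluster2;

CREATE USER 'rangeradmin1'@'%' IDENTIFIED BY 'StrongPassword1';
CREATE USER 'rangeradmin2'@'%' IDENTIFIED BY 'StrongPassword2';

GRANT ALL PRIVILEGES ON ranger_cluster1.* TO 'rangeradmin1'@'%';
GRANT ALL PRIVILEGES ON ranger_cluster2.* TO 'rangeradmin2'@'%';
FLUSH PRIVILEGES;
```

Each cluster's Ranger Admin then points at its own database name and user, all reachable on the same host over port 3306.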