Member since 07-19-2017 · 8 Posts · 1 Kudo Received · 0 Solutions
09-07-2017 08:59 PM
@Fahad Sarwar The Hive metastore client reads the configuration property hive.metastore.uris to get the list of metastore servers it can communicate with. The hive.metastore.uris value should be a comma-separated list of metastore URIs, e.g.:

    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://$Metastore_Server1_FQDN,thrift://$Metastore_Server2_FQDN</value>
    </property>

For secure clusters, add the following configuration property to the hive-site.xml file on each metastore server:

    <property>
      <name>hive.cluster.delegation.token.store.class</name>
      <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value>
    </property>

Failover scenario: a Hive metastore client always uses the first URI to connect to the metastore server. If that server becomes unreachable, the client randomly picks a URI from the list and attempts to connect to it.
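One quick way to verify failover (a sketch only; the hostnames and the default metastore port 9083 below are placeholder assumptions, not values from this thread) is to point a Hive client at both URIs and run a metadata-only query while the first metastore is stopped; the query should still succeed through the second one:

    # Hypothetical smoke test; replace hosts and port with your own values
    hive --hiveconf hive.metastore.uris=thrift://ms1.example.com:9083,thrift://ms2.example.com:9083 -e "SHOW DATABASES;"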
08-16-2017 03:39 PM
@Fahad Sarwar Which tool are you using for authorization? If it is Ranger, grant the appropriate permissions to the group group_med_jazz_air.
08-15-2017 11:45 AM
@Fahad Sarwar @Laeeq Ahmad Can you change the property in the server.properties file from listeners=PLAINTEXT://hostname:{port} to listeners=PLAINTEXT://0.0.0.0:{port}, then restart the Kafka process?
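For illustration (the broker FQDN and port below are assumptions, not values from this thread), the edit in server.properties looks like:

    # before: the broker binds only to the address its hostname resolves to
    listeners=PLAINTEXT://kafka-broker1.example.com:6667
    # after: the broker binds to all network interfaces
    listeners=PLAINTEXT://0.0.0.0:6667

Binding to 0.0.0.0 makes the broker listen on every interface, which helps when clients reach the broker through an address other than its configured hostname.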
08-14-2017 07:40 AM
@Fahad Sarwar Have you tried this post? https://community.hortonworks.com/questions/120861/ambari-agent-ssl-certificate-verify-failed-certifi.html
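For context: the error in that thread is the agent-side CERTIFICATE_VERIFY_FAILED raised when Python 2.7.9+ verifies HTTPS certificates by default. A workaround often cited in the community (stated here as an assumption about the linked post, so confirm it there before applying) is to relax verification in /etc/python/cert-verification.cfg on the agent host:

    # /etc/python/cert-verification.cfg -- community workaround, verify against the linked post
    [https]
    verify=disable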
08-15-2017 10:04 AM
Thanks for responding, Geoffery. The solution above provided by Mark resolved the issue.
07-31-2017 06:33 PM · 1 Kudo
@Fahad Sarwar, Capacity Scheduler does not have placement rules where you can configure that if userX runs a job, the job is placed in queueX. Capacity Scheduler ACLs only check whether an application is allowed to run in a specific queue. To run a job in a specific queue, you need to set the queue in the job configuration when submitting it; if no queue is set, the application is launched in the "default" queue.

For MapReduce jobs, set -Dmapreduce.job.queuename=<queue-name> (or the deprecated -Dmapred.job.queue.name=<queue-name>):

    yarn jar /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples-x.x.x-alpha-gphd-x.x.x.x.jar wordcount -D mapreduce.job.queuename=<queue-name> /tmp/test_input /user/fail_user/test_output

For Spark jobs, set --queue <queue-name>:

    spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --queue <queue-name> /usr/hdp/2.x.x.x-xxxx/spark/lib/spark-examples-x.x.x.x.x.x.x-xxxx-hadoopx.x.x.x.x.x.x-xxxx.jar 10
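To check what the Capacity Scheduler ACLs allow before submitting (a hedged aside using a stock Hadoop CLI command, not something from the original answer), you can list the queue operations permitted to the current user:

    # Prints each queue with the current user's allowed operations
    # (SUBMIT_APPLICATIONS, ADMINISTER_QUEUE)
    mapred queue -showacls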