Member since
09-18-2015
3274
Posts
1159
Kudos Received
426
Solutions
11-26-2015
04:21 PM
1 Kudo
The Hadoop Ecosystem Table https://hadoopecosystemtable.github.io/
06-09-2016
04:41 AM
In addition, don't forget to restart Ambari Server after making the above-mentioned changes.
11-27-2015
05:03 PM
Also, if we are using MapReduce, we might need to revisit the mapper/reducer containers' heap sizes accordingly.
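As an illustration (the values below are hypothetical, not from the original post), the JVM heap set in mapreduce.*.java.opts is typically kept at roughly 80% of the corresponding container allocation, so that JVM overhead still fits inside the YARN container:

```
# Hypothetical sizing: -Xmx at ~80% of the container allocation
mapreduce.map.memory.mb=2048
mapreduce.map.java.opts=-Xmx1638m
mapreduce.reduce.memory.mb=4096
mapreduce.reduce.java.opts=-Xmx3276m
```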
11-26-2015
10:59 AM
1 Kudo
Use case

There are two groups, Analytics and DW, and we want to split the cluster resources between them.

- User neeraj belongs to the Analytics group; user dwuser belongs to the DW group.
- User neeraj is not allowed to use the default and DW queues. By default, all jobs submitted by neeraj must go to his assigned queue.
- User dwuser is not allowed to use the default and Analytics queues. By default, all jobs submitted by dwuser must go to his assigned queue.

Environment

HDP 2.3 (Hortonworks Data Platform) and Ambari 2.1. This tutorial is otherwise completely independent of the Hadoop distribution; YARN (i.e., Hadoop 2.x) is a must. I will be using the Capacity Scheduler view to configure the queues.

Capacity Scheduler configuration:

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.queue-mappings=u:neeraj:Analytics,u:dwuser:DW
yarn.scheduler.capacity.queue-mappings-override.enable=true
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.acl_submit_applications=yarn
yarn.scheduler.capacity.root.Analytics.acl_administer_queue=yarn
yarn.scheduler.capacity.root.Analytics.acl_submit_applications=neeraj
yarn.scheduler.capacity.root.Analytics.capacity=60
yarn.scheduler.capacity.root.Analytics.maximum-capacity=60
yarn.scheduler.capacity.root.Analytics.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.Analytics.ordering-policy=fifo
yarn.scheduler.capacity.root.Analytics.state=RUNNING
yarn.scheduler.capacity.root.Analytics.user-limit-factor=1
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.default.capacity=10
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.DW.acl_administer_queue=yarn
yarn.scheduler.capacity.root.DW.acl_submit_applications=dwuser
yarn.scheduler.capacity.root.DW.capacity=30
yarn.scheduler.capacity.root.DW.maximum-capacity=30
yarn.scheduler.capacity.root.DW.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.DW.ordering-policy=fifo
yarn.scheduler.capacity.root.DW.state=RUNNING
yarn.scheduler.capacity.root.DW.user-limit-factor=1
yarn.scheduler.capacity.root.maximum-capacity=100
yarn.scheduler.capacity.root.queues=Analytics,DW,default

Verification

[root@nsfed01 ~]# su - neeraj
[neeraj@nsfed01 ~]$ mapred queue -showacls
15/08/18 14:45:03 INFO impl.TimelineClientImpl: Timeline service address: http://nsfed03.cloud.hortonworks.com:8188/ws/v1/timeline/
15/08/18 14:45:03 INFO client.RMProxy: Connecting to ResourceManager at nsfed03.cloud.hortonworks.com/172.24.64.22:8050
Queue acls for user :  neeraj
Queue  Operations
=====================
root
Analytics  SUBMIT_APPLICATIONS
DW
default
[neeraj@nsfed01 ~]$

[root@nsfed01 ~]# su - neeraj
[neeraj@nsfed01 ~]$ yarn jar /usr/hdp/2.3.0.0-2557/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 20 1000000009
Number of Maps = 20
Samples per Map = 1000000009

[root@nsfed03 yarn]# su - dwuser
[dwuser@nsfed03 ~]$ yarn jar /usr/hdp/2.3.0.0-2557/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 20 1000000009
Number of Maps = 20

(screenshot: Capacity Scheduler view)
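To make the queue-mapping rules above concrete, here is a small Python sketch (my own illustration, not part of the original post) of how a u:&lt;user&gt;:&lt;queue&gt; mapping string is resolved: rules are checked in order, the first match wins, and with yarn.scheduler.capacity.queue-mappings-override.enable=true the mapped queue takes precedence over whatever queue the job requested.

```python
def resolve_queue(mappings, user, requested="default", override=True):
    """Resolve the target queue for a user from a CapacityScheduler-style
    mapping string such as "u:neeraj:Analytics,u:dwuser:DW".

    First matching rule wins. If override is False, an explicitly
    requested queue (anything other than "default") is honored as-is.
    Simplified sketch: only u:<user>:<queue> rules are handled here.
    """
    if not override and requested != "default":
        return requested
    for rule in mappings.split(","):
        kind, name, queue = rule.split(":")
        if kind == "u" and (name == user or name == "%user"):
            # %user maps each user to a queue named after that user
            return user if queue == "%user" else queue
    return requested

mappings = "u:neeraj:Analytics,u:dwuser:DW"
print(resolve_queue(mappings, "neeraj"))   # Analytics
print(resolve_queue(mappings, "dwuser"))   # DW
print(resolve_queue(mappings, "someone"))  # default
```

Note that the real placement logic also supports g:&lt;group&gt;:&lt;queue&gt; rules; this sketch only covers the user rules used in this post.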
11-26-2015
10:55 AM
3 Kudos
Install the expect package, then use the following script to answer the ambari-server sync-ldap prompts non-interactively:

yum install -y expect

#!/usr/bin/expect
# Run an LDAP sync of existing users and answer the admin credential prompts
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
01-10-2017
09:41 PM
@Shihab That worked for me. Thanks so much. I also had to delete /system/diskbalancer.id to run it successfully, but for some reason I have to do this for every rebalance I run.
11-12-2015
12:11 AM
Got it syncing to the hub! So if I understand this correctly: if I now want to sync these notebooks to another Zeppelin, I just put the same "hub_api_token" in that Zeppelin and it will sync to that instance? Or is that a feature that's not developed yet?
11-11-2015
02:23 PM
@Neeraj I needed to add the credential to hive-site for WASB to work inside Hive. Did it work for you with only hdfs-site?
05-20-2016
02:13 PM
The latest (trunk) code for the Grafana data source can be found here: https://github.com/apache/ambari/tree/trunk/ambari-metrics/ambari-metrics-grafana