Member since: 02-08-2016
Posts: 793
Kudos Received: 669
Solutions: 85
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3067 | 06-30-2017 05:30 PM
 | 3988 | 06-30-2017 02:57 PM
 | 3312 | 05-30-2017 07:00 AM
 | 3885 | 01-20-2017 10:18 AM
 | 8404 | 01-11-2017 02:11 PM
05-31-2016
01:18 PM
On the client (agent) side, you can edit "/etc/ambari-agent/conf/ambari-agent.ini" and modify the following parameters - url_port=8440 and secured_url_port=8441
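A minimal sketch of that edit, using a throwaway copy of the file so it runs anywhere (on a real host the path is /etc/ambari-agent/conf/ambari-agent.ini; the section layout is abbreviated, and 9440/9441 are example replacement ports, not a recommendation):

```shell
# Throwaway copy standing in for /etc/ambari-agent/conf/ambari-agent.ini;
# abbreviated layout, example ports only.
cat > /tmp/ambari-agent.ini <<'EOF'
[server]
url_port=8440
secured_url_port=8441
EOF

# Point the agent at the new ports (GNU sed in-place edit)
sed -i 's/^url_port=.*/url_port=9440/' /tmp/ambari-agent.ini
sed -i 's/^secured_url_port=.*/secured_url_port=9441/' /tmp/ambari-agent.ini

grep '_port=' /tmp/ambari-agent.ini
```

On a real host you would restart the agent afterwards (ambari-agent restart) for the change to take effect.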
05-31-2016
01:17 PM
@Milan Sladky
Per the documentation, we can add the property "client.api.port=<port_number>" and modify the port number to some other value - https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_ambari_reference_guide/content/_optional_changing_the_default_ambari_server_port.html But this doesn't work. When I tried adding this property and restarting Ambari, it usually listens on both ports - i.e. the new port and 8441. Checking the Ambari code, I see the values are hard-coded in Configuration.java:

public static final String SRVR_TWO_WAY_SSL_PORT_DEFAULT = "8441";
public static final String SRVR_ONE_WAY_SSL_PORT_DEFAULT = "8440";

I think this is a bug.
05-31-2016
11:07 AM
@Alexander Feldman Were you able to get this activity completed? Please update. If not, please refer to the links below - https://community.hortonworks.com/questions/11369/is-there-a-way-to-export-ranger-policies-from-clus.html https://community.hortonworks.com/questions/10826/rest-api-url-to-configure-ranger-objects.html
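The threads above export policies over Ranger's public REST API. A hedged sketch of what that call looks like - the host, port, credentials, and API path are placeholders/assumptions for an HDP-2.x-era Ranger, and the command is composed and echoed rather than executed so the sketch runs without a live Ranger:

```shell
# Placeholder endpoint and credentials - replace with your own
RANGER_URL="http://ranger-host:6080"
AUTH="admin:admin"

# Compose the policy-export call; pipe to a file to keep a backup copy
EXPORT_CMD="curl -u $AUTH $RANGER_URL/service/public/api/policy"
echo "$EXPORT_CMD"
```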
05-31-2016
10:08 AM
@vishal patil In addition to what @Jitendra Yadav mentioned, please check this as well - https://community.hortonworks.com/articles/18088/ambari-shows-hdp-services-to-be-down-whereas-they.html
05-31-2016
05:44 AM
1 Kudo
@Tim Dunphy
1. Make sure the following two commands return the same output:
$ hostname
$ hostname -f
2. Make sure iptables and SELinux are disabled.
3. SSH to localhost and make sure it works without a password.
4. Once passwordless SSH is working, you can also try installing the agent manually - http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_ambari_reference_guide/content/ch_amb_ref_installing_ambari_agents_manually.html
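The checks above can be scripted; a hedged sketch that is safe to run on a typical Linux host (command availability varies by distro):

```shell
# 1. Short hostname vs FQDN should agree
short_name=$(hostname)
fqdn=$(hostname -f 2>/dev/null || hostname)
echo "hostname:    $short_name"
echo "hostname -f: $fqdn"

# 2. SELinux state (getenforce may be absent on some distros)
command -v getenforce >/dev/null 2>&1 && getenforce || echo "getenforce not available"

# 3. Passwordless SSH to localhost (BatchMode fails instead of prompting)
ssh -o BatchMode=yes -o ConnectTimeout=5 localhost true \
  && echo "passwordless ssh OK" || echo "passwordless ssh NOT working"
```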
05-30-2016
06:50 PM
@Tajinderpal Singh You can also use the script recommended by HWX - https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/determine-hdp-memory-config.html Run it to get memory-configuration recommendations based on your cluster resources.
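A hedged sketch of invoking that script. The script name and flags are as described in the linked doc (-c cores, -m memory in GB, -d disk count, -k True/False for HBase); the values here are examples for a hypothetical 16-core / 64 GB / 4-disk node, and the command is composed and echoed rather than executed since the script must be downloaded first:

```shell
# Example invocation only - tune the numbers to your actual node hardware
CMD="python hdp-configuration-utils.py -c 16 -m 64 -d 4 -k True"
echo "$CMD"
```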
05-30-2016
03:05 PM
1 Kudo
Can someone explain the mechanism behind the property "yarn.scheduler.capacity.node-locality-delay"? From the Apache site I see:

yarn.scheduler.capacity.node-locality-delay - Number of missed scheduling opportunities after which the CapacityScheduler attempts to schedule rack-local containers. Typically, this should be set to the number of nodes in the cluster. By default it is set to approximately the number of nodes in one rack, which is 40. A positive integer value is expected.
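For reference, this is how the property would appear in capacity-scheduler.xml; the value 40 is the default quoted above, and per the same doc you would typically raise it to your cluster's node count:

```xml
<property>
  <name>yarn.scheduler.capacity.node-locality-delay</name>
  <!-- Default ~= nodes per rack (40); typically set to the number of nodes in the cluster -->
  <value>40</value>
</property>
```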
Labels:
- Apache YARN
05-30-2016
05:59 AM
1 Kudo
@suresh krish Please find my replies inline -

1) What should the permissions be for my data directory or file system, e.g. drwxrwx--- appowner appgroup /data/ - is this the correct way, or should only appowner have full permission while group and others are restricted?
--> If you are planning to use Ranger to centralize governance for all access, then you should set permissions recursively to 700 from the CLI.

2) What is the recommended split between what Ranger should maintain and what ACLs should maintain?
--> There is no official recommendation as such, but from my experience you should try to govern everything from a single console, i.e. Ranger; otherwise it will be difficult to manage as your environment grows bigger and bigger. You do have the option to keep application data [user app dir data, Hive external tables, etc.] managed by Ranger and Hadoop HDFS data [Hive warehouse dir] managed by ACLs.

3) What if a directory is restricted for appowner via ACLs but granted in Ranger?
--> It is fundamental to HDP that if you have Ranger set up in your environment, Ranger policies take precedence and ACLs are evaluated afterwards. For example: userA has a directory with a Ranger policy that denies write access, but from the CLI the directory has permission 777. Whenever the user tries to create a subdirectory inside it, HDFS first checks the Ranger policy, and only if Ranger does not authorize the request does it fall back to HDFS POSIX permissions. Check this link for more details - http://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/

4) If I restrict a particular directory for a user, will that user be able to access it through Hive or HBase?
--> That depends entirely on how you manage the policy and from where [i.e. Ranger / ACLs].

5) What are the recommended permissions for directories handled by Ranger?
--> Already answered in point 1.
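A hedged sketch of point 1, locking down the POSIX bits so that Ranger policies govern access. /data is the example path from the question; the commands are composed and echoed rather than executed, since they need a live HDFS:

```shell
# Restrict the data dir recursively to the owner only
LOCKDOWN_CMD="hdfs dfs -chmod -R 700 /data"
# Verify the resulting permissions on the directory itself
CHECK_CMD="hdfs dfs -ls -d /data"
echo "$LOCKDOWN_CMD"
echo "$CHECK_CMD"
```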
05-30-2016
05:15 AM
@Mon key The best way is to first find the corrupted blocks using the command below -
hdfs fsck /path/to/corrupt/file -locations -blocks -files
And then remove the affected files manually using "hdfs dfs -rm -r </path>" so that you control exactly what is deleted and avoid unintended data loss. Note that fsck itself does not remove the good copies of the data blocks.
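The workflow above as a hedged sketch. The commands are composed and echoed rather than executed because they need a live HDFS; /path/to/corrupt/file is the placeholder from the answer:

```shell
# 1. Inspect one suspect file: block locations and which replicas are bad
FSCK_CMD="hdfs fsck /path/to/corrupt/file -locations -blocks -files"
# 2. List every corrupt file cluster-wide
LIST_CMD="hdfs fsck / -list-corruptfileblocks"
# 3. Remove a corrupt file manually, only after confirming it is expendable
RM_CMD="hdfs dfs -rm -r /path/to/corrupt/file"
echo "$FSCK_CMD"
echo "$LIST_CMD"
echo "$RM_CMD"
```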