Member since: 03-21-2016
Posts: 233
Kudos Received: 62
Solutions: 33
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 923 | 12-04-2020 07:46 AM
 | 1198 | 11-01-2019 12:19 PM
 | 1630 | 11-01-2019 09:07 AM
 | 2552 | 10-30-2019 06:10 AM
 | 1279 | 10-28-2019 10:03 AM
10-24-2019
08:06 AM
1 Kudo
Make sure that /tmp is not mounted with the noexec option on the host where the Spark2 History Server is failing:
# mount -v
# df -h /tmp
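The check above can be scripted; a minimal sketch, assuming a Linux host with `findmnt` (util-linux) available — the helper function and message text are illustrative, not part of any Spark tooling:

```shell
# has_noexec: succeed if a comma-separated mount-options string contains noexec
has_noexec() { case ",$1," in *,noexec,*) return 0 ;; *) return 1 ;; esac; }

# On the affected host, feed it the live mount options for /tmp
opts=$(findmnt -no OPTIONS /tmp 2>/dev/null)
if has_noexec "$opts"; then
  echo "/tmp is mounted noexec -- remount with exec and fix /etc/fstab"
fi
```

If noexec is set, a temporary fix is `mount -o remount,exec /tmp`; edit /etc/fstab to make it persistent.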
10-22-2019
12:06 PM
Check the config property hadoop.http.authentication.type. If it is set to kerberos, then accessing the UIs requires kerberos credentials on the client; by default it is set to kerberos in HDP 3.x when the cluster is kerberized. If you want to disable kerberos authentication, change the config properties below under Ambari > HDFS > Configs > core-site:
hadoop.http.authentication.type=simple
hadoop.http.authentication.simple.anonymous.allowed=true
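For reference, this is how those two properties end up in core-site.xml (Ambari manages this file, so the fragment below is for illustration only — make the change through the Ambari UI):

```xml
<!-- core-site.xml: disable SPNEGO authentication for the web UIs -->
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>true</value>
</property>
```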
07-15-2018
03:13 PM
2 Kudos
@Anurag Mishra du -sh /hadoop/hdfs/data shows the space used, not the space available. You should check the space available on the filesystem holding that directory, which you can do with the df -h command:
# df -h /hadoop/hdfs/data
To see the space available to HDFS, you can use the hdfs command:
# hdfs dfs -df -h /
To add more space with a single datanode, either add space to the underlying filesystem where /hadoop/hdfs/data is mounted, or create an additional filesystem such as /hadoop/hdfs/data1 and configure the datanode directories (dfs.datanode.data.dir) with the two paths in comma-separated format. You can also add HDFS space by adding another datanode to the cluster.
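A small local demonstration of the du-versus-df distinction (the temp directory here stands in for the datanode dir; the exact sizes will vary by filesystem):

```shell
# du reports space *used* under a path; df reports space *available*
# on the filesystem that contains it.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/blob" bs=1M count=5 2>/dev/null
du -sh "$dir"    # roughly 5M used by the directory contents
df -h "$dir"     # free space remaining on the underlying filesystem
rm -rf "$dir"
```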
07-15-2018
08:13 AM
1 Kudo
@Erkan ŞİRİN
It looks like the ranger_audits collection has not been created. It is supposed to be created when "Audit to Solr" is enabled in the Ranger Audit tab: Ambari creates the ranger_audits collection when the Ranger Admin service is restarted with "Audit to Solr" enabled. Make sure the Infra Solr service is installed in the cluster and that "Audit to Solr" and "SolrCloud" are enabled. Restart the Ranger Admin service, check the Ambari operation log for the Ranger Admin start, and see whether the ranger_audits collection is created successfully. You may need to troubleshoot Solr if the create-collection step fails with an error.
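You can confirm from the command line whether the collection exists using Solr's Collections API (a sketch — SOLR_HOST is a placeholder and 8886 is the usual Ambari Infra Solr port; on a kerberized cluster add `--negotiate -u :` and a valid ticket):

```shell
# has_collection: succeed if a Solr LIST response mentions the collection name
has_collection() { echo "$1" | grep -q "\"$2\""; }

# On the cluster (SOLR_HOST is a placeholder for your Infra Solr host):
# resp=$(curl -s "http://SOLR_HOST:8886/solr/admin/collections?action=LIST&wt=json")
# has_collection "$resp" ranger_audits && echo present || echo missing
```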
07-15-2018
07:53 AM
@Sundar Lakshmanan
Try sort with a delimiter:
# ls -1 | sort -t '_' -nk3
abcd_1_20180703
abcd_2_20180703
abcd_3_20180703
abcd_4_20180703
abcd_5_20180703
abcd_6_20180703
abcd_1_20180704
abcd_2_20180704
abcd_3_20180704
abcd_4_20180704
abcd_5_20180704
abcd_6_20180704
#cat test.out | sort -t '_' -nk3
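To make the flags explicit: -t '_' sets the field delimiter and -nk3 sorts numerically starting at the third field (the date); ties on the date are broken by sort's last-resort comparison of the whole line. A self-contained demo:

```shell
# Names with the earlier date sort first, regardless of the middle number
printf 'abcd_2_20180704\nabcd_1_20180703\nabcd_3_20180703\n' | sort -t '_' -nk3
```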
07-15-2018
07:08 AM
1 Kudo
It looks like Ambari 2.7.0 only includes the YARN, Files, SmartSense, and WFM views. https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari-views/content/amb_understanding_ambari_views.html I don't see any Jira about the Hive view being removed from Ambari 2.7.0, but the Jira below gives a clue about Ambari Hive View 2.0 being removed: https://issues.apache.org/jira/browse/AMBARI-23742 Additional reference from the Ambari upgrade docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/bhvr_changes_upgrade_hdp3_amb27.html
02-28-2018
01:19 PM
The value "type":"toppology" is wrong ("topology" is misspelled) .. I think you found it already.
05-31-2017
06:09 AM
@Robin Dong The SmartSense package is included in the Ambari repo; you can download the Ambari repo to get the SmartSense package.
05-28-2017
08:36 AM
1 Kudo
@Haaris Khan In Zeppelin 0.7 (HDP 2.6) there is a new LdapRealm that allows you to specify a search filter; with the search filter you can restrict login based on group membership. Below is one such example I tested in my lab. Please note that this works only in HDP 2.6, i.e. Zeppelin 0.7 and above. In HDP 2.5 this was not possible because the Active Directory realm was based on the userPrincipalName attribute and there was no way to filter users by group, so login could not be restricted; but with authorization (as mentioned by @Vipin Rathor) you can restrict the users accessing specific URLs based on a group-role mapping. [main]
ldapADGCRealm = org.apache.zeppelin.realm.LdapRealm
ldapADGCRealm.contextFactory.systemUsername = hadoopadmin@lab.hortonworks.net
ldapADGCRealm.contextFactory.systemPassword = <Password>
ldapADGCRealm.searchBase = "dc=lab,dc=hortonworks,dc=net"
ldapADGCRealm.userSearchBase = "dc=lab,dc=hortonworks,dc=net"
ldapADGCRealm.userSearchFilter=(&(objectclass=user)(sAMAccountName={0})(|(memberOf=CN=hr,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net)(memberOf=CN=hadoop-admins,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net)(memberOf=CN=sales,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net)))
ldapADGCRealm.contextFactory.url = ldap://LdapServer:389
#ldapADGCRealm.userSearchAttributeName = sAMAccountName
ldapADGCRealm.contextFactory.authenticationMechanism = simple
#ldapADGCRealm.userObjectClass = user
ldapADGCRealm.groupObjectClass = group
ldapADGCRealm.memberAttribute = member
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
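When a filter like the one above misbehaves, it helps to test it outside Zeppelin. The sketch below sanity-checks the filter string for balanced parentheses locally; the commented ldapsearch invocation (OpenLDAP client tools, assumed installed) shows how to run the same filter against the directory, with host and DNs mirroring the example config:

```shell
# balanced: sanity-check that an LDAP filter has matching parentheses
balanced() {
  o=$(printf '%s' "$1" | tr -cd '(' | wc -c)
  c=$(printf '%s' "$1" | tr -cd ')' | wc -c)
  [ "$o" -eq "$c" ]
}

filter='(&(objectclass=user)(sAMAccountName={0})(memberOf=CN=hr,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net))'
balanced "$filter" && echo "filter parens balanced"

# Against the directory itself (replace {0} with a real login first):
# ldapsearch -x -H ldap://LdapServer:389 \
#   -D 'hadoopadmin@lab.hortonworks.net' -W \
#   -b 'dc=lab,dc=hortonworks,dc=net' \
#   '(&(objectclass=user)(sAMAccountName=someuser)(memberOf=CN=hr,OU=CorpUsers,DC=lab,DC=hortonworks,DC=net))'
```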
05-07-2017
06:33 PM
1 Kudo
@Daniel Kozlowski Install the EPEL repo and try the installation again; the required dependencies for R-devel are available in the EPEL repo.
# yum install -y epel-release
# yum install R-devel libcurl-devel openssl-devel
Note that you need internet connectivity to reach the EPEL repo.