07-13-2017 01:48 PM
When using the post-user-creation-hook.sh script to create home directories for users, we can edit the script to set a quota too. For information on enabling HDFS home directory creation, see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-administration/content/create_user_home_directory.html

If you want to set a quota on this directory, edit /var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh:

# the default implementation creates user home folders; the first argument must be the username
ambari_sudo "yarn jar /var/lib/ambari-server/resources/stacks/HDP/2.0.6/hooks/before-START/files/fast-hdfs-resource.jar $JSON_INPUT"
#ADD THESE LINES
while read -r LINE
do
  # the hook receives a CSV file; the first field of each line is the username
  USR_NAME=$(echo "$LINE" | awk -F, '{print $1}')
  # set a 10 GB space quota on the new home dir (use "10g"; a bare "10" would mean 10 bytes)
  hdfs dfsadmin -setSpaceQuota 10g /user/$USR_NAME > /tmp/posthook.tmp
done <"$CSV_FILE"
#END ADD QUOTA
if [ "$DEBUG" -gt "0" ]; then echo "Switch debug OFF";set +x;unset DEBUG; else echo "debug: OFF"; fi
unset DEBUG
}
main "$@"
Add the lines between the comments and save. Now when a user is added, their home directory is created with a 10 GB quota set.
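To confirm the hook worked, you can check the quota on a newly created home directory; "newuser" below is just a placeholder:

# -q prints the quota columns, -h makes the sizes human-readable
hdfs dfs -count -q -h /user/newuser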
06-30-2017 10:59 AM
2 Kudos
When configuring LDAPS in HDP it's common to see the wrong certificates used, or certificates without the correct chain. To ensure the correct chain of certificates is used when configuring LDAPS, you can use openssl to read the certificate from the server and save it to a file. This file can then be imported into, for example, the Ambari truststore.

echo -n | openssl s_client -connect <ad-server>:636 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /root/ldaps-cert.pem
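To import the saved certificate, a keytool command along these lines works; the truststore path and password here are examples only, use whatever you configured with ambari-server setup-security:

# import the PEM into a JKS truststore under an alias of your choosing
keytool -importcert -trustcacerts -alias ldaps-ad -file /root/ldaps-cert.pem -keystore /etc/ambari-server/keys/ldaps-truststore.jks -storepass changeit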
06-21-2017 02:23 PM
Currently, on HDP-2.6.x / Ambari-2.5.x, if the ZooKeeper principal name is changed or customized, manual changes are required for HDFS, YARN and Ambari-Infra. In Ambari, under the YARN and HDFS configs:
yarn-env.sh:
YARN_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=sandbox-zookeeper -Djava.security.auth.login.config=/etc/hadoop/2.6.0.3-8/0/yarn_jaas.conf -Dzookeeper.sasl.clientconfig=Client $YARN_OPTS"

hadoop/conf/hadoop-env.sh:
export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=sandbox-zookeeper -Djava.security.auth.login.config=/usr/hdp/current/hadoop-client/conf/secure/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client $HADOOP_ZKFC_OPTS"
For Ambari-Infra/Solr:

Edit /usr/lib/ambari-infra-solr-client/solrCloudCli.sh:
PATH=$JAVA_HOME/bin:$PATH $JVM -classpath "$sdir:$sdir/libs/*" -Dzookeeper.sasl.client.username=sandbox-zookeeper org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI ${1+"$@"}

Edit /usr/lib/ambari-infra-solr/bin/solr and add the lines below to the bottom of the script:
ADDITIONAL_CMD_OPTS="$ADDITIONAL_CMD_OPTS -Dzookeeper.sasl.client.username=sandbox-zookeeper"
launch_solr "$FG" "$ADDITIONAL_CMD_OPTS"

These services will now restart correctly and use your custom ZooKeeper principal name for the client connection.
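As a quick sanity check, you can confirm the custom principal actually exists in the ZooKeeper service keytab; the keytab path below is the usual HDP default and may differ on your cluster:

# -k lists keytab entries, -t adds timestamps; look for sandbox-zookeeper/<host>@REALM
klist -kt /etc/security/keytabs/zk.service.keytab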
06-12-2017 06:18 PM
1) The standby NameNode stores a second copy of the fsimage and should be up, ideally hosted on a second node. (This is not HA.)
2) No, HBase does not store any information for the NameNode or any other service as far as I am aware. We do have an embedded HBase server for our Metrics system, but that's outside the scope of this conversation.
3) Here is something important you need to know: anything you assign as heap size for a Java program will be allocated at run-time. So if you have 5 apps each assigned 1 GB of heap on a 4 GB system, 4 will start but the 5th will fail because it cannot allocate the RAM. Simple example.
4) Check you have HDP.repo and Ambari.repo in /etc/yum.repos.d/. RedHat/CentOS 6 is not a problem at all; I would stick with that, in my personal opinion, as there is much more OS-specific detail on HDP for this platform. Other OSes are also fine, but for beginners I would stick with CentOS 6/7.

How you should approach this: stop everything from Ambari. Start ZooKeeper (1 or 3 nodes depending on setup, but not 2). The NameNode usually likes ZK to be up before it starts. Now start the NameNode and standby NameNode. Attach any failure logs here as an attachment.
HDFS is the first system that needs to be up. I assume you have not installed Ranger at this time? In case you have, remove it; it will complicate things at this point.
If I were learning all over again I would just start with HDFS/ZooKeeper/YARN/MapReduce, get those working on a single node, and do some tutorials. Everything else builds off this and can be added a service at a time.
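If you prefer to script the start order rather than clicking through the Ambari UI, the Ambari REST API can do it; the credentials, cluster name and host below are placeholders:

# start ZooKeeper first, then repeat with HDFS once the request completes
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start ZooKeeper"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/ZOOKEEPER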
06-12-2017 11:03 AM
You might need to provide a little more information. Do you have NameNode HA? Is there a port listening on 50070 on that node? (netstat -plant | grep 50070)
Is there any information in the NameNode logs? (/var/log/hadoop/hdfs/)
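A quick way to check whether the NameNode web UI is actually responding; "namenode-host" below is a placeholder:

# any JSON output here means the NameNode HTTP server is up on 50070
curl -s http://namenode-host:50070/jmx | head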
06-09-2017 09:18 AM
1 Kudo
If you are using Ambari, we already collect this information. If you go to the Hosts section you can see some graphs on the right-hand side. If you have Ambari Metrics and Grafana installed, you will see we already have a very robust monitoring platform integrated within Ambari.
06-08-2017 03:01 PM
I guess you are using some type of cloud instance? Normally the images used for provisioning these instances are based on small partitions for root, even if you specifically requested an 800 GB disk. Fortunately these use LVM disk partitions. The command lsblk will show you what "physical" disks you have:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 4G 0 part
│ └─md1 9:1 0 4G 0 raid1 /
├─sda2 8:2 0 2G 0 part [SWAP]
└─sda3 8:3 0 925.5G 0 part
└─md3 9:3 0 925.5G 0 raid1
├─vg00-usr 252:0 0 5G 0 lvm /usr
├─vg00-var 252:1 0 105G 0 lvm /var
└─vg00-home 252:2 0 165G 0 lvm /home
sdb 8:16 0 931.5G 0 disk
├─sdb1 8:17 0 4G 0 part
│ └─md1 9:1 0 4G 0 raid1 /
├─sdb2 8:18 0 2G 0 part [SWAP]
└─sdb3 8:19 0 925.5G 0 part
└─md3 9:3 0 925.5G 0 raid1
├─vg00-usr 252:0 0 5G 0 lvm /usr
├─vg00-var 252:1 0 105G 0 lvm /var
└─vg00-home 252:2 0 165G 0 lvm /home
lvscan will show the logical volumes and their sizes:

# lvscan
ACTIVE '/dev/vg00/usr' [5.00 GiB] inherit
ACTIVE '/dev/vg00/var' [105.00 GiB] inherit
ACTIVE '/dev/vg00/home' [165.00 GiB] inherit
In HDP the two directories you should assign space to are /usr and /var. If they are already defined as LVM partitions you can grow the filesystem with the resize2fs command, provided the underlying logical volume has the space (extend it first with lvextend if not):

sudo resize2fs /dev/vg00/usr
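A minimal sketch of the full sequence, assuming vg00 still has free extents and the volume carries an ext filesystem:

# add 50 GB to the /var logical volume, then grow the filesystem to match
sudo lvextend -L +50G /dev/vg00/var
sudo resize2fs /dev/vg00/var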
05-17-2017 12:55 PM
5 Kudos
Ambari user sync will fail to map or import users when trying to pull from groups with 1,500+ members. What we see when we use ldapsearch to query a large group is:

<snip>
member;range=0-1499: CN=Elgine Metzger,OU=users,OU=test,DC=j4ck3l,DC=net
member;range=0-1499: CN=Friedolf Welter,OU=users,OU=test,DC=j4ck3l,DC=net
</snip>

This seems to come from the LDAP policy value MaxValRange:

"MaxValRange controls the number of values that are returned on a single attribute on a single object.
Default: 1500
Hard Limit: 5000"
-- http://ldapwiki.com/wiki/MaxValRange

To fix this:
1. Go to the domain controller we're connecting to for the sync.
2. Find the file ntdsutil.exe (most likely under c:\windows\system32 or c:\winnt\system32).
3. Run ntdsutil.exe.
4. Type "ldap policies" and press Enter.
5. Type "connections" and press Enter.
6. Type "Connect to server [YourDCName]" and press Enter.
7. Type "q" and press Enter.
8. Type "Show Values" to see the current settings.
9. Type "Set MaxValRange to 2500" and press Enter.
10. Type "Commit Changes" and press Enter.
11. Type "Show Values" and press Enter.
-- https://support.intranetconnections.com/hc/en-us/articles/214747288-Changing-LDAP-Settings-Increasing-MaxPageSize

After the change, ldapsearch returns the members without the range marker:

member: CN=Elgine Metzger,OU=users,OU=test,DC=j4ck3l,DC=net
member: CN=Friedolf Welter,OU=users,OU=test,DC=j4ck3l,DC=net

The group should now sync successfully with Ambari.
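If you want to reproduce the check yourself, an ldapsearch query along these lines shows whether the member attribute comes back with range markers; the bind account and group DN are placeholders:

# ask only for the member attribute of the group; "member;range=..." lines indicate truncation
ldapsearch -H ldap://<ad-server>:389 -D "binduser@j4ck3l.net" -W \
  -b "CN=biggroup,OU=groups,OU=test,DC=j4ck3l,DC=net" member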
03-31-2017 06:45 PM
2 Kudos
After a few hours of debugging SSSD and mapping users/groups, I wanted to make a post here to try and save someone the pain. I had SSSD configured correctly using the following document: https://github.com/HortonworksUniversity/Security_Labs#lab-1

What I found by adding debug_level=7 to the sssd.conf file was this cryptic message:

Trying to resolve service 'AD_GC'

I realized at some point I was firewalled off from the Active Directory Global Catalog port, 3268. Once I opened this, I got the correct groups mapped to my SSSD users. Hope this saves someone some time in the future!
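A quick way to test the same thing from a cluster node; the domain controller hostname is a placeholder:

# -v verbose, -z scan without sending data; success means the Global Catalog port is reachable
nc -vz ad.example.com 3268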
03-29-2017 08:38 AM
2 Kudos
When using Kerberos with HDP it's not uncommon to find the odd strange encryption type floating around, possibly from a badly configured AD server. By adding the following to the supported encryption types under Ambari -> Kerberos config, it's possible to isolate this issue for diagnostics. While it's probably not a wise idea to run with all of these enabled in production, having a full list of supported types can be useful for diagnostics or reference:

des-cbc-crc des-cbc-md4 des-cbc-md5 des3-cbc-sha1 arcfour-hmac arcfour-hmac-exp aes128-cts-hmac-sha1-96 aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha256-128 aes256-cts-hmac-sha384-192 camellia128-cts-cmac camellia256-cts-cmac
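To see which encryption types a service's keys were actually created with, klist can list them straight from the keytab; the path below is just a typical HDP example:

# -k read from keytab, -t show timestamps, -e show the encryption type of each key
klist -kte /etc/security/keytabs/nn.service.keytab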