Member since: 09-25-2015
Posts: 21
Kudos Received: 10
Solutions: 1

My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2672 | 10-15-2015 01:19 AM
12-06-2017
11:27 PM
1 Kudo
Mandatory Pre-requisite: Upgrade the JDK from 1.7 to 1.8. If this step is missed, it will result in the errors below (a quick version check follows the list).
NodeManager will fail to start if the Spark shuffle service is enabled: "java.lang.UnsupportedClassVersionError: org/apache/spark/network/yarn/YarnShuffleService : Unsupported major.minor version 52.0"
Hive Metastore will fail to start: "Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/hive/ql/log/NullAppender : Unsupported major.minor version 52.0"
Service checks will fail.
YARN/Hive Finalize Upgrade will fail and the cluster will have to be downgraded.
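As a quick pre-check, confirm every node already runs JDK 1.8 before starting the upgrade (a minimal sketch; run it on each host):

# Class file "major version 52" corresponds to Java 8, so the JVM must report 1.8
java -version 2>&1 | awk -F '"' '/version/ {print $2}'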
01-20-2017
07:49 PM
Hive: Compare Tables and Databases Before and After Upgrades. Run the script below to collect all databases:

hive -e "show databases" | sed "s/ *$//g" | sed "s/^ *//g" | sort 1> databaselist

Run the script below to collect all tables in every database (note the append redirect, so each iteration does not overwrite the list):

for dbname in `cat databaselist` ; do hive -e "use ${dbname} ; show tables ;" | sed "s/ *$//g" | sed "s/^ *//g" | sort 1>> tablelist ; done ;

Execute the scripts above once before the upgrade and once after the upgrade, then simply run vimdiff on the outputs to identify any differences (example below). Happy migrating or upgrading.
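For example (filenames are illustrative):

# before the upgrade: collect, then set the outputs aside
mv databaselist databaselist.before ; mv tablelist tablelist.before
# after the upgrade: rerun the collection scripts, then compare
vimdiff databaselist.before databaselist
vimdiff tablelist.before tablelist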
01-05-2017
10:43 PM
[users]
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = admin

[main]
ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=mgmt,dc=example,dc=net
ldapRealm.contextFactory.url = ldaps://me.abc.example.net:636
ldapRealm.userDnTemplate = uid={0},cn=group,dc=mgmt,dc=example,dc=net
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
ldapRealm.contextFactory.systemUsername = uid=example,ou=example,dc=mgmt,dc=example,dc=net
ldapRealm.contextFactory.systemPassword = example
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login

[urls]
/** = authc
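After editing shiro.ini, restart Zeppelin so the new realm takes effect (a sketch; the daemon path assumes an HDP layout):

/usr/hdp/current/zeppelin-server/bin/zeppelin-daemon.sh restart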
11-14-2016
11:50 PM
Run cat /etc/hbase/conf/hbase-site.xml | grep -1 hbase.tmp to find the hbase.tmp.dir setting, then make sure that directory is chmod 777. Things should work after that.
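A minimal sketch of the two steps, assuming the default config path (the actual hbase.tmp.dir value varies per cluster):

# Pull the hbase.tmp.dir value out of hbase-site.xml and open up its permissions
TMPDIR=$(grep -A1 '<name>hbase.tmp.dir</name>' /etc/hbase/conf/hbase-site.xml | grep '<value>' | sed -e 's/.*<value>//' -e 's|</value>.*||')
chmod 777 "$TMPDIR"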
09-21-2016
02:37 PM
1 Kudo
Logical Disk Encryption. The approach is to avoid HDFS encryption and instead use LUKS disk encryption to meet the data-at-rest encryption requirement, especially when using public cloud IaaS. To build manually encrypted volumes or drives, use the following steps on a d2.8xlarge instance flavor.
lsblk
cryptsetup --verbose --verify-passphrase luksFormat /dev/xvdb
cryptsetup --verbose --verify-passphrase luksFormat /dev/xvdc
cryptsetup --verbose --verify-passphrase luksFormat /dev/xvdd
cryptsetup luksOpen /dev/xvdb vol1
cryptsetup luksOpen /dev/xvdc vol2
cryptsetup luksOpen /dev/xvdd vol3
dd if=/dev/urandom of=/root/keyfile1 bs=1024 count=4
chmod 0400 /root/keyfile1
cryptsetup luksAddKey /dev/xvdb /root/keyfile1
cryptsetup luksAddKey /dev/xvdc /root/keyfile1
cryptsetup luksAddKey /dev/xvdd /root/keyfile1
mkfs.ext4 /dev/mapper/vol1
mkfs.ext4 /dev/mapper/vol2
mkfs.ext4 /dev/mapper/vol3
mkdir -p /data/vol1 /data/vol2 /data/vol3
echo "/dev/mapper/vol1 /data/vol1 ext4 defaults,nofail,nodev 0 2" >> /etc/fstab
echo "/dev/mapper/vol2 /data/vol2 ext4 defaults,nofail,nodev 0 2" >> /etc/fstab
echo "/dev/mapper/vol3 /data/vol3 ext4 defaults,nofail,nodev 0 2" >> /etc/fstab
echo "vol1 /dev/xvdb /root/keyfile1 luks" >>/etc/crypttab
echo "vol2 /dev/xvdc /root/keyfile1 luks" >>/etc/crypttab
echo "vol3 /dev/xvdd /root/keyfile1 luks" >>/etc/crypttab
mount -a

Automated Shell Script
#!/bin/bash
set -x
# Abort early if the required environment variables are unset
: "${PLATFORM_DISK_PREFIX:?required}"
: "${START_LABEL:?required}"
format_disks_encrypted()
{
mkdir /hadoopfs
# Generate a random LUKS passphrase and keep backup copies
openssl rand -base64 32 > /root/encrypt
cat /root/encrypt > /root/encrypt1
cat /root/encrypt1 > /root/encrypt2
# Install the LUKS tooling
yum -y install cryptsetup-luks
# Iterate over up to 24 data disks; derive the device letter from START_LABEL
for (( i=1; i<=24; i++ )); do
LABEL=$(printf "\x$(printf %x $((START_LABEL+i)))")
DEVICE=/dev/${PLATFORM_DISK_PREFIX}${LABEL}
if [ -e $DEVICE ]; then
MOUNTPOINT=$(grep $DEVICE /etc/fstab | tr -s ' \t' ' ' | cut -d' ' -f 2)
if [ -n "$MOUNTPOINT" ]; then
umount "$MOUNTPOINT"
sed -i "\|^$DEVICE|d" /etc/fstab
fi
mkdir /hadoopfs/fs${i}
cryptsetup --verbose luksFormat $DEVICE -yrq --key-file=/root/encrypt
cryptsetup luksOpen $DEVICE vol${i} --key-file=/root/encrypt
cryptsetup luksAddKey $DEVICE /root/encrypt --key-file=/root/encrypt
mkfs.ext4 /dev/mapper/vol${i}
echo UUID=$(blkid -o value /dev/mapper/vol${i} | head -1) /hadoopfs/fs${i} ext4 inode_readahead_blks=128,data=writeback,noatime,nodiratime 0 2 >> /etc/fstab
# crypttab must reference the underlying LUKS device, not the opened mapper device
echo "vol${i} UUID=$(blkid -o value $DEVICE | head -1) /root/encrypt luks" >> /etc/crypttab
mount /hadoopfs/fs${i}
chmod 777 /hadoopfs/fs${i}
fi
done
}
main()
{
format_disks_encrypted
}
main "$@"

Reference: https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption
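After the script completes, a quick sanity check (a sketch; the volume and mount names follow the script above):

# Confirm the mapping is an open LUKS device and the filesystem is mounted
cryptsetup status vol1
lsblk -f
df -h /hadoopfs/fs1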
09-14-2016
07:26 PM
Valid for Release 1.3. Any recommendations around minimum AWS instance sizes to satisfy its requirements? We use m3.large for this node, with a 100GB mount used for logs and the ambari-server Postgres database. Can this node be turned into an edge node with client libraries? No; this node does not have ambari-agent installed, it only has ambari-server and its database.
09-12-2016
12:42 AM
1 Kudo
Enabling SMTP in Cloudbreak
---------------------------
1. The Profile file
2. Bug in mailer.js and a workaround
2.1 The Problem
2.2 The Cause
2.3 Details
2.4 A Workaround
3. Fix postfix config

---
1. The Profile file
-------------------
In the Profile file, set the following CLOUDBREAK_SMTP_* variables and set CBD_FORCE_START to enable starting Cloudbreak containers with a modified docker-compose.yml file (see section 2.4):

cloudbreak $ cd $CBD_ROOT
cloudbreak $ more Profile
export PUBLIC_IP=example.compute.amazonaws.com
export AWS_SECRET_ACCESS_KEY=***
export AWS_ACCESS_KEY_ID=***
export CBD_FORCE_START=true
export CLOUDBREAK_SMTP_SENDER_HOST="172.17.0.1"
export CLOUDBREAK_SMTP_SENDER_FROM="cloudbreak@compute.amazonaws.com"
export CLOUDBREAK_SMTP_AUTH=false
export CLOUDBREAK_SMTP_STARTTLS_ENABLE=false

---
2. Bug in mailer.js and a workaround
------------------------------------
See https://github.com/sequenceiq/cloudbreak/issues/1492

---
2.1 The Problem
---------------
Cannot receive mail from the cloud UI, e.g., to reset the password: http://example.compute.amazonaws.com:3000
I found the cause of the problem and a workaround.

---
2.2 The Cause
-------------
The environment variables SL_SMTP_SENDER_USERNAME and SL_SMTP_SENDER_PASSWORD in the sultans container are derived from CLOUDBREAK_SMTP_SENDER_USERNAME and CLOUDBREAK_SMTP_SENDER_PASSWORD on the Cloudbreak deployer. When they are defined, even as the empty string, the Javascript code in /sultans/mailer.js in the sultans container tries to do authentication with the SMTP server.

---
2.3 Details
-----------
1. Because the env vars are defined on the Cloudbreak deployer:

cloudbreak $ cbd env show | egrep SMTP
CLOUDBREAK_SMTP_SENDER_USERNAME =
CLOUDBREAK_SMTP_SENDER_PASSWORD =
CLOUDBREAK_SMTP_SENDER_HOST = 172.17.0.1
CLOUDBREAK_SMTP_SENDER_PORT = 25
CLOUDBREAK_SMTP_SENDER_FROM = cloudbreak@compute.amazonaws.com
CLOUDBREAK_SMTP_AUTH = false
CLOUDBREAK_SMTP_STARTTLS_ENABLE = false
CLOUDBREAK_SMTP_TYPE = smtp

the cbd start command will insert the following into docker-compose.yml:

cloudbreak $ egrep -A 10 sultans: /var/lib/cloudbreak-deployment/docker-compose.yml
sultans:
  environment:
    - SL_CLIENT_ID=sultans
    - SL_CLIENT_SECRET=cbsecret2015
    - SERVICE_NAME=sultans
    #- SERVICE_CHECK_HTTP=/
    - SL_PORT=3000
    - SL_SMTP_SENDER_HOST=172.17.0.1
    - SL_SMTP_SENDER_PORT=25
    - SL_SMTP_SENDER_USERNAME=
    - SL_SMTP_SENDER_PASSWORD=

2. The above settings in docker-compose.yml will in turn cause the sultans container to have SL_SMTP_SENDER_USERNAME and SL_SMTP_SENDER_PASSWORD defined. Indeed:

bash-4.3# cat /proc/5/environ | sed 's/\0/\n/' | egrep SMTP | sort
SL_SMTP_SENDER_FROM=cloudbreak@compute.amazonaws.com
SL_SMTP_SENDER_HOST=172.17.0.1
SL_SMTP_SENDER_PASSWORD=
SL_SMTP_SENDER_PORT=25
SL_SMTP_SENDER_USERNAME=

3. The code in /sultans/mailer.js will do auth if SL_SMTP_SENDER_USERNAME and SL_SMTP_SENDER_PASSWORD are defined, even if they are the empty string. Indeed:

bash-4.3# egrep -A10 ^sendSimple /sultans/mailer.js
sendSimpleEmail = function(to, subject, content) {
    var transport = null;
    if (process.env.SL_SMTP_SENDER_USERNAME == null
            && process.env.SL_SMTP_SENDER_PASSWORD == null) {
        transport = nodemailer.createTransport(smtpTransport({
            host: process.env.SL_SMTP_SENDER_HOST,
            port: process.env.SL_SMTP_SENDER_PORT,
            secure: false,
            tls: { rejectUnauthorized: false }
        }));

---
2.4 A Workaround
----------------
Make sure that SL_SMTP_SENDER_USERNAME and SL_SMTP_SENDER_PASSWORD are not defined in the cbreak_sultans_1 container. Steps:

1. Hack the file docker-compose.yml:

cloudbreak $ diff /var/lib/cloudbreak-deployment/docker-compose.yml \
    /var/lib/cloudbreak-deployment/docker-compose.yml.sav
149a150,151
>     - SL_SMTP_SENDER_USERNAME=
>     - SL_SMTP_SENDER_PASSWORD=

2. Restart the containers, but not with cbd start, because that will overwrite docker-compose.yml.sav:

cloudbreak $ cbd kill
cloudbreak $ cd /var/lib/cloudbreak-deployment/
cloudbreak $ ./.deps/bin/docker-compose -p cbreak up -d

3. Check that SL_SMTP_SENDER_USERNAME and SL_SMTP_SENDER_PASSWORD are not defined on the container:

cloudbreak $ alias sultans
alias sultans='docker exec -it cbreak_sultans_1 bash'
cloudbreak $ sultans
bash-4.3# ps
PID   USER     TIME   COMMAND
    1 root       0:00 {start-docker.sh} /bin/bash /sultans/start-docker.sh
    5 root       0:03 node main.js
bash-4.3# cat /proc/5/environ | sed 's/\0/\n/' | egrep SMTP
SL_SMTP_SENDER_FROM=cloudbreak@compute.amazonaws.com
SL_SMTP_SENDER_PORT=25
SL_SMTP_SENDER_HOST=172.17.0.1

---
3. Fix postfix config
---------------------
Change /etc/postfix/main.cf:

cloudbreak # egrep "inet_.*=" /etc/postfix/main.cf.orig
#inet_interfaces = all
#inet_interfaces = $myhostname
#inet_interfaces = $myhostname, localhost
inet_interfaces = localhost
inet_protocols = all

Set inet_interfaces = all:

cloudbreak # diff /etc/postfix/main.cf /etc/postfix/main.cf.orig
113c113
< inet_interfaces = all
---
> #inet_interfaces = all
116c116
< #inet_interfaces = localhost
---
> inet_interfaces = localhost

Restart:

cloudbreak # systemctl stop postfix.service
cloudbreak # systemctl start postfix.service
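After restarting postfix, local delivery can be verified from the deployer host (a sketch; the recipient address is an example, and the sendmail binary ships with postfix):

printf "Subject: cloudbreak smtp test\n\ntest body\n" | sendmail -v user@example.net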
09-06-2016
11:11 PM
A document exists for wasb: http://falcon.apache.org/DataReplicationAzure.html. Maybe just use s3a instead.
07-21-2016
01:51 PM
3 Kudos
1. Fetch the latest FS Image from the Active NameNode: Look at the (NameNode directories) property in Ambari and copy the latest image to a node with free disk space and memory. (Ex: fsimage_0000000001138083674)

2. Load the FS Image: On the node where you copied the FS Image, run the commands below:

export HADOOP_OPTS="-Xms16000m -Xmx16000m $HADOOP_OPTS"
nohup hdfs oiv -i fsimage_0000000001138083674 -o fsimage_0000000001138083674.txt &

The above command will make the FS Image available on a (temporary) web server.

3. Create an "ls -R" report from the FS Image:

nohup hdfs dfs -ls -R webhdfs://127.0.0.1:5978/ > /data/home/hdfs/lsrreport.txt &

This could take some time. Copy the data from /data/home/hdfs/lsrreport.txt to HDFS at /user/hdfs/lsr/lsrreport.txt.

4. Analyze the ls -R output: Create the required table, load the data, create a view, and analyze:

hive> add jar /usr/hdp/2.3.2.0-2950/hive/lib/hive-contrib.jar;
hive> CREATE EXTERNAL TABLE lsr (permissions STRING, replication STRING, owner STRING, ownergroup STRING, size STRING, fileaccessdate STRING, time STRING, file_path STRING ) ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' WITH SERDEPROPERTIES ("input.regex" = "(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(.*)");
hive> load data inpath '/user/hdfs/lsr/lsrreport.txt' overwrite into table lsr;
hive> create view lsr_view as select (case substr(permissions,1,1) when 'd' then 'dir' else 'file' end) as file_type,owner,cast(size as int) as size, fileaccessdate,time,file_path from lsr;

Query 1: Files < 1 MB (Top 100)
hive> select relative_size,fileaccessdate,file_path as total from (select (case size < 1048576 when true then 'small' else 'large' end) as relative_size,fileaccessdate,file_path from lsr_view where file_type='file') tmp where relative_size='small' limit 100;

Query 2: Files < 1 MB (Grouped by Path)
hive> select substr(file_path,1,45) ,count(*) from (select relative_size,fileaccessdate,file_path from (select (case size < 1048576 when true then 'small' else 'large' end) as relative_size,fileaccessdate,file_path from lsr_view where file_type='file') tmp where relative_size='small') tmp2 group by substr(file_path,1,45) order by 2 desc;

Query 3: Files < 1 KByte (Grouped by Owner)
hive> select owner ,count(1) from (select (case size < 1024 when true then 'small' else 'large' end) as relative_size,fileaccessdate,owner from lsr_view where file_type='file') tmp where relative_size='small' group by owner;

Query 4: Files < 1 KByte (Grouped by Date)
hive> select fileaccessdate ,count(1) from (select (case size < 1024 when true then 'small' else 'large' end) as relative_size,fileaccessdate,owner from lsr_view where file_type='file' ) tmp where relative_size='small' group by fileaccessdate;
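One more variant that is often useful (a sketch reusing lsr_view from above, runnable from the shell with hive -e):

# Count small files and their total bytes per owner, largest offenders first
hive -e "select owner, count(1) as small_files, sum(size) as total_bytes from lsr_view where file_type='file' and size < 1048576 group by owner order by small_files desc;"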
01-18-2016
09:30 PM
Check the Ambari heap size; it may be running out of memory. In /var/lib/ambari-server/ambari-env.sh, change -Xmx2048m to 8GB (-Xmx8192m) if you have enough memory available, and restart ambari-server.
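For example (a sketch; 8GB shown as -Xmx8192m, adjust to your host):

# Bump the Ambari server max heap and restart
sed -i 's/-Xmx2048m/-Xmx8192m/' /var/lib/ambari-server/ambari-env.sh
ambari-server restart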