Member since
09-10-2015
Posts: 54
Kudos Received: 24
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2319 | 10-20-2016 07:44 PM
| 7260 | 10-16-2016 03:14 PM
| 2627 | 05-11-2016 02:16 PM
| 1972 | 09-29-2015 08:27 PM
04-04-2019
07:06 PM
Thanks Balaji for the explanation.
04-04-2019
07:06 PM
After enabling the Ranger plugin for Hive, I was running "dfs -ls" in Hive/Beeline to list HDFS files, and I'm getting the following error:

jdbc:hive2://vserver69901.example.com> dfs -ls;
Error: Error while processing statement: Permission denied: user [ambari] does not have privilege for [DFS] command (state=,code=1)

Do we need to update/enable any other properties? "hadoop fs -ls" works without any issues, and the user has admin access in Ranger. The storage is on Isilon, so the HDFS Ranger plugin cannot be enabled.

2015-11-12 17:07:31,980 INFO [HiveServer2-Handler-Pool: Thread-46]: operation.Operation (HiveCommandOperation.java:setupSessionIO(69)) - Putting temp output to file /tmp/hive/012f5aa7-fa31-4fb2-8cd5-4f3fe3f3120624919258595801789.pipeout
2015-11-12 17:07:31,981 ERROR [HiveServer2-Handler-Pool: Thread-46]: processors.CommandUtil (CommandUtil.java:authorizeCommand(66)) - Error authorizing command [-ls]
org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [ambari] does not have privilege for [DFS] command
at com.xasecure.authorization.hive.authorizer.XaSecureHiveAuthorizer.handleDfsCommand(XaSecureHiveAuthorizer.java:644)
at com.xasecure.authorization.hive.authorizer.XaSecureHiveAuthorizer.checkPrivileges(XaSecureHiveAuthorizer.java:227)
at org.apache.hadoop.hive.ql.processors.CommandUtil.authorizeCommandThrowEx(CommandUtil.java:86)
at org.apache.hadoop.hive.ql.processors.CommandUtil.authorizeCommand(CommandUtil.java:59)
at org.apache.hadoop.hive.ql.processors.DfsProcessor.run(DfsProcessor.java:71)
at org.apache.hive.service.cli.operation.HiveCommandOperation.runInternal(HiveCommandOperation.java:105)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:256)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:376)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60)
at com.sun.proxy.$Proxy32.executeStatementAsync(Unknown Source)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:271)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:401)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:129
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
Labels:
- Apache Hive
- Apache Ranger
09-28-2017
03:47 AM
1 Kudo
Changing Ambari service account user names (via the REST API) using a single customized script:

Create a property file (ambServActNmChg.properties) with the following parameters:

export AMBARI_ADMIN_USERID=admin
export AMBARI_ADMIN_PASSWORD=admin
export AMBARI_SERVER=<Ambari Host>
export AMBARI_SERVER_PORT=<Ambari port>
export CLUSTER_NAME=<Ambari Cluster>
export NEW_SRV_USER_NAME=<New User Suffix>

Create the shell script (ambServActNmChg.sh):

#!/bin/bash
echo " ****** Starting Script to Change Service User names in Ambari ***** "
#######################################################################################################
#
# This script changes the service account user names in ambari using inputs from properties files. ##
#
#######################################################################################################
. `dirname ${0}`/ambServActNmChg.properties
curr_date=`date +"%Y%m%d_%H%M%S"`
function check_ambari_server() {
echo " Check and Start Ambari Server : "
server_start=`ambari-server status |grep running|awk '{print $3}'`
echo $server_start
if [ "$server_start" == "running" ]; then
echo "Ambari server running already .. "
else
echo " Initiating ambari server start "
ambari-server start
sleep 30
finished=0
retries=0
while [ $finished -ne 1 ]
do
server_start=`ambari-server status |grep running|awk '{print $3}'`
echo $server_start
if [ "$server_start" == "running" ]; then
finished=1
fi
sleep 5
let retries=$retries+1
if [[ $retries == 30 ]]
then
echo " Unable to Start Ambari Server. Please check the Ambari Server logs to determine the issue ... "
exit 1
fi
echo " Polling for Ambari Server status $retries "
done
fi
}
function change_user_name() {
echo " "
echo " Changing Username in progress ..... The New service users will be suffixed with $NEW_SRV_USER_NAME as follows : "
echo " "
while read line
do
#####echo $line |sed 's/://g'
newuservar=`echo $line |awk -F':' '{print $2}'`
newuser=`echo $line |awk -F':' '{print $3}'|sed 's/"//g'`
echo $newuservar ":" $newuser$NEW_SRV_USER_NAME
done < amb_srv_usr_backup.txt
echo " "
echo " Hit ENTER to update the user names ......: "
echo " "
read input
while read line
do
###echo $line |sed 's/://g'
envfile=`echo $line |awk -F':' '{print $1}'`
newuservar=`echo $line |awk -F':' '{print $2}'`
newuser=`echo $line |awk -F':' '{print $3}'|sed 's/"//g' |xargs`
nuser=\"$newuser$NEW_SRV_USER_NAME\"
echo " Updating $envfile with " $newuservar "---" $nuser
setuser=`echo "/var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_ADMIN_USERID -p $AMBARI_ADMIN_PASSWORD set $AMBARI_SERVER ${CLUSTER_NAME} $envfile $newuservar $nuser"`
eval $setuser
done < amb_srv_usr_backup.txt
echo " "
echo " Update Completed. Validating new users ... "
echo " "
while read line
do
envfile=`echo $line |awk -F':' '{print $1}'`
/var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_ADMIN_USERID -p $AMBARI_ADMIN_PASSWORD get $AMBARI_SERVER ${CLUSTER_NAME} $envfile |grep user\"
done < amb_srv_usr_backup.txt
}
function get_user_name() {
check_ambari_server
envs=`curl -u $AMBARI_ADMIN_USERID:$AMBARI_ADMIN_PASSWORD -X GET http://$AMBARI_SERVER:$AMBARI_SERVER_PORT/api/v1/clusters/${CLUSTER_NAME}?fields=Clusters/desired_configs |grep -e '-env' |awk -F'"' '{print $2}'`
envvars=`echo "$envs"`
cluster_env="cluster-env"
NEWLINE=$'\n'
envvars=`echo $envvars ${NEWLINE} $cluster_env`
### echo $envvars
rm -f amb_srv_usr_backup.txt
echo " "
echo " "
echo " ------------------------------------------------------------------------------------------------ "
echo " NOTE: Current Ambari User List below: They will be backed up to the file amb_srv_usr_backup.txt "
echo " ------------------------------------------------------------------------------------------------ "
for env in $envvars
do
userlist=`/var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_ADMIN_USERID -p $AMBARI_ADMIN_PASSWORD get $AMBARI_SERVER ${CLUSTER_NAME} $env |grep user\" |grep ':'`
####echo $userlist
if [ "$userlist" != "" ]; then
ulist=$(echo "$userlist" | sed 's/,//g')
echo "$ulist"
printf '%s\n' "$ulist"| while IFS= read ul
do
echo $env ": " $ul >> amb_srv_usr_backup.txt
done
fi
done
echo " "
echo " "
echo " Backing up the exiting config for furture Restore amb_srv_usr_backup_$curr_date.txt ... "
cp amb_srv_usr_backup.txt amb_srv_usr_backup_$curr_date.txt
response=0
while [ $response -ne 1 ]
do
echo " "
echo " About to Change the Service account user names ... Response is CASE SENSITIVE YES or NO .... Proceed (YES/NO) ?? "
echo " "
read resp
if ([ $resp == "YES" ] || [ $resp == "NO" ]); then
echo " Response provided is " $resp
if [ $resp == "YES" ]; then
change_user_name
else
echo " Ambari USer Service account change ABORTED ... "
fi
response=1
else
echo " Response provided is " $resp
response=0
fi
done
}
function restore_change_user_name() {
echo " "
echo " Changing Username in progress ..... : "
echo " "
while read line
do
newuservar=`echo $line |awk -F':' '{print $2}'`
newuser=`echo $line |awk -F':' '{print $3}'|sed 's/"//g'`
echo $newuservar ":" $newuser
done < amb_srv_usr_backup_RESTORE.txt
echo " "
echo " Hit ENTER to update the user names ......: "
echo " "
read input
while read line
do
###echo $line |sed 's/://g'
envfile=`echo $line |awk -F':' '{print $1}'`
newuservar=`echo $line |awk -F':' '{print $2}'`
newuser=`echo $line |awk -F':' '{print $3}'|sed 's/"//g' |xargs`
nuser=\"$newuser\"
echo " Updating $envfile with " $newuservar "---" $nuser
setuser=`echo "/var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_ADMIN_USERID -p $AMBARI_ADMIN_PASSWORD set $AMBARI_SERVER ${CLUSTER_NAME} $envfile $newuservar $nuser"`
eval $setuser
done < amb_srv_usr_backup_RESTORE.txt
echo " "
echo " Update Completed. Validating new users ... "
echo " "
while read line
do
envfile=`echo $line |awk -F':' '{print $1}'`
/var/lib/ambari-server/resources/scripts/configs.sh -u $AMBARI_ADMIN_USERID -p $AMBARI_ADMIN_PASSWORD get $AMBARI_SERVER ${CLUSTER_NAME} $envfile |grep user\"
done < amb_srv_usr_backup_RESTORE.txt
}
function restore_user_name(){
echo "Make sure the file to be restore is named as amb_srv_usr_backup_RESTORE.txt ... Enter to proceed "
read input
restore_change_user_name
}
#### Main code Starts HERE ####
if [ "$1" == "UPDATE" ]; then
get_user_name
echo " Deleting residual .json file on the local folder !!! "
rm -f *.json
elif [ "$1" == "RESTORE" ]; then
restore_user_name
echo " Deleting residual .json file on the local folder !!! "
rm -f *.json
else
echo " "
echo " "
echo "Usage: ./ambServActNmChg.sh [UPDATE] [RESTORE] "
echo " "
echo " "
exit 1
fi

Execution:

./ambServActNmChg.sh UPDATE
./ambServActNmChg.sh RESTORE

The update process creates a backup file with the existing user names. We can use that file to restore the original configuration. Note: this script needs to be executed on the Ambari Server.
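For reference, a minimal sketch of the REST calls that configs.sh wraps, using the standard Ambari v1 API (the host, port, and cluster name below are placeholders, not from the script):

# Fetch the current desired_configs (including the tag of each config type, e.g. hive-env)
curl -s -u admin:admin "http://ambarihost:8080/api/v1/clusters/mycluster?fields=Clusters/desired_configs"
# Then pull a specific configuration version using the tag returned above
curl -s -u admin:admin "http://ambarihost:8080/api/v1/clusters/mycluster/configurations?type=hive-env&tag=<tag>"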
06-14-2017
05:17 PM
8 Kudos
Goal: This article provides a script to extract the DDLs for all tables and partitions in a given Hive database. This script comes in handy when migrating or recreating Hive tables from one cluster to another. You can modify and loop this script by passing all the databases via the command line (a hedged sketch follows the script).

#!/bin/bash
hiveDBName=testdbname;
showcreate="show create table "
showpartitions="show partitions "
terminate=";"
tables=`hive -e "use $hiveDBName;show tables;"`
tab_list=`echo "${tables}"`
rm -f ${hiveDBName}_all_table_partition_DDL.txt
for list in $tab_list
do
showcreatetable=${showcreatetable}${showcreate}${list}${terminate}
listpartitions=`hive -e "use $hiveDBName; ${showpartitions}${list}"`
for tablepart in $listpartitions
do
partname=`echo ${tablepart/=/=\"}`
echo $partname
echo "ALTER TABLE $list ADD PARTITION ($partname\");" >> ${hiveDBName}_all_table_partition_DDL.txt
done
done
echo " ====== Create Tables ======= : " $showcreatetable
## Remove the file
rm -f ${hiveDBName}_extract_all_tables.txt
hive -e "use $hiveDBName; ${showcreatetable}" >> ${hiveDBName}_extract_all_tables.txt
03-04-2017
09:22 PM
Pre-requisites:
1. Make sure /etc/hosts has the correct IP addresses and hostnames, and that the master hostnames can be pinged from each other.
3. OpenLDAP to be set up.
4. Make sure the logs for the slapd process are captured in a valid log file, as follows:
vi /etc/rsyslog.conf

Append the following line to log OpenLDAP messages, then restart the rsyslog service:

# Point SLAPD logs to /var/log/slapd.log
local4.* /var/log/slapd.log

service rsyslog restart
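A quick sanity check (hedged; the test message is illustrative) that local4 messages land in the new file:

logger -p local4.info "slapd logging test"
tail -1 /var/log/slapd.log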
HA OpenLDAP Sync Setup:
The following ldif files need to be created in the user's home directory and executed on all the master nodes:
basednupdate.ldif
syncproc_module.ldif
syncproc.ldif
addladpservers.ldif
NOTE: Please change the cn and dc values accordingly.
vi /tmp/basednupdate.ldif
# update your base dn below:
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=srv,dc=world" read by * none
Execute the following command:
ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/basednupdate.ldif
Note: If the above command fails, edit the following file directly and update it:
vi /etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif
Change the cn and dc accordingly:
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=grj,dc=ganeshrj,dc=com" read by * none
Run slaptest -u to check if the update is successful.
vi /tmp/syncproc_module.ldif
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la
Execute:
ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/syncproc_module.ldif
vi /tmp/syncproc.ldif
dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100
olcSpCheckpoint: 10 1
Execute:
ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/syncproc.ldif
vi /tmp/addladpservers.ldif
# Execute this on all master nodes. Make sure the server IDs are incremented for each node:
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1 ldap://ldap1.ganeshrj.com
olcServerID: 2 ldap://ldap2.ganeshrj.com
Execute:
ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/addladpservers.ldif
vi /tmp/master01.ldif

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001 provider=ldap://ldap1.ganeshrj.com:389/ bindmethod=simple scope=sub binddn="cn=grj,dc=ganeshrj,dc=com" credentials=password searchbase="dc=ganeshrj,dc=com" schemachecking=off type=refreshAndPersist retry="5 10 30 +" interval=00:00:00:10
olcSyncRepl: rid=002 provider=ldap://ldap2.ganeshrj.com:389/ bindmethod=simple scope=sub binddn="cn=grj,dc=ganeshrj,dc=com" credentials=password searchbase="dc=ganeshrj,dc=com" schemachecking=off type=refreshAndPersist retry="5 10 30 +" interval=00:00:00:10
-
add: olcMirrorMode
olcMirrorMode: TRUE
-
add: olcDbIndex
olcDbIndex: entryCSN eq
olcDbIndex: entryUUID eq
Execute:
ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/master01.ldif
Note: Make sure master01.ldif is also executed on the other master nodes.
If the above command fails, update the file manually and run slaptest -u:
vi /etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb/olcOverlay={0}syncprov.ldif
Paste the following:
-------------------
olcSyncRepl: rid=001
provider=ldap://ldap1.ganeshrj.com:389/
bindmethod=simple
scope=sub
binddn="cn=grj,dc=ganeshrj,dc=com"
credentials=password
searchbase="dc=ganeshrj,dc=com"
schemachecking=off
type=refreshAndPersist
retry="5 10 30 +"
interval=00:00:00:10
olcSyncRepl: rid=002
provider=ldap://ldap2.ganeshrj.com:389/
bindmethod=simple
scope=sub
binddn="cn=grj,dc=ganeshrj,dc=com"
credentials=password
searchbase="dc=ganeshrj,dc=com"
schemachecking=off
type=refreshAndPersist
retry="5 10 30 +"
interval=00:00:00:10
olcDbIndex: entryUUID eq
olcDbIndex: entryCSN eq
olcMirrorMode: TRUE
Restart the slapd service:
service slapd restart
Check the slapd.log file to see if both the servers are communicating.
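One hedged way to watch for replication activity in the log configured earlier (the grep pattern is an assumption; exact messages vary by OpenLDAP version):

tail -f /var/log/slapd.log | grep -i sync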
Testing:
1. Do an LDAP search to see that the users are getting pulled:
ldapsearch -x -b "dc=ganeshrj,dc=com"
2. Add a user on one of the servers and check that it is replicated:
vi /tmp/Adduser.ldif
dn: uid=ganesh,ou=people,dc=testorg1,dc=ganeshrj,dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Ganesh
sn: Ganesh
uid: ganesh
userPassword: ganesh-password
Execute:
ldapadd -x -D "cn=grj,dc=ganeshrj,dc=com" -W -f /tmp/Adduser.ldif
Then try ldapsearch on both the boxes to validate:
ldapsearch -x -b "dc=ganeshrj,dc=com"
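A minimal sketch to compare entry counts on both masters (hostnames taken from the examples above):

for h in ldap1.ganeshrj.com ldap2.ganeshrj.com
do
  echo -n "$h : "
  ldapsearch -x -H ldap://$h -b "dc=ganeshrj,dc=com" dn | grep -c '^dn:'
done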
OpenLDAP High Availability Setup with SSL/TLS
There are two steps:
1. Creation of a self-signed certificate for OpenLDAP.
2. Update OpenLDAP with the certs created, and update the config and hdb databases with the certificate information.
Creating a Self-Signed Certificate
1. Create a CA first.
2. Create a client OpenLDAP cert.
3. Sign the client OpenLDAP cert with the CA created in step 1.
4. Use the cert (from step 3) with the CA cert (from step 1) in OpenLDAP (a hedged sketch of steps 2 and 3 follows).
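A minimal sketch of steps 2 and 3, assuming the CA created later in this article and illustrative file names:

# Step 2: generate the OpenLDAP server key and a signing request
openssl genrsa -out ldap.key 2048
openssl req -new -key ldap.key -out ldap.csr
# Step 3: sign the request with the CA (uses the defaults from openssl.cnf)
openssl ca -in ldap.csr -out ldap.crt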
Update to openssl.cnf:
This is a "nice to have" change: once the default values are set up, you don't have to key them in every time a new key is generated.
Update /etc/pki/tls/openssl.cnf and change it as follows (this is an example and can be adapted to organizational needs):
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
#countryName_default = XX
countryName_default = US
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
#stateOrProvinceName_default = Default Province
stateOrProvinceName_default = Virginia
localityName = Locality Name (eg, city)
#localityName_default = Default City
localityName_default = Ashburn
0.organizationName = Organization Name (eg, company)
#0.organizationName_default = Default Company Ltd
0.organizationName_default = Unknown Company Ltd
# we can do this but it is not needed normally :-)
#1.organizationName = Second Organization Name (eg, company)
#1.organizationName_default = World Wide Web Pty Ltd
organizationalUnitName = Organizational Unit Name (eg, section)
#organizationalUnitName_default =
Update the validity to 10 years:
default_days = 3650 # how long to certify for
Change the CA certificate file name if needed (Optional):
#certificate = $dir/cacert.pem # The CA certificate
certificate = $dir/ganeshrj.crt # The CA certificate
Change the path to the private key if needed (Optional):
#private_key = $dir/private/cakey.pem
private_key = $dir/private/ca.key
Set up a unique serial number and the index file:
Note: This step should be done only once as part of the setup. The serial file keeps track of the number of certs generated.
cd /etc/pki/CA
echo 01 > serial
touch index.txt
Creating a Certifying Authority (CA)
Now generate the key referenced in openssl.cnf (private_key = $dir/private/cakey.key # The private key). This key will be used to generate the CA certificate:
openssl genrsa -des3 2048 > private/cakey.key
Using the key above, create a CA certificate:
openssl req -new -x509 -key private/cakey.key -out ganeshrj.crt -days 3650
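To verify the generated CA certificate (a hedged check, not part of the original steps):

openssl x509 -in ganeshrj.crt -noout -text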
10-20-2016
07:44 PM
1 Kudo
@Amit Dass Take a look here to start with.
10-19-2016
05:19 PM
1 Kudo
@rama Here are your options:
1. You can go with Falcon Hive replication from Prod to Dev if it's the same cluster config.
2. You can distcp /user/hive/warehouse from PROD to DEV, generate the "create table" DDL statements from Hive, change the NameNode info in the DDL, and recreate the tables in DEV. You also need to generate the manual ALTER TABLE ADD PARTITION statements to get the partitions recognized (a sketch of the copy step follows).
3. You can use Hive Replication to copy the tables.
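A minimal sketch of option 2's copy step, with placeholder NameNode hosts:

hadoop distcp hdfs://prod-nn.example.com:8020/user/hive/warehouse hdfs://dev-nn.example.com:8020/user/hive/warehouse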
10-19-2016
04:48 PM
@Anbu Eswaran Just add the records to a file and LOAD the file into Hive.
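For example, a hedged sketch (the file and table names are illustrative):

hive -e "LOAD DATA LOCAL INPATH '/tmp/records.txt' INTO TABLE mydb.mytable;"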
10-18-2016
02:40 PM
1 Kudo
@Gayathri Reddy G Please check that the user's directory exists on HDFS (/user/<user>) and make sure it is owned by that user. Also, try an INSERT OVERWRITE into that table to test the access.
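A hedged sketch of the check (the username is illustrative; run the mkdir/chown as the HDFS superuser):

hadoop fs -ls /user/gayathri
hadoop fs -mkdir -p /user/gayathri
hadoop fs -chown gayathri:gayathri /user/gayathri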