Member since: 07-10-2017
Posts: 68
Kudos Received: 30
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4138 | 02-20-2018 11:18 AM
 | 3369 | 09-20-2017 02:59 PM
 | 17977 | 09-19-2017 02:22 PM
 | 3573 | 08-03-2017 10:34 AM
 | 2235 | 07-28-2017 10:01 AM
10-13-2017
06:55 AM
Did you run these?

service krb5-kdc restart
service krb5-admin-server restart

Check the status of the two services (service krb5-kdc status) and restart them if they are not running. If they are not installed, install them again: apt-get install krb5-kdc krb5-admin-server
10-13-2017
06:36 AM
If your admin principal is knxadmin/admin@MYDOMAIN, having */admin@MYDOMAIN * alone is enough. Remove the other lines and restart. Also check whether kadmin.local -q 'listprincs' lists your admin principal.
10-13-2017
05:55 AM
Yes, try creating one in that case. Refer to this link: http://manpages.ubuntu.com/manpages/trusty/man5/kadm5.acl.5.html Run service krb5-admin-server restart after adding it.
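If you are creating the file from scratch, a minimal kadm5.acl needs only one line granting full privileges to admin-instance principals (substitute your own realm for MYDOMAIN):

```
*/admin@MYDOMAIN *
```

The first field matches principals; the second lists the permissions they are granted (* means all).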
10-13-2017
04:57 AM
Can you search for this file? find / -iname kadm5.acl Usually it is created when you install krb5-kdc and krb5-admin-server.
10-13-2017
04:16 AM
Check /etc/krb5.conf for correct settings for the domain, kdc, and admin principal. Also check the Kerberos ACL file at the location for your OS:

RHEL/CentOS/Oracle Linux: /var/kerberos/krb5kdc/kadm5.acl
SLES: /var/lib/kerberos/krb5kdc/kadm5.acl
Ubuntu/Debian: /etc/krb5kdc/kadm5.acl

If it has an entry of the kind */admin@HADOOP.COM *, change it to */admin@YOURDOMAIN * and restart the kadmin process. Kerberos treats only principals matching this ACL as valid admins and will let you create user/service principals with them. Alternatively, change your admin principal to knxadmin/admin@YOURDOMAIN.
10-11-2017
03:40 PM
What does Hivectx.sql(s).show() give? Does Hivectx.sql('show tables').show() give the expected output? How are you passing values for $targetdb and $sourcedb?
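If the $targetdb/$sourcedb placeholders are meant to be filled in on the Python side, one common approach is plain string formatting before the statement reaches Hivectx.sql (the database names below are invented for illustration):

```python
# Hypothetical database names, purely for illustration.
sourcedb = "staging"
targetdb = "reporting"

# Build the statement before handing it to Hivectx.sql(s);
# $-style placeholders are not expanded by Spark itself.
s = "INSERT INTO {tgt}.events SELECT * FROM {src}.events".format(
    tgt=targetdb, src=sourcedb)
print(s)
```

Printing s before running it is a quick way to confirm the substitution actually happened.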
09-28-2017
08:06 AM
1 Kudo
Hi, yes, this is the default behavior (if you are placing the file from within a datanode). You can have the replicas distributed by issuing the hadoop fs -put command from a client that is not running a DataNode. According to the docs:

The replica placement strategy is: if the writer is on a datanode, the 1st replica is placed on the local machine, otherwise on a random datanode. The 2nd replica is placed on a datanode that is on a different rack. The 3rd replica is placed on a datanode which is on the same rack as the first replica.
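The quoted policy can be sketched in a few lines of Python (the node and rack names are invented; this is a toy model of the rule, not HDFS's actual BlockPlacementPolicyDefault code):

```python
import random

def place_replicas(writer_node, nodes_by_rack):
    """Toy model of the quoted 3-replica placement rule.

    nodes_by_rack: dict of rack name -> list of datanode names.
    writer_node:   name of the writing host, or None when the
                   client is not itself a datanode.
    Assumes at least two racks and two nodes per rack.
    """
    all_nodes = [n for ns in nodes_by_rack.values() for n in ns]
    rack_of = {n: r for r, ns in nodes_by_rack.items() for n in ns}

    # 1st replica: local machine if the writer is a datanode, else random.
    first = writer_node if writer_node in rack_of else random.choice(all_nodes)
    # 2nd replica: a datanode on a different rack.
    second = random.choice([n for n in all_nodes if rack_of[n] != rack_of[first]])
    # 3rd replica: another datanode on the same rack as the first replica.
    third = random.choice([n for n in nodes_by_rack[rack_of[first]] if n != first])
    return [first, second, third]
```

With a writer that is a datanode, the first replica is always pinned locally, which is exactly why putting the file from a datanode skews placement.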
09-21-2017
06:59 AM
Ok, do this:

A = LOAD 'YYYYMMDD_claims_portal.csv' USING PigStorage(',','-tagFile') AS (filename:chararray, {other columns as per your schema});
y = FOREACH A GENERATE $1 .., SUBSTRING(filename,0,8) AS day;
describe y;
DUMP y;
09-20-2017
02:59 PM
1 Kudo
@Sumee singh Please try this:

A = LOAD 'YYYYMMDD_claims_portal.csv' USING PigStorage(',','-tagFile');
y = FOREACH A GENERATE SUBSTRING($0,0,8), $1 ..;
DUMP y;

(The input file name comes as the first field of the tuple.) You can modify it after this as you wish.
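In plain Python terms, SUBSTRING($0, 0, 8) is just a slice of the filename that -tagFile prepends to each tuple (the sample name below is invented):

```python
filename = "20170921_claims_portal.csv"  # example value tagged by -tagFile
day = filename[0:8]  # same half-open [0, 8) range as Pig's SUBSTRING(filename, 0, 8)
print(day)  # → 20170921
```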
09-19-2017
02:22 PM
1 Kudo
Hi, please try changing

udf = UserDefinedFunction(lambda x: re.sub(',','',x), StringType())

to

udf = UserDefinedFunction(lambda x: re.sub(',','',str(x)), StringType())

Some fields may not be strings, so re.sub throws an exception on them.
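The failure is easy to reproduce outside Spark with plain re.sub (the sample values below are made up):

```python
import re

values = ["1,234", 5678, None]  # mixed types, as a column's rows might hold

# Without str(): re.sub requires a string and fails on non-string input.
try:
    re.sub(',', '', 5678)
except TypeError as e:
    print("TypeError:", e)

# With str(): every value is coerced first, so the lambda never blows up.
cleaned = [re.sub(',', '', str(v)) for v in values]
print(cleaned)  # → ['1234', '5678', 'None']
```

Note that str() turns nulls into the literal string 'None'; if that matters for your data, handle None separately inside the lambda.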