Member since: 02-05-2016
Posts: 26
Kudos Received: 16
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1358 | 09-28-2016 11:11 AM
 | 283 | 09-20-2016 11:03 AM
 | 183 | 09-16-2016 05:00 AM
 | 1339 | 09-15-2016 10:14 AM
 | 387 | 09-15-2016 08:49 AM
11-15-2018
10:46 AM
I got the same error, as below:
Caused by: GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 41 more
Caused by: KrbException: Fail to create credential. (63) - No service creds
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:162)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
I was able to resolve it by deleting the existing krbtgt principals and recreating the krbtgt cross-realm principals on both clusters with the same password. The password must be the same for these principals on both KDCs:
krbtgt/A.COM@B.COM
krbtgt/B.COM@A.COM
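A minimal sketch of those steps with MIT Kerberos kadmin, assuming admin access on both KDCs (the realm names follow the example above; the placeholder password is illustrative):
# Run on the KDC of each realm: drop the stale cross-realm principals...
kadmin.local -q "delete_principal -force krbtgt/A.COM@B.COM"
kadmin.local -q "delete_principal -force krbtgt/B.COM@A.COM"
# ...then recreate them, entering the SAME password on both KDCs
kadmin.local -q "add_principal -pw <same-password> krbtgt/A.COM@B.COM"
kadmin.local -q "add_principal -pw <same-password> krbtgt/B.COM@A.COM"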
10-07-2016
05:16 PM
Internal users are mostly for the applications (Hadoop ecosystem components, e.g. Hive, Storm, HDFS, Kafka, HBase, etc.), and external users are the ones you sync from AD or LDAP; these are the users for which you will set different policies.
09-28-2016
11:11 AM
1 Kudo
This is resolved. After checking /var/log/ambari-metrics-collector/hbase-ams-master-XXXX.log, I found:
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user.
I checked the keytab file under /etc/security/keytabs/ams-hbase.master.keytab; after running klist against it, I found that it had invalid host entries:
klist -ekt ams-hbase.master.keytab
Keytab name: FILE:ams-hbase.master.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (des3-cbc-sha1)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (arcfour-hmac)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (des3-cbc-sha1)
2 07/04/16 03:09:57 host/kdchost1.EXAMPLE.COM@EXAMPLE.COM (arcfour-hmac)
2 07/13/16 11:11:54 kmsusr@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 07/13/16 11:11:54 kmsusr@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 07/13/16 11:11:54 kmsusr@EXAMPLE.COM (des3-cbc-sha1)
2 07/13/16 11:11:54 kmsusr@EXAMPLE.COM (arcfour-hmac)
3 07/13/16 11:14:14 kmsusr@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
3 07/13/16 11:14:14 kmsusr@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
3 07/13/16 11:14:14 kmsusr@EXAMPLE.COM (des3-cbc-sha1)
3 07/13/16 11:14:14 kmsusr@EXAMPLE.COM (arcfour-hmac)
4 07/13/16 11:15:46 kmsusr@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
4 07/13/16 11:15:46 kmsusr@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
4 07/13/16 11:15:46 kmsusr@EXAMPLE.COM (des3-cbc-sha1)
4 07/13/16 11:15:46 kmsusr@EXAMPLE.COM (arcfour-hmac)
2 09/21/16 11:11:23 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 09/21/16 11:11:23 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
2 09/21/16 11:11:23 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (des3-cbc-sha1)
2 09/21/16 11:11:23 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (arcfour-hmac)
3 09/21/16 11:13:40 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
3 09/21/16 11:13:40 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
3 09/21/16 11:13:40 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (des3-cbc-sha1)
3 09/21/16 11:13:40 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (arcfour-hmac)
4 09/21/16 11:15:07 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
4 09/21/16 11:15:07 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
4 09/21/16 11:15:07 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (des3-cbc-sha1)
4 09/21/16 11:15:07 amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM (arcfour-hmac)
So I deleted that keytab file, generated a new keytab with valid entries, placed it on the Metrics Collector host under /etc/security/keytabs, and restarted the Metrics Collector service; the service started.
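A rough sketch of regenerating such a keytab with MIT Kerberos kadmin (the admin principal below is a placeholder; Ambari's Kerberos wizard can also regenerate service keytabs):
# Export fresh keys for the AMS HBase master principal into a new keytab
kadmin -p admin/admin@EXAMPLE.COM -q "xst -k /etc/security/keytabs/ams-hbase.master.keytab amshbase/edgenode.EXAMPLE.COM@EXAMPLE.COM"
# Verify the entries before restarting the Metrics Collector
klist -ekt /etc/security/keytabs/ams-hbase.master.keytab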
09-28-2016
06:35 AM
1 Kudo
Recently we moved ambari-server to another host. The cluster is Kerberized. The Metrics Collector service dies after some time, with the below error in ambari-metrics-collector.log:
ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
09-20-2016
02:44 PM
1 Kudo
@pfctic2 can you go to /etc/yum.repos.d/ and list the configured repositories? If possible, cat the repository file and paste the output here.
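Something along these lines (the .repo file names will differ on your host):
ls /etc/yum.repos.d/
cat /etc/yum.repos.d/*.repo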
09-20-2016
12:44 PM
@Tristan Fily can you share your hue.ini configuration here?
09-20-2016
12:09 PM
1 Kudo
@Roland Simonis most people follow the same process that @Gerd Koenig has suggested; it is the best way as of now. Alternatively, you can provision a new server, keep its host_name and ip_address the same as the old one, and install the master components on it.
09-20-2016
11:57 AM
1 Kudo
I got this error when starting AMBARI_METRICS from the Ambari Web UI after upgrading:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 131, in <module>
    AmsCollector().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 44, in start
    self.configure(env, action = 'start') # for security
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 41, in configure
    ams(name='collector')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py", line 202, in ams
    group=params.user_group
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create
    encoding = self.resource.encoding
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 90, in action_create
    content = self._get_content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 127, in _get_content
    return content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 51, in __call__
    return self.get_content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 142, in get_content
    rendered = self.template.render(self.context)
  File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 5, in top-level template code
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2028' in position 108: ordinal not in range(128)
09-20-2016
11:03 AM
1 Kudo
@Nitin ROOT CAUSE: There is a table named 'namespace' for maintaining table info, which already exists under the /hbase directory. While starting, the HMaster process tries to create the namespace directory under /hbase again, which is why it throws the TableExistsException.
SOLUTION: Manually repair the /hbase metadata using the offline command:
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
Then start the HMaster process. If that does not work, remove the stale znode using the ZooKeeper client, from the HBase docker (or any other docker):
zookeeper-3.4.6/bin/zkCli.sh -server 192.168.1.90
ls /
rmr /hbase
ls /
exit
zookeeper-3.4.6/bin/zkCli.sh -server 192.168.1.91
ls /
exit
09-16-2016
08:07 AM
Add the below properties to core-site.xml using Ambari; if you are not using Ambari, edit core-site.xml directly with the below values.
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.falcon.groups</name>
  <value>*</value>
</property>
Please accept this answer if it answers your question.
09-16-2016
07:55 AM
As per your logs, it looks like the following file does not exist on HDFS:
hdfs://cassandraProd/var/opt/hosting/hadoop/hive/scratchdir/pns/248c8712-4eb6-4926-98cf-fdacc54e3425/hive_2016-09-15_16-00-56_218_8691044763703447835-2/-mr-10002/1b934a7e-fdb2-4279-b104-f3970de433a4/map.xml#map.xml
To check whether the file exists, run:
#hadoop fs -ls /cassandraProd/var/opt/hosting/hadoop/hive/scratchdir/pns/248c8712-4eb6-4926-98cf-fdacc54e3425/hive_2016-09-15_16-00-56_218_8691044763703447835-2/-mr-10002/1b934a7e-fdb2-4279-b104-f3970de433a4/
09-16-2016
07:41 AM
Hi Vijay Kumar J, if your HDFS is HA enabled then you need to make the below config change in hue.ini:
fs_defaultfs=<hdfs nameservice>
HttpFS is needed to provide a centralized WebHDFS interface to an HA-enabled NameNode cluster; to configure and run it, follow the link below:
https://community.hortonworks.com/articles/804/httpfs-configure-and-run-with-hdp-224x.html
Then, in hue.ini, set:
webhdfs_url=http://<your httpfs host>:14000/webhdfs/v1/
resourcemanager_host=<your resource manager host>
Please accept this answer if it answers your question.
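Put together, a minimal hue.ini sketch for such an HA setup (the nameservice and hostnames below are placeholders):
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://mycluster
      webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1/
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=rm-host.example.com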
09-16-2016
07:14 AM
Go to the Hive service config in Ambari, search for hive.heapsize, and edit the HiveServer2 heap size. PFA screenshot.
09-16-2016
05:00 AM
1 Kudo
@J. D. Bacolod I would suggest you go with option 2, because with option 2:
1. you will have more control over operations;
2. there will be fewer (negligible) mistakes;
3. you will stay away from misconfiguration headaches;
4. daily operations and configuration will be quick and simple.
Please find below the Hortonworks documentation for automated installation of HDP using Ambari; please go through all the steps carefully. https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/index.html If you want to jump straight to installing on multiple nodes, go to step 3, "Installing, Configuring, and Deploying a HDP Cluster". If this answers your query, please accept the answer.
09-16-2016
04:43 AM
@Suneet patil: can you let me know how you installed Hue? Was it 1. yum install hue, or 2. from tarballs? If it is 1, go to the node where you installed Hue, run #locate hue.ini (or look at /etc/hue/conf.empty/hue.ini), and make the necessary changes. If it is 2, you will find hue.ini at /usr/local/hue/desktop/conf/hue.ini. If this resolves your issue, please accept the answer.
09-15-2016
10:31 AM
1 Kudo
As you can see in the error log you provided, ERROR Error: That port is already in use. means that port 8000 is already in use. Locate the hue.ini file and change the HTTP port to something else (a port which is currently not in use):
# Webserver listens on this address and port
#http_host=0.0.0.0
http_host=127.0.0.1
http_port=8000
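To find which process holds the port before picking a new one (assuming netstat or lsof is available on the host):
netstat -tlnp | grep 8000
# or
lsof -i :8000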
09-15-2016
08:49 AM
2 Kudos
You are using Red Hat 7, which is registered with RHN, but RHN does not have the packages required for Ambari Server; you need to configure the Ambari repository to install it. Follow the documentation below to configure the repository and install Ambari Server:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Installing_HDP_AMB/content/_download_the_ambari_repo_lnx7.html
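The steps from that documentation boil down to roughly the following (the repo URL is the Ambari 2.2.1.0 / RHEL 7 one; verify it against the linked page for your version):
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.1.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum repolist
yum install ambari-server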
09-15-2016
08:09 AM
1 Kudo
Your cluster is Kerberized, and this issue looks like a Kerberos ticket issue: you don't have a valid Kerberos ticket. To run this command, first get a valid ticket for your user. Type #kinit; it will ask for your Kerberos password. After you enter the password, #klist will display the Kerberos ticket for your user. Now try running the command to connect to Hive.
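For example (user and realm are placeholders):
kinit your_user@EXAMPLE.COM   # prompts for the Kerberos password
klist                         # should now show a valid TGT
# then retry your Hive connection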
07-27-2016
09:14 AM
ntpd is already running and all the nodes are in sync, but I am still facing this issue.
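For reference, sync can be double-checked on each node with:
ntpq -p   # peers should show small offset/jitter on every node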
07-27-2016
07:56 AM
1 Kudo
2016-07-27 03:49:38,874 ERROR [pool-5-thread-20]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Clock skew too great (37))]
at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:177)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Clock skew too great (37))
at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:788)
at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:155)
... 14 more
Caused by: KrbException: Clock skew too great (37)
at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:301)
at sun.security.krb5.KrbApReq.<init>(KrbApReq.java:144)
at sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:108)
at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:771)
... 17 more
2016-07-27 03:50:11,109 ERROR [pool-5-thread-15]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
2016-07-27 03:50:11,109 ERROR [pool-5-thread-20]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
06-15-2016
04:55 PM
@rajdip chaudhuri: is HiveServer2 HA enabled, and are you trying to connect to it via ZooKeeper through Beeline?
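i.e., with a connection string along these lines (ZooKeeper hosts and namespace are placeholders):
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"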
06-15-2016
12:47 PM
@Neeraj Sabharwal: what should be the ideal replication factor to use with setrep if I have 4 datanodes?
03-08-2016
11:32 PM
1 Kudo
Executing a sample job using
curl -iku guest:guest-password -X POST \
  -d arg=/user/guest/knox-sample/input \
  -d arg=/user/guest/knox-sample/output \
  -d jar=/user/guest/knox-sample/lib/hadoop-examples.jar \
  -d class=wordcount \
  https://localhost:8443/gateway/sandbox/templeton/v1/mapreduce/jar
throws an exception:
HTTP/1.1 500 Server Error
Set-Cookie: JSESSIONID=nudt2jfbtz4q1r0sx5p3nauq3;Path=/gateway/sandbox;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Server: Jetty(7.6.0.v20120127)
Content-Type: application/json
Content-Length: 136
{"error":"Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
- Tags:
- knox-gateway
- Security
03-08-2016
11:24 PM
@Kevin Minder thanks Kevin, I had missed adding the knox_sample topology file to the {GATEWAY_HOME}/conf/topologies directory.
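For anyone hitting the same thing, Knox deploys topologies from that directory by file name, so a rough sketch (using the bundled sandbox topology as a template) is:
cp {GATEWAY_HOME}/conf/topologies/sandbox.xml {GATEWAY_HOME}/conf/topologies/knox_sample.xml
# the gateway picks it up and serves it at /gateway/knox_sample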
03-07-2016
08:52 PM
1 Kudo
curl -iku guest:guest-password -X put 'https://localhost:8443/gateway/knox_sample/webhdfs/v1/user/guest/knox-sample?op=MKDIRS&permission=777'
HTTP/1.1 404 Not Found
Content-Length: 0
Server: Jetty(8.1.14.v20131031)
- Tags:
- knox-gateway
- Security