Member since: 06-19-2014
Posts: 78
Kudos Received: 2
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3428 | 04-05-2016 12:07 AM
04-05-2016
12:07 AM
I have solved this problem. I had installed the impala-kudu jar sentry-provider-common-1.4.0-cdh5.5.0-SNAPSHOT.jar, and it mismatches CDH 5.5.1's Sentry. rube
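PS for anyone else hitting this kind of mismatch: one way to confirm which jar a JVM actually loaded a Sentry class from is to ask the classloader for the class's code source. A minimal sketch (my own, not from Cloudera docs; the class name is taken from the stack trace in this thread, and you would run it with the same classpath as the catalog service):

```java
// Print the jar a Sentry class was loaded from, to spot a stale
// sentry-provider-common-1.4.0 jar on an otherwise CDH 5.5.1 classpath.
public class WhichSentryJar {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName(
            "org.apache.sentry.provider.db.service.thrift.SentryPolicyServiceClientDefaultImpl");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}
```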
03-31-2016
06:59 PM
sentry-core-common-1.5.1-cdh5.5.1.jar. The test environment and the production environment have the same problem.
03-30-2016
11:17 PM
hi: CDH 5.5.1 Impala + Sentry, Thrift API protocol version mismatch; the catalog service cannot be started: "Error initialializing Catalog. Please run 'invalidate metadata'"
Java exception follows:
com.cloudera.impala.catalog.CatalogException: Error updating authorization policy:
at com.cloudera.impala.catalog.CatalogServiceCatalog.reset(CatalogServiceCatalog.java:359)
at com.cloudera.impala.service.JniCatalog.<init>(JniCatalog.java:94)
Caused by: com.cloudera.impala.common.ImpalaRuntimeException: Error refreshing authorization policy, current policy state may be inconsistent. Running 'invalidate metadata' may resolve this problem:
at com.cloudera.impala.util.SentryProxy.refresh(SentryProxy.java:306)
at com.cloudera.impala.catalog.CatalogServiceCatalog.reset(CatalogServiceCatalog.java:357)
... 1 more
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: Sentry thrift API protocol version mismatch: Client thrift version is: 1 , server thrift verion is 2. Server Stacktrace: org.apache.sentry.provider.db.SentryThriftAPIMismatchException: Sentry thrift API protocol version mismatch: Client thrift version is: 1 , server thrift verion is 2
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.validateClientVersion(SentryPolicyStoreProcessor.java:856)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.list_sentry_roles_by_group(SentryPolicyStoreProcessor.java:515)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:1013)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:998)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:35)
at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at com.cloudera.impala.util.SentryProxy.refresh(SentryProxy.java:302)
... 2 more
Caused by: java.lang.AssertionError: Sentry thrift API protocol version mismatch: Client thrift version is: 1 , server thrift verion is 2. Server Stacktrace: org.apache.sentry.provider.db.SentryThriftAPIMismatchException: Sentry thrift API protocol version mismatch: Client thrift version is: 1 , server thrift verion is 2
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.validateClientVersion(SentryPolicyStoreProcessor.java:856)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.list_sentry_roles_by_group(SentryPolicyStoreProcessor.java:515)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:1013)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$list_sentry_roles_by_group.getResult(SentryPolicyService.java:998)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.sentry.provider.db.service.thrift.SentryProcessorWrapper.process(SentryProcessorWrapper.java:35)
at org.apache.thrift.TMultiplexedProcessor.process(TMultiplexedProcessor.java:123)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at org.apache.sentry.service.thrift.Status.throwIfNotOk(Status.java:110)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyServiceClientDefaultImpl.listRolesByGroupName(SentryPolicyServiceClientDefaultImpl.java:231)
at org.apache.sentry.provider.db.service.thrift.SentryPolicyServiceClientDefaultImpl.listRoles(SentryPolicyServiceClientDefaultImpl.java:274)
at com.cloudera.impala.util.SentryPolicyService.listAllRoles(SentryPolicyService.java:335)
at com.cloudera.impala.util.SentryProxy$PolicyReader.run(SentryProxy.java:104)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Regards, Rube
Labels:
- Apache Impala
- Apache Sentry
02-28-2016
07:42 PM
https://community.cloudera.com/t5/Batch-SQL-Apache-Hive/hive-ldap-LDAP-error-code-34-invalid-DN/m-p/37586#M1126 CDH 5.5.x LDAP + Hive does not work, but CDH 5.4.x is OK. Can you help me out?
02-04-2016
05:47 PM
hi, "HiveServer2 and the Hive Metastore running with strong authentication. For HiveServer2, strong authentication is either Kerberos or LDAP. For the Hive Metastore, only Kerberos is considered strong authentication." Is that mean if I want sentry work with ldap authentication hive,hive metastore must run with kerbreos,and hive server2 run with ldap.It makes me confused,how to config hive-site.xml. regards rube
Labels:
- Apache Hive
- Apache Sentry
- Kerberos
05-12-2015
06:03 PM
I have used the Sentry service. If I turn this on, there is an error: 1 validation error. Hive Impersonation is enabled for Hive Server2 role 'hiveserver2 (slave-73)'. Hive Impersonation should be disabled to enable Hive authorization using Sentry
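For context: the Cloudera Manager "Hive Impersonation" switch corresponds to the standard Hive property hive.server2.enable.doAs, and Sentry requires it to be false. A small sketch (assumes hadoop-common on the classpath and a hive-site.xml visible to it) that reads the flag:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: check whether HiveServer2 impersonation is still on. With Sentry
// enabled this property must be false, which is what the validation error says.
public class SentryDoAsCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.addResource("hive-site.xml"); // loaded from the classpath, if present
        boolean doAs = conf.getBoolean("hive.server2.enable.doAs", true); // Hive's default is true
        System.out.println("hive.server2.enable.doAs = " + doAs
                + (doAs ? "  -> disable before enabling Sentry" : "  -> OK for Sentry"));
    }
}
```

A side effect of turning impersonation off is that queries run, and YARN jobs are submitted, as the hive service user rather than as the logged-in user.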
05-12-2015
02:00 AM
I logged in to beeline/Hue as the user 'test'. Then I submitted a SQL statement, and on the YARN web UI I can see that the job submitter is 'hive'. Why? Obviously the user should be 'test'. Is there a configuration I missed? (cdh5.2.0) rube
Labels:
- Apache Hive
- Apache YARN
- Cloudera Hue
05-11-2015
08:16 PM
My keytab ticket's maximum lifetime is 7 days. With this API call:
UserGroupInformation.loginUserFromKeytab(conf.get("hbase.master.kerberos.principal"), conf.get("hbase.keytab.path"));
the ticket will expire 7 days later. How do I fix that? (How can I keep accessing HBase indefinitely?) rube
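PS: the usual pattern, as far as I know, is to keep the login fresh from the keytab, since a keytab lets the process obtain new tickets without a password. A minimal sketch (same property names as in the call above; it assumes they are set in the Configuration and that hadoop-common is on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: log in from the keytab once, then periodically re-login so the
// TGT never outlives its maximum life (7 days here).
public class KeytabRelogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                conf.get("hbase.master.kerberos.principal"),
                conf.get("hbase.keytab.path"));
        while (true) {
            // No-op while the TGT is fresh; re-logs in from the keytab near expiry.
            UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            Thread.sleep(60 * 60 * 1000L); // check hourly
        }
    }
}
```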
Tags:
- expires
Labels:
- Apache HBase
01-04-2015
11:42 PM
1 Kudo
hi: cdh5.2.0, Impala 2.0. Every hour I pull data from Hive into Impala (Parquet files). Sometimes a SQL statement like the one below cannot be executed, and restarting the Impala cluster solves the problem. Why?
Query: select id,sub1,(ifnull(sum(click),0)) as clicks
from new_td_impala_test
where (part1>='2014-11-06' and part1<='2014-11-06') and (unix_timestamp(substr(time_stamp,1,19)) >= unix_timestamp('2014-10-31 16:00:00') and unix_timestamp(substr(time_stamp,1,19)) < unix_timestamp('2014-11-07 16:00:00')) group by id,sub1 limit 4
WARNINGS: Create file /tmp/impala-scratch/7241111ac15574e0:abef55f7a589b3a5_7241111ac15574e0:abef55f7a589b3a7_dc8fc8e3-f98c-41e8-a86a-d9c8f7840cdb failed with errno=2 description=Error(2): No such file or directory
Backend 1:Create file /tmp/impala-scratch/7241111ac15574e0:abef55f7a589b3a5_7241111ac15574e0:abef55f7a589b3a7_dc8fc8e3-f98c-41e8-a86a-d9c8f7840cdb failed with errno=2 description=Error(2): No such file or directory
Labels:
- Apache Hive
- Apache Impala
11-05-2014
09:47 PM
So does that mean that if I use CDH 5.0.2 the cluster will install Impala 1.3, and if I use CDH 5.2.0 the Impala version will be 1.4? And if I want to upgrade Impala, should I upgrade CDH first?
07-10-2014
08:11 PM
I chose the same path: 'hdfs://namenode11.yeahmobi.com:8020/tmp/analyst/test'
07-10-2014
02:10 AM
Thanks for your reply! Sentry configuration:
1. server=server1->uri=hdfs://namenode11.yeahmobi.com:8020/tmp/analyst/test
2. The directory is writable and readable; the SQL "INSERT OVERWRITE DIRECTORY 'hdfs://namenode11.yeahmobi.com:8020/tmp/analyst/test' select * from cc_normal_log limit 10" runs OK.
But "Save Query Results > big query in hdfs" still does not work... rube
07-06-2014
11:40 PM
It does not work. The SQL: INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_log;
The error log:
[07/Jul/2014 14:22:53 +0800] views INFO Saved auto design "My saved query" (id 26) for hue
[07/Jul/2014 14:22:54 +0800] dbms ERROR Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\x16\x037i\xeb\x18O\x86\x9b\xa6\x9f\x0f\xde\xd8\xd1 ', guid='\x1cV\xeb\xa5\x88\xd6@\xec\x93(\tt\x101\xb3\x90')), runAsync=True, statement="INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_log"): TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage='Error while compiling statement: FAILED: SemanticException No valid privileges', sqlState='42000', infoMessages=None, statusCode=3), operationHandle=None)
Traceback (most recent call last):
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py", line 402, in execute_and_watch
    handle = self.client.query(query, query_history.statement_number)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 666, in query
    return self._client.execute_async_query(query, statement)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 503, in execute_async_query
    return self.execute_async_statement(statement=query_statement, confOverlay=configuration)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 515, in execute_async_statement
    res = self.call(self._client.ExecuteStatement, req)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 427, in call
    raise QueryServerException(Exception('Bad status for request %s:\n%s' % (req, res)), message=message)
QueryServerException: Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\x16\x037i\xeb\x18O\x86\x9b\xa6\x9f\x0f\xde\xd8\xd1 ', guid='\x1cV\xeb\xa5\x88\xd6@\xec\x93(\tt\x101\xb3\x90')), runAsync=True, statement="INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_normal_log"): TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage='Error while compiling statement: FAILED: SemanticException No valid privileges', sqlState='42000', infoMessages=None, statusCode=3), operationHandle=None)
And the sentry provider file:
analyst_role = server=server1->db=analyst1, \
server=server1->db=jranalyst1->table=*->action=select,\
server=server1->db=default->table=*->action=select,\
server=server1->db=test->table=*->action=select,\
server=server1->db=test->table=*->action=create,\
server=server1->uri=hdfs://namenode11:8020/user/hue/test
06-30-2014
08:10 PM
"Could not save results". The MapReduce job that exports the data couldn't be created, so the query result is not exported to HDFS. Maybe an error occurred before the MapReduce job was created.
06-27-2014
04:34 AM
I have deployed JCE. It does not work. The cluster has 4 nodes; hosts:
172.20.0.11 namenode11.yeahmobi.com namenode11
172.20.0.12 datanode12.yeahmobi.com datanode12
172.20.0.13 datanode13.yeahmobi.com datanode13
172.20.0.14 datanode14.yeahmobi.com datanode14
I guess maybe I missed some configuration. I have enabled Authentication for HTTP Web-Consoles; if I want to access the web UI (e.g. namenode:50070) from a Windows client, what should I do? Should I follow "Integrating Hadoop Security with Alternate Authentication"? http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/cdh5sg_hadoop_security_alternate_authen_integrate.html
06-26-2014
11:36 PM
hello Romain: When I click 'File Browser', this error shows in runcpserver.log:
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
Is there any configuration I missed?
06-26-2014
06:46 PM
Thank you for your reply!
1. I did it from CM.
2. krb5.conf:
....log conf....
[libdefaults]
default_realm = HADOOP.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
HADOOP.COM = {
kdc = datanode14.yeahmobi.com
admin_server = datanode14.yeahmobi.com
}
[domain_realm]
.yeahmobi.com = HADOOP.COM
namenode11 = HADOOP.COM
datanode14 = HADOOP.COM
datanode12 = HADOOP.COM
datanode13 = HADOOP.COM
3. As user hdfs, klist -ef:
Default principal: hdfs@HADOOP.COM
Valid starting  Expires  Service principal
06/26/14 16:31:27  06/27/14 16:31:27  krbtgt/HADOOP.COM@HADOOP.COM
renew until 07/03/14 16:31:27, Flags: FRI
Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
06/26/14 16:31:36  06/27/14 16:31:27  HTTP/namenode11.yeahmobi.com@HADOOP.COM
renew until 07/01/14 16:31:36, Flags: FRT
Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
4. CentOS 6.4
06-26-2014
03:24 AM
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_enable_web_auth_s19.html
After step 19 I restarted the cluster. http://namenode:50070 required a username and password, and I used hdfs and its password.
namenode log:
2014-06-26 17:55:39,907 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: Authentication exception: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:360)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:349)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
at sun.security.jgss.GSSHeader.<init>(GSSHeader.java:97)
at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:306)
at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:327)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:309)
... 41 more
curl -v -u hdfs --negotiate http://namenode:50070 and entering the password worked.
What is the problem? Are the username and password right? (I created the user and password with kadmin.local) rube thx
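PS: if I understand the symptom, "GSSHeader did not find the right tag" usually means the client sent something other than a SPNEGO token (for example the browser's Basic username/password), while curl --negotiate does the real Kerberos handshake, which would be why it works. A small probe, as a sketch (assumes the hadoop-auth jar on the classpath and a valid ticket from kinit; the URL is the one from this post), that authenticates the same way curl does:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

// Sketch: hit the SPNEGO-protected NameNode web UI with a real Kerberos
// handshake, like `curl --negotiate`, instead of Basic auth.
public class SpnegoProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://namenode11.yeahmobi.com:50070/");
        AuthenticatedURL.Token token = new AuthenticatedURL.Token();
        HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
        System.out.println("HTTP " + conn.getResponseCode()
                + ", negotiated token set: " + token.isSet());
    }
}
```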
Labels:
- Apache Hadoop
- HDFS
- Security
06-25-2014
09:53 PM
yes, I export the Hive query result to HDFS with the 'big query in HDFS' method. The /logs page:
[25/Jun/2014 21:50:10 -0700] access WARNING 172.20.0.224 hue - "GET /logs HTTP/1.1"
[25/Jun/2014 21:50:07 -0700] resource DEBUG GET Got response: {"FileStatus":{"accessTime":0,"b...
[25/Jun/2014 21:50:07 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]>
[25/Jun/2014 21:50:07 -0700] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[25/Jun/2014 21:50:07 -0700] kerberos_ DEBUG handle_other(): Handling: 200
[25/Jun/2014 21:50:07 -0700] connectionpool DEBUG "GET /webhdfs/v1/user/hue/kt?op=GETFILESTATUS&user.name=hue&doas=hue HTTP/1.1" 200 None
[25/Jun/2014 21:50:07 -0700] connectionpool DEBUG Setting read timeout to None
[25/Jun/2014 21:50:07 -0700] dbms DEBUG Query Server: {'server_host': 'datanode12.yeahmobi.com', 'server_port': 10000, 'server_name': 'beeswax', 'principal': 'hive/datanode12.yeahmobi.com@HADOOP.COM'}
[25/Jun/2014 21:50:07 -0700] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.GetOperationStatus returned in 1ms: TGetOperationStatusResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0), operationState=2, errorMessage=None, sqlState=None, errorCode=None)
[25/Jun/2014 21:50:07 -0700] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.GetOperationStatus(args=(TGetOperationStatusReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=0, operationId=THandleIdentifier(secret='Y^\xc1\xcb\x93\xaeN\xce\x8ao\x10\x9b\x1f\xf6\xa3\xf2', guid='\xe2\xaaM\xca\xba\x8dK\xf1\xb1\xa2\xb7\x1b\xe3a\x0e\x82'))),), kwargs={})
[25/Jun/2014 21:50:07 -0700] dbms DEBUG Query Server: {'server_host': 'datanode12.yeahmobi.com', 'server_port': 10000, 'server_name': 'beeswax', 'principal': 'hive/datanode12.yeahmobi.com@HADOOP.COM'}
[25/Jun/2014 21:50:07 -0700] access INFO 172.20.0.224 hue - "POST /beeswax/api/query/32/results/save/hdfs/directory HTTP/1.1"
[25/Jun/2014 21:50:04 -0700] thrift_util DEBUG Thrift call <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs returned in 1ms: ThriftJobList(jobs=[])
[25/Jun/2014 21:50:04 -0700] thrift_util DEBUG Thrift call: <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs(args=(RequestContext(confOptions={'effective_user': u'hue'}),), kwargs={})
[25/Jun/2014 21:50:04 -0700] access INFO 172.20.0.224 hue - "GET /jobbrowser/ HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /favicon.ico HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] thrift_util DEBUG Thrift call <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs returned in 1ms: ThriftJobList(jobs=[])
[25/Jun/2014 21:50:00 -0700] thrift_util DEBUG Thrift call: <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs(args=(RequestContext(confOptions={'effective_user': u'hue'}),), kwargs={})
[25/Jun/2014 21:50:00 -0700] access INFO 172.20.0.224 hue - "GET /jobbrowser/ HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /static/art/icon_hue_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /oozie/static/art/icon_oozie_dashboard_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /oozie/static/art/icon_oozie_editor_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /sqoop/static/art/icon_sqoop_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /hbase/static/art/icon_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /zookeeper/static/art/icon_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /metastore/static/art/icon_metastore_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /impala/static/art/icon_impala_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /beeswax/static/art/icon_beeswax_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /rdbms/static/art/icon_rdbms_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /jobsub/static/art/icon_jobsub_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /pig/static/art/icon_pig_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /static/art/hue-logo-mini-white.png HTTP/1.1"
06-25-2014
03:56 AM
When I click 'select a file or directory', error.log prints the error too.
06-25-2014
03:51 AM
cdh5.0.2 + Kerberos security, Hue 3.5. In the jobtracker log, every 5 seconds there is an exception:
2014-06-25 18:44:45,370 ERROR org.apache.hadoop.thriftfs.SanerThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: com.cloudera.hue.org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:237)
at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:235)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:235)
at org.apache.hadoop.thriftfs.SanerThreadPoolServer$WorkerProcess.run(SanerThreadPoolServer.java:277)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.cloudera.hue.org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
at com.cloudera.hue.org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at com.cloudera.hue.org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
I configured security with Cloudera Manager, and Kerberos security now runs normally; the test MapReduce job is OK. Thanks.
06-24-2014
06:54 PM
PS: the kt_renewer service is running.
06-24-2014
06:45 PM
The warnings on the Hue page /about/admin_wizard: "Potential misconfiguration detected. Fix and restart Hue. Impala Editor: No available Impalad to send queries to." No other warnings. It has nothing to do with this warning; it must be some other misconfiguration.
06-24-2014
03:35 AM
cdh5.0.2, Hue 3.5. CDH was configured with Hadoop security via Cloudera Manager. I cannot "Save Query Results > Big Query in HDFS". User: hue. The error:
kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
06-24-2014
03:15 AM
I did it with CM. http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_using_cm_sec_config.html
06-24-2014
03:13 AM
cdh5.0.2, Hue 3.5. While "Save Query Results > In an HDFS file" has not completed, I cannot do anything else in Hue; it just hangs. The exception:
Traceback (most recent call last):
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/core/handlers/base.py", line 111, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/usr/lib/hue/desktop/core/src/desktop/views.py", line 69, in home
    'json_documents': json.dumps(massaged_documents_for_json(docs, request.user)),
  File "/usr/lib/hue/desktop/core/src/desktop/api.py", line 55, in massaged_documents_for_json
    return [massage_doc_for_json(doc, user) for doc in documents]
  File "/usr/lib/hue/desktop/core/src/desktop/api.py", line 59, in massage_doc_for_json
    perms = doc.list_permissions()
  File "/usr/lib/hue/desktop/core/src/desktop/models.py", line 481, in list_permissions
    return DocumentPermission.objects.list(document=self)
  File "/usr/lib/hue/desktop/core/src/desktop/models.py", line 536, in list
    perm, created = DocumentPermission.objects.get_or_create(doc=document, perms=DocumentPermission.READ_PERM)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/manager.py", line 134, in get_or_create
    return self.get_query_set().get_or_create(**kwargs)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/query.py", line 452, in get_or_create
    obj.save(force_insert=True, using=self.db)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/base.py", line 463, in save
    self.save_base(using=using, force_insert=force_insert, force_update=force_update)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/base.py", line 551, in save_base
    result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/manager.py", line 203, in _insert
    return insert_query(self.model, objs, fields, **kwargs)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/query.py", line 1593, in insert_query
    return query.get_compiler(using=using).execute_sql(return_id)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/sql/compiler.py", line 912, in execute_sql
    cursor.execute(sql, params)
  File "/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/backends/sqlite3/base.py", line 344, in execute
    return Database.Cursor.execute(self, query, params)
DatabaseError: database is locked
Help, thank you very much.
Labels:
- Cloudera Hue
- HDFS
06-19-2014
07:40 PM
Is there a way to forbid a user from dropping Hive tables? If I do not want the user to have DDL privileges, how do I do that?
Labels:
- Apache Hive
- Cloudera Hue
- Security