Member since
06-19-2014
07-06-2014
11:40 PM
It does not work. The SQL:

INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_log;

The error log:

[07/Jul/2014 14:22:53 +0800] views INFO Saved auto design "My saved query" (id 26) for hue
[07/Jul/2014 14:22:54 +0800] dbms ERROR Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\x16\x037i\xeb\x18O\x86\x9b\xa6\x9f\x0f\xde\xd8\xd1 ', guid='\x1cV\xeb\xa5\x88\xd6@\xec\x93(\tt\x101\xb3\x90')), runAsync=True, statement="INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_log"): TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage='Error while compiling statement: FAILED: SemanticException No valid privileges', sqlState='42000', infoMessages=None, statusCode=3), operationHandle=None)
Traceback (most recent call last):
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/dbms.py", line 402, in execute_and_watch
    handle = self.client.query(query, query_history.statement_number)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 666, in query
    return self._client.execute_async_query(query, statement)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 503, in execute_async_query
    return self.execute_async_statement(statement=query_statement, confOverlay=configuration)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 515, in execute_async_statement
    res = self.call(self._client.ExecuteStatement, req)
  File "/usr/lib/hue/apps/beeswax/src/beeswax/server/hive_server2_lib.py", line 427, in call
    raise QueryServerException(Exception('Bad status for request %s:\n%s' % (req, res)), message=message)
QueryServerException: Bad status for request TExecuteStatementReq(confOverlay={}, sessionHandle=TSessionHandle(sessionId=THandleIdentifier(secret='\x16\x037i\xeb\x18O\x86\x9b\xa6\x9f\x0f\xde\xd8\xd1 ', guid='\x1cV\xeb\xa5\x88\xd6@\xec\x93(\tt\x101\xb3\x90')), runAsync=True, statement="INSERT OVERWRITE DIRECTORY '/user/hue/test' select * from cc_normal_log"): TExecuteStatementResp(status=TStatus(errorCode=40000, errorMessage='Error while compiling statement: FAILED: SemanticException No valid privileges', sqlState='42000', infoMessages=None, statusCode=3), operationHandle=None)

And the Sentry provider file:

analyst_role = server=server1->db=analyst1, \
    server=server1->db=jranalyst1->table=*->action=select, \
    server=server1->db=default->table=*->action=select, \
    server=server1->db=test->table=*->action=select, \
    server=server1->db=test->table=*->action=create, \
    server=server1->uri=hdfs://namenode11:8020/user/hue/test
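The "SemanticException No valid privileges" error usually means the role's URI privilege does not cover the exact, fully-qualified HDFS URI (scheme, host, and port) that Hive resolves for the target directory. A minimal sketch of that kind of coverage check, assuming the policy-file syntax shown above (this is an illustration, not Sentry's actual matching code):

```python
# Sketch: check whether a Sentry policy-file URI privilege covers a target
# HDFS path. Assumes the "server=...->uri=..." syntax from the policy file
# above; illustrative only, not Sentry's real implementation.
def uri_privilege_covers(privilege, target_uri):
    """A URI privilege covers a target if the target equals the granted
    URI or sits underneath it (prefix match on a path boundary)."""
    parts = dict(p.split("=", 1) for p in privilege.split("->"))
    granted = parts.get("uri", "").rstrip("/")
    target = target_uri.rstrip("/")
    return target == granted or target.startswith(granted + "/")

priv = "server=server1->uri=hdfs://namenode11:8020/user/hue/test"
print(uri_privilege_covers(priv, "hdfs://namenode11:8020/user/hue/test"))  # True
print(uri_privilege_covers(priv, "/user/hue/test"))  # False: unqualified path
```

Note that the INSERT OVERWRITE statement uses the bare path '/user/hue/test', which Hive expands using fs.defaultFS; if that expansion (host or port) differs from the URI string granted in the role, the privilege does not match.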
06-30-2014
08:10 PM
"Could not save results". The MapReduce job that exports the data could not be created, so the query result was not exported to HDFS. Perhaps an error occurred before the MapReduce job was created.
06-27-2014
04:34 AM
I had deployed JCE, but it still does not work. The cluster has 4 nodes; hosts:

172.20.0.11 namenode11.yeahmobi.com namenode11
172.20.0.12 datanode12.yeahmobi.com datanode12
172.20.0.13 datanode13.yeahmobi.com datanode13
172.20.0.14 datanode14.yeahmobi.com datanode14

I guess I may have missed some configuration. I have enabled Authentication for HTTP Web-Consoles. If I want to access the web UI (e.g. namenode:50070) from a Windows client, what should I do? Should I do "Integrating Hadoop Security with Alternate Authentication"? http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Security-Guide/cdh5sg_hadoop_security_alternate_authen_integrate.html
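For reference, accessing a Kerberos-protected web console from a Windows client requires the browser itself to perform SPNEGO (Negotiate) authentication, plus a valid Kerberos ticket on the client. As a sketch, for Firefox the relevant standard preferences look like this (the domain value is an assumption based on the hosts listed above; the client would also need a Kerberos client such as MIT Kerberos for Windows and a ticket obtained via kinit):

```
# Firefox about:config — allow SPNEGO/Negotiate auth against the cluster hosts
network.negotiate-auth.trusted-uris = .yeahmobi.com
```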
06-26-2014
11:36 PM
Hello Romain: when I click 'File Browser', this error shows in runcpserver.log (the same line repeats six times):

[27/Jun/2014 14:31:08 +0800] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response

Is there any configuration I missed?
06-26-2014
06:46 PM
Thank you for your reply!

1. I did it from CM.
2. krb5.conf:

....log conf....
[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
[realms]
 HADOOP.COM = {
  kdc = datanode14.yeahmobi.com
  admin_server = datanode14.yeahmobi.com
 }
[domain_realm]
 .yeahmobi.com = HADOOP.COM
 namenode11 = HADOOP.COM
 datanode14 = HADOOP.COM
 datanode12 = HADOOP.COM
 datanode13 = HADOOP.COM

3. As the hdfs user, klist -ef:

Default principal: hdfs@HADOOP.COM
Valid starting     Expires            Service principal
06/26/14 16:31:27  06/27/14 16:31:27  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 07/03/14 16:31:27, Flags: FRI
        Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
06/26/14 16:31:36  06/27/14 16:31:27  HTTP/namenode11.yeahmobi.com@HADOOP.COM
        renew until 07/01/14 16:31:36, Flags: FRT
        Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96

4. CentOS 6.4
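The klist output above shows aes256-cts-hmac-sha1-96 tickets. On Oracle JDKs of that era, AES-256 Kerberos keys require the JCE Unlimited Strength policy files on every node running a JVM, so it can help to check which etypes in play actually need them. A minimal sketch (the etype names follow standard MIT Kerberos naming; this is an illustration, not a Cloudera tool):

```python
# Sketch: flag Kerberos encryption types that require the JCE Unlimited
# Strength Jurisdiction Policy files on older Oracle JDKs (AES-256 variants).
# Illustrative only; etype names follow standard MIT Kerberos naming.
JCE_REQUIRED_ETYPES = {"aes256-cts-hmac-sha1-96", "aes256-cts-hmac-sha384-192"}

def needs_jce(etype):
    """True if tickets using this etype need unlimited-strength JCE."""
    return etype in JCE_REQUIRED_ETYPES

print(needs_jce("aes256-cts-hmac-sha1-96"))  # True: needs unlimited-strength JCE
print(needs_jce("aes128-cts-hmac-sha1-96"))  # False: within default JDK limits
```

If JCE is deployed on some nodes but not all, services on the nodes without it cannot read AES-256 keys, which produces exactly these kinds of authentication failures.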
06-26-2014
03:24 AM
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_enable_web_auth_s19.html

After step 19 I restarted the cluster. http://namenode:50070 now requires a username and password, and I used hdfs and its password. The namenode log:

2014-06-26 17:55:39,907 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: Authentication exception: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:360)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:349)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327)
    at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126)
    at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
    at sun.security.jgss.GSSHeader.<init>(GSSHeader.java:97)
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:306)
    at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:327)
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:309)
    ... 41 more

However, curl -v -u hdfs --negotiate http://namenode:50070 (entering the password) worked. What is the problem? Is the username and password right? (I created the user and password with kadmin.local.) Thanks.
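The "GSSHeader did not find the right tag" error typically means the browser sent an HTTP Basic credential (from the username/password prompt) rather than a SPNEGO/Kerberos token: the Kerberos filter then tries to parse the base64 Basic payload as a GSS token, whose DER encoding must begin with the [APPLICATION 0] tag byte 0x60. A minimal sketch of that distinction (illustrative only, not Hadoop's actual code):

```python
import base64

# Sketch: classify an HTTP Authorization header the way a SPNEGO endpoint
# effectively does. A real GSS-API/SPNEGO token is DER-encoded and starts
# with tag byte 0x60; a Basic credential is just base64("user:password")
# and will not. Illustrative only, not Hadoop's implementation.
def classify_authorization(header):
    scheme, _, payload = header.partition(" ")
    raw = base64.b64decode(payload)
    if scheme == "Negotiate" and raw[:1] == b"\x60":
        return "spnego-token"
    return "not-a-gss-token"  # e.g. Basic auth from a login prompt

basic = "Basic " + base64.b64encode(b"hdfs:secret").decode()
print(classify_authorization(basic))  # not-a-gss-token

tok = "Negotiate " + base64.b64encode(b"\x60\x06\x06\x05\x2b\x05\x01\x05\x02").decode()
print(classify_authorization(tok))  # spnego-token
```

This matches the curl behavior: curl --negotiate obtains a real Kerberos service ticket and sends a Negotiate token, which is why it worked while the browser's Basic prompt did not. The kadmin.local password is not what the web console checks; SPNEGO needs a ticket, not a password.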
Labels:
- Apache Hadoop
- HDFS
- Security
06-25-2014
09:53 PM
Yes, I export the Hive query result to HDFS with the 'big query in HDFS' method. The /logs:

[25/Jun/2014 21:50:10 -0700] access WARNING 172.20.0.224 hue - "GET /logs HTTP/1.1"
[25/Jun/2014 21:50:07 -0700] resource DEBUG GET Got response: {"FileStatus":{"accessTime":0,"b...
[25/Jun/2014 21:50:07 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]>
[25/Jun/2014 21:50:07 -0700] kerberos_ ERROR handle_other(): Mutual authentication unavailable on 200 response
[25/Jun/2014 21:50:07 -0700] kerberos_ DEBUG handle_other(): Handling: 200
[25/Jun/2014 21:50:07 -0700] connectionpool DEBUG "GET /webhdfs/v1/user/hue/kt?op=GETFILESTATUS&user.name=hue&doas=hue HTTP/1.1" 200 None
[25/Jun/2014 21:50:07 -0700] connectionpool DEBUG Setting read timeout to None
[25/Jun/2014 21:50:07 -0700] dbms DEBUG Query Server: {'server_host': 'datanode12.yeahmobi.com', 'server_port': 10000, 'server_name': 'beeswax', 'principal': 'hive/datanode12.yeahmobi.com@HADOOP.COM'}
[25/Jun/2014 21:50:07 -0700] thrift_util DEBUG Thrift call <class 'TCLIService.TCLIService.Client'>.GetOperationStatus returned in 1ms: TGetOperationStatusResp(status=TStatus(errorCode=None, errorMessage=None, sqlState=None, infoMessages=None, statusCode=0), operationState=2, errorMessage=None, sqlState=None, errorCode=None)
[25/Jun/2014 21:50:07 -0700] thrift_util DEBUG Thrift call: <class 'TCLIService.TCLIService.Client'>.GetOperationStatus(args=(TGetOperationStatusReq(operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None, operationType=0, operationId=THandleIdentifier(secret='Y^\xc1\xcb\x93\xaeN\xce\x8ao\x10\x9b\x1f\xf6\xa3\xf2', guid='\xe2\xaaM\xca\xba\x8dK\xf1\xb1\xa2\xb7\x1b\xe3a\x0e\x82'))),), kwargs={})
[25/Jun/2014 21:50:07 -0700] dbms DEBUG Query Server: {'server_host': 'datanode12.yeahmobi.com', 'server_port': 10000, 'server_name': 'beeswax', 'principal': 'hive/datanode12.yeahmobi.com@HADOOP.COM'}
[25/Jun/2014 21:50:07 -0700] access INFO 172.20.0.224 hue - "POST /beeswax/api/query/32/results/save/hdfs/directory HTTP/1.1"
[25/Jun/2014 21:50:04 -0700] thrift_util DEBUG Thrift call <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs returned in 1ms: ThriftJobList(jobs=[])
[25/Jun/2014 21:50:04 -0700] thrift_util DEBUG Thrift call: <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs(args=(RequestContext(confOptions={'effective_user': u'hue'}),), kwargs={})
[25/Jun/2014 21:50:04 -0700] access INFO 172.20.0.224 hue - "GET /jobbrowser/ HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /favicon.ico HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] thrift_util DEBUG Thrift call <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs returned in 1ms: ThriftJobList(jobs=[])
[25/Jun/2014 21:50:00 -0700] thrift_util DEBUG Thrift call: <class 'hadoop.api.jobtracker.Jobtracker.Client'>.getRunningJobs(args=(RequestContext(confOptions={'effective_user': u'hue'}),), kwargs={})
[25/Jun/2014 21:50:00 -0700] access INFO 172.20.0.224 hue - "GET /jobbrowser/ HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /static/art/icon_hue_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /oozie/static/art/icon_oozie_dashboard_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /oozie/static/art/icon_oozie_editor_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /sqoop/static/art/icon_sqoop_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /hbase/static/art/icon_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /zookeeper/static/art/icon_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /metastore/static/art/icon_metastore_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /impala/static/art/icon_impala_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /beeswax/static/art/icon_beeswax_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /rdbms/static/art/icon_rdbms_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /jobsub/static/art/icon_jobsub_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /pig/static/art/icon_pig_24.png HTTP/1.1"
[25/Jun/2014 21:50:00 -0700] access DEBUG 172.20.0.224 hue - "GET /static/art/hue-logo-mini-white.png HTTP/1.1"
06-25-2014
03:56 AM
When I click 'select a file or directory', error.log prints the error too.
06-25-2014
03:51 AM
CDH 5.0.2 + Kerberos security, Hue 3.5. In the JobTracker log, every 5 seconds there is an exception:

2014-06-25 18:44:45,370 ERROR org.apache.hadoop.thriftfs.SanerThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: com.cloudera.hue.org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
    at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:237)
    at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:235)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:356)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
    at org.apache.hadoop.thriftfs.HadoopThriftAuthBridge$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:235)
    at org.apache.hadoop.thriftfs.SanerThreadPoolServer$WorkerProcess.run(SanerThreadPoolServer.java:277)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: com.cloudera.hue.org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
    at com.cloudera.hue.org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:190)
    at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
    at com.cloudera.hue.org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
    at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at com.cloudera.hue.org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 10 more

I configured security with Cloudera Manager, and Kerberos now runs normally; the test MapReduce job is OK. Thanks.
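A fixed 5-second recurrence points at Hue's periodic JobTracker poll (its thrift client failing GSS negotiation every cycle) rather than user actions. One way to confirm the cadence is to diff the log timestamps; a minimal sketch using the log's timestamp format (the timestamps below are made up for illustration):

```python
from datetime import datetime

# Sketch: measure the interval between repeated 'GSS initiate failed'
# log entries to confirm a fixed polling cadence. Timestamps below are
# illustrative, in the JobTracker log's "YYYY-MM-DD HH:MM:SS,mmm" format.
def intervals(timestamps, fmt="%Y-%m-%d %H:%M:%S,%f"):
    times = [datetime.strptime(t, fmt) for t in timestamps]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

ts = ["2014-06-25 18:44:45,370", "2014-06-25 18:44:50,371", "2014-06-25 18:44:55,370"]
print(intervals(ts))  # intervals of roughly 5 seconds
```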