Member since: 04-11-2016
Posts: 174
Kudos Received: 29
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3339 | 06-28-2017 12:24 PM
 | 2519 | 06-09-2017 07:20 AM
 | 7021 | 08-18-2016 11:39 AM
 | 5135 | 08-12-2016 09:05 AM
 | 5294 | 08-09-2016 09:24 AM
09-09-2016
10:12 AM
1 Kudo
A prod cluster is already in place: HDP-2.4.2.0-258, installed using Ambari 2.2.2.0. The existing and upcoming scenarios are:

- There are various 'actors' - Hadoop developers and admins, data scientists, enthusiasts etc. - who currently download and use the HDP sandbox on their local machines
- The prod cluster holds a lot of data, and it is NOT advisable to let a large number of users in right away
- The idea is to have a central system from which a large number of users can spawn/download and install their own sandboxes, each a tiny image of the prod cluster in terms of data and services
- It's indispensable that this system allow the users to decide what subset of the data they want to include in their sandbox

I have a few thoughts:

- Maybe it's sensible to provide a centralized download of the latest HDP sandbox; however, this may differ from the prod cluster in version and otherwise (maybe far ahead!)
- While the users would be willing to execute queries/drag-drop tables, files etc. to select the data they want, almost none would be prepared to load this data manually from production into their own sandboxes (see the DistCp sketch below)
- Maybe there are existing tools that can be used to do this

Can the community help me assess the viability of this requirement, or suggest alternatives?
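One possible building block for the manual copy step - an assumption on my part, not a vetted solution - is DistCp, which can replicate a user-selected HDFS subtree from the prod cluster into a sandbox (hostnames and paths below are placeholders):

# Copy one chosen dataset from the prod cluster into a sandbox
hadoop distcp \
  hdfs://prod-nn.company.com:8020/data/chosen-subset \
  hdfs://sandbox-host.company.com:8020/data/chosen-subset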
09-08-2016
10:06 PM
@Alexandru Anghel I have edited my original question (see EDIT-1: New Zeppelin version (zeppelin-0.6.1-bin-all.tgz)) to include the progress and the new issue faced after installing the latest stable version of Zeppelin (0.6.1) (it may need some time to show up, as it's under moderation)
09-08-2016
10:06 PM
@Alexandru Anghel For now, I wish to continue with the existing versions, but I will try Zeppelin 0.6.2. Well, I discovered several facts:

- On cancelling the log-in, the UI is shown, but with a 'Login' button in the top right corner
- I had NOT created any conf/zeppelin-site.xml (only a template file exists there), though I tried using ldapRealm.userDnTemplate=CN={0},CN=devadmin,ou=Group,dc=company,dc=SE - same result

Is there any way the exact cause/error can be captured (the Zeppelin log errors I have already provided)?
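On capturing the exact cause: a sketch, assuming the standard Zeppelin conf layout (install path taken from my EDIT-1 logs; adjust for the 0.6.0 install), that raises Shiro's log level so the realm's LDAP bind attempts and failures show up in the Zeppelin log:

# Raise Shiro logging to DEBUG so bind attempts/failures are logged
echo "log4j.logger.org.apache.shiro = DEBUG" >> /usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/conf/log4j.properties
# Restart Zeppelin to pick up the change
/usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/bin/zeppelin-daemon.sh restart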
09-08-2016
10:06 PM
Yeah, I read about that, but does that mean that with HDP 2.4, Zeppelin cannot be secured the way I am trying? Can Hue or some other component help?
09-08-2016
10:06 PM
HDP-2.4.2.0-258 installed using Ambari 2.2.2.0

I installed Zeppelin (0.6.0.2.4.2.0-258) manually and was able to execute several paragraphs in a notebook. Now I wish to secure it step by step, starting with LDAP-integrated authentication for the web UI, i.e. when a user enters his credentials after hitting http://<zeppelin_server_hostname>:9995/, he can proceed only if he is present in at least one of several Unix LDAP groups:

- devdatalakeadm
- datascientist
- developer

I tried the approaches in the Hortonworks article, the Hortonworks Zeppelin tutorial, the Apache Zeppelin docs etc., but keep getting one error or another; currently I am focusing on just one LDAP group. The conf/shiro.ini file:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = password1
#user1 = password2, role1, role2
#user2 = password3, role3
#user3 = password4, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
#ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
ldapRealm = org.apache.shiro.realm.ldap.JndiLdapRealm
ldapRealm.contextFactory.environment[ldap.searchBase]=dc=company,dc=SE
ldapRealm.userDnTemplate = uid={0},CN=devadmin,ou=Group,dc=company,dc=SE
ldapRealm.contextFactory.url = ldap://unix-ldap.company.com:389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
shiro.loginUrl = /api/login
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/** = anon
/** = authcBasic

For the ldapRealm, if I provide org.apache.zeppelin.server.LdapGroupRealm, I get the following error and Zeppelin fails to start:

ERROR [2016-09-05 14:26:14,996] ({main} ZeppelinServer.java[main]:117) - Error while running jettyServer
org.apache.shiro.config.ConfigurationException: Unable to instantiate class [org.apache.zeppelin.server.LdapGroupRealm] for object named 'ldapRealm'. Please ensure you've specified the fully qualified class name correctly.
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:151)
at org.apache.shiro.config.ReflectionBuilder.buildObjects(ReflectionBuilder.java:119)
at org.apache.shiro.config.IniSecurityManagerFactory.buildInstances(IniSecurityManagerFactory.java:161)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:124)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:102)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:88)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:46)
at org.apache.shiro.config.IniFactorySupport.createInstance(IniFactorySupport.java:123)
at org.apache.shiro.util.AbstractFactory.getInstance(AbstractFactory.java:47)
at org.apache.shiro.web.env.IniWebEnvironment.createWebSecurityManager(IniWebEnvironment.java:203)
at org.apache.shiro.web.env.IniWebEnvironment.configure(IniWebEnvironment.java:99)
at org.apache.shiro.web.env.IniWebEnvironment.init(IniWebEnvironment.java:92)
at org.apache.shiro.util.LifecycleUtils.init(LifecycleUtils.java:45)
at org.apache.shiro.util.LifecycleUtils.init(LifecycleUtils.java:40)
at org.apache.shiro.web.env.EnvironmentLoader.createEnvironment(EnvironmentLoader.java:221)
at org.apache.shiro.web.env.EnvironmentLoader.initEnvironment(EnvironmentLoader.java:133)
at org.apache.shiro.web.env.EnvironmentLoaderListener.contextInitialized(EnvironmentLoaderListener.java:58)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:782)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:774)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:717)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:229)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:172)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:95)
at org.eclipse.jetty.server.Server.doStart(Server.java:282)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:115)
Caused by: org.apache.shiro.util.UnknownClassException: Unable to load class named [org.apache.zeppelin.server.LdapGroupRealm] from the thread context, current, or system/application ClassLoaders. All heuristics have been exhausted. Class could not be found.
at org.apache.shiro.util.ClassUtils.forName(ClassUtils.java:148)
at org.apache.shiro.util.ClassUtils.newInstance(ClassUtils.java:164)
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:144)
... 29 more

If I use org.apache.shiro.realm.ldap.JndiLdapRealm, Zeppelin starts successfully, but:

- When accessing http://<zeppelin_server_hostname>:9995/, I get a username/password prompt in the browser
- I enter my credentials, and the log-in apparently fails, as the prompt reappears
- If I cancel instead of entering the username and password, I get the Zeppelin UI (that's crazy!)

The error:

ERROR [2016-09-05 14:29:36,153] ({qtp762227630-30} NotebookServer.java[onMessage]:207) - Can't handle message
java.lang.Exception: Invalid ticket != 16731c36-4f7e-4dd6-b567-8da934aeecd0
at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:113)
at org.apache.zeppelin.socket.NotebookSocket.onMessage(NotebookSocket.java:56)
at org.eclipse.jetty.websocket.WebSocketConnectionRFC6455$WSFrameHandler.onFrame(WebSocketConnectionRFC6455.java:835)
at org.eclipse.jetty.websocket.WebSocketParserRFC6455.parseNext(WebSocketParserRFC6455.java:349)
at org.eclipse.jetty.websocket.WebSocketConnectionRFC6455.handle(WebSocketConnectionRFC6455.java:225)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
(the same "Can't handle message ... Invalid ticket" error and stack trace then repeat roughly every 10 seconds, from 14:29:36 through 14:30:16)
EDIT-1: New Zeppelin version (zeppelin-0.6.1-bin-all.tgz)

I am running the new version on the same machine as Ambari and the existing/older Zeppelin version. Despite entering valid credentials, I get an LDAP authentication exception:

INFO [2016-09-08 11:46:05,017] ({main} Log.java[initialized]:186) - Logging initialized @356ms
INFO [2016-09-08 11:46:05,089] ({main} ZeppelinServer.java[setupWebAppContext]:266) - ZeppelinServer Webapp path: /usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/webapps
INFO [2016-09-08 11:46:05,301] ({main} AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or cacheManager properties have been set. Authorization cache cannot be obtained.
INFO [2016-09-08 11:46:05,345] ({main} ZeppelinServer.java[main]:114) - Starting zeppelin server
INFO [2016-09-08 11:46:05,349] ({main} Server.java[doStart]:327) - jetty-9.2.15.v20160210
INFO [2016-09-08 11:46:05,515] ({main} StandardDescriptorProcessor.java[visitServlet]:297) - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
INFO [2016-09-08 11:46:05,529] ({main} ContextHandler.java[log]:2052) - Initializing Shiro environment
INFO [2016-09-08 11:46:05,529] ({main} EnvironmentLoader.java[initEnvironment]:128) - Starting Shiro environment initialization.
INFO [2016-09-08 11:46:05,591] ({main} AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or cacheManager properties have been set. Authorization cache cannot be obtained.
INFO [2016-09-08 11:46:05,596] ({main} EnvironmentLoader.java[initEnvironment]:141) - Shiro environment initialized in 67 ms.
WARN [2016-09-08 11:46:05,601] ({main} ServletHolder.java[getNameOfJspClass]:923) - Unable to make identifier for jsp rest trying rest instead
ERROR [2016-09-08 11:46:05,819] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:05,820] ({main} InterpreterFactory.java[init]:154) - Interpreter alluxio.alluxio found. class=org.apache.zeppelin.alluxio.AlluxioInterpreter
ERROR [2016-09-08 11:46:05,825] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:05,825] ({main} InterpreterFactory.java[init]:154) - Interpreter angular.angular found. class=org.apache.zeppelin.angular.AngularInterpreter
INFO [2016-09-08 11:46:05,862] ({main} InterpreterFactory.java[init]:154) - Interpreter bigquery.sql found. class=org.apache.zeppelin.bigquery.BigQueryInterpreter
INFO [2016-09-08 11:46:05,895] ({main} CassandraInterpreter.java[<clinit>]:155) - Bootstrapping Cassandra Interpreter
ERROR [2016-09-08 11:46:05,896] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:05,896] ({main} InterpreterFactory.java[init]:154) - Interpreter cassandra.cassandra found. class=org.apache.zeppelin.cassandra.CassandraInterpreter
ERROR [2016-09-08 11:46:05,933] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:05,934] ({main} InterpreterFactory.java[init]:154) - Interpreter elasticsearch.elasticsearch found. class=org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter
ERROR [2016-09-08 11:46:05,948] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:05,948] ({main} InterpreterFactory.java[init]:154) - Interpreter file.hdfs found. class=org.apache.zeppelin.file.HDFSFileInterpreter
INFO [2016-09-08 11:46:06,007] ({main} InterpreterFactory.java[init]:154) - Interpreter flink.flink found. class=org.apache.zeppelin.flink.FlinkInterpreter
ERROR [2016-09-08 11:46:06,072] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,072] ({main} InterpreterFactory.java[init]:154) - Interpreter hbase.hbase found. class=org.apache.zeppelin.hbase.HbaseInterpreter
ERROR [2016-09-08 11:46:06,103] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,103] ({main} InterpreterFactory.java[init]:154) - Interpreter ignite.ignite found. class=org.apache.zeppelin.ignite.IgniteInterpreter
ERROR [2016-09-08 11:46:06,104] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,104] ({main} InterpreterFactory.java[init]:154) - Interpreter ignite.ignitesql found. class=org.apache.zeppelin.ignite.IgniteSqlInterpreter
INFO [2016-09-08 11:46:06,122] ({main} InterpreterFactory.java[init]:154) - Interpreter jdbc.sql found. class=org.apache.zeppelin.jdbc.JDBCInterpreter
ERROR [2016-09-08 11:46:06,131] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,132] ({main} InterpreterFactory.java[init]:154) - Interpreter kylin.kylin found. class=org.apache.zeppelin.kylin.KylinInterpreter
ERROR [2016-09-08 11:46:06,188] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,189] ({main} InterpreterFactory.java[init]:154) - Interpreter lens.lens found. class=org.apache.zeppelin.lens.LensInterpreter
ERROR [2016-09-08 11:46:06,212] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,212] ({main} InterpreterFactory.java[init]:154) - Interpreter livy.spark found. class=org.apache.zeppelin.livy.LivySparkInterpreter
ERROR [2016-09-08 11:46:06,216] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,216] ({main} InterpreterFactory.java[init]:154) - Interpreter livy.pyspark found. class=org.apache.zeppelin.livy.LivyPySparkInterpreter
ERROR [2016-09-08 11:46:06,217] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,217] ({main} InterpreterFactory.java[init]:154) - Interpreter livy.sparkr found. class=org.apache.zeppelin.livy.LivySparkRInterpreter
INFO [2016-09-08 11:46:06,218] ({main} InterpreterFactory.java[init]:154) - Interpreter livy.sql found. class=org.apache.zeppelin.livy.LivySparkSQLInterpreter
ERROR [2016-09-08 11:46:06,222] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,222] ({main} InterpreterFactory.java[init]:154) - Interpreter md.md found. class=org.apache.zeppelin.markdown.Markdown
ERROR [2016-09-08 11:46:06,232] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,233] ({main} InterpreterFactory.java[init]:154) - Interpreter psql.sql found. class=org.apache.zeppelin.postgresql.PostgreSqlInterpreter
ERROR [2016-09-08 11:46:06,240] ({main} Interpreter.java[register]:315) - Static initialization is deprecated. You should change it to use interpreter-setting.json in your jar or interpreter/{interpreter}/interpreter-setting.json
INFO [2016-09-08 11:46:06,240] ({main} InterpreterFactory.java[init]:154) - Interpreter python.python found. class=org.apache.zeppelin.python.PythonInterpreter
INFO [2016-09-08 11:46:06,248] ({main} InterpreterFactory.java[init]:154) - Interpreter sh.sh found. class=org.apache.zeppelin.shell.ShellInterpreter
INFO [2016-09-08 11:46:06,413] ({main} InterpreterFactory.java[init]:154) - Interpreter spark.spark found. class=org.apache.zeppelin.spark.SparkInterpreter
INFO [2016-09-08 11:46:06,415] ({main} InterpreterFactory.java[init]:154) - Interpreter spark.pyspark found. class=org.apache.zeppelin.spark.PySparkInterpreter
INFO [2016-09-08 11:46:06,418] ({main} InterpreterFactory.java[init]:154) - Interpreter spark.r found. class=org.apache.zeppelin.spark.SparkRInterpreter
INFO [2016-09-08 11:46:06,419] ({main} InterpreterFactory.java[init]:154) - Interpreter spark.sql found. class=org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2016-09-08 11:46:06,420] ({main} InterpreterFactory.java[init]:154) - Interpreter spark.dep found. class=org.apache.zeppelin.spark.DepInterpreter
INFO [2016-09-08 11:46:06,437] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group angular : id=2BVXP3PZM, name=angular
INFO [2016-09-08 11:46:06,437] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group md : id=2BUZ75MW2, name=md
INFO [2016-09-08 11:46:06,437] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group alluxio : id=2BVFEWB5S, name=alluxio
INFO [2016-09-08 11:46:06,437] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group psql : id=2BX5GS8CM, name=psql
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group jdbc : id=2BUTPYPSJ, name=jdbc
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group lens : id=2BVRSAGY7, name=lens
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group hbase : id=2BXPDVZ2D, name=hbase
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group cassandra : id=2BXZM149V, name=cassandra
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group kylin : id=2BW73AW1W, name=kylin
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group elasticsearch : id=2BX4SVYDE, name=elasticsearch
INFO [2016-09-08 11:46:06,438] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group python : id=2BWU8NAJN, name=python
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group livy : id=2BUY5977F, name=livy
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group flink : id=2BWKEGFMT, name=flink
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group ignite : id=2BWT4SB6V, name=ignite
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group spark : id=2BXJ91NCU, name=spark
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group sh : id=2BXD1EJ7Q, name=sh
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group bigquery : id=2BVY56RAA, name=bigquery
INFO [2016-09-08 11:46:06,439] ({main} InterpreterFactory.java[init]:218) - Interpreter setting group file : id=2BW4YR6DA, name=file
INFO [2016-09-08 11:46:06,452] ({main} VfsLog.java[info]:138) - Using "/tmp/vfs_cache" as temporary files store.
INFO [2016-09-08 11:46:06,599] ({main} NotebookAuthorization.java[loadFromFile]:58) - /usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/conf/notebook-authorization.json
INFO [2016-09-08 11:46:06,600] ({main} Credentials.java[loadFromFile]:71) - /usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/conf/credentials.json
INFO [2016-09-08 11:46:06,628] ({main} StdSchedulerFactory.java[instantiate]:1184) - Using default implementation for ThreadExecutor
INFO [2016-09-08 11:46:06,630] ({main} SimpleThreadPool.java[initialize]:268) - Job execution threads will use class loader of thread: main
INFO [2016-09-08 11:46:06,642] ({main} SchedulerSignalerImpl.java[<init>]:61) - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
INFO [2016-09-08 11:46:06,643] ({main} QuartzScheduler.java[<init>]:240) - Quartz Scheduler v.2.2.1 created.
INFO [2016-09-08 11:46:06,644] ({main} RAMJobStore.java[initialize]:155) - RAMJobStore initialized.
INFO [2016-09-08 11:46:06,645] ({main} QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
INFO [2016-09-08 11:46:06,645] ({main} StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO [2016-09-08 11:46:06,645] ({main} StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version: 2.2.1
INFO [2016-09-08 11:46:06,645] ({main} QuartzScheduler.java[start]:575) - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
INFO [2016-09-08 11:46:06,873] ({main} Notebook.java[<init>]:121) - Notebook indexing started...
INFO [2016-09-08 11:46:07,113] ({main} LuceneSearch.java[addIndexDocs]:305) - Indexing 3 notebooks took 239ms
INFO [2016-09-08 11:46:07,113] ({main} Notebook.java[<init>]:123) - Notebook indexing finished: 3 indexed in 0s
INFO [2016-09-08 11:46:07,227] ({main} ServerImpl.java[initDestination]:94) - Setting the server's publish address to be /
INFO [2016-09-08 11:46:07,876] ({main} ContextHandler.java[doStart]:744) - Started o.e.j.w.WebAppContext@4c6e276e{/,file:/usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/webapps/webapp/,AVAILABLE}{/usr/share/dumphere/installhere/zeppelin-0.6.1-bin-all/zeppelin-web-0.6.1.war}
INFO [2016-09-08 11:46:07,887] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector@433348bc{HTTP/1.1}{l4373t.sss.se.com:9996}
INFO [2016-09-08 11:46:07,887] ({main} Server.java[doStart]:379) - Started @3230ms
INFO [2016-09-08 11:46:07,887] ({main} ZeppelinServer.java[main]:121) - Done, zeppelin server started
INFO [2016-09-08 11:46:08,116] ({qtp754666084-13} NotebookServer.java[onOpen]:97) - New connection from 10.254.70.164 : 57165
INFO [2016-09-08 11:46:12,553] ({qtp754666084-16} NotebookServer.java[onClose]:227) - Closed connection to 10.254.70.164 : 57165. (1001) null
INFO [2016-09-08 11:46:13,178] ({qtp754666084-16} AbstractValidatingSessionManager.java[enableSessionValidation]:230) - Enabling session validation scheduler...
WARN [2016-09-08 11:46:13,225] ({qtp754666084-18} JAXRSUtils.java[findTargetMethod]:499) - No operation matching request path "/api/login;JSESSIONID=26181c87-1e79-4686-b406-f745bce776e4" is found, Relative Path: /, HTTP Method: GET, ContentType: */*, Accept: application/json,text/plain,*/*,. Please enable FINE/TRACE log level for more details.
WARN [2016-09-08 11:46:13,227] ({qtp754666084-18} WebApplicationExceptionMapper.java[toResponse]:73) - javax.ws.rs.ClientErrorException
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:227)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:103)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:211)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:575)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
INFO [2016-09-08 11:46:13,279] ({qtp754666084-14} NotebookServer.java[onOpen]:97) - New connection from 10.254.70.164 : 57172
ERROR [2016-09-08 11:46:21,706] ({qtp754666084-14} LoginRestApi.java[postLogin]:103) - Exception in login:
org.apache.shiro.authc.AuthenticationException: LDAP authentication failed.
at org.apache.shiro.realm.ldap.JndiLdapRealm.doGetAuthenticationInfo(JndiLdapRealm.java:300)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:180)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:267)
at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:270)
at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:256)
at org.apache.zeppelin.rest.LoginRestApi.postLogin(LoginRestApi.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:192)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:595)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3135)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3081)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2883)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2797)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
at org.apache.shiro.realm.ldap.JndiLdapContextFactory.createLdapContext(JndiLdapContextFactory.java:508)
at org.apache.shiro.realm.ldap.JndiLdapContextFactory.getLdapContext(JndiLdapContextFactory.java:495)
at org.apache.shiro.realm.ldap.JndiLdapRealm.queryForAuthenticationInfo(JndiLdapRealm.java:375)
at org.apache.shiro.realm.ldap.JndiLdapRealm.doGetAuthenticationInfo(JndiLdapRealm.java:295)
... 64 more
WARN [2016-09-08 11:46:21,713] ({qtp754666084-14} LoginRestApi.java[postLogin]:111) - {"status":"FORBIDDEN","message":"","body":""}

Regarding the shiro.ini file, please note the following:

- I have entirely commented out the [users] and [roles] sections
- For 'ldapRealm.userDnTemplate', it's immaterial whether I use uid={0} or CN={0}
- I'm assuming (as per the original requirement) that the group in which the provided credentials must be searched is supplied as the value of 'ldapRealm.userDnTemplate'. Is it that the LDAP groups must use a different key?

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
#admin = go4zeppelin
#hanny = hannyuseszeppelin, role1
#henrik = henrikuseszeppelin, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
### A sample for configuring Active Directory Realm
#activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = userNameA
#activeDirectoryRealm.systemPassword = passwordA
#activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
#activeDirectoryRealm.url = ldap://ldap.test.com:389
#activeDirectoryRealm.groupRolesMap = "CN=admin,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"admin","CN=finance,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"finance","CN=hr,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"hr"
#activeDirectoryRealm.authorizationCachingEnabled = false
### A sample for configuring LDAP Directory Realm
ldapRealm = org.apache.zeppelin.server.LdapGroupRealm
## search base for ldap groups (only relevant for LdapGroupRealm):
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=scompany,dc=SE
ldapRealm.contextFactory.url = ldap://unix-ldap.company.com:389
ldapRealm.userDnTemplate = uid={0},cn=devdatalakeadm,ou=Group,dc=company,dc=se
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
#cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
#securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
#[roles]
#role1 = *
#role2 = *
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/** = anon
/** = authc I now wonder if ldap is behaving itself, following is the output of two commands that makes me believe that ldap is NOT able to check if a particular user, say ojoqcu, belongs to a ldap group : If I query just for the user, all his membership groups are returned : ldapsearch -h unix-ldap.company.com -p 389 -x -b "dc=company,dc=SE" "(&(cn=*)(memberUid=ojoqcu))"
# extended LDIF
#
# LDAPv3
# base <dc=company,dc=SE> with scope subtree
# filter: (&(cn=*)(memberUid=ojoqcu))
# requesting: ALL
#
# datalake, Group, company.se
dn: cn=datalake,ou=Group,dc=company,dc=se
objectClass: posixGroup
description: company Data Lake
gidNumber: 5019
cn: datalake
memberUid: hbrdmv
memberUid: ojoqcu
memberUid: ssserz
memberUid: sssktw
memberUid: sssjtz
memberUid: tekzn7
# devdatalakeadm, Group, company.se
dn: cn=devdatalakeadm,ou=Group,dc=company,dc=se
objectClass: posixGroup
description: Data Lake Admins
gidNumber: 14000
cn: devdatalakeadm
memberUid: hbrdmv
memberUid: ojoqcu
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2 but if I try to check if the user is part of group, no entries returned : ldapsearch -h unix-ldap.company.com -p 389 -x -b "dc=company,dc=SE" "(&(cn=devdatalakeadm,ou=Group,dc=company,dc=se)(memberUid=ojoqcu))"
# extended LDIF
#
# LDAPv3
# base <dc=company,dc=SE> with scope subtree
# filter: (&(cn=devdatalakeadm,ou=Group,dc=company,dc=se)(memberUid=ojoqcu))
# requesting: ALL
#
# search result
search: 2
result: 0 Success
# numResponses: 1

What could be the root cause?
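For what it's worth, one hypothesis - an assumption, not verified against this directory: in an LDAP search filter, cn=... matches the cn attribute value, not a full DN, so the second filter can never match anything. Two re-checks that keep the membership test but treat the group correctly:

# Filter on the cn attribute value only, not the group's full DN
ldapsearch -h unix-ldap.company.com -p 389 -x -b "dc=company,dc=SE" "(&(cn=devdatalakeadm)(memberUid=ojoqcu))"
# Or scope the search to the group's DN and test only membership
ldapsearch -h unix-ldap.company.com -p 389 -x -b "cn=devdatalakeadm,ou=Group,dc=company,dc=se" "(memberUid=ojoqcu)"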
Labels: Apache Zeppelin
09-05-2016
10:03 AM
@vranganathan I already did that; now only one confusion remains - is the compression taking place as expected?
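If it helps, a quick check I could run - a sketch, with the warehouse path taken from the describe output in this thread and the part-file name a placeholder: the codec is recorded in each ORC file's footer, and hive --orcfiledump prints it.

# List the files backing the table, then dump one ORC footer
hdfs dfs -ls /apps/hive/warehouse/inactivity
hive --orcfiledump /apps/hive/warehouse/inactivity/part-m-00000
# Look for a line like: Compression: ZLIB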
08-18-2016
11:39 AM
I have either discovered something strange or I lack an understanding of how Sqoop works:

- The Sqoop docs say that in case of a composite PK, the --split-by column should be specified during sqoop import; however, I proceeded without doing so, and Sqoop then picked one int column belonging to the PK
- Only for a few tables (all of them having at least 1.2 billion rows) did I face this row-count mismatch issue
- I then used --split-by for those tables and also added --validate, after which I got the same number of rows imported (see the sketch below)
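A minimal sketch of that fix (table and column names are placeholders):

# Explicit split column for a composite-PK table, plus row-count validation
sqoop import \
  --connect 'jdbc:sqlserver://<IP>;database=<db-name>' \
  --username <user> -P \
  --table <big_table> \
  --split-by <int_pk_column> \
  --num-mappers 8 \
  --validate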
08-18-2016
11:06 AM
1 Kudo
A late answer, but maybe it will help someone 🙂 I am just adding to what @Ravi Mutyala has mentioned:

sqoop import --null-string '\\N' --null-non-string '\\N' --hive-delims-replacement '\0D' --num-mappers 8 --validate --hcatalog-home /usr/hdp/current/hive-webhcat --hcatalog-database default --hcatalog-table Inactivity --create-hcatalog-table --hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="ZLIB")' --connect 'jdbc:sqlserver://<IP>;database=<db-name>' --username --password --table Inactivity -- --schema QlikView 2>&1| tee -a log

Now if you describe the table:

0: jdbc:hive2://> describe formatted inactivity;
OK
16/08/18 11:23:25 [main]: WARN lazy.LazyStruct: Extra bytes detected at the end of the row! Ignoring similar problems.
+-------------------------------+----------------------------------------------------------------------+-----------------------+--+
| col_name | data_type | comment |
+-------------------------------+----------------------------------------------------------------------+-----------------------+--+
| # col_name | data_type | comment |
| | NULL | NULL |
| period | int | |
| vin | string | |
| customerid | int | |
| subscriberdealersisid | string | |
| subscriberdistributorsisid | string | |
| packagename | string | |
| timemodify | string | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | default | NULL |
| Owner: | hive | NULL |
| CreateTime: | Thu Aug 18 11:20:28 CEST 2016 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Protect Mode: | None | NULL |
| Retention: | 0 | NULL |
| Location: | hdfs://l4283t.sss.com:8020/apps/hive/warehouse/inactivity | NULL |
| Table Type: | MANAGED_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | orc.compress | ZLIB |
| | transient_lastDdlTime | 1471512028 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.ql.io.orc.OrcSerde | NULL |
| InputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
+-------------------------------+----------------------------------------------------------------------+-----------------------+--+
33 rows selected (0.425 seconds)

To verify that the compression has really taken place, you can first import the table without any compression, execute analyze table <table-name> compute statistics, and note the 'totalSize'. Then repeat the process with compression and compare the 'totalSize'.
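For example, a sketch of that comparison (the JDBC URL is a placeholder):

# Gather statistics, then read totalSize from the table parameters
beeline -u 'jdbc:hive2://<hive-server>:10000' -e 'ANALYZE TABLE inactivity COMPUTE STATISTICS;'
beeline -u 'jdbc:hive2://<hive-server>:10000' -e 'DESCRIBE FORMATTED inactivity;' | grep totalSize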
08-16-2016
05:50 PM
I agree that's one way, but that also means that if there are hundreds of tables, one has to either manually pre-create them or execute some Sqoop script to do so - which means two scripts in total to import one table. Is there another way?
08-16-2016
03:52 PM
HDP-2.4.2.0-258 installed using Ambari 2.2.2.0

There are plenty of schemas in SQL Server and Oracle databases that need to be imported into Hadoop, and I have chosen the RDBMS-to-HCatalog/Hive approach. I am quite confused because of the following threads:
As per the Sqoop 1.4.6 documentation : One downside to compressing tables imported into Hive is that many codecs cannot be split for processing by parallel map tasks. The lzop codec, however, does support splitting. When importing tables with this codec, Sqoop will automatically index the files for splitting and configuring a new Hive table with the correct InputFormat. This feature currently requires that all partitions of a table be compressed with the lzop codec.
Does that mean that gzip/zlib will cause performance/data integrity issues during Sqoop import AND subsequent processing?
The following from the Hive documentation confused me: the parameters are all placed in the TBLPROPERTIES (see Create Table). They are:

Key | Default | Notes
---|---|---
orc.bloom.filter.columns | "" | comma separated list of column names for which bloom filter should be created
orc.bloom.filter.fpp | 0.05 | false positive probability for bloom filter (must be >0.0 and <1.0)
orc.compress | ZLIB | high level compression (one of NONE, ZLIB, SNAPPY)
I guess the default compression codec is gzip. I executed the following command (with both the -z and --compress forms):

sqoop import --null-string '\\N' --null-non-string '\\N' --hive-delims-replacement '\0D' --num-mappers 8 --validate --hcatalog-home /usr/hdp/current/hive-webhcat --hcatalog-database default --hcatalog-table Inactivity --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile" -z --connect 'jdbc:sqlserver://<IP>;database=VehicleDriverServicesFollowUp' --username --password --table Inactivity -- --schema QlikView 2>&1| tee -a log

but the ORC table says Compressed: No (am I missing/misinterpreting something, or is some library missing? I didn't get any exception/error):

hive>
>
> describe formatted inactivity;
OK
# col_name data_type comment
period int
vin string
customerid int
subscriberdealersisid string
subscriberdistributorsisid string
packagename string
timemodify string
# Detailed Table Information
Database: default
Owner: hive
CreateTime: Tue Aug 16 17:34:36 CEST 2016
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://l4283t.sss.com:8020/apps/hive/warehouse/inactivity
Table Type: MANAGED_TABLE
Table Parameters:
transient_lastDdlTime 1471361676
# Storage Information
SerDe Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
InputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
serialization.format 1
Time taken: 0.395 seconds, Fetched: 32 row(s)
hive>
As per this existing thread, for Hive, ORC + Zlib should be used. How do I specify this Zlib during the import command? Is it the case that I have to pre-create the tables in Hive to use Zlib?
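(What eventually worked for me - see the answer dated 08-18-2016 above - was naming the codec in the HCatalog storage stanza rather than passing -z, which governs text/sequence-file output; a condensed sketch:)

# Have HCatalog create the table as ORC with ZLIB declared explicitly
sqoop import \
  --connect 'jdbc:sqlserver://<IP>;database=<db-name>' \
  --username <user> -P \
  --table Inactivity \
  --hcatalog-database default --hcatalog-table Inactivity \
  --create-hcatalog-table \
  --hcatalog-storage-stanza 'stored as orc tblproperties ("orc.compress"="ZLIB")' \
  -- --schema QlikView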
Labels: Apache Hive, Apache Sqoop