Member since: 01-09-2016
Posts: 56
Kudos Received: 44
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10227 | 12-27-2016 05:00 AM |
| | 2526 | 09-28-2016 08:19 AM |
| | 16589 | 09-12-2016 09:44 AM |
| | 7735 | 02-09-2016 10:48 AM |
12-22-2016
09:14 AM
1 Kudo
Ambari Metrics works intermittently. It works for a few minutes and then stops showing graphs. Sometimes the Metrics Collector just stops; after a manual start it works, but a few minutes later it stops again. What is going wrong? screen-ams1.png
My settings:
HDP 2.5
Ambari 2.4.0.1
No Kerberos
iptables off
Metrics Service operation mode = distributed
hbase.cluster.distributed = true
hbase.zookeeper.property.tickTime = 6000
hbase.zookeeper.property.clientPort = 2181
hbase.rootdir = hdfs://prodcluster/ams/hbase
Logs: ambari-metrics-collector.log
2016-12-22 11:59:23,030 ERROR org.mortbay.log: /ws/v1/timeline/metrics
javax.ws.rs.WebApplicationException: javax.xml.bind.MarshalException
- with linked exception:
[org.mortbay.jetty.EofException]
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:159)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:895)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:843)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:804)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: javax.xml.bind.MarshalException
- with linked exception:
[org.mortbay.jetty.EofException]
at com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:325)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.marshal(MarshallerImpl.java:249)
at javax.xml.bind.helpers.AbstractMarshallerImpl.marshal(AbstractMarshallerImpl.java:95)
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:179)
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:157)
... 37 more
Caused by: org.mortbay.jetty.EofException
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:634)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:580)
at com.sun.jersey.spi.container.servlet.WebComponent$Writer.write(WebComponent.java:307)
at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.write(ContainerResponse.java:134)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.flushBuffer(UTF8XmlOutput.java:416)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.endDocument(UTF8XmlOutput.java:141)
at com.sun.xml.bind.v2.runtime.XMLSerializer.endDocument(XMLSerializer.java:856)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.postwrite(MarshallerImpl.java:374)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:321)
... 41 more
2016-12-22 12:00:05,549 INFO TimelineClusterAggregatorMinute: 0 row(s) updated.
2016-12-22 12:00:19,105 INFO TimelineClusterAggregatorMinute: Aggregated cluster metrics for METRIC_AGGREGATE_MINUTE, with startTime = Thu Dec 22 11:50:00 MSK 2016, endTime = Thu Dec 22 11:55:00 MSK 2016
2016-12-22 12:00:19,111 INFO TimelineClusterAggregatorMinute: End aggregation cycle @ Thu Dec 22 12:00:19 MSK 2016
2016-12-22 12:00:19,111 INFO TimelineClusterAggregatorMinute: End aggregation cycle @ Thu Dec 22 12:00:19 MSK 2016
2016-12-22 12:00:24,077 INFO TimelineClusterAggregatorMinute: Started Timeline aggregator thread @ Thu Dec 22 12:00:24 MSK 2016
2016-12-22 12:00:24,083 INFO TimelineClusterAggregatorMinute: Last Checkpoint read : Thu Dec 22 11:55:00 MSK 2016
2016-12-22 12:00:24,083 INFO TimelineClusterAggregatorMinute: Rounded off checkpoint : Thu Dec 22 11:55:00 MSK 2016
2016-12-22 12:00:24,084 INFO TimelineClusterAggregatorMinute: Last check point time: 1482396900000, lagBy: 324 seconds.
2016-12-22 12:00:24,085 INFO TimelineClusterAggregatorMinute: Start aggregation cycle @ Thu Dec 22 12:00:24 MSK 2016, startTime = Thu Dec 22 11:55:00 MSK 2016, endTime = Thu Dec 22 12:00:00 MSK 2016
hbase-ams-master-hdp-nn2.hostname.log
2016-12-22 11:22:46,455 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1] zookeeper.ZooKeeper: Session: 0x3570f2523bb3db4 closed
2016-12-22 11:22:46,455 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-22 11:23:31,207 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
This exceptions will be ignored for next 100 times
2016-12-22 11:23:31,208 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
2016-12-22 11:23:41,484 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=59, evicted=0, evictedPerRun=0.0
2016-12-22 11:23:46,460 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x12f8ba29 connecting to ZooKeeper ensemble=hdp-nn1.hostname:2181,hdp-dn1.hostname:2181,hdp-nn2.hostname:2181
2016-12-22 11:23:46,461 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1] zookeeper.ZooKeeper: Initiating client connection, connectString=hdp-nn1.hostname:2181,hdp-dn1.hostname:2181,hdp-nn2.hostname:2181 sessionTimeout=120000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@20fc295f
2016-12-22 11:23:46,464 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1-SendThread(hdp-nn2.hostname:2181)] zookeeper.ClientCnxn: Opening socket connection to server hdp-nn2.hostname/10.255.242.181:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-22 11:23:46,466 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1-SendThread(hdp-nn2.hostname:2181)] zookeeper.ClientCnxn: Socket connection established to hdp-nn2.hostname/10.255.242.181:2181, initiating session
2016-12-22 11:23:46,469 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1-SendThread(hdp-nn2.hostname:2181)] zookeeper.ClientCnxn: Session establishment complete on server hdp-nn2.hostname/10.255.242.181:2181, sessionid = 0x3570f2523bb3db5, negotiated timeout = 40000
2016-12-22 11:23:46,495 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3570f2523bb3db5
2016-12-22 11:23:46,499 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1] zookeeper.ZooKeeper: Session: 0x3570f2523bb3db5 closed
2016-12-22 11:23:46,499 INFO [hdp-nn2.hostname,61300,1482394421267_ChoreService_1-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-22 11:28:41,485 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=89, evicted=0, evictedPerRun=0.0
2016-12-22 11:29:49,178 INFO [WALProcedureStoreSyncThread] wal.WALProcedureStore: Remove log: hdfs://prodcluster/ams/hbase/MasterProcWALs/state-00000000000000000001.log
2016-12-22 11:29:49,180 INFO [WALProcedureStoreSyncThread] wal.WALProcedureStore: Removed logs: [hdfs://prodcluster/ams/hbase/MasterProcWALs/state-00000000000000000002.log]
2016-12-22 11:33:41,484 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=119, evicted=0, evictedPerRun=0.0
2016-12-22 11:38:41,484 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=149, evicted=0, evictedPerRun=0.0
2016-12-22 11:43:41,485 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=179, evicted=0, evictedPerRun=0.0
2016-12-22 11:44:51,222 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
This exceptions will be ignored for next 100 times
2016-12-22 11:44:51,223 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
2016-12-22 11:48:41,484 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=159.41 KB, freeSize=150.39 MB, max=150.54 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=209, evicted=0, evictedPerRun=0.0
2016-12-22 11:50:51,205 INFO [timeline] timeline.HadoopTimelineMetricsSink: Unable to connect to collector, http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
This exceptions will be ignored for next 100 times
2016-12-22 11:50:51,205 WARN [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hdp-nn2.hostname:6188/ws/v1/timeline/metrics
I removed the folder /var/lib/ambari-metrics-collector/hbase-tmp and restarted AMS as recommended here: https://community.hortonworks.com/articles/11805/how-to-solve-ambari-metrics-corrupted-data.html, but it did not help.
Labels: Apache Ambari
10-22-2016
09:11 AM
got it, thanks!
... View more
10-21-2016
11:11 AM
3 Kudos
@rbiswas, @Lester Martin I tested 4 variants of partitioning on 6 queries:
Daily partitions (calday=2016-10-20)
Year-month partitions (year_month=2016-10)
Year partitions (year=2016)
No partitions (but 10 files with yearly data)
I created 4 tables following @rbiswas's recommendations. Here is aggregate yearly information about the data, just to give you an idea of its scale.
| Partition | Size | Records |
|---|---|---|
| year=2006 | 539.4 KB | 12,217 |
| year=2007 | 2.8 MB | 75,584 |
| year=2008 | 6.4 MB | 155,850 |
| year=2009 | 9.1 MB | 228,247 |
| year=2010 | 9.3 MB | 225,357 |
| year=2011 | 8.5 MB | 196,280 |
| year=2012 | 19.5 MB | 448,145 |
| year=2013 | 113.4 MB | 2,494,787 |
| year=2014 | 196.7 MB | 4,038,632 |
| year=2015 | 204.3 MB | 4,047,002 |
| year=2016 | 227.2 MB | 4,363,214 |
I ran every query 5 times, discarded the worst and best results, and took the average of the remaining three. The results are below. Obviously, daily partitioning is the worst case, but among the remaining options it is not so clear-cut: the results depend on the query. In the end I decided that yearly partitioning would be optimal in our case. @rbiswas, thanks for the idea!

@rbiswas, I have a couple of questions:
1. Given that I have fewer than 10,000 records per day, would it be better to set orc.row.index.stride to less than 12000?
2. In my table I have the columns order_date string (looks like '2016-10-20') and order_time timestamp (looks like '2016-10-20 12:45:55'). The table is sorted by order_time, as you recommended, and has a bloom filter index. But the filter WHERE to_date(order_time) BETWEEN ... works 15-20% slower than WHERE order_date BETWEEN ... over any period. Actually, I expected that filtering on the column with the bloom filter would speed up query execution. Why did it not happen?
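For reference, here is a minimal HiveQL sketch of how the yearly variant might be declared. Only order_date, order_time, and the two ORC properties come from the description above; the table name and the literal dates in the sample queries are hypothetical.

```sql
-- Minimal sketch of the yearly-partitioned variant (hypothetical names;
-- only order_date/order_time and the ORC properties come from the post).
CREATE TABLE orders_by_year (
  order_date STRING,     -- e.g. '2016-10-20'
  order_time TIMESTAMP   -- e.g. '2016-10-20 12:45:55'
)
PARTITIONED BY (year INT)
STORED AS ORC
TBLPROPERTIES (
  'orc.bloom.filter.columns' = 'order_time',  -- bloom filter index on order_time
  'orc.row.index.stride'     = '12000'        -- the value discussed in question 1
);
-- Data is assumed to be loaded sorted by order_time, e.g.
-- INSERT OVERWRITE TABLE orders_by_year PARTITION (year=2016)
-- SELECT order_date, order_time FROM staging SORT BY order_time;

-- The two filter styles compared above (illustrative dates):
SELECT count(*) FROM orders_by_year
WHERE to_date(order_time) BETWEEN '2016-10-01' AND '2016-10-20';  -- measured 15-20% slower

SELECT count(*) FROM orders_by_year
WHERE order_date BETWEEN '2016-10-01' AND '2016-10-20';
```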
10-18-2016
10:58 AM
@rbiswas Thank you! It's an interesting idea. I'll test partitions by year (YYYY), by year and month (YYYY-MM) as suggested by @Lester Martin, and daily (YYYY-MM-DD); I'll share the results here (a sketch of the three layouts is below). By the way, what would be the difference between your two approaches? Partitioning by year and compacting one year into one file both give the same 10 files.
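For clarity, the three layouts to be tested, sketched with a single placeholder column (the partition keys calday, year_month, and year are the ones used in the tests):

```sql
-- The three partition layouts under test (placeholder schema).
CREATE TABLE t_daily   (order_time TIMESTAMP) PARTITIONED BY (calday STRING)     STORED AS ORC;  -- calday='2016-10-20'
CREATE TABLE t_monthly (order_time TIMESTAMP) PARTITIONED BY (year_month STRING) STORED AS ORC;  -- year_month='2016-10'
CREATE TABLE t_yearly  (order_time TIMESTAMP) PARTITIONED BY (year INT)          STORED AS ORC;  -- year=2016
```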
10-17-2016
01:45 PM
@Lester Martin Thank you, I'll keep the monthly-partition option (YYYY-MM) in reserve. It complicates queries (see the sketch below), but if it's the only way, I'll have to use it.
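To illustrate the complication (a sketch; table and column names are placeholders): a date-range filter has to restrict the YYYY-MM partition key separately from the exact date predicate.

```sql
-- With monthly partitions, a date-range query needs an extra predicate on
-- the partition key so Hive can prune partitions (placeholder names).
SELECT count(*)
FROM orders_by_month
WHERE year_month BETWEEN '2016-09' AND '2016-10'         -- prunes partitions
  AND order_date BETWEEN '2016-09-15' AND '2016-10-20';  -- exact date range
```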
10-17-2016
11:36 AM
@jramakrishnan Thanks. I have already read those links; there is no clear answer there. What would be a good file format for small partitions: CSV, ORC, something else? HBase as an alternative metastore is fine, but my Hive 1.2.1 still uses MySQL.
There was also an idea about generating a hash from the date. I would be glad if someone explained this idea in detail.
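One possible reading of the hash idea, assuming it refers to Hive bucketing (the original hint gave no details, so this is only a guess; all names here are placeholders):

```sql
-- Bucketing hashes the date column into a fixed number of files instead of
-- creating one partition per day (an assumed interpretation of the hint).
CREATE TABLE history_bucketed (
  order_date STRING,
  payload    STRING
)
CLUSTERED BY (order_date) INTO 32 BUCKETS
STORED AS ORC;
```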
10-17-2016
08:14 AM
3 Kudos
There is a lot of information about why it is necessary to avoid small files and a large number of partitions in Hive. But what if I can't avoid them? I have to store a Hive table with 10 years of history data. At present it contains 3710 daily partitions. Every partition is really small, from 80 to 15,000 records: in CSV format partitions vary from 25 KB to 10 MB, and in ORC format from 10 KB to 2 MB, though I don't think the ORC format would be effective at such a small size.
Queries to this table usually include a date or a period of dates, so daily partitioning is preferred; a sketch of the current layout is below. What would be the optimal approach (in terms of performance) for such a large amount of small data?
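A minimal sketch of the current layout and the typical access pattern (names other than the calday partition key are placeholders):

```sql
-- Current layout: one small partition per day, ~3710 in total.
CREATE TABLE history_daily (
  payload STRING
)
PARTITIONED BY (calday STRING)
STORED AS ORC;

-- Typical query: a date or a period of dates.
SELECT count(*)
FROM history_daily
WHERE calday BETWEEN '2016-10-01' AND '2016-10-20';
```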
Labels: Apache Hive
09-28-2016
08:19 AM
2 Kudos
Finally I solved this problem by copying the folder ZEPPELIN_HOME/incubator-zeppelin from the dev cluster (which I had installed a month earlier) to the production cluster. Now it works fine.
09-26-2016
03:56 PM
@pankaj singh Thanks for the help. Following https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/install/upgrade.html, I stopped Zeppelin 0.6 from Ambari and stopped Zeppelin 0.7, copied the conf folder from /etc/zeppelin/2.5.0.0-1245/0 to /home/zeppelin_price/incubator-zeppelin/conf, and ran Zeppelin 0.7. Nothing changed: still "Interpreter hive not found", and the logs are the same as I posted before.
09-26-2016
11:17 AM
1 Kudo
My HDP-2.5 cluster has Zeppelin 0.6 out of the box, and it works fine. But when I install an additional Zeppelin 0.7.0-SNAPSHOT (on the same datanode as Zeppelin 0.6), I get the error "Interpreter hive not found". It's not tutorial code: I create a new notebook and try to run my own query. I followed this instruction: https://zeppelin.apache.org/docs/0.5.5-incubating/install/yarn_install.html. Using the same instruction, I successfully installed Zeppelin 0.7.0 on an HDP-2.4 cluster.
Hadoop version: 2.5.0.0-1245
Dependencies in the JDBC interpreter:
org.apache.hive:hive-jdbc:2.0.1
org.apache.hadoop:hadoop-common:2.7.2
The file hive-site.xml is copied from /etc/hive/conf/hive-site.xml to /home/zeppelin_price/incubator-zeppelin/conf.
zeppelin-env.sh:
export ZEPPELIN_PORT=8096
export JAVA_HOME=/usr/jdk64/jdk1.7.0_79
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.5.0.0-1245"
export HADOOP_CONF_DIR=/etc/hadoop/conf
(other variables are commented out)
Log zeppelin-zeppelin_price-hdp-dn2.co.vectis.local.out
ZEPPELIN_CLASSPATH: ::/home/zeppelin_price/incubator-zeppelin/zeppelin-server/target/lib/*:/home/zeppelin_price/incubator-zeppelin/zeppelin-zengine/target/lib/*:/home/zeppelin_price/incubator-zeppelin/zeppelin-interpreter/target/lib/*:/home/zeppelin_price/incubator-zeppelin/*::/home/zeppelin_price/incubator-zeppelin/conf:/home/zeppelin_price/incubator-zeppelin/zeppelin-interpreter/target/classes:/home/zeppelin_price/incubator-zeppelin/zeppelin-zengine/target/classes:/home/zeppelin_price/incubator-zeppelin/zeppelin-server/target/classes
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/zeppelin_price/incubator-zeppelin/zeppelin-server/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zeppelin_price/incubator-zeppelin/zeppelin-server/target/lib/zeppelin-interpreter-0.7.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zeppelin_price/incubator-zeppelin/zeppelin-zengine/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zeppelin_price/incubator-zeppelin/zeppelin-zengine/target/lib/zeppelin-interpreter-0.7.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/zeppelin_price/incubator-zeppelin/zeppelin-interpreter/target/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Sep 26, 2016 1:19:29 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
org.apache.zeppelin.rest
Sep 26, 2016 1:19:29 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found:
class org.apache.zeppelin.rest.ZeppelinRestApi
class org.apache.zeppelin.rest.ConfigurationsRestApi
class org.apache.zeppelin.rest.InterpreterRestApi
class org.apache.zeppelin.rest.NotebookRestApi
class org.apache.zeppelin.rest.CredentialRestApi
class org.apache.zeppelin.rest.LoginRestApi
class org.apache.zeppelin.rest.SecurityRestApi
class org.apache.zeppelin.rest.HeliumRestApi
Sep 26, 2016 1:19:29 PM com.sun.jersey.api.core.ScanningResourceConfig init
INFO: No provider classes found.
Sep 26, 2016 1:19:29 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012 05:14 PM'
Sep 26, 2016 1:19:30 PM com.sun.jersey.spi.inject.Errors processErrorMessages
WARNING: The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.CredentialRestApi.getCredentials(java.lang.String) throws java.io.IOException,java.lang.IllegalArgumentException, should not consume any entity.
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String) throws java.io.IOException, with URI template, "/", is treated as a resource method
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.getNotebookList() throws java.io.IOException, with URI template, "/", is treated as a resource method
WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String), should not consume any entity.
Log zeppelin-zeppelin_price-hdp-dn2.co.vectis.local.log
ERROR [2016-09-26 13:21:27,987] ({qtp396501207-56} NotebookServer.java[runParagraph]:1154) - Exception from run
org.apache.zeppelin.interpreter.InterpreterException: paragraph_1474884934206_-1103344564's Interpreter hive not found
at org.apache.zeppelin.notebook.Note.run(Note.java:489)
at org.apache.zeppelin.socket.NotebookServer.runParagraph(NotebookServer.java:1152)
at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:195)
at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:56)
at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)