Member since: 05-26-2016
Posts: 37
Kudos Received: 6
Solutions: 3

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3019 | 07-01-2016 09:18 AM
 | 2867 | 07-01-2016 09:07 AM
 | 1773 | 06-09-2016 03:21 AM
06-23-2017
09:07 PM
@Beverley Andalora Thank you, this helped my case. I have provided below the error I was receiving (before making your recommended changes), as it may help others.
java.lang.RuntimeException: org.apache.falcon.FalconException: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:59)
at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:549)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:136)
at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.falcon.util.EmbeddedServer.start(EmbeddedServer.java:58)
at org.apache.falcon.FalconServer.main(FalconServer.java:118)
Caused by: org.apache.falcon.FalconException: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:50)
at org.apache.falcon.listener.ContextStartupListener.contextInitialized(ContextStartupListener.java:56)
... 11 more
Caused by: java.lang.RuntimeException: GraphFactory could not instantiate this Graph implementation [com.thinkaurelius.titan.core.TitanFactory].
at com.tinkerpop.blueprints.GraphFactory.open(GraphFactory.java:50)
at org.apache.falcon.metadata.MetadataMappingService.initializeGraphDB(MetadataMappingService.java:146)
at org.apache.falcon.metadata.MetadataMappingService.init(MetadataMappingService.java:113)
at org.apache.falcon.service.ServiceInitializer.initialize(ServiceInitializer.java:47)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.tinkerpop.blueprints.GraphFactory.open(GraphFactory.java:45)
... 15 more
Caused by: java.lang.NoClassDefFoundError: com/sleepycat/je/LockMode
at com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager.<clinit>(BerkeleyJEStoreManager.java:47)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:42)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
... 20 more
Caused by: java.lang.ClassNotFoundException: com.sleepycat.je.LockMode
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 29 more
2017-06-23 20:51:14,687 INFO - [main:] ~ Started SocketConnector@0.0.0.0:15000 (log:67)
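For anyone else reading this: the recommendation I followed isn't quoted above, but the root cause in the trace is java.lang.ClassNotFoundException: com.sleepycat.je.LockMode, i.e. the Berkeley DB JE jar is not on Falcon's classpath when the graph storage backend is berkeleyje. A minimal check along these lines (the Falcon webapp lib path is an example and varies by HDP version, so confirm it for your install):

```bash
# Look for an existing Berkeley DB JE jar on the host
find / -name "je-*.jar" 2>/dev/null

# If found, copy it into the Falcon server's webapp lib directory
# (path below is an example; confirm the actual location for your HDP install),
# then restart Falcon
cp /path/to/je-<version>.jar /usr/hdp/current/falcon-server/server/webapp/falcon/WEB-INF/lib/
```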
07-11-2016
11:50 AM
1 Kudo
Add these lines to hive-log4j.properties:
logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = DEBUG
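After restarting the Hive service you can confirm the PerfLogger timing entries are being emitted; a quick check (the log file location is an assumption, it is commonly under /var/log/hive or /tmp/<user>/hive.log depending on your install):

```bash
# Tail the Hive log and filter for PerfLogger timing entries
tail -f /var/log/hive/hiveserver2.log | grep PerfLogger
```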
11-20-2016
11:46 PM
@Shihab, how did you solve this issue? What clean-up is required so Kafka does not need authorization? I'm getting the same issue. I'm on HDP 2.4. Thanks for your help in advance.
07-01-2016
09:18 AM
Hi @Takahiko Saito @vpoornalingam There were multiple instances of the Metastore running. When Ambari crashed we tried to start the service manually, which created one extra instance that was consuming the port. We manually stopped the other instance, restarted Hive, and everything came back to normal. Thanks for the support.
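For anyone hitting the same symptom, a quick way to spot a stray Metastore instance and the process holding its port (port 9083 is the default Metastore port; use whatever your cluster is configured with):

```bash
# List running Hive Metastore processes; more than one usually means a stray manual start
ps -ef | grep -i [H]iveMetaStore

# See which PID is bound to the Metastore port (9083 by default)
netstat -tlnp | grep 9083

# Stop the stray instance, then restart Hive through Ambari
kill <stray_pid>
```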
12-28-2016
11:06 PM
1 Kudo
@Shihab The Hive view calls GetResultSetMetadata without first verifying query completion via GetOperationStatus, which results in an error status from HiveServer. This is bug https://issues.apache.org/jira/browse/AMBARI-13575, which was supposed to be fixed in 2.2.0. However, you may be encountering a variation of the same issue for which there is no existing JIRA. You should log in to https://issues.apache.org/jira/browse/AMBARI-19313?jql=project%20%3D%20AMBARI and submit the issue with proper documentation. If you are still on Ambari 2.2.2.2, try upgrading to a more recent version; I have seen the error you describe on 2.2.x, and I recall similar issues on 2.3.x, but I haven't seen it on 2.4.x.
06-21-2016
06:45 PM
4 Kudos
Try the steps below:
1. CREATE DATABASE IF NOT EXISTS test_1
2. DROP DATABASE IF EXISTS test_1 CASCADE
3. The error message contains a MetaException, so my guess is that the metastore is not running. Check it with service hive-metastore status; if the result shows the process is not started or the metastore is dead, restart it with service hive-metastore start (see the sketch after this list).
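A compact version of the sequence, assuming an init-script-managed service named hive-metastore (adjust the service name, or use Ambari, if your install manages the metastore differently):

```bash
# Check whether the Hive metastore service is running
service hive-metastore status

# If the process is not started or reported dead, restart it
service hive-metastore start

# Then re-run the DDL (Hive CLI shown; beeline works as well)
hive -e "CREATE DATABASE IF NOT EXISTS test_1; DROP DATABASE IF EXISTS test_1 CASCADE;"
```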
06-15-2016
09:47 AM
5 Kudos
HWX doesn't recommend using LVM for the datanodes (it adds overhead with no benefit). You typically create a partition per disk (no RAID) with your filesystem directly on top; the filesystem is typically ext4 or XFS.
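A minimal sketch of that layout for one disk, assuming a new device at /dev/sdb and a mount point of /grid/0 (device name, mount point, and mount options are examples; adapt them to your hosts):

```bash
# Create a single partition on the disk first (fdisk/parted), then put XFS directly on it
mkfs.xfs -f /dev/sdb1
mkdir -p /grid/0
mount -o noatime /dev/sdb1 /grid/0

# Make the mount persistent, then add the directory (e.g. /grid/0/hadoop/hdfs/data)
# to dfs.datanode.data.dir
echo "/dev/sdb1 /grid/0 xfs noatime 0 0" >> /etc/fstab
```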
06-15-2016
01:11 PM
1 Kudo
@Shihab Hive uses temporary directory structures both on the node where the Hive client is running and on the default HDFS instance. These folders store temporary/intermediary data for each query (as separate files); they are cleaned up by the Hive client a configurable amount of time after a query completes successfully, but they sometimes pile up when a client terminates abnormally. One such configurable parameter for the HDFS-side storage is hive.exec.scratchdir (generally set to /tmp/hive). When writing data to a Hive table/partition, Hive first writes to a temporary location (i.e. hive.exec.scratchdir) and then moves the data to the target table. (The storage could be your underlying filesystem: HDFS in the normal case, or S3.) A workaround is to clean up these directories periodically through a cron job when their size grows too large, as sketched below.
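A rough sketch of such a cleanup, assuming the default scratch dir /tmp/hive and a 7-day retention (paths and retention are examples; verify no running queries still use a directory before removing it):

```bash
#!/bin/bash
# Remove per-user Hive scratch directories on HDFS not modified in the last 7 days.
CUTOFF=$(date -d "7 days ago" +%s)

hdfs dfs -ls /tmp/hive/*/ 2>/dev/null | awk '{print $6" "$7" "$8}' | while read d t path; do
  [ -z "$path" ] && continue
  mtime=$(date -d "$d $t" +%s)
  if [ "$mtime" -lt "$CUTOFF" ]; then
    hdfs dfs -rm -r -skipTrash "$path"
  fi
done
```

Scheduled daily from cron, this keeps the scratch space bounded even after abnormal client terminations.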
06-10-2016
05:55 AM
2 Kudos
@Shihab There are a couple of things to consider in this scenario.

1. Time to recover when a machine fails: with these new disks you have 78 TB of data on each datanode. Depending on the network speed and how many datanodes you have in the cluster, reconstruction can take some time.
2. Booting up the cluster: if all your datanodes have such huge capacity, the block reports could get quite large. This would increase bootup times, especially during the initial bootup of a datanode. Disk scans can also be expensive, but I am hoping these 16 TB disks are all Samsung SSDs and quite fast.
3. Data imbalance: if you are adding these disks because you are running out of space, then the older disks have far less free space. If you are running a round-robin block placement policy (which you most probably are), it is possible to get errors from these nodes, since the datanode will keep trying to write to the older disks that hold more data. The Balancer may not solve this, since it aims for good data distribution across the cluster, not between the disks on a node. If you run into this problem, there are two ways to fix it:
   1. Decommission the node and let it rejoin; rebuilding the datanode creates an even distribution of data across all disks.
   2. Run the Disk Balancer, a tool that is still a work in progress, tracked in HDFS-1312.

So generally what @Sunile Manjee said is correct. While this should not cause any performance issues, it does have the potential to cause operational issues. (A quick way to check for per-disk skew is sketched below.)
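A quick way to see the per-disk skew on a datanode, with the cluster-level balancer shown for comparison (data directory paths are examples; the intra-node Disk Balancer itself is the HDFS-1312 work and is not assumed to be available here):

```bash
# Per-disk usage on one datanode: a big spread between old and new disks
# indicates the intra-node imbalance described above (paths are examples)
df -h /grid/0 /grid/1 /grid/2

# Cluster-level balancer: evens data out across datanodes,
# but not across the disks within a single node
hdfs balancer -threshold 10
```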
06-09-2016
03:21 AM
Hi all, I fixed the issue: /var/log was full, and MySQL was giving errors because of it. Port 8080 was not listening. I cleared /var and restarted Ambari, and everything started working 🙂 Thanks for your support.
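For anyone diagnosing the same symptom, the checks that point at this root cause are roughly the following (log paths are typical defaults and may differ on your hosts):

```bash
# Is the filesystem holding /var/log full?
df -h /var
du -sh /var/log/* | sort -h | tail

# Is Ambari actually listening on its port?
netstat -tlnp | grep 8080
```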