Member since: 04-08-2016
Posts: 48
Kudos Received: 4
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6618 | 04-15-2016 11:18 PM
05-11-2016
02:46 AM
Thanks! Do you happen to know an easy way to add additional storage to those partitions (the cluster is hosted on AWS) without compromising my current installation (a 3-node cluster running on t2.large instances)? Thanks!
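For what it's worth, the usual approach on AWS is to enlarge the EBS volume and then grow the partition and filesystem in place; a minimal sketch, assuming an EBS root volume on /dev/xvda2 (device names and filesystem type are assumptions to verify on the instance, and these commands are illustrative rather than runnable as-is):

```shell
# Sketch only: enlarging an EBS-backed partition in place.
# 1. Modify the EBS volume to a larger size in the AWS console or CLI.
# 2. Grow the partition table entry to use the new space:
growpart /dev/xvda 2
# 3. Grow the filesystem (use the command matching your filesystem):
xfs_growfs /              # XFS
resize2fs /dev/xvda2      # ext4
# 4. Confirm the new size:
df -h /
```

Doing this on a live cluster is generally safe for a simple volume grow, but a snapshot of the volume beforehand is cheap insurance.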
05-09-2016
01:40 AM
Hello all, I am having a similar issue here. Everything was working fine in Hive until I installed the HBase and Ambari Metrics services. I performed pretty much all the changes proposed in this post and the documentation and am still getting the error below. Any ideas appreciated!
org.apache.ambari.view.hive.client.HiveClientException: H060 Unable to open Hive session: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.ambari.view.hive.client.Connection$2.body(Connection.java:488)
at org.apache.ambari.view.hive.client.Connection$2.body(Connection.java:475)
at org.apache.ambari.view.hive.client.HiveCall.call(HiveCall.java:101)
at org.apache.ambari.view.hive.client.Connection.openSession(Connection.java:475)
at org.apache.ambari.view.hive.client.Connection.getOrCreateSessionByTag(Connection.java:523)
at org.apache.ambari.view.hive.resources.browser.HiveBrowserService.databases(HiveBrowserService.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
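When the Hive view times out like this while HiveServer2 itself is reachable (for instance, beeline connects fine), one commonly suggested change is raising Ambari's view read timeouts. A sketch, assuming Ambari 2.x property names, which should be verified against your Ambari version:

```properties
# /etc/ambari-server/conf/ambari.properties (assumed location); values in ms.
views.request.read.timeout.millis=60000
views.ambari.request.read.timeout.millis=60000
```

followed by an ambari-server restart. If HiveServer2 is genuinely overloaded (the new HBase/AMS services competing for memory is a plausible suspect here), raising the timeout only hides the symptom.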
05-08-2016
10:34 PM
Thanks. I did some investigation and below is what I found. Are there standard practices for releasing space, such as removing content that might not be so relevant? In the last command below I noticed that /var/log and /var/cache sum up to almost 1 GB. Are these folders I could empty without affecting services?

[root@ip-172-31-34-25 /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G   10G  9.2M 100% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           3.7G   17M  3.7G   1% /run
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs           757M     0  757M   0% /run/user/1000

Then:

[root@ip-172-31-34-25 /]# du -h --max-depth=1 /
0       /dev
du: cannot access '/proc/2284/task/2284/fd/4': No such file or directory
du: cannot access '/proc/2284/task/2284/fdinfo/4': No such file or directory
du: cannot access '/proc/2284/fd/4': No such file or directory
du: cannot access '/proc/2284/fdinfo/4': No such file or directory
0       /proc
17M     /run
0       /sys
24M     /etc
199M    /root
2.4M    /tmp
2.8G    /var
4.9G    /usr
115M    /boot
75M     /home
0       /media
0       /mnt
9.8M    /opt
0       /srv
0       /data
0       /cgroups_test
1.9G    /hadoop
10G     /

And going into more detail in /var:

[root@ip-172-31-34-25 /]# du -h --max-depth=1 /var/
1.6G    /var/lib
1009M   /var/log
0       /var/adm
246M    /var/cache
8.0K    /var/db
0       /var/empty
0       /var/games
0       /var/gopher
0       /var/local
0       /var/nis
0       /var/opt
0       /var/preserve
28K     /var/spool
48K     /var/tmp
0       /var/yp
0       /var/kerberos
0       /var/crash
2.8G    /var/

Thanks! Wellington
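On whether /var/log can simply be emptied: live log files are often held open by running daemons, so deleting them does not always free the space until the process restarts. A safer pattern is to delete only rotated logs and truncate live ones in place; a minimal sketch (the directory and age threshold are illustrative, demonstrated against a scratch directory):

```shell
# Sketch: reclaim space from rotated logs without touching files daemons
# hold open. Point LOGDIR at /var/log only after reviewing what matches.
LOGDIR="$(mktemp -d)"
echo "old entries" > "$LOGDIR/app.log"   # a live log: keep the file itself
touch "$LOGDIR/app.log.1"                # rotated log: removal candidate
touch "$LOGDIR/app.log.2.gz"             # compressed rotation: removal candidate

# List rotated logs older than 7 days (add -delete once the list looks right):
find "$LOGDIR" -type f \( -name '*.gz' -o -name '*.[0-9]' \) -mtime +7

# Truncate a large live log in place instead of deleting it, so any daemon's
# open file handle stays valid:
: > "$LOGDIR/app.log"
```

As for /var/cache, the yum cache there can usually be cleared safely with `yum clean all`.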
05-08-2016
08:59 PM
Hello, I am working with a 3-node cluster of t2.large machines on AWS. One of those hosts has reached 100% storage capacity. Capacity Used: [100.00%, 10.7 GB], Capacity Total: [10.7 GB], path=/usr/hdp What are the best practices to release some storage space from this host? Deleting unnecessary services? Thanks, Wellington
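Before deleting anything, it usually pays to rank what is actually consuming the space; a minimal sketch using du and sort (shown against a scratch tree so it is self-contained; on the affected host you would point $ROOT at /):

```shell
# Sketch: rank directories by disk usage, largest last.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/big" "$ROOT/small"
dd if=/dev/zero of="$ROOT/big/blob" bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of="$ROOT/small/blob" bs=1024 count=16 2>/dev/null   # ~16 KB

# -x stays on one filesystem; sort -h understands human-readable sizes.
du -xh --max-depth=1 "$ROOT" | sort -h
```

The smallest entries sort first and the grand total (the root path itself) lands near the bottom, which makes the big consumers easy to spot.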
Labels:
- Hortonworks Data Platform (HDP)
05-01-2016
12:32 AM
Thanks again. It worked. I performed the changes and also fixed a permission issue around the /storm/ folder on HDFS. Things seem to be working very well now and I don't see any errors in the Storm UI. However, when I go to the Hive view and try to do a simple SELECT * FROM tweet_counts LIMIT 10; here are the exceptions I get (see below). Have you ever run into this? I have been investigating, but I am not sure where this is coming from... ps. ambari is the name of the db where I created the tweet_counts table on my Hive instance.

{"trace":"org.apache.ambari.view.hive.client.HiveErrorStatusException: H170 Unable to fetch results. java.io.IOException: java.io.FileNotFoundException: Path is not a file: /apps/hive/warehouse/ambari.db/tweet_counts\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:652)\n\tat .......
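For anyone hitting H170 / "Path is not a file": the path in the error is the table's directory, so it can be worth confirming what actually sits under the warehouse path and retrying the same query outside the Hive view. Illustrative commands to run on the cluster (the HiveServer2 host and port in the JDBC URL are assumptions):

```shell
# Illustrative only; the path comes from the error above, adjust to your cluster.
hdfs dfs -ls /apps/hive/warehouse/ambari.db/tweet_counts
beeline -u jdbc:hive2://localhost:10000 -e "SELECT * FROM ambari.tweet_counts LIMIT 10;"
```

If beeline returns rows while the Hive view fails, the problem is in the view's result-fetch path rather than in the table itself.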
04-30-2016
12:58 PM
Thanks Pierre. For some reason, when I try to run in local mode I get this kind of exception:

10887 [Thread-20-TweetHdfsBolt] WARN  o.a.h.u.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
11100 [Thread-20-TweetHdfsBolt] ERROR b.s.util - Async loop died!
java.lang.RuntimeException: Error preparing HdfsBolt: java.net.UnknownHostException: mycluster
at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:109) ~[storm-twitter-0.0.1-SNAPSHOT.jar:?]
at backtype.storm.daemon.executor$fn__7245$fn__7258.invoke(executor.clj:746) ~[storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at backtype.storm.util$async_loop$fn__544.invoke(util.clj:473) [storm-core-0.10.0.2.4.0.0-169.jar:0.10.0.2.4.0.0-169]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]

I was wondering if I should perform this setup (cloning the package and configuring the pom.xml) in order to make the stream collection from Twitter work, or if it is something that has already been included in your package: http://twitter4j.org/en/index.html Thanks!
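On the UnknownHostException: "mycluster" looks like an HDFS HA nameservice name, which only resolves when the client has the HA settings on its classpath. When running in local mode, a common fix is to put the cluster's core-site.xml and hdfs-site.xml on the topology's classpath (e.g. under src/main/resources). The relevant hdfs-site.xml entries look roughly like this; the hostnames here are illustrative, and the safest route is to copy the real files from /etc/hadoop/conf rather than hand-writing them:

```xml
<!-- Illustrative HA nameservice config; take the real values from the
     cluster's /etc/hadoop/conf/hdfs-site.xml. -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>namenode1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>namenode2.example.com:8020</value></property>
<property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
```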
04-30-2016
03:26 AM
Thanks for the tips Pierre. I was able to run the topology with no errors (apparently):

460  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
523  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
584  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
602  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
608  [main] INFO  b.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -8800577250600957523:-7254875077049838623
609  [main] INFO  b.s.s.a.AuthUtils - Got AutoCreds []
624  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
653  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
654  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
659  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
663  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
668  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
676  [main] INFO  b.s.StormSubmitter - Uploading topology jar storm-twitter-0.0.1-SNAPSHOT.jar to assigned location: /hadoop/storm/nimbus/inbox/stormjar-cac9d9fd-128d-43d2-b13d-5effadbbbe75.jar
1702 [main] INFO  b.s.StormSubmitter - Successfully uploaded topology jar to assigned location: /hadoop/storm/nimbus/inbox/stormjar-cac9d9fd-128d-43d2-b13d-5effadbbbe75.jar
1702 [main] INFO  b.s.StormSubmitter - Submitting topology storm-twitter in distributed mode with conf {"topology.message.timeout.secs":120,"storm.zookeeper.topology.auth.scheme":"digest","storm.zookeeper.topology.auth.payload":"-8800577250600957523:-7254875077049838623"}
1939 [main] INFO  b.s.StormSubmitter - Finished submitting topology: storm-twitter

I am checking in the Storm UI and things seem to be OK; however, not much processing is going on (nothing seems to be emitting or transferring). I don't have any output stats from the spout, nor errors; same for all the other bolts. I posted 2 tweets from my account and created the Hive table indicated in your docs, but no results have ever appeared. How often does your spout collect new streams from the Twitter account? My keys and access tokens are all set to read and write... What do you think might be possible causes for not getting anything here? Any tips on the best way to troubleshoot this kind of thing? Thanks!
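When a submitted topology shows zero emitted/transferred tuples, the worker logs usually say more than the UI; a few illustrative commands (the log path assumes an HDP-style layout, and the worker port may differ):

```shell
# Illustrative troubleshooting commands; adjust paths to your installation.
storm list                              # confirm the topology is ACTIVE
tail -f /var/log/storm/worker-6700.log  # spout/bolt output and exceptions
```

Note also that the Twitter streaming API only delivers tweets matching the spout's filter while the topology is actually running, so a couple of test tweets can easily be missed; posting a tweet while tailing the worker log is a quick way to confirm whether the spout sees anything at all.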
04-27-2016
12:25 AM
Hey Pierre. I performed a successful build; however, I am getting an error saying that the Topology class was not found when I run this:

[root@ip-172-31-34-25 storm-twitter]# storm jar storm-twitter-0.0.1-SNAPSHOT.jar fr.pvillard.storm.topology.Topology host=ec2-52-67-8-253.sa-east-1.compute.amazonaws.com kFI3G29IJ5UOMnbe3qmJpDw5L iZszClk61Lfdu6hTxRAIW1STPX1TtbFXpIKlehxHNUGIpMWYFT 140206682-thGEZ8KIYfYbHY9Rzvzu2CO8ry6UBmSEvUe0zOGZ 8uA5F0T4yhnLv16fgrFP4S6W5ETflmGzLd3dPW1chb46v
Error: Not able to locate nor load fr.pvillard.storm.topology.Topology

Should I run the command above from a specific folder? Thanks, Wellington
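A quick way to check whether the class (with its full package path) actually made it into the jar — the jar name here is taken from the command above:

```shell
# Illustrative: list the jar contents and look for the Topology class.
jar tf storm-twitter-0.0.1-SNAPSHOT.jar | grep Topology
# A line like fr/pvillard/storm/topology/Topology.class should appear; if it
# is missing, the build did not package the class (check the pom's packaging
# and make sure storm jar is pointed at the jar produced under target/).
```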
04-26-2016
03:16 AM
Thanks Pierre. Sorry for keeping going back to the same point, but this is the first time I have used Maven. I am getting the errors below when I try to run clean package. I am not sure if I have to change something in the pom.xml, or where it is located...

[root@ip-172-31-34-25 bin]# ./mvn clean package https://github.com/pvillard31/storm-twitter
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.075 s
[INFO] Finished at: 2016-04-25T23:10:05-04:00
[INFO] Final Memory: 5M/115M
[INFO] ------------------------------------------------------------------------
[ERROR] The goal you specified requires a project to execute but there is no POM in this directory (/opt/apache-maven-3.3.9/bin). Please verify you invoked Maven from the correct directory. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MissingProjectException
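The error message itself points at the cause: Maven was invoked from /opt/apache-maven-3.3.9/bin, which contains no pom.xml, and the repository URL was passed as an argument instead of being cloned. The usual pattern is to clone the project and run Maven from the directory that holds the pom (the URL is taken from the command above):

```shell
# Run Maven from the project root, not from Maven's own bin directory.
git clone https://github.com/pvillard31/storm-twitter
cd storm-twitter             # this directory contains pom.xml
mvn clean package            # or /opt/apache-maven-3.3.9/bin/mvn if mvn is not on PATH
```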