Member since: 08-21-2014
23 Posts
14 Kudos Received
3 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2144 | 02-23-2017 02:51 PM |
| | 1763 | 02-21-2016 06:09 PM |
| | 12193 | 08-21-2014 01:51 PM |
02-23-2017
02:51 PM
1 Kudo
@Oriane Unfortunately the Sandbox has a minimum requirement of 8GB of RAM. It tends to run better with 10-12GB. Given that your computer only has 8GB of RAM total, you are going to have issues trying to run the Sandbox. As @Sandeep Nemuri mentioned, you can try to turn off HDP components that you don't need to use. This will save some memory, but I believe you'll still have issues.
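If you do want to try trimming services to free memory, here is a minimal sketch of stopping an unneeded HDP service through the Ambari REST API. The cluster name "Sandbox", the admin:admin credentials, the sandbox hostname, and HBASE as the example service are assumptions for illustration only; adjust them to your environment.

```bash
# Hypothetical example: stop one service (HBASE here) on the Sandbox via the
# Ambari REST API to free memory. Setting ServiceInfo state to INSTALLED stops it.
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop HBASE to free memory"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services/HBASE
```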
09-28-2016
01:41 PM
@Robbert Naastepad No worries!
02-21-2016
06:09 PM
2 Kudos
You can download configuration scripts that will give you recommended settings based on the number of cores and the amount of memory in your servers: Download Companion Files
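As a rough sketch, the companion files ship a hdp-configuration-utils.py helper; assuming that script and its core/memory/disk/HBase flags (names recalled from the HDP docs, so double-check against the files you download), a run for a 16-core, 64 GB node with 4 data disks and HBase enabled would look something like this:

```bash
# Hypothetical run of the hdp-configuration-utils.py helper from the companion
# files: pass cores (-c), memory in GB (-m), data disks (-d), and whether HBase
# is installed (-k); it prints recommended YARN/MapReduce memory settings.
python hdp-configuration-utils.py -c 16 -m 64 -d 4 -k True
```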
02-01-2016
07:44 PM
I discovered several posts via Google suggesting the problem was hbase.rpc.timeout, but changing that value via Ambari had no effect. As I responded below, the actual problem was that I was missing the HBASE_CONF_DIR environment variable, so Phoenix was using the default settings instead of the Ambari-configured ones.
02-01-2016
07:42 PM
This was the issue, or more specifically HBASE_CONF_DIR. Running phoenix_utils.py showed HBASE_CONF_PATH and HBASE_CONF_DIR as equal to ".". I set my HBASE_CONF_DIR environment variable to "/etc/hbase/conf" and now the query seems to work OK.
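For anyone hitting the same thing, this is roughly what the check and the fix looked like (paths follow a typical HDP layout; adjust to your install):

```bash
# Show which config paths the Phoenix scripts resolve; a "." for the conf dir
# means the client falls back to default settings instead of Ambari's.
python phoenix_utils.py

# Point the Phoenix client at the Ambari-managed HBase config and retry the query.
export HBASE_CONF_DIR=/etc/hbase/conf
phoenix-sqlline.py hbasemaster:2181:/hbase-unsecure
```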
02-01-2016
07:31 PM
I tried that already, as I indicated in my original post. It did not work.
02-01-2016
05:25 PM
4 Kudos
I have a table in HBase created via Phoenix. The table has approximately 20 million records. I'm connecting to Phoenix via:

phoenix-sqlline.py hbasemaster:2181:/hbase-unsecure

I'm trying to run a count as follows:

select count(columnname) from tablename;

When I run that SQL, Phoenix reports a timeout:

org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
java.net.SocketTimeoutException: callTimeout=60000, callDuration=60317: row ....

I've tried changing hbase.rpc.timeout via Ambari, but that doesn't seem to be the issue. The default timeout in Ambari was set to 1m30s and I changed it to 2m, yet the timeout reported by Phoenix is 60s both before and after the change, so I don't think that's the culprit anyway. What setting do I need to change to allow for longer-running queries? Is there something else I should be looking at?
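For reference, the client-side properties usually cited for long Phoenix scans are hbase.rpc.timeout, hbase.client.scanner.timeout.period, and phoenix.query.timeoutMs in whichever hbase-site.xml the client actually reads (property names per the HBase/Phoenix docs; the config path below assumes a standard HDP layout). A quick way to see what the client would pick up:

```bash
# Sketch: check the usual timeout properties in the hbase-site.xml the Phoenix
# client is reading; if the file isn't the Ambari-managed one, changes in Ambari
# will never reach the client.
grep -A 1 -E "hbase.rpc.timeout|hbase.client.scanner.timeout.period|phoenix.query.timeoutMs" \
  /etc/hbase/conf/hbase-site.xml
```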
Labels:
- Apache HBase
- Apache Phoenix
01-30-2016
12:21 AM
@Josh Elser Thank you for the clarification regarding the clients. I didn't expect to see a difference in terms of how the SQL worked. Using the thin client did help to identify that there was a difference between the two.
01-30-2016
12:18 AM
1 Kudo
I had an issue with ZK. I stopped HBase via Ambari, ran "hbase clean --cleanZk", and then started HBase via Ambari. Now the Pig script is loading data. @Neeraj Sabharwal @Josh Elser Thanks for helping to resolve the issue via another post.
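The sequence, roughly, was the following; note that the clean command wipes HBase's ZooKeeper state, so only run it with HBase fully stopped:

```bash
# With HBase stopped (via Ambari), clear HBase's state in ZooKeeper.
# --cleanZk removes only the HBase znodes; data in HDFS is untouched.
hbase clean --cleanZk
# Then start HBase again from Ambari and rerun the Pig load.
```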
01-30-2016
12:15 AM
@Neeraj Sabharwal Good catch! Running "hbase clean --cleanZk" fixed the problem. Now the thick and thin clients are behaving the same.