Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4438 | 03-04-2018 08:18 PM
 | 4331 | 09-19-2017 04:01 PM
 | 1809 | 01-28-2017 10:31 PM
 | 976 | 12-08-2016 03:04 PM
03-28-2017
05:40 PM
3 Kudos
@PJ Are you setting "set hive.execution.engine=spark;"? Hive's execution engine in HDP only supports MapReduce and Tez; running Hive on Spark is not supported in HDP at this point in time. See https://issues.apache.org/jira/browse/HIVE-7292
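For reference, a quick way to check and reset the engine within a Hive session (these SET commands are standard Hive; Tez is the supported engine on HDP):

```
-- show the current engine for this session
SET hive.execution.engine;
-- switch back to a supported engine
SET hive.execution.engine=tez;
```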
06-08-2017
01:04 PM
I agree. HDP 2.6 is very solid. We have thoroughly tested it at 200k queries/hr (out of the box it handles roughly 40k/hr; to go higher you need to switch to a local metastore, increase the number of ApplicationMasters, etc.). HDP 2.5.3 was solid too, though technically LLAP was a tech preview in that release.
04-11-2017
06:57 AM
I discovered that specifically it is Python 2.6.6-2.6.9 that works; you can't use 2.7.x - it failed when looking for the 2.6.x86_64.1.0 binary or some such. Hence I needed to install 2.6.9 from source and use virtualenv to handle 2.7 alongside the installed 2.6.9. There's plenty of documentation on how to use virtualenv to manage dual Python environments.
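As a rough sketch of the dual-environment approach (the interpreter and paths here are illustrative; with virtualenv on Python 2 the first command would be something like `virtualenv -p /usr/bin/python2.7 venv`, and the built-in venv module on Python 3 follows the same pattern):

```shell
# Create an isolated environment so the system Python stays untouched
python3 -m venv venv

# Activate it: `python` now resolves inside ./venv
. ./venv/bin/activate
python -c 'import sys; print(sys.prefix)'

# Drop back to the system interpreter
deactivate
```

The point of the pattern is that nothing on the system path changes; only the shell session that sourced the activate script sees the alternate interpreter.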
03-08-2017
04:03 PM
This could occur if you have an overloaded NodeManager and the liveness-monitor expiration interval has been exceeded. Are you seeing any NodeManagers in a Lost state? What does resource consumption look like on your NodeManagers when this occurs?
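If the NodeManagers are healthy but slow to heartbeat, one setting worth checking is the liveness-monitor expiry interval (the value shown is the YARN default; verify against your own yarn-site.xml):

```
<!-- yarn-site.xml: how long the RM waits for a NM heartbeat
     before marking the NodeManager Lost (default 600000 ms = 10 min) -->
<property>
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>600000</value>
</property>
```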
04-06-2018
12:54 AM
Hi @PJ, could you please share what you ended up using in the end? Thanks
02-07-2017
06:34 PM
Here's an updated Falcon doc link for HDP 2.5.3.
02-07-2017
06:29 AM
1 Kudo
@PJ Regarding your question: "I understand what you are saying, but how can I change this to contact the active RM first? And how come this worked in 2.4.2 and not in 2.5.3 - there should be some parameter changes? Also, every time it contacts the ResourceManager it wastes some time checking which one is active."

You can make rm1 active using the failover command. Run the following as the 'yarn' user on any YARN client to fail over from rm2 to rm1:

yarn rmadmin -failover rm2 rm1

If you have automatic failover enabled and the command above fails for some reason, use the command below (if you are doing this in production, please be very careful or contact Hortonworks Support):

yarn rmadmin -transitionToActive rm1 --forceactive --forcemanual

Alternatively, if no jobs are running, simply restart rm2 from Ambari; rm1 will automatically become active if automatic failover is enabled.
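To confirm which ResourceManager is currently active before and after a failover, you can query each RM's HA state (rm1/rm2 are the RM IDs from yarn.resourcemanager.ha.rm-ids; adjust to match your cluster):

```
yarn rmadmin -getServiceState rm1   # prints "active" or "standby"
yarn rmadmin -getServiceState rm2
```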
10-26-2017
06:47 AM
Thanks for the information. I think "virtualenv venv. ./venv/bin/activate" should be two separate commands:

virtualenv venv
. ./venv/bin/activate
01-28-2017
10:31 PM
FYI: there was no default queue in the capacity scheduler, so I just added it, and it worked.
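For anyone hitting the same thing, a minimal default-queue definition in capacity-scheduler.xml looks roughly like this (the queue list and the 100% capacity are illustrative; adapt them to your existing queue hierarchy):

```
<!-- capacity-scheduler.xml: define a root.default queue -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value>
</property>
```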
01-24-2017
08:36 PM
1 Kudo
@PJ It's hard to tell, but I would suggest going with the hotfix if you don't have time to validate all your production jobs against the HDP 2.5.0 stack. If you do have time, I would suggest installing HDP 2.5.0 in your dev environment first and testing all your production jobs to ensure they all run without any issues.