Member since
09-27-2015
66
Posts
56
Kudos Received
15
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1329 | 02-13-2017 02:17 PM |
| | 2000 | 02-03-2017 05:23 PM |
| | 2126 | 01-27-2017 04:03 PM |
| | 1246 | 01-26-2017 12:17 PM |
| | 1824 | 09-28-2016 11:03 AM |
07-15-2020
11:12 PM
We have updated this article to remove links to videos that are no longer available.
02-03-2017
05:28 PM
Thanks! Is it somehow possible to influence the default behaviour of the standard auto-created Hive views? E.g., by adding a special property to the blueprint to force the auto-instantiated views to connect to 10501? The only thing I can think of is changing the ports in the blueprint config, e.g., HIVE_SERVER to port 10501 and HIVE_SERVER_INTERACTIVE to port 10001, but that does not seem like the most elegant solution.
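For reference, the port swap described above would be expressed in the blueprint's `configurations` section roughly as follows. This is only a sketch: the property names are the standard `hive.server2.thrift.port` keys in `hive-site` and `hive-interactive-site`, but the specific port values (and whether the auto-created views pick them up) come from the question itself, not from verified behaviour.

```json
{
  "configurations": [
    { "hive-site":             { "hive.server2.thrift.port": "10501" } },
    { "hive-interactive-site": { "hive.server2.thrift.port": "10001" } }
  ]
}
```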
07-11-2017
05:04 PM
Hi Olivier, is this approach considered an in-place upgrade of the OS? We need to upgrade from RHEL 6 to RHEL 7, and our systems team doesn't use any configuration-management tools to do an in-place upgrade. It sounds like the systems/hosts in the cluster will be wiped to do the OS upgrade. Do you have any information on how to do this while preserving the Hadoop data disks? We also need to upgrade from HDP 2.5.3 to 2.6.0, and our cluster is Kerberized. What's the best approach for upgrading the OS and HDP simultaneously?
06-15-2016
09:47 AM
5 Kudos
HWX doesn't recommend using LVM for the DataNodes (it adds overhead with no benefit). You typically create one partition per disk (no RAID) with the filesystem directly on top; the filesystem is typically ext4 or XFS.
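The layout above could be set up per disk along these lines. This is a sketch only: the device name /dev/sdb and mount point /grid/0 are assumptions to adjust per host, and these commands are destructive to the target disk.

```shell
# Illustrative only -- /dev/sdb and /grid/0 are assumptions; adjust per host.
# One partition spanning the whole disk, no RAID, no LVM.
parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%

# Put the filesystem directly on the partition (XFS here; ext4 works too).
mkfs.xfs -f /dev/sdb1

# Mount with noatime to cut unneeded metadata writes on DataNode disks.
mkdir -p /grid/0
mount -t xfs -o noatime /dev/sdb1 /grid/0
echo '/dev/sdb1 /grid/0 xfs noatime 0 0' >> /etc/fstab
```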
04-07-2016
10:12 PM
I set the guest IP address in the port-forwarding settings and restarted the VM; now it's working (I don't know why).
05-27-2016
02:22 PM
1 Kudo
I had the smartsense-hst-1.1.0 packages installed on CentOS 6 (with Oracle JDK 1.8.0_73) and was affected by this issue. To follow these instructions I had to replace /var/lib/smartsense/ with /usr/hdp/share/hst/ in the paths given above. E.g., instead of rm -f /var/lib/smartsense/hst-gateway/keys/*.crt I used rm -f /usr/hdp/share/hst/hst-gateway/keys/*.crt. Thanks for posting the solution.
02-15-2016
10:02 PM
The behavior I'm looking for is something like this:
1. Export all deltas for all configuration files beyond the settings that are built-in defaults for a Hortonworks installation. I believe these would be the entries with version numbers > 1 (is that correct?).
2. Import these into a newly built cluster, with a prompt for manual intervention whenever a delta includes a machine name, IP address, or port (the latter can probably be detected by a regex match on the property name).
3. As an audit tool, present environment (shell) scripts as unified diffs rather than a dump of the entire file at each revision.
Just a few ideas off the top of my head. There's no way this process can be totally automated, but I think it's possible to get very close.
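The first two steps above could be prototyped as pure filters over config-version records. A minimal sketch, assuming records shaped like Ambari's service-config-version items with a "version" field (that shape, and the heuristics for host/IP/port detection, are assumptions, not a verified Ambari contract):

```python
import re

# Heuristic patterns for step 2: property names that likely embed a machine
# name or port, and values that look like an IPv4 address. Both are
# illustrative assumptions, not an exhaustive rule set.
HOST_RE = re.compile(r"(host|address|port)", re.IGNORECASE)
IP_RE = re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b")

def deltas(versions):
    """Step 1: keep only configs changed after the stock install (version > 1)."""
    return [v for v in versions if v.get("version", 1) > 1]

def needs_review(prop_name, prop_value):
    """Step 2: flag properties likely to embed a machine name, IP, or port."""
    return bool(HOST_RE.search(prop_name) or IP_RE.search(str(prop_value)))
```

For example, `needs_review("hive.server2.thrift.port", "10000")` would flag the property by name, while a plain memory setting would pass through untouched.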
12-11-2015
02:43 AM
@Andrea D'Orio Thanks for sharing; indeed, we need information like this. Keep sharing 🙂
09-19-2017
07:19 AM
4 Kudos
From HDP 2.6 onwards, Hortonworks Data Platform is supported on IBM Power Systems. You can refer to the documentation below for installing/upgrading HDP on IBM Power: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/ch_Getting_Ready.html https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-upgrade-ppc/content/ambari_upgrade_guide-ppc.html