Member since
09-27-2015
66
Posts
56
Kudos Received
15
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1277 | 02-13-2017 02:17 PM
 | 1935 | 02-03-2017 05:23 PM
 | 2064 | 01-27-2017 04:03 PM
 | 1204 | 01-26-2017 12:17 PM
 | 1730 | 09-28-2016 11:03 AM
06-16-2016
02:11 PM
5 Kudos
The following process assumes that you are installing your HDP cluster with a configuration management tool such as Ansible, Puppet or Chef, and that you are deploying the cluster using an Ambari blueprint. If you are looking into automating your deployment, you might be interested in the excellent Ansible project from @Alex Bush: https://github.com/bushnoh/ansible-hadoop-asap

This process can be used for:
- Migrating from HDP 2.2 to 2.4 via a full reinstallation
- Upgrading the OS from RHEL 6 to RHEL 7
- OS boot from network and reinstallation of HDP

The following process has been tested to migrate from HDP 2.3 to HDP 2.4 on a kerberised cluster, and to reinstall an HDP 2.3 cluster. It should also work for HDP 2.5, as the HDFS version is consistent across those releases.

step 1 - Make a backup of your metastore DBs (Ranger, Hive and Oozie).

step 2 - Check that the namenode has a folder called namenode-formatted under dfs.namenode.name.dir. If you are using namenode HA, check on both namenodes (one will probably be missing it).

step 3 - Launch the reinstallation of your OS, making sure that the disks/folders used by HDP to store data are not reformatted. If you are deploying your OS with kickstart, add the --noformat option to the part lines for the disks concerned.

step 4 - Grab a coffee whilst the OS installation takes place.

step 5 - Following a successful OS installation, launch your automated deployment of HDP. It doesn't matter if you also upgrade Ambari at the same time.

step 6 - Grab a coffee whilst the HDP installation takes place.

step 7 - If your DB server has also been reinstalled as part of the process, stop the services (Hive, Ranger and Oozie) and restore the DBs. (NB: upon restart, the schema will automatically be upgraded if required.)

On HDP 2.3:

step 8 - All your services should already be available. If not, start them manually. Restart Hive, Ranger and Oozie.
step 9 - Congratulate yourself on a smooth upgrade.

On HDP 2.2:

step 8 - Start HDFS manually:

# Log in as hdfs
su - hdfs
# Start all journalnodes (on each journalnode host)
hdfs journalnode
# Start namenode in upgrade mode from command line
hdfs namenode -upgrade
# Start the second namenode
hdfs namenode -bootstrapStandby

step 9 - Start all services from Ambari except for the namenode (they should all start).

step 10 - Check that all your data is there and that you can access it (run a couple of known Hive, HBase, ... queries). If everything is correct, move to step 11. You won't be able to roll back afterwards, so make sure everything is working as you expect.

step 11 - Finalize the upgrade:

# Log in as hdfs
su - hdfs
# Run finalize command
hdfs dfsadmin -finalizeUpgrade
Finalize upgrade successful

step 12 - Restart all HDFS components via Ambari.

step 13 - Congratulate yourself on a smooth upgrade.
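Step 2 above can be sketched as a small pre-flight check. The helper below is only a sketch: the path you pass in must be your actual dfs.namenode.name.dir value (commonly /hadoop/hdfs/namenode on HDP, but that default is an assumption here).

```shell
# check_nn_marker: verify the namenode-formatted marker directory exists
# under dfs.namenode.name.dir (step 2). Run on each namenode host before
# reinstalling the OS. Argument: your dfs.namenode.name.dir value.
check_nn_marker() {
    if [ -d "$1/namenode-formatted" ]; then
        echo "OK: marker present in $1"
    else
        echo "MISSING: marker absent in $1 - fix before reinstalling"
    fi
}
```

Usage: run `check_nn_marker /hadoop/hdfs/namenode` on both namenodes before moving on to step 3.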
06-15-2016
09:47 AM
5 Kudos
HWX doesn't recommend using LVM for the datanodes (overhead and no benefit). You typically create one partition per disk (no RAID) with your filesystem directly on top. The filesystems are typically ext4 or xfs.
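As a sketch, that per-disk layout can be captured as generated /etc/fstab entries. The device names (/dev/sdb, /dev/sdc, ...), the /grid/N mount points, and the noatime option are illustrative assumptions, not HDP requirements:

```shell
# gen_fstab_entries: print one /etc/fstab line per data disk, assuming one
# partition per disk (e.g. /dev/sdb1) formatted as xfs and mounted at /grid/N.
gen_fstab_entries() {
    i=0
    for dev in "$@"; do
        printf '%s1 /grid/%d xfs defaults,noatime 0 0\n' "$dev" "$i"
        i=$((i + 1))
    done
}
```

Usage: `gen_fstab_entries /dev/sdb /dev/sdc >> /etc/fstab` (after partitioning, running mkfs.xfs on each partition, and creating the /grid/N directories).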
06-15-2016
07:10 AM
1 Kudo
HWX doesn't support individual component upgrades; you have to upgrade the full HDP platform. HDP 2.5 (ETA summer) is due to ship with Storm 1.0. If you want to upgrade ahead of the release, you probably want to look into the Apache Storm documentation.
05-27-2016
07:53 AM
1 Kudo
The normal workflow is to create the encryption zone ahead of time and to associate it with a user/group in Ranger KMS. As long as your NiFi workflow is running as an authorized user, it will be able to write into your encrypted HDFS folder.
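That workflow can be sketched as the commands below. The key name `nifikey`, the path `/data/encrypted`, and the `nifi` user are illustrative assumptions; these need a running HDFS with Ranger KMS, so they are shown as a transcript rather than a runnable script:

```
# Create an encryption key in the KMS (or via the Ranger admin UI)
hadoop key create nifikey

# Create the directory and turn it into an encryption zone
hdfs dfs -mkdir /data/encrypted
hdfs crypto -createZone -keyName nifikey -path /data/encrypted

# Make the NiFi user the owner of the folder; access to the key itself
# is granted through a Ranger KMS policy for the same user/group
hdfs dfs -chown nifi:hadoop /data/encrypted
```

With this in place, a NiFi flow writing to HDFS as the `nifi` user lands its data in the encryption zone transparently.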
04-07-2016
05:16 PM
Could you have a proxy configured in your web browser? Can you telnet to port 8080 from your laptop? It looks like something is listening on port 8080 within the VM.
04-07-2016
11:46 AM
Have you tried running the script start_ambari.sh? It's available in /root/.
03-29-2016
12:52 PM
6 Kudos
When using SmartSense 1.2 or below in conjunction with OpenJDK, you get the following error upon startup. It's a known issue which will be resolved in the next SmartSense version.

Traceback (most recent call last):
  File "/usr/sbin/hst-agent.py", line 420, in <module>
    main(sys.argv)
  File "/usr/sbin/hst-agent.py", line 397, in main
    setup(options)
  File "/usr/sbin/hst-agent.py", line 323, in setup
    server_hostname = get_server_hostname(server, tries, try_sleep, options.quiet)
  File "/usr/sbin/hst-agent.py", line 107, in get_server_hostname
    hostname = validate_server_hostname(default_hostname, tries, try_sleep)
  File "/usr/sbin/hst-agent.py", line 125, in validate_server_hostname
    elif not register_agent(server_hostname):
  File "/usr/sbin/hst-agent.py", line 143, in register_agent
    if not server_api.register_agent(agent_version):
  File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/ServerAPI.py", line 104, in register_agent
    content = self.call(request)
  File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/ServerAPI.py", line 52, in call
    self.cachedconnect = security.CachedHTTPSConnection(self.config)
  File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 111, in __init__
    self.connect()
  File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 116, in connect
    self.httpsconn.connect()
  File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 87, in connect
    raise err
ssl.SSLError: [Errno 8] _ssl.c:492: EOF occurred in violation of protocol
To fix this issue, you will need to change the SSL digest from md5 to sha256. Here are the required steps.

1. From Ambari, stop the SmartSense service (all components).

2. Back up the old server keys on the HST server host:
cp -rp /var/lib/smartsense/hst-server/keys /var/lib/smartsense/hst-server/keys.backup

3. Clean out the old keys on the HST server host:
rm -f /var/lib/smartsense/hst-server/keys/ca.key
rm -f /var/lib/smartsense/hst-server/keys/*.csr
rm -f /var/lib/smartsense/hst-server/keys/*.crt
rm -rf /var/lib/smartsense/hst-server/keys/db/*
mkdir /var/lib/smartsense/hst-server/keys/db/newcerts
touch /var/lib/smartsense/hst-server/keys/db/index.txt
echo 01 > /var/lib/smartsense/hst-server/keys/db/serial

4. Modify the default digest on the HST server host: edit the file /var/lib/smartsense/hst-server/keys/ca.config and change the line "default_md = md5" to "default_md = sha256".

5. Clean out the old keys on each HST agent host:
rm -f /var/lib/smartsense/hst-agent/keys/*

6. If using an HST gateway, stop the service and remove the certs on the gateway host:
hst gateway stop
rm -f /var/lib/smartsense/hst-gateway/keys/ca.key
rm -f /var/lib/smartsense/hst-gateway/keys/*.csr
rm -f /var/lib/smartsense/hst-gateway/keys/*.crt
rm -rf /var/lib/smartsense/hst-gateway/keys/db/*
mkdir /var/lib/smartsense/hst-gateway/keys/db/newcerts
touch /var/lib/smartsense/hst-gateway/keys/db/index.txt
echo 01 > /var/lib/smartsense/hst-gateway/keys/db/serial

7. If using an HST gateway, modify the default digest on the gateway host: edit the file /var/lib/smartsense/hst-gateway/keys/ca.config and change the line "default_md = md5" to "default_md = sha256".

8. If using an HST gateway, remove the old certs on the HST server:
rm -f /var/lib/smartsense/hst-gateway-client/keys

9. If using an HST gateway, restart the service on the gateway host:
hst gateway start

10. Restart the SmartSense service from Ambari (all components) and verify that both the Ambari SmartSense service and the SmartSense view show the correct number of agents registered.
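The "change default_md" edit can also be done non-interactively with sed. This helper is a sketch: it assumes the file contains the line as quoted above, and that you already made the keys.backup copy (sed -i rewrites the file in place).

```shell
# update_digest: switch the default SSL digest from md5 to sha256 in a
# SmartSense ca.config file, editing the file in place.
# Argument: the path to ca.config.
update_digest() {
    sed -i 's/^default_md[[:space:]]*=[[:space:]]*md5$/default_md = sha256/' "$1"
}
```

Usage: `update_digest /var/lib/smartsense/hst-server/keys/ca.config` (and the hst-gateway ca.config if you use a gateway).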
01-28-2016
08:42 AM
You seem to be using a self-signed certificate, so your web browser doesn't trust it. You need to import your local CA into your web browser, and the error will go away.
01-22-2016
06:24 AM
1 Kudo
Instead of using a symlink, could you try a bind mount, as it won't impact the folder permissions? For example: mount -o bind /p01/app/had /usr/hdp. If it works, you will need to add it to /etc/fstab.
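If the bind mount works, the matching /etc/fstab entry (using the same example paths) would look like this:

```
/p01/app/had  /usr/hdp  none  bind  0  0
```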
01-18-2016
09:43 AM
Switching off ambari-agent is a bit overkill: it will be stopped by the init system during the reboot and restarted automatically in the same way. For your datanodes, you may want to do one at a time to limit the downtime.