09-01-2016
10:13 PM
2 Kudos
We're going through this process now, migrating a non-trivial amount of data from an older cluster onto a new cluster and environment. We have a couple of requirements and constraints that limited some of the options:

- The datanodes on the 2 clusters don't have network connectivity. Each cluster resides in its own private firewalled network. (As an added complication, we also use the same hostnames in each of the two private environments.) distcp scales well, but it requires the datanodes in the 2 clusters to be able to communicate directly.
- We have different security models in the two clusters. The old cluster uses simple authentication; the new cluster uses Kerberos. I've found that getting some of the tools to work with 2 different authentication models can be difficult.
- I want to preserve the file metadata from the old cluster on the new cluster - e.g. file create time, ownership, file system permissions. Some of the options can move the data from the source cluster, but they write 'new' files on the target cluster. The old cluster has been running for around 2 years, so there's a lot of useful information in those file timestamps.
- I need to perform a near-live migration. I have to keep the old cluster running in parallel while migrating data and users to the new cluster; I can't just cut access to the old cluster.

After trying a number of tools and combinations, including WebHDFS and Knox, we've settled on the following:
First, we export the old cluster via NFS gateways. We lock the NFS access controls down so that only the edge servers on the new cluster can mount the HDFS NFS volume. The edge servers in our target cluster are Airflow workers running as a grid, and we've created a source NFS gateway for each target edge server / Airflow worker, enabling a degree of scale-out - not as good as distcp scale-out, but better than a single-point pipe.
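For anyone following the same path, the gateway lock-down and the mount on an edge server look roughly like this. The subnet, gateway hostname and mount point are made up for illustration, and the exact placement of the access-control property can vary by Hadoop version/distribution, so treat it as a sketch rather than our exact config:

  # On the old cluster's NFS gateway hosts, restrict which clients may
  # mount the export (hypothetical subnet for the new cluster's edge nodes):
  #
  #   <property>
  #     <name>nfs.exports.allowed.hosts</name>
  #     <value>10.20.30.0/24 rw</value>
  #   </property>

  # On each new-cluster edge server / Airflow worker, mount the old
  # cluster's HDFS via its NFS gateway (hostname and mount point are
  # placeholders; the mount options are the ones the HDFS NFS gateway
  # documentation recommends):
  sudo mkdir -p /mnt/old_cluster_hdfs
  sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync \
      nfsgw01.oldcluster.internal:/ /mnt/old_cluster_hdfs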
From there we run good old fashioned hdfs dfs -copyFromLocal -p <old_cluster_nfs_dir> <new_cluster_hdfs_dir>. The -p flag enables us to preserve the file timestamps as well as ownerships.
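Concretely, each copy job boils down to something like this (the dataset path is a placeholder, and the destination is a plain HDFS path because the command runs on a new-cluster edge node where the new cluster is the default filesystem):

  # -p preserves timestamps, ownership and permissions; preserving
  # arbitrary ownership generally means running as an HDFS superuser
  # on the target cluster.
  hdfs dfs -copyFromLocal -p \
      /mnt/old_cluster_hdfs/data/warehouse/some_dataset \
      /data/warehouse/some_dataset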
As part of managing the migration process, we're also making use of HDFS snapshots on both source and target to enable consistency management. Our migration jobs take snapshots at the beginning and end of each migration job and issue delta or difference reports to identify whether data was modified, and possibly missed, during the migration. I'm expecting that some of our larger data sets will take hours to complete - for the largest few, possibly more than 24 hours. In order to perform the snapshot management we also added some additional wrapper code: WebHDFS can be used to create and list snapshots, but it doesn't yet have an operation for returning a snapshot difference report.
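The underlying HDFS operations look roughly like this (the directory and snapshot names are placeholders); hdfs snapshotDiff is the CLI that fills the gap WebHDFS leaves:

  # One-time: allow snapshots on the directory being migrated (admin op).
  hdfs dfsadmin -allowSnapshot /data/warehouse/some_dataset

  # Snapshot at the start and end of a migration job (names are arbitrary).
  hdfs dfs -createSnapshot /data/warehouse/some_dataset migration_start
  # ... copy runs here ...
  hdfs dfs -createSnapshot /data/warehouse/some_dataset migration_end

  # Anything created, modified or deleted between the two snapshots may
  # have been missed by the copy and needs a follow-up pass.
  hdfs snapshotDiff /data/warehouse/some_dataset migration_start migration_end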
For the Hive metadata, the majority of our Hive DDL exists in git/source code control, and we're actually using this migration as an opportunity to enforce this for our production objects. For end-user objects, e.g. analysts' data labs, we're exporting the DDL on the old cluster and replaying it on the new cluster - with tweaks for any reserved-word collisions (a rough sketch of that export/replay is below). We don't have HBase operating on our old cluster, so I didn't have to come up with a solution for that problem.
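A minimal sketch of that DDL export/replay, assuming SHOW CREATE TABLE output is a good enough starting point - the database name, JDBC URL and principals are placeholders, and the reserved-word tweaks are still a manual edit in between:

  # On the old cluster (simple auth): dump the DDL for every table in a database.
  DB=analyst_lab
  for T in $(hive -e "USE ${DB}; SHOW TABLES;"); do
      hive -e "USE ${DB}; SHOW CREATE TABLE ${T};" >> ${DB}_ddl.hql
      echo ";" >> ${DB}_ddl.hql
  done

  # Hand-edit ${DB}_ddl.hql (reserved words, LOCATION paths, etc.), then
  # replay it against the new, kerberized cluster:
  kinit analyst@EXAMPLE.COM
  beeline -u "jdbc:hive2://hive.newcluster.internal:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
      -f ${DB}_ddl.hql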
06-04-2016
12:15 AM
1 Kudo
If you're looking to improve access to back-end service UIs for the ops team, as opposed to exposing the services to the larger user base, we make use of ssh tunneling via our admin jump hosts to effectively create personal SOCKS proxies for each ops/admin user. We then use one of the dynamic proxy config plugins in Chrome or Firefox to direct requests to those services based on hostname, or in our case the domain of the Hadoop environment. This has the advantage of being very transparent, and service URLs all tend to resolve correctly, including https-based services. The disadvantage is that the person using this approach needs to know how to set up an ssh tunnel and how to configure their browser to use that tunnel for the Hadoop services.
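For reference, the per-user setup is roughly this (the jump host, port and domain are placeholders):

  # Open a dynamic (SOCKS) tunnel through the admin jump host.
  # -D: local SOCKS listener, -N: no remote command, -q: quiet.
  ssh -D 1080 -N -q admin_user@jumphost.hadoop-env.example.com

  # Then, in a proxy-switcher plugin (FoxyProxy, SwitchyOmega, ...),
  # send only *.hadoop-env.example.com through SOCKS5 localhost:1080
  # and let everything else go direct.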