Member since: 10-02-2017
Posts: 116
Kudos Received: 3
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 544 | 07-18-2020 12:04 PM |
|  | 908 | 09-11-2019 01:14 PM |
|  | 1287 | 08-16-2019 08:17 AM |
|  | 3944 | 08-15-2019 12:23 PM |
|  | 2397 | 05-14-2019 08:48 AM |
12-22-2021
02:07 PM
Apache recently posted an update to the existing Log4j2 vulnerabilities already discussed here. From the Apache page:

"Apache Log4j2 versions 2.0-alpha1 through 2.16.0, excluding 2.12.3, did not protect from uncontrolled recursion from self-referential lookups. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data that contains a recursive lookup, resulting in a StackOverflowError that will terminate the process. This is also known as a DOS (Denial of Service) attack."

My company runs several private CDH 6.3 clusters, and we have already applied the original Log4j2 patch from https://github.com/cloudera/cloudera-scripts-for-log4j. Can anyone confirm whether CDH products are susceptible to CVE-2021-45105? This particular vulnerability applies only when the logging configuration uses a non-default Pattern Layout with a Context Lookup, and its mitigation differs from the one taken by the existing Log4j2 patch from Cloudera.

Apache Log4j security vulnerabilities: https://logging.apache.org/log4j/2.x/security.html#CVE-2021-45105
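As a first pass, I'm sweeping our nodes for that precondition with something like the following (a rough sketch; the search roots and file pattern are illustrative and would need to cover any custom service configs):

```bash
# Look for Thread Context / MDC lookups ( ${ctx:...} or $${ctx:...} ) in
# log4j2 pattern layouts; such a pattern is the precondition for CVE-2021-45105.
grep -RnE --include='log4j2*' '\$\$?\{ctx:' /etc /opt/cloudera 2>/dev/null
```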
12-22-2021
07:56 AM
There was a recent update from Apache (https://logging.apache.org/log4j/2.x/security.html#CVE-2021-45105): "Apache Log4j2 does not always protect from infinite recursion in lookup evaluation". This seems to imply that the mitigation strategy of removing references to the JndiLookup class will not address it. Is this a concern for CDH 6.3? If so, will the patch script be updated to address it?
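For context, my understanding is that the existing mitigation (which the Cloudera scripts automate) amounts to stripping the JNDI lookup class from each affected jar, roughly like this (jar path illustrative):

```bash
# Remove the JndiLookup class from a log4j-core jar. This addresses the
# JNDI-based CVEs but not the recursive-lookup DoS in CVE-2021-45105,
# hence my question about whether the script will be updated.
zip -q -d /path/to/log4j-core-2.x.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
```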
12-08-2021
12:58 PM
Answered my own question. I mistakenly thought distcp2 supported the -direct copy option, but it does not.
11-23-2021
10:55 AM
I'm running CDH 6.3 (Hadoop 3.0.0 and MR2). If what you are saying is true, shouldn't I be able to pass the "-direct" option to my hadoop distcp commands? Currently it fails to recognize this command-line option, which tells me distcp2 isn't being used.
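For reference, this is the sort of invocation I'm attempting (cluster paths illustrative):

```bash
# On CDH 6.3 (Hadoop 3.0.0) this fails with an unrecognized-option error,
# suggesting a DistCp version that predates -direct support.
hadoop distcp -direct hdfs://nn1:8020/data/src hdfs://nn2:8020/data/dst
```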
02-05-2021
08:47 AM
Regarding the updated paywall policy, specifically the following statement: "Furthermore, when a license expires, users will no longer be able to access the Cloudera Manager Admin console to manage clusters until a valid license is uploaded." We have several unlicensed clusters running. Do you interpret unlicensed clusters as clusters with expired licenses, in which case we stand to lose access to our Cloudera Manager consoles at any time now?
02-05-2021
06:45 AM
Unbelievable.
07-18-2020
12:04 PM
Solved: The Altus Director web console provides a way to update an environment's provider credentials, but only after all clusters/deployments have been deleted. To update the credentials for an existing deployment, I had to use Director's API: /api/d6.2/environments/{name}/provider/credentials ("Update provider credentials for a specific environment").
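For anyone else hitting this, the call I made looked roughly like the following. This is a sketch: the HTTP method, port, and body keys are my assumptions, so confirm the exact schema in your Director's API console before using it.

```bash
# Hypothetical example: update the AWS credentials for environment "myenv".
# Verify the request body schema against the Director API console first.
curl -u admin:password -X PUT \
  -H "Content-Type: application/json" \
  -d '{"accessKeyId": "AKIA...", "secretAccessKey": "<redacted>"}' \
  "http://director-host:7189/api/d6.2/environments/myenv/provider/credentials"
```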
07-18-2020
11:11 AM
During an overly ambitious cleanup effort, the IAM user / access key used by our Altus Director server was deleted from AWS, and now we can no longer manage our clusters. I would like to either update the access key used by Director or remove the key and force Director to rely on an IAM role. Please advise.
- Tags:
- access key
- aws
- IAM
09-11-2019
01:14 PM
We updated our Packer image build to skip the CIS rule for the more restrictive umask (referenced above), after which Hue successfully started during cluster first run.
08-16-2019
09:42 AM
Some additional information... We use Packer to build our images and apply Red Hat's CIS security policy for compliance reasons, which sets a more restrictive umask in /etc/bashrc:

```
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 027
else
    umask 022
fi
```

I'm thinking this doesn't affect our CDH5 deployments because those images already have the correct package versions installed; however, during CDH6 bootstrap, updated versions are required and are installed using the 027 umask, resulting in no permissions for non-root users. Does the cluster bootstrap process assume umask 022 for non-root users?
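A quick shell illustration of the effect (the /tmp paths are just for demonstration):

```bash
# Files created under umask 027 get no "other" permissions, matching the
# -rw-r----- listings on the CDH6 nodes; umask 022 yields -rw-r--r--.
umask 027 && touch /tmp/demo_027 && ls -l /tmp/demo_027
umask 022 && touch /tmp/demo_022 && ls -l /tmp/demo_022
```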
08-16-2019
09:06 AM
Using Cloudera Altus Director 6.3 to deploy CDH 6.3 to AWS. I submitted a similar post back in April, thinking the issue had since been resolved, but apparently not. We are ramping up CDH6 deployments again since our initial testing in April, and we are again seeing Hue fail to start during first run of CDH6 deployments because the Python lib folders are too restrictive. Interestingly, we do not see this issue when using the same CentOS 7.6-based AMI to deploy CDH 5.16.2 clusters. The Python lib permissions differ between the CDH5 and CDH6 deployments.

CDH5:

```
ls -l /usr/lib/python2.7/site-packages/six*
-rw-r--r--. 1 root root 29664 Jan  2  2015 /usr/lib/python2.7/site-packages/six.py
-rw-r--r--. 1 root root 29708 Nov 20  2015 /usr/lib/python2.7/site-packages/six.pyc
-rw-r--r--. 1 root root 29708 Nov 20  2015 /usr/lib/python2.7/site-packages/six.pyo

/usr/lib/python2.7/site-packages/six-1.9.0-py2.7.egg-info:
total 16
-rw-r--r--. 1 root root    1 Nov 20  2015 dependency_links.txt
-rw-r--r--. 1 root root 1419 Nov 20  2015 PKG-INFO
-rw-r--r--. 1 root root  249 Nov 20  2015 SOURCES.txt
-rw-r--r--. 1 root root    4 Nov 20  2015 top_level.txt
```

CDH6:

```
ls -l /usr/lib/python2.7/site-packages/six*
-rw-r-----. 1 root root 32452 Aug 15 18:09 /usr/lib/python2.7/site-packages/six.py
-rw-r-----. 1 root root 31828 Aug 15 18:09 /usr/lib/python2.7/site-packages/six.pyc

/usr/lib/python2.7/site-packages/six-1.12.0.dist-info:
total 24
-rw-r-----. 1 root root    4 Aug 15 18:09 INSTALLER
-rw-r-----. 1 root root 1066 Aug 15 18:09 LICENSE
-rw-r-----. 1 root root 1940 Aug 15 18:09 METADATA
-rw-r-----. 1 root root  537 Aug 15 18:09 RECORD
-rw-r-----. 1 root root    4 Aug 15 18:09 top_level.txt
-rw-r-----. 1 root root  110 Aug 15 18:09 WHEEL
```

Again, we use the same AWS AMI for the CDH5 and CDH6 deployments. For CDH6, I have to manually fix the permissions on all the nodes prior to first run in order to avoid Director declaring the deployment failed. This is really hampering our CDH6 rollout.
The failure from the Hue first-run log:

```
+ run_syncdb_and_migrate_subcommands
+ '[' 6 -ge 6 ']'
+ /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue makemigrations --noinput
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue", line 9, in <module>
    from pkg_resources import load_entry_point
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3241, in <module>
    @_call_aside
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3225, in _call_aside
    f(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3254, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws = cls()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 567, in __init__
    self.add_entry(entry)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 623, in add_entry
    for dist in find_distributions(entry, True):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2056, in find_on_path
    for dist in factory(fullpath):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2118, in distributions_from_metadata
    if len(os.listdir(path)) == 0:
OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/six-1.12.0.dist-info'
+ /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue migrate --fake-initial
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue", line 9, in <module>
    from pkg_resources import load_entry_point
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3241, in <module>
    @_call_aside
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3225, in _call_aside
    f(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3254, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws = cls()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 567, in __init__
    self.add_entry(entry)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 623, in add_entry
    for dist in find_distributions(entry, True):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2056, in find_on_path
    for dist in factory(fullpath):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2118, in distributions_from_metadata
    if len(os.listdir(path)) == 0:
OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/six-1.12.0.dist-info'
+ '[' dumpdata = runcpserver ']'
+ '[' syncdb = runcpserver ']'
+ '[' ldaptest = runcpserver ']'
+ exec /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue runcpserver
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/bin/hue", line 9, in <module>
    from pkg_resources import load_entry_point
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3241, in <module>
    @_call_aside
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3225, in _call_aside
    f(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3254, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws = cls()
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 567, in __init__
    self.add_entry(entry)
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 623, in add_entry
    for dist in find_distributions(entry, True):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2056, in find_on_path
    for dist in factory(fullpath):
  File "/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2118, in distributions_from_metadata
    if len(os.listdir(path)) == 0:
OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/six-1.12.0.dist-info'
```
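The manual fix I apply on each node before first run is along these lines (a sketch based on the path in the traceback; other packages on a given image may need the same treatment):

```bash
# Re-grant world read, plus execute on directories only (capital X), so the
# non-root Hue process can list the package metadata again.
chmod -R o+rX /usr/lib/python2.7/site-packages/six*
```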
Labels:
- Cloudera Manager
08-16-2019
08:17 AM
We discovered that there were sporadic network issues in the tunnel between Azure and AWS (our Director instance is in AWS). Our assumption is that this was causing transient connection issues between Director and the Azure instances. Declaring the issue solved for now, as we are no longer experiencing it.
08-15-2019
12:23 PM
OK, so it seems that the yum repo for Cloudera Manager was defined / pinned to 6.2 in the cluster definition, so of course Director did what it was being asked to do. Updated it to point to 6.3:

```
repository: "https://archive.cloudera.com/cm6/6.3/redhat7/yum/"
repositoryKeyUrl: "https://archive.cloudera.com/cm6/6.3/redhat7/yum/RPM-GPG-KEY-cloudera"
```
08-15-2019
09:54 AM
Any suggestions for why director is deploying conflicting versions of the agent and parcels? Seems like a fundamental issue.
08-15-2019
05:56 AM
We use templates to deploy our clusters. I've discovered why the parcel activation is failing: version 6.2 agents are being installed while version 6.3 parcels are being distributed / activated. This same template worked fine when 6 was originally released. In the template we have:

```
cluster {
  products {
    CDH: 6
  }
}
```

We are not defining a parcel URL, which I assume means we are using the default based on the version of Director, which is 6.3.0. I've tried defining a more specific version of CDH above (6.2, 6.3), but Director continues to deploy 6.2 agents and 6.3 parcels.
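One thing I'm considering is pinning both pieces explicitly. A hypothetical template fragment (the key names are from the Director sample configs as I remember them; verify against your Director version):

```
cluster {
  products {
    # Pin a minor version rather than the bare major "6":
    CDH: 6.3
  }
  # Explicit parcel repository so agents and parcels can't drift apart (URL illustrative):
  parcelRepositories: ["https://archive.cloudera.com/cdh6/6.3.0/parcels/"]
}
```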
08-14-2019
05:49 PM
I tried another deployment. Here are some entries in the Cloudera Manager server log that stand out, at least to me:

```
2019-08-15 00:09:29,753 WARN scm-web-93:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,754 INFO scm-web-93:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.158 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,755 WARN scm-web-177:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,757 INFO scm-web-177:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.246 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,771 WARN scm-web-282:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,774 INFO scm-web-282:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.244 over PT0.003S. Rate: 25.352 MB/s
2019-08-15 00:09:29,785 WARN scm-web-161:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,785 INFO scm-web-161:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.191 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,819 WARN scm-web-282:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,819 INFO scm-web-282:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.215 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,827 WARN scm-web-93:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,827 INFO scm-web-93:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.226 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,868 WARN scm-web-91:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,869 INFO scm-web-91:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.208 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:29,874 WARN scm-web-282:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,874 INFO scm-web-282:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.250 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,874 WARN scm-web-290:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,875 INFO scm-web-290:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.241 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:29,905 WARN scm-web-282:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,905 INFO scm-web-282:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.243 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,914 WARN scm-web-170:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,915 INFO scm-web-170:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.137 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:29,920 WARN scm-web-157:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,925 INFO scm-web-157:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.185 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:29,928 WARN scm-web-177:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,929 INFO scm-web-177:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.138 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:29,993 WARN scm-web-282:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,993 INFO scm-web-282:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.165 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:29,993 WARN scm-web-157:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:29,994 INFO scm-web-157:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.149 over PT0.001S. Rate: 76.056 MB/s
2019-08-15 00:09:30,018 WARN scm-web-157:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:30,018 INFO scm-web-157:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.150 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:30,030 WARN scm-web-94:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:30,030 INFO scm-web-94:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.188 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:30,034 INFO scm-web-89:com.cloudera.server.web.cmf.AuthenticationSuccessEventListener: Authentication success for user: 'admin' from 172.24.1.165
2019-08-15 00:09:30,048 WARN scm-web-157:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
2019-08-15 00:09:30,048 INFO scm-web-157:com.cloudera.server.web.cmf.ParcelController: Served parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent (79750 bytes) to 172.24.71.247 over PT0S. Rate: Infinity MB/s
2019-08-15 00:09:30,065 WARN scm-web-94:com.cloudera.server.web.cmf.ParcelController: No hash for parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.torrent
```

Then there are several INFO entries that seem to indicate parcels are being distributed and activated successfully, until we hit the following error, after which Altus Director fails the deployment:

```
/clusters/ez2/parcels/products/CDH/versions/6.3.0-1.cdh6.3.0.p0.1279813/commands/activate
2019-08-15 00:12:19,738 INFO scm-web-290:com.cloudera.parcel.components.ParcelManagerImpl: Activating parcel CDH:6.3.0-1.cdh6.3.0.p0.1279813 on cluster ez2
2019-08-15 00:12:19,756 INFO scm-web-290:com.cloudera.enterprise.AbstractWrappedEntityManager: Rolling back transaction that wasn't marked for rollback-only.
java.lang.Exception: Non-thrown exception for stack trace.
    at com.cloudera.enterprise.AbstractWrappedEntityManager.close(AbstractWrappedEntityManager.java:161)
    at com.cloudera.cmf.persist.CmfEntityManager.close(CmfEntityManager.java:367)
    at com.cloudera.server.web.cmf.AuthFilterEntityManager.close(AuthFilterEntityManager.java:191)
    at com.cloudera.api.dao.impl.ManagerDaoBase.runInNewTransaction(ManagerDaoBase.java:209)
    at com.cloudera.api.dao.impl.ManagerDaoBase.access$100(ManagerDaoBase.java:82)
    at com.cloudera.api.dao.impl.ManagerDaoBase$TransactionCallable.call(ManagerDaoBase.java:239)
    at com.cloudera.server.common.RetryWrapper.executeWithRetry(RetryWrapper.java:32)
    at com.cloudera.server.common.RetryUtils.executeWithRetryHelper(RetryUtils.java:210)
    at com.cloudera.server.common.RetryUtils.executeWithRetry(RetryUtils.java:131)
    at com.cloudera.api.dao.impl.ManagerDaoBase.runInNewTransactionWithRetry(ManagerDaoBase.java:169)
    at com.cloudera.api.dao.impl.ManagerDaoBase.invoke(ManagerDaoBase.java:272)
    at com.sun.proxy.$Proxy205.activate(Unknown Source)
    at com.cloudera.api.v3.impl.ParcelResourceImpl.activateCommand(ParcelResourceImpl.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
    at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:189)
    at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:261)
    at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:261)
    at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:261)
    at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:261)
    at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117)
    at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:99)
    at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59)
    at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96)
    at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
    at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
    at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:263)
    at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234)
    at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208)
    at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160)
    at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:189)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:299)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:218)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:665)
    at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:274)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:867)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623)
    at com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter.doFilter(JavaMelodyFacade.java:200)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:115)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:169)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:215)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at com.cloudera.api.ApiBasicAuthFilter.doFilter(ApiBasicAuthFilter.java:86)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331)
    at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214)
    at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177)
    at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347)
    at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.Server.handle(Server.java:502)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
    at com.cloudera.server.common.BoundedQueuedThreadPool$2.run(BoundedQueuedThreadPool.java:94)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
    at java.lang.Thread.run(Thread.java:748)
2019-08-15 00:12:19,759 INFO scm-web-290:com.cloudera.api.ApiExceptionMapper: Exception caught in API invocation. Msg:The version change by activating the parcel would cause an error
java.lang.IllegalArgumentException: The version change by activating the parcel would cause an error
```
- Tags:
- tried
08-14-2019
04:49 PM
The log entry from Altus Director I provided mirrors the error I saw on the manager node while tailing the logs there. Parcel distribution / activation seemed to be going fine until it hit the error I posted. Unfortunately, I've terminated the deployment, as this was our fourth attempt. I'm thinking of pinning the CDH version in the cluster bootstrap to 6.2 instead of 6 so that it doesn't grab the latest (6.3) parcels. If I hit the same issue, I will rerun my original config and provide more logs from the manager node.
08-14-2019
04:10 PM
Altus Director 6.3 bootstraps of CDH 6.3 clusters are failing due to failed parcel activation. We've successfully deployed earlier versions of CDH 6.x. On all nodes, the agent logs show the following error for the same parcel:

```
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['/usr/sbin/useradd', '-r', '-m', '-g', 'spark', '-K', 'UMASK=022', '--home', '/var/lib/spark', '--comment', 'Spark', '--shell', '/sbin/nologin', 'spark']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Ensuring correct file permissions for new parcel CDH-6.3.0-1.cdh6.3.0.p0.1279813.
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chown: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop/desktop.db hue hue
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chown', 'hue:hue', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop/desktop.db']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chmod: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop/desktop.db 0660
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chmod', '0660', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop/desktop.db']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chown: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop hue hue
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chown', 'hue:hue', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chmod: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop 0755
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chmod', '0755', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hue/desktop']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chown: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-yarn/bin/container-executor root yarn
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chown', 'root:yarn', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-yarn/bin/container-executor']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO chmod: /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-yarn/bin/container-executor 6050
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel INFO Executing command ['chmod', '6050', u'/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-yarn/bin/container-executor']
[14/Aug/2019 22:45:18 +0000] 31970 MainThread parcel ERROR Error while attempting to modify permissions of file '/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'.
Traceback (most recent call last):
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/parcel.py", line 520, in ensure_permissions
file = cmf.util.validate_and_open_fd(path, self.get_parcel_home(parcel))
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/util/__init__.py", line 358, in validate_and_open_fd
fd = os.open(path, flags)
OSError: [Errno 2] No such file or directory: '/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop-0.20-mapreduce/sbin/Linux/task-controller'
```

The Altus Director server log shows:

```
[2019-08-14 22:45:35.535 +0000] ERROR [p-b96f4ed12bad-DefaultBootstrapClusterJob] 3812e6e2-ba96-42c5-b53e-4cd2e99a54de POST /api/d6.2/environments/ez2/deployments/ez2/clusters com.cloudera.launchpad.bootstrap.cluster.ActivateParcel - com.cloudera.launchpad.pipeline.util.PipelineRunner: Attempt to execute job failed
com.cloudera.api.ext.ClouderaManagerException: API call to Cloudera Manager failed. Method=ParcelResource.activateCommand. Response Status Code: 400. Message: { "message" : "The version change by activating the parcel would cause an error"
```
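If anyone wants to reproduce the check, this is how I'm confirming the mismatch on a cluster node (standard commands; package name as shipped in the CM6 yum repo):

```bash
# Compare the installed agent RPM version with the distributed parcel versions:
rpm -q cloudera-manager-agent    # shows a 6.2.x build here
ls /opt/cloudera/parcels/        # shows CDH-6.3.0-... here
```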
- Tags:
- Parcels
Labels:
- Cloudera Manager
07-18-2019
09:25 AM
I'm in the middle of another deployment where I see random failures reported by Director when attempting to connect to port 22 of cluster instances during bootstrap. I'm seeing this for deployments where I haven't enabled public IPs, so it's not just for public-IP-enabled deployments. Each time, it's only one or two of the instances, and each time I can connect to the instance from the Director server just fine, even during the retry window when Director still claims it cannot connect. Not sure why Director thinks it can't connect when I clearly can from the same server. I've confirmed that forward/reverse DNS resolution is working. I basically have to deploy several times before I hit the sweet spot and the deployment completes. This kind of inconsistent behavior has been systemic with Azure deployments; I've experienced none of it with AWS deployments. I'll attempt to collect more logs.
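For what it's worth, this is the sort of check I run from the Director server while it's still claiming port 22 is unreachable (IP, key path, and username are from our setup and are illustrative); both succeed:

```bash
# TCP probe, then an actual SSH login attempt, against the "unreachable" instance:
nc -zv 10.0.14.15 22
ssh -o ConnectTimeout=5 -i /path/to/key centos@10.0.14.15 true
```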
07-18-2019
05:43 AM
Thanks for your reply. Bootstrap did fail in this case, which surprised me given that during the retry window I could SSH to the instance's private IP from Director. Also, Director had no problem connecting to the other two master nodes on which I enabled a public IP. The log entries I provided are repeated over and over until the retries are exhausted.
07-17-2019
07:13 AM
Just checking in to see if there are any updates. As a workaround we are using Azure MySQL backend, but we would prefer to use Azure Postgresql. Let me know if I can provide any additional info. Unfortunately it's a very repeatable issue. 🙂
07-15-2019
09:50 AM
Thank you for the clarification. D.
07-13-2019
02:50 PM
I'm using Altus Director 6.2.
Is the following declaration in your latest Altus Director documentation still accurate?
Changing the Instance Type is Not Supported in Azure
- Changing the instance type of an already-deployed VM is not supported in Azure.
- Changing the instance type of a VM through a tool external to Altus Director is not supported in Azure. You cannot, for example, use the Azure Portal to change the instance type. Altus Director is not updated when instance type changes are made with external tools, and your cluster will show errors in Altus Director.
I have performed both of the actions described in the bullet points above after we decided our existing workers were not sized properly for the amount of data being ingested / processed. Cloudera Director still shows the cluster in a "green" state. No complaints / errors / warnings thus far. Is this really not supported?
Reference: https://www.cloudera.com/documentation/director/latest/topics/director_get_started_azure_important_notes.html
07-11-2019
08:55 AM
Thank you for the response. To your point about bootstrapping/terminating: I have noticed during testing, when I'm cycling through many bootstraps and terminations, that Director can get into a state where it seems to no longer respond to bootstrap requests (from the command line). I'll typically let Director "cool down" for a few minutes and retry.
07-10-2019
11:31 AM
Assuming the latest version of Altus Director... are there any documented recommendations for the number of deployments/clusters that Director can reasonably be expected to manage?
Our clusters are fewer than 21 nodes each. Our Director server is running on an EC2 instance with 8 cores and 60 GB of RAM, with a dedicated RDS MariaDB database sized at 2 cores and 16 GB of RAM with 200 GB of storage. Thus far I'm managing around 14 clusters ranging in size from 9 to 20 nodes, with no issues. I'm just curious whether there is general guidance on this subject.
07-10-2019
08:49 AM
We have a need to enable public IPs on the master nodes due to a requirement in Azure when adding instances to a load balancer pool. Outside of this requirement, our cluster is accessed via private IPs only.
Within the cluster bootstrap file, we enable public IPs for the master instance groups. During deployment, Director throws an error similar to the following for some, but not all, master nodes:
```
', errorInfo=ErrorInfo{code=INSTANCE_SSH_PORT_UNAVAILABLE, properties={sshServiceEndpoints=[BaseServiceEndpoint{hostEndpoint=HostEndpoint{hostAddressString='10.0.14.15', hostAddress=Optional.of(/10.0.14.15)}, port=Optional.absent(), url=Optional.absent()}, BaseServiceEndpoint{hostEndpoint=HostEndpoint{hostAddressString='52.232.245.168', hostAddress=Optional.of(/52.232.245.168)}, port=Optional.absent(), url=Optional.absent()}]}, causes=[]}}
[2019-07-10 02:49:45.507 +0000] INFO [p-d4ed238466ce-WaitForSshToSucceed] 5f79b1c9-b7d6-41d0-a933-0cfff86cf6e4 POST /api/d6.2/environments/azuresb/deployments/azuresb-1/clusters com.cloudera.launchpad.bootstrap.WaitForServersUntilTime - com.cloudera.launchpad.bootstrap.WaitForServersUntilTime: Waiting until 2019-07-10T03:09:45.038Z for an accessible port on endpoints [10.0.14.15:22, 40.79.57.122:22]
```
From the logs, it looks like Director should try both IPs. I confirmed that I can SSH to the instance's private IP from Director, which leads me to believe it tried to SSH via the public IP, failed, and never tried the private IP. As I mentioned, this does not happen for all masters, even though all masters are configured identically, with both a private and a public IP.
Can someone confirm whether both IPs need to be accessible from the Director server for the installation to continue?
07-09-2019
10:47 AM
Cleaned up the redundant common-instanceTemplate section per the latest spec. Looks like Director is now accepting my custom image. However, now I'm running into the other limitation that's been a thorn in my side: a custom image can't have data disks defined the way it can in AWS. I believe the workaround is to define the disk count as 0, but I'll need to rebuild my worker Packer image to include the additional data disks for HDFS. Anyway, different subject. Thank you, Bill, for the guidance. I'll mark this issue as resolved.
07-09-2019
08:31 AM
Bill, I believe I misinterpreted your initial suggestion. I was focused entirely on the common-instanceTemplate section and didn't check the instances section. My initial interpretation of the "instances" section was the more instance-specific parameter definitions within the common-instanceTemplate section. Now I see what you are getting at. Let me try another test...