Member since: 06-13-2016
Posts: 19
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 155 | 07-07-2020 09:10 AM
 | 814 | 08-30-2016 07:09 AM
07-23-2020
08:19 AM
Well, I have the same error, but I don't see an extra trailing comma. I see yarn.resourcemanager.ha.rm-ids set to rm1, rm2.
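For comparison, the property is conventionally written as a plain comma-separated list with no spaces or trailing separator (a minimal sketch; rm1/rm2 are just the usual default-style ids):
yarn.resourcemanager.ha.rm-ids=rm1,rm2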
07-07-2020
09:10 AM
After spending a lot of effort, this got fixed by changing the password and making sure the DP UI user's password doesn't contain any special characters.
05-21-2020
07:51 AM
Hi,
After installing the Data Plane Profiler via Ambari, I checked the logs for issues. I see an impersonation-related error: "User '$N41000-DEALKL33P4LJ' not allowed to impersonate 'Some(dpprofiler)'".
Not sure what I am missing here.
2020-05-21 12:54:18,034 [WARN] from application in application-akka.actor.default-dispatcher-2 - Unknown message Failure(job.RestApiException: 403 : "User '$N41000-DEALKL33P4LJ' not allowed to impersonate 'Some(dpprofiler)'.")
2020-05-21 12:54:18,036 [ERROR] from application in ForkJoinPool-2-worker-3 - 403 : "User '$N41000-DEALKL33P4LJ' not allowed to impersonate 'Some(dpprofiler)'."
job.RestApiException: 403 : "User '$N41000-DEALKL33P4LJ' not allowed to impersonate 'Some(dpprofiler)'."
at job.WSResponseMapper$class.parseResponse(WSResponseMapper.scala:36)
at job.runner.LivyJobRunner.parseResponse(LivyJobRunner.scala:36)
at job.runner.LivyJobRunner$$anonfun$submitJobToLivy$2.apply(LivyJobRunner.scala:79)
at job.runner.LivyJobRunner$$anonfun$submitJobToLivy$2.apply(LivyJobRunner.scala:78)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
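For context, and purely as an assumption on my part (not something I have confirmed), a 403 like this from Livy is usually governed by which users Livy allows to impersonate others, i.e. settings along these lines in livy.conf and core-site.xml (the principal is just the one from my log; the "livy" service user name is a placeholder):
# livy.conf - allow the submitting principal to proxy as other users
livy.superusers=$N41000-DEALKL33P4LJ
# core-site.xml - Hadoop-level proxyuser rules for the Livy service user
hadoop.proxyuser.livy.hosts=*
hadoop.proxyuser.livy.groups=*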
Moreover, when I go to <DPS_HOST>:21900/profilers, I see a JSON response like the following.
[{"id":1,"name":"audit_profiler","version":"1.6.0.1.6.0.0-8","jobType":"Livy","assetType":"AuditLog","profilerConf":{"file":"/apps/dpprofiler/profilers/audit_profiler/1.6.0.1.6.0.0-8/lib/com.hortonworks.dataplane.audit-profiler-assembly-1.6.0.1.6.0.0-8.jar","proxyUser":"dpprofiler","className":"com.hortonworks.dataplane.profilers.auditprofiler.AuditProfiler","jars":[]},"user":"dpprofiler","description":
......
......
.....
"user":"dpprofiler","description":"Hive Sensitive Info Profiler","created":1590024132419,"metaManagementSchema":{"rootPath":"/user/dpprofiler/csp/kraptr","metaConfigs":[{"id":"dsl","metaType":"global","nodeName":"dsl","fileType":"json","extension":".kraptr_dsl.json","permission":"create"},{"id":"files","metaType":"global","nodeName":"files","fileType":"text","extension":".kraptr_meta_files.txt","permission":"create"}]},"dryRunSchema":{"outputPath":"/apps/dpprofiler/profilers/sensitive_info_profiler/1.6.0.1.6.0.0-8/lib/dryrun"}}]
However, when I click on the Data Steward Studio icon in the DPS portal, I am redirected to a blank page.
It redirects to DPS_HOST/null; clicking on DSS takes me to Host/null.
Any idea what is missing here?
Thanks in advance.
- Tags:
- Data Plane
- dps
- DSS
05-20-2020
06:50 PM
Hi, after installing the DP Profiler via Ambari and integrating it with DP services following https://docs.cloudera.com/HDPDocuments/DSS1/DSS-1.6.1/installation/content/dss_install_the_dss_service.html, I log in to Ambari -> Profiler Agent using my A/D credentials and land on the profilers page. To my surprise, this page is blank apart from the header "Profiler Agent". Not sure what I am missing here. Attaching a screenshot of the webpage after logging in. Thanks in advance.
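For completeness, a quick sanity check is to query the profiler agent REST endpoint directly and see whether it returns JSON at all (a sketch; 21900 is the port my instance listens on, and the host name is a placeholder):
curl -s http://<PROFILER_AGENT_HOST>:21900/profilers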
05-20-2020
05:57 PM
Never mind, cleaning the yum repo cache helped: $ yum clean all
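In full, on the affected host (a sketch; makecache is optional but forces fresh metadata before retrying the install from Ambari):
$ yum clean all
$ yum makecache fast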
05-20-2020
05:46 PM
I also set up another repo on the repo server and pointed to it in Ambari Versions. The first time I do this, I get the following error. I have spent many hours trying to fix this issue; any help will be highly appreciated. Thanks. Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/DPPROFILER/1.6.0/package/scripts/dpprofiler_agent.py", line 455, in <module>
DpProfilerAgent().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/DPPROFILER/1.6.0/package/scripts/dpprofiler_agent.py", line 54, in install
self.install_packages(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 849, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install profiler_agent', exited with code '1', message: 'Not using downloaded DSS-1.6-repo-1/repomd.xml because it is older than what we have:
Current : Fri Oct 4 05:35:14 2019
Downloaded: Thu Oct 3 08:15:18 2019
Error: Nothing to do
05-20-2020
04:32 PM
Hi,
I am trying to install the DP Profiler Agent using Ambari 2.7. While installing it, I am getting the following error:
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/DPPROFILER/1.6.0/package/scripts/dpprofiler_agent.py", line 455, in <module>
DpProfilerAgent().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/DPPROFILER/1.6.0/package/scripts/dpprofiler_agent.py", line 54, in install
self.install_packages(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 849, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install profiler_agent', exited with code '1', message: 'Error: Nothing to do
'
stdout:
2020-05-20 23:26:47,841 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2020-05-20 23:26:47,849 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2020-05-20 23:26:47,851 - Group['kms'] {}
2020-05-20 23:26:47,852 - Group['livy'] {}
2020-05-20 23:26:47,852 - Group['spark'] {}
2020-05-20 23:26:47,852 - Group['dpprofiler'] {}
2020-05-20 23:26:47,852 - Group['ranger'] {}
2020-05-20 23:26:47,853 - Group['hdfs'] {}
2020-05-20 23:26:47,853 - Group['hadoop'] {}
2020-05-20 23:26:47,853 - Group['users'] {}
2020-05-20 23:26:47,853 - Group['knox'] {}
2020-05-20 23:26:47,854 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,855 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,857 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,858 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,859 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-05-20 23:26:47,860 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,861 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,862 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2020-05-20 23:26:47,864 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-05-20 23:26:47,865 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2020-05-20 23:26:47,866 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2020-05-20 23:26:47,867 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2020-05-20 23:26:47,868 - User['dpprofiler'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,869 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-05-20 23:26:47,871 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,872 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2020-05-20 23:26:47,873 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,874 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,875 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,876 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-20 23:26:47,878 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2020-05-20 23:26:47,878 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-05-20 23:26:47,880 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-05-20 23:26:47,887 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-05-20 23:26:47,887 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2020-05-20 23:26:47,888 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-05-20 23:26:47,890 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-05-20 23:26:47,891 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2020-05-20 23:26:47,901 - call returned (0, '1019')
2020-05-20 23:26:47,901 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1019'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2020-05-20 23:26:47,907 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1019'] due to not_if
2020-05-20 23:26:47,908 - Group['hdfs'] {}
2020-05-20 23:26:47,908 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2020-05-20 23:26:47,909 - FS Type: HDFS
2020-05-20 23:26:47,909 - Directory['/etc/hadoop'] {'mode': 0755}
2020-05-20 23:26:47,930 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2020-05-20 23:26:47,931 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-05-20 23:26:47,953 - Repository['HDP-3.1-repo-1'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.1.4.0', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': ''}
2020-05-20 23:26:47,961 - Repository['DSS-1.6-repo-1'] {'base_url': 'http://18.132.155.36:9090/DSS-APP/centos7/1.6.0.0-4/', 'action': ['prepare'], 'components': [u'DSS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': ''}
2020-05-20 23:26:47,963 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': ''}
2020-05-20 23:26:47,965 - Repository['das-1.2.0-repo-1'] {'base_url': 'http://18.132.155.36:9090/DSS-APP/centos7/1.6.0.0-4/', 'action': ['prepare'], 'components': [u'dasbn-repo', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': ''}
2020-05-20 23:26:47,967 - Repository['HDP-3.1-GPL-repo-1'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.4.0', 'action': ['prepare'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': ''}
2020-05-20 23:26:47,969 - Repository[None] {'action': ['create']}
2020-05-20 23:26:47,970 - File['/tmp/tmppat0NY'] {'content': ...}
2020-05-20 23:26:47,970 - Writing File['/tmp/tmppat0NY'] because contents don't match
2020-05-20 23:26:47,971 - File['/tmp/tmpQ7EMIj'] {'content': StaticFile('/etc/yum.repos.d/ambari-hdp-1.repo')}
2020-05-20 23:26:47,971 - Writing File['/tmp/tmpQ7EMIj'] because contents don't match
2020-05-20 23:26:47,972 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-05-20 23:26:48,060 - Skipping installation of existing package unzip
2020-05-20 23:26:48,061 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-05-20 23:26:48,069 - Skipping installation of existing package curl
2020-05-20 23:26:48,069 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-05-20 23:26:48,080 - Skipping installation of existing package hdp-select
2020-05-20 23:26:48,087 - The repository with version 3.1.4.0-315 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2020-05-20 23:26:48,092 - Skipping stack-select on DPPROFILER because it does not exist in the stack-select package structure.
2020-05-20 23:26:48,314 - call['ambari-sudo.sh su dpprofiler -l -s /bin/bash -c 'date +%s | sha256sum | base64 | head -c 32 1>/tmp/tmpMnrNdd 2>/tmp/tmpHFniTv''] {'quiet': False}
2020-05-20 23:26:48,372 - call returned (0, '')
2020-05-20 23:26:48,372 - get_user_call_output returned (0, u'OGQyODg1YzVlNTNmZTc3YzhjM2M0ZjEz', u'')
2020-05-20 23:26:48,381 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2020-05-20 23:26:48,383 - Package['profiler_agent'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-05-20 23:26:48,465 - Installing package profiler_agent ('/usr/bin/yum -y install profiler_agent')
2020-05-20 23:26:49,673 - The repository with version 3.1.4.0-315 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2020-05-20 23:26:49,679 - Skipping stack-select on DPPROFILER because it does not exist in the stack-select package structure.
Command failed after 1 tries
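For completeness, these are the kinds of checks that can be run on the agent host to confirm the repo actually serves the package (a sketch; the repo id and repo file name are taken from the log above):
$ yum repolist enabled
$ yum --disablerepo='*' --enablerepo='DSS-1.6-repo-1' list available profiler_agent
$ cat /etc/yum.repos.d/ambari-hdp-1.repo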
Can anyone suggest what the issue is and how to resolve it?
Regards,
05-14-2020
04:09 PM
Hi, following the official documentation I logged in to https://<public-IP> using the Super User admin account. After that I configured A/D integration, and on the next page I added a few users/groups as admins, say user1, user2, etc. It then logged me out and asked me to log in again at https://<public-IP>. Now when I try to log in as user1 with the corresponding A/D password, I get an "invalid username/password" error. I tried user2 as well, which was also configured, and I also tried user1@realm.com. To my surprise, I am no longer able to log in with the super admin account either. I looked through the various container logs but with no success. Has anyone faced a similar issue? What am I missing here? Thanks in advance. Regards. Tagging @SumitraMenon @Saurav @abajwa
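For reference, the container logs were pulled roughly like this (a sketch; the container name is a placeholder, use whatever docker ps shows for the DataPlane app container):
docker ps
docker logs <dp-app-container> 2>&1 | tail -n 200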
04-04-2020
06:47 PM
1 Kudo
Hi, I went through the documentation of the DataPlane Platform and of Data Steward Studio, and I did not find any options for making these services highly available. One of our requirements for using a component in production is high availability. We would like to understand how we can make the DataPlane Platform and its applications like DSS/DLM/DAS highly available. Unlike other HDP components, these tools are installed on a single node in Docker, so we are also concerned about failover/DR. Could you please shed some light on how we can achieve disaster recovery and high availability for the above components/services? Many thanks, Ar @Shelton @jsensharma
04-29-2017
10:34 AM
@Kuldeep Kulkarni, thanks for the response. I have already mentioned that I sent a few emails to certification@hortonwork.com. I've also mentioned the auto-response from Zendesk, which includes the request/ticket IDs. Thanks.
04-28-2017
02:08 PM
Hi Hortonworks Certification team, @William Gonzalez, I cleared the exam on 26/March/2017 but have not received any communication from Hortonworks about the badge. After that I took and cleared HDPCA on 23/April; for HDPCA I received the digital badge, but I have yet to hear from Hortonworks about HCA. I also sent a few emails to certification at hortonwork dot com. Request IDs on the Zendesk HelpDesk: 5154, 5178. Can someone help? Regards, A
- Tags:
- certificate
- hca
02-23-2017
11:27 AM
Hi All, in our secured cluster we have HiveServer2 running as "hive". When we connect to Hive via Beeline, Ranger policies (defined via the Hive Ranger plugin) are respected, since the control flow goes through HiveServer2: it verifies whether the user has access to tables x, y, z, etc. Since we have impersonation set to false, the query is executed as the hive user, and the hive user has broader permissions on HDFS, so things work well in the Beeline-Hive scenario. Now, with Spark, when the end user connects to Hive using spark-shell or a Python shell, for example, we see that the connection goes directly to the Hive Metastore and not to HS2, so Ranger does not play its part. Also, all queries are executed as the end user, and obviously the end user does not have permission to access the files directly on HDFS, so Spark-Hive fails. I know it's a known pending issue, but for now we need a workaround so that we can give end users permission to access the data, without creating (or with minimal changes to) the existing HDFS access permissions. (This is on HDP 2.5.x.) Many thanks, Arpan
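For context, the kind of minimal change we could live with would be a narrow HDFS ACL grant on the relevant warehouse paths rather than reworking existing permissions, e.g. (a sketch; the path and group name are placeholders):
hdfs dfs -setfacl -R -m group:bi_users:r-x /apps/hive/warehouse/sales.db
hdfs dfs -getfacl /apps/hive/warehouse/sales.db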
01-16-2017
05:38 PM
Worked like a charm with the third-party CA cert as well. The only issue I faced at startup was having to change the owner of /usr/hdp/2.5.0.0-1245/zeppelin/webapps/ to zeppelin. Thank you.
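For anyone else hitting this, the change was simply (a sketch; your group may differ):
chown -R zeppelin:zeppelin /usr/hdp/2.5.0.0-1245/zeppelin/webapps/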
12-01-2016
05:06 PM
@Terry Stebbens Thanks, Terry, for the quick response. We are using a third-party CA (not self-signed certs). Currently, while generating the CSR, we give Common Name = {hostname}; $hostname yields abc-xyz-001.CompanyName.COM. If instead we give CN = *.CompanyName.COM, do we need to get a domain set up in DNS to handle this? Thanks, Arpan
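For reference, the wildcard CSR would be generated along these lines (a sketch with openssl; the key size and the C/O fields are placeholders):
openssl req -new -newkey rsa:2048 -nodes \
  -keyout wildcard.CompanyName.COM.key \
  -out wildcard.CompanyName.COM.csr \
  -subj "/C=XX/O=CompanyName/CN=*.CompanyName.COM"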
12-01-2016
04:34 PM
Hi all, currently we have a different host cert for each host of the cluster. Is it possible to configure a single SSL cert for all the hosts? (This is to avoid generating multiple CSRs and getting them signed.) 1) Is it possible? 2) If so, how? 3) Are there any security concerns? Regards, Arpan
08-30-2016
07:09 AM
This got fixed; it was a strange issue with casing.
08-30-2016
07:08 AM
Hi All, in our CDH production cluster we have set up Spark, and we plan to give access to all end users/developers/BI users, etc. But we learnt that any valid user, after getting their own Kerberos TGT, can get hold of a sqlContext (in a program or in the shell) and run any query against any secured database. This puts us in a critical situation, as we do not want to give blanket permission to everyone. We are looking for a solution or a workaround by which we can give secure access to sensitive tables/databases only to selected users. Failing that, we would like to remove/disable the SparkSQL context/feature for everyone. Any pointers in this direction will be very valuable. Thank you, Arpan
06-13-2016
04:37 AM
Guys, I am trying to run the SparkPi application in YARN cluster mode; here are the steps briefly. (This works for service users like spark, hdfs, hive, etc.)
- kinit using the spark headless keytab
- run the SparkPi example and note the application ID
- check the YARN logs using yarn logs -applicationId <ID>
I see the logs / Pi value, etc. Now follow these steps:
- kdestroy
- kinit adm_user (a user in Active Directory)
- check klist; the user has a valid TGT
- run the same SparkPi application in YARN cluster mode
- get the application ID from the history server or from the console
- execute yarn logs -applicationId <appID> from the above step
I see the following:
16/06/09 12:11:53 INFO impl.TimelineClientImpl: Timeline service address: http://host:8188/ws/v1/timeline/
16/06/09 12:11:54 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
/app-logs/<USER_WHO_WAS_USED_TO_KINIT>/logs/application_1465312809105_0024 does not have any log files.
I checked /logs/<KINIT_USER>/logs/<applicationID>, and indeed it does not have any logs. Could you please suggest what is missing? Thanks in advance, Arpan
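One thing worth trying here (an assumption on my side, not yet verified) is pointing the CLI at the submitting user's aggregated-log directory explicitly, and listing that directory on HDFS:
yarn logs -applicationId application_1465312809105_0024 -appOwner adm_user
hdfs dfs -ls /app-logs/adm_user/logs/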