Member since: 03-19-2016
Posts: 69
Kudos Received: 10
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2923 | 01-04-2017 07:30 PM
 | 5239 | 12-20-2016 02:30 AM
 | 1313 | 12-17-2016 06:13 PM
02-20-2017
05:29 AM
Thanks @Jay SenSharma. I was able to delete the service.
02-20-2017
03:30 AM
Thanks for the quick reply. Here are the logs from ambari-server:

20 Feb 2017 03:28:57,527 INFO [qtp-ambari-client-3389] ClusterImpl:2052 - Deleting service for cluster, clusterName=prod, serviceName=PRESTO
20 Feb 2017 03:28:57,528 INFO [qtp-ambari-client-3389] ServiceImpl:608 - Deleting all components for service, clusterName=prod, serviceName=PRESTO
20 Feb 2017 03:28:57,528 INFO [qtp-ambari-client-3389] ServiceImpl:574 - Deselecting config mapping for cluster, clusterId=2, configTypes=[]
20 Feb 2017 03:28:57,531 ERROR [qtp-ambari-client-3389] AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '))) AND (selected > 0))' at line 1
Error Code: 1064
Call: SELECT type_name, create_timestamp, cluster_id, selected, version_tag, user_name FROM clusterconfigmapping WHERE (((cluster_id = ?) AND (type_name IN ())) AND (selected > ?))
bind => [2 parameters bound]
Query: ReadAllQuery(referenceClass=ClusterConfigMappingEntity sql="SELECT type_name, create_timestamp, cluster_id, selected, version_tag, user_name FROM clusterconfigmapping WHERE (((cluster_id = ?) AND (type_name IN ?)) AND (selected > ?))")
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:682)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:558)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2002)
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeSelectCall(DatasourceCallQueryMechanism.java:299)
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectAllRows(DatasourceCallQueryMechanism.java:694)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRowsFromTable(ExpressionQueryMechanism.java:2738)
at org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRows(ExpressionQueryMechanism.java:2691)
at org.eclipse.persistence.queries.ReadAllQuery.executeObjectLevelReadQuery(ReadAllQuery.java:495)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:1168)
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:899)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1127)
at org.eclipse.persistence.queries.ReadAllQuery.execute(ReadAllQuery.java:403)
at org.eclipse.persistence.queries.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:1215)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1804)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1786)
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1751)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:258)
at org.eclipse.persistence.internal.jpa.QueryImpl.getResultList(QueryImpl.java:469)
at org.apache.ambari.server.orm.dao.DaoUtils.selectList(DaoUtils.java:62)
at org.apache.ambari.server.orm.dao.ClusterDAO.getSelectedConfigMappingByTypes(ClusterDAO.java:259)
at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
at org.apache.ambari.server.state.ServiceImpl.deleteAllServiceConfigs(ServiceImpl.java:577)
at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:118)
at org.apache.ambari.server.state.ServiceImpl.delete(ServiceImpl.java:680)
at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:128)
at org.apache.ambari.server.state.cluster.ClusterImpl.deleteService(ClusterImpl.java:2081)
at org.apache.ambari.server.state.cluster.ClusterImpl.deleteService(ClusterImpl.java:2060)
at org.apache.ambari.server.controller.internal.ServiceResourceProvider.deleteServices(ServiceResourceProvider.java:886)
at org.apache.ambari.server.controller.internal.ServiceResourceProvider$3.invoke(ServiceResourceProvider.java:247)
at org.apache.ambari.server.controller.internal.ServiceResourceProvider$3.invoke(ServiceResourceProvider.java:244)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:450)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.modifyResources(AbstractResourceProvider.java:331)
at org.apache.ambari.server.controller.internal.ServiceResourceProvider.deleteResources(ServiceResourceProvider.java:244)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.deleteResources(ClusterControllerImpl.java:330)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.delete(PersistenceManagerImpl.java:111)
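The failing call builds an IN () clause with an empty config-type list, which MySQL rejects as a syntax error. One way to confirm the service has no config mappings recorded is to query Ambari's backing database directly (a sketch only; the database name, credentials, and cluster_id = 2 are assumptions taken from the log above):

```shell
# Hypothetical check: list config mappings for cluster_id 2 (value from the log).
# No rows for the service's config types would explain the empty IN () list.
mysql -u ambari -p ambari -e \
  "SELECT type_name, selected FROM clusterconfigmapping WHERE cluster_id = 2;"
```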
02-20-2017
12:07 AM
I am trying to delete a service but am unable to. In the past I deleted services successfully, but this time it throws a weird error.

curl -u admin:xxxxxxxx -H "X-Requested-By: ambari" -X DELETE http://localhost:8080/api/v1/clusters/prod/services/PRESTO
{
"status": 500,
"message": "Server Error"
}

How do I delete the service?
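A sequence often suggested for this situation (a sketch only, assuming Ambari's v1 REST API conventions; the credentials, host, cluster, and service name are placeholders taken from the question) is to stop the service before issuing the DELETE:

```shell
# Stop the service first by setting its desired state to INSTALLED...
curl -u admin:password -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop PRESTO"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/prod/services/PRESTO

# ...then delete it.
curl -u admin:password -H "X-Requested-By: ambari" -X DELETE \
  http://localhost:8080/api/v1/clusters/prod/services/PRESTO
```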
Labels:
- Apache Ambari
01-04-2017
11:00 PM
I bumped into the same problem, and your solution helped me resolve it. Thanks!
01-04-2017
07:48 PM
Un-installing and re-installing resolved the problem.

yum remove ranger_2_4_*-admin
yum remove ranger_2_4_*-usersync
01-04-2017
07:30 PM
@Rahul Pathak
I don't see any ranger-admin or ranger-usersync folder under /usr/hdp/2.4.0.0-169/.
I noticed that in the installation steps, the message says "Skipping installation of existing package":

2017-01-04 18:43:28,703 - Skipping installation of existing package hdp-select
2017-01-04 18:43:28,825 - Package['ranger_2_4_*-admin'] {}
2017-01-04 18:43:28,913 - Skipping installation of existing package ranger_2_4_*-admin
2017-01-04 18:43:28,914 - Package['ranger_2_4_*-usersync'] {}
[root@usw2dbdpmn01 ~]# yum list installed |grep ranger
ranger_2_2_6_0_2800-hdfs-plugin.x86_64 0.4.0.2.2.6.0-2800.el6 @HDP-2.2
ranger_2_2_6_0_2800-hive-plugin.x86_64 0.4.0.2.2.6.0-2800.el6 @HDP-2.2
ranger_2_4_0_0_169-admin.x86_64 0.5.0.2.4.0.0-169.el6 @HDP-2.4
ranger_2_4_0_0_169-hdfs-plugin.x86_64 0.5.0.2.4.0.0-169.el6 @HDP-2.4.0.0
ranger_2_4_0_0_169-hive-plugin.x86_64 0.5.0.2.4.0.0-169.el6 @HDP-2.4.0.0
ranger_2_4_0_0_169-usersync.x86_64 0.5.0.2.4.0.0-169.el6 @HDP-2.4
ranger_2_4_0_0_169-yarn-plugin.x86_64 0.5.0.2.4.0.0-169.el6 @HDP-2.4.0.0
How do I un-install and re-install ranger-admin? Will running yum remove ranger_2_4_0_0_169-admin have any other adverse effects?
01-04-2017
07:11 PM
@Rahul Pathak Probably I should have framed the question better. I have the same question. When installing Ranger, it should have installed those packages on that path, but I don't see any error while installing the ranger-admin package.

2017-01-04 18:43:28,703 - Skipping installation of existing package hdp-select
2017-01-04 18:43:28,825 - Package['ranger_2_4_*-admin'] {}
2017-01-04 18:43:28,913 - Skipping installation of existing package ranger_2_4_*-admin
2017-01-04 18:43:28,914 - Package['ranger_2_4_*-usersync'] {}
01-04-2017
06:46 PM
Ranger admin is failing to install.
HDP version: 2.4
Ranger version: 0.5

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 124, in <module>
RangerAdmin().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 43, in install
setup_ranger_db()
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 186, in setup_ranger_db
sudo=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'cp --remove-destination /var/lib/ambari-agent/tmp/mysql-connector-java.jar /usr/hdp/current/ranger-admin/ews/lib' returned 1. cp: cannot create regular file `/usr/hdp/current/ranger-admin/ews/lib': No such file or directory
2017-01-04 18:43:28,496 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2017-01-04 18:43:28,497 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2017-01-04 18:43:28,497 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-01-04 18:43:28,519 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2017-01-04 18:43:28,519 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-01-04 18:43:28,540 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2017-01-04 18:43:28,540 - Ensuring that hadoop has the correct symlink structure
2017-01-04 18:43:28,540 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-01-04 18:43:28,542 - Group['spark'] {}
2017-01-04 18:43:28,543 - Group['ranger'] {}
2017-01-04 18:43:28,543 - Group['zeppelin'] {}
2017-01-04 18:43:28,544 - Group['hadoop'] {}
2017-01-04 18:43:28,544 - Group['users'] {}
2017-01-04 18:43:28,544 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,545 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,545 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-01-04 18:43:28,546 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,547 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-01-04 18:43:28,547 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2017-01-04 18:43:28,548 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-01-04 18:43:28,548 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,549 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,550 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-01-04 18:43:28,550 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,551 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,551 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,552 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,552 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-01-04 18:43:28,553 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-01-04 18:43:28,555 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-01-04 18:43:28,559 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-01-04 18:43:28,559 - Group['hdfs'] {}
2017-01-04 18:43:28,560 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-01-04 18:43:28,560 - Directory['/etc/hadoop'] {'mode': 0755}
2017-01-04 18:43:28,572 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-01-04 18:43:28,573 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2017-01-04 18:43:28,584 - Repository['HDP-2.4'] {'base_url': 'http://usw2dvdprp01.glassdoor.local/HDP-2.4.0.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-01-04 18:43:28,592 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://usw2dvdprp01.glassdoor.local/HDP-2.4.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-01-04 18:43:28,593 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://usw2dvdprp01.glassdoor.local/HDP-UTILS-1.1.0.20', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-01-04 18:43:28,596 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://usw2dvdprp01.glassdoor.local/HDP-UTILS-1.1.0.20\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-01-04 18:43:28,596 - Package['unzip'] {}
2017-01-04 18:43:28,684 - Skipping installation of existing package unzip
2017-01-04 18:43:28,684 - Package['curl'] {}
2017-01-04 18:43:28,694 - Skipping installation of existing package curl
2017-01-04 18:43:28,694 - Package['hdp-select'] {}
2017-01-04 18:43:28,703 - Skipping installation of existing package hdp-select
2017-01-04 18:43:28,825 - Package['ranger_2_4_*-admin'] {}
2017-01-04 18:43:28,913 - Skipping installation of existing package ranger_2_4_*-admin
2017-01-04 18:43:28,914 - Package['ranger_2_4_*-usersync'] {}
2017-01-04 18:43:28,923 - Skipping installation of existing package ranger_2_4_*-usersync
2017-01-04 18:43:28,926 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://usw2dbdpmn02.glassdoor.local:8080/resources//mysql-jdbc-driver.jar'), 'mode': 0644}
2017-01-04 18:43:28,927 - Not downloading the file from http://usw2dbdpmn02.glassdoor.local:8080/resources//mysql-jdbc-driver.jar, because /var/lib/ambari-agent/tmp/mysql-jdbc-driver.jar already exists
2017-01-04 18:43:28,930 - Directory['/usr/share/java'] {'recursive': True, 'mode': 0755, 'cd_access': 'a'}
2017-01-04 18:43:28,930 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/share/java/mysql-connector-java.jar')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2017-01-04 18:43:28,938 - File['/usr/share/java/mysql-connector-java.jar'] {'mode': 0644}
2017-01-04 18:43:28,938 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/current/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
Please help.
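The root cause in the traceback is the cp destination: /usr/hdp/current/ranger-admin/ews/lib does not exist. A quick way to see which path component is missing (a sketch; the paths are taken from the error message above):

```shell
# Report whether each expected path component exists on disk.
check_path() {
  if [ -d "$1" ]; then echo "present: $1"; else echo "missing: $1"; fi
}
check_path /usr/hdp/current/ranger-admin
check_path /usr/hdp/current/ranger-admin/ews/lib
```

If /usr/hdp/current/ranger-admin is missing even though yum reports the RPM as installed, removing and re-installing the package is what eventually recreated the directory in this thread.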
Labels:
- Apache Ranger
12-20-2016
02:30 AM
1 Kudo
I resolved the problem by adding this configuration in custom-hiveserver2-site.xml:

hive.security.authorization.sqlstd.confwhitelist.append=fs\.s3a\..*|fs\.s3n\..*
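Since the whitelist value is a regular expression, the pattern can be sanity-checked locally before restarting HiveServer2 (a sketch; grep -E only approximates the Java regex engine Hive uses, and the property names below are examples):

```shell
# The pattern appended to the whitelist in the answer above.
pattern='fs\.s3a\..*|fs\.s3n\..*'

# Prints "yes" if a property name would match the appended pattern, "no" otherwise.
is_whitelisted() {
  if echo "$1" | grep -qE "^(${pattern})$"; then echo yes; else echo no; fi
}

is_whitelisted fs.s3a.access.key      # yes: matched by fs\.s3a\..*
is_whitelisted hive.exec.scratchdir   # no: matched by neither alternative
```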
12-19-2016
10:34 PM
@Ramesh Mani This is just HiveServer2 configuration; the underlying file system is untouched. My expectation is that Hive should work as usual after enabling Ranger. Please correct me if my understanding is incorrect.