Member since: 12-29-2018
Posts: 20
Kudos Received: 3
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 1846 | 05-22-2019 06:28 AM
05-22-2019
06:28 AM
2 Kudos
The issue was resolved by putting the MySQL JAR into /opt/cloudera/parcels/CDH/lib/hive/lib. Thanks, all!
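For anyone landing here later, a minimal sketch of that fix, assuming the standard CDH parcel layout and a connector JAR already downloaded to /usr/share/java (that download path is an assumption):

```bash
# Symlink (or copy) the MySQL connector into Hive's lib directory so the
# metastore can load com.mysql.jdbc.Driver from its classpath.
# /usr/share/java/mysql-connector-java.jar is an assumed download location.
ln -s /usr/share/java/mysql-connector-java.jar \
      /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java.jar

# Verify the JAR is now where Hive looks.
ls -l /opt/cloudera/parcels/CDH/lib/hive/lib/mysql-connector-java.jar
```

Hive typically needs a restart from Cloudera Manager afterwards so the Metastore picks up the new classpath.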
05-22-2019
05:45 AM
Guys, I've faced this error during Hive installation. Deep log searching gave me this:

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.

The main question is: where do I configure the CLASSPATH? I've put the MySQL JAR into:

# ls /opt/cloudera/parcels/CDH-6.1.1-1.cdh6.1.1.p0.875250/jars/mysql*
/opt/cloudera/parcels/CDH-6.1.1-1.cdh6.1.1.p0.875250/jars/mysql-connector-java-5.1.46.jar
/opt/cloudera/parcels/CDH-6.1.1-1.cdh6.1.1.p0.875250/jars/mysql-connector-java.jar

But no luck. Can anyone help me with this? Thanks.
04-08-2019
03:04 AM
Good day guys, I have a problem with the Solr server, which is not starting.
Version : Cloudera Express 6.1.1 (#853290 built by jenkins on 20190129-0327 git: 95cef145d6f0659643b81d020b27a2de6c07f5b6)
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 process INFO [223-solr-SOLR_SERVER] Refreshing process files: None
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 parcel INFO prepare_environment begin: {u'CDH': u'6.1.1-1.cdh6.1.1.p0.875250'}, [u'cdh', u'solr'], [u'hdfs-client-plugin', u'solr-plugin']
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 parcel INFO The following requested parcels are not available: {}
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 parcel INFO Obtained tags ['cdh', 'impala', 'sentry', 'solr', 'spark', 'kafka', 'kudu'] for parcel CDH
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 parcel INFO prepare_environment end: {'CDH': '6.1.1-1.cdh6.1.1.p0.875250'}
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 __init__ INFO Extracted 15 files and 0 dirs to /var/run/cloudera-scm-agent/process/223-solr-SOLR_SERVER.
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 process INFO [223-solr-SOLR_SERVER] Evaluating resource: directory
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 process INFO [223-solr-SOLR_SERVER] Evaluating resource: tcp_listen
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 process INFO reading limits: {u'limit_memlock': None, u'limit_fds': None}
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 process INFO [223-solr-SOLR_SERVER] Launching process. one-off False, command solr/solr.sh, args []
[08/Apr/2019 13:52:29 +0000] 9271 Thread-14 supervisor WARNING Failed while getting process info. Retrying. (<Fault 10: 'BAD_NAME: 223-solr-SOLR_SERVER'>)
[08/Apr/2019 13:52:31 +0000] 9271 Thread-14 supervisor INFO Triggering supervisord update.
[08/Apr/2019 13:52:31 +0000] 9271 Thread-14 process INFO Begin audit plugin refresh
[08/Apr/2019 13:52:31 +0000] 9271 Thread-14 process INFO Begin metadata plugin refresh
[08/Apr/2019 13:52:31 +0000] 9271 Thread-14 process INFO Begin profile plugin refresh
[08/Apr/2019 13:52:32 +0000] 9271 Thread-14 process INFO Begin monitor refresh.
[08/Apr/2019 13:52:32 +0000] 9271 Thread-14 abstract_monitor INFO Refreshing SolrServerMonitor for None
[08/Apr/2019 13:52:32 +0000] 9271 Thread-14 daemon INFO New monitor: (<cmf.monitor.solrserver.SolrServerMonitor object at 0x7f20e7c6f2d0>,)
[08/Apr/2019 13:52:32 +0000] 9271 Thread-14 process INFO Daemon refresh complete for process 223-solr-SOLR_SERVER.
[08/Apr/2019 13:52:33 +0000] 9271 Metadata-Plugin navigator_plugin INFO Pipelines updated for Metadata Plugin: []
[08/Apr/2019 13:52:33 +0000] 9271 Profile-Plugin navigator_plugin INFO Pipelines updated for Profile Plugin: set([])
[08/Apr/2019 13:52:33 +0000] 9271 Audit-Plugin navigator_plugin INFO Pipelines updated for Audit Plugin: []
Can anyone help me solve this issue? Thanks.
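Since the agent log above shows the supervisord handoff but not why Solr dies, a hedged sketch of where one might look next, assuming default CDH 6 log locations:

```bash
# The agent log only records the launch; the real failure is usually in the
# role's own output inside the process directory shown above.
ls /var/run/cloudera-scm-agent/process/223-solr-SOLR_SERVER/logs/
tail -n 100 /var/run/cloudera-scm-agent/process/223-solr-SOLR_SERVER/logs/stderr.log

# Solr's own server log (default log dir is an assumption; check the role config).
tail -n 100 /var/log/solr/*.log
```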
03-28-2019
04:07 AM
Thanks for your reply. If I understand correctly, the size needed on each server depends on the replication factor I set; is there any table showing how replication factor maps to disk sizing? I also wanted to ask about the resources on each node. In summary, I need some documentation covering replication factor, sizing, and RAM usage.
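A back-of-the-envelope sketch of the usual sizing arithmetic, using the numbers from the question below (20 TB of data, 10 servers) and an assumed default replication factor of 3 with an assumed 25% headroom margin:

```bash
# Raw HDFS capacity = data size x replication factor.
# 20 TB x 3 replicas = 60 TB raw, spread across 10 DataNodes = 6 TB per node,
# plus headroom for scratch space, logs, and growth (25% is an assumed margin).
DATA_TB=20; REPLICATION=3; NODES=10
RAW=$((DATA_TB * REPLICATION))                        # 60 TB raw
PER_NODE=$(awk "BEGIN {print $RAW / $NODES * 1.25}")  # 7.5 TB per node
echo "raw: ${RAW} TB, per node with 25% headroom: ${PER_NODE} TB"
```

So no single node needs to hold the full 20 TB; each node only needs its share of the replicated total.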
03-28-2019
12:15 AM
Good day guys, I'm a newbie with Cloudera and wanted to ask two questions. 1) I have 20 TB of data to migrate to 10 servers; do I need 20 TB of disk on each server? 2) How do I organize the HDFS roles (NameNode, DataNode, SecondaryNameNode) correctly across those 10 servers? Thanks, I hope to receive an answer very soon. :)
12-29-2018
08:59 AM
1 Kudo
Hi guys, I'm facing trouble with Hue on Cloudera 6.1: trying to create a new database fails. I've never faced this before; could anybody help me with it, or recommend some logs where to dig? From the Hive console everything is fine:

hive> create database bank;
OK
Time taken: 0.122 seconds
hive> drop database bank;
OK
Time taken: 0.12 seconds
hive>

Thanks.
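Since the post asks which logs to dig into, a hedged sketch, assuming CDH's default Hue and Hive log locations:

```bash
# Hue's own logs (default CDH location; adjust if your install differs).
tail -n 200 /var/log/hue/error.log
tail -n 200 /var/log/hue/runcpserver.log

# Compare with what HiveServer2 logs when Hue connects (the filename pattern
# is a CDH convention and may differ on your host).
tail -n 200 /var/log/hive/hadoop-cmf-hive-HIVESERVER2-*.log.out
```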
01-25-2018
12:21 PM
@Sankaru Thumuluru I mentioned that this issue occurs only with HDP 2.6.x; I later installed 2.5 and the installation was clean... And yes, I have only current versions. I think there is some issue with the scripts, e.g.:

2018-01-25 16:07:32,156 - The 'accumulo-client' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 61, in <module>
AccumuloClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 34, in install
self.configure(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 41, in configure
setup_conf_dir(name='client')
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_configuration.py", line 32, in setup_conf_dir
create_parents = True
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 185, in action_create
sudo.makedirs(path, self.resource.mode or 0755)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 107, in makedirs
raise Fail("Cannot create directory '{0}' as '{1}' is a broken symlink".format(path, dirname))
resource_management.core.exceptions.Fail: Cannot create directory '/usr/hdp/current/accumulo-client/conf' as '/usr/hdp/current/accumulo-client' is a broken symlink
stdout: /var/lib/ambari-agent/data/output-566.txt
2018-01-25 16:06:11,928 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-01-25 16:06:11,934 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-01-25 16:06:11,935 - Group['livy'] {}
2018-01-25 16:06:11,938 - Group['spark'] {}
2018-01-25 16:06:11,939 - Adding group Group['spark']
2018-01-25 16:06:11,967 - Group['hdfs'] {}
2018-01-25 16:06:11,968 - Adding group Group['hdfs']
2018-01-25 16:06:11,987 - Group['zeppelin'] {}
2018-01-25 16:06:11,987 - Adding group Group['zeppelin']
2018-01-25 16:06:12,005 - Group['hadoop'] {}
2018-01-25 16:06:12,006 - Group['users'] {}
2018-01-25 16:06:12,006 - Group['knox'] {}
2018-01-25 16:06:12,007 - Adding group Group['knox']
2018-01-25 16:06:12,025 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,026 - Adding user User['hive']
2018-01-25 16:06:12,055 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,055 - Adding user User['storm']
2018-01-25 16:06:12,124 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,125 - Adding user User['infra-solr']
2018-01-25 16:06:12,180 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,181 - Adding user User['zookeeper']
2018-01-25 16:06:12,215 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,216 - Adding user User['atlas']
2018-01-25 16:06:12,252 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-01-25 16:06:12,253 - Adding user User['oozie']
2018-01-25 16:06:12,289 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,290 - Adding user User['ams']
2018-01-25 16:06:12,323 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-01-25 16:06:12,324 - Adding user User['falcon']
2018-01-25 16:06:12,360 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-01-25 16:06:12,360 - Adding user User['tez']
2018-01-25 16:06:12,394 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-01-25 16:06:12,394 - Adding user User['zeppelin']
2018-01-25 16:06:12,428 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,429 - Adding user User['accumulo']
2018-01-25 16:06:12,464 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,467 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,467 - Adding user User['spark']
2018-01-25 16:06:12,504 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-01-25 16:06:12,504 - Adding user User['ambari-qa']
2018-01-25 16:06:12,538 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,539 - Adding user User['flume']
2018-01-25 16:06:12,581 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,582 - Adding user User['kafka']
2018-01-25 16:06:12,618 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-01-25 16:06:12,619 - Adding user User['hdfs']
2018-01-25 16:06:12,654 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,655 - Adding user User['sqoop']
2018-01-25 16:06:12,692 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,693 - Adding user User['yarn']
2018-01-25 16:06:12,750 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,751 - Adding user User['hbase']
2018-01-25 16:06:12,800 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,801 - Adding user User['hcat']
2018-01-25 16:06:12,838 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,838 - Adding user User['mapred']
2018-01-25 16:06:12,879 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-01-25 16:06:12,880 - Adding user User['knox']
2018-01-25 16:06:12,908 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-25 16:06:12,949 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-01-25 16:06:12,958 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-01-25 16:06:12,958 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-01-25 16:06:12,959 - Changing owner for /tmp/hbase-hbase from 1025 to hbase
2018-01-25 16:06:12,960 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-25 16:06:12,962 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('chan...
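The root cause in the traceback above is the broken /usr/hdp/current/accumulo-client symlink. A hedged sketch of how one might inspect and repair it; the version string comes from an hdp-select listing in a related post and is an assumption for your cluster:

```bash
# Show where the symlink points; its target directory is evidently missing.
ls -l /usr/hdp/current/accumulo-client

# hdp-select owns these symlinks; re-point the client at an installed build.
# 2.6.3.0-235 is an assumed version; pick one reported by 'hdp-select versions'.
hdp-select versions
hdp-select set accumulo-client 2.6.3.0-235
```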
01-24-2018
04:50 PM
@J Koppole It is accessible and I do have write permissions; I'm installing as root.
01-24-2018
12:24 PM
Good day guys, I'm facing an issue with HDP 2.6.3 installation on CentOS 7. During the process, the error is:

2018-01-24 16:13:24,818 - Repository['HDP-2.6-repo-52'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-52', 'mirror_list': None}
2018-01-24 16:13:24,833 - File['/etc/yum.repos.d/ambari-hdp-52.repo'] {'content': '[HDP-2.6-repo-52]\nname=HDP-2.6-repo-52\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-01-24 16:13:24,835 - Writing File['/etc/yum.repos.d/ambari-hdp-52.repo'] because contents don't match
2018-01-24 16:13:24,836 - Repository['HDP-UTILS-1.1.0.21-repo-52'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-52', 'mirror_list': None}
2018-01-24 16:13:24,843 - File['/etc/yum.repos.d/ambari-hdp-52.repo'] {'content': '[HDP-2.6-repo-52]\nname=HDP-2.6-repo-52\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-52]\nname=HDP-UTILS-1.1.0.21-repo-52\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-01-24 16:13:24,843 - Writing File['/etc/yum.repos.d/ambari-hdp-52.repo'] because contents don't match
2018-01-24 16:13:24,844 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-01-24 16:13:24,968 - Skipping installation of existing package unzip
2018-01-24 16:13:24,969 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-01-24 16:13:24,981 - Skipping installation of existing package curl
2018-01-24 16:13:24,981 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-01-24 16:13:24,993 - Skipping installation of existing package hdp-select
2018-01-24 16:13:25,097 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2018-01-24 16:13:25,133 - call returned (0, '2.6.0.3-8\n2.6.3.0-235')
2018-01-24 16:13:25,551 - Command repositories: HDP-2.6-repo-52, HDP-UTILS-1.1.0.21-repo-52
2018-01-24 16:13:25,552 - Applicable repositories: HDP-2.6-repo-52, HDP-UTILS-1.1.0.21-repo-52
2018-01-24 16:13:25,554 - Looking for matching packages in the following repositories: HDP-2.6-repo-52, HDP-UTILS-1.1.0.21-repo-52
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
Command failed after 1 tries
I've already fiddled with the repos: selected different repositories and versions, ran yum-complete-transaction --cleanup-only, but no luck. Can anyone suggest a solution?
Please Help.
Thanks.
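Since the abort message gives no yum detail, a hedged first step is confirming that the repos in the generated file are actually reachable from the node:

```bash
# Confirm yum can see the HDP repos Ambari wrote out.
yum repolist enabled | grep -i hdp

# Check that the baseurl from /etc/yum.repos.d/ambari-hdp-52.repo serves metadata.
curl -sI http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0/repodata/repomd.xml | head -1

# Clear stale caches left by interrupted transactions, then rebuild them.
yum clean all && yum makecache
```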
- Tags:
- Hadoop Core
- hdp-2.6.0
12-04-2017
10:07 AM
Dears, I've started to work with Sqoop, and one command gives me the following error:

ERROR tool.CreateHiveTableTool: Encountered IOException running create table job: java.io.IOException: Exception thrown in Hive

But this is not the main problem: after this error I saw a huge increase in HDFS space usage. It seems a lot of log files were written somewhere, and I cannot find them to delete them. Could anyone please help me with this issue? Thanks.
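A sketch of how one might hunt down where the space went, using standard HDFS commands (the "usual suspect" paths assume default layouts):

```bash
# Walk HDFS by size (bytes) to find where the space went.
hdfs dfs -du / | sort -n | tail -20

# Usual suspects after a failed Sqoop/Hive job (paths assume default layouts):
hdfs dfs -du -h /tmp                      # job staging and scratch data
hdfs dfs -du -h /user/$(whoami)/.Trash    # deleted-but-retained files
hdfs dfs -du -h /user/$(whoami)           # Sqoop's default import target area
```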
- Tags:
- Sqoop
10-05-2017
12:04 PM
capture.jpg Guys, I have an interesting issue. I had some processes running in the Hive Debug Query console (web) and killed them with yarn application -kill / hadoop job -kill. Now there are no processes listed by yarn application -list / hadoop job -list all, but they are still showing in the Debug web console.

########
[root@master ~]# yarn application -list
17/10/05 16:02:37 INFO client.RMProxy: Connecting to ResourceManager at node1.unibank.lan/10.130.10.57:8050
17/10/05 16:02:38 INFO client.AHSProxy: Connecting to Application History server at node1.unibank.lan/10.130.10.57:10200
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):0
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
[root@master ~]#
########

Also, I wanted to mention that there is no app ID assigned to the process... capture1.jpg Has anyone faced this issue? Please help...
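A hedged check worth running first: yarn application -list only shows SUBMITTED, ACCEPTED, and RUNNING applications by default, so listing every state confirms whether YARN still tracks anything for those entries:

```bash
# -list shows only SUBMITTED/ACCEPTED/RUNNING by default; ask for every state.
yarn application -list -appStates ALL

# The MapReduce-era view of the same question.
mapred job -list all
```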
- Tags:
- Data Processing
- Hive
10-05-2017
11:12 AM
@Geoffrey Shelton Okot Problem resolved by a simple service restart. :) Thanks.
10-05-2017
10:53 AM
@Aditya The AppID is empty; it seems there is an error with the web console. I've killed all running jobs with yarn application -kill and hadoop job -kill, and they still show up (capture.jpg). Please help. :(
10-05-2017
09:17 AM
Dears, after a restart of the server I started receiving an error when I try to open the Hive view:

Issues detected. Service Hive check failed: Cannot open a hive connection with connect string jdbc:hive2://node1.unibank.lan:2181,master.unibank.lan:2181,node2.unibank.lan:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;hive.server2.proxy.user=admin

Any ideas? LOG: Service Hive check failed:
org.apache.ambari.view.hive2.internal.ConnectionException: Cannot open a hive connection with connect string jdbc:hive2://node1.unibank.lan:2181,master.unibank.lan:2181,node2.unibank.lan:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;hive.server2.proxy.user=admin
org.apache.ambari.view.hive2.internal.ConnectionException: Cannot open a hive connection with connect string jdbc:hive2://node1.unibank.lan:2181,master.unibank.lan:2181,node2.unibank.lan:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;hive.server2.proxy.user=admin
at org.apache.ambari.view.hive2.internal.HiveConnectionWrapper.connect(HiveConnectionWrapper.java:89)
at org.apache.ambari.view.hive2.resources.browser.ConnectionService.attemptHiveConnection(ConnectionService.java:102)
at org.apache.ambari.view.hive2.resources.browser.ConnectionService.attemptConnection(ConnectionService.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:287)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authentication.AmbariDelegatingAuthenticationFilter.doFilter(AmbariDelegatingAuthenticationFilter.java:132)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:150)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:134)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.ambari.view.hive2.internal.HiveConnectionWrapper$1.run(HiveConnectionWrapper.java:78)
at org.apache.ambari.view.hive2.internal.HiveConnectionWrapper$1.run(HiveConnectionWrapper.java:75)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.ambari.view.hive2.internal.HiveConnectionWrapper.connect(HiveConnectionWrapper.java:75)
... 99 more
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper
at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:96)
at org.apache.hive.jdbc.Utils.configureConnParams(Utils.java:514)
at org.apache.hive.jdbc.Utils.parseURL(Utils.java:434)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:132)
... 108 more
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Tried all existing HiveServer2 uris from ZooKeeper.
at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:68)
... 111 more
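The root cause at the bottom of the trace ("Unable to read HiveServer2 configs from ZooKeeper... Tried all existing HiveServer2 uris") usually means no live HiveServer2 has registered its znode. A hedged way to check, using the quorum hosts from the connect string and the standard HDP zkCli path:

```bash
# Ask ZooKeeper (any quorum host) which HiveServer2 instances are registered.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server node1.unibank.lan:2181 ls /hiveserver2

# An empty child list (or "no such node") means HiveServer2 never re-registered
# after the restart; restarting HiveServer2 from Ambari is the usual next step.
```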
- Tags:
- error
10-03-2017
10:05 AM
Also, the Application ID is empty.
10-03-2017
10:02 AM
Thanks for the answer, but how can I kill it using the Query ID?
09-24-2017
03:25 PM
Ambari 2.6.2. The file was uploaded via FileView, and then when I click "open", this happens: capture.jpg.
09-24-2017
02:44 PM
Dears, while browsing files recently uploaded with Russian characters in the Ambari Files view ($IP:8080/#/main/view/FILES/auto_files_instance), I see only "?????????????????". The Unix locale ru_RU.UTF-8 is applied. Are there any other places where I have to change the locale to see Russian properly? Thanks for the help.
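A hedged sanity check: first confirm the bytes in HDFS are really UTF-8 and the shell locale is what it claims, so the problem can be pinned on the view layer rather than the files themselves:

```bash
# Confirm the active locale on the Ambari server host.
locale | grep -E 'LANG|LC_ALL'

# Pull a sample file back out of HDFS and inspect its bytes; if they are valid
# UTF-8 here, the mangling happens in the view layer, not in storage.
# /tmp/russian.txt is a hypothetical uploaded file path.
hdfs dfs -cat /tmp/russian.txt | file -
hdfs dfs -cat /tmp/russian.txt | hexdump -C | head

# If the bytes are fine, one commonly suggested fix (an assumption, verify for
# your Ambari version) is adding -Dfile.encoding=UTF-8 to AMBARI_JVM_ARGS in
# /etc/ambari-server/conf/ambari-env.sh and restarting ambari-server.
```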