Member since: 04-22-2016
Posts: 931
Kudos Received: 46
Solutions: 26
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1860 | 10-11-2018 01:38 AM |
| | 2220 | 09-26-2018 02:24 AM |
| | 2252 | 06-29-2018 02:35 PM |
| | 2931 | 06-29-2018 02:34 PM |
| | 6104 | 06-20-2018 04:30 PM |
06-27-2017
06:44 PM
Thanks Pardeep. So should we keep the HDFS disk separate from the OS disk, i.e. the disk where the Linux operating system is installed?
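As a quick check, a minimal sketch assuming the HDFS data root is /u01 as in the follow-up question below: confirm that / and /u01 sit on different block devices.
lsblk
df -h / /u01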
06-27-2017
06:26 PM
Does the location of these two parameters determine where HDFS uses its space?
NameNode directories=/u01/hadoop/hdfs/namenode
DataNode directories=/u01/hadoop1/hdfs/data
Also, if I want to add more space to the cluster by adding another disk, how do I add it to the cluster?
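For illustration only, a sketch of what adding a second disk might look like, assuming the new disk is mounted at a hypothetical /u02: mount the disk, append its path to the comma-separated DataNode directories value (this maps to dfs.datanode.data.dir in hdfs-site.xml), and restart the DataNodes so HDFS starts placing blocks there.
DataNode directories=/u01/hadoop1/hdfs/data,/u02/hadoop1/hdfs/data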
Labels:
- Apache Hadoop
06-21-2017
03:49 PM
I managed to bring up my cluster with minimal services and am adding components now. When adding HBase I see this message on the console (please see the pic below). How can I find which services are not running, and on which hosts?
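One way to check from the command line, a sketch using the Ambari REST API, assuming default admin credentials, the Ambari server seen in the later logs (hadoop1.tolls.dot.state.fl.us:8080), and a hypothetical cluster name "mycluster":
curl -u admin:admin 'http://hadoop1.tolls.dot.state.fl.us:8080/api/v1/clusters/mycluster/services?fields=ServiceInfo/state'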
06-20-2017
09:50 PM
Ambari was picking up a leftover directory from 2.4.x in the /usr/hdp folder. After I deleted it and reinstalled ambari-server, the installation is moving forward now.
cd /usr/hdp
rm -rf 2.4.3.0-227
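After removing the directory, a quick sanity check that only the current stack remains, a sketch assuming hdp-select is already installed (it reads /usr/hdp):
hdp-select versions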
06-20-2017
08:05 PM
Is anyone seeing this post? I need help fixing this issue.
06-20-2017
07:05 PM
I also checked the internet connection: it is fine and I can download the HDP repo file, yet Accumulo still fails to install.
[root@hadoop5 yum.repos.d]# wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
--2017-06-20 15:03:35-- http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
Resolving dotatofwproxy.tolls.dot.state.fl.us... 10.100.30.27
Connecting to dotatofwproxy.tolls.dot.state.fl.us|10.100.30.27|:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: 574 [binary/octet-stream]
Saving to: “hdp.repo”
100%[=======================================================================================================>] 574 --.-K/s in 0s
2017-06-20 15:03:35 (138 MB/s) - “hdp.repo” saved [574/574]
[root@hadoop5 yum.repos.d]# pwd
/etc/yum.repos.d
[root@hadoop5 yum.repos.d]# ls
ambari.repo CentOS-Debuginfo.repo CentOS-Media.repo epel.repo hdp.repo HDP-UTILS.repo
CentOS-Base.repo CentOS-fasttrack.repo CentOS-Vault.repo epel-testing.repo HDP.repo
[root@hadoop5 yum.repos.d]# more hdp.repo
#VERSION_NUMBER=2.5.3.0-37
[HDP-2.5.3.0]
name=HDP Version - HDP-2.5.3.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[root@hadoop5 yum.repos.d]#
[root@hadoop5 yum.repos.d]# more HDP.repo
[HDP-2.5]
name=HDP-2.5
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
path=/
enabled=1
gpgcheck=0
[root@hadoop5 yum.repos.d]#
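To confirm what the HDP-2.5 repo actually serves, independent of the other configured repos, a sketch using the repo id from the files above:
yum clean metadata
yum --disablerepo='*' --enablerepo='HDP-2.5' list available | grep accumulo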
06-20-2017
06:50 PM
It is trying to install an older version of Accumulo and failing.
stderr: /var/lib/ambari-agent/data/errors-113.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 66, in <module>
AccumuloClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 37, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 693, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-113.txt
2017-06-20 14:41:24,319 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-06-20 14:41:24,322 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-06-20 14:41:24,323 - Group['livy'] {}
2017-06-20 14:41:24,324 - Group['spark'] {}
2017-06-20 14:41:24,324 - Group['hadoop'] {}
2017-06-20 14:41:24,325 - Group['users'] {}
2017-06-20 14:41:24,325 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,325 - Adding user User['hive']
2017-06-20 14:41:24,348 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,349 - Adding user User['storm']
2017-06-20 14:41:24,367 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,367 - Adding user User['zookeeper']
2017-06-20 14:41:24,392 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,393 - Adding user User['oozie']
2017-06-20 14:41:24,411 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,412 - Adding user User['ams']
2017-06-20 14:41:24,431 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,431 - Adding user User['falcon']
2017-06-20 14:41:24,449 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,449 - Adding user User['tez']
2017-06-20 14:41:24,466 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,466 - Adding user User['accumulo']
2017-06-20 14:41:24,484 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,485 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,485 - Adding user User['spark']
2017-06-20 14:41:24,511 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,512 - Adding user User['ambari-qa']
2017-06-20 14:41:24,537 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,537 - Adding user User['flume']
2017-06-20 14:41:24,563 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,563 - Adding user User['kafka']
2017-06-20 14:41:24,591 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,591 - Adding user User['hdfs']
2017-06-20 14:41:24,617 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,617 - Adding user User['sqoop']
2017-06-20 14:41:24,646 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,647 - Adding user User['yarn']
2017-06-20 14:41:24,673 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,673 - Adding user User['mapred']
2017-06-20 14:41:24,701 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,702 - Adding user User['hbase']
2017-06-20 14:41:24,729 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,730 - Adding user User['hcat']
2017-06-20 14:41:24,757 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-20 14:41:24,759 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-20 14:41:24,762 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-20 14:41:24,763 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-20 14:41:24,763 - Changing owner for /tmp/hbase-hbase from 1024 to hbase
2017-06-20 14:41:24,764 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-20 14:41:24,765 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-20 14:41:24,768 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-20 14:41:24,768 - Group['hdfs'] {}
2017-06-20 14:41:24,769 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-06-20 14:41:24,769 - Modifying user hdfs
2017-06-20 14:41:24,792 - FS Type:
2017-06-20 14:41:24,792 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-20 14:41:24,792 - Creating directory Directory['/etc/hadoop'] since it doesn't exist.
2017-06-20 14:41:24,793 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-20 14:41:24,793 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2017-06-20 14:41:24,794 - Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] {'create_parents': True}
2017-06-20 14:41:24,794 - Creating directory Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] since it doesn't exist.
2017-06-20 14:41:24,794 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'content': DownloadSource('http://hadoop1.tolls.dot.state.fl.us:8080/resources//jdk-8u112-linux-x64.tar.gz'), 'not_if': 'test -f /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'}
2017-06-20 14:41:24,797 - Downloading the file from http://hadoop1.tolls.dot.state.fl.us:8080/resources//jdk-8u112-linux-x64.tar.gz
2017-06-20 14:41:26,206 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'mode': 0755}
2017-06-20 14:41:26,207 - Changing permission for /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz from 644 to 755
2017-06-20 14:41:26,207 - Directory['/usr/jdk64'] {}
2017-06-20 14:41:26,207 - Execute[('chmod', 'a+x', '/usr/jdk64')] {'sudo': True}
2017-06-20 14:41:26,212 - Execute['cd /var/lib/ambari-agent/tmp/jdk_tmp_JalOZR && tar -xf /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz && ambari-sudo.sh cp -rp /var/lib/ambari-agent/tmp/jdk_tmp_JalOZR/* /usr/jdk64'] {}
2017-06-20 14:41:30,155 - Directory['/var/lib/ambari-agent/tmp/jdk_tmp_JalOZR'] {'action': ['delete']}
2017-06-20 14:41:30,155 - Removing directory Directory['/var/lib/ambari-agent/tmp/jdk_tmp_JalOZR'] and all its content
2017-06-20 14:41:30,244 - File['/usr/jdk64/jdk1.8.0_112/bin/java'] {'mode': 0755, 'cd_access': 'a'}
2017-06-20 14:41:30,244 - Execute[('chmod', '-R', '755', '/usr/jdk64/jdk1.8.0_112')] {'sudo': True}
2017-06-20 14:41:30,270 - Initializing 2 repositories
2017-06-20 14:41:30,270 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-20 14:41:30,278 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2017-06-20 14:41:30,278 - Writing File['/etc/yum.repos.d/HDP.repo'] because it doesn't exist
2017-06-20 14:41:30,279 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-20 14:41:30,283 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2017-06-20 14:41:30,284 - Writing File['/etc/yum.repos.d/HDP-UTILS.repo'] because it doesn't exist
2017-06-20 14:41:30,284 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,361 - Skipping installation of existing package unzip
2017-06-20 14:41:30,362 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,373 - Skipping installation of existing package curl
2017-06-20 14:41:30,373 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,384 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2017-06-20 14:41:35,089 - Package['accumulo_2_4_3_0_227'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:35,167 - Installing package accumulo_2_4_3_0_227 ('/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227')
2017-06-20 14:41:35,598 - Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227' returned 1. Error: Nothing to do
2017-06-20 14:41:35,599 - Failed to install package accumulo_2_4_3_0_227. Executing '/usr/bin/yum clean metadata'
2017-06-20 14:41:35,775 - Retrying to install package accumulo_2_4_3_0_227 after 30 seconds
Command failed after 1 tries
[root@hadoop5 ~]# yum list | grep accumulo
accumulo.noarch 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo-conf-standalone.noarch 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo-source.noarch 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo-test.noarch 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo_2_5_3_0_37.x86_64 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo_2_5_3_0_37-conf-standalone.x86_64 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo_2_5_3_0_37-source.x86_64 1.7.0.2.5.3.0-37.el6 HDP-2.5
accumulo_2_5_3_0_37-test.x86_64 1.7.0.2.5.3.0-37.el6 HDP-2.5
[root@hadoop5 ~]# yum repolist
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: ftp.osuosl.org
* epel: archive.linux.duke.edu
* extras: centos.sonn.com
* updates: bay.uchicago.edu
repo id repo name status
HDP-2.5 HDP-2.5 200
HDP-UTILS-1.1.0.21 HDP-UTILS-1.1.0.21 56
ambari-2.5.1.0 ambari Version - ambari-2.5.1.0 12
base CentOS-6 - Base 6,706
epel Extra Packages for Enterprise Linux 6 - x86_64 12,344
extras CentOS-6 - Extras 45
updates CentOS-6 - Updates 379
repolist: 19,742
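The package name Ambari requests encodes the stack build (accumulo_2_4_3_0_227 is an HDP 2.4.3.0-227 package), while the yum listing above only carries 2_5_3_0_37 builds, so this install can never succeed against the configured repos. A sketch to make the mismatch explicit:
yum list available --showduplicates 'accumulo_2_*'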
06-20-2017
06:36 PM
This is something I found on here; it fixed the issue:
mysql> grant all privileges on *.* to 'hive'@'hadoop2.tolls.dot.state.fl.us' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
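To double-check that the grant is in place before re-running the Ambari host check, one can query it from the same session:
mysql> SHOW GRANTS FOR 'hive'@'hadoop2.tolls.dot.state.fl.us';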
06-20-2017
06:18 PM
Now that I have resolved the wget proxy issues, I am continuing the installation with the public HDP repositories, but now I am having trouble connecting as the hive user in MySQL. I can connect as this user fine from the command prompt; please see below.
stderr:
2017-06-20 14:12:11,925 - Check db_connection_check was unsuccessful. Exit code: 1. Message: ERROR: Unable to connect to the DB. Please check DB connection properties.
java.sql.SQLException: Access denied for user 'hive'@'hadoop2.tolls.dot.state.fl.us' (using password: YES)
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 531, in <module>
CheckHost().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 206, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: ERROR: Unable to connect to the DB. Please check DB connection properties.
java.sql.SQLException: Access denied for user 'hive'@'hadoop2.tolls.dot.state.fl.us' (using password: YES)
stdout:
2017-06-20 14:12:11,568 - Host checks started.
2017-06-20 14:12:11,568 - Check execute list: db_connection_check
2017-06-20 14:12:11,568 - DB connection check started.
WARNING: File /var/lib/ambari-agent/cache/DBConnectionVerification.jar already exists, assuming it was downloaded before
WARNING: File /var/lib/ambari-agent/cache/mysql-connector-java.jar already exists, assuming it was downloaded before
2017-06-20 14:12:11,569 - call['/usr/jdk64/jdk1.8.0_112/bin/java -cp /var/lib/ambari-agent/cache/DBConnectionVerification.jar:/var/lib/ambari-agent/cache/mysql-connector-java.jar -Djava.library.path=/var/lib/ambari-agent/cache org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://hadoop2.tolls.dot.state.fl.us/hive" "hive" [PROTECTED] com.mysql.jdbc.Driver'] {}
2017-06-20 14:12:11,924 - call returned (1, "ERROR: Unable to connect to the DB. Please check DB connection properties.\njava.sql.SQLException: Access denied for user 'hive'@'hadoop2.tolls.dot.state.fl.us' (using password: YES)")
2017-06-20 14:12:11,924 - DB connection check completed.
2017-06-20 14:12:11,925 - Host checks completed.
2017-06-20 14:12:11,925 - Check db_connection_check was unsuccessful. Exit code: 1. Message: ERROR: Unable to connect to the DB. Please check DB connection properties.
java.sql.SQLException: Access denied for user 'hive'@'hadoop2.tolls.dot.state.fl.us' (using password: YES)
Command failed after 1 tries
[root@hadoop2 ~]# mysql -u hive -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| hive |
| mysql |
| oozie |
| ranger |
| ranger_audit |
+--------------------+
6 rows in set (0.00 sec)
mysql> use mysql;
Database changed
mysql> select user,host from user;
+-------------+-------------------------------+
| user | host |
+-------------+-------------------------------+
| hive | % |
| oozie | % |
| rangeradmin | % |
| rangerdba | % |
| root | 127.0.0.1 |
| root | hadoop2 |
| hive | hadoop2.tolls.dot.state.fl.us |
| oozie | hadoop2.tolls.dot.state.fl.us |
| root | hadoop2.tolls.dot.state.fl.us |
| hive | localhost |
| oozie | localhost |
| rangeradmin | localhost |
| rangerdba | localhost |
| root | localhost |
+-------------+-------------------------------+
14 rows in set (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop2.tolls.dot.state.fl.us';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges
-> ;
Query OK, 0 rows affected (0.00 sec)
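Note that the two GRANT statements above omit IDENTIFIED BY, so they change privileges but leave the stored password untouched, which is why the access-denied error can persist; the grant with IDENTIFIED BY 'hive' in the 06:36 PM post above is what aligned the password with what Ambari sends. The equivalent explicit form in MySQL 5.1 syntax would be (password shown is the one from that post):
mysql> SET PASSWORD FOR 'hive'@'hadoop2.tolls.dot.state.fl.us' = PASSWORD('hive');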
06-20-2017
04:13 PM
Found the solution. The information on the web about no_proxy settings for CentOS is incorrect. What worked for me was removing the ~/.wgetrc file and putting the following file in place. The issue is that wget does not pick up the no_proxy settings from the .wgetrc file, but if I define them at the system level it picks them up.
/etc/profile.d/proxy.sh:
export http_proxy="http://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export https_proxy="https://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export ftp_proxy="ftp://dotatofwproxy.tolls.dot.state.fl.us:8080/"
export no_proxy=".tolls.dot.state.fl.us,hadoop1,hadoop2,hadoop3,hadoop4,hadoop5"
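A quick way to confirm the new settings are picked up in the current shell, a sketch reusing the wget test from the earlier post:
source /etc/profile.d/proxy.sh
echo $no_proxy
wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo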