Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2438 | 04-27-2020 03:48 AM |
| | 4870 | 04-26-2020 06:18 PM |
| | 3973 | 04-26-2020 06:05 PM |
| | 3212 | 04-13-2020 08:53 PM |
| | 4907 | 03-31-2020 02:10 AM |
03-28-2017
03:17 AM
1 Kudo
@Elvis Zhang You can stop the Ambari UI from asking for the kadmin password by storing the kadmin credential in the Ambari credential store: https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html

curl -H "X-Requested-By:ambari" -u admin:admin -X PUT -d '{ "Credential" : { "principal" : "admin/admin@EXAMPLE.COM", "key" : "pwd$hwx", "type" : "persisted" } }' http://ambari.example.com:8080/api/v1/clusters/c1/credentials/kdc.admin.credential

The above requires a keystore, which you set up by running "ambari-server setup-security" and choosing option "[2] Encrypt passwords stored in ambari.properties file." With this in place you avoid entering the kadmin credentials every time in Ambari.
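The same request can be sketched as a small script with the values pulled out into variables, so it is easier to adapt. The hostname, realm, and cluster name "c1" are the example values from the post, not defaults; this is a dry run that prints the curl command instead of sending it.

```shell
#!/bin/sh
# Sketch of the credential PUT from the post. All values below are the
# example values from the post; replace them for your own cluster.
AMBARI_URL="http://ambari.example.com:8080"
CLUSTER="c1"
KDC_PRINCIPAL="admin/admin@EXAMPLE.COM"
KDC_PASSWORD='pwd$hwx'

# Build the JSON body for the persisted KDC admin credential.
PAYLOAD=$(printf '{ "Credential" : { "principal" : "%s", "key" : "%s", "type" : "persisted" } }' \
  "$KDC_PRINCIPAL" "$KDC_PASSWORD")

# Dry run: print the request instead of sending it. Remove "echo" to execute.
echo curl -H "X-Requested-By:ambari" -u admin:admin -X PUT \
  -d "$PAYLOAD" \
  "$AMBARI_URL/api/v1/clusters/$CLUSTER/credentials/kdc.admin.credential"
```

Keeping the password in a variable (or reading it from a protected file) also keeps it out of your shell history compared with typing the full curl line.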
03-27-2017
04:43 AM
Ok, so I updated the /etc/krb5.conf file to match the one on the KDC host and it seems to work now. I didn't see this step anywhere in the documentation and thought the wizard would install the clients. Thanks a lot!
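A minimal sketch of checking for that mismatch before (or after) copying krb5.conf from the KDC host. The realm EXAMPLE.COM and the path are illustrative assumptions, not values from this thread:

```shell
# Verify that a client's /etc/krb5.conf declares the expected default realm.
# The Kerberos wizard does not copy this file to clients for you.
KRB5_CONF="${KRB5_CONF:-/etc/krb5.conf}"
EXPECTED_REALM="${EXPECTED_REALM:-EXAMPLE.COM}"

check_realm() {
  # $1: path to krb5.conf, $2: expected realm.
  # Pull default_realm out of the [libdefaults] section.
  actual=$(awk -F'=' '/default_realm/ {gsub(/[ \t]/, "", $2); print $2; exit}' "$1")
  if [ "$actual" = "$2" ]; then
    echo "OK: default_realm is $actual"
  else
    echo "MISMATCH: default_realm is '$actual', expected '$2'"
  fi
}
```

Run it with `check_realm "$KRB5_CONF" "$EXPECTED_REALM"` on each client; on a MISMATCH, copy /etc/krb5.conf over from the KDC host.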
03-26-2017
04:36 AM
I did the following and my issue has been resolved.

1) Download the "Requires: perl(DBI)" package. We are using CentOS 6.8, so download the RPM perl-DBI-1.609-4.el6.x86_64.rpm from http://rpmfind.net/linux/rpm2html/search.php?query=perl-DBI

2) Install the perl-DBI RPM on the Linux server:
[root@centos2 ~]# rpm -ivh perl-DBI-1.609-4.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:perl-DBI ########################################### [100%]
[root@centos2 ~]#

3) Install the MySQL server:
[root@hostname ~]# sudo yum install mysql-community-server
Loaded plugins: fastestmirror
Setting up Install Process
Repository HDP-UTILS-1.1.0.19 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package mysql-community-server.x86_64 0:5.6.35-2.el6 will be installed
--> Processing Dependency: mysql-community-client(x86-64) >= 5.6.10 for package: mysql-community-server-5.6.35-2.el6.x86_64
--> Running transaction check
---> Package mysql-community-client.x86_64 0:5.6.35-2.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================
Installing:
mysql-community-server x86_64 5.6.35-2.el6 mysql56-community 54 M
Installing for dependencies:
mysql-community-client x86_64 5.6.35-2.el6 mysql56-community 18 M
Transaction Summary
=========================================================================================================================================
Install 2 Package(s)
Total download size: 73 M
Installed size: 324 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): mysql-community-client-5.6.35-2.el6.x86_64.rpm | 18 MB 00:01
(2/2): mysql-community-server-5.6.35-2.el6.x86_64.rpm | 54 MB 00:04
-----------------------------------------------------------------------------------------------------------------------------------------
Total 11 MB/s | 73 MB 00:06
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
Installing : mysql-community-client-5.6.35-2.el6.x86_64 1/2
Installing : mysql-community-server-5.6.35-2.el6.x86_64 2/2
Verifying : mysql-community-client-5.6.35-2.el6.x86_64 1/2
Verifying : mysql-community-server-5.6.35-2.el6.x86_64 2/2
Installed:
mysql-community-server.x86_64 0:5.6.35-2.el6
Dependency Installed:
mysql-community-client.x86_64 0:5.6.35-2.el6
Complete!
[root@hostname ~]#

4) Verify the packages are installed:
[root@hostname ~]# rpm -qa |grep mysql
mysql-community-common-5.6.35-2.el6.x86_64
mysql57-community-release-el6-9.noarch
mysql-community-libs-compat-5.6.35-2.el6.x86_64
mysql-community-libs-5.6.35-2.el6.x86_64
mysql-community-client-5.6.35-2.el6.x86_64
mysql-community-server-5.6.35-2.el6.x86_64
[root@hostname ~]#
[root@hostname ~]# service mysqld status
mysqld is stopped
[root@hostname ~]#
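The verification step above can be sketched as a small function that checks the `rpm -qa` output for both the client and server packages before starting mysqld. The package names match the output above; the function itself is an illustrative helper, not part of the original post:

```shell
# Check that both mysql-community packages from the install above are present.
required_pkgs_present() {
  # $1: output of `rpm -qa` (one package per line)
  echo "$1" | grep -q '^mysql-community-server' &&
  echo "$1" | grep -q '^mysql-community-client'
}

if required_pkgs_present "$(rpm -qa 2>/dev/null)"; then
  echo "mysql client and server installed"
else
  echo "missing mysql packages"
fi
```

Once both packages are present, start the service with `service mysqld start` (it is stopped right after install, as the status output above shows).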
10-15-2018
10:10 PM
@Ivan Georgiev Thank you for sharing the parameter.
03-31-2017
09:08 AM
Thank you for the video link. It was very helpful.
03-23-2017
04:37 PM
@Kent Brodie
Great to hear that your issue is resolved. It would be wonderful if you could mark the answer in this thread as "Accepted" so that it will be useful for the community.
03-22-2017
03:45 PM
@saravanan gopalsamy
Sure, the database queries should definitely work: they leave no footprint in the database for the service you want to delete, and they completely clean all entries related to that service.
You will need to restart the Ambari Server after making the database changes.
03-24-2017
11:19 AM
Yes, I'm afraid that fast upload can overload the buffers in Hadoop 2.5, as it uses JVM heap to store blocks while it uploads them. The bigger the mismatch between the rate data is generated (i.e. how fast things can be read) and the upload bandwidth, the more heap you need. On a long-haul upload you usually have limited bandwidth, and the more distcp workers there are, the more that bandwidth is divided between them, and the bigger the mismatch.

In Hadoop 2.5 you can get away with tuning the fast uploader to use less heap. It's tricky enough to configure that in the HDP 2.5 docs we chose not to mention the fs.s3a.fast.upload option at all. It was just too confusing, and we couldn't come up with good defaults that would work reliably. Which is why I rewrote it completely for HDP 2.6. The HDP 2.6/Apache Hadoop 2.8 block output stream (already in HDCloud) can buffer on disk (the default) or via byte buffers, as well as on heap, and tries to do better queueing of writes.

For HDP 2.5, the tuning options are covered in the Hadoop 2.7 docs. Essentially, a lower value of fs.s3a.threads.core and fs.s3a.threads.max keeps the number of buffered blocks down, while changing fs.s3a.multipart.size to something like 10485760 (10 MB) and setting fs.s3a.multipart.threshold to the same value reduces the buffer size before the uploads begin. As I warned, you can end up spending time tuning, because the heap consumed increases with the threads.max value and decreases with the multipart threshold and size values. And over a remote connection, the more workers you have in the distcp operation (controlled by the -m option), the less bandwidth each one gets, so again: more heap overflows. You will invariably find out on the big uploads that there are limits.

As a result, in HDP 2.5 I'd recommend avoiding fast upload except in one special case: you have a very high-speed connection to an S3 server in the same infrastructure, and you use it for code generating data rather than for big distcp operations, which can read data as fast as it can be streamed off multiple disks.
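The tuning described above can be written as a core-site.xml fragment. The property names are the s3a options named in the post; the thread-count values are illustrative placeholders, not tested recommendations:

```xml
<!-- Sketch of HDP 2.5 / Hadoop 2.7 s3a fast-upload tuning from the post.
     Thread-count values are illustrative, not recommendations. -->
<property>
  <name>fs.s3a.fast.upload</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.threads.core</name>
  <value>5</value>   <!-- illustrative: keep low to limit buffered blocks -->
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>10</value>  <!-- illustrative: heap use grows with this value -->
</property>
<property>
  <name>fs.s3a.multipart.size</name>
  <value>10485760</value>      <!-- 10 MB, as suggested in the post -->
</property>
<property>
  <name>fs.s3a.multipart.threshold</name>
  <value>10485760</value>      <!-- same value, per the post -->
</property>
```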
03-21-2017
09:52 AM
Yes, I checked "/etc/krb5.conf" again and nothing is wrong there. From the error text, it seems the *.keytab files still use the old "EXAMPLE.COM" realm. How do I update or rebuild the keytabs?