Member since: 01-19-2017
Posts: 3627
Kudos Received: 608
Solutions: 361

My Accepted Solutions
Title | Views | Posted
---|---|---
| 230 | 08-02-2024 08:15 AM
| 3433 | 04-06-2023 12:49 PM
| 777 | 10-26-2022 12:35 PM
| 1519 | 09-27-2022 12:49 PM
| 1788 | 05-27-2022 12:02 AM
01-14-2016
03:31 PM
@Artem @neeraj Thanks guys for your responses. As you realise, HDP creates a number of service users, and they are a bit difficult to manage across the cluster.

1. I want a single user, e.g. tom, to own all the hive, hdfs, pig, etc. services, so I can ssh to any server and be effective quickly without su or sudo. Which file should I edit to achieve this?

2. I have done a lot of Linux installs, and the default HDP filesystem layout doesn't please me at all. I want to install HDP outside the /var, /usr, and /etc directories, so that if anything goes wrong I can simply delete all files in that partition and relaunch after some minor cleanup. My reasoning: I would like to allocate a /u01 partition of about 300 GB on each of the 4 servers in the cluster, giving 1.2 TB of raw HDFS capacity for data.
01-14-2016
08:37 AM
failed-cluster-install.pdf

Hi all, I have test-driven the sandbox for a while and decided to take my knowledge to the next level. I got 4 refurbished PowerEdge servers: 2 Dell 2850s and 2 Dell 2950s. The Ambari server preparation and host discovery were successful (see attached PDF). In the Assign Masters step I realised the Ambari server was overloaded, so I reassigned some components to other servers. I was lost when it came to Assign Slaves and Clients: only one of the servers had been checked, so I decided to go with the defaults. The install, start and smoke test failed (see attached PDF). I don't intend to create multiple users across the cluster; how do I achieve this, and which file should I edit prior to the launch? Below is an extract of the user-creation log:

2016-01-14 00:25:23,320 - Group['hadoop'] {'ignore_failures': False}
2016-01-14 00:25:23,320 - Group['users'] {'ignore_failures': False}
2016-01-14 00:25:23,320 - Group['knox'] {'ignore_failures': False}
2016-01-14 00:25:23,320 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-01-14 00:25:23,321 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2016-01-14 00:25:23,322 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}

Any advice is welcome.
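(For reference: the users in that log come from Ambari's cluster-env settings. A minimal sketch of turning off automatic user/group creation with Ambari's bundled configs.sh script, where the cluster name c1 and the admin/admin credentials are placeholders, assuming Ambari 2.x:

# Tell Ambari to skip creating the service users/groups during install;
# the accounts must then already exist, or be mapped to your single user
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set localhost c1 cluster-env ignore_groupsusers_create true
)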
01-07-2016
06:42 AM
Just reset the root password. I had to reset the MySQL password myself; to do so, follow the process below.

Start the DB if it's down:

[root@sandbox]# service mysqld start
Starting mysqld: [ OK ]

Reset the MySQL password:

[root@sandbox]# mysqladmin -u root -h sandbox.hortonworks.com password 'newpassword'
[root@sandbox]# mysqladmin -u root password 'newpassword'

Try out the new password; it should work:

[root@sandbox]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select user();

After validating the successful logon, retry.
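Note that mysqladmin can only change the password if you can still authenticate as root. If the current root password is unknown, a common recovery path is to restart mysqld with grant checks disabled and update the grant tables directly. A minimal sketch, assuming the sandbox's MySQL 5.1 (the pre-5.7 mysql.user schema), with 'newpassword' as a placeholder:

# Stop the server, then restart it without privilege checks
service mysqld stop
mysqld_safe --skip-grant-tables &

# Reset the password directly in the grant tables
mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpassword') WHERE User='root'; FLUSH PRIVILEGES;"

# Restart normally so privilege checks apply again
service mysqld restart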
01-02-2016
06:05 PM
1 Kudo
Hi Ramesh,

First check your mysql.sock: verify that my.cnf (usually in the /etc/mysql/ folder) is correctly configured with

socket=/var/lib/mysql/mysql.sock

Then check whether MySQL is running:

# mysqladmin -u root -p status

Try changing the permissions on the mysql folder. If you are working locally, you can try:

# chmod -R 755 /var/lib/mysql/

Start MySQL:

# service mysqld start
(or /etc/init.d/mysqld start)

Or, if there is no MySQL, you need to install it first:

# yum install mysql mysql-server

Enable the MySQL service:

# /sbin/chkconfig mysqld on

Start the MySQL server:

# /sbin/service mysqld start

Afterwards, set the MySQL root password:

# mysqladmin -u root password 'new-password'
(with the quotes)
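To run those checks in one pass, here is a minimal sketch that strings them together; the paths assume a default RHEL/CentOS layout, so adjust the my.cnf location if yours differs:

#!/bin/sh
# Quick MySQL health check: socket config, socket file, daemon status

# 1. Confirm the socket path configured in my.cnf
grep -r '^socket' /etc/my.cnf /etc/mysql/ 2>/dev/null

# 2. Is the socket file actually there?
ls -l /var/lib/mysql/mysql.sock 2>/dev/null || echo "socket missing - mysqld probably not running"

# 3. Ask the daemon for its status (prompts for the root password); start it if that fails
mysqladmin -u root -p status || service mysqld start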
01-01-2016
10:35 AM
1 Kudo
I think it's your internet configuration. Your Sandbox can't access the internet, so revise your network config.
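A few quick checks from inside the sandbox will usually confirm this; a sketch, using 8.8.8.8 and a public repo hostname as test targets (any reachable public host works):

# Reachable by IP? (rules out routing/NAT problems)
ping -c 3 8.8.8.8

# Name resolution working? (rules out DNS problems)
cat /etc/resolv.conf
ping -c 3 public-repo-1.hortonworks.com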
12-29-2015
02:32 PM
Try these 2 methods; they should help: https://goo.gl/g8kkh8 http://goo.gl/tJOLlh
12-22-2015
07:47 PM
1 Kudo
@ergo al2011 Did you import the generated id_rsa.pub that you copied into authorized_keys? Can you tell me precisely at which step you failed?
12-22-2015
05:04 PM
Log on as root on your sandbox and follow the procedure below. Just press "Enter" when prompted for a passphrase.

[root@hadoop01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
6f:40:61:c1:89:a5:f0:f4:8d:6b:01:98:fc:01:fb:d4 root@hadoop01
The key's randomart image is: ...

Two files are created in the folder /root/.ssh:

[root@hadoop01 ~]# cd /root/.ssh
id_rsa
id_rsa.pub

Append the contents of id_rsa.pub to authorized_keys:

[hdp@hadoop01 .ssh]$ cat id_rsa.pub >> authorized_keys

Then copy the private key, id_rsa, to your local desktop/laptop (C:\ drive or wherever). PuTTY cannot load an OpenSSH private key directly, so open it in PuTTYgen and save it as a .ppk file; then in PuTTY, under Connection >> SSH >> Auth, browse to the saved .ppk and select it. Remember to choose the right algorithm, either SSH-1 or SSH-2 RSA.

Now you can log on without any issues. Please let me know if you are successful, and accept the answer.
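Between cluster nodes themselves, ssh-copy-id does the append and the permission fixes in one step. A sketch, assuming the key pair above already exists and hadoop01 is reachable:

# Push the public key into the remote node's authorized_keys
ssh-copy-id -i /root/.ssh/id_rsa.pub root@hadoop01

# If you appended by hand instead, make sure sshd accepts the files:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys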
12-15-2015
08:19 PM
ssh works only from the console, e.g.

user@host$ ssh root@hadoop03

That works when you have an FQDN entry in your /etc/hosts like the one below:

192.xxx.xxx.xx hadoop03.something.com hadoop03

The virtual machine is started from the VM interface. To access the login page you need the HTTP server to already be running; otherwise, as the root user, run:

[root@hadoop03]# service httpd start

Once you have successfully imported the Sandbox image, most of the Hadoop components can be started from it; the startup scripts are in $HADOOP_INSTALL_HOME/hadoop/conf. Make sure your user has execute privileges on the .sh scripts in that directory:

start-dfs.sh, start-yarn.sh
stop-dfs.sh, stop-yarn.sh

As the root user, run:

root@hostname# ifconfig

This should give you the IP address, on either eth0 or eth1 depending on your NIC. Then use that IP with port 8080, e.g. 192.168.1.1:8080, which should give you access to the login web page.
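Putting that together, a minimal sketch of the startup sequence; $HADOOP_INSTALL_HOME, the script location, and the port are taken from the notes above:

# Make sure the startup scripts are executable
chmod +x $HADOOP_INSTALL_HOME/hadoop/conf/*.sh

# Bring up HDFS and YARN
$HADOOP_INSTALL_HOME/hadoop/conf/start-dfs.sh
$HADOOP_INSTALL_HOME/hadoop/conf/start-yarn.sh

# Find the VM's IP, then browse to http://<that-ip>:8080
ifconfig | grep 'inet addr'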