Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 855 | 06-04-2025 11:36 PM |
| | 1430 | 03-23-2025 05:23 AM |
| | 718 | 03-17-2025 10:18 AM |
| | 2577 | 03-05-2025 01:34 PM |
| | 1691 | 03-03-2025 01:09 PM |
03-15-2018
09:35 PM
@hadoop hdfs Can you run the command below and then rerun the report to see whether the NameNode's internal view was refreshed? Note that dfsadmin subcommands are admin operations, so run them with caution.
$ hdfs dfsadmin -refreshNodes
Please let me know!
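For reference, a minimal sketch of the sequence (assuming you run it as the hdfs superuser on a host with the HDFS client configured):
$ hdfs dfsadmin -refreshNodes
$ hdfs dfsadmin -report
The report summarises live, dead and decommissioning DataNodes, so you can confirm the NameNode's view was actually refreshed.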
03-15-2018
09:12 PM
@Aymen Rahal To connect to the CLI: if you have the root password, then from the MySQL host run the following (note that -p is attached to the password with no space; here the password is assumed to be welcome1):
# mysql -u root -pwelcome1
If you don't have the MySQL root password, you can change it as below; in this example the new password is 'welcome1':
# mysql -u root
UPDATE mysql.user SET Password=PASSWORD('welcome1') WHERE User='root';
flush privileges;
Otherwise, you have the following clients to choose from:
MySQL Workbench (Mac, Windows, Linux), free, open source
phpMyAdmin (web app), free, open source
Toad for MySQL (Windows), free
MySQL-Front (Windows), free, open source
Neor Profile SQL (Mac, Windows, Linux), free
You should also implement the secure MySQL/MariaDB installation, see link.
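As a hedged follow-up to the hardening note above, mysql_secure_installation is the interactive script shipped with MySQL/MariaDB for exactly that purpose; a minimal sketch:
# mysql_secure_installation
# mysql -u root -p -e "SELECT VERSION();"
The first command walks you through setting the root password, removing anonymous users, disabling remote root login and dropping the test database; the second simply verifies you can still connect with the new password.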
03-15-2018
03:32 PM
@Aymen Rahal What service are you trying to connect to? That looks like the MySQL database port! In case you wanted Ambari, the web UI is at
http://ambari-server:8080
where ambari-server is the FQDN of the host running the Ambari server. Please clarify.
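If it helps narrow things down, a hedged check from the shell (assuming ss and curl are available on the host, and that the port in question is MySQL's default 3306):
# ss -ltnp | grep -E ':3306|:8080'
# curl -s -o /dev/null -w "%{http_code}\n" http://ambari-server:8080
The first line shows which process owns each listening port; the second should print 200 (or a redirect code) if the Ambari server web UI is up.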
03-15-2018
02:43 PM
@Mudassar Hussain I am positive that command should and will work without fail if you have successfully created a snapshottable directory. It is a subcommand of hdfs; can you simply run hdfs as the hdfs user?
$ hdfs
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
classpath prints the classpath
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
.......
........
snapshotDiff diff two snapshots of a directory or diff the current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
....
Most commands print help when invoked w/o parameters.
Now, once you have confirmed the above, run as below:
# su - hdfs
$ hdfs lsSnapshottableDir
output ..........................
drwxr-xr-x 0 mudassar hdfs 0 2018-03-15 10:38 1 65536 /user/mudassar/snapdemo
That's the directory I created to reproduce your issue on my cluster.
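Once you have a snapshot, you can also compare it against the live directory with snapshotDiff; a minimal sketch, assuming the snapshottable directory /user/mudassar/snapdemo and a snapshot explicitly named s1 (both placeholders, adjust to your paths):
$ hdfs dfs -createSnapshot /user/mudassar/snapdemo s1
$ hdfs snapshotDiff /user/mudassar/snapdemo s1 .
Here "." refers to the current contents, and the output marks entries as created (+), deleted (-), modified (M) or renamed (R) since the snapshot.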
03-15-2018
10:05 AM
@Mudassar Hussain Steps 1, 2 and 3 are okay. You don't need to create directory2: when you enable a directory as snapshottable, in this case /user/mudassar/snapdemo, the snapshots are created under that same directory inside a hidden .snapshot subdirectory, which is why they are invisible when you run the hdfs dfs -ls command. Let me demo on HDP 2.6; I will create your user on my local environment.
Create the user mudassar as the root user:
# adduser mudassar
Switch to the superuser and owner of HDFS:
# su - hdfs
Create the snapshot demo directory; notice the -p option, as the parent directory /user/mudassar doesn't exist yet. Note: hadoop dfs is deprecated, so use the hdfs dfs command!
$ hdfs dfs -mkdir -p /user/mudassar/snapdemo
Validate the directory:
$ hdfs dfs -ls /user/mudassar
Found 1 items
drwxr-xr-x - hdfs hdfs 0 2018-03-15 09:39 /user/mudassar/snapdemo
Change ownership to mudassar:
$ hdfs dfs -chown mudassar /user/mudassar/snapdemo
Validate the change of ownership:
$ hdfs dfs -ls /user/mudassar
Found 1 items
drwxr-xr-x - mudassar hdfs 0 2018-03-15 09:39 /user/mudassar/snapdemo
Make the directory snapshottable:
$ hdfs dfsadmin -allowSnapshot /user/mudassar/snapdemo
Allowing snaphot on /user/mudassar/snapdemo succeeded
Show all the snapshottable directories in your cluster (a subcommand under hdfs):
$ hdfs lsSnapshottableDir
drwxr-xr-x 0 mudassar hdfs 0 2018-03-15 09:39 0 65536 /user/mudassar/snapdemo
Create 2 sample files in /tmp:
$ echo "Test one for snaphot No worries No worries I was worried you got stuck and didn't revert the HCC is full of solutions so" > /tmp/text1.txt
$ echo "The default behavior is that only a superuser is allowed to access all the resources of the Kafka cluster, and no other user can access those resources" > /tmp/text2.txt Validate the files were created $cd /tmp
$ls -lrt
-rw-r--r-- 1 hdfs hadoop 121 Mar 15 10:04 text1.txt
-rw-r--r-- 1 hdfs hadoop 152 Mar 15 10:04 text2.txt
Copy the first file text1.txt from local to HDFS:
$ hdfs dfs -put text1.txt /user/mudassar/snapdemo
Create a snapshot of the directory (which currently contains only text1.txt):
$ hdfs dfs -createSnapshot /user/mudassar/snapdemo
Created snapshot /user/mudassar/snapdemo/.snapshot/s20180315-101148.262
Note the .snapshot directory above, which is a hidden system directory. Show the snapshot, which contains text1.txt:
$ hdfs dfs -ls /user/mudassar/snapdemo/.snapshot/s20180315-101148.262
Found 1 items
-rw-r--r-- 3 hdfs hdfs 121 2018-03-15 10:10 /user/mudassar/snapdemo/.snapshot/s20180315-101148.262/text1.txt
Copy the second file text2.txt from local /tmp to HDFS:
$ hdfs dfs -put text2.txt /user/mudassar/snapdemo
Validate that the 2 files are now present:
$ hdfs dfs -ls /user/mudassar/snapdemo
Found 2 items
-rw-r--r-- 3 hdfs hdfs 121 2018-03-15 10:10 /user/mudassar/snapdemo/text1.txt
-rw-r--r-- 3 hdfs hdfs 152 2018-03-15 10:19 /user/mudassar/snapdemo/text2.txt
Simulate loss of the file text1.txt:
$ hdfs dfs -rm /user/mudassar/snapdemo/text1.txt
Indeed text1.txt was deleted and ONLY text2.txt remains:
$ hdfs dfs -ls /user/mudassar/snapdemo
Found 1 items
-rw-r--r-- 3 hdfs hdfs 152 2018-03-15 10:19 /user/mudassar/snapdemo/text2.txt
Restore text1.txt from the snapshot:
$ hdfs dfs -cp -ptopax /user/mudassar/snapdemo/.snapshot/s20180315-101148.262/text1.txt /user/mudassar/snapdemo
The -ptopax option preserves timestamps, ownership, permissions, ACLs and XAttrs on the copy; for the timestamp to be restored you need dfs.namenode.accesstime.precision left at its default of 1 hour (3600000 milliseconds). Check that text1.txt reappears with its original timestamp:
$ hdfs dfs -ls /user/mudassar/snapdemo
Found 2 items
-rw-r--r-- 3 hdfs hdfs 121 2018-03-15 10:10 /user/mudassar/snapdemo/text1.txt
-rw-r--r-- 3 hdfs hdfs 152 2018-03-15 10:19 /user/mudassar/snapdemo/text2.txt
In a nutshell, you don't need to create directory2: when you run the hdfs dfs -createSnapshot command it automatically creates a directory under the original one, inside .snapshot, which also saves you the extra step of creating a sort of backup directory. I hope that explains it clearly this time.
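When you are done with the demo, a minimal cleanup sketch (the snapshot name s20180315-101148.262 is the one from this run; yours will differ):
$ hdfs dfs -deleteSnapshot /user/mudassar/snapdemo s20180315-101148.262
$ hdfs dfsadmin -disallowSnapshot /user/mudassar/snapdemo
A directory can only be made non-snapshottable again once all of its snapshots have been deleted.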
03-15-2018
07:44 AM
@chandramouli muthukumaran I was just checking one of the issues you raised on Jun 14, 2016 at 03:24 PM. You received a couple of answers from me and Ashnee Sharma, but unfortunately you didn't accept either of them, which is a pity: HCC members go a long way to help whenever you open a thread, and any solution that resolved the issue should be accepted so it benefits other members who encounter the same problem. Please ensure that members who provide solutions to your problems are rewarded; this helps keep members motivated to respond.
03-14-2018
11:14 PM
@Vivek Sabha Once your setup is correct you can switch the standby to active. I hope you have successfully moved your old standby NameNode to the new host and validated that it has indeed taken over the standby role! If you have set up automatic failover with ZooKeeper Failover Controllers, the ZKFC processes will automatically transition the standby NameNode to active if the current active becomes unresponsive; the cluster failover is handled automatically when managed by Ambari. If you want to make your standby NameNode the active one, you can stop the current active NameNode (ensuring the current standby is alive) and the standby will take over as the active NameNode.
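If useful, a hedged way to check and drive the switch from the command line (nn1 and nn2 are placeholder NameNode IDs as defined in dfs.ha.namenodes.&lt;nameservice&gt;; substitute your own):
$ hdfs haadmin -getServiceState nn1
$ hdfs haadmin -getServiceState nn2
$ hdfs haadmin -failover nn1 nn2
The failover subcommand coordinates the transition so that nn2 becomes active and nn1 standby, which is usually cleaner than stopping the active NameNode outright.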
03-14-2018
10:59 PM
1 Kudo
@Vivek Sabha Unfortunately, that is not possible with the current versions of HDP (based on Hadoop 2.x), but Hadoop 3.0, now GA, will support more than 2 NameNodes. See this doc, which includes all the significant enhancements. The only viable option is to migrate a NameNode from one machine to another via Ambari; that should be the solution for you, and here is an HCC document by Kuldeep Kulkarni that you could use as a guide. Unfortunately it is 2 years old, valid for Ambari 2.2.2.x running HDP 2.2.x/2.4.x. Hope that helps.
03-14-2018
07:38 AM
@Abdul Saboor The warning emitted by the Simple Logging Facade for Java (SLF4J) is just that: a warning. Even when multiple bindings are present, SLF4J picks one logging framework/implementation and binds to it.
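If you still want to silence it, a hedged way to locate the competing binding jars (the search path below is only an illustrative guess for an HDP install; point it at wherever your application's classpath jars actually live):
# find /usr/hdp/current -name 'slf4j-log4j12-*.jar' -o -name 'logback-classic-*.jar' 2>/dev/null
Keeping exactly one binding jar on the classpath, and removing or excluding the rest, makes the warning disappear.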