Member since: 03-14-2016
Posts: 67
Kudos Received: 29
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1780 | 09-21-2018 10:02 AM
 | 3241 | 09-11-2018 10:44 AM
 | 3540 | 07-06-2016 01:14 PM
09-05-2018
10:25 AM
Yes, you can use the remaining space for other blocks.
09-05-2018
09:51 AM
@Michael Bronson 134+18=152 GB is your total configured capacity, not 320 GB. Please confirm that all volumes (/dev/sdb, /dev/sdc, /dev/sdd, /dev/sde) are added to "dfs.datanode.data.dir" (hdfs-site.xml) so they sum up to 320 GB of configured capacity.
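For reference, a minimal hdfs-site.xml sketch listing all four volumes; the mount points (/grid/0 through /grid/3) are hypothetical and should be replaced with wherever /dev/sdb through /dev/sde are actually mounted:
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- hypothetical mount points; substitute the real mounts of /dev/sdb../dev/sde -->
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data,/grid/3/hadoop/hdfs/data</value>
</property>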
09-05-2018
09:46 AM
@rabbit s -1. I totally disagree. HDFS calculates exactly what it uses. Please don't add to the confusion.
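One standard way to cross-check what HDFS itself reports (a stock command, not from the original thread):
# su - hdfs
$ hdfs dfsadmin -report | head -n 10
The first lines show Configured Capacity, Present Capacity and DFS Used exactly as HDFS calculates them.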
06-05-2018
09:23 AM
4 Kudos
A simple tool, using the HBase client API, to find the region name for a given row key. It will also give you a hint about the region where a non-existing key would be placed.
To compile the class:
$JAVA_HOME/bin/javac -cp `hbase classpath`: Regionfinder.java
To run the tool:
$JAVA_HOME/bin/java -cp `hbase classpath`: Regionfinder <tablename> <rowkey>
Regionfinder.java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
public class Regionfinder {

    // Print usage and exit if the table name and row key are not supplied.
    private static void usage(String[] args) {
        if (args.length < 2) {
            System.out.println("$JAVA_HOME/bin/java -cp `hbase classpath`: Regionfinder <tablename> <rowkey>");
            System.exit(-1);
        }
    }

    public static void main(String[] args) throws IOException {
        usage(args);
        TableName tablename = TableName.valueOf(args[0]);
        String rowkey = args[1];
        byte[] keyInBytes = Bytes.toBytes(rowkey);
        Configuration conf = HBaseConfiguration.create();
        // try-with-resources closes the connection, table and locator cleanly
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(tablename);
             RegionLocator regionLocator = connection.getRegionLocator(tablename)) {
            // Locate the region that covers this key, whether or not the row exists.
            HRegionLocation regionLocation = regionLocator.getRegionLocation(keyInBytes);
            // Fetch the row to tell an existing key apart from a missing one.
            Result result = table.get(new Get(keyInBytes));
            if (result.isEmpty()) {
                System.out.println("Rowkey " + rowkey + " does not exist in any region. It would be placed in region: "
                        + regionLocation.getRegionInfo().getRegionNameAsString());
            } else {
                System.out.println("Table Name = " + tablename + "\n" + "Row Key = " + rowkey + "\n"
                        + "Region Server = " + regionLocation.getServerName() + "\n"
                        + "Region Name = " + regionLocation.getRegionInfo().getRegionNameAsString() + "\n"
                        + "Encoded Region Name = " + regionLocation.getRegionInfo().getEncodedName());
            }
        }
    }
}
Example output:
#java -cp `hbase classpath`: Regionfinder sp 100
Table Name = sp
Row Key = 100
Region Server = hwx2633.openstacklocal,16020,1528109888044
Region Name = sp,100,1521182497105.6e87f8a4f3bf7c2762d644dba8e98022.
Encoded Region Name = 6e87f8a4f3bf7c2762d644dba8e98022
Non-existing row key:
#java -cp `hbase classpath`: Regionfinder sp 10
Rowkey 10 does not exist in any region. It would be placed in region: sp,10,1521182497105.ecc8568cb94776ad83ee04cbb422bff0
01-30-2018
12:56 PM
Appreciated, @D Giri. Your steps are perfect for changing the Journal directory without affecting any existing service. But my article has steps for doing it offline. Why do we need to re-initialize? I haven't copied the edits and metadata from the old directory, so there will not be any Journal layout. Let me add your steps as another section in the same article. Thank you for your contribution.
01-30-2018
07:57 AM
There are two sections that illustrate moving the Journal directory on the same host. Follow whichever section suits your need.
Section 1: (Service downtime is not required)
1. Change the setting in Ambari for the journal node edits directory (dfs.journalnode.edits.dir) from /hadoop/hdfs/journal/ to /data/1/journal/.
2. Don't restart any services immediately.
3. Stop the journal node on NODE1, then:
a. SSH to NODE1
b. sudo mkdir -p /data/1/journal
c. sudo chown hdfs:hadoop /data/1/journal
d. sudo rsync -ahvAX /hadoop/hdfs/journal/* /data/1/journal/
e. Start the journal node on NODE1
4. Repeat step 3 for the remaining two journal nodes, NODE2 and NODE3 (a quick copy check is sketched after this list).
5. Restart the required services accordingly (rolling or all at once).
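Before starting each journal node again (step 3e), it's worth verifying the copy; a minimal sketch, assuming the paths used above:
# diff -r /hadoop/hdfs/journal /data/1/journal && echo "directories match"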
Section 2: (Service downtime is required)
1. Create the new directory on all Journal nodes.
Example:
# mkdir -p /data/1/journal/
2. Change ownership to "hdfs:hadoop" on all Journal nodes.
Example:
# chown hdfs:hadoop /data/1/journal/
3. Back up the existing FSImage and Edits from the "dfs.namenode.name.dir" and "dfs.journalnode.edits.dir" directories on both Namenodes and all Journal nodes.
4. On the Active Namenode, perform saveNamespace, which merges the FSImage and Edits.
# su - hdfs
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -saveNamespace
5. Stop the Active and Standby Namenodes.
6. Stop all Journal nodes.
7. Update the Journal node edits directory to the new path:
Ambari -> HDFS -> Configs -> Advanced -> Advanced hdfs-site -> dfs.journalnode.edits.dir
8. Start all Journal nodes.
9. Format the JN directory.
# su - hdfs
$ hdfs namenode -initializeSharedEdits
Note: The command should be run on one of the Namenode hosts.
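After initializeSharedEdits completes, each Journal node's new directory should contain a formatted namespace directory with a VERSION file; a quick check (the <nameservice> directory name varies per cluster):
# ls /data/1/journal/<nameservice>/current/VERSION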
10. Start the Active Namenode.
Note: Don't proceed to the next step until the Namenode is out of Safemode.
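To confirm the Safemode status before moving on (a stock command, shown as a hint):
# su - hdfs
$ hdfs dfsadmin -safemode get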
11. Bootstrap the Standby Namenode.
# su - hdfs
$ hdfs namenode -bootstrapStandby
Note: The command should be run on the Standby Namenode. During this, you will get an option to format the storage directory; press Y and hit Enter.
Re-format filesystem in Storage Directory /hadoop/hdfs/namenode ? (Y or N) Y
12. Start the Standby Namenode.
11-23-2017
02:43 PM
@Maryem Mary It appears some logs got generated; your directory size is 4 KB now. What does "# ls -lrth /var/log/zookeeper" show?
11-23-2017
02:07 PM
Ok, ensure you have the right permission and ownership on the directory /var/log/zookeeper:
# ls -lrth /var/log/ | grep zookeeper
drwxr-xr-x. 3 zookeeper hadoop 4.0K Nov 23 13:32 zookeeper
Did you check after the pattern update? Ambari -> Zookeeper -> Configs -> Advanced zookeeper-log4j ...
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
11-23-2017
01:42 PM
It could be a configuration problem if you don't find any logs even after this setting, or you may have configured a different log directory location. Share the output of "/etc/zookeeper/conf/zookeeper-env.sh"? Also, you are missing the log patterns. Please append them to log4j.properties as highlighted here: Ambari -> Zookeeper -> Configs -> Advanced zookeeper-log4j ...
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
11-23-2017
01:04 PM
@Maryem Mary Share the zookeeper log or error trace to find the cause, after enabling the logs with the configuration above.