Member since: 09-27-2015 | Posts: 66 | Kudos Received: 56 | Solutions: 15
10-03-2016 05:59 AM
Great article. Regarding "PS: only jdbc/odbc clients can use initialized Tez AM, hive cli (command line) or other external hive components don't use initialized Tez AM": Beeline is also able to use the initialized Tez AMs, since it connects through HiveServer2. The Hive CLI can't, as it bypasses HiveServer2.
06-16-2016 02:11 PM
5 Kudos
The following process assumes that you are installing your HDP cluster with a configuration management tool such as Ansible, Puppet or Chef, and that you are deploying the cluster using an Ambari blueprint. If you are looking into automating your deployment, you might be interested in the great ansible-hadoop-asap project from @Alex Bush: https://github.com/bushnoh/ansible-hadoop-asap

This process can be used for:
- Migrating from HDP 2.2 to 2.4 using a full reinstallation
- Upgrading the OS from RHEL 6 to RHEL 7
- OS boot from network and reinstallation of HDP

The following process has been tested to migrate from HDP 2.3 to HDP 2.4 on a kerberised cluster, and to reinstall an HDP 2.3 cluster. It should also work for HDP 2.5, as the HDFS version is consistent across those releases.

step 1: - Make a backup of your metastore DBs (Ranger, Hive and Oozie).

step 2: - Check that the Namenode has a folder called namenode-formatted under dfs.namenode.name.dir. If you are using Namenode HA, check on both Namenodes (one will probably be missing it).

step 3: - Launch the reinstallation of your OS, making sure that the disks/folders used by HDP to store data are not reformatted. If you are deploying your OS using kickstart, add the --noformat option to the part lines for the disks concerned.

step 4: - Grab a coffee whilst the OS installation takes place.

step 5: - Following a successful OS installation, launch your automated deployment of HDP. It doesn't matter if you also upgrade Ambari at the same time.

step 6: - Grab a coffee whilst the installation takes place.

step 7: - If your DB server has also been reinstalled as part of the process, you will need to stop the services (Hive, Ranger and Oozie) and restore the DBs. (NB: upon restart, the schema will automatically be upgraded if required.)

On HDP 2.3

step 8: - All your services should already be available. If not, start them manually. Restart the Hive, Ranger and Oozie services.
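Step 2's check can be scripted. The sketch below dry-runs the logic against a temporary directory; on a real Namenode, point nn_dir at your actual dfs.namenode.name.dir value (the path in the comment is an assumption):

```shell
# Substitute nn_dir with your dfs.namenode.name.dir, e.g. /hadoop/hdfs/namenode.
# A temp dir is used here so the check can be rehearsed safely.
nn_dir=$(mktemp -d)
mkdir -p "$nn_dir/namenode-formatted"
if [ -d "$nn_dir/namenode-formatted" ]; then
  echo "namenode-formatted marker present"
else
  echo "marker missing on this namenode"
fi
rm -rf "$nn_dir"
```

On an HA pair, run the same check on both Namenodes, since one of them is likely to be missing the marker.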
step 9: - Congratulate yourself on a smooth upgrade.

On HDP 2.2

step 8: - Start HDFS manually:
# Log in as hdfs
su - hdfs
# Start all journalnodes
hdfs journalnode
# Start the namenode in upgrade mode from the command line
hdfs namenode -upgrade
# Bootstrap the second namenode
hdfs namenode -bootstrapStandby

step 9: - Start all services from Ambari except for the Namenode. (They should all start.)

step 10: - Check that all your data is there and that you can access it (run a couple of known Hive, HBase, ... queries). If everything is correct, move to step 11. You won't be able to roll back afterwards, so make sure everything is working as you expect.

step 11: - Finalize the upgrade:
# Log in as hdfs
su - hdfs
# Run the finalize command
hdfs dfsadmin -finalizeUpgrade
Finalize upgrade successful

step 12: - Restart all HDFS components via Ambari.

step 13: - Congratulate yourself on a smooth upgrade.
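For step 3, a kickstart part entry that preserves an existing HDP data partition might look like the following sketch. The mount point, device and filesystem type here are assumptions and must match your own disk layout:

```
# Retain the existing data partition untouched during OS reinstallation
part /grid/0 --fstype=ext4 --onpart=sdb1 --noformat
```

--noformat tells the installer to reuse the partition without formatting it, which is what keeps the HDFS block data intact across the reinstall.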
03-29-2016 12:52 PM
6 Kudos
When using SmartSense 1.2 or below in conjunction with OpenJDK, you get the following error upon startup. It's a non-issue which will be resolved in the next SmartSense version.

Traceback (most recent call last):
File "/usr/sbin/hst-agent.py", line 420, in <module> main(sys.argv)
File "/usr/sbin/hst-agent.py", line 397, in main setup(options)
File "/usr/sbin/hst-agent.py", line 323, in setup server_hostname = get_server_hostname(server, tries, try_sleep, options.quiet)
File "/usr/sbin/hst-agent.py", line 107, in get_server_hostname hostname = validate_server_hostname(default_hostname, tries, try_sleep)
File "/usr/sbin/hst-agent.py", line 125, in validate_server_hostname elif not register_agent(server_hostname):
File "/usr/sbin/hst-agent.py", line 143, in register_agent if not server_api.register_agent(agent_version):
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/ServerAPI.py", line 104, in register_agent content = self.call(request)
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/ServerAPI.py", line 52, in call self.cachedconnect = security.CachedHTTPSConnection(self.config)
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 111, in __init__ self.connect()
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 116, in connect self.httpsconn.connect()
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/security.py", line 87, in connect raise err
ssl.SSLError: [Errno 8] _ssl.c:492: EOF occurred in violation of protocol
To fix this issue, you will need to change the SSL digest from md5 to sha256. Here are the steps required to do it.

1. From Ambari, stop the SmartSense service (all components).

2. Back up the old server keys on the HST server host:
cp -rp /var/lib/smartsense/hst-server/keys /var/lib/smartsense/hst-server/keys.backup

3. Clean out the old keys on the HST server host:
rm -f /var/lib/smartsense/hst-server/keys/ca.key
rm -f /var/lib/smartsense/hst-server/keys/*.csr
rm -f /var/lib/smartsense/hst-server/keys/*.crt
rm -rf /var/lib/smartsense/hst-server/keys/db/*
mkdir /var/lib/smartsense/hst-server/keys/db/newcerts
touch /var/lib/smartsense/hst-server/keys/db/index.txt
echo 01 > /var/lib/smartsense/hst-server/keys/db/serial

4. Modify the default digest on the HST server host: edit the file /var/lib/smartsense/hst-server/keys/ca.config and change the line "default_md = md5" to "default_md = sha256".

5. Clean out the old keys on each HST agent host:
rm -f /var/lib/smartsense/hst-agent/keys/*

6. If using an HST gateway, stop the service and remove the certs on the gateway host:
hst gateway stop
rm -f /var/lib/smartsense/hst-gateway/keys/ca.key
rm -f /var/lib/smartsense/hst-gateway/keys/*.csr
rm -f /var/lib/smartsense/hst-gateway/keys/*.crt
rm -rf /var/lib/smartsense/hst-gateway/keys/db/*
mkdir /var/lib/smartsense/hst-gateway/keys/db/newcerts
touch /var/lib/smartsense/hst-gateway/keys/db/index.txt
echo 01 > /var/lib/smartsense/hst-gateway/keys/db/serial

7. If using an HST gateway, modify the default digest on the gateway host: edit the file /var/lib/smartsense/hst-gateway/keys/ca.config and change the line "default_md = md5" to "default_md = sha256".

8. If using an HST gateway, remove the old certs on the HST server host:
rm -rf /var/lib/smartsense/hst-gateway-client/keys

9. If using an HST gateway, restart the service on the gateway host:
hst gateway start

10. Restart the SmartSense service from Ambari (all components) and verify that both the Ambari SmartSense service and the SmartSense view show the correct number of agents registered.
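The digest change in ca.config is a one-line edit, which can be done with sed. The sketch below rehearses it on a throwaway temp copy so it can be dry-run safely; on the HST hosts, point cfg at the real ca.config paths given above:

```shell
# Rehearsed on a temp copy; set cfg to the real ca.config on the HST host.
cfg=$(mktemp)
printf 'default_md = md5\n' > "$cfg"
sed -i 's/^default_md = md5$/default_md = sha256/' "$cfg"
grep '^default_md' "$cfg"   # → default_md = sha256
rm -f "$cfg"
```

Note that -i edits in place (GNU sed syntax, as found on RHEL); keep the keys.backup copy from the earlier step in case you need to roll back.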
10-14-2015 03:55 PM
2 Kudos
Please find below sample code which writes to a kerberised HBase cluster using a keytab.

package hbase;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;
public class HBaseTest {
private static Configuration conf = null;
/**
* Initialization
*/
static {
conf = HBaseConfiguration.create();
conf.set("hadoop.security.authentication", "Kerberos");
}
/**
* Create a table
*/
public static void creatTable(String tableName, String[] familys)
throws Exception {
HBaseAdmin admin = new HBaseAdmin(conf);
if (admin.tableExists(tableName)) {
System.out.println("table already exists!");
} else {
HTableDescriptor tableDesc = new HTableDescriptor(tableName);
for (int i = 0; i < familys.length; i++) {
tableDesc.addFamily(new HColumnDescriptor(familys[i]));
}
admin.createTable(tableDesc);
System.out.println("create table " + tableName + " ok.");
}
}
/**
* Delete a table
*/
public static void deleteTable(String tableName) throws Exception {
try {
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(tableName);
admin.deleteTable(tableName);
System.out.println("delete table " + tableName + " ok.");
} catch (MasterNotRunningException e) {
e.printStackTrace();
} catch (ZooKeeperConnectionException e) {
e.printStackTrace();
}
}
/**
* Put (or insert) a row
*/
public static void addRecord(String tableName, String rowKey,
String family, String qualifier, String value) throws Exception {
try {
HTable table = new HTable(conf, tableName);
Put put = new Put(Bytes.toBytes(rowKey));
put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes
.toBytes(value));
table.put(put);
System.out.println("insert record " + rowKey + " to table "
+ tableName + " ok.");
} catch (IOException e) {
e.printStackTrace();
}
}
/**
* Delete a row
*/
public static void delRecord(String tableName, String rowKey)
throws IOException {
HTable table = new HTable(conf, tableName);
List<Delete> list = new ArrayList<Delete>();
Delete del = new Delete(rowKey.getBytes());
list.add(del);
table.delete(list);
System.out.println("deleted record " + rowKey + " ok.");
}
/**
* Get a row
*/
public static void getOneRecord (String tableName, String rowKey) throws IOException{
HTable table = new HTable(conf, tableName);
Get get = new Get(rowKey.getBytes());
Result rs = table.get(get);
for(KeyValue kv : rs.raw()){
System.out.print(new String(kv.getRow()) + " " );
System.out.print(new String(kv.getFamily()) + ":" );
System.out.print(new String(kv.getQualifier()) + " " );
System.out.print(kv.getTimestamp() + " " );
System.out.println(new String(kv.getValue()));
}
}
/**
* Scan (or list) a table
*/
public static void getAllRecord (String tableName) {
try{
HTable table = new HTable(conf, tableName);
Scan s = new Scan();
ResultScanner ss = table.getScanner(s);
for(Result r:ss){
for(KeyValue kv : r.raw()){
System.out.print(new String(kv.getRow()) + " ");
System.out.print(new String(kv.getFamily()) + ":");
System.out.print(new String(kv.getQualifier()) + " ");
System.out.print(kv.getTimestamp() + " ");
System.out.println(new String(kv.getValue()));
}
}
} catch (IOException e){
e.printStackTrace();
}
}
public static void main(String[] args) {
try {
String tablename = "scores";
String[] familys = { "grade", "course" };
if (!UserGroupInformation.isSecurityEnabled()) throw new IOException("Security is not enabled in core-site.xml");
try {
UserGroupInformation.setConfiguration(conf);
UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase-app@KRB.HDP", "/home/hbase-app/hbase-app.headless.keytab" );
UserGroupInformation.setLoginUser(userGroupInformation);
} catch(IOException e) {
e.printStackTrace();
}
HBaseTest.creatTable(tablename, familys);
// add record zkb
HBaseTest.addRecord(tablename, "zkb", "grade", "", "5");
HBaseTest.addRecord(tablename, "zkb", "course", "", "90");
HBaseTest.addRecord(tablename, "zkb", "course", "math", "97");
HBaseTest.addRecord(tablename, "zkb", "course", "art", "87");
// add record baoniu
HBaseTest.addRecord(tablename, "baoniu", "grade", "", "4");
HBaseTest.addRecord(tablename, "baoniu", "course", "math", "89");
System.out.println("===========get one record========");
HBaseTest.getOneRecord(tablename, "zkb");
System.out.println("===========show all record========");
HBaseTest.getAllRecord(tablename);
System.out.println("===========del one record========");
HBaseTest.delRecord(tablename, "baoniu");
HBaseTest.getAllRecord(tablename);
System.out.println("===========show all record========");
HBaseTest.getAllRecord(tablename);
} catch (Exception e) {
e.printStackTrace();
}
}
}
You can build, compile and run the example as follows.

From the command line, create a directory called 'bld':
# mkdir bld

Compile the 'HBaseTest.java' class into the 'bld' directory:
# javac -cp `hbase classpath` -d bld/ HBaseTest.java

Package the compiled class into a jar file called 'HBaseTest.jar':
# jar -cvf HBaseTest.jar -C bld/ .

On the client side, run 'HBaseTest.jar' using the 'hadoop jar' command:
# export HADOOP_CLASSPATH=`hbase classpath`
# hadoop jar HBaseTest.jar hbase.HBaseTest

NOTE: The code above is based on the Hortonworks Developer class.