Member since: 09-27-2015
Posts: 66
Kudos Received: 56
Solutions: 15
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1294 | 02-13-2017 02:17 PM
| 1965 | 02-03-2017 05:23 PM
| 2085 | 01-27-2017 04:03 PM
| 1223 | 01-26-2017 12:17 PM
| 1777 | 09-28-2016 11:03 AM
11-17-2015
12:22 PM
@Neeraj, thanks. Could you confirm whether we need a shared DB for Ambari Views, or whether each view server uses a standalone DB? Also, how many concurrent users do you expect to manage with 128 GB / 16 vCPUs?
11-17-2015
11:50 AM
2 Kudos
Do we have a guideline on how many users should run per Ambari Views server? What are the guidelines? Do we have a spec for a "server"? I was thinking of installing Ambari Views on a VM.
Labels:
- Apache Ambari
11-10-2015
10:57 AM
1 Kudo
No, we are not supporting the PPC architecture. Olivier
11-03-2015
09:19 AM
Yes, we do. It's tracked under ETL in the 1H project. You may want to check the latest roadmap doc on Box.
11-02-2015
10:28 AM
Could you confirm that, from an operations point of view, I can add / remove a coprocessor using the following process?
- Stop the applications relying on the coprocessor
- Remove the coprocessor from hbase-site
- Rolling restart of the masters (assuming HA masters)
- Rolling restart of the region servers (if we are using HA region servers, we should not have any disruption of service)
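For reference, a statically loaded region coprocessor is the kind registered in hbase-site.xml via the hbase.coprocessor.region.classes property; a minimal sketch of the entry that step two would remove (the observer class name here is hypothetical):
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.MyRegionObserver</value>
</property>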
11-01-2015
09:59 PM
Is there a way of restricting access to an HBase coprocessor in a multi-tenant environment? What should I be taking into consideration when using coprocessors?
Labels:
- Apache HBase
10-28-2015
04:58 PM
3 Kudos
Here are some of the key points for running Flume in "HA":
1. Set up File Channels instead of Memory Channels (using a RAID array is very paranoid but possible) on any Flume agent in use; see the sketch after this list.
2. Create a nanny process/script to watch for Flume agent failures and restart them immediately.
3. Put the Flume agent collector/aggregation/2nd tier behind a network load balancer and use a VIP. This also has the benefit of balancing load for high ingest.
4. Optionally have a sink that dumps to cycling files on the local drives (separate from the drive the File Channel operates on), in addition to a sink that forwards to the next Flume node or directly to HDFS. At least then you have the time it takes to fill a drive to correct any major issues and recover lost ingest streams.
5. Use the built-in JMX counters in Flume to set up alerts in your favorite Operations Center application.
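A minimal file-channel sketch for point 1, assuming an agent named a1 whose source r1 and sink k1 are already defined (the agent name, component names, and paths are hypothetical):
# flume.conf fragment: durable file channel instead of a memory channel
a1.channels = c1
a1.channels.c1.type = file
# keep the checkpoint and data directories on a reliable volume
a1.channels.c1.checkpointDir = /var/flume/file-channel/checkpoint
a1.channels.c1.dataDirs = /var/flume/file-channel/data
# wire the existing source and sink to the file channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1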
10-20-2015
11:10 AM
I've got a customer who is going to have multiple Storm topologies running (different business areas) on top of a single cluster. What's the best way to guarantee multi-tenancy? (They are not using Slider.)
Thanks, Olivier
Labels:
- Apache Storm
10-14-2015
03:55 PM
2 Kudos
Please find below sample code that writes to a Kerberized HBase cluster using a keytab.
package hbase;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;
public class HBaseTest {
private static Configuration conf = null;
/**
* Initialization
*/
static {
conf = HBaseConfiguration.create();
conf.set("hadoop.security.authentication", "kerberos");
}
/**
* Create a table
*/
public static void createTable(String tableName, String[] familys)
throws Exception {
HBaseAdmin admin = new HBaseAdmin(conf);
if (admin.tableExists(tableName)) {
System.out.println("table already exists!");
} else {
HTableDescriptor tableDesc = new HTableDescriptor(tableName);
for (int i = 0; i < familys.length; i++) {
tableDesc.addFamily(new HColumnDescriptor(familys[i]));
}
admin.createTable(tableDesc);
System.out.println("create table " + tableName + " ok.");
}
}
/**
* Delete a table
*/
public static void deleteTable(String tableName) throws Exception {
try {
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(tableName);
admin.deleteTable(tableName);
System.out.println("delete table " + tableName + " ok.");
} catch (MasterNotRunningException e) {
e.printStackTrace();
} catch (ZooKeeperConnectionException e) {
e.printStackTrace();
}
}
/**
* Put (or insert) a row
*/
public static void addRecord(String tableName, String rowKey,
String family, String qualifier, String value) throws Exception {
try {
HTable table = new HTable(conf, tableName);
Put put = new Put(Bytes.toBytes(rowKey));
put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes
.toBytes(value));
table.put(put);
System.out.println("insert recored " + rowKey + " to table "
+ tableName + " ok.");
} catch (IOException e) {
e.printStackTrace();
}
}
/**
* Delete a row
*/
public static void delRecord(String tableName, String rowKey)
throws IOException {
HTable table = new HTable(conf, tableName);
List<Delete> list = new ArrayList<Delete>();
Delete del = new Delete(rowKey.getBytes());
list.add(del);
table.delete(list);
System.out.println("del recored " + rowKey + " ok.");
}
/**
* Get a row
*/
public static void getOneRecord (String tableName, String rowKey) throws IOException{
HTable table = new HTable(conf, tableName);
Get get = new Get(rowKey.getBytes());
Result rs = table.get(get);
for(KeyValue kv : rs.raw()){
System.out.print(new String(kv.getRow()) + " " );
System.out.print(new String(kv.getFamily()) + ":" );
System.out.print(new String(kv.getQualifier()) + " " );
System.out.print(kv.getTimestamp() + " " );
System.out.println(new String(kv.getValue()));
}
}
/**
* Scan (or list) a table
*/
public static void getAllRecord (String tableName) {
try{
HTable table = new HTable(conf, tableName);
Scan s = new Scan();
ResultScanner ss = table.getScanner(s);
for(Result r:ss){
for(KeyValue kv : r.raw()){
System.out.print(new String(kv.getRow()) + " ");
System.out.print(new String(kv.getFamily()) + ":");
System.out.print(new String(kv.getQualifier()) + " ");
System.out.print(kv.getTimestamp() + " ");
System.out.println(new String(kv.getValue()));
}
}
} catch (IOException e){
e.printStackTrace();
}
}
public static void main(String[] args) {
try {
String tablename = "scores";
String[] familys = { "grade", "course" };
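// fail fast if security is not enabled in the Hadoop configuration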
if (!UserGroupInformation.isSecurityEnabled()) throw new IOException("Security is not enabled in core-site.xml");
try {
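// log in from the headless keytab and make the resulting UGI the current login user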
UserGroupInformation.setConfiguration(conf);
UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase-app@KRB.HDP", "/home/hbase-app/hbase-app.headless.keytab" );
UserGroupInformation.setLoginUser(userGroupInformation);
} catch(IOException e) {
e.printStackTrace();
}
HBaseTest.createTable(tablename, familys);
// add record zkb
HBaseTest.addRecord(tablename, "zkb", "grade", "", "5");
HBaseTest.addRecord(tablename, "zkb", "course", "", "90");
HBaseTest.addRecord(tablename, "zkb", "course", "math", "97");
HBaseTest.addRecord(tablename, "zkb", "course", "art", "87");
// add record baoniu
HBaseTest.addRecord(tablename, "baoniu", "grade", "", "4");
HBaseTest.addRecord(tablename, "baoniu", "course", "math", "89");
System.out.println("===========get one record========");
HBaseTest.getOneRecord(tablename, "zkb");
System.out.println("===========show all record========");
HBaseTest.getAllRecord(tablename);
System.out.println("===========del one record========");
HBaseTest.delRecord(tablename, "baoniu");
HBaseTest.getAllRecord(tablename);
System.out.println("===========show all record========");
HBaseTest.getAllRecord(tablename);
} catch (Exception e) {
e.printStackTrace();
}
}
}
You can build, compile, and run the example as follows. From the command line, create a directory called 'bld':
# mkdir bld
Compile the 'HBaseTest.java' class into the 'bld' directory:
# javac -cp `hbase classpath` -d bld/ HBaseTest.java
Package the compiled class into a jar file called 'HBaseTest.jar':
# jar -cvf HBaseTest.jar -C bld/ .
On the client side, run 'HBaseTest.jar' using the 'hadoop jar' command:
# export HADOOP_CLASSPATH=`hbase classpath`
# hadoop jar HBaseTest.jar hbase.HBaseTest
NOTE: The code above is based on the Hortonworks Developer class.
10-09-2015
05:17 PM
Have you tried to run the exact same command from the CLI? (Beware of which user is running the ambari-agent.) Thanks, Olivier