Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4062 | 08-20-2018 08:26 PM |
| | 1957 | 08-15-2018 01:59 PM |
| | 2382 | 08-13-2018 02:20 PM |
| | 4130 | 07-23-2018 04:37 PM |
| | 5042 | 07-19-2018 12:52 PM |
03-23-2017
09:41 PM
1 Kudo
On a Kerberized HDP 2.5.3 cluster, I created a Kafka topic and am now trying to produce a message with ./bin/kafka-console-producer.sh --broker-list xx.hortonworks.com:6667,xxxl.hortonworks.com:6667,xxxxhortonworks.com:6667,xxx.hortonworks.com:6667 --topic sunile1 --security-protocol PLAINTEXTSASL and I get the following error: [2017-03-23 21:36:42,838] WARN Unexpected error from xxx.hortonworks.com/xxx.xx.xxx.xx; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.errors.IllegalSaslStateException: Unexpected handshake request with client mechanism GSSAPI, enabled mechanisms are [GSSAPI]
[2017-03-23 21:36:42,931] WARN Unexpected error from xx.hortonworks.com/xxx.xx.xxx.xx; closing connection (org.apache.kafka.common.network.Selector)
Any ideas?
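For reference, the producer is being run with the standard Kerberos client setup; a minimal sketch of that setup follows (the principal, realm, and JAAS file path are placeholders):

```
# Obtain a Kerberos ticket first (principal/realm are placeholders)
kinit user@EXAMPLE.COM

# Standard ticket-cache-based Kafka client JAAS configuration
cat > /tmp/kafka_client_jaas.conf <<'EOF'
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   renewTicket=true
   serviceName="kafka";
};
EOF

# Point the console producer's JVM at the JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka_client_jaas.conf"
```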
Labels:
- Apache Kafka
03-20-2017
04:55 PM
This adds a comment within the script, and I am aware of that functionality. The question is about adding a comment to a table or column, i.e., like Oracle/TD.
03-20-2017
03:20 PM
1 Kudo
Is there a way to add a comment to a Phoenix table? I know of the "comments" functionality, but it is only used to add comments to a script. I need to add a comment to a table (i.e., to a field), like the TD or Oracle functionality.
Labels:
- Apache HBase
- Apache Phoenix
03-18-2017
05:27 PM
5 Kudos
HBase along with Phoenix is one of the most powerful NoSQL combinations. HBase/Phoenix capabilities allow users to host OLTP-ish workloads natively on Hadoop with all the goodness of HA and analytic benefits on a single platform (e.g., the Spark-HBase connector or the Phoenix Hive storage handler). A common requirement for HA implementations is a DR environment. Here I will describe a few common patterns; this is in no way an exhaustive list of HBase DR patterns. In my opinion, pattern 5 is the simplest to implement and provides operational ease and efficiency.
Here are some of the high-level replication and availability strategies with HBase/Phoenix:
- HBase provides high availability within a cluster by managing region server failures transparently.
- HBase provides various cross-DC asynchronous replication schemes:
  - Master/Master replication topology: two clusters replicating all edits bi-directionally to each other.
  - Master/Slave replication topology: one cluster replicating all edits to a second cluster.
  - Cyclic replication topology: a ring topology for clusters, replicating all edits in an acyclic manner.
  - Hub and spoke replication topology: a central cluster replicating all edits to multiple clusters in a uni-directional manner.

Using the topologies described above, a cross-DC replication scheme can be set up per the desired architecture; a minimal hbase shell sketch is shown below.
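As a sketch only, a Master/Slave peer between two clusters can be wired up from the source cluster's hbase shell (the peer ID, ZooKeeper quorum, and table name are placeholders):

```
# Register the peer (slave) cluster on the source cluster
add_peer '1', CLUSTER_KEY => "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"

# Turn on replication for all column families of a table
enable_table_replication 'customer'

# Confirm the peer is registered
list_peers
```

For Master/Master, the same two steps are simply repeated in the opposite direction on the second cluster.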
Pattern 1
- Reads & writes served by both clusters.
- An implementation of client stickiness for writes/reads, based on a session-ID-like concept, needs to be investigated.
- Master/Master replication between clusters (bidirectional replication).
- Replication post failover - recovery instrumented via cyclic replication.
Pattern 2
- Reads served by both clusters.
- Writes served by a single cluster.
- Master/Master replication between clusters (bidirectional replication).
- Client will fail over to the secondary cluster.
- Replication post failover - recovery instrumented via cyclic replication.
Pattern 3
Reads
& Writes served
by single cluster Master/Master
replication between clusters Bidirectional
replication Client
will failover to secondary cluster Replication
post failover - recovery instrumented
via Cyclic
Replication
Pattern 4
- Reads & writes served by a single cluster.
- Master/Slave replication between clusters (unidirectional replication).
- Client will fail over to the secondary cluster.
- Manual resync required on the "primary" cluster due to unidirectional replication.
Pattern 5
- Ingestion via the NiFi REST API, which supports handling secure calls and round-trip responses.
- Push data to Kafka to democratize the data to all apps interested in the data set.
- Secure Kafka topics via Apache Ranger.
- NiFi dual ingest into N number of HBase/Phoenix clusters, enabling in-sync data stores.
- Operational ease: NiFi back pressure will handle any ODS downtime, with UI flow orchestration.
- Data governance built in via data provenance (event-level lineage).

Additional HBase Replication Documentation
- Monitor replication status: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-replication-monitoring-status.html
- Replication metrics: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-replication-metrics.html
- Replication configuration options: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-repl-repl-config-options.html
- HBase replication internals: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-replication-internals.html
- HBase cluster replication details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_hadoop-ha/content/hbase-cluster-repl-details.html
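On the monitoring front, a quick sketch of checking replication health from the hbase shell:

```
# Aggregate replication status, including ageOfLastShippedOp and
# sizeOfLogQueue per peer
status 'replication'

# Source-side and sink-side views individually
status 'replication', 'source'
status 'replication', 'sink'
```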
03-18-2017
03:00 AM
2 Kudos
An often-requested feature has popped up in HDP 2.5: Phoenix namespace mapping, AKA Phoenix schemas. This feature allows creating schemas and developing within a schema workspace. This guide will walk through the steps to enable this feature and how to use it.

Use case: Create two schemas named "schema1" and "schema2". Within both schemas, create a table named customer. Create another schema named "schema3", then drop schema "schema3". Note - a schema can only be dropped if it is empty (does not host any tables).

1. Through Ambari, go to HBase and select the Advanced tab.
2. Add a parameter to hbase-site.xml. To do this, go to "Custom hbase-site" and click on "Add Property".
3. Add the property Key: phoenix.schema.isNamespaceMappingEnabled, Value: true. Click on the Add button. A restart of the HBase service will be required.
4. To test the functionality, use sqlline.py. It is located under /usr/hdp/current/phoenix/bin. Note - I am using /usr/hdp/2.5.0.0-1245/phoenix/bin since I want to prove to the audience that I am in fact using HDP 2.5.
5. Create two schemas named "schema1" and "schema2".
6. Set the workspace to use schema1 and create table customer.
7. Set the workspace to use schema2 and create table customer.
8. Run !tables to view all schemas and tables.
9. Create a schema "schema3" and drop "schema3".

That's it! Simple and easy to use. Enjoy the new feature.
03-18-2017
01:55 AM
Perfect, thanks.
03-16-2017
05:36 PM
I have upgraded HDP from 2.4.2 to 2.5.3 and Ambari to 2.4.2. I do not see the Phoenix Query Server on my Phoenix/HBase nodes, and I don't see an install option through the Ambari menu either. Any ideas?
Labels:
- Apache Ambari
- Apache HBase
- Apache Phoenix
03-16-2017
05:11 AM
2 Kudos
Really, the only way is CTAS if you want to change the compression format or add compression.
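Assuming the table lives in Hive, a minimal sketch of the CTAS approach (the table names and the ORC/SNAPPY choice are placeholders):

```
-- Rewrite the data into a new table with the desired compression
CREATE TABLE sales_snappy
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY")
AS
SELECT * FROM sales;
```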
03-14-2017
12:14 AM
2 Kudos
Run hadoop dfsadmin -safemode get:

[LAKE] [xxx@lake1 ~]# hadoop dfsadmin -safemode get
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is OFF in xx.xx.xx.com/1xxxx5:8020
Safe mode is OFF in xx.xx.xx.com/1xxx:8020
[LAKE] [xxx@lake1 ~]#
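As the output itself notes, running dfsadmin through the hadoop script is deprecated; the equivalent non-deprecated form is:

```
hdfs dfsadmin -safemode get
```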
03-13-2017
08:07 PM
1 Kudo
You can also use Apache Falcon and build data retention policies for HDFS.
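As a sketch, the retention piece of a Falcon feed entity looks like the following (the feed name, cluster name, validity window, path, and retention limit are all placeholders):

```
<feed name="hdfs-retention-demo" xmlns="uri:falcon:feed:0.1">
  <frequency>days(1)</frequency>
  <clusters>
    <cluster name="primaryCluster" type="source">
      <validity start="2017-03-01T00:00Z" end="2099-12-31T00:00Z"/>
      <!-- Falcon deletes feed instances older than the limit -->
      <retention limit="days(30)" action="delete"/>
    </cluster>
  </clusters>
  <locations>
    <location type="data" path="/data/demo/${YEAR}-${MONTH}-${DAY}"/>
  </locations>
  <ACL owner="ambari-qa" group="users" permission="0755"/>
  <schema location="/none" provider="none"/>
</feed>
```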