Member since: 09-24-2015
Posts: 178
Kudos Received: 113
Solutions: 28
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3376 | 05-25-2016 02:39 AM |
| | 3590 | 05-03-2016 01:27 PM |
| | 839 | 04-26-2016 07:59 PM |
| | 14394 | 03-24-2016 04:10 PM |
| | 2019 | 02-02-2016 11:50 PM |
01-20-2018
12:22 AM
Very helpful response, Yolanda. One minor typo - in 1.a) the path is /etc/yum.repos.d/ambari-hdp-*.repo, for those who are not familiar with the yum repo location. BTW - this solution works for me.
03-17-2017
11:34 PM
I am attempting to create and use a Phoenix table on top of an HBase table that was originally created from Hive using HBaseStorageHandler. However, I get an error when selecting data from the Phoenix table.
Hive Table DDL:
create table MYTBL (
col1 string,
col2 int,
col3 int,
col4 string )
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,attr:col2,attr:col3,attr:col4")
TBLPROPERTIES ("hbase.table.name" = "MYTBL");
Phoenix Table DDL:
CREATE TABLE "MYTBL" (
pk VARCHAR PRIMARY KEY,
"attr"."col2" INTEGER,
"attr"."col3" INTEGER,
"attr"."col4" VARCHAR ) Once both the tables are created, I insert the data into Hive table using - insert into table MYTBL values ("hive", 1, 2, "m"); At this point, the data is available in Hive table and underlying HBase table. HBase table shown below I can also insert data into Phoenix table and it shows up in underlying HBase table. upsert into "MYTBL" values ('phoenix', 3, 4, 'm+c'); One thing to note here is how the integer values are being stored for the data inserted through Phoenix. When I run a select query from Phoenix, it gives an error while parsing the integer field inserted from Hive -> HBase. Text version of the error below - p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo}
p.p2 {margin: 0.0px 0.0px 0.0px 0.0px; font: 11.0px Menlo; color: #c33720}
span.s1 {font-variant-ligatures: no-common-ligatures}
span.Apple-tab-span {white-space:pre} 0: jdbc:phoenix:> select * from MYTBL; Error: ERROR 201 (22000): Illegal data. Expected length of at least 4 bytes, but had 2 (state=22000,code=201) java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 4 bytes, but had 2 at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441) at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145) at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211) at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165) at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171) at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175) at org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:114) at org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69) at org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608) at sqlline.Rows$Row.<init>(Rows.java:183) at sqlline.BufferedRows.<init>(BufferedRows.java:38) at sqlline.SqlLine.print(SqlLine.java:1650) at sqlline.Commands.execute(Commands.java:833) at sqlline.Commands.sql(Commands.java:732) at sqlline.SqlLine.dispatch(SqlLine.java:808) at sqlline.SqlLine.begin(SqlLine.java:681) at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:292)
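A likely explanation, sketched in Python under the assumption that Hive used its default string serialization while Phoenix expects its own fixed-width integer encoding (this is my reading of the defaults, not something stated in the post):

```python
import struct

# Assumption: Hive's HBaseStorageHandler stores an int such as 12 as the
# text "12" (its default, non-binary serialization) -- two UTF-8 bytes
# in the HBase cell.
hive_cell = str(12).encode("utf-8")

# Phoenix's INTEGER is a fixed-width 4-byte value (big-endian, with the
# sign bit flipped so that byte order matches numeric sort order).
phoenix_cell = struct.pack(">I", 12 ^ 0x80000000)

print(len(hive_cell), len(phoenix_cell))  # 2 4
```

A two-byte text cell like this would produce exactly the error seen ("Expected length of at least 4 bytes, but had 2") when Phoenix tries to decode it as INTEGER. Possible workarounds worth testing in your environment: declare those columns as VARCHAR on the Phoenix side, or store them in binary form from Hive (e.g. a `#b` suffix on the entries in hbase.columns.mapping) so the encodings line up.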
Labels:
- Apache HBase
- Apache Phoenix
02-14-2017
06:54 PM
Is it possible to store a timestamp value in a Hive column without the timezone part? For example, Oracle supports a timestamp value without a timezone attached to it - https://docs.oracle.com/cd/B19306_01/server.102/b14225/ch4datetime.htm#i1006760 The requirement is simply to store whatever timestamp value is given in a column. Currently, Hive automatically applies a Daylight Saving Time adjustment based on the timezone value. Any inputs are appreciated. Thanks
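For context, the naive-vs-zoned distinction being asked about can be sketched in Python (illustrative only; Hive's own behavior is what the question is after). A zone-less timestamp keeps exactly the wall-clock value given, while a zone-aware one picks up different UTC offsets across the DST boundary:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# A naive timestamp stores exactly the value given, with no zone attached.
naive = datetime(2017, 1, 15, 12, 0)
assert naive.utcoffset() is None

# Zone-aware timestamps shift with DST rules for their zone.
ny = ZoneInfo("America/New_York")
winter = datetime(2017, 1, 15, 12, 0, tzinfo=ny)  # EST, UTC-5
summer = datetime(2017, 7, 15, 12, 0, tzinfo=ny)  # EDT, UTC-4
assert winter.utcoffset() == timedelta(hours=-5)
assert summer.utcoffset() == timedelta(hours=-4)
```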
Labels:
- Apache Hive
02-09-2017
05:59 PM
Which timezone setting (OS / Hadoop core / HDFS / Hive, etc.) is used as the default timezone for Hive? And which configuration property should be changed to use a different timezone?
Labels:
- Apache Hive
01-10-2017
07:56 PM
1 Kudo
@Raj B I pondered over this when I started using NiFi and realized that this is a good feature; in fact, it is almost necessary to support a real-life data flow scenario. Let us take a simple example: say my group does risk evaluation in a bank and provides services to different LOBs within the bank (consider only the credit and debit transaction groups for this discussion). Assume the format of the transactions received from these two groups is exactly the same; however, the way the data is received is different. While the Credit group places the data on a Windows share, the Debit group requires the data to be read from their server via FTP. Now the risk group's data ingestion process, built on NiFi, will look something like this:
- A process group to read the data from the shared drive
- A process group to read the data using FTP
- Another process group to take the input from the above two process groups and apply further processing - split records into individual transactions, ensure all mandatory data elements are present (if not, route to error) - and then do two things:
- A process group with a flow to place the data on Kafka for the Storm topology to pick up and apply the model to evaluate risk
- Another process group with a flow to store the data in HDFS for archival, to support audit and compliance requirements
Now, as you can see, you need to be able to support multiple input ports and output ports for this flow. Why can't we just build the entire flow as one piece? Technically you can, but wouldn't that be messy, hard to manage, drastically less reusable, and make the overall flow less flexible and scalable? Hope this helps!
08-17-2016
03:21 PM
1 Kudo
I am using PutSyslog and need to pass in the content of the flow file. Q1 - Is there a way to reference the content of the flow file directly within the Message Body property of the PutSyslog processor? Q2 - If not, how do I extract the content of the flow file into an attribute so that I can pass that attribute to the Message Body?
Labels:
- Apache NiFi
05-25-2016
02:39 AM
3 Kudos
@Sri Bandaru In reference to Kerberos, it is better to create the Hadoop service accounts locally, to avoid sending Hadoop's internal auth requests to AD and adding to the AD load. Setting up the Hadoop accounts in a local KDC and configuring a one-way trust between that KDC and AD is the way to go.
05-18-2016
01:34 AM
+1 I am looking for the same information. Can someone also share the following pieces of information:
- Sample code using the Spark HBase Connector
- Link to the latest documentation
- Is this GA yet?
05-03-2016
01:38 PM
1 Kudo
The fix is to delete that link manually and recreate it correctly: rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf
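It can be worth verifying the result with readlink afterwards. A minimal sketch in a throwaway directory (the paths here are stand-ins for the real /etc/zookeeper and /usr/hdp layout, not the actual cluster paths):

```shell
set -e
tmp="$(mktemp -d)"
mkdir -p "$tmp/etc/zookeeper/2.3.2.0-2950/0"

# Recreate the conf link against the versioned directory, as in the fix above.
ln -s "$tmp/etc/zookeeper/2.3.2.0-2950/0" "$tmp/zookeeper-client-conf"

# Verify: readlink -f should resolve the chain to the real directory,
# not back to another symlink.
readlink -f "$tmp/zookeeper-client-conf"

rm -rf "$tmp"
```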
05-03-2016
01:27 PM
4 Kudos
@Felix Karabanov
I recently experienced something similar during an upgrade. I am not sure what the root cause is within the software, but here is what Ambari expects, and the workaround that worked for me. Correct setup - the configuration directories are expected to be set up this way, starting with /etc/zookeeper/conf: [root@sandbox zookeeper]# ll /etc/zookeeper/conf
lrwxrwxrwx 1 root root 38 2015-10-27 14:31 /etc/zookeeper/conf -> /usr/hdp/current/zookeeper-client/conf
[root@sandbox zookeeper]# ll /usr/hdp/current/zookeeper-client/conf
lrwxrwxrwx 1 root root 29 2015-10-27 14:31 /usr/hdp/current/zookeeper-client/conf -> /etc/zookeeper/2.3.2.0-2950/0
In your case, the link /usr/hdp/current/zookeeper-client/conf probably points back to /etc/zookeeper/conf, which causes the issue.
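A circular chain like that can be spotted with `readlink -f`, which fails when the links point back at each other instead of resolving to a real directory. A small sketch with throwaway links (paths are illustrative, not the HDP ones):

```shell
t="$(mktemp -d)"

# Build a deliberate loop: a -> b -> a, mimicking conf links that
# point back at each other after a bad upgrade.
ln -s "$t/b" "$t/a"
ln -s "$t/a" "$t/b"

# readlink -f cannot canonicalize a loop, so it exits non-zero (ELOOP).
readlink -f "$t/a" || echo "symlink loop detected"

rm -rf "$t"
```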