Member since 07-25-2018
Posts: 20
Kudos Received: 1
Solutions: 0
10-12-2018
03:47 PM
Hello Tracy, how did you handle the data types in Phoenix after setting the parameter? Are all columns mapped as VARCHAR, or were there scenarios where you used typed columns (INTEGER, TIMESTAMP, etc.)? Thank you.
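To make the question concrete, here is the kind of mapping I mean (a hypothetical sketch; "NS"."EVENTS", the CF column family, and the column names are all made up for illustration):

CREATE VIEW "NS"."EVENTS" (
  "PK" VARCHAR PRIMARY KEY,
  "CF"."AMOUNT" INTEGER,    -- typed; assumes the stored bytes use Phoenix's encoding
  "CF"."CREATED" TIMESTAMP  -- same assumption for timestamp values
);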
07-31-2018
09:50 PM
@Bhavesh Were you able to resolve the issue? Even after setting the property phoenix.schema.isNamespaceMappingEnabled on all nodes, I am not able to read the data from the Phoenix command line.

From the HBase shell:

hbase(main):002:0> scan 'DEFAULT:TEST'
ROW COLUMN+CELL
1 column=B:MESSAGE, timestamp=1533062047409, value=Hello
2 column=B:MESSAGE, timestamp=1533062316879, value=World
2 row(s) in 0.0860 seconds
From Phoenix:
0: jdbc:phoenix:thin:url=http://localhost:876> select * from "DEFAULT".TEST;
+-----+----------+
| ID | MESSAGE |
+-----+----------+
+-----+----------+
No rows selected (0.031 seconds)

But count works:

0: jdbc:phoenix:thin:url=http://localhost:876> select count(*) from "DEFAULT".TEST;
+-----------+
| COUNT(1) |
+-----------+
| 2 |
+-----------+
1 row selected (0.013 seconds)
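For what it's worth, one cause I am still ruling out (an assumption, not verified): values written to HBase outside of Phoenix may not decode under Phoenix's type encoding, so a VARCHAR-only view over the HBase table is sometimes suggested, along these lines (sketch only; the view must use the same name as the HBase table, so the existing Phoenix table would have to be dropped first):

CREATE VIEW "DEFAULT".TEST ("ID" VARCHAR PRIMARY KEY, "B"."MESSAGE" VARCHAR);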
07-30-2018
08:48 PM
Hello @schhabra, thank you for your reply. I updated hbase-site.xml in two locations - /usr/hbase/conf/hbase-site.xml and /usr/phoenix/conf/hbase-site.xml. PQS and the HBase service were restarted after the change. I still get the same error. The client and server are on the same EMR master EC2 node.
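For completeness, the properties in question (as listed on the namespace mapping page) look like this in hbase-site.xml:

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>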
07-30-2018
08:06 PM
Hello experts, I added properties in hbase-site.xml to support namespace mapping, referring to https://phoenix.apache.org/namspace_mapping.html. When I try to connect to Phoenix from the command line (/usr/lib/phoenix/bin/sqlline-thin.py), I see this error:

Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server

I added the properties in two files - /usr/hbase/conf/hbase-site.xml and /usr/phoenix/conf/hbase-site.xml - and also tried copying the file to the local client directory /usr/lib/phoenix/bin and setting HBASE_CONF_DIR. Am I missing any configuration step? Environment: AWS EMR 5.16.0, HBase 1.4.4, Phoenix 4.14.0. I really appreciate your suggestions.

$ /usr/lib/phoenix/bin/sqlline-thin.py
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.14.0-HBase-1.4-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
AvaticaClientRuntimeException: Remote driver error: RuntimeException: java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server. -> SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server.. Error -1 (00000) null
java.lang.RuntimeException: java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server.
at org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:621)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:285)
at org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1771)
at org.apache.calcite.avatica.remote.Service$OpenConnectionRequest.accept(Service.java:1751)
at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties. Ensure that config phoenix.schema.isNamespaceMappingEnabled is consistent on client and server.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1310)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1093)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2528)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2491)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2491)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:618)
... 15 more
at org.apache.phoenix.shaded.org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2476)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:63)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:81)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.remote.Driver.connect(Driver.java:176)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
sqlline version 1.2.0
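For anyone checking the same thing, a quick way to compare the two files on a node (paths per my layout above):

$ grep -A1 'isNamespaceMappingEnabled' /usr/hbase/conf/hbase-site.xml /usr/phoenix/conf/hbase-site.xml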
Labels:
- Apache HBase
- Apache Phoenix
07-12-2018
06:23 AM
Thank you @Vinicius Higa Murakami for sharing the issues. I am exploring alternatives to Presto for querying star schemas (built on Hive ACID tables).
07-10-2018
05:51 AM
Hello Experts,
I am using Presto (v0.194) on AWS EMR (v5.14.0) as the query layer.
Data is stored in Hadoop data nodes.
Issue: when querying a Hive table with the ACID property enabled, the Presto query fails with the error below (the attached document presto-query-errors.txt has the full errors):
select * from default.poc_date_bucket limit 10
An error occurred while calling o163.next. : java.sql.SQLException: Query failed (#20180709_164933_00004_hgb6d): Hive table 'default.poc_date_bucket' is corrupt. Found sub-directory in bucket directory for partition: <UNPARTITIONED> at com.facebook.presto.jdbc.PrestoResultSet.resultsException(PrestoResultSet.java:1798) at
Creating a new table with partitioning and bucketing enabled, the query fails with a similar error:
select * from default.poc_date_partition limit 10
An error occurred while calling o169.next. : java.sql.SQLException: Query failed (#20180709_174041_00005_hgb6d): Hive table 'default.poc_date_partition' is corrupt. Found sub-directory in bucket directory for partition: year_start_date=2019-01-01 at com.facebook.presto.jdbc.PrestoResultSet.resultsException(PrestoResultSet.java:1798) at
Bucketing is required when enabling the ACID property on a Hive table.
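For context, the test tables follow the usual Hive ACID DDL shape, roughly like this (a sketch, not my exact statement; the id and val columns and the bucket count are illustrative, and year_start_date matches the partition column in the error above):

CREATE TABLE default.poc_date_partition (
  id INT,
  val STRING
)
PARTITIONED BY (year_start_date DATE)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');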
Has anyone encountered this issue? I'd appreciate any suggestions.
Thanks again
Labels:
- Apache Hadoop
- Apache Hive
06-04-2018
05:45 PM
Hello Experts, I am working with Zeppelin on Amazon EMR 5.13.0, and when I try to install the Zeppelin interpreters jdbc and shell, I see the error below:

Cannot fetch dependencies for org.apache.zeppelin:zeppelin-jdbc:0.7.3
I also tried changing permissions (sudo chown -R zeppelin:zeppelin /usr/lib/zeppelin/local-repo/) and setting JAVA_HOME, but no luck. Has anyone seen or resolved this issue? Any suggestions would be appreciated.

Command:

$ sudo /usr/lib/zeppelin/bin/install-interpreter.sh --name jdbc,shell
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zeppelin/lib/interpreter/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zeppelin/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Install jdbc(org.apache.zeppelin:zeppelin-jdbc:0.7.3) to /usr/lib/zeppelin/interpreter/jdbc ...
org.sonatype.aether.RepositoryException: Cannot fetch dependencies for org.apache.zeppelin:zeppelin-jdbc:0.7.3
at org.apache.zeppelin.dep.DependencyResolver.getArtifactsWithDep(DependencyResolver.java:181)
at org.apache.zeppelin.dep.DependencyResolver.loadFromMvn(DependencyResolver.java:131)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:79)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:96)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:88)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:172)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:136)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:128)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.main(InstallInterpreter.java:280)
Install shell(org.apache.zeppelin:zeppelin-shell:0.7.3) to /usr/lib/zeppelin/interpreter/shell ...
org.sonatype.aether.RepositoryException: Cannot fetch dependencies for org.apache.zeppelin:zeppelin-shell:0.7.3
at org.apache.zeppelin.dep.DependencyResolver.getArtifactsWithDep(DependencyResolver.java:181)
at org.apache.zeppelin.dep.DependencyResolver.loadFromMvn(DependencyResolver.java:131)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:79)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:96)
at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:88)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:172)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:136)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:128)
at org.apache.zeppelin.interpreter.install.InstallInterpreter.main(InstallInterpreter.java:280)
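For what it's worth, I still need to rule out plain connectivity to Maven Central from the node; a simple check would be something like this (the URL follows the standard Maven repository layout for the artifact named in the error):

$ curl -sI https://repo1.maven.org/maven2/org/apache/zeppelin/zeppelin-jdbc/0.7.3/zeppelin-jdbc-0.7.3.pom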
Labels:
- Apache Zeppelin
05-30-2018
07:39 PM
Thank you again for your inputs. Yes, I extracted the schema from the Parquet file and created an external table. I am not clear on your comment "use parquet serde at time of creation of hive table." Based on the Hive documentation, I am using STORED AS PARQUET (Hive 2.3.2-amzn-2). Also, I am not sure whether the conversion using the fastparquet Python library is causing this, or whether it is a bug in Hive.
05-30-2018
06:30 PM
Yes, it is Parquet. Here is some additional information. Environment: running Hive on an AWS EMR (emr-5.13.0) cluster - Hive 2.3.2-amzn-2. I verified that all the fields exist in the Parquet file using parquet-tools. The Parquet file is generated from nested JSON using the fastparquet Python library.
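For reference, the schema check was along these lines (the parquet-tools schema subcommand prints the file's schema; the local file name here is illustrative):

$ parquet-tools schema hive_parq_test.parquet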
05-30-2018
05:49 PM
Hi Venkat, below is the create table statement. The Parquet file has all the columns listed, and the data types match the schema.

CREATE EXTERNAL TABLE Hive_Parquet_Test (
statement_Id int,
statement_MessageId string,
prepaidFlag boolean,
item_Count int,
first_Name string,
last_Name string
)
STORED AS PARQUET
LOCATION 's3://bucket_name/hive_parq_test';