Not able to insert any record into an Apache Phoenix table
- Labels: Apache HBase, Apache Phoenix
Created on 01-16-2025 05:52 AM - edited 01-16-2025 07:10 AM
I am facing an issue with upserting records into a Phoenix table through MapReduce jobs. When using the UPSERT command inside sqlline.py, records are inserted fine, but when trying to load/bulk-load records through MapReduce or psql.py I get the error below:
Exception in thread "main" org.apache.phoenix.schema.MaxPhoenixColumnSizeExceededException: ERROR 732 (LIM03): The Phoenix Column size is bigger than maximum HBase client key value allowed size for ONE_CELL_PER_COLUMN table, try upserting column in smaller value. Upsert data to table SYSTEM.CATALOG on Column TENANT_ID exceed max HBase client keyvalue size allowance, the rowkey is TENANT_ID= AND TABLE_SCHEM=SYSTEM AND TABLE_NAME=CATALOG AND COLUMN_NAME=TENANT_ID AND COLUMN_FAMILY=
    at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:150)
    at org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:124)
    at org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1309)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:441)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:423)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:422)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:410)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
    at org.apache.phoenix.schema.MetaDataClient.addColumnMutation(MetaDataClient.java:966)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2936)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1105)
    at org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:420)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:441)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:423)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:422)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:410)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1967)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3267)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3230)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3230)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:422)
I created a sample table separately and tried to run a MapReduce job to load a single record from a file with smaller values per column, but the same error appears every time. Please suggest.
HBase version: hbase-2.4.14
Phoenix version: phoenix-hbase-2.4.0-5.1.2-bin
Hadoop version: hadoop-3.3.4
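For context, the kind of statement that succeeds for me from sqlline.py looks like the sketch below (table, columns, and values are illustrative only, not the real schema):

-- Illustrative only: a small table and a single-row UPSERT; this path works
-- from sqlline.py, while the MapReduce/psql.py load path fails.
CREATE TABLE IF NOT EXISTS SAMPLE_TABLE (
    ID   BIGINT NOT NULL PRIMARY KEY,
    NAME VARCHAR(100)
);
UPSERT INTO SAMPLE_TABLE (ID, NAME) VALUES (1, 'test');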
Created 01-16-2025 10:25 PM
Hi @Kalpit,
Can you try increasing the values below in HBase? Set them in both the HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml and the HBase Client Advanced Configuration Snippet (Safety Valve) for hbase-site.xml:
hbase.client.keyvalue.maxsize=150485760
hbase.server.keyvalue.maxsize=150485760
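For a manual install without Cloudera Manager, a minimal sketch of the equivalent hbase-site.xml entries would be the following (the default for both properties is 10485760, i.e. 10 MB); apply it to both the server-side and client-side hbase-site.xml and restart HBase:

<!-- Sketch only: raise the maximum allowed cell size on client and server. -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>150485760</value>
</property>
<property>
  <name>hbase.server.keyvalue.maxsize</name>
  <value>150485760</value>
</property>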
Created 01-17-2025 03:12 AM
Hi @cloude,
Thanks for the reply. I updated the configuration as suggested and restarted HBase. After that, I am getting the error below:
x/shaded/com/google/protobuf/Descriptors$ServiceDescriptor;
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.translateException(RpcRetryingCallerImpl.java:221)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:119)
    at org.apache.hadoop.hbase.client.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:91)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callMethod(SyncCoprocessorRpcChannel.java:52)
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getVersion(MetaDataProtos.java:17684)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1644)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$5.call(ConnectionQueryServicesImpl.java:1632)
    at org.apache.hadoop.hbase.client.HTable$11.call(HTable.java:1032)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService.getDescriptor()Lorg/apache/phoenix/shaded/com/google/protobuf/Descriptors$ServiceDescriptor;
    at org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils.<clinit>(CoprocessorRpcUtils.java:61)
    at org.apache.hadoop.hbase.client.RegionCoprocessorRpcChannel$1.rpcCall(RegionCoprocessorRpcChannel.java:86)
    at org.apache.hadoop.hbase.client.RegionCoprocessorRpcChannel$1.rpcCall(RegionCoprocessorRpcChannel.java:81)
    at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
    at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
    ... 10 more
2025-01-17 16:23:04,803 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row SYSTEM.CATALOG
java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1044)
    at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1006)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1631)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1462)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1913)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:3074)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1105)
    at org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:420)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:441)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:423)
Command used:
HADOOP_CLASSPATH=/hbase-2.4.14/lib/hbase-protocol-2.4.14.jar:/hbase-2.4.14/conf hadoop jar /exahdp/installations/phoenix-hbase-2.4.0-5.1.2-bin/phoenix-client-hbase-2.4.0-5.1.2.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table <tablename> --delimiter $ --input hdfsfile/temp_1/part-m-00000
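The NoSuchMethodError above usually points at a classpath clash: the unshaded hbase-protocol-2.4.14.jar sits ahead of the shaded phoenix-client jar, so HBase's generated protobuf classes get linked against the protobuf relocated under org.apache.phoenix.shaded. Below is a sketch of the same bulk load with the classpath built via HBase's stock `hbase mapredcp` helper instead of pinning hbase-protocol first, as the Phoenix bulk_dataload page also shows (paths assumed from the command above):

# Sketch only: let the shaded phoenix-client jar supply the Phoenix and
# protobuf classes; take HBase's MapReduce dependencies from `hbase mapredcp`.
HADOOP_CLASSPATH="$(hbase mapredcp):/hbase-2.4.14/conf" \
hadoop jar /exahdp/installations/phoenix-hbase-2.4.0-5.1.2-bin/phoenix-client-hbase-2.4.0-5.1.2.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table <tablename> \
  --delimiter '$' \
  --input hdfsfile/temp_1/part-m-00000

Quoting the '$' delimiter also keeps the shell from treating it as a variable prefix.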
Created 01-17-2025 05:45 PM
@Kalpit, may I know which CDP version you are using, and where you got this command for doing the bulk upload?
Created 01-19-2025 10:54 PM
@cloude, I am using this command on Apache Hadoop 3.3.4. I installed all the Apache Hadoop components manually, with the versions mentioned initially. I got the command above from the Phoenix site's bulk data loading page:
https://phoenix.apache.org/bulk_dataload.html
Created 01-20-2025 02:06 AM
Hi @Kalpit, I believe you are using the wrong jar; you need to pick up the correct client jar (it needs to be the phoenix-<version>-client.jar).
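A quick sanity check, assuming the paths from your earlier command (`jar tf` simply lists a jar's contents): the shaded client jar should contain protobuf classes relocated under org/apache/phoenix/shaded/, the same package that appears in your NoSuchMethodError.

# Sketch only: confirm the jar on the classpath is the shaded Phoenix client
# by looking for the relocated protobuf classes inside it.
jar tf /exahdp/installations/phoenix-hbase-2.4.0-5.1.2-bin/phoenix-client-hbase-2.4.0-5.1.2.jar \
  | grep 'org/apache/phoenix/shaded/com/google/protobuf/' | head -n 5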
