Community Articles

Find and share helpful community-sourced technical articles.
Master Guru

JSON Batch to Single Row Phoenix

I grabbed open crime data from Philly's Open Data portal; after a free sign-up you get access to JSON crime data. You can grab individual dates or date ranges covering thousands of records. I wanted to spool each JSON record into a separate HBase row. With the flexibility of Apache NiFi 1.0.0, I can schedule runs via cron or other familiar setups. This is my master flow.


First I use GetHTTP to retrieve the JSON messages over SSL. I split the records up, store them as raw JSON in HDFS, send some of them via email, format them for Phoenix SQL, and store them in Phoenix/HBase. All with no coding, in a simple flow. For extra output, I can send them to a Riemann server for monitoring.


Setting up SSL for accessing HTTPS data like the Philly crime feed requires a little configuration, plus knowing which Java JRE you are using to run NiFi. You can run service nifi status to quickly find out which JRE that is.


Split the Records

The open data set has many rows of data, so let's split them up and pull out the attributes we want from the JSON.
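Outside of NiFi, the SplitJson/EvaluateJsonPath step can be sketched in a few lines. This is only an illustration of what the flow does, not the flow itself; the sample batch and the field list are taken from the phillycrime schema shown further down.

```python
import json

# Split an array of crime records into individual JSON objects and keep
# only the attributes we want, mirroring SplitJson + EvaluateJsonPath.
WANTED = ["dc_dist", "dc_key", "dispatch_date", "dispatch_date_time",
          "dispatch_time", "hour", "location_block", "psa",
          "text_general_code", "ucr_general"]

def split_records(payload: str):
    """Split a JSON array payload into one dict per record,
    keeping only the wanted attributes (missing ones become None)."""
    return [{k: rec.get(k) for k in WANTED} for rec in json.loads(payload)]

batch = '''[{"dc_dist": "18", "dc_key": "200918067518", "hour": "14",
             "text_general_code": "Other Assaults", "extra_field": "ignored"}]'''
records = split_records(batch)
```

Each resulting dict corresponds to one flowfile after the split, ready to be formatted into an upsert.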



Another part that requires specific formatting is the Phoenix connection. Make sure you point to the correct driver, and if security is enabled, make sure that is configured as well.
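For reference, here is a sketch of the values that go into NiFi's DBCPConnectionPool for an unsecured Phoenix cluster. The host, port, and znode match the defaults used in the sqlline command below; adjust them for your cluster.

```python
# Build the Phoenix JDBC connection URL used by the DBCP connection pool.
# Format: jdbc:phoenix:<zookeeper-host>:<port>:<znode>
def phoenix_jdbc_url(zk_host: str, zk_port: int = 2181,
                     znode: str = "/hbase-unsecure") -> str:
    return f"jdbc:phoenix:{zk_host}:{zk_port}:{znode}"

# Database Driver Class Name to set in the connection pool:
DRIVER_CLASS = "org.apache.phoenix.jdbc.PhoenixDriver"

url = phoenix_jdbc_url("localhost")
```

On a secured cluster the znode is typically /hbase instead of /hbase-unsecure, and Kerberos properties must be supplied as well.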


Load the Data (Upsert)


Once your data is loaded, you can quickly check it with /usr/hdp/current/phoenix-client/bin/ localhost:2181:/hbase-unsecure


The SQL for this data set is pretty straightforward.

CREATE TABLE phillycrime (
    dc_dist varchar,
    dc_key varchar NOT NULL PRIMARY KEY,
    dispatch_date varchar,
    dispatch_date_time varchar,
    dispatch_time varchar,
    hour varchar,
    location_block varchar,
    psa varchar,
    text_general_code varchar,
    ucr_general varchar);

{"dc_dist":"18","dc_key":"200918067518","dispatch_date":"2009-10-02","dispatch_date_time":"2009-10-02T14:24:00.000","dispatch_time":"14:24:00","hour":"14","location_block":"S 38TH ST  / MARKETUT ST","psa":"3","text_general_code":"Other Assaults","ucr_general":"800"}
upsert into phillycrime values ('18', '200918067518', '2009-10-02','2009-10-02T14:24:00.000','14:24:00','14', 'S 38TH ST  / MARKETUT ST','3','Other Assaults','800');
!describe phillycrime

The DC_KEY is unique, so I used that as the Phoenix primary key. Now all the data I get will be added, and any repeats will safely update. Sometimes a pull may re-fetch some of the same data; that's okay, it will just upsert to the same value.
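The step that formats each JSON record into the upsert above can be sketched like this. It is only an illustration of the transformation (in the flow itself, processors like ReplaceText build the statement); the column order matches the phillycrime table, and single quotes are doubled so values containing quotes don't break the statement.

```python
import json

# Turn one crime record into the idempotent UPSERT used above.
COLUMNS = ["dc_dist", "dc_key", "dispatch_date", "dispatch_date_time",
           "dispatch_time", "hour", "location_block", "psa",
           "text_general_code", "ucr_general"]

def to_upsert(record: dict) -> str:
    """Render a record as an UPSERT statement; missing columns become
    empty strings, and embedded single quotes are escaped by doubling."""
    vals = ", ".join(
        "'" + str(record.get(c, "")).replace("'", "''") + "'" for c in COLUMNS)
    return f"upsert into phillycrime values ({vals});"

rec = json.loads('{"dc_dist":"18","dc_key":"200918067518",'
                 '"dispatch_date":"2009-10-02","dispatch_time":"14:24:00"}')
stmt = to_upsert(rec)
```

Because dc_key is the primary key, running the same generated statement twice simply rewrites the row with identical values, which is what makes re-fetching the same dates safe.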

Master Guru

The default Java keystore (JKS) password is changeit.

Rising Star

Hi, we have followed the same method. It works successfully most of the time, but sometimes we get the error below:

2016-10-28 18:43:03,603 ERROR [Timer-Driven Process Thread-70] o.apache.nifi.processors.standard.PutSQL PutSQL[id=df59f4c8-f60c-4eb3-7fda-882f7ece2d2a] PutSQL[id=df59f4c8-f60c-4eb3-7fda-882f7ece2d2a] failed to process session due to java.lang.IllegalArgumentException: Row length 37812 is > 32767: java.lang.IllegalArgumentException: Row length 37812 is > 32767
2016-10-28 18:43:03,611 ERROR [Timer-Driven Process Thread-70] o.apache.nifi.processors.standard.PutSQL java.lang.IllegalArgumentException: Row length 37812 is > 32767
    at org.apache.hadoop.hbase.client.Mutation.checkRow( ~[na:na]
    at org.apache.hadoop.hbase.client.Put.<init>( ~[na:na]
    at org.apache.hadoop.hbase.client.Put.<init>( ~[na:na]
    at org.apache.hadoop.hbase.client.Put.<init>( ~[na:na]
    at org.apache.phoenix.index.IndexMaintainer.buildUpdateMutation( ~[na:na]
    at org.apache.phoenix.util.IndexUtil.generateIndexData( ~[na:na]
    at org.apache.phoenix.execute.MutationState$ ~[na:na]
    at org.apache.phoenix.execute.MutationState$ ~[na:na]
    at org.apache.phoenix.execute.MutationState.commit( ~[na:na]
    at org.apache.phoenix.jdbc.PhoenixConnection$ ~[na:na]
    at org.apache.phoenix.jdbc.PhoenixConnection$ ~[na:na]
    at ~[na:na]
    at org.apache.phoenix.jdbc.PhoenixConnection.commit( ~[na:na]
    at org.apache.commons.dbcp.DelegatingConnection.commit( ~[na:na]
    at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.commit( ~[na:na]
    at org.apache.nifi.processors.standard.PutSQL.onTrigger( ~[na:na]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger( ~[nifi-api-]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger( ~[nifi-framework-core-]
    at [nifi-framework-core-]
    at [nifi-framework-core-]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$ [nifi-framework-core-]
    at java.util.concurrent.Executors$ [na:1.8.0_91]
    at java.util.concurrent.FutureTask.runAndReset( [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301( [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor.runWorker( [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor$ [na:1.8.0_91]
    at [na:1.8.0_91]

FYI, we are using NiFi 1.0, and none of our rows is longer than 500 bytes. The first time we got the error, we just cleared the queue and restarted, and it worked fine afterwards. Then we got the error again. Restarting is not a good solution, and we lose data when we do that. PFA for more information.
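One observation on this error: HBase rejects any row key longer than Short.MAX_VALUE (32767) bytes, and the stack trace fails inside IndexMaintainer.buildUpdateMutation, i.e. while building a secondary-index mutation. Phoenix forms an index row key by concatenating the indexed column values with the table's primary key, so a row under 500 bytes can still blow past the limit if a large value ends up in an indexed column. A rough pre-flight guard could look like this; the 32767 limit is HBase's, but which columns are indexed is an assumption you would fill in for your own table, and the size estimate ignores Phoenix's internal separators.

```python
# HBase rejects row keys longer than Short.MAX_VALUE (32767) bytes.
# Estimate the index row key size as indexed column bytes plus
# primary-key bytes, and flag records that would exceed the limit.
HBASE_MAX_ROW_KEY = 32767  # bytes (Short.MAX_VALUE)

def index_key_fits(indexed_values, pk_value, limit=HBASE_MAX_ROW_KEY):
    """Rough upper-bound check on the secondary-index row key size."""
    size = sum(len(str(v).encode("utf-8")) for v in indexed_values)
    size += len(str(pk_value).encode("utf-8"))
    return size <= limit

ok = index_key_fits(["Other Assaults", "S 38TH ST / MARKET ST"], "200918067518")
too_big = index_key_fits(["x" * 40000], "200918067518")
```

Records that fail the check could be routed to a failure relationship instead of crashing the PutSQL batch, which avoids the clear-queue-and-restart workaround.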





Master Guru

good article

Rising Star

Could you share the NiFi template for this flow?

New Contributor

Thanks for your article, which is informative.

I tried the same for my specific case, but I am unable to insert data into Phoenix using NiFi.

Could you please help us by sharing a simple template, or try to address my problem?

Problem: I need to insert system logs into the Phoenix table using NiFi.

New Contributor

Hello! If I insert a string containing ' or ", PutSQL to Phoenix returns a syntax error. How should I solve this?