Member since: 09-21-2015
Posts: 133
Kudos Received: 130
Solutions: 24

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7083 | 12-17-2016 09:21 PM |
| | 4486 | 11-01-2016 02:28 PM |
| | 2213 | 09-23-2016 09:50 PM |
| | 3420 | 09-21-2016 03:08 AM |
| | 1779 | 09-19-2016 06:41 PM |
07-06-2016
07:12 PM
1 Kudo
I have custom coprocessors that I need to make available to HBase. Do I copy my jar files into a lib directory somewhere, or use Ambari to append to a RegionServer classpath environment variable?
Labels:
- Apache HBase
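For reference, a third option is attaching the coprocessor per-table from a jar on HDFS via the Admin API, which avoids touching the RegionServer classpath at all. A minimal sketch, assuming HBase 1.x-era client APIs (the class name, jar path, and table name below are hypothetical):

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory

// Attach a coprocessor to one table from a jar stored on HDFS.
// Class name, jar path, and table name are hypothetical placeholders.
val conf = HBaseConfiguration.create()
val conn = ConnectionFactory.createConnection(conf)
val admin = conn.getAdmin
val table = TableName.valueOf("my_table")

val desc = admin.getTableDescriptor(table)
desc.addCoprocessor("com.example.MyRegionObserver",
  new Path("hdfs:///user/hbase/my_coprocessor.jar"), 1001, null)

// Modify requires a disable/enable cycle on older HBase releases.
admin.disableTable(table)
admin.modifyTable(table, desc)
admin.enableTable(table)
conn.close()
```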
07-03-2016
11:06 PM
1 Kudo
Hi, @Timothy Spann, the Phoenix interpreter has been removed in favor of the generic JDBC interpreter. Please see the 0.6 snapshot docs here for an example of using it to connect to Phoenix.
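For anyone wiring that up, the JDBC interpreter ultimately just needs the Phoenix driver class and a connect string. A minimal standalone sketch of the same connection (the ZK host/port/znode in the URL is an assumption; adjust for your cluster):

```scala
import java.sql.DriverManager

// Register the Phoenix JDBC driver, then query through a plain JDBC connection.
// The ZK quorum/znode in the URL is a placeholder, not from the original post.
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
val conn = DriverManager.getConnection("jdbc:phoenix:zk_host:2181:/hbase-unsecure")
val rs = conn.createStatement().executeQuery("select count(*) from system.catalog")
while (rs.next()) println(rs.getLong(1))
conn.close()
```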
06-23-2016
10:38 PM
1 Kudo
http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets
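For quick reference, the pattern that page covers looks roughly like this (a sketch against the Spark 1.6-era API; the file path is hypothetical and sqlContext is the one provided by spark-shell):

```scala
// Read newline-delimited JSON into a DataFrame; the schema is inferred.
// The HDFS path is a hypothetical placeholder.
val people = sqlContext.read.json("hdfs:///data/people.json")
people.printSchema()
people.registerTempTable("people")
sqlContext.sql("SELECT count(*) FROM people").show()
```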
06-23-2016
08:42 PM
You're missing a ':' after 2181. The connect string should look like zk_host:2181:/hbase-unsecure.
06-22-2016
07:19 PM
It looks like you're trying to write a Map type, which Phoenix does not support. Can you share the DDL for your Phoenix table and the schema of the relation you're trying to write into Phoenix?
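In the meantime, a minimal sketch of the usual workaround: flatten the map into scalar columns before the write. This assumes the phoenix-spark connector is on the classpath; the table, column, and ZK quorum names are hypothetical:

```scala
import org.apache.phoenix.spark._  // phoenix-spark connector, assumed available

// Hypothetical relation with a MapType column, which Phoenix can't store directly.
val df = sqlContext.createDataFrame(Seq(
  (1L, Map("color" -> "red", "size" -> "L"))
)).toDF("ID", "ATTRS")

// Project each map key into its own scalar column; Phoenix has no MAP column type.
val flattened = df.select(
  df("ID"),
  df("ATTRS").getItem("color").as("COLOR"),
  df("ATTRS").getItem("size").as("SIZE")
)

// Table name and ZK URL are placeholders; the target table needs matching columns.
flattened.saveToPhoenix("MY_TABLE", zkUrl = Some("zk_host:2181:/hbase-unsecure"))
```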
06-20-2016
06:57 PM
Unfortunately I don't. I suggest an inverted grep on the file like: grep -v '(100%) Done' out.txt
06-20-2016
05:33 PM
1 Kudo
bash> cat /path/to/my_script.sql
!outputformat csv
select count(*) from system.catalog
bash> /usr/hdp/current/phoenix-client/bin/sqlline.py zk_host:2181:/hbase-unsecure /path/to/my_script.sql > out.txt
bash> cat out.txt
83/83 (100%) Done
'COUNT(1)'
'87'
06-18-2016
11:16 PM
3 Kudos
@Sunile Manjee, you can read and write to Phoenix from NiFi. You'll need to set up a DBCPConnectionPool ControllerService (click the wrench/screwdriver icon at the top right) that points at the phoenix-client.jar, using the org.apache.phoenix.jdbc.PhoenixDriver driver class and a connection URL like jdbc:phoenix:zk_host:2181:/hbase-unsecure. Then you can use ExecuteSQL for reads. For writes, the following pattern works:
AttributesToJSON -> ConvertJSONToSQL in Insert mode -> ReplaceText to replace "INSERT" with "UPSERT" -> PutSQL. For details on connecting to Phoenix on a Kerberized cluster, see the instructions at phoenix.apache.org (use your browser's search/find feature; the Kerberos details are about a third of the way down the page).
06-13-2016
03:37 PM
My ingest pipeline writes small files to S3 frequently. I have a periodic job that aggregates these into bigger files. Is there a way to use Spark to empty an S3 path? Something like "insert overwrite s3://bucket/my_folder" with an empty DataFrame?
Labels:
- Apache Spark
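A hedged sketch of one way to do that "empty the path" step: rather than writing an empty DataFrame, drop to the Hadoop FileSystem API from inside the job. This assumes the s3a connector is configured, and the bucket/folder names are hypothetical:

```scala
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Recursively delete everything under an S3 prefix once it has been aggregated.
// Bucket and folder names are placeholders; sc is the spark-shell SparkContext.
val folder = new Path("s3a://bucket/my_folder/")
val fs = FileSystem.get(new URI("s3a://bucket"), sc.hadoopConfiguration)
if (fs.exists(folder)) {
  fs.delete(folder, true)  // true = recursive delete of the whole "folder"
}
```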
06-13-2016
03:32 PM
Does that mean I should use separate buckets for different datasets? I've seen single buckets housing multiple different tables via "folders". Is that a bad idea?