Member since: 10-22-2015
Posts: 241
Kudos Received: 86
Solutions: 20
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2430 | 03-11-2018 12:01 AM
 | 1461 | 01-03-2017 10:48 PM
 | 1861 | 12-20-2016 11:11 PM
 | 3642 | 09-03-2016 01:57 AM
 | 1380 | 09-02-2016 04:55 PM
07-05-2016
09:25 PM
Can you outline the types of queries that involve the probability_score and real_labelVal fields? The answer would determine whether declaring these fields as String is good practice.
07-05-2016
01:37 AM
Looks like HBASE-14963 was integrated into HDP 2.3 on Fri Jan 22. Can you post the commit hash of the HDP build you use? Please post the complete stack trace. Thanks
07-02-2016
12:49 AM
Please install the Azure storage SDK for Java (com.microsoft.azure:azure-storage).
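If you build with Maven, the dependency can be declared roughly as follows (the version shown is an assumption; use the release that matches your environment):

```xml
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-storage</artifactId>
  <!-- version is an assumption; pick the release matching your cluster -->
  <version>4.0.0</version>
</dependency>
```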
07-01-2016
11:31 PM
Can you try setting the following before invoking spark-shell?

export HADOOP_HOME=/usr/hdp/current/hadoop-client

You can also set the following in conf/log4j.properties so that you get more information:

log4j.logger.org.apache.spark.repl.Main=DEBUG
06-29-2016
02:17 PM
1 Kudo
You can add debug logs in your coprocessor. Enable DEBUG logging on the region servers and check the logs after the action is performed in the shell.
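For example, something like the following in each region server's log4j.properties would surface the relevant DEBUG output (the com.example package name is a hypothetical placeholder for your coprocessor's own package):

```properties
# verbose output from the host that loads and invokes coprocessors
log4j.logger.org.apache.hadoop.hbase.coprocessor=DEBUG
# hypothetical: also enable DEBUG for your coprocessor's package
log4j.logger.com.example.mycoprocessor=DEBUG
```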
06-29-2016
02:16 PM
Can you describe the function of the coprocessor? It should be possible to use the shell and observe the side effects your coprocessor makes.
06-27-2016
09:48 PM
You can find a ton of HBase coprocessors under: https://github.com/apache/phoenix
06-27-2016
09:04 PM
Where is the custom jar located? Can you show the related snippet from hbase-site.xml? Thanks
06-27-2016
03:33 PM
1 Kudo
In the region server log, you should observe something similar to the following:

2016-06-16 19:44:10,222 INFO [regionserver/hbase5-merge-normalizer-3.openstacklocal/172.22.78.125:16020] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).

If the coprocessor was loaded per table, you can use the 'describe' command in the shell to verify.
06-23-2016
08:39 PM
The functions `phoenixTableAsDataFrame`, `phoenixTableAsRDD` and `saveToPhoenix` all support
optionally specifying a `conf` Hadoop configuration parameter with custom Phoenix client settings,
as well as an optional `zkUrl` parameter for the Phoenix connection URL. For example:

val configuration = new Configuration()
// set zookeeper.znode.parent and hbase.zookeeper.quorum in the conf
val df = sqlContext.phoenixTableAsDataFrame("TABLE1", Seq("ID", "COL1"), conf = configuration)