Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2615 | 08-06-2019 07:09 PM |
| | 2854 | 07-19-2019 01:57 PM |
| | 4041 | 02-25-2019 04:47 PM |
| | 4028 | 10-11-2018 02:47 PM |
| | 1343 | 09-26-2018 02:49 PM |
04-15-2020
05:10 AM
I am using ConstantSizeRegionSplitPolicy and MaxFileSize is set to 30 GB. However, I found that regions are not being split when the store file size reaches 30 GB; some regions have grown to around 300 GB. Can you please help me solve this problem? I have a huge volume of data, about 10 TB.
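For reference, a minimal sketch of how the 30 GB threshold and the split policy can be checked and re-applied per table with the HBase 2.x Admin API (the table name my_table is a placeholder; the cluster-wide default comes from hbase.hregion.max.filesize in hbase-site.xml):

```java
// A sketch only: check and re-apply the per-table MAX_FILESIZE and split
// policy that ConstantSizeRegionSplitPolicy evaluates (HBase 2.x client API;
// the table name "my_table" is a placeholder).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy;

public class CheckSplitConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table = TableName.valueOf("my_table");
            TableDescriptor current = admin.getDescriptor(table);
            System.out.println("MAX_FILESIZE = " + current.getMaxFileSize());
            System.out.println("SPLIT_POLICY = " + current.getRegionSplitPolicyClassName());

            // Re-apply the intended 30 GB threshold and policy at the table level.
            TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
                    .setMaxFileSize(30L * 1024 * 1024 * 1024)
                    .setRegionSplitPolicyClassName(ConstantSizeRegionSplitPolicy.class.getName())
                    .build();
            admin.modifyTable(updated);
        }
    }
}
```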
01-02-2020
08:27 AM
1 Kudo
@o20021106 You can check whether you are missing the ZooKeeper quorum property in the Hive config [hive-site.xml]: hbase.zookeeper.quorum= Because the quorum is missing, Hive tries to connect to the default HBase znode [/hbase] and gets nothing back, since that znode does not even exist.
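A minimal sketch of how an HBase client resolves that property from its Configuration (the hostnames below are placeholders; in the Hive-on-HBase case the same property has to be present in hive-site.xml):

```java
// A sketch only: the HBase client reads hbase.zookeeper.quorum from its
// Configuration; the hostnames below are placeholders. For Hive-on-HBase the
// same property has to be present in hive-site.xml.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class QuorumCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
        // The znode parent must also match the cluster (the default is /hbase).
        System.out.println("zookeeper.znode.parent = " + conf.get("zookeeper.znode.parent"));
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            System.out.println("connected = " + !conn.isClosed());
        }
    }
}
```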
12-30-2019
05:53 AM
If I have more than one column as the primary key, how shall I proceed?
09-02-2019
08:33 PM
Can you please elaborate in detail, including the commands you used to resolve this?
04-16-2018
07:39 PM
OK, thanks Josh, I figured out my mistake. I didn't realize that Phoenix automatically finds the primary key, so I don't have to specify the primary key column name explicitly. Thanks for the guidance.
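The thread context is not shown here, but assuming a plain Phoenix JDBC connection (the URL and the table name MY_TABLE are hypothetical), this is roughly what "Phoenix finds the primary key on its own" looks like: the key is recorded from the CREATE TABLE DDL and can be discovered through standard JDBC metadata instead of being supplied again:

```java
// A sketch only (thread context not shown): Phoenix records the primary key
// from the CREATE TABLE DDL, so it can be discovered through standard JDBC
// metadata. The connection URL and table name MY_TABLE are placeholders.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ShowPhoenixPk {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1.example.com:2181")) {
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet rs = meta.getPrimaryKeys(null, null, "MY_TABLE")) {
                while (rs.next()) {
                    System.out.println("Primary key column: " + rs.getString("COLUMN_NAME"));
                }
            }
        }
    }
}
```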
11-14-2017
04:37 PM
Session expiration is often hard to track down. It can be the result of JVM pauses (due to garbage collection) on either the client (HBase Master) or the server (ZK server), or it could be the result of a znode that has an inordinately large number of children. The brute-force approach would be to disable your replication process, (potentially) drop the root znode, re-enable replication, and then sync up the tables with an ExportSnapshot or CopyTable; this would rule out the data in ZooKeeper as the problem. The other course of action would be to look more closely at the Master log and the ZooKeeper server log to understand why the ZK session is expiring (see https://zookeeper.apache.org/doc/trunk/images/state_dia.jpg for more details on the session lifecycle). A good first step would be checking the number of znodes under /hbase-unsecure/replication.
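A minimal sketch of that first step using the plain ZooKeeper Java client (the connect string is a placeholder, and the /hbase-unsecure/replication parent may differ on other clusters):

```java
// A sketch only: count children under /hbase-unsecure/replication with the
// plain ZooKeeper client. The connect string is a placeholder; the replication
// parent znode may differ per cluster.
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class CountReplicationZnodes {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1.example.com:2181", 30000, event -> { });
        try {
            String path = "/hbase-unsecure/replication";
            List<String> children = zk.getChildren(path, false);
            System.out.println(path + " has " + children.size() + " direct children");
            for (String child : children) {
                int count = zk.getChildren(path + "/" + child, false).size();
                System.out.println("  " + child + ": " + count + " children");
            }
        } finally {
            zk.close();
        }
    }
}
```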
12-28-2018
01:36 PM
What helped me was first copying the file to the $NIFI_HOME/lib folder and then giving the full path of the jar file in the ExecuteStreamCommand processor, so the config looked like "-jar; /opt/nifi-1.7.1/lib/mycode.jar". A couple of things to note: the jar must be owned by the same user that NiFi is running as, and the jar can actually live anywhere; as long as you give the full path you should be fine.
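For illustration, a minimal sketch of the kind of entry point such a jar might expose (the contents of mycode.jar are not shown in the thread, so this is an assumption): ExecuteStreamCommand, by default, pipes the flowfile content to the process's stdin and captures stdout as the new content:

```java
// A sketch only of a possible entry point for such a jar (mycode.jar itself is
// not shown in the thread). ExecuteStreamCommand, by default, pipes the
// flowfile content to stdin and captures stdout as the new flowfile content.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class Main {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(System.in, StandardCharsets.UTF_8));
        String line;
        while ((line = in.readLine()) != null) {
            // Trivial transformation; whatever is written to stdout goes back to NiFi.
            System.out.println(line.toUpperCase());
        }
    }
}
```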
12-27-2017
02:48 PM
I deleted all the snapshots and data after getting a go-ahead from the developers...
08-09-2017
07:35 AM
1 Kudo
End of the story: in fact, the problem was related to https://issues.apache.org/jira/browse/HADOOP-10786. I moved to hadoop-common 2.6.1 and used the AuthUtil class: http://hbase.apache.org/1.2/devapidocs/org/apache/hadoop/hbase/AuthUtil.html And everything started to work fine 🙂 Thanks for your help
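A minimal sketch of how AuthUtil is typically wired into a long-running client (HBase 1.x API, as in the javadoc link above; the Kerberos principal and keytab are assumed to already be set in the Configuration):

```java
// A sketch only: schedule AuthUtil's relogin chore so a long-running HBase
// client keeps a valid Kerberos ticket (HBase 1.x API). The principal and
// keytab are assumed to already be configured.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.AuthUtil;
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ScheduledChore;

public class KeepLoggedIn {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Background chore that re-logs in from the keytab before the ticket expires.
        ScheduledChore authChore = AuthUtil.getAuthChore(conf);
        if (authChore != null) {
            ChoreService choreService = new ChoreService("AuthChoreService");
            choreService.scheduleChore(authChore);
        }
        // ... long-running HBase client work goes here ...
    }
}
```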
08-08-2017
12:52 AM
That's an incorrect approach. You don't need to add the XML files to the jars. As I already mentioned, you need to add the directories where those files are located, not the files themselves. That's how the Java classpath works: it accepts only jars and directories. So if you need a resource on the classpath, you either have it inside a jar file (like you did) or put its parent directory on the classpath. In SQuirreL this can be done on the Extra Class Path tab of the Driver configuration.
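A minimal sketch of why the parent directory works (hbase-site.xml is only an example resource name, and the paths are placeholders): resources are looked up by name relative to each classpath root, so a file is found as soon as its directory is a classpath entry:

```java
// A sketch only: a resource is resolved against every classpath root, so
// hbase-site.xml (example name) is found when its parent directory is on the
// classpath. Example launch: java -cp myapp.jar:/etc/hbase/conf ResourceCheck
import java.io.InputStream;

public class ResourceCheck {
    public static void main(String[] args) throws Exception {
        try (InputStream in = ResourceCheck.class.getClassLoader()
                .getResourceAsStream("hbase-site.xml")) {
            System.out.println(in != null
                    ? "hbase-site.xml found on the classpath"
                    : "hbase-site.xml NOT found on the classpath");
        }
    }
}
```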