Member since: 10-05-2015
Posts: 105
Kudos Received: 83
Solutions: 25

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1310 | 07-13-2017 09:13 AM |
| | 1549 | 07-11-2017 10:23 AM |
| | 782 | 07-10-2017 10:43 AM |
| | 3597 | 03-23-2017 10:32 AM |
| | 3546 | 03-23-2017 10:04 AM |
08-01-2016
01:15 PM
1 Kudo
In HDP 2.4 we are using Phoenix version 4.4.0, which doesn't have the transactions feature, so there is no support for Tephra in it.
07-20-2016
02:07 PM
@Sasikumar Natarajan Phoenix translates an upsert query into a batch of KeyValues for each row. For each non-primary-key column there is a KeyValue whose value part is the column value from the upsert query. All primary key column values are combined into the row key, which is the row part of each KeyValue. With your schema we would have 1440 rows, but only one KeyValue per row, so it's better to have 1440 rows than 1440 columns in a row or 1440 versions of a row. Performance-wise there won't be much difference.
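For illustration only (not part of the original reply), a minimal sketch of such an upsert through the Phoenix JDBC driver, assuming the devicedata schema discussed in the related answers below and a hypothetical ZooKeeper quorum zk-host:2181:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixUpsertSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper quorum; replace with your cluster's quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            String sql = "UPSERT INTO devicedata (deviceId, day, ts, val) VALUES (?, ?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, 42);
                ps.setDate(2, java.sql.Date.valueOf("2016-07-20"));
                ps.setTimestamp(3, java.sql.Timestamp.valueOf("2016-07-20 00:01:00"));
                ps.setDouble(4, 21.5);
                // deviceId + day + ts (the primary key columns) are combined into the
                // HBase row key; the non-PK column val is written as a KeyValue.
                ps.executeUpdate();
            }
            conn.commit(); // Phoenix buffers upserts on the client until commit
        }
    }
}
```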
07-20-2016
12:09 PM
Don't you want to have 1440 rows? If you want to have 1440 records in a single row you need to have 1440 columns, which is not good, or you can set the number of versions to 1440 and then access all the versions from HBase, which may also not be a good idea.
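Purely to illustrate what the versions alternative mentioned above would involve (and why it is clumsy), here is a rough, hypothetical sketch using the HBase 1.x client API; the table name, column family d, qualifier val, and row key layout are all made up for the example, and the column family would have to be created with VERSIONS => 1440:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadAllVersionsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("devicedata"))) {
            // Illustrative row key: one row per device per day.
            Get get = new Get(Bytes.toBytes("device42_2016-07-20"));
            get.setMaxVersions(1440); // every read must ask for all stored versions
            Result result = get.isCheckExistenceOnly() ? null : table.get(get);
            for (Cell cell : result.getColumnCells(Bytes.toBytes("d"), Bytes.toBytes("val"))) {
                // Each minute's value is distinguished only by the cell timestamp.
                System.out.println(cell.getTimestamp() + " -> "
                        + Bytes.toDouble(CellUtil.cloneValue(cell)));
            }
        }
    }
}
```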
07-20-2016
12:01 PM
Hi @Sasikumar Natarajan You can include the timestamp as part of the primary key, so you will have 1440 rows, and when you search by device id and date you can still get all 1440 records as usual. It will also be fast because it's going to be a range query:
create table devicedata (deviceId integer not null, day date not null, ts timestamp, val double CONSTRAINT my_pk PRIMARY KEY (deviceId, day, ts))
Since it's time-series data, a single region might become a bottleneck because continuous writes can go to one region. You can use salt buckets to avoid that:
create table devicedata (deviceId integer not null, day date not null, ts timestamp, val double CONSTRAINT my_pk PRIMARY KEY (deviceId, day, ts)) SALT_BUCKETS=N
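As an illustrative sketch (not part of the original answer), the same DDL and the device-id/day range query through the Phoenix JDBC driver; the ZooKeeper quorum zk-host:2181 and SALT_BUCKETS value of 8 are example assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeviceDataSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            try (Statement stmt = conn.createStatement()) {
                // ts is part of the primary key, so each device/day pair has 1440 rows.
                stmt.execute("CREATE TABLE IF NOT EXISTS devicedata ("
                        + " deviceId INTEGER NOT NULL,"
                        + " day DATE NOT NULL,"
                        + " ts TIMESTAMP,"
                        + " val DOUBLE"
                        + " CONSTRAINT my_pk PRIMARY KEY (deviceId, day, ts))"
                        + " SALT_BUCKETS=8"); // spreads continuous writes across regions
            }
            // Query by deviceId and day: a range scan over the leading PK columns.
            String q = "SELECT ts, val FROM devicedata WHERE deviceId = ? AND day = ?";
            try (PreparedStatement ps = conn.prepareStatement(q)) {
                ps.setInt(1, 42);
                ps.setDate(2, java.sql.Date.valueOf("2016-07-20"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getTimestamp(1) + " -> " + rs.getDouble(2));
                    }
                }
            }
        }
    }
}
```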
07-15-2016
08:39 AM
3 Kudos
Here you can find more detailed information: the FileNotFoundException is coming from the split daughters not being able to find the files of the parent region. The parent region's files might already have been deleted at this point. HBCK has a flag to fix this, but if only a handful of regions/files are affected, I usually prefer to manually move the reference files out of the HBase root directory. For reference, here is the high-level flow:
1. Go to the region server log, find the file name in the FileNotFoundException, and copy the file name.
2. Check HDFS to see whether the file is really not there.
3. Figure out whether this is an actual HFile or a reference file. HFiles are named like <region_name>/<column_family>/<UUID>, while reference files are named like <region_name>/<column_family>/<UUID>.<parent_region_name>.
4. If the missing file does not belong to the region that is throwing the exception, then it is the reference file pointing at the missing file that is the problem. So we should find and move that reference file (which should be very small) out of the daughter region's directory; a rough sketch of this step is shown below. Notice that the reference file name should contain the actual UUID of the referred file and the parent region's name.
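A rough sketch of that sidelining step using the Hadoop FileSystem API, assuming the HBase root directory is /apps/hbase/data (the usual HDP default); every path component in angle brackets is a placeholder you must replace with the names found in your region server log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SidelineReferenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up fs.defaultFS on a cluster node
        FileSystem fs = FileSystem.get(conf);

        // Reference file inside the daughter region, named <UUID>.<parent_region_name>.
        Path refFile = new Path("/apps/hbase/data/data/default/<table>/"
                + "<daughter_region>/<column_family>/<UUID>.<parent_region_name>");
        // Sideline location outside the HBase root so HBase no longer sees the file.
        Path sidelineDir = new Path("/tmp/hbase-sideline/<daughter_region>/<column_family>");

        if (fs.exists(refFile)) {
            fs.mkdirs(sidelineDir);
            fs.rename(refFile, new Path(sidelineDir, refFile.getName()));
            System.out.println("Sidelined " + refFile + " to " + sidelineDir);
        } else {
            System.out.println("Reference file not found: " + refFile);
        }
    }
}
```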
07-15-2016
08:35 AM
Hi @Mark Heydenrych This can happen if the RS went down during region splitting (this got fixed in the latest versions). You need to sideline the reference files of the region that is in FAILED_OPEN state and restart the RS. If you share the logs we can suggest which files should be sidelined. Thanks, Rajeshbabu.
07-08-2016
07:03 AM
Hi @Vijayant kumar, It's saying the region is not served by that server. Can you check whether the region is in transition on the master UI? If yes, we need to check from the master and RS logs why it's in transition. It would be better to post the logs for more details.
07-05-2016
03:33 AM
1 Kudo
Hi @Michael Dennis "MD" Uanang This should help you: https://community.hortonworks.com/questions/2349/tip-when-you-get-a-message-in-job-log-user-dr-who.html
06-02-2016
09:41 AM
Can you share the region server logs so we can check why the RegionTooBusyException was coming? If you feel major compaction is the reason, you can disable automatic major compactions by configuring the property below.
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
  <description>The time (in milliseconds) between 'major' compactions of all
    HStoreFiles in a region. Set to 0 to disable automated major compactions.
  </description>
</property>
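If automatic major compactions are disabled this way, you can still trigger them manually during off-peak hours, for example with the HBase shell's major_compact command or through the Admin API. A minimal sketch, assuming the HBase 1.x client API and an illustrative table name:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ManualMajorCompaction {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Asynchronously requests a major compaction of every region of the table.
            admin.majorCompact(TableName.valueOf("mytable"));
        }
    }
}
```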
05-26-2016
07:52 AM
3 Kudos
There is a time difference of more than half a minute between the master and the region servers, which is why you are getting the clock out of sync exception. You can set the same time on the master and regionserver machines, or install NTP to keep the time in sync across all the machines.