Member since: 10-22-2015
Posts: 241
Kudos Received: 86
Solutions: 20

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1443 | 03-11-2018 12:01 AM
 | 572 | 01-03-2017 10:48 PM
 | 697 | 12-20-2016 11:11 PM
 | 1854 | 09-03-2016 01:57 AM
 | 654 | 09-02-2016 04:55 PM
07-20-2016
04:10 PM
You should spread out the major compactions on individual tables; refer to HBASE-16147 (Add ruby wrapper for getting compaction state). If there is not enough time to wait for the compaction on one table to finish, at least leave some time between the starts of the major compactions.
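A minimal shell sketch of this staggering approach; the table names and the 60-second polling interval are placeholders, and the compaction_state wrapper is only available in releases that include HBASE-16147:

```bash
# Trigger major compactions one table at a time, waiting for each to finish.
for table in table_a table_b table_c; do
  echo "major_compact '${table}'" | hbase shell
  # Poll until this table no longer reports a MAJOR compaction in progress
  # (requires the compaction_state wrapper from HBASE-16147).
  while echo "compaction_state '${table}'" | hbase shell | grep -q "MAJOR"; do
    sleep 60
  done
done
```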
07-19-2016
06:18 PM
Here is sample /jmx output:
"name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
"modelerType" : "RegionServer,sub=Server",
...
"percentFilesLocal" : 97,
"percentFilesLocalSecondaryRegions" : 97,
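For reference, a hedged example of pulling that bean over HTTP; the host name is a placeholder and 16030 is the default region server info port in HBase 1.x:

```bash
# Query the RegionServer JMX servlet for the Server bean, which carries
# percentFilesLocal and percentFilesLocalSecondaryRegions.
curl "http://regionserver-host:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server"
```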
07-19-2016
06:16 PM
Did you mean locality for HFiles? You can get that metric through /jmx; there isn't a command-line tool that would directly give you this metric.
07-18-2016
09:01 PM
Please take a look at https://issues.apache.org/jira/browse/HBASE-11339 (HBase MOB), which reduces the I/O amplification incurred by medium-sized objects. This feature is in the upcoming HDP 2.5 release.
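Once you are on a release with HBASE-11339, MOB is enabled per column family; a hedged hbase shell sketch, with the table name, family, and threshold as example values:

```
hbase(main):001:0> create 'mytable', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
```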
07-16-2016
10:25 AM
1 Kudo
By default, each component (NodeManager, DataNode, and RegionServer daemon) has its own predetermined heap memory setting. You need to tune these yourself.
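For illustration, these are the usual places to set those heaps when editing the env scripts directly; the sizes below are arbitrary examples, not recommendations:

```bash
# hbase-env.sh - RegionServer heap
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms4g -Xmx4g"

# hadoop-env.sh - DataNode heap
export HADOOP_DATANODE_OPTS="-Xms1g -Xmx1g $HADOOP_DATANODE_OPTS"

# yarn-env.sh - NodeManager heap (value in MB)
export YARN_NODEMANAGER_HEAPSIZE=2048
```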
07-14-2016
10:49 AM
1 Kudo
Can you find the encoded region names for the regions stuck in FAILED_OPEN state and pastebin the related region server log? It would help us understand what caused the regions not to open. BTW, which HDP version are you running?
07-06-2016
05:22 PM
1 Kudo
2016-07-06 14:59:14,466 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=m2.domain:2181 sessionTimeout=120000 watcher=hconnection-0x7bc9e6ab0x0, quorum=m2.domain:2181, baseZNode=/hbase-secure
It looks like AMS tried to connect to the HBase cluster's znode. AMS should use /ams-hbase-secure as its base znode. Can you check your configuration?
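For reference, this is roughly what the relevant property should look like in the AMS HBase configuration (ams-hbase-site, usually edited through Ambari); treat it as a sketch:

```xml
<!-- ams-hbase-site.xml: AMS embedded HBase should use its own base znode -->
<property>
  <name>zookeeper.znode.parent</name>
  <value>/ams-hbase-secure</value>
</property>
```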
07-05-2016
09:25 PM
Can you outline the type of queries involving the probability_score and real_labelVal fields? The answer would determine whether declaring these fields as String is good practice.
07-05-2016
01:37 AM
Looks like HBASE-14963 was integrated into HDP 2.3 on Fri Jan 22. Can you post the commit hash of the HDP build you use? Please also post the complete stack trace. Thanks.
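To find the commit hash, the version command prints the source revision the build came from; a quick example (output trimmed):

```bash
hbase version
# The output includes a line such as:
#   Source code repository ... revision=<commit hash>
```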
07-02-2016
12:49 AM
Please install Azure storage SDK for Java (com.microsoft.azure:azure-storage)
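For a Maven build, that coordinate translates to a dependency entry like the following; the version number here is just an example, pick one compatible with your stack:

```xml
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-storage</artifactId>
  <version>4.2.0</version> <!-- example version -->
</dependency>
```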
07-01-2016
11:31 PM
Can you try setting the following before invoking spark-shell?
export HADOOP_HOME=/usr/hdp/current/hadoop-client
You can also set the following in conf/log4j.properties so that you get more information:
log4j.logger.org.apache.spark.repl.Main=DEBUG
06-29-2016
02:17 PM
1 Kudo
You can add debug logs in your coprocessor. Enable DEBUG logging on the region servers and check the logs after the action is performed in the shell.
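A hedged sketch of what that looks like in the region servers' log4j.properties; the package name is a placeholder for wherever your coprocessor classes live:

```properties
# Turn on DEBUG for your coprocessor classes and for the coprocessor host
log4j.logger.com.example.mycoprocessor=DEBUG
log4j.logger.org.apache.hadoop.hbase.coprocessor=DEBUG
```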
06-29-2016
02:16 PM
Can you describe the function of the coprocessor ? It should be possible to use shell and observe the side effect your coprocessor makes.
06-27-2016
09:48 PM
You can find a ton of hbase coprocessors under: https://github.com/apache/phoenix
06-27-2016
09:04 PM
Where is the custom jar located? Can you show the related snippet from hbase-site.xml? Thanks.
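For reference, a region-level coprocessor registered globally in hbase-site.xml typically looks like this; the class name is a placeholder:

```xml
<!-- the jar containing the class must be on the region server classpath -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.MyRegionObserver</value>
</property>
```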
06-27-2016
03:33 PM
1 Kudo
In the region server log, you should observe something similar to the following:
2016-06-16 19:44:10,222 INFO [regionserver/hbase5-merge-normalizer-3.openstacklocal/172.22.78.125:16020] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
If the coprocessor was loaded per table, you can use the 'describe' command in the shell to verify.
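For the per-table case, a hedged hbase shell sketch; the jar path, class name, and priority are example values:

```
hbase(main):001:0> alter 'mytable', METHOD => 'table_att', 'coprocessor' => 'hdfs:///user/hbase/my-coprocessor.jar|com.example.MyObserver|1001|'
hbase(main):002:0> describe 'mytable'
```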
06-23-2016
08:39 PM
The functions `phoenixTableAsDataFrame`, `phoenixTableAsRDD` and `saveToPhoenix` all support
optionally specifying a `conf` Hadoop configuration parameter with custom Phoenix client settings,
as well as an optional `zkUrl` parameter for the Phoenix connection URL. For example:
val configuration = new Configuration()
// set zookeeper.znode.parent and hbase.zookeeper.quorum in the conf
val df = sqlContext.phoenixTableAsDataFrame("TABLE1", Seq("ID", "COL1"), conf = configuration)
06-22-2016
06:01 PM
As the last ERROR showed, the effective hbase-site.xml does not seem to be on the classpath. The zookeeper ensemble was redacted; please make sure the quorum is the one used by hbase.
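As a hedged illustration only (the right fix depends on how your client is launched), making the cluster's HBase configuration directory visible on the client classpath usually looks like this:

```bash
# Put the directory containing the effective hbase-site.xml on the classpath
export HADOOP_CLASSPATH=/etc/hbase/conf:$HADOOP_CLASSPATH
```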
06-21-2016
02:05 PM
1 Kudo
Please take a look at hbase-shell/src/test/ruby/hbase/table_test.rb for sample syntax, where x is the column family:
@test_table._scan_internal(FILTER => "SingleColumnValueFilter('x', 'a', >=, 'binary:\x82', true, true)", COLUMNS => ['x:a', 'x:b'])
06-21-2016
01:25 PM
Hive using MR-over-hbase-snapshots would be a viable solution. Perform a snapshot in hbase, then use hive to directly read from the underlying HFiles.
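A rough sketch of that flow, assuming a Hive version with HIVE-6584 (HBase snapshot support); the names are placeholders and the exact Hive settings may differ by version:

```
hbase(main):001:0> snapshot 'mytable', 'mytable_snapshot'

-- In Hive, point the HBase storage handler at the snapshot:
hive> SET hive.hbase.snapshot.name=mytable_snapshot;
hive> SELECT count(*) FROM my_hbase_backed_table;
```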
06-21-2016
01:22 PM
1 Kudo
You can use SingleColumnValueFilter on 'column1'. Meanwhile, specify all the columns you want to retrieve using COLUMNS.
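A hedged hbase shell sketch of combining the two; the table, family, column, and value are placeholders:

```
hbase(main):001:0> scan 'mytable', {COLUMNS => ['cf:column1', 'cf:column2', 'cf:column3'], FILTER => "SingleColumnValueFilter('cf', 'column1', =, 'binary:some_value')"}
```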
06-20-2016
03:51 PM
> Hbase created the columns based on alphabetical order
When you query hbase, you observe alphabetical order because that is how hbase stores the columns internally. Do you observe fewer than 10 rows after importing the sample data?
06-20-2016
03:47 PM
18414 was the region server process. Was it still running ?
06-20-2016
02:18 PM
When I tried the 'hbase' command within the hbase shell (HDP 2.5), I got:
hbase(main):001:0> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv
NoMethodError: undefined method `hbase' for #<Object:0x415d88de>
ImportTsv needs to be invoked from the OS command line, not from within the hbase shell.
06-20-2016
02:13 PM
hbase.csv should be placed on hdfs.
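For example, copying it from the local filesystem to HDFS (the target path is just an example):

```bash
hdfs dfs -put hbase.csv /user/your_user/hbase.csv
```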
06-20-2016
01:21 PM
1 Kudo
A disabled replication peer will still hold on to the WAL files because it guarantees not to lose data between disable and enable. You can execute remove_peer, which makes the WAL files eligible for deletion. When you add the replication peer back, replication starts from the current position, whereas if you re-enable a peer, it continues from where it left off.
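A hedged hbase shell sketch of that flow; the peer id and cluster key are placeholders, and the exact add_peer syntax varies a bit between HBase versions:

```
hbase(main):001:0> list_peers
hbase(main):002:0> remove_peer '1'
# later, to re-add the peer (replication starts from the current position):
hbase(main):003:0> add_peer '1', CLUSTER_KEY => "zk1,zk2,zk3:2181:/hbase"
```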
06-20-2016
01:16 PM
2 Kudos
There are currently two services which may keep the files in the archive directory. The first is a TTL process, which ensures that the WAL files are kept for at least 10 minutes. This is controlled by the hbase.master.logcleaner.ttl configuration property in the master. The other one is replication. If you had replication set up before, the replication processes will hang on to the WAL files until they are replicated. Even if you disabled the replication, the files are still referenced. You can check the master log for messages from the LogCleaner, TimeToLiveLogCleaner, and ReplicationLogCleaner classes to see whether any exception was thrown.
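For reference, the TTL property in hbase-site.xml; the value shown is the usual 10-minute default, in milliseconds:

```xml
<property>
  <name>hbase.master.logcleaner.ttl</name>
  <value>600000</value> <!-- keep archived WALs at least 10 minutes -->
</property>
```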
06-18-2016
12:59 PM
It is work in progress: https://issues.apache.org/jira/browse/NIFI-991
06-17-2016
05:38 PM
hbase(main):001:0> scan 't2'
ROW    COLUMN+CELL
 5842  column=pressure:in, timestamp=1466184191234, value=240
 5842  column=pressure:out, timestamp=1466184191234, value=340
 5842  column=temp:in, timestamp=1466184191234, value=50
 5842  column=temp:out, timestamp=1466184191234, value=30
 5842  column=vibration:, timestamp=1466184191234, value=4
1 row(s) in 0.4840 seconds
06-17-2016
05:26 PM
I created a table with a schema similar to yours:
create 't2', {NAME => 'pressure'}, {NAME => 'temp'}, {NAME => 'vibration'}
I created hbase.csv with the line in your first post. I then executed the following (HDP 2.5.0.0-267, which is 1.1.2 with some patches):
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=',' -Dimporttsv.columns='HBASE_ROW_KEY,temp:in,temp:out,vibration,pressure:in,pressure:out' t2 hbase.csv
2016-06-17 17:23:18,142 INFO [main] mapreduce.Job: Running job: job_1462297012839_0003
2016-06-17 17:23:29,332 INFO [main] mapreduce.Job: Job job_1462297012839_0003 running in uber mode : false
2016-06-17 17:23:29,335 INFO [main] mapreduce.Job: map 0% reduce 0%
2016-06-17 17:23:38,466 INFO [main] mapreduce.Job: map 100% reduce 0%
2016-06-17 17:23:38,478 INFO [main] mapreduce.Job: Job job_1462297012839_0003 completed successfully
2016-06-17 17:23:38,670 INFO [main] mapreduce.Job: Counters: 31