Member since: 08-10-2016
Posts: 33
Kudos Received: 9
Solutions: 0
01-07-2017
12:00 PM
I find Phoenix writes are too slow. Can you help me? Thanks! My post is here: https://community.hortonworks.com/questions/76862/phoenix-write-is-too-slow.html
... View more
11-19-2016
05:05 AM
I use HDInsight. I changed some parameters in hbase-site.xml:
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>9200000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>9200000</value>
</property>
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>9200000</value>
</property>
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>9200000</value>
</property>
But I still run into problems. The table is 700 GB and my cluster has 5 region servers (8 cores, 14 GB RAM each). Where is the problem?
... View more
11-18-2016
02:32 AM
Thank you! I found some errors in the HBase RegionServer log. TERMINALDATA's index state is disabled (x), so HBase now tries to rebuild it, but the rebuild fails:
2016-11-17 06:01:47,352 WARN org.apache.phoenix.coprocessor.MetaDataRegionObserver: ScheduledBuildIndexTask failed!
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=TERMINALDATA
at org.apache.phoenix.schema.PMetaDataImpl.getTable(PMetaDataImpl.java:241)
at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:316)
at org.apache.phoenix.coprocessor.MetaDataRegionObserver$BuildIndexScheduleTask.run(MetaDataRegionObserver.java:228)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
So I checked the table schema. I want to know whether this is related to the table schema. When I created the table, I did not specify a schema. My CREATE TABLE SQL is:
create table if not exists "TerminalData"
(
"RowKey" varchar primary Key,
"ID" varchar,
"CtrlAddress" varchar,
"CanSN" varchar,
"CtrlVersion" varchar,
"Voltage" varchar,
"A_Voltage" varchar,
"B_Voltage" varchar,
"C_Voltage" varchar,
"Current" varchar,
"A_current" varchar,
"B_current" varchar,
"C_current" varchar,
"RatedPower" varchar,
"ReactivePower" varchar,
"TotalPowerFactor" varchar,
"ZeroLineCurrent" varchar,
"VoltageUR" varchar,
"CurrentUR" varchar,
"DirectVoltage" varchar,
"DirectCurrent" varchar,
"UpTime" varchar,
"FaultState" varchar,
"ActivePower" varchar,
"ChageBillId" varchar,
"DataKey" varchar
) default_column_family = 'd'
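One possible cause (a hedged guess, not a confirmed diagnosis): Phoenix treats double-quoted identifiers as case-sensitive, while unquoted identifiers are folded to upper case. The table above was created as the quoted, mixed-case "TerminalData", but the rebuild task is looking for TERMINALDATA, which is a different (and possibly undefined) table name. A minimal sketch of how to see the difference from the Phoenix shell:
-- Quoted identifiers keep their case; unquoted identifiers are upper-cased,
-- so these two statements refer to two different table names:
select count(*) from "TerminalData";   -- the quoted, mixed-case table created above
select count(*) from TERMINALDATA;     -- resolves to an upper-case name and may be undefined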
... View more
11-17-2016
05:35 AM
I heard there are tools that can show which regions on a region server are actually being used. Do you know of such a tool? What is it called? My region servers show 3000-4000 regions each (8 GB heap). Maybe only some of the regions are in common use, because if all of them held memstore data, 3000 regions * 2 MB = 6 GB, which seems impossible.
... View more
11-17-2016
05:29 AM
My HDP version is HDP-2.4.2.0-258. I did not change the default Phoenix configuration, but I find that my index is not rebuilt automatically; the index state is always rebuild (b). Why? Thanks!
... View more
11-17-2016
02:05 AM
I used this command: ALTER INDEX IF EXISTS SysActionLog_idx ON "SysActionLog" REBUILD; but it failed with a timeout error. So I increased the Phoenix query timeout and ran the rebuild command again, but another error occurred. I can't drop the index, because my cluster is not powerful enough to create the index again successfully.
This table is 800 GB.
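A minimal sketch (assuming the standard Phoenix SYSTEM.CATALOG layout; the index name appears upper-cased there because it was created unquoted) of how one could check the index's current state before retrying, and the other states ALTER INDEX accepts:
-- INDEX_STATE is a one-character code (for example 'a' = active, 'x' = disabled).
select TABLE_NAME, DATA_TABLE_NAME, INDEX_STATE
from SYSTEM.CATALOG
where TABLE_TYPE = 'i'
  and TABLE_NAME = 'SYSACTIONLOG_IDX';

-- Besides REBUILD, ALTER INDEX also accepts DISABLE / UNUSABLE / USABLE.
-- Note: these only change the index metadata state; they do not repair the
-- index data, so unlike REBUILD they do not scan the 800 GB data table.
ALTER INDEX IF EXISTS SysActionLog_idx ON "SysActionLog" DISABLE;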
... View more
11-16-2016
06:52 AM
Thanks. I found that my table's index state is x. How can I change this index state? Rebuild it?
... View more
11-15-2016
06:22 AM
1 Kudo
Hello everyone. One of my Phoenix indexes has become unavailable (state x). It worked normally before, and the other indexes are still normal. Details are as follows.
My Phoenix table name is "SysAction". My CREATE INDEX SQL is:
create index SysActionLog_idx on "SysActionLog"
("CreateTime", "ModuleCode","AppCode","Invoker","ClientIP")
I execute SQL with the Phoenix shell but find that the Phoenix index is not used.
How can I solve this problem? I don't want to rebuild the index, because rebuilding it might bring down my HBase cluster.
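A small illustrative sketch (the query below is made up for illustration, not taken from the original post) of how EXPLAIN can show whether Phoenix uses the index: a plan that scans SYSACTIONLOG_IDX means the index is used, while a FULL SCAN over the data table means it is not.
explain
select "CreateTime", "ModuleCode", "AppCode"
from "SysActionLog"
where "CreateTime" >= '2016-11-01';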
... View more
Labels:
- Apache Phoenix
10-09-2016
06:34 AM
Hello everyone. I want to know how many regions per region server is normal (HBase version: 3.2.7.964). I found this formula in the HBase guide: ((RS Xmx) * hbase.regionserver.global.memstore.size) / (hbase.hregion.memstore.flush.size * (# column families)).
My region server has 14 GB of physical memory and an 8 GB heap, memstore.size is 0.4, flush.size is 128 MB, and there is 1 column family.
So my region server's normal region count would be 8 * 1024 * 0.4 / (128 * 1) = 25.6.
But this does not match the actual situation: my region servers hold about 2000 regions each, and reads and writes are normal. Why? How should I compute a region server's normal region count?
... View more
Labels:
- Apache HBase
08-16-2016
12:50 PM
Thanks. In Ambari, the DataNode directory permission default is 775, but my folder is 777, so I changed 775 to 777 in Ambari and my problem is solved.
... View more
08-16-2016
03:03 AM
Hi everyone. My data is stored in Azure Storage, but the Azure data directory is owned by root and HDFS can't change it. This causes my DataNode to fail to start. Is there another way to resolve this conflict? The error is:
2016-08-16 02:13:05,270 INFO datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2016-08-16 02:13:07,256 WARN datanode.DataNode (DataNode.java:checkStorageLocations(2439)) - Invalid dfs.datanode.data.dir /mnt/data :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:727)
at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:502)
at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:140)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2394)
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2436)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2418)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
2016-08-16 02:13:07,267 ERROR datanode.DataNode (DataNode.java:secureMain(2545)) - Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/mnt/data/"
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2445)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2418)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
2016-08-16 02:13:07,269 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2016-08-16 02:13:07,278 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
... View more
Labels:
- Apache Hadoop
08-10-2016
10:12 AM
Thank you! Thank you! Thank you!
... View more
08-10-2016
09:12 AM
Hi everyone! I ran a shell command to start "hbase reset", but it fails. I need help! Thanks!
[root@t bin]# hbase reset start 8082
Error: Could not find or load main class reset
... View more
Tags:
- Data Processing
- HBase
Labels:
- Apache HBase