Member since: 04-12-2019
Posts: 105
Kudos Received: 3
Solutions: 7
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 704 | 05-28-2019 07:41 AM |
| 425 | 05-28-2019 06:49 AM |
| 403 | 12-20-2018 10:54 AM |
| 303 | 06-27-2018 09:05 AM |
| 1433 | 06-27-2018 09:02 AM |
10-06-2019
06:56 AM
Hi, I'm facing the same issue. Did you have any luck?
09-24-2019
06:35 AM
Hi @Shelton Thanks for giving your precious time. As you mentioned, the values are already set:
#cat /proc/sys/fs/file-max
11534502
#ulimit
unlimited
#ulimit -Sn
128000
#ulimit -Hn
128000
Each node in the HBase cluster is already running with these settings. One thing I noticed: vm.swappiness is 60, and swap usage is sometimes above 80%:
#free -m
Swap: 7167 6066 1101
Looking for other solutions.
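If swapping turns out to be the trigger, one change we are evaluating is lowering vm.swappiness (standard kernel tunables; the value 1 below is illustrative, not a recommendation):
#sysctl vm.swappiness                          # check current value
#sysctl -w vm.swappiness=1                     # change at runtime
#echo 'vm.swappiness = 1' >> /etc/sysctl.conf  # persist across reboots
Regards, Vinay K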
09-24-2019
06:02 AM
Hi Team, we have a Spring Boot application that writes data to and queries data from HBase. When the application starts, it makes a connection to the remote ZooKeeper server and then executes HBase queries to fetch data as required. But multiple times we get the error below and the application disconnects from the ZooKeeper server:
2019-09-24 07:12:26.920 INFO 60447 --- [0.100.135:2181)] o.a.h.h.s.o.apache.zookeeper.ClientCnxn : Unable to read additional data from server sessionid 0x26c4d39814f4b8e, likely server has closed socket, closing socket connection and attempting reconnect
..
2019-09-24 07:14:48.042 INFO 60447 --- [io-8181-exec-41] com.mems.energy.GetTimeseriseCompare : org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=6, exceptions:
Tue Sep 24 07:07:58 UTC 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=94368: row 'abc' on table 'table1' at region region1, abc
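While we dig into the server side, a client-side mitigation we are considering is raising the HBase client's retry and timeout settings before creating the connection. This is only a sketch: the ZooKeeper quorum and the timeout values below are placeholders, not recommendations.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");     // placeholder hosts
conf.set("hbase.client.retries.number", "10");         // client retry count
conf.set("hbase.rpc.timeout", "120000");               // per-RPC timeout, ms
conf.set("hbase.client.operation.timeout", "300000");  // whole-operation timeout, ms
conf.set("zookeeper.session.timeout", "90000");        // ZK session timeout, ms
Connection connection = ConnectionFactory.createConnection(conf); // create once, share and reuse
Kindly help me to resolve it. Regards, Vinay K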
09-23-2019
06:59 AM
Hi Folks,
We are using HDP 2.6.5 with HBase and Oozie. We are getting a "java.lang.OutOfMemoryError: unable to create new native thread" error in our logs:
2019-09-22 05:56:37,297 INFO container.ContainerImpl - Container container_e56_1565687270063_197657_01_001489 transitioned from LOCALIZING to LOCALIZED
2019-09-22 05:56:37,299 FATAL event.AsyncDispatcher - Error in dispatcher thread
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:118)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2019-09-22 05:56:37,300 INFO ipc.Server - Auth successful for appattempt_1565687270063_197657_000001 (auth:SIMPLE)
2019-09-22 05:56:37,303 FATAL yarn.YarnUncaughtExceptionHandler - Thread Thread[AsyncDispatcher event handler,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2019-09-22 05:56:37,308 INFO util.ExitUtil - Halt with status -1 Message: HaltException
I have gone through multiple links, but didn't find anything helpful.
Output of max threads:
cat /proc/sys/kernel/threads-max
902309
ulimit output:
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 451154
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 451154
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
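To see how close the node gets to these limits when the error fires, these are the checks I plan to capture (illustrative commands; 'yarn' is the assumed NodeManager service user and <NM_PID> is a placeholder):
ps -eLf | wc -l                     # total threads on the box
ps -eLf | grep -c '^yarn'           # threads owned by the yarn user, vs its ulimit -u
grep Threads /proc/<NM_PID>/status  # thread count of the NodeManager process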
Could someone help me to resolve the issue?
Regards,
Vinay K
08-08-2019
03:36 PM
Hey Harsh, thanks for responding. As multiple clients request data from HBase, at some point users sometimes get no data, or EOF exceptions and connection interruptions occur. We are not able to keep a record of the requested data, or of the size of the input and output data sent to the end user. Regards Vinay K
07-04-2019
09:41 AM
Hi All, our Oozie workflow jobs are getting suspended. After searching the logs, I found the errors below.
Error in Oozie logs:
2019-07-01 14:53:34,740 WARN ipc.Server - IPC Server handler 3 on 10020, call org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.getJobReport from 172.20.100.146:58166 Call#137071 Retry#0
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not load history file /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist
Error in History Server:
2019-07-03 06:05:34,723 INFO hs.HistoryFileManager - Moving /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist to /mr-history/done/2019/07/03/000019/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist
2019-07-03 06:05:34,780 ERROR hs.HistoryFileManager - Error while trying to move a job to done
java.io.IOException: rename from /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist to /mr-history/done/2019/07/03/000019/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist failed.
at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1342)
at org.apache.hadoop.fs.DelegateToFileSystem.renameInternal(DelegateToFileSystem.java:197)
at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:749)
at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:679)
at org.apache.hadoop.fs.FileContext.rename(FileContext.java:960)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.moveToDoneNow(HistoryFileManager.java:1022)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$1000(HistoryFileManager.java:82)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone(HistoryFileManager.java:440)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$1.run(HistoryFileManager.java:917)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Kindly help me to resolve the issue.
07-02-2019
04:51 PM
Hi All, our Oozie workflow is getting suspended automatically, as shown below:
User : sshuser
Group : -
Created : 2019-07-01 14:50 GMT
Started : 2019-07-01 14:50 GMT
Last Modified : 2019-07-02 04:06 GMT
Ended : 2019-07-02 04:06 GMT
CoordAction ID: 0173513-190628063928782-oozie-oozi-C@174
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID Status Ext ID Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0178959-190629112435711-oozie-oozi-W@:start: OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0178959-190629112435711-oozie-oozi-W@virtual KILLED 0178960-190629112435711-oozie-oozi-W SUSPENDED
After digging into the logs, I found this error:
2019-07-01 14:53:35,067 WARN ActionCheckXCommand:523 - SERVER[hn1.cloudapp.net] USER[sshuser] GROUP[-] TOKEN[] APP[E_virtualWf] JOB[0178960-190629112435711-oozie-oozi-W] ACTION[0178960-190629112435711-oozie-oozi-W@hbaseDelete] Exception while executing check(). Error Code [JA009], Message[JA009: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not load history file wasbs://hbase@hdpsa.blob.core.windows.net/mr-history/tmp/sshuser/job_1561807385110_11674-1561992748186-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE_virtualWf-1561992812433-1-0-SUCCEEDED-default-1561992804743.jhist
I have checked the permissions: rwxrwxrwx on /mr-history and rwxrwxrwt on /tmp. Sometimes the job works fine, and sometimes I get the above issue.
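For reference, these are the checks I'm running against the history directories (standard HDFS client commands; the WASB container is the one shown in the error above):
hdfs dfs -ls -d /mr-history/tmp /mr-history/done
hdfs dfs -ls /mr-history/tmp/sshuser | head
Could someone help me find the solution? I will be thankful to you.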
06-27-2019
07:48 AM
Hi All, I'm seeing odd memory values in the worker logs while memory is being allocated to containers:
2019-06-27 07:19:42,879 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 2703 for container-id container_e35_1560508149155_60211_01_000339: -1B of 2 GB physical memory used; -1B of 4.2 GB virtual memory used
2019-06-27 07:19:42,948 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 2700 for container-id container_e35_1560508149155_60211_01_000417: -1B of 2 GB physical memory used; -1B of 4.2 GB virtual memory used
Why does the reported memory usage go to -1B? Kindly help me to resolve it. Regards, Vinay K
06-11-2019
08:46 AM
@Jay Kumar SenSharma Thanks for the quick response. We can create a custom alert, which is fine. But I'm looking for whether we can create monitoring for HBase queries and HBase exceptions.
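One direction I'm exploring (a sketch only; the hostname is a placeholder and 16030 is the default RegionServer info port in HBase 1.x) is scraping the RegionServer JMX metrics and alerting on the exception counters exposed there:
curl 'http://rs-host.example.com:16030/jmx?qry=Hadoop:service=HBase,name=RegionServer,sub=Server'
Regards, Vinay K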
06-11-2019
07:43 AM
Hi Folks, we are running a multi-node HBase cluster on HDP 2.5. I'm looking to set up monitoring for errors occurring at any RegionServer, region, or the HBase Master. Could someone suggest an approach? Regards Vinay K
05-29-2019
04:56 AM
1 Kudo
Hi @Adil BAKKOURI After installation, when the services were starting, one of your services failed to start. Can you share the logs? Regards, Vinay K
05-28-2019
10:24 AM
@PJ You can use * in the database column while defining the policy in Ranger's Hive plugin. * will apply the restriction policy to all Hive DBs.
05-28-2019
09:01 AM
Hi @ritzz learner From the error, it seems you are unable to connect to the SMTP server. Check that your network can reach the SMTP server, e.g. run telnet smtp.gmail.com 25 and share the output. (Note that Gmail's SMTP service usually listens on ports 587 and 465, and many networks block outbound port 25, so it's worth testing those ports too.) Regards, Vinay K
05-28-2019
07:41 AM
1 Kudo
Hi @Haijin Li The input file you are using is incorrect.
stock2.json:
{"myid":"0001","mytype":"donut"}
The command is correct:
CREATE EXTERNAL TABLE stock4_json (myjson struct<myid:string, mytype:string>) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' LOCATION '/warehouse/tablespace/external/hive/haijintest.db/stock2';
The myjson struct declares that each record carries a nested field list, so the JSON must nest the fields under a top-level myjson key. Place a correct input file instead:
stock3.json:
{"myjson":{"myid":"0001","mytype":"donut"}}
and create the table with the same command and check.
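Once the corrected file is in place, a quick sanity check (the query is illustrative, using the table defined above):
SELECT myjson.myid, myjson.mytype FROM stock4_json;
Let me know if you are still facing the issue. Regards, Vinay K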
05-28-2019
06:49 AM
Hi @PJ It depends on how you define the policies for the Hive DBs; you can restrict access through multiple Ranger policies. What exactly is your question? Can you explain it?
05-28-2019
06:44 AM
Hi, read through this link once: https://community.hortonworks.com/questions/146908/nifi-putemail-failure-to-send-flowfile-to-gmail-ac.html The answer given there may help you.
05-16-2019
12:57 PM
Hi Folks, Env: HDP 2.6.5. We are running a multi-node HBase cluster and sometimes face an issue while fetching data from HBase:
04:36:03 UTC Error ScanHBase[id=5e4b7f7b-5157d-3e3r-9834-456f5rr] Unable to fetch rows from HBase table abc due to Failed after attempts=2, exceptions:
Thu May 16 04:36:02 UTC 2019, RpcRetryingCaller{globalStartTime=1557981362790,pause=100,retries=2}, java.net.UnknownHostException: unknown host: wn1-hbase.xxxx.com
Thu May 16 04:36:03 UTC 2019, RpcRetryingCaller{globalStartTime=1557981362790,pause=100,retries=2}, java.net.UnknownHostException: unknown host: wn1-hbase.xxxx.com: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions
Meanwhile, the wn1-hbase.xxxx.com node itself is working fine. Kindly help me to sort it out. Thankful to you in advance. Regards, Vinay K
04-15-2019
01:45 PM
Hi Folks, we are running HBase and using MultiRowRangeFilter in our Java code to fetch data for multiple row ranges. Below is the snippet (imports included for completeness):
import java.util.*;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

List<RowRange> ranges = new ArrayList<>();
ranges.add(new RowRange(Bytes.toBytes(startRow), true, Bytes.toBytes(endRow), true)); // inclusive start and end
Scan scan = new Scan();
MultiRowRangeFilter filter = new MultiRowRangeFilter(ranges);
FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filterList.addFilter(filter);
scan.setFilter(filterList);
scan.setCaching(500);
ResultScanner results = table.getScanner(scan);
For one specific table we get no data back, while the same code works for other tables. If I check manually via the hbase shell, the data is available, and even a single-range Scan with the same code returns data. How can I debug the data or the query at the HBase level? Could someone help me to sort out the issue? Regards, Vinay K
04-12-2019
08:10 AM
Hi All, we have set up a multi-node HDP cluster running HBase 1.1.2, with 20 RegionServers, each holding approx. 100 regions, and all running fine. I want to track which RegionServer a given query executes on. We run queries from Java code using MultiRowRangeFilter.
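One approach I'm considering (a sketch against the 1.1.x client API; the table name and row key are placeholders, and connection is our existing Connection) is to ask the client which host serves a range's start key via RegionLocator:
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

try (RegionLocator locator = connection.getRegionLocator(TableName.valueOf("table1"))) {
    HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("rowkey"));
    // Log which host/region serves this key
    System.out.println(loc.getHostname() + " -> " + loc.getRegionInfo().getRegionNameAsString());
}
Any help sorting this out is appreciated. Regards, Vinay K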
04-12-2019
07:32 AM
Hi, have you found any method for this?
04-12-2019
05:34 AM
Hi, I'm planning to use HBase; I've written code against the HBase Java API and run queries with it. Is there any way to see the executed queries in the logs, as well as the responses to those queries?
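One option I'm aware of (assuming the standard log4j setup shipped with the HBase client) is raising the client packages' log level in log4j.properties, which shows RPC and connection activity, though not a literal query/response log:
log4j.logger.org.apache.hadoop.hbase.client=DEBUG
log4j.logger.org.apache.hadoop.hbase.ipc=DEBUG
Is there anything more query-oriented than this?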
04-05-2019
06:19 AM
Hi Folks, hope all are doing well! We are using HDP 2.6.5 on a 20-node cluster. Every day we get NodeManager health issues and connection-refused errors, and sometimes the NodeManager restarts itself. I got these from the NodeManager log file:
SHUTDOWN LOGS:
2019-04-02 22:28:20,947 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 6948 for container-id container_e17_1553506205851_46103_01_000054: 520.5 MB of 2 GB physical memory used; 3.6 GB of 4.2 GB virtual memory used
2019-04-02 22:28:20,948 WARN monitor.ContainersMonitorImpl - org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2019-04-02 22:28:21,162 INFO launcher.ContainerLaunch - Container container_e17_1553506205851_46103_01_000055 succeeded
2019-04-02 22:28:21,660 INFO ipc.Server - Stopping server on 8040
2019-04-02 22:28:21,661 INFO ipc.Server - Stopping IPC Server Responder
2019-04-02 22:28:21,662 INFO localizer.ResourceLocalizationService - Public cache exiting
2019-04-02 22:28:21,663 INFO ipc.Server - Stopping IPC Server listener on 8040
2019-04-02 22:28:21,683 INFO nodemanager.NodeManager - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at wn
NodeManager connection refused and bad health:
2019-04-05 06:00:02,972 INFO nodemanager.NodeStatusUpdaterImpl - Sending out 61 NM container statuses: [[container_e23_1554290874215_9521_01_000002, CreateTime: 1554442308478, State: RUNNING, Capability: <memory:4096, vCores:1>, Diagnostics: , ExitStatus: -1000, Priority: 0], [container_e23_1554290874215_9607_01_000075, CreateTime: 1554443718725, State: COMPLETE, Capability: <memory:2048, vCores:1>, Diagnostics: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.
, ExitStatus: 143, Priority: 20], [container_e23_1554290874215_9607_01_000076, CreateTime: 1554443718726, State: COMPLETE, Capability: <memory:2048, vCores:1>, Diagnostics: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.
The AM is killing the containers without giving any error. Could someone help me to sort out this issue? Regards, Vinay K
01-17-2019
09:37 AM
Hi, we are using HDP 3 in our environment. We are trying to load data from the local filesystem using LOAD DATA LOCAL INPATH, but we get the error "No path found". If we instead copy the file to HDFS and then run LOAD DATA INPATH, it loads into the table successfully. Has LOAD DATA from a local path been removed in HDP 3?
01-11-2019
07:50 AM
Hi @Pulkit Bhardwaj I have gone through this link; from it I mainly understood the performance of 3-way replication vs EC. I still don't understand how the data is stored in HDFS. If I have to store a 1 GB file in HDFS, logically the file is divided into 1024 MB / 128 MB = 8 blocks. So how does RS-6-3-1024k store these 8 blocks? What is the meaning of the 6 data blocks in RS, and how do the 3 parity blocks work? Does EC further divide the 8 blocks into sub-blocks? Could anyone help me to understand the logic?
01-10-2019
08:47 AM
We're almost done. Thanks again @Geoffrey Shelton Okot
01-10-2019
08:46 AM
Hi All, I'm trying to understand how Hadoop 3 stores data on HDFS with erasure coding. As per the erasure coding docs, six built-in policies are currently supported: RS-3-2-1024k, RS-6-3-1024k, RS-10-4-1024k, RS-LEGACY-6-3-1024k, XOR-2-1-1024k, and REPLICATION. Replication is the familiar scheme already used in Hadoop 2 (replicate the data 3x). How does Reed-Solomon RS-3-2-1024k (3 data blocks, 2 parity blocks, 1024k cell size) or RS-6-3-1024k (6 data blocks, 3 parity blocks, 1024k cell size) store the data? Suppose we have 3 DataNodes, 2 NameNodes, and 1 edge node, and we have to store a 1 GB file (abc.txt) with a 128 MB block size. How do RS-3-2-1024k and RS-6-3-1024k work? What is the meaning of 6 data blocks and 1024k? Are there any specific prerequisites on the number of DataNodes required by a policy?
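For concreteness, here is the arithmetic I'm trying to confirm (my own rough understanding of the striped layout; please correct me if it's wrong):
- A 1024k cell size means the file is split into 1 MB cells, written round-robin across the 6 data blocks of a block group.
- Each stripe of 6 data cells (6 MB) gets 3 computed parity cells (3 MB), so the storage overhead is 1.5x instead of replication's 3x; the 1 GB file would occupy roughly 1.5 GB.
- Each internal block can grow up to the block size (128 MB), so one RS-6-3 block group holds up to 6 x 128 MB = 768 MB of data, and a 1 GB file would span two block groups (768 MB + 256 MB).
- Since the 9 internal blocks of a group should land on different DataNodes, RS-6-3 seems to need at least 9 DataNodes (and RS-3-2 at least 5), so our 3-DataNode setup looks too small for either.
Is that right? Will appreciate any help in understanding the Hadoop 3 concept. Regards, Vinay K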
01-07-2019
07:31 AM
Thanks @Geoffrey Shelton Okot
01-07-2019
07:31 AM
Thanks @subhash parise
01-04-2019
07:42 AM
@Geoffrey Shelton Okot Yes, the interactive query is running fine. I have edited the below properties in the custom spark2-defaults configuration:
spark.sql.hive.hiveserver2.jdbc.url.principal
spark.hadoop.hive.zookeeper.quorum
spark.hadoop.hive.llap.daemon.service.hosts
spark.datasource.hive.warehouse.load.staging.dir
spark.datasource.hive.warehouse.metastoreUri
spark.sql.hive.hiveserver2.jdbc.url
After a restart, I ran spark-shell and sql("show databases").show(), but still only the DEFAULT database is visible.
01-03-2019
05:34 AM
@Geoffrey Shelton Okot No luck. Pre-emption is already enabled via the YARN config and all the other prerequisites are complete. The Hive interactive query service is running fine. Still:
19/01/03 05:16:45 INFO RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=vinay@TEST.COM (auth:KERBEROS) retries=1 delay=5 lifetime=0
19/01/03 05:16:47 INFO CodeGenerator: Code generated in 294.781928 ms
19/01/03 05:16:47 INFO CodeGenerator: Code generated in 18.011739 ms
+------------+
|databaseName|
+------------+
|     default|
+------------+
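Should I be going through the Hive Warehouse Connector session instead of plain sql()? My understanding of the HWC API (assuming the HWC jar is on the spark-shell classpath) is:
import com.hortonworks.hwc.HiveWarehouseSession
val hive = HiveWarehouseSession.session(spark).build()
hive.showDatabases().show()
Is that the missing piece here?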