Member since: 04-12-2019
Posts: 105
Kudos Received: 3
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2235 | 05-28-2019 07:41 AM
 | 1056 | 05-28-2019 06:49 AM
 | 912 | 12-20-2018 10:54 AM
 | 711 | 06-27-2018 09:05 AM
 | 3998 | 06-27-2018 09:02 AM
10-06-2019
06:56 AM
Hi, I'm facing the same issue. Have you had any luck?
09-24-2019
06:35 AM
Hi @Shelton Thanks for giving your precious time. As you mentioned, the values are already set:
#cat /proc/sys/fs/file-max
11534502
#ulimit
unlimited
#ulimit -Sn
128000
#ulimit -Hn
128000
Each node in the HBase cluster is already running with these settings. One thing I noticed: the vm.swappiness value is 60, and swap usage is sometimes more than 80%.
#free -m
Swap: 7167 6066 1101
Looking for another solution. Regards, Vinay K
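For context, the swap figures quoted in the reply above do put the node past the 80% mark, which supports the suspicion about vm.swappiness. A small sketch of that arithmetic, using only the numbers from the `free -m` output in the post (lowering vm.swappiness via sysctl is a common mitigation, but treat that as a suggestion rather than a confirmed fix):

```python
# Swap-usage arithmetic using the `free -m` figures quoted in the post above.
def swap_usage_percent(total_mb: int, used_mb: int) -> float:
    """Percentage of swap currently in use."""
    return 100.0 * used_mb / total_mb

# Swap: 7167 MB total, 6066 MB used (from the post)
print(round(swap_usage_percent(7167, 6066), 1))  # 84.6 -> above the 80% mark
```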
09-24-2019
06:02 AM
Hi Team, We have a Spring Boot application which writes data to and queries data from HBase. When the application starts, it makes a remote connection to the ZooKeeper server and later executes HBase queries to fetch data as required. But multiple times we get the errors below and the application is disconnected from the ZooKeeper server:
2019-09-24 07:12:26.920 INFO 60447 --- [0.100.135:2181)] o.a.h.h.s.o.apache.zookeeper.ClientCnxn : Unable to read additional data from server sessionid 0x26c4d39814f4b8e, likely server has closed socket, closing socket connection and attempting reconnect
..
2019-09-24 07:14:48.042 INFO 60447 --- [io-8181-exec-41] com.mems.energy.GetTimeseriseCompare : org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=6, exceptions:
Tue Sep 24 07:07:58 UTC 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=94368: row 'abc' on table 'table1' at region region1, abc
Kindly help me to resolve it. Regards, Vinay K
Labels:
- Apache Zookeeper
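If the disconnects persist after the OS limits check out, the client-side retry and timeout knobs in hbase-site.xml are worth reviewing, since the log shows the default 60000 ms call timeout being exhausted. An illustrative fragment (the values shown are starting points to tune, not recommendations):

```xml
<!-- Illustrative hbase-site.xml fragment; tune values for your workload. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value> <!-- ms before ZooKeeper expires the client session -->
</property>
<property>
  <name>hbase.client.retries.number</name>
  <value>10</value>    <!-- retry attempts before RetriesExhaustedException -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value> <!-- ms per RPC; matches the callTimeout=60000 in the log -->
</property>
```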
09-23-2019
06:59 AM
Hi Folks,
We are using HDP 2.6.5 with HBase and Oozie. We are getting a "java.lang.OutOfMemoryError: unable to create new native thread" error in our logs:
2019-09-22 05:56:37,297 INFO container.ContainerImpl - Container container_e56_1565687270063_197657_01_001489 transitioned from LOCALIZING to LOCALIZED
2019-09-22 05:56:37,299 FATAL event.AsyncDispatcher - Error in dispatcher thread
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:118)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2019-09-22 05:56:37,300 INFO ipc.Server - Auth successful for appattempt_1565687270063_197657_000001 (auth:SIMPLE)
2019-09-22 05:56:37,303 FATAL yarn.YarnUncaughtExceptionHandler - Thread Thread[AsyncDispatcher event handler,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:748)
2019-09-22 05:56:37,308 INFO util.ExitUtil - Halt with status -1 Message: HaltException2019-09-22 05:56:37,308 INFO util.ExitUtil - Halt with status -1 Message: HaltException
I have gone through multiple links, but I didn't find anything that helped.
Output of max threads:
cat /proc/sys/kernel/threads-max
902309
ulimit output:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 451154
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 451154
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Could someone help me to resolve the issue?
Regards,
Vinay K
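Worth noting about the question above: "unable to create new native thread" is usually a process-count or native-memory cap rather than a Java heap problem. Each Java thread reserves a native stack (the 8192 KB `ulimit -s` in the output), and `ulimit -u` (451154 here) caps threads across the whole user, not per process. A rough sketch of the stack-reservation arithmetic, using only the numbers from the output above:

```python
# Rough native-memory arithmetic for thread creation, using the ulimit
# values quoted in the post (an illustration, not a diagnosis).
stack_kb = 8192          # ulimit -s: default stack reserved per thread
max_user_procs = 451154  # ulimit -u: all of one user's threads count here

def stack_reservation_mb(n_threads: int) -> float:
    """Virtual address space reserved by thread stacks alone for N threads."""
    return n_threads * stack_kb / 1024

print(stack_reservation_mb(10_000))  # 80000.0 MB just for stacks
```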
08-08-2019
03:36 PM
Hey Harsh, Thanks for responding. As multiple clients request data from HBase, at some point users sometimes don't get data, or EOF exceptions or connection interruptions occur. We are not able to keep a record of the requested data, or of the size of the input and output data sent to the end user. Regards Vinay K
06-11-2019
08:46 AM
@Jay Kumar SenSharma Thanks for the quick response. We can create a custom alert, which is fine, but I'm looking to see whether we can set up monitoring for HBase queries and HBase exceptions. Regards, Vinay K
06-11-2019
07:43 AM
Hi Folks, We are using a multi-node HBase cluster with HDP 2.5. I'm looking to set up monitoring for errors occurring on any RegionServer, region, or the HBase Master. Could someone suggest an approach? Regards Vinay K
Labels:
- Apache HBase
05-29-2019
04:56 AM
1 Kudo
Hi @Adil BAKKOURI After installation, while the services were starting, one of your services failed to start. Can you share the logs? Regards, Vinay K
05-28-2019
10:24 AM
@PJ You can use * in the database column while defining a policy in Ranger's Hive plugin. * will apply the restriction policy to all Hive DBs.
05-28-2019
09:01 AM
Hi @ritzz learner From the error, it seems you are unable to connect to the SMTP server. Check that your network can reach the SMTP server. Use the command telnet smtp.gmail.com 25 and share the output. Regards, Vinay K
05-28-2019
07:41 AM
1 Kudo
Hi @Haijin Li The input file you are using is incorrect. stock2.json:
{"myid":"0001","mytype":"donut"}
The command is correct:
CREATE EXTERNAL TABLE stock4_json (myjson struct<myid:string, mytype:string>) ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' LOCATION '/warehouse/tablespace/external/hive/haijintest.db/stock2';
The myjson struct declares that each record contains a nested structure. You have to place the correct input file, stock3.json:
{"myjson":{"myid":"0001","mytype":"donut"}}
then create the table with the same command and check. Let me know if you are still facing the issue. Regards, Vinay K
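The fix in the answer above boils down to one rule: each JSON line must be a single object whose only top-level key is the struct column name declared in the table. A small stand-alone check of that idea in plain Python, independent of Hive, using the two sample lines from the post:

```python
import json

# The two sample lines discussed in the answer above.
stock2_line = '{"myid":"0001","mytype":"donut"}'            # fields at top level: wrong
stock3_line = '{"myjson":{"myid":"0001","mytype":"donut"}}'  # nested under myjson: right

def matches_struct_schema(line: str) -> bool:
    """True if the record has the shape the table expects: one top-level
    key 'myjson' whose value holds exactly myid and mytype."""
    rec = json.loads(line)
    return set(rec) == {"myjson"} and set(rec["myjson"]) == {"myid", "mytype"}

print(matches_struct_schema(stock2_line))  # False
print(matches_struct_schema(stock3_line))  # True
```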
05-28-2019
06:49 AM
Hi @PJ It depends on how you define the policies for the Hive DB. You may restrict access via multiple Ranger policies. What exactly is your question? Can you explain?
05-28-2019
06:44 AM
Hi, please read through this link: https://community.hortonworks.com/questions/146908/nifi-putemail-failure-to-send-flowfile-to-gmail-ac.html The answer given there may help you.
05-16-2019
12:57 PM
Hi Folks, Env: HDP 2.6.5. We are using a multi-node HBase cluster. We sometimes face an issue while fetching data from HBase:
04:36:03 UTC Error ScanHBase[id=5e4b7f7b-5157d-3e3r-9834-456f5rr] Unable to fetch rows from HBase table abc due to Failed after attempts=2, exceptions:
Thu May 16 04:36:02 UTC 2019, RpcRetryingCaller{globalStartTime=1557981362790,pause=100,retries=2}, java.net.UnknownHostException: unknown host: wn1-hbase.xxxx.com
Thu May 16 04:36:03 UTC 2019, RpcRetryingCaller{globalStartTime=1557981362790,pause=100,retries=2}, java.net.UnknownHostException: unknown host: wn1-hbase.xxxx.com: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=2, exceptions
Meanwhile, the wn1-hbase.xxxx.com node itself is working fine. Kindly help me to sort it out. Thankful to you in advance. Regards, Vinay K
Labels:
- Apache Hadoop
- Apache HBase
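Since the RegionServer itself is healthy, the UnknownHostException in the question above points at name resolution failing on whichever node runs the retrying client, so it is worth verifying that every client and cluster node can resolve wn1-hbase.xxxx.com (via DNS or /etc/hosts). A minimal sketch of such a check; the hostname is a parameter, and localhost is used only as a demonstration:

```python
import socket

def resolvable(hostname: str) -> bool:
    """True if this node can resolve the given hostname."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Run on each client/cluster node with the failing name, e.g.
# resolvable("wn1-hbase.xxxx.com"); demonstrated here with localhost:
print(resolvable("localhost"))
```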
04-12-2019
07:32 AM
Hi, have you found any method?
04-12-2019
05:34 AM
Hi, I'm planning to use HBase; I've written code with the HBase Java API to run queries. Is there any way to see the executed queries in the logs, as well as the responses to those queries?
Labels:
- Apache HBase
- Cloudera Search
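One option for the question above is raising the log level of the HBase client packages in the application's log4j configuration, which surfaces client-side request and RPC activity in the logs. An illustrative fragment (logger names are the standard HBase client packages; the levels are examples, not a recommendation for production):

```properties
# Illustrative log4j.properties fragment for the client application.
log4j.logger.org.apache.hadoop.hbase.client=DEBUG
log4j.logger.org.apache.hadoop.hbase.ipc=DEBUG
```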
04-05-2019
06:19 AM
Hi Folks, Hope all are doing well! We are using HDP 2.6.5 on a 20-node cluster. Every day we get NodeManager health issues and connection-refused errors, and sometimes the NodeManager restarts itself. I got these logs from the NodeManager log file. SHUTDOWN LOGS:
2019-04-02 22:28:20,947 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 6948 for container-id container_e17_1553506205851_46103_01_000054: 520.5 MB of 2 GB physical memory used; 3.6 GB of 4.2 GB virtual memory used
2019-04-02 22:28:20,948 WARN monitor.ContainersMonitorImpl - org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl is interrupted. Exiting.
2019-04-02 22:28:21,162 INFO launcher.ContainerLaunch - Container container_e17_1553506205851_46103_01_000055 succeeded
2019-04-02 22:28:21,660 INFO ipc.Server - Stopping server on 8040
2019-04-02 22:28:21,661 INFO ipc.Server - Stopping IPC Server Responder
2019-04-02 22:28:21,662 INFO localizer.ResourceLocalizationService - Public cache exiting
2019-04-02 22:28:21,663 INFO ipc.Server - Stopping IPC Server listener on 8040
2019-04-02 22:28:21,683 INFO nodemanager.NodeManager - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at wn
NodeManager connection refused and bad health:
2019-04-05 06:00:02,972 INFO nodemanager.NodeStatusUpdaterImpl - Sending out 61 NM container statuses: [[container_e23_1554290874215_9521_01_000002, CreateTime: 1554442308478, State: RUNNING, Capability: <memory:4096, vCores:1>, Diagnostics: , ExitStatus: -1000, Priority: 0], [container_e23_1554290874215_9607_01_000075, CreateTime: 1554443718725, State: COMPLETE, Capability: <memory:2048, vCores:1>, Diagnostics: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.
, ExitStatus: 143, Priority: 20], [container_e23_1554290874215_9607_01_000076, CreateTime: 1554443718726, State: COMPLETE, Capability: <memory:2048, vCores:1>, Diagnostics: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143.
Without giving any error, the AM is killing the containers. Could someone help me to sort out this issue? Regards, Vinay K
Labels:
- Apache Hadoop
- Apache YARN
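One detail worth decoding from the log above: container exit code 143 is 128 plus the signal number, i.e. signal 15 (SIGTERM), which is a container being deliberately stopped (here by the ApplicationMaster) rather than crashing on its own. A quick check of that arithmetic:

```python
import signal

# Exit codes above 128 encode "killed by signal (code - 128)".
exit_code = 143
sig = exit_code - 128
print(sig, signal.Signals(sig).name)  # 15 SIGTERM
```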
01-17-2019
09:37 AM
Hi, We are using HDP 3 in our environment. We are trying to load data from the local filesystem into a table using LOAD DATA LOCAL INPATH, but we get the error "No path found". If we load the file into HDFS first and then use LOAD DATA INPATH, it succeeds. Has loading data from a local path been removed in HDP 3?
Labels:
- Apache Hadoop
- Apache Hive
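A likely explanation for the question above, offered as a hedged guess: when the statement runs through HiveServer2, LOCAL is resolved relative to the HiveServer2 host, not the machine where the client runs, which commonly surfaces as exactly this "no path found" error. A workaround sketch that sidesteps the question by staging through HDFS, which the post confirms works (the table name and paths below are placeholders):

```sql
-- Stage the file in HDFS first (from a shell):
--   hdfs dfs -put /tmp/data.csv /tmp/staging/data.csv
-- Then load from the HDFS path:
LOAD DATA INPATH '/tmp/staging/data.csv' INTO TABLE mydb.mytable;
```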
01-11-2019
07:50 AM
Hi @Pulkit Bhardwaj I have gone through this link; from it I mostly understand the performance of 3-way replication vs. EC. I still don't understand how the data is stored in HDFS. If I have to store a 1 GB file in HDFS, the file is logically divided into 1024 MB / 128 MB = 8 blocks. So how does RS-6-3-1024k store these 8 blocks? What is the meaning of 6 data blocks in RS, and how do the 3 parity blocks work? Does EC further divide the 8 blocks into sub-blocks? Could anyone help me understand the logic?
01-10-2019
08:47 AM
We're almost done. Thanks again @Geoffrey Shelton Okot
01-10-2019
08:46 AM
Hi All, I'm trying to understand how Hadoop 3 stores data on HDFS with erasure coding. Six built-in policies are currently supported: RS-3-2-1024k, RS-6-3-1024k, RS-10-4-1024k, RS-LEGACY-6-3-1024k, XOR-2-1-1024k, and REPLICATION. Replication is the familiar scheme that Hadoop 2 also used (replicate the data 3x). How does Reed-Solomon RS-3-2-1024k (3 data blocks, 2 parity blocks, 1024k cell size) or RS-6-3-1024k (6 data blocks, 3 parity blocks, 1024k cell size) store the data? Suppose we have 3 DataNodes, 2 NameNodes, and 1 edge node. We have to store a 1 GB file (abc.txt) and the block size is 128 MB. How do RS-3-2-1024k and RS-6-3-1024k work? What is the meaning of 6 data blocks and 1024k? Are there any specific prerequisites for the number of DataNodes required, depending on the policy? I would appreciate any help in understanding the Hadoop 3 concept. Regards, Vinay K
Labels:
- Apache Hadoop
- Hortonworks SmartSense
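My understanding of the striping asked about above, offered as a hedged sketch: under RS-6-3-1024k the file is not stored as eight replicated 128 MB blocks; it is written in 1 MB (1024k) cells striped round-robin across 6 data blocks, and for every 6 data cells 3 parity cells are computed, so the policy needs at least 6 + 3 = 9 DataNodes to place a full stripe and costs 50% overhead instead of the 200% of 3x replication. The arithmetic for the 1 GB file from the question:

```python
# Storage-overhead arithmetic for RS-6-3-1024k vs 3x replication,
# applied to the 1 GB file discussed above (illustrative only).
DATA_UNITS, PARITY_UNITS = 6, 3
FILE_MB = 1024  # the 1 GB file

parity_mb = FILE_MB * PARITY_UNITS / DATA_UNITS  # 512.0 MB of parity
ec_total_mb = FILE_MB + parity_mb                # 1536.0 MB on disk (1.5x)
replication_total_mb = FILE_MB * 3               # 3072 MB for 3x replication
min_datanodes = DATA_UNITS + PARITY_UNITS        # 9 nodes for a full stripe

print(parity_mb, ec_total_mb / FILE_MB, min_datanodes)  # 512.0 1.5 9
```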
01-07-2019
07:31 AM
Thanks @Geoffrey Shelton Okot
01-07-2019
07:31 AM
Thanks @subhash parise
01-04-2019
07:42 AM
@Geoffrey Shelton Okot Yes, interactive query is running fine. I have edited the below properties in the custom spark2-defaults configuration:
spark.sql.hive.hiveserver2.jdbc.url.principal
spark.hadoop.hive.zookeeper.quorum
spark.hadoop.hive.llap.daemon.service.hosts
spark.datasource.hive.warehouse.load.staging.dir
spark.datasource.hive.warehouse.metastoreUri
spark.sql.hive.hiveserver2.jdbc.url
After a restart I ran spark-shell with sql("show databases").show(); still only the DEFAULT database is visible.
01-03-2019
05:34 AM
@Geoffrey Shelton Okot No luck. Pre-emption is already enabled via the YARN config and all the other prerequisites are complete. The Hive interactive query service is running fine. Still:
19/01/03 05:16:45 INFO RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=vinay@TEST.COM (auth:KERBEROS) retries=1 delay=5 lifetime=0
19/01/03 05:16:47 INFO CodeGenerator: Code generated in 294.781928 ms
19/01/03 05:16:47 INFO CodeGenerator: Code generated in 18.011739 ms
+------------+
|databaseName|
+------------+
|     default|
+------------+
01-02-2019
02:24 PM
@Geoffrey Shelton Okot Ohh, I had not enabled pre-emption via the YARN config; that is the only pending item, the rest is complete. Let me check with YARN pre-emption enabled. I will update you once it's done.
01-02-2019
12:54 PM
Hi Subhash,
Below is the code:
from pyspark import SparkConf
from pyspark.sql import SparkSession, HiveContext
from pyspark.sql import functions as fn
from pyspark.sql.functions import rank, sum, col
from pyspark.sql import Window

sparkSession = (SparkSession
    .builder
    .master("local")
    .appName('sprk-job')
    .enableHiveSupport()
    .getOrCreate())
sparkSession.sql("show databases").show()
sparkSession.stop()
I'm also trying the same from spark-shell.
01-02-2019
12:09 PM
Hi Subhash, I have already granted the spark user access to all databases via Ranger and to all the HDFS storage paths.
01-02-2019
08:37 AM
@Geoffrey Shelton Okot Could you please confirm whether we really need to enable Interactive Query? After enabling Interactive Query, I'm unable to start the interactive query service. Below are the logs: 2019-01-02T08:36:41,455 WARN [main] cli.LlapStatusServiceDriver: Watch mode enabled and got YARN error. Retrying..
2019-01-02T08:36:43,462 WARN [main] cli.LlapStatusServiceDriver: Watch mode enabled and got YARN error. Retrying..
2019-01-02T08:36:45,469 WARN [main] cli.LlapStatusServiceDriver: Watch mode enabled and got YARN error. Retrying..
2019-01-02T08:36:47,476 INFO [main] LlapStatusServiceDriverConsole: LLAP status unknown
01-02-2019
05:31 AM
@Geoffrey Shelton Okot The Hive and Spark clients are already installed on the Hive and Spark nodes.