Member since: 11-14-2019
Posts: 18
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3895 | 06-29-2020 04:14 AM
 | 2229 | 03-03-2020 11:55 PM
 | 4185 | 02-04-2020 06:22 AM
07-03-2020
05:51 AM
Hello @amitkumarDR: Kindly check both kdc.conf and krb5.conf and make sure their realm entries are consistent. Reference: https://docs.cloudera.com/documentation/enterprise/5-13-x/topics/sg_kerberos_troubleshoot.html
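For instance, the realm definition should agree across both files; a minimal sketch, assuming the default RHEL paths (EXAMPLE.COM and the host names are placeholders for your environment):

/etc/krb5.conf (client side):
[realms]
  EXAMPLE.COM = {
    kdc = kdc-host.example.com
    admin_server = kdc-host.example.com
  }

/var/kerberos/krb5kdc/kdc.conf (on the KDC host):
[realms]
  EXAMPLE.COM = {
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  }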
06-29-2020
08:22 AM
Hello @sarm: Kindly share your HDFS rack configuration and the configurations related to data storage.
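For the rack part, this standard HDFS command prints the rack each DataNode is assigned to (run it as a user with HDFS admin rights):

hdfs dfsadmin -printTopology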
06-29-2020
08:19 AM
Hello @hitachi_ben: Have you gone through the Cloudera documentation links for the version you are running? The general administration guide should give you some background, and if adding extensions is explicitly mentioned there, you can proceed with that. References: https://docs.cloudera.com/documentation/enterprise/5-3-x/topics/cm_mc_rolling_restart.html https://docs.cloudera.com/documentation/enterprise/5-3-x/categories/hub_administrators.html
06-29-2020
04:43 AM
@Lakshu: Thanks for the quick response 🙂 Good to hear that it works.
06-29-2020
04:14 AM
1 Kudo
@Lakshu: This seems to be a configuration issue; your code does not find the correct region as per the configuration. Check your hbase-site.xml for the configuration parameters below, and add them if they are not present:
hbase.thrift.support.proxyuser --> true
hbase.regionserver.thrift.http --> true
Add these configurations, restart HBase, and let me know how this works.
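In hbase-site.xml those two entries would look like this (same parameters as above, just in XML form):

<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>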
06-29-2020
04:06 AM
@Lakshu: Did you get this while running Phoenix sqlline? Are you trying to run a Phoenix SQL query within the sqlline utility? Please be precise so that we can get this resolved.
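For reference, on HDP sqlline is usually started like this (a sketch; the ZooKeeper host is a placeholder and /hbase-unsecure assumes a non-Kerberized cluster):

/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase-unsecure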
05-28-2020
06:30 AM
@TCloud: API calls would be recommended; if you make SQL calls directly, it can put some burden on the underlying layer. If you go via the API, the call is routed through the system and you can fetch the predefined resources. ~ Govind
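As an illustration only, assuming the Cloudera Manager REST API is what you are after (host, port, API version, and credentials below are all placeholders):

curl -u admin:admin 'http://cm-host.example.com:7180/api/v19/clusters'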
05-28-2020
06:05 AM
Try your table name with CAPITAL letters.
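Assuming this is Phoenix: unquoted identifiers are folded to upper case, so a table created without quotes has an upper-case name in the catalog (MYTABLE below is a hypothetical name):

SELECT * FROM MYTABLE;      -- resolves, unquoted names are upper-cased
SELECT * FROM "mytable";    -- matches only if the table was created with a quoted lower-case name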
05-25-2020
05:14 AM
https://issues.apache.org/jira/browse/HIVE-13037 Perhaps this may give you some insights.
03-03-2020
11:55 PM
Fixed: This is what I inferred: while running Spark, the shell came up with a local master (client mode), as you can see below:

Parsed arguments:
  master                  local[*]
  deployMode              null
  executorMemory          null
  executorCores           null
  totalExecutorCores      null
  propertiesFile          /usr/hdp/current/spark2-client/conf/spark-defaults.conf
  driverMemory            4g
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
  driverExtraJavaOptions  null
  supervise               false
  queue                   default
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                null
  mainClass               null
  primaryResource         pyspark-shell
  name                    PySparkShell
  childArgs               []
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

When we use --master yarn this succeeds!
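In other words, launching the shell against YARN is what fixed it; a sketch of the invocation (driver memory taken from the config above):

pyspark --master yarn --driver-memory 4g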
03-03-2020
11:54 PM
Tried verbose mode and I still see this issue!
03-03-2020
11:53 PM
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.3.0.2.6.5.0-292
      /_/

Using Python version 2.7.14 (default, Dec 7 2017 17:05:42)
SparkSession available as 'spark'.
>>>
>>> df=spark.sql('select * from sws_dev.vw_dlx_rpr_ordr_dtl_base limit 1').show()
[Stage 0:=====================> (18 + 28) / 46]
20/03/03 07:01:08 ERROR DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file /tmp/blockmgr-c5bcbbe3-8da0-44a0-8025-1b183c81d532/03/temp_shuffle_280c5065-f954-4ec8-b3d0-7c1f5c18b581
java.io.FileNotFoundException: /tmp/blockmgr-c5bcbbe3-8da0-44a0-8025-1b183c81d532/03/temp_shuffle_280c5065-f954-4ec8-b3d0-7c1f5c18b581 (Too many open files)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at org.apache.spark.storage.DiskBlockObjectWriter$$anonfun$revertPartialWritesAndClose$2.apply$mcV$sp(DiskBlockObjectWriter.scala:217)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1386)
        at org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose(DiskBlockObjectWriter.scala:214)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.stop(BypassMergeSortShuffleWriter.java:237)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
20/03/03 07:01:08 ERROR Executor: Exception in task 3.0 in stage 0.0 (TID 3)
java.io.FileNotFoundException: /tmp/blockmgr-c5bcbbe3-8da0-44a0-8025-1b183c81d532/3c/temp_shuffle_8450fcd1-d97c-4c34-ac52-196e03030bf9 (Too many open files)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:103)
        at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
        at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
20/03/03 07:01:08 ERROR Executor: Exception in task 9.0 in stage 0.0 (TID 9)
java.io.FileNotFoundException: /tmp/blockmgr-c5bcbbe3-8da0-44a0-8025-1b183c81d532/21/temp_shuffle_19e93f90-4de2-43c9-a715-c8668e96d793 (Too many open files)
02-04-2020
06:22 AM
The following values are used based on ZooKeeper settings: minSessionTimeout = tickTime * 2, maxSessionTimeout = tickTime * 20. If no session timeout is set by the client, the server uses tickTime * 2; hence any session established with ZooKeeper has this value as its minimum timeout. Likewise, if no session timeout is set, the maximum is tickTime * 20; any session established with ZooKeeper has this value as its maximum timeout. The sessionTimeout cannot be less than minSessionTimeout (tickTime * 2) or greater than maxSessionTimeout (tickTime * 20).
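A quick worked example with an illustrative zoo.cfg value:

tickTime=2000
minSessionTimeout = 2000 * 2  = 4000 ms
maxSessionTimeout = 2000 * 20 = 40000 ms

So a client requesting 10000 ms keeps 10000 ms, a client requesting 60000 ms is clamped down to 40000 ms, and a client requesting 1000 ms is raised to 4000 ms.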
02-04-2020
02:54 AM
Hello Team,
Consider that I have a client negotiating a timeout with the ZK server.
I have observed that 0x36e18413b196cfc is the session ID, and I can see that ZK terminates the connection after 23,419 seconds. Could anyone suggest why?
2020-01-30 01:03:23,762 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@861] - Client attempting to renew session 0x36e18413b196cfc at /xx.xx.xx.xx:37952
2020-01-30 01:03:23,762 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:Learner@108] - Revalidating client: 0x36e18413b196cfc
2020-01-30 01:03:23,762 - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:ZooKeeperServer@617] - Established session 0x36e18413b196cfc with negotiated timeout 10000 for client /xx.xx.xx.xx:37952
EndOfStreamException: Unable to read additional data from client sessionid 0x36e18413b196cfc, likely client has closed socket
2020-01-30 07:33:42,962 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1033] - Closed socket connection for client /xx.xx.xx.xx:37952 which had sessionid 0x36e18413b196cfc
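To inspect what each live session actually negotiated, the ZooKeeper four-letter-word commands can help (zk-host is a placeholder; on newer releases these commands may need to be whitelisted):

echo cons | nc zk-host 2181    # lists each connection with its session ID and negotiated timeout
echo conf | nc zk-host 2181    # server config, including minSessionTimeout and maxSessionTimeout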
Labels:
- Apache Zookeeper
11-15-2019
04:40 AM
@Cloudera: Any response?
11-14-2019
05:46 AM
Sqoop Version 1.4.6
HDP Version 2.6.5
sqoop job --list
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/14 06:44:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
Available jobs:
  incjob
----------------------------- DELETE OPERATION
sqoop job --delete incjob --verbose
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/14 06:44:57 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
19/11/14 06:44:57 DEBUG tool.JobTool: Enabled debug logging.
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Checking for table: SQOOP_ROOT
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Found table: SQOOP_ROOT
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Looking up property sqoop.hsqldb.job.storage.version for version null
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: => 0
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Looking up property sqoop.hsqldb.job.info.table for version 0
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: => SQOOP_SESSIONS
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Checking for table: SQOOP_SESSIONS
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Found table: SQOOP_SESSIONS
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Deleting job: incjob
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Flushing current transaction
19/11/14 06:44:57 DEBUG hsqldb.HsqldbJobStorage: Closing connection
----------------------------- SELECT OPERATION
sqoop job --list
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/11/14 06:44:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
Available jobs:
  incjob
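If the job still lists after the delete, one thing worth checking is the local HSQLDB metastore file itself; a sketch assuming the default per-user metastore location (the grep pattern is just the job name):

grep -i incjob ~/.sqoop/metastore.db.script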
Labels: