Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2128 | 07-09-2019 12:53 AM |
| | 12446 | 06-23-2019 08:37 PM |
| | 9559 | 06-18-2019 11:28 PM |
| | 10523 | 05-23-2019 08:46 PM |
| | 4894 | 05-20-2019 01:14 AM |
10-15-2018
01:30 PM
Has anyone successfully solved this problem? I am installing a new Cloudera cluster on version 5.15.1, and we want to restrict the firewall rules to a fixed port range for the nodes to communicate. However, when I run jobs they start using ports that are not open and therefore fail.
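For what it's worth, one knob that can help pin down part of this traffic is the port range the MapReduce ApplicationMaster binds to for client communication. A minimal sketch, assuming mapred-site.xml and an illustrative range of 50100-50200 (container, shuffle, and other service ports have their own settings and may still need separate firewall openings):

```xml
<!-- Illustrative only: constrain the ports the MR ApplicationMaster binds to
     so firewall rules can target a known range. -->
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value>50100-50200</value>
</property>
```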
10-13-2018
03:54 AM
1 Kudo
Hi Harsh, Thanks a lot for your support, I really appreciate it. I was able to make HBase stable by adding the line you mentioned, but one change was required: for -Dzookeeper.skipACL=yes we need to give "yes", not "true". It worked for me. Thanks for making my cluster happy. Regards, Ayush
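For reference, a minimal sketch of where such a flag typically goes, assuming it is appended to the ZooKeeper Server JVM options (for example through the server's Java configuration options in Cloudera Manager); the JVMFLAGS variable below follows the stock zkEnv.sh convention:

```sh
# Illustrative: append the property to the ZooKeeper Server JVM flags
export JVMFLAGS="$JVMFLAGS -Dzookeeper.skipACL=yes"
```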
10-08-2018
07:19 PM
1 Kudo
The command in CM -> HDFS -> Actions to run the Balancer is ad hoc. There is no schedule it runs by; you'll need to invoke it manually to trigger the HDFS Balancer work. If you'd like to set up a frequency, you can use the CM API to trigger it via crontab or a similar scheduler.
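As an example, here is a minimal crontab sketch. Note that it shells out to the hdfs balancer CLI directly rather than going through the CM API mentioned above (the exact CM API command path varies by CM version, so check the API documentation for your release); the schedule, threshold, and log path are illustrative:

```sh
# Illustrative: run the HDFS Balancer every Sunday at 02:00 as the hdfs user,
# moving blocks until each DataNode is within 10% of the cluster-average utilization.
0 2 * * 0 sudo -u hdfs hdfs balancer -threshold 10 >> /var/log/hdfs-balancer-cron.log 2>&1
```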
09-27-2018
02:19 AM
I've followed your suggestion by adding a new user to the supergroup and granting permissions using ACLs. The user name is 'perl'. I ran a new job:
$ sudo -u perl hadoop distcp /solomon/data/data.txt /solomon
When I ran this job, the application stayed in the pending state.
09-27-2018
12:39 AM
1 Kudo
I'm not aware of an option to disable use of .archive, but you should certainly not be running a service with different minor versions on its hosts.
09-24-2018
06:06 AM
1 Kudo
There is no way to specify a raw scan when creating a remote scanner via the HBase REST API. The backing server implementation does not carry arguments that toggle raw scans, but this could be added as a new feature. Please file a feature request at https://issues.apache.org/jira/browse/HBASE for this. The existing service-end implementation that builds the scanner on the REST server is at (upstream) https://github.com/apache/hbase/blob/master/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java#L74-L105
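As a possible workaround until such a feature exists, the native Java client does expose raw scans. A minimal sketch, assuming an illustrative table name ('my_table') and that simply printing each Result is enough for the caller:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class RawScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) { // illustrative table
      Scan scan = new Scan();
      scan.setRaw(true);      // include delete markers and not-yet-compacted cells
      scan.setMaxVersions();  // raw scans are usually paired with all versions
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(result);
        }
      }
    }
  }
}
```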
09-14-2018
11:28 AM
I have created the /user/history/done and /tmp/log directories. Finally, I checked the log file and it showed that everything is fine:
3:11:28.555 PM INFO AggregatedLogDeletionService aggregated log deletion finished. 3:11:57.677 PM INFO JobHistory History Cleaner started 3:11:57.682 PM INFO JobHistory History Cleaner complete 3:14:27.676 PM INFO JobHistory Starting scan to move intermediate done files 3:14:32.823 PM ERROR JobHistoryServer RECEIVED SIGNAL 15: SIGTERM 3:14:32.826 PM INFO MetricsSystemImpl Stopping JobHistoryServer metrics system... 3:14:32.826 PM INFO MetricsSystemImpl JobHistoryServer metrics system stopped. 3:14:32.827 PM INFO MetricsSystemImpl JobHistoryServer metrics system shutdown complete. 3:14:32.827 PM INFO Server Stopping server on 10033 3:14:32.827 PM INFO Server Stopping IPC Server listener on 10033 3:14:32.827 PM INFO Server Stopping IPC Server Responder 3:14:32.827 PM INFO Server Stopping server on 10020 3:14:32.828 PM INFO Server Stopping IPC Server listener on 10020 3:14:32.828 PM INFO Server Stopping IPC Server Responder 3:14:32.832 PM INFO log Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@chinni:19888 3:14:32.932 PM INFO JobHistory Stopping JobHistory 3:14:32.932 PM INFO JobHistory Stopping History Cleaner/Move To Done 3:14:32.934 PM ERROR AbstractDelegationTokenSecretManager ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted 3:14:32.934 PM INFO JobHistoryServer SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down JobHistoryServer at chinni/127.0.0.1 ************************************************************/ 3:14:34.969 PM INFO JobHistoryServer STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting JobHistoryServer STARTUP_MSG: user = mapred STARTUP_MSG: host = chinni/127.0.0.1 STARTUP_MSG: args = [] STARTUP_MSG: version = 2.6.0-cdh5.15.0 STARTUP_MSG: classpath = 
/run/cloudera-scm-agent/process/389-yarn-JOBHISTORY:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-beanutils-1.9.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/aws-java-sdk-bundle-1.11.134.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/logredactor-1.0.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/htrace-core4-4.0.1-incubating.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoo
p/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/hue-plugins-3.9.0-cdh5.15.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/azure-data-lake-store-sdk-2.2.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/.//parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/.//hadoop-common-2.6.0-cdh5.15.0-tests.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/.//parquet-column.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/.//parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.15.0-.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5
.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/netty-3.10.5.Final.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop-mapreduce/modules/*.jar STARTUP_MSG: build = http://github.com/cloudera/hadoop -r e3cb23a1cb2b89d074171b44e71f207c3d6ffa50 ; compiled by 'jenkins' on 2018-05-24T11:25Z STARTUP_MSG: java = 1.7.0_67 ************************************************************/ 3:14:34.992 PM INFO JobHistoryServer registered UNIX signal handlers for [TERM, HUP, INT] 3:14:35.831 PM INFO MetricsConfig loaded properties from hadoop-metrics2.properties 3:14:35.880 PM INFO MetricsSystemImpl Scheduled snapshot period at 10 second(s). 3:14:35.881 PM INFO MetricsSystemImpl JobHistoryServer metrics system started 3:14:35.889 PM INFO JobHistory JobHistory Init 3:14:36.369 PM INFO JobHistoryUtils Default file system [hdfs://chinni:8020] 3:14:36.494 PM INFO JobHistoryUtils Default file system [hdfs://chinni:8020] 3:14:36.503 PM INFO HistoryFileManager Initializing Existing Jobs... 3:14:36.510 PM INFO HistoryFileManager Found 0 directories to load 3:14:36.510 PM INFO HistoryFileManager Existing job initialization finished. 0.0% of cache is occupied. 3:14:36.512 PM INFO CachedHistoryStorage CachedHistoryStorage Init 3:14:36.531 PM INFO CallQueueManager Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100 3:14:36.538 PM INFO Server Starting Socket Reader #1 for port 10033 3:14:36.671 PM INFO AbstractDelegationTokenSecretManager Updating the current master key for generating delegation tokens 3:14:36.673 PM INFO AbstractDelegationTokenSecretManager Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) 3:14:36.673 PM INFO AbstractDelegationTokenSecretManager Updating the current master key for generating delegation tokens 3:14:36.722 PM INFO log Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 3:14:36.727 PM INFO AuthenticationFilter Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 
3:14:36.731 PM INFO HttpRequestLog Http request log for http.requests.jobhistory is not defined 3:14:36.739 PM INFO HttpServer2 Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 3:14:36.741 PM INFO HttpServer2 Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context jobhistory 3:14:36.741 PM INFO HttpServer2 Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 3:14:36.741 PM INFO HttpServer2 Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 3:14:36.744 PM INFO HttpServer2 adding path spec: /jobhistory/* 3:14:36.744 PM INFO HttpServer2 adding path spec: /ws/* 3:14:36.750 PM INFO HttpServer2 Jetty bound to port 19888 3:14:36.750 PM INFO log jetty-6.1.26.cloudera.4 3:14:36.775 PM INFO log Extract jar:file:/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/jars/hadoop-yarn-common-2.6.0-cdh5.15.0.jar!/webapps/jobhistory to /tmp/Jetty_chinni_19888_jobhistory____wwal40/webapp 3:14:37.104 PM INFO log Started HttpServer2$SelectChannelConnectorWithSafeStartup@chinni:19888 3:14:37.105 PM INFO WebApps Web app /jobhistory started at 19888 3:14:37.366 PM INFO WebApps Registered webapp guice modules 3:14:37.375 PM INFO CallQueueManager Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 3:14:37.380 PM INFO Server Starting Socket Reader #1 for port 10020 3:14:37.386 PM INFO RpcServerFactoryPBImpl Adding protocol org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB to the server 3:14:37.387 PM INFO Server IPC Server Responder: starting 3:14:37.387 PM INFO Server IPC Server listener on 10020: starting 3:14:37.388 PM INFO HistoryClientService Instantiated HistoryClientService at chinni/127.0.0.1:10020 3:14:37.393 PM INFO RMProxy Connecting to ResourceManager at chinni/127.0.0.1:8032 3:14:37.474 PM INFO AggregatedLogDeletionService aggregated log deletion started. 3:14:37.474 PM INFO Server IPC Server Responder: starting 3:14:37.474 PM INFO Server IPC Server listener on 10033: starting 3:14:37.500 PM INFO JobHistoryUtils Default file system [hdfs://chinni:8020] 3:14:37.501 PM INFO JvmPauseMonitor Starting JVM pause monitor 3:14:37.653 PM INFO AggregatedLogDeletionService aggregated log deletion finished. 
3:15:06.674 PM INFO JobHistory History Cleaner started 3:15:06.679 PM INFO JobHistory History Cleaner complete 3:17:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:20:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:23:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:26:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:29:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:32:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:35:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:38:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:41:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:44:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:47:36.673 PM INFO JobHistory Starting scan to move intermediate done files 3:50:36.673 PM INFO JobHistory Starting scan to move intermediate done files 5:14:24.573 PM INFO JvmPauseMonitor Detected pause in JVM or host machine (eg GC): pause of approximately 2981ms No GCs detected 5:16:14.919 PM INFO JobHistory Starting scan to move intermediate done files 5:19:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:22:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:25:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:28:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:31:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:34:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:37:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:40:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:43:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:46:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:49:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:52:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:55:15.620 PM INFO JobHistory Starting scan to move intermediate done files 5:58:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:01:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:04:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:07:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:10:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:13:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:16:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:19:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:22:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:25:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:28:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:31:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:34:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:37:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:40:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:43:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:46:15.620 PM INFO JobHistory Starting scan to move intermediate done files 
6:49:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:52:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:55:15.620 PM INFO JobHistory Starting scan to move intermediate done files 6:58:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:01:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:04:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:07:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:10:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:13:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:16:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:19:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:22:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:25:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:28:15.620 PM INFO JobHistory Starting scan to move intermediate done files 7:31:15.620 PM INFO JobHistory Starting scan to move intermediate done files 11:32:18.232 PM INFO JvmPauseMonitor Detected pause in JVM or host machine (eg GC): pause of approximately 2413ms No GCs detected 11:34:47.559 PM INFO JobHistory Starting scan to move intermediate done files 11:37:47.559 PM INFO JobHistory Starting scan to move intermediate done files 11:40:47.559 PM INFO JobHistory Starting scan to move intermediate done files 11:43:47.559 PM INFO JobHistory Starting scan to move intermediate done files 11:46:47.877 PM INFO JobHistory Starting scan to move intermediate done files 11:49:47.877 PM INFO JobHistory Starting scan to move intermediate done files 11:52:47.877 PM INFO JobHistory Starting scan to move intermediate done files
Thanks a lot, and it's great that your reply matches exactly the work I had already done before I saw it.
09-13-2018
09:40 AM
Here is an example of that log entry; there are tons of them in my logs. I'm on StreamSets 3.4.2 and CDH 5.14. I would love to understand what the root cause of this is.
Removing server 053a1bbcc6b243b0a9c90f37b336fac1 from this tablet's cache 747423b5bf834fbb9a6508aae8eb1f63 AsyncKuduClient *admin 0 New I/O worker #965
09-12-2018
04:58 AM
2 Kudos
In that case the balancer can be run later, but the compaction may still help keep HBase application request latency low, if that is an immediate concern in the cluster.
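If a compaction is the immediate step, it can also be triggered on demand from the HBase shell; a minimal sketch with an illustrative table name:

```
hbase> major_compact 'my_table'
```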
09-11-2018
07:45 AM
Thanks for the help; a proper implementation of readFields() solved the problem.
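For anyone hitting the same issue, here is a minimal sketch of a custom Hadoop Writable with illustrative fields; the key point is that readFields() must read fields back in exactly the same order and with the same types that write() emitted:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class ExampleWritable implements Writable {
  private long id;      // illustrative field
  private String name;  // illustrative field

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(id);
    out.writeUTF(name);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // Must mirror write(): same order, same types, no field skipped.
    id = in.readLong();
    name = in.readUTF();
  }
}
```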