Member since: 07-27-2018
5 Posts
0 Kudos Received
1 Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1925 | 08-10-2018 04:30 PM |
09-13-2018 09:40 AM
Here is an example of that log entry. There are tons of them in my logs. I'm on StreamSets 3.4.2 and CDH 5.14, and I would love to understand what the root cause of this is:

Removing server 053a1bbcc6b243b0a9c90f37b336fac1 from this tablet's cache 747423b5bf834fbb9a6508aae8eb1f63 AsyncKuduClient *admin 0 New I/O worker #965
09-06-2018 09:23 AM
I have a Kafka-->Kudu pipeline that had been running for about two weeks without issue. Yesterday the StreamSets pipeline started failing. Here's a snippet of the pipeline log:

Error while running: com.streamsets.pipeline.api.StageException: KUDU_03 - Errors while interacting with Kudu: Row error for primary key=[-128, 0, 0, 0, 111, -92, -60, -24], tablet=null, server=null, status=Timed out: can not complete before timeout: Batch{operations=193, tablet="84c0c8c073e14c26b0a3da415a84dc53" [0x00000001, 0x00000002), ignoreAllDuplicateRows=false, rpc=KuduRpc(method=Write, tablet=84c0c8c073e14c26b0a3da415a84dc53, attempt=25, DeadlineTracker(timeout=10000, elapsed=9691), Traces: [0ms] sending RPC to server 053a1bbcc6b243b0a9c90f37b336fac1, [12ms] received from server 053a1bbcc6b243b0a9c90f37b336fac1 response Service unavailable: Service unavailable: Soft memory limit exceeded (at 99.05% of capacity). See https://kudu.apache.org/releases/1.6....

I'm also seeing things in the logs about removing servers from a tablet's cache, and "WebSocket queue is full, discarding 'status' message". I've done a preview on the data in StreamSets and the PK field is populated. StreamSets itself is not giving me any information about the record that is failing; it just fails the entire pipeline. At this point I'm not even sure where to look.
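For what it's worth, the 10-second deadline in that trace (DeadlineTracker(timeout=10000, ...)) is a client-side operation timeout, so one knob, separate from fixing the memory pressure on the tablet servers, is to give the Kudu client more time per batch write. Below is a minimal sketch using the plain Kudu Java client; the master address and the 60-second value are illustrative assumptions, and this is not how StreamSets itself configures its Kudu destination.

```java
import org.apache.kudu.client.KuduClient;

public class KuduClientTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical master address; 60 s is an illustrative value.
        // The trace above shows a 10 s deadline (timeout=10000); the plain
        // Java client's default operation timeout is 30 s.
        KuduClient client = new KuduClient.KuduClientBuilder("kudu-master-1:7051")
                .defaultOperationTimeoutMs(60000)
                .build();
        try {
            // ... open the table, create a KuduSession, and apply writes here ...
        } finally {
            client.close();
        }
    }
}
```

Even with a longer timeout, the "Soft memory limit exceeded" rejection indicates the tablet servers themselves are under memory pressure and throttling writes, so raising their memory limit or slowing the ingest rate would be the more durable fix.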
Labels:
- Apache Kudu
08-10-2018 04:30 PM
Restarted my Cloudera Manager server; the problem magically went away.
08-10-2018 07:41 AM
I'm trying to upgrade CDH from 5.9.0 to 5.9.3 (which is what {latest_supported} returns) but I'm running into problems. I was able to use the parcels window to download the parcel for 5.9.3-1.cdh5.9.3.p0.4 no problem. When I click distribute, it quickly gets to 50%, but then stalls. Clicking on "Details" shows that it has completed 6/6 distribution steps. Looking at /var/log/cloudera-scm-server/cloudera-scm-server.log gives very little help:

2018-08-10 09:16:49,997 INFO 417074583@agentServer-745696:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=37ms, min=0ms, max=342ms.
2018-08-10 09:16:49,997 INFO 417074583@agentServer-745696:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
2018-08-10 09:17:50,054 INFO 983417458@agentServer-745700:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=37ms, min=0ms, max=342ms.
2018-08-10 09:17:50,054 INFO 983417458@agentServer-745700:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
2018-08-10 09:18:50,093 INFO 954942595@agentServer-745687:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=37ms, min=0ms, max=342ms.
2018-08-10 09:18:50,093 INFO 954942595@agentServer-745687:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
2018-08-10 09:18:56,513 INFO ScmActive-0:com.cloudera.server.cmf.components.ScmActive: (119 skipped) ScmActive completed successfully.
2018-08-10 09:19:50,166 INFO 235172469@agentServer-745703:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=38ms, min=0ms, max=342ms.
2018-08-10 09:19:50,166 INFO 235172469@agentServer-745703:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
2018-08-10 09:20:50,225 INFO 1657097256@agentServer-745702:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=38ms, min=0ms, max=342ms.
2018-08-10 09:20:50,225 INFO 1657097256@agentServer-745702:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
2018-08-10 09:21:32,740 INFO 1273326417@scm-web-758047:com.cloudera.parcel.components.ParcelManagerImpl: Distributing parcel CDH:5.9.3-1.cdh5.9.3.p0.4 on cluster cluster
2018-08-10 09:22:02,895 INFO 235172469@agentServer-745703:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=40ms, min=0ms, max=343ms.
2018-08-10 09:22:02,895 INFO 235172469@agentServer-745703:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=16ms.
This is a 6-node cluster running CentOS 6.6, if that helps. I appreciate any guidance to help me get this upgrade moving. Ultimately I need to get CDH & CM to 5.15, but I wanted to have a success under my belt before I tackled that.
Labels:
- Cloudera Manager
- Manual Installation