@Ed Prout One way around the issue is to monitor the success relationship out of the GetSplunk processor with a MonitorActivity processor. If no data passes through within a set time period, MonitorActivity generates an "inactive" flow file, which can be used to trigger an ExecuteScript processor that runs a curl script to restart the processor. Not an elegant solution, but it should work.
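A minimal sketch of what that restart script might look like, assuming a NiFi 1.x REST API with no authentication; `NIFI_URL` and `PROC_ID` are placeholders for your environment, and the revision version must in practice match the processor's current revision (fetch it with a GET first):

```shell
#!/bin/sh
# Sketch of a restart script that ExecuteScript could invoke.
# NIFI_URL and PROC_ID are placeholder values; adjust for your instance.
NIFI_URL="${NIFI_URL:-http://localhost:8080/nifi-api}"
PROC_ID="${PROC_ID:-<getsplunk-processor-uuid>}"

# Build (but do not execute) the stop/start calls so the sketch is
# inspectable; drop the echo to run them for real. Note: revision
# version 0 is a placeholder -- NiFi requires the current revision.
restart_cmds() {
  for STATE in STOPPED RUNNING; do
    echo "curl -s -X PUT '${NIFI_URL}/processors/${PROC_ID}/run-status'" \
         "-H 'Content-Type: application/json'" \
         "-d '{\"revision\":{\"version\":0},\"state\":\"${STATE}\"}'"
  done
}
restart_cmds
```

Stopping and then starting is the usual way to "restart" a processor over the REST API, since there is no single restart call.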
If things aren't working with HDP 2.5 or HDCloud, I'd recommend starting with [Troubleshooting S3a](https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-trouble/index.html). If you are using ASF release binaries, those docs are mostly valid too, though since we pulled in much of the later S3a work coming in Hadoop 2.8 (after writing them!), they are a bit inconsistent. The closest ASF troubleshooting docs are those for [Hadoop 2.8](https://github.com/apache/hadoop/blob/branch-2.8/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md#troubleshooting-s3a).

As Kasper pointed out, this is due to AWS JAR versioning. The Amazon SDK has been pretty brittle against change, and you *must* run with the same version of the AWS SDK that Hadoop was built with (which in turn needs a consistent version of Jackson, ...):

- Hadoop 2.7.x: AWS SDK 1.7.4
- Hadoop 2.8.x: AWS SDK 1.10.6
- Hadoop 2.9+: probably 1.10.11 or later, with Jackson bumped up to 2.7.8 to match.
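A quick way to audit this, sketched here under the assumption that `HADOOP_HOME` points at your Hadoop install (adjust the path for HDP layouts), is to list the AWS SDK and Jackson jars actually on the classpath so a version mismatch is easy to spot:

```shell
#!/bin/sh
# check_aws_jars: print the AWS SDK and jackson-databind jar filenames
# found under HADOOP_HOME, so their versions can be compared against the
# version Hadoop was built with. HADOOP_HOME default is an assumption.
check_aws_jars() {
  find "${HADOOP_HOME:-/usr/lib/hadoop}" \
    \( -name 'aws-java-sdk*.jar' -o -name 'jackson-databind*.jar' \) \
    -exec basename {} \; 2>/dev/null
}
check_aws_jars
```

If the AWS SDK version printed doesn't match the one your Hadoop line expects (e.g. 1.7.4 for Hadoop 2.7.x), that's the first thing to fix.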