You can indeed hand work off to a different (asynchronous) thread from your streaming batch RDDs. However, make sure to use the StreamingContext.remember(…) method so the DStream keeps each batch's data around for at least as long as your asynchronous processing of that batch takes to complete.
Some aspects of this are also discussed in Spark's streaming programming guide (in the context of Spark SQL, but it generalises to your use case in the same way):
""" You can also run SQL queries on tables defined on streaming data from a different thread (that is, asynchronous to the running StreamingContext). Just make sure that you set the StreamingContext to remember a sufficient amount of streaming data such that the query can run. Otherwise the StreamingContext, which is unaware of the any asynchronous SQL queries, will delete off old streaming data before the query can complete. For example, if you want to query the last batch, but your query can take 5 minutes to run, then call streamingContext.remember(Minutes(5)) (in Scala, or equivalent in other languages).