@Chaitanya D
This is possible with a combination of the Hadoop shell and Spark.
hadoop fs -ls /filedirectory/*txt_processed
The above command will return the files you need. You can then pass the result to Spark and process the files however you like.
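For example, since Spark's file readers understand the same glob syntax, the matched files can also be loaded directly. A minimal sketch, assuming an existing SparkSession named spark and plain-text files:

// Same glob pattern as the -ls command above.
val processed = spark.read.textFile("/filedirectory/*txt_processed")
processed.show(5)  // peek at a few lines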
Alternatively, you can run the same listing from within Spark (Scala) using sys.process:
import sys.process._
val lsResult = Seq("hadoop", "fs", "-ls", "hdfs://filedirectory/*txt_processed").!!
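From there you can pull the full paths out of lsResult and hand them to Spark. A rough sketch, assuming the default hadoop fs -ls output format, where each file line ends with the full path:

// Take the last whitespace-separated field of each matching line.
val paths = lsResult.split("\n")
  .filter(_.contains("txt_processed"))
  .map(_.trim.split("\\s+").last)

// read.textFile accepts multiple paths (Spark 2.x+); assumes an
// existing SparkSession named `spark`.
val files = spark.read.textFile(paths: _*)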
Hope it helps!