@kishore sanchina LLAP is a long-running service, so it will preempt memory on behalf of the llap queue and hold onto it for the life of the daemons. The best practice is to dedicate nodes to LLAP workloads so other engines are not competing for that memory.
You can use the LLAPContext in Spark, which streams data from HDFS through the LLAP daemons to the Spark executors. Keep in mind that this read path is really a Hive process rather than native Spark: it lets you apply Hive's masking and filtering security features, but you may see a 3x-4x performance degradation compared to reading the files directly.
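For reference, here is roughly what that looks like from the Spark side. This is a minimal sketch, not a verified example: it assumes the Hortonworks spark-llap connector is on the classpath, and the class path, config key, JDBC URL, and table name below are placeholders that vary by connector version, so check the spark-llap documentation for your release.

```scala
// Minimal sketch, assuming the Hortonworks spark-llap connector is on the
// classpath. The class path, the hiveserver2 config key, and the JDBC URL
// are illustrative and version-dependent; verify them against the
// spark-llap documentation for your release.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.llap.LlapContext

object LlapReadExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("llap-read-example")
      // Hypothetical property name and HiveServer2 Interactive (LLAP) endpoint.
      .set("spark.sql.hive.hiveserver2.url", "jdbc:hive2://llap-host:10500/")

    val sc = new SparkContext(conf)

    // Queries issued through the LlapContext are planned and read via Hive/LLAP,
    // so masking and filtering policies are enforced, unlike a plain Spark read
    // of the underlying files straight off HDFS.
    val llap = new LlapContext(sc)
    llap.sql("SELECT * FROM sales.customers LIMIT 10").show()
  }
}
```

Because every row is funneled through the Hive/LLAP daemons rather than scanned directly by the executors, this is also where the 3x-4x slowdown mentioned above tends to come from.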