LLAP not using io cache

Rising Star

I've set up LLAP and it is working fine, but it is not using the IO cache. I've set the options below in both the CLI and HS2, but Grafana shows no cache being used (and the HDFS NameNode is very busy keeping up with the edits). Any ideas on what I might be missing?

--hiveconf hive.execution.mode=llap

--hiveconf hive.llap.execution.mode=all

--hiveconf hive.llap.io.enabled=true

--hiveconf hive.llap.daemon.service.hosts=@llap0
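To confirm a session actually picks these up, the values can be echoed back from beeline (SET with just the property name prints the current value; as far as I know, hive.llap.io.memory.size is the property behind the cache allocation):

set hive.llap.io.enabled;

set hive.llap.io.memory.mode;

set hive.llap.io.memory.size;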

1 ACCEPTED SOLUTION

Super Guru

@James Dinkel

Is there a typo in your question above? You mention hive.llap.iomemory.mode.cache.

The correct form is: set hive.llap.io.memory.mode=cache

Just checking before moving forward.

What makes me believe it is a typo is that you stated the value was null, which is not correct. The default is actually "cache". That makes me believe you mistyped the variable name.
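For reference, the corresponding block in hive-interactive-site would look roughly like the following; the size value here is purely illustrative (use whatever you have allocated per daemon, and check the size format your version accepts):

hive.llap.io.enabled=true

hive.llap.io.memory.mode=cache

hive.llap.io.memory.size=12Gb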


18 REPLIES

Super Guru

@James Dinkel

"In-Memory Cache per Daemon", by default is set to none. Did you allocate anything to it? This configuration is also available in hive-interactive-site.

Rising Star

@Constantin Stanca yes, in-memory cache per daemon = 12288 MB

Super Guru

Scott mentioned some good practices for memory sizing below.


Hi @James Dinkel

I'm guessing there is a memory sizing issue. Make sure you follow these sizing rules:

  • MemPerDaemon (container size) > LLAP heapsize (Java process heap) + CacheSize (off-heap) + headroom
    • Multiple of the YARN minimum allocation
    • Should be less than yarn.nodemanager.resource.memory-mb
    • Headroom is capped at 6 GB
  • QueueSize (YARN queue) >= MemPerDaemon * num daemons + slider AM + (Tez AM size * concurrency)
  • CacheSize = MemPerDaemon - (hive tez container size * number of executors)
  • Num executors per daemon = (MemPerDaemon - CacheSize) / hive tez container size
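To make the arithmetic concrete, here is a rough worked example with purely illustrative numbers (all values in MB unless noted; none of these are recommendations for your cluster):

  • yarn.nodemanager.resource.memory-mb = 98304 (96 GB per node)
  • MemPerDaemon = 90112 (88 GB; a multiple of a 1024 YARN minimum allocation and below the 98304 node total)
  • hive.tez.container.size = 4096 with 16 executors per daemon, so LLAP heapsize ≈ 16 * 4096 = 65536
  • Headroom ≈ 6144 (capped at 6 GB)
  • CacheSize ≈ 90112 - 65536 - 6144 = 18432 (about 18 GB off-heap)
  • QueueSize >= 90112 * 3 daemons + 1024 (slider AM) + 4096 * 4 (Tez AMs at a concurrency of 4) = 287744, roughly 281 GB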

In addition, be sure your LLAP queue is set up appropriately and has sufficient capacity:

  • <queue>.user-limit-factor=1
  • <queue>.maximum-am-resource-percent=1 (it's actually a factor between 0 and 1)
  • <queue>.capacity=100
  • <queue>.maximum-capacity=100
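Spelled out as capacity-scheduler properties (assuming the interactive queue is named llap directly under root; substitute your own queue path), those four settings would be roughly:

yarn.scheduler.capacity.root.llap.capacity=100

yarn.scheduler.capacity.root.llap.maximum-capacity=100

yarn.scheduler.capacity.root.llap.user-limit-factor=1

yarn.scheduler.capacity.root.llap.maximum-am-resource-percent=1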

Rising Star

I forgot to update this. Actually, LLAP and the cache were set up correctly. In the queries taken from Teradata, each of the 200k queries performs several joins, and one of the joins was on a column that was null, so the result set was null. A side benefit, though, was a nice gain in knowledge on all the knobs/levers of this product. And it is a very nice product. My experience is in Teradata/Netezza/Hawq, and I've found LLAP to be a clear winner in replacing those. Very fast, very robust.

A couple of notes on the things that mattered:

-Use ORC (duh)

-Get Partitioned by / Clustered by right (use hive --orcfiledump, as shown in Sergey's slides, to make sure you get 100k records per ORC file; see the example after this list)

-Get number of nodes / appmasters right

-Use local Metastore

-Get heap's right

-Get yarn capacity scheduler settings right

-Increase executors

-(probably not supported) clone HS2 JVMs for more throughput if needed.
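On the orcfiledump note above: a typical invocation looks like the following (the table path here is just a placeholder), and the row count and stripe information it prints is what tells you how many records landed in each ORC file:

hive --orcfiledump /apps/hive/warehouse/mydb.db/mytable/000000_0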

Pull up the Hive UI or Grafana, sit back, smile, and enjoy watching the transactions fly.

Hi,

I'm using HDP 2.6.4 with Hive 2 + LLAP, of course.

I've followed all of your recommendations here except for your last comment, @James Dinkel, because you do not explain all the other settings you've fixed.

Basically, I run the exact same query twice or more, and it is not faster the second time than the first, so maybe I'm wrong about what the cache does and how to use it. Anyway, how do you know whether you hit the data in the cache or not?

Thanks

Contributor

I am seeing the same issue, @Scott Shaw. I followed your recommendations on the daemon and heap settings, but still no luck.

Contributor

Is there any way to debug the IO cache component to find out why it's not caching?
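One possible starting point, with the caveat that the exact metric names may differ by version: each LLAP daemon has a web UI (default port 15002, from hive.llap.daemon.web.port), and its standard /jmx endpoint exposes the daemon's cache counters, so something like

curl http://<llap-daemon-host>:15002/jmx | grep -i cache

run before and after repeating the same query should show whether the cache capacity is filling up and whether reads are hitting it. The cache panels in Grafana mentioned earlier in the thread should reflect the same thing.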