Allocate static memory in CPU cache in c/c++ : is it possible?

#1
Is it possible to explicitly create static objects in the CPU cache, so as to make sure those objects always stay in the cache and no performance hit is ever taken from reaching all the way into RAM or, god forbid, HDD virtual memory?

I am particularly interested in targeting the large shared L3 cache. I'm not intending to target L1, L2, the instruction cache or any other cache, just the largest on-die chunk of memory there is.

And just to clarify, to differentiate this from other threads I searched before posting: I am not interested in reserving the entire cache, just a small region, a few classes' worth.

#2
No. Cache is not addressable, so you can't allocate objects in it.

What it seems like you meant to ask is: *Having allocated space in virtual memory, can I ensure that I always get cache hits?*

This is a more complicated question, and the answer is: partly.

You definitely can avoid being swapped out to disk, by using your OS's memory-management API (e.g. `mlock()`) to mark the region as non-pageable, or by allocating from the "non-paged pool" to begin with.

I don't believe there's a similar API to pin memory into CPU cache. Even if you could reserve CPU cache for that block, you can't avoid cache misses. If another core writes to the memory, ownership WILL be transferred, and you WILL suffer a cache miss and associated bus transfer (possibly to main memory, possibly to the cache of the other core).

As Mathew mentions in his comment, you can also force the cache miss to occur in parallel with other useful work in the pipeline, so that the data is in cache when you need it.
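One common way to overlap the miss with useful work is a software prefetch hint, e.g. GCC/Clang's `__builtin_prefetch`. A sketch (the function name and the prefetch distance of 16 elements are illustrative tuning choices, not a recommendation):

```cpp
#include <cstddef>

// Distance (in elements) to prefetch ahead -- a tuning parameter
// chosen here purely for illustration.
constexpr std::size_t PREFETCH_DIST = 16;

long sum_with_prefetch(const long* data, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) {
        // Hint: start fetching a line we will need soon, while the
        // additions below proceed. rw=0 (read), locality=3 (keep cached).
        if (i + PREFETCH_DIST < n)
            __builtin_prefetch(&data[i + PREFETCH_DIST], 0, 3);
        total += data[i];
    }
    return total;
}
```

The hint is advisory: the hardware may ignore it, and modern prefetchers often handle simple sequential patterns like this one on their own.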

#3
You could run another thread that loops over the data and brings it into the L3 cache.


