cuFFT Plan Cache
- class cupy.fft._cache.PlanCache(Py_ssize_t size=16, Py_ssize_t memsize=-1, int dev=-1)
A per-thread, per-device, least recently used (LRU) cache for cuFFT plans.
- size (int) – The number of plans that the cache can accommodate. The default is 16. Setting this to -1 makes the limit ignored.
- memsize (int) – The maximum amount of GPU memory, in bytes, that the cached plans may use for their work areas. The default is -1, meaning unlimited.
- dev (int) – The ID of the device that the cache targets.
By setting either size to 0 (by calling set_size()) or memsize to 0 (by calling set_memsize()), the cache is disabled, and any operation on it is a no-op. To re-enable it, simply set a nonzero size and/or memsize.
This class can be instantiated by users, but it is discouraged. Instead, we expect the following canonical usage pattern to retrieve a handle to the cache through get_plan_cache():
from cupy.cuda import Device
from cupy.fft.config import get_plan_cache

# get the cache for device n
with Device(n):
    cache = get_plan_cache()
    cache.set_size(0)  # disable the cache
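A cache disabled this way can be re-enabled later by setting a nonzero size again. The sketch below illustrates this; the helper name reenable_plan_cache is hypothetical, and the body is guarded so it degrades to a no-op on machines without CuPy or a usable CUDA device:

```python
def reenable_plan_cache(size=16, memsize=-1):
    """Restore a nonzero size (and an optional memory limit) on the current
    device's plan cache. Returns True on success, False when CuPy or a
    CUDA device is unavailable (sketch, not part of the CuPy API)."""
    try:
        from cupy.fft.config import get_plan_cache
        cache = get_plan_cache()
    except Exception:           # no CuPy install or no CUDA device
        return False
    cache.set_size(size)        # any nonzero size re-enables the cache
    cache.set_memsize(memsize)  # -1 means unlimited work-area memory
    return True

ok = reenable_plan_cache()
```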
In particular, the cache for device n should be manipulated under device n's context.
This class is thread-safe since by default it is created on a per-thread basis. When starting a new thread, a new cache is not initialized until get_plan_cache() is called or the constructor is manually invoked.
For multi-GPU plans, the plan will be added to each participating GPU’s cache. Upon removal (by any of the caches), the plan will be removed from each participating GPU’s cache.
This cache supports the iterator protocol; iterating yields (key, node) 2-tuples, starting from the most recently used plan.
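The LRU bookkeeping and most-recently-used-first iteration described above can be modeled in pure Python with collections.OrderedDict. This is only an illustrative sketch of the eviction policy, not CuPy's actual implementation; the class name LRUSketch and its values are made up for the example:

```python
from collections import OrderedDict

class LRUSketch:
    """Toy LRU cache mirroring PlanCache's eviction and iteration order."""

    def __init__(self, size=16):
        self.size = size
        self._data = OrderedDict()  # least recently used entry comes first

    def insert(self, key, plan):
        if key in self._data:
            self._data.move_to_end(key)      # re-inserting refreshes recency
        self._data[key] = plan
        while len(self._data) > self.size:
            self._data.popitem(last=False)   # evict the least recently used

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)          # a hit also refreshes recency
        return self._data[key]

    def __iter__(self):
        # Yield (key, plan) 2-tuples, most recently used first.
        return iter(reversed(self._data.items()))

cache = LRUSketch(size=2)
cache.insert("a", 1)
cache.insert("b", 2)
cache.get("a")            # "a" becomes the most recently used entry
cache.insert("c", 3)      # evicts "b", the least recently used
print(list(cache))        # [('c', 3), ('a', 1)]
```

The real PlanCache keys are derived from the transform parameters, so a repeated FFT of the same shape and type hits the cache instead of rebuilding a plan.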
- get(self, tuple key, default=None)
- get_curr_memsize(self) -> Py_ssize_t
- get_curr_size(self) -> Py_ssize_t
- get_memsize(self) -> Py_ssize_t
- get_size(self) -> Py_ssize_t
- set_memsize(self, Py_ssize_t memsize)
- set_size(self, Py_ssize_t size)
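The getters above report the cache's limits and current usage. A small sketch of querying them follows; the helper name plan_cache_stats is hypothetical, and the body is guarded so it returns None where CuPy or a CUDA device is unavailable:

```python
def plan_cache_stats():
    """Return (size, curr_size, memsize, curr_memsize) for the current
    device's plan cache, or None when CuPy or a usable CUDA device is
    absent (sketch, not part of the CuPy API)."""
    try:
        from cupy.fft.config import get_plan_cache
        cache = get_plan_cache()
    except Exception:           # no CuPy install or no CUDA device
        return None
    return (cache.get_size(),          # capacity limit (default 16)
            cache.get_curr_size(),     # number of plans currently cached
            cache.get_memsize(),       # work-area limit (-1 = unlimited)
            cache.get_curr_memsize())  # bytes currently used by work areas

stats = plan_cache_stats()
```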