cuFFT Plan Cache

class cupy.fft._cache.PlanCache(Py_ssize_t size=16, Py_ssize_t memsize=-1, int dev=-1)

A per-thread, per-device, least recently used (LRU) cache for cuFFT plans.

Parameters
  • size (int) – The number of plans that the cache can accommodate. The default is 16. Setting this to -1 makes this limit ignored.

  • memsize (int) – The amount of GPU memory, in bytes, that the plans in the cache may use for their work areas. The default is -1, meaning it is unlimited.

  • dev (int) – The ID of the device that the cache targets.

Note

  1. By setting either size to 0 (by calling set_size()) or memsize to 0 (by calling set_memsize()), the cache is disabled, and any operation is a no-op. To re-enable it, simply set a nonzero size and/or memsize.

  2. This class can be instantiated by users, but this is discouraged. Instead, the canonical usage pattern is to retrieve a handle to the cache through get_plan_cache():

    from cupy.cuda import Device
    from cupy.fft.config import get_plan_cache
    
    # get the cache for device n
    with Device(n):
        cache = get_plan_cache()
        cache.set_size(0)  # disable the cache
    

    In particular, the cache for device n should be manipulated under device n’s context.

  3. This class is thread-safe, since by default it is created on a per-thread basis. When starting a new thread, a new cache is not initialized until get_plan_cache() is called or the constructor is manually invoked.

  4. For multi-GPU plans, the plan will be added to each participating GPU’s cache. Upon removal (by any of the caches), the plan will be removed from each participating GPU’s cache.

  5. This cache supports the iterator protocol: iterating yields 2-tuples (key, node), starting from the most recently used plan.
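    The semantics above (a size limit, a total-memory limit for work areas, disabling via a zero limit, and MRU-first iteration) can be sketched with a toy pure-Python LRU cache. This is an illustrative model only, not CuPy's actual implementation; the class and method names below are hypothetical:

    ```python
    from collections import OrderedDict

    class MiniPlanCache:
        """Toy LRU cache mirroring the documented PlanCache semantics:
        a plan-count limit, a total work-area memory limit, no-op behavior
        when disabled, and MRU-first iteration. Illustrative only."""

        def __init__(self, size=16, memsize=-1):
            self.size = size          # -1 means unlimited
            self.memsize = memsize    # -1 means unlimited
            # MRU entry is kept at the front of the OrderedDict
            self._d = OrderedDict()   # key -> (plan, work_area_bytes)

        def _curr_memsize(self):
            return sum(m for _, m in self._d.values())

        def insert(self, key, plan, memsize):
            if self.size == 0 or self.memsize == 0:
                return  # cache disabled: every operation is a no-op
            self._d.pop(key, None)
            self._d[key] = (plan, memsize)
            self._d.move_to_end(key, last=False)  # mark as most recently used
            # evict least recently used entries until both limits hold
            while ((self.size != -1 and len(self._d) > self.size) or
                   (self.memsize != -1 and self._curr_memsize() > self.memsize)):
                self._d.popitem(last=True)

        def get(self, key, default=None):
            if key in self._d:
                self._d.move_to_end(key, last=False)  # touching refreshes LRU order
                return self._d[key][0]
            return default

        def __iter__(self):
            # yields (key, node) pairs, most recently used first
            return iter(self._d.items())
    ```

    For example, with size=2, inserting a third plan evicts the least recently used one, and iteration then yields the two survivors in MRU-first order.
    
    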

clear(self)
get(self, tuple key, default=None)
get_curr_memsize(self) → Py_ssize_t
get_curr_size(self) → Py_ssize_t
get_memsize(self) → Py_ssize_t
get_size(self) → Py_ssize_t
set_memsize(self, Py_ssize_t memsize)
set_size(self, Py_ssize_t size)
show_info(self)
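
A minimal usage sketch tying these methods together is below. It requires CuPy with a CUDA-capable GPU, so it is guarded to degrade to a no-op elsewhere; the specific array size and dtype are arbitrary choices for illustration:

```python
# Usage sketch: populate the cache with one FFT plan, then inspect it.
# Guarded so the script is a no-op without CuPy or a working GPU.
try:
    import cupy
    from cupy.fft.config import get_plan_cache

    a = cupy.random.random(64).astype(cupy.complex64)
    cupy.fft.fft(a)                  # creating the plan populates the cache
    cache = get_plan_cache()         # cache for the current device
    assert cache.get_curr_size() >= 1
    cache.show_info()                # print a summary of the cached plans
    cache.set_size(4)                # shrink the capacity to 4 plans
    gpu_available = True
except Exception:                    # ImportError or CUDA runtime errors
    gpu_available = False
```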