cupy.cuda.MemoryPool#

class cupy.cuda.MemoryPool(allocator=None)[source]#

Memory pool for all GPU devices on the host.

A memory pool preserves any allocations even if they are freed by the user. Freed memory buffers are held by the memory pool as free blocks, and they are reused for further memory allocations of the same sizes. The allocated blocks are managed for each device, so one instance of this class can be used for multiple devices.
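For illustration, a minimal sketch of using a pool as the default allocator and observing block reuse (the 1 MiB buffer size is an arbitrary choice, not part of the API):

import cupy
from cupy.cuda import MemoryPool, set_allocator

pool = MemoryPool()
set_allocator(pool.malloc)

a = cupy.empty(1024 * 1024, dtype=cupy.uint8)  # 1 MiB allocated through the pool
del a                                          # the block is kept by the pool as a free block
print(pool.free_bytes())                       # nonzero: memory is held for reuse
b = cupy.empty(1024 * 1024, dtype=cupy.uint8)  # reuses the cached block; no cudaMalloc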

Note

When an allocation is skipped by reusing a pre-allocated block, cudaMalloc is not called and no CPU-GPU synchronization occurs. This makes interleaved memory allocations and kernel invocations very fast.

Note

The memory pool holds allocated blocks without freeing them as much as possible. This makes the program hold most of the device memory, which may push other CUDA programs running in parallel into an out-of-memory situation.

Parameters

allocator (function) – The base CuPy memory allocator. It is used for allocating new blocks when the blocks of the required size are all in use.

Methods

free_all_blocks(self, stream=None)#

Releases free blocks.

Parameters

stream (cupy.cuda.Stream) – Release free blocks in the arena of the given stream. The default releases blocks in all arenas.

Note

A memory pool may split a free block for space efficiency. A split block is not released until all its parts are merged back into one, even if free_all_blocks() is called.
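A sketch of releasing cached blocks back to the device (the buffer size is arbitrary):

import cupy
from cupy.cuda import MemoryPool, set_allocator

pool = MemoryPool()
set_allocator(pool.malloc)

a = cupy.empty(256 * 1024, dtype=cupy.uint8)
del a                        # the block becomes a free block held by the pool
print(pool.total_bytes())    # nonzero: device memory is still acquired
pool.free_all_blocks()       # hand unsplit free blocks back to the device
print(pool.total_bytes())    # typically 0 again, unless some blocks remain split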

free_all_free(self)#

(Deprecated) Use free_all_blocks() instead.

free_bytes(self) → size_t#

Gets the total number of bytes acquired but not used by the pool.

Returns

The total number of bytes acquired but not used by the pool.

Return type

int

get_limit(self) → size_t#

Gets the upper limit of memory allocation of the current device.

Returns

The number of bytes set as the limit.

Return type

int

malloc(self, size_t size) → MemoryPointer#

Allocates the memory, from the pool if possible.

This method can be used as a CuPy memory allocator. The simplest way to use a memory pool as the default allocator is the following code:

from cupy.cuda import MemoryPool, set_allocator

set_allocator(MemoryPool().malloc)

Similarly, the following code uses a memory pool of managed memory (unified memory) as the default allocator:

from cupy.cuda import MemoryPool, malloc_managed, set_allocator

set_allocator(MemoryPool(malloc_managed).malloc)

Parameters

size (int) – Size of the memory buffer to allocate in bytes.

Returns

Pointer to the allocated buffer.

Return type

MemoryPointer
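malloc() can also be called directly, for example (the 256-byte request is an arbitrary value; note the pool may round sizes up):

from cupy.cuda import MemoryPool

pool = MemoryPool()
ptr = pool.malloc(256)    # a MemoryPointer backed by the pool
print(ptr.mem.size)       # actual block size; the pool may round the request up
del ptr                   # the buffer returns to the pool rather than the device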

n_free_blocks(self) → size_t#

Counts the total number of free blocks.

Returns

The total number of free blocks.

Return type

int
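For instance, a sketch (exact counts can vary with pool internals such as block splitting):

import cupy
from cupy.cuda import MemoryPool, set_allocator

pool = MemoryPool()
set_allocator(pool.malloc)

a = cupy.empty(1024, dtype=cupy.uint8)
print(pool.n_free_blocks())   # may be 0 while the block is in use
del a
print(pool.n_free_blocks())   # typically 1: the freed block is now cached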

set_limit(self, size=None, fraction=None)#

Sets the upper limit of memory allocation of the current device.

When fraction is specified, its value will become a fraction of the amount of GPU memory that is available for allocation. For example, if you have a GPU with 2 GiB memory, you can either use set_limit(fraction=0.5) or set_limit(size=1024**3) to limit the memory size to 1 GiB.

size and fraction cannot be specified at the same time. If neither is specified, or if 0 is specified, the limit is disabled.

Note

You can also set the limit by using the CUPY_GPU_MEMORY_LIMIT environment variable; see Environment variables for details. The limit set by this method supersedes the value specified in the environment variable.

Also note that this method only changes the limit for the current device, whereas the environment variable sets the default limit for all devices.

Parameters
  • size (int) – Limit size in bytes.

  • fraction (float) – Fraction in the range of [0, 1].
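A short sketch combining set_limit() with get_limit() (the 1 GiB cap is an arbitrary example value):

from cupy.cuda import MemoryPool, set_allocator

pool = MemoryPool()
set_allocator(pool.malloc)

pool.set_limit(size=1024**3)   # cap allocations on the current device at 1 GiB
print(pool.get_limit())        # 1073741824
pool.set_limit(size=0)         # disable the limit again

Allocations that would exceed the limit raise cupy.cuda.memory.OutOfMemoryError.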

total_bytes(self) → size_t#

Gets the total number of bytes acquired by the pool.

Returns

The total number of bytes acquired by the pool.

Return type

int

used_bytes(self) → size_t#

Gets the total number of bytes used by the pool.

Returns

The total number of bytes used by the pool.

Return type

int
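The three counters are related: total_bytes() equals used_bytes() plus free_bytes(). A small sketch (buffer sizes arbitrary):

import cupy
from cupy.cuda import MemoryPool, set_allocator

pool = MemoryPool()
set_allocator(pool.malloc)

a = cupy.empty(512 * 1024, dtype=cupy.uint8)
b = cupy.empty(512 * 1024, dtype=cupy.uint8)
del b    # b's block becomes a free block; used_bytes() drops, free_bytes() grows
assert pool.total_bytes() == pool.used_bytes() + pool.free_bytes()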

__eq__(value, /)#

Return self==value.

__ne__(value, /)#

Return self!=value.

__lt__(value, /)#

Return self<value.

__le__(value, /)#

Return self<=value.

__gt__(value, /)#

Return self>value.

__ge__(value, /)#

Return self>=value.