cupy.cuda.malloc_managed

cupy.cuda.malloc_managed(size_t size) → MemoryPointer

Allocate managed memory (unified memory).

This method can be used as a CuPy memory allocator. The simplest way to use managed memory as the default allocator is the following code:

from cupy.cuda import malloc_managed, set_allocator
set_allocator(malloc_managed)
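
For context, a self-contained sketch (assuming CuPy and a CUDA-capable device are available; the array contents are illustrative): once the allocator is replaced, subsequent CuPy array allocations are served from managed memory.

import cupy
from cupy.cuda import malloc_managed, set_allocator

set_allocator(malloc_managed)

# Subsequent device allocations go through malloc_managed.
x = cupy.arange(5, dtype=cupy.float32)
print(x.sum())   # 10.0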

The advantage of using managed memory in CuPy is that device memory oversubscription is possible for GPUs that report a non-zero value for the device attribute cudaDevAttrConcurrentManagedAccess. CUDA >= 8.0 with a Pascal or later GPU is preferable.
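
A minimal sketch of gating on that attribute before switching allocators (the 'ConcurrentManagedAccess' key name assumes CuPy's Device.attributes dictionary, which drops the cudaDevAttr prefix; verify it against your CuPy version):

import cupy
from cupy.cuda import malloc_managed, set_allocator

dev = cupy.cuda.Device()
# Device.attributes maps attribute names (without the cudaDevAttr prefix)
# to integer values; 0 means concurrent managed access is unsupported.
if dev.attributes.get('ConcurrentManagedAccess', 0):
    # Oversubscription is possible: allocations may exceed physical
    # device memory and are paged in on demand.
    set_allocator(malloc_managed)
else:
    print('Concurrent managed access not supported on this device.')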

Read more at: https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#axzz4qygc1Ry1

Parameters:

size (int) – Size of the memory allocation in bytes.

Returns:

Pointer to the allocated buffer.

Return type:

MemoryPointer