NVIDIA CUDA Library Documentation 4.2

driver_types.h File Reference

#include "host_defines.h"
#include <limits.h>
#include <stddef.h>
Include dependency graph for driver_types.h: (graph omitted)
Files that directly or indirectly include this file: (graph omitted)

Data Structures

struct  cudaChannelFormatDesc
 CUDA Channel format descriptor. More...
struct  cudaPitchedPtr
 CUDA Pitched memory pointer. More...
struct  cudaExtent
 CUDA extent. More...
struct  cudaPos
 CUDA 3D position. More...
struct  cudaMemcpy3DParms
 CUDA 3D memory copying parameters. More...
struct  cudaMemcpy3DPeerParms
 CUDA 3D cross-device memory copying parameters. More...
struct  cudaPointerAttributes
 CUDA pointer attributes. More...
struct  cudaFuncAttributes
 CUDA function attributes. More...
struct  cudaDeviceProp
 CUDA device properties. More...
struct  cudaIpcEventHandle_st
struct  cudaIpcMemHandle_st
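
The pitched-pointer, extent, and 3D-copy structures above are typically used together when copying a 3D block of memory. A minimal host-side sketch (the sizes are illustrative and error checking is omitted; this is not an excerpt from the header):

```cpp
#include <cuda_runtime.h>
#include <stdlib.h>

int main(void)
{
    const size_t w = 64, h = 32, d = 8;              /* elements, illustrative */

    /* For non-array memory, cudaExtent::width is given in bytes. */
    cudaExtent extent = make_cudaExtent(w * sizeof(float), h, d);

    cudaPitchedPtr devPtr;
    cudaMalloc3D(&devPtr, extent);                   /* pitched device allocation */

    float *host = (float *)malloc(w * h * d * sizeof(float));

    cudaMemcpy3DParms p = {0};                       /* zero unused fields */
    p.srcPtr = make_cudaPitchedPtr(host, w * sizeof(float), w, h);
    p.dstPtr = devPtr;
    p.extent = extent;
    p.kind   = cudaMemcpyHostToDevice;
    cudaMemcpy3D(&p);

    cudaFree(devPtr.ptr);
    free(host);
    return 0;
}
```

Zero-initializing the cudaMemcpy3DParms structure before filling it in is important, since the unused source/destination fields (array vs. pointer) must be zero.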

Variables

 cudaSuccess = 0
 The API call returned with no errors.
 cudaErrorMissingConfiguration = 1
 The device function being invoked (usually via cudaLaunch()) was not previously configured via the cudaConfigureCall() function.
 cudaErrorMemoryAllocation = 2
 The API call failed because it was unable to allocate enough memory to perform the requested operation.
 cudaErrorInitializationError = 3
 The API call failed because the CUDA driver and runtime could not be initialized.
 cudaErrorLaunchFailure = 4
 An exception occurred on the device while executing a kernel.
 cudaErrorPriorLaunchFailure = 5
 This indicated that a previous kernel launch failed.
 cudaErrorLaunchTimeout = 6
 This indicates that the device kernel took too long to execute.
 cudaErrorLaunchOutOfResources = 7
 This indicates that a launch did not occur because it did not have appropriate resources.
 cudaErrorInvalidDeviceFunction = 8
 The requested device function does not exist or is not compiled for the proper device architecture.
 cudaErrorInvalidConfiguration = 9
 This indicates that a kernel launch is requesting resources that can never be satisfied by the current device.
 cudaErrorInvalidDevice = 10
 This indicates that the device ordinal supplied by the user does not correspond to a valid CUDA device.
 cudaErrorInvalidValue = 11
 This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.
 cudaErrorInvalidPitchValue = 12
 This indicates that one or more of the pitch-related parameters passed to the API call is not within the acceptable range for pitch.
 cudaErrorInvalidSymbol = 13
 This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier.
 cudaErrorMapBufferObjectFailed = 14
 This indicates that the buffer object could not be mapped.
 cudaErrorUnmapBufferObjectFailed = 15
 This indicates that the buffer object could not be unmapped.
 cudaErrorInvalidHostPointer = 16
 This indicates that at least one host pointer passed to the API call is not a valid host pointer.
 cudaErrorInvalidDevicePointer = 17
 This indicates that at least one device pointer passed to the API call is not a valid device pointer.
 cudaErrorInvalidTexture = 18
 This indicates that the texture passed to the API call is not a valid texture.
 cudaErrorInvalidTextureBinding = 19
 This indicates that the texture binding is not valid.
 cudaErrorInvalidChannelDescriptor = 20
 This indicates that the channel descriptor passed to the API call is not valid.
 cudaErrorInvalidMemcpyDirection = 21
 This indicates that the direction of the memcpy passed to the API call is not one of the types specified by cudaMemcpyKind.
 cudaErrorAddressOfConstant = 22
 This indicated that the user has taken the address of a constant variable, which was forbidden up until the CUDA 3.1 release.
 cudaErrorTextureFetchFailed = 23
 This indicated that a texture fetch was not able to be performed.
 cudaErrorTextureNotBound = 24
 This indicated that a texture was not bound for access.
 cudaErrorSynchronizationError = 25
 This indicated that a synchronization operation had failed.
 cudaErrorInvalidFilterSetting = 26
 This indicates that a non-float texture was being accessed with linear filtering.
 cudaErrorInvalidNormSetting = 27
 This indicates that an attempt was made to read a non-float texture as a normalized float.
 cudaErrorMixedDeviceExecution = 28
 Mixing of device and device emulation code was not allowed.
 cudaErrorCudartUnloading = 29
 This indicates that a CUDA Runtime API call cannot be executed because it is being called during process shutdown, at a point in time after the CUDA driver has been unloaded.
 cudaErrorUnknown = 30
 This indicates that an unknown internal error has occurred.
 cudaErrorNotYetImplemented = 31
 This indicates that the API call is not yet implemented.
 cudaErrorMemoryValueTooLarge = 32
 This indicated that an emulated device pointer exceeded the 32-bit address range.
 cudaErrorInvalidResourceHandle = 33
 This indicates that a resource handle passed to the API call was not valid.
 cudaErrorNotReady = 34
 This indicates that asynchronous operations issued previously have not completed yet.
 cudaErrorInsufficientDriver = 35
 This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library.
 cudaErrorSetOnActiveProcess = 36
 This indicates that the user has called cudaSetValidDevices(), cudaSetDeviceFlags(), cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), cudaD3D11SetDirect3DDevice(), or cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by calling non-device management operations (allocating memory and launching kernels are examples of non-device management operations).
 cudaErrorInvalidSurface = 37
 This indicates that the surface passed to the API call is not a valid surface.
 cudaErrorNoDevice = 38
 This indicates that no CUDA-capable devices were detected by the installed CUDA driver.
 cudaErrorECCUncorrectable = 39
 This indicates that an uncorrectable ECC error was detected during execution.
 cudaErrorSharedObjectSymbolNotFound = 40
 This indicates that a link to a shared object failed to resolve.
 cudaErrorSharedObjectInitFailed = 41
 This indicates that initialization of a shared object failed.
 cudaErrorUnsupportedLimit = 42
 This indicates that the cudaLimit passed to the API call is not supported by the active device.
 cudaErrorDuplicateVariableName = 43
 This indicates that multiple global or constant variables (across separate CUDA source files in the application) share the same string name.
 cudaErrorDuplicateTextureName = 44
 This indicates that multiple textures (across separate CUDA source files in the application) share the same string name.
 cudaErrorDuplicateSurfaceName = 45
 This indicates that multiple surfaces (across separate CUDA source files in the application) share the same string name.
 cudaErrorDevicesUnavailable = 46
 This indicates that all CUDA devices are busy or unavailable at the current time.
 cudaErrorInvalidKernelImage = 47
 This indicates that the device kernel image is invalid.
 cudaErrorNoKernelImageForDevice = 48
 This indicates that there is no kernel image available that is suitable for the device.
 cudaErrorIncompatibleDriverContext = 49
 This indicates that the current context is not compatible with the CUDA Runtime.
 cudaErrorPeerAccessAlreadyEnabled = 50
 This error indicates that a call to cudaDeviceEnablePeerAccess() is trying to re-enable peer addressing from a context which has already had peer addressing enabled.
 cudaErrorPeerAccessNotEnabled = 51
 This error indicates that cudaDeviceDisablePeerAccess() is trying to disable peer addressing which has not been enabled yet via cudaDeviceEnablePeerAccess().
 cudaErrorDeviceAlreadyInUse = 54
 This indicates that a call tried to access an exclusive-thread device that is already in use by a different thread.
 cudaErrorProfilerDisabled = 55
 This indicates profiler has been disabled for this run and thus runtime APIs cannot be used to profile subsets of the program.
 cudaErrorProfilerNotInitialized = 56
 This indicates profiler has not been initialized yet.
 cudaErrorProfilerAlreadyStarted = 57
 This indicates profiler is already started.
 cudaErrorProfilerAlreadyStopped = 58
 This indicates profiler is already stopped.
 cudaErrorAssert = 59
 An assert triggered in device code during kernel execution.
 cudaErrorTooManyPeers = 60
 This error indicates that the hardware resources required to enable peer access have been exhausted for one or more of the devices passed to ::cudaEnablePeerAccess().
 cudaErrorHostMemoryAlreadyRegistered = 61
 This error indicates that the memory range passed to cudaHostRegister() has already been registered.
 cudaErrorHostMemoryNotRegistered = 62
 This error indicates that the pointer passed to cudaHostUnregister() does not correspond to any currently registered memory region.
 cudaErrorOperatingSystem = 63
 This error indicates that an OS call failed.
 cudaErrorStartupFailure = 0x7f
 This indicates an internal startup failure in the CUDA runtime.
 cudaChannelFormatKindSigned = 0
 Signed channel format.
 cudaChannelFormatKindUnsigned = 1
 Unsigned channel format.
 cudaChannelFormatKindFloat = 2
 Float channel format.
 cudaMemoryTypeHost = 1
 Host memory.
 cudaMemcpyHostToHost = 0
 Host -> Host.
 cudaMemcpyHostToDevice = 1
 Host -> Device.
 cudaMemcpyDeviceToHost = 2
 Device -> Host.
 cudaMemcpyDeviceToDevice = 3
 Device -> Device.
 cudaGraphicsRegisterFlagsNone = 0
 Default.
 cudaGraphicsRegisterFlagsReadOnly = 1
 CUDA will not write to this resource.
 cudaGraphicsRegisterFlagsWriteDiscard = 2
 CUDA will only write to and will not read from this resource.
 cudaGraphicsRegisterFlagsSurfaceLoadStore = 4
 CUDA will bind this resource to a surface reference.
 cudaGraphicsMapFlagsNone = 0
 Default; Assume resource can be read/written.
 cudaGraphicsMapFlagsReadOnly = 1
 CUDA will not write to this resource.
 cudaGraphicsCubeFacePositiveX = 0x00
 Positive X face of cubemap.
 cudaGraphicsCubeFaceNegativeX = 0x01
 Negative X face of cubemap.
 cudaGraphicsCubeFacePositiveY = 0x02
 Positive Y face of cubemap.
 cudaGraphicsCubeFaceNegativeY = 0x03
 Negative Y face of cubemap.
 cudaGraphicsCubeFacePositiveZ = 0x04
 Positive Z face of cubemap.
 cudaFuncCachePreferNone = 0
 Default function cache configuration, no preference.
 cudaFuncCachePreferShared = 1
 Prefer larger shared memory and smaller L1 cache.
 cudaFuncCachePreferL1 = 2
 Prefer larger L1 cache and smaller shared memory.
 cudaSharedMemBankSizeDefault = 0
 cudaSharedMemBankSizeFourByte = 1
 cudaComputeModeDefault = 0
 Default compute mode (Multiple threads can use cudaSetDevice() with this device)
 cudaComputeModeExclusive = 1
 Compute-exclusive-thread mode (Only one thread in one process will be able to use cudaSetDevice() with this device)
 cudaComputeModeProhibited = 2
 Compute-prohibited mode (No threads can use cudaSetDevice() with this device)
 cudaLimitStackSize = 0x00
 GPU thread stack size.
 cudaLimitPrintfFifoSize = 0x01
 GPU printf/fprintf FIFO size.
 cudaKeyValuePair = 0x00
 Output mode Key-Value pair format.
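
Every runtime API call returns one of the cudaError_t codes listed above. A common (unofficial) checking pattern, shown here as a sketch, wraps each call in a macro that reports and aborts on any code other than cudaSuccess:

```cpp
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* Report the human-readable message for any failing runtime call. */
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err = (call);                                  \
        if (err != cudaSuccess) {                                  \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                    cudaGetErrorString(err));                      \
            exit(EXIT_FAILURE);                                    \
        }                                                          \
    } while (0)

int main(void)
{
    void *p = NULL;
    CUDA_CHECK(cudaMalloc(&p, 1 << 20)); /* cudaErrorMemoryAllocation on failure */
    CUDA_CHECK(cudaFree(p));
    return 0;
}
```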

Data types used by CUDA Runtime
Author:
NVIDIA Corporation

#define cudaHostAllocDefault   0x00
 Default page-locked allocation flag.
#define cudaHostAllocPortable   0x01
 Pinned memory accessible by all CUDA contexts.
#define cudaHostAllocMapped   0x02
 Map allocation into device space.
#define cudaHostAllocWriteCombined   0x04
 Write-combined memory.
#define cudaHostRegisterDefault   0x00
 Default host memory registration flag.
#define cudaHostRegisterPortable   0x01
 Pinned memory accessible by all CUDA contexts.
#define cudaHostRegisterMapped   0x02
 Map registered memory into device space.
#define cudaPeerAccessDefault   0x00
 Default peer addressing enable flag.
#define cudaEventDefault   0x00
 Default event flag.
#define cudaEventBlockingSync   0x01
 Event uses blocking synchronization.
#define cudaEventDisableTiming   0x02
 Event will not record timing data.
#define cudaEventInterprocess   0x04
 Event is suitable for interprocess use.
#define cudaDeviceScheduleAuto   0x00
 Device flag - Automatic scheduling.
#define cudaDeviceScheduleSpin   0x01
 Device flag - Spin default scheduling.
#define cudaDeviceScheduleYield   0x02
 Device flag - Yield default scheduling.
#define cudaDeviceScheduleBlockingSync   0x04
 Device flag - Use blocking synchronization.
#define cudaDeviceBlockingSync   0x04
 Device flag - Use blocking synchronization.
#define cudaDeviceScheduleMask   0x07
 Device schedule flags mask.
#define cudaDeviceMapHost   0x08
 Device flag - Support mapped pinned allocations.
#define cudaDeviceLmemResizeToMax   0x10
 Device flag - Keep local memory allocation after launch.
#define cudaDeviceMask   0x1f
 Device flags mask.
#define cudaArrayDefault   0x00
 Default CUDA array allocation flag.
#define cudaArrayLayered   0x01
 Must be set in cudaMalloc3DArray to create a layered CUDA array.
#define cudaArraySurfaceLoadStore   0x02
 Must be set in cudaMallocArray or cudaMalloc3DArray in order to bind surfaces to the CUDA array.
#define cudaArrayCubemap   0x04
 Must be set in cudaMalloc3DArray to create a cubemap CUDA array.
#define cudaArrayTextureGather   0x08
 Must be set in cudaMallocArray or cudaMalloc3DArray in order to perform texture gather operations on the CUDA array.
#define cudaIpcMemLazyEnablePeerAccess   0x01
 Automatically enable peer access between remote devices as needed.
#define cudaDevicePropDontCare
 Empty device properties.
#define CUDA_IPC_HANDLE_SIZE   64
 CUDA Interprocess types.
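
As a sketch of how the allocation and event flags above combine (the buffer size and flag choices are illustrative; note that cudaHostGetDevicePointer() additionally requires the cudaDeviceMapHost device flag to have been set via cudaSetDeviceFlags() before the runtime is initialized):

```cpp
#include <cuda_runtime.h>

int main(void)
{
    /* A portable, mapped, write-combined page-locked host buffer. */
    float *hostBuf = NULL;
    cudaHostAlloc((void **)&hostBuf, 4096,
                  cudaHostAllocPortable | cudaHostAllocMapped |
                  cudaHostAllocWriteCombined);

    /* With cudaHostAllocMapped (and cudaDeviceMapHost set), the device
     * alias of the buffer is retrieved with cudaHostGetDevicePointer(). */
    float *devAlias = NULL;
    cudaHostGetDevicePointer((void **)&devAlias, hostBuf, 0);

    /* An event that records no timing data and uses blocking sync. */
    cudaEvent_t ev;
    cudaEventCreateWithFlags(&ev, cudaEventDisableTiming | cudaEventBlockingSync);

    cudaEventDestroy(ev);
    cudaFreeHost(hostBuf);
    return 0;
}
```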
typedef __device_builtin__ enum cudaError cudaError_t
 CUDA Error types.
typedef __device_builtin__ struct CUstream_st *cudaStream_t
 CUDA stream.
typedef __device_builtin__ struct CUevent_st *cudaEvent_t
 CUDA event types.
typedef __device_builtin__ struct cudaGraphicsResource *cudaGraphicsResource_t
 CUDA graphics resource types.
typedef __device_builtin__ struct CUuuid_st cudaUUID_t
 CUDA UUID types.
typedef __device_builtin__ struct cudaIpcEventHandle_st cudaIpcEventHandle_t
 Interprocess Handles.
typedef __device_builtin__ struct cudaIpcMemHandle_st cudaIpcMemHandle_t
typedef __device_builtin__ enum cudaOutputMode cudaOutputMode_t
 CUDA output file modes.
enum __device_builtin__ cudaError
 CUDA error types.
enum __device_builtin__ cudaChannelFormatKind
 Channel format kind.
enum __device_builtin__ cudaMemoryType
 CUDA memory types.
enum __device_builtin__ cudaMemcpyKind
 CUDA memory copy types.
enum __device_builtin__ cudaGraphicsRegisterFlags
 CUDA graphics interop register flags.
enum __device_builtin__ cudaGraphicsMapFlags
 CUDA graphics interop map flags.
enum __device_builtin__ cudaGraphicsCubeFace
 CUDA graphics interop array indices for cube maps.
enum __device_builtin__ cudaFuncCache
 CUDA function cache configurations.
enum __device_builtin__ cudaSharedMemConfig
 CUDA shared memory configuration.
enum __device_builtin__ cudaComputeMode
 CUDA device compute modes.
enum __device_builtin__ cudaLimit
 CUDA Limits.
enum __device_builtin__ cudaOutputMode
 CUDA Profiler Output modes.
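
The opaque cudaStream_t and cudaEvent_t handles declared above are commonly paired to time asynchronous work. An illustrative sketch (error checking omitted; the copy size is arbitrary):

```cpp
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    void *dst = NULL, *src = NULL;
    cudaMalloc(&dst, 1 << 20);
    cudaMallocHost(&src, 1 << 20);   /* pinned host memory: required for a truly async copy */

    cudaEventRecord(start, stream);
    cudaMemcpyAsync(dst, src, 1 << 20, cudaMemcpyHostToDevice, stream);
    cudaEventRecord(stop, stream);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("copy took %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dst);
    cudaFreeHost(src);
    cudaStreamDestroy(stream);
    return 0;
}
```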

Variable Documentation

cudaErrorAddressOfConstant
This indicated that the user has taken the address of a constant variable, which was forbidden up until the CUDA 3.1 release.

Deprecated:
This error return is deprecated as of CUDA 3.1. Variables in constant memory may now have their address taken by the runtime via cudaGetSymbolAddress().

cudaErrorAssert
An assert triggered in device code during kernel execution.

The device cannot be used again until cudaThreadExit() is called. All existing allocations are invalid and must be reconstructed if the program is to continue using CUDA.

cudaErrorDevicesUnavailable
This indicates that all CUDA devices are busy or unavailable at the current time.

Devices are often busy/unavailable due to use of cudaComputeModeExclusive, cudaComputeModeProhibited or when long running CUDA kernels have filled up the GPU and are blocking new work from starting. They can also be unavailable due to memory constraints on a device that already has active CUDA work being performed.

cudaErrorIncompatibleDriverContext
This indicates that the current context is not compatible with the CUDA Runtime.

This can only occur if you are using CUDA Runtime/Driver interoperability and have created an existing Driver context using the driver API. The Driver context may be incompatible either because the Driver context was created using an older version of the API, because the Runtime API call expects a primary driver context and the Driver context is not primary, or because the Driver context has been destroyed. Please see "Interactions with the CUDA Driver API" for more information.

cudaErrorInsufficientDriver
This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library.

This is not a supported configuration. Users should install an updated NVIDIA display driver to allow the application to run.

cudaErrorInvalidChannelDescriptor
This indicates that the channel descriptor passed to the API call is not valid.

This occurs if the format is not one of the formats specified by cudaChannelFormatKind, or if one of the dimensions is invalid.

cudaErrorInvalidConfiguration
This indicates that a kernel launch is requesting resources that can never be satisfied by the current device.

Requesting more shared memory per block than the device supports will trigger this error, as will requesting too many threads or blocks. See cudaDeviceProp for more device limitations.

cudaErrorInvalidFilterSetting
This indicates that a non-float texture was being accessed with linear filtering.

This is not supported by CUDA.

cudaErrorInvalidNormSetting
This indicates that an attempt was made to read a non-float texture as a normalized float.

This is not supported by CUDA.

cudaErrorInvalidResourceHandle
This indicates that a resource handle passed to the API call was not valid.

Resource handles are opaque types like cudaStream_t and cudaEvent_t.

cudaErrorInvalidTextureBinding
This indicates that the texture binding is not valid.

This occurs if you call cudaGetTextureAlignmentOffset() with an unbound texture.

cudaErrorLaunchFailure
An exception occurred on the device while executing a kernel.

Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA.

cudaErrorLaunchOutOfResources
This indicates that a launch did not occur because it did not have appropriate resources.

Although this error is similar to cudaErrorInvalidConfiguration, this error usually indicates that the user has attempted to pass too many arguments to the device kernel, or the kernel launch specifies too many threads for the kernel's register count.

cudaErrorLaunchTimeout
This indicates that the device kernel took too long to execute.

This can only occur if timeouts are enabled - see the device property kernelExecTimeoutEnabled for more information. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA.

cudaErrorMemoryValueTooLarge
This indicated that an emulated device pointer exceeded the 32-bit address range.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaErrorMixedDeviceExecution
Mixing of device and device emulation code was not allowed.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaErrorNoKernelImageForDevice
This indicates that there is no kernel image available that is suitable for the device.

This can occur when a user specifies code generation options for a particular CUDA source file that do not include the corresponding device configuration.

cudaErrorNotReady
This indicates that asynchronous operations issued previously have not completed yet.

This result is not actually an error, but must be indicated differently than cudaSuccess (which indicates completion). Calls that may return this value include cudaEventQuery() and cudaStreamQuery().
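
Since cudaErrorNotReady is a status code rather than a failure, a typical pattern polls the stream and overlaps host-side work until the queued operations finish. An illustrative sketch (copy size arbitrary, error checking omitted):

```cpp
#include <cuda_runtime.h>

int main(void)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    void *dst = NULL, *src = NULL;
    cudaMalloc(&dst, 1 << 20);
    cudaMallocHost(&src, 1 << 20);   /* pinned host buffer */
    cudaMemcpyAsync(dst, src, 1 << 20, cudaMemcpyHostToDevice, stream);

    /* cudaStreamQuery() returns cudaErrorNotReady while work is pending
     * and cudaSuccess once the stream has drained. */
    while (cudaStreamQuery(stream) == cudaErrorNotReady) {
        /* do useful host work here instead of blocking */
    }

    cudaFree(dst);
    cudaFreeHost(src);
    cudaStreamDestroy(stream);
    return 0;
}
```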

cudaErrorNotYetImplemented
This indicates that the API call is not yet implemented.

Production releases of CUDA will never return this error.

Deprecated:
This error return is deprecated as of CUDA 4.1.

cudaErrorPriorLaunchFailure
This indicated that a previous kernel launch failed.

This was previously used for device emulation of kernel launches.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaErrorProfilerAlreadyStarted
This indicates the profiler is already started.

This error can be returned if cudaProfilerStart() is called multiple times without a subsequent call to cudaProfilerStop().

cudaErrorProfilerAlreadyStopped
This indicates the profiler is already stopped.

This error can be returned if cudaProfilerStop() is called without starting the profiler using cudaProfilerStart().

cudaErrorProfilerDisabled
This indicates the profiler has been disabled for this run and thus runtime APIs cannot be used to profile subsets of the program.

This can happen when the application is running with external profiling tools like the visual profiler.

cudaErrorProfilerNotInitialized
This indicates the profiler has not been initialized yet.

cudaProfilerInitialize() must be called before calling cudaProfilerStart() and cudaProfilerStop() to initialize the profiler.

cudaErrorSetOnActiveProcess
This indicates that the user has called cudaSetValidDevices(), cudaSetDeviceFlags(), cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), cudaD3D11SetDirect3DDevice(), or cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by calling non-device management operations (allocating memory and launching kernels are examples of non-device management operations).

This error can also be returned if using runtime/driver interoperability and there is an existing CUcontext active on the host thread.

cudaErrorSynchronizationError
This indicated that a synchronization operation had failed.

This was previously used for some device emulation functions.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaErrorTextureFetchFailed
This indicated that a texture fetch was not able to be performed.

This was previously used for device emulation of texture operations.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaErrorTextureNotBound
This indicated that a texture was not bound for access.

This was previously used for device emulation of texture operations.

Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

cudaSuccess
The API call returned with no errors.

In the case of query calls, this can also mean that the operation being queried is complete (see cudaEventQuery() and cudaStreamQuery()).
