Owning Palette: LVCUBLAS
Requires: GPU Analysis Toolkit
Solves the matrix equation op(A)*X = alpha*B or X*op(A) = alpha*B. When you wire data to A in and B, this VI automatically selects the first available instance.
To use a different instance, you must manually select the polymorphic instance you want to use.
Use the pulldown menu to select an instance of this VI.
The connector pane displays the default data types for this polymorphic instance.
operation specifies the operation the VI performs on matrix A, where matrix op(A) can equal A, A', or conj(A').
fill mode specifies the triangular portion of the matrix A to use in the calculation.
CUBLAS Handle in specifies the initialized CUBLAS library to use for the BLAS calculation. For example, you can wire the CUBLAS Handle output from the Initialize Library VI to specify the CUBLAS handle to the CUBLAS library you already initialized. This input also determines the device that executes the function.  
B specifies the matrix B stored on the device. This input specifies a class that can contain the following data types:
A in specifies the triangular matrix A stored on the device. This input specifies a class that can contain the following data types:
a represents alpha and specifies the scalar operand in the product alpha*(op[A]^-1)*B. The default is 1.
error in describes error conditions that occur before this node runs. This input provides standard error in functionality.  
m specifies the number of rows to use in B.  
n specifies the number of columns to use in B.  
diag specifies the value of the diagonal elements of the triangular matrix A.
leading dimensions specifies the column dimension used to index consecutive rows. Use lda, ldb, and ldc for A, B, and C, respectively.
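As a rough CPU-side illustration (NumPy, not the VI itself), the leading dimension of a column-major array, which is the storage layout cuBLAS expects, is the element stride between the start of one column and the next:

```python
import numpy as np

# Illustration only: a 2-D array stored in column-major (Fortran)
# order. The leading dimension is the number of allocated rows,
# i.e. the flat-index stride between consecutive columns.
storage = np.zeros((4, 3), order="F")
lda = storage.shape[0]               # leading dimension = 4

# Element (i, j) of the matrix lives at flat index i + j * lda.
storage[2, 1] = 7.0
flat = storage.ravel(order="F")
print(flat[2 + 1 * lda])             # prints 7.0
```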
CUBLAS Handle out returns the handle that defines the BLAS operation.  
a(op[A]^-1)B is a complex matrix of the same dimensions as B. a(op[A]^-1)B contains the solution X, such that op(A)*X = alpha*B.
A out returns the triangular matrix A stored on the device.  
error out contains error information. This output provides standard error out functionality. 
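The equation this instance solves, op(A)*X = alpha*B with a triangular A, can be sketched on the CPU with NumPy. This is a minimal illustration of the math only (assuming op(A) = A, lower fill mode, non-unit diagonal), not the GPU implementation the VI invokes:

```python
import numpy as np

def trsm_left_lower(A, B, alpha=1.0, unit_diag=False):
    """CPU sketch of op(A)*X = alpha*B with op(A) = A and A lower
    triangular, solved by forward substitution (not the GPU VI)."""
    m, n = B.shape
    X = alpha * B.astype(float).copy()
    for i in range(m):
        X[i] -= A[i, :i] @ X[:i]     # subtract already-solved rows
        if not unit_diag:
            X[i] /= A[i, i]          # diag assumed 1 when unit_diag
    return X

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])           # m x m lower-triangular A
B = np.array([[4.0, 2.0],
              [5.0, 7.0]])           # m x n right-hand side
X = trsm_left_lower(A, B, alpha=2.0)

# Verify the defining equation op(A) @ X = alpha * B.
print(np.allclose(A @ X, 2.0 * B))   # prints True
```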
The connector pane displays the default data types for this polymorphic instance.
operation specifies the operation the VI performs on matrix A, where matrix op(A) can equal A, A', or conj(A').
fill mode specifies the triangular portion of the matrix A to use in the calculation.
CUBLAS Handle in specifies the initialized CUBLAS library to use for the BLAS calculation. For example, you can wire the CUBLAS Handle output from the Initialize Library VI to specify the CUBLAS handle to the CUBLAS library you already initialized. This input also determines the device that executes the function.  
B specifies the matrix B stored on the device. This input specifies a class that can contain the following data types:
A in specifies the triangular matrix A stored on the device. This input specifies a class that can contain the following data types:
a represents alpha and specifies the scalar operand in the product alpha*B*(op[A]^-1). The default is 1.
error in describes error conditions that occur before this node runs. This input provides standard error in functionality.  
m specifies the number of rows to use in B.  
n specifies the number of columns to use in B.  
diag specifies the value of the diagonal elements of the triangular matrix A.
leading dimensions specifies the column dimension used to index consecutive rows. Use lda, ldb, and ldc for A, B, and C, respectively.
CUBLAS Handle out returns the handle that defines the BLAS operation.  
aB(op[A]^-1) is a complex matrix of the same dimensions as B. aB(op[A]^-1) contains the solution X, such that X*op(A) = alpha*B.
A out returns the triangular matrix A stored on the device.  
error out contains error information. This output provides standard error out functionality. 
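The right-side equation this instance solves, X*op(A) = alpha*B, can likewise be sketched on the CPU with NumPy (assuming op(A) = A and a lower-triangular A; a general solver stands in for the triangular GPU routine). Transposing both sides turns it into the familiar left-side form:

```python
import numpy as np

def trsm_right_lower(A, B, alpha=1.0):
    """CPU sketch of X*op(A) = alpha*B with op(A) = A, A an n x n
    lower-triangular matrix, B of shape m x n (not the GPU VI).
    Solving X*A = alpha*B is equivalent to solving A.T @ X.T = alpha*B.T."""
    XT = np.linalg.solve(A.T, alpha * B.T)   # A.T is upper triangular
    return XT.T

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])           # n x n triangular factor
B = np.array([[4.0, 5.0],
              [2.0, 7.0]])           # m x n right-hand side
X = trsm_right_lower(A, B, alpha=2.0)

# Verify the defining equation X @ op(A) = alpha * B.
print(np.allclose(X @ A, 2.0 * B))   # prints True
```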
For more information on how to use this VI, refer to the Designing the Block Diagram to Compute on a GPU Device topic.
For more information about the CUBLAS library and BLAS operations, refer to the NVIDIA GPU Computing Documentation website at nvidia.com and download the CUBLAS Library User Guide.
Refer to the BLAS (Basic Linear Algebra Subprograms) website at netlib.org for more information on BLAS functions.