| | GPUAwareMPISender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy) |
| void | copyOwnerToAll (const X &source, X &dest) const override |
| | copyOwnerToAll copies the data in source to all processes. |
|
|
| | GPUSender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy) |
| void | project (X &x) const |
| | project projects x onto the owned subspace. |

| void | dot (const X &x, const X &y, field_type &output) const |
| | dot carries out the dot product between x and y on the owned indices, then sums the result across MPI processes. |

| field_type | norm (const X &x) const |
| | norm computes the l^2-norm of x across processes. |

| const ::Dune::Communication< MPI_Comm > & | communicator () const |
| | communicator returns the MPI communicator used by this GPUSender. |
|
template<class field_type, int block_size, class OwnerOverlapCopyCommunicationType>
class Opm::gpuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >
Derived class of GPUSender that handles MPI communication using CUDA-aware MPI. The copyOwnerToAll function uses MPI calls referring to data that resides on the GPU in order to send it directly to other GPUs, skipping the staging step on the CPU.
- Template Parameters
| field_type | is float or double |
| block_size | is the block size of the block elements in the matrix |
| OwnerOverlapCopyCommunicationType | is typically a Dune::LinearOperator::communication_type |