opm-simulators
GPUSender is a wrapper class for classes that implement copyOwnerToAll. It is intended to create communicators through a generic GPUSender interface, hiding whether the implementation uses GPU-aware MPI or not. More...
#include <GpuSender.hpp>
Public Types | |
| using | X = GpuVector<field_type> |
Public Member Functions | |
| GPUSender (const OwnerOverlapCopyCommunicationType &cpuOwnerOverlapCopy) | |
| virtual void | copyOwnerToAll (const X &source, X &dest) const =0 |
| copyOwnerToAll will copy the data in source to all processes. | |
| virtual void | initIndexSet () const =0 |
| void | project (X &x) const |
| project will project x onto the owned subspace. | |
| void | dot (const X &x, const X &y, field_type &output) const |
| dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes. | |
| field_type | norm (const X &x) const |
| norm computes the l^2-norm of x across processes. | |
| const ::Dune::Communication< MPI_Comm > & | communicator () const |
| communicator returns the MPI communicator used by this GPUSender. | |
Protected Attributes | |
| std::once_flag | m_initializedIndices |
| std::unique_ptr< GpuVector< int > > | m_indicesOwner |
| std::unique_ptr< GpuVector< int > > | m_indicesCopy |
| const OwnerOverlapCopyCommunicationType & | m_cpuOwnerOverlapCopy |
GPUSender is a wrapper class for classes that implement copyOwnerToAll. It is intended to create communicators through a generic GPUSender interface, hiding whether the implementation uses GPU-aware MPI or not.
| field_type | is float or double |
| OwnerOverlapCopyCommunicationType | is typically a Dune::LinearOperator::communication_type |
const ::Dune::Communication< MPI_Comm > & communicator () const [inline]
communicator returns the MPI communicator used by this GPUSender.
virtual void copyOwnerToAll (const X &source, X &dest) const =0 [pure virtual]
copyOwnerToAll will copy the data in source to all processes.
| [in] | source | the vector whose values are copied |
| [out] | dest | the vector receiving the copied values |
Implemented in Opm::gpuistl::GPUAwareMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >, and Opm::gpuistl::GPUObliviousMPISender< field_type, block_size, OwnerOverlapCopyCommunicationType >.
void dot (const X &x, const X &y, field_type &output) const [inline]
dot will carry out the dot product between x and y on the owned indices, then sum up the result across MPI processes.
| [in] | x | First vector in dot product |
| [in] | y | Second vector in dot product |
| [out] | output | result will be stored here |
field_type norm (const X &x) const [inline]
norm computes the l^2-norm of x across processes.
This computes the dot product of x with itself on the owned indices, sums the result across processes, and returns the square root of the sum.
void project (X &x) const [inline]
project will project x onto the owned subspace.
For each component i that is not owned, x_i is set to 0.
| [in,out] | x | the vector to project |