WaveBlocksND
This class applies the gradient operator \( -i\varepsilon^2\nabla_x \) to an arbitrary scalar wavepacket \( \Phi \).
#include <hawp_gradient_operator.hpp>
Public Member Functions

HaWpGradient< D, MultiIndex > operator() (AbstractScalarHaWp< D, MultiIndex > const &wp) const
	Applies the gradient operator \( -i\varepsilon^2\nabla_x \) to an arbitrary scalar wavepacket \( \Phi \).
This class applies the gradient operator \( -i\varepsilon^2\nabla_x \) to an arbitrary scalar wavepacket \( \Phi \).
Using it is a single call: construct the operator and apply its function-call operator to a scalar wavepacket.
This class uses the HaWpGradientEvaluator to evaluate the coefficients of the resulting wavepacket. It simplifies taking gradients: in contrast to HaWpGradientEvaluator, which returns only the new coefficients, this class assembles the complete resulting wavepacket.
You cannot apply the gradient to multi-component wavepackets (yet). If you need the gradient of a multi-component wavepacket, loop over all of its components and apply the gradient operator to each component separately.
Template Parameters
	D	The dimensionality of the processed wavepackets.
	MultiIndex	The multi-index type of the processed wavepackets.
operator()()

HaWpGradient< D, MultiIndex > operator() (AbstractScalarHaWp< D, MultiIndex > const &wp) const	[inline]
Applies the gradient operator \( -i\varepsilon^2\nabla_x \) to an arbitrary scalar wavepacket \( \Phi \).
Vectorial wavepackets: You cannot apply this function directly to vectorial wavepackets \( \Psi \). Instead, apply the gradient to each component \( \Phi_n \) (which is scalar) of the vectorial wavepacket:
\( -i\varepsilon^2\nabla \Psi = \left( -i\varepsilon^2\nabla \Phi_1, \dots, -i\varepsilon^2\nabla \Phi_N \right)^T \)
Thread-Safety: Computing the gradient involves creating a shape extension. Since computing a shape extension is very expensive, shape extensions are cached. Because cached shape extensions are stored inside the wavepacket objects without a mutex guard, concurrently applying any gradient operator to the same wavepacket is unsafe (and pointless anyway). Currently, applying the same gradient operator to different wavepacket objects in parallel is completely safe. To ensure future compatibility, however, each thread should use its own gradient operator instance.
Parameters
	wp	The scalar Hagedorn wavepacket.