HealpixMPI.jl provides MPI-parallel versions, through overloads, of most of the other spherical-harmonic-related functions of Healpix.jl. Refer to the Healpix.jl documentation for their descriptions.

Algebraic operations in harmonic space

HealpixMPI.jl provides overloads of the Base functions +, -, *, /, as well as LinearAlgebra.dot (which embeds an MPI.Allreduce call), allowing these fundamental operations to be carried out element-wise directly in harmonic space.
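As a hedged, serial sketch of what these overloads do on each task's local chunk of coefficients (the `alm1`, `alm2`, and `mval` arrays below are toy stand-ins, not HealpixMPI API):

```julia
# Toy local chunk of a_ℓm coefficients (m >= 0 only, as in Healpix conventions).
alm1 = [1.0 + 0.0im, 0.5 + 0.5im, 0.0 + 1.0im]
alm2 = [2.0 + 0.0im, 1.0 - 1.0im, 1.0 + 0.0im]
mval = [0, 1, 2]                      # m value of each stored coefficient

sum_alm  = alm1 .+ alm2               # what `+` does element-wise on each chunk
prod_alm = alm1 .* alm2               # element-wise `*` of two sets of alm

# `dot` for a real field counts m = 0 once and m > 0 twice (the negative-m
# coefficients are implied); in HealpixMPI the per-task partial sums are then
# combined with MPI.Allreduce.
w = ifelse.(mval .== 0, 1.0, 2.0)
d = sum(w .* real.(alm1 .* conj.(alm2)))
```

The weighting by 2 for m > 0 is an assumption about the convention for real fields; check the HealpixMPI.jl source for the exact definition used by `dot`.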

Healpix.almxflFunction
almxfl(alm::DAlm{S,T}, fl::AA) where {S<:Strategy, T<:Number, N<:Number, AA<:AbstractArray{N,1}}

Multiply a subset of aℓm, in the form of a DAlm, by a vector fℓ representing an ℓ-dependent function, without changing the aℓm passed as input.

Arguments

  • alm::DAlm{S,T}: The array representing the spherical harmonics coefficients
  • fl::AbstractVector{T}: The array giving the factor fℓ by which to multiply aℓm

Returns

  • DAlm{S,T}: The result of aℓm × fℓ.
source
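The underlying operation can be sketched in plain serial Julia (the `lval` array is a hypothetical helper, not part of the API): every stored coefficient a_{ℓm} is scaled by the factor f_ℓ matching its ℓ.

```julia
# Serial sketch of almxfl: multiply each a_{ℓm} by f_ℓ.
alm  = [1.0 + 0im, 2.0 + 0im, 3.0 + 1im]
lval = [0, 1, 1]                  # ℓ of each stored coefficient (hypothetical)
fl   = [1.0, 0.5]                 # f_ℓ for ℓ = 0, 1 (e.g. a beam transfer function)

alm_fl = [alm[i] * fl[lval[i] + 1] for i in eachindex(alm)]
```

A typical use case is applying a Gaussian beam or pixel window function, both of which depend only on ℓ.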
Healpix.almxfl!Function
almxfl!(alm::DAlm{S,T}, fl::AA) where {S<:Strategy, T<:Number, N<:Number, AA<:AbstractArray{N}}

Multiply IN-PLACE a subset of aℓm, in the form of a DAlm, by a vector fl representing an ℓ-dependent function.

Arguments

  • alm::DAlm{S,T}: The subset of spherical harmonics coefficients
  • fl: The array giving the factor fℓ by which to multiply aℓm; it can be a Vector{T} or a matrix with as many columns as the components of alm to be multiplied
source
Base.:+Function
+(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Number}

Perform the element-wise SUM of two DAlm objects in a_ℓm space. A new DAlm object is returned.

source
Base.:-Function
-(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Number}

Perform the element-wise SUBTRACTION of two DAlm objects in a_ℓm space. A new DAlm object is returned.

source
Base.:*Function
*(alm::DAlm{S,T}, fl::AA) where {S<:Strategy, T<:Number, AA<:AbstractArray{T,1}}
*(fl::AA, alm::DAlm{S,T}) where {S<:Strategy, T<:Number, AA<:AbstractArray{T,1}}

Perform the MULTIPLICATION of a DAlm object by a function of ℓ in a_ℓm space. Note: this is a shortcut for almxfl, so a new DAlm object is returned.

source
*(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Number}
*(alm₁::DAlm{S,T}, c::Number) where {S<:Strategy, T<:Number}
*(c::Number, alm₁::DAlm{S,T}) where {S<:Strategy, T<:Number}

Perform the element-wise MULTIPLICATION of two DAlm objects or of a DAlm by a constant in a_ℓm space. A new DAlm object is returned.

source
Base.:/Function
/(alm::DAlm{S,T}, fl::A1) where {S<:Strategy, T<:Number, N<:Number, A1<:AbstractArray{N,1}}
/(alm::DAlm{S,T}, fl::A2) where {S<:Strategy, T<:Number, N<:Number, A2<:AbstractArray{N,2}}

Perform an element-wise DIVISION by a function of ℓ in a_ℓm space. Note: this is a shortcut for almxfl, so a new DAlm object is returned.

source
/(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Number}
/(alm₁::DAlm{S,T}, c::Number) where {S<:Strategy, T<:Number}

Perform the element-wise DIVISION of two DAlm objects or of a DAlm by a constant in a_ℓm space. A new DAlm object is returned.

source
LinearAlgebra.dotFunction
dot(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}; comp₁::Integer = 1, comp₂::Integer = 1) where {S<:Strategy, T<:Number} -> Number

MPI-parallel dot product between two DAlm objects of matching size. Use the comp₁ and comp₂ keywords (defaulting to 1) to specify which component (column) of each alm array is to be used for the computation.

source
HealpixMPI.:≃Function
≃(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Number}

Similarity operator; returns true if the two arguments have matching info objects.

source
≃(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}) where {S<:Strategy, T<:Real}

Similarity operator; returns true if the two arguments have matching info objects.

source

Power spectrum

Power spectrum components $C_{\ell}$ are encoded as Vector{T}. HealpixMPI.jl implements overloads of Healpix.jl functions to compute a power spectrum from a set of DAlm (alm2cl) and to generate a set of DAlm from a power spectrum (synalm!).

Healpix.alm2clFunction
alm2cl(alm₁::DAlm{S,T}, alm₂::DAlm{S,T}; comp₁::Integer = 1, comp₂::Integer = 1) where {S<:Strategy, T<:Number} -> Vector{T}
alm2cl(alm::DAlm{S,T}; comp₁::Integer = 1, comp₂::Integer = 1) where {S<:Strategy, T<:Number} -> Vector{T}

Compute the power spectrum $C_{\ell}$ on each MPI task from the spherical harmonic coefficients of one or two fields, distributed as DAlm. Use the keywords comp₁ and comp₂ to specify which component (column) of the alms is to be used for the computation.

source
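The estimator alm2cl implements can be written, for a single field, as $C_\ell = \frac{1}{2\ell+1}\left(|a_{\ell 0}|^2 + 2\sum_{m=1}^{\ell}|a_{\ell m}|^2\right)$; each task sums over its local m values and the partial sums are then reduced. A hedged serial sketch, using a toy dictionary keyed by (ℓ, m) rather than the actual DAlm layout:

```julia
# Serial sketch of the estimator behind alm2cl:
#   C_ℓ = ( |a_{ℓ0}|² + 2 Σ_{m=1}^{ℓ} |a_{ℓm}|² ) / (2ℓ + 1)
# In HealpixMPI each task sums its local m's; the sums are then Allreduce'd.
function toy_alm2cl(alm::Dict{Tuple{Int,Int},ComplexF64}, lmax::Int)
    cl = zeros(lmax + 1)
    for ((l, m), a) in alm
        cl[l + 1] += (m == 0 ? 1 : 2) * abs2(a)   # m > 0 counted twice
    end
    return [cl[l + 1] / (2l + 1) for l in 0:lmax]
end

alm = Dict((0, 0) => 3.0 + 0im, (1, 0) => 1.0 + 0im, (1, 1) => 1.0 + 1.0im)
cl = toy_alm2cl(alm, 1)
```

The (ℓ, m)-keyed dictionary is purely illustrative; DAlm stores coefficients in packed arrays distributed over m.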
Healpix.synalm!Function
synalm!(cl::Vector{T}, alm::DAlm{S,N}, rng::AbstractRNG; comp::Integer = 1) where {S<:Strategy, T<:Real, N<:Number}
synalm!(cl::Vector{T}, alm::DAlm{S,N}; comp::Integer = 1) where {S<:Strategy, T<:Real, N<:Number}

Generate a set of DAlm from a given power spectrum array cl. The output is written into the comp column (defaulting to 1) of the DAlm object passed as input. If comp is greater than the number of components (columns) in the DAlm, an error is thrown. An RNG can be specified; otherwise it defaults to Random.GLOBAL_RNG.

source
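A hedged sketch of the usual sampling convention for a real Gaussian field, which synalm! follows up to implementation details: a_{ℓ0} is drawn real with variance C_ℓ, while for m > 0 the real and imaginary parts each have variance C_ℓ/2. The dictionary layout and the `toy_synalm` name are illustrative, not part of the API:

```julia
using Random

# Toy sampler: a_{ℓ0} ~ N(0, C_ℓ) real; for m > 0, Re and Im ~ N(0, C_ℓ/2).
function toy_synalm(cl::Vector{Float64}, rng::AbstractRNG)
    lmax = length(cl) - 1
    alm = Dict{Tuple{Int,Int},ComplexF64}()
    for l in 0:lmax, m in 0:l
        σ = sqrt(cl[l + 1])
        alm[(l, m)] = m == 0 ? σ * randn(rng) :
                      (σ / sqrt(2)) * complex(randn(rng), randn(rng))
    end
    return alm
end

alm = toy_synalm([1.0, 0.5, 0.25], MersenneTwister(42))
```

With this convention, averaging the alm2cl estimator over many realizations recovers the input C_ℓ.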

Distributing auxiliary arrays

It is often useful to work with auxiliary arrays in pixel space, e.g. masks or noise covariance matrices, for which defining a whole new map object is unnecessary. HealpixMPI.jl provides an overload of MPI.Scatter to distribute the corresponding chunks of such arrays to the correct tasks.

MPI.ScatterFunction
Scatter(arr::AA, nside::Integer, comm::MPI.Comm; strategy::Type = RR, root::Integer = 0) where {T <: Real, AA <: AbstractArray{T,1}}
Scatter(nothing, nside::Integer, comm::MPI.Comm; strategy::Type = RR, root::Integer = 0)

Distributes a map-space array (e.g. masks, diagonal noise matrices, etc.) passed in input on the `root` task,
according to the specified strategy (e.g. pass `:RR` for Round Robin).

As in the standard MPI function, the input `arr` can be `nothing` on non-root tasks, since it will be ignored anyway.

# Arguments:
- `arr::AA`: array to distribute over the MPI tasks.
- `nside::Integer`: NSIDE parameter of the map we are referring to.
- `comm::MPI.Comm`: MPI communicator to use.

# Keywords:
- `strategy::Type`: Strategy to be used, by default `:RR` for "Round Robin".
- `root::Integer`: rank of the task to be considered as "root", it is 0 by default.
source
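To illustrate why the array is scattered in chunks at all: a HEALPix map with a given NSIDE has 4·NSIDE − 1 iso-latitude rings, and a Round Robin strategy deals rings out to tasks cyclically. The sketch below shows that assignment rule in plain Julia; whether HealpixMPI's `RR` strategy uses exactly this ring ordering is an assumption, so treat `rings_per_task` as a hypothetical helper.

```julia
# Hypothetical Round-Robin ring assignment: a map with a given NSIDE has
# 4·NSIDE − 1 iso-latitude rings; ring r goes to task (r - 1) mod ntasks.
rings_per_task(nside::Integer, ntasks::Integer, t::Integer) =
    [r for r in 1:(4nside - 1) if (r - 1) % ntasks == t]

rings = rings_per_task(2, 3, 0)   # rings owned by task 0 of 3, for NSIDE = 2
```

Scatter then sends each task exactly the pixels belonging to its rings, so the auxiliary array ends up aligned with the task's local map chunk.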