Distributed Classes

As mentioned in the introduction, the main purpose of HealpixMPI.jl is to provide an MPI parallelization of the main functionalities of Healpix.jl, distributing maps and harmonic coefficients efficiently over the MPI tasks. This is made possible by two data types, DMap and DAlm, which mirror the HealpixMap and Alm types of Healpix.jl respectively and contain a well-defined subset of a map or of a set of harmonic coefficients, to be constructed on each MPI task.

HealpixMPI.DMapType
struct DMap{S<:Strategy, T<:Real, N} <: AbstractDMap

A subset of a Healpix map, containing only certain rings (as specified in the info field). The type T is used for the values of the pixels in the map; it must be a Real (usually a float). The type parameter N represents the number of components in the DMap object; it can only be 1 for a spin-0 field and 2 otherwise.

A DMap type contains the following fields:

  • pixels::Matrix{T}: array of pixels composing the subset, of dimensions (npixels, ncomp).
  • info::GeomInfoMPI: a GeomInfoMPI object describing the HealpixMap subset.

The GeomInfoMPI contained in info must match exactly the characteristics of the map subset. It is constructed automatically when MPI.Scatter! is called, which is why that method of initializing a DMap is recommended.

source
HealpixMPI.DAlmType
struct DAlm{S<:Strategy, T<:Number}

An MPI-distributed subset of harmonic coefficients a_ℓm, referring only to certain values of m.

The type T is used for the value of each harmonic coefficient; it must be a Number (in practice, one should only use complex types).

A DAlm type contains the following fields:

  • alm::Matrix{T}: the array of harmonic coefficients, of dimensions (nalm, ncomp).
  • info::AlmInfoMPI{I}: an AlmInfoMPI object describing the alm subset.

ncomp can be greater than one to support polarized SHTs. The AlmInfoMPI contained in info must match exactly the characteristics of the Alm subset; it can be constructed, for instance, through the function make_general_alm_info.

source
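
As a purely illustrative sketch (the variable names d_map and d_alm are hypothetical), this is the shape of the data each task ends up hosting once the distributed objects have been filled:

size(d_map.pixels) #(local npixels, ncomp): pixels of the rings hosted by this task
size(d_alm.alm)    #(local nalm, ncomp): coefficients for the m values hosted by this task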

An instance of DMap (or DAlm) embeds, within the field info, a GeomInfoMPI (or AlmInfoMPI) object. The latter, in turn, contains all the necessary information about:

  • The whole map geometry (or the whole set of harmonic coefficients).
  • The composition of the local subset.
  • The MPI communicator.
HealpixMPI.GeomInfoMPIType
struct GeomInfoMPI

Information describing an MPI-distributed subset of a HealpixMap, contained in a DMap.

A GeomInfoMPI type contains:

  • comm: MPI communicator used.
  • nside: NSIDE parameter of the whole map.
  • maxnr: maximum number of rings in the subsets, over the tasks involved.
  • thetatot: array of the colatitudes of the whole map, ordered by task first and according to Round Robin within each task.
  • rings: array of the ring indexes (w.r.t. the whole map) contained in the subset.
  • rstart: array containing the 1-based index of the first pixel of each ring contained in the subset.
  • nphi: array containing the number of pixels in every ring contained in the subset.
  • theta: array of colatitudes (in radians) of the rings contained in the subset.
  • phi0: array containing the values of the azimuth (in radians) of the first pixel in every ring.
source
HealpixMPI.AlmInfoMPIType
struct AlmInfoMPI

Information describing an MPI-distributed subset of Alm, contained in a DAlm.

An AlmInfoMPI type contains:

  • comm: MPI communicator used.
  • lmax: the maximum value of $ℓ$.
  • mmax: the maximum value of $m$ on the whole Alm set.
  • maxnm: maximum value (over tasks) of nm (number of m values).
  • mval: array of values of $m$ contained in the subset.
  • mstart: array of hypothetical indexes of the harmonic coefficients with ℓ = 0, for each m in mval (note: for now 0-based).
source
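
As a complementary, purely illustrative sketch (again assuming hypothetical d_map and d_alm objects already filled on the local task), the info field exposes both global and local metadata:

d_map.info.nside #NSIDE parameter of the whole map
d_map.info.rings #ring indexes (w.r.t. the whole map) hosted by this task
d_alm.info.lmax  #maximum ℓ of the whole set of coefficients
d_alm.info.mval  #m values hosted by this task
d_alm.info.comm  #MPI communicator shared by the tasks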

Initializing a distributed type

The recommended way to construct a local subset of a map or of a set of harmonic coefficients is to start with an instance of HealpixMap (in RingOrder) or Alm on the root task, and call one of the dedicated overloads of the standard MPI.Scatter! function provided by HealpixMPI.jl. These functions save the user the work of constructing all the required ancillary information describing the data subset, doing so through efficient and tested methods.

MPI.Scatter!Function
Scatter!(in_alm::Union{Healpix.Alm{T,Array{T,1}}, AbstractArray{Healpix.Alm{T,Array{T,1}},1}}, out_d_alm::DAlm{T}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Scatter!(in_alm::AbstractArray{Healpix.Alm{T,Array{T,1}},1}, out_d_alm::DAlm{T}, out_d_pol_alm::DAlm{T}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Scatter!(::Nothing, out_d_alm::DAlm{T}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Scatter!(::Nothing, out_d_alm::DAlm{T}, out_d_pol_alm::DAlm{T}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Scatter!(in_alm, out_d_alm::DAlm{S,T}, comm::MPI.Comm; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Scatter!(in_alm, out_d_alm::DAlm{S,T}, out_d_pol_alm::DAlm{S,T}, comm::MPI.Comm; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}

Distributes the Alm object passed as input on the root task, overwriting the DAlm objects passed on each task, according to the specified strategy.

As in the standard MPI function, the input in_alm can be nothing on non-root tasks, since it will be ignored anyway.

To distribute a set of Alm representing a polarized field there are two options:

  • Pass as input an AbstractArray{Healpix.Alm{T,Array{T,1}},1} with only the E and B components and one output DAlm object, which will contain both.
  • Pass as input an AbstractArray{Healpix.Alm{T,Array{T,1}},1} with the T, E and B components and two output DAlm objects, which will contain T and E & B respectively.

This is so that the resulting DAlm objects can be directly passed to the SHT functions, which only accept as input the intensity component for a scalar transform and the two polarization components for a spinned transform.

If the keyword clear is set to true, it frees the memory of each task from the (potentially big) Alm object.

Arguments:

  • in_alm::Alm{T,Array{T,1}}: Alm object to distribute over the MPI tasks.
  • out_d_alm::DAlm{T}: output DAlm object.

Keywords:

  • root::Integer: rank of the task to be considered as "root"; it is 0 by default.
  • clear::Bool: if true, deletes the input Alm after the "scattering" is performed.
source
Scatter!(in_map::Union{Healpix.HealpixMap{T1, Healpix.RingOrder}, Healpix.PolarizedHealpixMap{T1, Healpix.RingOrder}}, out_d_map::DMap{S,T2,I}; root::Integer = 0, clear::Bool = false) where {T1<:Real, T2<:Real, S<:Strategy}
Scatter!(in_map::Healpix.PolarizedHealpixMap{T1, Healpix.RingOrder}, out_d_map::DMap{S,T2}, out_d_pol_map::DMap{S,T2}; root::Integer = 0, clear::Bool = false) where {T1<:Real, T2<:Real, S<:Strategy}
Scatter!(in_map::Nothing, out_d_map::DMap{S,T}; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Scatter!(in_map::Nothing, out_d_map::DMap{S,T}, out_d_pol_map::DMap{S,T}; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Scatter!(in_map, out_d_map::DMap{S,T}, comm::MPI.Comm; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Scatter!(in_map, out_d_map::DMap{S,T}, out_d_pol_map::DMap{S,T}, comm::MPI.Comm; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}

Distributes the HealpixMap object passed as input on the root task, overwriting the DMap objects passed on each task, according to the specified strategy.

There are two options to distribute a HealpixMap representing a polarized field:

  • Pass an input PolarizedHealpixMap object and only one output DMap object; in this case the latter will be filled exclusively with the q and u components.
  • Pass an input PolarizedHealpixMap object and two output DMap objects; in this case the first will contain the i map, while the second will contain q and u.

This is so that the resulting DMap objects can be directly passed to the SHT functions, which only accept as input the intensity component for a scalar transform and the two polarization components for a spinned transform.

As in the standard MPI function, the input in_map can be nothing on non-root tasks, since it will be ignored anyway.

If the keyword clear is set to true, it frees the memory of each task from the (potentially bulky) HealpixMap object.

Arguments:

  • in_map: HealpixMap or PolarizedHealpixMap object to distribute over the MPI tasks.
  • out_d_map::DMap{S,T}: output DMap object.

Keywords:

  • root::Integer: rank of the task to be considered as "root"; it is 0 by default.
  • clear::Bool: if true, deletes the input map after the "scattering" is performed.
source

While distributing a set of harmonic coefficients means that each MPI task will host a DAlm object containing only the coefficients corresponding to some specific values of m, the distribution of a map is performed by rings: each MPI task will host a DMap object containing only the pixels composing some specified rings of the entire HealpixMap. Note that, for spherical harmonic transform efficiency, it is recommended to assign pairs of rings with the same latitude (i.e. symmetric w.r.t. the equator) to the same task, in order to preserve the geometric symmetry of the map.

The following example shows the standard way to initialize a DAlm object through a Round Robin strategy (see the section Distributing Strategy below for more details).

using MPI
using Random
using Healpix
using HealpixMPI

MPI.Init()
comm = MPI.COMM_WORLD

alm = Alm(5, 5, randn(ComplexF64, numberOfAlms(5))) #initialize a random Healpix Alm
d_alm = DAlm{RR}() #initialize an empty DAlm, to be filled according to the RR strategy

MPI.Scatter!(alm, d_alm, comm) #fill d_alm
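
A similar sketch for a polarized set of harmonic coefficients, distributing the T component and the E & B components into two separate DAlm objects; here three random Alm simply stand in for actual T, E and B coefficients, and the variable names are illustrative:

using MPI
using Random
using Healpix
using HealpixMPI

MPI.Init()
comm = MPI.COMM_WORLD

lmax = 5
alms = [Alm(lmax, lmax, randn(ComplexF64, numberOfAlms(lmax))) for i in 1:3] #T, E, B components

d_alm_t = DAlm{RR}()  #will contain the T component
d_alm_eb = DAlm{RR}() #will contain the E and B components

MPI.Scatter!(alms, d_alm_t, d_alm_eb, comm) #fill both according to RR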

Gathering data

Analogously to MPI.Scatter!, HealpixMPI.jl also provides overloads of MPI.Gather! (and MPI.Allgather!). These allow the user to regroup the subsets of a map or alm into a HealpixMap or Alm on the root task only (or on every MPI task involved, respectively).

MPI.Gather!Function
Gather!(in_d_alm::DAlm{S,T}, out_alm::Union{Healpix.Alm{T,Array{T,1}}, Nothing}, comp::Integer; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Gather!(in_d_alm::DAlm{S,T}, out_alm::Healpix.Alm{T,Array{T,1}}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Gather!(in_d_alm::DAlm{S,T}, out_alm::AbstractArray{Healpix.Alm{T,Array{T,1}},1}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Gather!(in_d_alm::DAlm{S,T}, out_alm::Nothing; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Gather!(in_d_alm::DAlm{S,T}, in_d_pol_alm::DAlm{S,T}, out_alm::AbstractArray{Healpix.Alm{T,Array{T,1}},1}; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}
Gather!(in_d_alm::DAlm{S,T}, in_d_pol_alm::DAlm{S,T}, out_alm::Nothing; root::Integer = 0, clear::Bool = false) where {S<:Strategy, T<:Number}

Gathers the DAlm objects passed on each task, overwriting the Alm object passed as input on the root task, according to the specified strategy (by default :RR for Round Robin). Note that the strategy must match the one used to "scatter" the a_lm.

As in the standard MPI function, the out_alm can be nothing on non-root tasks, since it will be ignored anyway.

Note: for a polarized field, if two input DAlm objects are passed, they are expected to contain the T and the E & B components respectively, and the output will be an array of three Alm objects which can be passed directly to Healpix.jl functions.

If the keyword clear is set to true it frees the memory of each task from the (potentially bulky) DAlm object.

Arguments:

  • in_d_alm::DAlm{T}: DAlm object to gather from the MPI tasks.
  • out_alm::Alm{T,Array{T,1}}: output Alm object.

Keywords:

  • strategy::Symbol: Strategy to be used, by default :RR for "Round Robin".
  • root::Integer: rank of the task to be considered as "root", it is 0 by default.
  • clear::Bool: if true, deletes the input DAlm after the gathering is performed.
source
Gather!(in_d_map::DMap{S,T}, out_map::Union{Healpix.HealpixMap{T,Healpix.RingOrder}, Nothing}, comp::Integer; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Gather!(in_d_map::DMap{S,T}, out_map; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Gather!(in_d_map::DMap{S,T}, in_d_pol_map::DMap{S,T}, out_map::Healpix.PolarizedHealpixMap{T,Healpix.RingOrder}; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}
Gather!(in_d_map::DMap{S,T}, in_d_pol_map::DMap{S,T}, out_map::Nothing; root::Integer = 0, clear::Bool = false) where {T<:Real, S<:Strategy}

Gathers the DMap objects passed on each task, overwriting the HealpixMap or PolarizedHealpixMap object passed as input on the root task, according to the specified Strategy.

Similarly to the standard MPI function, the out_map can be nothing on non-root tasks, since it will be ignored anyway.

If the keyword clear is set to true it frees the memory of each task from the (potentially bulky) DMap object.

Arguments:

  • in_d_map::DMap{S,T}: DMap object to gather from the MPI tasks.
  • out_map: output HealpixMap or PolarizedHealpixMap object.

Optional:

  • comp::Integer: specifies which component (column) of the pixel matrix in DMap is to be gathered; defaults to 1.

Keywords:

  • root::Integer: rank of the task to be considered as "root"; it is 0 by default.
  • clear::Bool: if true, deletes the input DMap after the gathering is performed.
source
MPI.Allgather!Function
Allgather!(in_d_alm::DAlm{S,T}, out_alm::Healpix.Alm{T,Array{T,1}}, comp::Integer; clear::Bool = false) where {S<:Strategy, T<:Number}
Allgather!(in_d_alm::DAlm{S,T}, out_alm::Healpix.Alm{T,Array{T,1}}; clear::Bool = false) where {S<:Strategy, T<:Number}
Allgather!(in_d_alm::DAlm{S,T}, out_alm::AbstractArray{Healpix.Alm{T,Array{T,1}},1}; clear::Bool = false) where {S<:Strategy, T<:Number}
Allgather!(in_d_alm::DAlm{S,T}, in_d_pol_alm::DAlm{S,T}, out_alm::AbstractArray{Healpix.Alm{T,Array{T,1}},1}; clear::Bool = false) where {S<:Strategy, T<:Number}

Gathers the DAlm objects passed on each task, overwriting the Alm object passed as input on every task, according to the specified strategy (by default :RR for Round Robin). Note that the strategy must match the one used to "scatter" the a_lm.

Unlike Gather!, the output Alm must be provided on every task, since the result of the gathering is shared by all of them.

If the keyword clear is set to true it frees the memory of each task from the (potentially bulky) DAlm object.

Arguments:

  • in_d_alm::DAlm{T}: DAlm object to gather from the MPI tasks.
  • out_alm::Alm{T,Array{T,1}}: output Alm object.

Keywords:

  • strategy::Symbol: Strategy to be used, by default :RR for "Round Robin".
  • clear::Bool: if true, deletes the input DAlm after the gathering is performed.
source
Allgather!(in_d_map::DMap{S,T}, out_map::Healpix.HealpixMap{T,Healpix.RingOrder}, comp::Integer; clear::Bool = false) where {T<:Real, S<:Strategy}
Allgather!(in_d_map::DMap{S,T}, out_map::Healpix.HealpixMap{T,Healpix.RingOrder}; clear::Bool = false) where {T<:Real, S<:Strategy}
Allgather!(in_d_map::DMap{S,T}, out_map::Healpix.PolarizedHealpixMap{T,Healpix.RingOrder}; clear::Bool = false) where {T<:Real, S<:Strategy}

Gathers the DMap objects passed on each task, overwriting the out_map object passed as input on every task, according to the specified strategy (by default :RR for Round Robin). Note that the strategy must match the one used to "scatter" the map.

If the keyword clear is set to true it frees the memory of each task from the (potentially bulky) DMap object.

Arguments:

  • in_d_map::DMap{S,T}: DMap object to gather from the MPI tasks.
  • out_map: output HealpixMap or PolarizedHealpixMap object to overwrite.

Optional:

  • comp::Integer: specifies which component (column) of the pixel matrix in DMap is to be gathered; defaults to 1.

Keywords:

  • clear::Bool: if true, deletes the input DMap after the gathering is performed.
source
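
As a hedged end-to-end sketch, the following shows a scatter followed by a gather on the root task only, and then by an all-gather on every task (variable names are illustrative):

using MPI
using Random
using Healpix
using HealpixMPI

MPI.Init()
comm = MPI.COMM_WORLD
crank = MPI.Comm_rank(comm)

alm = Alm(5, 5, randn(ComplexF64, numberOfAlms(5))) #full Alm, only actually used on root
d_alm = DAlm{RR}()
MPI.Scatter!(alm, d_alm, comm)                      #distribute according to RR

out_alm = (crank == 0) ? Alm(5, 5, zeros(ComplexF64, numberOfAlms(5))) : nothing
MPI.Gather!(d_alm, out_alm)    #re-assemble the full Alm on the root task only

all_alm = Alm(5, 5, zeros(ComplexF64, numberOfAlms(5)))
MPI.Allgather!(d_alm, all_alm) #re-assemble the full Alm on every task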

Distributing Strategy

It is also worth mentioning that many different strategies can be devised to distribute a set of data over multiple MPI tasks. So far, the only one implemented in HealpixMPI.jl, which should guarantee an adequate work balance between tasks, is the so-called "Round Robin" strategy: assuming $N$ MPI tasks, the map is distributed such that task $i$ hosts the map rings $i$, $i + N$, $i + 2N$, etc. (and their counterparts on the other hemisphere). Similarly, for the spherical harmonic coefficients, task $i$ holds all the coefficients with $m = i$, $i + N$, $i + 2N$, etc.
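
As a purely illustrative example of the Round Robin assignment of m values, assuming mmax = 10 and N = 3 tasks:

N = 3     #number of MPI tasks
mmax = 10 #maximum m of the whole Alm set
for task in 0:N-1
    println("task $task holds m = ", collect(task:N:mmax))
end
#task 0 holds m = [0, 3, 6, 9]
#task 1 holds m = [1, 4, 7, 10]
#task 2 holds m = [2, 5, 8]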

The strategy is intrinsically specified in a DMap or DAlm instance through an abstract type (e.g. RR) inherited from the super-type Strategy, in the same way as the pixel ordering is specified in a HealpixMap in Healpix.jl.

HealpixMPI.StrategyType
abstract type Strategy

Abstract type representing the strategy used to distribute a Healpix map or alm. If users wish to implement their own, it should be added as an inherited type; see RR as an example.

source
HealpixMPI.RRType
abstract type RR <: Strategy

The RR type should be used when creating a "Distributed" type in order to specify that the data has been distributed according to "Round Robin".

source

This kind of solution allows for two useful features:

  • Efficient and fast multiple dispatch, allowing a function to recognize the distribution strategy used on a data structure without any if statement (a brief sketch of this follows the one-liner below).

  • The possibility to add other distributing strategies, if needed for future developments, by simply adding an inherited type in the source file strategy.jl with a single line of code:

abstract type NewStrat <: Strategy end
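
For instance, a hypothetical helper specialized on the Round Robin strategy could look like the following; supporting a new strategy would then only require adding another method dispatching on DAlm{NewStrat}:

using MPI
using HealpixMPI

#hypothetical helper: the m values hosted locally, recomputed from the global metadata
local_mvals(d_alm::DAlm{RR}) =
    HealpixMPI.get_mval_RR(d_alm.info.mmax, MPI.Comm_rank(d_alm.info.comm), MPI.Comm_size(d_alm.info.comm))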

Map-specific functions

HealpixMPI.get_nrings_RRFunction
get_nrings_RR(eq_idx::Integer, task_rank::Integer, c_size::Integer)
get_nrings_RR(res::Resolution, task_rank::Integer, c_size::Integer)

Return the number of rings on the specified task, given the total map resolution and the communicator size, according to Round Robin.

source
HealpixMPI.get_rindexes_RRFunction
get_rindexes_RR(local_nrings::Integer, eq_idx::Integer, t_rank::Integer, c_size::Integer)
get_rindexes_RR(nside::Integer, t_rank::Integer, c_size::Integer)

Return the array of rings on the specified task (0-based index), given the total map resolution and the communicator size, ordered from the equator to the poles alternating N/S, according to Round Robin.

source
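
A hedged usage sketch of the two functions above, assuming an NSIDE = 64 map distributed over 4 tasks and querying the subset assigned to task 0:

using Healpix
using HealpixMPI

res = Resolution(64)
nrings = HealpixMPI.get_nrings_RR(res, 0, 4) #number of rings hosted by task 0
rings = HealpixMPI.get_rindexes_RR(64, 0, 4) #their indexes, from the equator to the poles alternating N/S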

Alm-specific functions

HealpixMPI.get_nm_RRFunction
get_nm_RR(global_mmax::Integer, task_rank::Integer, c_size::Integer)

Return the number of m values on the specified task in a Round Robin strategy.

source
HealpixMPI.get_mval_RRFunction
get_mval_RR(global_mmax::Integer, task_rank::Integer, c_size::Integer)

Return the array of m values on the specified task in a Round Robin strategy.

source
HealpixMPI.get_m_tasks_RRFunction
get_m_tasks_RR(mmax::Integer, c_size::Integer)

Computes an array containing the task to which each m in the full range [0, mmax] is assigned, according to a Round Robin strategy, given the communicator size.

source
HealpixMPI.make_mstart_complexFunction
make_mstart_complex(lmax::Integer, stride::Integer, mval::AbstractArray{T}) where T <: Integer

Computes the 1-based mstart array given any mval and lmax for Alm in complex representation.

source
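
A hedged usage sketch of the functions above, assuming lmax = mmax = 10 and 2 MPI tasks, building the local m values and the corresponding mstart array for task 0:

using HealpixMPI

lmax = 10
mval = HealpixMPI.get_mval_RR(lmax, 0, 2)              #m values assigned to task 0: [0, 2, 4, 6, 8, 10]
mstart = HealpixMPI.make_mstart_complex(lmax, 1, mval) #1-based mstart array for those m values (stride = 1)
tasks = HealpixMPI.get_m_tasks_RR(lmax, 2)             #task assigned to each m in the full range [0, lmax]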