Generating the map may be accomplished by several different
algorithms. These generally start either from minimizing the chi-squared of the
observations with respect to the selected map realization, or from determining
the maximum-likelihood set of pixel values corresponding to the observations.
Both approaches lead to a relatively simple linear matrix equation for the value
of the map pixels **M**:

**M** = (**A**^{T}**N**^{-1}**A**)^{-1}**A**^{T}**N**^{-1}**D**

where **A** is the pointing matrix giving the response of each time-ordered
observation to each pixel, **N** = [N_{tt'}] is the time-time noise correlation
matrix, and **D** is the time-ordered data stream.

A similar linear-algebra matrix equation provides the pixel-pixel error
correlation matrix **Y** = [N_{pq}] = (**A**^{T}**N**^{-1}**A**)^{-1}.

The map can then be generated by a brute-force
solution of these equations. However, once the data streams
and maps become quite large, so do the computational costs.
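At toy scale the brute-force solution is easy to write out explicitly. The numpy sketch below forms Y^{-1} = A^{T}N^{-1}A and Z = A^{T}N^{-1}D and solves for the map **M**; the sizes, scan pattern, and truncated-exponential noise model are all hypothetical choices for illustration, not those of any experiment discussed here.

```python
import numpy as np

# Toy sizes -- purely illustrative; real experiments have N_t and N_p
# orders of magnitude larger.
rng = np.random.default_rng(0)
N_t, N_p = 2000, 50

# Pointing matrix A: each observation hits exactly one pixel.
hit = rng.integers(0, N_p, size=N_t)
A = np.zeros((N_t, N_p))
A[np.arange(N_t), hit] = 1.0

# Time-time noise correlation N: unit-variance noise whose autocorrelation
# falls as 0.5^lag and is truncated beyond lag 5 (a hypothetical model).
lag = np.abs(np.subtract.outer(np.arange(N_t), np.arange(N_t)))
N = np.where(lag <= 5, 0.5 ** lag, 0.0)

# Simulated time-ordered data D = A m_true + correlated noise.
m_true = rng.normal(size=N_p)
D = A @ m_true + rng.multivariate_normal(np.zeros(N_t), N, method="cholesky")

# Brute-force evaluation of M = (A^T N^-1 A)^-1 A^T N^-1 D.
Ninv_A = np.linalg.solve(N, A)    # N^-1 A
Yinv = A.T @ Ninv_A               # Y^-1 = A^T N^-1 A
Z = Ninv_A.T @ D                  # Z = A^T N^-1 D
M = np.linalg.solve(Yinv, Z)      # map estimate
Y = np.linalg.inv(Yinv)           # pixel-pixel error correlation matrix

print("rms map error:", np.sqrt(np.mean((M - m_true) ** 2)))
```

Even at this scale the cost is dominated by the dense N_{t} x N_{t} operations, which is exactly what makes the brute-force approach infeasible for real data volumes.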

The solution to the two linear equations is broken into
three steps.

Given an original data stream with N_{t}
time-ordered observations, generating an N_{p}-pixel map of the
sky temperature M = [\Delta_{p}] and a measure of the pixel-pixel
noise correlations Y = [N_{pq}] requires the following
computational resources and time:

**Brute Force Solution**

| Calculation | Disk | RAM | Flops | Serial CPU Time (MAXIMA) | Serial CPU Time (MAP) | Serial CPU Time (PLANCK) |
|---|---|---|---|---|---|---|
| Y = [N_{pq}] | 4 N_{t}^{2} | 16 N_{t} | 2 N_{p}N_{t}^{2} | 14 years | 2.6 x 10^{8} yrs | 4 x 10^{10} yrs |
| Z = A^{T}N^{-1}D | 4 N_{t}^{2} | 16 N_{t} | 2 N_{t}^{2} | 4 hours | 260 years | 4,130 yrs |
| M = (Y^{-1})^{-1}Z | 4 N_{p}^{2} | 8 N_{p}^{2} | 8/3 N_{p}^{3} | 37 hours | 150 years | 1.4 x 10^{5} yrs |

- All the necessary matrices are simultaneously stored on disk in single (4-byte) precision.
- Matrices are loaded into memory no more than two at a time in double (8-byte) precision.

If we can make approximations that exploit symmetry and the pixelization, and
use knowledge of the time-ordered noise autocorrelation properties, we can
shorten the time and resources required.

The first major assumption is that the pointing matrix contains a 0 for each pixel not directly observed and a 1 for the pixel being most directly observed. This is an approximation to the true response, which is a convolution of the beam pattern with the sky. If the pixels are small enough, the deconvolution could be performed after the map is generated. This is an approximation and a trade-off: simplifying the pointing matrix saves time and storage greatly, but the larger number of pixels then required demands vastly greater computing time and space. We may have to solve, fix the result, and then coarsen the pixelization for the next stage of data analysis.
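Under this single-hit approximation **A** never needs to be stored explicitly: each time sample carries only a pixel index, and products with A^{T} become binning operations over the hit list. A minimal sketch, using white noise so that N^{-1} drops out up to a constant (all sizes are hypothetical):

```python
import numpy as np

# With the single-hit pointing approximation, each time sample t is assigned
# to exactly one pixel p(t), so A need never be built: products with A^T
# reduce to bincount/scatter operations over the hit list.
rng = np.random.default_rng(1)
N_t, N_p = 10_000, 100
hit = rng.integers(0, N_p, size=N_t)   # p(t): pixel hit at each time sample
D = rng.normal(size=N_t)               # time-ordered data (noise only here)

# For white noise (N = sigma^2 I) the map-making equation collapses to
# simple binning: M_p = (sum of D over hits of p) / (number of hits of p).
hits_per_pixel = np.bincount(hit, minlength=N_p)
Z = np.bincount(hit, weights=D, minlength=N_p)   # A^T D (A^T N^-1 D up to sigma^2)
M = Z / hits_per_pixel                           # (A^T A)^-1 A^T D

# The same result via the dense pointing matrix, for comparison.
A = np.zeros((N_t, N_p))
A[np.arange(N_t), hit] = 1.0
M_dense = np.linalg.solve(A.T @ A, A.T @ D)
print(np.allclose(M, M_dense))  # binned and dense solutions agree
```

The binned version costs O(N_{t}) time and O(N_{p}) memory, versus O(N_{t}N_{p}) for the dense product, which is the source of the savings in the table below.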

The second major assumption is that the noise autocorrelation
function effectively goes to zero (in fact it is treated formally this
way) outside of a time **tau** which is much shorter than the total data
stream time **t**_{total}.
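With the autocorrelation truncated at lag **tau**, **N** is a banded matrix, so a product like N^{-1}D can be obtained by a banded Cholesky solve in O(tau^{2} N_{t}) operations rather than O(N_{t}^{3}) for a dense factorization (the table below quotes flop counts linear in tau, which further exploit the Toeplitz structure). A sketch using scipy's banded solver, with a hypothetical truncated-exponential autocorrelation:

```python
import numpy as np
from scipy.linalg import solveh_banded

# If the noise autocorrelation vanishes beyond a lag tau << N_t, the
# time-time correlation matrix N is banded, and N^-1 D comes from a
# banded Cholesky solve instead of a dense O(N_t^3) factorization.
rng = np.random.default_rng(2)
N_t, tau, rho = 3000, 8, 0.5           # hypothetical sizes and correlation
D = rng.normal(size=N_t)

# Symmetric banded storage (upper form) for scipy's solveh_banded:
# row tau - k holds the lag-k autocorrelation rho^k.
ab = np.zeros((tau + 1, N_t))
for k in range(tau + 1):
    ab[tau - k, k:] = rho ** k

x_banded = solveh_banded(ab, D)        # O(tau^2 N_t) solve of N x = D

# Dense reference solution for comparison.
lag = np.abs(np.subtract.outer(np.arange(N_t), np.arange(N_t)))
N = np.where(lag <= tau, rho ** lag, 0.0)
x_dense = np.linalg.solve(N, D)
print(np.allclose(x_banded, x_dense))  # banded and dense solves agree
```

Only the tau + 1 distinct diagonals are ever stored, so the memory cost also drops from O(N_{t}^{2}) to O(tau N_{t}).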

**Small Pixel, Finite Noise Correlation Solution**

| Calculation | Disk | RAM | Flops | Serial CPU Time (MAXIMA) | Serial CPU Time (MAP) | Serial CPU Time (PLANCK) |
|---|---|---|---|---|---|---|
| Y = [N_{pq}] | 4 N_{p}^{2} | 8 N_{p}^{2} | 2 tau N_{t} | 12 hours | 4.5 months | 3 years |
| Z = A^{T}N^{-1}D | 4 N_{p}^{2} | 8 N_{p}^{2} | 4 tau N_{t} | 20 hours | 9 months | 6 years |
| M = (Y^{-1})^{-1}Z | 4 N_{p}^{2} | 8 N_{p}^{2} | 8/3 N_{p}^{3} | 37 hours | 150 years | 1.4 x 10^{5} yrs |
| Time dominated by data transfer | | | | 0.5 hours | few days | few days |
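The scalings in the two tables can be compared directly by evaluating the flop counts as functions of N_{t}, N_{p}, and tau. The sketch below does so; the example sizes are placeholders, not the actual MAXIMA/MAP/PLANCK parameters behind the quoted timings.

```python
# Flop counts taken from the two tables above, as functions of data size.

def brute_force_flops(N_t, N_p):
    """Y = [N_pq], Z = A^T N^-1 D, and the map solve, done brute force."""
    return 2 * N_p * N_t**2 + 2 * N_t**2 + (8 / 3) * N_p**3

def approx_flops(N_t, N_p, tau):
    """The same three steps under the single-hit pointing and
    finite-noise-correlation (length tau) approximations."""
    return 2 * tau * N_t + 4 * tau * N_t + (8 / 3) * N_p**3

# Placeholder sizes, chosen only to illustrate the scaling.
N_t, N_p, tau = 10**8, 10**6, 10**4
speedup = brute_force_flops(N_t, N_p) / approx_flops(N_t, N_p, tau)
print(f"speedup: {speedup:.1e}")
```

Note that in the approximate solution the 8/3 N_{p}^{3} map solve dominates, so the remaining bottleneck is the number of pixels rather than the length of the data stream.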