\begin{abstract}
In this contribution we introduce a little-known property of error diffusion
halftoning algorithms which we call {\it error diffusion displacement}.
By accounting for the inherent sub-pixel displacement caused by the error
propagation, we correct an important flaw in most metrics used to assess the
quality of resulting halftones. We find these metrics to usually highly
Countless methods have been published over the last 40 years that try to best
address the problem of colour reduction. Comparing two algorithms in terms of
speed or memory usage is often straightforward, but how exactly a halftoning
algorithm performs in terms of quality is a far more complex issue, as it
depends heavily on the display device and the inner workings of the human eye.
Though this document focuses on the particular case of bilevel halftoning,
most of our results can be directly adapted to the more generic problem of
ordered dither matrices \cite{bayer}. However, modern techniques such as the
void-and-cluster method \cite{void1}, \cite{void2} make it possible to
generate screens yielding visually pleasing results.
\medskip Error diffusion dithering, introduced in 1976 by Floyd and Steinberg
\cite{fstein}, tries to compensate for the thresholding error through the use
of feedback. Typically applied in raster scan order, it uses an error diffusion
matrix such as the following one, where $x$ denotes the pixel being processed:
\[ \frac{1}{16} \left| \begin{array}{ccc}
- & x & 7 \\
3 & 5 & 1 \end{array} \right| \]
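As an illustration, a minimal sketch of this process in Python (our own
illustration, not part of the original algorithm description; the image is
assumed greyscale with values in $[0,1]$):
\begin{verbatim}
import numpy as np

def floyd_steinberg(img):
    # Raster-scan Floyd-Steinberg error diffusion on a
    # greyscale image with values in [0, 1].
    h, w = img.shape
    buf = img.astype(float).copy()
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if buf[y, x] >= 0.5 else 0.0
            err = buf[y, x] - out[y, x]
            # Propagate the error to the unprocessed neighbours
            # with the 7/16, 3/16, 5/16 and 1/16 weights.
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out
\end{verbatim}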
Though efforts have been made to make error diffusion parallelisable
\cite{parfstein}, it is generally considered more computationally expensive
than screening, but carefully chosen coefficients yield good visual results
\cite{kite}.
\medskip Model-based halftoning is the third important algorithm category. It
relies on a model of the human visual system (HVS) and attempts to minimise
an error value based on that model. One such algorithm is direct binary search
(DBS) \cite{allebach}, also referred to as least-squares model-based halftoning
(LSMB) \cite{lsmb}.
HVS models are usually low-pass filters. Nasanen \cite{nasanen} as well as
Analoui and Allebach \cite{allebach} used models based on Gaussian filters,
a choice later confirmed by psychovisual studies \cite{mcnamara}.
DBS yields halftones of impressive quality. However, despite efforts to make
it more efficient \cite{bhatt}, it suffers from its large computational
requirements and error diffusion remains a more widely used technique.
\section{Error diffusion displacement}
Intuitively, as the error is always propagated to the bottom-left or
bottom-right of each pixel (Fig. \ref{fig:direction}), one may expect the
resulting image to be slightly translated. This expectation is confirmed
visually when rapidly switching between an error diffused image and the
corresponding DBS halftone.
\begin{figure}
\begin{center}
% (image elided)
\caption{Direction of the error propagation.}
\label{fig:direction}
\end{center}
\end{figure}
This small translation is visually innocuous but we found that it has a
significant impact on error computation. A common way to compute the error
between an image $h_{i,j}$ and the corresponding binary halftone $b_{i,j}$ is
to compute the mean square error between modified versions of the images, in
the form:
\begin{equation}
E(h,b) = \frac{(||v * h_{i,j} - v * b_{i,j}||_2)^2}{wh}
\end{equation}
\noindent where $w$ and $h$ are the image dimensions, $*$ denotes the
convolution and $v$ is a model for the human visual system.
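As a sketch of how this metric can be evaluated, assuming a Gaussian filter
for $v$ as in the experiments below (the \texttt{scipy} call stands in for
the convolution with $v$):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def error(img, halftone, sigma=1.2):
    # Per-pixel mean square error between the HVS-filtered
    # image and the HVS-filtered halftone.
    d = gaussian_filter(img, sigma) - gaussian_filter(halftone, sigma)
    return np.sum(d * d) / d.size
\end{verbatim}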
To compensate for the slight translation observed in the halftone, we use the
following error metric instead:
\begin{equation}
E_{dx,dy}(h,b) = \frac{(||v * h_{i,j} - v * t_{dx,dy} * b_{i,j}||_2)^2}{wh}
\end{equation}
\noindent where $t_{dx,dy}$ is an operator which translates the image along the
$(dx,dy)$ vector. By design, $E_{0,0} = E$.
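The displaced metric can be sketched the same way, approximating $t_{dx,dy}$
by a bilinear sub-pixel shift (an assumption made for illustration; any good
interpolator would do):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def error_displaced(img, halftone, dx, dy, sigma=1.2):
    # E_{dx,dy}: the halftone is first translated along
    # (dx, dy) by bilinear interpolation, then compared.
    moved = shift(halftone, (dy, dx), order=1, mode='nearest')
    d = gaussian_filter(img, sigma) - gaussian_filter(moved, sigma)
    return np.sum(d * d) / d.size
\end{verbatim}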
A simple example can be given using a Gaussian HVS model:
\begin{equation}
v(x,y) = e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}
\end{equation}
\noindent Minimising $E_{dx,dy}(h,b)$ over the possible $(dx,dy)$
displacements then yields a local minimum, which we call $E_{min}$.
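This local minimum can be searched for numerically; the following sketch uses
a derivative-free optimiser started at $(0,0)$ and reuses
\texttt{error\_displaced} from the sketch above (the choice of optimiser is
our assumption, not the authors' procedure):
\begin{verbatim}
from scipy.optimize import minimize

def error_min(img, halftone, sigma=1.2):
    # Search for the local minimum E_min of E_{dx,dy},
    # starting from the untranslated position (0, 0).
    res = minimize(
        lambda v: error_displaced(img, halftone, v[0], v[1], sigma),
        x0=[0.0, 0.0], method='Nelder-Mead')
    (dx, dy), e_min = res.x, res.fun
    return dx, dy, e_min
\end{verbatim}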
For instance, a Floyd-Steinberg dither of \textit{Lena} with $\sigma = 1.2$
yields a per-pixel mean square error of $3.67 \times10^{-4}$. However, when
taking the displacement into account, the error becomes $3.06\times10^{-4}$ for
$(dx,dy) = (0.165,0.293)$. The new, corrected error is significantly smaller,
with the exact same input and output images.
Experiments show that the corrected error is always noticeably smaller, except
in the case of images that are already mostly pure black and white. The
experiment was performed on a database of 10,000 images from common computer
vision sets and from the image board \textit{4chan} \cite{4chan}, providing a
representative sampling of the photographs, digital art and business graphics
widely exchanged on the Internet.
In addition to the classical Floyd-Steinberg and Jarvis-Judice-Ninke kernels,
we tested two serpentine error diffusion algorithms: Ostromoukhov's simple
error diffusion \cite{ostromoukhov}, which uses a variable coefficient kernel,
and Wong and Allebach's optimum error diffusion kernel \cite{wong}.
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
 & $E\times10^4$ & $E_{min}\times10^4$ \\ \hline
raster Floyd-Steinberg & 3.7902 & 3.1914 \\ \hline
raster Ja-Ju-Ni & 9.7013 & 6.6349 \\ \hline
Ostromoukhov & 4.6892 & 4.4783 \\ \hline
optimum kernel & 7.5209 & 6.5772 \\
\hline
\end{tabular}
\end{center}
We have seen that for a given image, $E_{min}(h,b)$ is a better and fairer
visual error measurement than $E(h,b)$. However, its major drawback is that it
is highly computationally expensive: for each image, the new $(dx,dy)$ values
need to be calculated to minimise the error value.
Fortunately, we found that for a given raster or serpentine scan
error diffusion algorithm, there was often very little variation in
the measured $(dx,dy)$ values from one image to another. This makes it
possible to define a fixed, per-algorithm displacement and the corresponding
metric $E_{fast} = E_{dx,dy}$, which is a lot faster to compute than
$E_{min}$, and as it is statistically closer to
$E_{min}$, we can expect it to be a better error estimation than $E$.
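In other words, $E_{fast}$ evaluates $E_{dx,dy}$ once, at a displacement
tabulated per algorithm. A sketch, reusing \texttt{error\_displaced} above and
with values taken from the table below:
\begin{verbatim}
# Per-algorithm displacements, taken from the table below.
DISPLACEMENT = {
    'raster floyd-steinberg': (0.16, 0.28),
    'raster ja-ju-ni':        (0.26, 0.76),
    'ostromoukhov':           (0.00, 0.19),
    'optimum kernel':         (0.00, 0.34),
}

def error_fast(img, halftone, algorithm, sigma=1.2):
    # E_fast: E_{dx,dy} at a fixed, precomputed displacement.
    dx, dy = DISPLACEMENT[algorithm]
    return error_displaced(img, halftone, dx, dy, sigma)
\end{verbatim}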
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
 & $E\times10^4$ & $dx$ & $dy$ & $E_{fast}\times10^4$ \\ \hline
raster Floyd-Steinberg & 3.7902 & 0.16 & 0.28 & 3.3447 \\ \hline
raster Ja-Ju-Ni & 9.7013 & 0.26 & 0.76 & 7.5891 \\ \hline
Ostromoukhov & 4.6892 & 0.00 & 0.19 & 4.6117 \\ \hline
optimum kernel & 7.5209 & 0.00 & 0.34 & 6.8233 \\
\hline
\end{tabular}
\end{center}
Our first experiment was a study of the Floyd-Steinberg-like 4-block error
diffusion kernels. According to the original authors, the coefficients were
found ``mostly by trial and error'' \cite{fstein}. With our improved metric, we
now have the tools to confirm or refute Floyd and Steinberg's initial choice.
We chose to do an exhaustive study of every $\frac{1}{16}\{a,b,c,d\}$ integer
combination. We deliberately chose non-negative integers whose sum was 16:
error diffusion coefficients smaller than zero or adding up to more than 1 are
known to be unstable \cite{stability}, and diffusing less than 100\% of the
error causes a significant loss of detail in the shadow and highlight areas of
the image.
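The enumeration itself is straightforward; in the following sketch, the
\texttt{halftone\_fn} callback and the plain averaging of $E$ (reusing the
\texttt{error} sketch above) over the image pool are our illustrative
assumptions:
\begin{verbatim}
from itertools import product

def all_kernels():
    # Every {a,b,c,d} of non-negative integers summing to 16.
    for a, b, c in product(range(17), repeat=3):
        d = 16 - a - b - c
        if d >= 0:
            yield (a, b, c, d)

def rank_kernels(images, halftone_fn, sigma=1.2):
    # Order the kernels by the error metric E, here simply
    # averaged over the image pool; halftone_fn(img, kernel)
    # applies 4-block error diffusion with the given kernel.
    scores = []
    for k in all_kernels():
        e = sum(error(img, halftone_fn(img, k), sigma)
                for img in images) / len(images)
        scores.append((e, k))
    return sorted(scores)
\end{verbatim}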
We studied all possible coefficients on a pool of 3,000 images with an error
metric $E$ based on a standard Gaussian HVS model. $E_{min}$ is only given here
as an indication; only $E$ was used to select the best coefficients:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
rank & coefficients & $E$ & $E_{min}$ \\ \hline
1 & 8 3 5 0 & 0.00129563 & 0.000309993 \\ \hline
2 & 7 3 6 0 & 0.00131781 & 0.000313941 \\ \hline
3 & 9 3 4 0 & 0.00131115 & 0.000310815 \\ \hline
4 & 9 2 5 0 & 0.00132785 & 0.000322754 \\ \hline
5 & 8 4 4 0 & 0.00131170 & 0.000317490 \\ \hline
\dots & \dots & \dots & \dots \\
\hline
\end{tabular}
\end{center}
The exact same operation using $E_{min}$ as the decision variable yields very
different results. Similarly, $E$ is only given here as an indication:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
rank & coefficients & $E_{min}$ & $E$ \\ \hline
1 & 7 3 5 1 & 0.000306251 & 0.00141414 \\ \hline
2 & 6 3 6 1 & 0.000325488 & 0.00145197 \\ \hline
3 & 8 3 4 1 & 0.000313537 & 0.00141632 \\ \hline
4 & 7 3 4 2 & 0.000336239 & 0.00156376 \\ \hline
5 & 6 4 5 1 & 0.000333702 & 0.00147671 \\ \hline
\dots & \dots & \dots & \dots \\
\hline
\end{tabular}
\end{center}
Our improved metric allowed us to confirm that the original Floyd-Steinberg
coefficients were indeed amongst the best possible for raster scan.
More importantly, using $E$ as the decision variable may have selected
$\frac{1}{16}\{8,4,4,0\}$, which is in fact a poor choice.
For serpentine scan, however, our experiment suggests that
$\frac{1}{16}\{7,4,5,0\}$ is a better choice than the Floyd-Steinberg
coefficients that have nonetheless been widely in use so far (Fig.
\ref{fig:lena7450}).
\begin{figure}
\begin{center}
% (halftone image elided)
\caption{Halftone of \textit{Lena} using serpentine error diffusion and
the optimum coefficients $\frac{1}{16}\{7,4,5,0\}$ that improve
on the standard Floyd-Steinberg coefficients in terms of visual
quality for the HVS model studied in Section 3.}
\label{fig:lena7450}
\end{center}
\end{figure}
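For reference, a sketch of serpentine error diffusion with these
$\frac{1}{16}\{7,4,5,0\}$ coefficients; mirroring the kernel on right-to-left
lines follows the usual serpentine convention and is our assumption:
\begin{verbatim}
import numpy as np

def serpentine_7450(img):
    # Serpentine error diffusion with the 1/16 {7,4,5,0}
    # kernel; odd lines are scanned right-to-left with the
    # kernel mirrored accordingly.
    h, w = img.shape
    buf = img.astype(float).copy()
    out = np.zeros((h, w))
    for y in range(h):
        step = 1 if y % 2 == 0 else -1
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for x in xs:
            out[y, x] = 1.0 if buf[y, x] >= 0.5 else 0.0
            err = buf[y, x] - out[y, x]
            if 0 <= x + step < w:
                buf[y, x + step] += err * 7 / 16      # ahead
            if y + 1 < h:
                if 0 <= x - step < w:
                    buf[y + 1, x - step] += err * 4 / 16  # behind
                buf[y + 1, x] += err * 5 / 16             # below
                # The fourth coefficient is 0: nothing goes
                # to the neighbour below-ahead.
    return out
\end{verbatim}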
Confirming Floyd and Steinberg's 30-year-old ``trial-and-error'' result with
our work is only the beginning: future work may cover more complex HVS models,
for instance by taking into account the angular dependence of the human eye
\cite{sullivan}. We plan to use our new metric to improve all error diffusion
methods that may require fine-tuning of their propagation coefficients.
%
% ---- Bibliography ----
\begin{thebibliography}{19}
\bibitem[5]{parfstein}
P. Metaxas,
Color Imaging: Device-Indep. Color, Color Hardcopy, and Graphic Arts IV, Proc.
SPIE 3648, 485--494 (1999)
\bibitem[6]{kite}
T. D. Kite,
\textit{Design and Quality Assessment of Forward and Inverse Error-Diffusion
Halftoning Algorithms}.
PhD thesis, Dept. of ECE, The University of Texas at Austin, Austin, TX,
Aug. 1998
\bibitem[7]{halftoning}
R. Ulichney,
\textit{Digital Halftoning}.
MIT Press, 1987
\bibitem[8]{spacefilling}
L. Velho and J. Gomes,
\textit{Digital halftoning with space-filling curves}.
Computer Graphics (Proceedings of SIGGRAPH 91), 25(4):81--90, 1991
\bibitem[9]{peano}
I.~H. Witten and R.~M. Neal,
\textit{Using Peano curves for bilevel display of continuous-tone images}.
IEEE Computer Graphics \& Appl., 2:47--52, 1982
\bibitem[10]{nasanen}
R. Nasanen,
\textit{Visibility of halftone dot textures}.
IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920--924, 1984
\bibitem[11]{allebach}
M. Analoui and J.~P. Allebach,
\textit{Model-based halftoning using direct binary search}.
Proc. of SPIE/IS\&T Symp. on Electronic Imaging Science and Tech.,
February 1992, San Jose, CA, pp. 96--108
\bibitem[12]{mcnamara}
A. McNamara,
\textit{Visual Perception in Realistic Image Synthesis}.
Computer Graphics Forum, vol. 20, no. 4, pp. 211--224, 2001
\bibitem[13]{bhatt}
Bhatt \textit{et al.},
\textit{Direct Binary Search with Adaptive Search and Swap}.
\url{http://www.ima.umn.edu/2004-2005/MM8.1-10.05/activities/Wu-Chai/halftone.pdf}
\bibitem[14]{4chan}
moot,
\bibitem[15]{wong}
P.~W. Wong and J.~P. Allebach,
\textit{Optimum error-diffusion kernel design}.
Proc. SPIE Vol. 3018, pp. 236--242, 1997
\bibitem[16]{ostromoukhov}
V. Ostromoukhov,
\textit{A Simple and Efficient Error-Diffusion Algorithm}.
in Proceedings of SIGGRAPH 2001, in ACM Computer Graphics, Annual Conference
Series, pp. 567--572, 2001
\bibitem[17]{lsmb}
T.~N. Pappas and D.~L. Neuhoff,
\textit{Least-squares model-based halftoning}.
in Proc. SPIE, Human Vision, Visual Proc., and Digital Display III, San Jose,
CA, Feb. 1992, vol. 1666, pp. 165--176
\bibitem[18]{stability}
R. Eschbach, Z. Fan, K.~T. Knox and G. Marcu,
\textit{Threshold Modulation and Stability in Error Diffusion}.
in Signal Processing Magazine, IEEE, July 2003, vol. 20, issue 4, pp. 39--50
\bibitem[19]{sullivan}
J. Sullivan, R. Miller and G. Pios,
\textit{Image halftoning using a visual model in error diffusion}.
J. Opt. Soc. Am. A, vol. 10, pp. 1714--1724, Aug. 1993
\end{thebibliography}