% This is LLNCS.DEM the demonstration file of
% the LaTeX macro package from Springer-Verlag
% for Lecture Notes in Computer Science,
% version 2.2 for LaTeX2e
%
\documentclass{llncs}
%
\usepackage{makeidx} % allows for index generation
\usepackage{graphicx} % for gnuplot epslatex stuff
\usepackage{color} % ditto
\usepackage{pstricks} % for inkscape TeX output
\usepackage{url} % for \url in the bibliography
%
\begin{document}
%
\mainmatter % start of the contributions
%
\title{Reinstating Floyd-Steinberg: Improved Metrics for Quality Assessment
of Error Diffusion Algorithms}
%
\titlerunning{Adapting Qualitative Metrics to Common Error Diffusion Algorithms} % abbreviated title (for running head)
% also used for the TOC unless
% \toctitle is used
%
\author{Sam Hocevar\inst{1} \and Gary Niger\inst{2}}
%
\authorrunning{Sam Hocevar et al.} % abbreviated author list (for running head)
%
%%%% modified list of authors for the TOC (add the affiliations)
\tocauthor{Sam Hocevar, Gary Niger (Laboratoire d'Imagerie Bureautique et de
Conception Artistique)}
%
\institute{Laboratoire d'Imagerie Bureautique et de Conception Artistique\\
14 rue de Plaisance, Paris, France
\and
143 Rolloffle Avenue, Tarzana, California 91356\\
\email{sam@hocevar.net}, \email{gary\_niger@gnaa.us}}
\maketitle % typeset the title of the contribution
\begin{abstract}
In this contribution we introduce a little-known property of error diffusion
halftoning algorithms which we call {\it error diffusion displacement}.
By accounting for the inherent sub-pixel displacement caused by the error
propagation, we correct an important flaw in most metrics used to assess the
quality of resulting halftones. We find that these metrics usually
underestimate the quality of error diffusion considerably in comparison with
more modern algorithms such as direct binary search.
Using empirical observation, we give a method for creating computationally
efficient, image-independent, model-based metrics for this quality assessment.
Finally, we use the properties of error diffusion displacement to justify
Floyd and Steinberg's well-known choice of algorithm coefficients.
{\bf Keywords}: halftoning, error diffusion, image quality, human visual
system, color quantization
\end{abstract}
%
\section{Introduction}
Image dithering is the process of reducing continuous-tone images to images
with a limited number of available colours. Applications vary tremendously,
from laser and ink-jet printing to display on small devices such as cellphones,
or even the design of banknotes.
Countless methods that try to best address the problem of colour reduction
have been published over the last 40 years. Comparing two algorithms in terms
of speed or memory usage is often straightforward, but how well a halftoning
algorithm performs quality-wise is a far more complex issue, as it depends
strongly on the display device and the inner workings of the human eye.
Though this document focuses on the particular case of bilevel halftoning,
most of our results can be directly adapted to the more generic problem of
colour reduction.
\section{Halftoning algorithms}
The oldest halftoning method is probably classical screening. This highly
parallelisable algorithm consists of tiling a dither matrix over the image
and using its elements as threshold values. Classical screening is known for
its structural artifacts, such as the cross-hatch patterns caused by Bayer
ordered dither matrices \cite{bayer}. However, modern techniques such as the
void-and-cluster method \cite{void1}, \cite{void2} make it possible to
generate screens yielding visually pleasing results.
\medskip Error diffusion dithering, introduced in 1976 by Floyd and Steinberg
\cite{fstein}, tries to compensate for the thresholding error through the use
of feedback. Typically applied in raster scan order, it uses an error diffusion
matrix such as the following one, where $x$ denotes the pixel being processed:
\[ \frac{1}{16} \left| \begin{array}{ccc}
- & x & 7 \\
3 & 5 & 1 \end{array} \right| \]
Though efforts have been made to make error diffusion parallelisable
\cite{parfstein}, it is generally considered more computationally expensive
than screening, but carefully chosen coefficients yield good visual results
\cite{kite}.
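For illustration, the scheme above can be sketched in a few lines of Python.
This sketch is ours, not part of the paper; it assumes a greyscale image with
intensities in $[0,1]$ stored as a NumPy array, thresholds at $0.5$, and
propagates the error with the $\frac{1}{16}\{7,3,5,1\}$ kernel shown above:

```python
import numpy as np

def floyd_steinberg(gray):
    """Bilevel Floyd-Steinberg error diffusion in raster scan order.

    Sketch only: `gray` is a 2-D array of intensities in [0, 1];
    returns a 0/1 halftone of the same shape.
    """
    img = np.asarray(gray, dtype=float).copy()
    height, width = img.shape
    out = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Propagate the thresholding error with 1/16 {7, 3, 5, 1}
            if x + 1 < width:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < width:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

On a constant mid-grey input this produces the familiar checkerboard-like
texture whose mean stays close to $0.5$, since almost all of the error is
diffused back into the image.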
\medskip Model-based halftoning is the third important algorithm category. It
relies on a model of the human visual system (HVS) and attempts to minimise
an error value based on that model. One such algorithm is direct binary search
(DBS) \cite{allebach}, also referred to as least-squares model-based halftoning
(LSMB) \cite{lsmb}.
HVS models are usually low-pass filters. Nasanen \cite{nasanen}, Analoui and
Allebach found that using Gaussian models gave visually pleasing results, an
observation confirmed by independent visual perception studies \cite{mcnamara}.
DBS yields halftones of impressive quality. However, despite efforts to make
it more efficient \cite{bhatt}, it suffers from its large computational
requirements, and error diffusion remains a more widely used technique.
\section{Error diffusion displacement}
Most error diffusion implementations parse the image in raster scan order.
Boustrophedonic (serpentine) scanning has been shown to cause fewer visual
artifacts \cite{halftoning}, but other, more complex processing paths such as
Hilbert curves \cite{spacefilling} are seldom used, as they do not improve the
image quality significantly.
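A serpentine path is straightforward to generate; the small sketch below
(ours, not the paper's) yields pixel coordinates in boustrophedonic order.
Note that on right-to-left rows the diffusion kernel must be mirrored
horizontally:

```python
def serpentine_scan(width, height):
    """Yield (x, y) coordinates in boustrophedonic (serpentine) order:
    even rows run left to right, odd rows right to left."""
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            yield x, y
```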
Intuitively, as the error is always propagated to the bottom-left or
bottom-right of each pixel (Fig. \ref{fig:direction}), one may expect the
resulting image to be slightly translated. This expectation is confirmed
visually when rapidly switching between an error diffused image and the
corresponding DBS halftone.
\begin{figure}
\begin{center}
\input{direction}
\caption{Floyd-Steinberg error diffusion direction in raster scan (left)
and serpentine scan (right).}\label{fig:direction}
\end{center}
\end{figure}
This small translation is visually innocuous, but we found that it has a
significant impact on error computation. A common way to compute the error
between an image $h_{i,j}$ and the corresponding binary halftone $b_{i,j}$ is
to compute the mean square error between modified versions of the images, in
the form:
\begin{equation}
E(h,b) = \frac{(||v * h_{i,j} - v * b_{i,j}||_2)^2}{wh}
\end{equation}
\noindent where $w$ and $h$ are the image dimensions, $*$ denotes the
convolution and $v$ is a model for the human visual system.
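This metric can be sketched in Python as follows (our illustration, assuming
an $11\times11$ Gaussian kernel with $\sigma = 1.2$, the parameters used in
Fig.~\ref{fig:lena-values}):

```python
import numpy as np

def gaussian_kernel(size=11, sigma=1.2):
    # 2-D Gaussian HVS model v, normalised to sum to 1
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    v = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return v / v.sum()

def conv_same(img, k):
    # 'same'-size 2-D convolution with zero padding; for the symmetric
    # Gaussian, convolution and correlation coincide
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def hvs_mse(h, b, size=11, sigma=1.2):
    # E(h,b): filter both images with the HVS model v, then take the
    # per-pixel mean square difference (the division by w.h)
    v = gaussian_kernel(size, sigma)
    d = conv_same(h, v) - conv_same(b, v)
    return (d ** 2).mean()
```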
To compensate for the slight translation observed in the halftone, we use the
following error metric instead:
\begin{equation}
E_{dx,dy}(h,b) = \frac{(||v * h_{i,j} - v * t_{dx,dy} * b_{i,j}||_2)^2}{wh}
\end{equation}
\noindent where $t_{dx,dy}$ is an operator which translates the image along the
$(dx,dy)$ vector. By design, $E_{0,0} = E$.
A simple example can be given using a Gaussian HVS model:
\begin{equation}
v(x,y) = e^{-\frac{x^2+y^2}{2\sigma^2}}
\end{equation}
Finding the second filter is then straightforward:
\begin{equation}
(v * t_{dx,dy})(x,y) = e^{-\frac{(x-dx)^2+(y-dy)^2}{2\sigma^2}}
\end{equation}
Experiments show that for a given image and a given corresponding halftone,
$E_{dx,dy}$ almost always has a local minimum away from $(dx,dy) = (0,0)$ (Fig.
\ref{fig:lena-values}). Let $E$ be an error metric for which this remains true.
We call the local minimum $E_{min}$:
\begin{equation}
E_{min}(h,b) = \min_{dx,dy}E_{dx,dy}(h,b)
\end{equation}
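A brute-force sketch of this minimisation is given below. It is ours, not the
paper's: the paper does not specify a search procedure, so a coarse grid over
$[-1,1]\times[-1,1]$ stands in for it, and the translation $t_{dx,dy}$ is
folded into the shifted Gaussian kernel derived above:

```python
import numpy as np

def shifted_gaussian(dx, dy, size=11, sigma=1.2):
    # (v * t_{dx,dy})(x,y) = exp(-((x-dx)^2 + (y-dy)^2) / (2 sigma^2)):
    # the translation of the halftone is folded into the HVS kernel
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    v = np.exp(-((x - dx) ** 2 + (y - dy) ** 2) / (2 * sigma ** 2))
    return v / v.sum()

def conv_same(img, k):
    # 'same'-size 2-D convolution with zero padding (kernel flipped,
    # so this is a true convolution even for asymmetric kernels)
    k = k[::-1, ::-1]
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def e_min(h, b, steps=9, size=11, sigma=1.2):
    # E_min(h,b) = min over (dx,dy) of E_{dx,dy}(h,b), on a coarse grid
    vh = conv_same(h, shifted_gaussian(0.0, 0.0, size, sigma))
    best = (float("inf"), 0.0, 0.0)
    for dx in np.linspace(-1.0, 1.0, steps):
        for dy in np.linspace(-1.0, 1.0, steps):
            d = vh - conv_same(b, shifted_gaussian(dx, dy, size, sigma))
            e = (d ** 2).mean()
            if e < best[0]:
                best = (e, dx, dy)
    return best  # (E_min, dx, dy)
```

In practice one would refine the search around the grid minimum to recover
sub-grid displacements such as the $(0.165,0.293)$ reported below.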
\begin{figure}
\begin{minipage}[c]{0.8\textwidth}
\input{lena-values}
\end{minipage}
\begin{center}
\caption{Mean square error for the \textit{Lena} image ($\times10^4$). $v$
is a simple $11\times11$ Gaussian convolution kernel with $\sigma
= 1.2$, and $(dx,dy)$ varies in $[-1,1]\times[-1,1]$.}
\label{fig:lena-values}
\end{center}
\end{figure}
For instance, a Floyd-Steinberg dither of \textit{Lena} with $\sigma = 1.2$
yields a per-pixel mean square error of $3.67\times10^{-4}$. However, when
taking the displacement into account, the error becomes $3.06\times10^{-4}$ for
$(dx,dy) = (0.165,0.293)$. The new, corrected error is significantly smaller,
with the exact same input and output images.
Experiments show that the corrected error is always noticeably smaller, except
in the case of images that are already mostly pure black and white. The
experiment was performed on a database of 10,000 images from common computer
vision sets and from the image board \textit{4chan}, providing a representative
sampling of the photographs, digital art and business graphics widely exchanged
on the Internet nowadays \cite{4chan}.
In addition to the classical Floyd-Steinberg and Jarvis-Judice-Ninke kernels,
we tested two serpentine error diffusion algorithms: Ostromoukhov's simple
error diffusion \cite{ostromoukhov}, which uses a variable coefficient kernel,
and Wong and Allebach's optimum error diffusion kernel \cite{wong}:
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
&~ $E\times10^4$ ~&~ $E_{min}\times10^4$ ~\\ \hline
~raster Floyd-Steinberg ~&~ 3.7902 ~&~ 3.1914 ~\\ \hline
~raster Ja-Ju-Ni ~&~ 9.7013 ~&~ 6.6349 ~\\ \hline
~Ostromoukhov ~&~ 4.6892 ~&~ 4.4783 ~\\ \hline
~optimum kernel ~&~ 7.5209 ~&~ 6.5772 ~\\
\hline
\end{tabular}
\end{center}
We clearly see that usual metrics underestimate the quality of error-diffused
halftones, especially in raster scan. Algorithms such as direct binary search,
on the other hand, do not suffer from this bias, since they are designed to
minimise the very error induced by the HVS model.
\section{An image-independent corrected quality metric for error-diffused
halftones}
We have seen that for a given image, $E_{min}(h,b)$ is a better and fairer
visual error measurement than $E(h,b)$. However, its major drawback is that it
is computationally very expensive: for each image, the optimal $(dx,dy)$
values need to be calculated in order to minimise the error value.
Fortunately, we found that for a given raster or serpentine scan
error diffusion algorithm, there was often very little variation in
the optimal $(dx,dy)$ values (Figs. \ref{fig:table-historaster} and
\ref{fig:table-histoserp}).
\begin{figure}
\begin{center}
\begin{minipage}[c]{0.50\textwidth}
\input{fs-histo}
\end{minipage}
\begin{minipage}[c]{0.40\textwidth}
\input{jajuni-histo}
\end{minipage}
\caption{Error diffusion displacement histograms for the raster
Floyd-Steinberg (left) and raster Jarvis-Judice-Ninke (right)
algorithms applied to a corpus of 10,000 images.}
\label{fig:table-historaster}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{minipage}[c]{0.50\textwidth}
\input{ostro-histo}
\end{minipage}
\begin{minipage}[c]{0.40\textwidth}
\input{serpopt-histo}
\end{minipage}
\caption{Error diffusion displacement histograms for the Ostromoukhov
(left) and optimum kernel (right) algorithms applied to a corpus
of 10,000 images.}
\label{fig:table-histoserp}
\end{center}
\end{figure}
For each algorithm, we choose the $(dx,dy)$ values at the histogram peak and
refer to them as the \textit{algorithm's displacement}, as opposed to the
\textit{image's displacement} for a given algorithm. We call $E_{fast}(h,b)$
the error computed at this fixed $(dx,dy)$. As $E_{fast}$ does not depend on
the image, it is far faster to compute than $E_{min}$, and as it is
statistically closer to $E_{min}$ than $E$ is, we can expect it to be a better
error estimate than $E$:
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
&~ $E\times10^4$ ~&~ $E_{min}\times10^4$ ~&~ $dx$ ~&~ $dy$ ~&~ $E_{fast}\times10^4$ ~\\ \hline
~raster Floyd-Steinberg ~&~ 3.7902 ~&~ 3.1914 ~&~ 0.16 ~&~ 0.28 ~&~ 3.3447 ~\\ \hline
~raster Ja-Ju-Ni ~&~ 9.7013 ~&~ 6.6349 ~&~ 0.26 ~&~ 0.76 ~&~ 7.5891 ~\\ \hline
~Ostromoukhov ~&~ 4.6892 ~&~ 4.4783 ~&~ 0.00 ~&~ 0.19 ~&~ 4.6117 ~\\ \hline
~optimum kernel ~&~ 7.5209 ~&~ 6.5772 ~&~ 0.00 ~&~ 0.34 ~&~ 6.8233 ~\\
\hline
\end{tabular}
\end{center}
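The selection of the algorithm's displacement from per-image optima can be
sketched as follows. This is our illustration only: the bin width `step` is a
hypothetical parameter, as the paper does not state the binning used for its
histograms.

```python
from collections import Counter

def algorithm_displacement(optimal_shifts, step=0.02):
    """Return the histogram peak of per-image optimal (dx, dy) values.

    `optimal_shifts` lists each image's optimal displacement; values
    are binned on a grid of width `step` and the mode is returned.
    E_fast is then E_{dx,dy} evaluated once at this fixed displacement.
    """
    binned = Counter((round(dx / step) * step, round(dy / step) * step)
                     for dx, dy in optimal_shifts)
    return binned.most_common(1)[0][0]
```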
\section{Using error diffusion displacement for optimum kernel design}
We believe that our higher quality $E_{min}$ error metric may be useful in
kernel design, because it is the very same error that admittedly superior yet
computationally expensive algorithms such as DBS try to minimise.
Our first experiment was a study of the Floyd-Steinberg-like 4-block error
diffusion kernels. According to the original authors, the coefficients were
found ``mostly by trial and error'' \cite{fstein}. With our improved metric, we
now have the tools to confirm or refute Floyd and Steinberg's initial choice.
We chose to do an exhaustive study of every $\frac{1}{16}\{a,b,c,d\}$ integer
combination. We deliberately chose non-negative integers whose sum was 16:
error diffusion coefficients smaller than zero or adding up to more than 1 are
known to be unstable \cite{stability}, and diffusing less than 100\% of the
error causes an important loss of detail in the shadow and highlight areas of
the image.
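This search space is small enough to enumerate exhaustively. A sketch of the
enumeration (ours, not the paper's): all $\{a,b,c,d\}$ with non-negative
integer coefficients summing to 16 (zero entries appear in the paper's own
ranking tables), i.e. 969 candidate kernels. Ranking them then requires
evaluating $E$ or $E_{min}$ over the image pool.

```python
def candidate_kernels(total=16):
    """Enumerate every 1/16 {a, b, c, d} kernel with non-negative
    integer coefficients summing to `total`, so that exactly 100% of
    the error is diffused."""
    return [(a, b, c, total - a - b - c)
            for a in range(total + 1)
            for b in range(total + 1 - a)
            for c in range(total + 1 - a - b)]
```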
We studied all possible coefficients on a pool of 3,000 images with an error
metric $E$ based on a standard Gaussian HVS model. $E_{min}$ is only given here
as an indication; only $E$ was used to select the best coefficients:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
~ rank ~&~ coefficients ~&~ $E\times10^4$ ~&~ $E_{min}\times10^4$ ~\\ \hline
~ 1 ~&~ 7 3 6 0 ~&~ 4.65512 ~&~ 3.94217 ~\\ \hline
~ 2 ~&~ 8 3 5 0 ~&~ 4.65834 ~&~ 4.03699 ~\\ \hline
\hline
~ 5 ~&~ 7 3 5 1 ~&~ 4.68588 ~&~ 3.79556 ~\\ \hline
\hline
~ 18 ~&~ 6 3 5 2 ~&~ 4.91020 ~&~ 3.70465 ~\\ \hline
~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\
\hline
\end{tabular}
\end{center}
The exact same operation using $E_{min}$ as the decision variable yields very
different results. Similarly, $E$ is only given here as an indication:
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
~ rank ~&~ coefficients ~&~ $E_{min}\times10^4$ ~&~ $E\times10^4$ ~\\ \hline
~ 1 ~&~ 6 3 5 2 ~&~ 3.70465 ~&~ 4.91020 ~\\ \hline
~ 2 ~&~ 7 3 5 1 ~&~ 3.79556 ~&~ 4.68588 ~\\ \hline
\hline
~ 15 ~&~ 7 3 6 0 ~&~ 3.94217 ~&~ 4.65512 ~\\ \hline
\hline
~ 22 ~&~ 8 3 5 0 ~&~ 4.03699 ~&~ 4.65834 ~\\ \hline
~ \dots ~&~ \dots ~&~ \dots ~&~ \dots ~\\
\hline
\end{tabular}
\end{center}
Our improved metric allowed us to confirm that the original Floyd-Steinberg
coefficients were indeed amongst the best possible for raster scan.
More importantly, using $E$ as the decision variable might have selected
$\frac{1}{16}\{7,3,6,0\}$ or $\frac{1}{16}\{8,3,5,0\}$, which are in fact poor
choices.
For serpentine scan, however, our experiment suggests that
$\frac{1}{16}\{7,4,5,0\}$ is a better choice than the Floyd-Steinberg
coefficients that have nonetheless been in wide use so far (Fig.
\ref{fig:lena7450}).
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{output-7-3-5-1-serp.eps}
~
\includegraphics[width=0.4\textwidth]{output-7-4-5-0-serp.eps}
\end{center}
\begin{center}
\includegraphics[width=0.4\textwidth]{crop-7-3-5-1-serp.eps}
~
\includegraphics[width=0.4\textwidth]{crop-7-4-5-0-serp.eps}
\caption{Halftone of \textit{Lena} using serpentine error diffusion
with the standard Floyd-Steinberg coefficients (\textit{left})
and with the optimum coefficients $\frac{1}{16}\{7,4,5,0\}$
(\textit{right}), which improve on the standard Floyd-Steinberg
coefficients in terms of visual quality for the HVS model used
in section 3. The detailed area (\textit{bottom}) shows fewer
structure artifacts in the regions with low contrast.}
\label{fig:lena7450}
\end{center}
\end{figure}
\section{Conclusion}
We have disclosed an interesting property of error diffusion algorithms
that allows a more precise measurement of the quality of such halftoning
methods. Having shown that this quality is often underestimated by usual
metrics, we hope to see even more development in simple error diffusion
methods.
Confirming Floyd and Steinberg's 30-year-old ``trial-and-error'' result with
our work is only the beginning: future work may cover more complex HVS models,
for instance by taking into account the angular dependence of the human eye
\cite{sullivan}. We plan to use our new metric to improve all error diffusion
methods that may require fine-tuning of their propagation coefficients.
%
% ---- Bibliography ----
%
\begin{thebibliography}{}
%
\bibitem[1]{bayer}
B. Bayer,
\textit{Color imaging array}.
U.S. patent 3,971,065 (1976)
\bibitem[2]{void1}
R.A. Ulichney (Digital Equipment Corporation),
\textit{Void and cluster apparatus and method for generating dither templates}.
U.S. patent 5,535,020 (1992)
\bibitem[3]{void2}
H. Ancin, A. Bhattacharjya and J. Shu (Seiko Epson Corporation),
\textit{Void-and-cluster dither-matrix generation for better half-tone
uniformity}.
U.S. patent 6,088,512 (1997)
\bibitem[4]{fstein}
R.W. Floyd, L. Steinberg,
\textit{An adaptive algorithm for spatial grey scale}.
Proceedings of the Society of Information Display 17, (1976) 75--77
\bibitem[5]{parfstein}
P. Metaxas,
\textit{Optimal Parallel Error-Diffusion Dithering}.
Color Imaging: Device-Indep. Color, Color Hardcopy, and Graphic Arts IV, Proc.
SPIE 3648, 485--494 (1999)
\bibitem[6]{kite}
T. D. Kite,
\textit{Design and Quality Assessment of Forward and Inverse Error-Diffusion
Halftoning Algorithms}.
PhD thesis, Dept. of ECE, The University of Texas at Austin, Austin, TX, Aug.
1998
\bibitem[7]{halftoning}
R. Ulichney,
\textit{Digital Halftoning}.
MIT Press, 1987
\bibitem[8]{spacefilling}
L. Velho and J. Gomes,
\textit{Digital halftoning with space-filling curves}.
Computer Graphics (Proceedings of SIGGRAPH 91), 25(4):81--90, 1991
\bibitem[9]{nasanen}
R. Nasanen,
\textit{Visibility of halftone dot textures}.
IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920--924, 1984
\bibitem[10]{allebach}
M. Analoui and J.~P. Allebach,
\textit{Model-based halftoning using direct binary search}.
Proc. of SPIE/IS\&T Symp. on Electronic Imaging Science and Tech.,
February 1992, San Jose, CA, pp. 96--108
\bibitem[11]{mcnamara}
Ann McNamara,
\textit{Visual Perception in Realistic Image Synthesis}.
Computer Graphics Forum, vol. 20, no. 4, pp. 211--224, 2001
\bibitem[12]{bhatt}
Bhatt \textit{et al.},
\textit{Direct Binary Search with Adaptive Search and Swap}.
\url{http://www.ima.umn.edu/2004-2005/MM8.1-10.05/activities/Wu-Chai/halftone.pdf}
\bibitem[13]{4chan}
moot,
\url{http://www.4chan.org/}
\bibitem[14]{wong}
P.~W. Wong and J.~P. Allebach,
\textit{Optimum error-diffusion kernel design}.
Proc. SPIE Vol. 3018, p. 236--242, 1997
\bibitem[15]{ostromoukhov}
Victor Ostromoukhov,
\textit{A Simple and Efficient Error-Diffusion Algorithm}.
in Proceedings of SIGGRAPH 2001, in ACM Computer Graphics, Annual Conference
Series, pp. 567--572, 2001
\bibitem[16]{lsmb}
T.~N. Pappas and D.~L. Neuhoff,
\textit{Least-squares model-based halftoning}.
in Proc. SPIE, Human Vision, Visual Proc., and Digital Display III, San Jose,
CA, Feb. 1992, vol. 1666, pp. 165--176
\bibitem[17]{stability}
R. Eschbach, Z. Fan, K.~T. Knox and G. Marcu,
\textit{Threshold Modulation and Stability in Error Diffusion}.
in Signal Processing Magazine, IEEE, July 2003, vol. 20, issue 4, pp. 39--50
\bibitem[18]{sullivan}
J. Sullivan, R. Miller and G. Pios,
\textit{Image halftoning using a visual model in error diffusion}.
J. Opt. Soc. Am. A, vol. 10, pp. 1714--1724, Aug. 1993
\end{thebibliography}
\end{document}