October 1991
LIDS-P-2039
Deformable kernels for early vision
Pietro Perona
Università di Padova, Dipartimento di Elettronica ed Informatica
and
California Institute of Technology, 116-81 Engineering and Applied Science, Pasadena, CA 91125
perona@verona.caltech.edu
Abstract
Early vision algorithms often have a first stage of linear filtering that 'extracts' from the image information at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize coarsely the space of scales and orientations in order to reduce computation and storage costs. This discretization produces anisotropies due to a loss of translation-, rotation- and scaling-invariance that makes early vision algorithms less precise and more difficult to design. This need not be so: one can compute and store efficiently the response of families of linear filters defined on a continuum of orientations and scales. A technique is presented that allows one (1) to compute the best approximation of a given family using linear combinations of a small number of 'basis' functions, and (2) to describe all finite-dimensional families, i.e. the families of filters for which a finite-dimensional representation is possible with no error. The technique is based on singular value decomposition and may be applied to generating filters in arbitrary dimensions. Experimental results are presented that demonstrate the applicability of the technique to generating multi-orientation multi-scale 2D edge-detection kernels. Implementation issues are also discussed.
This research was supported by the Army Research Office under grant DAAL86K0171 (Center for Intelligent Control Systems).
1 Introduction
Points, lines, edges, textures and motions are present in almost all images of the everyday world. These elementary visual structures often encode a great proportion of the information contained in the image; moreover, they can be characterized using a small set of parameters that are locally defined: position, orientation, characteristic size or scale, phase, curvature, velocity. It is therefore reasonable to start visual computations with measurements of these parameters. The earliest stage of visual processing, common to all the classical early vision modules, could consist of a collection of operators that calculate one or more dominant orientations, curvatures, scales and velocities at each point of the image or, alternatively, assign an 'energy', or 'probability', value to points of a position-orientation-phase-scale (etc.) space. Ridges and local maxima of this energy would mark special interest loci such as edges and junctions. The idea that biological visual systems might analyze images along dimensions such as orientation and scale dates back to the work of Hubel and Wiesel [21, 20] in the 1960's. In the computational vision literature the idea of analyzing images along multiple orientations appears at the beginning of the seventies with the Binford-Horn line-finder [19, 4] and later work by Granlund [16]. A computational framework that may be used to perform this proto-visual analysis is the convolution of the image with kernels of various shapes, orientations, phases, elongations and scales. This approach is attractive because it is simple to describe, implement and analyze. It has been proposed and demonstrated for a variety of early vision tasks [28, 27, 6, 3, 7, 18, 44, 34, 32, 35, 12, 31, 5, 45, 25, 26, 15, 39, 2].
Various 'general' computational justifications have been proposed for basing visual processing on the output of a rich set of linear filters: (a) Koenderink has argued that a structure of this type is an adequate substrate for local geometrical computations [29] on the image brightness; (b) Adelson and Bergen [2] have derived it from the 'first principle' that the visual system computes derivatives of the image along the dimensions of wavelength, parallax, position and time; (c) a third point of view is that of 'matched filtering', where the kernels are synthesized to match the visual events that one looks for. The kernels proposed in the computational literature have typically been chosen according to one or more of three classes of criteria: (a) 'generic optimality' (e.g. optimal sampling of space-frequency space); (b) 'task optimality' (e.g. signal-to-noise ratio, localization of edges); (c) emulation of biological mechanisms. While there is no general consensus in the literature on precise kernel shapes, there is convergence on kernels roughly shaped like either Gabor functions, or derivatives or differences of either round or elongated Gaussian functions; all these functions have the advantage that they can be specified and computed easily. A good rule of thumb in the choice of kernels for early vision tasks is that they should have good localization in space and frequency, and should be roughly tuned to the visual events one wants to analyze. Since points, edges, lines, textures and motions can exist at all possible positions, orientations, scales of resolution and curvatures, one would like to be able to use families of filters that are tuned to all orientations, scales and positions. Therefore, once a particular convolution kernel has been chosen, one would like to convolve the image with deformations (rotations, scalings, stretchings, bendings etc.) of this 'template'.
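For concreteness, the two kernel shapes on which the literature converges can be sampled in a few lines. This is a sketch under our own choices (grid size, σ's and frequency are illustrative, not taken from the paper):

```python
import numpy as np

def gabor(size=64, sigma=8.0, freq=0.1, theta=0.0):
    """Complex Gabor kernel: Gaussian envelope times a complex exponential."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    # rotate coordinates so the carrier runs along direction theta
    u = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(2j * np.pi * freq * u)

def gaussian_d2(size=64, sx=8.0, sy=4.0):
    """Second Y-derivative of an elongated Gaussian (sx > sy)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x / sx) ** 2 - (y / sy) ** 2)
    # d^2/dy^2 of exp(-(y/sy)^2), computed analytically
    return (4 * y**2 / sy**4 - 2 / sy**2) * g

k1, k2 = gabor(), gaussian_d2()
print(k1.shape, k2.shape)   # (64, 64) (64, 64)
```

Both kernels are localized in space and frequency; the derivative kernel sums to (approximately) zero, as a differentiating filter should.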
In reality one can afford only a finite (and small) number of filtering operations, hence the common practice of 'sampling' the set of orientations, scales, positions, curvatures and phases¹. This operation has the strong drawback of introducing anisotropies
¹Motion flow computation using spatiotemporal filters has been proposed by Adelson and Bergen [3] as a model of human vision and has been demonstrated by Heeger [18] (his implementation had 12 discrete spatiotemporal orientations and 3 scales of resolution). Work on texture with multiple-resolution multiple-orientation kernels is due to Knutsson and Granlund [28] (4 scales, 4 orientations, 2 phases), Turner [44] (4 scales, 4 orientations, 2 phases), Fogel and Sagi [12] (4 scales, 4 orientations, 2 phases), Malik and Perona [31] (11 scales, 6 orientations, 1 phase)
and algorithmic difficulties in the computational implementations. It would be preferable to keep thinking in terms of a continuum, of angles for example, and be able to localize the orientation of an edge with the maximum accuracy allowed by the filter one has chosen. This aim may sometimes be achieved by means of interpolation: one convolves the image with a small set of kernels, say at a number of discrete orientations, and obtains the result of the convolution at any orientation by taking linear combinations of the results. Since convolution is a linear operation, the interpolation problem may be formulated in terms of the kernels (for the sake of simplicity the case of rotations in the plane is discussed here): given a kernel F : R² → C¹, define the family of 'rotated' copies of F as F_θ = F ∘ R_θ, θ ∈ S¹, where S¹ is the circle and R_θ is a rotation. Sometimes it is possible to express F_θ as
F_θ(x) = Σ_{i=1}^{n} a_i(θ) G_i(x),   ∀θ ∈ S¹, ∀x ∈ R²    (1)
a finite linear combination of functions G_i : R² → C¹. It must be noted that, at least for positions and phases, the mechanism for realizing this in a systematic way is well understood: in the case of positions the sampling theorem gives conditions and an interpolation technique for calculating the value of the filtered image at any point in a continuum; in the case of phases a pair of filters in quadrature can be used for calculating the response at any phase [3, 33]. Rotations, scalings and other deformations are less well understood. An example of a 'rotating' family of kernels that has a finite representation is well known: the first derivative along an arbitrary direction of a round (σ_x = σ_y) Gaussian may be obtained by linear combination of the X- and Y-derivatives of the same. The common implementations of the Canny edge detector [7] are based on this principle. Unfortunately the kernel obtained this way has poor orientation selectivity and is therefore unsuited for edge detection if one wants to recover edge junctions (see in Fig. 1 the comparison with a detector that uses narrow orientation-selective filters). Freeman and Adelson have recently argued [15, 14] that it would be desirable to construct orientation-selective kernels that can be exactly rotated by interpolation (they call this property "steerability" and the term will be used in this paper) and have shown that higher-order derivatives of round Gaussians, indeed all polynomials multiplied by a radially symmetric function, are steerable (they have a more general result; see comments to Theorem 1). For high polynomial orders these functions may be designed to have higher orientation selectivity and can be used for contour detection and signal processing [15]. However, one must be aware of the fact that for most kernels F of interest a finite decomposition of F_θ as in Eq. (1) cannot be found. For example the elongated kernels used in edge detection by [38, 39] (see Fig.
1 top right) do not have a finite decomposition as in Eq. (1). One needs an approximation technique that, given an F_θ, allows one to generate a function G_θ^[n] which is sufficiently similar to F_θ and that is steerable, i.e. can be expressed as a finite sum of n terms as in (1). Freeman and Adelson propose to approximate the kernel with an adequately high-order polynomial multiplied by a radially symmetric function (which they show is steerable). However, this method does not guarantee a parsimonious approximation: given a tolerable amount
and Bovik et al. [5] (n scales, m orientations, l phases). Work on stereo by Kass [27] (12 filters; scales, orientations and phases unspecified) and Jones and Malik [25, 26] (6 scales, 26 orientations, 2 phases). Work on curved line grouping by Parent and Zucker [35] (1 scale, 8 orientations, 1 phase) and Malik and Gigus [30] (9 curvatures, 1 scale, 18 orientations, 2 phases). Work on brightness edge detection by Binford and Horn [19, 4] (24 orientations), Canny [7] (1-2 scales, ∞ orientations, 1 phase), Morrone, Owens and Burr [34, 32] (1-3 scales, 24 orientations, ∞ phases), unpublished work on edge and illusory contour detection by Heitger, Rosenthaler, Kübler and von der Heydt (6 orientations, 1 scale, 2 phases). Image compression by Zhong and Mallat [45] (4 scales, 2 orientations, 1 phase).
[Figure 1 panels: (Paolina) original image; (T junction) detail; polar plots of the filter output; 'energies 2-sided 8x8'; 'orientations 8x8'; Perona-Malik σ_x = 3, σ_y = 1; Canny σ = 1.]
Figure 1: Example of the use of orientation-selective filtering on a continuum of orientations (see Perona and Malik [38, 39]). (Top left) Original image. (Top right) A T-junction (64x64 pixel detail from a region roughly at the centre of the original image). (Middle left) The kernel of the filter (it is (gaus3) in Fig. 3) is elongated to have high orientation selectivity. (Middle centre) Modulus R(x, y, θ) of the output of the complex-valued filter (polar plot shown for 8x8 pixels in the region of the T-junction). (Middle right) The local maxima of R(x, y, θ) with respect to θ. Notice that in the region of the junction one finds two local maxima in θ, corresponding to the orientations of the edges. Searching for local maxima in (x, y) in a direction orthogonal to the maximizing θ's one can find the edges (Bottom left) with high accuracy (error around 1 degree in orientation and 0.1 pixels in position). (Bottom right) Comparison with the output of a Canny detector using the same kernel width (σ in pixel units).
of error one would like to find an approximating G^[n] that has the minimum number n of components. A different design perspective could also be taken: given a number n of filtering operations allowed, synthesize the best (with respect to the specific task at hand) kernel within the class of functions that can be exactly represented by a sum of n terms. It is therefore useful to be able to answer the question: what is the set of functions that can be represented exactly as in Eq. (1)? Neither this question nor the approximation question has been addressed in the vision and signal processing literature so far. This paper is organized as follows: the special case of rotations (Eq. (1)) is explored and solved in section 2 and appendix A.1. In section 3 a few results from functional analysis are recalled to extend the approximation technique to all 'compact' deformations. In section 4 an application of the approximation technique to generating steerable filters for edge detection is described. In section 5 it is shown how to generate a steerable and scalable family. Experimental results and implementation issues are presented and discussed for the schemes presented in sections 4 and 5.
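The round-Gaussian example mentioned above is easy to verify numerically: the derivative along any direction θ equals cos(θ)·G_x + sin(θ)·G_y, so two basis kernels steer the family exactly. A minimal sketch (grid size and σ are our own choices):

```python
import numpy as np

def gauss_dx_dy(size=65, sigma=6.0):
    """X- and Y-derivatives of a round Gaussian, sampled analytically."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -x / sigma**2 * g, -y / sigma**2 * g

def steered(theta, gx, gy):
    """Derivative along direction theta by interpolation (Eq. (1), n = 2)."""
    return np.cos(theta) * gx + np.sin(theta) * gy

def direct(theta, size=65, sigma=6.0):
    """Directional derivative constructed directly, for comparison."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    u = x * np.cos(theta) + y * np.sin(theta)   # coordinate along theta
    return -u / sigma**2 * np.exp(-(x**2 + y**2) / (2 * sigma**2))

gx, gy = gauss_dx_dy()
err = np.abs(steered(0.7, gx, gy) - direct(0.7)).max()
print(err < 1e-12)   # True: steering is exact for this kernel
```

For an elongated kernel such as the one in Fig. 1 no such finite identity holds, which is exactly the motivation for the approximation developed next.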
2 Steerable approximations
In order to solve the approximation problem proposed in the introduction one needs of course to define the 'quality' of the approximation G^[n] ≈ F_θ. There are two reasonable choices: (a) a distance D(F_θ, G^[n]) in the space R² × S¹ where F_θ is defined; (b) if F_θ is the kernel of some filter, the worst-case error in the 'output' space: the maximum distance d((F_θ, f), (G^[n], f)) over all unit-norm f defined on R². The symbols Δ_n and δ_n will indicate the 'optimal' distances, i.e. the minimum possible approximation errors using n components. These quantities may be defined using the distances induced by the L² norm:

Definition.

D_n(F_θ, G^[n]) = ‖F_θ − G^[n]‖_{L²(R²×S¹)}

Δ_n(F_θ) = inf_{G^[n]} D_n(F_θ, G^[n])

d_n(F_θ, G^[n]) = sup_{‖f‖=1} ‖(F_θ − G^[n], f)‖_{L²(S¹)}

δ_n(F_θ) = inf_{G^[n]} d_n(F_θ, G^[n])
Consider the approximation to F_θ defined as follows:

Definition. Call F_θ^[n] the n-term sum:

F_θ^[n](x) = Σ_{i=1}^{n} σ_i a_i(x) b_i(θ)    (2)

with σ_i, a_i and b_i defined in the following way: let ĥ(ν) be the (discrete) Fourier transform of the function h(θ) defined by:

h(θ) = ∫_{R²} F_θ(x) F*_{θ=0}(x) dx    (3)

and let ν_i be the frequencies on which ĥ(ν) is defined, ordered in such a way that ĥ(ν_i) ≥ ĥ(ν_j) if i < j. Call N ≤ ∞ the number of nonzero terms ĥ(ν_i). Finally, define the quantities:

σ_i = ĥ(ν_i)^{1/2}    (4)

b_i(θ) = e^{j2πν_iθ}    (5)

a_i(x) = σ_i⁻¹ ∫_{S¹} F_θ(x) e^{−j2πν_iθ} dθ    (6)
Then F_θ^[n] is the best n-dimensional approximation to F_θ in the following sense:

Theorem 1 Given the definitions and notation introduced above, suppose that F ∈ L²(R²); then:

1. {a_i} and {b_i} are orthonormal sequences of functions.

2. F_θ^[N] is the smallest possible exact representation of F_θ, i.e. if ∃ M, p_i, g_i s.t. F_θ(x) = Σ_{i=1}^{M} p_i(θ) g_i(x) then M ≥ N.

3. The number N of terms is finite iff the number M of indices i for which a_i(x) ≠ 0_{L²(R²)} is finite, and N = M.

4. F_θ^[n] is an optimal n-approximation of F_θ with respect to both distances:

D_n(F_θ, F_θ^[n]) = Δ_n(F_θ) = (Σ_{i=n+1}^{N} σ_i²)^{1/2}    (7)

d_n(F_θ, F_θ^[n]) = δ_n(F_θ) = σ_{n+1}    (8)

5. D_n, δ_n → 0 as n → N.

6. F_θ^[n] = F_0^[n] ∘ R_θ for θ ∈ S¹.

7. ∃ θ_1, ..., θ_n s.t. F_θ^[n] = Σ_{i=1}^{n} α_i(θ) F_{θ_i}^[n]. In fact this is true for all choices of θ_1, ..., θ_n but a set of measure zero.
Comment. 1. The expression for the b_i is independent of F; only the ν_i and a_i depend on F. The b_i depend on the particular group of transformations (the rotations of the plane in this case) that is used to generate F_θ from F. 3. The 'if' part of statement 3 is equivalent to the main theorem of [13]: Freeman and Adelson show that all functions that one can write as a linear combination of functions like the a_i's (polar-separable with sinusoidal θ component) are steerable. The 'only if' part says that the functions they described are all the steerable functions. 4. For deciding at what point n to truncate the sum one plots the error δ_n or Δ_n vs. n and looks for the smallest integer n for which the error is less than some assigned value. See Figs. 4, 6. 6. This means that F_θ^[n] is steerable, i.e. its shape does not change with θ, modulo a rotation of the domain. Therefore F_θ^[n] is the best approximation of F_θ in the space of 'n-steerable' functions ('best' is intended with respect to the L²-induced distance).
7.1. A set of size n of rotated copies of F_θ^[n] is enough for representing F_θ^[n], so we may choose to use this different decomposition instead of (2). On the other hand this representation has some numerical disadvantages: (1) the set F_{θ_i} is not orthonormal, so its numerical implementation is less efficient (it will require more significant bits in the calculations to obtain the same final precision); (2) the functions a_i are easier to approximate with sums of XY-separable functions than the F_{θ_i} (see the experimental section 4.2, and Fig. 8). 7.2. The error d(F_θ, F_θ^[n]) of the n-approximation is constant with respect to θ since F_θ = F ∘ R_θ and F_θ^[n] = F_0^[n] ∘ R_θ. There is no anisotropy even if F_θ^[n] is an approximation. The proof of this theorem can be found in appendix A.1. It is based on the fact that the triples (σ_i, a_i(x), b_i(θ)) are the singular value decomposition (SVD) of a linear operator associated with F_θ(x). (From now on the abbreviation SVD will be used to indicate the decomposition of a kernel into such triples.)
3 Deformable functions
The existence of the optimal finite-sum approximation of the kernel F_θ(x) described in the previous section and Sec. A.1 is not peculiar to the case of rotations. It holds in more general circumstances: this section collects a few facts of functional analysis showing that one can compute finite optimal approximations to continuous families of kernels whenever certain 'compactness' conditions are met. Consider a parametrized family of kernels F(x; θ), where x ∈ X now indicates a generic vector of variables in a set X and θ ∈ T a vector of parameters in a set T. (The notation is changed slightly from the previous section.) Consider the sets A and B of continuous functions from X and T to the complex numbers, and call a(x) and b(θ) the generic elements of these two sets. Proceed as at the beginning of Sec. A.1 and consider the operator L : A → B defined by F as (La(·))(θ) = (F(·; θ), a(·))_A. A first theorem says that if the kernel F has bounded norm then the associated operator L is compact (see [8] p. 316):

Theorem 2 Let X and T be locally compact Hausdorff spaces and F ∈ L²(X × T). Then L is well defined and is a compact operator.

Such a kernel is commonly called a Hilbert-Schmidt kernel. A second result tells us that if a linear operator is compact, then it has a discrete spectrum (see [9] p. 323):

Theorem 3 Let L be a compact operator on (complex) normed spaces; then the spectrum S of L is at most denumerable.

A third result says that if L is continuous and operates on Hilbert spaces then the compactness property transfers to the adjoint of L (see [9] p. 329):

Theorem 4 Let L be a compact operator on Hilbert spaces; then the adjoint L* is compact.

Trivially, the composition of two compact operators is compact, so the operators LL* and L*L are compact and have a discrete spectrum as guaranteed by Theorem 3. The singular value
decomposition (SVD) for the operator L can therefore be computed as the collection of triples (σ_i, a_i, b_i), i = 0, ..., where the σ_i constitute the spectra of both LL* and L*L and the a_i and b_i are the corresponding eigenvectors. The last result can now be enunciated (see [40] Chap. IV, Theorem 2.2):

Theorem 5 Let L : A → B be a linear compact operator between two Hilbert spaces. Let (a_i, b_i, σ_i) be the singular value decomposition of L, where the σ_i are in decreasing order of magnitude. Then:

1. An optimal n-dimensional approximation to L is L_n = Σ_{i=1}^{n} σ_i a_i b_i.

2. The approximation error is d_n(L) = σ_{n+1}, Δ_n²(L) = Σ_{i=n+1}^{N} σ_i².
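In the discrete setting Theorem 5 reduces to the Eckart-Young property of the matrix SVD, which is easy to check numerically; the random matrix below is merely a stand-in for a sampled Hilbert-Schmidt kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((40, 30))          # discretized operator

U, s, Vt = np.linalg.svd(L, full_matrices=False)

n = 5
Ln = (U[:, :n] * s[:n]) @ Vt[:n]           # optimal n-term approximation

# operator (spectral) error equals the first discarded singular value
spec_err = np.linalg.svd(L - Ln, compute_uv=False)[0]
print(np.isclose(spec_err, s[n]))          # True

# squared L2 (Frobenius) error equals the sum of discarded sigma_i^2
frob_err2 = np.sum((L - Ln) ** 2)
print(np.isclose(frob_err2, np.sum(s[n:] ** 2)))   # True
```

The two printed checks are the discrete analogues of Eqs. (8) and (7) respectively.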
As a result we know that when our original template kernel F(x) and the chosen family of deformations R(θ) define a Hilbert-Schmidt kernel F(x; θ) = (F ∘ R(θ))(x), then it is possible to compute a finite discrete approximation as for the case of 2D rotations. Are the families of kernels F(x; θ) of interest in vision Hilbert-Schmidt kernels? In the cases of interest for vision applications the 'template' kernel F(x) typically has a finite norm, i.e. it belongs to L²(X) (all kernels used in vision are bounded compact-support kernels such as Gaussian derivatives, Gabors etc.). However, this is not a sufficient condition for the family F(x; θ) = (F ∘ R(θ))(x) obtained composing F(x) with deformations R(θ) (rotations, scalings) to be a Hilbert-Schmidt kernel: the norm of F(x; θ) could be unbounded. A sufficient condition for the associated family F(x; θ) to be a Hilbert-Schmidt kernel is that the inverse of the Jacobian of the transformation R, |J_R|⁻¹, belongs to L¹(T). In fact the norm of F(·; ·) is bounded above by the product of the norm of F in X and the norm of |J_R|⁻¹ in T:

‖F(·; ·)‖² = ∫_T ∫_X |F(x; θ)|² dx dθ
           = ∫_T ∫_X |(F ∘ R(θ))(x)|² dx dθ
           = ∫_T ∫_X |J_R(θ)|⁻¹ |F(y)|² dy dθ
           = ‖F(·)‖²_X ‖ |J_R(·)|⁻¹ ‖_T

which is bounded by hypothesis. A typical condition in which this arises is when the transformation R is unitary, e.g. a rotation, translation, or an appropriately normalized scaling, and the set T is bounded. In that case the norm of |J_R|⁻¹ is equal to the measure of T. The following sections of this paper illustrate the power of these results by applying them to the decomposition of rotating 2D kernels (section 2), of 2D kernels into sums of XY-separable kernels (section 4.2), and of rotated and scaled kernels (section 5). A useful subclass of kernels F for which the finite orthonormal approximation can be in part explicitly computed is obtained by composing a template function with transformations T_θ belonging to a compact group. This situation arises in the case of n-dimensional rotations and is useful for edge detection in tomographic data and spatiotemporal filtering. It is discussed in [36, 37].
4 Steerable approximations: practical issues and experimental results
In this section the formalism described so far is applied to the problem of generating steerable and scalable approximations to convolution kernels for an edge detector. The Gaussian-derivative
[Figure 2 plot: 'Kernel of LL*', h(angle) × 10³ vs. angle for kernels 1, 2, 3.]
Figure 2: The kernel h(θ) (see Eqs. (3) and (23)). Angle θ in abscissa. The template function is as in Fig. 3 with σ_x : σ_y ratios of 1:1, 1:2, 1:3. The orientation selectivity of the filters with impulse response equal to the template functions can be estimated from these plots: the half-width of the peak is approximately 60°, 35°, 20°.
kernels used by [38, 39] have been chosen for this example. These kernels have an elongated shape to achieve high orientation selectivity (equivalently, narrow orientation tuning). The real part of the kernels is a Gaussian G(x; σ_x, σ_y) = exp(−((x/σ_x)² + (y/σ_y)²)) differentiated twice along the Y axis. The imaginary part is the Hilbert transform of the real part taken along the Y axis (see Figures 3 and 6, top left). One more twist is added: the functions a_i in the decomposition may in turn be decomposed as sums of a small number of XY-separable functions, making the implementation of the filters considerably faster.

4.1 Rotations
In the case of rotations Theorem 1 may be used directly to compute the decomposition. The calculations proceed as indicated in section 2. For convenience they are summarized in a recipe:

1. Select the 'template' kernel F(x) of which one wants rotated versions F_θ(x) = F ∘ R_θ(x) = F(x cos θ + y sin θ, −x sin θ + y cos θ), and the maximum tolerable error η.

2. Compute numerically the function h(θ) using its definition (Eq. (3)). See Fig. 2.

3. Compute the discrete Fourier transform ĥ of h. Verify that the coefficients ĥ_k are nonnegative.
[Figure 3 panels: the kernel (gaus3); singular values vs. singular frequencies (the first 28 shown on a logarithmic scale plotted against the associated singular frequency); the singular functions (sfnc.0)-(sfnc.8).]
Figure 3: The decomposition (a_i, b_i, σ_i) of a complex kernel used for brightness-edge detection [39], for i = 0, ..., 8. The real part is above, the imaginary part below. The functions b_i(θ) are complex exponentials (see text) with associated frequencies ν_i = i.
Figure 4: (Top) Reconstruction error vs. number of components (logarithmic scale). (Bottom) The kernel (gaus3) of Fig. 3 reconstructed with 4, 9 and 15 components. The error of the reconstruction in the computer implementation is respectively 50.2%, 13.4% and 2.2% (see Fig. 6).
Figure 5: Functions reconstructed at angles 0°, 50°, 111°, 117° from the optimal decomposition shown in Fig. 3. Nine components were used. The reconstruction error is 13% at all angles (see caption of Fig. 4).
4. Order the ĥ_k computed at the previous step by decreasing magnitude and call their square roots σ_i and the corresponding frequencies ν_i (see Eq. (4)).

5. Define the b_i(θ) according to Eq. (5) and the ν_i calculated at the previous step.

6. Compute the errors Δ(n) and δ(n) from Eqs. (7) and (8) (see Fig. 6). Obtain the number n of components required for the approximation as the first integer for which the error drops below the tolerable error η.

7. Compute the functions a_i(x) using Eq. (6). (See Fig. 3.)

8. The n-approximation of F_θ(x) can now be calculated using Eq. (2).

The numerical implementation of the formulae of section 2 is straightforward. In the implementation used to produce the figures and the data reported in this section the kernels F_θ were defined on a 128x128 array of single-precision floating-point numbers. The set of all angles was discretized in 128 samples. The Y-axis variance was σ_y = 8 pixels, and the X-axis variance was σ_x = kσ_y with k = 1, 2, 3. Calculations were reduced by a factor of two by exploiting the hermitian symmetry of these kernels; the number of components can also be halved. The experimental data given below and in the figures are calculated this way.

Notice first that the coefficients σ_i converge to zero exponentially fast; as a consequence the same is true for both errors. This is very important in practice since it implies that a very small number of coefficients is required. The kernel reconstructed using 9 components is shown at four different angles in Fig. 5. The reconstruction may be computed at any angle θ in a continuum. In Fig. 4 (Bottom) the approximate reconstruction is shown for n = 4, 9, 15. Notice that the elongation and therefore the 'orientation selectivity' of the filter increases with the number of components. In Fig. 7 the modulus of the response of the complex filter to an edge is shown for two different kernels and increasing levels of approximation. The number n of singular components required to reconstruct the σ_x : σ_y = 1:1 and σ_x : σ_y = 1:2 families is smaller, as indicated in the caption of Fig. 6.
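The recipe can be sketched end to end in a few lines. This is an illustration under our own choices (a real-valued elongated Gaussian second derivative on a 64x64 grid with 64 angles, rather than the paper's complex 128x128 kernels); all names are ours:

```python
import numpy as np

K, size, sx, sy = 64, 64, 9.0, 3.0
r = np.arange(size) - size // 2
X, Y = np.meshgrid(r, r)

def template(x, y):
    """Elongated Gaussian differentiated twice along y (real part only)."""
    g = np.exp(-(x / sx) ** 2 - (y / sy) ** 2)
    return (4 * y**2 / sy**4 - 2 / sy**2) * g

# Step 1: rotated copies F_theta on K discrete angles.
thetas = 2 * np.pi * np.arange(K) / K
F = np.array([template(X * np.cos(t) + Y * np.sin(t),
                       -X * np.sin(t) + Y * np.cos(t)) for t in thetas])

# Steps 2-3: h(theta) = <F_theta, F_0> and its DFT
# (nonnegative up to discretization error).
h = np.tensordot(F, F[0], axes=([1, 2], [0, 1]))
h_hat = np.fft.fft(h).real / K

# Step 4: sort the frequencies by decreasing h_hat; sigma_i = h_hat^(1/2).
order = np.argsort(h_hat)[::-1]
sigma = np.sqrt(np.clip(h_hat[order], 1e-30, None))

# Steps 5-8: b_i from Eq. (5), a_i from Eq. (6), reconstruction from Eq. (2).
def reconstruct(n):
    Fr = np.zeros_like(F)
    for nu, s in zip(order[:n], sigma[:n]):
        b = np.exp(2j * np.pi * nu * np.arange(K) / K)        # Eq. (5)
        a = np.tensordot(b.conj(), F, axes=(0, 0)) / (s * K)  # Eq. (6)
        Fr += (s * b[:, None, None] * a[None]).real           # Eq. (2)
    return Fr

for n in (4, 9, 15):
    err = np.linalg.norm(F - reconstruct(n)) / np.linalg.norm(F)
    print(n, round(err, 3))   # error decreases as components are added
```

With all K components the reconstruction is exact up to rounding, since the b_i then form a complete discrete Fourier basis in θ; the point of the recipe is that for smooth oriented kernels a handful of components already suffices.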
[Figure 6 plots: (Left) the kernels (gaus1), (gaus2), (gaus3); (Right) log reconstruction error vs. number of components.]
Figure 6: Comparison of the error plots for three kernels constructed by differentiating Gaussians of different aspect ratios as in Fig. 3. (Left) The three kernels shown at an angle of 120°; the ratios σ_x : σ_y are respectively 1:1, 1:2, 1:3. (Right) The log of the reconstruction errors plotted against the number n of components employed. For 10% reconstruction error 3, 6, 10 components are needed; for 5% reconstruction error 3, 7, 12 components are needed. Notice that for these Gaussian-derivative functions the reconstruction error decreases roughly exponentially with the number n of components employed: Δ_n ≈ exp(−n/τ) with τ ≈ 1.7, 5.2, 8.2.
[Figure 7 plots: energy × 10³ vs. angle for (a) Gaussian 2:1 second derivative and (b) Gaussian 3:1 second derivative, with 2, 4, 6, ... components.]
Figure 7: Magnitude of the response vs. orientation angle of orientation-selective kernels. The image is an edge oriented at 120° (notice the corresponding peak in the filter responses). The kernels are as in Fig. 6: (gaus2) for (a) and (gaus3) for (b). The plots show the response of the filters for increasing approximation. The first approximation (2 components) gives a broadly tuned response, while the other approximations (4, 6, 8, ... components) have more or less the same orientation selectivity (half-width of the peak at half-height). The peak of the response sharpens and the precision of the approximation is increased (1% error for the top curves) when more components are used.
4.2 XY separability
Whenever a function F is to be used as a kernel for a 2D convolution it is of practical importance to know whether the function is XY-separable, i.e. whether there are two functions f^X and f^Y such that F(x, y) = f^X(x)f^Y(y). If this is true the 2D convolution can be implemented cheaply as a sequence of two 1D convolutions. Even when the kernel F is not XY-separable it may be the sum of a small number of separable kernels [41]: F(x, y) = Σ_i f_i^X(x)f_i^Y(y). One may notice the analogy of this decomposition with the one expressed in Eq. (1): the function F(x, y) may be thought of as a kernel defining an operator from a space A of functions f^X(x) to a space B of functions f^Y(y). At a glance one can see that the kernels a_i of Fig. 3 are Hilbert-Schmidt: they have bounded norm. Therefore (see sec. 3) a discrete optimal decomposition and approximation are possible. Again the SVD machinery may be applied to calculate the decomposition of each one of the a_i and its optimal n_i-component approximation. If the SVD of a_i is indicated as a_i(x, y) = Σ_h ρ_ih a_ih^X(x) a_ih^Y(y), then the approximate decomposition of F_θ(x, y) expressed in Eq. (2) and (20) becomes:
F_θ(x, y) ≈ Σ_{i=1}^N α_i b_i(θ) Σ_{h=1}^{n_i} ρ_ih a_ih^X(x) a_ih^Y(y)        (9)
How is this done in practice? For all practical purposes the kernels a_i are defined on a discrete lattice. The SVD of a kernel defined on a discrete M × N rectangular lattice may be computed numerically using any one of the common numerical libraries [10, 42], treating it as an M × N matrix A of rank R. The typical notation for the matrix SVD is A = UWV^T, where U and V are orthonormal M × R and N × R matrices and W is R × R diagonal with positive entries w_k of decreasing magnitude along the diagonal. If this is written as
A = Σ_{k=1}^R w_k u_k v_k^T        (10)
where u_k and v_k are columns of U and V, the analogy with Eq. (2) becomes obvious. Notice that the row index of U (hidden in the vector notation) plays the role of the coordinate y, and the row index of V plays the role of the coordinate x. Rewriting the above in continuous notation we obtain:
a(x, y) = Σ_{k=1}^R w_k u_k(y) v_k(x)        (11)
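As a concrete illustration, the separable expansion (10)-(11) can be computed with any numerical SVD routine; the sketch below uses numpy in place of the LINPACK-style libraries cited above. The kernel, grid and sizes are made up for the example, not taken from the paper:

```python
import numpy as np

# Sketch of XY-separation by matrix SVD (hypothetical kernel and grid).
# A rotated anisotropic Gaussian derivative is not XY-separable, but a few
# separable terms w_k * u_k(y) v_k(x) reconstruct it accurately.
x = np.linspace(-3.0, 3.0, 31)
X, Y = np.meshgrid(x, x)
t = np.pi / 6                                   # rotation angle
Xr = np.cos(t) * X + np.sin(t) * Y
Yr = -np.sin(t) * X + np.cos(t) * Y
A = Xr * np.exp(-(Xr**2 + (Yr / 2.0)**2))       # aspect ratio 1:2, rotated 30 deg

U, w, Vt = np.linalg.svd(A)                     # A = U W V^T

def separable_approx(n):
    # Keep the n largest terms of A = sum_k w_k u_k v_k^T  (eq. 10)
    return sum(w[k] * np.outer(U[:, k], Vt[k, :]) for k in range(n))

errs = [np.linalg.norm(A - separable_approx(n)) / np.linalg.norm(A)
        for n in (1, 2, 4, 6)]
# The relative error drops steeply with n; the full-rank sum is exact.
```

Each retained term costs two 1D convolutions, so an n-term approximation filters the image with 2n 1D kernels instead of one full 2D kernel.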
The first two terms of the separable decompositions are shown in Fig. 8 for the functions a_3 and a_8.
Whether few or many components will be needed for obtaining a good approximation is again an empirical issue and will depend on the kernel in question. The decomposition of the singular functions a_i associated to the Gaussian-derivative functions used for these simulations is particularly advantageous; the approximation error typically shows a steep drop after a few components are added. This can be seen from the curves in Fig. 8 (Bottom left), where the log of the error is plotted against the number of XY-separable components. All the a_i of Fig. 3 can be decomposed this way into sums of XY-separable kernels. The number of components needed for approximating each with 1% accuracy or better is indicated in the plots of Fig. 8 (Bottom right); the real and imaginary parts have roughly the same separability. One can see that the number of components increases linearly with i. The caption of Fig. 8 gives precise figures for the Gaussian 3:1 case.
Figure 8: (Top) The first two elements of the XY-separable decompositions of a_3 and a_8. (Bottom left) Log of the approximation error plotted against the number of XY-separable components. (Bottom right) The number of components needed to approximate a_i to less than 1% error, plotted against i; from the plots one may deduce that this number increases roughly linearly with i.
It is important to notice that rotated versions of the original template function F cannot be represented by sums of XY-separable functions with the same parsimony (see again Fig. 8 (Bottom left), upper curves). This is one more reason to represent F_θ as a sum of orthonormal singular functions, rather than as a sum of rotated copies of the template function (Theorem 1, statement 7), as discussed at the end of Sec. 2. One must remember that beyond XY-separation there are a number of techniques for speeding up 2D FIR filtering, for example small generating kernel (SGK) filtering [1], that could further speed up the convolutions necessary to implement deformable filtering.
5
Rotation and scale
A number of filter-based early vision and signal processing algorithms analyze the image at multiple scales of resolution. Although most of the algorithms are defined on, and would take advantage of, the availability of a continuum of scales, only a discrete and small set of scales is usually employed due to the computational costs involved with filtering and storing images. The problem of multiscale filtering is somewhat analogous to the multi-orientation filtering problem that has been analyzed so far: given a template function F(x), and defining F_σ(x) = σ^{1/2} F(σx), σ ∈ (0, ∞), one would like to be able to write F_σ as a (small) linear combination:
F_σ(x) = Σ_i s_i(σ) d_i(x),        σ ∈ (0, ∞)        (12)
Unfortunately the domain of definition of σ is not bounded (it is the positive real line) and therefore the kernel F_σ(x) is not Hilbert-Schmidt (it has infinite norm). As a consequence the spectrum of the LL* and L*L operators is continuous and no discrete approximation may be computed. One has therefore to renounce the idea of generating a continuum of scales spanning the whole positive line. This is not a great loss: the range of scales of interest is never the entire real line. An interval of scales (σ_1, σ_2), with 0 < σ_1 < σ_2 < ∞, is a very realistic scenario; if one takes the human visual system as an example, the range of frequencies to which it is most sensitive goes from approximately 2 to 16 cycles per degree of visual angle, i.e. a range of 3 octaves. In this case the interval of scales is compact and one can apply the results of section 3 and calculate the SVD and therefore an L²-optimal finite approximation. In this section the optimal scheme for doing so is proposed. The problem of simultaneously steering and scaling a given kernel F(x), generating a family F_(σ,θ)(x) which has a finite approximation, will be tackled. Previous non-optimal schemes are due to Perona [36, 37] and Freeman and Adelson [15].
5.1
Polar-separable decomposition
Observe first that the functions a_i defined in eq. (6) are polar-separable. In fact x may be written in polar coordinates as x = ||x|| R_φ(x)(u), where u is some fixed unit vector (e.g. the versor of the 1st coordinate axis), φ(x) is the angle between x and u, and R_φ is a rotation by φ. Substituting the definition of F_θ in (6) we get:
a_i(x) = α_i^{-1} ∫ F(||x|| R_{θ+φ(x)}(u)) e^{−j2πν_i θ} dθ = α_i^{-1} e^{j2πν_i φ(x)} ∫ F(||x|| R_θ(u)) e^{−j2πν_i θ} dθ
so that (2) may also be written as:

F_θ(x) = Σ_{i=1}^N α_i c_i(||x||) e^{j2πν_i(θ+φ(x))}        (13)

c_i(||x||) = α_i^{-1} ∫ F(||x|| R_θ(u)) e^{−j2πν_i θ} dθ        (14)
The scaling operation only affects the radial components c_i and does not affect the angular components. The problem of scaling the kernels a_i, and therefore F_θ through its decomposition, is then the problem of finding a finite (approximate) decomposition of continuously scaled versions of the functions c(ρ):
c_σ(ρ) = Σ_k s_k(σ) r_k(ρ),        σ ∈ (σ_1, σ_2)        (15)
If the scale interval (σ_1, σ_2) and the function c are such that the operator L associated to F is compact, then we can obtain the optimal finite decomposition via the singular value decomposition. The conditions for compactness of L are easily met in the cases of practical importance: it is sufficient that the interval (σ_1, σ_2) is bounded and that the norm of c(ρ) is bounded (ρ ∈ R^+). Even if these conditions are met, the calculations usually cannot be performed analytically. One can employ a numerical routine as in sec. 4.2 for XY-separation and for each c_i (below indicated as c^i) obtain an SVD expansion of the form:

c^i_σ(ρ) = Σ_k γ_k^i s_k^i(σ) r_k^i(ρ)        (16)
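Numerically, eq. (16) amounts to sampling the scaled profiles on a (scale × radius) grid and taking the matrix SVD, just as in sec. 4.2. A sketch follows; the radial profile, grids and scale interval below are illustrative stand-ins, not the paper's actual data:

```python
import numpy as np

# Sample c_sigma(rho) = sigma^(1/2) c(sigma * rho) over a compact scale
# interval (here 1:8, i.e. 3 octaves) and decompose it with the SVD.
c = lambda p: p * np.exp(-p**2)                # hypothetical radial profile
rho = np.linspace(0.0, 6.0, 256)               # radial grid
sigmas = np.linspace(1.0, 8.0, 128)            # 128 scale samples

C = np.array([np.sqrt(s) * c(s * rho) for s in sigmas])    # rows = scales
U, gamma, Vt = np.linalg.svd(C, full_matrices=False)
# gamma[k] are the weights gamma_k; s_k(sigma) = U[:, k], r_k(rho) = Vt[k, :]

rel = gamma / gamma[0]
n_1pct = int((rel > 0.01).sum())   # components above 1% of the leading weight
```

The weights decay quickly, so relatively few (s_k, r_k) pairs reproduce the whole continuum of scales in the interval.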
As discussed before one can calculate the approximation error from the sequence of the singular values γ_k^i. Finally, substituting (16) into (13), the scale-orientation expansion takes the form (see Fig. 11):
F_{θ,σ}(x) = Σ_{i=1}^N α_i e^{j2πν_i(θ+φ(x))} Σ_{k=1}^{n_i} s_k^i(σ) r_k^i(||x||)        (17)
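Eq. (17) can be evaluated over a stored set of basis-filter outputs by a pixelwise weighted sum. A sketch follows; all shapes, weights and interpolation functions below are illustrative stand-ins, not the paper's actual values:

```python
import numpy as np

# Hypothetical precomputed quantities (stand-ins for illustration):
rng = np.random.default_rng(0)
N, n_i, H, W = 4, 3, 8, 8
alpha = np.array([1.0, 0.5, 0.25, 0.1])        # singular values alpha_i
nu = np.arange(1, N + 1)                       # angular frequencies nu_i
I_basis = rng.standard_normal((N, n_i, H, W))  # outputs of the basis filters

def s_interp(i, k, sigma):
    # Stand-in for the scale interpolation functions s_k^i(sigma); in the
    # paper these come from the SVD of the scaled radial profiles (eq. 16).
    return np.exp(-k * sigma)

def response(theta, sigma):
    """Synthesize I_{theta,sigma} from the stored basis outputs."""
    out = np.zeros((H, W), dtype=complex)
    for i in range(N):
        b_i = np.exp(1j * 2 * np.pi * nu[i] * theta)   # b_i(theta)
        radial = sum(s_interp(i, k, sigma) * I_basis[i, k] for k in range(n_i))
        out += alpha[i] * b_i * radial
    return out
```

Once the basis outputs are stored, sweeping theta or sigma costs only O(Σ_i n_i) multiply-accumulates per pixel per deformation value, with no further convolution.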
Filtering an image I with a deformable kernel built this way proceeds as follows: first the image is filtered with the kernels a_k^i(x) = e^{j2πν_i φ(x)} r_k^i(||x||), i = 1, ..., N, k = 1, ..., n_i; the outputs I_k^i of this operation are then combined as I_{θ,σ}(x) = Σ_{i=1}^N α_i b_i(θ) Σ_{k=1}^{n_i} s_k^i(σ) I_k^i(x) to yield the result. The filtering operations described above can of course be implemented as XY-separable convolutions as described in sec. 4.2.
5.1.1
Polar-separable decomposition, experimental results
An orientation-scale decomposition was performed on the usual kernel (second derivative of a Gaussian and its Hilbert transform, σ_x : σ_y = 3:1). The decomposition described in sec. 4.1 was taken as a starting point. The corresponding functions c_i of eq. (13) are shown in Fig. 9. The interval of scales chosen was (σ_1, σ_2) s.t. σ_1 : σ_2 = 1 : 8, an interval which is arguably ample enough for a wide range of visual tasks. The range of scales was discretized in 128 samples for computing numerically the singular value decomposition (γ_k^i, s_k^i, r_k^i) of c^i(ρ). The computed weights γ_k^i are plotted on a logarithmic scale in Fig. 10 (Top). The X axis corresponds to the k index, each curve is indexed by i, i = 0, ..., 8. One can see that for all the c_i the error decreases exponentially at approximately the same rate.
Figure 9: (Top) The plots of c_i(ρ), the radial part of the singular functions a_i (cf. eq. 13). The θ part is always a complex exponential. The original kernel is the same as in Fig. 3. (Bottom) The 0th, 4th and 8th components c_0, c_4 and c_8 represented in two dimensions.
Figure 10: Scale decomposition of the radial components of the functions a_i (cf. Fig. 9). The interval of scales is σ_1 : σ_2 = 1 : 8 (cf. text). (Top) The decomposition weights plotted on a logarithmic scale. (Bottom) The first four radial and scale components of singular function n. 4, r_k^4(ρ) and s_k^4(σ), k = 0, ..., 3.
Figure 11: The kernel at different scales and orientations: the scales are (left to right) 0.125, 0.33, 0.77, 1.00. The orientations are (left to right) 30°, 66°, 122°, 155°. The kernels shown here were obtained from the scale-angle decomposition shown in the previous figures.
The components r_k^i(ρ) and s_k^i(σ), i = 4, k = 0, ..., 3, are shown in the two plots at the bottom of Fig. 10. In Fig. 11 reconstructions of the kernel based on a 1% error decomposition are shown for various scales and angles. A maximum of 1% error was imposed on the original steerable decomposition, and again on the scale decomposition of each single a_i. The measured error was 2.5%, independently of angle and scale. The total number of filters required to implement a 3-octave 1% (nominal, 2.5% real) approximation error of the 3:1 Gaussian pair is 16 (rotation) times 8 (scale) = 128. If 10% approximation error is allowed, the number of filters decreases by approximately a factor of 4, to 32.
6
Open issues and discussion
A few issues remain open to investigation: 1. Sometimes one would like to generate a discrete decomposition of a family of filters that obeys other constraints than just being the most parsimonious one. For example (a.1) hardware limitations could constrain the shape of the interpolating functions b_i(θ); (a.2) one would like to build pyramid implementations of the decomposition for speeding up the filtering stage (work on this issue has been done by Simoncelli et al. [11]). 2. Another interesting question mentioned in the introduction is the synthesis of the discrete decomposition directly from the specification of an early vision task, rather than passing through the synthesis of a 2D (nD) kernel which is then deformed somewhat arbitrarily. Work in this direction has been done by Hueckel [22, 23], Hummel [24], and Haralick [17], who approached the problem of feature (step edge, line in [23]) detection and localization as one of projecting image neighbourhoods on small-dimension linear subspaces, and deriving the relevant parameters (orientation, position) of the feature from this reduced representation. Hummel's approach is particularly interesting: the parameters describing the feature are modelled as continuous random variables. The neighbourhood operators (i.e. the kernels of the linear filters) used to project each neighbourhood onto a small-dimensional subspace are selected using the Karhunen-Loève transform. Such a procedure guarantees that the projection maximizes the variance of the parameters, and therefore the parameters thus obtained are maximally informative. The similarity of the kernels derived by Hueckel and Hummel to the a_i depicted in Figure 3 is not totally surprising: the polar separability and the fact that the tangential component of the kernels is sinusoidal is to be expected from the fact that one of the parameters in question is a rotation in the plane.
7
Conclusions
A technique has been presented for implementing families of deformable kernels for early vision applications. A given family of kernels obtained by continuously deforming a template kernel is approximated by interpolating a finite discrete set of kernels. The technique may be applied if and only if the family of kernels involved satisfies a compactness condition. This improves upon previous work by Freeman and Adelson on steerable filters in that (a) it is formulated with maximum generality for the case of any compact deformation, or, equivalently, any compact family of kernels,
and (b) it provides a design technique which is guaranteed to find the most parsimonious discrete approximation. Unlike common techniques used in early vision where the set of orientations is discretized, here the kernel and the response of the corresponding filter may be computed in a continuum for any value of the deformation parameters, with no anisotropies. The approximation error is computable a priori and it is constant with respect to the deformation parameter. This allows one, for example, to recover edges with great spatial and angular accuracy.
8
Acknowledgements
I am very grateful to Massimo Porrati, Alberto Grünbaum, David Donoho, Federico Girosi and Frank Ade for giving me good advice and useful references. I would also like to acknowledge useful conversations with Ted Adelson, Stefano Casadei, Charles Desoer, Peter Falb, Bill Freeman, Milan Jovovic, Takis Konstantopoulos, Olaf Kübler, Paul Kube, Jitendra Malik, Stephane Mallat, Sanjoy Mitter, and Richard Murray. The simulations in this work have been carried out using Paul Kube's "viz" image-manipulation package, and would have taken much longer without Paul's very friendly support. The images in this paper have been printed with software kindly provided by Eero Simoncelli. Some of the simulations have been run on a computer kindly provided by prof. Canali of the Università di Padova. Part of this work was carried out at the International Computer Science Institute at Berkeley. This work was partially conducted while at MIT-LIDS with the Center for Intelligent Control Systems sponsored by U.S. Army Research Office grant number DAAL 03-86-K-0171.
References
[1] J-F. Abramatic and O. Faugeras. Sequential convolution techniques for image filtering. IEEE Trans. Acoust., Speech, Signal Processing, 30(1):1-10, 1982.
[2] E. Adelson and J. Bergen. "The plenoptic function and the elements of early vision". In Computational Models of Visual Processing, M. Landy and J. Movshon, eds. MIT Press, 1991. Also appeared as MIT-Media-Lab-TR-148, September 1990.
[3] E. Adelson and J. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. A, 2(2):284-299, 1985.
[4] T.O. Binford. Inferring surfaces from images. Artificial Intelligence, 17:205-244, 1981.
[5] A. Bovik, M. Clark, and W.S. Geisler. IEEE Trans. Pattern Anal. Mach. Intell., 1990.
[6] P.J. Burt and E.A. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31:532-540, 1983.
[7] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8:679-698, 1986.
[8] G. Choquet. Lectures on Analysis, volume I. W. A. Benjamin Inc., New York, 1969.
[9] J. Dieudonné. Foundations of Modern Analysis. Academic Press, New York, 1969.
[10] J.J. Dongarra, C.B. Moler, J.R. Bunch, and G.W. Stewart. LINPACK Users' Guide. Society for Industrial and Applied Mathematics, 1979.
[11] E. Simoncelli, W. Freeman, E. Adelson, and D. Heeger. Shiftable multi-scale transforms. Technical Report 161, MIT Media Lab, 1991.
[12] I. Fogel and D. Sagi. Gabor filters as texture discriminators. Biol. Cybern., 61:103-113, 1989.
[13] W. Freeman and E. Adelson. Steerable filters for image analysis. Technical Report 126, MIT Media Laboratory, 1990.
[14] W. Freeman and E. Adelson. The design and use of steerable filters for image analysis, enhancement and multi-scale representation. IEEE Trans. Pattern Anal. Mach. Intell., 1991.
[15] W.T. Freeman and E.H. Adelson. Steerable filters for early vision, image analysis and wavelet decomposition. In Third International Conference on Computer Vision, pages 406-415. IEEE Computer Society, 1990.
[16] G.H. Granlund. In search of a general picture processing operator. Computer Graphics and Image Processing, 8:155-173, 1978.
[17] R. Haralick. Digital step edges from zero crossing of second directional derivatives. IEEE Trans. Pattern Anal. Mach. Intell., 6(1):58-68, 1984.
[18] D. Heeger. Optical flow from spatiotemporal filters. In Proceedings of the First International Conference on Computer Vision, pages 181-190, 1987.
[19] B. Horn. The Binford-Horn line-finder. Technical report, MIT AI Lab Memo 285, 1971.
[20] D. Hubel and T. Wiesel. Receptive fields of single neurones in the cat's striate cortex. J. Physiol. (Lond.), 148:574-591, 1959.
[21] D. Hubel and T. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. (Lond.), 160:106-154, 1962.
[22] M. Hueckel. An operator which locates edges in digitized pictures. J. Assoc. Comp. Mach., 18(1):113-125, 1971.
[23] M. Hueckel. A local visual operator which recognizes edges and lines. J. Assoc. Comp. Mach., 20(4):643-647, 1973.
[24] R. Hummel. Feature detection using basis functions. Comp. Graphics and Image Proc., 9:40-55, 1979.
[25] D. Jones and J. Malik. Computational stereopsis - beyond zero-crossings. Invest. Ophthalmol. Vis. Sci. (Supplement), 31(4):529, 1990.
[26] D. Jones and J. Malik. Using orientation and spatial frequency disparities to recover 3D surface shape - a computational model. Invest. Ophthalmol. Vis. Sci. (Supplement), 32(4):710, 1991.
[27] M. Kass. Computing visual correspondence. In Proceedings: Image Understanding Workshop, pages 54-60, McLean, Virginia, June 1983. Science Applications, Inc.
[28] H. Knutsson and G.H. Granlund. Texture analysis using two-dimensional quadrature filters. In Workshop on Computer Architecture for Pattern Analysis and Image Database Management, pages 206-213. IEEE Computer Society, 1983.
[29] J.J. Koenderink and A.J. van Doorn. Representation of local geometry in the visual system. Biol. Cybern., 55:367-375, 1987.
[30] J. Malik and Z. Gigus. A model for curvilinear segregation. Invest. Ophthalmol. Vis. Sci. (Supplement), 32(4):715, 1991.
[31] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7(5):923-932, 1990.
[32] M.C. Morrone and D.C. Burr. "A model of human feature detection based on matched filters". In Robots and Biological Systems, P. Dario and G. Sandini, eds. Academic Press, 1990.
[33] M.C. Morrone, D.C. Burr, J. Ross, and R. Owens. Mach bands depend on spatial phase. Nature, (324):250-253, 1986.
[34] M.C. Morrone and R.A. Owens. Feature detection from local energy. Pattern Recognition Letters, 6:303-313, 1987.
[35] P. Parent and S. Zucker. Trace inference, curvature consistency, and curve detection. IEEE Trans. Pattern Anal. Mach. Intell., 11(8):823-839, 1989.
[36] P. Perona. Finite representation of deformable functions. Technical Report 90-034, International Computer Science Institute, 1947 Center St., Berkeley CA 94704, 1990.
[37] P. Perona. Deformable kernels for early vision. IEEE Conference on Computer Vision and Pattern Recognition, pages 222-227, June 1991.
[38] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. Technical Report UCB/CSD 90/590, Computer Science Division (EECS), U.C. Berkeley, 1990.
[39] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In Proceedings of the Third International Conference on Computer Vision, pages 52-57. IEEE Computer Society, Osaka, 1990.
[40] A. Pinkus. n-Widths in Approximation Theory. Springer Verlag, 1985.
[41] W.K. Pratt. An intelligent image processing display terminal. In Proc. SPIE Tech. Symp., vol. 27, San Diego, 1979.
[42] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. Numerical Recipes in C. Cambridge University Press, 1988.
[43] W. Rudin. Functional Analysis. McGraw-Hill, 1973.
[44] M.R. Turner. Texture discrimination by Gabor functions. Biol. Cybern., 55:71-82, 1986.
[45] S. Zhong and S. Mallat. Compact image representation from multiscale edges. In Proceedings of the Third International Conference on Computer Vision. IEEE Computer Society, Osaka, 1990.
A
Appendix
A.1
Proof of theorem 1
What follows is a derivation proving theorem 1; the proof is summarized at the end of the section. Proof. The family of functions F_θ defined by eq. (1) may be thought of as the kernel associated to a linear operator L : A → B, defined by

b(θ) = (La)(θ) = ∫_{R²} F_θ(x) a(x) dx        (18)
where A = L²(R²) and B = L²(S¹), the square-integrable functions defined on the plane and the circle respectively, and a ∈ A, b ∈ B, θ ∈ S¹. Let L* denote the adjoint of L, i.e. the unique linear operator satisfying the equality

(La, b)_B = (a, L*b)_A        (19)

with (·, ·)_C indicating the inner product of a Hilbert space C. If ||F|| < ∞ then F_θ is a Hilbert-Schmidt kernel and L has a discrete spectrum; then F_θ can be written as a sum:
F_θ(x) = Σ_{i=1}^N α_i a_i(x) b_i(θ)        (20)
where the α_i² are the nonzero (positive, in fact) eigenvalues of the self-adjoint operators LL* = L ∘ L* and L*L = L* ∘ L, the a_i and b_i are the associated eigenfunctions of L*L and LL* respectively, and N could be infinite. The collection of triples (α_i, a_i, b_i), i = 1, ..., N, is the singular value decomposition (SVD) of L (see e.g. [43]). Observe that expression (20) is in the desired form of (1), with the additional advantage that the a_i and b_i are orthonormal bases of A and B (see below). Therefore if one could calculate the SVD (i.e. the a_i, b_i and α_i) of L explicitly one would be able to write F_θ as in (1), and the problem would be solved. L* can be computed from its definition (19) and (18) and is:

(L*b)(x) = ∫_{S¹} F_θ(x) b(θ) dθ        (21)

hence the LL* operator is

(LL*b)(θ) = ∫_{S¹} H(θ, θ') b(θ') dθ'        (22)

H(θ, θ') = ∫_{R²} F_θ(x) F_{θ'}(x) dx        (23)

and L*L is

(L*La)(x) = ∫_{R²} K(x, x') a(x') dx'        (24)

K(x, x') = ∫_{S¹} F_θ(x) F_θ(x') dθ        (25)
Observe that the kernel associated with LL* is a function of the difference of its arguments only. To see that, change the variable of integration in (23), y = R_θ x, obtaining H(θ, θ') = H(θ − θ', 0) = h(θ − θ').
The eigenvalue-eigenvector problem for LL*,

LL* b_i = λ_i b_i        (26)

can then be solved explicitly substituting (22) in (26):

∫ h(θ − θ') b_i(θ') dθ' = λ_i b_i(θ)        (27)

The equation holds between the Fourier transforms of the two terms of the shift-invariant convolution of eq. (27):

ĥ(ν) b̂_i(ν) = λ_i b̂_i(ν)        (28)
s.t. The nonzero solutions of this equation are the couples (Ai, bh) Ai = h(vi) and bi(v) = 6(v vi); therefore (29) b() = ej2 7rvti = h(Vi)112 <xi where, by convention, the frequency numbers vi are ordered so that h(vi) is a nonincreasing sequence: h(vi) > h(v/i+l) > 0. For multiple eigenvalues any linear combinations of the corresponding eigenfunctions (eigenvectors) will also be eigenfunctions. The eigenfunctions of the L*L operator cannot be easily determined directly from its integral expression (24) as for LL*. However one can make use of the fact that L*bi is an eigenfunction of L*L, which can be verified as follows: L*L(L*bi) = L*(LL*b) = L*Aibi = Ai(L*bi). The unitnorm eigenfunctions of L*L are therefore the functions defined by ai(x)
A
bi2= ai
2 F(x)eji Vid
(30)
i.e. at each x, a_i(x) is the conjugate of the Fourier coefficient of F_θ(x) corresponding to the frequency ν_i. In conclusion (the numbers refer to the corresponding statements of the theorem): 1. Follows from the properties of the SVD and the fact that the sum (20) is built using the SVD triples. 2. As above. 3. From equation (30) and the fact that the dimension of the range of L* is equal to N, the number of nonzero eigenvalues. 4. Follows from SVD properties. 5. Follows from the fact that α_i ≥ α_{i+1} and that Σ_i α_i² = ||L||² < ∞. May be seen directly from SVD properties. 6. From (13) one can see that F_θ is a function of ||x|| and θ + φ(x) only. 7. The functions a_i are linearly independent, so any collection of n of them spans an n-dimensional subspace of L²(R²). This is the same subspace spanned by any linearly independent collection F_{θ_i}, i = 1, ..., n. The thesis follows from the fact that F_θ = F_{θ_0} ∘ R_{θ−θ_0}. The coefficients can be obtained with the usual change-of-basis formulae.
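The central step (27)-(28), namely that complex exponentials diagonalize the shift-invariant operator LL*, can be checked numerically on a circulant discretization. A sketch; the kernel h below is an arbitrary even periodic function chosen for illustration:

```python
import numpy as np

M = 64
theta = 2 * np.pi * np.arange(M) / M
h = np.exp(np.cos(theta))                       # arbitrary even periodic kernel
# Circulant discretization of the convolution in eq. (27)
Hmat = np.array([np.roll(h, k) for k in range(M)])

nu_i = 3
b = np.exp(1j * nu_i * theta)                   # candidate eigenfunction b_i
lam = np.fft.fft(h)[nu_i]                       # eigenvalue = Fourier coeff of h
# LL* b_i = lambda_i b_i, as in eq. (26)-(28)
ok = np.allclose(Hmat @ b, lam * b)
```

Any other Fourier mode works equally well, which is exactly the degeneracy structure described after eq. (29).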