
TO APPEAR IN IEEE TRANSACTIONS ON COMMUNICATIONS, MARCH 2003


Optimization of the Index Assignments for Multiple Description Vector Quantizers

Norbert Görtz, Member, IEEE, and Pornchai Leelapornchai

Abstract—The optimization criterion and a practically feasible new algorithm are stated for the optimization of the index assignments of a multiple description unconstrained vector quantizer with an arbitrary number of descriptions. In the simulations, the index-optimized multiple description vector quantizer achieves significant gains in source SNR over scalar multiple description schemes.

I. INTRODUCTION

THE principle of multiple descriptions (MD) is to represent a source signal by two or more descriptions, which are sent separately over erasure channels. The descriptions are generated in such a way that a basic quality is achieved at the decoder output from each individual description; the quality smoothly increases with the number of received descriptions.

The design of a multiple description quantizer can be divided into two parts: the selection of the quantizer reproduction levels (codebook training) and the choice of the index assignment. The latter is, in contrast to systems with a single description, a mapping from the indexes of the quantizer reproduction levels to a set of descriptions. The descriptions of an MD quantizer can be interpreted as the row and column indexes of a matrix in which the codevectors, or their indexes, are placed. The dimension of the matrix is denoted by K; it equals the number of descriptions selected by the system designer. The choice of the index assignments for multiple description quantizers may thus be seen as the problem of how to allocate the quantizer reproduction levels to the matrix cells such that the distortion is minimized when the descriptions are transmitted over an erasure channel.

For multiple description scalar quantization (MDSQ) with two descriptions, optimal solutions for the index-assignment problem have been stated in [1]; the codevectors are systematically placed along the main diagonal of the matrix. This concept exploits the fact that a scalar quantizer codebook may be ordered such that a reproduction level with a large value is allocated to a large index number in the quantizer codebook. In general, such an ordering is impossible for a vector quantizer codebook.

Multiple description constrained vector quantizers with two descriptions have been studied, e.g., in [2], [3], where a lattice structure is imposed on the quantizer codebook in order to allow for a systematic design of the index assignments. In [4], an algorithm based on a ternary tree structure of the quantizer codebook is given for the design of multiple description vector quantizers with an arbitrary number of descriptions; the codebook and the index assignments are jointly optimized. Compared with the algorithm in [4], the work in this paper is solely on the optimization of the index assignments of a multiple description vector quantizer; the quantizer codebook is assumed to be fixed and, in contrast to [2], [3], without any structural constraints. This scenario is practically relevant, e.g., for packet-based speech communication, where standardized speech codecs shall be used. In such a setup, the lossy part of the source encoder (e.g., the vector quantizer codebooks of the codec parameters) must not be changed, but the mapping of the encoder output bits to packets on the channel may be optimized.

The index assignment problem for a quantizer with a single description has been studied in [5]; there, the binary switching algorithm (BSA) is proposed, which minimizes the average system distortion when the quantizer index is corrupted by bit errors on the channel. The algorithm introduced in this paper applies the idea of the BSA to optimize the index assignments of a multiple description unconstrained vector quantizer (MDVQ).

The paper is organized as follows: first, the system model of an MDVQ system is stated. Then, the optimal decoder is derived, and a useful formulation of the quality criterion is given for the optimization of the index assignments. Based on that, a new algorithm denoted by MD-BSA is introduced for the optimization of the index assignments of an MDVQ. Finally, the performance of the optimized MDVQ is discussed and compared with other multiple description quantizers and the rate-distortion bounds.

II. SYSTEM MODEL

The block diagram of an MDVQ system with K descriptions is shown in Fig. 1. The quantizer maps the input source

[Figure 1 appears here: block diagram of the MD encoder (VQ quantizer followed by the index assignment) and the MD decoder, connected by K erasure channels.]

Fig. 1. Transmission system with a multiple description vector quantizer (MDVQ) with K descriptions.

Manuscript received December 28, 2001; revised June 18, 2002 and August 06, 2002. The authors are with the Institute for Communications Engineering, Munich University of Technology, 80290 Munich, Germany, Tel.: +49-89-289-23494, Fax: +49-89-289-23490, E-mail: norbert.goertz@ei.tum.de. This work was supported by Siemens AG, ICM, Germany.


vector¹ X to the nearest codevector y_I from the codebook Y = {y_0, y_1, ..., y_{N−1}}, where N is the size of the codebook. The index assignments I_l = a_l(I), l = 1, ..., K, map the quantizer output index I to K descriptions I_l, l = 1, ..., K, that are transmitted over K memoryless and mutually independent channels. The latter cause packet erasures with the probabilities p_l, l = 1, ..., K. Let N_l, l = 1, ..., K, be the sizes of the index sets of the descriptions I_l. Since the index I must be uniquely decodable from the complete set of descriptions,

    N ≤ M ≐ ∏_{l=1}^{K} N_l

must be fulfilled. In most cases, N < M is selected, which is equivalent to adding redundancy. In what follows, the optimal MDVQ decoder for a given codebook Y and given index assignments is stated. Then a new algorithm for the optimization of the index assignments a_1(I), ..., a_K(I) is given.

¹ The source vectors X and the codevectors y_I have dimension N_x; the components are assumed to be real numbers.

III. OPTIMAL DECODER FOR A GIVEN INDEX ASSIGNMENT

The goal for the design of the decoder is to minimize the expected distortion at the output, for a given set of received indexes î_1, ..., î_K and an encoder with a fixed codebook and a fixed index assignment. Thus, the optimization problem can be stated as follows:

    x̂(î_1, ..., î_K) = arg min_{x̂} E{ d(y_I, x̂) | Î_1 = î_1, ..., Î_K = î_K } .   (1)

In (1), the received indexes are denoted by Î_l, l = 1, ..., K. Their realizations î_l, l = 1, ..., K, take values from the sets

    S_l = { ∅, 0, 1, ..., N_l − 1 } ,   l = 1, ..., K ,   (2)

which also contain the case of “erasure”, indicated here by “∅”. In what follows, the mean squared error will be used as the distortion measure d(·, ·). Therefore, the minimization (1) results in

    x̂(î_1, ..., î_K) = Σ_{λ=0}^{N−1} y_λ · P( I = λ | Î_1 = î_1, ..., Î_K = î_K ) ,   (3)

where λ denotes the realizations of the output index I of the quantizer, with λ ∈ S, S ≐ {0, 1, ..., N−1}, and y_λ is the vector with the number λ from the quantizer codebook Y.

The main task of the receiver is to compute the a posteriori probability P(I = λ | Î_1 = î_1, ..., Î_K = î_K). It can be reformulated by use of the Bayes rule and by insertion of the index assignments (conditioning on I = λ is equivalent to conditioning on I_1 = a_1(λ), ..., I_K = a_K(λ)) according to

    P( I = λ | Î_1 = î_1, ..., Î_K = î_K ) = (1/B) · P( Î_1 = î_1, ..., Î_K = î_K | I_1 = a_1(λ), ..., I_K = a_K(λ) ) · P(I = λ) .   (4)

If, additionally, the independence of the erasures on the channels is considered, we obtain

    P( I = λ | Î_1 = î_1, ..., Î_K = î_K ) = (1/B) · ∏_{l=1}^{K} P( Î_l = î_l | I_l = a_l(λ) ) · P(I = λ)   (5)

with

    P( Î_l = î_l | I_l = a_l(λ) ) = { p_l ,      if î_l = ∅ (erasure)
                                      1 − p_l ,  if î_l = a_l(λ)
                                      0 ,        else .   (6)

Equation (6) follows directly from the assumption for the channel that a packet is either erased with the probability p_l or received correctly. The constant B in (5) is defined by B ≐ P(Î_1 = î_1, ..., Î_K = î_K), but it is convenient to exploit that the left-hand side of (5) is a probability that must sum up to one over all possible λ = 0, 1, ..., N − 1. Therefore, B can be calculated by

    B = Σ_{ξ=0}^{N−1} ∏_{l=1}^{K} P( Î_l = î_l | I_l = a_l(ξ) ) · P(I = ξ) .   (7)

As an example, let us consider a system with K = 2 descriptions. If the quantizer index I = μ is transmitted and the second description is erased by the channel, Î_1 = a_1(μ) and Î_2 = ∅ are received. Thus, we obtain from (5) and (6):

    P( I = λ | Î_1 = a_1(μ), Î_2 = ∅ ) = (1/B) · P( Î_1 = a_1(μ) | I_1 = a_1(λ) ) · p_2 · P(I = λ)   (8)

with

    B = Σ_{ξ=0}^{N−1} P( Î_1 = a_1(μ) | I_1 = a_1(ξ) ) · p_2 · P(I = ξ)
      = (1 − p_1) · p_2 · Σ_{ξ∈S: a_1(ξ)=a_1(μ)} P(I = ξ) .   (9)

Hence, (8) equals

    P( I = λ | Î_1 = a_1(μ), Î_2 = ∅ ) = { P(I = λ) / Σ_{ξ∈S: a_1(ξ)=a_1(μ)} P(I = ξ) ,  if a_1(λ) = a_1(μ)
                                           0 ,                                           if a_1(λ) ≠ a_1(μ)   (10)

and the so-called “side decoder” for the case that only the index I_1 has been received and I_2 is erased is given by (10) and (3). A similar formula results if the first description is erased, i.e., Î_1 = ∅ and Î_2 = a_2(μ). Equation (10) indicates that all those indexes λ have to be used for the estimation of x̂ by (3) that have the same first description a_1(λ) as the received one, a_1(μ). The denominator in the upper line of (10) normalizes the left-hand side, so that it becomes a true probability.

The decoding process is illustrated by Fig. 2. When, for instance, description Î_1 = 0 is received but description Î_2 is erased (Fig. 2 a)), the “side” decoder according to (10) takes the codevector indexes λ = 0, 2 into account. If both descriptions are received, the index a posteriori probability (5) equals

    P( I = λ | Î_1 = a_1(μ), Î_2 = a_2(μ) ) = { 1 ,  if λ = μ
                                                0 ,  else ,   (11)

so the “central” decoder issues x̂ = y_μ using (3). This is, for sure, the desired result for an uncorrupted transmission of the index I = μ. If no description is received, i.e., Î_1 = ∅ and Î_2 = ∅, we have P(I = λ | Î_1 = ∅, Î_2 = ∅) = P(I = λ). Thus, the output of the receiver according to (3) equals the unconditioned expectation of all codevectors.

The optimal decoding operation described above is independent of the erasure probabilities, because the latter cancel out in (10). This still holds if a system with more than K = 2 descriptions is used.

[Figure 2 appears here.]

Fig. 2. Example with K = 2 descriptions that illustrates the decoding operation in the index assignment matrix, if I = 0 has been transmitted and a) description Î_2 is erased, b) description Î_1 is erased.

IV. QUALITY CRITERION FOR THE OPTIMIZATION OF THE INDEX ASSIGNMENTS

Similarly as for the optimal decoder, the goal is to minimize the expected mean squared error, but, in contrast to (1), we now try to minimize it by the proper choices ā_l(·), l = 1, ..., K, of the index assignments:

    { ā_1(·), ..., ā_K(·) } = arg min_{a_1(·),...,a_K(·)} { D }   (12)

with

    D = E{ d( y_I , x̂(Î_1, ..., Î_K) ) } .   (13)

The optimal receiver output x̂ in (13) is computed as described in Section III. The expectation (13) is unconditioned, since the index assignments shall be optimized only once, involving the statistics of the source and the channel. The expectation (13) may be expanded as

    D = Σ_{λ=0}^{N−1} P(I = λ) · Σ_{î_1∈S_1} ... Σ_{î_K∈S_K} d( y_λ , x̂(î_1, ..., î_K) ) · P( Î_1 = î_1, ..., Î_K = î_K | I = λ ) ;   (14)

the index sets S_l, l = 1, ..., K, are given by (2). By insertion of the index assignments as in (4) and due to the mutual independence of the erasures, (14) equals

    D = Σ_{λ=0}^{N−1} C(y_λ) ,   (15)

with the “cost”

    C(y_λ) = P(I = λ) · Σ_{î_1∈S_1} ... Σ_{î_K∈S_K} d( y_λ , x̂(î_1, ..., î_K) ) · ∏_{l=1}^{K} P( Î_l = î_l | I_l = a_l(λ) )   (16)

of each codevector y_λ, λ = 0, 1, ..., N − 1. The probabilities in the product term in (16) are given by (6).

As an example, we will again discuss the case of K = 2 descriptions. If (6) is inserted for the conditional probabilities in (16), the sums are expanded, and it is considered that x̂(a_1(λ), a_2(λ)) = y_λ, the costs of the codevectors may be reformulated according to

    C(y_λ) = P(I = λ) · [ d( y_λ , x̂(a_1(λ), ∅) ) · (1 − p_1) · p_2
                        + d( y_λ , x̂(∅, a_2(λ)) ) · p_1 · (1 − p_2)
                        + d( y_λ , x̂(∅, ∅) ) · p_1 · p_2 ] .   (17)

Now the sum (15) may be split into two parts: the first one involves the first two terms from (17), which depend on the index assignments a_1(·), a_2(·), while the last term is a positive constant for the optimization. If, additionally, the erasure probabilities on all channels are assumed to be equal, i.e., p_1 = p = p_2, (15) can be rewritten as follows:

    D = (1 − p) · p · Σ_{λ=0}^{N−1} ΔC(y_λ) + p² · Σ_{λ=0}^{N−1} P(I = λ) · d( y_λ , x̂(∅, ∅) ) ,   (18)

where (1 − p) · p and the second sum are positive constants for the optimization; we define ΔD ≐ Σ_{λ=0}^{N−1} ΔC(y_λ), with

    ΔC(y_λ) = P(I = λ) · [ d( y_λ , x̂(a_1(λ), ∅) ) + d( y_λ , x̂(∅, a_2(λ)) ) ] .   (19)

Thus, for the optimization of the index assignments it suffices to minimize ΔD. It is important that ΔD is independent of the erasure probability p, i.e., for the case K = 2 and p_1 = p = p_2, the index assignments are optimal for all possible values of the erasure probability. That is not true if more than two descriptions (K > 2) are used.
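To make the derivation concrete, the decoder of (3) and (5)-(7) and the cost ΔD of (19) can be sketched in a few lines of Python. This is a sketch under stated assumptions, not code from the paper: the names (channel_prob, decode, delta_D, ERASED, prior) are ours, None stands for the erasure symbol, and the mean squared error is used for d(·, ·).

```python
import numpy as np

ERASED = None  # stands for the erasure symbol in the sets S_l of eq. (2)

def channel_prob(received, sent, p):
    """One-channel term of eq. (6): erasure with probability p, else correct."""
    if received is ERASED:
        return p
    return 1.0 - p if received == sent else 0.0

def decode(received, assignments, codebook, prior, p):
    """Optimal MDVQ decoder, eqs. (3) and (5)-(7).

    received    : K received description values (ERASED = erased)
    assignments : K integer arrays, assignments[l][lam] = a_l(lam)
    codebook    : (N,)- or (N, dim)-array of codevectors y_lam
    prior       : (N,)-array with prior[lam] = P(I = lam)
    p           : K erasure probabilities
    """
    N = len(prior)
    post = np.array([prior[lam] *
                     np.prod([channel_prob(received[l], a[lam], p[l])
                              for l, a in enumerate(assignments)])
                     for lam in range(N)])
    post /= post.sum()          # normalization by B, eq. (7)
    return post @ codebook      # MMSE estimate, eq. (3)

def delta_D(assignments, codebook, prior):
    """Channel-independent part of the distortion for K = 2 and
    p1 = p = p2: Delta-D = sum over lam of Delta-C(y_lam), eq. (19)."""
    a1, a2 = assignments
    total = 0.0
    for lam in range(len(prior)):
        y = codebook[lam]
        # side-decoder outputs; by eq. (10) they do not depend on p,
        # so an arbitrary value in (0, 1) can be used here
        x1 = decode([a1[lam], ERASED], assignments, codebook, prior, [0.5, 0.5])
        x2 = decode([ERASED, a2[lam]], assignments, codebook, prior, [0.5, 0.5])
        total += prior[lam] * (np.sum((y - x1) ** 2) + np.sum((y - x2) ** 2))
    return total
```

With a scalar codebook {0, 1, 2, 3} and a uniform prior, the assignment that pairs {0, 1} and {2, 3} under the first description yields ΔD = 1.25, while pairing {0, 3} and {1, 2} yields 2.25; by (10), both values are unchanged for any erasure probability in (0, 1).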


V. OPTIMIZATION OF THE INDEX ASSIGNMENTS

A. The Complexity Problem

As illustrated by Fig. 2, the selection of the index assignments can be seen as the problem of how to place N indexes into a K-dimensional matrix with M = ∏_{l=1}^{K} N_l > N locations in such a way that the distortion D given by (15) is minimized. The easiest way to do the optimization is the brute-force approach: one would simply have to compute the distortion for each possible index assignment and select the one with the lowest distortion. Since N locations are taken out of M possible ones in the matrix and N! possible allocations of the codevector indexes exist for each choice of matrix locations, there are

    ( M choose N ) · N! = M! / (M − N)!   (20)

possible assignments, i.e., the brute-force approach is infeasible in all practically relevant cases: for example, if only N = 32 codevectors are mapped to two 3-bit descriptions, i.e., N_1 = 2³ = N_2 and, thus, M = 64, the distortion of 4.8 · 10⁵³ different index assignments would have to be computed.

B. Index Optimization by the Binary Switching Algorithm for a System with a Single Description

The problem of assigning N indexes to N codevectors to control the performance degradation caused by bit errors on the channel is studied in [5], where the binary switching algorithm (BSA) is proposed to overcome the complexity problems of the brute-force approach, which would require checking N! different assignments². The basic idea of the BSA is to pick the codevector with the highest cost (which has the strongest contribution to the total distortion) and try to switch the index of this codevector with the index of another codevector, the “switch partner”. The latter is selected such that the decrease of the total distortion due to the index switch is as large as possible. If no switch partner can be found for the codevector with the highest cost (that means all possible switches result in a higher total distortion), the codevector with the second-highest cost will be tried next. This process continues until a codevector from the list, sorted by decreasing costs, is found that allows a switch that lowers the total distortion. After an accepted switch, the cost of each codevector and the total distortion are recalculated, a new ordered list of codevectors is generated, and the algorithm continues as described above, until no further reduction of the total distortion is possible.

² As indicated by (20), the complexity problem is even worse in the multiple description case that is discussed in this paper.

C. BSA for Multiple Descriptions: MD-BSA

In Section IV the total distortion for the MDVQ system was formulated as the sum of the costs of the codevectors. Hence, it is easy to adopt the idea of the normal BSA for multiple descriptions: as in the single-description case, an initial index assignment is used as a starting point for the multiple description BSA (MD-BSA), but the cost of each codevector and the total distortion are calculated by (16) and (15), respectively³. The codevectors are sorted according to their costs in decreasing order, and the candidate codevectors for switching are picked from the top of the list. In contrast to the conventional BSA for single descriptions, the switch partner can now be either another codevector or a location in the matrix that has not been assigned to any codevector; this is illustrated by Fig. 3.

³ For a system with two descriptions and equal packet erasure probabilities on both channels, the cost functions and the distortion can also be computed by (19) and (18), respectively.

[Figure 3 appears here.]

Fig. 3. Two possible ways of switching codevectors in the index assignment matrix: a) switch with an empty position, b) switch with another codevector.

The switch that is accepted achieves the lowest total distortion for the current candidate codevector from the list. After an accepted index switch, the cost of each codevector and the total distortion are recomputed, and a new list of codevectors, ordered by decreasing cost, is generated for the next step. The algorithm stops when no more switches are possible that further reduce the total distortion.

VI. SIMULATION RESULTS

A memoryless zero-mean unit-variance Gaussian source signal was used for the simulations. The VQ codebooks of size 64 and 128 (quantizer indexes of 6 and 7 bits) with a vector dimension of two were designed by the LBG algorithm; the splitting method [6] was used for the initialization. The quantizer indexes were mapped to K = 2 descriptions⁴, each with 4 bits. For this purpose, a 16 × 16 matrix had to be filled with the indexes of the codevectors. For the initialization of the index assignment matrix for the MD-BSA and for the reference simulations, two schemes were used:

⁴ Although the algorithm presented in this paper is able to optimize the index assignments in more general cases, K = 2 descriptions were used in the simulations because the rate-distortion bounds are explicitly known and well understood for this case and, as stated above, the result of the index optimization is independent of the particular value of the packet-erasure probability if both descriptions are independently erased with the same probability. Moreover, comparable numerical results for other MD vector-quantization schemes with more than two descriptions are not available in the literature. To the best of our knowledge, [4] is the only paper where SNR plots (SNR vs. packet erasure probability) can be found for multiple descriptions with K > 2, but the results are given for magnetic resonance brain scan images, i.e., the source signal in [4] is different from the Gaussian source model used in this paper.

1. 1000 different random index assignments were tried, and the one with the lowest total distortion was selected (“random initialization”).


2. The modified linear (ML) index assignment from [1] was used, which places the quantizer indexes on the diagonals of the assignment matrix; therefore, this method is denoted by “diagonal initialization”.

The ML assignments were derived in [1] for multiple description scalar quantizers (MDSQ), where the indexes have a direct relationship to the amplitudes of the quantizer reproduction levels. Since the splitting method was used for the initialization of the LBG codebook training, most of the neighbouring indexes of the VQ codebook also lie relatively close in the signal space [7]. Therefore, the ML assignment is useful as an initialization for the MD-BSA because it is already closer to a “good” assignment than a random initialization.

The descriptions were transmitted over mutually independent erasure channels with the erasure probabilities p_1, p_2 ∈ (0, 1). The same value was always selected for both erasure probabilities, i.e., p_1 = p = p_2, so the index optimizations were independent of the particular value of p, as stated for K = 2 in (18). The performances of the initial and the optimized index assignments were compared by the SNR values at the decoder output. The results in Fig. 4 show that the MD-BSA achieves strong gains for both initializations; as expected, the ML initialization works better than the random initialization, but the results after the optimizations are only slightly different.
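To illustrate how the pieces fit together, the following Python sketch implements the complete loop for K = 2 and p_1 = p = p_2: a diagonal-style initialization, the per-codevector costs of (19), the assignment count of (20), and the MD-BSA switch loop. This is a sketch under stated assumptions: all function names are ours, empty matrix locations are marked by −1, and diagonal_init is only a simple stand-in for the modified linear assignment of [1], not its exact diagonal ordering.

```python
import numpy as np
from itertools import product
from math import factorial

def num_assignments(M, N):
    """Number of possible index assignments, eq. (20): M! / (M - N)!."""
    return factorial(M) // factorial(M - N)

def diagonal_init(N, N1, N2):
    """Place indexes 0..N-1 into an N1 x N2 matrix, preferring cells close
    to the main diagonal (a crude stand-in for the ML assignment of [1])."""
    cells = np.full((N1, N2), -1, dtype=int)
    order = sorted((abs(r - c), r + c, r, c)
                   for r in range(N1) for c in range(N2))
    for lam, (_, _, r, c) in enumerate(order[:N]):
        cells[r, c] = lam
    return cells

def total_cost(cells, codebook, prior):
    """Delta-D of eq. (19) and the per-codevector costs Delta-C(y_lam);
    the side decoders average over the occupied cells of a row / column."""
    per_cell, total = {}, 0.0
    N1, N2 = cells.shape
    for r, c in product(range(N1), range(N2)):
        lam = cells[r, c]
        if lam < 0:
            continue
        row = [x for x in cells[r, :] if x >= 0]   # same first description
        col = [x for x in cells[:, c] if x >= 0]   # same second description
        x1 = np.average(codebook[row], axis=0, weights=prior[row])
        x2 = np.average(codebook[col], axis=0, weights=prior[col])
        y = codebook[lam]
        dc = prior[lam] * (np.sum((y - x1) ** 2) + np.sum((y - x2) ** 2))
        per_cell[(r, c)] = dc
        total += dc
    return total, per_cell

def md_bsa(cells, codebook, prior):
    """MD-BSA sketch: repeatedly take the highest-cost codevector and switch
    it with the partner cell (occupied or empty) that lowers the total
    distortion the most; stop when no switch helps any codevector."""
    cells = cells.copy()
    best, per_cell = total_cost(cells, codebook, prior)
    while True:
        improved = False
        # candidate codevectors ordered by decreasing cost
        for r, c in sorted(per_cell, key=per_cell.get, reverse=True):
            cand_cost, cand_pos = best, None
            for pos in product(range(cells.shape[0]), range(cells.shape[1])):
                if pos == (r, c):
                    continue
                cells[r, c], cells[pos] = cells[pos], cells[r, c]  # try switch
                cost, _ = total_cost(cells, codebook, prior)
                cells[r, c], cells[pos] = cells[pos], cells[r, c]  # undo
                if cost < cand_cost:
                    cand_cost, cand_pos = cost, pos
            if cand_pos is not None:                 # accept the best switch
                cells[r, c], cells[cand_pos] = cells[cand_pos], cells[r, c]
                best, per_cell = total_cost(cells, codebook, prior)
                improved = True
                break                                # rebuild the ordered list
        if not improved:
            return cells, best
```

For a small toy example (four scalar codevectors in a full 2 × 2 matrix) the sketch terminates after very few accepted switches; for the 16 × 16 matrix of the simulations it is slow but still far cheaper than evaluating the 4.8 · 10⁵³ assignments of the brute-force search.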

[Figure 4 appears here: SNR in dB (0 to 16) versus packet erasure probability (0 to 1) for a Gaussian source, VQ with 6 bits / 2 samples, two 4-bit descriptions; curves for “optimized, diagonal init.”, “optimized, random init.”, “diagonal init.”, and “random init.”.]

Fig. 4. Performance of MDVQ with the initial and the MD-BSA-optimized index assignments for a Gaussian source signal. The rate is 2 bits per source sample per description.

It is interesting to compare the performance of the optimized MDVQ scheme with the rate-distortion bounds for multiple descriptions [8] and the MDSQ [1] for the Gaussian source with zero mean and unit variance. In Table I the side distortions of several schemes that all have the same central distortions⁵ are compared. As stated above, the vector quantizers had a dimension of two and indexes with 6 bits (central distortion of 0.03) and 7 bits (central distortion of 0.015). The indexes were mapped to two 4-bit descriptions. The side distortions for the optimized MDVQ were measured and inserted into the table. At the same central distortions and for the same rates (2 bits per source sample per description), the values of the side distortions were also picked from the OPTA⁶ curve, given by the rate-distortion function derived in [8]. The same was done for the multiple description scalar quantization (MDSQ) [1]: again for the same rates and central distortions, the values of the side distortions were picked from [1] (Fig. 12) and inserted into the rightmost column of Table I. Throughout the table, the SNR values corresponding to the distortions are given in brackets.

⁵ The performance of the vector quantizer without any erasures is described by the central distortion; it is a property of the quantizer that does not depend on the index assignment. The side distortions result at the decoder output if one of the channels always erases a description. If the side distortions are not the same for each description, they are called “unbalanced”. In this case, one can use the average of the side distortions to obtain a single value as a figure of merit.
⁶ Optimal Performance Theoretically Attainable.

    central distortion      side distortion (side SNR/dB)
    (central SNR/dB)        OPTA           MDVQ (dim. 2)                  MDSQ
    0.030 (15.2)            0.063 (12.0)   0.153 (8.2), 2^6 codevectors   0.190 (7.2)
    0.015 (18.2)            0.083 (10.8)   0.256 (5.9), 2^7 codevectors   0.482 (3.2)

TABLE I. Side distortions of the optimized MDVQ, the rate-distortion bound for multiple descriptions [8] (OPTA), and the MDSQ [1] for the same central distortions. All systems have two descriptions, each with a rate of 2 bits per source sample.

Table I indicates that the MDVQ with MD-BSA index optimization achieves significant gains over MDSQ (1 dB in side SNR for the higher central distortion, 2.7 dB for the lower central distortion). The gain is larger if the redundancy (number of unused matrix locations) is small. In the example with the two-dimensional vector quantizer, the side distortion of the optimized MDVQ is, however, still 4–5 dB away from the rate-distortion bound (OPTA). In the simulation of the transmission system, the gains in side distortion (indicated by Table I) of the index-optimized MDVQ turn into maximum SNR improvements over MDSQ of 0.5 dB and 1.9 dB for the higher and the lower central distortions, respectively, both at an erasure probability of about p = 0.2.

VII. CONCLUSION

A new algorithm called MD-BSA was stated for the optimization of the index assignments of multiple description vector quantizers (MDVQ) with an arbitrary number of descriptions. The index assignments for MDVQ resulting from the MD-BSA significantly improve the SNR at the receiving end compared with MDSQ; this was shown by simulations for a system with two descriptions and a Gaussian source signal. In the MDVQ system model, the codevectors of the quantizer are assumed to be fixed, i.e., the lossy part of the source coding scheme is not affected by the optimization. This allows the MD-BSA to be applied to standardized speech, audio, and image codecs that shall be used for signal transmission over packet erasure channels.
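The SNR values in brackets in Table I follow directly from the distortions, since the source has unit variance: SNR = 10 log10(σ²/D). A minimal check (the function name snr_db is ours):

```python
import math

def snr_db(distortion, variance=1.0):
    """Source SNR in dB for a mean-squared distortion D: 10*log10(sigma^2 / D)."""
    return 10.0 * math.log10(variance / distortion)

# Table I entries, unit-variance Gaussian source:
#   central 0.030 -> 15.2 dB, side (OPTA) 0.063 -> 12.0 dB,
#   side (MDVQ)  0.153 -> 8.2 dB, side (MDSQ) 0.190 -> 7.2 dB
```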


ACKNOWLEDGMENT

The authors wish to thank the anonymous reviewers for their comments, which helped to improve the paper.

REFERENCES

[1] V. A. Vaishampayan, “Design of multiple description scalar quantizers,” IEEE Transactions on Information Theory, vol. 39, no. 3, pp. 821–834, May 1993.
[2] V. A. Vaishampayan, N. J. A. Sloane, and S. D. Servetto, “Multiple-description vector quantization with lattice codebooks: Design and analysis,” IEEE Transactions on Information Theory, vol. 47, no. 5, pp. 1718–1734, July 2001.
[3] V. K. Goyal, J. A. Kelner, and J. Kovačević, “Multiple description vector quantization with a coarse lattice,” IEEE Transactions on Information Theory, vol. 48, no. 3, pp. 781–788, Mar. 2002.
[4] M. Fleming and M. Effros, “Generalized multiple description vector quantization,” in Proceedings of the IEEE Data Compression Conference, Mar. 1999, pp. 3–12.
[5] K. Zeger and A. Gersho, “Pseudo-Gray coding,” IEEE Transactions on Communications, vol. 38, no. 12, pp. 2147–2158, Dec. 1990.
[6] Y. Linde, A. Buzo, and R. M. Gray, “An algorithm for vector quantizer design,” IEEE Transactions on Communications, vol. COM-28, no. 1, pp. 84–95, Jan. 1980.
[7] N. Farvardin, “A study of vector quantization for noisy channels,” IEEE Transactions on Information Theory, vol. 36, no. 4, pp. 799–809, July 1990.
[8] L. Ozarow, “On a source-coding problem with two channels and three receivers,” The Bell System Technical Journal, vol. 59, no. 10, pp. 1909–1921, Dec. 1980.
