
International Journal of Computer Mathematics Vol. 83, No. 1, January 2006, 59–67

Worst-case analysis of generalized heapsort algorithm revisited

TARIQUE MESBAUL ISLAM*† and M. KAYKOBAD‡

†Department of Computer Science, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
‡Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka-1000, Bangladesh

(Received 21 September 2004; in final form 19 October 2004)

In this paper we present a generalized heapsort algorithm and its worst-case complexity analysis. The weighted sum of the number of comparisons and movements has been defined as a measure of the complexity. Theoretical analysis suggests that, by this criterion, 3-heap should be optimal in the worst case, contradicting the findings of A. Paulik (Paulik, A., 1990, Worst-case analysis of a generalized heapsort algorithm. Information Processing Letters, 36, 159–165.). Experimental data supporting the analysis are also presented.

Keywords: Heap; d-heap; d-heapsort

C.R. categories: F.2; F.2.2

1. Introduction

Heapsort is an internal in-place sorting algorithm which sorts a given sequence of elements. In the average case, quicksort competes strongly with heapsort; both have average-case complexity O(n log n). However, new variations of heapsort have been developed [1] which show better average-case performance than quicksort. Moreover, heapsort outperforms quicksort in the worst case, since the worst-case complexity of heapsort, O(n log n), is asymptotically better than that of quicksort, O(n²). The traditional heapsort algorithm is based on a binary tree and is considered as a 2-heap here. Paulik [2] showed that the 4-heap is the best of all d-heaps in the worst case, when the worst-case performance is measured as

F1(n) = k1 C(n) + k2 M(n)    (1)

where n is the number of elements to sort, C(n) is the number of element comparisons to sort n elements, M(n) is the number of element movements to sort n elements, k1 is the time constant to complete a single comparison and k2 is the time constant to complete a single movement. The function can be redefined as

F(n) = C(n) + kM(n)    (2)

where k = k2/k1. We have noticed some assumptions in Paulik's paper [2] that are inconsistent with the characteristics of modern hardware. In particular, the weight he assigned to k does not seem reasonable. Islam et al. [3] suggested that a 3-heap should be better than any other heap in terms of the number of comparisons in the worst case. This case was not studied by Paulik. Following our earlier work [4], we have analysed d-heaps (d > 1), using Paulik's time function F(n) as the worst-case complexity measure, but with a different weight assignment for k.

*Corresponding author. Email: tmislam@uwaterloo.ca

International Journal of Computer Mathematics, ISSN 0020-7160 print/ISSN 1029-0265 online © 2006 Taylor & Francis, http://www.tandf.co.uk/journals, DOI: 10.1080/00207160500113272

2. d-heaps

A d-heap is a complete d-ary tree (each node has at most d children) with the property that the value at each node is at least as large as the values at its children (if they exist). One major advantage of a d-heap is that it can be implemented using a sequential array, rather than an explicit tree structure. Let us consider a sequence of n elements (x_1, . . . , x_n) arranged as a d-heap. Then the parent of x_i is located at ⌊(i + (d − 2))/d⌋. Conversely, the children of element x_i are located at positions i × d − (d − 2) through t, where t = min(n, d × i + 1).

3. Worst-case analysis of a d-heap

In this section we analyse the performance of heapsort in the worst case, when each element is pushed down to the bottom. We analyse the construction and sorting phases separately.

ALGORITHM 1 Sorting an array of elements using d-heapsort.

Procedure adjust(A, i, n)
var j, k, s, t, x: integer;
begin
  j = d × i − (d − 2);
  x = A[i];
  while j ≤ n do begin
    k = j;
    t = j + (d − 1);
    if n < t then t = n;
    for s = j + 1 to t do
      if A[k] < A[s] then k = s;
    j = k;
    if x ≥ A[j] then exit
    else A[i] = A[j];
    i = j;
    j = d × j − (d − 2);
  end


  A[i] = x;
end

Procedure Heapify(A, n)
var i: integer;
begin
  for i = ⌊(n + (d − 2))/d⌋ downto 1 do
    adjust(A, i, n);
end

Procedure Heapsort(A, n)
var i, t: integer;
begin
  Heapify(A, n);
  for i = n downto 2 do begin
    t = A[i]; A[i] = A[1]; A[1] = t;
    adjust(A, 1, i − 1);
  end
end
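For concreteness, Algorithm 1 can be transcribed into Python as follows (a hypothetical sketch, not the authors' code; the array is 1-indexed as in the paper, with A[0] unused):

```python
def adjust(A, i, n, d):
    """Push A[i] down into the max d-heap A[1..n] (A[0] is unused)."""
    x = A[i]
    j = d * i - (d - 2)              # first child of node i
    while j <= n:
        t = min(n, j + (d - 1))      # last existing child of node i
        k = j
        for s in range(j + 1, t + 1):
            if A[k] < A[s]:          # locate the largest child
                k = s
        j = k
        if x >= A[j]:                # x has found its place
            break
        A[i] = A[j]                  # largest child moves up one level
        i = j
        j = d * j - (d - 2)
    A[i] = x

def heapify(A, n, d):
    """Build a max d-heap bottom-up, starting from the last parent."""
    for i in range((n + d - 2) // d, 0, -1):
        adjust(A, i, n, d)

def heapsort(data, d):
    """Sort a list in ascending order with d-heapsort."""
    A = [None] + list(data)
    n = len(data)
    heapify(A, n, d)
    for i in range(n, 1, -1):
        A[1], A[i] = A[i], A[1]      # move current maximum to the end
        adjust(A, 1, i - 1, d)
    return A[1:]
```

For example, `heapsort([5, 1, 4, 2, 3], 3)` returns `[1, 2, 3, 4, 5]`.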

3.1 Height of a d-heap containing n + 1 elements

Let h be the height of a heap containing n + 1 elements. At level i, 0 ≤ i ≤ h, there can be at most d^i elements. Therefore the following inequality holds:

1 + d + d^2 + d^3 + · · · + d^(h−1) < n + 1 ≤ 1 + d + d^2 + d^3 + · · · + d^h
⟹ (d^h − 1)/(d − 1) < n + 1 ≤ (d^(h+1) − 1)/(d − 1)
⟹ d^h − 1 < n(d − 1) + d − 1 ≤ d^(h+1) − 1
⟹ d^h < n(d − 1) + d ≤ d^(h+1).

This implies

d^h ≤ n(d − 1) + 1 < d^(h+1)
⟹ h ≤ log_d(n(d − 1) + 1) < h + 1
⟹ h = ⌊log_d(n(d − 1) + 1)⌋.
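The closed form ⌊log_d(n(d − 1) + 1)⌋ can be checked against a direct level count (a quick sanity sketch; `ifloor_log` is a helper introduced here to avoid floating-point error at exact powers of d):

```python
def ifloor_log(x, d):
    """Integer floor(log_d x) for x >= 1, without floating-point error."""
    h, p = 0, 1
    while p * d <= x:
        p *= d
        h += 1
    return h

def height_formula(n, d):
    """The paper's closed form: height of a d-heap holding n + 1 elements."""
    return ifloor_log(n * (d - 1) + 1, d)

def height_by_levels(n, d):
    """Smallest h with 1 + d + ... + d^h >= n + 1 (fill levels directly)."""
    total, h = 1, 0
    while total < n + 1:
        h += 1
        total += d ** h
    return h
```

The two functions agree for every n tried and for d = 2, 3, 4.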

3.2 Insertion of an element into a d-heap

Consider the insertion of the (n + 1)th element x_{n+1} into a d-heap of n elements. The algorithm starts from the root of the d-heap and goes down until the appropriate place for x_{n+1} is found. In the worst case, x_{n+1} goes down to the bottom of the d-heap. At each level, d − 1 comparisons are needed to find the largest child, and one more comparison is needed between the largest child and x_{n+1} to determine whether x_{n+1} should be pushed down further. When the algorithm goes down one level, the largest child replaces its parent, which corresponds to one movement. The algorithm is given as the procedure adjust in algorithm 1.
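The per-level counting just described can be observed directly with an instrumented version of adjust (a Python sketch, not the authors' code; the worst case is exercised with a full 3-heap whose root receives a minimal key):

```python
def adjust_counted(A, i, n, d):
    """The paper's adjust with comparison and movement counters (1-indexed)."""
    comps = moves = 0
    x = A[i]
    j = d * i - (d - 2)
    while j <= n:
        t = min(n, j + (d - 1))
        k = j
        for s in range(j + 1, t + 1):
            comps += 1                 # up to d - 1 child-vs-child comparisons
            if A[k] < A[s]:
                k = s
        j = k
        comps += 1                     # one comparison of x vs the largest child
        if x >= A[j]:
            break
        A[i] = A[j]
        moves += 1                     # largest child replaces its parent
        i = j
        j = d * j - (d - 2)
    A[i] = x
    return comps, moves

# Worst case: a full 3-heap of height 3 (40 nodes, keys descending),
# with a minimal key placed at the root and pushed to the bottom.
d, h = 3, 3
n = (d ** (h + 1) - 1) // (d - 1)      # 40 nodes in a full heap of height 3
A = [None] + list(range(n, 0, -1))     # descending keys form a valid max-heap
A[1] = 0
comps, moves = adjust_counted(A, 1, n, d)
```

Here `moves` equals the height h = 3 and `comps` equals d·h = 9, i.e. d comparisons per level descended.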


Thus, in the worst case, the number of element comparisons is

C_adjust(n + 1) = d⌊log_d(n(d − 1) + 1)⌋,

the number of element movements is

M_adjust(n + 1) = ⌊log_d(n(d − 1) + 1)⌋,

and the cost function is

F_adjust(n + 1) = d⌊log_d(n(d − 1) + 1)⌋ + k⌊log_d(n(d − 1) + 1)⌋ = (d + k)⌊log_d(n(d − 1) + 1)⌋.

3.3 Construction of a d-heap from a sequence of n elements

Assume that we are given a sequence of n elements (in any order) and that we have to construct a d-heap from these elements. Since the leaves have no children, they satisfy the heap property trivially. The algorithm starts from the m = ⌊(n + (d − 2))/d⌋th element and goes down to the first element, inserting one element into the partially constructed d-heap at each step. The algorithm is given as procedure Heapify in algorithm 1. In the worst case, each new element will go down to the bottom of the d-heap. Therefore, in the worst case, the number of movements is

M_heapify(n) = Σ_{i=1}^{m} h(i . . . n)

where the term inside the summation is the height of a d-heap with n − i + 1 elements. Paulik [2] showed that the height of a d-heap with the n − i + 1 elements x_i, . . . , x_n is ⌊log_d(n(d − 1) + 1)⌋ − ⌊log_d(i(d − 1) + 1)⌋. For further simplification we will use the inequality x − y − 1 < ⌊x⌋ − ⌊y⌋ < x − y + 1. Therefore

M_heapify(n) = Σ_{i=1}^{m} [⌊log_d(n(d − 1) + 1)⌋ − ⌊log_d(i(d − 1) + 1)⌋]
< Σ_{i=1}^{m} [log_d(n(d − 1) + 1) − log_d(i(d − 1) + 1) + 1]
= Σ_{i=1}^{m} log_d(n(d − 1) + 1) − Σ_{i=1}^{m} log_d(i(d − 1) + 1) + m.

For large n and m,

M_heapify(n) < Σ_{i=1}^{m} log_d(n(d − 1)) − Σ_{i=1}^{m} log_d(i(d − 1)) + m
= log_d Π_{i=1}^{m} (n(d − 1)) − log_d Π_{i=1}^{m} (i(d − 1)) + m
= m log_d(n(d − 1)) − log_d(m!(d − 1)^m) + m
= m log_d(n(d − 1)) − log_d(m!) − m log_d(d − 1) + m.

Using Stirling's formula for log_e(n!), we obtain

M_heapify(n) < m log_d(n(d − 1)) − [(m + 1/2) log_e m − m]/log_e d − m log_d(d − 1) + m
= m log_d(n(d − 1)) − m log_d m − (1/2) log_d m + m(1 + 1/log_e d) − m log_d(d − 1)
= m log_d(nd(d − 1)/(n − 1)) − (1/2) log_d m + m(1 + 1/log_e d) − m log_d(d − 1)
= m log_d(d(d − 1)/(1 − 1/n)) − (1/2) log_d m + m(1 + 1/log_e d) − m log_d(d − 1),

using m = (n − 1)/d. When n → ∞,

M_heapify(n) < m log_d(d(d − 1)) − (1/2) log_d m + m(1 + 1/log_e d) − m log_d(d − 1)
= m log_d d + m log_d(d − 1) − (1/2) log_d m + m(1 + 1/log_e d) − m log_d(d − 1)
= m(2 + 1/log_e d) − (1/2) log_d m
= ((n − 1)/d)(2 + 1/log_e d) − (1/2) log_d((n − 1)/d)
= n(2 + 1/log_e d)/d − O(log_d n)
= O(n).

The number of comparisons in the worst case is

C_heapify(n) = d Σ_{i=1}^{m} h(i . . . n)

which, after simplification, reduces to

C_heapify(n) < n(2 + 1/log_e d) − O(d log_d n) = O(n).

Therefore the cost function becomes

F_heapify(n) < C_heapify(n) + kM_heapify(n)
= n(2 + 1/log_e d)(1 + k/d) − O(d log_d n)
= O(n).
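The linear bound on heapify movements can be probed empirically (a sketch; an ascending array is used as a hard input for a max-heap, since every sifted element sinks to the bottom of its subtree, and the measured count is compared against the leading term of the derived bound):

```python
import math

def heapify_moves(data, d):
    """Build a max d-heap and count element movements (1-indexed array)."""
    A = [None] + list(data)
    n = len(data)
    moves = 0
    for i in range((n + d - 2) // d, 0, -1):
        x = A[i]
        j = d * i - (d - 2)
        while j <= n:
            t = min(n, j + (d - 1))
            k = max(range(j, t + 1), key=A.__getitem__)  # index of largest child
            if x >= A[k]:
                break
            A[i] = A[k]
            moves += 1
            i = k
            j = d * k - (d - 2)
        A[i] = x
    return moves

# Leading term of the derived bound: M_heapify(n) < (n / d) * (2 + 1 / ln d)
for d in (2, 3, 4):
    n = 2000
    moves = heapify_moves(range(1, n + 1), d)
    bound = (n / d) * (2 + 1 / math.log(d))
    assert moves < bound
```

For all three values of d the measured movement count stays below the bound, consistent with M_heapify(n) = O(n).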


3.4 Sorting a sequence of n elements arranged as a d-heap

The Heapsort algorithm starts by creating an initial heap using the procedure described in sections 3.2 and 3.3. The algorithm then iterates n − 1 times. In step i, the ith largest element is removed from the root and placed at the end of the current heap. The last of the remaining n − i elements is then placed at the root and pushed down the heap to its proper position. The algorithm is given as procedure Heapsort in algorithm 1. In the worst case, each element placed at the root will go down to the bottom of the heap. Thus the number of movements in the worst case will be

M_heapsort(n) = M_sort(n) + M_heapify(n)
= Σ_{i=1}^{n} ⌊log_d((i − 1)(d − 1) + 1)⌋ + O(n)
= Σ_{i=1}^{n−1} ⌊log_d(i(d − 1) + 1)⌋ + O(n).

To simplify, we make use of the inequality x − 1 < ⌊x⌋ ≤ x. Therefore we have

M_heapsort(n) ≤ Σ_{i=1}^{n−1} log_d(i(d − 1) + 1) + O(n)
= log_d Π_{i=1}^{n−1} (i(d − 1) + 1) + O(n).

For large n we obtain

M_heapsort(n) ≤ log_d Π_{i=1}^{n−1} (i(d − 1)) + O(n)
< log_d Π_{i=1}^{n} (i(d − 1)) + O(n)
= log_d[n!(d − 1)^n] + O(n)
= log_d(n!) + n log_d(d − 1) + O(n).

Using Stirling's formula for log_e(n!), we obtain

M_heapsort(n) < (n + 1/2) log_d n − n + n log_d(d − 1) + O(n)
= (n + 1/2) log_d n + O(n).

Then the number of comparisons in the worst case is

C_heapsort(n) = dM_heapsort(n) = d(n + 1/2) log_d n + O(n),

and the worst-case cost function is

F_heapsort(n) < C_heapsort(n) + kM_heapsort(n)
= (d + k)(n + 1/2) log_d n + O(n)
= (d + k)(n + 1/2)(log_10 n / log_10 d) + O(n).
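The per-step bound behind M_sort(n) — each adjust in the sorting phase moves an element at most the height of the current heap — can be checked against the worst-case summation (a sketch on random data, which necessarily stays at or below the worst case):

```python
import math
import random

def adjust(A, i, n, d, counter):
    """Sift A[i] down in the max d-heap A[1..n], counting movements."""
    x = A[i]
    j = d * i - (d - 2)
    while j <= n:
        t = min(n, j + (d - 1))
        k = max(range(j, t + 1), key=A.__getitem__)  # index of largest child
        if x >= A[k]:
            break
        A[i] = A[k]
        counter[0] += 1              # one element movement
        i = k
        j = d * k - (d - 2)
    A[i] = x

def sort_phase_moves(data, d):
    """Run d-heapsort; return movements made in the sorting phase only."""
    A = [None] + list(data)
    n = len(data)
    counter = [0]
    for i in range((n + d - 2) // d, 0, -1):
        adjust(A, i, n, d, counter)
    counter[0] = 0                   # discard heapify movements
    for i in range(n, 1, -1):
        A[1], A[i] = A[i], A[1]
        adjust(A, 1, i - 1, d, counter)
    assert A[1:] == sorted(data)
    return counter[0]

random.seed(0)
n = 1000
data = [random.random() for _ in range(n)]
for d in (2, 3, 4):
    # the paper's worst-case sum for the sorting phase
    bound = sum(math.floor(math.log(i * (d - 1) + 1, d)) for i in range(1, n))
    assert sort_phase_moves(data, d) <= bound
```

The measured movement counts sit below the summation for every d, as the analysis requires.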

4. Experimental results

We have implemented the algorithms for d values of 2, 3 and 4. The results are given in table 1. We calculated the weighted sum of the numbers of comparisons and movements for different values of k (0.6–0.9). Each algorithm was run on the same sequence of elements, generated by the Turbo C random-number generator. The data are 10-run averages. The data in table 1 clearly show that, provided that the comparison cost dominates the movement cost, the 3-heap is better than any other heap.

Table 1. Performance of heapsort algorithms.

                        No. of        No. of          Weighted sum of comparisons and movements
No. of elements   d   comparisons   movements      k = 0.6      k = 0.7      k = 0.8      k = 0.9
100               2       1,044          493        1,339.8      1,389.1      1,438.4      1,487.7
                  3       1,016          325        1,211.0      1,243.5      1,276.0      1,308.5
                  4       1,079          257        1,233.2      1,258.9      1,284.6      1,310.3
200               2       2,476        1,177        3,182.2      3,299.9      3,417.6      3,535.3
                  3       2,419          775        2,884.0      2,961.5      3,039.0      3,116.5
                  4       2,604          633        2,983.8      3,047.1      3,110.4      3,173.7
500               2       7,448        3,545        9,575.0      9,929.5     10,284.0     10,638.5
                  3       7,298        2,333        8,697.8      8,931.1      9,164.4      9,397.7
                  4       7,801        1,883        8,930.8      9,119.1      9,307.4      9,495.7
1000              2      16,834        8,070       21,676.0     22,483.0     23,290.0     24,097.0
                  3      16,357        5,260       19,513.0     20,039.0     20,565.0     21,091.0
                  4      17,582        4,270       20,144.0     20,571.0     20,998.0     21,425.0
2000              2      37,720       18,174       48,624.4     50,441.8     52,259.2     54,076.6
                  3      36,860       11,888       43,992.8     45,181.6     46,370.4     47,559.2
                  4      39,170        9,505       44,873.0     45,823.5     46,774.0     47,724.5
5000              2     107,643       52,059      138,878.4    144,084.3    149,290.2    154,496.1
                  3     104,376       33,767      124,636.2    128,012.9    131,389.6    134,766.3
                  4     110,668       26,943      126,833.8    129,528.1    132,222.4    134,916.7
10000             2     235,299      114,148      303,787.8    315,202.6    326,617.4    338,032.2
                  3     226,579       73,531      270,715.6    278,068.7    285,421.8    292,774.9
                  4     242,789       59,286      278,360.6    284,289.2    290,217.8    296,146.4
15000             2     370,303      179,934      478,263.4    496,256.8    514,250.2    532,243.6
                  3     358,093      116,353      427,904.8    439,540.1    451,175.4    462,810.7
                  4     381,706       93,314      437,694.4    447,025.8    456,357.2    465,688.6
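The weighted sums in table 1 follow directly from the comparison and movement counts via equation (2) (a minimal sketch using the n = 100 rows):

```python
# (comparisons, movements) for n = 100, taken from table 1
counts = {2: (1044, 493), 3: (1016, 325), 4: (1079, 257)}

def cost(d, k):
    """F(n) = C(n) + k * M(n), the paper's cost measure (equation (2))."""
    c_n, m_n = counts[d]
    return c_n + k * m_n

# e.g. d = 2, k = 0.6: 1044 + 0.6 * 493 = 1339.8, as in the table
assert abs(cost(2, 0.6) - 1339.8) < 1e-9
assert min((cost(d, 0.6), d) for d in (2, 3, 4))[1] == 3   # the 3-heap wins
```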

Table 2. Run-time performance of heapsort algorithms.

                         Integer (ms)               Long double (ms)
No. of elements    d = 2    d = 3    d = 4      d = 2     d = 3     d = 4
100                 0.00     0.00     0.00       2.20      1.65      2.20
200                 0.55     0.55     0.55       4.40      4.40      4.95
500                 1.65     1.65     1.65      13.75     12.65     13.75
1000                3.85     3.30     3.30      30.80     28.05     29.70
2000                8.25     7.70     7.15      68.20     63.25     64.90
5000               23.10    20.90    19.80     193.05    176.55    178.75
8000               39.05    34.65    33.55     325.05    295.35    306.35

We also measured the run time of the Heapsort procedure for d values of 2, 3 and 4 (for integer and long double data). The result (100-run average) is given in table 2. Here we have selected integer and long double data types as two extremes. The experimental data show that 3-heap is clearly the best for long double data. For integer data, 4-heap competes strongly with 3-heap and sometimes shows better run-time performance. This is because index comparisons become a dominant factor for integer data types. In addition, we have used randomly generated data rather than worst-case data.

5. Conclusion

The term c = (d + k)/log_10 d is the dominant factor in the worst-case cost function of a d-heap. A key feature of this term is the value of k. Let us consider the comparison operation performed by procedure adjust on the elements A[k] and A[s]. At least one of the elements has to be transferred into a register from memory first, and then the comparison is made. During a movement, the element A[j] (already loaded into a register during the comparison) is simply written to memory location A[i]. Therefore the cost of a comparison involves the cost of a movement from memory to a register plus the cost of a comparison between two register contents, while the cost of a movement simply involves the cost of writing a register's content to a memory location. This implies that the value of k should be less than 1. In a Pentium processor, the numbers of clock cycles needed for a single comparison and a single movement are 2 (1 + 1) and 1, respectively [5]. Therefore the value of k is 0.5. Table 3 shows the values of c for different values of d and k. It shows that, provided that k ≤ 0.8, the 3-heap is optimal in terms of the worst-case cost function F_heapsort(n). The 4-heap closely competes with the 3-heap and outperforms it for k ≥ 0.82. However, practical values of k lie well below 0.8 [5], and so the 3-heap should be better than the 4-heap in the worst case.

Table 3. Value of c for different values of d and k.

 d     k = 0.6     k = 0.7     k = 0.8     k = 0.9
 2    8.637013    8.969205    9.301398    9.633591
 3    7.545251    7.754842    7.964432    8.174022
 4    7.640434    7.806531    7.972627    8.138723
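Table 3 can be reproduced directly from c = (d + k)/log_10 d (a short sketch, also locating the optimal d for each k):

```python
import math

def c(d, k):
    """Dominant coefficient of the worst-case cost: (d + k) / log10(d)."""
    return (d + k) / math.log10(d)

# reproduce table 3 and locate the optimal d for each k
for k in (0.6, 0.7, 0.8):
    assert min((2, 3, 4), key=lambda d: c(d, k)) == 3   # 3-heap optimal
assert min((2, 3, 4), key=lambda d: c(d, 0.9)) == 4     # 4-heap takes over
assert abs(c(2, 0.6) - 8.637013) < 1e-5                 # first table entry
```

The crossover near k = 0.82 mentioned above can be seen by comparing c(3, k) and c(4, k) directly.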


References

[1] Carlsson, S., 1987, Average-case results on heapsort. BIT, 27, 2–17.
[2] Paulik, A., 1990, Worst-case analysis of a generalized heapsort algorithm. Information Processing Letters, 36, 159–165.
[3] Mamunul Islam, M., Kaykobad, M., Murshed, M.M. and Amyeen, E., 1998, 3 is a more promising algorithmic parameter than 2. Computers and Mathematics with Applications, 36, 19–24.
[4] Islam, T.M. and Kaykobad, M., 1998, Worst-case analysis of generalized heapsort algorithm revisited. Proceedings of the International Conference on Computer and Information Technology, Dhaka, 18–20 December, pp. 224–228.
[5] Antonakos, J.L., 1997, The Pentium Microprocessor (Englewood Cliffs, NJ: Prentice-Hall).
