Chapter 26 Self-Assessment Quiz: Complexity Analysis
Section 1: Multiple Choice
Q1. The expression 3n² + 7n + 15 is:
(a) O(n) (b) O(n²) (c) O(n³) (d) O(15)
Answer
(b) The dominant term is 3n². Constants (3) and lower-order terms (7n + 15) are dropped in Big-O notation.

Q2. An algorithm that halves its input at each step has complexity:
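To see Q1's dominance claim numerically, here is a small Python sketch (illustrative only; the function name is my own, not from the chapter):

```python
def f(n: float) -> float:
    """The polynomial from Q1: 3n^2 + 7n + 15."""
    return 3 * n * n + 7 * n + 15

# As n grows, the ratio f(n) / n^2 approaches the leading constant 3,
# showing that the 7n + 15 terms become negligible:
for n in (10, 1_000, 100_000):
    print(n, f(n) / (n * n))
```

At n = 10 the lower-order terms still contribute noticeably (ratio ≈ 3.85), but by n = 100,000 the ratio is within 0.01% of 3.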
(a) O(n) (b) O(n/2) (c) O(log n) (d) O(sqrt(n))
Answer
(c) Halving n at each step means about log₂(n) steps to reach 1. Option (b) is a trap: O(n/2) simplifies to O(n) because constant factors are dropped, and O(n) describes a single linear pass, not repeated halving.

Q3. Which operation is O(1) on an array?
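Q2's log₂(n) step count can be verified by simulation; a minimal Python sketch (the function name is my own):

```python
def halving_steps(n: int) -> int:
    """Count how many halvings it takes to get from n down to 1."""
    steps = 0
    while n > 1:
        n //= 2       # halve the input each step, as in binary search
        steps += 1
    return steps

# For n = 2^k, exactly k steps are needed:
print(halving_steps(1024))     # 10, i.e. log2(1024)
print(halving_steps(1 << 20))  # 20
```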
(a) Finding the minimum value (b) Accessing element at index i (c) Inserting at the beginning (d) Checking if a value exists
Answer
(b) Array access by index is O(1) due to address arithmetic: address = base + i × element_size. Finding the minimum requires scanning all elements (O(n)), inserting at the beginning requires shifting all elements (O(n)), and checking existence requires scanning (O(n)).

Q4. What is the time complexity of this code?
for I := 1 to N do
  for J := 1 to N do
    WriteLn(I, J);
(a) O(n) (b) O(n log n) (c) O(n²) (d) O(2n)
Answer
(c) The outer loop runs n times. For each outer iteration, the inner loop runs n times. Total: n × n = n². O(2n) = O(n), which would apply to two separate (non-nested) loops.

Q5. What is the space complexity of merge sort?
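The nested-versus-sequential distinction in Q4's answer can be checked by counting iterations; a Python sketch (illustrative, with names of my own):

```python
def nested_count(n: int) -> int:
    """Iterations of two nested loops, as in Q4's code."""
    count = 0
    for i in range(n):
        for j in range(n):   # inner loop runs n times per outer iteration
            count += 1
    return count

def sequential_count(n: int) -> int:
    """Iterations of two separate (non-nested) loops."""
    count = 0
    for i in range(n):       # first loop: n iterations
        count += 1
    for j in range(n):       # second loop: another n iterations
        count += 1
    return count

print(nested_count(100))      # 10000 = n^2
print(sequential_count(100))  # 200 = 2n, which is O(n)
```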
(a) O(1) (b) O(log n) (c) O(n) (d) O(n²)
Answer
(c) Merge sort requires a temporary array of size n for the merge step. The recursion stack adds O(log n), but the dominant space usage is the O(n) temporary array.

Q6. An algorithm is O(n) in the best case and O(n²) in the worst case. What can you say about its average-case complexity?
(a) It must be O(n) (b) It must be O(n²) (c) It must be between O(n) and O(n²), but the exact value depends on the input distribution (d) It must be O(n^1.5)
Answer
(c) The average case depends on the probability distribution of inputs. For insertion sort: best O(n), worst O(n²), average O(n²) with a smaller constant. The average is NOT automatically the midpoint — it requires analysis of the specific algorithm and input distribution.

Q7. Which of the following is TRUE about the optimization hierarchy?
(a) Micro-optimizations always give bigger speedups than algorithm changes (b) You should optimize code before measuring performance (c) Improving the algorithm is usually more effective than optimizing the code (d) All O(n²) algorithms are too slow for practical use
Answer
(c) Algorithmic improvements (e.g., O(n²) to O(n log n)) typically provide orders-of-magnitude speedups, while micro-optimizations provide at most 2-3x improvement. Always improve the algorithm first, measure, then micro-optimize if needed.

Q8. A dynamic array doubles its capacity when full. The amortized cost of n insertions is:
(a) O(n²) (b) O(n log n) (c) O(n) (d) O(1) per insertion
Answer
(d) Although individual insertions can cost O(n) when doubling occurs, the total cost of n insertions is O(n) (the doublings at sizes 1, 2, 4, ..., n sum to about 2n). Therefore the amortized cost per insertion is O(n)/n = O(1).

Section 2: True or False
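Q8's amortized argument can be simulated: count every element copy caused by a doubling and compare the total to 2n. A Python sketch (names are my own):

```python
def total_copy_cost(n: int) -> int:
    """Simulate n appends to a dynamic array that doubles when full.
    Returns the total number of element copies performed."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # copy all existing elements to the new array
            capacity *= 2       # double the capacity
        size += 1
    return copies

# Total copies stay below 2n, so the amortized cost per append is O(1):
n = 1000
print(total_copy_cost(n))      # 1023 copies for 1000 appends
print(total_copy_cost(n) < 2 * n)  # True
```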
Q9. True or False: O(n) is always faster than O(n²) for any input size.
Answer
False. For small n, the constants matter. An O(n²) algorithm with a tiny constant (e.g., 0.01n²) can be faster than an O(n) algorithm with a large constant (e.g., 1000n) for small n. Big-O describes asymptotic behavior — what happens as n grows large. For n < 100,000, the O(n²) algorithm might be faster.

Q10. True or False: An O(n log n) algorithm performs n × log₂(n) operations on every input.
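Q9's constants example can be made concrete; a Python sketch comparing 0.01n² against 1000n around the crossover at n = 100,000:

```python
def quadratic_cost(n: int) -> float:
    return 0.01 * n * n   # O(n^2) algorithm with a tiny constant

def linear_cost(n: int) -> int:
    return 1000 * n       # O(n) algorithm with a large constant

# Below the crossover, the "slower" O(n^2) algorithm actually wins:
print(quadratic_cost(10_000) < linear_cost(10_000))        # True
# The costs meet near n = 100,000 (0.01 * n^2 = 1000 * n); beyond it, O(n) wins:
print(quadratic_cost(1_000_000) > linear_cost(1_000_000))  # True
```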
Answer
False. O(n log n) means the algorithm performs AT MOST c × n × log(n) operations for some constant c and sufficiently large n. The actual number of operations may be less (e.g., best case), and the constant factor c can vary. Big-O is an upper bound, not an exact count.

Q11. True or False: Space complexity includes the memory used by the input itself.
Answer
False. Space complexity typically measures only the extra (auxiliary) memory used by the algorithm, not the input. An in-place sorting algorithm has O(1) extra space even though the input array is O(n). Some definitions include input space — always clarify which convention is being used.

Q12. True or False: If doubling the input size approximately quadruples the running time, the algorithm is likely O(n²).
Answer
True. For O(n²), time ∝ n². When n doubles: (2n)² = 4n². So the time should approximately quadruple. This empirical test — measuring growth ratios — is a practical way to verify theoretical complexity. Ratio ≈ 2 suggests O(n), ≈ 4 suggests O(n²), ≈ 8 suggests O(n³).

Section 3: Short Answer
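Q12's ratio heuristic can be demonstrated without a stopwatch by counting operations instead of measuring time; a Python sketch (names are my own):

```python
def quadratic_ops(n: int) -> int:
    """Operation count of a simple O(n^2) algorithm (visit all pairs)."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

# Doubling n should roughly quadruple the work for an O(n^2) algorithm:
ratio = quadratic_ops(1000) / quadratic_ops(500)
print(ratio)  # 4.0
```

With real timings the ratio fluctuates around 4 because of constant factors and measurement noise, but the trend is the same.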
Q13. What is the time complexity of this code?
I := 1;
while I < N do begin
  J := 1;
  while J < N do begin
    Process(I, J);
    J := J * 2;
  end;
  I := I + 1;
end;
Answer
Outer loop: I runs from 1 up to N, about n iterations. Inner loop: J doubles from 1 until it reaches N, about log₂(n) iterations. Total: O(n log n). This is NOT O(n²) because the inner loop runs only log n times, not n times.

Q14. Give the recurrence relation for the time complexity of this recursive function, and solve it:
function Mystery(N: Integer): Integer;
begin
  if N <= 1 then Mystery := 1
  else Mystery := Mystery(N div 2) + Mystery(N div 2) + N;
end;
Answer
Recurrence: T(n) = 2T(n/2) + O(n). This is the same as merge sort. By the Master Theorem: a=2, b=2, c=1, log₂(2) = 1 = c, so T(n) = O(n log n).

Q15. You have an algorithm that takes 10 ms for n = 100. Predict its time for n = 1000 if it is (a) O(n), (b) O(n²), (c) O(n³). Which complexities would be acceptable if the time limit is 1 second?
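The kind of scaling arithmetic Q15 asks for can be sketched in a few lines of Python (the helper name is my own; it assumes the running time is dominated by the nᵏ term):

```python
def predict_ms(t_small_ms: float, n_small: int, n_big: int, k: int) -> float:
    """Scale a measured time t(n_small) up to n_big, assuming T(n) ~ n^k."""
    return t_small_ms * (n_big / n_small) ** k

# 10 ms measured at n = 100, predicted for n = 1000 (a tenfold increase):
print(predict_ms(10, 100, 1000, 1))  # O(n):   100.0 ms
print(predict_ms(10, 100, 1000, 2))  # O(n^2): 1000.0 ms
print(predict_ms(10, 100, 1000, 3))  # O(n^3): 10000.0 ms
```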