
⏱️ Complexity & Big O Notation

πŸ‘‰ What is Algorithmic Complexity?

Complexity is a measure of the amount of resources an algorithm consumes as the size of its input grows. The input size is usually denoted by n. The two main resources we measure are time and space.


⏳ Time Complexity

Time complexity measures the number of basic operations an algorithm performs, not the actual time in seconds. It quantifies how the runtime of an algorithm scales with the input size (n).

  • Example: Accessing an element in a Python list by its index (my_list[i]) takes the same amount of time regardless of the list’s size. This is a single operation. In contrast, looping through the entire list requires n operations.

πŸ’Ύ Space Complexity

Space complexity measures the amount of memory an algorithm uses as the input size (n) grows. This includes both the space taken by the input data and any extra memory (auxiliary space) used by the algorithm.

  • Example: An algorithm that sorts a list in-place uses minimal extra memory. An algorithm that creates a copy of the list to sort it has a higher space complexity.

πŸ…ΎοΈ Big O Notation

Big O notation is the standard mathematical notation used to describe the upper bound or worst-case scenario of an algorithm’s complexity. It focuses on the most dominant factor as the input size (n) becomes very large, ignoring constants and less significant terms.

  • Example: If an algorithm performs 3n^2 + 2n + 5 operations, its Big O notation is O(n^2), because as n gets large, the n^2 term dominates the growth.

⭐ Common Big O Complexities

Here are some common complexities, from most to least efficient.

  1. O(1) - Constant Time: The runtime is constant and does not change with the input size.

    • Example: Accessing an element in a dictionary or list by index.
  2. O(log n) - Logarithmic Time: The runtime grows logarithmically; doubling the input size adds only one extra step.

    • Example: Binary search in a sorted array.
  3. O(n) - Linear Time: The runtime grows linearly with the input size.

    • Example: Looping through all elements in a list once.
  4. O(n log n) - Log-Linear Time: The runtime is a product of linear and logarithmic growth.

    • Example: Efficient sorting algorithms like Merge Sort or Timsort.
  5. O(n^2) - Quadratic Time: The runtime is proportional to the square of the input size.

    • Example: Nested loops iterating over the same list (e.g., Bubble Sort).
  6. O(2^n) - Exponential Time: The runtime doubles with each additional element in the input. These algorithms are very slow and impractical for large n.

    • Example: Recursive calculation of Fibonacci numbers without memoization.

🧐 How to Analyze an Algorithm’s Efficiency

A simplified process for finding the Big O complexity:

  1. Identify the Input: Determine what n represents (e.g., the number of elements in a list).
  2. Count Operations: Count the number of operations performed in terms of n.
    • A simple loop from 0 to n is O(n).
    • A nested loop from 0 to n is O(n^2).
  3. Find the Dominant Term: Keep the fastest-growing term and drop all constants and lower-order terms.
  4. Determine the Worst Case: Consider the input that would cause the algorithm to perform the maximum number of operations.
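The four steps above can be walked through on a small example. This hypothetical `count_pairs` function compares every element with every other element; its comments map each analysis step onto the code:

```python
def count_pairs(items):
    """Count index pairs (i, j), i != j, whose elements are equal.

    Step 1 (input):   n = len(items).
    Step 2 (count):   two nested loops over the list -> n * n comparisons.
    Step 3 (dominant term): drop the constants -> n^2 dominates.
    Step 4 (worst case):    every input runs all n^2 iterations -> O(n^2).
    """
    ops = 0    # explicit operation counter, for illustration only
    pairs = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            ops += 1
            if i != j and items[i] == items[j]:
                pairs += 1
    return pairs, ops

# A list of 3 elements triggers exactly 3 * 3 = 9 comparisons.
assert count_pairs([1, 2, 1]) == (2, 9)
```

Quadrupling n from 3 to 12 multiplies the operation count by 16, the square of 4, which is the signature of O(n^2) growth.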