⏱️ Complexity & Big O Notation
📈 What is Algorithmic Complexity?
Complexity is a measure of the amount of resources an algorithm consumes as the size of its input grows. The input size is usually denoted by n. The two main resources we measure are time and space.
⏳ Time Complexity
Time complexity measures the number of basic operations an algorithm performs, not the actual time in seconds. It quantifies how the runtime of an algorithm scales with the input size (n).
- Example: Accessing an element in a Python list by its index (`my_list[i]`) takes the same amount of time regardless of the list's size. This is a single operation. In contrast, looping through the entire list requires `n` operations.
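To make this concrete, here is a minimal sketch contrasting a single `O(1)` index access with an `O(n)` scan. The `contains` helper is a hypothetical function written for illustration (Python's `in` operator does the same linear scan for lists).

```python
my_list = list(range(1_000_000))

# O(1): a single operation, no matter how long the list is
x = my_list[500_000]

# O(n): in the worst case, every element is visited once
def contains(lst, target):
    for item in lst:          # up to n iterations
        if item == target:
            return True
    return False
```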
💾 Space Complexity
Space complexity measures the amount of memory an algorithm uses as the input size (n) grows. This includes both the space taken by the input data and any extra memory (auxiliary space) used by the algorithm.
- Example: An algorithm that sorts a list in-place uses minimal extra memory. An algorithm that creates a copy of the list to sort it has a higher space complexity.
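A rough sketch of the contrast (the helper names are mine, not a standard API): `list.sort()` rearranges the caller's existing list, while `sorted()` allocates an entirely new list of size `n`, which is `O(n)` auxiliary space.

```python
def sort_in_place(lst):
    # Modifies the caller's list; no full copy is allocated
    # (Timsort does still use some temporary memory internally).
    lst.sort()

def sort_copy(lst):
    # O(n) auxiliary space: builds and returns a brand-new sorted list,
    # leaving the original untouched.
    return sorted(lst)
```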
🅾️ Big O Notation
Big O notation is the standard mathematical notation used to describe the upper bound or worst-case scenario of an algorithm's complexity. It focuses on the most dominant factor as the input size (n) becomes very large, ignoring constants and less significant terms.
- Example: If an algorithm performs `3n^2 + 2n + 5` operations, its Big O notation is `O(n^2)`, because as `n` gets large, the `n^2` term dominates the growth.
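One way to see the dominance numerically, using a hypothetical `ops` function that encodes the operation count above: the ratio `ops(n) / n^2` settles toward the constant 3 as `n` grows, so the lower-order terms and the constant factor are what Big O discards.

```python
def ops(n):
    # Hypothetical operation count from the example above
    return 3 * n**2 + 2 * n + 5

for n in (10, 100, 1000):
    # The ratio approaches 3: lower-order terms fade as n grows
    print(n, ops(n) / n**2)
```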
⚡ Common Big O Complexities
Here are some common complexities, from most to least efficient.
- `O(1)` - Constant Time: The runtime is constant and does not change with the input size.
  - Example: Accessing an element in a dictionary or list by index.
- `O(log n)` - Logarithmic Time: The runtime grows logarithmically. The algorithm gets only slightly slower as the input size doubles.
  - Example: Binary search in a sorted array.
- `O(n)` - Linear Time: The runtime grows linearly with the input size.
  - Example: Looping through all elements in a list once.
- `O(n log n)` - Log-Linear Time: The runtime is a product of linear and logarithmic growth.
  - Example: Efficient sorting algorithms like Merge Sort or Timsort.
- `O(n^2)` - Quadratic Time: The runtime is proportional to the square of the input size.
  - Example: Nested loops iterating over the same list (e.g., Bubble Sort).
- `O(2^n)` - Exponential Time: The runtime doubles with each addition to the input size. These algorithms are very slow and impractical for large `n`.
  - Example: Recursive calculation of Fibonacci numbers without memoization.
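To illustrate the logarithmic case, here is a sketch of binary search: each comparison halves the remaining range, so a sorted list of a million elements needs only about 20 steps.

```python
def binary_search(sorted_list, target):
    # O(log n): each iteration halves the search range
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid            # found: return its index
        elif sorted_list[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # not present
```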
🧠 How to Analyze an Algorithm's Efficiency
A simplified process for finding the Big O complexity:
- Identify the Input: Determine what `n` represents (e.g., the number of elements in a list).
- Count Operations: Count the number of operations performed in terms of `n`.
  - A simple loop from `0` to `n` is `O(n)`.
  - A nested loop from `0` to `n` is `O(n^2)`.
- Find the Dominant Term: Keep the fastest-growing term and drop all constants and lower-order terms.
- Determine the Worst Case: Consider the input that would cause the algorithm to perform the maximum number of operations.
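The four steps can be applied to a small sketch (a hypothetical duplicate check, written for illustration): `n` is the list length; the nested loops perform roughly `n^2 / 2` comparisons; dropping the constant leaves `O(n^2)`; and the worst case is a list with no duplicates, which forces every pair to be checked.

```python
def has_duplicate(lst):
    # Step 1: n is the number of elements in lst
    n = len(lst)
    for i in range(n):                 # outer loop: n iterations
        for j in range(i + 1, n):      # inner loop: up to n - 1 iterations
            if lst[i] == lst[j]:
                return True            # early exit on the first duplicate
    # Steps 2-4: ~n^2/2 comparisons when no duplicate exists (worst case),
    # so the complexity is O(n^2) after dropping the 1/2 constant.
    return False
```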