Time Complexity
Translated from German using DeepL.
Date: April 2025
Reading time: 8 minutes
"In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm."
Source: Wikipedia
Big O notation expresses this complexity and thus helps to analyze and compare the resource consumption of different algorithms, which in turn enables the implementation of performant, scalable systems.
Formula Structure
Efficiency is described as follows.
Example: O(2^n)
n: the amount of data (input size)
O: the order of growth, i.e. how much slower the algorithm becomes as n increases
In this example, the required execution time increases exponentially as the amount of data increases.
The result is independent of the computer; only the number of steps required for execution matters.
In addition, the notation does not usually reflect an exact count, but rather describes the rough growth ratio.
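For example, a function that traverses the same array three times still counts as O(n), not O(3n), because constant factors are dropped. A minimal sketch (the function name is illustrative):
function sumMinMax(arr) {
  // Three full passes over the array: about 3n steps in total.
  // Big O drops the constant factor, so this still counts as O(n).
  const sum = arr.reduce((acc, x) => acc + x, 0); // pass 1
  const min = Math.min(...arr); // pass 2
  const max = Math.max(...arr); // pass 3
  return { sum, min, max };
}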
Overview
The following complexity types are described below.
Source: https://www.bigocheatsheet.com/
Notation | Name | Example |
---|---|---|
O(1) | Constant | Access to an array element |
O(log n) | Logarithmic | Binary search |
O(n) | Linear | Array traversal |
O(n log n) | Log-linear | Mergesort, Heapsort |
O(n^2) | Quadratic | Double loop, e.g. bubble sort |
O(2^n) | Exponential | Recursive solutions, e.g. Fibonacci sequence |
O(n!) | Factorial | Brute force for permutations |
O(1) | Constant
This complexity class requires the same number of steps, regardless of the amount of data, and is therefore theoretically optimal.
In practice, however, a logarithmic solution can sometimes outperform a constant one, because O(1)
does not guarantee a single operation. It only means that the effort is independent of the input size; that effort may just as well consist of ten constant steps.
function getFirstElement(arr) {
  return arr[0]; // a single index access, regardless of array length
}
Even if the array has a thousand entries, only one operation is necessary.
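A constant-time function may also consist of several fixed steps, as long as none of them depends on n. A small illustrative sketch (the function name is hypothetical):
function swapFirstTwo(arr) {
  // Three fixed assignments, regardless of array size: still O(1).
  const tmp = arr[0];
  arr[0] = arr[1];
  arr[1] = tmp;
  return arr;
}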
O(log n) | Logarithmic
With logarithmic complexity, the execution time increases only slowly as the amount of data increases, as the number of elements to be checked is halved at each step.
An example of this is the binary search, in which the area of a list to be searched is continuously halved.
If the list has four entries, two halvings are necessary: log2(4) = 2
For eight elements, three: log2(8) = 3
And even if the list grows rapidly, the number of steps grows only slowly: log2(1024) = 10
function binarySearch(arr, target) {
  // Requires a sorted array.
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const middle = Math.floor((left + right) / 2);
    if (arr[middle] === target) {
      return middle; // target found
    } else if (arr[middle] < target) {
      left = middle + 1; // continue in the right half
    } else {
      right = middle - 1; // continue in the left half
    }
  }
  return -1; // target is not in the array
}
Searching a list with a thousand entries requires only about ten steps.
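For example, applied to a sorted array (a precondition of binary search):
const sorted = [1, 3, 5, 7, 9, 11];
binarySearch(sorted, 7); // 3 (index of the target)
binarySearch(sorted, 4); // -1 (not contained)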
O(n) | Linear
The time increases proportionally to the amount of data.
function hasItem(arr, target) {
  for (const item of arr) {
    if (item === target) {
      return true; // found after at most n checks
    }
  }
  return false;
}
For an array with a thousand entries, approximately a thousand steps are required.
O(n log n) | Log-linear
The execution time is initially similar to that of linear complexity, but grows noticeably faster as the amount of data increases.
With a log-linear solution, the data is processed linearly (O(n)), and this linear work is repeated for each of the logarithmically many steps (O(log n)).
The calculation is as follows:
n = 4
O(4 log 4) = O(4 × 2) = O(8)
The table shows how the increase in data volume also causes an additional increase in time.
n | log n | O(n log n) |
---|---|---|
8 | 3 | 24 |
32 | 5 | 160 |
… | … | … |
256 | 8 | 2048 |
280 | ~8.1 | ~2276 |
Both from n = 8 to n = 32 and from n = 256 to n = 280, n increases by 24. However, the resulting additional work differs: in the first case this leads to 136 additional steps, in the second the effort increases by 228 steps.
mergeSort is a classic example of this complexity. It is a sorting procedure that splits the array into the smallest possible parts and then sorts and reassembles them.
function mergeSort(arr) {
  if (arr.length <= 1) {
    return arr; // a single element is already sorted
  }
  // Split the array in half and sort both halves recursively.
  const middle = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, middle));
  const right = mergeSort(arr.slice(middle));
  return merge(left, right);
}
function merge(left, right) {
  // Combine two sorted arrays into one sorted array.
  const result = [];
  while (left.length && right.length) {
    result.push(left[0] < right[0] ? left.shift() : right.shift());
  }
  return result.concat(left, right);
}
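Applied to a small example:
mergeSort([9, 4, 5, 2]); // [2, 4, 5, 9]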
By tracing this function, you can see why it is a log-linear algorithm: an array of size four requires roughly eight steps: O(4 log 4) = 8
Slice:
[9, 4, 5, 2]
[9, 4] | [5, 2]
[9] | [4] | [5] | [2]
Merge:
[4, 9] | [2, 5]
[2] - [4, 9] | [5]
[2, 4] - [9] | [5]
[2, 4, 5] - [9]
[2, 4, 5, 9]
To sort an array with a thousand elements, approximately ten thousand steps are needed.
O(n^2) | Quadratic
With quadratic complexity, the execution time increases in proportion to the square of the input quantity. If the data doubles, the time required quadruples.
Such algorithms usually contain nested loops.
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    // Compare each element with every element after it.
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}
Checking a thousand entries requires on the order of a million comparisons.
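For comparison, the same check can be performed in linear time by trading memory for speed, for example with a Set. This is a sketch of an alternative approach (the function name is illustrative):
function hasDuplicatesLinear(arr) {
  const seen = new Set();
  for (const item of arr) {
    if (seen.has(item)) return true; // average O(1) lookup
    seen.add(item);
  }
  return false; // a single pass: O(n) time, O(n) additional space
}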
O(2^n) | Exponential
Exponential algorithms check all possible combinations, which some brute-force solutions require. As the data grows, such solutions quickly become impractical (from approx. n = 30).
This recursive function is so expensive because each call generates two further calls.
function fibonacci(n) {
  if (n <= 1) return n;
  // Each call spawns two further calls.
  return fibonacci(n - 1) + fibonacci(n - 2);
}
However, the number of calls does not grow exactly as 2^n, but somewhat more slowly: fibonacci(10) generates 177 function calls, not 1,024; fibonacci(30) generates about 2.7 million function calls, not a billion.
The point of the notation, however, is to capture the growth behavior, and that behavior is exponential.
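For illustration: caching intermediate results (memoization) reduces the same calculation to roughly linear time. A minimal sketch (the function name is illustrative):
function fibonacciMemo(n, cache = new Map()) {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n); // reuse a previous result
  const result = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
  cache.set(n, result);
  return result; // every value is computed only once: O(n)
}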
O(n!) | Factorial
While exponential growth doubles the effort for each additional input element, factorial algorithms grow even faster. The aim of such code is to find all orderings (permutations). Algorithms of this class are so inefficient that they are only usable for very small amounts of data.
With exponential brute force, all subsets are searched for:
n = 3
Array = [1, 2, 3]
Subsets: 2^3 = 8
[], [1], [2], [3], [1, 2], [1, 3], [2, 3], [1, 2, 3]
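A sketch of how such a subset search could be implemented (the function name is illustrative); each additional element doubles the number of subsets, hence 2^n:
function generateSubsets(arr) {
  let subsets = [[]]; // start with the empty subset
  for (const item of arr) {
    // Each element doubles the number of subsets.
    subsets = subsets.concat(subsets.map((s) => [...s, item]));
  }
  return subsets; // 2^n subsets in total
}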
Factorial complexity, by contrast, is about the so-called permutations of objects:
n = 3
Array = [1, 2, 3]
Permutations: 3! = 6
[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]
function generatePermutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // Take each element as the head and permute the rest.
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const perm of generatePermutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}
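Applied to the example above:
generatePermutations([1, 2, 3]);
// [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]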
Factorial complexity multiplies the effort with each additional element and is less efficient than an exponential solution from n = 4 onwards.
n × (n−1) × … × 1
4! = 4 × 3 × 2 × 1 = 24
5! = 5 × 4 × 3 × 2 × 1 = 120
10! = 3'628'800 (≈ 3.6 million)
These permutations cannot be generated for an array with a thousand elements; the number 1000! has around 2'568 digits.
Growth Behavior
The following table illustrates how strongly the execution time of algorithms of different complexity classes is affected by growth of the input size n.
n | O(1) | O(log n) | O(n) | O(n log n) | O(n^2) | O(2^n) | O(n!) |
---|---|---|---|---|---|---|---|
10 | 1 | ~3.3 | 10 | ~33 | 100 | 1'024 | 3'628'800 |
100 | 1 | ~6.6 | 100 | ~660 | 10'000 | ~1.27×10^30 | ~9.3×10^157 |
1'000 | 1 | ~10 | 1'000 | ~10'000 | 1'000'000 | ~1.07×10^301 | Not computable |
10'000 | 1 | ~14 | 10'000 | ~140'000 | 100'000'000 | Not computable | Not computable |
Performance of Operations
Operations with Data Structures
Datatype | Avg Access | Avg Search | Avg Insertion | Avg Deletion | Worst Access | Worst Search | Worst Insertion | Worst Deletion |
---|---|---|---|---|---|---|---|---|
Array | βοΈ O(1) | π¨ O(n) | π¨ O(n) | π¨ O(n) | βοΈ O(1) | π¨ O(n) | π¨ O(n) | π¨ O(n) |
Stack | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) |
Queue | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) |
Singly-Linked List | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) |
Doubly-Linked List | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) | π¨ O(n) | π¨ O(n) | βοΈ O(1) | βοΈ O(1) |
Skip List | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π¨ O(n) | π¨ O(n) | π¨ O(n) | π¨ O(n) |
Hash Table | β¬οΈ N/A | βοΈ O(1) | βοΈ O(1) | βοΈ O(1) | β¬οΈ N/A | π¨ O(n) | π¨ O(n) | π¨ O(n) |
Binary Search Tree | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π¨ O(n) | π¨ O(n) | π¨ O(n) | π¨ O(n) |
Cartesian Tree | β¬οΈ N/A | π© O(log n) | π© O(log n) | π© O(log n) | β¬οΈ N/A | π¨ O(n) | π¨ O(n) | π¨ O(n) |
B-Tree | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) |
Red-Black Tree | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) |
Splay Tree | β¬οΈ N/A | π© O(log n) | π© O(log n) | π© O(log n) | β¬οΈ N/A | π© O(log n) | π© O(log n) | π© O(log n) |
AVL Tree | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) |
KD Tree | π© O(log n) | π© O(log n) | π© O(log n) | π© O(log n) | π¨ O(n) | π¨ O(n) | π¨ O(n) | π¨ O(n) |
βοΈ = Excellent
π© = Very good
π¨ = Medium
π§ = Weak
β¬οΈ = Not available / not defined
Source: https://www.bigocheatsheet.com/
Array Sorting
Algorithm | Best Case | Average Case | Worst Case |
---|---|---|---|
Quicksort | π§ Ξ©(n log n) | π§ O(n log n) | π₯ O(n^2) |
Mergesort | π§ Ξ©(n log n) | π§ O(n log n) | π§ O(n log n) |
Timsort | βοΈ Ξ©(n) | π§ O(n log n) | π§ O(n log n) |
Heapsort | π§ Ξ©(n log n) | π§ O(n log n) | π§ O(n log n) |
Bubble Sort | βοΈ Ξ©(n) | π₯ O(n^2) | π₯ O(n^2) |
Insertion Sort | βοΈ Ξ©(n) | π₯ O(n^2) | π₯ O(n^2) |
Selection Sort | π₯ Ξ©(n^2) | π₯ O(n^2) | π₯ O(n^2) |
Tree Sort | π§ Ξ©(n log n) | π§ O(n log n) | π₯ O(n^2) |
Shell Sort | 🟧 Ω(n log n) | 🟥 O(n (log n)^2) | 🟥 O(n (log n)^2) |
Bucket Sort | βοΈ Ξ©(n + k) | π© O(n + k) | π₯ O(n^2) |
Radix Sort | βοΈ Ξ©(nk) | π© O(nk) | π© O(nk) |
Counting Sort | βοΈ Ξ©(n + k) | π© O(n + k) | π© O(n + k) |
Cubesort | βοΈ Ξ©(n) | π§ O(n log n) | π§ O(n log n) |
βοΈ = Excellent
π© = Very good
π¨ = Medium
π§ = Weak
π₯ = Very poor
β¬οΈ = Not available / not defined
Source: https://www.bigocheatsheet.com/
Ξ© (Omega) describes the minimum effort.
Space Complexity
This documentation has described time complexity. In the field of computational complexity, however, there is another important way of analyzing the efficiency of algorithms: space complexity.
It describes how much additional memory an algorithm requires in relation to its input. The focus is on the variable part of the memory: not fixed variables, but the portion that grows dynamically with the input.
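A brief sketch for contrast (the function names are illustrative): the first function needs only a fixed amount of extra memory, i.e. O(1) space, while the second builds a new array that grows with the input, i.e. O(n) space.
function sumInPlace(arr) {
  let total = 0; // one extra variable, regardless of input size: O(1) space
  for (const x of arr) total += x;
  return total;
}
function doubled(arr) {
  return arr.map((x) => x * 2); // new array of length n: O(n) additional space
}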
Conclusion
Big O notation is a way of evaluating the efficiency of code and helps in selecting the appropriate algorithms for scalable solutions.