# Time Complexity and Big O notation

Big O notation represents the worst-case complexity of an algorithm, describing that complexity in algebraic terms as a function of the input size.

Big O describes how the performance of your algorithm will change as the input size increases, and so characterizes the runtime needed to execute the algorithm. However, it doesn’t indicate how quickly your algorithm runs in absolute terms.

Big O notation uses time and space complexity to gauge the effectiveness and performance of your algorithm.

## Time Complexity and Space Complexity

The time complexity describes how long it will take to run or to execute the code. The space complexity describes how much space or memory will be needed to run the method overall, in relation to the size of the input.

There are six main categories of complexity (in terms of time and space) in Big O:

- Constant time: O(1) - best
- Logarithmic time: O(log n) - good
- Linear time: O(n) - fair
- Quadratic time: O(n^2) - bad
- Exponential time: O(2^n) - worst
- Factorial time: O(n!) - worst

**Constant: O(1)- **Your algorithm has constant time complexity, written O(1), when its running time is independent of the input size n. This indicates that regardless of the size of the input, the execution time will always be the same.

Example:

```javascript
// O(1): these statements execute once, regardless of any input size.
let number = 2;
let y = 1 + 3;
if (y < 2) { }
```

**Linear time: O(n)- **An algorithm has linear time complexity when its execution time grows in direct proportion to the size of the input. That is, a function that iterates once over an input of size n has order O(n).
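For instance, a single loop that visits every element once runs in linear time. This sketch uses a hypothetical `sumArray` function as an illustration:

```javascript
// O(n): the loop body runs once per element, so the runtime
// grows in direct proportion to the array's length.
function sumArray(numbers) {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i]; // executed n times for n elements
  }
  return total;
}

console.log(sumArray([1, 2, 3, 4])); // 10
```

Doubling the array's length doubles the number of additions performed.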


**Logarithmic time: O(log n)-**An algorithm has logarithmic time complexity when the remaining input is cut down, typically halved, with each iteration or step.

This makes it the second-best complexity after constant time: instead of touching the complete input, each iteration works on only a fraction of what remains.

Binary search functions, which repeatedly halve a sorted array around the target value, are a fantastic example.
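A minimal iterative binary search might look like this (the name `binarySearch` is illustrative):

```javascript
// O(log n): every comparison discards half of the remaining
// range, so a sorted array of n elements needs at most
// about log2(n) iterations.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found it
    if (sorted[mid] < target) low = mid + 1; // search the right half
    else high = mid - 1;                     // search the left half
  }
  return -1; // not present
}

console.log(binarySearch([1, 3, 5, 8, 13], 8)); // 3
```

Note that the input must already be sorted; otherwise the halving step cannot discard a side safely.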

**Quadratic time: O(n^2)-**

Nested iteration, that is, a loop inside another loop, has quadratic time complexity, which is quite poor.

An array with n elements is the ideal scenario to illustrate this. The outer loop runs n times, and for each of those iterations the inner loop also runs n times, giving n^2 operations in total. If the array has ten entries, the inner body executes 100 times (10^2).
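The nested loops described above can be sketched as follows (the `countPairs` function is a hypothetical example):

```javascript
// O(n^2): for each of the n outer iterations, the inner loop
// also runs n times, giving n * n steps in total.
function countPairs(items) {
  let pairs = 0;
  for (let i = 0; i < items.length; i++) {
    for (let j = 0; j < items.length; j++) {
      pairs++; // runs n * n times
    }
  }
  return pairs;
}

console.log(countPairs([1, 2, 3])); // 9, i.e. 3^2
```

Doubling the input size quadruples the work, which is why quadratic algorithms become impractical for large inputs.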

**Exponential time: O(2^n)-**Each additional input unit of 1 results in a doubling of the number of operations that are carried out.

A nice illustration is the recursive Fibonacci sequence. Think about being given a number n and needing to locate the nth Fibonacci element.

Each number in the Fibonacci sequence, where 0 and 1 are the first two numbers, is equal to the sum of the two numbers before it (0, 1, 1, 2, 3, 5, 8, 13). The third number in the series is 1, the fourth is 2, the fifth is 3, and so on.
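A naive recursive implementation shows the exponential blow-up, since each call spawns two further calls (the name `fib` is illustrative):

```javascript
// O(2^n): each call branches into two recursive calls, so the
// total number of calls roughly doubles with every increment of n.
function fib(n) {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(7)); // 13
```

In practice, memoization or an iterative loop reduces this to linear time, but the plain recursion above is the textbook example of exponential growth.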
