Big O notation describes how an algorithm's running time grows as the size of its input grows. Instead of measuring time in seconds, it counts how the number of operations increases with the input, which makes it possible to compare algorithms independently of hardware and choose the most efficient one.
O(1) – Constant Time
An O(1) algorithm takes the same amount of time no matter how big the input is. For example, opening a phone book directly to a given page number is O(1): you jump straight to that page, whether the book has 100 pages or 10,000.
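To make this concrete, here is a minimal Python sketch; the `pages` dictionary and the `page_for` helper are made up for illustration. A dictionary lookup is O(1) on average, just like jumping to a page number:

```python
# Hypothetical phone-book index mapping names to page numbers.
pages = {"Alice": 12, "Bob": 847, "Carol": 1203}

def page_for(name):
    # Average-case O(1) dictionary lookup: the hash of the key leads
    # (almost) directly to the stored value, regardless of dict size.
    return pages.get(name)

print(page_for("Bob"))  # 847
```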
O(n) – Linear Time
An O(n) algorithm's time grows in direct proportion to the input size. If you search for a name by reading every entry from start to finish, that's O(n). In a book with 100 entries, you might read all 100; in a book with 10,000 entries, all 10,000. The time increases at the same rate as the data.
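A minimal sketch of that start-to-finish scan, assuming a hypothetical `find_entry` helper over a plain list of names:

```python
def find_entry(entries, target):
    # Linear search: in the worst case every entry is examined once,
    # so the work grows in step with len(entries), i.e. O(n).
    for index, entry in enumerate(entries):
        if entry == target:
            return index
    return -1  # not found

print(find_entry(["Alice", "Bob", "Carol"], "Carol"))  # 2
```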
O(n²) – Quadratic Time
An O(n²) algorithm's time grows much faster because the number of operations is roughly the square of the input size. Imagine you have a shopping basket with n items, and you compare every item with every other item to find duplicates. For 10 items, that's about 100 comparisons; for 100 items, 10,000 comparisons. That's why O(n²) algorithms are slow for large inputs.
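Here is a minimal sketch of that pairwise comparison, with a made-up `find_duplicates` name and shopping list. Each item is compared with every item after it, which is about n × (n − 1) / 2 comparisons in total and still grows as O(n²):

```python
def find_duplicates(items):
    # Nested loops compare every item with every later item:
    # roughly n * n / 2 comparisons, which grows as O(n^2).
    duplicates = set()
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                duplicates.add(items[i])
    return duplicates

print(find_duplicates(["milk", "eggs", "milk", "bread", "eggs"]))
# {'milk', 'eggs'} (set order may vary)
```

For comparison, a single pass that records seen items in a set would find the same duplicates in O(n), which is exactly the kind of trade-off Big O makes visible.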
Why It Matters
Software engineers use Big O to select the best algorithm for a task. A fast algorithm can save seconds, minutes, or even hours when processing large datasets. Understanding O(1), O(n), and O(n²) gives you a solid foundation for writing efficient code and acing technical interviews.