A beginner’s guide to Big O notation

In Computer Science, we use Big O notation to describe the efficiency or complexity of an algorithm. Big O specifically describes the worst-case scenario, and it can be used to characterise either the execution time an algorithm requires or the space it uses (for example, in memory or on disk).
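To make that concrete, here is a minimal sketch (in Python, as an illustrative assumption rather than anything from the original article) contrasting a constant-time operation with a linear-time one. The function names are hypothetical examples chosen for clarity.

```python
def get_first_item(items):
    """O(1): always does one step, regardless of how many items there are."""
    return items[0]

def contains_value(items, target):
    """O(n): in the worst case (target absent or last), every item is checked."""
    for item in items:
        if item == target:
            return True
    return False

numbers = [4, 8, 15, 16, 23, 42]
print(get_first_item(numbers))          # one step, no matter the list size
print(contains_value(numbers, 42))      # worst case scans the whole list
```

Big O focuses on this worst case: `contains_value` might find its target on the first comparison, but we still call it O(n) because its running time grows linearly with the input in the worst case.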
This article only covers the very basics of Big O and logarithms. For a more in-depth explanation, take a look at their respective Wikipedia entries: Big O Notation, Logarithms.
