Apex Algorithm: Big O Notation

This article will cover:

  1. What is Big O Notation
  2. Why we need to understand this concept
  3. What is O(1)
  4. What is O(n)
  5. What is O(n^2)
  6. What is O(log n)

We’ve all heard of Big O. It’s something most of us learn in college and promptly forget. However, to crack the coding interviews at top tech companies, it is important to understand this concept.

What is Big O Notation?

Big O notation, sometimes also called “asymptotic analysis”, is used in computer science to measure the efficiency of algorithms. Sorting algorithms are just one small subset of the algorithms it is used to analyze.

In other words, it describes how the complexity of an algorithm grows as its input increases, or how much more “work” an algorithm will do as we give it bigger and bigger inputs.

Sometimes multiple solutions give the same result, but we need to go with the most optimal code. “Best” usually means fastest. This is where Big O comes into play.

Big O notation ranks an algorithm’s efficiency.

It does this with regard to “O” and “n”, where

  • O refers to the order of the function, or its growth rate, and
  • n is the size of the input (for example, the length of a list to be processed).

What is n?

Time complexity is the computational complexity that describes the amount of time it takes to run an algorithm, measured as a function of n, the size of the input.

Confused by the definition?

Let’s say we have a list of five letters:

[a,b,c,d,e]

Since there are five of them, n here would be equal to 5.
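
A quick sketch in Apex makes this concrete (the list and variable names here are just illustrative):

List<String> letters = new List<String>{'a', 'b', 'c', 'd', 'e'};
Integer n = letters.size();
System.debug(n); // 5, and this is the n that Big O talks about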

O(1)

O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

If your program wants to remove one letter from your list like this:

[a,b,c,d,e] => [a,b,c,d]

Its time complexity is simply O(1), because it doesn’t matter how many letters are in the list; it will always take just one operation.
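
As a rough sketch in Apex (assuming the letters are held in a List<String>), removing the last element is a single operation no matter how long the list is:

List<String> letters = new List<String>{'a', 'b', 'c', 'd', 'e'};
// One operation regardless of the list's size => O(1)
letters.remove(letters.size() - 1);
System.debug(letters); // (a, b, c, d)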

Let’s take a real-life example:

Imagine that you are moving out of your house and packing up all of your belongings. It takes you exactly one day to pack everything, and then that particular task is considered complete. It doesn’t matter whether you packed one box or a hundred boxes that day; you completed the packing in a consistent amount of time.

So, as per our example above, we don’t care about O(1), O(2), etc. We round it all down to O(1), which is to say that our operation is a flat line in terms of scalability: it will take the same amount of time no matter how large the input is.

Code example

public class Add {

    // One addition, no matter how large the inputs are => O(1)
    public Integer addMethod(Integer a, Integer b) {
        Integer c = a + b;
        System.debug(c);
        return c;
    }
}

No matter how big the numbers we give to this function are, it will always do one single addition. We call this constant time because our complexity is always constant.

O(1) is the best possible time complexity!

O(N)

O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.

If your program does something like duplicate all the letters like so:

[a,b,c,d,e] => [aa,bb,cc,dd,ee]

For each letter in the list, the letter is duplicated, and the program goes through the list one letter at a time.

You could say that its runtime is order n, written O(n), because the number of operations it has to do is proportional to how many letters are in your list.
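
Here is one possible Apex sketch of that duplication (the names are illustrative, not from any library):

List<String> letters = new List<String>{'a', 'b', 'c', 'd', 'e'};
List<String> doubled = new List<String>();
// One pass over the list: the work grows in step with letters.size() => O(n)
for (String letter : letters) {
    doubled.add(letter + letter);
}
System.debug(doubled); // (aa, bb, cc, dd, ee)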

Example 2

Integer n = 10;
Integer count = 0;
// The loop body runs n times => O(n)
for (Integer x = 0; x < n; x++) {
    count++;
}

If we run this code where the size of n is 10, then count would equal 10 because it looped 10 times.

O(n) is a decent time complexity and you often can’t avoid it.

O(N²)

If the code is O(n²), then we consider that a quadratic runtime. This is best shown with a nested for loop, as in the example below.

Now say you want your program to combine every member of the list with all of the others.

[a,b,c,d,e] => [abcde, bacde, cabde, dabce, eabcd]

Since for each item in the list you have to also go through the whole rest of the list, the number of operations your program has to do is the number of inputs (or n) times itself (n squared). If your list has two letters in it, your program will take 4 operations to run. If your list has four trillion letters, it may never finish running!
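
One way to sketch this in Apex (illustrative names again): for every letter we walk the whole list a second time, so the work grows with n times n:

List<String> letters = new List<String>{'a', 'b', 'c', 'd', 'e'};
List<String> combos = new List<String>();
// The outer loop runs n times, and for each pass the inner loop runs n times => n * n operations
for (String first : letters) {
    String combo = first;
    for (String other : letters) {
        if (other != first) {
            combo += other;
        }
    }
    combos.add(combo);
}
System.debug(combos); // (abcde, bacde, cabde, dabce, eabcd)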

Example 2

    for (Integer a = 1; a <= 100; a++) {
        for (Integer b = 1; b <= 100; b++) {
            System.debug(a * b);
        }
    }

In general, loops inside of loops, especially when nested more than once, can be catastrophic for performance. For example, if you have two loops nested together and they each count from 1 to 100, you will find that this code executes the debug statement 10,000 times.

O(n²) is a big no-no in computer science and should be avoided at all costs. This is especially true in Salesforce, because you could very well be handling large amounts of data in your org and quickly run into governor limits.

N could be the actual input, or the size of the input

Both of these methods have O(n) runtime, even though one takes an integer as its input and the other takes an array:

public class HiPrint {

    public static void sayHiNTimes(Integer n) {
        for (Integer i = 0; i < n; i++) {
            System.debug('hi');
        }
    }

    public static void printAllItems(Integer[] items) {
        for (Integer item : items) {
            System.debug(item);
        }
    }
}

Drop the constants

When you’re calculating the big O complexity of something, you just throw out the constants.

public class PrintItem {

    public static void printAllItemsTwice(Integer[] items) {
        // first loop
        for (Integer item : items) {
            System.debug(item);
        }

        // second loop
        for (Integer item : items) {
            System.debug(item);
        }
    }
}

This is O(2n), which we just call O(n).

Drop the less significant terms

public class PrintNum {

    public static void printAllNumbersThenAllPairSums(Integer[] numbers) {

        System.debug('these are the numbers:');
        for (Integer num : numbers) {
            System.debug(num);
        }

        System.debug('and these are their sums:');
        for (Integer firstNumber : numbers) {
            for (Integer secondNumber : numbers) {
                System.debug(firstNumber + secondNumber);
            }
        }
    }
}

Here our runtime is O(n + n²), which we just call O(n²).

We’re usually talking about the “worst case”

public class HayStack {

    public static Boolean contains(Integer[] haystack, Integer needle) {

        // does the haystack contain the needle?
        for (Integer n : haystack) {
            if (n == needle) {
                return true;
            }
        }

        return false;
    }
}

Here we might have 100 items in our haystack, but the first item might be the needle, in which case we would return in just 1 iteration of our loop.

In general we’d say this is O(n) runtime and the “worst case” part would be implied. But to be more specific we could say this is worst case O(n) and best case O(1) runtime.

Logarithmic time or O(log n)

The idea behind an O(log n) algorithm is that instead of scrolling through a structure one element at a time, you divide the structure in half over and over again and do a constant number of operations for each split. Search algorithms where the answer space keeps getting split in half are O(log n). An example of this is binary search, where you keep splitting an ordered array in half until you find the number.

Example

Given a person’s name, find the phone number by opening the phone book about halfway through the part of the book you haven’t searched yet, then checking whether the person’s name is at that point. Then repeat the process on the half of the book where the person’s name must lie. (This is a binary search for a person’s name.)
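
For reference, here is a minimal binary search sketch in Apex (assuming the input list is already sorted; the class and method names are just for illustration). Each pass halves the remaining search range, which is what gives the O(log n) behaviour:

public class BinarySearch {

    // Returns the index of target in a sorted list, or -1 if it is not found
    public static Integer find(List<Integer> sortedValues, Integer target) {
        Integer low = 0;
        Integer high = sortedValues.size() - 1;
        while (low <= high) {
            Integer mid = (low + high) / 2;  // split the remaining range in half
            if (sortedValues[mid] == target) {
                return mid;
            } else if (sortedValues[mid] < target) {
                low = mid + 1;               // discard the lower half
            } else {
                high = mid - 1;              // discard the upper half
            }
        }
        return -1;                           // not found
    }
}

Calling BinarySearch.find(new List<Integer>{1, 3, 5, 7, 9}, 7) returns 3 after only a couple of splits, rather than scanning the whole list.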

Conclusion 

Studying Big O notation helps you grasp the very important concept of efficiency in your code. When you work with huge data sets, you will have a good sense of where major slowdowns are likely to cause bottlenecks and where more attention should be paid to get the largest improvements. This is an important part of solving problems and writing great software.

Increasing your knowledge of concepts like Big O notation and other computer science topics will give you a leg up. You’ll be better equipped to demonstrate your potential.

I hope you enjoyed this article. For more articles like this, stay tuned!



 
