Lecture 1
Running Time
- Algorithms transform input data into output data
- Running time of an algorithm typically grows with input size
- Average case time is often difficult to determine
- We focus on the worst case running time
- easier to analyse
- crucial for many real-world applications
Best, Average and Worst Cases
Given a list of unsorted numbers, L, and a specific number, k, return True if k is in L, or False otherwise.
Consider the following algorithm to solve this problem:
def number_exists(L, k):
    for item in L:
        if item == k:
            return True
    return False
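For example, a minimal sketch using the number_exists function above (the list contents and the keys searched for are arbitrary illustrative values) shows how the input determines how much work the loop does:

```python
L = [7, 3, 9, 4, 1]

# Best case: k is the first element, so the loop stops after one comparison.
print(number_exists(L, 7))   # True

# Worst case: k is not in L, so every element is compared before returning False.
print(number_exists(L, 8))   # False
```

In the worst case the loop inspects all n elements of L, which is the kind of input that worst-case analysis focuses on.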
How Can We Analyse Algorithms?
- Experimental (empirical) studies
- Write program implementing the algorithm
- Run program with inputs of varying size and composition
- Use a method like time.time() to measure actual running time (see the timing sketch after this list)
- Plot the results
- Limitations:
- Need to implement the algorithm
- May be difficult, time consuming, …
- Time may differ based on the implementation
- Example: Finding the maximum element in a list
- Time may differ based on the hardware
- Results may not be indicative of the running time on other inputs not included in the experiment
- How do I know if my inputs are best, average, or worst case?
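As a concrete illustration, here is a minimal timing sketch for the linear search above. The use of time.time() comes from the notes; the chosen input sizes and the "search for a missing key" set-up are only one possible experiment, and the measured values will differ across machines and runs:

```python
import time

def number_exists(L, k):
    # The linear search from earlier in the notes.
    for item in L:
        if item == k:
            return True
    return False

# Time the worst case (k absent) for inputs of increasing size.
for n in [1_000, 10_000, 100_000, 1_000_000]:
    L = list(range(n))
    start = time.time()
    number_exists(L, -1)          # -1 is never in L, so the whole list is scanned
    elapsed = time.time() - start
    print(f"n = {n:>9}: {elapsed:.6f} seconds")
```

Plotting n against the measured times would (noisily) suggest linear growth, but as the limitations above note, the numbers depend on the implementation, the hardware, and the inputs chosen.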
- Theoretical analysis
- Use a high-level description of the algorithm
- instead of an implementation
- Characterise running time as a function of input size n
- Takes into account all possible inputs
- at least those that are “bad” (hence, worst-case)
- Evaluation is independent of the hardware and software environment!
Theoretical Analysis Steps:
- Express algorithm as pseudo-code
- Example: Find maximum element of an array
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax <- A[0]
    i <- 1
    while i < n do
        if A[i] > currentMax then
            currentMax <- A[i]
        i <- i + 1
    return currentMax
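A direct Python translation of the pseudo-code above, as a sketch (the function name array_max and the sample list A are illustrative choices, not part of the lecture):

```python
def array_max(A, n):
    """Return the maximum element of the first n entries of A."""
    current_max = A[0]
    i = 1
    while i < n:
        if A[i] > current_max:
            current_max = A[i]
        i += 1
    return current_max

A = [3, 8, 2, 10, 5]
print(array_max(A, len(A)))   # 10
```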
- Count primitive operations
- Assume word RAM model in this course.
- IMPORTANT:
- A word is a sequence of w bits (most of our computers have w = 64 bits)
- Basic arithmetic operations (+, -, *, //, %, and so on) take a single operation
- Bitwise operations (&, |, <<, >>, …) take a single operation
- Comparisons (>, <, ==, !=) take a single operation
- Accessing or writing a word in memory takes a single operation
- Most modern computers are byte addressable
- The need to be able to address every memory “cell” limits the memory size:
- w ≥ #bits required to represent the largest memory address
- Example:
- num = 10: 1 operation (assigning a value to a variable)
- num = A[10]: 2 operations (indexing into an array, then assigning the value to a variable)
- while i < 10: 1 operation per iteration, plus 1 for the final check that fails
- Loops: for a loop such as for i in range(n):
- the loop set-up and condition checks cost 1 + (n + 1) operations (in C-style terms, int i = 0 is executed once and the i < n conditional is evaluated n + 1 times)
- the counter increment (i++) is evaluated n times
- a loop body that costs X operations contributes n * X operations in total
- Describe the running time as a function of n (the input size)
- By inspecting the pseudo-code, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size
- Perform asymptotic analysis
- express the operation count in asymptotic (big-O) notation (see the counting sketch after this list)
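To tie the last two steps together, here is a sketch that instruments arrayMax with an explicit operation counter. The placement of the counter increments is one reasonable reading of the word-RAM rules above; the exact constants are a matter of counting convention and are not the point, only the linear growth is:

```python
def array_max_counted(A, n):
    ops = 0
    current_max = A[0]; ops += 2      # one array access + one assignment
    i = 1;              ops += 1      # one assignment
    while True:
        ops += 1                      # evaluate the loop condition i < n
        if not i < n:
            break
        ops += 2                      # array access A[i] + comparison
        if A[i] > current_max:
            current_max = A[i]; ops += 2   # array access + assignment (worst case)
        i = i + 1; ops += 2           # addition + assignment
    ops += 1                          # return
    return current_max, ops

for n in [10, 100, 1000]:
    A = list(range(n))                # increasing values force the worst case
    _, ops = array_max_counted(A, n)
    print(n, ops)                     # the count grows linearly in n, i.e. O(n)
```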
- Seven functions that often appear in algorithm analysis
| Function | Term |
|---|---|
| Constant | 1 |
| Logarithmic | log n |
| Linear | n |
| N-Log-N | n log n |
| Quadratic | n² |
| Cubic | n³ |
| Exponential | 2ⁿ |
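To get a feel for how differently these seven functions grow, the following small sketch evaluates each of them at a few values of n (the dictionary of lambdas and the sample sizes 8, 16, 32 are arbitrary choices kept small so the exponential row stays readable):

```python
import math

# The seven functions, written as Python lambdas taking the input size n.
functions = {
    "constant": lambda n: 1,
    "log n":    lambda n: round(math.log2(n), 1),
    "n":        lambda n: n,
    "n log n":  lambda n: round(n * math.log2(n), 1),
    "n^2":      lambda n: n ** 2,
    "n^3":      lambda n: n ** 3,
    "2^n":      lambda n: 2 ** n,
}

for name, f in functions.items():
    print(f"{name:>8}:", [f(n) for n in (8, 16, 32)])
```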
Big O-notation
- Big-O notation describes an upper bound on a function
- f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n). That is,
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀
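A standard worked instance of this definition:
- Example: 2n + 10 is O(n). Pick c = 3 and n₀ = 10; then 2n + 10 ≤ 3n whenever n ≥ 10.
- Counter-example: n² is not O(n). The inequality n² ≤ c·n would require n ≤ c for all n ≥ n₀, which no constant c can satisfy.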
Big-O and growth rate
- Big-O notation gives an upper bound on the growth rate of a function
- “f(n) is O(g(n))” means the growth rate of f(n) is no more than the growth rate of g(n)
- Big-O notation ranks functions according to their growth rate
Some big-O rules
Rule 1: If f(n) is a polynomial of degree d, then f(n) is O(nᵈ)
- We can drop lower-order terms
- We can drop constant factors (coefficients), e.g. 5n² is O(n²)
Rule 2: Use the smallest possible class of functions (the “tightest” possible bound)
- “2n is O(n)” instead of “2n is O(n²)”, even if the latter is still mathematically correct…
- Quiz gotcha: is “8n is O(n²)” true? (Yes, it is a valid upper bound, but it is not the tightest one.)
Rule 3: Use the simplest expression of the class
- “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
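- Worked example combining the rules: 3n³ + 20n² + 5 is O(n³); drop the lower-order terms 20n² and 5, then drop the constant factor 3.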
Operations:
| Type | Big-O |
|---|---|
| Constant | O(1) |
| Logarithmic | O(log n) |
| Linear | O(n) |
| Quadratic | O(n²) |
| Exponential | O(2ⁿ) |
| Factorial | O(n!) |
Big Omega notation
- Big-Omega notation describes a lower bound on a function
- f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
- That is,
Given functions f(n) and g(n), we say that f(n) is Ω(g(n)) if there are positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀
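- Example: 3n log n + 2n is Ω(n log n). Pick c = 1 and n₀ = 1; then 3n log n + 2n ≥ n log n for all n ≥ 1, since every term on the left is non-negative.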
Big-Theta Notation
- Big-Theta notation describes a tight bound on a function (if one exists)
- f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
- That is,
Given functions f(n) and g(n), we say that f(n) is Θ(g(n)) if there are positive constants c′, c″ and n₀ such that c′·g(n) ≤ f(n) ≤ c″·g(n) for all n ≥ n₀
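- Example: 3n log n + 2n is Θ(n log n). It is Ω(n log n) as in the previous example, and it is O(n log n) because 3n log n + 2n ≤ 5n log n whenever log n ≥ 1 (i.e. n ≥ 2 with base-2 logarithms), so c′ = 1, c″ = 5 and n₀ = 2 work.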