Algorithms: General

Asymptotic analysis

Big O is an asymptotic upper bound.
f(n) = O(g(n)) if there exist constants c and n0 such that f(n) ≤ c*g(n) for all n > n0

Big omega is an asymptotic lower bound.
f(n) = Ω(g(n)) if there exist constants c and n0 such that c*g(n) ≤ f(n) for all n > n0

Big theta is both an upper and a lower bound (a tight bound).
f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n))
So, there exist constants c0, c1 and n0 such that
c0*g(n) ≤ f(n) ≤ c1*g(n) for all n > n0
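
A quick worked example: take f(n) = 3n^2 + 5n and g(n) = n^2. For every n ≥ 1,

3*n^2 ≤ 3n^2 + 5n ≤ 8*n^2

so the constants c0 = 3, c1 = 8 and n0 = 1 witness that 3n^2 + 5n = Θ(n^2).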

NP-problems

NP stands for non-deterministic polynomial. A problem is in NP if a proposed solution can be verified in polynomial time; equivalently, a non-deterministic machine that guesses candidate solutions and verifies them can solve it in polynomial time.

Every problem solvable in polynomial time can also be solved non-deterministically, so such problems form a subset of NP. Example: guess a permutation of the elements of an array and check in polynomial (O(n)) time whether it is sorted; repeat until a sorted permutation is found. So, P ⊆ NP.
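
A minimal Python sketch of this (deliberately inefficient) idea: is_sorted is the polynomial-time check of a candidate, and permutation_sort simulates the non-deterministic guessing by trying permutations one by one (both names are just illustrative):

from itertools import permutations

def is_sorted(seq):
    # Polynomial (O(n)) verification of a proposed solution.
    return all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))

def permutation_sort(items):
    # Deterministic stand-in for the non-deterministic "guess a permutation"
    # step: try candidates until one passes the polynomial check.
    for candidate in permutations(items):
        if is_sorted(candidate):
            return list(candidate)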

For other problems in NP only non-polynomial solutions are known (usually exponential or factorial), and it is not known whether a polynomial solution exists. Often one such problem can be reduced to another in polynomial time. So if we pick one NP problem (a base) to which a broad spectrum of other NP problems can be reduced, and someday find a polynomial solution to the base problem, then all the reduced problems are also solved in polynomial time. A problem in NP to which every problem in NP can be reduced in polynomial time is called NP-complete, and the phenomenon itself is termed NP-completeness.

Many natural NP problems without a known polynomial solution turn out to be NP-complete (they can be reduced to one another). Whether P = NP is neither proved nor disproved.

Examples of NP-complete problems: graph coloring, boolean satisfiability (SAT), and the traveling salesman problem (in its decision form). By contrast, finding a shortest path or a minimum spanning tree in a graph can be done in polynomial time, so those problems are in P.
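
For instance, a proposed coloring of a graph is easy to verify even though no polynomial algorithm is known for finding one. A minimal sketch, assuming the graph is given as a dict mapping each vertex to a list of its neighbours and the coloring as a dict mapping each vertex to a color:

def is_valid_coloring(graph, coloring):
    # Polynomial-time verification: one pass over all edges,
    # checking that no edge connects two vertices of the same color.
    return all(coloring[u] != coloring[v]
               for u in graph for v in graph[u])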

Approaches to algorithm design

Divide and conquer. This is recursion: the problem is divided until the smallest subproblem is degenerate (trivially solvable). Used for problems that split cleanly into independent subproblems. Example: merge sort.
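
A minimal Python sketch of merge sort along these lines: split until the subarray has length 0 or 1 (the degenerate case), then merge the sorted halves:

def merge_sort(a):
    # Degenerate problem: 0 or 1 elements are already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Divide: sort each half recursively.
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Conquer: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

For example, merge_sort([3, 1, 2]) returns [1, 2, 3].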

Dynamic programming. This is caching: results of subproblems are stored and reused in later steps as parts of the overall solution. Used for problems with overlapping subproblems. Example: computing Fibonacci numbers.
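
A minimal Python sketch, caching results with functools.lru_cache so that each Fibonacci number is computed only once (linear instead of exponential time):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems: fib(n-1) and fib(n-2) share most of their work,
    # so every value is computed once and then served from the cache.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)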

Greedy approach. Make the locally best choice at each step, in the hope that it leads to a globally good solution. For some problems the greedy choice is provably optimal; for many NP-hard problems greedy heuristics give approximate solutions. Example: Dijkstra's algorithm for shortest paths.
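
A sketch of Dijkstra's algorithm with a binary heap, assuming the graph is a dict of the form {vertex: [(neighbour, weight), ...]} with non-negative weights; the greedy step is always settling the unvisited vertex with the smallest known distance:

import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbour, weight), ...]}, weights must be non-negative.
    dist = {source: 0}
    heap = [(0, source)]            # (distance so far, vertex)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)  # greedy choice: closest unsettled vertex
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            new_dist = d + w
            if new_dist < dist.get(v, float('inf')):
                dist[v] = new_dist
                heapq.heappush(heap, (new_dist, v))
    return dist  # shortest distance from source to every reachable vertex

For example, dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a') returns {'a': 0, 'b': 1, 'c': 3}.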