Algorithms are a set of instructions that define how to solve a problem. They are useful not just in coding, but also in mathematics and computer science. This article is an introduction to algorithms by way of a few examples.
What is an Algorithm?
An algorithm is a step-by-step procedure for solving a given problem. The word itself derives from the name of al-Khwarizmi, a 9th-century Persian mathematician, and the modern, formal notion of an algorithm was developed in the 20th century by mathematicians such as Alan Turing and Alonzo Church, who laid the foundations of computer science.
An algorithm is also referred to as an effective method, systematic procedure, or mechanical process: a recipe that proceeds from beginning to end in a finite number of steps and always terminates at a stopping point.
In computer science, an algorithm is any effective method for solving a problem or accomplishing a task that has been described in precise detail and can be programmed on some kind of machine or computer system (for example, as software).
Time complexity
The time complexity of an algorithm is the amount of time it takes to complete. It’s usually expressed as O(n), where n represents the input size, or O(n^2), or some other expression that gives you an idea of how much longer it will take as your input grows. Time complexity can be used to compare algorithms and determine which one is more efficient in specific situations.
Time complexity is usually expressed using one of these notations:
- O(1) means constant time, which means that no matter what size your input is, the algorithm will always take the same amount of time
- O(n) means linear time – this means that if you double your inputs, it will take twice as long for the algorithm to complete
- O(n^2) means quadratic time – this means if you double your inputs, it will take four times longer for the algorithm to finish running
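The three growth rates above can be sketched with small Python functions (the function names here are illustrative, not standard library APIs):

```python
def first_item(items):
    # O(1): one step, no matter how long the list is
    return items[0]

def total(items):
    # O(n): one step per element, so doubling the input doubles the work
    s = 0
    for x in items:
        s += x
    return s

def count_pairs(items):
    # O(n^2): one step per pair of elements, so doubling the input
    # quadruples the work
    pairs = 0
    for a in items:
        for b in items:
            pairs += 1
    return pairs
```

For a list of 3 elements, `count_pairs` performs 9 inner steps; for 6 elements, 36 steps.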
Space complexity
The space complexity of an algorithm is the amount of additional storage required to run it. It’s typically expressed as a function of the input size, for example, O(1) or O(n). Space complexity could be measured in raw memory units (e.g., bytes), but those figures depend on the machine and the data types involved and do not transfer between environments. It is usually more useful to count how many extra values the algorithm must keep around as the input grows — that way, you can compare algorithms independently of any particular machine.
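A minimal sketch of the difference between constant and linear extra space, using two hypothetical helper functions:

```python
def running_sum_constant_space(items):
    # O(1) extra space: a single accumulator, regardless of input size
    s = 0
    for x in items:
        s += x
    return s

def prefix_sums(items):
    # O(n) extra space: builds a new list as large as the input
    out = []
    s = 0
    for x in items:
        s += x
        out.append(s)
    return out
```

Both functions do O(n) work, but only the second one's memory use grows with the input.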
Worst case analysis
Worst-case analysis is the most pessimistic analysis of an algorithm: it assumes the least favorable input, the one that maximizes the running time or space required. Worst-case analysis gives us an upper bound on how long a task will take; if you know this number, then you can plan for it as part of your overall schedule for a project.
It also provides insight into how an algorithm scales to large data sets. For example, suppose your project involves processing 10 GB of data over several hours: if you know an algorithm runs in O(n) time, where n is the size of the input, then doubling the data to 20 GB should roughly double the running time. If it runs in O(n^2) time instead, doubling the data would roughly quadruple the running time — a difference that matters enormously at this scale!
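Linear search is a classic illustration of worst-case analysis. This is a sketch; `linear_search` is a name chosen here, not a built-in:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent.

    Best case: target is the first element (1 comparison).
    Worst case: target is last or absent, so all n elements
    are examined — O(n) comparisons.
    """
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1
```

When planning for the worst case, you budget for the full n comparisons, even though many searches will finish sooner.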
Asymptotic notations
In this section, we’ll focus on two classes of asymptotic notations:
- O(N**2) – means that the algorithm’s runtime is proportional to N**2. For example, if an algorithm takes 1 second to process 1,000 elements and roughly 100 seconds to process 10,000 elements (10× the input, 100× the time), its behavior is consistent with O(N**2) time complexity.
- O(N**3) – means that the algorithm’s runtime is proportional to N**3. For example, if an algorithm takes 1 second to process 10 elements and roughly 1,000 seconds to process 100 elements (10× the input, 1,000× the time), its behavior is consistent with O(N**3) time complexity.
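Counting basic operations makes these growth rates concrete. A sketch (the function names are illustrative):

```python
def ops_quadratic(n):
    # Two nested loops over n items: n * n basic operations
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

def ops_cubic(n):
    # Three nested loops over n items: n * n * n basic operations
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count
```

Going from n = 10 to n = 100 multiplies the quadratic count by 100 but the cubic count by 1,000 — exactly the ratios described above.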
Analysis of Loops
You can find loops in many computer programs. A loop repeats a sequence of instructions until some condition is met; the condition is checked before each iteration of the loop body.
A loop invariant is a property that remains true throughout all iterations of the loop.
The body contains one or more statements that are executed repeatedly as long as the test condition evaluates to true. To terminate a while loop, the body must eventually change the program state so that the test condition becomes false.
The test condition determines whether an iteration should execute: it is an expression whose value decides whether execution continues with another cycle through the loop body or exits the loop (either because the condition becomes false or via a break statement).
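The ideas above — test condition, loop body, and loop invariant — fit together in a small example (a sketch, using factorial for concreteness):

```python
def factorial(n):
    # Loop invariant: at the top of each iteration, result == i!
    result = 1
    i = 0
    while i < n:        # test condition, checked before each iteration
        i += 1          # loop body changes state so the loop can end
        result *= i     # invariant restored: result == i!
    # The loop exits when the test condition is false: i == n,
    # and the invariant then tells us result == n!
    return result
```

The invariant holds before the first iteration (1 == 0!) and is preserved by each pass, which is what lets us conclude the result is correct when the loop terminates.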
Arrays
Arrays are a fundamental data structure that stores a collection of elements in a contiguous block of memory, and they are often used to hold the different objects you need to manage in your program. Arrays are also an important part of many algorithms, so it is good to know how they work and why they’re useful.
Arrays have a lot of properties that make them useful for storing data:
- Elements are stored contiguously in memory, one after another
- The order of elements matters
- Elements must all be the same type (homogeneous)
- Each element has a unique index or position number and can be accessed in constant time — but inserting or removing an element in the middle shifts every later element, adjusting each of their indexes by 1
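A short sketch of those properties using a Python list (Python's built-in array-like type):

```python
items = [10, 20, 30, 40]

# Order matters, and each element has a unique index:
first = items[0]        # constant-time access by index → 10

# Inserting at the front shifts every later element by one position,
# so each of their indexes grows by 1 — an O(n) operation.
items.insert(0, 5)
# items is now [5, 10, 20, 30, 40]; 10 moved from index 0 to index 1
```

This is why appending at the end of an array is cheap but inserting at the front is expensive for large arrays.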
Binary Search Trees
Binary search trees (BSTs) are a special case of a tree data structure. They provide efficient storage and retrieval of values, and can be used to implement other abstractions such as sets, maps, and priority queues.
- Storage: A binary search tree stores one value per node, so n items require O(n) space, the same as a linked list or array. What distinguishes a balanced BST is its shape: its height is O(log n), and the height bounds the cost of its operations.
- Searching efficiency: Every node’s key is larger than all keys in its left subtree and smaller than all keys in its right subtree, so each comparison lets a search discard half of the remaining tree. Finding a value in a balanced BST takes O(log n) time, faster than the O(n) linear scan required for an unsorted array or linked list.
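A minimal BST sketch showing the ordering property and the search that exploits it (class and function names are chosen for illustration):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Smaller keys go into the left subtree, larger into the right
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Each comparison discards one whole subtree, so a balanced
    # tree is searched in O(log n) comparisons
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
```

Note that these bounds assume the tree stays balanced; inserting already-sorted keys degenerates the tree into a list, and real libraries use self-balancing variants (e.g., red-black trees) to prevent that.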
Algorithms as Computer Programs
An algorithm is a sequence of instructions that takes input, performs some operation on it, and produces output. Algorithms are the building blocks of computer programs.
A computer program is just a bunch of algorithms working together to do something useful, like finding the shortest path between two points or encrypting emails so they can’t be read by anyone other than their intended recipient. Computers don’t understand plain English; they understand machine code (the 0s and 1s behind every digital thing you see). So before your code can run, it must be translated — by a compiler or interpreter — from a high-level language down toward machine instructions. You could write directly in assembly language, a low-level language in which each instruction represents a single tiny step of execution, but expressing an algorithm that way takes far more effort and care. Modern high-level languages such as Java or Python give you loops, conditionals, and rich data structures out of the box, leaving much more room for creativity when writing algorithms.
Conclusion
In this article, we have learned about the basics of algorithms. We have looked at how to analyze them, with an emphasis on time and space complexity. We have also studied some common building blocks such as loops, arrays, searching, and binary search trees.