
Design and Analysis of Algorithms Term Paper

  • An algorithm is any well-defined set of steps taken to complete a task. Even something as simple as making a cup of tea can be described as an algorithm.
  • As far as a computer algorithm is concerned, the focus is on the steps a computer takes to accomplish a certain task. The steps have to be defined thoroughly enough for the computer to run them.
  • One fact that should always be taken into account is that many tasks are surprisingly difficult for computers. A task may appear simple when done by hand, yet each of its steps must be spelled out for a computer systematically.
  • A good example of a task that seems simple but poses a real challenge is making a cup of tea: the steps include determining the amount of water required, the quantity of milk to add, and the right mix of sugar or other sweeteners, among others.


  • An algorithm is necessary to run such activities on any computing device. A more concrete example is how GPS works on mobile devices: one algorithm used extensively in GPS routing is the shortest-path algorithm.
  • Some of the characteristic features that can be used to distinguish an algorithm from a general set of steps include:
  • It must take input.
  • It should produce some output, such as a specific value or a YES/NO answer, among others.
  • It should exhibit definiteness, so that every instruction involved is unambiguous.
  • It should also possess finiteness, so that once the algorithm starts running, it comes to an end after a defined number of steps.
  • It should also be effective, which is ensured by making each instruction basic, understandable, and easy to execute.
  • Correctness is another defining feature: a correct algorithm halts with the right answer on every input, whereas an incorrect one may halt with a wrong result or fail to halt at all. There are also approximation algorithms for settings where exact solutions are not required.
  • It should always use as few resources as possible in terms of space or time.
  • Time is one of the main measures of an algorithm's efficiency. Another is the computer memory the algorithm consumes.
  • Other background factors also affect observed performance: the language used to implement the algorithm, the speed of the computing platform, and the skill of the programmer who implements it.
  • The time taken by an algorithm can be estimated by measuring the total time needed to execute the program successfully. Hardware matters as well: faster processors make the same program run faster, although they do not change the algorithm's growth rate.
  • The importance of analyzing algorithms is to ensure that there is a precise understanding of how long it should take to come up with a solution to a particular problem.
  • Various operations are involved when running algorithms, and several factors within those operations have a great impact on an algorithm's effectiveness.
  • The analysis should, therefore, be about understanding the constructs of the algorithm and its relative efficiency.
  • Raw speed should never be the sole determinant in choosing an algorithm; what matters first is the algorithm's ability to deliver the required solution. If one algorithm takes longer but produces the correct result, switching to a faster one that does not deliver that result gains nothing.
  • There are two categories of algorithms: repetitive and recursive. Repetitive algorithms use loops and other conditional statements, whereas recursive ones solve a problem by first breaking it into subproblems; the latter are often referred to as divide-and-conquer algorithms.
  • To analyze recursive techniques, which break a problem down into smaller, more straightforward pieces, one must look closely at the amount of work needed to produce those smaller, manageable pieces.
  • Analyzing a recursive algorithm therefore means accounting for the work done to generate the subproblems and then for combining the subproblems' individual solutions; a short sketch contrasting the two styles follows.
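  • As a minimal illustration (summing a list is an assumed example, not one from the text), the same task can be written in the repetitive style and in the recursive, divide-and-conquer style:

    def sum_iterative(values):
        """Repetitive style: a loop with an accumulator."""
        total = 0
        for v in values:
            total += v
        return total

    def sum_recursive(values):
        """Recursive (divide-and-conquer) style: split the list in half,
        solve each half, then combine the two partial sums."""
        if not values:                 # base case: nothing to add
            return 0
        if len(values) == 1:           # base case: a single element
            return values[0]
        mid = len(values) // 2
        return sum_recursive(values[:mid]) + sum_recursive(values[mid:])

    assert sum_iterative([3, 1, 4, 1, 5]) == sum_recursive([3, 1, 4, 1, 5]) == 14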
  • Analysis of algorithms is important since it reveals the time required to generate a solution successfully, the memory space used by the computing platform, the other resources consumed, and efficiency aspects in general.
  • There are three cases that should always be taken into account when analyzing algorithms: the best case, the worst case, and the average case. Together they make the analysis comprehensive enough to support firm conclusions.
  • The best case is the input on which the algorithm takes the shortest time.
  • The worst case gives a clear idea of the longest time an algorithm may take to come up with a solution.
  • The average case is often the hardest to determine, since it requires assumptions about the distribution of inputs.
  • As far as the growth of functions is concerned, the best and worst cases paint the most precise picture.
  • For a sort such as insertion sort, the best case occurs when the array is already sorted, whereas the worst case is evident when the array is reverse sorted.
  • Worst-case running time is the part to focus on, since it provides a guaranteed upper bound on the running time for any input.
  • One fact to consider is that worst cases occur often in practice, especially when the item being searched for is not present; the linear-search sketch below makes this concrete.
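  • A hedged sketch (linear search is an assumed example, not taken from the text) showing the best and worst cases side by side:

    def linear_search(values, target):
        """Scan left to right until the target is found.
        Best case: the target is the first element (one comparison).
        Worst case: the target is absent, so every element is examined."""
        for i, v in enumerate(values):
            if v == target:
                return i           # found: stop early
        return -1                  # not found: the worst case noted above

    print(linear_search([7, 2, 9, 4], 7))   # best case: found at index 0
    print(linear_search([7, 2, 9, 4], 5))   # worst case: -1 after checking all four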
  • The order of growth is established by looking at the highest-order term of the running-time formula, ignoring lower-order terms and constant coefficients.
  • On this basis, one algorithm is considered more efficient than another when its worst-case running time has a smaller order of growth.
  • Asymptotic notations aid in describing the behavior of functions in the limit.
  • Asymptotic notations also come in handy for describing the rate of growth of functions. They work by focusing only on what matters, abstracting away the lower-order terms and constant factors, and they can be used to compare the sizes of functions.
  • Recursion is a kind of reduction: if an instance is simple enough, solve it directly; otherwise, reduce it to simpler sub-instances of the same problem.
  • In algorithm analysis, recursion is expressed in terms of recurrences. When an algorithm calls itself, its overall running time can be described by a recurrence equation that defines the running time on an input in terms of the running time on smaller inputs.
  • Some of the techniques that can be used to solve recurrences include:
  • Substitution method, which encompasses three major steps: guessing the form of the solution, verifying it by induction, and solving for the constants. It is called substitution because the guessed solution is substituted for the function when applying the inductive hypothesis to smaller values. It is perceived to be the most powerful method, with the caveat that one has to guess the form of the answer before it can be applied.
  • Iteration method, which expands the recurrence step by step until a pattern emerges.
  • Recursion tree method, in which each node represents the cost of a single subproblem. The costs within each level of the tree are summed, and then the per-level sums are added up across all levels to obtain the total.
  • Master method. The master method is guided by recurrences of the form T(n) = aT(n/b) + f(n), where a >= 1 and b > 1 are constants and f(n) is an asymptotically positive function; a worked case follows.
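  • As a worked example (merge sort's recurrence is the standard textbook case for the second case of the master theorem):

    T(n) = 2T(n/2) + Θ(n)            (merge sort: a = 2, b = 2, f(n) = Θ(n))
    n^(log_b a) = n^(log_2 2) = n, which matches f(n) to within constant factors,
    so T(n) = Θ(n log n)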
  • Time complexity is one of the sensitive aspects that must be addressed carefully when analyzing algorithms.
  • One of the critical concepts is 'Big Oh' (O), which comes in handy for bounding the growth rate of an algorithm's running time from above by the growth rate of a reference function.
  • Big Omega (Ω) comes in when there is a need to state lower bounds on growth rates; it mirrors 'Big Oh', with the inequality reversed.
  • Big Theta (Θ), on the other hand, states both the upper and the lower bounds of a function's growth rate. The definitions below make the three notions precise.
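  • In standard form (these are the usual textbook definitions):

    f(n) = O(g(n))  iff there exist constants c > 0 and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0
    f(n) = Ω(g(n))  iff there exist constants c > 0 and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0
    f(n) = Θ(g(n))  iff f(n) = O(g(n)) and f(n) = Ω(g(n))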
  • To solve recurrences, tracing the recursion, for example with a recursion tree, can be employed to give a precise breakdown of how the recursion unfolds.
  • The divide-and-conquer approach works by applying three crucial steps: divide, conquer, and combine.
  • Dividing entails breaking the problem into sub-problems that are smaller and manageable in size. Conquering involves solving the sub-problems recursively, while combining merges the sub-problems' solutions into a solution for the original problem.
  • Merge-sort is one of the most popular divide-and-conquer algorithms. It is not only simple but also very efficient when it comes to the sorting of lists of numbers.
  • Dividing entails splitting the sequence into two halves and then sorting each of the subsequences.
  • Merging later recombines the sorted subsequences into a single, meaningful sorted list.
  • The dividing process only comes to an end once the subsequences have been broken down to single items.
  • The key operations take place at the combining stage, where the sorted lists are brought together into one single sorted list, as the sketch below shows.
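  • A minimal merge-sort sketch (the sample list is made up for illustration):

    def merge_sort(a):
        """Divide: split the list in half; conquer: sort each half
        recursively; combine: merge the two sorted halves."""
        if len(a) <= 1:                    # a single item is already sorted
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:        # take the smaller front element
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])            # one side may have leftovers
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]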
  • The quicksort technique is characterized by three properties: its worst-case running time, its expected running time, and the fact that it sorts in place.
  • Quicksort follows the three steps of the divide-and-conquer pattern. Dividing partitions the array around a pivot; conquering sorts the two subarrays by recursive calls to quicksort; combining comes last and is trivial, since the subarrays are sorted in place.
  • The running time of quicksort depends on how the subarrays are partitioned. If the partitions are balanced, quicksort runs fast; if they are unbalanced, it runs slowly.
  • The worst case of quicksort arises when the partitions are completely unbalanced, while the best case occurs when they are evenly balanced, as the sketch below notes.
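  • A minimal in-place quicksort sketch using the Lomuto partition scheme (one common choice; the text does not specify a scheme):

    def quicksort(a, lo=0, hi=None):
        """In-place quicksort: partition around a pivot, then sort the
        two subarrays recursively (divide, conquer; combining is trivial)."""
        if hi is None:
            hi = len(a) - 1
        if lo < hi:
            p = partition(a, lo, hi)
            quicksort(a, lo, p - 1)        # conquer the left subarray
            quicksort(a, p + 1, hi)        # conquer the right subarray
        return a

    def partition(a, lo, hi):
        """Lomuto partition: move elements <= pivot to the left of the
        pivot's final position. Balanced partitions give the fast case;
        completely unbalanced ones give the slow worst case."""
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    print(quicksort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]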
  • Heapsort brings together several ideas: it is an in-place algorithm, it has a good guaranteed running time, and it is built on a nearly complete binary tree.
  • A heap data structure mainly consists of an array structure that can be viewed as a binary tree.
  • Each of the binary tree's nodes corresponds to one element of the array. The tree is completely filled on all levels except possibly the lowest, which is filled from the left.
  • A binary heap can be a max-heap or a min-heap. In a max-heap, every node other than the root is no larger than its parent, so the largest element sits at the root; a min-heap is organized the opposite way, so that the smallest element is at the root.
  • Quicksort is an in-place algorithm, even though it needs a stack (of logarithmic size, with a careful implementation) to keep track of the recursion.
  • Slower algorithms such as SelectionSort and BubbleSort are also in-place sorting algorithms. BubbleSort, moreover, can be implemented as a stable algorithm without any significant modification.
  • MergeSort is a stable sorting algorithm, but it requires additional array storage, which means it is not an in-place algorithm.
  • Quicksort is viewed as one of the fastest algorithms in practice, while heapsort is built on a nice data structure usually referred to as a heap or a priority queue; a heapsort sketch based on a library heap follows.
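  • A minimal heapsort sketch using Python's heapq module, which maintains the min-heap property; note that, unlike the textbook in-place array version, this variant returns a new list:

    import heapq

    def heapsort(values):
        """Build a min-heap, then repeatedly pop the smallest element.
        The smallest element is always at the root (index 0)."""
        heap = list(values)
        heapq.heapify(heap)                # build a min-heap in linear time
        return [heapq.heappop(heap) for _ in range(len(heap))]

    print(heapsort([5, 2, 4, 7, 1, 3]))   # [1, 2, 3, 4, 5, 7]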
  • A decision-tree argument comes in handy when there is a need to prove lower bounds. It is a mathematical representation of a sorting algorithm in which each node of the tree represents a comparison made by the algorithm.
  • The greedy approach constructs a solution through a sequence of steps, each chosen as the best alternative among all the feasible choices available at that moment. Once a step is taken, it cannot be altered in any subsequent step.
  • For a step to be taken in the greedy approach, it must be: feasible, locally optimal, and unalterable. The activity-selection sketch below is a standard illustration.
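  • A hedged sketch of greedy activity selection (a classic greedy problem; the intervals are made up for illustration):

    def select_activities(intervals):
        """Greedy activity selection: repeatedly pick the compatible
        activity that finishes earliest. Each choice is feasible,
        locally optimal, and never revised later."""
        chosen, last_finish = [], float("-inf")
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_finish:       # feasible with what was already chosen
                chosen.append((start, finish))
                last_finish = finish       # the step is now unalterable
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
    # [(1, 4), (5, 7)]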
  • Dynamic programming is viewed as one of the most powerful design techniques for optimization problems. It is connected to divide-and-conquer in that the problem is divided into manageable subproblems that can be solved recursively; the difference is that the subproblems overlap, so their solutions are stored and reused.
  • With dynamic programming, sub-problems are solved once and their solutions stored in a table.
  • The steps involved in dynamic programming are:
  • Dividing the problem into smaller sub-problems.
  • Storing the solutions in a table. Each sub-problem's solution is stored in the table and later retrieved for reference whenever it is required.
  • Bottom-up computation. The technique first solves the smallest sub-instances, then builds solutions to larger ones, and finally obtains a solution to the original instance, as in the sketch below.
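  • A minimal bottom-up sketch (Fibonacci numbers are an assumed illustration, not drawn from the text):

    def fib(n):
        """Bottom-up dynamic programming: solve the smallest subproblems
        first, store each answer in a table, and reuse the stored values
        instead of recomputing them."""
        table = [0, 1]                     # solutions to the two smallest instances
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])  # reuse stored subproblems
        return table[n]

    print(fib(10))  # 55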
  • The matrix-chain problem comes in handy when there is a need to find an optimal parenthesization of a sequence of matrices to be multiplied, so that the number of scalar multiplications is minimized; a dynamic-programming sketch follows.
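  • A minimal sketch of the standard matrix-chain dynamic program (the dimension list in the demo is made up for illustration):

    import sys

    def matrix_chain_order(dims):
        """Minimum scalar multiplications to multiply a chain of matrices,
        where matrix i has dimensions dims[i] x dims[i+1].
        m[i][j] stores the best cost for the subchain i..j (the DP table)."""
        n = len(dims) - 1                  # number of matrices in the chain
        m = [[0] * n for _ in range(n)]
        for length in range(2, n + 1):     # subchain length, smallest first
            for i in range(n - length + 1):
                j = i + length - 1
                m[i][j] = sys.maxsize
                for k in range(i, j):      # try every split point
                    cost = m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                    m[i][j] = min(m[i][j], cost)
        return m[0][n - 1]

    print(matrix_chain_order([10, 30, 5, 60]))  # 4500: multiply as (A1 x A2) x A3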
  • Disjoint-set data structures are those data structures concerned with keeping track of elements partitioned into non-overlapping subsets.
  • Some of the useful set operations supported here include find, union and make-set.
  • Find determines which subset contains a particular element and returns an item from that set that serves as its representative; comparing the results of two find operations reveals whether two elements belong to the same subset.
  • Union is an operation that joins two subsets into a single, larger subset.
  • Make-set, on the other hand, constructs a new set containing only a given element.
  • Together, the three operations, find, union, and make-set, provide a practical way to reach solutions to partition-related problems.
  • To implement these operations efficiently, one has to choose a precise representation of the sets.
  • Applications of disjoint sets include partitioning sets and determining the connected components of a graph.
  • A disjoint-set structure maintains a collection of distinct dynamic sets, each identified by a member of the set normally referred to as its representative.
  • The disjoint-set operations are MAKE-SET(x), UNION(x, y), and FIND-SET(x).
  • MAKE-SET(x) creates a new set whose only member is x, under the assumption that x does not already belong to another set. UNION(x, y) combines the two sets containing x and y into one new set, and a new representative is selected. FIND-SET(x), on the other hand, returns the representative of the set containing x.
  • In the linked-list representation, each set has a head and a tail, and each node contains a value, a pointer to the next node, and a pointer back to the representative.
  • A simple UNION implementation appends x's list to the end of y's list and updates the representative pointers; a dictionary-based sketch of the three operations follows.
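  • A minimal sketch of the three operations, using a dictionary of parent pointers with path compression rather than the linked-list representation described above:

    class DisjointSet:
        """Each element points toward its set's representative."""

        def __init__(self):
            self.parent = {}

        def make_set(self, x):
            """MAKE-SET(x): create a new singleton set {x}."""
            self.parent[x] = x

        def find_set(self, x):
            """FIND-SET(x): return the representative of x's set,
            compressing the path along the way."""
            if self.parent[x] != x:
                self.parent[x] = self.find_set(self.parent[x])
            return self.parent[x]

        def union(self, x, y):
            """UNION(x, y): merge the two sets; one representative survives."""
            rx, ry = self.find_set(x), self.find_set(y)
            if rx != ry:
                self.parent[rx] = ry

    ds = DisjointSet()
    for v in "abc":
        ds.make_set(v)
    ds.union("a", "b")
    print(ds.find_set("a") == ds.find_set("b"))  # True: same subset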
  • Other critical aspects are spanning trees, which play a very crucial role in designing efficient algorithms.
  • Greedy methods come in handy for spanning trees: the tree is constructed edge by edge, and each chosen edge is the best available according to the optimization criterion in use, as in the Kruskal-style sketch below.
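  • A hedged sketch of Kruskal's algorithm (one standard greedy spanning-tree method; it assumes the DisjointSet class from the sketch above, and the edge list is made up for illustration):

    def kruskal(vertices, edges):
        """Greedy spanning tree: consider edges in order of increasing
        weight and keep an edge only if it joins two different
        components, tracked with the disjoint-set structure."""
        ds = DisjointSet()
        for v in vertices:
            ds.make_set(v)
        tree = []
        for w, u, v in sorted(edges):              # lightest edge first
            if ds.find_set(u) != ds.find_set(v):   # keeping it creates no cycle
                ds.union(u, v)
                tree.append((u, v, w))
        return tree

    edges = [(1, "a", "b"), (4, "a", "c"), (2, "b", "c"), (5, "c", "d")]
    print(kruskal("abcd", edges))  # [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 5)]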
  • The shortest-path problem plays a very crucial role in algorithms as a whole. It is concerned with finding the shortest paths from a source, settling them one by one, as in the sketch below.
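  • A minimal sketch of Dijkstra's algorithm, a standard shortest-path method for GPS-style routing (the text does not name a specific algorithm, and the adjacency list is made up for illustration):

    import heapq

    def dijkstra(graph, source):
        """Settle vertices one by one in order of their distance from the
        source, using a min-heap as a priority queue."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                       # stale entry; u already settled
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w            # found a shorter path to v
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)], "c": [("d", 3)]}
    print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}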