Programming Exercises

1. Implement the remaining operations defined in the UnorderedList ADT (append, index, pop, insert).
2. Implement a slice method for the UnorderedList class. It should take two parameters, start and stop, and return a copy of the list starting at the start position and going up to but not including the stop position.
3. Implement the remaining operations defined in the OrderedList ADT.
4. Implement a stack using linked lists.
5. Implement a queue using linked lists.
6. Implement a deque using linked lists.
7. Design and implement an experiment that will compare the performance of a Python list with a list implemented as a linked list.
8. Design and implement an experiment that will compare the performance of the Python list based stack and queue with the linked list implementation.
9. The linked list implementation given above is called a singly linked list because each node has a single reference to the next node in sequence. An alternative implementation is known as a doubly linked list. In this implementation, each node has a reference to the next node (commonly called next) as well as a reference to the preceding node (commonly called back). The head reference also contains two references, one to the first node in the linked list and one to the last. Code this implementation in Python.
10. Create an implementation of a queue that would have an average performance of O(1) for enqueue and dequeue operations.
Four: Recursion

Objectives
The goals for this chapter are as follows:

- To understand that complex problems that may otherwise be difficult to solve may have a simple recursive solution.
- To learn how to formulate programs recursively.
- To understand and apply the three laws of recursion.
- To understand recursion as a form of iteration.
- To implement the recursive formulation of a problem.
- To understand how recursion is implemented by a computer system.

What Is Recursion?
Recursion is a method of solving problems that involves breaking a problem down into smaller and smaller subproblems until you get to a small enough problem that it can be solved trivially. Usually recursion involves a function calling itself. While it may not seem like much on the surface, recursion allows us to write elegant solutions to problems that may otherwise be very difficult to program.

Calculating the Sum of a List of Numbers
We will begin our investigation with a simple problem that you already know how to solve without using recursion. Suppose that you want to calculate the sum of a list of numbers such as [1, 3, 5, 7, 9]. An iterative function that computes the sum is shown below. The function uses an accumulator variable (the_sum) to compute a running total of all the numbers in the list by starting with 0 and adding each number in the list.

```python
def list_sum(num_list):
    the_sum = 0
    for i in num_list:
        the_sum = the_sum + i
    return the_sum

print(list_sum([1, 3, 5, 7, 9]))
```

Pretend for a minute that you do not have while loops or for loops. How would you compute the sum of a list of numbers? If you were a mathematician you might start by recalling that addition is a function that is defined for two parameters, a pair of numbers. To redefine the problem from adding a list to adding pairs of numbers, we could rewrite the list as a fully parenthesized expression. Such an expression looks like this:

((((1 + 3) + 5) + 7) + 9)

We can also parenthesize the expression the other way around:

(1 + (3 + (5 + (7 + 9))))

Notice that the innermost set of parentheses, (7 + 9), is a problem that we can solve without a loop or any special constructs. In fact, we can use the following sequence of simplifications to compute a final sum:

total = (1 + (3 + (5 + (7 + 9))))
total = (1 + (3 + (5 + 16)))
total = (1 + (3 + 21))
total = (1 + 24)
total = 25

How can we take this idea and turn it into a Python program? First, let's restate the sum problem in terms of Python lists. We might say that the sum of the list num_list is the sum of the first element of the list (num_list[0]) and the sum of the numbers in the rest of the list (num_list[1:]). To state it in functional form:

list_sum(num_list) = first(num_list) + list_sum(rest(num_list))

In this equation first(num_list) returns the first element of the list and rest(num_list) returns a list of everything but the first element. This is easily expressed in Python as the following:

```python
def list_sum(num_list):
    if len(num_list) == 1:
        return num_list[0]
    else:
        return num_list[0] + list_sum(num_list[1:])

print(list_sum([1, 3, 5, 7, 9]))
```

There are a few key ideas in this code to look at. First, we check to see if the list is one element long. This check is crucial and is our escape clause from the function. The sum of a list of length 1 is trivial; it is just the number in the list. Second, our function calls itself! This is the reason that we call the list_sum algorithm recursive. A recursive function is a function that calls itself.

Figure: a series of recursive calls adding a list of numbers.

The figure above shows the series of recursive calls that are needed to sum the list [1, 3, 5, 7, 9]. You should think of this series of calls as a series of simplifications. Each time we make a recursive call we are solving a smaller problem, until we reach the point where the problem cannot get any smaller. When we reach the point where the problem is as simple as it can get, we begin to piece together the solutions of each of the small problems until the initial problem is solved. The figure below shows the additions that are performed as list_sum works its way backward through the series of calls. When list_sum returns from the topmost problem, we have the solution to the whole problem.

The Three Laws of Recursion
Like the robots of Asimov, all recursive algorithms must obey three important laws:

1. A recursive algorithm must have a base case.
2. A recursive algorithm must change its state and move toward the base case.
3. A recursive algorithm must call itself, recursively.

Let's look at each one of these laws in more detail and see how each was used in the list_sum algorithm. First, a base case is the condition that allows the algorithm to stop recursing. A base case is typically a problem that is small enough to solve directly. In the list_sum algorithm the base case is a list of length 1.

To obey the second law, we must arrange for a change of state that moves the algorithm toward the base case. A change of state means that some data that the algorithm is using is modified.
Figure: a series of recursive returns from adding a list of numbers.

Usually the data that represents our problem gets smaller in some way. In the list_sum algorithm our primary data structure is a list, so we must focus our state-changing efforts on the list. Since the base case is a list of length 1, a natural progression toward the base case is to shorten the list. This is exactly what happens when we call list_sum with a shorter list.

The final law is that the algorithm must call itself. This is the very definition of recursion. Recursion is a confusing concept to many beginning programmers. As a novice programmer, you have learned that functions are good because you can take a large problem and break it up into smaller problems. The smaller problems can be solved by writing a function to solve each problem. When we talk about recursion it may seem that we are talking ourselves in circles. We have a problem to solve with a function, but that function solves the problem by calling itself! But the logic is not circular at all; the logic of recursion is an elegant expression of solving a problem by breaking it down into smaller and easier problems. In the remainder of this chapter we will look at more examples of recursion. In each case we will focus on designing a solution to a problem by using the three laws of recursion.

Self Check
How many recursive calls are made when computing the sum of the list [1, 3, 5, 7, 9]?
Suppose you are going to write a recursive function to calculate the factorial of a number: fact(n) returns n * (n - 1) * (n - 2) * ..., where the factorial of zero is defined to be 1. What would be the most appropriate base case?

1. n == 0
2. n == 1
3. n >= 0
4. n <= 1

Converting an Integer to a String in Any Base
Suppose you want to convert an integer to a string in some base between binary and hexadecimal. For example, convert the integer 10 to its string representation in decimal as "10", or to its string representation in binary as "1010". While there are many algorithms to solve this problem, including the algorithm discussed in the stack section, the recursive formulation of the problem is very elegant.

Let's look at a concrete example using base 10 and the number 769. Suppose we have a sequence of characters corresponding to the first 10 digits, like conv_string = "0123456789". It is easy to convert a number less than 10 to its string equivalent by looking it up in the sequence. For example, if the number is 9, then the string is conv_string[9], or "9". If we can arrange to break up the number 769 into three single-digit numbers, then converting each of them to a string is simple. A number less than 10 sounds like a good base case.

Knowing what our base is suggests that the overall algorithm will involve three components:

1. Reduce the original number to a series of single-digit numbers.
2. Convert the single-digit number to a string using a lookup.
3. Concatenate the single-digit strings together to form the final result.

The next step is to figure out how to change state and make progress toward the base case. Since we are working with an integer, let's consider what mathematical operations might reduce a number. The most likely candidates are division and subtraction. While subtraction might work, it is unclear what we should subtract from what. Integer division with remainders gives us a clear direction. Let's look at what happens if we divide a number by the base we are trying to convert to.

Using integer division to divide 769 by 10, we get 76 with a remainder of 9. This gives us two good results. First, the remainder is a number less than our base that can be converted to a string immediately by lookup. Second, we get a number that is smaller than our original and moves us toward the base case of having a single number less than our base. Now our job is to convert 76 to its string representation. Again we will use integer division plus remainder to get results of 7 and 6 respectively. Finally, we have reduced the problem to converting 7, which we can do easily since it satisfies the base case condition of n < base, where base = 10. The series of operations we have just performed is illustrated in the figure below. Notice that the numbers we want to remember are in the remainder boxes along the right side of the diagram.
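The chain of divisions just described can be checked directly in a few lines of plain Python. This sketch simply repeats the divide-by-the-base steps for 769 and collects the remainders:

```python
n, base = 769, 10
remainders = []
while n >= base:
    n, r = divmod(n, base)  # quotient and remainder in one step
    remainders.append(r)
remainders.append(n)  # the final quotient is a single digit
print(remainders[::-1])  # [7, 6, 9], read top to bottom as "769"
```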
Figure: converting an integer to a string in base 10.

The code below shows the Python code that implements the algorithm outlined above for any base between 2 and 16.

```python
def to_str(n, base):
    convert_string = "0123456789abcdef"
    if n < base:
        return convert_string[n]
    else:
        return to_str(n // base, base) + convert_string[n % base]

print(to_str(1453, 16))
```

Notice that we first check for the base case, where n is less than the base we are converting to. When we detect the base case, we stop recursing and simply return the string from the convert_string sequence. Otherwise we satisfy both the second and third laws by making the recursive call and by reducing the problem size using division.

Let's trace the algorithm again; this time we will convert the number 10 to its base 2 string representation ("1010"). The figure below shows that we get the results we are looking for, but it looks like the digits are in the wrong order. The algorithm works correctly because we make the recursive call first, then we add the string representation of the remainder. If we reversed returning the convert_string lookup and returning the to_str call, the resulting string would be backward! But by delaying the concatenation operation until after the recursive call has returned, we get the result in the proper order. This should remind you of our discussion of stacks back in the previous chapter.

Self Check
Write a function that takes a string as a parameter and returns a new string that is the reverse of the old string.
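One possible solution sketch for this self-check (try it yourself before reading on). The base case is a string of length 0 or 1, and each call recurses on a shorter string, so all three laws are satisfied:

```python
def reverse(s):
    # Base case: a string of length 0 or 1 is its own reverse.
    if len(s) <= 1:
        return s
    # Reverse the rest of the string, then append the first character.
    return reverse(s[1:]) + s[0]

print(reverse("hello"))  # olleh
```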
Figure: converting an integer to a string in base 2.

Write a function that takes a string as a parameter and returns True if the string is a palindrome, False otherwise. Remember that a string is a palindrome if it is spelled the same both forward and backward. For example, radar is a palindrome. For bonus points palindromes can also be phrases, but you need to remove the spaces and punctuation before checking. For example, madam i'm adam is a palindrome. Other fun palindromes include:

- kayak
- aibohphobia
- Live not on evil
- Reviled did I live, said I, as evil I did deliver
- Go hang a salami; I'm a lasagna hog.
- Able was I ere I saw Elba
- Kanakanak (a town in Alaska)
- Wassamassaw (a town in South Dakota)

Stack Frames: Implementing Recursion
Suppose that instead of concatenating the result of the recursive call to to_str with the string from convert_string, we modified our algorithm to push the strings onto a stack prior to making the recursive call. The code for this modified algorithm is shown below.

```python
# import the Stack class as previously defined

r_stack = Stack()

def to_str(n, base):
    convert_string = "0123456789abcdef"
    while n > 0:
        if n < base:
            r_stack.push(convert_string[n])
        else:
            r_stack.push(convert_string[n % base])
        n = n // base
    res = ""
    while not r_stack.is_empty():
        res = res + str(r_stack.pop())
    return res

print(to_str(1453, 16))
```

Each time we make a call to to_str, we push a character on the stack. Returning to the previous example we can see that after the fourth call to to_str the stack would look like the figure below. Notice that now we can simply pop the characters off the stack and concatenate them into the final result, "1010".

Figure: strings placed on the stack during conversion.

The previous example gives us some insight into how Python implements a recursive function call. When a function is called in Python, a stack frame is allocated to handle the local variables of the function. When the function returns, the return value is left on top of the stack for the calling function to access. The figure below illustrates the call stack after the return statement in the base case. Notice that the call to to_str(2 // 2, 2) leaves a return value of "1" on the stack. This return value is then used in place of the function call (to_str(1, 2)) in the expression "1" + convert_string[2 % 2], which will leave the string "10" on the top of the stack. In this way, the Python call stack takes the place of the stack we used explicitly earlier. In our list summing example, you can think of the return value on the stack taking the place of an accumulator variable.

The stack frames also provide a scope for the variables used by the function. Even though we are calling the same function over and over, each call creates a new scope for the variables that are local to the function.
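To convince yourself that the explicit-stack version and the recursive version really do the same work, here is a self-contained comparison. It uses a plain Python list as the stack rather than the book's Stack class, so it runs on its own:

```python
def to_str(n, base):
    # Recursive version from the previous section.
    convert_string = "0123456789abcdef"
    if n < base:
        return convert_string[n]
    return to_str(n // base, base) + convert_string[n % base]

def to_str_stack(n, base):
    # Same algorithm, but pushing digits onto an explicit stack.
    convert_string = "0123456789abcdef"
    stack = []
    while n > 0:
        stack.append(convert_string[n % base])
        n = n // base
    res = ""
    while stack:
        res = res + stack.pop()
    return res

print(to_str(1453, 16), to_str_stack(1453, 16))  # 5ad 5ad
print(to_str(10, 2), to_str_stack(10, 2))        # 1010 1010
```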
Figure: call stack generated from to_str(10, 2).

If you keep this idea of the stack in your head, you will find it much easier to write a proper recursive function.

Visualising Recursion
In the previous section we looked at some problems that were easy to solve using recursion; however, it can still be difficult to find a mental model or a way of visualizing what is happening in a recursive function. This can make recursion difficult for people to grasp. In this section we will look at a couple of examples of using recursion to draw some interesting pictures. As you watch these pictures take shape you will get some new insight into the recursive process that may be helpful in cementing your understanding of recursion.

The tool we will use for our illustrations is Python's turtle graphics module called turtle. The turtle module is standard with all versions of Python and is very easy to use. The metaphor is quite simple. You can create a turtle and the turtle can move forward, backward, turn left, turn right, etc. The turtle can have its tail up or down. When the turtle's tail is down and the turtle moves, it draws a line as it moves. To increase the artistic value of the turtle you can change the width of the tail as well as the color of the ink the tail is dipped in.

Here is a simple example to illustrate some turtle graphics basics. We will use the turtle module to draw a spiral recursively. The code below shows how it is done. After importing the turtle module we create a turtle. When the turtle is created it also creates a window for itself to draw in. Next we define the draw_spiral function. The base case for this simple function is when the length of the line we want to draw, as given by the line_len parameter, is reduced to zero or less. If the length of the line is longer than zero we instruct the turtle to go forward by line_len units and then turn right 90 degrees. The recursive step is when we call draw_spiral again with a reduced length. At the end of the code below you will notice that we call the function my_win.exitonclick(). This is a handy little method of the window that puts the turtle into a wait mode until you click inside the window, after which the program cleans up and exits.
```python
import turtle

my_turtle = turtle.Turtle()
my_win = turtle.Screen()

def draw_spiral(my_turtle, line_len):
    if line_len > 0:
        my_turtle.forward(line_len)
        my_turtle.right(90)
        draw_spiral(my_turtle, line_len - 5)

draw_spiral(my_turtle, 100)
my_win.exitonclick()
```

That is really about all the turtle graphics you need to know in order to make some pretty impressive drawings. For our next program we are going to draw a fractal tree. Fractals come from a branch of mathematics, and have much in common with recursion. The definition of a fractal is that when you look at it the fractal has the same basic shape no matter how much you magnify it. Some examples from nature are the coastlines of continents, snowflakes, mountains, and even trees or shrubs. The fractal nature of many of these natural phenomena makes it possible for programmers to generate very realistic looking scenery for computer generated movies. In our next example we will generate a fractal tree.

To understand how this is going to work it is helpful to think of how we might describe a tree using a fractal vocabulary. Remember that we said above that a fractal is something that looks the same at all different levels of magnification. If we translate this to trees and shrubs we might say that even a small twig has the same shape and characteristics as a whole tree. Using this idea we could say that a tree is a trunk, with a smaller tree going off to the right and another smaller tree going off to the left. If you think of this definition recursively it means that we will apply the recursive definition of a tree to both of the smaller left and right trees.

Let's translate this idea to some Python code. The code below shows how we can use our turtle to generate a fractal tree. Let's look at the code a bit more closely. You will see that we make two recursive calls. We make the first recursive call right after the turtle turns to the right by 20 degrees; this is the right tree mentioned above. Then the turtle makes another recursive call, but this time after turning left by 40 degrees. The reason the turtle must turn left by 40 degrees is that it needs to undo the original 20-degree turn to the right and then do an additional 20-degree turn to the left in order to draw the left tree. Also notice that each time we make a recursive call to tree we subtract some amount from the branch_len parameter; this is to make sure that the recursive trees get smaller and smaller. You should also recognize the initial if statement as a check for the base case of branch_len getting too small.

```python
def tree(branch_len, t):
    if branch_len > 5:
        t.forward(branch_len)
        t.right(20)
        tree(branch_len - 15, t)
        t.left(40)
        tree(branch_len - 15, t)
        t.right(20)
        t.backward(branch_len)
```

The complete program for this tree example is shown below. Before you run the code think about how you expect to see the tree take shape. Look at the recursive calls and think about how this tree will unfold. Will it be drawn symmetrically with the right and left halves of the tree taking shape simultaneously? Will it be drawn right side first, then left side?

```python
import turtle

def tree(branch_len, t):
    if branch_len > 5:
        t.forward(branch_len)
        t.right(20)
        tree(branch_len - 15, t)
        t.left(40)
        tree(branch_len - 15, t)
        t.right(20)
        t.backward(branch_len)

def main():
    t = turtle.Turtle()
    my_win = turtle.Screen()
    t.left(90)
    t.up()
    t.backward(100)
    t.down()
    t.color("green")
    tree(75, t)
    my_win.exitonclick()

main()
```

Notice how each branch point on the tree corresponds to a recursive call, and notice how the tree is drawn to the right all the way down to its shortest twig. You can see this in the first figure below. Now, notice how the program works its way back up the trunk until the entire right side of the tree is drawn. You can see the right half of the tree in the second figure below. Then the left side of the tree is drawn, but not by going as far out to the left as possible. Rather, once again the entire right side of the left tree is drawn until we finally make our way out to the smallest twig on the left.

This simple tree program is just a starting point for you, and you will notice that the tree does not look particularly realistic because nature is just not as symmetric as a computer program. The exercises at the end of the chapter will give you some ideas for how to explore some interesting options to make your tree look more realistic.
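You can also reason about how much work the tree function does without drawing anything. Each call that passes the base-case test draws one branch and makes two recursive calls with a shorter length, so a small helper (our own sketch, assuming the decrement of 15 and starting length of 75 used above) counts the branches:

```python
def count_branches(branch_len):
    # Mirrors tree(): one branch drawn, then two recursive calls
    # with branch_len - 15; below the cutoff, nothing is drawn.
    if branch_len > 5:
        return 1 + 2 * count_branches(branch_len - 15)
    return 0

print(count_branches(75))  # 31 branches for the starting length used above
```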
Figure: the beginning of a fractal tree.
Figure: the first half of the tree.
Self Check
Modify the recursive tree program using one or all of the following ideas:

- Modify the thickness of the branches so that as branch_len gets smaller, the line gets thinner.
- Modify the color of the branches so that as branch_len gets very short it is colored like a leaf.
- Modify the angle used in turning the turtle so that at each branch point the angle is selected at random in some range. For example, choose the angle between 15 and 45 degrees. Play around to see what looks good.
- Modify branch_len recursively so that instead of always subtracting the same amount you subtract a random amount in some range.

Sierpinski Triangle
Another fractal that exhibits the property of self-similarity is the Sierpinski triangle. An example is shown in the figure below. The Sierpinski triangle illustrates a three-way recursive algorithm. The procedure for drawing a Sierpinski triangle by hand is simple. Start with a single large triangle. Divide this large triangle into four new triangles by connecting the midpoint of each side. Ignoring the middle triangle that you just created, apply the same procedure to each of the three corner triangles. Each time you create a new set of triangles, you recursively apply this procedure to the three smaller corner triangles. You can continue to apply this procedure indefinitely if you have a sharp enough pencil. Before you continue reading, you may want to try drawing the Sierpinski triangle yourself, using the method described.

Since we can continue to apply the algorithm indefinitely, what is the base case? We will see that the base case is set arbitrarily as the number of times we want to divide the triangle into pieces. Sometimes we call this number the "degree" of the fractal. Each time we make a recursive call, we subtract 1 from the degree until we reach 0. When we reach a degree of 0, we stop making recursive calls. The code that generated the Sierpinski triangle is shown below.

```python
import turtle

def draw_triangle(points, color, my_turtle):
    my_turtle.fillcolor(color)
    my_turtle.up()
    my_turtle.goto(points[0][0], points[0][1])
    my_turtle.down()
    my_turtle.begin_fill()
    my_turtle.goto(points[1][0], points[1][1])
    my_turtle.goto(points[2][0], points[2][1])
    my_turtle.goto(points[0][0], points[0][1])
    my_turtle.end_fill()

def get_mid(p1, p2):
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
```
Figure: the Sierpinski triangle.

```python
def sierpinski(points, degree, my_turtle):
    color_map = ['blue', 'red', 'green', 'white', 'yellow',
                 'violet', 'orange']
    draw_triangle(points, color_map[degree], my_turtle)
    if degree > 0:
        sierpinski([points[0],
                    get_mid(points[0], points[1]),
                    get_mid(points[0], points[2])],
                   degree - 1, my_turtle)
        sierpinski([points[1],
                    get_mid(points[0], points[1]),
                    get_mid(points[1], points[2])],
                   degree - 1, my_turtle)
        sierpinski([points[2],
                    get_mid(points[2], points[1]),
                    get_mid(points[0], points[2])],
                   degree - 1, my_turtle)

def main():
    my_turtle = turtle.Turtle()
    my_win = turtle.Screen()
    my_points = [[-100, -50], [0, 100], [100, -50]]
    sierpinski(my_points, 3, my_turtle)
    my_win.exitonclick()

main()
```

Figure: building a Sierpinski triangle.

The first thing sierpinski does is draw the outer triangle. Next, there are three recursive calls, one for each of the new corner triangles we get when we connect the midpoints. Once again we make use of the standard turtle module that comes with Python. You can learn all the details of the methods available in the turtle module by using help('turtle') from the Python prompt.

Look at the code and think about the order in which the triangles will be drawn. While the exact order of the corners depends upon how the initial set is specified, let's assume that the corners are ordered lower left, top, lower right. Because of the way the sierpinski function calls itself, sierpinski works its way to the smallest allowed triangle in the lower-left corner, and then begins to fill out the rest of the triangles working back. Then it fills in the triangles in the top corner by working toward the smallest, topmost triangle. Finally, it fills in the lower-right corner, working its way toward the smallest triangle in the lower right.

Sometimes it is helpful to think of a recursive algorithm in terms of a diagram of function calls. The figure above shows that the recursive calls are always made going to the left. The active functions are outlined in black, and the inactive function calls are in gray. The farther you go toward the bottom of the figure, the smaller the triangles. The function finishes drawing one level at a time; once it is finished with the bottom left it moves to the bottom middle, and so on.

The sierpinski function relies heavily on the get_mid function. get_mid takes as arguments two endpoints and returns the point halfway between them. In addition, the code has a function that draws a filled triangle using the begin_fill and end_fill turtle methods. This means that each degree of the Sierpinski triangle is drawn in a different color.
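The same three-way structure tells us how many triangles get drawn in total. Each call draws one triangle and, while the degree is above zero, makes three recursive calls with degree - 1. The following counting helper is our own sketch, not part of the book's program:

```python
def count_triangles(degree):
    # One triangle per call; a degree-0 call draws a triangle
    # but makes no further recursive calls.
    if degree == 0:
        return 1
    return 1 + 3 * count_triangles(degree - 1)

print(count_triangles(3))  # 40 triangles for a degree-3 Sierpinski triangle
```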
Figure: an example arrangement of disks for the Tower of Hanoi.

Complex Recursive Problems
In the previous sections we looked at some problems that are relatively easy to solve and some graphically interesting problems that can help us gain a mental model of what is happening in a recursive algorithm. In this section we will look at some problems that are really difficult to solve using an iterative programming style but are very elegant and easy to solve using recursion. We will finish up by looking at a deceptive problem that at first looks like it has an elegant recursive solution but in fact does not.

The Towers of Hanoi
The Tower of Hanoi puzzle was invented by the French mathematician Edouard Lucas in 1883. He was inspired by a legend that tells of a Hindu temple where the puzzle was presented to young priests. At the beginning of time, the priests were given three poles and a stack of sixty-four gold disks, each disk a little smaller than the one beneath it. Their assignment was to transfer all sixty-four disks from one of the three poles to another, with two important constraints. They could only move one disk at a time, and they could never place a larger disk on top of a smaller one. The priests worked very efficiently, day and night, moving one disk every second. When they finished their work, the legend said, the temple would crumble into dust and the world would vanish.

Although the legend is interesting, you need not worry about the world ending any time soon. The number of moves required to correctly move a tower of sixty-four disks is 2^64 - 1 = 18,446,744,073,709,551,615. At a rate of one move per second, that is 584,942,417,355 years! Clearly there is more to this puzzle than meets the eye.

The figure above shows an example of a configuration of disks in the middle of a move from the first peg to the third. Notice that, as the rules specify, the disks on each peg are stacked so that smaller disks are always on top of the larger disks. If you have not tried to solve this puzzle before, you should try it now. You do not need fancy disks and poles; a pile of books or pieces of paper will work.

How do we go about solving this problem recursively? How would you go about solving this problem at all? What is our base case? Let's think about this problem from the bottom up. Suppose you have a tower of five disks, originally on peg one. If you already knew how to move a tower of four disks to peg two, you could then easily move the bottom disk to peg three, and then move the tower of four from peg two to peg three. But what if you do not know how to move a tower of height four? Suppose that you knew how to move a tower of height three to peg three; then it would be easy to move the fourth disk to peg two and move the three from peg three on top of it. But what if you do not know how to move a tower of three? How about moving a tower of two disks to peg two and then moving the third disk to peg three, and then moving the tower of height two on top of it? But what if you still do not know how to do this? Surely you would agree that moving a single disk to peg three is easy enough, trivial you might even say. This sounds like a base case in the making.

Here is a high-level outline of how to move a tower from the starting pole, to the goal pole, using an intermediate pole:

1. Move a tower of height - 1 to an intermediate pole, using the final pole.
2. Move the remaining disk to the final pole.
3. Move the tower of height - 1 from the intermediate pole to the final pole, using the original pole.

As long as we always obey the rule that the larger disks remain on the bottom of the stack, we can use the three steps above recursively, treating any larger disks as though they were not even there. The only thing missing from the outline above is the identification of a base case. The simplest Tower of Hanoi problem is a tower of one disk. In this case, we need move only a single disk to its final destination. A tower of one disk will be our base case. In addition, the steps outlined above move us toward the base case by reducing the height of the tower in steps 1 and 3.

```python
def move_tower(height, from_pole, to_pole, with_pole):
    if height >= 1:
        move_tower(height - 1, from_pole, with_pole, to_pole)
        move_disk(from_pole, to_pole)
        move_tower(height - 1, with_pole, to_pole, from_pole)
```

Notice that this code is almost identical to the English description. The key to the simplicity of the algorithm is that we make two different recursive calls. The first recursive call moves all but the bottom disk on the initial tower to an intermediate pole. The next line simply moves the bottom disk to its final resting place. Then the second recursive call moves the tower from the intermediate pole to the top of the largest disk. The base case is detected when the tower height is 0; in this case there is nothing to do, so the move_tower function simply returns. The important thing to remember about handling the base case this way is that simply returning from move_tower is what finally allows the move_disk function to be called.

The function move_disk, shown below, is very simple. All it does is print out that it is moving a disk from one pole to another. If you type in and run the move_tower program you can see that it gives you a very efficient solution to the puzzle.

```python
def move_disk(fp, tp):
    print("moving disk from", fp, "to", tp)
```

The following program provides the entire solution for three disks.
12,620 | def move_tower(heightfrom_poleto_polewith_pole)if height > move_tower(height from_polewith_poleto_polemove_disk(from_poleto_polemove_tower(height with_poleto_polefrom_poledef move_disk(fp,tp)print("moving disk from",fp,"to",tp move_tower( " "" "" "now that you have seen the code for both movetower and movediskyou may be wondering why we do not have data structure that explicitly keeps track of what disks are on what poles here is hintif you were going to explicitly keep track of the disksyou would probably use three stack objectsone for each pole the answer is that python provides the stacks that we need implicitly through the call stack exploring maze in this section we will look at problem that has relevance to the expanding world of roboticshow do you find your way out of mazeif you have roomba vacuum cleaner for your dorm room (don' all college students?you will wish that you could reprogram it using what you have learned in this section the problem we want to solve is to help our turtle find its way out of virtual maze the maze problem has roots as deep as the greek myth about theseus who was sent into maze to kill the minotaur theseus used ball of thread to help him find his way back out again once he had finished off the beast in our problem we will assume that our turtle is dropped down somewhere into the middle of the maze and must find its way out look at figure to get an idea of where we are going in this section to make it easier for us we will assume that our maze is divided up into "squares each square of the maze is either open or occupied by section of wall the turtle can only pass through the open squares of the maze if the turtle bumps into wall it must try different direction the turtle will require systematic procedure to find its way out of the maze here is the procedurefrom our starting position we will first try going north one square and then recursively try our procedure from there if we are not successful by trying northern path 
as the first step, then we will take a step to the south and recursively repeat our procedure. If south does not work, then we will try a step to the west as our first step and recursively apply our procedure. If north, south, and west have not been successful, then apply the procedure recursively from a position one step to our east. If none of these directions works, then there is no way to get out of the maze and we fail.

Now, that sounds pretty easy, but there are a couple of details to talk about first. Suppose we take our first recursive step by going north. By following our procedure, our next step would
Figure: The finished maze search program
also be to the north. But if the north is blocked by a wall, we must look at the next step of the procedure and try going to the south. Unfortunately, that step to the south brings us right back to our original starting place. If we apply the recursive procedure from there, we will just go back one step to the north and be in an infinite loop. So we must have a strategy to remember where we have been. In this case we will assume that we have a bag of bread crumbs we can drop along our way. If we take a step in a certain direction and find that there is a bread crumb already on that square, we know that we should immediately back up and try the next direction in our procedure. As we will see when we look at the code for this algorithm, backing up is as simple as returning from a recursive function call.

As we do for all recursive algorithms, let us review the base cases. Some of them you may already have guessed based on the description in the previous paragraph. In this algorithm, there are four base cases to consider:
- The turtle has run into a wall. Since the square is occupied by a wall, no further exploration can take place.
- The turtle has found a square that has already been explored. We do not want to continue exploring from this position or we will get into a loop.
- We have found an outside edge, not occupied by a wall. In other words, we have found an exit from the maze.
- We have explored a square unsuccessfully in all four directions.

For our program to work we will need to have a way to represent the maze. To make this even more interesting we are going to use the turtle module to draw and explore our maze so we can watch this algorithm in action. The Maze object will provide the following methods for us to use in writing our search algorithm:
- __init__ reads in a data file representing a maze, initializes the internal representation of the maze, and finds the starting position for the turtle.
- draw_maze draws the maze in a window on the screen.
- update_position updates the internal representation of the maze and changes the
position of the turtle in the window.
- is_exit checks to see if the current position is an exit from the maze.

The Maze class also overloads the index operator [] so that our algorithm can easily access the status of any particular square.

Let's examine the code for the search function, which we call search_from. The code is shown below. Notice that this function takes three parameters: a maze object, the starting row, and the starting column. This is important because, as a recursive function, the search logically starts again with each recursive call.

    def search_from(maze, start_row, start_column):
        maze.update_position(start_row, start_column)
        # Check for base cases:
        # 1. We have run into an obstacle, return False
        if maze[start_row][start_column] == OBSTACLE:
            return False
        # 2. We have found a square that has already been explored
        if maze[start_row][start_column] == TRIED:
            return False
        # 3. Success, an outside edge not occupied by an obstacle
        if maze.is_exit(start_row, start_column):
            maze.update_position(start_row, start_column, PART_OF_PATH)
            return True
        maze.update_position(start_row, start_column, TRIED)

        # Otherwise, use logical short circuiting to try each
        # direction in turn (if needed)
        found = search_from(maze, start_row - 1, start_column) or \
            search_from(maze, start_row + 1, start_column) or \
            search_from(maze, start_row, start_column - 1) or \
            search_from(maze, start_row, start_column + 1)
        if found:
            maze.update_position(start_row, start_column, PART_OF_PATH)
        else:
            maze.update_position(start_row, start_column, DEAD_END)
        return found

As you look through the algorithm you will see that the first thing the code does is call update_position. This is simply to help you visualize the algorithm so that you can watch exactly how the turtle explores its way through the maze. Next the algorithm checks for the first three of the four base cases: Has the turtle run into a wall? Has the turtle circled back to a square already explored? Has the turtle found an exit? If none of these conditions is true, then we continue the search recursively.

You will notice that in the recursive step there are four recursive calls to search_from. It is hard to predict how many of these recursive calls will be used since they are all connected by or statements. If the first call to search_from returns True, then none of the last three calls would be needed. You can interpret this as meaning that a step to (row - 1, column) (or north if you want to think geographically) is on the path leading out of the maze. If there is not a good path leading out of the maze to the north, then the next recursive call is tried, this one to the south. If south fails, then try west, and finally east. If all four recursive calls return False, then we have found a dead end. You should download or type in the whole program and experiment with it by changing the order of these calls. The
code for the maze class is shown below the __init__ method takes the name of file as its only parameter this file is text file that represents maze by using "+characters for wallsspaces for open squaresand the letter "sto indicate the starting position an example of maze data file could look as follows+++++++++++++++++++++++++++++++++++++++++++++++++++++++ recursion |
12,624 | ++++++++++++ +++++++++++++++++++++the internal representation of the maze is list of lists each row of the maze_list instance variable is also list this secondary list contains one character per square using the characters described above for the data file shown above the internal representation looks like the following['+','+','+','+','+','+','+','+','+','+','+']['+',',',',',',','+',',',']['+',','+',','+','+',','+',','+','+']['+',','+',',',',','+',','+','+']['+','+','+',','+','+',','+',',','+']['+',',',','+','+',',',',','+']['+','+','+','+','+','+','+','+','+',','+']['+',',',','+','+',',','+',','+']['+',','+','+',',','+',',',','+']['+',',',',',','+',','+','+','+']['+','+','+','+','+','+','+',','+','+','+']the update_position methodas shown below uses the same internal representation to see if the turtle has run into wall it also updates the internal representation with or "-to indicate that the turtle has visited particular square or if the square is part of dead end in additionthe update_position method uses two helper methodsmove_turtle and drop_bread_crumbto update the view on the screen finallythe is_exit method uses the current position of the turtle to test for an exit condition an exit condition is whenever the turtle has navigated to the edge of the mazeeither row zero or column zeroor the far right column or the bottom row class mazedef __init__(selfmaze_file_name)rows_in_maze columns_in_maze self maze_list [maze_file open(maze_file_name,' 'rows_in_maze for line in maze_filerow_list [col for ch in line[:- ]row_list append(chif ch =' 'self start_row rows_in_maze self start_col col col col rows_in_maze rows_in_maze self maze_list append(row_list exploring maze |
12,625 | columns_in_maze len(row_listself rows_in_maze rows_in_maze self columns_in_maze columns_in_maze self x_translate columns_in_maze self y_translate rows_in_maze self turtle(shape 'turtle'setup(width height setworldcoordinates((columns_in_maze (rows_in_maze (columns_in_maze (rows_in_maze def draw_maze(self)for in range(self rows_in_maze)for in range(self columns_in_maze)if self maze_list[ ][ =obstacleself draw_centered_box( self x_translatey self y_translate'tan'self color('black''blue'def draw_centered_box(selfxycolor)tracer( self up(self goto( , self color('black',colorself setheading( self down(self begin_fill(for in range( )self forward( self right( self end_fill(update(tracer( def move_turtle(selfxy)self up(self setheading(self towards( self x_translatey self y_translate)self goto( self x_translatey self y_translatedef drop_bread_crumb(selfcolor)self dot(colordef update_position(selfrowcolval=none)if valself maze_list[row][colval self move_turtle(colrowif val =part_of_path recursion |
12,626 | color 'greenelif val =obstaclecolor 'redelif val =triedcolor 'blackelif val =dead_endcolor 'redelsecolor none if colorself drop_bread_crumb(colordef is_exit(selfrowcol)return (row = or row =self rows_in_maze or col = or col =self columns_in_maze def __getitem__(selfidx)return self maze_list[idxthe complete program is shown below this program uses the data file maze txt shown below which stores the following maze++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++++++++++++++++++++++++note that it is much more simple example file in that the exit is very close to the starting position of the turtle completed maze program takes maze txt as input import turtle part_of_path 'otried obstacle '+dead_end '- exploring maze |
12,627 | class mazedef __init__(selfmaze_file_name)rows_in_maze columns_in_maze self maze_list [maze_file open(maze_file_name,' 'rows_in_maze for line in maze_filerow_list [col for ch in line[- ]row_list append(chif ch =' 'self start_row rows_in_maze self start_col col col col rows_in_maze rows_in_maze self maze_list append(row_listcolumns_in_maze len(row_listself rows_in_maze rows_in_maze self columns_in_maze columns_in_maze self x_translate columns_in_maze self y_translate rows_in_maze self turtle turtle(self shape('turtle'self wn turtle screen(self wn setworldcoordinates((columns_in_maze (rows_in_maze (columns_in_maze (rows_in_maze def draw_maze(self)self speed( for in range(self rows_in_maze)for in range(self columns_in_maze)if self maze_list[ ][ =obstacleself draw_centered_box( self x_translatey self y_translate'orange'self color('black'self fillcolor('blue'def draw_centered_box(selfxycolor)self up(self goto( self color(colorself fillcolor(colorself setheading( self down(self begin_fill(for in range( ) recursion |
12,628 | self forward( self right( self end_fill(def move_turtle(selfxy)self up(self setheading(self towards( self x_translatey self y_translate)self goto( self x_translatey self y_translatedef drop_bread_crumb(selfcolor)self dot( colordef update_position(selfrowcolval=none)if valself maze_list[row][colval self move_turtle(colrowif val =part_of_pathcolor 'greenelif val =obstaclecolor 'redelif val =triedcolor 'blackelif val =dead_endcolor 'redelsecolor none if colorself drop_bread_crumb(colordef is_exit(selfrowcol)return (row = or row =self rows_in_maze or col = or col =self columns_in_maze def __getitem__(self,idx)return self maze_list[idxdef search_from(mazestart_rowstart_column)try each of four directions from this point until we find way out base case return values we have run into an obstaclereturn false maze update_position(start_rowstart_columnif maze[start_row][start_column=obstacle return false we have found square that has already been explored exploring maze |
12,629 | if maze[start_row][start_column=tried or maze[start_row][start_column=dead_endreturn false we have found an outside edge not occupied by an obstacle if maze is_exit(start_row,start_column)maze update_position(start_rowstart_columnpart_of_pathreturn true maze update_position(start_rowstart_columntriedotherwiseuse logical short circuiting to try each direction in turn (if neededfound search_from(mazestart_row- start_columnor search_from(mazestart_row+ start_columnor search_from(mazestart_rowstart_column- or search_from(mazestart_rowstart_column+ if foundmaze update_position(start_rowstart_columnpart_of_pathelsemaze update_position(start_rowstart_columndead_endreturn found my_maze maze('maze txt'my_maze draw_maze(my_maze update_position(my_maze start_rowmy_maze start_colsearch_from(my_mazemy_maze start_rowmy_maze start_colself check modify the maze search program so that the calls to searchfrom are in different order watch the program run can you explain why the behavior is differentcan you predict what path the turtle will follow for given change in order summary in this we have looked at examples of several recursive algorithms these algorithms were chosen to expose you to several different problems where recursion is an effective problem-solving technique the key points to remember from this are as followsall recursive algorithms must have base case recursive algorithm must change its state and make progress toward the base case recursive algorithm must call itself (recursivelyrecursion can take the place of iteration in some cases recursive algorithms often map very naturally to formal expression of the problem you are trying to solve recursion |
12,630 | recursion is not always the answer sometimes recursive solution may be more computationally expensive than an alternative algorithm content key terms base case recursive call decrypt stack frame recursion discussion questions draw call stack for the tower of hanoi problem assume that you start with stack of three disks using the recursive rules as describeddraw sierpinski triangle using paper and pencil programming exercises write recursive function to compute the factorial of number write recursive function to reverse list modify the recursive tree program using one or all of the following ideasmodify the thickness of the branches so that as the branchlen gets smallerthe line gets thinner modify the color of the branches so that as the branchlen gets very short it is colored like leaf modify the angle used in turning the turtle so that at each branch point the angle is selected at random in some range for example choose the angle between and degrees play around to see what looks good modify the branchlen recursively so that instead of always subtracting the same amount you subtract random amount in some range if you implement all of the above ideas you will have very realistic looking tree find or invent an algorithm for drawing fractal mountain hintone approach to this uses triangles again write recursive function to compute the fibonacci sequence how does the performance of the recursive function compare to that of an iterative versionimplement solution to the tower of hanoi using three stacks to keep track of the disks using the turtle graphics modulewrite recursive program to display hilbert curve using the turtle graphics modulewrite recursive program to display koch snowflake key terms |
12,631 | write program to solve the following problemyou have two jugsa -gallon jug and -gallon jug neither of the jugs have markings on them there is pump that can be used to fill the jugs with water how can you get exactly two gallons of water in the -gallon juggeneralize the problem above so that the parameters to your solution include the sizes of each jug and the final amount of water to be left in the larger jug write program that solves the following problemthree missionaries and three cannibals come to river and find boat that holds two people everyone must get across the river to continue on the journey howeverif the cannibals ever outnumber the missionaries on either bankthe missionaries will be eaten find series of crossings that will get everyone safely to the other side of the river modify the tower of hanoi program using turtle graphics to animate the movement of the disks hintyou can make multiple turtles and have them shaped like rectangles pascal' triangle is number triangle with numbers arranged in staggered rows such that nthis equation is the equation for binomial coefficient you can build an !( - )pascal' triangle by adding the two numbers that are diagonally above number in the triangle an example of pascal' triangle is shown below write program that prints out pascal' triangle your program should accept parameter that tells how many rows of the triangle to print recursion |
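Before moving on to the next chapter, the recursive maze strategy developed above can be exercised without turtle graphics. The grid encoding and every name in this sketch (solve, WALL, and so on) are my own simplification for illustration, not the book's Maze class:

```python
# A turtle-free sketch of the recursive maze search from this chapter.
# '+' marks a wall, ' ' an open square, 'S' the starting square.
WALL, TRIED, PART_OF_PATH, DEAD_END = "+", ".", "O", "-"

def solve(grid, row, col):
    if grid[row][col] in (WALL, TRIED, DEAD_END):
        return False                      # base cases: blocked or already seen
    # base case: an outside edge that is not a wall is an exit
    if row in (0, len(grid) - 1) or col in (0, len(grid[0]) - 1):
        grid[row][col] = PART_OF_PATH
        return True
    grid[row][col] = TRIED                # drop a bread crumb
    # try north, south, west, east in turn (logical short-circuiting)
    found = (solve(grid, row - 1, col) or solve(grid, row + 1, col)
             or solve(grid, row, col - 1) or solve(grid, row, col + 1))
    grid[row][col] = PART_OF_PATH if found else DEAD_END
    return found

maze = [list(line) for line in [
    "+++++++",
    "+     +",
    "+ +++ +",
    "+ + + +",
    "+S+   +",
    "+++++ +",
]]
print(solve(maze, 4, 1))   # True: a path to the open edge square exists
```

Changing the order of the four recursive calls changes which path is explored first, which is exactly the experiment the self check above suggests.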
Five: Sorting and Searching

Objectives
- To be able to explain and implement sequential search and binary search.
- To be able to explain and implement selection sort, bubble sort, merge sort, quick sort, insertion sort, and shell sort.
- To understand the idea of hashing as a search technique.
- To introduce the map abstract data type.
- To implement the map abstract data type using hashing.

Searching

We will now turn our attention to some of the most common problems that arise in computing: those of searching and sorting. In this section we will study searching; we will return to sorting later in the chapter. Searching is the algorithmic process of finding a particular item in a collection of items. A search typically answers either True or False as to whether the item is present. On occasion it may be modified to return where the item is found. For our purposes here, we will simply concern ourselves with the question of membership.

In Python, there is a very easy way to ask whether an item is in a list of items: we use the in operator.

    >>> 15 in [3, 5, 2, 4, 1]
    False
    >>> 3 in [3, 5, 2, 4, 1]
    True

Even though this is easy to write, an underlying process must be carried out to answer the question. It turns out that there are many different ways to search for the item. What we are interested in here is how these algorithms work and how they compare to one another.
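As a quick illustration (the pairing of in with index here is my own, not part of the text), the membership question and the "where is it?" question can both be asked directly:

```python
data = [3, 5, 2, 4, 1]

print(15 in data)      # False: 15 is not a member of the list
print(3 in data)       # True: 3 is a member
print(data.index(3))   # 0: position of the first occurrence
# data.index(15) would raise ValueError, since 15 is absent
```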
Figure: The sequential search of a list of integers

The Sequential Search

When data items are stored in a collection such as a list, we say that they have a linear or sequential relationship. Each data item is stored in a position relative to the others. In Python lists, these relative positions are the index values of the individual items. Since these index values are ordered, it is possible for us to visit them in sequence. This process gives rise to our first searching technique, the sequential search.

The figure shows how this search works. Starting at the first item in the list, we simply move from item to item, following the underlying sequential ordering until we either find what we are looking for or run out of items. If we run out of items, we have discovered that the item we were searching for was not present.

The Python implementation for this algorithm is shown below. The function needs the list and the item we are looking for and returns a boolean value as to whether it is present. The boolean variable found is initialized to False and is assigned the value True if we discover the item in the list.

    def sequential_search(a_list, item):
        pos = 0
        found = False
        while pos < len(a_list) and not found:
            if a_list[pos] == item:
                found = True
            else:
                pos = pos + 1
        return found

    test_list = [1, 2, 32, 8, 17, 19, 42, 13, 0]
    print(sequential_search(test_list, 3))
    print(sequential_search(test_list, 13))

Analysis of Sequential Search

To analyze searching algorithms, we need to decide on a basic unit of computation. Recall that this is typically the common step that must be repeated in order to solve the problem. For searching, it makes sense to count the number of comparisons performed. Each comparison may or may not discover the item we are looking for. In addition, we make another assumption here: the list of items is not ordered in any way; the items have been placed randomly into the
list. In other words, the probability that the item we are looking for is in any particular position is exactly the same for each position of the list.

If the item is not in the list, the only way to know it is to compare it against every item present. If there are n items, then the sequential search requires n comparisons to discover that the item is not there. In the case where the item is in the list, the analysis is not so straightforward. There are actually three different scenarios that can occur. In the best case we will find the item in the first place we look, at the beginning of the list. We will need only one comparison. In the worst case, we will not discover the item until the very last comparison, the nth comparison. What about the average case? On average, we will find the item about halfway into the list; that is, we will compare against n/2 items. Recall, however, that as n gets large, the coefficients, no matter what they are, become insignificant in our approximation, so the complexity of the sequential search is O(n). The table summarizes these results.

    Case                   Best case    Worst case    Average case
    item is present        1            n             n/2
    item is not present    n            n             n
    Table: Comparisons used in a sequential search of an unordered list

We assumed earlier that the items in our collection had been randomly placed so that there is no relative order between the items. What would happen to the sequential search if the items were ordered in some way? Would we be able to gain any efficiency in our search technique?

Assume that the list of items was constructed so that the items were in ascending order, from low to high. If the item we are looking for is present in the list, the chance of it being in any one of the n positions is still the same as before. We will still have the same number of comparisons to find the item. However, if the item is not present, there is a slight advantage. The figure shows this process as the algorithm looks for the item 50. Notice that items are still compared in sequence until 54. At this point, however, we know something extra. Not only is 54 not the item we are looking for, but no other elements beyond 54 can work either, since the list is sorted. In this case, the algorithm does not have to continue looking through all of the items to report that the item was not found. It can stop immediately.

Figure: Sequential search of an ordered list of integers

    def ordered_sequential_search(a_list, item):
        pos = 0
        found = False
        stop = False
        while pos < len(a_list) and not found and not stop:
            if a_list[pos] == item:
                found = True
            else:
                if a_list[pos] > item:
                    stop = True
                else:
                    pos = pos + 1
        return found

    test_list = [0, 1, 2, 8, 13, 17, 19, 32, 42]
    print(ordered_sequential_search(test_list, 3))
    print(ordered_sequential_search(test_list, 13))

    Case                   Best case    Worst case    Average case
    item is present        1            n             n/2
    item is not present    1            n             n/2
    Table: Comparisons used in a sequential search of an ordered list

The table summarizes these results. Note that in the best case we might discover that the item is not in the list by looking at only one item. On average, we will know after looking through only n/2 items. However, this technique is still O(n). In summary, a sequential search is improved by ordering the list only in the case where we do not find the item.

Self Check
1. Suppose you are doing a sequential search of the list [15, 18, 2, 19, 18, 0, 8, 14, 19, 14]. How many comparisons would you need to do in order to find the key 18?
2. Suppose you are doing a sequential search of the ordered list [3, 5, 6, 8, 11, 12, 14, 15, 17, 18]. How many comparisons would you need to do in order to find the key 13?

The Binary Search

It is possible to take greater advantage of the ordered list if we are clever with our comparisons. In the sequential search, when we compare against the first item, there are at most n - 1 more items to look through if the first item is not what we are looking for. Instead of searching the
12,635 | case item is present item is not present best case worst case average case table comparisons used in sequential search of an ordered list if a_list[positemstop true elsepos pos+ return found test_list [ ,print(ordered_sequential_search(test_list )print(ordered_sequential_search(test_list )table summarizes these results note that in the best case we might discover that the item is not in the list by looking at only one item on averagewe will know after looking through only items howeverthis technique is still (nin summarya sequential search is improved by ordering the list only in the case where we do not find the item self check suppose you are doing sequential search of the list [ how many comparisons would you need to do in order to find the key suppose you are doing sequential search of the ordered list [ how many comparisons would you need to do in order to find the key the binary search it is possible to take greater advantage of the ordered list if we are clever with our comparisons in the sequential searchwhen we compare against the first itemthere are at most more items to look through if the first item is not what we are looking for instead of searching the sorting and searching |
12,636 | figure binary search of an ordered list of integers list in sequencea binary search will start by examining the middle item if that item is the one we are searching forwe are done if it is not the correct itemwe can use the ordered nature of the list to eliminate half of the remaining items if the item we are searching for is greater than the middle itemwe know that the entire lower half of the list as well as the middle item can be eliminated from further consideration the itemif it is in the listmust be in the upper half we can then repeat the process with the upper half start at the middle item and compare it against what we are looking for againwe either find it or split the list in halftherefore eliminating another large part of our possible search space figure shows how this algorithm can quickly find the value def binary_search(a_listitem)first last len(a_list found false while first <last and not foundmidpoint (first last/ if a_list[midpoint=itemfound true elseif item a_list[midpoint]last midpoint elsefirst midpoint return found test_list [ ,print(binary_search(test_list )print(binary_search(test_list )before we move on to the analysiswe should note that this algorithm is great example of divide and conquer strategy divide and conquer means that we divide the problem into smaller piecessolve the smaller pieces in some wayand then reassemble the whole problem to get the result when we perform binary search of listwe first check the middle item if the item we are searching for is less than the middle itemwe can simply perform binary search of the left half of the original list likewiseif the item is greaterwe can perform binary search of the right half either waythis is recursive call to the binary search function passing smaller list searching |
12,637 | comparisons ** approximate number of items left ** table tabular analysis for binary search def binary_search(a_listitem)if len(a_list= return false elsemidpoint len(a_list/ if a_list[midpoint=itemreturn true elseif item a_list[midpoint]return binary_search(a_list[:midpoint]itemelsereturn binary_search(a_list[midpoint :]itemtest_list [ ,print(binary_search(test_list )print(binary_search(test_list )analysis of binary search to analyze the binary search algorithmwe need to recall that each comparison eliminates about half of the remaining items from consideration what is the maximum number of comparisons this algorithm will require to check the entire listif we start with itemsabout items will be left after the first comparison after the second comparisonthere will be about then and so on how many times can we split the listtable helps us to see the answer when we split the list enough timeswe end up with list that has just one item either that is the item we are looking for or it is not either waywe are done the number of comparisons necessary to get to this point is where ni solving for gives us log the maximum number of comparisons is logarithmic with respect to the number of items in the list thereforethe binary search is (log none additional analysis issue needs to be addressed in the recursive solution shown abovethe recursive callbinary_search(a_list[:midpoint],itemuses the slice operator to create the left half of the list that is then passed to the next invocation (similarly for the right half as wellthe analysis that we did above assumed that the slice operator takes constant time howeverwe know that the slice operator in python is actually (kthis means that the binary search using slice will not perform in strict logarithmic time luckily this can be remedied by passing the list along with the starting and ending indices even though binary search is generally better than sequential searchit is important to note sorting and searching |
12,638 | that for small values of nthe additional cost of sorting is probably not worth it in factwe should always consider whether it is cost effective to take on the extra work of sorting to gain searching benefits if we can sort once and then search many timesthe cost of the sort is not so significant howeverfor large listssorting even once can be so expensive that simply performing sequential search from the start may be the best choice self check suppose you have the following sorted list [ and are using the recursive binary search algorithm which group of numbers correctly shows the sequence of comparisons used to find the key suppose you have the following sorted list [ and are using the recursive binary search algorithm which group of numbers correctly shows the sequence of comparisons used to search for the key hashing in previous sections we were able to make improvements in our search algorithms by taking advantage of information about where items are stored in the collection with respect to one another for exampleby knowing that list was orderedwe could search in logarithmic time using binary search in this section we will attempt to go one step further by building data structure that can be searched in ( time this concept is referred to as hashing in order to do thiswe will need to know even more about where the items might be when we go to look for them in the collection if every item is where it should bethen the search can use single comparison to discover the presence of an item we will seehoweverthat this is typically not the case hash table is collection of items which are stored in such way as to make it easy to find them later each position of the hash tableoften called slotcan hold an item and is named by an integer value starting at for examplewe will have slot named slot named slot named and so on initiallythe hash table contains no items so every slot is empty we can implement hash table by using list with each element initialized to the 
special python value none figure shows hash table of size in other wordsthere are slots in the tablenamed through searching |
12,639 | figure hash table with empty slots item hash value table simple hash function using remainders the mapping between an item and the slot where that item belongs in the hash table is called the hash function the hash function will take any item in the collection and return an integer in the range of slot namesbetween and assume that we have the set of integer items and our first hash functionsometimes referred to as the "remainder method,simply takes an item and divides it by the table sizereturning the remainder as its hash value ( (itemitem% table gives all of the hash values for our example items note that this remainder method (modulo arithmeticwill typically be present in some form in all hash functionssince the result must be in the range of slot names once the hash values have been computedwe can insert each item into the hash table at the designated position as shown in figure note that of the slots are now occupied this for this is referred to as the load factorand is commonly denoted by number_of_items table_size example now when we want to search for an itemwe simply use the hash function to compute the slot name for the item and then check the hash table to see if it is present this searching operation is ( )since constant amount of time is required to compute the hash value and then index the hash table at that location if everything is where it should bewe have found constant time search algorithm you can probably already see that this technique is going to work only if each item maps to unique location in the hash table for exampleif the item had been the next item in our collectionit would have hash value of ( % = since also had hash value of figure hash table with six items sorting and searching |
12,640 | we would have problem according to the hash functiontwo or more items would need to be in the same slot this is referred to as collision (it may also be called "clash"clearlycollisions create problem for the hashing technique we will discuss them in detail later hash functions given collection of itemsa hash function that maps each item into unique slot is referred to as perfect hash function if we know the items and the collection will never changethen it is possible to construct perfect hash function (refer to the exercises for more about perfect hash functionsunfortunatelygiven an arbitrary collection of itemsthere is no systematic way to construct perfect hash function luckilywe do not need the hash function to be perfect to still gain performance efficiency one way to always have perfect hash function is to increase the size of the hash table so that each possible value in the item range can be accommodated this guarantees that each item will have unique slot although this is practical for small numbers of itemsit is not feasible when the number of possible items is large for exampleif the items were nine-digit social security numbersthis method would require almost one billion slots if we only want to store data for class of studentswe will be wasting an enormous amount of memory our goal is to create hash function that minimizes the number of collisionsis easy to computeand evenly distributes the items in the hash table there are number of common ways to extend the simple remainder method we will consider few of them here the folding method for constructing hash functions begins by dividing the item into equalsize pieces (the last piece may not be of equal sizethese pieces are then added together to give the resulting hash value for exampleif our item was the phone number - we would take the digits and divide them into groups of ( after the addition we get if we assume our hash table has slotsthen we need to perform the extra step of dividing by and 
keeping the remainder in this case % is so the phone number hashes to slot some folding methods go one step further and reverse every other piece before the addition for the above examplewe get which gives % another numerical technique for constructing hash function is called the mid-square method we first square the itemand then extract some portion of the resulting digits for exampleif the item were we would first compute by extracting the middle two digits and performing the remainder stepwe get ( % table shows items under both the remainder method and the mid-square method you should verify that you understand how these values were computed we can also create hash functions for character-based items such as strings the word "catcan be thought of as sequence of ordinal values ord(' ' ord(' ' ord(' ' searching |
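The folding and mid-square methods described above can be sketched directly. This is a short illustration rather than the text's own code; the helper names fold and mid_square are ours, and the ten-digit phone number and table size of 11 follow the running examples.

```python
def fold(digit_string, table_size):
    """Folding method: split the digits into two-digit pieces,
    add the pieces, then take the remainder by the table size."""
    pieces = [int(digit_string[i:i + 2])
              for i in range(0, len(digit_string), 2)]
    return sum(pieces) % table_size


def mid_square(item, table_size):
    """Mid-square method: square the item, extract the middle
    two digits, then take the remainder by the table size."""
    squared = str(item * item)
    middle = len(squared) // 2
    return int(squared[middle - 1:middle + 1]) % table_size


# The pieces 43 + 65 + 55 + 46 + 01 sum to 210, and 210 % 11 = 1.
print(fold("4365554601", 11))   # 1

# 44 * 44 = 1936; the middle two digits are 93, and 93 % 11 = 5.
print(mid_square(44, 11))       # 5
```

Both functions are pure, so it is easy to check them against the hand-computed table values in the text.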
12,641 |
item | remainder | mid-square
  54 |    10     |     3
  26 |     4     |     7
  93 |     5     |     9
  17 |     6     |     8
  77 |     0     |     4
  31 |     9     |     6

Table: Comparison of remainder and mid-square methods

[Figure: Hashing a string using ordinal values]

We can then take these three ordinal values, add them up (99 + 97 + 116 = 312), and use the remainder method to get a hash value (312 % 11 = 4; see the figure above). The code below shows a function called hash that takes a string and a table size and returns the hash value in the range from 0 to table_size - 1.

def hash(a_string, table_size):
    sum = 0
    for pos in range(len(a_string)):
        sum = sum + ord(a_string[pos])

    return sum % table_size

It is interesting to note that when using this hash function, anagrams will always be given the same hash value. To remedy this, we could use the position of the character as a weight. The figure below shows one possible way to use the positional value as a weighting factor. The modification to the hash function is left as an exercise.

[Figure: Hashing a string using ordinal values with weighting]

You may be able to think of a number of additional ways to compute hash values for items in a collection. The important thing to remember is that the hash function has to be efficient so that it does not become the dominant part of the storage and search process. If the hash function is too complex, then it becomes more work to compute the slot name than it would be to simply do a basic sequential or binary search as described earlier. This would quickly defeat the purpose of hashing.

Collision Resolution

We now return to the problem of collisions. When two items hash to the same slot, we must have a systematic method for placing the second item in the hash table. This process is called collision resolution. As we stated earlier, if the hash function is perfect, collisions will never occur. However, since this is often not possible, collision resolution becomes a very important part of hashing.

[Figure: Collision resolution with linear probing]

One method for resolving collisions looks into the hash table and tries to find another open slot to hold the item that caused the collision. A simple way to do this is to start at the original hash value position and then move in a sequential manner through the slots until we encounter the first slot that is empty. Note that we may need to go back to the first slot (circularly) to cover the entire hash table. This collision resolution process is referred to as open addressing in that it tries to find the next open slot or address in the hash table. By systematically visiting each slot one at a time, we are performing an open addressing technique called linear probing.

The figure above shows an extended set of integer items under the simple remainder method hash function (54, 26, 93, 17, 77, 31, 44, 55, 20). The table above shows the hash values for the original items. When we attempt to place 44 into slot 0, a collision occurs. Under linear probing, we look sequentially, slot by slot, until we find an open position. In this case, we find slot 1. Again, 55 should go in slot 0 but must be placed in slot 2 since it is the next open position. The final value of 20 hashes to slot 9. Since slot 9 is full, we begin to do linear probing. We visit slots 10, 0, 1, and 2, and finally find an empty slot at position 3.

Once we have built a hash table using open addressing and linear probing, it is essential that we utilize the same methods to search for items. Assume we want to look up the item 93. When we compute the hash value, we get 5. Looking in slot 5 reveals 93, and we can return True. What if we are looking for 20? Now the hash value is 9, and slot 9 is currently holding 31. We cannot simply return False since we know that there could have been collisions. We are now forced to do a sequential search, starting at position 10, looking until either we find the item 20 or we find an empty slot.
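The insert-and-probe behaviour just described can be sketched with a plain Python list. This is an illustration of open addressing with linear probing, not the text's own implementation; the names lp_put and lp_search are our own.

```python
def lp_put(slots, item):
    """Place item using the remainder method with linear probing."""
    position = item % len(slots)
    while slots[position] is not None:      # collision: try the next slot
        position = (position + 1) % len(slots)
    slots[position] = item


def lp_search(slots, item):
    """Search under linear probing: stop at the item or an empty slot."""
    start = position = item % len(slots)
    while slots[position] is not None:
        if slots[position] == item:
            return True
        position = (position + 1) % len(slots)
        if position == start:               # wrapped all the way around
            return False
    return False


slots = [None] * 11
for item in [54, 26, 93, 17, 77, 31, 44, 55, 20]:
    lp_put(slots, item)

print(slots)                  # [77, 44, 55, 20, 26, 93, 17, None, None, 31, 54]
print(lp_search(slots, 20))   # True
print(lp_search(slots, 15))   # False (the probe stops at empty slot 7)
```

Note that the search must use the same probe sequence as the insert; stopping at the first empty slot is what makes an unsuccessful search terminate.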
12,643 | [Figure: A cluster of items for slot 0]
[Figure: Collision resolution using "plus 3"]

A disadvantage to linear probing is the tendency for clustering; items become clustered in the table. This means that if many collisions occur at the same hash value, a number of surrounding slots will be filled by the linear probing resolution. This will have an impact on other items that are being inserted, as we saw when we tried to add the item 20 above. A cluster of values hashing to 0 had to be skipped to finally find an open position. This cluster is shown in the figure above.

One way to deal with clustering is to extend the linear probing technique so that instead of looking sequentially for the next open slot, we skip slots, thereby more evenly distributing the items that have caused collisions. This will potentially reduce the clustering that occurs. The figure above shows the items when collision resolution is done with a "plus 3" probe. This means that once a collision occurs, we will look at every third slot until we find one that is empty.

The general name for this process of looking for another slot after a collision is rehashing. With simple linear probing, the rehash function is new_hash_value = rehash(old_hash_value) where rehash(pos) = (pos + 1) % size_of_table. The "plus 3" rehash can be defined as rehash(pos) = (pos + 3) % size_of_table. In general, rehash(pos) = (pos + skip) % size_of_table. It is important to note that the size of the "skip" must be such that all the slots in the table will eventually be visited. Otherwise, part of the table will be unused. To ensure this, it is often suggested that the table size be a prime number. This is the reason we have been using 11 in our examples.

A variation of the linear probing idea is called quadratic probing. Instead of using a constant "skip" value, we use a rehash function that increments the hash value by 1, 3, 5, 7, 9, and so on.

[Figure: Collision resolution with quadratic probing]

This means that if the first hash value is h, the successive values are h + 1, h + 4, h + 9, h + 16, and so on. In other words, quadratic probing uses a skip consisting of successive perfect squares. The figure above shows our example values after they are placed using this technique.

An alternative method for handling the collision problem is to allow each slot to hold a reference to a collection (or chain) of items. Chaining allows many items to exist at the same location in the hash table. When collisions happen, the item is still placed in the proper slot of the hash table. As more and more items hash to the same location, the difficulty of searching for the item in the collection increases. The figure shows the items as they are added to a hash table that uses chaining to resolve collisions.

When we want to search for an item, we use the hash function to generate the slot where it should reside. Since each slot holds a collection, we use a searching technique to decide whether the item is present. The advantage is that on the average there are likely to be many fewer items in each slot, so the search is perhaps more efficient. We will look at the analysis for hashing at the end of this section.

Self Check

1. In a hash table of the given size, which index positions would the following two keys map to?
12,645 | suppose you are given the following set of keys to insert into hash table that holds exactly values which of the following best demonstrates the contents of the has table after all the keys have been inserted using linear probing ____ __ __ ____ ____ implementing the map abstract data type one of the most useful python collections is the dictionary recall that dictionary is an associative data type where you can store key-data pairs the key is used to look up the associated data value we often refer to this idea as map the map abstract data type is defined as follows the structure is an unordered collection of associations between key and data value the keys in map are all unique so that there is one-to-one relationship between key and value the operations are given below map(create newempty map it returns an empty map collection put(key,valadd new key-value pair to the map if the key is already in the map then replace the old value with the new value get(keygiven keyreturn the value stored in the map or none otherwise del delete the key-value pair from the map using statement of the form del map[keylen(return the number of key-value pairs stored in the map in return true for statement of the form key in mapif the given key is in the mapfalse otherwise one of the great benefits of dictionary is the fact that given keywe can look up the associated data value very quickly in order to provide this fast look up capabilitywe need an implementation that supports an efficient search we could use list with sequential or binary search but it would be even better to use hash table as described above since looking up an item in hash table can approach ( performance below we use two lists to create hashtable class that implements the map abstract data type one listcalled slotswill hold the key items and parallel listcalled datawill hold the data values when we look up keythe corresponding position in the data list will hold the associated data value we will treat the 
key list as hash table using the ideas presented earlier note that the initial size for the hash table has been chosen to be although this is arbitraryit is important that the size be prime number so that the collision resolution algorithm can be as efficient as possible sorting and searching |
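The remark about using a prime table size can be checked directly: a fixed-skip rehash sequence visits every slot only when the skip and the table size share no common factor. The sketch below, with our own helper name probe_coverage, counts the distinct slots a probe sequence reaches.

```python
def probe_coverage(table_size, skip):
    """Count how many distinct slots a fixed-skip rehash sequence visits."""
    visited = set()
    position = 0
    for _ in range(table_size):
        visited.add(position)
        position = (position + skip) % table_size  # the rehash step
    return len(visited)


print(probe_coverage(10, 2))   # 5  -- half of a size-10 table is never probed
print(probe_coverage(11, 2))   # 11 -- a prime size covers every slot
print(probe_coverage(11, 3))   # 11
```

With a prime table size such as 11, every skip from 1 to 10 visits all slots, which is why the collision resolution can always find the remaining empty positions.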
12,646 | class HashTable:
    def __init__(self):
        self.size = 11
        self.slots = [None] * self.size
        self.data = [None] * self.size

hash_function implements the simple remainder method. The collision resolution technique is linear probing with a "plus 1" rehash function. The put function (see the listing below) assumes that there will eventually be an empty slot unless the key is already present in self.slots. It computes the original hash value and if that slot is not empty, iterates the rehash function until an empty slot occurs. If a nonempty slot already contains the key, the old data value is replaced with the new data value.

Listing: Functions to place items in the hash table

def put(self, key, data):
    hash_value = self.hash_function(key, len(self.slots))

    if self.slots[hash_value] == None:
        self.slots[hash_value] = key
        self.data[hash_value] = data
    else:
        if self.slots[hash_value] == key:
            self.data[hash_value] = data  # replace
        else:
            next_slot = self.rehash(hash_value, len(self.slots))
            while self.slots[next_slot] != None and \
                    self.slots[next_slot] != key:
                next_slot = self.rehash(next_slot, len(self.slots))

            if self.slots[next_slot] == None:
                self.slots[next_slot] = key
                self.data[next_slot] = data
            else:
                self.data[next_slot] = data  # replace

def hash_function(self, key, size):
    return key % size

def rehash(self, old_hash, size):
    return (old_hash + 1) % size

Likewise, the get function begins by computing the initial hash value. If the value is not in the initial slot, rehash is used to locate the next possible position. Notice that the comparison against start_slot guarantees that the search will terminate by checking to make sure that we have not returned to the initial slot. If that happens, we have exhausted all possible slots and the item must not be present.

The final methods of the HashTable class provide additional dictionary functionality. We overload the __getitem__ and __setitem__ methods to allow access using "[ ]". This means that once a HashTable has been created, the familiar index operator will be available.
We leave the remaining methods as exercises.

def get(self, key):
    start_slot = self.hash_function(key, len(self.slots))

    data = None
    stop = False
    found = False
    position = start_slot
    while self.slots[position] != None and not found and not stop:
        if self.slots[position] == key:
            found = True
            data = self.data[position]
        else:
            position = self.rehash(position, len(self.slots))
            if position == start_slot:
                stop = True
    return data

def __getitem__(self, key):
    return self.get(key)

def __setitem__(self, key, data):
    self.put(key, data)

The following session shows the HashTable class in action. First we will create a hash table and store some items with integer keys and string data values.

>>> h = HashTable()
>>> h[54] = "cat"
>>> h[26] = "dog"
>>> h[93] = "lion"
>>> h[17] = "tiger"
>>> h[77] = "bird"
>>> h[31] = "cow"
>>> h[44] = "goat"
>>> h[55] = "pig"
>>> h[20] = "chicken"
>>> h.slots
[77, 44, 55, 20, 26, 93, 17, None, None, 31, 54]
>>> h.data
['bird', 'goat', 'pig', 'chicken', 'dog', 'lion', 'tiger', None, None, 'cow', 'cat']

Next we will access and modify some items in the hash table. Note that the value for the key 20 is being replaced.
12,648 | >>> h[20]
'chicken'
>>> h[17]
'tiger'
>>> h[20] = 'duck'
>>> h[20]
'duck'
>>> h.data
['bird', 'goat', 'pig', 'duck', 'dog', 'lion', 'tiger', None, None, 'cow', 'cat']
>>> print(h[99])
None

Analysis of Hashing

We stated earlier that in the best case hashing would provide an O(1), constant time search technique. However, due to collisions, the number of comparisons is typically not so simple. Even though a complete analysis of hashing is beyond the scope of this text, we can state some well-known results that approximate the number of comparisons necessary to search for an item.

The most important piece of information we need to analyze the use of a hash table is the load factor, λ (the number of items divided by the table size). Conceptually, if λ is small, then there is a lower chance of collisions, meaning that items are more likely to be in the slots where they belong. If λ is large, meaning that the table is filling up, then there are more and more collisions. This means that collision resolution is more difficult, requiring more comparisons to find an empty slot. With chaining, increased collisions means an increased number of items on each chain.

As before, we will have a result for both a successful and an unsuccessful search. For a successful search using open addressing with linear probing, the average number of comparisons is approximately 1/2 (1 + 1/(1 - λ)) and an unsuccessful search gives 1/2 (1 + (1/(1 - λ))^2). If we are using chaining, the average number of comparisons is 1 + λ/2 for the successful case, and simply λ comparisons if the search is unsuccessful.

Sorting

Sorting is the process of placing elements from a collection in some kind of order. For example, a list of words could be sorted alphabetically or by length. A list of cities could be sorted by population, by area, or by zip code. We have already seen a number of algorithms that were able to benefit from having a sorted list (recall the final anagram example and the binary search).

There are many, many sorting algorithms that have been developed and analyzed. This suggests that sorting is an important area of study in computer science. Sorting a large number of items can take a substantial amount of computing resources. Like searching, the efficiency of a sorting algorithm is related to the number of items being processed. For small collections, a complex sorting method may be more trouble than it is worth; the overhead may be too high.
12,649 | On the other hand, for larger collections, we want to take advantage of as many improvements as possible. In this section we will discuss several sorting techniques and compare them with respect to their running time.

Before getting into specific algorithms, we should think about the operations that can be used to analyze a sorting process. First, it will be necessary to compare two values to see which is smaller (or larger). In order to sort a collection, it will be necessary to have some systematic way to compare values to see if they are out of order. The total number of comparisons will be the most common way to measure a sort procedure. Second, when values are not in the correct position with respect to one another, it may be necessary to exchange them. This exchange is a costly operation and the total number of exchanges will also be important for evaluating the overall efficiency of the algorithm.

Bubble Sort

The bubble sort makes multiple passes through a list. It compares adjacent items and exchanges those that are out of order. Each pass through the list places the next largest value in its proper place. In essence, each item "bubbles" up to the location where it belongs.

[Figure: Bubble sort, the first pass]

The figure above shows the first pass of a bubble sort. The shaded items are being compared to see if they are out of order. If there are n items in the list, then there are n - 1 pairs of items that need to be compared on the first pass. It is important to note that once the largest value in the list is part of a pair, it will continually be moved along until the pass is complete.

At the start of the second pass, the largest value is now in place. There are n - 1 items left to sort, meaning that there will be n - 2 pairs. Since each pass places the next largest value in place, the total number of passes necessary will be n - 1. After completing the n - 1 passes, the smallest item must be in the correct position with no further processing required.

The code below shows the complete bubble_sort function. It takes the list as a parameter, and modifies it by exchanging items as necessary.

def bubble_sort(a_list):
    for pass_num in range(len(a_list) - 1, 0, -1):
        for i in range(pass_num):
            if a_list[i] > a_list[i + 1]:
                temp = a_list[i]
                a_list[i] = a_list[i + 1]
                a_list[i + 1] = temp

a_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
bubble_sort(a_list)
print(a_list)

The exchange operation, sometimes called a "swap," is slightly different in Python than in most other programming languages. Typically, swapping two elements in a list requires a temporary storage location (an additional memory location). A code fragment such as

temp = a_list[i]
a_list[i] = a_list[j]
a_list[j] = temp

will exchange the ith and jth items in the list. Without the temporary storage, one of the values would be overwritten.

In Python, it is possible to perform simultaneous assignment. The statement a, b = b, a will result in two assignment statements being done at the same time (see the figure below). Using simultaneous assignment, the exchange operation can be done in one statement. The body of the if statement in the bubble_sort function performs the exchange of the ith and (i + 1)th items using the three-step procedure described earlier. Note that we could also have used the simultaneous assignment to swap the items.

To analyze the bubble sort, we should note that regardless of how the items are arranged in the initial list, n - 1 passes will be made to sort a list of size n. The table below shows the number of comparisons for each pass. The total number of comparisons is the sum of the first n - 1 integers. Recall that the sum of the first n integers is 1/2 n^2 + 1/2 n. The sum of the first n - 1 integers is 1/2 n^2 + 1/2 n - n, which is 1/2 n^2 - 1/2 n. This is still O(n^2) comparisons. In the best case, if the list is already ordered, no exchanges will be made. However, in the worst case, every comparison will cause an exchange. On average, we exchange half of the time.

A bubble sort is often considered the most inefficient sorting method since it must exchange items before the final location is known. These "wasted" exchange operations are very costly. However, because the bubble sort makes passes through the entire unsorted portion of the list,
12,651 | it has the capability to do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must be sorted. A bubble sort can be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop.

[Figure: Exchanging two values in Python]

pass  | comparisons
1     | n - 1
2     | n - 2
3     | n - 3
...   | ...
n - 1 | 1

Table: Comparisons for each pass of bubble sort

The code below shows this modification, which is often referred to as the short bubble.

def short_bubble_sort(a_list):
    exchanges = True
    pass_num = len(a_list) - 1
    while pass_num > 0 and exchanges:
        exchanges = False
        for i in range(pass_num):
            if a_list[i] > a_list[i + 1]:
                exchanges = True
                temp = a_list[i]
                a_list[i] = a_list[i + 1]
                a_list[i + 1] = temp
        pass_num = pass_num - 1

a_list = [20, 30, 40, 90, 50, 60, 70, 80, 100, 110]
short_bubble_sort(a_list)
print(a_list)

Self Check

1. Suppose you have the following list of numbers to sort: [ ]. Which list represents the partially sorted list after three complete passes of bubble sort?

Selection Sort

The selection sort improves on the bubble sort by making only one exchange for every pass through the list. In order to do this, a selection sort looks for the largest value as it makes a pass and, after completing the pass, places it in the proper location. As with a bubble sort, after the first pass, the largest item is in the correct place. After the second pass, the next largest is in place. This process continues and requires n - 1 passes to sort n items, since the final item must be in place after the (n - 1)st pass.

The figure below shows the entire sorting process. On each pass, the largest remaining item is selected and then placed in its proper location. The first pass places 93, the second pass places 77, the third places 55, and so on. The function is shown below.

def selection_sort(a_list):
    for fill_slot in range(len(a_list) - 1, 0, -1):
        pos_of_max = 0
        for location in range(1, fill_slot + 1):
            if a_list[location] > a_list[pos_of_max]:
                pos_of_max = location

        temp = a_list[fill_slot]
        a_list[fill_slot] = a_list[pos_of_max]
        a_list[pos_of_max] = temp

a_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
selection_sort(a_list)
print(a_list)

You may see that the selection sort makes the same number of comparisons as the bubble sort and is therefore also O(n^2). However, due to the reduction in the number of exchanges, the
12,653 | figure selection sort sorting and searching |
12,654 | selection sort typically executes faster in benchmark studies. In fact, for our list, the bubble sort makes 20 exchanges, while the selection sort makes only 8.

Self Check

1. Suppose you have the following list of numbers to sort: [ ]. Which list represents the partially sorted list after three complete passes of selection sort?

The Insertion Sort

The insertion sort, although still O(n^2), works in a slightly different way. It always maintains a sorted sublist in the lower positions of the list. Each new item is then "inserted" back into the previous sublist such that the sorted sublist is one item larger. The figure below shows the insertion sorting process. The shaded items represent the ordered sublists as the algorithm makes each pass.

We begin by assuming that a list with one item (position 0) is already sorted. On each pass, one for each item 1 through n - 1, the current item is checked against those in the already sorted sublist. As we look back into the already sorted sublist, we shift those items that are greater to the right. When we reach a smaller item or the end of the sublist, the current item can be inserted.

The figure below shows the fifth pass in detail. At this point in the algorithm, a sorted sublist of five items consisting of 17, 26, 54, 77, and 93 exists. We want to insert 31 back into the already sorted items. The first comparison against 93 causes 93 to be shifted to the right. 77 and 54 are also shifted. When the item 26 is encountered, the shifting process stops and 31 is placed in the open position. Now we have a sorted sublist of six items.

The implementation of insertion_sort shows that there are again n - 1 passes to sort n items. The iteration starts at position 1 and moves through position n - 1, as these are the items that need to be inserted back into the sorted sublists. The shift operation in the inner loop moves a value up one position in the list, making room behind it for the insertion. Remember that this is not a complete exchange as was performed in the previous algorithms.

The maximum number of comparisons for an insertion sort is the sum of the first n - 1 integers. Again, this is O(n^2). However, in the best case, only one comparison needs to be done on each pass. This would be the case for an already sorted list.

One note about shifting versus exchanging is also important. In general, a shift operation requires approximately a third of the processing work of an exchange since only one assignment is performed. In benchmark studies, insertion sort will show very good performance.
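The best-case and worst-case claims can be checked empirically by counting comparisons. The sketch below uses a minimal insertion sort of our own (instrumented only for counting; it mirrors the algorithm described above, not the text's exact listing).

```python
def insertion_sort_comparisons(a_list):
    """Insertion sort that returns the number of comparisons it makes."""
    comparisons = 0
    for index in range(1, len(a_list)):
        current_value = a_list[index]
        position = index
        while position > 0:
            comparisons += 1
            if a_list[position - 1] > current_value:
                a_list[position] = a_list[position - 1]  # shift right
                position -= 1
            else:
                break
        a_list[position] = current_value
    return comparisons


# Already sorted: one comparison per pass, n - 1 = 9 in total.
print(insertion_sort_comparisons(list(range(10))))         # 9

# Reversed: every pass shifts the whole sublist, 1 + 2 + ... + 9 = 45.
print(insertion_sort_comparisons(list(range(9, -1, -1))))  # 45
```

The two counts are exactly the n - 1 best case and the sum-of-the-first-(n - 1)-integers worst case stated above.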
12,655 | [Figure: Insertion sort]

def insertion_sort(a_list):
    for index in range(1, len(a_list)):
        current_value = a_list[index]
        position = index

        while position > 0 and a_list[position - 1] > current_value:
            a_list[position] = a_list[position - 1]
            position = position - 1

        a_list[position] = current_value

a_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
insertion_sort(a_list)
print(a_list)

Self Check

1. Suppose you have the following list of numbers to sort: [ ]. Which list represents the partially sorted list after three complete passes of insertion sort?
12,656 | figure insertion sortfifth pass of the sort [ [ [ [ shell sort the shell sortsometimes called the "diminishing increment sort,improves on the insertion sort by breaking the original list into number of smaller sublistseach of which is sorted using an insertion sort the unique way that these sublists are chosen is the key to the shell sort instead of breaking the list into sublists of contiguous itemsthe shell sort uses an increment isometimes called the gapto create sublist by choosing all items that are items apart this can be seen in figure this list has nine items if we use an increment of threethere are three sublistseach of which can be sorted by an insertion sort after completing these sortswe get the list shown in figure although this list is not completely sortedsomething very interesting has happened by sorting the sublistswe have moved the items closer to where they actually belong figure shows final insertion sort using an increment of onein other wordsa standard insertion sort note that by performing the earlier sublist sortswe have now reduced the total number of shifting operations necessary to put the list in its final order for this casewe need only four more shifts to complete the process we said earlier that the way in which the increments are chosen is the unique feature of the shell sort the function shell_sort shown below uses different set of increments in this casewe begin with sublists on the next passn sublists are sorted eventuallya single list sorting |
12,657 | [Figure: Shell sort with increments of three]
[Figure: Shell sort after sorting each sublist]

is sorted with the basic insertion sort. The figure below shows the first sublists for our example using this increment. The following invocation of the shell_sort function shows the partially sorted lists after each increment, with the final sort being an insertion sort with an increment of one.

def shell_sort(a_list):
    sublist_count = len(a_list) // 2
    while sublist_count > 0:
        for start_position in range(sublist_count):
            gap_insertion_sort(a_list, start_position, sublist_count)

        print("After increments of size", sublist_count,
              "the list is", a_list)

        sublist_count = sublist_count // 2

def gap_insertion_sort(a_list, start, gap):
    for i in range(start + gap, len(a_list), gap):
        current_value = a_list[i]
        position = i

        while position >= gap and a_list[position - gap] > current_value:
            a_list[position] = a_list[position - gap]
            position = position - gap

        a_list[position] = current_value

a_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
shell_sort(a_list)
print(a_list)

[Figure: Shell sort, a final insertion sort with increment of one]
[Figure: Initial sublists for a shell sort]

At first glance you may think that a shell sort cannot be better than an insertion sort, since it does a complete insertion sort as the last step. It turns out, however, that this final insertion sort does not need to do very many comparisons (or shifts) since the list has been pre-sorted by earlier incremental insertion sorts, as described above. In other words, each pass produces a list that is "more sorted" than the previous one. This makes the final pass very efficient.
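The claim that each increment pass leaves the list "more sorted" can be observed by running the gap passes one at a time. This is a standalone sketch; gap_pass is our own helper name, and it follows the diminishing n // 2 gap sequence described above.

```python
def gap_pass(a_list, gap):
    """One shell sort pass: insertion-sort every sublist of stride gap."""
    for start in range(gap):
        for i in range(start + gap, len(a_list), gap):
            current_value = a_list[i]
            position = i
            while position >= gap and a_list[position - gap] > current_value:
                a_list[position] = a_list[position - gap]
                position -= gap
            a_list[position] = current_value


data = [54, 26, 93, 17, 77, 31, 44, 55, 20]
gap = len(data) // 2
while gap > 0:
    gap_pass(data, gap)
    print("gap", gap, "->", data)   # each pass is visibly closer to sorted
    gap = gap // 2

print(data == sorted(data))   # True
```

Because the last pass uses a gap of one, the final pass is exactly an insertion sort, so the sequence is guaranteed to finish with a fully sorted list.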
12,659 | Although a general analysis of the shell sort is well beyond the scope of this text, we can say that it tends to fall somewhere between O(n) and O(n^2), based on the behaviour described above. By changing the increment, for example using 2^k - 1 (1, 3, 7, 15, 31, and so on), a shell sort can perform at O(n^(3/2)).

Self Check

1. Given the following list of numbers: [ ]. Which answer illustrates the contents of the list after all swapping is complete for a gap size of three?

The Merge Sort

We now turn our attention to using a divide and conquer strategy as a way to improve the performance of sorting algorithms. The first algorithm we will study is the merge sort. Merge sort is a recursive algorithm that continually splits a list in half. If the list is empty or has one item, it is sorted by definition (the base case). If the list has more than one item, we split the list and recursively invoke a merge sort on both halves. Once the two halves are sorted, the fundamental operation, called a merge, is performed. Merging is the process of taking two smaller sorted lists and combining them together into a single, sorted, new list. The first figure below shows our familiar example list as it is being split by merge_sort; the second shows the simple lists, now sorted, as they are merged back together.

The merge_sort function shown below begins by asking the base case question. If the length of the list is less than or equal to one, then we already have a sorted list and no more processing is necessary. If, on the other hand, the length is greater than one, then we use the Python slice operation to extract the left and right halves. It is important to note that the list may not have an even number of items. That does not matter, as the lengths will differ by at most one.

def merge_sort(a_list):
    print("Splitting ", a_list)
    if len(a_list) > 1:
        mid = len(a_list) // 2
        left_half = a_list[:mid]
        right_half = a_list[mid:]

        merge_sort(left_half)
        merge_sort(right_half)

        i = 0
        j = 0
        k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                a_list[k] = left_half[i]
                i = i + 1
            else:
                a_list[k] = right_half[j]
                j = j + 1
            k = k + 1

        while i < len(left_half):
            a_list[k] = left_half[i]
            i = i + 1
            k = k + 1

        while j < len(right_half):
            a_list[k] = right_half[j]
            j = j + 1
            k = k + 1

    print("Merging ", a_list)

a_list = [54, 26, 93, 17, 77, 31, 44, 55, 20]
merge_sort(a_list)
print(a_list)

[Figure: Splitting the list in a merge sort]
[Figure: Lists as they are merged together]

Once the merge_sort function is invoked on the left half and the right half, it is assumed they are sorted. The rest of the function is responsible for merging the two smaller sorted lists into a larger sorted list. Notice that the merge operation places the items back into the original list (a_list) one at a time by repeatedly taking the smallest item from the sorted lists.

The merge_sort function has been augmented with a print statement to show the contents of the list being sorted at the start of each invocation. There is also a print statement to show the merging process. The transcript shows the result of executing the function on our example list. Note that the list with 44, 55, and 20 will not divide evenly. The first split gives [44] and the second gives [55, 20]. It is easy to see how the splitting process eventually yields a list that can be immediately merged with other sorted lists.

In order to analyze the merge_sort function, we need to consider the two distinct processes that make up its implementation. First, the list is split into halves. We already computed (in binary search) that we can divide a list in half log n times where n is the length of the list. The second process is the merge. Each item in the list will eventually be processed and placed on the sorted list, so the merge operation which results in a list of size n requires n operations. The result of this analysis is that log n splits, each of which costs n, for a total of n log n operations. A merge sort is an O(n log n) algorithm.

Recall that the slicing operator is O(k) where k is the size of the slice. In order to guarantee that merge_sort will be O(n log n), we will need to remove the slice operator. Again, this is possible if we simply pass the starting and ending indices along with the
list when we make the recursive call. We leave this as an exercise.

It is important to notice that the merge_sort function requires extra space to hold the two halves as they are extracted with the slicing operations. This additional space can be a critical factor if the list is large and can make this sort problematic when working on large data sets.

Self Check

1. Given the following list of numbers: [ ]. Which answer illustrates the list to be sorted after three recursive calls to merge sort?
2. Given the following list of numbers: [ ]. Which answer illustrates the first two lists to be merged?

The Quick Sort

The quick sort uses divide and conquer to gain the same advantages as the merge sort, while not using additional storage. As a trade-off, however, it is possible that the list may not be divided in half. When this happens, we will see that performance is diminished.

A quick sort first selects a value, which is called the pivot value. Although there are many different ways to choose the pivot value, we will simply use the first item in the list. The role of the pivot value is to assist with splitting the list. The actual position where the pivot value belongs in the final sorted list, commonly called the split point, will be used to divide the list for subsequent calls to the quick sort.

The figure below shows that 54 will serve as our first pivot value. Since we have looked at this example a few times already, we know that 54 will eventually end up in the position currently holding 31. The partition process will happen next. It will find the split point and at the same time move other items to the appropriate side of the list, either less than or greater than the pivot value.

[Figure: The first pivot value for a quick sort]
12,663 | partitioning begins by locating two position markers let' call them left_mark and right_mark at the beginning and end of the remaining items in the list (positions and in figure the goal of the partition process is to move items that are on the wrong side with respect to the pivot value while also converging on the split point figure shows this process as we locate the position of we begin by incrementing left_mark until we locate value that is greater than the pivot value we then decrement right_mark until we find value that is less than the pivot value at this point we have discovered two items that are out of place with respect to the eventual split point for our examplethis occurs at and now we can exchange these two items and then repeat the process again at the point where right_mark becomes less than left_markwe stop the position of right_mark is now the split point the pivot value can be exchanged with the contents of the split point and the pivot value is now in place (figure in additionall the items to the left of the split point are less than the pivot valueand all the items to the right of the split point are greater than the pivot value the list can now be divided at the split point and the quick sort can be invoked recursively on the two halves the quick_sort function shown below invokes recursive functionquick_sort_helper quick_sort_helper begins with the same base case as the merge sort if the length of the list is less than or equal to oneit is already sorted if it is greaterthen it can be partitioned and recursively sorted the partition function implements the process described earlier def quick_sort(a_list)quick_sort_helper(a_list len(a_list def quick_sort_helper(a_listfirstlast)if first lastsplit_point partition(a_listfirstlastquick_sort_helper(a_listfirstsplit_point quick_sort_helper(a_listsplit_point lastdef partition(a_listfirstlast)pivot_value a_list[firstleft_mark first right_mark last done false while not donewhile left_mark 
<= right_mark and a_list[left_mark] <= pivot_value:
left_mark = left_mark + 1
while a_list[right_mark] >= pivot_value and right_mark >= left_mark:
figure: finding the split point
12,665 | figure completing the partition process to find the split point for right_mark right_mark if right_mark left_markdone true elsetemp a_list[left_marka_list[left_marka_list[right_marka_list[right_marktemp temp a_list[firsta_list[firsta_list[right_marka_list[right_marktemp return right_mark a_list [ quick_sort(a_listprint(a_listto analyze the quick_sort functionnote that for list of length nif the partition always occurs in the middle of the listthere will again be log divisions in order to find the split pointeach of the items needs to be checked against the pivot value the result is log in additionthere is no need for additional memory as in the merge sort process unfortunatelyin the worst casethe split points may not be in the middle and can be very skewed to the left or the rightleaving very uneven division in this casesorting list of items divides into sorting list of items and list of items then sorting list of divides into list of size and list of size and so on the result is an ( sort with all of the overhead that recursion requires we mentioned earlier that there are different ways to choose the pivot value in particularwe can attempt to alleviate some of the potential for an uneven division by using technique called median of three to choose the pivot valuewe will consider the firstthe middleand the last element in the list in our examplethose are and now pick the median valuein our case and use it for the pivot value (of coursethat was the pivot value we used originallythe idea is that in the case where the the first item in the list does not belong toward the middle sorting and searching |
12,666 | of the listthe median of three will choose better "middlevalue this will be particularly useful when the original list is somewhat sorted to begin with we leave the implementation of this pivot value selection as an exercise self check given the following list of numbers [ which answer shows the contents of the list after the second partitioning according to the quicksort algorithm [ [ [ [ given the following list of numbers [ what would be the first pivot value using the median of method which of the following sort algorithms are guaranteed to be ( log neven in the worst case shell sort quick sort merge sort insertion sort summary sequential search iso(nfor ordered and unordered lists binary search of an ordered list is (log nin the worst case hash tables can provide constant time searching bubble sorta selection sortand an insertion sort are ( algorithms shell sort improves on the insertion sort by sorting incremental sublists it falls between (nand ( merge sort is ( log )but requires additional space for the merging process quick sort is ( log )but may degrade to ( if the split points are not near the middle of the list it does not require additional space summary |
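The summary notes that quick sort may degrade to O(n^2) when the split points are badly skewed; the median-of-three pivot selection described in the quick sort section mitigates this. A minimal sketch (the function name and index handling here are assumptions, not the book's code):

```python
def median_of_three(a_list, first, last):
    # Return the index of the median of the first, middle, and last
    # items of a_list[first..last].
    mid = (first + last) // 2
    trio = sorted([(a_list[first], first),
                   (a_list[mid], mid),
                   (a_list[last], last)])
    return trio[1][1]  # index of the median value
```

Before partitioning, the chosen index can be swapped into position `first`, so the existing partition code (which always takes the first item as the pivot) works unchanged.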
key terms

binary search, bubble sort, chaining, clustering, collision, collision resolution, folding method, gap, hash function, hash table, hashing, insertion sort, linear probing, load factor, map, median of three, merge, merge sort, mid-square method, open addressing, partition, perfect hash function, pivot value, quadratic probing, quick sort, rehashing, selection sort, sequential search, shell sort, short bubble, slot, split point

discussion questions

1. Using the hash table performance formulas given in the chapter, compute the average number of comparisons necessary as the table fills up. At what point do you think the hash table is too small? Explain.
2. Modify the hash function for strings to use positional weightings.
3. We used a hash function for strings that weighted the characters by position. Devise an alternative weighting scheme. What are the biases that exist with these functions?
4. Research perfect hash functions. Using a list of names (classmates, family members, etc.), generate the hash values using the perfect hash algorithm.
5. Generate a random list of integers. Show how this list is sorted by the following algorithms:
bubble sort
selection sort
insertion sort
shell sort (you decide on the increments)
merge sort
12,668 | quick sort (you decide on the pivot value consider the following list of integers[ show how this list is sorted by the following algorithmsbubble sort selection sort insertion sort shell sort (you decide on the incrementsmerge sort quick sort (you decide on the pivot value consider the following list of integers[ show how this list is sorted by the following algorithmsbubble sort selection sort insertion sort shell sort (you decide on the incrementsmerge sort quick sort (you decide on the pivot value consider the list of characters[' ',' ',' ',' ',' ',' 'show how this list is sorted using the following algorithmsbubble sort selection sort insertion sort shell sort (you decide on the incrementsmerge sort quick sort (you decide on the pivot value devise alternative strategies for choosing the pivot value in quick sort for examplepick the middle item re-implement the algorithm and then execute it on random data sets under what criteria does your new strategy perform better or worse than the strategy from this programming exercises set up random experiment to test the difference between sequential search and binary search on list of integers use the binary search functions given in the text programming exercises |
12,669 | (recursive and iterativegenerate randomordered list of integers and do benchmark analysis for each one what are your resultscan you explain them implement the binary search using recursion without the slice operator recall that you will need to pass the list along with the starting and ending index values for the sublist generate randomordered list of integers and do benchmark analysis implement the len method (__len__for the hash table map adt implementation implement the in method (__contains__for the hash table map adt implementation how can you delete items from hash table that uses chaining for collision resolutionhow about if open addressing is usedwhat are the special circumstances that must be handledimplement the del method for the hashtable class in the hash table map implementationthe hash table size was chosen to be if the table gets fullthis needs to be increased re-implement the put method so that the table will automatically resize itself when the loading factor reaches predetermined value (you can decide the value based on your assessment of load versus performance implement quadratic probing as rehash technique using random number generatorcreate list of integers perform benchmark analysis using some of the sorting algorithms from this what is the difference in execution speed implement the bubble sort using simultaneous assignment bubble sort can be modified to "bubblein both directions the first pass moves "upthe listand the second pass moves "down this alternating pattern continues until no more passes are necessary implement this variation and describe under what circumstances it might be appropriate implement the selection sort using simultaneous assignment perform benchmark analysis for shell sortusing different increment sets on the same list implement the merge_sort function without using the slice operator one way to improve the quick sort is to use an insertion sort on lists that have small length (call it the "partition 
limit"why does this make sensere-implement the quick sort and use it to sort random list of integers perform an analysis using different list sizes for the partition limit implement the median-of-three method for selecting pivot value as modification to quick_sort run an experiment to compare the two techniques sorting and searching |
12,670 | six trees and tree algorithms objectives to understand what tree data structure is and how it is used to see how trees can be used to implement map data structure to implement trees using list to implement trees using classes and references to implement trees as recursive data structure to implement priority queue using heap examples of trees now that we have studied linear data structures like stacks and queues and have some experience with recursionwe will look at common data structure called the tree trees are used in many areas of computer scienceincluding operating systemsgraphicsdatabase systemsand computer networking tree data structures have many things in common with their botanical cousins tree data structure has rootbranchesand leaves the difference between tree in nature and tree in computer science is that tree data structure has its root at the top and its leaves on the bottom notice that you can start at the top of the tree and follow path made of circles and arrows all the way to the bottom at each level of the tree we might ask ourselves question and then follow the path that agrees with our answer for example we might ask"is this animal chordate or an arthropod?if the answer is "chordatethen we follow that path and ask"is this chordate mammal?if notwe are stuck (but only in this simplified examplewhen we are at the mammal level we ask"is this mammal primate or carnivore?we can keep following paths until we get to the very bottom of the tree where we have the common name second property of trees is that all of the children of one node are independent of the children of another node for examplethe genus felis has the children domestica and leo the genus musca also has child named domesticabut it is different node and is independent of the domestica child of felis this means that we can change the node that is the child of musca without affecting the child of felis |
12,671 | figure taxonomy of some common animals shown as tree trees and tree algorithms |
12,672 | figure small part of the unix file system hierarchy third property is that each leaf node is unique we can specify path from the root of the tree to leaf that uniquely identifies each species in the animal kingdomfor exampleanimalia chordate mammal carnivora felidae felis domestica another example of tree structure that you probably use every day is file system in file systemdirectoriesor foldersare structured as tree figure illustrates small part of unix file system hierarchy the file system tree has much in common with the biological classification tree you can follow path from the root to any directory that path will uniquely identify that subdirectory (and all the files in itanother important property of treesderived from their hierarchical natureis that you can move entire sections of tree (called subtreeto different position in the tree without affecting the lower levels of the hierarchy for examplewe could take the entire subtree staring with /etc/detach etcfrom the root and reattach it under usrthis would change the unique pathname to httpd from /etc/httpd to /usr/etc/httpdbut would not affect the contents or any children of the httpd directory final example of tree is web page the following is an example of simple web page written using html figure shows the tree that corresponds to each of the html tags used to create the page <meta http-equiv="content-typecontent="text/htmlcharset=utf- /simple simple web page list item one list item two the html source code and the tree accompanying the source illustrate another hierarchy notice that each level of the tree corresponds to level of nesting inside the html tags the first tag in the source is and the last is all the rest of the tags in the page are examples of trees |
12,673 | figure tree corresponding to the markup elements of web page inside the pair if you checkyou will see that this nesting property is true at all levels of the tree vocabulary and definitions vocabulary node node is fundamental part of tree it can have namewhich we call the "key node may also have additional information we call this additional information the "payload while the payload information is not central to many tree algorithmsit is often critical in applications that make use of trees edge an edge is another fundamental part of tree an edge connects two nodes to show that there is relationship between them every node (except the rootis connected by exactly one incoming edge from another node each node may have several outgoing edges root the root of the tree is the only node in the tree that has no incoming edges in figure is the root of the tree path path is an ordered list of nodes that are connected by edges for examplemammal carnivora felidae felis domestica is path children the set of nodes that have incoming edges from the same node to are said to be the children of that node in figure nodes log/spool/and ypare the children of node varparent node is the parent of all the nodes it connects to with outgoing edges in figure the node varis the parent of nodes log/spool/and yp trees and tree algorithms |
12,674 | figure tree consisting of set of nodes and edges sibling nodes in the tree that are children of the same parent are said to be siblings the nodes etcand usrare siblings in the filesystem tree subtree subtree is set of nodes and edges comprised of parent and all the descendants of that parent leaf node leaf node is node that has no children for examplehuman and chimpanzee are leaf nodes in figure level the level of node is the number of edges on the path from the root node to for examplethe level of the felis node in figure is five by definitionthe level of the root node is zero height the height of tree is equal to the maximum level of any node in the tree the height of the tree in figure is two definitions with the basic vocabulary now definedwe can move on to formal definition of tree in factwe will provide two definitions of tree one definition involves nodes and edges the second definitionwhich will prove to be very usefulis recursive definition definition one tree consists of set of nodes and set of edges that connect pairs of nodes tree has the following propertiesone node of the tree is designated as the root node every node nexcept the root nodeis connected by an edge from exactly one other node pwhere is the parent of unique path traverses from the root to each node if each node in the tree has maximum of two childrenwe say that the tree is binary tree figure illustrates tree that fits definition one the arrowheads on the edges indicate the direction of the connection definition two tree is either empty or consists of root and zero or more subtreeseach of which is also tree the root of each subtree is connected to the root of the parent tree vocabulary and definitions |
12,675 | figure recursive definition of tree by an edge figure illustrates this recursive definition of tree using the recursive definition of treewe know that the tree in figure has at least four nodessince each of the triangles representing subtree must have root it may have many more nodes than thatbut we do not know unless we look deeper into the tree implementation keeping in mind the definitions from the previous sectionwe can use the following functions to create and manipulate binary treebinarytree(creates new instance of binary tree get_left_child(returns the binary tree corresponding to the left child of the current node get_right_child(returns the binary tree corresponding to the right child of the current node set_root_val(valstores the object in parameter val in the current node get_root_val(returns the object stored in the current node insert_left(valcreates new binary tree and installs it as the left child of the current node insert_right(valcreates new binary tree and installs it as the right child of the current node the key decision in implementing tree is choosing good internal storage technique python allows us two very interesting possibilitiesso we will examine both before choosing one the first technique we will call "list of lists,the second technique we will call "nodes and references trees and tree algorithms |
12,676 | figure small tree list of lists representation in tree represented by list of listswe will begin with python' list data structure and write the functions defined above although writing the interface as set of operations on list is bit different from the other abstract data types we have implementedit is interesting to do so because it provides us with simple recursive data structure that we can look at and examine directly in list of lists treewe will store the value of the root node as the first element of the list the second element of the list will itself be list that represents the left subtree the third element of the list will be another list that represents the right subtree to illustrate this storage techniquelet' look at an example figure shows simple tree and the corresponding list implementation my_tree [' '#root [' '#left subtree [' [][]][' [][]][' '#right subtree [' [][]][notice that we can access subtrees of the list using standard list indexing the root of the tree is my_tree[ ]the left subtree of the root is my_tree[ ]and the right subtree is my_tree[ the code below illustrates creating simple tree using list once the tree is constructedwe can access the root and the left and right subtrees one very nice property of this list of lists approach is that the structure of list representing subtree adheres to the structure defined for treethe structure itself is recursivea subtree that has root value and two empty lists is leaf node another nice feature of the list of lists approach is that it generalizes to tree that has many subtrees in the case where the tree is more than binary treeanother subtree is just another list my_tree [' '[' '[' ',[],[]][' ',[],[]][' '[' ',[],[]][]print(my_treeprint('left subtree 'my_tree[ ]print('root 'my_tree[ ]print('right subtree 'my_tree[ ] implementation |
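The listing above lost its node values to extraction; a cleaned-up version of the same nested-list idea follows, with placeholder values 'a' through 'f' (assumed for illustration, not necessarily the labels in the original figure).

```python
# Nested-list tree: each node is [root_value, left_subtree, right_subtree],
# and a leaf is a value with two empty lists.
my_tree = ['a',                      # root
           ['b',                     # left subtree
            ['d', [], []],
            ['e', [], []]],
           ['c',                     # right subtree
            ['f', [], []],
            []]]

print('root          =', my_tree[0])
print('left subtree  =', my_tree[1])
print('right subtree =', my_tree[2])
```

Note that any subtree, such as `my_tree[1]`, has exactly the same three-element shape as the whole tree — this is the recursive structure the text refers to.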
12,677 | let us formalize this definition of the tree data structure by providing some functions that make it easy for us to use lists as trees note that we are not going to define binary tree class the functions we will write will just help us manipulate standard list as though we are working with tree def binary_tree( )return [ [][]the binary_tree function simply constructs list with root node and two empty sublists for the children to add left subtree to the root of treewe need to insert new list into the second position of the root list we must be careful if the list already has something in the second positionwe need to keep track of it and push it down the tree as the left child of the list we are adding the code below shows the python code for inserting left child def insert_left(rootnew_branch) root pop( if len( root insert( [new_brancht[]]elseroot insert( [new_branch[][]]return root notice that to insert left childwe first obtain the (possibly emptylist that corresponds to the current left child we then add the new left childinstalling the old left child as the left child of the new one this allows us to splice new node into the tree at any position the code for insert_right is similar to insert_left and is shown below def insert_right(rootnew_branch) root pop( if len( root insert( [new_branch[] ]elseroot insert( [new_branch[][]]return root to round out this set of tree-making functions let' write couple of access functions for getting and setting the root valueas well as getting the left or right subtrees def get_root_val(root)return root[ def set_root_val(root,new_val)root[ new_val def get_left_child(root)return root[ trees and tree algorithms |
12,678 | def get_right_child(root)return root[ the following code exercises the tree functions we have just written you should try it out for yourself one of the exercises asks you to draw the tree structure resulting from this set of calls def binary_tree( )return [ [][]def insert_left(rootnew_branch) root pop( if len( root insert( [new_brancht[]]elseroot insert( [new_branch[][]]return root def insert_right(rootnew_branch) root pop( if len( root insert( [new_branch[] ]elseroot insert( [new_branch[][]]return root def get_root_val(root)return root[ def set_root_val(rootnew_val)root[ new_val def get_left_child(root)return root[ def get_right_child(root)return root[ binary_tree( insert_left( insert_left( insert_right( insert_right( get_left_child(rprint(lset_root_val( print(rinsert_left( print( implementation |
12,679 | print(get_right_child(get_right_child( ))self check given the following statementsx binary_tree(' 'insert_left( ,' 'insert_right( ,' 'insert_right(get_right_child( )' 'insert_left(get_right_child(get_right_child( ))' 'which of the answers is the correct representation of the tree [' '[' '[][]][' '[][' '[][]]] [' '[' '[][' '[' '[][]][]]][' '[][]] [' '[' '[][]][' '[][' '[' '[][]][]]] [' '[' '[][' '[' '[][]][]]][' '[][]]write function build_tree that returns tree using the list of lists functions that looks like thisnodes and references our second method to represent tree uses nodes and references in this case we will define class that has attributes for the root valueas well as the left and right subtrees since this representation more closely follows the object-oriented programming paradigmwe will continue to use this representation for the remainder of the using nodes and referenceswe might think of the tree as being structured like the one shown in we will start out with simple class definition for the nodes and references approach as shown below the important thing to remember about this representation is that the attributes left and right will become references to other instances of the binarytree class for examplewhen we insert new left child into the tree we create another instance of binarytree and trees and tree algorithms |
12,680 | figure simple tree using nodes and references approach modify self left_child in the root to reference the new tree class binarytreedef __init__(selfroot)self key root self left_child none self right_child none notice that the constructor function expects to get some kind of object to store in the root just like you can store any object you like in listthe root object of tree can be reference to any object for our early exampleswe will store the name of the node as the root value using nodes and references to represent the tree in figure we would create six instances of the binarytree class next let' look at the functions we need to build the tree beyond the root node to add left child to the treewe will create new binary tree object and set the left attribute of the root to refer to this new object the code for insert_left is shown below def insert_left(self,new_node)if self left_child =noneself left_child binarytree(new_nodeelset binarytree(new_nodet left_child self left_child self left_child we must consider two cases for insertion the first case is characterized by node with no existing left child when there is no left childsimply add node to the tree the second case is characterized by node with an existing left child in the second casewe insert node and push the existing child down one level in the tree the second case is handled by the else statement on line of insert_left the code for insert_right must consider symmetric set of cases there will either be no right childor we must insert the node between the root and an existing right child the insertion code is shown below implementation |
12,681 | def insert_right(self,new_node)if self right_child =noneself right_child binarytree(new_nodeelset binarytree(new_nodet right_child self right_child self right_child to round out the definition for simple binary tree data structurewe will write accessor methods for the left and right childrenas well as the root values def get_right_child(self)return self right_child def get_left_child(self)return self left_child def set_root_val(self,obj)self key obj def get_root_val(self)return self key now that we have all the pieces to create and manipulate binary treelet' use them to check on the structure bit more let' make simple tree with node as the rootand add nodes and as children the code below creates the tree and looks at the some of the values stored in keyleftand right notice that both the left and right children of the root are themselves distinct instances of the binarytree class as we said in our original recursive definition for treethis allows us to treat any child of binary tree as binary tree itself class binarytreedef __init__(selfroot)self key root self left_child none self right_child none def insert_left(selfnew_node)if self left_child =noneself left_child binarytree(new_nodeelset binarytree(new_nodet left_child self left_child self left_child def insert_right(selfnew_node)if self right_child =noneself right_child binarytree(new_nodeelse trees and tree algorithms |
12,682 | binarytree(new_nodet right_child self right_child self right_child def get_right_child(self)return self right_child def get_left_child(self)return self left_child def set_root_val(selfobj)self key obj def get_root_val(self)return self key binarytree(' 'print( get_root_val()print( get_left_child() insert_left(' 'print( get_left_child()print( get_left_child(get_root_val() insert_right(' 'print( get_right_child()print( get_right_child(get_root_val() get_right_child(set_root_val('hello'print( get_right_child(get_root_val()self check write function build_tree that returns tree using the nodes and references implementation that looks like this implementation |
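A short usage sketch of the nodes-and-references BinaryTree class defined above (node values are illustrative). It highlights the splicing behavior: calling insert_left on a node that already has a left child pushes the existing child down one level.

```python
class BinaryTree:
    # Nodes-and-references binary tree, as defined in the text.
    def __init__(self, root):
        self.key = root
        self.left_child = None
        self.right_child = None

    def insert_left(self, new_node):
        if self.left_child is None:
            self.left_child = BinaryTree(new_node)
        else:
            t = BinaryTree(new_node)
            t.left_child = self.left_child  # push old child down
            self.left_child = t

    def insert_right(self, new_node):
        if self.right_child is None:
            self.right_child = BinaryTree(new_node)
        else:
            t = BinaryTree(new_node)
            t.right_child = self.right_child
            self.right_child = t

r = BinaryTree('a')
r.insert_left('b')
r.insert_right('c')
r.insert_left('x')   # 'x' becomes the left child; 'b' moves down a level
```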
12,683 | priority queues with binary heaps in earlier sections you learned about the first-in first-out data structure called queue one important variation of queue is called priority queue priority queue acts like queue in that you dequeue an item by removing it from the front howeverin priority queue the logical order of items inside queue is determined by their priority the highest priority items are at the front of the queue and the lowest priority items are at the back thus when you enqueue an item on priority queuethe new item may move all the way to the front we will see that the priority queue is useful data structure for some of the graph algorithms we will study in the next you can probably think of couple of easy ways to implement priority queue using sorting functions and lists howeverinserting into list is (nand sorting list is ( log nwe can do better the classic way to implement priority queue is using data structure called binary heap binary heap will allow us both enqueue and dequeue items in (log nthe binary heap is interesting to study because when we diagram the heap it looks lot like treebut when we implement it we use only single list as an internal representation the binary heap has two common variationsthe min heapin which the smallest key is always at the frontand the max heapin which the largest key value is always at the front in this section we will implement the min heap we leave max heap implementation as an exercise binary heap operations the basic operations we will implement for our binary heap are as followsbinaryheap(creates newemptybinary heap insert(kadds new item to the heap find_min(returns the item with the minimum key valueleaving item in the heap del_min(returns the item with the minimum key valueremoving the item from the heap is_empty(returns true if the heap is emptyfalse otherwise size(returns the number of items in the heap build_heap(listbuilds new heap from list of keys the code below demonstrates the use of some of 
the binary heap methods notice that no matter the order that we add items to the heapthe smallest is removed each time we will now turn our attention to creating an implementation for this idea import binheap bh binheap(bh insert( bh insert( bh insert( bh insert( as defined below trees and tree algorithms |
12,684 | figure complete binary tree print(bh del_min() print(bh del_min() print(bh del_min() print(bh del_min() binary heap implementation the structure property in order to make our heap work efficientlywe will take advantage of the logarithmic nature of the binary tree to represent our heap in order to guarantee logarithmic performancewe must keep our tree balanced balanced binary tree has roughly the same number of nodes in the left and right subtrees of the root in our heap implementation we keep the tree balanced by creating complete binary tree complete binary tree is tree in which each level has all of its nodes the exception to this is the bottom level of the treewhich we fill in from left to right figure shows an example of complete binary tree another interesting property of complete tree is that we can represent it using single list we do not need to use nodes and references or even lists of lists because the tree is completethe left child of parent (at position pis the node that is found in position in the list similarlythe right child of the parent is at position in the list to find the parent of any node in the treewe can simply use python' integer division given that node is at position in the listthe parent is at position / figure shows complete binary tree and also gives the list representation of the tree note the and relationship between parent and children the list representation of the treealong with the full structure propertyallows us to efficiently priority queues with binary heaps |
12,685 | figure complete binary tree along with its list representation traverse complete binary tree using only few simple mathematical operations we will see that this also leads to an efficient implementation of our binary heap the heap order property the method that we will use to store items in heap relies on maintaining the heap order property the heap order property is as followsin heapfor every node with parent pthe key in is smaller than or equal to the key in figure also illustrates complete binary tree that has the heap order property heap operations we will begin our implementation of binary heap with the constructor since the entire binary heap can be represented by single listall the constructor will do is initialize the list and an attribute current_size to keep track of the current size of the heap below we show the python code for the constructor you will notice that an empty binary heap has single zero as the first element of heap_list and that this zero is not usedbut is there so that simple integer division can be used in later methods class binheapdef __init__(self)self heap_list [ self current_size the next method we will implement is insert the easiestand most efficientway to add an item to list is to simply append the item to the end of the list the good news about trees and tree algorithms |
appending is that it guarantees that we will maintain the complete tree property. The bad news about appending is that we will very likely violate the heap structure property. However, it is possible to write a method that will allow us to regain the heap structure property by comparing the newly added item with its parent. If the newly added item is less than its parent, then we can swap the item with its parent. The figure shows the series of swaps needed to percolate the newly added item up to its proper position in the tree.

Figure: percolate the new node up to its proper position

Notice that when we percolate an item up, we are restoring the heap property between the newly added item and the parent. We are also preserving the heap property for any siblings. Of course, if the newly added item is very small, we may still need to swap it up another level. In fact, we may need to keep swapping until we get to the top of the tree. Below we show the perc_up method, which percolates a new item as far up in the tree as it needs to go to maintain the heap property. Here is where our wasted element in heap_list is important.
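To make the index arithmetic concrete before looking at perc_up, here is a small sketch. The heap values below are hypothetical (any list satisfying the heap order property would do); note how the unused zero at index 0 keeps the parent/child formulas clean:

```python
# A hypothetical complete binary heap stored as a list, with the unused 0
# at index 0 so the 2p / 2p + 1 / p // 2 arithmetic works out.
heap_list = [0, 5, 9, 11, 14, 18, 19, 21, 33, 17, 27]

def parent(p):
    return p // 2        # integer division finds the parent index

def left_child(p):
    return 2 * p         # left child of the node at index p

def right_child(p):
    return 2 * p + 1     # right child of the node at index p

# The key 14 sits at index 4; its children sit at indices 8 and 9.
print(heap_list[left_child(4)], heap_list[right_child(4)])   # 33 17
print(heap_list[parent(9)])                                  # 14
```

Every node in this sample list is greater than or equal to its parent, which is exactly the heap order property described above.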
Notice that we can compute the parent of any node by using simple integer division. The parent of the current node can be computed by dividing the index of the current node by 2.

    def perc_up(self, i):
        while i // 2 > 0:
            if self.heap_list[i] < self.heap_list[i // 2]:
                tmp = self.heap_list[i // 2]
                self.heap_list[i // 2] = self.heap_list[i]
                self.heap_list[i] = tmp
            i = i // 2

We are now ready to write the insert method. Most of the work in the insert method is really done by perc_up. Once a new item is appended to the tree, perc_up takes over and positions the new item properly.

    def insert(self, k):
        self.heap_list.append(k)
        self.current_size = self.current_size + 1
        self.perc_up(self.current_size)

With the insert method properly defined, we can now look at the del_min method. Since the heap property requires that the root of the tree be the smallest item in the tree, finding the minimum item is easy. The hard part of del_min is restoring full compliance with the heap structure and heap order properties after the root has been removed. We can restore our heap in two steps. First, we will restore the root item by taking the last item in the list and moving it to the root position. Moving the last item maintains our heap structure property. However, we have probably destroyed the heap order property of our binary heap. Second, we will restore the heap order property by pushing the new root node down the tree to its proper position. The figure shows the series of swaps needed to move the new root node to its proper position in the heap.

Figure: percolating the root node down the tree

In order to maintain the heap order property, all we need to do is swap the root with its smallest child less than the root. After the initial swap, we may repeat the swapping process with a node and its children until the node is swapped into a position on the tree where it is already less than both children. The code for percolating a node down the tree is found in the perc_down and min_child methods.

    def perc_down(self, i):
        while (i * 2) <= self.current_size:
            mc = self.min_child(i)
            if self.heap_list[i] > self.heap_list[mc]:
                tmp = self.heap_list[i]
                self.heap_list[i] = self.heap_list[mc]
                self.heap_list[mc] = tmp
            i = mc

    def min_child(self, i):
        if i * 2 + 1 > self.current_size:
            return i * 2
        else:
            if self.heap_list[i * 2] < self.heap_list[i * 2 + 1]:
                return i * 2
            else:
                return i * 2 + 1

The code for the del_min operation is below. Note that once again the hard work is handled by a helper function, in this case perc_down.

    def del_min(self):
        ret_val = self.heap_list[1]
        self.heap_list[1] = self.heap_list[self.current_size]
        self.current_size = self.current_size - 1
        self.heap_list.pop()
        self.perc_down(1)
        return ret_val

To finish our discussion of binary heaps, we will look at a method to build an entire heap from a list of keys. The first method you might think of may be like the following. Given a list of keys, you could easily build a heap by inserting each key one at a time. Since you are starting with a list of one item, the list is sorted and you could use binary search to find the right position to insert the next key at a cost of approximately O(log n) operations. However, remember that inserting an item in the middle of the list may require O(n) operations to shift the rest of the list over to make room for the new key. Therefore, to insert n keys into the heap would require a total of O(n log n) operations. However, if we start with an entire list then we can build the whole heap in O(n) operations. The build_heap function shows the code to build the entire heap.

    def build_heap(self, a_list):
        i = len(a_list) // 2
        self.current_size = len(a_list)
        self.heap_list = [0] + a_list[:]
        while i > 0:
            self.perc_down(i)
            i = i - 1

The figure shows the swaps that the build_heap method makes as it moves the nodes in an initial tree of [9, 6, 5, 2, 3] into their proper positions. Although we start out in the middle of the tree and work our way back toward the root, the perc_down method ensures that the largest child is always moved down the tree. Because the heap is a complete binary tree, any nodes past the halfway point will be leaves and therefore have no children. Notice that when we percolate down from the root of the tree, this may require multiple swaps. As you can see in the rightmost two trees of the figure, first the 9 is moved out of the root position, but after 9 is moved down one level in the tree, perc_down ensures that we check
the next set of children farther down in the tree to ensure that it is pushed as low as it can go. In this case it results in a second swap with 3. Now that 9 has been moved to the lowest level of the tree, no further swapping can be done. It is useful to compare the list representation of this series of swaps, as shown in the figure, with the tree representation.
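The same build-then-drain pattern can be sketched with the standard library's heapq module, which stores a 0-based heap in a plain list (no placeholder element at index 0) and, like build_heap, heapifies in O(n):

```python
import heapq

data = [9, 6, 5, 2, 3]
heapq.heapify(data)      # O(n) bottom-up construction, analogous to build_heap
print(data[0])           # 2 -- the smallest key sits at the root

# Repeatedly deleting the minimum yields the keys in sorted order.
print([heapq.heappop(data) for _ in range(len(data))])
```

Draining the heap with repeated delete-min calls is exactly the idea behind the heap sort exercise mentioned below.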
Figure: building a heap from the list [9, 6, 5, 2, 3]
Figure: parse tree for a simple sentence

The assertion that we can build the heap in O(n) may seem a bit mysterious at first, and a proof is beyond the scope of this book. However, the key to understanding that you can build the heap in O(n) is to remember that the log n factor is derived from the height of the tree. For most of the work in build_heap, the tree is shorter than log n. Using the fact that you can build a heap from a list in O(n) time, you will construct a sorting algorithm that uses a heap and sorts a list in O(n log n) as an exercise at the end of this chapter.

Binary Tree Applications

With the implementation of our tree data structure complete, we now look at an example of how trees can be used to solve some real problems. In this section we will look at parse trees. Parse trees can be used to represent real-world constructions like sentences or mathematical expressions. The figure shows the hierarchical structure of a simple sentence. Representing a sentence as a tree structure allows us to work with the individual parts of the sentence by using subtrees.

We can also represent a mathematical expression such as ((7 + 3) * (5 - 2)) as a parse tree, as shown in the figure. We have already looked at fully parenthesized expressions, so what do we know about this expression? We know that multiplication has a higher precedence than either addition or subtraction. Because of the parentheses, we know that before we can do the multiplication we must evaluate the parenthesized addition and subtraction expressions. The hierarchy of the tree helps us understand the order of evaluation for the whole expression. Before we can evaluate the top-level multiplication, we must evaluate the addition and the subtraction in the subtrees. The addition, which is the left subtree, evaluates to 10. The subtraction, which is the right subtree, evaluates to 3. Using the hierarchical structure of trees, we can simply
Figure: parse tree for ((7 + 3) * (5 - 2))

replace an entire subtree with one node once we have evaluated the expressions in the children. Applying this replacement procedure gives us the simplified tree shown in the figure.

Figure: simplified parse tree for ((7 + 3) * (5 - 2))

In the rest of this section we are going to examine parse trees in more detail. In particular we will look at:

* how to build a parse tree from a fully parenthesized mathematical expression
* how to evaluate the expression stored in a parse tree
* how to recover the original mathematical expression from a parse tree

The first step in building a parse tree is to break up the expression string into a list of tokens. There are four different kinds of tokens to consider: left parentheses, right parentheses, operators, and operands. We know that whenever we read a left parenthesis we are starting a new expression, and hence we should create a new tree to correspond to that expression. Conversely, whenever we read a right parenthesis, we have finished an expression. We also know that operands are going to be leaf nodes and children of their operators. Finally, we know that every operator is going to have both a left and a right child.

Using the information from above we can define four rules as follows:

1. If the current token is '(', add a new node as the left child of the current node, and descend to the left child.
2. If the current token is in the list ['+', '-', '/', '*'], set the root value of the current node to the operator represented by the current token. Add a new node as the right child of the current node and descend to the right child.
3. If the current token is a number, set the root value of the current node to the number and return to the parent.
4. If the current token is ')', go to the parent of the current node.
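Before any of these rules fire, the tokenizing step itself is just a whitespace split — the parse tree builder below relies on fp_exp.split() in exactly this way. A quick sketch, using the same fully parenthesized style of expression:

```python
fp_exp = "( 3 + ( 4 * 5 ) )"
tokens = fp_exp.split()   # whitespace-delimited tokens: parens, operators, operands
print(tokens)             # ['(', '3', '+', '(', '4', '*', '5', ')', ')']
```

This is why the input expression must have spaces around every token: the split-based tokenizer cannot separate "(3+" into three tokens on its own.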
Before writing the Python code, let's look at an example of the rules outlined above in action. We will use the expression (3 + (4 * 5)). We will parse this expression into the following list of character tokens: ['(', '3', '+', '(', '4', '*', '5', ')', ')']. Initially we will start out with a parse tree that consists of an empty root node. The figure illustrates the structure and contents of the parse tree as each new token is processed.

Using the figure, let's walk through the example step by step:

1. Create an empty tree.
2. Read ( as the first token. By rule 1, create a new node as the left child of the root. Make the current node this new child.
3. Read 3 as the next token. By rule 3, set the root value of the current node to 3 and go back up the tree to the parent.
4. Read + as the next token. By rule 2, set the root value of the current node to + and add a new node as the right child. The new right child becomes the current node.
5. Read ( as the next token. By rule 1, create a new node as the left child of the current node. The new left child becomes the current node.
6. Read 4 as the next token. By rule 3, set the value of the current node to 4. Make the parent of 4 the current node.
7. Read * as the next token. By rule 2, set the root value of the current node to * and create a new right child. The new right child becomes the current node.
8. Read 5 as the next token. By rule 3, set the root value of the current node to 5. Make the parent of 5 the current node.
9. Read ) as the next token. By rule 4, we make the parent of * the current node.
10. Read ) as the next token. By rule 4, we make the parent of + the current node. At this point there is no parent for +, so we are done.

From the example above, it is clear that we need to keep track of the current node as well as the parent of the current node. The tree interface provides us with a way to get the children of a node, through the get_left_child and get_right_child methods, but how can we keep track of the parent? A simple solution to keeping track of parents as we traverse the tree is to use a stack. Whenever we want to descend to a child of the current node, we first push the current node on
the stack. When we want to return to the parent of the current node, we pop the parent off the stack.

Using the rules described above, along with the Stack and BinaryTree operations, we are now ready to write a Python function to create a parse tree. The code for our parse tree builder is presented below:

    def build_parse_tree(fp_exp):
        fp_list = fp_exp.split()
        p_stack = Stack()
        e_tree = BinaryTree('')
        p_stack.push(e_tree)
        current_tree = e_tree
        for i in fp_list:
            if i == '(':
                current_tree.insert_left('')
                p_stack.push(current_tree)
                current_tree = current_tree.get_left_child()
            elif i not in ['+', '-', '*', '/', ')']:
                current_tree.set_root_val(int(i))
                parent = p_stack.pop()
                current_tree = parent
            elif i in ['+', '-', '*', '/']:
                current_tree.set_root_val(i)
                current_tree.insert_right('')
                p_stack.push(current_tree)
                current_tree = current_tree.get_right_child()
            elif i == ')':
                current_tree = p_stack.pop()
            else:
                raise ValueError
        return e_tree

    pt = build_parse_tree("( ( 10 + 5 ) * 3 )")
    pt.postorder()  # defined and explained in the next section

Figure: tracing parse tree construction

The four rules for building a parse tree are coded as the first four clauses of the if statement. In each case you can see that the code implements the rule, as described above, with a few calls to the BinaryTree or Stack methods. The only error checking we do in this function is in the else clause, where we raise a ValueError exception if we get a token from the list that we do not recognize.

Now that we have built a parse tree, what can we do with it? As a first example, we will write a function to evaluate the parse tree, returning the numerical result. To write this function, we will make use of the hierarchical nature of the tree. Look back at the earlier figure. Recall that we can replace the original tree with the simplified tree. This suggests that we can write an algorithm that evaluates a parse tree by recursively evaluating each subtree.

As we have done with past recursive algorithms, we will begin the design for the recursive evaluation function by identifying the base case. A natural base case for recursive algorithms that operate on trees is to check for a leaf node. In a parse tree, the leaf nodes will always be operands. Since numerical objects like integers and floating points require no further interpretation, the evaluate function can simply return the value stored in the leaf node. The recursive step that moves the function toward the base case is to call evaluate on both the left and the right children of the current node. The recursive call effectively moves us down the tree, toward a leaf
node. To put the results of the two recursive calls together, we can simply apply the operator stored in the parent node to the results returned from evaluating both children.

In the example of the simplified tree, we see that the two children of the root evaluate to themselves, namely 10 and 3. Applying the multiplication operator gives us a final result of 30.

The code for a recursive evaluate function is shown below. First, we obtain references to the left and the right children of the current node. If both the left and right children evaluate to None, then we know that the current node is really a leaf node. If
the current node is not a leaf node, we look up the operator in the current node and apply it to the results from recursively evaluating the left and right children.

To implement the arithmetic, we use a dictionary with the keys '+', '-', '*', and '/'. The values stored in the dictionary are functions from Python's operator module. The operator module provides us with the functional versions of many commonly used operators. When we look up an operator in the dictionary, the corresponding function object is retrieved. Since the retrieved object is a function, we can call it in the usual way: function(param1, param2). So the lookup opers['+'](2, 2) is equivalent to operator.add(2, 2).

    import operator

    def evaluate(parse_tree):
        opers = {'+': operator.add, '-': operator.sub,
                 '*': operator.mul, '/': operator.truediv}
        left = parse_tree.get_left_child()
        right = parse_tree.get_right_child()

        if left and right:
            fn = opers[parse_tree.get_root_val()]
            return fn(evaluate(left), evaluate(right))
        else:
            return parse_tree.get_root_val()

Finally, we will trace the evaluate function on the parse tree we created above. When we first call evaluate, we pass the root of the entire tree as the parameter parse_tree. Then we obtain references to the left and right children to make sure they exist. The recursive call takes place inside the if statement. We begin by looking up the operator in the root of the tree, which is '+'. The '+' operator maps to the operator.add function call, which takes two parameters. As usual for a Python function call, the first thing Python does is evaluate the parameters that are passed to the function. In this case both parameters are recursive function calls to our evaluate function. Using left-to-right evaluation, the first recursive call goes to the left. In the first recursive call the evaluate function is given the left subtree. We find that the node has no left or right children, so we are in a leaf node. When we are in a leaf node we just return the value stored in the leaf node as the result of the evaluation. In this case we return the integer 3.

At this point we have
one parameter evaluated for our top-level call to operator.add, but we are not done yet. Continuing the left-to-right evaluation of the parameters, we now make a recursive call to evaluate the right child of the root. We find that the node has both a left and a right child, so we look up the operator stored in this node, '*', and call this function using the left and right children as the parameters. At this point you can see that both recursive calls will be to leaf nodes, which will evaluate to the integers four and five respectively. With the two parameters evaluated, we return the result of operator.mul(4, 5). At this point we have evaluated the operands for the top-level '+' operator and all that is left to do is finish the call to operator.add(3, 20). The result of the evaluation of the entire expression tree for (3 + (4 * 5)) is 23.
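The operator-dictionary dispatch at the heart of evaluate can be exercised on its own. In the sketch below, a nested tuple stands in for the parse tree (a hypothetical representation, not the BinaryTree class from the text), but the recursion mirrors evaluate exactly: leaves return themselves, and interior nodes apply their operator to the recursively evaluated children.

```python
import operator

opers = {'+': operator.add, '-': operator.sub,
         '*': operator.mul, '/': operator.truediv}

def eval_tuple(node):
    # Leaves are plain numbers; interior nodes are (op, left, right) tuples.
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    return opers[op](eval_tuple(left), eval_tuple(right))

# ( 3 + ( 4 * 5 ) ) as a nested tuple:
tree = ('+', 3, ('*', 4, 5))
print(eval_tuple(tree))   # 23
```

The left-to-right evaluation order traced above falls out of Python's own argument evaluation: eval_tuple(left) runs to completion before eval_tuple(right) begins.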
Figure: representing a book as a tree

Tree Traversals

Now that we have examined the basic functionality of our tree data structure, it is time to look at some additional usage patterns for trees. These usage patterns can be divided into the three ways that we access the nodes of the tree. There are three commonly used patterns to visit all the nodes in a tree. The difference between these patterns is the order in which each node is visited. We call this visitation of the nodes a "traversal." The three traversals we will look at are called preorder, inorder, and postorder. Let's start out by defining these three traversals more carefully, then look at some examples where these patterns are useful.

preorder
    In a preorder traversal, we visit the root node first, then recursively do a preorder traversal of the left subtree, followed by a recursive preorder traversal of the right subtree.

inorder
    In an inorder traversal, we recursively do an inorder traversal on the left subtree, visit the root node, and finally do a recursive inorder traversal of the right subtree.

postorder
    In a postorder traversal, we recursively do a postorder traversal of the left subtree and the right subtree followed by a visit to the root node.

Let's look at some examples that illustrate each of these three kinds of traversals. First let's look at the preorder traversal. As an example of a tree to traverse, we will represent this book as a tree. The book is the root of the tree, and each chapter is a child of the root. Each section within a chapter is a child of the chapter, and each subsection is a child of its section, and so on. The figure shows a limited version of a book with only two chapters. Note that the traversal algorithm works for trees with any number of children, but we will stick with binary trees for now.

Suppose that you wanted to read this book from front to back. The preorder traversal gives you exactly that ordering. Starting at the root of the tree (the Book node), we will follow the preorder traversal instructions. We recursively call preorder on the left child, in this case Chapter 1. We again recursively
call preorder on the left child to get to Section 1.1. Since Section 1.1 has no children, we do not make any additional recursive calls. When we are finished with Section 1.1, we move up the tree to Chapter 1. At this point we still need to visit the right
subtree of Chapter 1, which is Section 1.2. As before we visit the left subtree, which brings us to Section 1.2.1. Then we visit the node for Section 1.2.2. With Section 1.2 finished, we return to Chapter 1. Then we return to the Book node and follow the same procedure for Chapter 2.

The code for writing tree traversals is surprisingly elegant, largely because the traversals are written recursively. Below we show the Python code for a preorder traversal of a binary tree.

You may wonder, what is the best way to write an algorithm like preorder traversal? Should it be a function that simply uses a tree as a data structure, or should it be a method of the tree data structure itself? The code below shows a version of the preorder traversal written as an external function that takes a binary tree as a parameter. The external function is particularly elegant because our base case is simply to check if the tree exists. If the tree parameter is None, then the function returns without taking any action.

    def preorder(tree):
        if tree:
            print(tree.get_root_val())
            preorder(tree.get_left_child())
            preorder(tree.get_right_child())

We can also implement preorder as a method of the BinaryTree class. The code for implementing preorder as an internal method is shown below. Notice what happens when we move the code from internal to external. In general, we just replace tree with self. However, we also need to modify the base case. The internal method must check for the existence of the left and the right children before making the recursive call to preorder.

    def preorder(self):
        print(self.key)
        if self.left_child:
            self.left_child.preorder()
        if self.right_child:
            self.right_child.preorder()

Which of these two ways to implement preorder is best? The answer is that implementing preorder as an external function is probably better in this case. The reason is that you very rarely want to just traverse the tree. In most cases you are going to want to accomplish something else while using one of the basic traversal patterns. In fact, we will see in the next example that the postorder traversal pattern follows very closely with the code we wrote earlier to evaluate a parse tree. Therefore we will write the rest of the traversals as external functions.

The algorithm for the postorder traversal, shown below, is nearly identical to preorder except that we move the call to print to the end of the function.

    def postorder(tree):
        if tree != None:
            postorder(tree.get_left_child())
            postorder(tree.get_right_child())
            print(tree.get_root_val())
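To see the orderings side by side, here is a sketch using a minimal stand-in node class (hypothetical; the text's BinaryTree would work the same way) that collects keys into a list instead of printing them:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(tree, out):
    if tree:
        out.append(tree.key)           # root first
        preorder(tree.left, out)
        preorder(tree.right, out)

def postorder(tree, out):
    if tree:
        postorder(tree.left, out)      # children first
        postorder(tree.right, out)
        out.append(tree.key)           # root last

# The parse tree for ( 3 + ( 4 * 5 ) ):
tree = Node('+', Node(3), Node('*', Node(4), Node(5)))

pre, post = [], []
preorder(tree, pre)
postorder(tree, post)
print(pre)    # ['+', 3, '*', 4, 5]
print(post)   # [3, 4, 5, '*', '+']
```

Notice that the postorder result lists each operator after its operands, which is exactly why postorder matches the structure of expression evaluation.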
We have already seen a common use for the postorder traversal, namely evaluating a parse tree. Look back at our evaluate function above. What we are doing is evaluating the left subtree, evaluating the right subtree, and combining them in the root through the function call to an operator. Assume that our binary tree is going to store only expression tree data. Let's rewrite the evaluation function, but model it even more closely on the postorder code:

    def postorder_eval(tree):
        opers = {'+': operator.add, '-': operator.sub,
                 '*': operator.mul, '/': operator.truediv}
        res1 = None
        res2 = None
        if tree:
            res1 = postorder_eval(tree.get_left_child())
            res2 = postorder_eval(tree.get_right_child())
            if res1 and res2:
                return opers[tree.get_root_val()](res1, res2)
            else:
                return tree.get_root_val()

Notice that the form in postorder is the same as the form in postorder_eval, except that instead of printing the key at the end of the function, we return it. This allows us to save the values returned from the recursive calls. We then use these saved values along with the operator.

The final traversal we will look at in this section is the inorder traversal. In the inorder traversal we visit the left subtree, followed by the root, and finally the right subtree. Below we show our code for the inorder traversal. Notice that in all three of the traversal functions we are simply changing the position of the print statement with respect to the two recursive function calls.

    def inorder(tree):
        if tree != None:
            inorder(tree.get_left_child())
            print(tree.get_root_val())
            inorder(tree.get_right_child())

If we perform a simple inorder traversal of a parse tree, we get our original expression back, without any parentheses. Let's modify the basic inorder algorithm to allow us to recover the fully parenthesized version of the expression. The only modifications we will make to the basic template are as follows: print a left parenthesis before the recursive call to the left subtree, and print a right parenthesis after the recursive call to the right subtree.

    def print_exp(tree):
        str_val = ""
        if tree:
            str_val = '(' + print_exp(tree.get_left_child())
            str_val = str_val + str(tree.get_root_val())
            str_val = str_val + print_exp(tree.get_right_child()) + ')'
        return str_val
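Running print_exp on a small tree shows the effect. The sketch below uses a hypothetical minimal node class in place of BinaryTree, exposing the same accessors. Note that, as written, print_exp also wraps each leaf operand in its own pair of parentheses, so the output is more heavily parenthesized than the original expression — tightening this up makes a good exercise.

```python
class Node:
    # Minimal stand-in for BinaryTree, exposing the same accessors.
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
    def get_root_val(self):
        return self.key
    def get_left_child(self):
        return self.left
    def get_right_child(self):
        return self.right

def print_exp(tree):
    str_val = ""
    if tree:
        str_val = '(' + print_exp(tree.get_left_child())
        str_val = str_val + str(tree.get_root_val())
        str_val = str_val + print_exp(tree.get_right_child()) + ')'
    return str_val

# The tree for 10 + 5: every subtree, including each leaf, gets parentheses.
print(print_exp(Node('+', Node(10), Node(5))))   # ((10)+(5))
```

A leaf produces '(10)' rather than '10' because the if branch runs for any non-None node, prepending '(' and appending ')' even when both recursive calls return empty strings.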