Reconstructing the Shortest-Path Tree

Our pseudo-code description of Dijkstra's algorithm and our implementation in the earlier code fragments compute the value D[v], for each vertex v, that is the length of the shortest path from the source vertex s to v. However, those forms of the algorithm do not explicitly compute the actual paths that achieve those distances. The collection of all shortest paths emanating from source s can be compactly represented by what is known as the shortest-path tree. The paths form a rooted tree because if a shortest path from s to v passes through an intermediate vertex u, it must begin with a shortest path from s to u.

In this section, we demonstrate that the shortest-path tree rooted at source s can be reconstructed in O(n + m) time, given the set of D[v] values produced by Dijkstra's algorithm using s as the source. As we did when representing the DFS and BFS trees, we will map each vertex v != s to a parent u (possibly u = s), such that u is the vertex immediately before v on a shortest path from s to v. If u is the vertex just before v on the shortest path from s to v, it must be that

    D[u] + w(u,v) = D[v].

Conversely, if the above equation is satisfied, then the shortest path from s to u, followed by the edge (u,v), is a shortest path to v.

Our implementation below reconstructs the tree based on this logic, testing all incoming edges to each vertex v, looking for a (u,v) that satisfies the key equation. The running time is O(n + m), as we consider each vertex and all incoming edges to those vertices.

    def shortest_path_tree(g, s, d):
      """Reconstruct shortest-path tree rooted at vertex s, given distance map d.

      Return tree as a map from each reachable vertex v (other than s) to the
      edge e=(u,v) that is used to reach v from its parent u in the tree.
      """
      tree = {}
      for v in d:
        if v is not s:
          for e in g.incident_edges(v, False):     # consider INCOMING edges
            u = e.opposite(v)
            wgt = e.element()
            if d[v] == d[u] + wgt:
              tree[v] = e                          # edge e is used to reach v
      return tree

Code Fragment: Python function that reconstructs the shortest paths, based on knowledge of the single-source distances.
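As a brief usage sketch (an illustration, not part of the text), the tree returned by shortest_path_tree can be combined with a distance map computed by a Dijkstra implementation; the construction below assumes the Graph class and a shortest_path_lengths function from earlier code fragments.

    # Hypothetical usage sketch; assumes the Graph class and the
    # shortest_path_lengths function presented in earlier code fragments.
    g = Graph()                              # undirected by default
    u = g.insert_vertex('ORD')
    v = g.insert_vertex('JFK')
    w = g.insert_vertex('BOS')
    g.insert_edge(u, v, 740)                 # edge elements are weights
    g.insert_edge(v, w, 187)
    g.insert_edge(u, w, 867)

    d = shortest_path_lengths(g, u)          # Dijkstra: distances from u
    tree = shortest_path_tree(g, u, d)       # parent edge for each reachable vertex
    for vert, edge in tree.items():
        print(vert.element(), 'reached via edge of weight', edge.element())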
Minimum Spanning Trees

Suppose we wish to connect all the computers in a new office building using the least amount of cable. We can model this problem using an undirected, weighted graph G whose vertices represent the computers, and whose edges represent all the possible pairs (u,v) of computers, where the weight w(u,v) of edge (u,v) is equal to the amount of cable needed to connect computer u to computer v. Rather than computing a shortest-path tree from some particular vertex v, we are interested instead in finding a tree T that contains all the vertices of G and has the minimum total weight over all such trees. Algorithms for finding such a tree are the focus of this section.

Problem Definition

Given an undirected, weighted graph G, we are interested in finding a tree T that contains all the vertices in G and minimizes the sum

    w(T) = sum of w(u,v), taken over all edges (u,v) in T.

A tree, such as this, that contains every vertex of a connected graph G is said to be a spanning tree, and the problem of computing a spanning tree T with smallest total weight is known as the minimum spanning tree (or MST) problem.

The development of efficient algorithms for the minimum spanning tree problem predates the modern notion of computer science itself. In this section, we discuss two classic algorithms for solving the MST problem. These algorithms are both applications of the greedy method, which, as was discussed briefly in the previous section, is based on choosing objects to join a growing collection by iteratively picking an object that minimizes some cost function. The first algorithm we discuss is the Prim-Jarnik algorithm, which grows the MST from a single root vertex, much in the same way as Dijkstra's shortest-path algorithm. The second algorithm we discuss is Kruskal's algorithm, which "grows" the MST in clusters by considering edges in nondecreasing order of their weights.

In order to simplify the description of the algorithms, we assume, in the following, that the input graph G is undirected (that is, all its edges are undirected) and simple (that is, it has no self-loops and no parallel edges). Hence, we denote the edges of G as unordered vertex pairs (u,v). Before we discuss the details of these algorithms, however, let us give a crucial fact about minimum spanning trees that forms the basis of the algorithms.
A Crucial Fact about Minimum Spanning Trees

The two MST algorithms we discuss are based on the greedy method, which in this case depends crucially on the following fact (see the figure below).

Figure: An illustration of the crucial fact about minimum spanning trees: a minimum-weight "bridge" edge e, joining the two sides of a partition of the vertices, belongs to a minimum spanning tree.

Proposition: Let G be a weighted connected graph, and let V1 and V2 be a partition of the vertices of G into two disjoint nonempty sets. Furthermore, let e be an edge in G with minimum weight from among those with one endpoint in V1 and the other in V2. There is a minimum spanning tree T that has e as one of its edges.

Justification: Let T be a minimum spanning tree of G. If T does not contain edge e, the addition of e to T must create a cycle. Therefore, there is some edge f of this cycle, other than e, that has one endpoint in V1 and the other in V2. Moreover, by the choice of e, w(e) <= w(f). If we remove f from T with e added, we obtain a spanning tree whose total weight is no more than before. Since T was a minimum spanning tree, this new tree must also be a minimum spanning tree.

In fact, if the weights in G are distinct, then the minimum spanning tree is unique; we leave the justification of this less crucial fact as an exercise. In addition, note that the proposition remains valid even if the graph G contains negative-weight edges or negative-weight cycles, unlike the algorithms we presented for shortest paths.
The Prim-Jarnik Algorithm

In the Prim-Jarnik algorithm, we grow a minimum spanning tree from a single cluster starting from some "root" vertex s. The main idea is similar to that of Dijkstra's algorithm. We begin with some vertex s, defining the initial "cloud" of vertices C. Then, in each iteration, we choose a minimum-weight edge e = (u,v), connecting a vertex u in the cloud C to a vertex v outside of C. The vertex v is then brought into the cloud C and the process is repeated until a spanning tree is formed. Again, the crucial fact about minimum spanning trees comes into play, for by always choosing the smallest-weight edge joining a vertex inside C to one outside C, we are assured of always adding a valid edge to the MST.

To efficiently implement this approach, we can take another cue from Dijkstra's algorithm. We maintain a label D[v] for each vertex v outside the cloud C, so that D[v] stores the weight of the minimum observed edge for joining v to the cloud C. (In Dijkstra's algorithm, this label measured the full path length from starting vertex to v, including an edge (u,v).) These labels serve as keys in a priority queue used to decide which vertex is next in line to join the cloud. We give the pseudo-code below.

    Algorithm PrimJarnik(G):
      Input: An undirected, weighted, connected graph G with n vertices and m edges
      Output: A minimum spanning tree T for G
      Pick any vertex s of G
      D[s] = 0
      for each vertex v != s do
        D[v] = infinity
      Initialize T = empty.
      Initialize a priority queue Q with an entry (D[v], (v, None)) for each vertex v,
        where D[v] is the key in the priority queue, and (v, None) is the associated value.
      while Q is not empty do
        (u, e) = value returned by Q.remove_min()
        Connect vertex u to T using edge e
        for each edge e' = (u,v) such that v is in Q do
          {check if edge (u,v) better connects v to T}
          if w(u,v) < D[v] then
            D[v] = w(u,v)
            Change the key of vertex v in Q to D[v]
            Change the value of vertex v in Q to (v, e')
      return the tree T

Code Fragment: The Prim-Jarnik algorithm for the MST problem.
Analyzing the Prim-Jarnik Algorithm

The implementation issues for the Prim-Jarnik algorithm are similar to those for Dijkstra's algorithm, relying on an adaptable priority queue Q. We initially perform n insertions into Q, later perform n extract-min operations, and may update a total of m priorities as part of the algorithm. Those steps are the primary contributions to the overall running time. With a heap-based priority queue, each operation runs in O(log n) time, and the overall time for the algorithm is O((n + m) log n), which is O(m log n) for a connected graph. Alternatively, we can achieve O(n^2) running time by using an unsorted list as a priority queue.

Illustrating the Prim-Jarnik Algorithm

We illustrate the Prim-Jarnik algorithm in the figures that follow.

Figure: An illustration of the Prim-Jarnik MST algorithm on a graph of airports (BOS, ORD, JFK, BWI, PVD, SFO, DFW, LAX, MIA), starting with vertex PVD (continues in the next figure).
Figure: An illustration of the Prim-Jarnik MST algorithm (continued from the previous figure).
Python Implementation

The code below presents a Python implementation of the Prim-Jarnik algorithm. The MST is returned as an unordered list of edges.

    def MST_PrimJarnik(g):
      """Compute a minimum spanning tree of weighted graph g.

      Return a list of edges that comprise the MST (in arbitrary order).
      """
      d = {}                                 # d[v] is bound on distance to tree
      tree = []                              # list of edges in spanning tree
      pq = AdaptableHeapPriorityQueue()      # d[v] maps to value (v, e=(u,v))
      pqlocator = {}                         # map from vertex to its pq locator

      # for each vertex v of the graph, add an entry to the priority queue, with
      # the source having distance 0 and all others having infinite distance
      for v in g.vertices():
        if len(d) == 0:                      # this is the first node
          d[v] = 0                           # make it the root
        else:
          d[v] = float('inf')                # positive infinity
        pqlocator[v] = pq.add(d[v], (v, None))

      while not pq.is_empty():
        key, value = pq.remove_min()
        u, edge = value                      # unpack tuple from pq
        del pqlocator[u]                     # u is no longer in pq
        if edge is not None:
          tree.append(edge)                  # add edge to tree
        for link in g.incident_edges(u):
          v = link.opposite(u)
          if v in pqlocator:                 # thus v not yet in tree
            # see if edge (u,v) better connects v to the growing tree
            wgt = link.element()
            if wgt < d[v]:                   # better edge to v?
              d[v] = wgt                     # update the distance
              pq.update(pqlocator[v], d[v], (v, link))   # update the pq entry
      return tree

Code Fragment: Python implementation of the Prim-Jarnik algorithm for the minimum spanning tree problem.
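A short usage sketch (an assumption, not from the text): given a connected, weighted Graph instance g whose edge elements are numeric weights, the total weight of the resulting spanning tree is simply the sum of the returned edges' elements.

    # Hypothetical usage; assumes g is a connected, weighted Graph instance
    # whose edge elements are numeric weights.
    mst_edges = MST_PrimJarnik(g)
    total = sum(e.element() for e in mst_edges)
    print('MST uses', len(mst_edges), 'edges with total weight', total)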
Kruskal's Algorithm

In this section, we introduce Kruskal's algorithm for constructing a minimum spanning tree. While the Prim-Jarnik algorithm builds the MST by growing a single tree until it spans the graph, Kruskal's algorithm maintains a forest of clusters, repeatedly merging pairs of clusters until a single cluster spans the graph.

Initially, each vertex is by itself in a singleton cluster. The algorithm then considers each edge in turn, ordered by increasing weight. If an edge e connects two different clusters, then e is added to the set of edges of the minimum spanning tree, and the two clusters connected by e are merged into a single cluster. If, on the other hand, e connects two vertices that are already in the same cluster, then e is discarded. Once the algorithm has added enough edges to form a spanning tree, it terminates and outputs this tree as the minimum spanning tree.

We give pseudo-code for Kruskal's MST algorithm below, and we show an example of this algorithm in the figures that follow.

    Algorithm Kruskal(G):
      Input: A simple connected weighted graph G with n vertices and m edges
      Output: A minimum spanning tree T for G
      for each vertex v in G do
        Define an elementary cluster C(v) = {v}.
      Initialize a priority queue Q to contain all edges in G, using the weights as keys.
      Initialize T = empty.         {T will ultimately contain the edges of the MST}
      while T has fewer than n - 1 edges do
        (u,v) = value returned by Q.remove_min()
        Let C(u) be the cluster containing u, and let C(v) be the cluster containing v.
        if C(u) != C(v) then
          Add edge (u,v) to T.
          Merge C(u) and C(v) into one cluster.
      return tree T

Code Fragment: Kruskal's algorithm for the MST problem.

As was the case with the Prim-Jarnik algorithm, the correctness of Kruskal's algorithm is based upon the crucial fact about minimum spanning trees from the proposition above. Each time Kruskal's algorithm adds an edge (u,v) to the minimum spanning tree T, we can define a partitioning of the set of vertices V (as in the proposition) by letting V1 be the cluster containing v and letting V2 contain the rest of the vertices in V. This clearly defines a disjoint partitioning of the vertices of V and, more importantly, since we are extracting edges from Q in order by their weights, e must be a minimum-weight edge with one vertex in V1 and the other in V2. Thus, Kruskal's algorithm always adds a valid minimum spanning tree edge.
Figure: Example of an execution of Kruskal's MST algorithm on a graph with integer weights. We show the clusters as shaded regions and we highlight the edge being considered in each iteration (continues in the next figure).
Figure: An example of an execution of Kruskal's MST algorithm (continued). Rejected edges are shown dashed (continues in the next figure).
Figure: Example of an execution of Kruskal's MST algorithm (continued from the previous figure). The edge considered in the final panel merges the last two clusters, which concludes this execution of Kruskal's algorithm.

The Running Time of Kruskal's Algorithm

There are two primary contributions to the running time of Kruskal's algorithm. The first is the need to consider the edges in nondecreasing order of their weights, and the second is the management of the cluster partition. Analyzing its running time requires that we give more details on its implementation.

The ordering of edges by weight can be implemented in O(m log m) time, either by use of a sorting algorithm or a priority queue Q. If that queue is implemented with a heap, we can initialize Q in O(m log m) time by repeated insertions, or in O(m) time using bottom-up heap construction, and the subsequent calls to remove_min each run in O(log m) time, since the queue has size O(m). We note that since m is O(n^2) for a simple graph, O(log m) is the same as O(log n). Therefore, the running time due to the ordering of edges is O(m log n).

The remaining task is the management of clusters. To implement Kruskal's algorithm, we must be able to find the clusters for vertices u and v that are endpoints of an edge e, to test whether those two clusters are distinct, and if so, to merge those two clusters into one. None of the data structures we have studied thus far are well suited for this task. However, we conclude this chapter by formalizing the problem of managing disjoint partitions, and introducing efficient union-find data structures. In the context of Kruskal's algorithm, we perform at most 2m find operations and n - 1 union operations. We will see that a simple union-find structure can perform that combination of operations in O(m + n log n) time (see the proposition in the next section), and a more advanced structure can support an even faster time. For a connected graph, m >= n - 1, and therefore, the bound of O(m log n) time for ordering the edges dominates the time for managing the clusters. We conclude that the running time of Kruskal's algorithm is O(m log n).
Python Implementation

The code below presents a Python implementation of Kruskal's algorithm. As with our implementation of the Prim-Jarnik algorithm, the minimum spanning tree is returned in the form of a list of edges. As a consequence of Kruskal's algorithm, those edges will be reported in nondecreasing order of their weights. Our implementation assumes use of a Partition class for managing the cluster partition; an implementation of the Partition class is presented later in this chapter.

    def MST_Kruskal(g):
      """Compute a minimum spanning tree of a graph using Kruskal's algorithm.

      Return a list of edges that comprise the MST.

      The elements of the graph's edges are assumed to be weights.
      """
      tree = []                      # list of edges in spanning tree
      pq = HeapPriorityQueue()       # entries are edges in G, with weights as key
      forest = Partition()           # keeps track of forest clusters
      position = {}                  # map each node to its Partition entry

      for v in g.vertices():
        position[v] = forest.make_group(v)

      for e in g.edges():
        pq.add(e.element(), e)       # edge's element is assumed to be its weight

      size = g.vertex_count()
      while len(tree) != size - 1 and not pq.is_empty():
        # tree not spanning and unprocessed edges remain
        weight, edge = pq.remove_min()
        u, v = edge.endpoints()
        a = forest.find(position[u])
        b = forest.find(position[v])
        if a != b:
          tree.append(edge)
          forest.union(a, b)

      return tree

Code Fragment: Python implementation of Kruskal's algorithm for the minimum spanning tree problem.
Disjoint Partitions and Union-Find Structures

In this section, we consider a data structure for managing a partition of elements into a collection of disjoint sets. Our initial motivation is in support of Kruskal's minimum spanning tree algorithm, in which a forest of disjoint trees is maintained, with occasional merging of neighboring trees. More generally, the disjoint partition problem can be applied to various models of discrete growth. We formalize the problem with the following model.

A partition data structure manages a universe of elements that are organized into disjoint sets (that is, an element belongs to one and only one of these sets). Unlike with the Set ADT or Python's set class, we do not expect to be able to iterate through the contents of a set, nor to efficiently test whether a given set includes a given element. To avoid confusion with such notions of a set, we will refer to the clusters of our partition as groups. However, we will not require an explicit structure for each group, instead allowing the organization of groups to be implicit. To differentiate between one group and another, we assume that at any point in time, each group has a designated entry that we refer to as the leader of the group.

Formally, we define the methods of a partition ADT using position objects, each of which stores an element x. The partition ADT supports the following methods.

  make_group(x): Create a singleton group containing new element x and return the position storing x.
  union(p, q): Merge the groups containing positions p and q.
  find(p): Return the position of the leader of the group containing position p.

Sequence Implementation

A simple implementation of a partition with a total of n elements uses a collection of sequences, one for each group, where the sequence for a group A stores element positions. Each position object stores a variable, element, which references its associated element x and allows the execution of an element() method in O(1) time. In addition, each position stores a variable, group, that references the sequence storing p, since this sequence is representing the group containing p's element. (See the figure below.)

Figure: Sequence-based implementation of a partition consisting of three groups, A, B, and C.

With this representation, we can easily perform the make_group(x) and find(p) operations in O(1) time, allowing the first position in a sequence to serve as the "leader." Operation union(p, q) requires that we join two sequences into one and update the group references of the positions in one of the two. We choose to implement this operation by removing all the positions from the sequence with smaller
size, and inserting them in the sequence with larger size. Each time we take a position p from the smaller group a and insert it into the larger group b, we update the group reference for that position to now point to b. Hence, the operation union(p, q) takes time O(min(n_p, n_q)), where n_p (resp. n_q) is the cardinality of the group containing position p (resp. q). Clearly, this time is O(n) if there are n elements in the partition universe. However, we next present an amortized analysis that shows this implementation to be much better than appears from this worst-case analysis.

Proposition: When using the above sequence-based partition implementation, performing a series of k make_group, union, and find operations on an initially empty partition involving at most n elements takes O(k + n log n) time.

Justification: We use the accounting method and assume that one cyber-dollar can pay for the time to perform a find operation, a make_group operation, or the movement of a position object from one sequence to another in a union operation. In the case of a find or make_group operation, we charge the operation itself 1 cyber-dollar. In the case of a union operation, we assume that 1 cyber-dollar pays for the constant-time work in comparing the sizes of the two sequences, and that we charge 1 cyber-dollar to each position that we move from the smaller group to the larger group. Clearly, the cyber-dollars charged for each find and make_group operation, together with the first cyber-dollar collected for each union operation, account for a total of k cyber-dollars.

Consider, then, the number of charges made to positions on behalf of union operations. The important observation is that each time we move a position from one group to another, the size of that position's group at least doubles. Thus, each position is moved from one group to another at most log n times; hence, each position can be charged at most O(log n) times. Since we assume that the partition is initially empty, there are O(n) different elements referenced in the given series of operations, which implies that the total time for moving elements during the union operations is O(n log n).
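The sequence-based strategy just analyzed can be sketched in Python as follows. This is an illustrative simplification, not the book's code: each group is represented by a Python list, each position records its element and the list that currently contains it, and union always moves the positions of the smaller list into the larger one.

    class SequencePartition:
      """Minimal sketch of a sequence-based partition (union-by-size on lists)."""

      class Position:
        def __init__(self, element, group):
          self._element = element
          self._group = group              # the list representing this position's group

        def element(self):
          return self._element

      def make_group(self, x):
        p = self.Position(x, None)
        p._group = [p]                     # singleton group; p is its own leader
        return p

      def find(self, p):
        return p._group[0]                 # first position in a sequence is the leader

      def union(self, p, q):
        a, b = p._group, q._group
        if a is not b:
          if len(a) < len(b):
            a, b = b, a                    # ensure a is the larger group
          for pos in b:                    # move each position of the smaller group
            pos._group = a
            a.append(pos)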
Tree-Based Partition Implementation

An alternative data structure for representing a partition uses a collection of trees to store the n elements, where each tree is associated with a different group. (See the figure below.) In particular, we implement each tree with a linked data structure whose nodes are themselves the group position objects. We view each position p as being a node having an instance variable, element, referring to its element x, and an instance variable, parent, referring to its parent node. By convention, if p is the root of its tree, we set p's parent reference to itself.

Figure: Tree-based implementation of a partition consisting of three groups, A, B, and C.

With this partition data structure, operation find(p) is performed by walking up from position p to the root of its tree, which takes O(n) time in the worst case. Operation union(p, q) can be implemented by making one of the trees a subtree of the other. This can be done by first locating the two roots, and then in O(1) additional time by setting the parent reference of one root to point to the other root. See the figure below for an example of both operations.

Figure: Tree-based implementation of a partition: (a) operation union(p, q); (b) operation find(p), where p denotes the position object for a given element.
At first, this implementation may seem to be no better than the sequence-based data structure, but we add the following two simple heuristics to make it run faster.

Union-by-Size: With each position p, store the number of elements in the subtree rooted at p. In a union operation, make the root of the smaller group become a child of the other root, and update the size field of the larger root.

Path Compression: In a find operation, for each position q that the find visits, reset the parent of q to the root. (See the figure below.)

Figure: Path-compression heuristic: (a) path traversed by operation find on a given element; (b) restructured tree.

A surprising property of this data structure, when implemented using the union-by-size and path-compression heuristics, is that performing a series of k operations involving n elements takes O(k log* n) time, where log* n is the log-star function, which is the inverse of the tower-of-twos function. Intuitively, log* n is the number of times that one can iteratively take the logarithm (base 2) of a number before getting a number smaller than 2. The table below shows a few sample values.

    minimum n:  2    4    16    65,536    2^65,536
    log* n:     1    2    3     4         5

Table: Some values of log* n and critical values for its inverse.

Proposition: When using the tree-based partition representation with both union-by-size and path compression, performing a series of k make_group, union, and find operations on an initially empty partition involving at most n elements takes O(k log* n) time.

Although the analysis for this data structure is rather complex, its implementation is quite straightforward. We conclude with the complete Python code for the structure, given below.
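Before turning to that code, note that the log-star function itself is easy to compute; the following small sketch (an illustration, not from the text) counts how many times the base-2 logarithm can be applied before the value drops below 2.

    import math

    def log_star(n):
      """Number of times log2 can be applied to n before the result is < 2."""
      count = 0
      while n >= 2:
        n = math.log2(n)
        count += 1
      return count

    print([log_star(n) for n in (2, 4, 16, 65536)])   # [1, 2, 3, 4]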
    class Partition:
      """Union-find structure for maintaining disjoint sets."""

      #----------------------- nested Position class -----------------------
      class Position:
        __slots__ = '_container', '_element', '_size', '_parent'

        def __init__(self, container, e):
          """Create a new position that is the leader of its own group."""
          self._container = container       # reference to Partition instance
          self._element = e
          self._size = 1
          self._parent = self               # convention for a group leader

        def element(self):
          """Return element stored at this position."""
          return self._element

      #----------------------- public Partition methods -----------------------
      def make_group(self, e):
        """Makes a new group containing element e, and returns its Position."""
        return self.Position(self, e)

      def find(self, p):
        """Finds the group containing p and return the position of its leader."""
        if p._parent != p:
          p._parent = self.find(p._parent)  # overwrite p._parent after recursion
        return p._parent

      def union(self, p, q):
        """Merges the groups containing elements p and q (if distinct)."""
        a = self.find(p)
        b = self.find(q)
        if a is not b:                      # only merge if different groups
          if a._size > b._size:
            b._parent = a
            a._size += b._size
          else:
            a._parent = b
            b._size += a._size

Code Fragment: Python implementation of a Partition class using union-by-size and path compression.
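The following short sketch (not from the text) shows how the Partition class is typically driven, mirroring its use inside MST_Kruskal: each element gets a position via make_group, and find lets us test whether two elements currently belong to the same group before calling union.

    forest = Partition()
    positions = {x: forest.make_group(x) for x in 'ABCD'}

    forest.union(positions['A'], positions['B'])
    forest.union(positions['C'], positions['D'])
    same = forest.find(positions['A']) is forest.find(positions['B'])   # True
    diff = forest.find(positions['A']) is forest.find(positions['C'])   # False
    forest.union(positions['B'], positions['C'])                        # now a single group
    print(same, diff, forest.find(positions['A']) is forest.find(positions['D']))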
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

- Draw a simple undirected graph G that has 12 vertices, 18 edges, and 3 connected components.
- If G is a simple undirected graph with 12 vertices and 3 connected components, what is the largest number of edges it might have?
- Draw an adjacency matrix representation of the undirected graph shown in the referenced figure.
- Draw an adjacency list representation of the undirected graph shown in that figure.
- Draw a simple, connected, directed graph with 8 vertices and 16 edges such that the in-degree and out-degree of each vertex is 2. Show that there is a single (nonsimple) cycle that includes all the edges of your graph, that is, you can trace all the edges in their respective directions without ever lifting your pencil. (Such a cycle is called an Euler tour.)
- Suppose we represent a graph G having n vertices and m edges with the edge list structure. Why, in this case, does the insert_vertex method run in O(1) time while the remove_vertex method runs in O(m) time?
- Give pseudo-code for performing the operation insert_edge(u, v, x) in O(1) time using the adjacency matrix representation.
- Repeat the previous exercise for the adjacency list representation, as described in the chapter.
- Can an edge list E be omitted from the adjacency matrix representation while still achieving the time bounds given in the corresponding table? Why or why not?
- Can an edge list E be omitted from the adjacency list representation while still achieving the time bounds given in that table? Why or why not?
- Would you use the adjacency matrix structure or the adjacency list structure in each of the following cases? Justify your choice.
  a. The graph has 10,000 vertices and 20,000 edges, and it is important to use as little space as possible.
  b. The graph has 10,000 vertices and 20,000,000 edges, and it is important to use as little space as possible.
  c. You need to answer the query get_edge(u,v) as fast as possible, no matter how much space you use.
- Explain why the DFS traversal runs in O(n^2) time on an n-vertex simple graph that is represented with the adjacency matrix structure.
- In order to verify that all of its nontree edges are back edges, redraw the graph from the referenced figure so that the DFS tree edges are drawn with solid lines and oriented downward, as in a standard portrayal of a tree, and with all nontree edges drawn using dashed lines.
- A simple undirected graph is complete if it contains an edge between every pair of distinct vertices. What does a depth-first search tree of a complete graph look like?
- Recalling the definition of a complete graph from the previous exercise, what does a breadth-first search tree of a complete graph look like?
- Let G be an undirected graph whose vertices are the integers 1 through 8, and let the adjacent vertices of each vertex be given by the accompanying table (its entries are not reproduced here). Assume that, in a traversal of G, the adjacent vertices of a given vertex are returned in the same order as they are listed in the table.
  a. Draw G.
  b. Give the sequence of vertices of G visited using a DFS traversal starting at vertex 1.
  c. Give the sequence of vertices visited using a BFS traversal starting at vertex 1.
- Draw the transitive closure of the directed graph shown in the referenced figure.
- If the vertices of the graph from that figure are numbered as (v1 = JFK, v2 = LAX, v3 = MIA, v4 = BOS, v5 = ORD, v6 = SFO, v7 = DFW), in what order would edges be added to the transitive closure during the Floyd-Warshall algorithm?
- How many edges are in the transitive closure of a graph that consists of a simple directed path of n vertices?
- Given an n-node complete binary tree T, rooted at a given position, consider a directed graph G having the nodes of T as its vertices. For each parent-child pair in T, create a directed edge in G from the parent to the child. Show that the transitive closure of G has O(n log n) edges.
- Compute a topological ordering for the directed graph drawn with solid edges in the referenced figure.
- Bob loves foreign languages and wants to plan his course schedule for the following years. He is interested in the following nine language courses: LA15, LA16, LA22, LA31, LA32, LA126, LA127, LA141, and LA169. The course prerequisites are:
    LA15: (none)
    LA16: LA15
    LA22: (none)
    LA31: LA15
    LA32: LA16, LA31
    LA126: LA22, LA32
    LA127: LA16
    LA141: LA22, LA16
    LA169: LA32
  In what order can Bob take these courses, respecting the prerequisites?
- Draw a simple, connected, weighted graph with 8 vertices and 16 edges, each with unique edge weights. Identify one vertex as a "start" vertex and illustrate a running of Dijkstra's algorithm on this graph.
- Show how to modify the pseudo-code for Dijkstra's algorithm for the case when the graph is directed and we want to compute shortest directed paths from the source vertex to all the other vertices.
- Draw a simple, connected, undirected, weighted graph with 8 vertices and 16 edges, each with unique edge weights. Illustrate the execution of the Prim-Jarnik algorithm for computing the minimum spanning tree of this graph.
- Repeat the previous problem for Kruskal's algorithm.
- There are eight small islands in a lake, and the state wants to build seven bridges to connect them so that each island can be reached from any other one via one or more bridges. The cost of constructing a bridge is proportional to its length. The distances between pairs of islands are given in a table (not reproduced here). Find which bridges to build to minimize the total construction cost.
- Describe the meaning of the graphical conventions used in the figure illustrating a DFS traversal. What do the line thicknesses signify? What do the arrows signify? How about dashed lines?
- Repeat the previous exercise for the figure that illustrates a directed DFS traversal.
- Repeat the exercise for the figure that illustrates a BFS traversal.
- Repeat the exercise for the figure illustrating the Floyd-Warshall algorithm.
- Repeat the exercise for the figure that illustrates the topological sorting algorithm.
- Repeat the exercise for the figures illustrating Dijkstra's algorithm.
- Repeat the exercise for the figures that illustrate the Prim-Jarnik algorithm.
- Repeat the exercise for the figures that illustrate Kruskal's algorithm.
- George claims he has a fast way to do path compression in a partition structure, starting at a position p. He puts p into a list L, and starts following parent pointers. Each time he encounters a new position, q, he adds q to L and updates the parent pointer of each node in L to point to q's parent. Show that George's algorithm runs in Omega(h^2) time on a path of length h.

Creativity

- Give a Python implementation of the remove_vertex(v) method for our adjacency map implementation described earlier, making sure your implementation works for both directed and undirected graphs. Your method should run in O(deg(v)) time.
- Give a Python implementation of the remove_edge(e) method for our adjacency map implementation, making sure your implementation works for both directed and undirected graphs. Your method should run in O(1) time.
- Suppose we wish to represent an n-vertex graph G using the edge list structure, assuming that we identify the vertices with the integers in the set {0, 1, ..., n-1}. Describe how to implement the collection E to support O(log n)-time performance for the get_edge(u,v) method. How are you implementing the method in this case?
- Let T be the spanning tree rooted at the start vertex produced by the depth-first search of a connected, undirected graph G. Argue why every edge of G not in T goes from a vertex in T to one of its ancestors, that is, it is a back edge.
- Our solution to reporting a path from u to v in the earlier code fragment could be made more efficient in practice if the DFS process ended as soon as v is discovered. Describe how to modify our code base to implement this optimization.
- Let G be an undirected graph with n vertices and m edges. Describe an O(n + m)-time algorithm for traversing each edge of G exactly once in each direction.
- Implement an algorithm that returns a cycle in a directed graph G, if one exists.
- Write a function, components(g), for an undirected graph g, that returns a dictionary mapping each vertex to an integer that serves as an identifier for its connected component. That is, two vertices should be mapped to the same identifier if and only if they are in the same connected component.
- Say that a maze is constructed correctly if there is one path from the start to the finish, the entire maze is reachable from the start, and there are no loops around any portions of the maze. Given a maze drawn in an n x n grid, how can we determine if it is constructed correctly? What is the running time of this algorithm?
- Computer networks should avoid single points of failure, that is, network vertices that can disconnect the network if they fail. We say an undirected, connected graph G is biconnected if it contains no vertex whose removal would divide G into two or more connected components. Give an algorithm for adding at most n edges to a connected graph G, with n >= 3 vertices and m >= n - 1 edges, to guarantee that G is biconnected. Your algorithm should run in O(n + m) time.
- Explain why all nontree edges are cross edges, with respect to a BFS tree constructed for an undirected graph.
- Explain why there are no forward nontree edges with respect to a BFS tree constructed for a directed graph.
- Show that if T is a BFS tree produced for a connected graph G, then, for each vertex v at level i, the path of T between s and v has i edges, and any other path of G between s and v has at least i edges.
- Justify the proposition referenced in the text.
- Provide an implementation of the BFS algorithm that uses a FIFO queue, rather than a level-by-level formulation, to manage vertices that have been discovered until the time when their neighbors are considered.
- A graph G is bipartite if its vertices can be partitioned into two sets X and Y such that every edge in G has one end vertex in X and the other in Y. Design and analyze an efficient algorithm for determining if an undirected graph G is bipartite (without knowing the sets X and Y in advance).
- An Euler tour of a directed graph G with n vertices and m edges is a cycle that traverses each edge of G exactly once according to its direction. Such a tour always exists if G is connected and the in-degree equals the out-degree of each vertex in G. Describe an O(n + m)-time algorithm for finding an Euler tour of such a directed graph G.
- A company named RT&T has a network of n switching stations connected by m high-speed communication links. Each customer's phone is directly connected to one station in his or her area. The engineers of RT&T have developed a prototype video-phone system that allows two customers to see each other during a phone call. In order to have acceptable image quality, however, the number of links used to transmit video signals between the two parties cannot exceed 4. Suppose that RT&T's network is represented by a graph. Design an efficient algorithm that computes, for each station, the set of stations it can reach using no more than 4 links.
- The time delay of a long-distance call can be determined by multiplying a small fixed constant by the number of communication links on the telephone network between the caller and callee. Suppose the telephone network of a company named RT&T is a tree. The engineers of RT&T want to compute the maximum possible time delay that may be experienced in a long-distance call. Given a tree T, the diameter of T is the length of a longest path between two nodes of T. Give an efficient algorithm for computing the diameter of T.
- Tamarindo University and many other schools worldwide are doing a joint project on multimedia. A computer network is built to connect these schools using communication links that form a tree. The schools decide to install a file server at one of the schools to share data among all the schools. Since the transmission time on a link is dominated by the link setup and synchronization, the cost of a data transfer is proportional to the number of links used. Hence, it is desirable to choose a "central" location for the file server. Given a tree T and a node v of T, the eccentricity of v is the length of a longest path from v to any other node of T. A node of T with minimum eccentricity is called a center of T.
  a. Design an efficient algorithm that, given an n-node tree T, computes a center of T.
  b. Is the center unique? If not, how many distinct centers can a tree have?
- Say that an n-vertex directed acyclic graph G is compact if there is some way of numbering the vertices of G with the integers from 0 to n - 1 such that G contains the edge (i, j) if and only if i < j, for all i, j in [0, n - 1]. Give an O(n^2)-time algorithm for detecting if G is compact.
- Let G be a weighted directed graph with n vertices. Design a variation of the Floyd-Warshall algorithm for computing the lengths of the shortest paths from each vertex to every other vertex in O(n^3) time.
- Design an efficient algorithm for finding a longest directed path from a vertex s to a vertex t of an acyclic weighted directed graph G. Specify the graph representation used and any auxiliary data structures used. Also, analyze the time complexity of your algorithm.
- An independent set of an undirected graph G = (V,E) is a subset I of V such that no two vertices in I are adjacent. That is, if u and v are in I, then (u,v) is not in E. A maximal independent set M is an independent set such that, if we were to add any additional vertex to M, then it would not be independent any more. Every graph has a maximal independent set. (Can you see this? This question is not part of the exercise, but it is worth thinking about.) Give an efficient algorithm that computes a maximal independent set for a graph G. What is this method's running time?
- Give an example of an n-vertex simple graph G that causes Dijkstra's algorithm to run in Omega(n^2 log n) time when it is implemented with a heap.
- Give an example of a weighted directed graph G with negative-weight edges, but no negative-weight cycle, such that Dijkstra's algorithm incorrectly computes the shortest-path distances from some start vertex.
- Consider the following greedy strategy for finding a shortest path from vertex start to vertex goal in a given connected graph.
  1. Initialize path to start.
  2. Initialize set visited to {start}.
  3. If start = goal, return path and exit. Otherwise, continue.
  4. Find the edge (start, v) of minimum weight such that v is adjacent to start and v is not in visited.
  5. Add v to path.
  6. Add v to visited.
  7. Set start equal to v and go to step 3.
  Does this greedy strategy always find a shortest path from start to goal? Either explain intuitively why it works, or give a counterexample.
- Our implementation of shortest path lengths in the earlier code fragment relies on use of "infinity" as a numeric value, to represent the distance bound for vertices that are not (yet) known to be reachable from the source. Reimplement that function without such a sentinel, so that vertices, other than the source, are not added to the priority queue until it is evident that they are reachable.
- Show that if all the weights in a connected weighted graph G are distinct, then there is exactly one minimum spanning tree for G.
- An old MST method, called Baruvka's algorithm, works as follows on a graph G having n vertices and m edges with distinct weights:
    Let T be a subgraph of G initially containing just the vertices in V.
    while T has fewer than n - 1 edges do
      for each connected component Ci of T do
        Find the lowest-weight edge (u,v) in E with u in Ci and v not in Ci.
        Add (u,v) to T (unless it is already in T).
    return T
  Prove that this algorithm is correct and that it runs in O(m log n) time.
- Let G be a graph with n vertices and m edges such that all the edge weights in G are integers in the range [1, n]. Give an algorithm for finding a minimum spanning tree for G in O(m log* n) time.
- Consider a diagram of a telephone network, which is a graph G whose vertices represent switching centers, and whose edges represent communication lines joining pairs of centers. Edges are marked by their bandwidth, and the bandwidth of a path is equal to the lowest bandwidth among the path's edges. Give an algorithm that, given a network and two switching centers a and b, outputs the maximum bandwidth of a path between a and b.
- NASA wants to link n stations spread over the country using communication channels. Each pair of stations has a different bandwidth available, which is known a priori. NASA wants to select n - 1 channels (the minimum possible) in such a way that all the stations are linked by the channels and the total bandwidth (defined as the sum of the individual bandwidths of the channels) is maximum. Give an efficient algorithm for this problem and determine its worst-case time complexity. Consider the weighted graph G = (V,E), where V is the set of stations and E is the set of channels between the stations. Define the weight w(e) of an edge e in E as the bandwidth of the corresponding channel.
- Inside the castle of Asymptopia there is a maze, and along each corridor of the maze there is a bag of gold coins. The amount of gold in each bag varies. A noble knight, named Sir Paul, will be given the opportunity to walk through the maze, picking up bags of gold. He may enter the maze only through a door marked "ENTER" and exit through another door marked "EXIT." While in the maze he may not retrace his steps. Each corridor of the maze has an arrow painted on the wall. Sir Paul may only go down a corridor in the direction of the arrow. There is no way to traverse a "loop" in the maze. Given a map of the maze, including the amount of gold in each corridor, describe an algorithm to help Sir Paul pick up the most gold.
- Suppose you are given a timetable, which consists of:
  * A set A of n airports, and for each airport a in A, a minimum connecting time c(a).
  * A set F of m flights, and the following, for each flight f in F:
    - Origin airport a1(f) in A
    - Destination airport a2(f) in A
    - Departure time t1(f)
    - Arrival time t2(f)
  Describe an efficient algorithm for the flight scheduling problem. In this problem, we are given airports a and b, and a time t, and we wish to compute a sequence of flights that allows one to arrive at the earliest possible time in b when departing from a at or after time t. Minimum connecting times at intermediate airports must be observed. What is the running time of your algorithm as a function of n and m?
- Suppose we are given a directed graph G with n vertices, and let M be the n x n adjacency matrix corresponding to G.
  a. Let the product of M with itself (M^2) be defined, for 1 <= i,j <= n, as follows:
       M^2(i,j) = M(i,1) ⊙ M(1,j) ⊕ ... ⊕ M(i,n) ⊙ M(n,j),
     where "⊕" is the Boolean or operator and "⊙" is Boolean and. Given this definition, what does M^2(i,j) = 1 imply about the vertices i and j? What if M^2(i,j) = 0?
  b. Suppose M^4 is the product of M^2 with itself. What do the entries of M^4 signify? How about the entries of M^5 = (M^4)(M)? In general, what information is contained in the matrix M^p?
  c. Now suppose that G is weighted and assume the following:
       1: for 1 <= i <= n, M(i,i) = 0.
       2: for 1 <= i,j <= n, M(i,j) = weight(i,j) if (i,j) is in E.
       3: for 1 <= i,j <= n, M(i,j) = infinity if (i,j) is not in E.
     Also, let M^2 be defined, for 1 <= i,j <= n, as follows:
       M^2(i,j) = min{M(i,1) + M(1,j), ..., M(i,n) + M(n,j)}.
     If M^2(i,j) = k, what may we conclude about the relationship between vertices i and j?
- Karen has a new way to do path compression in a tree-based union/find partition data structure, starting at a position p. She puts all the positions that are on the path from p to the root in a set S. Then she scans through S and sets the parent pointer of each position in S to its parent's parent
pointer (recall that the parent pointer of the root points to itself). If this pass changed the value of any position's parent pointer, then she repeats this process, and goes on repeating this process until she makes a scan through S that does not change any position's parent value. Show that Karen's algorithm is correct and analyze its running time for a path of length h.

Projects

- Use an adjacency matrix to implement a class supporting a simplified graph ADT that does not include update methods. Your class should include a constructor method that takes two collections--a collection V of vertex elements and a collection E of pairs of vertex elements--and produces the graph G that these two collections represent.
- Implement the simplified graph ADT described in the previous project, using the edge list structure.
- Implement the simplified graph ADT described in that project, using the adjacency list structure.
- Extend the class of the first of these projects to support the update methods of the graph ADT.
- Design an experimental comparison of repeated DFS traversals versus the Floyd-Warshall algorithm for computing the transitive closure of a directed graph.
- Perform an experimental comparison of two of the minimum spanning tree algorithms discussed in this chapter (Kruskal and Prim-Jarnik). Develop an extensive set of experiments to test the running times of these algorithms using randomly generated graphs.
- One way to construct a maze starts with an n x n grid such that each grid cell is bounded by four unit-length walls. We then remove two boundary unit-length walls, to represent the start and finish. For each remaining unit-length wall not on the boundary, we assign a random value and create a graph G, called the dual, such that each grid cell is a vertex in G and there is an edge joining the vertices for two cells if and only if the cells share a common wall. The weight of each edge is the weight of the corresponding wall. We construct the maze by finding a minimum spanning tree T for G and removing all the walls corresponding to edges in T. Write a program that uses this algorithm to generate mazes and then solves them. Minimally, your program should draw the maze and, ideally, it should visualize the solution as well.
- Write a program that builds the routing tables for the nodes in a computer network, based on shortest-path routing, where path distance is measured by hop count, that is, the number of edges in a path. The input for this problem is the connectivity information for all the nodes in the network, given as one line per node: the node's address followed by the addresses of the nodes connected to it, that is, the nodes that are one hop away. The routing table for the node at address A is a set of pairs (B, C), which indicates that, to route a message from A to B, the next node to send to (on the shortest path from A to B) is C. Your program should output the routing table for each node in the network, given an input list of node connectivity lists, each of which is input in the syntax described above, one per line.

Chapter Notes

The depth-first search method is part of the "folklore" of computer science, but Hopcroft and Tarjan are the ones who showed how useful this algorithm is for solving several different graph problems. Knuth discusses the topological sorting problem. The simple linear-time algorithm that we describe for determining if a directed graph is strongly connected is due to Kosaraju. The Floyd-Warshall algorithm appears in a paper by Floyd and is based upon a theorem of Warshall.

The first known minimum spanning tree algorithm is due to Baruvka, and was published in 1926. The Prim-Jarnik algorithm was first published in Czech by Jarnik in 1930 and in English in 1957 by Prim. Kruskal published his minimum spanning tree algorithm in 1956. The reader interested in further study of the history of the minimum spanning tree problem is referred to the paper by Graham and Hell. The current asymptotically fastest minimum spanning tree algorithm is a randomized method of Karger, Klein, and Tarjan that runs in O(m) expected time.

Dijkstra published his single-source, shortest-path algorithm in 1959. The running time for the Prim-Jarnik algorithm, and also that of Dijkstra's algorithm, can actually be improved to be O(m + n log n) by implementing the queue Q with either of two more sophisticated data structures, the "Fibonacci heap" or the "relaxed heap."

To learn about different algorithms for drawing graphs, please see the book by Tamassia and Liotta and the book by Di Battista, Eades, Tamassia, and Tollis. The reader interested in further study of graph algorithms is referred to the books by Ahuja, Magnanti, and Orlin; Cormen, Leiserson, Rivest, and Stein; Mehlhorn; and Tarjan; and the book by van Leeuwen.
Memory Management and B-Trees

Contents
  Memory Management
    Memory Allocation
    Garbage Collection
    Additional Memory Used by the Python Interpreter
  Memory Hierarchies and Caching
    Memory Systems
    Caching Strategies
  External Searching and B-Trees
    (a,b) Trees
    B-Trees
  External-Memory Sorting
    Multiway Merging
  Exercises
Our study of data structures thus far has focused primarily upon the efficiency of computations, as measured by the number of primitive operations that are executed on a central processing unit (CPU). In practice, the performance of a computer system is also greatly impacted by the management of the computer's memory systems. In our analysis of data structures, we have provided asymptotic bounds for the overall amount of memory used by a data structure. In this chapter, we consider more subtle issues involving the use of a computer's memory system.

We first discuss ways in which memory is allocated and deallocated during the execution of a computer program, and the impact that this has on the program's performance. Second, we discuss the complexity of multilevel memory hierarchies in today's computer systems. Although we often abstract a computer's memory as consisting of a single pool of interchangeable locations, in practice, the data used by an executing program is stored and transferred between a combination of physical memories (e.g., CPU registers, caches, internal memory, and external memory). We consider the use of classic data structures in the algorithms used to manage memory, and how the use of memory hierarchies impacts the choice of data structures and algorithms for classic problems such as searching and sorting.

Memory Management

In order to implement any data structure on an actual computer, we need to use computer memory. Computer memory is organized into a sequence of words, each of which typically consists of 4, 8, or 16 bytes (depending on the computer). These memory words are numbered from 0 to N - 1, where N is the number of memory words available to the computer. The number associated with each memory word is known as its memory address. Thus, the memory in a computer can be viewed as basically one giant array of memory words. For example, an earlier figure portrayed a section of the computer's memory as such a sequence of numbered cells.

In order to run programs and store information, the computer's memory must be managed so as to determine what data is stored in what memory cells. In this section, we discuss the basics of memory management, most notably describing the way in which memory is allocated to store new objects, the way in which portions of memory are deallocated and reclaimed, when no longer needed, and the way in which the Python interpreter uses memory in completing its tasks.
Memory Allocation

With Python, all objects are stored in a pool of memory, known as the memory heap or Python heap (which should not be confused with the "heap" data structure presented in an earlier chapter). When a command such as

    w = Widget()

is executed, assuming Widget is the name of a class, a new instance of the class is created and stored somewhere within the memory heap. The Python interpreter is responsible for negotiating the use of space with the operating system and for managing the use of the memory heap when executing a Python program.

The storage available in the memory heap is divided into blocks, which are contiguous array-like "chunks" of memory that may be of variable or fixed sizes. The system must be implemented so that it can quickly allocate memory for new objects. One popular method is to keep contiguous "holes" of available free memory in a linked list, called the free list. The links joining these holes are stored inside the holes themselves, since their memory is not being used. As memory is allocated and deallocated, the collection of holes in the free list changes, with the unused memory being separated into disjoint holes divided by blocks of used memory. This separation of unused memory into separate holes is known as fragmentation. The problem is that it becomes more difficult to find large continuous chunks of memory, when needed, even though an equivalent amount of memory may be unused (yet fragmented). Therefore, we would like to minimize fragmentation as much as possible.

There are two kinds of fragmentation that can occur. Internal fragmentation occurs when a portion of an allocated memory block is unused. For example, a program may request an array of size 1000, but only use the first 100 cells of this array. There is not much that a run-time environment can do to reduce internal fragmentation. External fragmentation, on the other hand, occurs when there is a significant amount of unused memory between several contiguous blocks of allocated memory. Since the run-time environment has control over where to allocate memory when it is requested, the run-time environment should allocate memory in a way that tries to reduce external fragmentation as much as reasonably possible.

Several heuristics have been suggested for allocating memory from the heap so as to minimize external fragmentation. The best-fit algorithm searches the entire free list to find the hole whose size is closest to the amount of memory being requested. The first-fit algorithm searches from the beginning of the free list for the first hole that is large enough. The next-fit algorithm is similar, in that it also searches the free list for the first hole that is large enough, but it begins its search from where it left off previously, viewing the free list as a circularly linked list. The worst-fit algorithm searches the free list to find the largest hole of available memory, which might be done faster than a search of the entire free list if this list were maintained as a priority queue. In each algorithm, the requested amount of memory is subtracted from the chosen memory hole and the leftover part of that hole is returned to the free list.

Although it might sound good at first, the best-fit algorithm tends to produce the worst external fragmentation, since the leftover parts of the chosen holes tend to be small. The first-fit algorithm is fast, but it tends to produce a lot of external fragmentation at the front of the free list, which slows down future searches. The next-fit algorithm spreads fragmentation more evenly throughout the memory heap, thus keeping search times low. This spreading also makes it more difficult to allocate large blocks, however. The worst-fit algorithm attempts to avoid this problem by keeping contiguous sections of free memory as large as possible.
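To make these allocation heuristics concrete, here is a minimal sketch of the first-fit strategy (an illustrative model, not how the Python interpreter actually manages its heap); each hole is represented simply as a (start, size) pair, and any leftover portion of the chosen hole is returned to the free list.

    def first_fit_allocate(free_list, request):
      """Allocate `request` words from a free list of (start, size) holes.

      Return (start_address, new_free_list), or (None, free_list) on failure.
      """
      for i, (start, size) in enumerate(free_list):
        if size >= request:                       # first hole that is large enough
          leftover = size - request
          new_list = free_list[:i] + free_list[i+1:]
          if leftover > 0:                        # return unused tail to the free list
            new_list.insert(i, (start + request, leftover))
          return start, new_list
      return None, free_list                      # no hole is large enough

    # Example: holes at addresses 0 (8 words) and 20 (32 words); request 12 words.
    addr, holes = first_fit_allocate([(0, 8), (20, 32)], 12)
    print(addr, holes)      # 20 [(0, 8), (32, 20)]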
Garbage Collection

In some languages, like C and C++, the memory space for objects must be explicitly deallocated by the programmer, which is a duty often overlooked by beginning programmers and is the source of frustrating programming errors even for experienced programmers. The designers of Python instead placed the burden of memory management entirely on the interpreter. The process of detecting "stale" objects, deallocating the space devoted to those objects, and returning the reclaimed space to the free list is known as garbage collection.

To perform automated garbage collection, there must first be a way to detect those objects that are no longer necessary. Since the interpreter cannot feasibly analyze the semantics of an arbitrary Python program, it relies on the following conservative rule for reclaiming objects. In order for a program to access an object, it must have a direct or indirect reference to that object. We will define such objects to be live objects. In defining a live object, a direct reference to an object is in the form of an identifier in an active namespace (i.e., the global namespace, or the local namespace for any active function). For example, immediately after the command w = Widget() is executed, identifier w will be defined in the current namespace as a reference to the new widget object. We refer to all such objects with direct references as root objects. An indirect reference to a live object is a reference that occurs within the state of some other live object. For example, if the Widget instance in our earlier example maintains a list as an attribute, that list is also a live object (as it can be reached indirectly through use of identifier w). The set of live objects is defined recursively; thus, any objects that are referenced within the list that is referenced by the widget are also classified as live objects.

The Python interpreter assumes that live objects are the active objects currently being used by the running program; these objects should not be deallocated. Other objects can be garbage collected. Python relies on the following two strategies for determining which objects are live.
Reference Counts

Within the state of every Python object is an integer known as its reference count. This is the count of how many references to the object exist anywhere in the system. Every time a reference is assigned to this object, its reference count is incremented, and every time one of those references is reassigned to something else, the reference count for the former object is decremented. The maintenance of a reference count for each object adds O(1) space per object, and the increments and decrements to the count add O(1) additional computation time per such operation.

The Python interpreter allows a running program to examine an object's reference count. Within the sys module there is a function named getrefcount that returns an integer equal to the reference count for the object sent as a parameter. It is worth noting that because the formal parameter of that function is assigned to the actual parameter sent by the caller, there is temporarily one additional reference to that object in the local namespace of the function at the time the count is reported.

The advantage of having a reference count for each object is that if an object's count is ever decremented to zero, that object cannot possibly be a live object and therefore the system can immediately deallocate the object (or place it in a queue of objects that are ready to be deallocated).

Cycle Detection

Although it is clear that an object with a reference count of zero cannot be a live object, it is important to recognize that an object with a nonzero reference count need not qualify as live. There may exist a group of objects that have references to each other, even though none of those objects are reachable from a root object. For example, a running Python program may have an identifier, data, that is a reference to a sequence implemented using a doubly linked list. In this case, the list referenced by data is a root object, the header and trailer nodes that are stored as attributes of the list are live objects, as are all the intermediate nodes of the list that are indirectly referenced and all the elements that are referenced as elements of those nodes. If the identifier, data, were to go out of scope, or to be reassigned to some other object, the reference count for the list instance may go to zero and be garbage collected, but the reference counts for all of the nodes would remain nonzero, stopping them from being garbage collected by the simple rule above.

Every so often, in particular when the available space in the memory heap is becoming scarce, the Python interpreter uses a more advanced form of garbage collection to reclaim objects that are unreachable, despite their nonzero reference counts. There are different algorithms for implementing cycle detection. (The mechanics of garbage collection in Python are abstracted in the gc module, and may vary depending on the implementation of the interpreter.) A classic algorithm for garbage collection is the mark-sweep algorithm, which we next discuss.
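As a small illustration of this behavior, the standard sys.getrefcount function can be used directly; the exact counts printed vary with the interpreter version, so treat the numbers as indicative only.

    import sys

    data = [1, 2, 3]
    print(sys.getrefcount(data))    # count includes the temporary reference
                                    # created by passing data to getrefcount
    alias = data                    # assigning another reference increments the count
    print(sys.getrefcount(data))
    alias = None                    # reassigning the alias decrements it again
    print(sys.getrefcount(data))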
memory management and -trees the mark-sweep algorithm in the mark-sweep garbage collection algorithmwe associate "markbit with each object that identifies whether that object is live when we determine at some point that garbage collection is neededwe suspend all other activity and clear the mark bits of all the objects currently allocated in the memory heap we then trace through the active namespaces and we mark all the root objects as "live we must then determine all the other live objects--the ones that are reachable from the root objects to do this efficientlywe can perform depth-first search (see section on the directed graph that is defined by objects reference other objects in this caseeach object in the memory heap is viewed as vertex in directed graphand the reference from one object to another is viewed as directed edge by performing directed dfs from each root objectwe can correctly identify and mark each live object this process is known as the "markphase once this process has completedwe then scan through the memory heap and reclaim any space that is being used for an object that has not been marked at this timewe can also optionally coalesce all the allocated space in the memory heap into single blockthereby eliminating external fragmentation for the time being this scanning and reclamation process is known as the "sweepphaseand when it completeswe resume running the suspended program thusthe mark-sweep garbage collection algorithm will reclaim unused space in time proportional to the number of live objects and their references plus the size of the memory heap performing dfs in-place the mark-sweep algorithm correctly reclaims unused space in the memory heapbut there is an important issue we must face during the mark phase since we are reclaiming memory space at time when available memory is scarcewe must take care not to use extra space during the garbage collection itself the trouble is that the dfs algorithmin the recursive way we have described it in section can use space proportional to the number of vertices in the graph in the case of garbage collectionthe vertices in our graph are the objects in the memory heaphencewe probably do not have this much memory to use so our only alternative is to find way to perform dfs in-place rather than recursivelythat iswe must perform dfs using only constant amount of additional storage the main idea for performing dfs in-place is to simulate the recursion stack using the edges of the graph (which in the case of garbage collection correspond to object referenceswhen we traverse an edge from visited vertex to new vertex wwe change the edge (vwstored in ' adjacency list to point back to ' parent in the dfs tree when we return back to (simulating the return from the "recursivecall at )we can then switch the edge we modified to point back to wassuming we have some way to identify which edge we need to change back
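The mark and sweep phases can be sketched on a toy heap represented as a dictionary mapping object identifiers to the identifiers they reference. This is an illustration with hypothetical names, and for brevity its mark phase uses an explicit stack for the DFS rather than the constant-space pointer-reversal technique described above.

def mark_sweep(heap, roots):
    """heap: dict mapping an object id to the list of ids it references.
    roots: iterable of ids reachable directly from active namespaces.
    Returns the set of ids that survive collection (the live objects)."""
    marked = set()
    stack = list(roots)               # mark phase: DFS from every root object
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])   # follow outgoing references
    for obj in list(heap):            # sweep phase: reclaim unmarked objects
        if obj not in marked:
            del heap[obj]
    return marked

heap = {'w': ['lst'], 'lst': ['x', 'y'], 'x': [], 'y': [], 'orphan': ['x']}
mark_sweep(heap, roots=['w'])         # 'orphan' is swept; the rest survive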
additional memory used by the python interpreter we have discussedin section how the python interpreter allocates memory for objects within memory heap howeverthis is not the only memory that is used when executing python program in this sectionwe discuss some other important uses of memory the run-time call stack stacks have most important application to the run-time environment of python programs running python program has private stackknown as the call stack or python interpreter stackthat is used to keep track of the nested sequence of currently active (that isnonterminatedinvocations of functions each entry of the stack is structure known as an activation record or framestoring important information about an invocation of function at the top of the call stack is the activation record of the running callthat isthe function activation that currently has control of the execution the remaining elements of the stack are activation records of the suspended callsthat isfunctions that have invoked another function and are currently waiting for that other function to return control when it terminates the order of the elements in the stack corresponds to the chain of invocations of the currently active functions when new function is calledan activation record for that call is pushed onto the stack when it terminatesits activation record is popped from the stack and the python interpreter resumes the processing of the previously suspended call each activation record includes dictionary representing the local namespace for the function call (see sections and for further discussion of namespacesthe namespace maps identifierswhich serve as parameters and local variablesto object valuesalthough the objects being referenced still reside in the memory heap the activation record for function call also includes reference to the function definition itselfand special variableknown as the program counterto maintain the address of the statement within the function that is currently executing when one function returns control to anotherthe stored program counter for the suspended function allows the interpreter to properly continue execution of that function implementing recursion one of the benefits of using stack to implement the nesting of function calls is that it allows programs to use recursion that isit allows function to call itselfas discussed in we implicitly described the concept of the call stack and the use of activation records within our portrayal of recursion traces in
that interestinglyearly programming languagessuch as cobol and fortrandid not originally use call stacks to implement function calls but because of the elegance and efficiency that recursion allowsalmost all modern programming languages utilize call stack for function callsincluding the current versions of classic languages like cobol and fortran each box of recursive trace corresponds to an activation record that is placed on the call stack during the execution of recursive function at any point in timethe content of the call stack corresponds to the chain of boxes from the initial function invocation to the current one to better illustrate how call stack is used by recursive functionswe refer back to the python implementation of the classic recursive definition of the factorial functionnn( )( with the code originally given in code fragment and the recursive trace in figure the first time we call factorialits activation record includes namespace storing the parameter value the function recursively calls itself to compute ( )!causing new activation recordwith its own namespace and parameterto be pushed onto the call stack in turnthis recursive invocation calls itself to compute ( )!and so on the chain of recursive invocationsand thus the call stackgrows up to size with the most deeply nested call being factorial( )which returns without any further recursion the run-time stack allows several invocations of the factorial function to exist simultaneously each has an activation record that stores the value of its parameterand eventually the value to be returned when the first recursive call eventually terminatesit returns ( )!which is then multiplied by to compute nfor the original call of the factorial method the operand stack interestinglythere is actually another place where the python interpreter uses stack arithmetic expressionssuch as (( ( ))/eare evaluated by the interpreter using an operand stack in section we described how to evaluate an arithmetic expression using postorder traversal of an explicit expression tree we described that algorithm in recursive wayhoweverthis recursive description can be simulated using nonrecursive process that maintains an explicit operand stack simple binary operationsuch as bis computed by pushing on the stackpushing on the stackand then calling an instruction that pops the top two items from the stackperforms the binary operation on themand pushes the result back onto the stack likewiseinstructions for writing and reading elements to and from memory involve the use of pop and push methods for the operand stack
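The operand stack can be mimicked directly in Python. The following sketch (our own, assuming space-separated tokens and only the four binary operators shown) evaluates a postfix expression nonrecursively: operands are pushed, and each operator pops the top two items, applies the operation, and pushes the result back onto the stack.

def eval_postfix(expression):
    """Evaluate a space-separated postfix expression, e.g. '5 2 + 8 3 - *'."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []                              # the operand stack
    for token in expression.split():
        if token in ops:
            b = stack.pop()                 # pop the top two operands,
            a = stack.pop()                 # apply the operator, and push
            stack.append(ops[token](a, b))  # the result back on the stack
        else:
            stack.append(float(token))      # operands are simply pushed
    return stack.pop()

print(eval_postfix('5 2 + 8 3 - *'))        # ((5 + 2) * (8 - 3)) = 35.0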
memory hierarchies and caching with the increased use of computing in societysoftware applications must manage extremely large data sets such applications include the processing of online financial transactionsthe organization and maintenance of databasesand analyses of customerspurchasing histories and preferences the amount of data can be so large that the overall performance of algorithms and data structures sometimes depends more on the time to access the data than on the speed of the cpu memory systems in order to accommodate large data setscomputers have hierarchy of different kinds of memorieswhich vary in terms of their size and distance from the cpu closest to the cpu are the internal registers that the cpu itself uses access to such locations is very fastbut there are relatively few such locations at the second level in the hierarchy are one or more memory caches this memory is considerably larger than the register set of cpubut accessing it takes longer at the third level in the hierarchy is the internal memorywhich is also known as main memory or core memory the internal memory is considerably larger than the cache memorybut also requires more time to access another level in the hierarchy is the external memorywhich usually consists of diskscd drivesdvd drivesand/or tapes this memory is very largebut it is also very slow data stored through an external network can be viewed as yet another level in this hierarchywith even greater storage capacitybut even slower access thusthe memory hierarchy for computers can be viewed as consisting of five or more levelseach of which is larger and slower than the previous level (see figure during the execution of programdata is routinely copied from one level of the hierarchy to neighboring leveland these transfers can become computational bottleneck network storage external memory internal memory caches bigger registers cpu figure the memory hierarchy faster
memory management and -trees caching strategies the significance of the memory hierarchy on the performance of program depends greatly upon the size of the problem we are trying to solve and the physical characteristics of the computer system oftenthe bottleneck occurs between two levels of the memory hierarchy--the one that can hold all data items and the level just below that one for problem that can fit entirely in main memorythe two most important levels are the cache memory and the internal memory access times for internal memory can be as much as to times longer than those for cache memory it is desirablethereforeto be able to perform most memory accesses in cache memory for problem that does not fit entirely in main memoryon the other handthe two most important levels are the internal memory and the external memory here the differences are even more dramaticfor access times for disksthe usual general-purpose external-memory deviceare typically as much as to times longer than those for internal memory to put this latter figure into perspectiveimagine there is student in baltimore who wants to send request-for-money message to his parents in chicago if the student sends his parents an email messageit can arrive at their home computer in about five seconds think of this mode of communication as corresponding to an internal-memory access by cpu mode of communication corresponding to an external-memory access that is times slower would be for the student to walk to chicago and deliver his message in personwhich would take about month if he can average miles per day thuswe should make as few accesses to external memory as possible most algorithms are not designed with the memory hierarchy in mindin spite of the great variance between access times for the different levels indeedall of the algorithm analyses described in this book so far have assumed that all memory accesses are equal this assumption might seemat firstto be great oversight-and one we are only addressing now in the final -but there are good reasons why it is actually reasonable assumption to make one justification for this assumption is that it is often necessary to assume that all memory accesses take the same amount of timesince specific device-dependent information about memory sizes is often hard to come by in factinformation about memory size may be difficult to get for examplea python program that is designed to run on many different computer platforms cannot easily be defined in terms of specific computer architecture configuration we can certainly use architecture-specific informationif we have it (and we will show how to exploit such information later in this but once we have optimized our software for certain architecture configurationour software will no longer be deviceindependent fortunatelysuch optimizations are not always necessaryprimarily because of the second justification for the equal-time memory-access assumption
caching and blocking another justification for the memory-access equality assumption is that operating system designers have developed general mechanisms that allow most memory accesses to be fast these mechanisms are based on two important locality-ofreference properties that most software possessestemporal localityif program accesses certain memory locationthen there is increased likelihood that it accesses that same location again in the near future for exampleit is common to use the value of counter variable in several different expressionsincluding one to increment the counter' value in facta common adage among computer architects is that program spends percent of its time in percent of its code spatial localityif program accesses certain memory locationthen there is increased likelihood that it soon accesses other locations that are near this one for examplea program using an array may be likely to access the locations of this array in sequential or near-sequential manner computer scientists and engineers have performed extensive software profiling experiments to justify the claim that most software possesses both of these kinds of locality of reference for examplea nested for loop used to repeatedly scan through an array will exhibit both kinds of locality temporal and spatial localities havein turngiven rise to two fundamental design choices for multilevel computer memory systems (which are present in the interface between cache memory and internal memoryand also in the interface between internal memory and external memorythe first design choice is called virtual memory this concept consists of providing an address space as large as the capacity of the secondary-level memoryand of transferring data located in the secondary level into the primary levelwhen they are addressed virtual memory does not limit the programmer to the constraint of the internal memory size the concept of bringing data into primary memory is called cachingand it is motivated by temporal locality by bringing data into primary memorywe are hoping that it will be accessed again soonand we will be able to respond quickly to all the requests for this data that come in the near future the second design choice is motivated by spatial locality specificallyif data stored at secondary-level memory location is accessedthen we bring into primary-level memory large block of contiguous locations that include the location (see figure this concept is known as blockingand it is motivated by the expectation that other secondary-level memory locations close to will soon be accessed in the interface between cache memory and internal memorysuch blocks are often called cache linesand in the interface between internal memory and external memorysuch blocks are often called pages
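The payoff of blocking and spatial locality can be quantified with a toy simulation. The sketch below (our own code, with made-up sizes) counts the block transfers performed by a cache that holds a single block: a sequential scan of n elements costs only about n/B transfers, because each transfer brings in B neighboring elements, while a scattered access pattern costs close to n transfers.

import random

def count_transfers(access_sequence, block_size):
    """Count block transfers for a cache that holds a single block."""
    transfers = 0
    cached_block = None
    for index in access_sequence:
        block = index // block_size          # which block holds this element
        if block != cached_block:            # miss: fetch the whole block
            transfers += 1
            cached_block = block
    return transfers

n, B = 100_000, 1_000
sequential = range(n)
scattered = [random.randrange(n) for _ in range(n)]
print(count_transfers(sequential, B))        # n / B = 100 transfers
print(count_transfers(scattered, B))         # close to n transfers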
block on disk block in the external memory address space figure blocks in external memory when implemented with caching and blockingvirtual memory often allows us to perceive secondary-level memory as being faster than it really is there is still problemhowever primary-level memory is much smaller than secondarylevel memory moreoverbecause memory systems use blockingany program of substance will likely reach point where it requests data from secondary-level memorybut the primary memory is already full of blocks in order to fulfill the request and maintain our use of caching and blockingwe must remove some block from primary memory to make room for new block from secondary memory in this case deciding which block to evict brings up number of interesting data structure and algorithm design issues caching in web browsers for motivationwe consider related problem that arises when revisiting information presented in web pages to exploit temporal locality of referenceit is often advantageous to store copies of web pages in cache memoryso these pages can be quickly retrieved when requested again this effectively creates two-level memory hierarchywith the cache serving as the smallerquicker internal memoryand the network being the external memory in particularsuppose we have cache memory that has "slotsthat can contain web pages we assume that web page can be placed in any slot of the cache this is known as fully associative cache as browser executesit requests different web pages each time the browser requests such web page pthe browser determines (using quick testif is unchanged and currently contained in the cache if is contained in the cachethen the browser satisfies the request using the cached copy if is not in the cachehoweverthe page for is requested over the internet and transferred into the cache if one of the slots in the cache is availablethen the browser assigns to one of the empty slots but if all the cells of the cache are occupiedthen the computer must determine which previously viewed web page to evict before bringing in to take its place there areof coursemany different policies that can be used to determine the page to evict
page replacement algorithms some of the better-known page replacement policies include the following (see figure )first-infirst-out (fifo)evict the page that has been in the cache the longestthat isthe page that was transferred to the cache furthest in the past least recently used (lru)evict the page whose last request occurred furthest in the past in additionwe can consider simple and purely random strategyrandomchoose page at random to evict from the cache new block old block (chosen at randomnew block old block (present longestrandom policyfifo policyinsert time : am : am : am : am new block : am : am : am old block (least recently usedlru policylast used : am : am : am : am : am : am : am figure the randomfifoand lru page replacement policies the random strategy is one of the easiest policies to implementfor it only requires random or pseudo-random number generator the overhead involved in implementing this policy is an ( additional amount of work per page replacement moreoverthere is no additional overhead for each page requestother than to determine whether page request is in the cache or not stillthis policy makes no attempt to take advantage of any temporal locality exhibited by user' browsing
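These policies are easy to compare experimentally. The following sketch (our own code, using a made-up request sequence) counts the page misses incurred by the Random, FIFO, and LRU policies on the same sequence for a cache of m pages.

import random
from collections import OrderedDict

def count_misses(requests, m, policy):
    """Simulate a cache of m pages and return the number of page misses."""
    cache = OrderedDict()                      # pages currently in the cache
    misses = 0
    for page in requests:
        if page in cache:
            if policy == 'LRU':
                cache.move_to_end(page)        # record the fresh use
            continue
        misses += 1
        if len(cache) == m:                    # cache full: evict one page
            if policy == 'random':
                cache.pop(random.choice(list(cache)))
            else:                              # FIFO and LRU both evict the
                cache.popitem(last=False)      # oldest entry in the ordering
        cache[page] = True
    return misses

requests = [2, 3, 4, 1, 2, 5, 1, 3, 5, 4, 1, 2, 3]
for policy in ('random', 'FIFO', 'LRU'):
    print(policy, count_misses(requests, 4, policy))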
memory management and -trees the fifo strategy is quite simple to implementas it only requires queue to store references to the pages in the cache pages are enqueued in when they are referenced by browserand then are brought into the cache when page needs to be evictedthe computer simply performs dequeue operation on to determine which page to evict thusthis policy also requires ( additional work per page replacement alsothe fifo policy incurs no additional overhead for page requests moreoverit tries to take some advantage of temporal locality the lru strategy goes step further than the fifo strategyfor the lru strategy explicitly takes advantage of temporal locality as much as possibleby always evicting the page that was least-recently used from policy point of viewthis is an excellent approachbut it is costly from an implementation point of view that isits way of optimizing temporal and spatial locality is fairly costly implementing the lru strategy requires the use of an adaptable priority queue that supports updating the priority of existing pages if is implemented with sorted sequence based on linked listthen the overhead for each page request and page replacement is ( when we insert page in or update its keythe page is assigned the highest key in and is placed at the end of the listwhich can also be done in ( time even though the lru strategy has constant-time overheadusing the implementation abovethe constant factors involvedin terms of the additional time overhead and the extra space for the priority queue qmake this policy less attractive from practical point of view since these different page replacement policies have different trade-offs between implementation difficulty and the degree to which they seem to take advantage of localitiesit is natural for us to ask for some kind of comparative analysis of these methods to see which oneif anyis the best from worst-case point of viewthe fifo and lru strategies have fairly unattractive competitive behavior for examplesuppose we have cache containing pagesand consider the fifo and lru methods for performing page replacement for program that has loop that repeatedly requests pages in cyclic order both the fifo and lru policies perform badly on such sequence of page requestsbecause they perform page replacement on every page request thusfrom worst-case point of viewthese policies are almost the worst we can imagine--they require page replacement on every page request this worst-case analysis is little too pessimistichoweverfor it focuses on each protocol' behavior for one bad sequence of page requests an ideal analysis would be to compare these methods over all possible page-request sequences of coursethis is impossible to do exhaustivelybut there have been great number of experimental simulations done on page-request sequences derived from real programs based on these experimental comparisonsthe lru strategy has been shown to be usually superior to the fifo strategywhich is usually better than the random strategy
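In Python, a convenient way to obtain constant-time overhead per request is to combine a hash table with a doubly linked list, which is exactly what collections.OrderedDict provides. The class below is a minimal sketch of a fully associative web-page cache with LRU eviction; it is an alternative realization, not the adaptable-priority-queue design discussed above.

from collections import OrderedDict

class LRUCache:
    """A web-page cache holding at most m pages, evicting the LRU page."""
    def __init__(self, m):
        self._m = m
        self._pages = OrderedDict()              # maps URL -> page contents

    def __contains__(self, url):
        return url in self._pages

    def access(self, url, contents):
        if url in self._pages:
            self._pages.move_to_end(url)         # now the most recently used
        else:
            if len(self._pages) == self._m:
                self._pages.popitem(last=False)  # evict least recently used
            self._pages[url] = contents
        return self._pages[url]

With this representation both a hit and a miss take O(1) expected time: a hit moves the page to the most-recently-used end of the ordering, and a miss evicts from the least-recently-used end before inserting the new page.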
external searching and -trees consider the problem of maintaining large collection of items that does not fit in main memorysuch as typical database in this contextwe refer to the secondarymemory blocks as disk blocks likewisewe refer to the transfer of block between secondary memory and primary memory as disk transfer recalling the great time difference that exists between main memory accesses and disk accessesthe main goal of maintaining such collection in external memory is to minimize the number of disk transfers needed to perform query or update we refer to this count as the / complexity of the algorithm involved some inefficient external-memory representations typical operation we would like to support is the search for key in map if we were to store items unordered in doubly linked listsearching for particular key within the list requires transfers in the worst casesince each link hop we perform on the linked list might access different block of memory we can reduce the number of block transfers by using an array-based sequence sequential search of an array can be performed using only ( /bblock transfers because of spatial locality of referencewhere denotes the number of elements that fit into block this is because the block transfer when accessing the first element of the array actually retrieves the first elementsand so on with each successive block it is worth noting that the bound of ( /btransfers is only achieved when using compact array representation (see section the standard python list class is referential containerand so even though the sequence of references are stored in an arraythe actual elements that must be examined during search are not generally stored sequentially in memoryresulting in transfers in the worst case we could alternately store sequence using sorted array in this casea search performs (log ntransfersvia binary searchwhich is nice improvement but we do not get significant benefit from block transfers because each query during binary search is likely in different block of the sequence as usualupdate operations are expensive for sorted array since these simple implementations are / inefficientwe should consider the logarithmic-time internal-memory strategies that use balanced binary trees (for exampleavl trees or red-black treesor other search structures with logarithmic average-case query and update times (for exampleskip lists or splay treestypicallyeach node accessed for query or update in one of these structures will be in different block thusthese methods all require (log ntransfers in the worst case to perform query or update operation but we can do betterwe can perform map queries and updates using only (logb no(log nlog btransfers
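To put these counts in perspective, the rough sketch below (the data size n and block size B are made-up values) tabulates the worst-case number of disk transfers for the representations just discussed, together with the O(log_B n) bound achieved by the (a,b) trees and B-trees introduced next.

import math

n = 10_000_000          # entries in the map (hypothetical)
B = 1_000               # entries per disk block (hypothetical)

print('unsorted linked list      :', n)                         # O(n)
print('scan of a compact array   :', math.ceil(n / B))          # O(n/B)
print('binary search, sorted arr :', math.ceil(math.log2(n)))   # O(log n)
print('balanced search tree      :', math.ceil(math.log2(n)))   # O(log n)
print('B-tree (described below)  :', math.ceil(math.log(n, B))) # O(log_B n)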
(a,b) Trees

To reduce the number of external-memory accesses when searching, we can represent our map using a multiway search tree. This approach gives rise to a generalization of the (2,4) tree data structure known as the (a,b) tree. An (a,b) tree is a multiway search tree such that each node has between a and b children and stores between a-1 and b-1 entries. The algorithms for searching, inserting, and removing entries in an (a,b) tree are straightforward generalizations of the corresponding ones for (2,4) trees. The advantage of generalizing (2,4) trees to (a,b) trees is that a generalized class of trees provides a flexible search structure, where the size of the nodes and the running time of the various map operations depend on the parameters a and b. By setting the parameters a and b appropriately with respect to the size of disk blocks, we can derive a data structure that achieves good external-memory performance.

Definition of an (a,b) Tree

An (a,b) tree, where a and b are integers such that $2 \le a \le (b+1)/2$, is a multiway search tree T with the following additional restrictions:

Size Property: Each internal node has at least a children, unless it is the root, and has at most b children.

Depth Property: All the external nodes have the same depth.

Proposition: The height of an (a,b) tree storing n entries is $\Omega(\log n / \log b)$ and $O(\log n / \log a)$.

Justification: Let T be an (a,b) tree storing n entries, and let h be the height of T. We justify the proposition by establishing the following bounds on h:

$$\frac{1}{\log b}\log(n+1) \;\le\; h \;\le\; \frac{1}{\log a}\log\frac{n+1}{2} + 1.$$

By the size and depth properties, the number n'' of external nodes of T is at least $2a^{h-1}$ and at most $b^h$. By an earlier proposition on multiway search trees, $n'' = n+1$. Thus

$$2a^{h-1} \;\le\; n+1 \;\le\; b^h.$$

Taking the logarithm in base 2 of each term, we get

$$(h-1)\log a + 1 \;\le\; \log(n+1) \;\le\; h\log b.$$

An algebraic manipulation of these inequalities completes the justification.
search and update operations we recall that in multiway search tree each node of holds secondary structure ( )which is itself map (section if is an (abtreethen (vstores at most entries let (bdenote the time for performing search in mapm(vthe search algorithm in an (abtree is exactly like the one for multiway search trees given in section hencesearching in an (abtree (bwith entries takes olog log ntime note that if is considered constant (and thus is also)then the search time is (log nthe main application of (abtrees is for maps stored in external memory namelyto minimize disk accesseswe select the parameters and so that each tree node occupies single disk block (so that ( if we wish to simply count block transfersproviding the right and values in this context gives rise to data structure known as the -treewhich we will describe shortly before we describe this structurehoweverlet us discuss how insertions and removals are handled in (abtrees the insertion algorithm for an (abtree is similar to that for ( tree an overflow occurs when an entry is inserted into -node wwhich becomes an illegal ( )-node (recall that node in multiway tree is -node if it has children to remedy an overflowwe split node by moving the median entry of into the parent of and replacing with ( )/ -node and ( )/ node we can now see the reason for requiring <( )/ in the definition of an (abtree note that as consequence of the splitwe need to build the secondary structures ( and ( removing an entry from an (abtree is similar to what was done for ( trees an underflow occurs when key is removed from an -node wdistinct from the rootwhich causes to become an illegal ( - )-node to remedy an underflowwe perform transfer with sibling of that is not an -node or we perform fusion of with sibling that is an -node the new node resulting from the fusion is ( )-nodewhich is another reason for requiring <( )/ table shows the performance of map realized with an (abtree operation running time [ko [kv del [ko (blog log (blog log (blog log table time bounds for an -entry map realized by an (abtree we assume the secondary structure of the nodes of support search in (btimeand split and fusion operations in (btimefor some functions (band ( )which can be made to be ( when we are only counting disk transfers
B-Trees

A version of the (a,b) tree data structure, which is the best-known method for maintaining a map in external memory, is called the B-tree. A B-tree of order d is an (a,b) tree with $a = \lceil d/2 \rceil$ and $b = d$. (Figure: a sample B-tree of small order.) Since we discussed the standard map query and update methods for (a,b) trees above, we restrict our discussion here to the I/O complexity of B-trees.

An important property of B-trees is that we can choose d so that the d children references and the d-1 keys stored at a node fit compactly into a single disk block, implying that d is proportional to B. This choice allows us to assume that a and b are also proportional to B in the analysis of the search and update operations on (a,b) trees. Thus, f(b) and g(b) are both O(1), for each time we access a node to perform a search or an update operation, we need only perform a single disk transfer.

As we have already observed above, each search or update requires that we examine at most O(1) nodes for each level of the tree. Therefore, any map search or update operation on a B-tree requires only $O(\log_{\lceil d/2 \rceil} n)$, that is, $O(\log n / \log B)$, disk transfers. For example, an insert operation proceeds down the B-tree to locate the node in which to insert the new entry. If the node would overflow (to have d+1 children) because of this addition, then this node is split into two nodes that have $\lfloor (d+1)/2 \rfloor$ and $\lceil (d+1)/2 \rceil$ children, respectively. This process is then repeated at the next level up, and will continue for at most $O(\log_B n)$ levels.

Likewise, if a remove operation results in a node underflow (to have $\lceil d/2 \rceil - 1$ children), then we move references from a sibling node with at least $\lceil d/2 \rceil + 1$ children or we perform a fusion operation of this node with its sibling (and repeat this computation at the parent). As with the insert operation, this will continue up the B-tree for at most $O(\log_B n)$ levels. The requirement that each internal node have at least $\lceil d/2 \rceil$ children implies that each disk block used to support a B-tree is at least half full. Thus, we have the following:

Proposition: A B-tree with n entries has I/O complexity $O(\log_B n)$ for a search or update operation, and uses O(n/B) blocks, where B is the size of a block.
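To see how small these transfer counts are in practice, the sketch below evaluates the rough bound on height for a few hypothetical orders d; an order chosen so that one node fills a disk block keeps the height, and hence the number of disk transfers per operation, very small even for a billion entries. The function name and sample values are our own.

import math

def btree_height_bound(n, d):
    """Rough upper bound on the height of an order-d B-tree with n entries,
    i.e. ceil(log base d//2 of n)."""
    return math.ceil(math.log(n, d // 2))

for d in (32, 256, 1024):
    print(d, btree_height_bound(10**9, d))   # roughly 8, 5, and 4 levels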
external-memory sorting in addition to data structuressuch as mapsthat need to be implemented in external memorythere are many algorithms that must also operate on input sets that are too large to fit entirely into internal memory in this casethe objective is to solve the algorithmic problem using as few block transfers as possible the most classic domain for such external-memory algorithms is the sorting problem multiway merge-sort an efficient way to sort set of objects in external memory amounts to simple external-memory variation on the familiar merge-sort algorithm the main idea behind this variation is to merge many recursively sorted lists at timethereby reducing the number of levels of recursion specificallya high-level description of this multiway merge-sort method is to divide into subsets sd of roughly equal sizerecursively sort each subset si and then simultaneously merge all sorted lists into sorted representation of if we can perform the merge process using only ( /bdisk transfersthenfor large enough values of nthe total number of transfers performed by this algorithm satisfies the following recurrencet(nd ( /dcn/bfor some constant > we can stop the recursion when <bsince we can perform single block transfer at this pointgetting all of the objects into internal memoryand then sort the set with an efficient internal-memory algorithm thusthe stopping criterion for (nis ( if / < this implies closed-form solution that (nis (( /blogd ( / ))which is (( /blog( / )log dthusif we can choose to be th( / )where is the size of the internal memorythen the worst-case number of block transfers performed by this multiway mergesort algorithm will be quite low for reasons given in the next sectionwe choose ( / the only aspect of this algorithm left to specifythenis how to perform the -way merge using only ( /bblock transfers
multiway merging in standard merge-sort (section )the merge process combines two sorted sequences into one by repeatedly taking the smaller of the items at the front of the two respective lists in -way mergewe repeatedly find the smallest among the items at the front of the sequences and place it as the next element of the merged sequence we continue until all elements are included in the context of an external-memory sorting algorithmif main memory has size and each block has size bwe can store up to / blocks within main memory at any given time we specifically choose ( / so that we can afford to keep one block from each input sequence in main memory at any given timeand to have one additional block to use as buffer for the merged sequence (see figure figure -way merge with and blocks that currently reside in main memory are shaded we maintain the smallest unprocessed element from each input sequence in main memoryrequesting the next block from sequence when the preceding block has been exhausted similarlywe use one block of internal memory to buffer the merged sequenceflushing that block to external memory when full in this waythe total number of transfers performed during single -way merge is ( / )since we scan each block of list si onceand we write out each block of the merged list once in terms of computation timechoosing the smallest of values can trivially be performed using (doperations if we are willing to devote (dinternal memorywe can maintain priority queue identifying the smallest element from each sequencethereby performing each step of the merge in (log dtime by removing the minimum element and replacing it with the next element from the same sequence hencethe internal time for the -way merge is ( log dproposition given an array-based sequence of elements stored compactly in external memorywe can sort with (( /blog( / )log( / )block transfers and ( log ninternal computationswhere is the size of the internal memory and is the size of block
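The d-way merge with a priority queue translates naturally into Python using the heapq module. The function below is a sketch of the merging step only (a full external-memory sort would additionally read and write the runs one block at a time, as described above); the heap holds one entry per input run, keyed by that run's smallest unprocessed element.

import heapq

def multiway_merge(runs):
    """Merge d sorted lists ('runs') into one sorted list using a heap of
    size d, mirroring the d-way merge step of multiway merge-sort."""
    heap = []
    for i, run in enumerate(runs):
        if run:                                    # seed with each run's head
            heapq.heappush(heap, (run[0], i, 0))
    merged = []
    while heap:
        value, i, j = heapq.heappop(heap)          # smallest front element
        merged.append(value)
        if j + 1 < len(runs[i]):                   # replace it with the next
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return merged

runs = [[1, 5, 9], [2, 4, 8], [3, 6, 7]]
print(multiway_merge(runs))    # [1, 2, 3, 4, 5, 6, 7, 8, 9]

The standard library also offers heapq.merge, which performs the same d-way merge lazily over arbitrary sorted iterables.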
exercises for help with exercisesplease visit the sitewww wiley com/college/goodrich reinforcement - julia just bought new computer that uses -bit integers to address memory cells argue why julia will never in her life be able to upgrade the main memory of her computer so that it is the maximum-size possibleassuming that you have to have distinct atoms to represent different bits - describein detailalgorithms for adding an item toor deleting an item froman (abtree - suppose is multiway tree in which each internal node has at least five and at most eight children for what values of and is valid (abtreer- for what values of is the tree of the previous exercise an order- -treer- consider an initially empty memory cache consisting of four pages how many page misses does the lru algorithm incur on the following page request sequence( ) - consider an initially empty memory cache consisting of four pages how many page misses does the fifo algorithm incur on the following page request sequence( ) - consider an initially empty memory cache consisting of four pages what is the maximum number of page misses that the random algorithm incurs on the following page request sequence( )show all of the random choices the algorithm made in this case - draw the result of insertinginto an initially empty order- -treeentries with keys ( )in this order creativity - describe an efficient external-memory algorithm for removing all the duplicate entries in an array list of size - describe an external-memory data structure to implement the stack adt so that the total number of disk transfers needed to process sequence of push and pop operations is ( /
memory management and -trees - describe an external-memory data structure to implement the queue adt so that the total number of disk transfers needed to process sequence of enqueue and dequeue operations is ( /bc- describe an external-memory version of the positionallist adt (section )with block size bsuch that an iteration of list of length is completed using ( /btransfers in the worst caseand all other methods of the adt require only ( transfers - change the rules that define red-black trees so that each red-black tree has corresponding ( treeand vice versa - describe modified version of the -tree insertion algorithm so that each time we create an overflow because of split of node wwe redistribute keys among all of ' siblingsso that each sibling holds roughly the same number of keys (possibly cascading the split up to the parent of wwhat is the minimum fraction of each block that will always be filled using this schemec- another possible external-memory map implementation is to use skip listbut to collect consecutive groups of (bnodesin individual blockson any level in the skip list in particularwe define an order- -skip list to be such representation of skip list structurewhere each block contains at least / list nodes and at most list nodes let us also choose in this case to be the maximum number of list nodes from level of skip list that can fit into one block describe how we should modify the skip-list insertion and removal algorithms for -skip list so that the expected height of the structure is (log nlog bc- describe how to use -tree to implement the partition (union-findadt (from section so that the union and find operations each use at most (log nlog bdisk transfers - suppose we are given sequence of elements with integer keys such that some elements in are colored "blueand some elements in are colored "red in additionsay that red element pairs with blue element if they have the same key value describe an efficient externalmemory algorithm for finding all the red-blue pairs in how many disk transfers does your algorithm performc- consider the page caching problem where the memory cache can hold pagesand we are given sequence of requests taken from pool of possible pages describe the optimal strategy for the offline algorithm and show that it causes at most / page misses in totalstarting from an empty cache - describe an efficient external-memory algorithm that determines whether an array of integers contains value occurring more than / times
- consider the page caching strategy based on the least frequently used (lfurulewhere the page in the cache that has been accessed the least often is the one that is evicted when new page is requested if there are tieslfu evicts the least frequently used page that has been in the cache the longest show that there is sequence of requests that causes lfu to miss (ntimes for cache of pageswhereas the optimal algorithm will miss only (mtimes - suppose that instead of having the node-search function ( in an order- -tree we have (dlog what does the asymptotic running time of performing search in now becomeprojects - write python class that simulates the best-fitworst-fitfirst-fitand nextfit algorithms for memory management determine experimentally which method is the best under various sequences of memory requests - write python class that implements all the methods of the ordered map adt by means of an (abtreewhere and are integer constants passed as parameters to constructor - implement the -tree data structureassuming block size of and integer keys test the number of "disk transfersneeded to process sequence of map operations notes the reader interested in the study of the architecture of hierarchical memory systems is referred to the book by burger et al [ or the book by hennessy and patterson [ the mark-sweep garbage collection method we describe is one of many different algorithms for performing garbage collection we encourage the reader interested in further study of garbage collection to examine the book by jones and lins [ knuth [ has very nice discussions about external-memory sorting and searchingand ullman [ discusses external memory structures for database systems the handbook by gonnet and baeza-yates [ compares the performance of number of different sorting algorithmsmany of which are external-memory algorithms -trees were invented by bayer and mccreight [ and comer [ provides very nice overview of this data structure the books by mehlhorn [ and samet [ also have nice discussions about -trees and their variants aggarwal and vitter [ study the / complexity of sorting and related problemsestablishing upper and lower bounds goodrich et al [ study the / complexity of several computational geometry problems the reader interested in further study of /oefficient algorithms is encouraged to examine the survey paper of vitter [
character strings in python string is sequence of characters that come from some alphabet in pythonthe built-in str class represents strings based upon the unicode international character seta -bit character encoding that covers most written languages unicode is an extension of the -bit ascii character set that includes the basic latin alphabetnumeralsand common symbols strings are particularly important in most programming applicationsas text is often used for input and output basic introduction to the str class was provided in section including use of string literalssuch as hello and the syntax str(objthat is used to construct string representation of typical object common operators that are supported by stringssuch as the use of for concatenationwere further discussed in section this appendix serves as more detailed referencedescribing convenient behaviors that strings support for the processing of text to organize our overview of the str class behaviorswe group them into the following broad categories of functionality searching for substrings the operator syntaxpattern in scan be used to determine if the given pattern occurs as substring of string table describes several related methods that determine the number of such occurrencesand the index at which the leftmost or rightmost such occurrence begins each of the functions in this table accepts two optional parametersstart and endwhich are indices that effectively restrict the search to the implicit slice [start:endfor examplethe call find(pattern restricts the search to [ calling syntax count(patterns find(patterns index(patterns rfind(patterns rindex(patterndescription return the number of non-overlapping occurrences of pattern return the index starting the leftmost occurrence of patternelse - similar to findbut raise valueerror if not found return the index starting the rightmost occurrence of patternelse - similar to rfindbut raise valueerror if not found table methods that search for substrings
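A few illustrative calls, on a made-up sample string, show how these searching methods behave, including the optional start index that restricts the search to a slice.

s = 'abracadabra'
print(s.count('ab'))        # 2 non-overlapping occurrences
print(s.find('ab'))         # 0, index of the leftmost occurrence
print(s.rfind('ab'))        # 7, index of the rightmost occurrence
print(s.find('ab', 1))      # 7, search restricted to s[1:]
print(s.find('xyz'))        # -1, pattern absent (index would raise ValueError)
print('cad' in s)           # True, the operator form of the test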
constructing related strings strings in python are immutableso none of their methods modify an existing string instance howevermany methods return newly constructed string that is closely related to an existing one table provides summary of such methodsincluding those that replace given pattern with anotherthat vary the case of alphabetic charactersthat produce fixed-width string with desired justificationand that produce copy of string with extraneous characters stripped from either end calling syntax replace(oldnews capitalizes uppers lowers center(widths ljust(widths rjust(widths zfill(widths strips lstrips rstripdescription return copy of with all occurrences of old replaced by new return copy of with its first character having uppercase return copy of with all alphabetic characters in uppercase return copy of with all alphabetic characters in lowercase return copy of spadded to widthcentered among spaces return copy of spadded to width with trailing spaces return copy of spadded to width with leading spaces return copy of spadded to width with leading zeros return copy of swith leading and trailing whitespace removed return copy of swith leading whitespace removed return copy of swith trailing whitespace removed table string methods that produce related strings several of these methods accept optional parameters not detailed in the table for examplethe replace method replaces all nonoverlapping occurrences of the old pattern by defaultbut an optional parameter can limit the number of replacements that are performed the methods that center or justify text use spaces as the default fill character when paddingbut an alternate fill character can be specified as an optional parameter similarlyall variants of the strip methods remove leading and trailing whitespace by defaultbut an optional string parameter designates the choice of characters that should be removed from the ends testing boolean conditions table includes methods that test for boolean property of stringsuch as whether it begins or ends with patternor whether its characters qualify as being alphabeticnumericwhitespaceetc for the standard ascii character setalphabetic characters are the uppercase -zand lowercase -znumeric digits are - and whitespace includes the space charactertab characternewlineand carriage return conventions for what are considered alphabetic and numeric character codes are extended to more general unicode character sets
calling syntax startswith(patterns endswith(patterns isspaces isalphas islowers isuppers isdigits isdecimals isnumerics isalnum description return true if pattern is prefix of string return true if pattern is suffix of string return true if all characters of nonempty string are whitespace return true if all characters of nonempty string are alphabetic return true if there are one or more alphabetic charactersall of which are lowercased return true if there are one or more alphabetic charactersall of which are uppercased return true if all characters of nonempty string are in - return true if all characters of nonempty string represent digits - including unicode equivalents return true if all characters of nonempty string are numeric unicode characters ( - equivalentsfraction charactersreturn true if all characters of nonempty string are either alphabetic or numeric (as per above definitionstable methods that test boolean properties of strings splitting and joining strings table describes several important methods of python' string classused to compose sequence of strings together using delimiter to separate each pairor to take an existing string and determine decomposition of that string based upon existence of given separating pattern calling syntax sep join(stringss splitliness split(sepcounts rsplit(sepcounts partition(seps rpartition(sepdescription return the composition of the given sequence of stringsinserting sep as delimiter between each pair return list of substrings of sas delimited by newlines return list of substrings of sas delimited by the first count occurrences of sep if count is not specifiedsplit on all occurrences if sep is not specifieduse whitespace as delimiter similar to splitbut using the rightmost occurrences of sep return (headseptailsuch that head sep tailusing leftmost occurrence of sepif anyelse return (sreturn (headseptailsuch that head sep tailusing rightmost occurrence of sepif anyelse return stable methods for splitting and joining strings the join method is used to assemble string from series of pieces an example of its usage is and join(red green blue ])which produces the result red and green and blue note well that spaces were embedded in the separator string in contrastthe command and join(red green blue ]produces the result redandgreenandblue
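The joining and splitting methods can be exercised directly; the short examples below reproduce the behavior just described, with partition and rpartition added for comparison.

colors = ['red', 'green', 'blue']
print(' and '.join(colors))              # 'red and green and blue'
print('and'.join(colors))                # 'redandgreenandblue'

s = 'red and green and blue'
print(s.split(' and '))                  # ['red', 'green', 'blue']
print(s.split())                         # whitespace delimiter by default
print(s.partition(' and '))              # ('red', ' and ', 'green and blue')
print(s.rpartition(' and '))             # ('red and green', ' and ', 'blue')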
the other methods discussed in table serve dual purpose to joinas they begin with string and produce sequence of substrings based upon given delimiter for examplethe call red and green and blue splitand produces the result red green blue if no delimiter (or noneis specifiedsplit uses whitespace as delimiterthusred and green and blue splitproduces red and green and blue string formatting the format method of the str class composes string that includes one or more formatted arguments the method is invoked with syntax format(arg arg )where serves as formatting string that expresses the desired result with one or more placeholders in which the arguments will be substituted as simple examplethe expression {had little {formatmary lamb produces the result mary had little lamb the pairs of curly braces in the formatting string are the placeholders for fields that will be substituted into the result by defaultthe arguments sent to the function are substituted using positional orderhencemary was the first substitute and lamb the second howeverthe substitution patterns may be explicitly numbered to alter the orderor to use single argument in more than one location for examplethe expression { }{ }{ your { formatrow boat produces the result rowrowrow your boat all substitution patterns allow use of annotations to pad an argument to particular widthusing choice of fill character and justification mode an example of such an annotation is {:-^ formathello in this examplethe hyphen (-serves as fill characterthe caret (^designates desire for the string to be centeredand is the desired width for the argument this example results in the string hello by defaultspace is used as fill character and an implied character would dictate right-justification there are additional formatting options for numeric types number will be padded with zeros rather than spaces if its width description is prefaced with zero for examplea date can be formatted in traditional "yyyy/mm/ddform as {}/{: }/{: format(yearmonthdayintegers can be converted to binaryoctalor hexadecimal by respectively adding the character boor as suffix to the annotation the displayed precision of floating-point number is specified with decimal point and the subsequent number of desired digits for examplethe expression { format( / produces the string rounded to three digits after the decimal point programmer can explicitly designate use of fixed-point representation ( by adding the character as suffixor scientific notation ( - by adding the character as suffix
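The formatting behaviors described in this section can be illustrated with a few short calls, adapted from the examples above; the specific argument values are ours.

print('{} had a little {}'.format('mary', 'lamb'))       # positional fields
print('{0}, {0}, {0} your {1}'.format('row', 'boat'))    # numbered fields
print('{:-^10}'.format('hello'))          # centered in width 10, '-' as fill
print('{}/{:02}/{:02}'.format(2024, 7, 4))    # zero-padded month and day
print('{:.3f}'.format(2 / 3))             # '0.667', fixed-point, 3 digits
print('{:b} {:o} {:x}'.format(100, 100, 100))  # binary, octal, hexadecimal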
Useful Mathematical Facts

In this appendix we give several useful mathematical facts. We begin with some combinatorial definitions and facts.

Logarithms and Exponents

The logarithm function is defined as $\log_b a = c$ if $a = b^c$. The following identities hold for logarithms and exponents:

1. $\log_b(ac) = \log_b a + \log_b c$
2. $\log_b(a/c) = \log_b a - \log_b c$
3. $\log_b(a^c) = c \log_b a$
4. $\log_b a = (\log_c a)/(\log_c b)$
5. $b^{\log_c a} = a^{\log_c b}$
6. $(b^a)^c = b^{ac}$
7. $b^a b^c = b^{a+c}$
8. $b^a / b^c = b^{a-c}$

In addition, we have the following:

Proposition: If $a > 0$, $b > 0$, and $c = a + b$, then $\log a + \log b \le 2\log c - 2$.

Justification: It is enough to show that $ab \le c^2/4$. We can write

$$ab \;=\; \frac{a^2 + 2ab + b^2 - a^2 + 2ab - b^2}{4} \;=\; \frac{(a+b)^2 - (a-b)^2}{4} \;=\; \frac{c^2 - (a-b)^2}{4} \;\le\; \frac{c^2}{4}.$$

The natural logarithm function is $\ln x = \log_e x$, where $e$ is the value of the following progression:

$$e = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots$$
in additionx ** ln( xx there are number of useful inequalities relating to these functions (which derive from these definitionsproposition if - <ln( < + ex proposition for < <ex < - proposition for any two positive real numbers and <ex < + / integer functions and relations the "floorand "ceilingfunctions are defined respectively as follows xthe largest integer less than or equal to xthe smallest integer greater than or equal to the modulo operator is defined for integers > and as mod the factorial function is defined as ( ) the binomial coefficient is nk!( ) which is equal to the number of different combinations one can define by choosing different items from collection of items (where the order does not matterthe name "binomial coefficientderives from the binomial expansionn - ab ( bk= we also have the following relationships
proposition if < <nthen nk <<kk proposition (stirling' approximation) ( pn where (nis ( / the fibonacci progression is numeric progression such that and fn fn- fn- for > proposition if fn is defined by the fibonacci progressionthen fn is th( )where ( )/ is the so-called golden ratio summations there are number of useful facts about summations proposition factoring summationsn = = (ia ( )provided does not depend upon proposition reversing the ordern (ijf (iji= = = = one special form of is telescoping sumn (if ( ) (nf ( ) = which arises often in the amortized analysis of data structure or algorithm the following are some other facts about summations that arise often in the analysis of data structures and algorithms proposition ni= ( )/ proposition ni= ( )( )/
proposition if > is an integer constantthen ik is th(nk+ = another common summation is the geometric sumni= ai for any fixed real number  proposition an+ ai = for any real number  proposition ai = for any real number there is also combination of the two common formscalled the linear exponential summationwhich has the following expansionproposition for iai = ( ) ( + na( + ( ) the nth harmonic number hn is defined as hn = proposition if hn is the nth harmonic numberthen hn is ln th( basic probability we review some basic facts from probability theory the most basic is that any statement about probability is defined upon sample space swhich is defined as the set of all possible outcomes from some experiment we leave the terms "outcomesand "experimentundefined in any formal sense example consider an experiment that consists of the outcome from flipping coin five times this sample space has different outcomesone for each different ordering of possible flips that can occur sample spaces can also be infiniteas the following example illustrates
example consider an experiment that consists of flipping coin until it comes up heads this sample space is infinitewith each outcome being sequence of tails followed by single flip that comes up headsfor probability space is sample space together with probability function pr that maps subsets of to real numbers in the interval [ it captures mathematically the notion of the probability of certain "eventsoccurring formallyeach subset of is called an eventand the probability function pr is assumed to possess the following basic properties with respect to events defined from pr( pr( <pr( < for any if ab and then pr( bpr(apr(btwo events and are independent if pr( bpr(apr(ba collection of events { an is mutually independent if pr(ai ai aik pr(ai pr(ai pr(aik for any subset {ai ai aik the conditional probability that an event occursgiven an event bis denoted as pr( | )and is defined as the ratio pr( bpr(bassuming that pr( an elegant way for dealing with events is in terms of random variables intuitivelyrandom variables are variables whose values depend upon the outcome of some experiment formallya random variable is function that maps outcomes from some sample space to real numbers an indicator random variable is random variable that maps outcomes to the set { often in data structure and algorithm analysis we use random variable to characterize the running time of randomized algorithm in this casethe sample space is defined by all possible outcomes of the random sources used in the algorithm we are most interested in the typicalaverageor "expectedvalue of such random variable the expected value of random variable is defined as ( pr( ) where the summation is defined over the range of (which in this case is assumed to be discrete
proposition (the linearity of expectation)let and be two random variables and let be number then ( + ( ( and (cx ce( example let be random variable that assigns the outcome of the roll of two fair dice to the sum of the number of dots showing then ( justificationto justify this claimlet and be random variables corresponding to the number of dots on each die thusx ( they are two instances of the same functionand ( ( ( ( each outcome of the roll of fair die occurs with probability / thuse(xi for thereforee( two random variables and are independent if pr( | ypr( )for all real numbers and proposition if two random variables and are independentthen (xy ( ) ( example let be random variable that assigns the outcome of roll of two fair dice to the product of the number of dots showing then ( / justificationlet and be random variables denoting the number of dots on each die the variables and are clearly independenthence ( ( ( ) ( ( / ) / the following bound and corollaries that follow from it are known as chernoff bounds proposition let be the sum of finite number of independent / random variables and let be the expected value of thenfor ed pr( ( ) ( )( +
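Returning to the two dice examples above, the expected values E(X) = 7 and E(Y) = 49/4 can be checked with a quick Monte Carlo estimate. The sketch below uses the standard random module with a made-up trial count; the printed estimates will only be approximately equal to the exact values.

import random

trials = 100_000
total_sum = total_product = 0
for _ in range(trials):
    die1, die2 = random.randint(1, 6), random.randint(1, 6)
    total_sum += die1 + die2
    total_product += die1 * die2

print(total_sum / trials)       # close to E(X) = 7
print(total_product / trials)   # close to E(Y) = (7/2)**2 = 12.25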
Useful Mathematical Techniques

To compare the growth rates of different functions, it is sometimes helpful to apply the following rule.

Proposition (L'Hopital's Rule): If we have $\lim_{n\to\infty} f(n) = +\infty$ and $\lim_{n\to\infty} g(n) = +\infty$, then $\lim_{n\to\infty} f(n)/g(n) = \lim_{n\to\infty} f'(n)/g'(n)$, where $f'(n)$ and $g'(n)$ respectively denote the derivatives of $f(n)$ and $g(n)$.

In deriving an upper or lower bound for a summation, it is often useful to split a summation as follows:

$$\sum_{i=1}^{n} f(i) = \sum_{i=1}^{j} f(i) + \sum_{i=j+1}^{n} f(i).$$

Another useful technique is to bound a sum by an integral. If $f$ is a nondecreasing function, then, assuming the following terms are defined,

$$\int_{a-1}^{b} f(x)\,dx \;\le\; \sum_{i=a}^{b} f(i) \;\le\; \int_{a}^{b+1} f(x)\,dx.$$

There is a general form of recurrence relation that arises in the analysis of divide-and-conquer algorithms:

$$T(n) = aT(n/b) + f(n),$$

for constants $a \ge 1$ and $b > 1$.

Proposition: Let $T(n)$ be defined as above. Then:

1. If $f(n)$ is $O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n)$ is $\Theta(n^{\log_b a})$.
2. If $f(n)$ is $\Theta(n^{\log_b a} \log^k n)$ for a fixed nonnegative integer $k \ge 0$, then $T(n)$ is $\Theta(n^{\log_b a} \log^{k+1} n)$.
3. If $f(n)$ is $\Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $a f(n/b) \le c f(n)$ for some $c < 1$, then $T(n)$ is $\Theta(f(n))$.

This proposition is known as the master method for characterizing divide-and-conquer recurrence relations asymptotically.
Benjamin Baka works as a software developer and has years of experience in programming. He is a graduate of Kwame Nkrumah University of Science and Technology and a member of the Linux Accra User Group. Notable in his language toolset are C, C++, Java, Python, and Ruby. He has a huge interest in algorithms and finds them a good intellectual exercise. He is a technology strategist and software engineer at mPedigree Network, weaving together a dizzying array of technologies in combating counterfeiting activities, empowering consumers in Ghana, Nigeria, and Kenya, to name a few. In his spare time, he enjoys playing the bass guitar and listening to silence. You can find him on his blog.
Many thanks to the team at Packt, who have played a major part in bringing this book to light. I would also like to thank David Julian, the reviewer on this book, for all the assistance he extended through diverse means in preparing this book. I am forever indebted to Lorenzo Danielson and Guido Sohne for their immense help in ways I can never repay.
David Julian has years of experience as an IT educator and consultant. He has worked on a diverse range of projects, including assisting with the design of a machine learning system used to optimize agricultural crop production in controlled environments, and numerous backend web development and data analysis projects. He has authored the book Designing Machine Learning Systems with Python and worked as a technical reviewer on Sebastian Raschka's book Python Machine Learning, both by Packt Publishing.
preface python objectstypesand expressions understanding data structures and algorithms python for data the python environment variables and expressions data encapsulation and properties summary python data types and structures variable scope flow control and iteration overview of data types and objects strings lists functions as first class objects higher order functions recursive functions generators and co-routines classes and object programming special methods inheritance operations and expressions boolean operations comparison and arithmetic operators membershipidentityand logical operations built-in data types none type numeric types representation error sequences tuples dictionaries sorting dictionaries dictionaries for text analysis sets immutable sets
collections deques chainmaps counter objects ordered dictionaries defaultdict named tuples arrays summary principles of algorithm design algorithm design paradigms recursion and backtracking backtracking divide and conquer long multiplication can we do bettera recursive approach runtime analysis asymptotic analysis big notation composing complexity classes omega notation (ohmtheta notation (amortized analysis summary lists and pointer structures arrays pointer structures nodes finding endpoints node other node types singly linked lists singly linked list class append operation faster append operation getting the size of the list improving list traversal deleting nodes list search clearing list ii
doubly linked list node doubly linked list append operation delete operation list search circular lists appending elements deleting an element iterating through circular list summary stacks and queues stacks stack implementation push operation pop operation peek bracket-matching application queues list-based queue enqueue operation dequeue operation stack-based queue enqueue operation dequeue operation node-based queue queue class enqueue operation dequeue operation application of queues media player queue summary trees terminology tree nodes binary trees binary search trees binary search tree implementation binary search tree operations finding the minimum and maximum nodes iii
deleting nodes searching the tree tree traversal depth-first traversal in-order traversal and infix notation pre-order traversal and prefix notation post-order traversal and postfix notation breadth-first traversal benefits of binary search tree expression trees parsing reverse polish expression balancing trees heaps summary hashing and symbol tables hashing perfect hashing functions hash table putting elements getting elements testing the hash table using [with the hash table non-string keys growing hash table open addressing chaining symbol tables summary graphs and other algorithms graphs directed and undirected graphs weighted graphs graph representation adjacency list adjacency matrix graph traversal breadth-first search depth-first search other useful graph methods iv
inserting pop testing the heap selection algorithms summary searching linear search unordered linear search ordered linear search binary search interpolation search choosing search algorithm summary sorting sorting algorithms bubble sort insertion sort selection sort quick sort list partitioning pivot selection implementation heap sort summary selection algorithms selection by sorting randomized selection quick select partition step deterministic selection pivot selection median of medians partitioning step summary design techniques and strategies classification of algorithms classification by implementation [
logical serial or parallel deterministic versus nondeterministic algorithms classification by complexity complexity curves classification by design divide and conquer dynamic programming greedy algorithms technical implementation dynamic programming memoization tabulation the fibonacci series the memoization technique the tabulation technique divide and conquer divide conquer merge merge sort greedy algorithms coin-counting problem dijkstra' shortest path algorithm complexity classes versus np np-hard np-complete summary implementationsapplicationsand tools tools of the trade data preprocessing why process raw datamissing data feature scaling min-max scalar standard scalar binarizing data machine learning types of machine learning hello classifier supervised learning example vi
bag of words prediction an unsupervised learning example -means algorithm prediction data visualization bar chart multiple bar charts box plot pie chart bubble chart summary index vii
knowledge of data structures and the algorithms that bring them to life is the key to building successful data applications with this knowledgewe have powerful way to unlock the secrets buried in large amounts of data this skill is becoming more important in data-saturated worldwhere the amount of data being produced dwarfs our ability to analyze it in this bookyou will learn the essential python data structures and the most common algorithms this book will provide basic knowledge of python and an insight into the exciting world of data algorithms we will look at algorithms that provide solutions to the most common problems in data analysisincluding sorting and searching dataas well as being able to extract important statistics from data with this easy-to-read bookyou will learn how to create complex data structures such as linked listsstacksand queuesas well as sorting algorithms such as bubble sort and insertion sort you will learn the common techniques and structures used in tasks such as preprocessingmodelingand transforming data we will also discuss how to organize your code in manageableconsistentand extendable way you will learn how to build components that are easy to understanddebugand use in different applications good understanding of data structures and algorithms cannot be overemphasized it is an important arsenal to have in being able to understand new problems and find elegant solutions to them by gaining deeper understanding of algorithms and data structuresyou may find uses for them in many more ways than originally intended you will develop consideration for the code you write and how it affects the amount of memory and cpu cycles to say the least code will not be written for the sake of itbut rather with mindset to do more using minimal resources when programs that have been thoroughly analyzed and scrutinized are used in real-life settingthe performance is delight to experience sloppy code is always recipe for poor performance whether you like algorithms purely from the standpoint of them being an intellectual exercise or them serving as source of inspiration in solving problemit is an engagement worthy of pursuit the python language has further opened the door for many professionals and students to come to appreciate programming the language is fun to work with and concise in its description of problems we leverage the language' mass appeal to examine number of widely studied and standardized data structures and algorithms the book begins with concise tour of the python programming language as suchit is not required that you know python before picking up this book
what this book covers python objectstypesand expressionsintroduces you to the basic types and objects of python we will give an overview of the language featuresexecution environmentand programming styles we will also review the common programming techniques and language functionality python data types and structuresexplains each of the five numeric and five sequence data typesas well as one mapping and two set data typesand examine the operations and expressions applicable to each type we will also give examples of typical use cases principles of algorithm designcovers how we can build additional structures with specific capabilities using the existing python data structures in generalthe data structures we create need to conform to number of principles these principles include robustnessadaptabilityreusabilityand separating the structure from function we look at the role iteration plays and introduce recursive data structures lists and pointer structurescovers linked listswhich are one of the most common data structures and are often used to implement other structuressuch as stacks and queues in this we describe their operation and implementation we compare their behavior to arrays and discuss the relative advantages and disadvantages of each stacks and queuesdiscusses the behavior and demonstrates some implementations of these linear data structures we give examples of typical applications treeswill look at how to implement binary tree trees form the basis of many of the most important advanced data structures we will examine how to traverse trees and retrieve and insert values we will also look at how to create structures such as heaps hashing and symbol tablesdescribes symbol tablesgives some typical implementationsand discusses various applications we will look at the process of hashinggive an implementation of hash tableand discuss the various design considerations graphs and other algorithmslooks at some of the more specialized structuresincluding graphs and spatial structures representing data as set of nodes and vertices is convenient in number of applicationsand from thiswe can create structures such as directed and undirected graphs we will also introduce some other structures and concepts such as priority queuesheapsand selection algorithms [
searchingdiscusses the most common searching algorithms and gives examples of their use for various data structures searching data structure is fundamental task and there are number of approaches sortinglooks at the most common approaches to sorting this will include bubble sortinsertion sortand selection sort selection algorithmscovers algorithms that involve finding statisticssuch as the minimummaximumor median elements in list there are number of approaches and one of the most common approaches is to first apply sort operation other approaches include partition and linear selection design techniques and strategiesrelates to how we look for solutions for similar problems when we are trying to solve new problem understanding how we can classify algorithms and the types of problem that they most naturally solve is key aspect of algorithm design there are many ways in which we can classify algorithmsbut the most useful classifications tend to revolve around either the implementation method or the design method implementationsapplicationsand toolsdiscusses variety of real-world applications these include data analysismachine learningpredictionand visualization in additionthere are libraries and tools that make our work with algorithms more productive and enjoyable what you need for this book the code in this book will require you to run python or higher python' default interactive environment can also be used to run the snippets of code in order to use other third-party librariespip should be installed on your system who this book is for this book would appeal to python developers basic knowledge of python is preferred but is not requirement no previous knowledge of computer concepts is assumed most of the concepts are explained with everyday scenarios to make it very easy to understand [
Conventions
In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.
Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "This repetitive construct could be a simple while loop or any other kind of loop."
A block of code is set as follows:

def dequeue(self):
    if not self.outbound_stack:
        while self.inbound_stack:
            self.outbound_stack.append(self.inbound_stack.pop())
    return self.outbound_stack.pop()

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

def dequeue(self):
    if not self.outbound_stack:
        while self.inbound_stack:
            self.outbound_stack.append(self.inbound_stack.pop())
    return self.outbound_stack.pop()

Any command-line input or output is written as follows:

python bubble.py

New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "Clicking the Next button moves you to the next screen."
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
reader feedback feedback from our readers is always welcome let us know what you think about this book-what you liked or disliked reader feedback is important for us as it helps us develop titles that you will really get the most out of to send us general feedbacksimply -mail feedback@packtpub comand mention the book' title in the subject of your message if there is topic that you have expertise in and you are interested in either writing or contributing to booksee our author guide at www packtpub com/authors customer support now that you are the proud owner of packt bookwe have number of things to help you to get the most from your purchase downloading the example code you can download the example code files for this book from your account at acktpub com if you purchased this book elsewhereyou can visit om/supportand register to have the files -mailed directly to you you can download the code files by following these steps log in or register to our website using your -mail address and password hover the mouse pointer on the support tab at the top click on code downloads errata enter the name of the book in the search box select the book for which you're looking to download the code files choose from the drop-down menu where you purchased this book from click on code download once the file is downloadedplease make sure that you unzip or extract the folder using the latest version ofwinrar -zip for windows zipeg izip unrarx for mac -zip peazip for linux [
the code bundle for the book is also hosted on github at ishing/python-data-structures-and-algorithma we also have other code bundles from our rich catalog of books and videos available at ingcheck them outerrata although we have taken every care to ensure the accuracy of our contentmistakes do happen if you find mistake in one of our books-maybe mistake in the text or the codewe would be grateful if you could report this to us by doing soyou can save other readers from frustration and help us improve subsequent versions of this book if you find any errataplease report them by visiting your bookclicking on the errata submission form linkand entering the details of your errata once your errata are verifiedyour submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the errata section of that title to view the previously submitted erratago to /supportand enter the name of the book in the search field the required information will appear under the errata section piracy piracy of copyrighted material on the internet is an ongoing problem across all media at packtwe take the protection of our copyright and licenses very seriously if you come across any illegal copies of our works in any form on the internetplease provide us with the location address or website name immediately so that we can pursue remedy please contact us at copyright@packtpub com with link to the suspected pirated material we appreciate your help in protecting our authors and our ability to bring you valuable content questions if you have problem with any aspect of this bookyou can contact us at questions@packtpub comand we will do our best to address the problem [
python objectstypesand expressions python is the language of choice for many advanced data tasks for very good reason python is one of the easiest advanced programming languages to learn intuitive structures and semantics mean that for people who are not computer scientistsbut maybe biologistsstatisticiansor the directors of start-uppython is straightforward way to perform wide variety of data tasks it is not just scripting languagebut full-featured objectoriented programming language in pythonthere are many useful data structures and algorithms built in to the language alsobecause python is an object-based languageit is relatively easy to create custom data objects in this bookwe will examine both python internal librariessome of the external librariesas well as learning how to build your own data objects from first principles this book does assume that you know python howeverif you are bit rustycoming from another languageor do not know python at alldon' worrythis first should get you quickly up to speed if notthen visit documentation at learn this programming language in this we will look at the following topicsobtaining general working knowledge of data structures and algorithms understanding core data types and their functions exploring the object-oriented aspects of the python programming language
understanding data structures and algorithms algorithms and data structures are the most fundamental concepts in computing they are the building blocks from which complex software is built having an understanding of these foundation concepts is hugely important in software design and this involves the following three characteristicshow algorithms manipulate information contained within data structures how data is arranged in memory what the performance characteristics of particular data structures are in this bookwe will examine this topic from several perspectives firstlywe will look at the fundamentals of the python programming language from the perspective of data structures and algorithms secondlyit is important that we have the correct mathematical tools we need to understand some fundamental concepts of computer science and for this we need mathematics by taking heuristics approachdeveloping some guiding principles means thatin generalwe do not need any more than high school mathematics to understand the principles of these key ideas another important aspect is evaluation measuring an algorithms performance involves understanding how each increase in data size affects operations on that data when we are working on large datasets or real-time applicationsit is essential that our algorithms and structures are as efficient as they can be finallywe need sound experimental design strategy being able to conceptually translate real-world problem into the algorithms and data structures of programming language involves being able to understand the important elements of problem and methodology for mapping these elements to programming structures to give us some insight into algorithmic thinkinglet' consider real-world example imagine we are at an unfamiliar market and we are given the task of purchasing list of items we assume that the market is laid out randomlyand each vendor sells random subset of itemssome of which may be on our list our aim is to minimize the price we pay for each item as well as minimize the time spent at the market one way to approach this is to write an algorithm like the following[
Repeat for each vendor:
    Does the vendor have items on my list, and is the cost less than the predicted cost for the item?
    If yes, buy and remove the item from the list; if no, move on to the next vendor.
If there are no more vendors, end.

This is a simple iterator, with a decision and an action. If we were to implement this, we would need data structures to define both the list of items we want to buy and the list of items of each vendor. We would need to determine the best way of matching items in each list, and we would need some sort of logic to decide whether to purchase or not. A sketch of one possible implementation appears at the end of this discussion.
There are several observations that we can make regarding this algorithm. Firstly, since the cost calculation is based on a prediction, we don't know what the real average cost is; if we underpredict the cost of an item, we come to the end of the market with items remaining on our list. Therefore, we need an efficient way to backtrack to the vendor with the lowest cost. Also, we need to understand what happens to the time it takes to compare items on our shopping list with items sold by each vendor as the number of items on our shopping list, or the number of items sold by each vendor, increases. The order in which we search through items and the shape of the data structures can make a big difference to the time it takes to do a search. Clearly, we would like to arrange our list, as well as the order in which we visit each vendor, in such a way that we minimize search time.
Also, consider what happens when we change the buy condition to purchase at the cheapest price, not just a below-average price. This changes the problem entirely. Instead of sequentially going from one vendor to the next, we need to traverse the market once and, with this knowledge, we can order our shopping list with regard to the vendors we want to visit. Obviously, there are many more subtleties involved in translating a real-world problem into an abstract construct such as a programming language. For example, as we progress through the market, our knowledge of the cost of a product improves, so our predicted average price variable becomes more accurate until, by the last stall, our knowledge of the market is perfect. Assuming any kind of backtracking algorithm incurs a cost, we can see cause to review our entire strategy. Conditions such as high price variability, the size and shape of our data structures, and the cost of backtracking all determine the most appropriate solution.
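The following short Python sketch is one possible rendering of the loop above. The data layout (a dict per vendor mapping item names to prices) and the predicted_cost table are assumptions made for illustration, not structures defined elsewhere in this book.

def shop(shopping_list, vendors, predicted_cost):
    # shopping_list:  set of item names still to buy
    # vendors:        list of dicts mapping item name -> price, one dict per stall
    # predicted_cost: dict mapping item name -> the price we expect to pay
    purchases = {}                                   # item -> (vendor index, price paid)
    for index, stall in enumerate(vendors):
        for item in list(shopping_list):             # copy, so we can remove while iterating
            price = stall.get(item)
            if price is not None and price < predicted_cost[item]:
                purchases[item] = (index, price)     # buy the item
                shopping_list.remove(item)           # and take it off the list
        if not shopping_list:                        # nothing left to buy
            break
    return purchases, shopping_list                  # leftovers would need backtracking

wanted = {'rice', 'fish', 'pepper'}
stalls = [{'rice': 4.0, 'pepper': 1.5}, {'fish': 7.0, 'rice': 3.5}]
expected = {'rice': 5.0, 'fish': 6.0, 'pepper': 2.0}
bought, remaining = shop(wanted, stalls, expected)   # 'fish' remains unbought in this data

Note that this sketch buys from the first acceptable vendor rather than the cheapest one; handling the cheapest-price condition or backtracking, as discussed above, would require a different strategy.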
python for data python has several built-in data structuresincluding listsdictionariesand setsthat we use to build customized objects in additionthere are number of internal librariessuch as collections and the math objectwhich allow us to create more advanced structures as well as perform calculations on those structures finallythere are the external libraries such as those found in the scipy packages these allow us to perform range of advanced data tasks such as logistic and linear regressionvisualizationand mathematical calculations such as operations on matrixes and vectors external libraries can be very useful for an outof-the-box solution howeverwe must also be aware that there is often performance penalty compared to building customized objects from the ground up by learning how to code these objects ourselveswe can target them to specific tasksmaking them more efficient this is not to exclude the role of external libraries and we will look at this in design techniques and strategies to beginwe will take an overview of some of the key language features that make python such great choice for data programming the python environment feature of the python environment is its interactive console allowing you to both use python as desktop programmable calculator and also as an environment to write and test snippets of code the read-evaluate-print loop of the console is very convenient way to interact with larger code basesuch as to run functions and methods or to create instances of classes this is one of the major advantages of python over compiled languages such as / +or javawhere the write-compile-test-recompile cycle can increase development time considerably compared to python' read evaluate print loop being able to type in expressions and get an immediate response can greatly speed up data science tasks there are some excellent distributions of python apart from the official cpython version two of the most popular are anaconda (canopy (their own developer environments both canopy and anaconda include libraries for scientificmachine learningand other data applications most distributions come with an editor there are also number of implementations of the python consoleapart from the cpython version most notable amongst these is the ipython/jupyter platform that includes webbased computational environment
Variables and expressions
To translate a real-world problem into one that can be solved by an algorithm, there are two interrelated tasks: firstly, select the variables, and secondly, find the expressions that relate to these variables. Variables are labels attached to objects; they are not the object itself. They are not containers for objects either. A variable does not contain the object; rather, it acts as a pointer or reference to an object. Consider the snippet shown below: there we create a variable, a, which points to a list object, and then another variable, b, which points to this same list object. When we append an element to this list object, the change is reflected in both a and b.
Python is a dynamically typed language. Variable names can be bound to different values and types during program execution. Each value is of a type, a string or integer for example; however, the name that points to this value does not have a specific type. This is different from languages such as C and Java, where a name represents a fixed size, type, and location in memory. This means that when we initialize variables in Python, we do not need to declare a type. Also, variables, or more specifically the objects they point to, can change type depending on the values assigned to them, as the second part of the snippet below shows.
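Here is a minimal sketch of the behavior just described; the concrete values are chosen only for illustration.

a = [2, 4, 6]        # a points to a list object
b = a                # b points to the same list object
b.append(8)          # appending through b ...
print(a)             # ... is visible through a as well: [2, 4, 6, 8]

x = 42               # x points to an integer object
x = 'forty-two'      # now x points to a string; the name itself has no fixed type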
Variable scope
It is important to understand the scoping rules of variables inside functions. Each time a function executes, a new local namespace is created. This represents a local environment that contains the names of the parameters and variables that are assigned by the function. To resolve a namespace when a function is called, the Python interpreter first searches the local namespace (that is, the function itself) and, if no match is found, it searches the global namespace. This global namespace is the module in which the function was defined. If the name is still not found, it searches the built-in namespace. Finally, if this fails, the interpreter raises a NameError exception. Consider the following code (the particular numbers are not important; they simply illustrate the behavior):

a = 10
b = 20

def my_function():
    global a
    a = 11      # rebinds the global a
    b = 21      # this b is local to the function

my_function()
print(a)        # prints 11
print(b)        # prints 20

Here is the output of the preceding code:

11
20

In the preceding code, we define two global variables. We need to tell the interpreter, using the keyword global, that inside the function we are referring to the global variable a. When we change this variable, the change is reflected in the global scope. However, the variable b that we assign inside the function is local to the function, and any changes made to it there are not reflected in the global scope. When we run the function and print b, we see that it retains its global value.

Flow control and iteration
Python programs consist of a sequence of statements. The interpreter executes each statement in order until there are no more statements. This is true both of files run as the main program and of files that are loaded via import. All statements, including variable assignments, function definitions, class definitions, and module imports, have equal status. There are no special statements that have higher priority than any other, and every statement can be placed anywhere in a program. There are two main ways of controlling the flow of program execution: conditional statements and loops.
The if, else, and elif statements control the conditional execution of statements. The general format is a series of if and elif statements followed by a final else statement:

x = 'one'
if x == 0:                    # the exact literals compared here are only illustrative
    print('False')
elif x == 1:
    print('True')
else:
    print('Something else')   # prints 'Something else'

Note the use of the == operator to test for the same values. This returns True if the values are equal; it returns False otherwise. Note also that setting x to a string will print 'Something else' rather than generate a type error, as may happen in languages that are not dynamically typed. Dynamically typed languages such as Python allow flexible assignment of objects with different types.
The other way of controlling program flow is with loops. They are created using the while or for statements; for example, see the short snippet below.
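A small loop example in the same spirit (the collection and counter values are chosen only for illustration):

words = ['cat', 'dog', 'elephant']
for word in words:        # for iterates over the items of any iterable
    print(word)

x = 0
while x < 3:              # while repeats as long as its condition holds
    print(x)
    x += 1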
All data types in Python are objects. In fact, pretty much everything is an object in Python, including modules, classes, and functions, as well as literals such as strings and integers. Each object in Python has a type, a value, and an identity. When we write greet = "hello world", we are creating an instance of a string object with the value "hello world" and the identity of greet. The identity of an object acts as a pointer to the object's location in memory. The type of an object, also known as the object's class, describes the object's internal representation as well as the methods and operations it supports. Once an instance of an object is created, its identity and type cannot be changed.
We can get the identity of an object by using the built-in function id(). This returns an identifying integer and, on most systems, this refers to its memory location, although you should not rely on this in any of your code. Also, there are a number of ways to compare objects; for example:

a == b                 # True if a and b have the same value
a is b                 # True if a and b are the same object
type(a) is type(b)     # True if a and b are the same type

An important distinction needs to be made between mutable and immutable objects. Mutable objects, such as lists, can have their values changed. They have methods, such as insert() or append(), that change an object's value. Immutable objects, such as strings, cannot have their values changed, so when we run their methods, they simply return a value rather than change the value of an underlying object. We can, of course, use this value by assigning it to a variable or using it as an argument in a function.

Strings
Strings are immutable sequence objects, with each character representing an element in the sequence. As with all objects, we use methods to perform operations. Strings, being immutable, do not change the instance; each method simply returns a value. This value can be stored as another variable or given as an argument to a function or method, as the short example below shows.
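For instance (the particular values here are illustrative), a list method changes its object in place, whereas a string method leaves the original object untouched and hands back a new value:

fruit = ['apple', 'pear']
fruit.append('orange')        # list methods mutate the object in place
print(fruit)                  # ['apple', 'pear', 'orange']

name = 'alice'
shout = name.upper()          # string methods return a new value ...
print(name)                   # alice   (the original object is unchanged)
print(shout)                  # ALICE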
The following table is a list of some of the most commonly used string methods and their descriptions:

count(substring, [start, end]): counts the occurrences of a substring, with optional start and end parameters.
expandtabs([tabsize]): replaces tabs with spaces.
find(substring, [start, end]): returns the index of the first occurrence of a substring, or returns -1 if the substring is not found.
isalnum(): returns True if all characters are alphanumeric; returns False otherwise.
isalpha(): returns True if all characters are alphabetic; returns False otherwise.
isdigit(): returns True if all characters are digits; returns False otherwise.
join(t): joins the strings in sequence t.
lower(): converts the string to all lowercase.
replace(old, new, [maxreplace]): replaces the old substring with a new substring.
strip([characters]): removes whitespace, or the optional characters given.
split([separator], [maxsplit]): splits a string separated by whitespace, or an optional separator; returns a list.
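To show a few of these methods in action, here is a short, self-contained example; the sample text is made up for illustration.

quote = '  the quick brown fox  '
clean = quote.strip()                  # 'the quick brown fox'
words = clean.split()                  # ['the', 'quick', 'brown', 'fox']
print(clean.count('o'))                # 2
print(clean.find('brown'))             # 10
print('-'.join(words))                 # the-quick-brown-fox
print(clean.replace('fox', 'dog'))     # the quick brown dog

Each call above leaves the original string objects unchanged and simply returns a new value, which is why the results are captured in new variables or printed directly.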