Mauricio G. C. Resende and Celso C. Ribeiro
Optimization by GRASP: Greedy Randomized Adaptive Search Procedures
Mauricio G. C. Resende
Modeling and Optimization Group (MOP), Amazon.com, Inc., Seattle, Washington, USA
Celso C. Ribeiro
Instituto de Ciência da Computação, Universidade Federal Fluminense, Niterói, Rio de Janeiro, Brazil
ISBN 978-1-4939-6528-1
e-ISBN 978-1-4939-6530-4
DOI 10.1007/978-1-4939-6530-4
Library of Congress Control Number: 2016948721
© Springer Science+Business Media New York 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
Printed on acid-free paper
This Springer imprint is published by Springer Nature
The registered company is Springer Science+Business Media LLC
The registered company address is: 233 Spring Street, New York, NY 10013, U.S.A
In memory of
David Stifler Johnson
Foreword
In recent years, advances in metaheuristics have given practitioners a powerful framework for making key decisions in problems as diverse as telecommunications network design, supply chain planning, and scheduling in transportation systems. GRASP is a metaheuristic that has enjoyed wide success in practice, with an extraordinarily broad range of applications to real-world optimization problems. Starting from the seminal 1989 paper by Feo and Resende, over the past 25 years, a large body of work on greedy randomized adaptive search procedures has emerged. A vast array of papers on GRASP have been published in the open literature, and numerous MSc and PhD theses have been written on the subject. This book is a timely and welcome addition to the metaheuristics literature, bringing together this body of work in a single volume.
The account of GRASP in this book is especially commendable for its readability, covering many facets of this metaheuristic, such as solution construction, local search, hybridizations, and extensions. It is organized into four main sections: introduction to combinatorial optimization, fundamentals of heuristic search, basic GRASP, and advanced topics. The book can be used as an introductory text, not only to GRASP but also to combinatorial optimization, local search, path-relinking, and metaheuristics in general. For the more advanced reader, chapters on hybridization with path-relinking and parallel and continuous GRASP present these topics in a clear and concise fashion. The book additionally offers a very complete annotated bibliography of GRASP and combinatorial optimization.
For the practitioner who needs to solve combinatorial optimization problems, the book provides implementable templates for all algorithms covered in the text.
This book, with its excellent overview of the state of the art of GRASP, should appeal to researchers and practitioners of combinatorial optimization who have a need to find optimal or near-optimal solutions to hard optimization problems.
Fred Glover
Boulder, CO, USA
May 2016
Preface
Greedy randomized adaptive search procedures, or GRASP, were introduced by T. Feo and M. Resende in 1989 as a probabilistic heuristic for solving hard set covering problems. Soon after its introduction, GRASP was recognized as a general-purpose metaheuristic and was applied to a number of other combinatorial optimization problems, including scheduling problems, the quadratic assignment problem, the satisfiability problem, and graph planarization. At the Spring 1991 ORSA/TIMS meeting in Nashville, T. Feo and M. Resende presented the first tutorial on GRASP as a metaheuristic, which was followed by their tutorial published in the Journal of Global Optimization in 1995. Since then, GRASP has gained wide acceptance as an effective and easy-to-implement metaheuristic for finding optimal or near-optimal solutions to combinatorial optimization problems.
This book has been many years in planning. Though many books have been written about other metaheuristics, including genetic algorithms, tabu search, simulated annealing, and ant colony optimization, a book on GRASP had yet to be published. Since the subject has had 25 years to mature, we feel that this is the right time for such a book. After Springer agreed to publish this book, we began the task of writing it in 2010.
We have been collaborating on the design and implementation of GRASP heuristics since 1994 when we decided, at the TIMS XXXII International Meeting in Anchorage, Alaska, to partner in designing a GRASP for graph planarization. Since then, we have worked together on a number of papers on GRASP, including three highly cited surveys.
This book is aimed at students, engineers, scientists, operations researchers, application developers, and other specialists who are looking for the most appropriate and recent GRASP-based optimization tools to solve particular problems. It focuses on algorithmic and computational aspects of applied optimization with GRASP. Emphasis is given to the end user, providing sufficient information on the broad spectrum of advances in applied optimization with GRASP.
The book grew from talks and short courses that we gave at many universities, companies, and conferences. Optimization by GRASP turned out to be not only a book on GRASP but also a pedagogical book on heuristics and metaheuristics: their basics, foundations, extensions, and applications. We motivate the subject with a number of hard combinatorial optimization problems, expressed through simple descriptions, in the first chapter. This is followed by an overview of complexity theory that makes the case for heuristics and metaheuristics as very effective strategies for solving hard or large instances of the so-called intractable NP-hard optimization problems. In our view, most metaheuristics share a number of common building blocks that are combined following different strategies to overcome premature local optimality. Such building blocks are explored, for example, in the chapters or sections on greedy algorithms, randomization, local search, cost updates and candidate lists, solution perturbations and ejection chains, adaptive memory and elite sets, path-relinking, runtime distributions and probabilistic analysis tools, parallelization strategies, and implementation tricks, among other topics. Preliminary versions of this text have been used over the last three years as a textbook for the course on metaheuristics in the graduate program in computer science at Universidade Federal Fluminense, Brazil, complemented with specific reading material about other metaheuristics. There, the text matured and was exposed to criticism and suggestions from many students and colleagues.
The book begins in Chapter 1 with an introduction to optimization and a discussion of solution methods for discrete optimization, such as exact and approximate methods, including heuristics and metaheuristics.
We then provide in Chapter 2 a short tour of combinatorial optimization and computational complexity, in which we introduce metaheuristics as a very effective tool for approximately solving hard optimization problems.
This is followed in Chapter 3 with solution construction methods, including greedy algorithms and their relation to matroids, adaptive greedy and semi-greedy algorithms, and solution repair procedures.
Chapter 4 focuses on local search. We discuss solution representation, neighborhoods, and the solution space graph. We then focus on local search methods, covering neighborhood search strategies, cost function updates, and candidate list strategies. Ejection chains and perturbations, as well as other strategies to escape from local optima, are also discussed.
Chapter 5 introduces the basic GRASP as a semi-greedy multistart procedure with local search. Techniques for accelerating the basic procedure are pointed out. Probabilistic stopping criteria for GRASP are also discussed. The chapter concludes with a short introduction to the application of GRASP as a heuristic for multiobjective optimization.
Chapter 6 focuses on time-to-target plots (or runtime distributions) for comparing exponentially distributed runtimes, such as those of GRASP heuristics, and runtimes with general distributions, such as those of GRASP with path-relinking. Runtime distributions will be used extensively throughout this book to assess the performance of stochastic search algorithms.
Extended GRASP construction heuristics are covered in Chapter 7. The chapter begins with reactive GRASP and then covers topics such as the probabilistic choice of the construction parameter, random plus greedy and sampled greedy constructions, construction by cost perturbation, and the use of bias functions in construction. The chapter continues with the use of memory, learning, and the proximate optimality principle in construction, pattern-based construction, and Lagrangean GRASP.
Path-relinking is introduced in Chapter 8. The chapter provides a template for path-relinking and discusses its mechanics and implementation strategies. Other topics related to path-relinking are also discussed in this chapter, including how to deal with infeasibilities in path-relinking, how to randomize path-relinking, and external path-relinking and its relation to diversification.
The hybridization of GRASP with path-relinking is covered in Chapter 9. The chapter begins by providing motivation for hybridizing path-relinking with GRASP so as to equip GRASP with a memory mechanism. It then goes on to discuss elite sets and how they can be used to connect GRASP and path-relinking. The chapter ends with a discussion of evolutionary path-relinking and restart mechanisms for GRASP with path-relinking heuristics.
The implementation of GRASP on parallel machines is the topic of Chapter 10. The chapter introduces two types of strategies for the parallel implementation of GRASP: multiple-walk independent-thread and multiple-walk cooperative-thread strategies. It then goes on to illustrate these implementation strategies with three examples: the three-index assignment problem, the job shop scheduling problem, and the 2-path network design problem.
Continuous GRASP extends GRASP heuristics from discrete optimization to continuous global optimization. This is the topic of Chapter 11. After establishing the similarities and differences between GRASP for discrete optimization and continuous GRASP (or simply C-GRASP), the chapter describes the construction and local search phases of C-GRASP and concludes with several examples applying C-GRASP to multimodal box-constrained optimization.
The book concludes with Chapter 12, where four implementations of GRASP and GRASP with path-relinking are described in detail. These implementations are for the 2-path network design problem, the graph planarization problem, the unsplittable network flow problem, and the maximum cut problem.
Each chapter concludes with bibliographical notes.
Writing this book was certainly a long and arduous task, but most of all it has been an amazing experience. The many trips between Holmdel, Seattle, and Rio de Janeiro and the periods the authors spent visiting each other along the last six years have been gratifying and contributed much to fortify an already strong friendship. We had a lot of fun and we are very happy with the outcome of this project. We will be even happier if the readers appreciate reading and using this book as much as we enjoyed writing it.
Mauricio G. C. Resende
Celso C. Ribeiro
Seattle, WA, USA
Rio de Janeiro, RJ, Brazil
May 2016
Frances Stark
b. 1967; Newport Beach, CA
I must explain, specify, rationalize, classify, etc., 2008.
Acrylic, fiber-tipped pen, graphite pencil, inset laser print, and paper
collage on paper
Acknowledgments
Over the years, we have collaborated with many people on research related to topics covered in this book. We make an attempt to acknowledge all of them below, in alphabetical order, and apologize in case someone was omitted from this long list: James Abello, Vaneet Aggarwal, Renata Aiex, Daniel Aloise, Dario Aloise, Adriana Alvim, Diogo Andrade, Alexandre Andreatta, David Applegate, Aletéia Araújo, Aaron Archer, Silvio Binato, Ernesto Birgin, Isabelle Bloch, Maria Claudia Boeres, Maria Cristina Boeres, Julliany Brandão, Luciana Buriol, Vicente Campos, Suzana Canuto, Sergio Carvalho, W.A. Chaovalitwongse, Bruno Chiarini, Clayton Commander, Abraham Duarte, Alexandre Duarte, Christophe Duhamel, Sandra Duni-Ekişog̃lu, João Lauro Facó, Djalma Falcão, Haroldo Faria Jr., Tom Feo, Eraldo Fernandes, Daniele Ferone, Paola Festa, Rafael Frinhani, Micael Gallego, Fred Glover, Fernando Carvalho-Gomes, José Fernando Gonçalves, José Luis González-Velarde, Erico Gozzi, Allison Guedes, William Hery, Michael Hirsch, Rubén Interian, David Johnson, Howard Karloff, Yong Li, X. Liu, David Loewenstern, Irene Loiseau, Abilio Lucena, Rafael Martí, Cristian Martínez, Simone Martins, Geraldo Mateus, Thelma Mavridou, Marcelo Medeiros, Rafael Melo, Claudio Meneses, Renato Moraes, Luis Morán-Mirabal, Leonardo Musmanno, Fernanda Nakamura, Mariá Nascimento, Thiago Noronha, Carlos Oliveira, Panos Pardalos, Luciana Pessoa, Alexandre Plastino, Leonidas Pitsoulis, Marcelo Prais, Fábio Protti, Tianbing Qian, Michelle Ragle, Sanguthevar Rajasekaran, Martin Ravetti, Vinod Rebello, Lucia Resende, Alberto Reyes, Caroline Rocha, Noemi Rodriguez, Isabel Rosseti, Jesus Sánchez-Oro, Andréa dos Santos, Ricardo Silva, Stuart Smith, Cid de Souza, Mauricio de Souza, Reinaldo Souza, Fernando Stefanello, Sandra Sudarksy, Franklina Toledo, Giorgio de Tomi, Gerardo Toraldo, Marco Tsitselis, Eduardo Uchoa, Osman Ulular, Sebastián Urrutia, Reinaldo Vallejos, Álvaro Veiga, Ana Viana, Dalessandro Vianna, Carlos Eduardo Vieira, Eugene Vinod, and Renato Werneck.
The authors are particularly indebted to Simone Martins for her careful revision of this manuscript. We are also very thankful to Fred Glover for kindly agreeing to write the foreword of this book.
The second author is grateful to Jiosef Fainberg, Julieta Guevara, Nelson Maculan, and Segyu Rinpoche for their friendship and support throughout the preparation of this book.
Finally, a special thanks goes to the artist Frances Stark for agreeing to let us use a reproduction of an image of her collage I must explain, specify, rationalize, classify, etc. that appears on page 13 of this book. The text in this piece is taken from Witold Gombrowicz's novel Ferdydurke. In the words of art historian Alex Kitnick (Kitnick, 2013), the work "has to do with beginnings, blank pages, and the question of artistic labor, but it is also concerned with how one arranges oneself in relation to language. The question here is less how to make a first mark than how to organize a set of information and desires in relation to one's own person." We feel that it perfectly illustrates the effort we made to collect, organize, explain, and convey as clearly as possible the fundamentals, principles, and applications of optimization by GRASP.
May 2016
Mauricio G. C. Resende
Celso C. Ribeiro
Contents
1 Introduction
1.1 Optimization problems
1.2 Motivation
1.3 Exact vs. approximate methods
1.4 Metaheuristics
1.5 Graphs: basic notation and definitions
1.6 Organization
1.7 Bibliographical notes
2 A short tour of combinatorial optimization and computational complexity
2.1 Problem formulation
2.2 Computational complexity
2.2.1 Polynomial-time algorithms
2.2.2 Characterization of problems and instances
2.2.3 One problem has three versions
2.2.4 The classes P and NP
2.2.5 Polynomial transformations and NP-complete problems
2.2.6 NP-hard problems
2.2.7 The class co-NP
2.2.8 Pseudo-polynomial algorithms and strong NP-completeness
2.2.9 PSPACE and the polynomial hierarchy
2.3 Solution approaches
2.4 Bibliographical notes
3 Solution construction and greedy algorithms
3.1 Greedy algorithms
3.2 Matroids
3.3 Adaptive greedy algorithms
3.4 Semi-greedy algorithms
3.5 Repair procedures
3.6 Bibliographical notes
4 Local search
4.1 Solution representation
4.2 Neighborhoods and search space graph
4.3 Implementation strategies
4.3.1 Neighborhood search
4.3.2 Cost function update
4.3.3 Candidate lists
4.3.4 Circular search
4.4 Ejection chains and perturbations
4.5 Going beyond the first local optimum
4.5.1 Tabu search and short-term memory
4.5.2 Variable neighborhood descent
4.6 Final remarks
4.7 Bibliographical notes
5 GRASP: The basic heuristic
5.1 Random multistart
5.2 Semi-greedy multistart
5.3 GRASP
5.4 Accelerating GRASP
5.5 Stopping GRASP
5.5.1 Probabilistic stopping rule
5.5.2 Gaussian approximation for GRASP iterations
5.5.3 Stopping rule implementation
5.6 GRASP for multiobjective optimization
5.7 Bibliographical notes
6 Runtime distributions
6.1 Time-to-target plots
6.2 Runtime distribution of GRASP
6.3 Comparing algorithms with exponential runtime distributions
6.4 Comparing algorithms with general runtime distributions
6.5 Numerical applications to sequential algorithms
6.5.1 DM-D5 and GRASP algorithms for server replication
6.5.2 Multistart and tabu search algorithms for routing and wavelength assignment
6.5.3 GRASP algorithms for 2-path network design
6.6 Comparing and evaluating parallel algorithms
6.7 Bibliographical notes
7 Extended construction heuristics
7.1 Reactive GRASP
7.2 Probabilistic choice of the RCL parameter
7.3 Random plus greedy and sampled greedy
7.4 Cost perturbations
7.5 Bias functions
7.6 Memory and learning
7.7 Proximate optimality principle in construction
7.8 Pattern-based construction
7.9 Lagrangean GRASP heuristics
7.9.1 Lagrangean relaxation and subgradient optimization
7.9.2 A template for Lagrangean heuristics
7.9.3 Lagrangean GRASP
7.10 Bibliographical notes
8 Path-relinking
8.1 Template and mechanics of path-relinking
8.1.1 Restricted neighborhoods
8.1.2 A template for forward path-relinking
8.2 Other implementation strategies for path-relinking
8.2.1 Backward and back-and-forward path-relinking
8.2.2 Mixed path-relinking
8.2.3 Truncated path-relinking
8.3 Minimum distance required for path-relinking
8.4 Dealing with infeasibilities in path-relinking
8.5 Randomization in path-relinking
8.6 External path-relinking and diversification
8.7 Bibliographical notes
9 GRASP with path-relinking
9.1 Memoryless GRASP
9.2 Elite sets
9.3 Hybridization of GRASP with path-relinking
9.4 Evolutionary path-relinking
9.5 Restart strategies
9.6 Bibliographical notes
10 Parallel GRASP heuristics
10.1 Multiple-walk independent-thread strategies
10.2 Multiple-walk cooperative-thread strategies
10.3 Some parallel GRASP implementations
10.3.1 Three-index assignment
10.3.2 Job shop scheduling
10.3.3 2-path network design problem
10.4 Bibliographical notes
11 GRASP for continuous optimization
11.1 Box-constrained global optimization
11.2 C-GRASP for continuous box-constrained global optimization
11.3 C-GRASP construction phase
11.4 Approximate discrete line search
11.5 C-GRASP local search
11.6 Computing global optima with C-GRASP
11.7 Bibliographical notes
12 Case studies
12.1 2-path network design problem
12.1.1 GRASP with path-relinking for 2-path network design
12.2 Graph planarization
12.2.1 Two-phase heuristic
12.2.2 GRASP for graph planarization
12.2.3 Enlarging the planar subgraph
12.3 Unsplittable multicommodity network flow: Application to bandwidth packing
12.3.1 Problem formulation
12.3.2 GRASP with path-relinking for PVC routing
12.4 Maximum cut in a graph
12.4.1 GRASP with path-relinking for the maximum cut problem
12.5 Bibliographical notes
References
Index
# 1. Introduction
In this first chapter, we introduce general optimization problems and the class of combinatorial optimization problems. As a motivation, we present a number of fundamental combinatorial optimization problems that will be revisited in later chapters of this book. We also contrast exact and approximate solution methods and trace a brief history of approximate algorithms (or heuristics), from A∗ to metaheuristics, passing through greedy algorithms and local search. We motivate the reader and outline, chapter by chapter, the material in the book. Finally, we introduce basic notation and definitions that will be used throughout the book.
## 1.1 Optimization problems
In its most general form, an optimization problem can be cast as

min f(S) (1.1)

subject to

S ∈ F, (1.2)
where F is a feasible set of solutions and f is a real-valued objective function that associates each feasible solution S ∈ F to its cost or value f(S). In the case of a minimization problem we seek a solution minimizing f(S), while in the case of a maximization problem we search for a solution that maximizes f(S) over the entire domain F of feasible solutions.
A global optimum of a minimization problem is a solution S∗ ∈ F such that f(S∗) ≤ f(S) for all S ∈ F. Similarly, a global optimum of a maximization problem is a solution S∗ ∈ F such that f(S∗) ≥ f(S) for all S ∈ F.
Optimization problems are commonly classified into two groups: those with continuous variables, which in principle can take any real value, and those represented by discrete variables, which can take only a finite or a countably infinite set of values. The latter are called combinatorial optimization problems and reduce to the search for a solution in a finite (or, alternatively, countably infinite) set, which can typically be formed by binary or integer variables, permutations, paths, trees, cycles, or graphs. Most of this book is concerned with combinatorial optimization problems with a single objective function, although Section 5.6 briefly addresses approaches for multiobjective problems and Chapter 11 describes a method for continuous problems.
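Since a combinatorial optimization problem amounts to searching a finite solution set, the definition can be illustrated by exhaustive enumeration. The following is a minimal sketch (the problem data and function names are illustrative, and enumeration is practical only for very small instances):

```python
from itertools import product

def brute_force_min(n, feasible, cost):
    """Enumerate all binary vectors of length n and keep the feasible
    one with minimum cost. Only practical for very small n."""
    best, best_cost = None, float("inf")
    for x in product((0, 1), repeat=n):
        if feasible(x) and cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost

# Toy instance: pick a subset of items whose weights sum to at most 5,
# minimizing the negative of the total value (i.e., maximizing value).
weights = [2, 3, 4]
values = [3, 4, 5]
sol, c = brute_force_min(
    3,
    feasible=lambda x: sum(w * xi for w, xi in zip(weights, x)) <= 5,
    cost=lambda x: -sum(v * xi for v, xi in zip(values, x)),
)
```

Here the feasible set F is encoded by the `feasible` predicate and f by `cost`; the exponential growth of the enumeration is precisely what motivates the heuristics studied in this book.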
Combinatorial optimization problems and their applications abound in the literature and in real life, as will be illustrated later in this book. As a motivation, some fundamental combinatorial optimization problems are presented in the next section, together with examples of their basic applications. These problems will be revisited many times in the chapters that follow. In particular, they will be formalized and discussed in detail in the next chapter, where we show that some combinatorial optimization problems are intrinsically harder to solve than others. By harder, we mean that state-of-the-art algorithms to solve them can be very expensive in terms of the computation time needed to find a global optimum, or that only small problems can be solved in a reasonable amount of time.
Understanding the inherent computational complexity of each problem is an absolute prerequisite for the identification and development of an appropriate, effective, and efficient algorithm for its solution.
## 1.2 Motivation
We motivate our introductory discussion with the description of six typical and fundamental combinatorial optimization problems.
Shortest path problem
Suppose a number of cities are distributed in a region and we want to travel from city s to city t. The distances between each pair of cities are known beforehand. We can either go directly from s to t if there is a road directly connecting these two cities, or start in s, traverse one or more cities, and end up in t. A path from s to t is defined to be a sequence of two or more cities that starts in s and ends in t. The length of a path is defined to be the sum of the distances between consecutive cities in this path. In the shortest path problem , we seek, among all paths from s to t, one which has minimum length. ■
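The shortest path problem is one of the few problems in this list that admits an efficient exact algorithm. A minimal sketch of Dijkstra's classical method on a hypothetical road network (the graph and its distances are made up for illustration):

```python
import heapq

def dijkstra(adj, s, t):
    """Shortest path length from s to t; adj maps a city to a list of
    (neighbor, distance) pairs with non-negative distances."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # t is unreachable from s

# Illustrative network: going s -> b -> a -> t is shorter than s -> a -> t.
roads = {"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": [("t", 5)]}
```

Here `dijkstra(roads, "s", "t")` returns the length of the shortest path, not the path itself; recording predecessors would recover the sequence of cities.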
Minimum spanning tree problem
Suppose that a number of points spread out on the plane have to be interconnected. Once again, the distances between each pair of points are known beforehand. Some points have to be made pairwise connected, so as to establish a unique path between any two points. In the minimum spanning tree problem , we seek to determine which pairs of points will be directly connected such that the sum of the distances between the selected pairs is minimum. ■
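The minimum spanning tree problem is also efficiently solvable; a minimal sketch of Kruskal's classical greedy algorithm (the point set and distances below are illustrative):

```python
def kruskal(n, edges):
    """Minimum spanning tree by Kruskal's algorithm: scan edges in
    nondecreasing length and keep an edge iff it joins two components."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # u and v are in different components
            parent[ru] = rv     # merge the components
            chosen.append((u, v))
            total += w
    return total, chosen

# Four points; edge format: (distance, point, point).
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
total, chosen = kruskal(4, edges)
```

The greedy rule here is provably optimal, a fact tied to the matroid structure discussed in Chapter 3.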
Steiner tree problem in graphs
Assume that a number of terminals (or clients) have to be connected by optical fibers. The terminals can be connected either directly or using a set of previously located hubs at fixed positions. The distances between each pair of points (be they terminals or hubs) are known beforehand. In the Steiner tree problem in graphs , we look for a network connecting terminals and hubs such that there is exactly one path between any two terminals and the total distance spanned by the optical fibers is minimum. ■
Maximum clique problem
Consider the global friendship network where pairs of people are considered to be either friends or not. In the maximum clique problem, we seek to determine the largest set of people for which each pair of people in the set are mutual friends. ■
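For tiny instances, the maximum clique problem can be illustrated by exhaustive search over all subsets; the friendship relation below is a made-up example (this brute-force approach grows exponentially with the number of people):

```python
from itertools import combinations

def max_clique(people, friends):
    """Largest subset in which every pair is in the friendship relation;
    exhaustive check from largest subsets down, feasible only for tiny
    instances."""
    for size in range(len(people), 0, -1):
        for group in combinations(people, size):
            if all(frozenset(p) in friends for p in combinations(group, 2)):
                return set(group)
    return set()

# Friendship as a set of unordered pairs: a-b, b-c, a-c, c-d.
friends = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}
```

On this instance the largest mutual-friend group is {a, b, c}, since d is friends only with c.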
Knapsack problem
Consider a hiker who needs to pack a knapsack with a number of items to take along on a hike. The knapsack has a maximum weight capacity. Each item has a given weight and some utility to the hiker. If all of the items fit in the knapsack, the hiker packs them and goes off. However, the entire set of items may not fit in the knapsack and the hiker will need to determine which items to take. The knapsack problem consists in finding a subset of items with maximum total utility, among all sets of items that fit in the knapsack. ■
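The knapsack problem can be solved exactly by a classical dynamic program over capacities when the weights are small integers; a minimal sketch with illustrative data:

```python
def knapsack(weights, utilities, capacity):
    """0/1 knapsack by dynamic programming over capacities.
    best[c] = maximum utility achievable with total weight <= c."""
    best = [0] * (capacity + 1)
    for w, u in zip(weights, utilities):
        for c in range(capacity, w - 1, -1):  # descend so each item is used once
            best[c] = max(best[c], best[c - w] + u)
    return best[capacity]
```

For example, with item weights [2, 3, 4], utilities [3, 4, 5], and capacity 5, the hiker's best choice packs the first two items. Note that the running time grows with the numeric value of the capacity, which relates to the pseudo-polynomial algorithms discussed in Chapter 2.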
Traveling salesman problem
Consider a traveling salesman who needs to visit all cities in a given sales territory. The salesman must begin and end the tour in a given city and visit each other city in the territory exactly once. Since each city must be visited only once, a tour can be represented by a circular permutation of the cities. Assuming the distances between each pair of cities are known beforehand, the objective of the traveling salesman problem is to determine a permutation of the cities that minimizes the total distance traveled. ■
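For a handful of cities, the traveling salesman problem can be illustrated by enumerating the circular permutations directly; the distance matrix below is illustrative (the approach is hopeless beyond small instances, which is what motivates heuristics):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumerating circular permutations that start at
    city 0; dist is a symmetric matrix of pairwise distances."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)  # close the tour back at city 0
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# Illustrative 4-city instance.
d = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
```

Fixing city 0 as the start exploits the circular-permutation representation: rotations of a tour have the same length, so only (n-1)! permutations need to be examined.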
## 1.3 Exact vs. approximate methods
An exact method or optimization method for solving an optimization problem is one that is guaranteed to produce, in finite time, a global optimum for this problem and a proof of its optimality, in case one exists, or otherwise show that no feasible solution exists. Globally optimal solutions are often referred to as exact optimal solutions . Among the many exact methods for solving combinatorial optimization problems, we find algorithmic paradigms such as cutting planes , dynamic programming , backtracking , branch-and-bound (together with its variants and extensions, such as branch-and-cut and branch-and-price ), and implicit enumeration . Some of these paradigms can be viewed as tree search procedures, in the sense that they start from a feasible solution (which corresponds to the root of the tree) and carry out the search for the optimal solution by generating and scanning the nodes of a subtree of the solution space (whose nodes correspond to problem solutions).
Chapter 2 shows that efficient exact algorithms are not known (and are unlikely to exist) for a broad class of optimization problems classified as NP-hard. These problems are often referred to as intractable. Even though the size of the problems that can be solved to optimality (exactly) has steadily increased due to algorithmic and technological developments, there are problems (or problem instances) that are not amenable to solution by exact methods. Other approaches, based on different paradigms, are needed to tackle such hard and large optimization problems.
As opposed to exact methods, approximate methods are those that provide feasible solutions that, however, are not necessarily optimal. Approximate methods usually run faster than exact methods. As a consequence, approximate methods are capable of handling larger problem instances than are exact methods. In this book, we use the terms heuristic and approximate method interchangeably.
Relevant work on heuristics or approximate algorithms for combinatorial optimization problems can be traced back to the origins of the field of Artificial Intelligence in the 1960s, with the development and applications of A∗ search .
Constructive heuristics are those that build a feasible solution from scratch. They are often based on greedy algorithms and their connections with matroid theory. Greedy algorithms and their extensions will be thoroughly studied in Chapter 3.
Local search procedures start from a feasible solution and improve it by successive small modifications until a solution that cannot be further improved is encountered. Although they often provide high-quality solutions whose values are close to those of optimal solutions, in some situations they can become prematurely trapped in low-quality solutions. Local search heuristics are explored in Chapter 4.
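A minimal sketch of such a procedure, using first-improvement descent over a bit-flip neighborhood (the cost function and all names are illustrative):

```python
def local_search(x, cost, neighbors):
    """First-improvement descent: move to the first strictly better
    neighbor; stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        for y in neighbors(x):
            if cost(y) < cost(x):
                x, improved = y, True
                break
    return x

def flip_neighbors(x):
    # Bit-flip neighborhood of a binary tuple: one position changed.
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

# Toy cost: Hamming distance to the target vector (1, 0, 1).
target = (1, 0, 1)
cost = lambda x: sum(a != b for a, b in zip(x, target))
result = local_search((0, 0, 0), cost, flip_neighbors)
```

On this toy cost every descent reaches the global optimum, but on harder landscapes the same procedure stops at the first local optimum it meets, which is exactly the limitation metaheuristics address.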
Metaheuristics are general high-level procedures that coordinate simple heuristics and rules to find high-quality solutions to computationally difficult optimization problems. Metaheuristics are often based on distinct paradigms and offer different mechanisms to go beyond the first solution obtained that cannot be improved by local search. They are among the most effective solution strategies for solving combinatorial optimization problems in practice and very frequently produce much better solutions than those obtained by the simple heuristics and rules they coordinate.
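As a preview of how a metaheuristic coordinates simple components, the following sketch combines a randomized construction with local search in a multistart loop, in the spirit of the GRASP heuristics developed in this book (the toy problem and all names are illustrative, not the book's template):

```python
import random

def multistart(construct, local_search, cost, iterations=100, seed=0):
    """Generic multistart skeleton: each iteration builds a randomized
    solution and improves it by local search, keeping the best overall."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        x = construct(rng)   # e.g., a semi-greedy construction
        x = local_search(x)  # descend to a local optimum
        if cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost

# Toy problem: minimize (x - 7)^2 over the integers.
cost = lambda x: (x - 7) ** 2

def descend(x):
    # Move to a +/-1 neighbor while it improves the cost.
    while True:
        better = [y for y in (x - 1, x + 1) if cost(y) < cost(x)]
        if not better:
            return x
        x = better[0]

best, c = multistart(lambda rng: rng.randrange(10), descend, cost)
```

The point of the skeleton is the division of labor: randomized construction diversifies the starting points, local search intensifies around each of them, and the loop retains the incumbent.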
## 1.4 Metaheuristics
In this section, we recall the principles of some of the most used metaheuristics. These solution approaches have been instrumental and contributed to most developments and applications in the field of metaheuristics. Among them, we can cite genetic algorithms, simulated annealing, tabu search, variable neighborhood search (VNS), and greedy randomized adaptive search procedures (GRASP), with the last one being the focus of this book.
Genetic algorithms are search procedures based on the mechanics of evolution and natural selection. These algorithms evolve populations of solutions that are pairwise combined to generate offspring. Elements of the solution population are also submitted to mutation processes that create individuals with new characteristics. The most fit individuals in each generation are those that most likely survive and pass their characteristics to the individuals of the next generation. Several operators and strategies can be used to create the initial population and to implement the mechanisms of mating, reproduction, mutation, and selection. In general, the evolution process ends after a number of generations without improvement of the best individual. Although the original implementations of genetic algorithms were based almost exclusively on probabilistic choices for mating, reproduction, and mutation, modern versions incorporate developments from the field of optimization and heuristics, such as the use of greedy randomized algorithms to generate the initial population and local search to improve the characteristics of the offspring.
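A minimal sketch of these mechanics on the OneMax toy problem (maximize the number of 1-bits); the population size, number of generations, and operator choices are illustrative, not prescriptions:

```python
import random

def genetic_onemax(n=20, pop_size=30, generations=60, seed=0):
    """Minimal genetic algorithm on OneMax: binary tournament selection,
    one-point crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    fitness = sum  # number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [max(pop, key=fitness)]  # elitism: carry the best over
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            cut = rng.randrange(1, n)  # one-point crossover
            child = p[:cut] + q[cut:]
            i = rng.randrange(n)       # bit-flip mutation
            child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
    return max(fitness(ind) for ind in pop)
```

Even this stripped-down version exhibits the pattern described above: fitter individuals are more likely to be selected as parents, and elitism prevents the best solution from being lost between generations.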
Annealing is the physical process of heating up a solid until it melts, followed by its cooling down until crystallization. The free energy of the solid is minimized in this process. Practical experiments show that slow cooling schemes lead to final states with lower energy levels. By establishing associations between the physical states of the solid submitted to the annealing process and the feasible solutions of an optimization problem, and between the free energy of the solid and the cost function to be optimized, the simulated annealing metaheuristic mimics this process for the solution of a combinatorial optimization problem. It can be seen as a form of controlled random walk in the space of feasible solutions. The method is very general, can be easily implemented and applied, and has the ability to find good approximate solutions whose value is often close to the optimal. It can even be proved to converge to the optimal solution under certain circumstances, although at the cost of unlimited computation time. In practice, it can require large computation times and its memoryless characteristic does not contribute to more effective iterations and, consequently, to the efficiency of the approach.
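The acceptance rule described above can be sketched generically. In the toy instance below, both the bumpy cost function over the integers 0 to 99 and the geometric cooling schedule are illustrative choices, not taken from the text:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95,
                        iters_per_temp=50, t_min=1e-3, seed=0):
    """Controlled random walk: a worse move of size delta is accepted with
    probability exp(-delta / t), which shrinks as the temperature t cools."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            y = neighbor(x, rng)
            fy = cost(y)
            if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:                 # keep the best state visited
                    best, fbest = x, fx
        t *= cooling                           # geometric (slow) cooling
    return best, fbest

# Invented instance: a bumpy cost over the integers 0..99.
cost = lambda i: (i - 70) ** 2 + 30 * math.sin(i)
neighbor = lambda i, rng: min(99, max(0, i + rng.choice([-1, 1])))
sa_best, sa_cost = simulated_annealing(cost, neighbor, x0=5)
```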
Tabu search is a metaheuristic that guides a local search procedure to explore the solution space beyond local optimality. Its basic principle consists in continuing the search whenever a local optimum is encountered. At such points, instead of moving to an improving solution, the algorithm moves to the least deteriorating solution in the neighborhood of that local optimum, in the expectation that after some steps a better solution will be found. However, moving to a worse solution can lead to cycling, since the algorithm can return to the previous local optimum at the next or at a later iteration. To avoid cycling back to solutions already visited, tabu search makes use of a short-term memory which contains recently visited solutions or, more often and in more clever implementations, the attributes of the current solution which should not be changed in order to prevent cycling. This short-term memory of forbidden solutions or attributes is called a tabu list. More sophisticated variants of the algorithm also make use of medium-term and long-term memories, which are used to intensify the search in promising regions of the solution space, or to diversify the search towards new regions that have not been properly explored.
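A short-term memory of this kind can be sketched on a toy maximum cut instance. In the sketch below, the five-node weighted graph is invented for illustration: each iteration flips the node with the best gain, recently flipped nodes are tabu, and an aspiration criterion overrides the tabu whenever a move improves on the best cut found:

```python
def cut_value(side, edges):
    """Weight of the edges crossing the partition encoded by side[i] in {0, 1}."""
    return sum(w for i, j, w in edges if side[i] != side[j])

def tabu_search_max_cut(n, edges, tenure=3, max_iters=50):
    side = [0] * n                        # all nodes start on the same side
    current = cut_value(side, edges)
    best_side, best = side[:], current
    tabu_until = [0] * n                  # iteration until which a node is tabu
    for it in range(1, max_iters + 1):
        best_move, best_gain = None, float("-inf")
        for v in range(n):
            # Gain of flipping node v to the other side of the partition.
            gain = 0
            for i, j, w in edges:
                if v in (i, j):
                    u = j if i == v else i
                    gain += w if side[v] == side[u] else -w
            # Skip tabu moves unless they beat the best cut (aspiration).
            if it < tabu_until[v] and current + gain <= best:
                continue
            if gain > best_gain:
                best_move, best_gain = v, gain
        side[best_move] ^= 1              # least deteriorating (or best) move
        current += best_gain
        tabu_until[best_move] = it + tenure
        if current > best:
            best, best_side = current, side[:]
    return best_side, best

# Invented five-node weighted graph: edges given as (i, j, weight).
edges = [(0, 1, 3), (0, 2, 2), (1, 2, 4), (2, 3, 1), (3, 4, 5), (1, 4, 2)]
tabu_side, tabu_best = tabu_search_max_cut(5, edges)
```

Note that the move chosen at each iteration may worsen the current cut; the tabu tenure is what prevents the search from immediately undoing it.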
Variable neighborhood search (VNS) is a metaheuristic based on the exploration of multiple neighborhood definitions imposed on the same solution space. Each of its iterations has two main steps: shaking and local search. In the shaking step, a neighbor of the current solution is randomly generated. Local search is then applied to the solution obtained by the shaking step. VNS systematically exploits the idea of neighborhood change, both in the descent to local optima and in the escape from the valleys that contain them.
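These two steps can be sketched on a one-dimensional toy problem. In the sketch below, the cost function (with many local minima and a single global minimum at 49), the neighborhood structure, and all parameter values are invented for illustration:

```python
import random

def local_search(f, x, lo, hi):
    """Steepest descent over the +/-1 neighborhood of an integer point."""
    while True:
        nbrs = [y for y in (x - 1, x + 1) if lo <= y <= hi]
        best = min(nbrs, key=f)
        if f(best) >= f(x):
            return x
        x = best

def vns(f, x0, lo=0, hi=99, k_max=15, max_iters=500, seed=0):
    rng = random.Random(seed)
    x = local_search(f, x0, lo, hi)
    it = 0
    while it < max_iters:
        k = 1
        while k <= k_max and it < max_iters:
            it += 1
            # Shaking: random point at distance up to k from x.
            y = min(hi, max(lo, x + rng.choice([-1, 1]) * rng.randint(1, k)))
            y = local_search(f, y, lo, hi)
            if f(y) < f(x):
                x, k = y, 1      # improvement: restart with the first neighborhood
            else:
                k += 1           # no improvement: move to a larger neighborhood
    return x

# Invented cost over 0..99: many local minima, global minimum at i = 49.
f = lambda i: (i % 7) * 5 + abs(i - 50)
vns_best = vns(f, x0=5)
```

Local search alone gets trapped in one of the periodic valleys; the progressively larger shakes are what allow the search to hop between valleys.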
GRASP is an acronym for greedy randomized adaptive search procedures and is among the most effective metaheuristics for solving combinatorial optimization problems. It is a multistart procedure, in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. GRASP and VNS are somehow complementary, in the sense that randomization is applied in GRASP at the construction phase, while in VNS it is applied in the local search phase.
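The overall scheme can be sketched on a small knapsack instance. In the sketch below, the item values and weights, the capacity, the value-to-weight greedy function, the restricted candidate list parameter alpha, and the swap neighborhood are all invented for illustration:

```python
import random

def semi_greedy(vals, wts, cap, alpha, rng):
    """Construction phase: a restricted candidate list (RCL) keeps the
    candidates whose value/weight ratio is within alpha of the best."""
    solution, free = [], cap
    candidates = list(range(len(vals)))
    while True:
        fitting = [i for i in candidates if wts[i] <= free]
        if not fitting:
            return solution
        ratio = {i: vals[i] / wts[i] for i in fitting}
        c_max, c_min = max(ratio.values()), min(ratio.values())
        rcl = [i for i in fitting if ratio[i] >= c_max - alpha * (c_max - c_min)]
        pick = rng.choice(rcl)              # random choice within the RCL
        solution.append(pick)
        candidates.remove(pick)
        free -= wts[pick]

def swap_local_search(solution, vals, wts, cap):
    """Local search phase: first-improving 1-1 item swaps."""
    solution = solution[:]
    improved = True
    while improved:
        improved = False
        weight = sum(wts[i] for i in solution)
        for i in solution:
            for j in (j for j in range(len(vals)) if j not in solution):
                if weight - wts[i] + wts[j] <= cap and vals[j] > vals[i]:
                    solution.remove(i)
                    solution.append(j)
                    improved = True
                    break
            if improved:
                break
    return solution

def grasp(vals, wts, cap, alpha=0.3, iters=50, seed=0):
    rng = random.Random(seed)
    best, best_val = [], 0
    for _ in range(iters):                  # multistart: repeat both phases
        s = swap_local_search(semi_greedy(vals, wts, cap, alpha, rng),
                              vals, wts, cap)
        val = sum(vals[i] for i in s)
        if val > best_val:                  # keep the best overall solution
            best, best_val = s, val
    return best, best_val

# Invented knapsack instance: item values, weights, and capacity 10.
vals = [10, 7, 5, 8, 4, 6]
wts = [5, 4, 3, 5, 2, 4]
gr_sol, gr_val = grasp(vals, wts, cap=10)
```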
The remainder of this book is entirely devoted to the presentation of the main building blocks, algorithms, performance evaluation tools, case studies, and strategies for sequential and parallel implementations of GRASP for solving optimization problems.
## 1.5 Graphs: basic notation and definitions
As many of the combinatorial optimization problems studied in this book come from graph theory and its applications, we introduce in this section some basic notation and definitions that will be used throughout the book.
Given a set V = { 1,..., n}, we denote by | V | = n the cardinality (i.e., the number of elements) of V. We define 2 V as the set formed by all subsets of V, including the empty set ∅ and the set V itself.
A graph G = (V, U) is defined by a set V = { 1,..., n} of nodes and a set U ⊆ V × V of unordered pairs (i, j) of nodes i, j ∈ V called edges. Therefore, either pair (i, j) or (j, i) can be used to represent the same edge between i, j ∈ V in U. A graph is said to be complete if there is an edge in U between any two distinct nodes i, j ∈ V. A graph defined in this way can also be referred to as an undirected graph . A path P st (G) in an undirected graph G from s ∈ V to t ∈ V is defined as a sequence of nodes i 1, i 2,..., i q−1, i q ∈ V, where i 1 = s, i q = t, and each edge (i k , i k+1) ∈ U, for every k = 1,..., q − 1. The number of edges in this path is given by q − 1. A graph G = (V, U) is said to be connected if there is at least one path P st (G) connecting every pair of nodes s, t ∈ V. A subgraph G′ = (V ′, U′) of G = (V, U) is such that for any pair of nodes i, j ∈ V ′, edge (i, j) ∈ U′ if and only if (i, j) ∈ U, and therefore V ′ ⊆ V and U′ ⊆ U.
A spanning tree of a graph G = (V, U) is a connected subgraph of G with the same node set V and whose edge set U′ ⊆ U has exactly n − 1 edges.
Given a graph G = (V, U) and a subset V ′ of its node set V, the graph G(V ′) = (V ′, U′) induced in G by V ′ has U′ = { (i, j) ∈ U : i, j ∈ V ′} as its edge set.
A clique of a graph G = (V, U) is a subset of nodes C ⊆ V such that (i, j) ∈ U for every pair of nodes i, j ∈ C, with i ≠ j. Alternatively, we can say that C is a clique if the graph G(C) induced in G by C is complete. The size of a clique is defined to be its cardinality | C | . A subset I ⊆ V of the nodes in G is said to be an independent set or a stable set if no two vertices in I are directly connected by an edge, i.e., if (i, j) ∉ U for all i, j ∈ I such that i ≠ j.
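Both definitions translate directly into pairwise tests. The sketch below (the triangle-plus-isolated-node graph is an invented example) represents each edge as a frozenset so that (i, j) and (j, i) coincide, as in the definition of an undirected graph above:

```python
def is_clique(nodes, edges):
    """True if every pair of distinct nodes in `nodes` is joined by an edge."""
    return all(frozenset((i, j)) in edges
               for i in nodes for j in nodes if i < j)

def is_independent_set(nodes, edges):
    """True if no pair of distinct nodes in `nodes` is joined by an edge."""
    return all(frozenset((i, j)) not in edges
               for i in nodes for j in nodes if i < j)

# Invented example: a triangle on nodes 1, 2, 3 plus an isolated node 4.
triangle = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3)]}
```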
A directed graph G = (V, A) is defined by a set V = { 1,..., n} of nodes and a set A ⊆ V × V of ordered pairs (i, j) of nodes i, j ∈ V called arcs. A path P st (G) in a directed graph G from s ∈ V to t ∈ V is defined as a sequence of nodes i 1, i 2,..., i q−1, i q ∈ V, where i 1 = s, i q = t, and each arc (i k , i k+1) ∈ A, for any k = 1,..., q − 1. A directed graph G = (V, A) is said to be strongly connected if there is at least one path P st (G) connecting node s to node t and another path P ts (G) connecting node t to node s, for every pair of nodes s, t ∈ V.
A Hamiltonian path in a directed or undirected graph is a path between two nodes that visits each node of the graph exactly once. A Hamiltonian cycle in a directed or undirected graph is a Hamiltonian path that is also a cycle, i.e., its extremities coincide. Every Hamiltonian cycle corresponds to a circular permutation of the nodes of the graph. A Hamiltonian cycle is also known as a Hamiltonian tour or, simply, as a tour.
## 1.6 Organization
In addition to this introductory chapter, this book contains another 11 chapters. Each chapter concludes with a section with bibliographical notes.
Chapter 2 introduces combinatorial optimization problems and their computational complexity. First, some fundamental problems are formulated and then basic concepts of the theory of computational complexity are introduced, with special emphasis on decision problems, polynomial-time algorithms, and NP-complete problems. The chapter concludes with a discussion of solution approaches for NP-hard problems, introducing constructive heuristics, local search, and, finally, metaheuristics.
Chapter 3 addresses the construction of feasible solutions. We begin by considering greedy algorithms and showing how they are related with matroid theory. We then consider adaptive greedy algorithms, which are a generalization of greedy algorithms. Next, we present semi-greedy algorithms that are obtained by randomizing greedy or adaptive greedy algorithms. The chapter concludes with a discussion of solution repair procedures.
Chapter 4 deals with local search. A local search method is one that starts from any feasible solution and visits a sequence of other (feasible or infeasible) solutions, until a feasible solution that cannot be further improved is found. Local improvements are evaluated with respect to neighbor solutions that can be obtained by slight modifications applied to the solution currently being visited. We introduce in this chapter the concept of solution representation, which is instrumental in the design and implementation of local search methods. We also define neighborhoods of combinatorial optimization problems and moves between neighbor solutions. We illustrate the definition of a neighborhood by a number of examples for different problems. Local search methods are introduced and different implementation issues are discussed, such as neighborhood search strategies, quick cost updates, and candidate list strategies.
Chapter 5 presents the basic structure of a greedy randomized adaptive search procedure (or, more simply, GRASP). We first introduce random and semi-greedy multistart procedures and show how solutions produced by these procedures differ. The hybridization of a semi-greedy procedure with a local search method within an iterative procedure constitutes a GRASP heuristic. Efficient implementation strategies are also discussed in this chapter, as well as probabilistic stopping criteria. The chapter concludes with a short introduction to the application of GRASP as a heuristic for multiobjective optimization.
Chapter 6 covers runtime distributions. Also called time-to-target plots, runtime distributions display on the ordinate axis the probability that an algorithm will find a solution at least as good as a given target value within a given running time, shown on the abscissa axis. They provide a very useful tool to characterize the running times of stochastic algorithms for combinatorial optimization problems and to compare different algorithms or strategies for solving a given problem. Accordingly, they have been widely used as a tool for algorithm design and comparison.
Chapter 7 considers enhancements, extensions, and variants of greedy randomized adaptive construction procedures, such as Reactive GRASP, the probabilistic choice of the parameter used in the construction of restricted candidate lists, random plus greedy and sampled greedy constructions, cost perturbations, bias functions, principles of intelligent construction based on memory and learning, the proximate optimality principle and local search applied to partially constructed solutions, and pattern-based construction strategies using vocabulary building or data mining.
Chapter 8 introduces path-relinking, an important search intensification strategy. As a major enhancement to heuristic search methods for solving combinatorial optimization problems, its hybridization with other metaheuristics has led to significant improvements in both solution quality and running times. In this chapter, we review the fundamentals of path-relinking, implementation issues and strategies, and the use of randomization in path-relinking.
Chapter 9 covers the hybridization of GRASP with path-relinking. Path-relinking is a major enhancement that adds a long-term memory mechanism to the otherwise memoryless GRASP heuristics. GRASP with path-relinking implements a long-term memory with an elite set of diverse high-quality solutions found during the search. In its most basic implementation, at each GRASP iteration the path-relinking operator is applied between the solution found by local search and a randomly selected solution from the elite set. The solution resulting from path-relinking is a candidate for inclusion in the elite set. In this chapter we examine elite sets, their integration with GRASP, the basic GRASP with path-relinking procedure, several variants of the basic scheme (including evolutionary path-relinking), and restart strategies for GRASP with path-relinking heuristics.
Chapter 10 introduces parallel GRASP heuristics. Parallel computers and parallel algorithms have been increasingly finding their way into metaheuristics. Most of the parallel implementations of GRASP found in the literature consist in partitioning either the search space or the iterations and assigning each partition to a processor. These implementations can be categorized as following the multiple-walk independent-thread approach, with the communication among processors during GRASP iterations being limited to the detection of program termination and gathering the best solution found over all processors. Parallel strategies for the parallelization of GRASP with path-relinking can follow not only the multiple-walk independent-thread but also the multiple-walk cooperative-thread approach, in which the processors share the information about the elite solutions they visited at previous iterations. This chapter covers multiple-walk independent-thread strategies, multiple-walk cooperative-thread strategies, and some applications of parallel GRASP.
Chapter 11 considers Continuous GRASP, or C-GRASP, which extends GRASP to the domain of continuous box-constrained global optimization. The algorithm searches the solution space over a dynamic grid. Each iteration of C-GRASP consists of two phases. In the construction (or diversification) phase, a greedy randomized solution is constructed. In the local search (or intensification) phase, a local search algorithm is applied, starting from the first phase solution, and a locally optimal solution is produced. A deterministic rule triggers a restart after each C-GRASP iteration. This chapter addresses the construction phase and the restart strategy and also presents a local search procedure. The chapter concludes with examples of continuous functions optimized with an implementation of C-GRASP.
The book concludes with Chapter 12, in which we consider four case studies, 2-path network design, graph planarization, unsplittable multicommodity flow, and maximum cut, to illustrate the application and the implementation of GRASP heuristics. The key point here is not to show numerical results or comparative statistics with other approaches but, instead, to show how to customize the GRASP metaheuristic for each particular problem.
## 1.7 Bibliographical notes
The shortest path problem and the minimum spanning tree problem that were used to motivate Section 1.2 have been addressed in many papers and textbooks (see Cormen et al. (2009) in particular). Although many references exist for the other problems discussed in this chapter, we refer the reader to Pardalos and Xue (1994) for the maximum clique problem, Martello and Toth (1990) for the knapsack problem, and Lawler et al. (1985), Gutin and Punnen (2002), and Applegate et al. (2006) for the traveling salesman problem. The Steiner tree problem in graphs appeared first in Hakimi (1971) and Dreyfus and Wagner (1972). See also Maculan (1987), Winter (1987), Goemans and Myung (1993), Hwang et al. (1992), Voss (1992), and Ribeiro et al. (2002).
The textbooks by Nilsson (1971; 1982) and Pearl (1985) are fundamental references on the origins, principles, and applications of A∗ search and heuristic search methods introduced in Section 1.3. Cormen et al. (2009) present a good coverage of greedy algorithms and an introduction to matroid theory. Pitsoulis (2014) offers a more in-depth coverage of matroids.
Yagiura and Ibaraki (2002) trace back the history of local search since the work of Croes (1958). Kernighan and Lin (1970) and Lin and Kernighan (1973) were among the first to propose local search algorithms for the graph partitioning and the traveling salesman problem, respectively. The book by Hoos and Stützle (2005) is a thorough study of the foundations and applications of stochastic local search.
Genetic algorithms and research in metaheuristics were pioneered in the book of Holland (1975). Other developments in genetic algorithms appeared in textbooks by Reeves and Rowe (2002), Goldberg (1989), and Michalewicz (1996), among others. The work on optimization by simulated annealing was pioneered by Kirkpatrick et al. (1983), with accounts of later developments and applications being found in textbooks by van Laarhoven and Aarts (1987) and Aarts and Korst (1989). The seminal papers of Glover (1989; 1990) established the fundamentals, extensions, and uses of tabu search. They provided solid foundations and originated most of the developments from where the field of metaheuristics flourished. An alternative approach based on virtually the same principle of using a short-term memory was independently proposed by Hansen (1986), in a method named steepest-ascent mildest-descent. The reader is also referred to the textbook by Glover and Laguna (1997). Variable neighborhood search (VNS) was proposed by Mladenović and Hansen (1997), followed by other reviews by Hansen and Mladenović (1999; 2002; 2003).
The fundamentals of GRASP were originally proposed by Feo and Resende (1989). This article was followed by many others proposing variants, extensions, hybridizations, and applications of GRASP. Extensive literature reviews containing late developments and applications appeared in papers by Feo and Resende (1995), Festa and Resende (2002; 2009a;b), Resende and Ribeiro (2003b; 2005a; 2010; 2014), Pitsoulis and Resende (2002), Ribeiro (2002), Resende and González-Velarde (2003), Resende et al. (2012), and Resende and Silva (2013).
The reader interested in other relevant, but less explored approaches, such as ant colony optimization, iterated local search, scatter search, and particle swarm optimization, can look into the broad and ever evolving literature on the subject, in particular the handbooks edited by Reeves (1993), Glover and Kochenberger (2003), Gendreau and Potvin (2010), and Burke and Kendall (2005; 2014). Sörensen (2015) offers a critical view of the explosion of metaheuristic methods based on metaphors of some natural or man-made processes and concludes by pointing out some of the most promising research avenues for the field of metaheuristics.
The reader is referred to the books by Bondy and Murty (1976), West (2001), and Diestel (2010), among others, for notation, definitions, theoretical results, and algorithms in graphs.
References
E.H.L. Aarts and J. Korst. Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing. Wiley, New York, 1989.
D.L. Applegate, R.E. Bixby, V. Chvátal, and W.J. Cook. The traveling salesman problem: A computational study. Princeton University Press, Princeton, 2006.
J.A. Bondy and U.S.R. Murty. Graph theory with applications. Elsevier, 1976.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2005.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2nd edition, 2014.
T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to algorithms. MIT Press, Cambridge, 3rd edition, 2009.
G.A. Croes. A method for solving traveling-salesman problems. Operations Research, 6:791–812, 1958.
R. Diestel. Graph theory. Springer, New York, 2010.
S.E. Dreyfus and R.A. Wagner. The Steiner problem in graphs. Networks, 1:195–201, 1972.
T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67–71, 1989.
T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6:109–133, 1995.
P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 325–367. Kluwer Academic Publishers, Boston, 2002.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part I: Algorithms. International Transactions in Operational Research, 16:1–24, 2009a.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part II: Applications. International Transactions in Operational Research, 16:131–172, 2009b.
M. Gendreau and J.-Y. Potvin, editors. Handbook of metaheuristics. Springer, New York, 2nd edition, 2010.
F. Glover. Tabu search - Part I. ORSA Journal on Computing, 1:190–206, 1989.
F. Glover. Tabu search - Part II. ORSA Journal on Computing, 2:4–32, 1990.
F. Glover and G. Kochenberger, editors. Handbook of metaheuristics. Kluwer Academic Publishers, Boston, 2003.
F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, Boston, 1997.
M.X. Goemans and Y. Myung. A catalog of Steiner tree formulations. Networks, 23:19–28, 1993.
D.E. Goldberg. Genetic algorithms in search, optimization and machine learning. Addison-Wesley, Reading, 1989.
G. Gutin and A.P. Punnen, editors. The traveling salesman problem and its variations. Kluwer Academic Publishers, Boston, 2002.
S.L. Hakimi. Steiner's problem in graphs and its applications. Networks, 1:113–133, 1971.
P. Hansen. The steepest ascent mildest descent heuristic for combinatorial programming. In Proceedings of the Congress on Numerical Methods in Combinatorial Optimization, pages 70–145, Capri, 1986.
P. Hansen and N. Mladenović. An introduction to variable neighbourhood search. In S. Voss, S. Martello, I.H. Osman, and C. Roucairol, editors, Metaheuristics: Advances and trends in local search procedures for optimization, pages 433–458. Kluwer Academic Publishers, Boston, 1999.
P. Hansen and N. Mladenović. Developments of variable neighborhood search. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 415–439. Kluwer Academic Publishers, Boston, 2002.
P. Hansen and N. Mladenović. Variable neighborhood search. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 145–184. Kluwer Academic Publishers, Boston, 2003.
J.H. Holland. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. University of Michigan Press, Ann Arbor, 1975.
H.H. Hoos and T. Stützle. Stochastic local search: Foundations and applications. Elsevier, New York, 2005.
F.K. Hwang, D.S. Richards, and P. Winter. The Steiner tree problem. North-Holland, Amsterdam, 1992.
B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal, 49:291–307, 1970.
S. Kirkpatrick, C.D. Gelatt Jr., and M.P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, editors. The traveling salesman problem: A guided tour of combinatorial optimization. John Wiley & Sons, New York, 1985.
S. Lin and B.W. Kernighan. An effective heuristic algorithm for the traveling-salesman problem. Operations Research, 21:498–516, 1973.
N. Maculan. The Steiner problem in graphs. Annals of Discrete Mathematics, 31:182–212, 1987.
S. Martello and P. Toth. Knapsack problems: Algorithms and computer implementations. John Wiley & Sons, New York, 1990.
Z. Michalewicz. Genetic algorithms + Data structures = Evolution programs. Springer, Berlin, 1996.
N. Mladenović and P. Hansen. Variable neighborhood search. Computers & Operations Research, 24:1097–1100, 1997.
N.J. Nilsson. Problem-solving methods in artificial intelligence. McGraw-Hill, New York, 1971.
N.J. Nilsson. Principles of artificial intelligence. Springer, Berlin, 1982.
P.M. Pardalos and J. Xue. The maximum clique problem. Journal of Global Optimization, 4:301–328, 1994.
J. Pearl. Heuristics: Intelligent search strategies for computer problem solving. Addison-Wesley, Reading, 1985.
L.S. Pitsoulis. Topics in matroid theory. SpringerBriefs in Optimization. Springer, 2014.
L.S. Pitsoulis and M.G.C. Resende. Greedy randomized adaptive search procedures. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 168–183. Oxford University Press, New York, 2002.
C. Reeves and J.E. Rowe. Genetic algorithms: Principles and perspectives. Springer, Berlin, 2002.
C.R. Reeves. Modern heuristic techniques for combinatorial problems. Blackwell, London, 1993.
M.G.C. Resende and J.L. González-Velarde. GRASP: Procedimientos de búsqueda miope aleatorizado y adaptativo. Inteligencia Artificial, 19:61–76, 2003.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 219–249. Kluwer Academic Publishers, Boston, 2003b.
M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 29–63. Springer, New York, 2005a.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures: Advances and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 293–319. Springer, New York, 2nd edition, 2010.
M.G.C. Resende and C.C. Ribeiro. GRASP: Greedy randomized adaptive search procedures. In E.K. Burke and G. Kendall, editors, Search methodologies: Introductory tutorials in optimization and decision support systems, chapter 11, pages 287–312. Springer, New York, 2nd edition, 2014.
M.G.C. Resende and R.M.A. Silva. GRASP: Procedimentos de busca gulosos, aleatórios e adaptativos. In H.S. Lopes, L.C.A. Rodrigues, and M.T.A. Steiner, editors, Meta-heurísticas em pesquisa operacional, chapter 1, pages 1–20. Omnipax Editora, Curitiba, 2013.
M.G.C. Resende, G.R. Mateus, and R.M.A. Silva. GRASP: Busca gulosa, aleatorizada e adaptativa. In A. Gaspar-Cunha, R. Takahashi, and C.H. Antunes, editors, Manual da computação evolutiva e metaheurística, pages 201–213. Coimbra University Press, Coimbra, 2012.
C.C. Ribeiro. GRASP: Une métaheuristique gloutonne et probabiliste. In J. Teghem and M. Pirlot, editors, Optimisation approchée en recherche opérationnelle, pages 153–176. Hermès, Paris, 2002.
C.C. Ribeiro, E. Uchoa, and R.F. Werneck. A hybrid GRASP with perturbations for the Steiner problem in graphs. INFORMS Journal on Computing, 14:228–246, 2002.
K. Sörensen. Metaheuristics – The metaphor exposed. International Transactions in Operational Research, 22:1–16, 2015.
P.J.M. van Laarhoven and E. Aarts. Simulated annealing: Theory and applications. Kluwer Academic Publishers, Boston, 1987.
S. Voss. Steiner's problem in graphs: Heuristic methods. Discrete Applied Mathematics, 40:45–72, 1992.
D.B. West. Introduction to graph theory. Pearson, 2001.
P. Winter. Steiner problem in networks: A survey. Networks, 17:129–167, 1987.
M. Yagiura and T. Ibaraki. Local search. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 104–123. Oxford University Press, 2002.
# 2. A short tour of combinatorial optimization and computational complexity
This chapter introduces combinatorial optimization problems and their computational complexity. We first formulate some fundamental problems already introduced in the previous chapter and then consider basic concepts of the theory of computational complexity, with special emphasis on decision problems, polynomial-time algorithms, and NP-complete problems. The chapter concludes with a discussion of solution approaches for NP-hard problems, introducing constructive heuristics, local search or improvement procedures and, finally, metaheuristics.
## 2.1 Problem formulation
An instance of a combinatorial optimization problem is defined by a finite ground set E = { 1,..., n}, a set of feasible solutions F ⊆ 2 E , and an objective function f : 2 E → ℝ. In the case of a minimization problem, we seek a global optimal solution S ∗ ∈ F such that f(S ∗) ≤ f(S) for all S ∈ F. The ground set E, the cost function f, and the set of feasible solutions F are defined for each specific problem. Similarly, in the case of a maximization problem, we seek an optimal solution S ∗ ∈ F such that f(S ∗) ≥ f(S) for all S ∈ F.
Each of the six problems considered in Section 1.2 is an example of a combinatorial optimization problem that can be formulated as described below.
Shortest path problem – Revisited
Let G = (V, A) be a directed graph, where V is its set of nodes and A its set of arcs. Each city corresponds to a node of this graph. The origin s and destination t are two special nodes in V. For every pair of cities i, j ∈ V that are directly connected, let d ij be the length of arc (i, j) ∈ A. Furthermore, let P st (G) be a path from s to t in G, defined as a sequence of nodes i 1, i 2,..., i q−1, i q ∈ V, with i 1 = s and i q = t. The length of path P st (G) is given by f(P st (G)) = ∑ k=1,...,q−1 d i k i k+1 , i.e., by the sum of the lengths of its arcs.
Therefore, in the case of the shortest path problem, the ground set E consists of the arc set A. The set of feasible solutions F ⊆ 2 E is formed by all subsets of E that correspond to paths from s to t in G. The objective of the shortest path problem is to find a path P ∗ ∈ F that minimizes the objective function f(P) over all paths P ∈ F from s to t in G.
Consider the example in Figure 2.1, not drawn to scale. The shortest path from node 1 to node 6 is 1 − 2 − 3 − 6 and is shown in red. The length of this path is 55 + 20 + 25 = 100. ■
Fig. 2.1
The shortest path from node 1 to node 6 is 1 − 2 − 3 − 6 and is indicated by red arcs. This path has length 100.
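This example can be reproduced computationally, for instance with Dijkstra's algorithm. In the sketch below, the arcs of the red path 1 − 2 − 3 − 6 and their lengths are taken from the text; the remaining arcs of Figure 2.1 are not specified there, so the extra arcs and lengths are invented placeholders:

```python
import heapq

def dijkstra(arcs, s, t):
    """Dijkstra's algorithm over a dict {(i, j): length}; returns the
    shortest path from s to t as a node list together with its length."""
    adj = {}
    for (i, j), d in arcs.items():
        adj.setdefault(i, []).append((j, d))
    dist, pred = {s: 0}, {}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip it
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, float("inf")):
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, u = [t], t                      # walk predecessors back to s
    while u != s:
        u = pred[u]
        path.append(u)
    return path[::-1], dist[t]

# Arcs of the red path 1-2-3-6 from Figure 2.1 plus a few invented ones.
arcs = {(1, 2): 55, (2, 3): 20, (3, 6): 25,
        (1, 4): 70, (4, 5): 60, (5, 6): 50, (2, 6): 80}
path, length = dijkstra(arcs, 1, 6)
```

With these data, the returned path is 1 − 2 − 3 − 6 with length 55 + 20 + 25 = 100, matching the figure.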
Minimum spanning tree problem – Revisited
Let G = (V, U) be a graph, where the node set V corresponds to points to be connected and its edge set U is formed by unordered pairs of points i, j ∈ V, with i ≠ j. Let d ij be the length (or weight) of edge (i, j) ∈ U. In addition, let T(G) = (V, U′) be a spanning tree of graph G, i.e., a connected subgraph of G with the same node set V and whose edge set U′ ⊆ U has exactly n − 1 = | V | − 1 edges. The total weight of tree T(G) is given by f(T(G)) = ∑ (i, j) ∈ U′ d ij .
Therefore, in the case of the minimum spanning tree problem, the ground set E consists of the set U of edges. The set of feasible solutions F ⊆ 2 E is formed by all subsets of edges that correspond to spanning trees of G. The objective of the minimum spanning tree problem is to find a spanning tree T ∗ ∈ F such that f(T ∗) ≤ f(T) for all T ∈ F.
Consider the example in Figure 2.2, not drawn to scale. The minimum spanning tree of the graph in this figure is shown in red and has five edges: (1, 2), (2, 3), (2, 4), (3, 5), and (3, 6). Its total weight is 55 + 20 + 40 + 35 + 25 = 175. ■
Fig. 2.2
A minimum spanning tree is shown in red and has total length 175.
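This example, too, can be checked computationally, for instance with Kruskal's algorithm. In the sketch below, the five red edges and their weights are taken from the text; the additional, heavier edges are invented so that the instance is not already a tree:

```python
def kruskal(n_nodes, edges):
    """Kruskal's algorithm with union-find: scan edges by nondecreasing
    weight and keep those joining two different components."""
    parent = list(range(n_nodes + 1))       # nodes are numbered 1..n_nodes
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree, total = [], 0
    for w, i, j in sorted(edges):           # edges given as (weight, i, j)
        ri, rj = find(i), find(j)
        if ri != rj:                        # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
            total += w
    return tree, total

# Edges of Figure 2.2's red tree plus a few invented heavier edges.
edges = [(55, 1, 2), (20, 2, 3), (40, 2, 4), (35, 3, 5), (25, 3, 6),
         (100, 1, 3), (80, 4, 5), (90, 5, 6)]
tree, total = kruskal(6, edges)
```

On these data the algorithm keeps exactly the five red edges, for a total weight of 175.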
Steiner tree problem in graphs – Revisited
Let G = (V, U) be a graph, where the node set is V = { 1,..., n} and the edge set U is formed by unordered pairs of points i, j ∈ V, with i ≠ j. Let d ij be the length of edge (i, j) ∈ U. Furthermore, let T ⊆ V be a subset of terminal nodes that have to be connected. A Steiner tree S = (V ′, U′) of G is a subtree of G that connects all nodes in T, i.e., T ⊆ V ′ ⊆ V. The total length of the Steiner tree S is given by f(S) = ∑ (i, j) ∈ U′ d ij .
Therefore, in the case of the Steiner tree problem in graphs, the ground set E once again consists of the set U of edges. The set of feasible solutions F ⊆ 2 E is formed by all subsets of edges that correspond to Steiner trees of G. The objective of the Steiner tree problem in graphs is to find a Steiner tree S ∗ ∈ F such that f(S ∗) ≤ f(S) for all S ∈ F.
Consider the graph in the example in Figure 2.3, not drawn to scale. The terminal nodes are represented by circles, while the optional nodes correspond to squares. The minimum Steiner tree is shown in red and makes use of the optional nodes 5 and 6. Its total length is 5 + 5 + 5 + 5 + 5 = 25.
Fig. 2.3
A minimum Steiner tree is shown in red and has total length 25.
The nonterminal, optional nodes in V ∖ T that are effectively used to connect the terminal nodes in T are called Steiner nodes. The Steiner tree problem in graphs reduces to a shortest path problem whenever | T | = 2. Furthermore, it reduces to a minimum spanning tree problem whenever T = V. ■
Maximum clique problem – Revisited
Let G = (V, U) be a graph, where the node set V = { 1,..., n} corresponds to the set of people in the world. For every two people i, j ∈ V, the edge (i, j) ∈ U if and only if i and j are friends. A clique is a subset C ⊆ V such that (i, j) ∈ U for every pair i, j ∈ C with i ≠ j. The size of a clique is defined to be its cardinality, i.e., f(C) = | C | .
Therefore, in the case of the maximum clique problem, the ground set E corresponds to the set of nodes V. The set of feasible solutions F ⊆ 2 E is formed by all subsets of V in which all nodes are pairwise adjacent. The objective of the maximum clique problem is to find a clique C ∗ ∈ F such that f(C ∗) ≥ f(C) for all C ∈ F.
Consider the example in Figure 2.4. The maximum clique is formed by the four nodes numbered 2, 3, 5, and 6. The edges connecting the nodes of this clique are illustrated in red. ■
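Because a maximum clique can be found by testing every subset of nodes, the problem admits a trivial (but exponential-time) exhaustive search. The edge list below is a hypothetical six-node instance built so that {2, 3, 5, 6} is the unique maximum clique, mirroring the example; it is not the graph of Figure 2.4.

```python
from itertools import combinations

def max_clique(nodes, edges):
    """Brute-force maximum clique: test all node subsets, largest
    first. Exponential time -- fine for six nodes, hopeless in general."""
    adj = set(map(frozenset, edges))
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            # a clique requires every pair of its nodes to be adjacent
            if all(frozenset(p) in adj for p in combinations(cand, 2)):
                return set(cand)
    return set()

# Hypothetical edges in which {2, 3, 5, 6} is the maximum clique.
edges = [(2, 3), (2, 5), (2, 6), (3, 5), (3, 6), (5, 6),
         (1, 2), (1, 4), (4, 5)]
print(max_clique(range(1, 7), edges))  # → {2, 3, 5, 6}
```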
Fig. 2.4
A maximum clique of size four formed by nodes 2, 3, 5, and 6 is illustrated with the edges connecting its nodes in red.
Knapsack problem – Revisited
Let b be an integer representing the maximum weight that can be taken in a hiker's knapsack and suppose the hiker has a set I = { 1,..., n} of items to be placed in the knapsack. Let a i and c i be integer numbers representing, respectively, the weight and the utility of each item i ∈ I. Without loss of generality, we assume that each item fits in the knapsack by itself, i.e., a i ≤ b, for all i ∈ I. A subset of items K ⊆ I is feasible if ∑ i ∈ K a i ≤ b. The utility of this subset is given by f(K) = ∑ i ∈ K c i .
Therefore, in the case of the knapsack problem, the ground set E consists of the set I of items to be packed. The set of feasible solutions F ⊆ 2 E is formed by all subsets of items K ⊆ I for which ∑ i ∈ K a i ≤ b. The objective of the knapsack problem is to find a set of items K ∗ ∈ F such that f(K ∗) ≥ f(K) for all K ∈ F.
Consider the example in Figure 2.5, where four items are available to a hiker to be placed in a knapsack of capacity 19. The weights of the yellow and green items are each equal to 10 and those of the blue and red items are both equal to 5. Therefore, only two of the four items fit together in the knapsack. The two heaviest items have utilities 20 and 10 to the hiker, while the two items with least weights have utilities 10 and 5. Since both large items (which combined have the highest utility to the hiker, but also the greatest weight) cannot be placed together in the knapsack, the hiker will need to select a large and a small item. Of each group, the hiker selects the one with maximum utility. The solution is shown on the right side of the figure. The yellow and blue items are placed in the knapsack and together they have a total weight of 10 + 5 = 15 and a total utility of 20 + 10 = 30. ■
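The reasoning in this example can be checked by exhaustive search over all 2^n subsets of items, using the weights and utilities quoted above (the color names are just labels for the four items of Figure 2.5):

```python
from itertools import combinations

def best_packing(items, capacity):
    """Exhaustive search over all subsets of items.
    Each item is a (name, weight, utility) triple."""
    best_utility, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w, _ in subset)
            utility = sum(c for _, _, c in subset)
            # keep the feasible subset of highest utility seen so far
            if weight <= capacity and utility > best_utility:
                best_utility, best_subset = utility, subset
    return best_utility, [name for name, _, _ in best_subset]

items = [("yellow", 10, 20), ("green", 10, 10),
         ("blue", 5, 10), ("red", 5, 5)]
print(best_packing(items, 19))  # → (30, ['yellow', 'blue'])
```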
Fig. 2.5
Four items are candidates to be packed into a knapsack with maximum weight capacity of 19. The optimal solution packs the yellow and blue items with total weight of 15 and maximum total utility of 30.
Traveling salesman problem – Revisited
Let V = { 1,..., n} be the set of cities a traveling salesman has to visit. If we consider the graph G = (V, U) with non-negative lengths d ij associated with each existing edge (i, j) ∈ U, then any tour visiting each of the n cities exactly once corresponds to a Hamiltonian cycle in G, i.e., a cycle in G that visits every node exactly once. A feasible solution to the traveling salesman problem is a tour defined by a circular permutation π = (i 1, i 2,..., i n , i 1) of the n cities, with i j ≠ i k for every j ≠ k ∈ V. This permutation is associated with the Hamiltonian cycle H = { (i 1, i 2), (i 2, i 3),..., (i n−1, i n ), (i n , i 1)} in G, i.e., (i n , i 1) ∈ U and (i k , i k+1) ∈ U, for k = 1,..., n − 1. The total length of this tour is given by f(H) = ∑ (i, j) ∈ H d ij .
In case G = (V, U) is not complete, for any pair of vertices i, j ∈ V such that (i, j) ∉ U, we can create a new edge (i, j) with a sufficiently large length d ij = ∞. Every Hamiltonian cycle in the original graph corresponds to a finite length Hamiltonian cycle in the resulting complete graph and, conversely, every finite length Hamiltonian cycle in the complete graph uses only original edges. Therefore, we can always assume, without loss of generality, that G = (V, U) can be viewed as a complete graph.
In the case of the traveling salesman problem, the ground set E consists of the edge set U. The set of feasible solutions F ⊆ 2 E is formed by all subsets of edges corresponding to Hamiltonian cycles in G. The objective of the traveling salesman problem is to find a Hamiltonian cycle H ∗ ∈ F such that f(H ∗) ≤ f(H) for all H ∈ F. Alternatively, we can view the ground set E as formed by all vertices in V and the set F of feasible solutions formed by all circular permutations of the elements of the ground set.
Consider an instance of the traveling salesman problem, defined by the graph in Figure 2.6. Next, Figure 2.7 depicts in red a tour that visits cities 1 − 2 − 4 − 5 − 3 − 6 − 1 in this order and has a total length of 325. The shortest tour, shown in red in Figure 2.8, visits cities 1 − 2 − 3 − 6 − 5 − 4 − 1 in this order and has a total length of 285. ■
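For a handful of cities, a shortest tour can be found by enumerating all circular permutations with city 0 fixed as the start. Since the distances of Figure 2.6 are not reproduced here, the matrix below is a hypothetical six-city instance in which cheap "ring" edges (cost 10) and expensive chords (cost 100) make the optimal tour obvious.

```python
from itertools import permutations

def shortest_tour(d):
    """Brute-force TSP: fix city 0 as the start and try all (n-1)!
    orderings of the remaining cities."""
    n = len(d)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Hypothetical distances: ring edges cost 10, all chords cost 100,
# so the optimal tour is the ring 0-1-2-3-4-5-0 of length 60.
n = 6
d = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            ring = abs(i - j) in (1, n - 1)
            d[i][j] = 10 if ring else 100
best_len, best_tour = shortest_tour(d)
print(best_len)  # → 60
```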
Fig. 2.6
Instance of a traveling salesman problem with six cities.
Fig. 2.7
Example of a tour in this graph visiting cities 1 − 2 − 4 − 5 − 3 − 6 − 1 in this order as illustrated in red, with a total length of 325.
Fig. 2.8
The shortest tour in this graph visits cities 1 − 2 − 3 − 6 − 5 − 4 − 1 in this order as illustrated in red and has a total length of 285.
## 2.2 Computational complexity
This book is mainly concerned with the solution of computationally difficult combinatorial optimization problems. In this section, we discuss the basics of the theory of computational complexity, which provides useful tools to differentiate between easy and hard combinatorial optimization problems.
### 2.2.1 Polynomial-time algorithms
A computational problem is generally considered well-solved when there exists an efficient algorithm for its exact solution. Basically, efficient algorithms are those that are not too time-consuming and whose computation times do not grow excessively fast with the problem size. In fact, the rate of growth of the time taken by an algorithm is the main limitation for its use in practice. Algorithms with fast increasing computation times quickly become useless to solve real-world applications.
An algorithm is considered efficient and practically useful for solving a computational problem 𝒫 whenever its time complexity (or its running time) grows as a polynomial function of the size of its input. In this context, the input size corresponds to the number of bits or to the amount of memory needed to store the data of any particular instance of 𝒫. If we denote by L the length of any reasonable encoding of the problem data, using an alphabet defined by at least two symbols, an algorithm 𝒜 for this problem is said to run in polynomial time if there is a polynomial function p such that the computation time of 𝒜 is bounded from above by p(L). In this case, we say that algorithm 𝒜 runs in time O(p(L)) if there exists an integer number L 0 and a real number c such that the running time of algorithm 𝒜 applied to any instance of problem 𝒫 of size L ≥ L 0 is less than or equal to c ⋅ p(L).
Definition 2.1 (Polynomial algorithm).
An algorithm 𝒜 for a problem 𝒫 is said to be polynomial if there is a polynomial function p such that 𝒜 solves any instance of 𝒫 in time bounded by p(L), where L is the length of a reasonable encoding of this instance.
Polynomial-time algorithms are known for two of the six combinatorial optimization problems used to motivate the discussion in this chapter. These are the shortest path problem and the minimum spanning tree problem. The other four – the Steiner tree problem in graphs, the maximum clique problem, the knapsack problem, and the traveling salesman problem – are typical examples of hard problems for which, to date, no polynomial-time algorithm is known. Hard optimization problems in this category are the main concern of this book and correspond to those that benefit from the solution methods presented here.
### 2.2.2 Characterization of problems and instances
In Section 2.1, we saw that an instance of a combinatorial optimization problem can be characterized by a finite ground set E = { 1,..., n}, a set F of feasible solutions, and a cost function f that associates a real value f(S) with each feasible solution S ∈ F.
For each combinatorial optimization problem, we assume that its set F of feasible solutions and its cost function f are implicitly given by two algorithms 𝒜 F and 𝒜 f , respectively. Given an object S ∈ 2 E and a set P F of parameters, the recognition algorithm 𝒜 F determines if the object S belongs to F, the set of feasible solutions characterized by the parameters in P F . Similarly, given a feasible solution S ∈ F and a set of parameters P f , the cost function algorithm 𝒜 f computes the cost function value f(S). Therefore, we can say that each combinatorial optimization problem is characterized by the recognition algorithm 𝒜 F and the cost function algorithm 𝒜 f , while each of its instances is associated with a pair of parameter sets P F and P f . These concepts are illustrated below for some of the problems previously described in this chapter.
Shortest path problem – Characterizing parameters and algorithms
In the case of the shortest path problem, the parameter set P F that establishes feasibility consists of the description of the directed graph G = (V, A), where V is its set of nodes and A its set of arcs, together with the definition of the source and destination nodes s, t ∈ V. An object that is a candidate to be a feasible solution is defined by a subset P of the arcs in A. The cost function parameter set P f is defined by the arc lengths d ij , for every arc (i, j) ∈ A. The recognition algorithm 𝒜 F checks if P corresponds to a path from s to t in G. Once the feasibility of a solution P has been established (i.e., P characterizes a path from s to t), the cost function algorithm 𝒜 f adds up the lengths of all arcs in P to compute the cost function value f(P). ■
Minimum spanning tree problem – Characterizing parameters and algorithms
In the minimum spanning tree problem, the parameter set P F that establishes feasibility consists exclusively of the description of the graph G = (V, U) itself, where V is its set of nodes and U its set of edges. An object that is a candidate to be a feasible solution is defined by a subset T of the edges in U. The cost function parameter set P f consists of the edge lengths d ij , for every edge (i, j) ∈ U. The recognition algorithm 𝒜 F checks if T corresponds to a spanning tree in G. Once the feasibility of a solution T has been established (i.e., the graph induced in G by the edge subset T is connected and has exactly | V | − 1 edges), the cost function algorithm 𝒜 f adds up the lengths of all edges in T to compute the cost function value f(T). ■
Maximum clique problem – Characterizing parameters and algorithms
In the case of the maximum clique problem, once again the parameter set P F that establishes feasibility consists exclusively of the description of the graph G = (V, U), where V is its set of nodes and U its set of edges. An object that is a candidate to be a feasible solution is defined by a subset C of the nodes in V. Since the cost of a feasible solution C depends only on the number of nodes in C, no cost parameter exists and therefore P f = ∅. The recognition algorithm 𝒜 F checks if C corresponds to a clique in G. Once the feasibility of a solution C has been established (i.e., every two nodes i, j ∈ C are pairwise connected by an edge in U), the cost function algorithm 𝒜 f simply counts the number of nodes in C to compute the cost function value f(C). ■
Knapsack problem – Characterizing parameters and algorithms
For the knapsack problem, the parameter set P F that establishes feasibility is defined by the maximum weight b of the knapsack and by the weights a i of each item i ∈ I. An object that is a candidate to be a feasible solution is defined by a subset B of the items in I. The cost function parameter set P f is defined by the utilities c i , for each i ∈ I. The recognition algorithm 𝒜 F checks if B corresponds to a feasible subset of items. Once the feasibility of a solution B has been established (i.e., the sum of the weights of all items in B is less than or equal to the maximum weight b), the cost function algorithm 𝒜 f adds up the utilities of all items in B to compute the cost function value f(B). ■
Traveling salesman problem – Characterizing parameters and algorithms
In the case of the traveling salesman problem, the parameter set P F that establishes feasibility consists of the number | V | of cities or vertices of the complete graph G = (V, U). An object that is a candidate to be a feasible solution is characterized by a circular permutation of all cities in V or by the set H of edges in the associated Hamiltonian cycle in G. The cost function parameter set P f is defined by the distances d ij , for every pair of cities i, j ∈ V, with i ≠ j. The recognition algorithm 𝒜 F checks if H corresponds to a Hamiltonian cycle in G. Once the feasibility of a solution H has been established (i.e., H characterizes a Hamiltonian cycle visiting every node of G exactly once), the cost function algorithm 𝒜 f adds up the distances of all edges in H to compute the cost function value f(H). ■
### 2.2.3 One problem has three versions
A combinatorial optimization problem can therefore be alternatively stated in general as the following computational task:
Definition 2.2 (Optimization problem).
Given representations for the parameter sets P F and P f for algorithms 𝒜 F and 𝒜 f , respectively, find an optimal feasible solution.
The above formulation is usually referred to as the optimization version of the problem. However, if instead of finding an optimal solution itself, we are only interested in finding its value, then we have a more relaxed evaluation form of this problem:
Definition 2.3 (Evaluation problem).
Given representations for the parameter sets P F and P f for algorithms 𝒜 F and 𝒜 f , respectively, find the cost of an optimal feasible solution.
Under the reasonable assumption that 𝒜 f is a polynomial-time algorithm, which means that the cost of any solution can be efficiently computed, this evaluation version cannot be harder than the optimization version. This is so because once the optimization version of the problem has been solved and its optimal solution is known, its cost can be easily computed in polynomial time by the cost function algorithm 𝒜 f .
A third version of a combinatorial optimization problem is particularly important in the context of complexity theory. The decision version (or recognition version) of a minimization problem amounts to a single question requiring a "yes" or "no" answer:
Definition 2.4 (Decision version of a minimization problem).
Given representations for the parameter sets P F and P f for algorithms 𝒜 F and 𝒜 f , respectively, and an integer number B that represents a bound, is there a feasible solution S ∈ F such that f(S) ≤ B?
Analogously, the decision version of a maximization problem asks for the existence of a feasible solution S ∈ F whose cost f(S) is greater than or equal to B. The decision version of a combinatorial optimization problem cannot be harder than its evaluation version. In fact, once the cost of an optimal solution has been obtained as the solution of the evaluation version, we can just compare it with the value of B to give a "yes" or "no" answer to the decision version. We have therefore established a problem hierarchy, in which the decision version is not harder than the evaluation version that, in turn, is not harder than the optimization version.
Maximum clique problem – Problem versions
1. 1.
Optimization version: Given a graph G = (V, U), find a maximum cardinality clique of G.
2. 2.
Evaluation version: Given a graph G = (V, U), find the number of nodes in a maximum cardinality clique of G.
3. 3.
Decision version: Given a graph G = (V, U) and an integer number B, is there a clique in G with at least B nodes? ■
Knapsack problem – Problem versions
1. 1.
Optimization version: Given a set I = { 1,..., n} of items, integer weights a i and utilities c i associated with each item i ∈ I, and a maximum weight capacity b, find a subset K ∗ ⊆ I of items such that ∑ i ∈ K ∗ c i = max K ⊆ I {∑ i ∈ K c i : ∑ i ∈ K a i ≤ b}.
2. 2.
Evaluation version: Given a set I = { 1,..., n} of items, integer weights a i and utilities c i associated with each item i ∈ I, and a maximum weight capacity b, find c ∗ = max K ⊆ I {∑ i ∈ K c i : ∑ i ∈ K a i ≤ b}.
3. 3.
Decision version: Given a set I = { 1,..., n} of items, integer weights a i and utilities c i associated with each item i ∈ I, a maximum weight capacity b, and an integer B, is there K ⊆ I such that ∑ i ∈ K a i ≤ b and ∑ i ∈ K c i ≥ B? ■
Traveling salesman problem – Problem versions
1. 1.
Optimization version: Given a complete graph G = (V, U) with non-negative distances d ij between every pair of nodes i, j ∈ V, find a shortest Hamiltonian cycle in G.
2. 2.
Evaluation version: Given a complete graph G = (V, U) with non-negative distances d ij between every pair of nodes i, j ∈ V, compute the length of a shortest Hamiltonian cycle in G.
3. 3.
Decision version: Given a complete graph G = (V, U) with non-negative distances d ij between every pair of nodes i, j ∈ V and an integer B, is there a Hamiltonian cycle in G of length less than or equal to B? ■
Under very reasonable assumptions, we can show that the three versions of any combinatorial problem have roughly the same computational complexity. If we have a polynomial-time algorithm to solve the decision version of a combinatorial problem, then in general we can also construct polynomial-time algorithms for solving the evaluation and the optimization versions.
As an example, consider the case of the asymmetric traveling salesman problem defined on a complete directed graph G = (V, A), in which the distances d ij and d ji associated with a pair of arcs (i, j) ∈ A and (j, i) ∈ A are not necessarily the same. We first suppose that there exists an algorithm TSPDEC(n, d, B) that solves the decision version of the traveling salesman problem. This algorithm provides the appropriate "yes" or "no" answer for any instance defined by n cities, non-negative distances d ij for every pair of cities i, j ∈ V = { 1,..., n}, with i ≠ j, and an integer B. We also assume that algorithm TSPDEC(n, d, B) runs in time T(n).
Algorithm TSPOPT(n, d), described in Figure 2.9, solves the optimization version of the asymmetric traveling salesman problem by repeatedly applying algorithm TSPDEC(n, d, B) to slightly modified instances of its decision version. Lines 1 and 2 set initial values to LB and UB which are, respectively, trivial lower and upper bounds for the optimal solution value. Line 3 defines a sufficiently large value BIG that will be used as a flag. The loop in lines 4 to 10 implements a binary search procedure that seeks the optimal solution value in the interval [LB, UB]. Line 4 interrupts the search as soon as an optimal solution is found, in which case both bounds LB and UB are equal to the optimal value. Line 5 asks for the existence of a solution whose length is less than or equal to ⌊(LB +UB)∕2⌋. If there is one, then the upper bound UB is reset to ⌊(LB +UB)∕2⌋ in line 6. Otherwise, line 8 resets the lower bound LB to ⌊(LB +UB)∕2⌋. At the end of the execution of the loop in lines 4 to 10, the optimal solution value LB = UB is saved to OPT in line 11, providing a solution to the evaluation version.
Fig. 2.9
Pseudo-code of algorithm TSPOPT(n, d) for the optimization version of the traveling salesman problem based on the repeated execution of algorithm TSPDEC(n, d, B) for the decision version.
Lines 12 to 24 compute the solution of the optimization version. The loop in lines 12 to 18 identifies the arcs that belong to an optimal Hamiltonian cycle. Lines 12 and 13 enumerate all ordered pairs of cities. In line 14, we save in TMP the distance d ij associated with the arc (i, j) ∈ A, with i, j ∈ V and i ≠ j, and replace it with a sufficiently large value BIG in line 15. Line 16 asks for the existence of a tour whose length is less than or equal to OPT with the modified distance d ij . If there is none, then arc (i, j) must belong to the optimal solution and its length d ij is reset to the original value TMP. The arcs that have their lengths reset to BIG at the end of the loop in lines 12 to 18 are those that do not belong to an optimal solution. The optimal solution S ∗ is initialized with the empty set in line 19. The loop in lines 20 to 24 builds an optimal tour. Lines 20 and 21 are used to enumerate all arcs or ordered pairs of cities. Line 22 inserts an arc (i, j) ∈ A in the optimal solution if its length has not been reset to BIG, in which case it belongs to the optimal solution. Line 25 returns an optimal solution S ∗ and its optimal value OPT, solving, respectively, the optimization and the evaluation versions.
Since algorithm TSPDEC(n, d, B) runs in time T(n), both the evaluation and the optimization versions can be solved in time O(n 2 ⋅ T(n)), therefore within a polynomial factor of the time needed to solve the decision version. The binary search approach to solve the evaluation version can be extended to most problems under reasonable assumptions, while similar constructions are available for solving the optimization version by successive applications of an algorithm that solves the decision version.
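The evaluation part of this scheme (lines 1 to 11 of Figure 2.9) can be sketched as follows. Since the book treats TSPDEC as a black box, the decision oracle tsp_dec below is a brute-force stand-in used only to make the sketch runnable, and the 4-city distance matrix is hypothetical; the binary search uses the strict update LB ← mid + 1 on a "no" answer, which guarantees termination.

```python
from itertools import permutations

def tsp_dec(n, d, B):
    """Stand-in decision oracle: is there a tour of length <= B?
    (Brute force for the demo only.)"""
    return any(
        sum(d[t[k]][t[(k + 1) % n]] for k in range(n)) <= B
        for t in ((0,) + p for p in permutations(range(1, n)))
    )

def tsp_eval(n, d):
    """Binary search for the optimal tour length: LB starts at 0
    and UB at a trivial upper bound (the sum of all distances)."""
    lb, ub = 0, sum(map(sum, d))
    while lb < ub:
        mid = (lb + ub) // 2
        if tsp_dec(n, d, mid):
            ub = mid          # a tour of length <= mid exists
        else:
            lb = mid + 1      # no such tour: the optimum is larger
    return lb

# Hypothetical symmetric 4-city instance; optimum is 2+4+3+9 = 18.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_eval(4, d))  # → 18
```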
### 2.2.4 The classes P and NP
We have seen in the previous section that the decision version of a combinatorial optimization problem amounts to a question that can be answered by either "yes" or "no":
> SHORTEST PATH: Given a directed graph G = (V, A), an origin node s ∈ V, a destination node t ∈ V, lengths d ij associated with every arc (i, j) ∈ A, and an integer B, is there a path from s to t in G whose length is less than or equal to B?
> SPANNING TREE: Given a graph G = (V, U), a weight d ij associated with each edge (i, j) ∈ U, and an integer B, is there a spanning tree of G such that the sum of the weights of its edges is less than or equal to B?
> STEINER TREE IN GRAPHS: Given a graph G = (V, U), lengths d ij associated with each edge (i, j) ∈ U, a subset T ⊆ V, and an integer B, is there a subtree of G that connects all nodes in T and such that the sum of its edge lengths is less than or equal to B?
> CLIQUE: Given a graph G = (V, U) and an integer B, is there a clique in G with at least B nodes?
> KNAPSACK: Given a set I = { 1,..., n} of items, integer weights a i and utilities c i associated with each item i ∈ I, a maximum weight capacity b, and an integer B, is there a subset of items K ⊆ I such that ∑ i ∈ K a i ≤ b and ∑ i ∈ K c i ≥ B?
> TRAVELING SALESMAN PROBLEM (TSP): Given a set V = { 1,..., n} of cities and non-negative distances d ij between every pair of cities i, j ∈ V, with i ≠ j, and an integer B, is there a tour visiting every city of V exactly once with length less than or equal to B?
Other examples of well-known computational problems that correspond to the decision versions of combinatorial optimization problems are
> LINEAR PROGRAMMING: Given an m × n matrix A of integer numbers, an integer m-vector b, an integer n-vector c, and an integer B, is there an n-vector x ≥ 0 of rational numbers such that A ⋅ x = b and c ⋅ x ≤ B?
> GRAPH COLORING: Given a graph G = (V, U) and an integer B, is it possible to color the nodes of G with at most B colors, such that adjacent nodes receive different colors?
> INDEPENDENT SET: Given a graph G = (V, U) and an integer B, is there an independent set of nodes in G (i.e., a subset of mutually nonadjacent nodes) with at least B nodes?
> INTEGER PROGRAMMING: Given an m × n matrix A of integer numbers, an integer m-vector b, an integer n-vector c, and an integer B, is there an n-vector x ≥ 0 of integer numbers such that A ⋅ x = b and c ⋅ x ≤ B?
In general, a decision problem is one that has only two possible solutions: either the answer "yes" or the answer "no." All the above decision versions of combinatorial optimization problems are decision problems. However, there are many other decision problems that have not been originally cast as optimization problems. Some examples are
> HAMILTONIAN CYCLE: Given a graph G = (V, U), is there a Hamiltonian cycle in G visiting all its nodes exactly once?
> GRAPH PLANARITY: Given a graph G = (V, U), is it planar?
> GRAPH CONNECTEDNESS: Given a graph G = (V, U), is it connected?
> SATISFIABILITY (SAT): Given m disjunctive clauses C 1,..., C m involving the Boolean variables x 1,..., x n and their complements, is there a truth assignment of 0 (false) and 1 (true) values to these variables such that the formula C 1 ∧ C 2 ∧⋯ ∧ C m is satisfied?
Decision problems and the decision versions of optimization problems are closer to the prototype of computational problems studied by the theory of computation and play a very important role in complexity theory. Furthermore, since we have shown that the decision version of an optimization problem cannot be harder than the optimization version, if a decision problem cannot be solved in polynomial time, then its corresponding optimization version cannot be solved in polynomial time as well.
Definition 2.5 (Class P).
A decision problem 𝒫 belongs to the class P if there exists an algorithm 𝒜 that solves any of its instances in polynomial time.
In other words, the class P is formed by "easy" decision problems that can be efficiently solved by polynomial-time algorithms. Some problems in this class among those we have already examined are SHORTEST PATH, SPANNING TREE, GRAPH CONNECTEDNESS, and LINEAR PROGRAMMING. For all of them, there are efficient algorithms that compute an exact "yes" or "no" answer in polynomial time.
Given a decision problem 𝒫 and a "yes" instance x of 𝒫, a certificate c(x) is a string that encodes a solution and makes it possible to reach the "yes" answer for instance x. A certificate is said to be concise if the length of its encoding is bounded from above by a polynomial in the amount of memory that is used to encode instance x. With these definitions, we identify a broader class of decision problems.
Definition 2.6 (Class NP).
A decision problem 𝒫 belongs to the class NP if there exists a certificate-checking algorithm 𝒜 such that, for any "yes" instance x of 𝒫, there is a concise certificate c(x) with the property that algorithm 𝒜 applied to instance x and certificate c(x) reaches the answer "yes" in polynomial time.
For a problem to be in NP, it is not required that there exists an algorithm that computes an answer in polynomial time for every instance of this problem. All that is required is that there exists a concise certificate for any "yes" instance that can be checked for validity in polynomial time. The certificate-checking algorithm 𝒜 is usually a combination of the recognition algorithm 𝒜 F that checks for feasibility with the cost function algorithm 𝒜 f that computes the cost function value, as defined in Section 2.2.2.
We next give examples of concise certificates and membership in NP for several combinatorial optimization problems.
Maximum clique problem – Concise certificate and membership in NP
A certificate c(x) for the maximum clique problem is an encoding of a possible list of nodes forming a clique. This certificate is concise, because it cannot have more than | V | nodes. The certificate-checking algorithm is polynomial, since it amounts to first checking whether c(x) corresponds to a subset of the nodes of the graph G = (V, U), then verifying if there is an edge in G for every pair of nodes in the certificate. This part corresponds to the application of the recognition algorithm 𝒜 F and is followed by the application of the cost function algorithm 𝒜 f that counts the number of nodes in the certificate and by the comparison with the integer parameter B. Therefore, the decision problem CLIQUE belongs to NP. ■
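Such a certificate check can be sketched directly: the function below chains the recognition test (is the certificate a clique?) with the cardinality test against the bound B, and runs in polynomial time in the size of the instance. The graph and the certificates are hypothetical examples.

```python
from itertools import combinations

def check_clique_certificate(certificate, nodes, edges, B):
    """Polynomial-time certificate check for CLIQUE: feasibility test
    (recognition) followed by the cardinality test against bound B."""
    adj = set(map(frozenset, edges))
    if not set(certificate) <= set(nodes):
        return False  # certificate mentions a node not in the graph
    if any(frozenset(p) not in adj for p in combinations(certificate, 2)):
        return False  # some pair is not adjacent: not a clique
    return len(certificate) >= B

# Hypothetical instance in which {2, 3, 5, 6} is a clique.
edges = [(2, 3), (2, 5), (2, 6), (3, 5), (3, 6), (5, 6), (1, 2), (4, 5)]
print(check_clique_certificate([2, 3, 5, 6], range(1, 7), edges, 4))  # → True
print(check_clique_certificate([1, 2, 3], range(1, 7), edges, 3))     # → False
```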
Knapsack problem – Concise certificate and membership in NP
A certificate c(x) for the knapsack problem is an encoding of a possible sequence of integer numbers representing a subset of the n available items. Once again this certificate is concise, because it cannot have more than n elements. The certificate-checking algorithm is polynomial and corresponds exactly to the recognition algorithm 𝒜 F , since it amounts to adding up the weights of the items in c(x) and comparing the total weight with the maximum weight capacity b. Next, the cost function algorithm 𝒜 f adds up the utilities of the items in c(x) and their total utility is compared with the integer parameter B. Consequently, the decision problem KNAPSACK also belongs to NP. ■
Traveling salesman problem – Concise certificate and membership in NP
A certificate c(x) for the traveling salesman problem is an encoding of a possible permutation of the n cities or nodes in the graph G = (V, U). This certificate is also concise, because it must have exactly | V | nodes. The certificate-checking algorithm is polynomial and corresponds to the recognition algorithm 𝒜 F since it amounts to checking if every city or node in the graph G appears exactly once in the certificate. Finally, the cost function algorithm 𝒜 f adds up the lengths of the edges defined by the permutation established by certificate c(x) and the total length of the tour is compared with the integer parameter B. Therefore, the decision problem TSP also belongs to NP. ■
Examples of other decision problems in NP are STEINER TREE IN GRAPHS, GRAPH PLANARITY, GRAPH COLORING, INDEPENDENT SET, HAMILTONIAN CYCLE, SAT, and INTEGER PROGRAMMING.
To prove that a problem is in NP, one is not required to provide an efficient algorithm to compute the certificate c(x) for any given instance x. One has only to prove the existence of at least one concise certificate for each "yes" instance. Nothing is required for the "no" instances: concise certificates should exist only for "yes" instances.
We now suppose that there exists a polynomial-time algorithm 𝒜 for solving a decision problem 𝒫 in P. In other words, algorithm 𝒜 is able to provide the appropriate "yes" or "no" answer for every instance of 𝒫. Therefore, the steps of algorithm 𝒜 applied to a "yes" instance x can be represented as a string of polynomial size. This string is a concise certificate, since it can be checked in polynomial time to be a valid execution of 𝒜. The existence of a concise certificate that can be checked in polynomial time for any "yes" instance x shows that 𝒫 is also in NP. Therefore, whenever a decision problem 𝒫 ∈ P, it also holds that 𝒫 ∈ NP. In other words, P ⊆ NP.
We remark that the acronym NP stands for nondeterministic polynomial, and not for nonpolynomial, as it often appears erroneously in the literature. A nondeterministic algorithm can be seen as one that makes use of the same instructions used by (deterministic) algorithms, plus the special GO TO BOTH X,Y command. This instruction simultaneously transfers the execution flow of a computer program to two instructions labeled X and Y. This very powerful statement behaves as if it creates two parallel threads of the algorithm currently under execution, one continuing from the instruction labeled X and the other from that labeled Y. The repeated application of this command can create very strong algorithms with an unlimited level of parallelism. As an example, we consider the nondeterministic algorithm ND-01KSP(n, a, b, c, B) in Figure 2.10 that solves KNAPSACK, i.e., the decision version of the knapsack problem.
Fig. 2.10
Pseudo-code of the nondeterministic algorithm ND-01KSP(n, a, b, c, B) that solves the decision version of the knapsack problem in polynomial time.
The first part of the algorithm consists of the loop from line 1 to 7, which is used to create 2^n parallel execution threads. Line 1 implements a loop exploring all variables x_1, ..., x_n, starting from x_1. Line 2 duplicates each currently active thread in the execution flow of the algorithm: line 3 sets x_i to 0 in the first thread, while line 5 sets x_i to 1 in the second. At the end of the execution of the loop in lines 1 to 7, there are 2^n threads of the algorithm, all of them running in parallel. Every variable x_i, for i = 1, ..., n, is set to 0 in 2^{n-1} threads and to 1 in the other 2^{n-1}.
In the second part of the algorithm, each parallel thread verifies in line 8 whether the solution x_1, ..., x_n that it contains is feasible and whether its total utility is greater than or equal to B. If so, then the algorithm returns "yes" and stops.
We observe that the first part of the algorithm is equivalent to the creation of 2^n concise certificates for KNAPSACK, each of them running in a different thread and corresponding to a different assignment of 0-1 values to the variables x_1, ..., x_n. The second part of the algorithm, running in each thread, checks the certificate that thread stores and answers "yes" if it is valid. Since all possible certificates are explored in parallel, there will always be at least one thread that answers "yes" for any "yes" instance. Once again, we observe that nothing is required for the "no" instances.
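The effect of ND-01KSP can be simulated deterministically, at an exponential cost in time, by enumerating all 2^n certificates one after the other. The following Python sketch does exactly that; the function name and the small instance are illustrative, not taken from the book:

```python
from itertools import product

def knapsack_decision(a, c, b, B):
    """Deterministic simulation of ND-01KSP: try all 2^n 0-1 certificates.

    a[i] = weight of item i, c[i] = utility of item i,
    b = knapsack capacity, B = target total utility.
    Returns "yes" if some feasible subset has total utility >= B, else "no".
    """
    n = len(a)
    for x in product((0, 1), repeat=n):        # the 2^n "parallel threads"
        weight = sum(a[i] * x[i] for i in range(n))
        utility = sum(c[i] * x[i] for i in range(n))
        if weight <= b and utility >= B:       # the feasibility/utility check
            return "yes"
    return "no"                                # no certificate is valid

# Illustrative instance: weights [3, 2, 4], utilities [5, 3, 4], capacity 5.
print(knapsack_decision([3, 2, 4], [5, 3, 4], 5, 8))   # "yes": items 1 and 2
print(knapsack_decision([3, 2, 4], [5, 3, 4], 5, 10))  # "no"
```

Each iteration of the loop plays the role of one thread of the nondeterministic algorithm; the simulation makes the exponential blow-up of unlimited parallelism explicit.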
We say that a nondeterministic algorithm runs in polynomial time if the first thread to reach the "yes" answer runs in time polynomial in the size of the instance. Although the construction of parallel computers with an arbitrarily large number of processors (i.e., with unlimited parallelism) is unlikely, at least in the near future, nondeterministic algorithms provide a very useful and powerful tool. In particular, they can be used to provide an alternative to Definition 2.6 of the class NP:
Definition 2.7 (Class NP).
A decision problem  belongs to the class NP if and only if any of its "yes" instances can be solved in polynomial time by a nondeterministic algorithm.
In addition to containing all problems in P, the class NP also contains the decision versions of many optimization problems and arises very naturally in the study of the complexity of combinatorial optimization problems.
### 2.2.5 Polynomial transformations and NP-complete problems
Solving a computational problem often becomes easy as soon as we assume the existence of an efficient algorithm for solving a second problem, which is equivalent to the first.
Definition 2.8 (Polynomial-time reduction).
Let Π1 and Π2 be two decision problems. We say that there is a polynomial-time reduction from problem Π1 to Π2 if and only if Π1 can be solved by an algorithm A1 that amounts to a polynomial number of calls to an algorithm A2 for solving problem Π2.
As a consequence, if problem Π1 polynomially reduces to Π2 and there is a polynomial-time algorithm for Π2, then there is also a polynomial-time algorithm for problem Π1. A polynomial-time transformation is a special case of a polynomial-time reduction that is particularly relevant in the context of complexity theory.
Definition 2.9 (Polynomial-time transformation).
Let Π1 and Π2 be two decision problems. We say that there is a polynomial-time transformation from problem Π1 to problem Π2 if an instance I2 of Π2 can be constructed in polynomial time from any instance I1 of Π1, such that I1 is a "yes" instance of Π1 if and only if I2 is a "yes" instance of Π2.
We give two examples of polynomial-time transformations between problems in NP. We first show that CLIQUE polynomially transforms to INDEPENDENT SET. Let an instance I1 of CLIQUE be defined by a graph G = (V, U) and an integer B. Let Ḡ be the complement of G: for every pair of nodes i, j ∈ V, there is an edge (i, j) in Ḡ if and only if the pair i, j does not constitute an edge in U. Therefore, an instance I2 of INDEPENDENT SET defined by the complement Ḡ of G and the same integer B can be constructed in time O(|V|^2) such that I1 is a "yes" instance of CLIQUE if and only if I2 is a "yes" instance of INDEPENDENT SET (see Figure 2.11 for an illustration of the transformation from CLIQUE to INDEPENDENT SET).
Fig. 2.11
Polynomial transformation from CLIQUE to INDEPENDENT SET: Nodes 1, 4, and 5 form a maximum clique of the original graph G in (a), while the same nodes correspond to a maximum independent set of the complement Ḡ of G in (b). The instances defined by G and Ḡ are "yes" instances for any B ≤ 3 and "no" instances for any B > 3.
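The transformation can be sketched in a few lines of Python; the helper names and the small graph below are hypothetical, chosen only to illustrate the construction of the complement graph in O(|V|^2) time and the clique/independent-set correspondence:

```python
from itertools import combinations

def complement(nodes, edges):
    """Complement graph: a pair {i, j} is an edge iff it is not an edge of G."""
    edges = {frozenset(e) for e in edges}
    return {frozenset(p) for p in combinations(nodes, 2)} - edges

def is_clique(subset, edges):
    """True iff every pair of nodes in subset is adjacent."""
    edges = {frozenset(e) for e in edges}
    return all(frozenset(p) in edges for p in combinations(subset, 2))

def is_independent_set(subset, edges):
    """True iff no pair of nodes in subset is adjacent."""
    edges = {frozenset(e) for e in edges}
    return all(frozenset(p) not in edges for p in combinations(subset, 2))

# Hypothetical graph G containing the clique {1, 4, 5}:
V = [1, 2, 3, 4, 5]
U = [(1, 2), (2, 3), (1, 4), (1, 5), (4, 5)]

print(is_clique([1, 4, 5], U))                          # True
print(is_independent_set([1, 4, 5], complement(V, U)))  # True
```

A node set is a clique in G exactly when it is an independent set in the complement, which is the key property the transformation exploits.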
As a second example, we show that HAMILTONIAN CYCLE polynomially transforms to TSP. Let an instance I1 of HAMILTONIAN CYCLE be defined by a directed graph G = (V, A). First, associate a city of the TSP instance with every node in V. For every pair of cities i, j ∈ V, we set the distance d_ij = 1 if (i, j) ∈ A, and d_ij = 2 otherwise. Next, set B = |V|. Therefore, an instance I2 of TSP can be constructed in time O(|V|^2) such that I1 is a "yes" instance of HAMILTONIAN CYCLE if and only if I2 is a "yes" instance of TSP. In fact, there is a Hamiltonian cycle in G if and only if there exists a tour of length B = |V| visiting all cities corresponding to the nodes in V.
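This second transformation is equally mechanical; a minimal Python sketch, with an illustrative (hypothetical) four-node instance:

```python
def hc_to_tsp(nodes, arcs):
    """Build a TSP instance from a HAMILTONIAN CYCLE instance G = (V, A):
    distance 1 for arcs of G, distance 2 otherwise, and bound B = |V|."""
    arcs = set(arcs)
    d = {(i, j): 1 if (i, j) in arcs else 2
         for i in nodes for j in nodes if i != j}
    return d, len(nodes)  # distance function and the bound B

# Hypothetical instance: the directed cycle 1->2->3->4->1 plus the arc (1, 3).
d, B = hc_to_tsp([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
tour = [(1, 2), (2, 3), (3, 4), (4, 1)]  # a Hamiltonian cycle of G
print(sum(d[a] for a in tour), B)        # 4 4: tour length equals B = |V|
```

A tour of length exactly |V| must use only distance-1 arcs, i.e., only arcs of G, and hence corresponds to a Hamiltonian cycle.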
A polynomial-time transformation can be seen as a polynomial-time reduction that makes a single call to the algorithm for the target problem, exactly at its end, and spends the rest of the time constructing the instance I2 of that problem. With this definition, we can introduce a very important subclass of the problems in NP.
Definition 2.10 (NP-complete problems).
A decision problem in NP is said to be NP -complete if every other problem in NP can be transformed to it in polynomial time.
NP-complete problems have therefore a very important property: if there is a polynomial-time algorithm for any one of them, then there are also polynomial-time algorithms for all other problems in NP.
The proof that a problem is NP-complete involves two main steps: (1) proving that it is in NP and (2) showing that all other problems in NP can be transformed to it in polynomial time. The second part is often the hardest and is usually proved by showing that another problem already proved to be NP-complete is polynomially transformable to the problem at hand. SAT was the first problem explicitly proved to be NP-complete. Other NP-completeness results followed by polynomial transformations originating from SAT, showing that 3-SAT (a particular case of SAT, in which every clause has exactly three variables or their complements), KNAPSACK, CLIQUE, INDEPENDENT SET, TSP, STEINER TREE IN GRAPHS, INTEGER PROGRAMMING, HAMILTONIAN CYCLE, GRAPH COLORING, and GRAPH PLANARITY are also NP-complete, among many other problems.
We notice that special cases of NP-complete problems need not themselves be NP-complete or hard to solve. As an example, we recall that CLIQUE is NP-complete. We now consider the complexity of the decision problem PLANAR CLIQUE, which corresponds to the restriction of CLIQUE to planar graphs. We know from graph theory that a planar graph cannot have a clique with five or more nodes. Therefore, a maximum clique of a planar graph G = (V, U) can have at most four nodes and can be found by exhaustive search in time O(|V|^4), or even faster by more specialized algorithms. PLANAR CLIQUE is thus a polynomially solvable special case of CLIQUE and, consequently, belongs to P.
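The O(|V|^4) exhaustive search mentioned above is simple enough to write down; a minimal Python sketch (the function name and the sample planar graph are illustrative):

```python
from itertools import combinations

def max_clique_planar(nodes, edges):
    """Maximum clique of a planar graph by exhaustive search: a planar graph
    has no clique on five or more nodes, so testing every subset of at most
    four nodes suffices, which takes O(|V|^4) subset tests."""
    edges = {frozenset(e) for e in edges}
    best = []
    for k in (1, 2, 3, 4):  # increasing size, so later finds are never smaller
        for subset in combinations(nodes, k):
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                best = list(subset)
    return best

# K4 plus a pendant node: planar, with maximum clique {1, 2, 3, 4}.
print(max_clique_planar(
    [1, 2, 3, 4, 5],
    [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]))  # [1, 2, 3, 4]
```

The point is not efficiency but the structural bound: once clique size is capped by a constant, brute force is polynomial.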
As a second example, we consider KNAPSACK, the decision version of the knapsack problem, which can be solved in nonpolynomial time O(n · b) by a straightforward dynamic programming algorithm. Suppose now that we are only interested in a restricted set of KNAPSACK instances, in which the knapsack capacity is limited to b ≤ n. In this case, the dynamic programming algorithm runs in O(n^2) time. Therefore, although KNAPSACK is in NP, we conclude that KNAPSACK restricted to b ≤ n is in P.
### 2.2.6 NP-hard problems
We say that a problem Π is NP -hard if all problems in NP are polynomially transformable to Π, but its membership in NP cannot be established. We notice that, although Π is certainly as hard as any problem in NP, in this case it does not qualify to be called NP-complete.
Besides its use to describe decision problems that are not proved to be in NP, the term NP-hard is also used to refer to optimization problems (which are certainly not in NP, since they are not decision problems) whose decision versions are NP-complete.
For example, we can say that the maximum clique problem, the knapsack problem, and the traveling salesman problem, introduced as combinatorial optimization problems in Section 1.2, are all NP-hard, since their decision versions CLIQUE, KNAPSACK, and TSP, respectively, are NP-complete.
### 2.2.7 The class co-NP
A problem is said to be the complement of another problem if every "yes" instance of the latter is a "no" instance of the former and vice-versa. We recall the definition of the CLIQUE decision problem:
> CLIQUE: Given a graph G = (V, U) and an integer B, is there a clique in G with at least B nodes?
It is easy to show that CLIQUE belongs to NP, since every "yes" instance has a concise certificate, defined by a list with at least B nodes of G. We now consider the following complementary version of the same problem:
> CLIQUE COMPLEMENT: Given a graph G = (V, U) and an integer B, is it true that there is no clique in G with at least B nodes?
It is clear that every "yes" instance of CLIQUE is a "no" instance of CLIQUE COMPLEMENT and vice-versa. However, there is no proof to date that CLIQUE COMPLEMENT belongs to NP, since the only known strategy for proving that there is no clique with B or more nodes consists in listing all cliques in G, counting the number of nodes in each of them, and verifying that every one of them has fewer than B nodes. This clique list is indeed a certificate, but it is not concise, since it has exponential length.
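By contrast, checking a concise certificate for CLIQUE itself is straightforward; a minimal Python sketch of the polynomial-time check (the function name and the sample data are illustrative):

```python
from itertools import combinations

def check_clique_certificate(cert, edges, B):
    """Verify in polynomial time that a node list is a valid concise
    certificate for a "yes" instance of CLIQUE: it must contain at least B
    nodes, all pairwise adjacent. Uses O(|cert|^2) edge lookups."""
    edges = {frozenset(e) for e in edges}
    return len(cert) >= B and all(
        frozenset(p) in edges for p in combinations(cert, 2))

U = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(check_clique_certificate([1, 2, 3], U, 3))  # True: {1,2,3} is a clique
print(check_clique_certificate([2, 3, 4], U, 3))  # False: edge (2,4) missing
```

This asymmetry (easy to verify a clique, no known concise witness for the absence of one) is exactly what separates membership in NP from membership in co-NP.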
We now consider the case of a problem that belongs to P:
> SPANNING TREE: Given a graph G = (V, U), weights d_ij associated with every edge (i, j) ∈ U, and an integer B, is there a spanning tree of G whose length is less than or equal to B?
The same algorithm that solves any "yes" or "no" instance of SPANNING TREE can also be used to solve SPANNING TREE COMPLEMENT:
> SPANNING TREE COMPLEMENT: Given a graph G = (V, U), weights d_ij associated with every edge (i, j) ∈ U, and an integer B, is it true that there is no spanning tree of G whose length is less than or equal to B?
Since a polynomial algorithm that solves any problem in P can also be used to solve its complement in polynomial time by simply replacing any "yes" answer to the former by a "no" answer to the latter and vice-versa, we can conclude that the complement of any problem in P is also in P.
This same argument cannot be used to prove that the complement of a problem in NP is also in NP. This leads to the definition of a new complexity class:
Definition 2.11 (Class co-NP).
A decision problem belongs to the class co-NP if its complement is in NP.
### 2.2.8 Pseudo-polynomial algorithms and strong NP-completeness
Definition 2.12 (Pseudo-polynomial algorithms).
An algorithm A for a problem Π is said to be pseudo-polynomial if there is a polynomial function p such that A solves any instance of Π in time bounded by p(L, M), where L and M are, respectively, the length of a reasonable encoding of the instance and the largest integer appearing in it.
We observe that whenever the largest integer M appearing in any instance of a problem solvable by a pseudo-polynomial algorithm is bounded by a polynomial function of the size of the instance, the algorithm becomes polynomial. As an example, we recall that KNAPSACK can be solved in pseudo-polynomial time O(n · b) by a dynamic programming algorithm. This gives rise to a very efficient algorithm when the maximum weight capacity b is bounded and small. For instance, this dynamic programming algorithm runs in time O(n^2) whenever b ≤ n.
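The O(n · b) dynamic program mentioned above can be sketched as follows; the variable names mirror the text (a = weights, c = utilities, b = capacity), and the small instance is illustrative:

```python
def knapsack_dp(a, c, b):
    """Pseudo-polynomial dynamic program for the 0-1 knapsack problem.

    a[i] = weight of item i, c[i] = utility of item i, b = capacity.
    T[w] holds the best utility achievable with total weight at most w.
    Runs in O(n * b) time and O(b) space.
    """
    T = [0] * (b + 1)
    for ai, ci in zip(a, c):
        for w in range(b, ai - 1, -1):  # backward sweep: each item used once
            T[w] = max(T[w], T[w - ai] + ci)
    return T[b]

# Illustrative instance: weights [3, 2, 4], utilities [5, 3, 4], capacity 5.
print(knapsack_dp([3, 2, 4], [5, 3, 4], 5))  # 8 (take items 1 and 2)
```

The decision version KNAPSACK is answered "yes" exactly when the returned optimum is at least the target B; the running time depends on the magnitude of b, not only on the encoding length, which is what makes the algorithm pseudo-polynomial.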
Definition 2.13 (Strongly NP-complete problems).
A problem is said to be strongly NP -complete if it remains NP-complete even when there is a polynomial function p such that the largest integer M appearing in each of its instances is bounded by p(L), where L is the length of a reasonable encoding of the instance.
TSP, CLIQUE, and HAMILTONIAN CYCLE are examples of strongly NP-complete problems. On the other hand, KNAPSACK is a typical example of a problem that is not strongly NP-complete, since it can be solved in polynomial time whenever the maximum capacity b is bounded by a polynomial function of the number n of variables.
### 2.2.9 PSPACE and the polynomial hierarchy
We have discussed up to this point the complexity of decision problems exclusively in terms of the computational time or the number of operations needed for their solution.
However, we can also consider the computational requirements in terms of the amount of space or memory that is needed for solving a decision problem. With this idea in mind, we present the definition of a new class of decision problems:
Definition 2.14 (Class PSPACE).
A decision problem belongs to the class PSPACE if there exists an algorithm that solves any of its instances using an amount of space (or memory) that is bounded by a polynomial in the length of its input.
Any algorithm that runs in a polynomial amount of time cannot consume more than a polynomial amount of space, since it cannot write more than a fixed number of symbols (or words) in any of its operations. Therefore, it is clear that P ⊆ PSPACE.
However, what can we say about the relationship between the classes NP and PSPACE? In fact, we can prove that NP ⊆ PSPACE also holds, although this might seem unexpected at first sight. We first notice that any nondeterministic algorithm for solving a decision problem in NP can be simulated by a deterministic algorithm that generates all possible concise certificates for any of its "yes" instances, one after the other. Furthermore, since the problem is in NP, this deterministic algorithm can check each of these certificates in polynomial time, using an amount of space that is bounded by a polynomial in the length of the input. Although this deterministic algorithm runs in exponential time, the total amount of space used is polynomial, since each certificate can be erased after it is checked and the space it occupied can be freed and reused for checking the next certificate. Therefore, NP ⊆ PSPACE. By a similar argument, we can also prove that co-NP ⊆ PSPACE.
We conclude this section with Figure 2.12, which depicts the basic polynomial hierarchy and complexity classes. The preceding discussion has shown that any decision problem that can be solved in polynomial time by either a deterministic or a nondeterministic algorithm (or, alternatively, sequentially or in parallel) can also be solved using a polynomial amount of space. Therefore, even problems that take an exponential amount of time can be solved in polynomial space. Since polynomial growth is taken as the limit for any scarce resource, such as time or space, we can say that time requirements become critical (i.e., superpolynomial) before space requirements do. This observation supports the adoption of time as the main and critical scarce resource in the analysis and design of computer algorithms, which in practice very rarely involve space considerations.
Fig. 2.12
Complexity classes and the polynomial hierarchy.
## 2.3 Solution approaches
There are many more NP-complete and NP-hard combinatorial optimization problems than those presented to illustrate the main concepts introduced in this chapter. In fact, we can say that the majority of the problems of practical relevance belong to these classes. The fact that such problems are considered to be computationally intractable does not preclude the need for their solution. In addition to general superpolynomial-time exact methods, a great amount of research is devoted to identifying special cases or situations that can be solved exactly in reasonable time, or to developing approximate algorithms that efficiently find high-quality solutions. Some of these approaches are briefly discussed below:
1. Superpolynomial-time exact algorithms: Theoretical developments in polyhedral theory, combined with efficient algorithm design, appropriate data structures, and advances in computer hardware, have made it possible to solve even very large instances of NP-complete and NP-hard problems. Methods such as branch-and-bound and branch-and-cut are routinely applied to solve large instances exactly in affordable computation time. Such strategies cannot be discarded, in particular in the case of real-life instances, whose sizes are often limited in practice.
2. Pseudo-polynomial algorithms: These algorithms form a subclass of superpolynomial-time algorithms. Pseudo-polynomial algorithms can be very efficient in practice whenever the maximum integer appearing in any instance of a given problem is small. We have already noticed that since KNAPSACK can be solved in pseudo-polynomial time O(n · b) by dynamic programming, this algorithm becomes very efficient when the maximum weight capacity b is bounded and small. Pseudo-polynomial algorithms can therefore be very practical and attractive in some situations, in spite of the fact that they are, essentially, superpolynomial-time algorithms.
3. Polynomially solvable special cases: It is often the case that, although the general formulation of a problem is NP-complete or NP-hard, interesting or practical instances can be solved exactly in polynomial time. If one is interested exclusively in solving these special instances, the fact that the general problem is intractable is less relevant and exact approaches can be used. Some examples follow:
* We have seen that CLIQUE is NP-complete. However, if one considers only planar graphs G = (V, U), the special, restricted case of CLIQUE in planar graphs (or PLANAR CLIQUE) can be solved exactly by exhaustive enumeration in polynomial time O(|V|^4), or even faster by direct application of Kuratowski's theorem.
* Although SAT is NP-complete (as is 3-SAT), its special case 2-SAT in which each clause has exactly two literals (a literal is a Boolean variable or its complement) can be solved exactly in polynomial time.
4. Approximation algorithms: These are algorithms that build feasible solutions that are not necessarily optimal, but whose objective function value can be shown to be within a guaranteed gap from the exact optimal value. Although in some cases this gap can be reasonable, for most problems it can be quite large.
5. Heuristics: A heuristic is essentially any algorithm that provides a feasible solution for a given problem, without necessarily providing a guarantee of performance in terms of solution quality or computation time. Heuristic methods can be classified into three main groups:
* Constructive heuristics are those that build a feasible solution from scratch.
Greedy and semi-greedy algorithms, to be introduced in Chapter , are examples of constructive heuristics.
* Local search or improvement procedures start from a feasible solution and improve it by successive small modifications until a locally optimal solution is found. Although they provide high-quality solutions close to the optimum in many cases, in some situations they can become prematurely stuck in low-quality locally optimal solutions. Local search heuristics and their variants will be explored in Chapter .
* Metaheuristics are general high-level procedures that coordinate simple heuristics and rules to find good-quality solutions to computationally difficult optimization problems. Among them, we find simulated annealing, tabu search, greedy randomized adaptive search procedures, genetic algorithms, scatter search, variable neighborhood search, ant colony optimization, and others. Metaheuristics are based on distinct paradigms and offer different mechanisms to escape from locally optimal solutions (as opposed to greedy algorithms or local search methods). They are among the most effective solution strategies for solving combinatorial optimization problems in practice and very often produce much better solutions than those obtained by the simple heuristics and rules they coordinate. Metaheuristics have been applied to a wide array of academic and real-world problems. The customization (or instantiation) of a metaheuristic to a given problem yields a heuristic for this problem.
## 2.4 Bibliographical notes
Fundamental references for the shortest path problem, the minimum spanning tree problem, the maximum clique problem, the knapsack problem, the traveling salesman problem, and the Steiner problem in graphs, which were revisited and formulated in Section 1.2, have already been presented in Section 1.7.
The foundations of the theory of computational complexity appeared in Cobham (1964) and Edmonds (1965; 1975), where informal references to P, NP, and related concepts are made. The landmark reference on the theory of NP-completeness is the seminal paper of Cook (1971), in which the author proved that SAT and 3-SAT are NP-complete. This work was closely followed by that of Karp (1972), in which its consequences were discussed and explored, leading to results establishing the NP-completeness of several other problems. A tutorial on the theory of NP-completeness was presented by Karp (1975). A discussion about strong NP-completeness first appeared in Garey and Johnson (1978).
Garey and Johnson (1979) is the most influential textbook on computational complexity theory. It introduced the theory of NP-completeness and computer intractability. The exposition and the basic notions of computational complexity presented in Section 2.2 follow closely the textbook by Papadimitriou and Steiglitz (1982). These ideas were further developed in Papadimitriou (1994) and Yannakakis (2007).
Accounts of integer programming methods that were cited in Section 2.3, such as branch-and-bound and branch-and-cut, can be found in textbooks by Schrijver (1986), Nemhauser and Wolsey (1988), Wolsey (1998), and Bertsimas and Weismantel (2005), among others. The pseudo-polynomial dynamic programming algorithm for the knapsack problem appeared in many references, in particular in the textbook of Martello and Toth (1990). It is well known that the stable set problem, the maximum clique problem, the chromatic number problem, and the clique cover problem are NP-complete for general graphs (Garey and Johnson, 1979). However, Grötschel et al. (1984) showed that the weighted versions of these problems can be solved in polynomial time for perfect graphs. Kuratowski's theorem was originally published in Kuratowski (1930). Krom (1967) described the first polynomial-time algorithm for 2-SAT . Early discussions about approximation algorithms appeared in Johnson (1974) and Garey and Johnson (1976). The reader is also referred to the textbooks of Vazirani (2001) and Williamson and Shmoys (2011). The first fit decreasing algorithm for bin packing is a classical example of an approximation algorithm, guaranteeing that no packing it generates will use more than 11/9 times the optimal number of bins (Johnson, 1973).
As noticed in the previous chapter, the textbooks by Nilsson (1971; 1982) and Pearl (1985) are fundamental references on the origins, principles, and applications of A∗ and other heuristic search methods. Cormen et al. (2009) presented a good coverage of greedy algorithms. Hoos and Stützle (2005) report in detail the foundations and applications of stochastic local search, while Michelis et al. (2007) discuss theoretical aspects of local search.
Glover and Kochenberger (2003) and Gendreau and Potvin (2010) collected thorough and complete accounts of metaheuristics, with a large coverage of the subject and detailed chapters about each of them. Other tutorials can also be found in Reeves (1993) and Burke and Kendall (2005; 2014). Some books provide detailed accounts of individual metaheuristics, see, e.g., van Laarhoven and Aarts (1987) and Aarts and Korst (1989) for simulated annealing, Glover and Laguna (1997) for tabu search, and Michalewicz (1996) and Goldberg (1989) for genetic algorithms. Previous surveys and tutorials about greedy randomized adaptive search procedures and their extensions and applications were authored by Feo and Resende (1995), Festa and Resende (2002; 2009a;b), Ribeiro (2002), Pitsoulis and Resende (2002), and Resende and Ribeiro (2003b; 2005a;b; 2010).
References
E.H.L. Aarts and J. Korst. Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing. Wiley, New York, 1989.
D. Bertsimas and R. Weismantel. Optimization over integers. Dynamic Ideas, Belmont, 2005.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2005.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2nd edition, 2014.
A. Cobham. The intrinsic computational difficulty of functions. In Y. Bar-Hillel, editor, Proceedings of the 1964 International Congress for Logical Methodology and Philosophy of Science, pages 24–30, Amsterdam, 1964. North Holland.
S.A. Cook. The complexity of theorem-proving procedures. In M.A. Harrison, R.B. Banerji, and J.D. Ullman, editors, Proceedings of the Third Annual ACM Symposium on Theory of Computing, pages 151–158, New York, 1971. ACM.
T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, Cambridge, 3rd edition, 2009.
J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449–467, 1965.
J. Edmonds. Minimum partition of a matroid in independent subsets. Journal of Research, National Bureau of Standards, 69B:67–72, 1975.
T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6:109–133, 1995.
P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 325–367. Kluwer Academic Publishers, Boston, 2002.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part I: Algorithms. International Transactions in Operational Research, 16:1–24, 2009a.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part II: Applications. International Transactions in Operational Research, 16:131–172, 2009b.
M.R. Garey and D.S. Johnson. Approximation algorithms for combinatorial problems: An annotated bibliography. In J.F. Traub, editor, Algorithms and complexity: New directions and recent results, pages 41–52. Academic Press, Orlando, 1976.
M.R. Garey and D.S. Johnson. Strong NP-completeness results: Motivation, examples, and implications. Journal of the ACM, 25:499–508, 1978.
M.R. Garey and D.S. Johnson. Computers and intractability. Freeman, San Francisco, 1979.
M. Gendreau and J.-Y. Potvin, editors. Handbook of metaheuristics. Springer, New York, 2nd edition, 2010.
F. Glover and G. Kochenberger, editors. Handbook of metaheuristics. Kluwer Academic Publishers, Boston, 2003.
F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, Boston, 1997.
D.E. Goldberg. Genetic algorithms in search, optimization and machine learning. Addison-Wesley, Reading, 1989.
M. Grötschel, L. Lovász, and A. Schrijver. Polynomial algorithms for perfect graphs. Annals of Discrete Mathematics, 21:325–356, 1984.
H.H. Hoos and T. Stützle. Stochastic local search: Foundations and applications. Elsevier, New York, 2005.
D.S. Johnson. Near-optimal bin-packing algorithms. PhD thesis, Massachusetts Institute of Technology, Cambridge, 1973.
D.S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9:256–278, 1974.
R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of computer computations. Plenum Press, New York, 1972.
R.M. Karp. On the computational complexity of combinatorial problems. Networks, 5:45–68, 1975.
M.R. Krom. The decision problem for a class of first-order formulas in which all disjunctions are binary. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 13:15–20, 1967.
K. Kuratowski. Sur le problème des courbes gauches en topologie. Fundamenta Mathematicae, 15:271–283, 1930.
S. Martello and P. Toth. Knapsack problems: Algorithms and computer implementations. John Wiley & Sons, New York, 1990.
Z. Michalewicz. Genetic algorithms + Data structures = Evolution programs. Springer, Berlin, 1996.
W. Michelis, E.H.L. Aarts, and J. Korst. Theoretical aspects of local search. Springer, Berlin, 2007.
G.L. Nemhauser and L.A. Wolsey. Integer and combinatorial optimization. Wiley, New York, 1988.
N.J. Nilsson. Problem-solving methods in artificial intelligence. McGraw-Hill, New York, 1971.
N.J. Nilsson. Principles of artificial intelligence. Springer, Berlin, 1982.
C.H. Papadimitriou. Computational complexity. Addison-Wesley, Reading, 1994.
C.H. Papadimitriou and K. Steiglitz. Combinatorial optimization: Algorithms and complexity. Prentice Hall, Englewood Cliffs, 1982.
J. Pearl. Heuristics: Intelligent search strategies for computer problem solving. Addison-Wesley, Reading, 1985.
L.S. Pitsoulis and M.G.C. Resende. Greedy randomized adaptive search procedures. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 168–183. Oxford University Press, New York, 2002.
C.R. Reeves. Modern heuristic techniques for combinatorial problems. Blackwell, London, 1993.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 219–249. Kluwer Academic Publishers, Boston, 2003b.
M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 29–63. Springer, New York, 2005a.
M.G.C. Resende and C.C. Ribeiro. Parallel greedy randomized adaptive search procedures. In E. Alba, editor, Parallel metaheuristics: A new class of algorithms, pages 315–346. Wiley-Interscience, Hoboken, 2005b.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures: Advances and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 293–319. Springer, New York, 2nd edition, 2010.
C.C. Ribeiro. GRASP: Une métaheuristique gloutonne et probabiliste. In J. Teghem and M. Pirlot, editors, Optimisation approchée en recherche opérationnelle, pages 153–176. Hermès, Paris, 2002.
A. Schrijver. Theory of linear and integer programming. Wiley, New York, 1986.
P.J.M. van Laarhoven and E. Aarts. Simulated annealing: Theory and applications. Kluwer Academic Publishers, Boston, 1987.
V.V. Vazirani. Approximation algorithms. Springer, Berlin, 2001.
D.P. Williamson and D.B. Shmoys. The design of approximation algorithms. Cambridge University Press, New York, 2011.
L.A. Wolsey. Integer programming. Wiley, New York, 1998.
M. Yannakakis. Computational complexity. In E.H.L. Aarts and J.K. Lenstra, editors, Local search in combinatorial optimization, chapter 2, pages 19–55. Wiley, Chichester, 2007.
Mauricio G.C. Resende and Celso C. Ribeiro, Optimization by GRASP, DOI 10.1007/978-1-4939-6530-4_3
# 3. Solution construction and greedy algorithms
This chapter addresses the construction of feasible solutions. We begin by considering greedy algorithms and show their relationship with matroids. We then consider adaptive greedy algorithms, a generalization of greedy algorithms. Next, we present semi-greedy algorithms, obtained by randomizing adaptive greedy algorithms. The chapter concludes with a discussion of solution repair procedures.
## 3.1 Greedy algorithms
As we saw in Chapter , a feasible solution S of a combinatorial optimization problem is a subset of the elements of the ground set E = {1, ..., n}. Since certain subsets of ground set elements can lead to infeasibilities, by definition a feasible solution cannot contain any such subset. If c_i denotes the contribution of ground set element i ∈ E to the objective function value, then we assume in this discussion that the objective function value of a solution S is f(S) = ∑_{i ∈ S} c_i.
Many algorithms for combinatorial optimization problems build a solution incrementally from scratch, where at each step, a single ground set element is added to the partial solution under construction. A ground set element to be added at each step cannot be such that its combination with one or more previously added elements leads to an infeasibility. We call such an element feasible and denote by C the set of all feasible elements at the time a given step is performed. Since the set C of candidate elements may contain more than one element, an algorithm designed to build a feasible solution for some problem must have a mechanism to select the next feasible ground set element of C to be added to the partial solution under construction.
From among all yet unselected feasible elements, a greedy algorithm for minimization always chooses one of least cost. Figure 3.1 shows the pseudo-code of a greedy algorithm. The solution S to be constructed and its cost f(S) are initialized to ∅ and 0, respectively, in lines 1 and 2. In line 3, the set C of candidate elements is initialized with all feasible ground set elements. The construction of the solution is done in the while loop in lines 4 to 9, ending when C becomes empty. In line 5, the feasible ground set element i* having least cost is selected. Then, in lines 6 and 7, respectively, the solution under construction and its cost are updated to account for the inclusion of i* in the solution. In line 8, the set C is updated, taking into account that i* is now part of solution S. Solution S and its cost are returned in line 10.
Fig. 3.1
Pseudo-code of a greedy algorithm for a minimization problem.
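The pseudo-code of Figure 3.1 can be sketched in Python as follows; the feasibility oracle `keep_feasible` and the toy instance in the usage note are hypothetical, introduced only for illustration:

```python
def greedy(ground_set, cost, keep_feasible):
    """Generic greedy minimization mirroring Figure 3.1.

    keep_feasible(C, S, i_star) is a hypothetical oracle returning the
    candidates in C that remain feasible after i_star joins the partial
    solution S.
    """
    S, f = set(), 0                      # lines 1-2: empty solution, zero cost
    C = set(ground_set)                  # line 3: all feasible ground set elements
    while C:                             # lines 4-9
        i_star = min(C, key=cost)        # line 5: least-cost feasible element
        S.add(i_star)                    # line 6: add it to the solution
        f += cost(i_star)                # line 7: update the solution cost
        C = keep_feasible(C - {i_star}, S, i_star)  # line 8: prune candidates
    return S, f                          # line 10
```

For instance, with element costs {1: 5, 2: 1, 3: 4, 4: 2, 5: 3} and the (artificial) feasibility rule that at most three elements may be selected, the sketch returns S = {2, 4, 5} with f(S) = 6.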
The algorithm shown in Figure 3.1 is devised for minimization problems. For the case of maximization, the argmin operator in line 5 of the pseudo-code is simply replaced by argmax, which selects a candidate element of maximum cost. We next show examples of greedy algorithms for some combinatorial optimization problems.
Minimum spanning tree problem - Greedy algorithm
Recall from Chapter that in the minimum spanning tree problem, the ground set is the set of edges U and d_ij is the length of edge (i, j) ∈ U. A greedy algorithm for the minimum spanning tree problem is shown in Figure 3.2. The solution S to be constructed and its cost f(S) are initialized to ∅ and 0, respectively, in lines 1 and 2. The set of feasible ground set elements is initialized in line 3 with all the edges in U. A feasible edge of least length is selected in line 5 and added to the spanning tree in line 6, with the length of the partial tree being updated in line 7. All edges whose inclusion in the current solution would create a cycle (i.e., an infeasibility) are removed from the set C of feasible candidate elements in line 8. The solution S and its cost are returned in line 10.
Fig. 3.2
Pseudo-code of Kruskal's greedy algorithm for the minimum spanning tree problem.
This greedy algorithm for the minimum spanning tree problem is known as Kruskal's algorithm. As we shall see later in this chapter, this algorithm will always produce an optimal solution for this problem. ■
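Kruskal's algorithm is commonly implemented with a union-find structure for the cycle test of line 8; the following Python sketch, with a hypothetical edge-list input format, illustrates this:

```python
def kruskal(n, edges):
    """Kruskal's greedy algorithm; edges is a list of (length, u, v)
    tuples over nodes 0, ..., n-1 (hypothetical input format)."""
    parent = list(range(n))

    def find(x):                          # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, total = [], 0
    for d, u, v in sorted(edges):         # least-length feasible edge first
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two components: no cycle
            parent[ru] = rv
            tree.append((u, v))
            total += d
    return tree, total
```

Sorting the edges once up front and skipping cycle-creating edges is equivalent to maintaining the candidate set C explicitly, since edge costs never change.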
Knapsack problem - Greedy algorithm
As we saw in Chapter , the ground set for the knapsack problem consists of the set I of items to be packed. Each item i ∈ I has weight a_i and utility c_i. The knapsack can accommodate a maximum weight of b. We assume, without loss of generality, that a_i ≤ b, for all i ∈ I. A greedy algorithm for the knapsack problem is shown in Figure 3.3. The solution S to be constructed and its cost f(S) are initialized to ∅ and 0, respectively, in lines 1 and 2. The set of feasible ground set elements is initialized in line 3 with all items in I. A feasible item of greatest utility per unit weight is selected in line 5 and added to the knapsack in line 6, with the total utility of the partial solution being updated in line 7. All items whose inclusion in the current solution would overflow the knapsack (i.e., create an infeasibility) are removed from the set C of feasible ground set elements in line 8. The solution S and its cost are returned in line 10.
Fig. 3.3
Pseudo-code of a greedy algorithm for the knapsack problem.
As opposed to the greedy algorithm for the minimum spanning tree problem, this greedy algorithm for the knapsack problem will not always find an optimal solution. Consider the following counter-example with three items, where the item weight vector is a = (3, 2, 2), the item utility vector is c = (12, 7, 6), and the knapsack weight capacity is b = 4. The utility per unit weight of each item is c_1/a_1 = 12/3 = 4, c_2/a_2 = 7/2 = 3.5, and c_3/a_3 = 6/2 = 3. Consequently, the greedy algorithm considers the items in the order 1, 2, 3. Since the weight of item 1 is 3, it is included in the solution. Since items 2 and 3 both have weight 2, neither can be included in the solution together with item 1. Therefore, the total utility of the greedy solution is 12. However, note that a solution consisting of items 2 and 3, but not item 1, is feasible (since its total weight is 4, which equals the capacity of the knapsack) and its utility is 13, which is greater than 12, the utility of the greedy solution. ■
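The counter-example above can be checked directly with a small Python sketch of the greedy knapsack algorithm (the list-based input format is hypothetical; items are 0-indexed in the code):

```python
def greedy_knapsack(weights, utils, capacity):
    """Greedy knapsack by utility per unit weight; as the counter-example
    in the text shows, it may miss the optimal packing."""
    order = sorted(range(len(weights)),
                   key=lambda i: utils[i] / weights[i], reverse=True)
    chosen, load, value = [], 0, 0
    for i in order:
        if load + weights[i] <= capacity:   # item still fits in the knapsack
            chosen.append(i)
            load += weights[i]
            value += utils[i]
    return chosen, value
```

On the instance a = (3, 2, 2), c = (12, 7, 6), b = 4, the sketch packs only the first item, for a total utility of 12, while the optimal packing of the two lighter items has utility 13.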
Steiner tree problem in graphs - Greedy algorithm
We describe here the distance network heuristic for the Steiner tree problem in graphs. Recall that, in this problem, we are given a graph G = (V, U), where the node set is V = {1, 2, ..., n} and the edge set U is formed by unordered pairs of points i, j ∈ V, with i ≠ j. There is a length d_ij associated with each edge (i, j) ∈ U and a subset T ⊆ V of terminal nodes that have to be connected by a minimum length subtree of G.
The distance network heuristic is based on the construction of the complete graph of distances D(G), whose node set is formed exclusively by the terminal nodes T of the original graph G. The length of the edge (i, j) of graph D(G) is equal to the length of the shortest path between i and j in graph G, for every i, j ∈ T, with i ≠ j. Therefore, each edge of the complete graph D(G) may be seen as a "super-edge" representing the shortest path between its extremities in G.
Once the graph of distances D(G) has been built, the distance network heuristic basically consists of the computation of a minimum spanning tree of D(G). This is followed by the replacement of each super-edge of the minimum spanning tree by the edges in the shortest path between its extremities in the original graph. We observe that the shortest paths in G associated with the super-edges of the graph of distances are not necessarily disjoint. Therefore, since the same edge of the original graph may appear in more than one shortest path, it is possible that the length of the Steiner tree in the original graph is smaller than that of the minimum spanning tree of D(G).
The pseudo-code for the main building blocks of the distance network heuristic for the Steiner tree problem in graphs is given in Figure 3.4. Algorithm GREEDY-SPG is indeed a greedy algorithm whenever Kruskal's greedy algorithm is used to compute the minimum spanning tree in line 5.
Fig. 3.4
Pseudo-code of the greedy distance network heuristic for the Steiner tree problem in graphs.
Figure 3.5(a) displays an instance of the Steiner tree problem in graphs, whose optimal solution appears in Figure 3.5(b). Figure 3.6 shows the application of the GREEDY-SPG distance network heuristic to this instance. The shortest path from terminal 1 to 2 corresponds to the node sequence 1 - 5 - 2 and has length 2. The selected alternative path from terminal 1 to 3 is given by the node sequence 1 - 6 - 3 and has length 4. The shortest path from terminal 1 to 4 is formed by nodes 1 - 5 - 7 - 9 - 4 and also has length 4. Similarly, the shortest path from node 2 to 3 is formed by nodes 2 - 5 - 7 - 9 - 3 and has the same length 4. The selected alternative path from terminal 2 to 4 is given by the node sequence 2 - 8 - 4 and has length 4. The shortest path from terminal 3 to 4 corresponds to the node sequence 3 - 9 - 4 and has length 2. The graph of distances D(G) appears in Figure 3.6(a). Figure 3.6(b) depicts a minimum spanning tree of D(G). Finally, Figure 3.6(c) shows the Steiner tree for the original graph recovered from the minimum spanning tree computed for D(G). We observe that the solution found by the heuristic has length 8. Therefore, it is not optimal. ■
Fig. 3.5
An instance of the Steiner problem in graphs (a) and a minimum Steiner tree with total length 6 (b).
Fig. 3.6
Application of the GREEDY-SPG distance network heuristic: (a) graph of distances D(G), (b) minimum spanning tree of D(G) with length 8, and (c) Steiner tree with length 8, which is not optimal.
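The building blocks of the distance network heuristic (shortest paths between terminals, a minimum spanning tree of D(G), and the expansion of super-edges) can be sketched in Python as follows; the adjacency-list encoding and the small test instance in the note are hypothetical, not the instance of Figure 3.5:

```python
import heapq

def dijkstra(adj, s):
    """Shortest path lengths and predecessors from s; adj: {u: [(v, d), ...]}."""
    dist = {u: float('inf') for u in adj}
    pred = {s: None}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, pred

def distance_network_heuristic(adj, terminals):
    """GREEDY-SPG sketch: MST of the terminal distance graph D(G),
    then expansion of each super-edge into its shortest path in G.
    terminals must be a list; the graph is assumed connected."""
    sp = {t: dijkstra(adj, t) for t in terminals}
    # super-edges of D(G), one per terminal pair, sorted by length
    super_edges = sorted((sp[a][0][b], a, b)
                         for i, a in enumerate(terminals)
                         for b in terminals[i + 1:])
    comp = {t: t for t in terminals}          # Kruskal's algorithm on D(G)
    def find(x):
        while comp[x] != x:
            x = comp[x]
        return x
    steiner_edges = set()
    for d, a, b in super_edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            comp[ra] = rb
            v = b                              # expand super-edge (a, b)
            while sp[a][1][v] is not None:
                u = sp[a][1][v]
                steiner_edges.add(frozenset((u, v)))
                v = u
    weight = {frozenset((u, v)): w for u in adj for v, w in adj[u]}
    return steiner_edges, sum(weight[e] for e in steiner_edges)
```

On a hypothetical unit-length cycle 0 - 1 - 2 - 3 - 4 - 5 - 0 with terminals {0, 2, 4}, all terminal distances equal 2, and any two expanded super-edges yield a Steiner tree of length 4.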
## 3.2 Matroids
We stated earlier in this chapter that the greedy algorithm for the minimum spanning tree problem always produces an optimal solution. In this section, we make use of the theory of matroids to show that this is indeed the case. This theory is useful in showing many situations in which a greedy algorithm does find an optimal solution.
A matroid M = (E, I) is a combinatorial structure defined by a finite nonempty set E of elements and a nonempty family I of independent subsets of E, such that:
Property 3.1.
I is hereditary, i.e., ∅ ∈ I and all proper subsets of a set I′ ∈ I are also in I.
Property 3.2.
If I′ and I″ are sets in I and |I′| < |I″|, then there exists an element e ∈ I″ ∖ I′ such that I′ ∪ {e} ∈ I.
As an example of a matroid, consider the graphic matroid M = (E, I) defined on the graph G = (V, U), where V is the set of nodes and U the set of edges of G. In this matroid, E is defined to be the set U of edges and I is such that if U′ ⊆ U, then U′ ∈ I if and only if the graph induced in G by U′ is acyclic.
To see that the graphic matroid is indeed a matroid, we must verify Properties 3.1 and 3.2 above. Property 3.1 clearly holds, since a graph with an empty set of edges is acyclic and all subgraphs of an acyclic graph are also acyclic. Suppose now that I′ and I″ are the edge sets of two forests in G such that |I′| < |I″|. To verify Property 3.2, we must show that there is some edge e ∈ I″ ∖ I′ such that the graph induced in G by the edge set I′ ∪ {e} is acyclic. Since the number of trees in any forest induced in G by an edge set I can be proved by induction to be equal to |V| − |I|, then the forest induced by I′ has more trees than the one induced by I″. Consequently, there must exist some edge e ∈ I″ having its extremities in two disjoint trees of the forest induced by I′. Therefore, e ∈ I″ ∖ I′. Since adding an edge between two disjoint trees does not create a cycle, then I′ ∪ {e} ∈ I.
An important property of a matroid is that all its maximal independent subsets are of the same size. In the graphic matroid, a maximal independent subset is the largest edge set that induces an acyclic graph in G, i.e., the | V | − 1 edges of any spanning tree of G.
A weighted matroid is a matroid M = (E, I) with a weight function w : E → ℝ₊ that assigns a positive weight w(x) to each element x ∈ E. This weight function is also defined for any subset U′ ⊆ E as w(U′) = ∑_{x ∈ U′} w(x). In a graphic matroid, if w(e) denotes the weight of edge e ∈ U and U′ ⊆ U is a subset of the edges of G, then w(U′) denotes the total weight of the edges in set U′. A natural optimization problem on a weighted matroid is to find a maximum-weight independent set. Since w(x) > 0 for all x ∈ E, then any maximum-weight independent set is maximal. A minimum-weight spanning tree of the graph G = (V, U) is simply a maximum-weight independent set on the weighted graphic matroid M with weights w′(e) = w₀ − w(e), where w₀ > max_{e ∈ U} w(e) and w(e) is the weight of edge e ∈ U.
A greedy algorithm for finding a maximum-weight independent set of any weighted matroid M = (E, I) is shown in Figure 3.7. At each iteration, the algorithm adds to the solution the element of maximum weight that maintains independence within the solution. In case no element can be added to the initial solution while maintaining independence, the algorithm returns S = ∅, which by definition is independent. Otherwise, line 8 guarantees that only elements that maintain feasibility are considered for inclusion in the solution under construction. Therefore, this algorithm returns in line 10 an independent subset S of E.
Fig. 3.7
Pseudo-code of a greedy algorithm for finding a maximum-weight independent set of a weighted matroid.
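The matroid greedy algorithm can be sketched in Python with the independence test supplied as an oracle function; shown below is the graphic matroid's oracle (acyclicity, tested with union-find). Names and input format are hypothetical:

```python
def matroid_greedy(elements, weight, independent):
    """Greedy maximum-weight independent set (sketch of Figure 3.7):
    scan elements by nonincreasing weight, keeping each element whose
    addition preserves independence."""
    S = set()
    for x in sorted(elements, key=weight, reverse=True):
        if independent(S | {x}):
            S.add(x)
    return S

def acyclic(edge_set):
    """Independence oracle of the graphic matroid: True if the edge set
    induces no cycle (union-find over the edge endpoints)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edge_set:
        ru, rv = find(u), find(v)
        if ru == rv:            # both endpoints already connected: cycle
            return False
        parent[ru] = rv
    return True
```

On a weighted triangle with edge weights 3, 2, and 1, the sketch keeps the two heaviest edges and rejects the third, i.e., it returns a maximum-weight spanning tree.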
We now state some properties of weighted matroids and of the greedy algorithm for finding a maximum-weight independent set of a weighted matroid M that show the correctness of the algorithm.
Property 3.3.
Let i* ∈ E be the first element selected in line 5 of the greedy algorithm in Figure 3.7. Then, there exists an optimal subset U′ ∈ I such that i* ∈ U′.
Property 3.4.
If x ∈ E and {x} ∉ I, then x ∉ U′, for any independent subset U′ ∈ I.
Property 3.5.
Let i* be the first element selected in line 5 of the greedy algorithm in Figure 3.7. The remaining problem reduces to finding a maximum-weight independent set of the weighted matroid M′ = (E′, I′), where E′ = {x ∈ E : {i*, x} ∈ I} and I′ = {U ⊆ E ∖ {i*} : U ∪ {i*} ∈ I}, and the weight function for M′ is the same as that for M, although restricted to E′.
Property 3.3 says that once an element is added to S, it will never be removed from S later. This is so because there is some optimal subset U′ that contains that element. Property 3.4 tells us that if an element is initially disregarded from S, then we can definitely discard it, since it will never be added to solution S later. Finally, Property 3.5 states that once an element is added to S, the problem reduces to the same problem on a reduced weighted matroid.
Summarizing, Properties 3.1 to 3.5 ensure that the greedy algorithm will always find an optimal solution whenever it is applied to a combinatorial optimization problem defined over a weighted matroid. For this reason, a greedy algorithm for the minimum spanning tree problem will always produce an optimal solution, while a greedy algorithm applied to the knapsack problem will not necessarily find an optimum.
## 3.3 Adaptive greedy algorithms
The greedy algorithm of Figure 3.1, as well as the other greedy algorithms described in the previous section, selects an element i* of the set of feasible candidate elements C as

i* = argmin{c_i : i ∈ C},

where c_i is the cost associated with the inclusion of element i ∈ C in the solution. In all of these algorithms, this cost is constant. Therefore, the elements can be sorted in increasing order of their costs in a preprocessing step. The while loop in lines 4 to 9 of the algorithm in Figure 3.1 will then scan the elements in this order. Upon considering the k-th element, if its inclusion in the solution causes an infeasibility, then it is discarded for good and the next element is considered. Otherwise, it is added to the solution and the next element is considered.
Although the greedy algorithm described in Figure 3.1 is applicable in many situations, such as the minimum spanning tree problem and the knapsack problem (for which, as we saw, it does not always find an optimal solution), there are other situations in which the cost of the contribution of an element is affected by the choices of elements previously made by the algorithm. Algorithms that update these costs as the construction progresses are called adaptive greedy algorithms.
Figure 3.8 shows the pseudo-code of an adaptive greedy algorithm for a minimization problem. As before, a solution S is constructed, one element at a time. This solution and its cost are initialized in lines 1 and 2, respectively. The initial set C of feasible elements from the ground set E is determined in line 3. The greedy choice function g(i) measures the suitability of element i to be included in the partial solution S, for all i ∈ C. The values of the greedy choice function are initialized in line 4. The while loop in lines 5 to 11 constructs the solution. In line 6, a candidate element with minimum greedy choice function value is selected. This element is included in the solution in line 7. The value of the cost function is updated in line 8. The set of feasible candidate elements is updated in line 9 and the values of the greedy choice function are updated in line 10, for all remaining feasible candidate elements. Construction ends when there are no more feasible candidate elements to be added, i.e., when C = ∅. Solution S and its cost are returned in line 12.
Fig. 3.8
Pseudo-code of a generic adaptive greedy algorithm for a minimization problem.
We next give examples of adaptive greedy algorithms for five combinatorial optimization problems: the minimum spanning tree problem, the set covering problem, the maximum clique problem, the traveling salesman problem, and the Steiner tree problem in graphs.
Minimum spanning tree problem – Adaptive greedy algorithm
In Section 3.1 we saw a greedy algorithm for the minimum spanning tree problem. The first example of an adaptive greedy algorithm is one for the same problem. As before, we are given a graph G = (V, U), where V is the set of nodes and U is the set of weighted edges. Let d_ij be the weight of edge (i, j) ∈ U.
An adaptive greedy approach for this problem is to grow the set of spanned nodes of the tree, at each step adding a new edge of least weight among all edges with exactly one endpoint in the set of already spanned nodes. The other endpoint of this edge is then added to the set of spanned nodes. This is repeated until all nodes are spanned. Figure 3.9 shows the pseudo-code of this adaptive greedy algorithm for the minimum spanning tree problem. In lines 1 and 2, respectively, the set S of edges in the spanning tree and its weight f(S) are initialized. The set of nodes yet to be spanned is initialized in line 3. The loop in lines 4 to 7 initializes, for every node i, the greedy choice function g(i) in line 5 and, in line 6, the pointer π(i) to the other endpoint of the minimum weight edge connecting node i to the current set of spanned nodes. In line 8, a yet unspanned node j is chosen to be placed in the set of spanned nodes and, in line 9, its greedy choice function value is set to 0. The main loop of the algorithm, in lines 10 to 24, is repeated until the set of nodes yet to be spanned becomes empty. In line 11, node j is added to the set of spanned nodes (in fact, it is removed from the set of nodes yet to be spanned). Node i is set in line 12 to be the already spanned node closest to j. In the first iteration of the loop, i is 0 and, consequently, lines 14 and 15 are not executed. For all other iterations, edge (i, j) is added to the spanning tree S in line 14 and the partial cost f(S) of the spanning tree is updated in line 15. Lines 17 to 22 update the greedy choice function g(k) and the pointer π(k) for each node k adjacent to j that has not yet been spanned. In line 23, a yet unspanned node j with minimum greedy choice function value is chosen to become spanned. The minimum-weight spanning tree S and its weight f(S) are returned in line 25.
Fig. 3.9
Pseudo-code of Prim's adaptive greedy algorithm for the minimum spanning tree problem.
This adaptive greedy algorithm for the minimum spanning tree problem is known as Prim's algorithm. ■
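Prim's algorithm is often implemented with a priority queue of candidate edges instead of the explicit arrays g and π of Figure 3.9; both variants make the same greedy choice at each step. A Python sketch under a hypothetical adjacency-list format:

```python
import heapq

def prim(adj, root):
    """Prim's adaptive greedy MST; adj: {node: [(neighbor, weight), ...]}.
    Keeps a heap of edges leaving the tree rather than the g/π arrays."""
    in_tree = {root}
    tree, total = [], 0
    heap = [(w, root, v) for v, w in adj[root]]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)      # lightest edge leaving the tree
        if v in in_tree:
            continue                       # stale entry: v already spanned
        in_tree.add(v)
        tree.append((u, v))
        total += w
        for x, wx in adj[v]:               # new candidate edges from v
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return tree, total
```

On a hypothetical four-node graph with edge weights ab = 1, bc = 2, cd = 1, ad = 4, and bd = 5, the sketch returns the spanning tree {ab, bc, cd} of weight 4.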
Minimum cardinality set covering problem – Adaptive greedy algorithm
Given a set I = {1, ..., m} of objects, let {P_1, ..., P_n} be a collection of finite subsets of I such that ∪_{j=1}^{n} P_j = I, with a non-negative cost c_j associated with each subset P_j, for j = 1, ..., n, and let J = {1, ..., n} index the subsets. A subset J′ ⊆ J is a cover of I if ∪_{j ∈ J′} P_j = I. The cost of a cover J′ is given by ∑_{j ∈ J′} c_j. The set covering problem consists in finding a minimum cost cover. In the minimum cardinality set covering problem, we seek a cover of minimum cardinality, which is equivalent to setting c_j = 1 in the set covering problem, for j = 1, ..., n. Let the m × n binary matrix A = {a_ij} be such that for all i ∈ I and for all j ∈ J, a_ij = 1 if and only if i ∈ P_j; a_ij = 0, otherwise. A solution J′ of the minimum cardinality set covering problem can be represented by a binary n-vector x, where x_j = 1 if and only if j ∈ J′; x_j = 0 otherwise, for j = 1, ..., n. An integer programming formulation for the minimum cardinality set covering problem is then
minimize ∑_{j=1}^{n} x_j
subject to ∑_{j=1}^{n} a_ij x_j ≥ 1, for i = 1, ..., m,
x_j ∈ {0, 1}, for j = 1, ..., n.
We say that column j of matrix A covers row i if a_ij = 1. A greedy approach to this problem is to select columns of matrix A, one at a time, such that each selected column covers the maximum number of yet-uncovered rows of A. Let g(j) be the greedy choice function that measures the number of yet-uncovered rows of A that would become covered if the still unused column j were to be added to the cover under construction. Initially, we set g(j) = ∑_{i ∈ I} a_ij, for all j = 1, ..., n. We denote by j* the first column selected by the greedy algorithm, which is the one that maximizes g(j), for j = 1, ..., n: g(j*) = max_{j ∈ J} g(j). Once column j* is placed in the partial solution under construction, every unselected column that covers a row newly covered by column j* must have its g(j) value updated, since g(j) measures the number of yet-uncovered rows that will become covered with the inclusion of column j in the solution. The adaptive greedy algorithm repeats this column selection and greedy choice function update process until all rows of A are covered.
Figure 3.10 shows the pseudo-code of an adaptive greedy algorithm for the minimum cardinality set covering problem. Lines 1 to 5 initialize the cover S, its cost f(S), the set of potential cover elements, the set of covered row indices, and the greedy choice function g(j) for each potential cover element j. The cover is constructed in the while loop in lines 6 to 17. In line 7, a column j* that maximizes the greedy choice function is selected. This column is included in the cover in line 8, while the cover's cost f(S) is updated in line 9. Element j* is removed from the set of potential cover elements in line 10. The greedy choice function is updated in the for loop in lines 11 to 16, which scans all uncovered rows that have just become covered by j*. The index of each such row is inserted in the set of covered rows in line 12. The value of the greedy choice function g(j) is updated in line 14 for each column j other than j* that also covers that row. The algorithm terminates when a cover is produced. A cover S and its cost f(S) are returned in line 18. ■
Fig. 3.10
Pseudo-code of an adaptive greedy algorithm for minimum cardinality set covering.
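A compact Python sketch of this adaptive greedy algorithm, representing each column j directly by the set of rows it covers (input format hypothetical); recomputing the intersection with the uncovered rows plays the role of the g(j) updates:

```python
def greedy_set_cover(universe, subsets):
    """Adaptive greedy for minimum cardinality set covering:
    repeatedly take the subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # greedy choice function g(j): newly covered elements of subset j
        j = max(range(len(subsets)), key=lambda j: len(subsets[j] & uncovered))
        if not subsets[j] & uncovered:
            raise ValueError("instance has no cover")
        cover.append(j)
        uncovered -= subsets[j]
    return cover
```

For instance, with universe {1, ..., 5} and subsets {1, 2, 3}, {2, 4}, {3, 5}, {4, 5}, the sketch first takes the first subset (covering three elements) and then the last one, producing a cover of cardinality 2.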
Maximum clique problem – Adaptive greedy algorithm
We now give an adaptive greedy algorithm for the maximum clique problem. Given an undirected graph G = (V, U), we recall that a clique is any subset of nodes of G that are mutually adjacent. In the maximum clique problem, we want to find a largest cardinality clique in G. An adaptive greedy algorithm for the maximum clique problem builds a clique, one node at a time. Initially, all nodes are candidates to be included in the clique. We call C the candidate set. A natural measure of suitability for a node v ∈ V to be the first node included in the clique is its degree, which is equal to the number of nodes adjacent to it. Let us denote this greedy choice function by g(v), for all v ∈ C. Once the node with maximum degree is placed in the clique, all nodes that are not adjacent to it can no longer be considered for placement in the clique. Let us redefine C as the set of remaining nodes that can be added to the current clique. The greedy choice function g(v) for all nodes v ∈ C must be updated to account for the fact that the clique now consists of the first selected node. The suitability of a node v ∈ C to be the next node included in the clique is related to the number of nodes adjacent to it in C. The adaptive greedy algorithm repeats this node selection and greedy choice function update process until the candidate list becomes empty, i.e., C = ∅.
Figure 3.11 shows the pseudo-code of an adaptive greedy algorithm for the maximum clique problem. Lines 1 to 4 initialize the clique S, its cost f(S), the set C of yet unselected potential clique elements, and, for each potential clique node v ∈ C, its initial greedy choice function value g(v), which is set to the number of nodes that are adjacent to v in G and belong to C (or, alternatively, the degree of node v with respect to the nodes in C). The clique is constructed in the while loop in lines 5 to 11. In line 6, a node v′ maximizing the greedy choice function is selected. This node is included in the clique S in line 7, while the clique's size is updated in line 8. The set of potential clique nodes is updated in line 9 and consists of all yet unselected nodes that are adjacent to all nodes in the current clique S. The greedy choice function is updated in line 10. The algorithm terminates when a maximal clique is produced, i.e., when C = ∅. The clique S found and its cardinality are returned in line 12. ■
Fig. 3.11
Pseudo-code of an adaptive greedy algorithm for maximum clique.
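A Python sketch of this adaptive greedy algorithm, with the graph given as a hypothetical adjacency-set dictionary:

```python
def greedy_clique(adj):
    """Adaptive greedy for maximum clique; adj: {v: set of neighbors}.
    Greedy choice g(v): degree of v restricted to the candidate set C."""
    C = set(adj)
    clique = set()
    while C:
        v = max(C, key=lambda u: len(adj[u] & C))  # most connected candidate
        clique.add(v)
        C = (C - {v}) & adj[v]   # candidates must be adjacent to every member
    return clique
```

On a hypothetical graph consisting of the triangle {1, 2, 3} plus a pendant node 4 attached to node 1, the sketch first picks node 1 (degree 3) and then recovers the triangle as a maximal clique.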
Traveling salesman problem – Adaptive greedy algorithm
The next example of an adaptive greedy algorithm is known as the nearest neighbor heuristic for the traveling salesman problem. We are given a graph G = (V, U), where V is the set of nodes and U is the set of weighted edges. Let d_ij be the length (or weight) of edge (i, j) ∈ U.
An adaptive greedy approach for this problem is to grow the set of visited nodes of the tour, starting from any initial node. Denote by v the last visited node of the partial tour under construction. At each step we add to the tour a nearest unvisited node adjacent to v. This is repeated until the tour visits all nodes.
Figure 3.12 shows the pseudo-code of this algorithm. In lines 1 and 2, the set S of edges in the tour and its total length f(S) are initialized. In line 3, we select any initial node i ∈ V to start the tour and save it as i_0 to be used later. The set of unvisited nodes is initialized in line 4. The main loop of the algorithm, in lines 5 to 13, is repeated until the set of unvisited nodes becomes empty. In line 6, we build the set of candidate nodes that can be added to the tour following the last added node i. For each candidate node, the greedy choice function is set in line 7. Node j′ is set to be the nearest unvisited node adjacent to i in line 8. Edge (i, j′) is added to the Hamiltonian cycle under construction in line 9 and the length f(S) of the tour is updated in line 10. The set of candidate unvisited nodes is updated in line 11 and j′ is made the last visited node in line 12. The while loop terminates when all nodes have been visited. At this point, we add a return edge connecting the last visited node i with the initial node i_0 in line 14. The length of the tour is updated in line 15 and the solution and its total length are returned in line 16.
Fig. 3.12
Pseudo-code of the nearest neighbor adaptive greedy algorithm for the traveling salesman problem.
We remark that if the graph G = (V, U) is not complete, then it is possible that at some iteration there is no edge in line 6 connecting node i with an unvisited node. Note also that if the graph is not complete, then the return edge (i, i_0) in line 14 may not exist. Therefore, a sufficient condition for this algorithm to find a feasible solution is that the graph be complete.
An example of the application of the nearest neighbor adaptive greedy algorithm of Figure 3.12 to the leftmost graph in Figure 3.13 is described in the following. The algorithm starts by selecting some node to be the start of the tour. Suppose node 1 is selected as the starting node. From node 1, the distances to nodes 2, 3, 4, and 5 are, respectively, 1, 2, 7, and 5. Since d_12 = 1 is the smallest of these distances, node 2 is selected to be the next node in the tour and the partial tour becomes 1 → 2. From node 2, the distances to nodes 3, 4, and 5 (nodes not yet in the tour) are, respectively, 3, 4, and 3. Since d_23 = d_25 = 3 is the smallest distance from node 2 to a yet unselected node, either node 3 or node 5 could be selected as the next node in the tour. Suppose node 3 is chosen. The partial tour is now 1 → 2 → 3. From node 3, the distances to nodes 4 and 5 are, respectively, 5 and 2. Since d_35 = 2 is the smallest distance from node 3 to any of the yet unselected nodes, node 5 is chosen next to be in the partial tour, which becomes 1 → 2 → 3 → 5. The only yet unselected node is node 4 and it is then selected to be the next node on the tour. Consequently, the full tour is 1 → 2 → 3 → 5 → 4 → 1. The length of the tour is d_12 + d_23 + d_35 + d_54 + d_41 = 1 + 3 + 2 + 3 + 7 = 16. It is shown in the middle graph of Figure 3.13.
Fig. 3.13
Examples of a TSP instance solved with two adaptive greedy algorithms. The leftmost graph shows all edge lengths. The one in the middle shows a tour of length 16 produced by the nearest neighbor adaptive greedy algorithm that grows the path from one end of the partial path. The rightmost graph shows the tour of length 12 produced by a variant of this adaptive greedy algorithm that grows the partial path from both of its extremities.
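The run just described can be reproduced with a short Python sketch of the nearest neighbor algorithm; the distance dictionary encodes the edge lengths of the leftmost graph of Figure 3.13 as given in the text, and ties are broken toward the lowest node index, as in the example:

```python
def nearest_neighbor_tour(dist, start):
    """Nearest neighbor heuristic on a complete graph; dist[(i, j)] = d_ij
    for i < j (hypothetical encoding)."""
    d = lambda i, j: dist[min(i, j), max(i, j)]
    unvisited = {v for e in dist for v in e} - {start}
    tour, i, length = [start], start, 0
    while unvisited:
        # nearest unvisited node; ties broken toward the lowest index
        j = min(sorted(unvisited), key=lambda j: d(i, j))
        tour.append(j)
        length += d(i, j)
        unvisited.remove(j)
        i = j
    length += d(i, start)          # close the Hamiltonian cycle
    return tour, length
```

Starting from node 1, the sketch returns the tour 1 → 2 → 3 → 5 → 4 → 1 of length 16 computed in the text.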
The adaptive greedy algorithm of Figure 3.12 always extends the path from the last node to be added, i.e., the path is grown from only one of its extremities. If instead of considering only one extremity of the partial path we consider both, we obtain a modified adaptive greedy algorithm for the TSP. Again consider node 1 as the initial node of the path. As before, from node 1, the distances to nodes 2, 3, 4, and 5 are, respectively, 1, 2, 7, and 5. Since d_12 = 1 is the smallest of these distances, node 2 is selected to be the next node in the tour and the partial tour becomes 1 → 2. The two extremities of the path are nodes 1 and 2. As before, from node 2, the distances to nodes 3, 4, and 5 (nodes not yet in the tour) are, respectively, 3, 4, and 3. From node 1, the distances to nodes 3, 4, and 5 are, respectively, 2, 7, and 5. Since d_13 = 2 is the smallest of these lengths, node 3 is selected to be the next node in the tour. It is connected to node 1 and the partial tour becomes 3 → 1 → 2. The two extremities of the path are now nodes 3 and 2. From node 2, the distances to nodes 4 and 5 are, respectively, 4 and 3, while from node 3 the distances to nodes 4 and 5 are, respectively, 5 and 2. Since d_35 = 2 is the smallest of these lengths, node 5 is selected to be the next node in the tour. It is connected to node 3 and the partial tour becomes 5 → 3 → 1 → 2. Node 2 can now only be connected to node 4, which in turn connects to node 5 to close the tour. The final tour is 4 → 5 → 3 → 1 → 2 → 4, with corresponding length d_45 + d_53 + d_31 + d_12 + d_24 = 3 + 2 + 2 + 1 + 4 = 12 < 16. This solution improves the previous one and appears as the rightmost graph of Figure 3.13. ■
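The both-extremities variant can be sketched along the same lines (same hypothetical distance-dictionary encoding as above, with dist[(i, j)] = d_ij for i < j):

```python
def nearest_neighbor_both_ends(dist, start):
    """Variant of the nearest neighbor heuristic that may extend the
    partial path from either of its two extremities."""
    d = lambda i, j: dist[min(i, j), max(i, j)]
    path, length = [start], 0
    unvisited = {v for e in dist for v in e} - {start}
    while unvisited:
        # best attachment over both path extremities
        end, j = min(((e, j) for e in (path[0], path[-1])
                      for j in sorted(unvisited)),
                     key=lambda p: d(*p))
        length += d(end, j)
        unvisited.remove(j)
        if end == path[-1]:
            path.append(j)       # extend at the tail extremity
        else:
            path.insert(0, j)    # extend at the head extremity
    length += d(path[0], path[-1])   # close the tour
    return path, length
```

On the instance of Figure 3.13 and starting from node 1, the sketch recovers the tour 4 → 5 → 3 → 1 → 2 → 4 of length 12 obtained in the text.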
We observe that even if the graph is complete, the nearest neighbor adaptive greedy algorithm may still find a very bad solution for some instances. Consider the graph on the left side of Figure 3.14, where M > 3 is an arbitrarily large number. If the nearest neighbor adaptive greedy algorithm starts from node 1, it produces a tour of length M + 4 containing all the edges of length 1 and edge (1, 2) of length M. This tour is shown in red on the graph on the left of the figure. It can be arbitrarily longer than the optimal tour of length 7, which is shown on the right of the figure.
Fig. 3.14
Example of an instance of the traveling salesman problem on which, assuming M > 3, the nearest neighbor adaptive greedy algorithm always produces a tour of length M + 4 that may be arbitrarily bad as M grows, since there exists an optimal tour of length 7 shown in red on the graph on the right of the figure.
Steiner tree problem in graphs – Adaptive greedy algorithm
Once again, recall that we are given a graph G = (V, U), a length d ij associated with each edge (i, j) ∈ U, and a subset T ⊆ V of terminal nodes that have to be connected by a minimum length subtree of G. The adaptive greedy heuristic for the Steiner tree problem in graphs may be seen as an extension of Prim's algorithm for the minimum spanning tree problem presented earlier in this section. At each iteration, the closest yet unconnected terminal node is connected to the current partial tree by a shortest path.
A pseudo-code for this heuristic is given in Figure 3.15. The algorithm starts in line 1 from any randomly selected terminal node s ∈ T, which is used to initialize the Steiner tree in line 2. The set of terminal nodes already connected by the Steiner tree is initialized with terminal s in line 3. Next, in line 4, the algorithm computes the shortest path SP(i, S) from each yet unconnected terminal i ∈ T ∖ M to node s. The loop in lines 5 to 10 connects one new terminal in each iteration, until a Steiner tree connecting all terminals is built. The closest terminal s to the current partial tree is selected in line 6. The set of connected terminal nodes is expanded by node s in line 7 and all nodes and edges in the path SP(s, S) are added to the Steiner tree S in line 8. In line 9, the shortest path SP(i, S) from each yet unconnected terminal i ∈ T ∖ M to the updated tree S is recomputed. The Steiner tree S is returned in line 11.
Fig. 3.15
Pseudo-code of the adaptive greedy heuristic for the Steiner tree problem in graphs.
Figure 3.16 illustrates the application of the ADAPTIVE-GREEDY-SPG heuristic to the same instance in Figure 3.5. The Steiner tree is initialized in Figure 3.16(a) with terminal node 1 that has been randomly selected from the set of terminals. The length of the shortest path from terminal 2 to node 1 is two, while that from terminal 3 is four and that from terminal 4 is also four. Since terminal 2 is the closest to node 1, it is the next to be connected and the path 1 - 5 - 2 is added to the partial Steiner tree in Figure 3.16(b). The shortest paths from terminals 3 and 4 to the partial Steiner tree are updated. The new lengths of the shortest paths from either terminal 3 or 4 to the partial Steiner tree become equal to three and any of these terminals may be selected at the next iteration. Suppose that terminal 4 is selected to be added to the set of connected terminals. The path 4 - 9 - 7 - 5 is incorporated into the tree in Figure 3.16(c). Terminal 3 is the last to be connected to the previously selected terminal nodes and the path 3 - 9 is added to complete a Steiner tree in Figure 3.16(d). ■
Fig. 3.16
Application of the ADAPTIVE-GREEDY-SPG: (a) terminal node 1 is added to the tree that is initially empty, (b) terminal 2 is the next to be connected, since it is the closest to terminal node 1, (c) terminal 4 is connected to the tree, and (d) terminal node 3 is the last to be connected forming a Steiner tree with length 6, which is optimal.
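The heuristic can be sketched in Python using a multi-source Dijkstra search to find a shortest path from the partial tree to each unconnected terminal. The unit-length instance below is a hypothetical reduction consistent with the worked example; the actual graph of Figure 3.5 may contain additional nodes and edges.

```python
import heapq

# Hypothetical unit-length instance consistent with the worked example;
# terminals are 1, 2, 3, 4 and nodes 5, 7, 9 are optional.
EDGES = [(1, 5), (5, 2), (5, 7), (7, 9), (9, 4), (9, 3)]
ADJ = {}
for u, v in EDGES:
    ADJ.setdefault(u, []).append((v, 1))
    ADJ.setdefault(v, []).append((u, 1))

def shortest_path_to_tree(tree_nodes, target):
    """Multi-source Dijkstra from all nodes of the partial tree; returns
    (length, path) of a shortest path from the tree to `target`."""
    heap = [(0, s, [s]) for s in tree_nodes]
    heapq.heapify(heap)
    seen = set()
    while heap:
        dist, u, path = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == target:
            return dist, path
        for v, w in ADJ[u]:
            if v not in seen:
                heapq.heappush(heap, (dist + w, v, path + [v]))
    raise ValueError("target not reachable")

def adaptive_greedy_spg(terminals):
    tree_nodes, connected, tree_edges = {terminals[0]}, {terminals[0]}, []
    while connected != set(terminals):
        # Connect the yet unconnected terminal closest to the partial tree.
        _, path, t = min((shortest_path_to_tree(tree_nodes, t) + (t,)
                          for t in terminals if t not in connected),
                         key=lambda x: x[0])
        tree_edges += list(zip(path, path[1:]))
        tree_nodes |= set(path)
        connected.add(t)
    return tree_edges

tree = adaptive_greedy_spg([1, 2, 3, 4])
print(tree)  # six unit-length edges: a Steiner tree of length 6
```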
## 3.4 Semi-greedy algorithms
Consider the graph shown in Figure 3.17 and suppose we wish to find a shortest Hamiltonian cycle in this graph applying the nearest neighbor adaptive greedy algorithm presented in Figure 3.12. The algorithm starts from any node and repeatedly moves from the current node to its nearest unvisited node. Suppose the algorithm were to start from node 1, in which case it should move next to either node 2 or 3. If it moves to node 2, then it must necessarily move next to node 3 and then to node 4. Since there is no edge connecting node 4 to node 1, the algorithm will fail to find a tour. By symmetry, the same situation occurs if it were to start from node 4. Now suppose the algorithm starts from node 2. Node 3 is the nearest to node 2 and from node 3 it can move either to node 1 or node 4, failing in either case to find a tour. Again, by symmetry, the same situation occurs if one were to start from node 3. Therefore, this adaptive greedy algorithm fails to find a tour, no matter which node it starts from.
Fig. 3.17
Example of a traveling salesman problem instance for which the nearest neighbor adaptive greedy algorithm fails to find an optimal solution, while a semi-greedy algorithm succeeds.
Now, consider the following randomized version of the same adaptive greedy algorithm. This randomized variant starts from any node and repeatedly moves, with equal probability, to one of its two nearest unvisited nodes. Starting from node 1, it then moves to either node 2 or node 3 with equal probability. Suppose it were to move to node 2. Now, again with equal probability, it moves to either node 3 or node 4. If it were to move to node 3, it would fail to find a tour. If, on the other hand, it were to move to node 4, it would then go to node 3, and then back to node 1, thus finding a tour of length 40. Therefore, there is a 50% probability that the algorithm will find a tour if it starts from node 1. By applying this algorithm repeatedly, the probability that it will eventually find the optimal cycle quickly approaches one. For example, after only ten attempts, the probability that this algorithm finds the optimal solution is over 99.9%.
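The argument can be checked numerically. The snippet below uses hypothetical edge lengths consistent with the description (there is no edge between nodes 1 and 4, edge (2, 3) is the shortest, and the tour 1 → 2 → 4 → 3 → 1 has length 40); the actual lengths are those of Figure 3.17.

```python
import random

# Hypothetical edge lengths consistent with the description: no edge
# (1, 4), edge (2, 3) is shortest, and tour 1-2-4-3-1 has length 40.
LEN = {(1, 2): 10, (1, 3): 10, (2, 3): 5, (2, 4): 8, (3, 4): 12}

def length(i, j):
    return LEN.get((min(i, j), max(i, j)))

def semi_greedy_tour(start=1):
    """Randomized nearest neighbor: move, with equal probability, to one
    of the two nearest unvisited neighbors of the current node."""
    tour = [start]
    while len(tour) < 4:
        u = tour[-1]
        candidates = sorted((v for v in (1, 2, 3, 4)
                             if v not in tour and length(u, v) is not None),
                            key=lambda v: length(u, v))
        if not candidates:
            return None               # dead end: the construction fails
        tour.append(random.choice(candidates[:2]))
    return tour if length(tour[-1], tour[0]) is not None else None

random.seed(17)
successes = sum(semi_greedy_tour() is not None for _ in range(10000))
print(successes / 10000)   # close to 0.5, the per-attempt success rate
print(1 - 0.5 ** 10)       # 0.9990234375: success within ten restarts
```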
Algorithms like the one above, which add randomization to a greedy or adaptive greedy algorithm, are called semi-greedy or randomized-greedy algorithms. Figure 3.18 shows the pseudo-code of a generic semi-greedy algorithm for a minimization problem. This pseudo-code is similar to that of the greedy algorithm in Figure 3.1, differing only in how the ground set element is chosen from the set of feasible candidate ground set elements (lines 5 and 6). In line 5, a subset of the lowest-cost candidate elements is placed in a restricted candidate list (RCL). In line 6, a ground set element is selected at random from the RCL to be incorporated into the solution in line 7. Although random selection usually assigns equal probabilities to each RCL element, probabilities proportional to the quality of each element can also be used.
Fig. 3.18
Pseudo-code of a semi-greedy algorithm for a minimization problem.
Two simple schemes to define a restricted candidate list are cardinality-based and quality-based. In the former, the k least-costly feasible candidate ground set elements are placed in the RCL. In the latter, let c min and c max be, respectively, the smallest and the largest incremental costs among the feasible candidate ground set elements. Furthermore, let α be such that 0 ≤ α ≤ 1. The RCL is formed by all feasible candidate ground set elements i satisfying c min ≤ c i ≤ c min + α(c max − c min). We observe that setting α = 0 corresponds to an implementation of a pure greedy algorithm, since a lowest-cost element will always be selected at any iteration. On the other hand, setting α = 1 leads to a completely random algorithm, since any new element may be added with equal probability at any iteration. Later in this book, we present other variants and applications of semi-greedy algorithms.
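Both schemes can be sketched in a few lines of Python; the candidate costs below are hypothetical.

```python
import random

def quality_based_rcl(costs, alpha):
    """Keep every candidate whose incremental cost is within a fraction
    alpha of the gap between the best and the worst candidate."""
    c_min, c_max = min(costs.values()), max(costs.values())
    threshold = c_min + alpha * (c_max - c_min)
    return [e for e, c in costs.items() if c <= threshold]

def cardinality_based_rcl(costs, k):
    """Keep the k least-costly candidates."""
    return sorted(costs, key=costs.get)[:k]

# Hypothetical incremental costs of five candidate ground set elements.
costs = {'a': 2, 'b': 3, 'c': 5, 'd': 8, 'e': 9}
print(quality_based_rcl(costs, 0.0))   # ['a']: the pure greedy choice
print(quality_based_rcl(costs, 1.0))   # all five: a purely random choice
print(quality_based_rcl(costs, 0.5))   # ['a', 'b', 'c']
print(random.choice(quality_based_rcl(costs, 0.5)))  # random RCL selection
```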
## 3.5 Repair procedures
Suppose that in the greedy algorithm of Figure 3.1 (or in the adaptive greedy algorithm of Figure 3.8 or even in the semi-greedy algorithm of Figure 3.18) we reach a situation in which the set of feasible candidate ground set elements is empty, but S is not yet a feasible solution. We illustrate this situation with two problems: the knapsack problem with an equality constraint and the traveling salesman problem.
Knapsack problem with an equality constraint – Infeasible construction
In the knapsack problem, we seek a solution maximizing the total utility and whose total weight is at most equal to the maximum weight capacity b of the knapsack. In the case of the knapsack problem with an equality constraint, we also seek a solution maximizing the total utility, but whose total weight is exactly equal to the capacity b of the knapsack.
A naive adaptation of the greedy algorithm for the knapsack problem, whose pseudo-code is shown in Figure 3.3, will not always find a feasible solution. Consider the following counter-example with three items, where the item weight vector is a = (3, 2, 2), the item utility vector is c = (3, 1, 1), and the knapsack capacity is b = 4. The utility per unit weight of each item is c 1∕a 1 = 3∕3 = 1 and c 2∕a 2 = c 3∕a 3 = 1∕2 = 0.5. Consequently, the greedy algorithm considers the items in the order 1, 2, 3. Since the weight of item 1 is 3, it is included in the solution. Since items 2 and 3 both have weight equal to 2, neither can be included in the solution together with item 1, because the capacity would be exceeded and the equality constraint could never be satisfied. Note, however, that if the items were to be packed in the reverse order, then items 2 and 3 would be packed and a feasible solution produced. ■
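The counter-example can be reproduced with a short sketch (function names are ours):

```python
def greedy_equality_knapsack(order, a, b):
    """Pack items in the given order, skipping any item that would make
    the total weight exceed b; the solution is feasible for the equality
    constraint only if the final total weight equals b exactly."""
    total, packed = 0, []
    for i in order:
        if total + a[i] <= b:
            total += a[i]
            packed.append(i)
    return packed, total, total == b

a = {1: 3, 2: 2, 3: 2}   # weights; the utility ratios rank item 1 first
b = 4
print(greedy_equality_knapsack([1, 2, 3], a, b))  # ([1], 3, False)
print(greedy_equality_knapsack([3, 2, 1], a, b))  # ([3, 2], 4, True)
```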
Traveling salesman problem – Infeasible construction
Consider again the graph shown in Figure 3.17, on which we wish to find a tour with minimum length by applying the nearest neighbor greedy algorithm described in Figure 3.12. If the tour were to start at node 1, then we could add either arc (1, 2) or (1, 3) without causing any infeasibility. If we chose arc (1, 2), then from node 2 we could add either arc (2, 3) or (2, 4). Since the greedy choice is to add arc (2, 3), we do so. From node 3, all ground set elements lead to infeasibility: if we were to add arc (3, 1), we would get a sub-tour; if we add arc (3, 4) we get a path that cannot be extended to form a tour. ■
We saw in Section 3.4 that one way to try to produce a feasible solution is to add randomization to the greedy algorithm, thus repeatedly applying the resulting semi-greedy algorithm until a feasible solution is produced.
Another way is through a repair procedure. A repair procedure undoes erroneous selections made by the construction procedure and attempts to correct them so that a feasible solution can be found.
A possible strategy for implementing a repair procedure consists in removing the last element added to the solution and attempting to add another feasible (but not necessarily greedy) element. In the above example for the traveling salesman problem, arc (2, 3) would be removed from the solution. The only remaining feasible element that could be added from node 2 is arc (2, 4). By doing so, we easily construct a feasible solution by then adding arcs (4, 3) and (3, 1) to the tour. An extension of this strategy consists in backtracking, if the removal of the last added element is not sufficient to recover feasibility.
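This strategy, extended with backtracking, can be sketched in Python for the example above; the edge lengths are hypothetical, since only the structure of the instance of Figure 3.17 matters here.

```python
# Hypothetical edge lengths for the instance of Figure 3.17 (no edge (1, 4)).
LEN = {(1, 2): 10, (1, 3): 10, (2, 3): 5, (2, 4): 8, (3, 4): 12}

def length(i, j):
    return LEN.get((min(i, j), max(i, j)))

def neighbors_by_distance(u, visited):
    return sorted((v for v in (1, 2, 3, 4)
                   if v not in visited and length(u, v) is not None),
                  key=lambda v: length(u, v))

def construct_with_repair(tour, start, n=4):
    """Greedy construction with repair: candidates are tried in greedy
    order, and a dead end undoes the last selection (backtracking)."""
    if len(tour) == n:
        return tour if length(tour[-1], start) is not None else None
    for v in neighbors_by_distance(tour[-1], tour):
        result = construct_with_repair(tour + [v], start, n)
        if result is not None:
            return result
    return None   # no feasible completion: undo the previous choice

print(construct_with_repair([1], start=1))
# [1, 2, 4, 3]: arc (2, 3) is undone and replaced by (2, 4), exactly
# as in the repair strategy described above
```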
A generalization of the previous strategy, which was based on backtracking and replacing the last added element, is to repeatedly apply destructive modifications to the solution, followed by constructive steps that attempt to recover feasibility. However, all these strategies are problem-specific and are difficult to generalize. Examples will be presented later in this book.
## 3.6 Bibliographical notes
Kruskal (1956) proposed the greedy algorithm for the minimum weighted spanning tree problem described in Section 3.1. Greedy algorithms for knapsack problems were discussed by Martello and Toth (1990). A more efficient implementation of the distance network heuristic for the Steiner tree problem in graphs was developed by Mehlhorn (1988), based on Voronoi diagrams.
Edmonds (1971) established the connection between weighted matroids and greedy algorithms. Lawler (1976) and Chapter 16 of Cormen et al. (2009) cover greedy algorithms and an introduction to matroid theory. Many of the properties of matroids listed in Section 3.2 are proved there. Matroid theory was introduced by Whitney (1935) and was independently discovered by Takeo Nakasawa (see Nishimura and Kuroda (2009) for a historical note and English translation of his original work). Pitsoulis (2014) offered an in-depth coverage of matroids.
Prim (1957) developed the adaptive greedy algorithm for the minimum spanning tree problem described in Section 3.3. Prim's algorithm was originally proposed by Jarník (1930). The adaptive greedy algorithm for the set covering problem was first described by Johnson (1974) and studied by Chvátal (1979). The adaptive greedy algorithm for the maximum clique problem was based on an adaptive greedy algorithm for finding maximum independent sets proposed by Feo et al. (1994). Heuristics for the traveling salesman problem were discussed by Lawler et al. (1985), Gutin and Punnen (2002), and Applegate et al. (2006). The shortest path adaptive greedy heuristic for the Steiner tree problem in graphs was developed by Takahashi and Matsuyama (1980).
Semi-greedy algorithms presented in Section 3.4 were first introduced by Hart and Shogan (1987) and independently developed by Feo and Resende (1989). Bang-Jensen et al. (2004) characterized cases where the greedy algorithm fails and applied their results to the traveling salesman problem and to the minimum bisection problem.
Examples of the repair procedures described in Section 3.5 were reported, e.g., by Duarte et al. (2007a), Duarte et al. (2007b), and Mateus et al. (2011).
References
D.L. Applegate, R.E. Bixby, V. Chvátal, and W.J. Cook. The traveling salesman problem: A computational study. Princeton University Press, Princeton, 2006.
J. Bang-Jensen, G. Gutin, and A. Yeo. When the greedy algorithm fails. Discrete Optimization, 1:121–127, 2004.
V. Chvátal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4:233–235, 1979.
T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, Cambridge, 3rd edition, 2009.
A.R. Duarte, C.C. Ribeiro, and S. Urrutia. A hybrid ILS heuristic to the referee assignment problem with an embedded MIP strategy. In T. Bartz-Beielstein, M.J.B. Aguilera, C. Blum, B. Naujoks, A. Roli, G. Rudolph, and M. Sampels, editors, Hybrid metaheuristics, volume 4771 of Lecture Notes in Computer Science, pages 82–95. Springer, Berlin, 2007a.
A.R. Duarte, C.C. Ribeiro, S. Urrutia, and E.H. Haeusler. Referee assignment in sports leagues. In E.K. Burke and H. Rudová, editors, Practice and theory of automated timetabling VI, volume 3867 of Lecture Notes in Computer Science, pages 158–173. Springer, Berlin, 2007b.
J. Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1:125–136, 1971.
T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67–71, 1989.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Operations Research, 42:860–878, 1994.
G. Gutin and A.P. Punnen, editors. The traveling salesman problem and its variations. Kluwer Academic Publishers, Boston, 2002.
J.P. Hart and A.W. Shogan. Semi-greedy heuristics: An empirical study. Operations Research Letters, 6:107–114, 1987.
V. Jarník. O jistém problému minimálním. Práce Moravské Přírodovědecké Společnosti, 6:57–63, 1930.
D.S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9:256–278, 1974.
J.B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7:48–50, 1956.
E.L. Lawler. Combinatorial optimization: Networks and matroids. Holt, Rinehart and Winston, New York, 1976.
E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, editors. The traveling salesman problem: A guided tour of combinatorial optimization. John Wiley & Sons, New York, 1985.
S. Martello and P. Toth. Knapsack problems: Algorithms and computer implementations. John Wiley & Sons, New York, 1990.
G.R. Mateus, M.G.C. Resende, and R.M.A. Silva. GRASP with path-relinking for the generalized quadratic assignment problem. Journal of Heuristics, 17:527–565, 2011.
K. Mehlhorn. A faster approximation algorithm for the Steiner problem in graphs. Information Processing Letters, 27:125–128, 1988.
H. Nishimura and S. Kuroda, editors. A lost mathematician, Takeo Nakasawa: The forgotten father of matroid theory. Birkhäuser Verlag, Basel, 2009.
L.S. Pitsoulis. Topics in matroid theory. SpringerBriefs in Optimization. Springer, 2014.
R.C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36:1389–1401, 1957.
H. Takahashi and A. Matsuyama. An approximate solution for the Steiner problem in graphs. Mathematica Japonica, 24:573–577, 1980.
H. Whitney. On the abstract properties of linear dependence. American Journal of Mathematics, 57:509–533, 1935.
# 4. Local search
Local search methods start from any feasible solution and visit other (feasible or infeasible) solutions, until a feasible solution that cannot be further improved is found. Local improvements are evaluated with respect to neighboring solutions that can be obtained by slight modifications applied to a solution being visited. We introduce in this chapter the concept of solution representation, which is instrumental in the design and implementation of local search methods. We also define neighborhoods of combinatorial optimization problems and moves between neighboring solutions. We illustrate the definition of a neighborhood by a number of examples for different problems. Local search methods are introduced and different implementation issues are discussed, such as neighborhood search strategies, quick cost updates, and candidate list strategies.
## 4.1 Solution representation
We consider that any solution S for a combinatorial optimization problem is defined by a subset of the elements of the ground set E. A feasible solution is one that satisfies all constraints of the problem. We denote by F the set of feasible solutions for this problem and by 2^E the set formed by all subsets of ground set elements, which includes all feasible and infeasible solutions. We assume in the following that the objective function value of any (feasible or infeasible) solution S is given by f(S) = ∑ i ∈ S c i , where c i denotes the contribution to the objective function value of the ground set element i ∈ E.
Maximum clique problem – Solution representation
Let G = (V, U) be a graph, in which we seek a maximum cardinality clique. In the case of the maximum clique problem, the ground set E corresponds to the set of nodes V = { 1,..., n}. Every solution S can be represented by a binary vector (x 1,..., x n ), in which x i = 1 if node i belongs to S, x i = 0 otherwise, for every i = 1,..., n. The set of feasible solutions F is formed by all subsets of V in which all nodes are pairwise adjacent. ■
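A minimal sketch of this feasibility test, on a hypothetical five-node graph:

```python
from itertools import combinations

# Hypothetical 5-node graph given by its edge set.
EDGES = {(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}

def adjacent(i, j):
    return (min(i, j), max(i, j)) in EDGES

def is_clique(x):
    """x is a 0-1 incidence vector over nodes 1,...,n; the represented
    solution is feasible iff all selected nodes are pairwise adjacent."""
    selected = [i + 1 for i, xi in enumerate(x) if xi == 1]
    return all(adjacent(u, v) for u, v in combinations(selected, 2))

print(is_clique([1, 1, 1, 0, 0]))  # True: nodes {1, 2, 3} form a triangle
print(is_clique([1, 1, 0, 1, 0]))  # False: nodes 1 and 4 are not adjacent
```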
Knapsack problem – Solution representation
In the case of the knapsack problem, one has a set I = { 1,..., n} of items to be placed in a knapsack. Integer numbers a i and c i represent, respectively, the weight and the utility of each item i ∈ I. We assume that each item fits in the knapsack by itself and denote by b its maximum total weight. As for the previous problem, every solution S can be represented by a binary vector (x 1,..., x n ), in which x i = 1 if item i is selected, x i = 0 otherwise, for every i = 1,..., n. A solution S = (x 1,..., x n ) belongs to the feasible set F if ∑ i ∈ I a i ⋅ x i ≤ b. ■
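The representation, its feasibility test, and the objective value f(S) can be sketched as follows, reusing the weights and utilities of the three-item instance with an equality constraint from Section 3.5 (here with the usual inequality constraint):

```python
def knapsack_feasible(x, a, b):
    """A 0-1 vector x is feasible iff the total weight of the selected
    items does not exceed the knapsack capacity b."""
    return sum(ai * xi for ai, xi in zip(a, x)) <= b

def utility(x, c):
    """Objective value f(S): sum of utilities of the selected items."""
    return sum(ci * xi for ci, xi in zip(c, x))

a, c, b = [3, 2, 2], [3, 1, 1], 4   # weights, utilities, capacity
print(knapsack_feasible([1, 0, 0], a, b), utility([1, 0, 0], c))  # True 3
print(knapsack_feasible([1, 1, 0], a, b))  # False: weight 5 exceeds 4
```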
Steiner tree problem in graphs – Solution representation
Let G = (V, U) be a graph, where the node set is V = { 1,..., n} and the edge set is U. We recall that, in the Steiner tree problem in graphs, we are also given a subset T ⊆ V of terminal nodes that have to be connected. A Steiner tree S = (V ′, U′) of G is a subtree of G that connects all nodes in T, i.e., T ⊆ V ′ ⊆ V.
Given any subset V ′ of nodes such that T ⊆ V ′ ⊆ V, we note that any spanning tree of the graph induced in G by V ′ is also a Steiner tree of G connecting all terminal nodes in T. Therefore, any Steiner tree of G connecting the terminal nodes may be obtained by selecting a subset of optional nodes W ⊆ V ∖ T and computing a minimum spanning tree of the graph G(W ∪ T) induced in G by V ′ = W ∪ T.
As a consequence, every solution S of the Steiner tree problem can be represented by a binary vector (x 1,..., x n ), in which x i = 1 if node i ∈ V ′ = W ∪ T, and x i = 0 otherwise, for every i = 1,..., n. We notice that this solution representation is very similar to those adopted for the maximum clique and knapsack problems.
Figure 4.1 illustrates these ideas for an instance of the Steiner tree problem in graphs. This instance is depicted in Figure 4.1(a) and has 15 nodes. The terminal nodes 1, 2, 3, and 4 are represented by circles, while the optional nodes 5 to 15 correspond to squares. The graph induced by the terminal nodes 1, 2, 3, and 4 and the optional nodes 6, 7, 10, 13, and 14 is shown in Figure 4.1(b), with the edges of the corresponding minimum spanning tree marked in red. This minimum spanning tree of length 72 contains edge (6,10), which is not needed to form a Steiner tree of the original graph. Node 10 is removed from the set of optional nodes and the graph induced by the terminal nodes 1, 2, 3, and 4 and the optional nodes 6, 7, 13, and 14 is shown in Figure 4.1(c). The new minimum spanning tree has a smaller length 64 and its edges are marked in red. To conclude this example, we show in Figure 4.2 an even better Steiner tree with length 62 for the same instance, formed by the red edges and containing the optional nodes 9, 11, 12, and 13. ■
Fig. 4.1
Solution representation for the Steiner tree problem in graphs.
Fig. 4.2
Smaller Steiner tree with length 62 for the instance in Figure 4.1(a).
Traveling salesman problem – Solution representation
Let V = { 1,..., n} be the set of cities a traveling salesman has to visit, with non-negative lengths d ij associated with each pair of cities i, j ∈ V.
Any tour visiting each of the n cities exactly once corresponds to a feasible solution. Every feasible solution S can be represented by a binary vector (x 1,..., x m ), where m = n(n − 1)∕2 and x k = 1 if the edge indexed by k belongs to the corresponding tour, x k = 0 otherwise, for every k = 1,..., m. However, this representation applies to any edge subset, regardless of whether or not it corresponds to a tour. Therefore, the edge subset {k = 1,..., m: x k = 1} must define a tour for this solution to be feasible.
Figure 4.3 illustrates a complete graph with four nodes. Numbers on the six edges represent their indices. Every solution can be represented by a binary vector (x 1, x 2, x 3, x 4, x 5, x 6). There are three different tours, corresponding to the incidence vectors (1, 1, 1, 1, 0, 0), (1, 0, 1, 0, 1, 1), and (0, 1, 0, 1, 1, 1).
Fig. 4.3
Hamiltonian cycles on a complete graph with four nodes.
We notice that any solution to the traveling salesman problem can alternatively be represented by a circular permutation (π 1,..., π n ) of the n cities, with π i ∈ V for every i = 1,..., n and π i ≠ π j for every i, j = 1,..., n: i ≠ j. This permutation is associated with the tour defined by the edges (π 1, π 2), (π 2, π 3),..., (π n−1, π n ), and (π n , π 1). Referring to Figure 4.3, the three tours represented by the incidence vectors (1, 1, 1, 1, 0, 0), (1, 0, 1, 0, 1, 1), and (0, 1, 0, 1, 1, 1) correspond, respectively, to the circular permutations (a, b, c, d), (a, b, d, c), and (a, c, b, d). ■
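The correspondence between the two representations can be checked with a short sketch. The edge indexing below is one hypothetical assignment consistent with the incidence vectors above; the actual indexing is the one shown in Figure 4.3.

```python
# One edge indexing consistent with the incidence vectors in the text;
# the actual indexing is the one shown in Figure 4.3.
EDGE_INDEX = {('a', 'b'): 1, ('b', 'c'): 2, ('c', 'd'): 3,
              ('a', 'd'): 4, ('a', 'c'): 5, ('b', 'd'): 6}

def incidence_vector(pi):
    """Convert a circular permutation of the cities into the 0-1 edge
    incidence vector of the corresponding tour."""
    x = [0] * len(EDGE_INDEX)
    for u, v in zip(pi, pi[1:] + pi[:1]):   # consecutive pairs, wrapping
        x[EDGE_INDEX[tuple(sorted((u, v)))] - 1] = 1
    return tuple(x)

print(incidence_vector(['a', 'b', 'c', 'd']))  # (1, 1, 1, 1, 0, 0)
print(incidence_vector(['a', 'b', 'd', 'c']))  # (1, 0, 1, 0, 1, 1)
print(incidence_vector(['a', 'c', 'b', 'd']))  # (0, 1, 0, 1, 1, 1)
```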
This discussion illustrates the fact that alternative representations can exist for a combinatorial optimization problem. The choice of one over another can lead to easier implementations or faster algorithms. In some occasions, it can also be helpful to work simultaneously with two different representations for every solution, since each of them can be more effective than the other for the implementation of some specific operations. We find the following schemes among the most frequently used solution representation techniques:
* 0-1 incidence vector: This representation is typically used whenever the ground set is partitioned into two subsets, one of them corresponding to the elements that belong to the solution, while the others do not. This representation was applied in the three examples described above.
* Generalized incidence vector: This representation is often used whenever the ground set has to be partitioned into a number of subsets, each of them with a different interpretation. We can cite as examples the graph coloring problem and its variants, in which different colors have to be assigned to adjacent nodes of a graph. Another example is that of the vehicle routing and scheduling problem, in which a number of vehicles have to be assigned to clients. Still another example is that of the bin packing problem, in which a number of items have to be accommodated in different bins with the same size.
* Permutation: This representation typically applies to scheduling problems in which one is interested in establishing an optimal order for the execution of a number of tasks. It was illustrated above in the context of the traveling salesman problem.
## 4.2 Neighborhoods and search space graph
A neighborhood of a solution S ∈ F can be defined by any subset of F. More formally, a neighborhood is a mapping that associates each feasible solution S ∈ F with a subset N(S) = { S 1,..., S p } of feasible solutions also in F.
Each solution S′ ∈ N(S) can be reached from S by an operator called move. Normally, two neighboring solutions S and S′ ∈ N(S) differ only by a few elements and a move from a solution S consists simply in changing one or more elements in S. Usually, S ∈ N(S′) whenever S′ ∈ N(S).
The search space graph has a node set that corresponds to the set F of feasible solutions. The edge set M of the search space graph is such that there is an edge (S, S′) ∈ M between two solutions S, S′ ∈ F if and only if S′ ∈ N(S) and S ∈ N(S′). An extended search space graph may be similarly defined, encompassing not only the set of feasible solutions F but, instead, the whole set 2^E formed by all subsets of elements of the ground set E.
Figure 4.4 displays an example of an instance of a combinatorial problem in which the set F is formed by 16 feasible solutions depicted in a square grid and represented by S(i, j), for i, j = 1,..., 4.
Fig. 4.4
Set of 16 feasible solutions of a combinatorial optimization problem.
A search space graph associated with the above problem instance can be created by imposing a neighborhood definition on the node set F. Neighborhood N 1 is defined such that any solution S(i, j) has neighbors S(i + 1, j), S(i − 1, j), S(i, j + 1), and S(i, j − 1), whenever they exist. Figure 4.5 displays the corresponding search space graph.
Fig. 4.5
Search space graph with 16 feasible solutions and neighborhood N 1.
However, other different neighborhoods can be defined and imposed on the same set of feasible solutions. Another neighborhood N 2 can be defined, such that any solution S(i, j) has neighbors S(i + 1, j + 1), S(i + 1, j − 1), S(i − 1, j + 1), and S(i − 1, j − 1), whenever they exist. Figure 4.6 displays the search space graph defined by this neighborhood.
Fig. 4.6
Search space graph with 16 feasible solutions and neighborhood N 2.
We observe that some pairs of solutions are closer within one neighborhood or another. For instance, six moves are necessary to traverse the search space graph defined by neighborhood N 1 from S(1, 1) to S(4, 4), although only three are necessary if neighborhood N 2 is applied. On the other hand, every feasible solution is reachable from any other one if neighborhood N 1 is used. In contrast, if the search starts from a solution S(i, j) where i + j is odd and takes moves defined by neighborhood N 2, only half of the solutions in the search space graph are reachable. However, if i + j is even, then only the other half of the solutions can be visited. Therefore, the search space graph is not connected in this case. This can lead to implementation difficulties and can even make it impossible for the search procedure to find good solutions located at the part of the graph that cannot be reached from the initial solution.
Since each of the neighborhoods N 1 and N 2 leads to different search paths through the set F of feasible solutions, a natural idea is to combine them into a single neighborhood. Therefore, neighborhood N 3 can be defined as the union of N 1 and N 2: within this new neighborhood, any feasible solution S(i, j) has up to eight neighbors S(i + 1, j), S(i − 1, j), S(i, j + 1), S(i, j − 1), S(i + 1, j + 1), S(i + 1, j − 1), S(i − 1, j + 1), and S(i − 1, j − 1), whenever they exist. Figure 4.7 displays the search space graph defined by this enlarged neighborhood.
Fig. 4.7
Search space graph with 16 feasible solutions and neighborhood N 3.
At this point, it is very insightful to present some analogies between these three neighborhoods on the 4 × 4 grid of feasible solutions of our illustrative combinatorial optimization problem and the way chess pieces move on a chess board. Moves along neighborhood N 1 are similar to those of rooks, which move along rows and columns of the chess board. Moves following neighborhood N 2 are equivalent to those of bishops, which traverse the diagonals of the chess board. While the white bishop can visit only white squares of the chess board, the black bishop can visit only the black squares. Each bishop can visit only half of the chess board squares, in the same way as moves within neighborhood N 2 starting from any feasible solution in Figure 4.4 can visit only half of the set of feasible solutions. The queen is stronger than both a rook and a bishop, since it can perform all kinds of moves, along rows, columns, and diagonals. Analogously, neighborhood N 3 entails all moves that can be performed within neighborhoods N 1 and N 2.
The above discussion illustrates the notion that different neighborhoods can be defined and used in the implementation of a local search method. The larger the neighborhood, the denser will be the search space graph and the shorter will be the paths connecting any two solutions. However, the use of large neighborhoods requires the evaluation of more neighboring solutions, leading to larger computation times during the investigation of the current solution.
It is important to note that the search space does not need to be formed exclusively by feasible solutions in F, but can also contain any subset of the set 2^E formed by all solutions, either feasible or infeasible. The definitions presented in this section remain the same, with the search space graph being now defined over 2^E and not only over F. In this situation, the search can visit feasible and infeasible solutions but, in any case, it must terminate at a feasible solution. Working with more complex search space graphs, which include infeasible solutions, can be essential in some cases to ensure connectivity between any pair of feasible solutions.
In conclusion, finding an appropriate neighborhood and the best way to explore it is a crucial step towards the implementation of effective and efficient local search methods.
Knapsack problem – Neighborhood and search space graph
We have already seen that any solution of the knapsack problem can be represented by a binary vector (x 1,..., x n ), in which x i = 1 if item i is selected, x i = 0 otherwise, for every i = 1,..., n. A solution S = (x 1,..., x n ) belongs to the feasible set F if ∑ i = 1 n a i ⋅ x i ≤ b. In this context, a move from any solution amounts to complementing the value of any single variable among x 1,..., x n , while keeping the others fixed. Each solution has exactly n neighbors and the full search space graph defined over the set 2^E formed by all feasible and infeasible solutions is an n-dimensional hypercube, as depicted in Figure 4.8. ■
Fig. 4.8
Search space graph for a knapsack problem with three items.
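To make the flip neighborhood concrete, here is a minimal Python sketch (the function name and the 3-item instance are our own, not from the text) that enumerates the feasible neighbors obtained by complementing a single variable:

```python
def knapsack_neighbors(x, a, b):
    """Yield all feasible neighbors of x obtained by flipping a single variable."""
    for j in range(len(x)):
        y = list(x)
        y[j] = 1 - y[j]                       # complement x_j, keep the others fixed
        if sum(w * v for w, v in zip(a, y)) <= b:
            yield tuple(y)

# hypothetical 3-item instance: weights a, capacity b
a, b = [3, 5, 4], 8
print(sorted(knapsack_neighbors((1, 0, 0), a, b)))
# all three flips are feasible here: (0, 0, 0), (1, 0, 1), (1, 1, 0)
```

Restricting the yielded neighbors to feasible ones corresponds to searching over F only; dropping the capacity test would yield all n hypercube neighbors, i.e., the full search space graph over feasible and infeasible solutions.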
Traveling salesman problem – Neighborhood and search space graph
We consider the case of a traveling salesman problem defined over a set V = { 1,..., n} of cities that have to be visited exactly once. We have already seen that a feasible solution is a tour visiting each of the cities in V exactly once. Any feasible solution of the traveling salesman problem can be represented by a circular permutation (π 1, π 2,..., π n−1, π n ) of the n cities, with π i ∈ V for every i = 1,..., n and π i ≠ π j for every i, j = 1,..., n: i ≠ j. This circular permutation is equivalent to any of the n linear permutations (π 1, π 2,..., π n−1, π n ), (π 2, π 3,..., π n , π 1),..., and (π n , π 1,..., π n−2, π n−1), each of them originating at a different city. All of them correspond to the same tour (π 1, π 2), (π 2, π 3),..., (π n−1, π n ), (π n , π 1). The three tours in Figure 4.3 correspond to the circular permutations (a, b, c, d), (a, b, d, c), and (a, c, d, b).
The search space graph has exactly n! nodes, each of them corresponding to a permutation of the n cities to be visited. Several neighborhood definitions are possible in this context. Neighborhood N 1 is defined as that formed by all permutations that can be obtained by exchanging the positions of two consecutive cities of the current permutation. Any solution (π 1,..., π i−1, π i ,..., π n ) has exactly n − 1 neighbors within neighborhood N 1, each of them defined by a different permutation (π 1,..., π i , π i−1,..., π n ) characterized by the swap of cities π i−1 and π i , for i = 2,..., n. Figure 4.9 illustrates the search space graph corresponding to this neighborhood for a symmetric traveling salesman problem with four cities. Every solution has exactly three neighbors. As an example, the neighbors of solution (1, 2, 3, 4) are (2, 1, 3, 4), (1, 3, 2, 4), and (1, 2, 4, 3).
Fig. 4.9
Search space graph defined by neighborhood N 1 for a traveling salesman problem with four cities.
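Neighborhood N 1 can be sketched as follows in Python (the function name is our own); for a 4-city instance it reproduces the three neighbors of solution (1, 2, 3, 4):

```python
def n1_neighbors(perm):
    """Neighborhood N1: swap two consecutive cities of the permutation.
    A tour on n cities has exactly n - 1 such neighbors."""
    out = []
    for i in range(len(perm) - 1):
        q = list(perm)
        q[i], q[i + 1] = q[i + 1], q[i]   # swap cities at positions i and i+1
        out.append(tuple(q))
    return out

print(n1_neighbors((1, 2, 3, 4)))
# [(2, 1, 3, 4), (1, 3, 2, 4), (1, 2, 4, 3)], as in Figure 4.9
```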
Neighborhood N 2 is defined by associating a solution (π 1,..., π i ,..., π j ,..., π n ) with all n(n − 1)∕2 neighbors (π 1,..., π j ,..., π i ,..., π n ) that can be obtained by exchanging the positions of any two cities π i and π j , for i, j = 1,..., n: i ≠ j. Considering the same example of a symmetric traveling salesman problem with four cities, every solution has exactly six neighbors. In particular, the same solution (1, 2, 3, 4) has now (2, 1, 3, 4), (1, 3, 2, 4), (1, 2, 4, 3), (3, 2, 1, 4), (1, 4, 3, 2), and (4, 2, 3, 1) as its neighbors.
We can also define a third neighborhood N 3 for the same problem by associating a solution (π 1,..., π i−1, π i , π i+1,..., π j ,..., π n ) with all n(n − 1)∕2 neighbors (π 1,..., π i−1, π i+1,..., π j , π i ,..., π n ) that can be obtained by moving city π i to position j, with 1 ≤ i < j ≤ n, and shifting by one position to the left all cities between positions i + 1 and j. Considering once again the same example of a traveling salesman problem with four cities as above, every solution also has exactly six neighbors. In particular, solution (1, 2, 3, 4) now has (2, 1, 3, 4), (2, 3, 1, 4), (2, 3, 4, 1), (1, 3, 2, 4), (1, 3, 4, 2), and (1, 2, 4, 3) as its neighbors.
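Neighborhoods N 2 and N 3 can be sketched analogously (the function names are our own); applied to (1, 2, 3, 4), each generates exactly the six neighbors listed in the text:

```python
def n2_neighbors(perm):
    """Neighborhood N2: exchange the cities at any two positions (n(n-1)/2 moves)."""
    n = len(perm)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            q = list(perm)
            q[i], q[j] = q[j], q[i]
            out.append(tuple(q))
    return out

def n3_neighbors(perm):
    """Neighborhood N3: move the city at position i to position j > i, shifting
    the cities in between one position to the left (n(n-1)/2 moves)."""
    n = len(perm)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            q = list(perm)
            q.insert(j, q.pop(i))   # remove city at i, reinsert it at position j
            out.append(tuple(q))
    return out

print(sorted(n2_neighbors((1, 2, 3, 4))) == sorted(n3_neighbors((1, 2, 3, 4))))
# False: the two neighborhoods are not the same, as discussed below
```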
Each solution has three neighbors in neighborhood N 1 and six neighbors in neighborhoods N 2 and N 3, for this example. Neighbors in neighborhoods N 2 and N 3 are not the same. Therefore, even if the feasible solution set is the same in the three examples, the search space graphs defined by the three neighborhoods have different edge sets and, consequently, are different. Figure 4.10 superimposes neighborhoods N 1, N 2, and N 3, illustrating the neighbors of solution (1, 2, 3, 4) within each of the three neighborhoods.
Fig. 4.10
Neighborhoods N 1, N 2, and N 3 of solution (1, 2, 3, 4) in the search space graph of a traveling salesman problem with four cities. Nodes connected by red edges belong to the three neighborhoods. Nodes connected by blue edges are those within neighborhood N 2, while those connected by green edges belong to neighborhood N 3.
We recall that each tour (π 1, π 2), (π 2, π 3),..., (π n−1, π n ), (π n , π 1) corresponds to exactly n linear permutations of the cities to be visited. Considering the same example of a traveling salesman problem with four cities, the tour that starts at city 1, visits cities 2, 3, and 4 in this order, and then returns to city 1, can be represented by any one of the following four linear permutations: (1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), and (4, 1, 2, 3). Furthermore, if the problem is symmetric, then each tour can be traversed in two opposite directions with the same total traveled distance. Therefore, permutations (4, 3, 2, 1), (1, 4, 3, 2), (2, 1, 4, 3), and (3, 2, 1, 4) also correspond to the same tour, but traversed in reverse order. In the general case, each tour corresponds to a circular permutation of the n cities. Since each circular permutation can be traversed in either of two directions, the number of feasible solutions in F can be reduced from n! to (n − 1)! ∕2, leading to a more compact representation of the problem. However, this reduction is not sufficient to make the solution of large problems easier, since the size of the search space graph remains superpolynomial in the number of cities. ■
## 4.3 Implementation strategies
We have shown that the search space can be seen as a graph whose vertices correspond to feasible solutions that are connected by edges associated with neighboring solutions. A path in the search space graph consists of a sequence of feasible solutions, in which any two consecutive solutions are neighbors of each other.
This definition of the search space graph can be enlarged to allow also for vertices that correspond to infeasible solutions. In this case, paths in the search space can visit feasible as well as infeasible solutions.
Given any instance of a minimization problem defined by a finite ground set E = { 1,..., n}, a set of feasible solutions F ⊆ 2 E , and an objective function f: 2 E → ℝ, we have noted in Section 2.1 that we seek a globally optimal solution S ∗ ∈ F such that f(S ∗) ≤ f(S) for all S ∈ F.
We have also seen in Section 4.2 that a neighborhood of a feasible solution S ∈ F is defined by a mapping N that associates S with a subset N(S) of feasible solutions also in F. A solution S + is said to be a local optimum for a minimization problem with respect to neighborhood N if and only if f(S +) ≤ f(S) for all S ∈ N(S +). We notice that a global optimum is also locally optimal with respect to any neighborhood, while a local optimum is not necessarily a global optimum.
Local search methods can be viewed as a traversal of the search space graph starting from any given solution and stopping whenever some optimality condition is met. In most cases, a local search procedure is made to stop after a locally optimal solution is encountered. Metaheuristics such as tabu search extend the search beyond the first local optimum found, offering different escape mechanisms. The effectiveness and efficiency of a local search method depend on several factors, such as the starting solution, the neighborhood structure, and the objective function being optimized. The main components or phases of a local search method are
Phase 4.1
– Start: Construction of the initial solution, from where the search starts. Methods that may be applied to build an initial solution have already been discussed in Chapter 3 and will be further considered in Chapter 7 .
Phase 4.2
– Neighborhood search: Application of a subordinate heuristic or search strategy to find an improving solution in the neighborhood of the current solution. Neighborhood search strategies will be discussed throughout this section.
Phase 4.3
– Stop: Interruption of the search by a stopping criterion, which in most cases consists of detecting that a locally optimal solution has been found. Stopping criteria for the neighborhood search will be considered at different points of this chapter.
### 4.3.1 Neighborhood search
In the following, we consider first-improving, best-improving, and other variants and strategies for the implementation of the neighborhood search.
At any iteration of an iterative improvement or first-improving neighborhood search strategy, the algorithm moves from the current solution to any neighbor with a better value for the objective function. In general, the new solution is the first improving solution identified during the neighborhood search. The pseudo-code in Figure 4.11 describes a local search procedure based on a first-improving strategy for a minimization problem. The search starts from a given initial solution S. A flag indicating whether or not an improving solution was found is set to .TRUE. in line 1. The loop in lines 2 to 10 is performed until it becomes impossible to replace the current solution with a better neighbor. The flag is reset to .FALSE. in line 3 at the beginning of a new iteration. The loop in lines 4 to 9 visits every neighbor S′ ∈ N(S) of the current solution S until an improving solution is found. If the test in line 5 detects that S′ is better than the current solution S, then the latter is replaced by the former in line 6 and the flag is reset to .TRUE. in line 7, indicating that a better solution was found. The algorithm returns the local optimum S in line 11.
Fig. 4.11
Pseudo-code of a first-improving local search procedure for a minimization problem.
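A minimal executable rendering of this first-improving procedure might look as follows in Python (the generic interface taking an objective f and a neighbor generator is our own choice, not the book's):

```python
def first_improving_local_search(s, f, neighbors):
    """Local search that accepts the first improving neighbor found (Figure 4.11)."""
    improved = True
    while improved:                      # stop when no neighbor improves s
        improved = False
        for sp in neighbors(s):
            if f(sp) < f(s):             # first improving neighbor found
                s = sp
                improved = True
                break                    # rescan the neighborhood of the new s
    return s

# toy use: minimize (x - 7)^2 over the integers 0..10, moving by +/- 1
f = lambda x: (x - 7) ** 2
nbrs = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 10]
print(first_improving_local_search(0, f, nbrs))   # 7
```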
At any iteration of a best-improving local search strategy, the algorithm moves from the current solution to the best of its neighbors, whenever this neighbor improves upon the current solution. The pseudo-code in Figure 4.12 describes a local search procedure based on a best-improving strategy for a minimization problem. As in the previous algorithm, the search starts from any given initial solution S. A flag indicating whether or not an improving solution was found is set to .TRUE. in line 1. The loop in lines 2 to 15 is performed until it becomes impossible to replace the current solution with a better neighbor. The flag is reset to .FALSE. in line 3 at the beginning of a new iteration. The variable f best that stores the best objective function value over all neighbors of the current solution S is set to a large value in line 4. The loop in lines 5 to 10 visits every neighbor S′ ∈ N(S) of the current solution S. If the test in line 6 detects that a new neighbor S′ is better than the current best neighbor, then the current best is replaced by the improved neighbor in line 7 and the best objective function value f best in the neighborhood is updated in line 8. In line 11, we compare the current solution S with its best neighbor S best . If f best is less than f(S), then the current solution is updated in line 12 and the flag is reset to .TRUE. in line 13, indicating that a better solution was found. The algorithm returns the local optimum S in line 16.
Fig. 4.12
Pseudo-code of a best-improving local search procedure for a minimization problem.
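A corresponding Python sketch of the best-improving procedure (again with an interface of our own choosing) is:

```python
def best_improving_local_search(s, f, neighbors):
    """Local search that moves to the best neighbor while it improves (Figure 4.12)."""
    improved = True
    while improved:
        improved = False
        s_best, f_best = None, float('inf')   # best neighbor of s seen so far
        for sp in neighbors(s):
            if f(sp) < f_best:
                s_best, f_best = sp, f(sp)
        if f_best < f(s):                     # move only on strict improvement
            s = s_best
            improved = True
    return s

f = lambda x: (x - 7) ** 2
nbrs = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 10]
print(best_improving_local_search(10, f, nbrs))   # 7
```

Note that every neighbor is evaluated before a move is made, whereas the first-improving variant moves as soon as any improvement is detected.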
To illustrate the main ideas discussed in this section, we consider the traveling salesman problem instance with four cities whose search space graph was represented in Figure 4.9. We assume that the edge lengths are such that the values of the objective function for each solution are those depicted in Figure 4.13. For this minimization problem, there is only one local minimum, which is, necessarily, also a global optimum. The node of the search space graph corresponding to this solution is colored red. We observe that independently of the starting solution and of the neighborhood search strategy, the local search always stops at the global optimum.
Fig. 4.13
Search space graph for a minimization problem with a unique local optimum.
We now suppose that the edge lengths are modified and that the solution costs are now as depicted in Figure 4.14. Considering this new situation, there are six nodes of the search space graph corresponding to locally optimal solutions: two nodes colored green have their objective function values equal to 49, two colored blue have their objective function values equal to 48, and two colored red have their objective function values equal to 46. Among those, we observe that only the red nodes correspond to globally optimal solutions. Furthermore, we notice that the solution obtained by local search varies, depending on both the starting solution and the neighborhood search strategy.
Fig. 4.14
Search space graph for a minimization problem with six local optima.
### 4.3.2 Cost function update
The complexity of each neighborhood search iteration depends not only on the number of neighbors of each visited solution, but also on the efficiency of the computation of the cost function value for each neighbor.
Efficient implementations of neighborhood search usually compute the cost of each neighbor S′ by updating the cost of the current solution S, instead of calculating it from scratch, avoiding repetitive and unnecessary calculations, as illustrated in the two examples that follow.
Knapsack problem – Cost function update
Consider a solution S for the knapsack problem, represented by a binary vector (x 1,..., x n ), in which x i = 1 if item i is selected, x i = 0 otherwise, for every item i = 1,..., n. The cost of solution S is given by f(S) = ∑_{i=1}^{n} c_i · x_i . Consider now a neighbor solution S′ ∈ N(S) that differs from S by a single element, i.e., S′ = (x′1,..., x′ n ), with x′ j = 1 − x j for some j ∈ { 1,..., n} and x′ i = x i for every i = 1,..., n: i ≠ j. The cost f(S′) can be computed from scratch in time O(n) by adding up the costs of all items for which x′ i = 1. However, this value can be computed much faster in time O(1) as f(S′) = f(S) + c j if x j = 0 and x′ j = 1, or as f(S′) = f(S) − c j if x j = 1 and x′ j = 0. ■
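The O(1) update rule above can be written as a one-liner (function name and instance data are our own):

```python
def flip_cost(f_S, x, c, j):
    """O(1) update of the knapsack objective f when variable x_j is flipped."""
    return f_S + c[j] if x[j] == 0 else f_S - c[j]

c = [10, 5, 7]                    # hypothetical item values
x = (1, 0, 1)                     # current solution, with f(S) = 10 + 7 = 17
print(flip_cost(17, x, c, 1))     # item 1 enters: 17 + 5 = 22
print(flip_cost(17, x, c, 0))     # item 0 leaves: 17 - 10 = 7
```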
Traveling salesman problem – Cost function update
In Section 4.2 we examined three different neighborhoods for the traveling salesman problem. Here, we discuss the implementation of a local search procedure for the traveling salesman problem based on a different neighborhood definition.
Recall that every solution S for an instance of the traveling salesman problem can be represented by an incidence binary vector (x 1,..., x m ), where m = n(n − 1)∕2 and x j = 1 if the edge indexed by j belongs to the corresponding tour, x j = 0 otherwise, for j = 1,..., m. Figure 4.15 (a) depicts an example involving a 5-vertex weighted graph, whose edges are numbered as indicated in Figure 4.15 (b). Figure 4.15 (c) shows the initial solution S corresponding to the incidence vector (1, 1, 1, 1, 1, 0, 0, 0, 0, 0), whose cost is 17.
Fig. 4.15
Instance of the traveling salesman problem with five vertices and its initial solution.
The 2-opt neighborhood for the traveling salesman problem is defined by replacing any pair of nonadjacent edges of solution S by the unique pair of edges that recreates a Hamiltonian cycle. Figure 4.16 displays the five neighbors of the initial solution S = (1, 1, 1, 1, 1, 0, 0, 0, 0, 0) together with their costs, indicating for each of them the eliminated edges by dashed lines. Suppose that the neighbors are generated and examined from left to right. In that case, a first-improving neighborhood search strategy would return the second generated neighbor (with cost 16) as the improving solution. However, if a best-improving strategy were applied, then it would return the fourth neighbor as the best one (with cost 14). Assuming that this fourth neighbor is selected and becomes the new current solution S = (1, 0, 1, 1, 0, 1, 0, 0, 0, 1), Figure 4.17 displays its five neighbors. The best of those is the second from left to right (with cost 12). Since this solution cannot be improved by any of its neighbors, then it is a local optimum and the search is interrupted.
Fig. 4.16
First local search iteration: fourth neighbor is returned.
Fig. 4.17
Second local search iteration: second neighbor is a local optimum.
Each solution has exactly n(n − 1)∕2 − n neighbors. As for the case of the knapsack problem, the cost of each neighbor S′ can be recomputed in time O(1) from the cost of solution S, by simply taking the cost f(S), subtracting the lengths of the two removed edges, and adding those of the two edges that replaced them.
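A sketch of this O(1) evaluation for a 2-opt move (the function name and the distance-matrix interface are our own): removing edges (a, b) and (c, d) and reconnecting the tour with (a, c) and (b, d) changes the cost by dist(a, c) + dist(b, d) − dist(a, b) − dist(c, d):

```python
def two_opt_delta(tour, dist, i, j):
    """Cost change of the 2-opt move removing edges (tour[i], tour[i+1]) and
    (tour[j], tour[j+1]) and reconnecting with (tour[i], tour[j]) and
    (tour[i+1], tour[j+1]); positions are taken modulo n."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]

# hypothetical 4-city instance: sides cost 1, diagonals cost 2
D = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
# the crossing tour (0, 2, 1, 3) costs 6; uncrossing it saves 2
print(two_opt_delta([0, 2, 1, 3], D, 0, 2))   # -2
```

If the move is accepted, the segment between the two removed edges is reversed to obtain the new tour; the delta itself never requires recomputing the full tour length.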
The 3-opt neighborhood for the traveling salesman problem can be defined by taking three nonadjacent edges of the current solution and replacing them with any of the only four possible combinations of three edges that recreate a tour, as illustrated in Figure 4.18. In that case, the number of neighbors increases to O(n 3) and the search becomes slower, although more solutions can be investigated and better neighbors may possibly be found. This neighborhood can be generalized to any k ≤ m: neighborhood k-opt is formed by all solutions that can be obtained by replacing k edges of the current solution by k others that do not belong to it, so as to recreate a new tour. ■
Fig. 4.18
3-opt neighborhood for the traveling salesman problem.
### 4.3.3 Candidate lists
Candidate list strategies are techniques that allow local search methods to be implemented more efficiently, by dealing faster or with fewer neighbors instead of exploring the full neighborhood. Basically, these strategies speed up the local search either by restricting the number of neighbors investigated (for instance, when the neighborhood is very large) or by avoiding repetitive computations that can be saved from one iteration to the next (typically, whenever the computation of the objective function is expensive).
Instead of describing the many types of candidate list strategies that are useful for the efficient implementation of local search, we illustrate with an example one of the simplest and most effective variants in common use.
We assume that a best-improving neighborhood search strategy is applied as part of a local search method to solve a minimization problem. We also assume that the current solution S with cost f(S) at a given local search iteration has p neighboring solutions, each of them associated with a move indexed by j = 1,..., p. Each neighbor is obtained by applying one of the p moves to the current solution S. Each move j = 1,..., p may correspond, for instance, to flipping a 0-1 variable indexed by j, to interchanging the values of the j-th pair of variables, or to removing the j-th pair of edges in a 2-opt neighborhood for the traveling salesman problem and replacing them by the unique pair of edges that makes it possible to recover a tour. We denote by S ⊕{ j} the solution obtained by applying the move indexed by j to the current solution S. The incremental cost associated with the move indexed by j is computed and stored in Δ(j), for j = 1,..., p. Therefore, the cost of each neighbor is given by f(S ⊕{ j}) = f(S) +Δ(j), for j = 1,..., p. In particular, suppose that p = 10 and Δ(1) = −10, Δ(2) = 1, Δ(3) = −8, Δ(4) = −12, Δ(5) = 4, Δ(6) = −3, Δ(7) = −5, Δ(8) = 6, Δ(9) = −1, and Δ(10) = 5. Since we are dealing with a minimization problem and a best-improving strategy is being used, the best-improving move is that indexed by j ∗ = argmin{Δ(j): Δ(j) < 0, j = 1,..., p}. Consequently, j ∗ = 4 in this example and the search moves to solution S ⊕{ 4}, whose cost is f(S ⊕{ 4}) = f(S) +Δ(4) = f(S) − 12.
At this time, the best-improving neighborhood search strategy would investigate the full neighborhood of solution S ⊕{ 4}. Instead, we consider a candidate list formed by all yet unselected negative cost moves j = 1,..., p: j ≠ j ∗ and Δ(j) < 0. Therefore, all other possible moves from S ⊕{ 4} are discarded and the candidate list is formed exclusively by the yet unselected negative cost moves indexed by 1, 3, 6, 7, and 9. In addition to reducing the number of neighbors investigated, we use the already-available values of Δ(j) as estimates of the new incremental costs associated with each move from S ⊕{ 4}. The best-improving strategy selects j ∗ = 1 as the best candidate move and only now the true value of the incremental cost Δ(1) associated with the application of the move indexed by 1 to solution S ⊕{ 4} will be recomputed. If this move remains an improving move (i.e., if the recently updated value Δ(1) is negative), then it is applied to S ⊕{ 4} and the search moves to solution S ⊕{ 4} ⊕{ 1} with cost f(S ⊕{ 4} ⊕{ 1}) = f(S) +Δ(4) +Δ(1). Otherwise, if Δ(1) ≥ 0, then this move became nonimproving. Therefore, it can be discarded from the candidate list and the procedure resumes from the reduced candidate list formed by the still remaining moves, which are indexed by 3, 6, 7, and 9.
This procedure continues until the candidate list is exhausted and becomes empty. The incremental costs of all possible moves from the current solution are fully reevaluated and the search continues from this updated candidate list, until a local optimum is found.
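The walkthrough above, with p = 10 and the given Δ values, can be checked with a few lines of Python (the function and variable names are our own):

```python
def best_improving_move(delta):
    """1-based index of the best improving move, or None when no move has Δ(j) < 0."""
    improving = [j for j, d in delta.items() if d < 0]
    return min(improving, key=lambda j: delta[j]) if improving else None

delta = {1: -10, 2: 1, 3: -8, 4: -12, 5: 4, 6: -3, 7: -5, 8: 6, 9: -1, 10: 5}
j_star = best_improving_move(delta)
# candidate list kept for the next iteration: the unselected improving moves
candidates = sorted(j for j, d in delta.items() if d < 0 and j != j_star)
print(j_star, candidates)   # 4 [1, 3, 6, 7, 9]
```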
Different variants of candidate lists strategies have been proposed in the literature and successfully applied to a number of problems. Additional references are given in the bibliographical notes presented at the end of this chapter.
### 4.3.4 Circular search
In the case of a local search procedure following a first-improving strategy, a very effective way of exploring a full neighborhood or a candidate list consists in performing a circular search. A circular search amounts to using a circular list of candidate moves. As before, consider an example in which local search is applied to a minimization problem. Again, we assume that the current solution S with cost f(S) at a given local search iteration has p neighboring solutions, each of them associated with a move indexed by j = 1,..., p. Each neighbor is obtained by applying one of the p moves to the current solution S. As before, we denote by S ⊕{ j} the solution obtained by applying the move indexed by j to the current solution S. The incremental cost associated with the move indexed by j is computed and stored in Δ(j), for j = 1,..., p. Suppose that the moves are investigated in ascending order of their indices j = 1,..., p, until the first improving neighbor j′ ≥ 2 is found, i.e., Δ(j′) < 0 and Δ(j) ≥ 0 for all j = 1,..., j′ − 1. At this point, the new solution becomes S ⊕{ j′} and the search would normally resume from the first move, i.e., from j = 1. However, since the move indexed by 1 was nonimproving in the last iteration (Δ(1) ≥ 0), it will most likely still be nonimproving. The same applies to all moves j = 2,..., j′ − 1. Therefore, a more effective strategy resumes the search from j = j′ + 1, instead of from j = 1. In this context, the move indexed by j = p is followed by that indexed by j = 1, as if they were organized as a circular list. The first-improving local search strategy stops at a local optimum as soon as a complete tour of this circular list is performed without any improvement in the current solution.
The use of a candidate list strategy using a circular search to implement a local search method based on a first-improving strategy can speed up the search by several orders of magnitude, without any loss in terms of solution quality.
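A minimal sketch of a first-improving search over a circular candidate list (the interface, with a move evaluator and a move applier, is our own choice, not the book's):

```python
def circular_first_improving(s, moves, delta, apply_move):
    """First-improving local search scanning a circular list of candidate moves:
    after accepting move j the scan resumes at j+1 rather than at the first move,
    and stops after a full tour of the list without any improvement."""
    p = len(moves)
    j, fails = 0, 0
    while fails < p:
        if delta(s, moves[j]) < 0:        # improving move: apply it and go on
            s = apply_move(s, moves[j])
            fails = 0
        else:
            fails += 1                    # one more consecutive nonimproving move
        j = (j + 1) % p                   # next move in the circular list
    return s

# toy use: minimize the number of 1s in a bit vector by single-bit flips
flip = lambda s, j: s[:j] + (1 - s[j],) + s[j + 1:]
gain = lambda s, j: -1 if s[j] == 1 else 1
print(circular_first_improving((1, 0, 1, 1), [0, 1, 2, 3], gain, flip))   # (0, 0, 0, 0)
```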
## 4.4 Ejection chains and perturbations
Simple moves such as those described in Section 4.2 can be extended to define broader neighborhoods associated with compound moves. These can be seen as sequences of simple moves that introduce structural changes in the current solution. Algorithms that incorporate compound moves are called variable depth methods, since the number of components of a compound move may vary from step to step.
Ejection chains are variable depth methods that make use of a sequence of interrelated, consecutive simpler moves to create more complex moves. They are designed to induce successive changes following an initial move that introduced infeasibilities into the neighboring solution, until feasibility is recovered. We say that an ejection chain has a variable depth because the number of simpler moves that are needed to recover feasibility may vary from one iteration to another. The sequence of simple consecutive moves that lead to a compound move may be long, and the evaluation of a compound move defining an ejection chain can be very costly in terms of computational effort. Therefore, the full exploration of neighborhoods defined by ejection chains is rarely practical. However, ejection chains may be employed as random or biased perturbations that destroy the structure of a local optimum, generating a new solution that still shares part of the local optimum from which it originated.
Many metaheuristics make use of diversification strategies to drive the search towards unexplored regions of the search space. In particular, iterated local search (ILS) explores perturbation moves to escape from locally optimal solutions, obtaining new, different initial solutions from where the search restarts. Ejection chains are a very attractive alternative to generate perturbation moves in the context of diversification steps in tabu search or iterated local search.
We illustrate the use of ejection chains with an application to the traveling tournament problem. In this problem, one seeks to schedule the games of a compact round robin tournament involving n teams. Each team plays exactly once with every other team. The tournament takes n − 1 rounds to be completed and every team plays exactly one game in each round. The tournament can be represented by the complete graph K n , in which each node represents one team and each edge corresponds to the game between the two teams represented by its two extremities. Each round can be seen as a 1-factor of K n , containing exactly n∕2 edges. The goal is to minimize the total distance traveled by the teams, subject to constraints on the number of consecutive games each team plays at home and away. The discussion that follows regards exclusively feasibility issues.
Simple and easy moves to be applied within a local search method for the traveling tournament problem consist, for example, of team swaps (the opponents of any two teams are interchanged over all rounds) or round swaps (the games played in any two rounds are interchanged), as illustrated in Figure 4.19.
However, moves such as round swaps and team swaps are not sufficient to make the search space graph connected, and some solutions may be unreachable from the initial solution. The game rotation neighborhood consists of forcing any given game to be played in a particular round. Only this neighborhood is capable of breaking the inner structure of the 1-factorization corresponding to the initial solution and making the search space graph connected. Nevertheless, whenever a game is removed from the round where it is currently played and forced into a different round, infeasibilities are created in both rounds. Appropriate modifications must be applied to avoid clashes of teams playing more than once in the same round. To eliminate these infeasibilities, new edges have to be successively removed from one round and reassigned to another, until a feasible solution is recovered. The sequence of reassignments of games to new rounds gives rise to an ejection chain, as illustrated in Figures 4.20 to 4.22 for an instance of the traveling tournament problem with six teams.
Fig. 4.19
Round swap and team swap moves for the traveling tournament problem.
Fig. 4.20
Ejection chain moving game (1, 3) from round 5 to round 2: first three moves.
Fig. 4.21
Ejection chain moving game (1, 3) from round 5 to round 2: intermediate three moves.
Fig. 4.22
Ejection chain moving game (1, 3) from round 5 to round 2: final three moves.
In Figure 4.20(a), game (1, 3) is selected to be reassigned from round 5 to round 2. In each of the seven steps in Figures 4.20(b) to 4.22(b), one game is reassigned to a new round and another game is selected for reassignment. Finally, in Figure 4.22(c) game (2, 3) is assigned to round 5 and a feasible solution is reached.
To conclude, ejection chains are based on the principle of generating compound sequences of moves by linked steps in which changes in selected elements cause other elements to be ejected from their current state, position, or value assignment. Although often too costly in computation time to be used as regular moves within a local search method, they are very effective for generating perturbations and for search diversification.
## 4.5 Going beyond the first local optimum
We have shown that local search methods always stop at the first local optimum, from which they are unable to escape. In the following, we illustrate two extended variants of local search that may overcome this limitation.
### 4.5.1 Tabu search and short-term memory
Tabu search is a memory-based metaheuristic whose philosophy is to derive and exploit a collection of principles of intelligent problem solving. The method is based on procedures designed to cross boundaries of feasibility or local optimality, which are often treated as barriers. It guides a local search procedure to explore the solution space beyond local optimality. In its simplest version, which makes exclusive use of short-term memory to avoid cycling, tabu search may be seen as a powerful extension of a local search procedure that accepts nonimproving moves to escape from locally optimal solutions, until some alternative stopping criterion is satisfied.
In this context, a reasonable strategy to extend a pure local search method beyond local optimality consists in accepting a small, limited number of nonimproving moves, giving the search a chance to escape from the first local optimum found without being trapped in a cycle or increasing the computation time too much. A short-term memory has to be used to keep track of recent nonimproving moves whose reversal should be forbidden, so as to avoid cycling, i.e., revisiting a solution already visited in a previous iteration.
### 4.5.2 Variable neighborhood descent
A local optimum with respect to some neighborhood is not necessarily optimal with respect to another neighborhood. For example, a locally optimal solution for the traveling salesman problem under the 2-opt neighborhood may not be optimal within the 3-opt neighborhood. Changes of neighborhoods can be successfully performed within a local search procedure and, in some cases, are instrumental. Exploring different neighborhoods in an appropriate order can save a significant amount of computation time.
In this context, small neighborhoods or those whose elements can be quickly evaluated may be explored first, with the search moving to progressively larger or more complex neighborhoods as locally optimal solutions are found within the lower order neighborhoods. This local search strategy is called variable neighborhood descent (VND) and its main steps are presented in the pseudo-code of Figure 4.23 for a minimization problem, in which N k denotes the k-th neighborhood to be explored, for k = 1,..., k max .
Fig. 4.23
Pseudo-code of a VND local search procedure for a minimization problem.
At any iteration of a VND local search strategy, the algorithm moves from the current solution to the best of its neighbors within the current neighborhood, whenever the best neighbor improves upon the current solution. The search starts from a given initial solution S. A flag indicating whether or not an improving solution was found is set to .TRUE. in line 1. The loop in lines 2 to 14 is performed until it becomes impossible to replace the current solution by a better neighbor. The flag is reset to .FALSE. in line 3 at the beginning of a new iteration. The index of the current neighborhood is set initially to 1 in line 4. The loop in lines 5 to 13 consecutively investigates all neighborhoods. Line 6 returns the best neighbor solution S′ of the current solution S within the current neighborhood N k . If the test in line 7 detects that S′ is better than the current solution S, then S is replaced by S′ in line 8, the neighborhood index is reset to 1 in line 9, and the flag is reset to .TRUE. in line 10, indicating that a better solution was found. Otherwise, the neighborhood counter is increased by one, indicating that a higher-order neighborhood will be investigated next. The algorithm returns the local optimum S with respect to all neighborhoods in line 15.
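An executable sketch of VND following this description (the interface, with each neighborhood given as a function from a solution to its list of neighbors, is our own choice):

```python
def vnd(s, f, neighborhoods):
    """Variable neighborhood descent for minimization: restart from the first
    neighborhood after every improvement, and move to the next (higher-order)
    neighborhood whenever the current one yields no improving neighbor."""
    improved = True
    while improved:
        improved = False
        k = 0
        while k < len(neighborhoods):
            neighbors = neighborhoods[k](s)
            sp = min(neighbors, key=f) if neighbors else None   # best neighbor in N_{k+1}
            if sp is not None and f(sp) < f(s):
                s, k, improved = sp, 0, True    # improvement: restart at the first neighborhood
            else:
                k += 1                          # no gain: try the next neighborhood
    return s

# toy use: N1 = steps of 1, N2 = steps of 3, on the integers 0..10
f = lambda x: (x - 5) ** 2
n1 = lambda x: [y for y in (x - 1, x + 1) if 0 <= y <= 10]
n2 = lambda x: [y for y in (x - 3, x + 3) if 0 <= y <= 10]
print(vnd(0, f, [n1, n2]))   # 5
```

The returned solution is locally optimal with respect to every neighborhood supplied, which is precisely what distinguishes VND from a single-neighborhood descent.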
## 4.6 Final remarks
We have seen that local search methods start from an initial solution and iteratively improve it until a local optimum is found. They are memoryless search methods that are very sensitive to the initial solution and stop at the first local optimum, being unable to escape from it. They are also sensitive to the neighborhood structure and to the search strategy applied to explore each neighborhood.
Metaheuristics are general high-level procedures that coordinate simple heuristics and rules to find good (often optimal) approximate solutions to computationally difficult combinatorial optimization problems. Among them, we find simulated annealing, tabu search, GRASP, VNS, genetic algorithms, scatter search, ant colonies, and others. They are based on distinct paradigms and offer different mechanisms to escape from locally optimal solutions, as opposed to greedy algorithms or local search methods. Metaheuristics are among the most effective solution strategies for solving combinatorial optimization problems in practice and they have been applied to a large variety of areas and situations. The customization (or instantiation) of some metaheuristic to a given problem yields a heuristic to the problem.
In the next chapter we give an introductory presentation of the fundamentals of the GRASP (greedy randomized adaptive search procedures) metaheuristic, which is one of the most effective approximate solution methods for hard combinatorial optimization problems. GRASP is one of the alternatives to overcome some of the main limitations of basic local search methods, such as the sensitivity to the initial solution and stopping at the first local optimum.
## 4.7 Bibliographical notes
Local search methods are a common component of a number of metaheuristics. Hoos and Stützle (2005) defined stochastic local search algorithms to be methods based on local search that make use of randomization to generate or select candidate solutions for combinatorial optimization problems. Yagiura and Ibaraki (2002) traced the history of local search since the work of Croes (1958), where a local search algorithm for the traveling salesman problem was proposed. Kernighan and Lin (1970) and Lin and Kernighan (1973) were early proponents of local search for, respectively, graph partitioning and the traveling salesman problem. Michelis et al. (2007) discussed theoretical aspects of local search. The simplex algorithm developed by Dantzig (1953) can be seen as a local search algorithm for solving linear programming problems.
The principles discussed in Sections 4.1 to 4.3 of this chapter are usually described in books, chapters, and articles devoted to metaheuristics, such as simulated annealing, tabu search, GRASP, variable neighborhood search, iterated local search, genetic algorithms, and ant colony optimization, which all make use of local search. In particular, we refer the reader to the aforementioned book of Hoos and Stützle (2005), to the textbook on tabu search by Glover and Laguna (1997), to different chapters in the handbooks edited by Glover and Kochenberger (2003), Gendreau and Potvin (2010), and Burke and Kendall (2005; 2014), and to the book chapters by Rego and Glover (2002) and Yagiura and Ibaraki (2002).
We showed in Section 4.4 that simple moves can be extended to define broader neighborhoods associated with compound moves that are called ejection chains. They were defined by Rego and Glover (2002) as variable depth methods that generate a sequence of interrelated simple moves to create more complex moves. We refer the reader to Cao and Glover (1997), Glover (1996a), Glover and Punnen (1997), Pesch and Glover (1997), Glover (1991), and Rego (1998) for a comprehensive description of ejection chains and their applications to the traveling salesman problem and other optimization problems in graphs. Dorndorf and Pesch (1994), Laguna et al. (1995), Yagiura et al. (2004), and Cavique et al. (1999) applied ejection chains to problems in other domains. The traveling tournament problem was introduced in the seminal paper by Easton et al. (2001). Ribeiro and Urrutia (2007) presented a successful application of ejection chains to generate perturbations in the context of a hybridization of GRASP with iterated local search to solve the mirrored variant of the traveling tournament problem. We also refer the reader to Kendall et al. (2010) and Ribeiro (2012) for surveys of optimization problems in sports. Accounts of the iterated local search metaheuristic (ILS) were presented by Lourenço et al. (2003), Martin and Otto (1996), and Martin et al. (1991).
We showed in Section 4.5 that tabu search and variable neighborhood descent represent two extended variants of local search that make it possible to go beyond the first local optimum encountered. Tabu search is a memory-based metaheuristic proposed and developed by Glover (1989; 1990) in two seminal papers, see also Glover and Laguna (1997). A similar idea was independently developed by Hansen (1986) in what was called the steepest-ascent mildest-descent method. The use of a tabu search procedure based on a small short-term memory to replace a pure local search method for going beyond the first local optimum was successfully used by Souza et al. (2004) in the context of a GRASP heuristic developed for the capacitated minimum spanning tree problem. The idea of using nested neighborhoods to improve a local search procedure was around for a long time, see, e.g., the aforementioned applications to the traveling salesman problem (Lin, 1965; Lin and Kernighan, 1973) and to graph partitioning (Kernighan and Lin, 1970). Variable neighborhood descent was completely described and its VND acronym coined by Mladenović and Hansen (1997), see also Hansen and Mladenović (2002; 2003). VND was used to implement the local search phase of GRASP heuristics for the Steiner problem in graphs (Martins et al., 2000) and for the phylogeny problem (Andreatta and Ribeiro, 2002), among other applications.
References
A.A. Andreatta and C.C. Ribeiro. Heuristics for the phylogeny problem. Journal of Heuristics, 8:429–447, 2002.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2005.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques. Springer, New York, 2nd edition, 2014.
B. Cao and F. Glover. Tabu search and ejection chains – Application to a node weighted version of the cardinality-constrained TSP. Management Science, 43:908–921, 1997.
L. Cavique, C. Rego, and I. Themido. Subgraph ejection chains and tabu search for the crew scheduling problem. Journal of the Operational Research Society, 50:608–616, 1999.
G.A. Croes. A method for solving traveling-salesman problems. Operations Research, 6:791–812, 1958.
G.B. Dantzig. Linear programming and extensions. Princeton University Press, Princeton, 1953.
U. Dorndorf and E. Pesch. Fast clustering algorithms. INFORMS Journal on Computing, 6:141–153, 1994.
K. Easton, G. Nemhauser, and M.A. Trick. The travelling tournament problem: Description and benchmarks. In T. Walsh, editor, Principles and practice of constraint programming, volume 2239 of Lecture Notes in Computer Science, pages 580–585. Springer, Berlin, 2001.
M. Gendreau and J.-Y. Potvin, editors. Handbook of metaheuristics. Springer, New York, 2nd edition, 2010.
F. Glover. Tabu search – Part I. ORSA Journal on Computing, 1:190–206, 1989.
F. Glover. Tabu search – Part II. ORSA Journal on Computing, 2:4–32, 1990.
F. Glover. Multilevel tabu search and embedded search neighborhoods for the traveling salesman problem. Technical report, University of Colorado, Boulder, 1991.
F. Glover. Ejection chains, reference structures and alternating path methods for traveling salesman problems. Discrete Applied Mathematics, 65:223–253, 1996a.
F. Glover and G. Kochenberger, editors. Handbook of metaheuristics. Kluwer Academic Publishers, Boston, 2003.
F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, Boston, 1997.
F. Glover and A.P. Punnen. The travelling salesman problem: New solvable cases and linkages with the development of approximation algorithms. Journal of the Operational Research Society, 48:502–510, 1997.
P. Hansen. The steepest ascent mildest descent heuristic for combinatorial programming. In Proceedings of the Congress on Numerical Methods in Combinatorial Optimization, pages 70–145, Capri, 1986.
P. Hansen and N. Mladenović. Developments of variable neighborhood search. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 415–439. Kluwer Academic Publishers, Boston, 2002.
P. Hansen and N. Mladenović. Variable neighborhood search. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 145–184. Kluwer Academic Publishers, Boston, 2003.
H.H. Hoos and T. Stützle. Stochastic local search: Foundations and applications. Elsevier, New York, 2005.
G. Kendall, S. Knust, C.C. Ribeiro, and S. Urrutia. Scheduling in sports: An annotated bibliography. Computers & Operations Research, 37:1–19, 2010.
B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal, 49:291–307, 1970.
M. Laguna, J.P. Kelly, J.L. González-Velarde, and F. Glover. Tabu search for the multilevel generalized assignment problem. European Journal of Operational Research, 82:176–189, 1995.
S. Lin. Computer solutions of the traveling salesman problem. Bell System Technical Journal, 44:2245–2260, 1965.
S. Lin and B.W. Kernighan. An effective heuristic algorithm for the traveling-salesman problem. Operations Research, 21:498–516, 1973.
H.R. Lourenço, O. Martin, and T. Stützle. Iterated local search. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 321–353. Kluwer Academic Publishers, Boston, 2003.
O. Martin and S.W. Otto. Combining simulated annealing with local search heuristics. Annals of Operations Research, 63:57–75, 1996.
O. Martin, S.W. Otto, and E.W. Felten. Large-step Markov chains for the traveling salesman problem. Complex Systems, 5:299–326, 1991.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. A parallel GRASP for the Steiner tree problem in graphs using a hybrid local search strategy. Journal of Global Optimization, 17:267–283, 2000.
W. Michelis, E.H.L. Aarts, and J. Korst. Theoretical aspects of local search. Springer, Berlin, 2007.
N. Mladenović and P. Hansen. Variable neighborhood search. Computers & Operations Research, 24:1097–1100, 1997.
E. Pesch and F. Glover. TSP ejection chains. Discrete Applied Mathematics, 76:165–181, 1997.
C. Rego. Relaxed tours and path ejections for the traveling salesman problem. European Journal of Operational Research, 106:522–538, 1998.
C. Rego and F. Glover. Local search and metaheuristics. In G. Gutin and A.P. Punnen, editors, The traveling salesman problem and its variations, pages 309–368. Kluwer Academic Publishers, Boston, 2002.
C.C. Ribeiro. Sports scheduling: Problems and applications. International Transactions in Operational Research, 19:201–226, 2012.
C.C. Ribeiro and S. Urrutia. Heuristics for the mirrored traveling tournament problem. European Journal of Operational Research, 179:775–787, 2007.
M.C. Souza, C. Duhamel, and C.C. Ribeiro. A GRASP heuristic for the capacitated minimum spanning tree problem using a memory-based local search strategy. In M.G.C. Resende and J. Souza, editors, Metaheuristics: Computer decision-making, pages 627–658. Kluwer Academic Publishers, Boston, 2004.
M. Yagiura and T. Ibaraki. Local search. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 104–123. Oxford University Press, 2002.
M. Yagiura, T. Ibaraki, and F. Glover. An ejection chain approach for the generalized assignment problem. INFORMS Journal on Computing, 16:133–151, 2004.
# 5. GRASP: The basic heuristic
This chapter presents the basic structure of a greedy randomized adaptive search procedure (or, more simply, GRASP). We first introduce random and semi-greedy multistart procedures and show how solutions produced by both procedures differ. The hybridization of a semi-greedy procedure with a local search method constitutes a GRASP heuristic. The chapter concludes with some implementation details, including stopping criteria.
## 5.1 Random multistart
A multistart procedure is an algorithm that repeatedly applies a solution construction procedure and outputs the best solution found over all trials. Each trial, or iteration, of a multistart procedure is applied under different conditions. An example of a random multistart procedure for minimization is shown in Figure 5.1. The algorithm repeatedly generates random solutions. Similar to the GREEDY algorithm presented in Figure 3.1 of Chapter 3, a new random solution is generated in line 3 by adding to the partial solution (initially empty) a new feasible ground set element, one element at a time. Unlike in the greedy algorithm, each ground set element is chosen at random from the set of candidate ground set elements. If the solution has a better objective function value than the incumbent solution, then it is saved and made the incumbent in lines 5 and 6. The best solution over all iterations is returned as the solution of the random multistart algorithm.
Fig. 5.1
Pseudo-code of a random multistart procedure for a minimization problem.
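The multistart loop of Figure 5.1 can be sketched as follows; the random construction procedure is abstracted as a callable, and the names and interface are illustrative rather than taken from the book:

```python
import random

def random_multistart(construct_random, f, max_iters, seed=0):
    """Random multistart for minimization: repeatedly build a random
    solution and keep the best one found over all iterations."""
    rng = random.Random(seed)
    best, f_best = None, float("inf")
    for _ in range(max_iters):
        s = construct_random(rng)    # line 3: build a new random solution
        if f(s) < f_best:            # lines 5-6: update the incumbent
            best, f_best = s, f(s)
    return best, f_best
```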
## 5.2 Semi-greedy multistart
In Chapter 3 we introduced the adaptive greedy algorithm whose pseudo-code was given in Figure 3.8. In line 5 of the pseudo-code, the index of the next ground set element to be added to the partial solution under construction is chosen. Since there may exist more than one index i that minimizes the greedy function g(i), the algorithm needs to have a tie-breaking rule. If ties are not broken at random, then embedding a greedy algorithm in a multistart strategy would be useless, since the same solution would be produced at each iteration of the multistart procedure.
We also introduced in Figure 3.18 of Chapter 3 the semi-greedy construction procedure, which adds randomization to the greedy algorithm. The semi-greedy algorithm can also be embedded in a multistart framework, as shown in Figure 5.2. This algorithm is almost identical to the random multistart method, except that solutions are generated with a semi-greedy procedure instead of at random. Note that each invocation of procedure SemiGreedy(⋅ ) in line 3 of Figure 5.2 is independent of the others, therefore producing independent solutions.
Fig. 5.2
Pseudo-code of a semi-greedy multistart procedure for a minimization problem.
The semi-greedy procedure can use either a quality-based or a cardinality-based restricted candidate list (RCL), as described in Section 3.4. In the former case, a quality-enforcing parameter α regulates how random or how greedy the construction will be. In a minimization problem, the value α = 0 leads to a purely greedy construction, since it places in the RCL only the ground set elements whose cost of inclusion in the solution is minimum. On the other hand, the value α = 1 leads to a random construction, since it places all feasible ground set elements in the RCL. Other values of α, i.e., 0 < α < 1, mix greediness and randomness in the construction. The same outcomes can be achieved with a cardinality-based restricted candidate list. A cardinality-enforcing parameter value k = 1 places in the RCL only one candidate element, the one with minimum cost of inclusion in the solution, and therefore corresponds to a greedy construction. On the other hand, a parameter value equal to the number of candidate elements places all of them in the RCL, and therefore corresponds to a random construction. Intermediate values of k mix greediness and randomness in the construction.
Conversely, in the case of a maximization problem, a value α = 1 would lead to a greedy construction, since it would place in the RCL all ground set elements for which the cost associated with their inclusion in the solution is maximum. A value α = 0 would lead to a random construction, since it would place all feasible ground set elements in the RCL.
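A quality-based RCL construction can be sketched as follows, using the minimization convention above (α = 0 greedy, α = 1 random). The fixed solution size k and the static cost function are simplifying assumptions of this sketch; in an adaptive greedy setting the costs would be re-evaluated with respect to the partial solution:

```python
def semi_greedy(elements, cost, k, alpha, rng):
    """Quality-based semi-greedy construction for minimization: pick k
    ground set elements, each chosen at random from an RCL defined by
    the parameter alpha in [0, 1]."""
    solution, candidates = [], list(elements)
    for _ in range(k):
        costs = [cost(e) for e in candidates]
        c_min, c_max = min(costs), max(costs)
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [e for e in candidates if cost(e) <= threshold]
        e = rng.choice(rcl)       # random choice within the RCL
        solution.append(e)
        candidates.remove(e)
    return solution
```

With alpha = 0 the RCL holds only minimum-cost elements and the construction is greedy; with alpha = 1 every candidate is eligible and the construction is purely random.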
Figure 5.3 shows the distribution of solution values produced by a random multistart procedure and by a semi-greedy multistart algorithm with the RCL parameter α = 0. 85 on an instance of the maximum covering problem. In this problem, we are given a set of m demand points, each with an associated weight, a set of n potential facility locations, each of which can provide service (or cover) to a given subset of the demand points, and are asked to find p ≤ n facility locations such that the sum of the weights of the demand points covered by the p facility points is maximized. The figure compares the two distributions with the greedy solution value and the best known solution value for this problem instance. It illustrates four important points:
1. 1.
Semi-greedy solutions are on average much better than random solutions.
2. 2.
There is more variance in the solution values produced by a random multistart method than by a semi-greedy multistart algorithm.
3. 3.
The greedy solution is on average better than both the random and the semi-greedy solutions but, even if ties are broken at random, it has less variance than the random or semi-greedy solutions.
4. 4.
Random, semi-greedy, and greedy solutions are usually sub-optimal.
Fig. 5.3
Distributions of 5000 random solution values and 5000 semi-greedy solution values for an instance of the maximum covering problem. The figure also shows the values of the greedy solution and of the best known solution for this problem instance.
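The maximum covering objective used in this experiment can be evaluated directly. A minimal sketch, with an illustrative data layout (a cover set per facility and a weight per demand point; these names are not from the book):

```python
def covered_weight(chosen, covers, weight):
    """Maximum covering objective: total weight of the demand points
    covered by the chosen facility locations. covers[j] is the set of
    demand points served by facility j; weight[i] is the weight of
    demand point i."""
    covered = set().union(*(covers[j] for j in chosen))
    return sum(weight[i] for i in covered)
```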
Figure 5.4 further illustrates the first three points above. It shows the distribution of 1000 independent semi-greedy solutions with RCL parameters α = 0 (random), 0.2, 0.4, 0.6, 0.8, and 1 (greedy) on an instance of the maximum weighted satisfiability problem. In this problem, we are given m disjunctive clauses C_1, ..., C_m, with corresponding real-valued weights w_1, w_2, ..., w_m, involving the Boolean variables x_1, ..., x_n and their complements. The problem is to find a truth assignment of 0 (false) and 1 (true) values to these variables such that the sum of the weights of the satisfied clauses (i.e., clauses that evaluate to true) is maximized.
Fig. 5.4
Distribution of semi-greedy solution values as a function of the quality-based RCL parameter α (1000 repetitions were recorded for each value of α) on an instance of the maximum weighted satisfiability problem.
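The weighted satisfiability objective can be computed clause by clause. In this sketch a clause is a list of literals, where literal +i means x_i appears positively and −i means its complement appears; this integer encoding is an assumption of the example:

```python
def sat_weight(assignment, clauses, weights):
    """Maximum weighted satisfiability objective: sum of the weights of
    the satisfied clauses. assignment[i] is the truth value of x_i; a
    clause is satisfied if at least one of its literals evaluates to
    true under the assignment."""
    total = 0
    for clause, w in zip(clauses, weights):
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            total += w
    return total
```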
## 5.3 GRASP
Local search was introduced in Chapter 4 as a solution improvement procedure that is given a starting solution and iteratively explores the neighborhood of the current solution, looking for a neighbor with a better objective function value. If such a solution is found, it is made the current solution and the algorithm proceeds with a new iteration. Local search ends when no solution in the neighborhood of the current solution has a better objective function value than the current solution. In this case, the current solution is called a local optimum.
We observed that in designing a local search algorithm, one is usually given an objective function but has the flexibility to choose one or more neighborhood structures. We also observed that given an objective function and a neighborhood structure, the success of a local search algorithm to find a global optimum solution will depend on the starting solution it uses, among other factors. Regardless of the neighborhood search strategy used, some starting solutions always lead to a global optimum, while others always lead to a local optimum that is not globally optimal. Others, yet, can lead to either a global optimum or to a local optimum that is not globally optimal, depending on the search strategy used. Having the capability of producing different starting solutions for the local search, one would like to increase the likelihood of producing at least one starting solution that leads to a global optimum with the application of a local search procedure.
A greedy randomized adaptive search procedure (GRASP) is the hybridization of a semi-greedy algorithm with a local search method embedded in a multistart framework. The method consists of multiple applications of local search, each starting from a solution generated with a semi-greedy construction procedure. The best local optimum, over all GRASP iterations, is returned as the solution provided by the algorithm.
Figure 5.5 illustrates a basic GRASP heuristic for minimization. After initializing the value of the incumbent in line 1, the GRASP iterations are carried out in the while loop in lines 2 to 12. A solution S is constructed with a semi-greedy algorithm in line 3. As shown in Chapter 3, a semi-greedy algorithm may not always generate a feasible solution. When this happens, a repair procedure must be invoked in line 5 to make changes in S so that it becomes feasible (alternatively, solution S may be simply discarded and followed by a new run of the semi-greedy algorithm, until a feasible solution is built). In line 7, local search is applied starting from a feasible solution provided by the semi-greedy algorithm or, if necessary, by the repair procedure. We use LOCAL-SEARCH to denote any variant of the local search methods considered in Sections 4.3.1 and 4.5.2, such as FIRST-IMPROVEMENT, BEST-IMPROVEMENT, or VND. If the objective function value f(S) of the local minimum produced in line 7 is better than the objective function value f ∗ of the incumbent, then the local minimum is made the incumbent in line 9 and its objective function value is recorded as f ∗ in line 10. The while loop is repeated until some stopping criterion is satisfied. The best solution found over all GRASP iterations is returned in line 13 as the GRASP solution.
Fig. 5.5
Pseudo-code of a basic GRASP heuristic for minimization.
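The basic GRASP loop of Figure 5.5 can be sketched as follows, assuming the construction procedure always returns a feasible solution (so the repair step of line 5 is omitted) and using a fixed iteration count as the stopping criterion:

```python
import random

def grasp(construct, local_search, f, max_iters, seed=0):
    """Basic GRASP for minimization: semi-greedy construction followed
    by local search, keeping the best local optimum found over all
    iterations."""
    rng = random.Random(seed)
    best, f_best = None, float("inf")    # line 1: initialize the incumbent
    for _ in range(max_iters):           # stopping criterion: iteration count
        s = construct(rng)               # line 3: semi-greedy construction
        s = local_search(s)              # line 7: improve to a local optimum
        if f(s) < f_best:                # lines 8-10: update the incumbent
            best, f_best = s, f(s)
    return best, f_best
```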
Figure 5.6 shows the distribution of the solution values obtained after local search is applied to 1000 solutions built by the semi-greedy algorithm as a function of the RCL quality-enforcing parameter α of the semi-greedy construction procedure for an instance of the maximum weighted satisfiability problem. This figure is similar to Figure 5.4, with the difference being that here local search is applied to the semi-greedy solution. For each value of α, 1000 GRASP iterations were carried out and a histogram was produced showing the frequency of solution values in different cost ranges. The distributions show that the variance of the solution values decreases as α increases, as already observed in Figure 5.4 for the distributions of the solution values obtained by semi-greedy construction. As occurs with semi-greedy construction, GRASP solutions improve on average as we move from a totally random construction to a greedy construction. However, they differ in one important way from those of Figure 5.4. The best solution found, over all 1000 runs, improves as we move from random to semi-greedy construction (until some value of parameter α), and then deteriorates as α approaches 1. This is illustrated better in Figure 5.7, where we superimpose two plots: one for the average solution value and the other for the best solution value, both displayed as a function of α.
Fig. 5.6
Distribution of the solution values obtained after local search as a function of the quality-based parameter α of the semi-greedy construction procedure (1000 repetitions for each value of α) on an instance of the maximum weighted satisfiability problem.
Fig. 5.7
Best and average solution values for GRASP as a function of the RCL parameter α for 1000 GRASP iterations on an instance of the maximum weighted satisfiability problem.
Figures 5.8 and 5.9 show, respectively, the objective function values for solutions of an instance of the maximum covering problem constructed with random and semi-greedy algorithms, each followed by local search in a multistart procedure. For each iteration, the plots show in red the value of the constructed solution and in blue the value of the local maximum solution. The iterations are sorted in increasing order of the values of their local maxima. The figures illustrate further why the iterations of a random multistart method with local search are longer than those of a GRASP. While random construction can be slightly faster than semi-greedy construction, this does not compensate for the poor quality of the randomly constructed solutions when compared to the semi-greedy solutions. Note that the average value of solutions constructed with the random approach is 3.55, while the average value of solutions constructed with the semi-greedy algorithm is 9.70. This indicates that the path taken by local search to a local optimum from the semi-greedy solution is much shorter than the path taken from the random solution.
Fig. 5.8
Random construction and local maximum solution values, sorted by local maximum values, for an instance of the maximum covering problem.
Fig. 5.9
Semi-greedy construction and local maximum solution values, sorted by local maximum values, for an instance of the maximum covering problem.
Figure 5.10 shows, for an instance of the maximum weighted satisfiability problem, the effect of different RCL parameter values on the Hamming distance between the constructed solutions and the corresponding local maxima, the number of moves made by local search, and the local search and total running times. The figure shows a strong correlation between Hamming distance, number of moves taken by local search, and local search running time. Figure 5.11 displays, again for the same instance of the maximum covering problem considered in Figures 5.8 and 5.9, the best objective function solution value as a function of running time for GRASP (with α = 0. 85), random multistart (GRASP with α = 0) with local search, and greedy multistart (GRASP with α = 1) with local search. While greedy multistart with local search fails to find the best known solution of value 9.92926, GRASP finds it after only 126 seconds, while random multistart with local search takes 152,664 seconds to reach that solution, i.e., over one thousand times longer.
Fig. 5.10
Total GRASP running time, total local search running time, average Hamming distance between constructed solution and local maximum, and average number of local search moves as a function of the RCL parameter α on 1000 GRASP iterations on an instance of the maximum weighted satisfiability problem. Note that since the local search traverses a 1-flip neighborhood, the curve for the number of moves made by local search coincides with the curve for the Hamming distance between the starting solution and the local maximum.
Fig. 5.11
Best solution value for GRASP, random multistart with local search, and greedy multistart with local search as a function of time (in seconds) for an instance of the maximum covering problem. GRASP reaches the best solution in less than one thousandth of the time taken by random multistart with local search.
## 5.4 Accelerating GRASP
The local search phase takes considerably longer than the construction phase in most GRASP applications. Many times, a single execution of the local search algorithm may be more time-consuming than the total time spent constructing all starting solutions along the main GRASP loop. In addition to the quick cost updates, candidate lists, and circular search strategies already explored in Section 4.3, a number of filtering strategies have been used to speed up the local search phase of GRASP.
Filtering consists of not applying local search at every GRASP iteration, but only to the most promising solutions built by the construction phase. One strategy is to apply local search only if the semi-greedy solution is better than a given threshold. Another frequently applied strategy considers an integer parameter η and applies local search only to the best solution built along the η previous applications of the construction phase. Since the local search phase is usually much more time-consuming, this strategy may reduce by a factor of up to η the total time needed to perform a given number of iterations.
Another filtering strategy consists in computing and exploiting performance statistics collected along successive applications of local search. We describe a typical, simple use of this idea. For a minimization problem, one can keep track of the maximum relative reduction ρ that local search has achieved over the value of the starting solution S created by the construction algorithm in the same iteration. Although the value of ρ may increase as the number of iterations grows, it is likely to stabilize quickly. Since it becomes progressively less likely that local search will reduce the objective function f(S) by more than ρ, we may skip the application of local search whenever f(S) > ρ ⋅ f ∗, where f ∗ is the value of the incumbent solution, with low risk of missing good starting solutions.
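This statistics-based filter can be sketched as follows. Here ρ is tracked as the largest ratio between a starting value and the local optimum reached from it, which is one plausible reading of the rule, for a minimization problem with positive costs:

```python
def update_rho(rho, f_start, f_local_opt):
    """Track the largest relative reduction achieved by local search:
    the maximum over all iterations of f_start / f_local_opt."""
    return max(rho, f_start / f_local_opt)

def filter_local_search(f_start, f_incumbent, rho):
    """Return True if local search should run: when f_start > rho *
    f_incumbent, the observed improvement power of local search makes
    it unlikely to beat the incumbent, so the iteration is skipped."""
    return f_start <= rho * f_incumbent
```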
Other variations of these strategies exist and have been applied. Although all of them significantly accelerate the main GRASP loop, there is always the possibility that some good initial solutions are discarded by filtering, leading to a small deterioration in the quality of the best solution obtained at the end of the algorithm.
## 5.5 Stopping GRASP
The main drawback of most metaheuristics is the absence of effective stopping criteria. Most implementations of these algorithms stop after performing a given maximum number of iterations, a given maximum number of consecutive iterations without improvement in the best known solution value, or after the stabilization of the set of elite solutions found along the search. In some cases, the algorithm performs an excessive number of unnecessary iterations because the best solution was found early in the search (as often happens in GRASP implementations). In other situations, the algorithm stops just before the iteration in which an optimal solution would have been found. Dual bounds can be used to implement quality-based stopping rules, but they are often hard to compute or far from the optimal value, which makes them unusable in either situation.
Randomization plays a very important role in the design of metaheuristics. Effective probabilistic stopping rules can be applied to randomized metaheuristics.
### 5.5.1 Probabilistic stopping rule
Let X be a random variable that denotes the objective function value of the local minima obtained at each iteration of a GRASP heuristic for a minimization problem. Let the probability density function and cumulative probability distribution of X be denoted, respectively, by f_X(⋅ ) and F_X(⋅ ). Let f_k be the solution value obtained at iteration k of a GRASP heuristic and f_1, ..., f_k be the sample formed by the solution values obtained along the first k iterations. Let f_X^k(⋅ ) and F_X^k(⋅ ) be, respectively, estimates of the probability density function and of the cumulative probability distribution of the random variable X, obtained from the first k GRASP iterations.
Let UB_k be the value of the best solution found along the first k GRASP iterations. Therefore, the probability of finding a solution value smaller than or equal to UB_k in the next iteration can be estimated by

$$F_X^k(UB_k) = \int_{-\infty}^{UB_k} f_X^k(\tau)\, d\tau.$$
For the sake of computational efficiency, the value of F_X^k(UB_k) can be recomputed periodically or whenever the incumbent improves, rather than at every iteration of the heuristic.
For any given threshold β, the GRASP iterations can be interrupted when F_X^k(UB_k) becomes less than or equal to β, i.e., as soon as the probability of finding in the next iteration a solution at least as good as the incumbent becomes smaller than or equal to the threshold β. The probability F_X^k(UB_k) can also be used to estimate the number of iterations the algorithm must perform to find a new solution at least as good as the incumbent. Since the user can estimate the average time taken by each GRASP iteration, the threshold defining the stopping criterion can either be fixed or determined online so as to limit the computation time when the probability of finding an improving solution becomes very small.
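As a concrete illustration, the probability estimate at the heart of this rule can be sketched in a few lines of Python. The normal fit used below anticipates Section 5.5.2, and the sample values and incumbent are made up for the example; this is a sketch, not the book's implementation.

```python
import math

def normal_cdf(x, mean, std):
    """Cumulative distribution of N(mean, std) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def prob_improving(sample, incumbent):
    """Estimate F_X^k(UB_k): the probability that the next iteration finds
    a solution value <= incumbent, under a normal fit of the sample."""
    k = len(sample)
    mean = sum(sample) / k
    std = (sum((f - mean) ** 2 for f in sample) / (k - 1)) ** 0.5
    return normal_cdf(incumbent, mean, std)

# made-up local-optimum values from 8 GRASP iterations; incumbent is 365.0
values = [372.9, 374.1, 371.5, 376.0, 373.2, 375.4, 370.8, 374.7]
p = prob_improving(values, 365.0)
print(p)  # a very small probability: a candidate point to stop
```

Comparing p against the threshold β then decides whether another iteration is worth running.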
### 5.5.2 Gaussian approximation for GRASP iterations
Computational experiments and statistical tests have shown that the solution values obtained by GRASP heuristics for a number of combinatorial optimization problems fit a normal distribution. If f_1, ..., f_N denotes the sample formed by all solution values obtained along N GRASP iterations, the null hypothesis that this sample follows a normal distribution usually cannot be rejected at the 90% confidence level by the chi-square test after relatively few iterations are performed.
We illustrate below that the solution values obtained along the GRASP iterations fit a normal distribution with numerical results obtained for four instances of the 2-path network design problem. The chi-square test shows that, already after as few as 50 iterations, the solution values obtained by the heuristic fit a normal distribution very closely. Table 5.1 lists the mean, standard deviation, skewness, and kurtosis for these four instances for N = 50, 100, 500, 1,000, 5,000, and 10,000 GRASP iterations. Skewness measures the symmetry of the original data, while kurtosis measures the shape of the fitted distribution. Ideally, they should be equal to 0 and 3, respectively, in the case of a perfect normal fit. This table shows that the mean consistently converges very quickly to a steady-state value when the number of iterations increases. Furthermore, the mean after 50 iterations is already very close to that of the normal fit after 10,000 iterations. The skewness values are consistently very close to 0, while the measured kurtosis of the sample is always close to 3.
Table 5.1
Statistics for normal fittings for a heuristic to the 2-path network design problem.
Instance | Iterations | Mean | Std. dev. | Skewness | Kurtosis
---|---|---|---|---|---
2pndp50 | 50 | 372.920000 | 7.583772 | 0.060352 | 3.065799
2pndp50 | 100 | 373.550000 | 7.235157 | -0.082404 | 2.897830
2pndp50 | 500 | 373.802000 | 7.318661 | -0.002923 | 2.942312
2pndp50 | 1,000 | 373.854000 | 7.192127 | 0.044952 | 3.007478
2pndp50 | 5,000 | 374.031400 | 7.442044 | 0.019068 | 3.065486
2pndp50 | 10,000 | 374.063500 | 7.487167 | -0.010021 | 3.068129
2pndp70 | 50 | 540.080000 | 9.180065 | 0.411839 | 2.775086
2pndp70 | 100 | 538.990000 | 8.584282 | 0.314778 | 2.821599
2pndp70 | 500 | 538.334000 | 8.789451 | 0.184305 | 3.146800
2pndp70 | 1,000 | 537.967000 | 8.637703 | 0.099512 | 3.007691
2pndp70 | 5,000 | 538.576600 | 8.638989 | 0.076935 | 3.016206
2pndp70 | 10,000 | 538.675600 | 8.713436 | 0.062057 | 2.969389
2pndp90 | 50 | 698.100000 | 9.353609 | -0.020075 | 2.932646
2pndp90 | 100 | 700.790000 | 9.891709 | -0.197567 | 2.612179
2pndp90 | 500 | 701.766000 | 9.248310 | -0.035663 | 2.883188
2pndp90 | 1,000 | 702.023000 | 9.293141 | -0.120806 | 2.753207
2pndp90 | 5,000 | 702.281000 | 9.149319 | 0.059303 | 2.896096
2pndp90 | 10,000 | 702.332600 | 9.196813 | 0.022076 | 2.938744
2pndp200 | 50 | 1,599.240000 | 13.019309 | 0.690802 | 3.311439
2pndp200 | 100 | 1,600.060000 | 14.179436 | 0.393329 | 2.685849
2pndp200 | 500 | 1,597.626000 | 13.052744 | 0.157841 | 3.008731
2pndp200 | 1,000 | 1,597.727000 | 12.828035 | 0.083604 | 3.009355
2pndp200 | 5,000 | 1,598.313200 | 13.017984 | 0.057133 | 3.002759
2pndp200 | 10,000 | 1,598.366100 | 13.066900 | 0.008450 | 3.019011
Figure 5.12 displays the normal distribution fit for each instance and for each value of N. Together with the statistics reported in Table 5.1, these plots illustrate the robustness of the normal fits to the solution values obtained along the iterations of the GRASP heuristic.
Fig. 5.12
Normal distribution fits for four instances of the 2-path network design problem.
Since the null hypothesis cannot be rejected with a 90% confidence level, we can approximate the solution values obtained by a GRASP heuristic with a normal distribution whose fit is progressively better as more iterations are accounted for.
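As a sanity check of this kind of fit, the sample statistics reported in Table 5.1 can be computed directly. The sketch below uses synthetic normally distributed values (not the book's data), so a near-perfect fit is expected by construction; with real GRASP solution values, skewness near 0 and kurtosis near 3 would support the normal approximation.

```python
import math
import random

def moments(sample):
    """Return (mean, std, skewness, kurtosis) of a sample; a normal fit
    is good when skewness is near 0 and kurtosis is near 3."""
    n = len(sample)
    mean = sum(sample) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in sample) / n)
    skew = sum((x - mean) ** 3 for x in sample) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in sample) / (n * std ** 4)
    return mean, std, skew, kurt

random.seed(1)
# surrogate for GRASP local-optimum values (truly normal here, by construction)
values = [random.gauss(374.0, 7.4) for _ in range(10000)]
mean, std, skew, kurt = moments(values)
print(round(skew, 2), round(kurt, 2))  # skewness near 0, kurtosis near 3
```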
### 5.5.3 Stopping rule implementation
Recall that X is a random variable representing the value of the objective function for the local minimum obtained by each GRASP iteration. We illustrated in the previous section that the distribution of X can be approximated by a normal distribution N(m_k, S_k) with mean m_k = (1/k) · Σ_{j=1}^{k} f_j and standard deviation S_k = [(1/(k−1)) · Σ_{j=1}^{k} (f_j − m_k)²]^{1/2}, whose probability density function and cumulative probability distribution are, respectively, f_X^k(·) and F_X^k(·).
The pseudo-code in Figure 5.13 extends the template of a GRASP heuristic for minimization presented in Figure 5.5, implementing the termination rule that stops the GRASP iterations whenever the probability F_X^k(UB_k) of improving the best known solution value becomes smaller than or equal to the threshold β. Lines 14 and 15 update the sample f_1, ..., f_k and the best known solution value UB_k = f* at each iteration k. The mean and the standard deviation of the fitted normal distribution in iteration k are computed in line 16. The probability of finding a solution whose value is better than the currently best known solution value is computed in line 17.
Fig. 5.13
Pseudo-code of a basic GRASP with probabilistic stopping rule.
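The loop of Figure 5.13 can be sketched in Python as follows. The `iterate` function stands in for one full GRASP iteration (construction plus local search); here it is simulated by normal draws with made-up parameters, so this is an illustrative sketch rather than the book's implementation.

```python
import math
import random

def normal_cdf(x, mean, std):
    """Cumulative distribution of N(mean, std) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def grasp_with_stopping_rule(iterate, beta=1e-3, min_samples=50, max_iters=100000):
    """Run GRASP iterations until the estimated probability of improving
    the incumbent (under a normal fit of the sample) drops to beta."""
    sample = []
    best = float("inf")
    for k in range(1, max_iters + 1):
        f_k = iterate()          # one construction + local search iteration
        sample.append(f_k)
        best = min(best, f_k)
        if k >= min_samples:     # wait until the normal fit is minimally reliable
            mean = sum(sample) / k
            std = (sum((f - mean) ** 2 for f in sample) / (k - 1)) ** 0.5
            if normal_cdf(best, mean, std) <= beta:
                break            # improving the incumbent is now too unlikely
    return best, k

random.seed(7)
# stand-in for a real GRASP iteration: local-optimum values fit N(374, 7.4)
best, iters = grasp_with_stopping_rule(lambda: random.gauss(374.0, 7.4))
print(best, iters)
```

The `min_samples` guard mirrors the practical need for a few dozen observations before the normal fit is trusted.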
This approach also makes it possible to apply stopping rules based on estimating the number of iterations needed to improve the value of the best solution found by each percentage point. For example, consider instance 2pndp90 of the 2-path network design problem, with the threshold β = 10⁻³. Figure 5.14 plots the expected number of additional iterations needed to find a solution that improves the best known solution value by each percentage point that might be sought. For instance, the expected number of iterations needed to improve the best solution value found at termination by 0.5% is 12,131. If one seeks a percentage improvement of 1%, then the expected number of additional iterations to be performed increases to 54,153.
Fig. 5.14
Estimated number of additional iterations needed to improve the best solution value.
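The estimate behind such a plot follows a geometric argument: if each iteration independently hits the improved target with probability p = F_X^k(target), the expected number of additional iterations is 1/p. The sketch below uses illustrative numbers loosely inspired by instance 2pndp90, not the book's actual data.

```python
import math

def normal_cdf(x, mean, std):
    """Cumulative distribution of N(mean, std) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def expected_iterations_to_improve(mean, std, best, pct):
    """Expected number of additional iterations to find a solution pct %
    better than best (minimization): success probability p = F(target)
    per iteration, so the waiting time is geometric with expectation 1/p."""
    target = best * (1.0 - pct / 100.0)
    return 1.0 / normal_cdf(target, mean, std)

# hypothetical fitted mean/std and incumbent value for a minimization instance
for pct in (0.5, 1.0):
    print(pct, round(expected_iterations_to_improve(702.3, 9.2, 670.0, pct)))
```

As expected, demanding a larger percentage improvement sharply increases the estimated number of iterations.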
## 5.6 GRASP for multiobjective optimization
In this section, we consider a multiobjective optimization problem in which one seeks a solution S* belonging to the set of feasible solutions F that optimizes a set of k objective functions f_j(S) = Σ_{i∈S} c_i^j, for j = 1, ..., k. Specifically, we want to determine the set  of efficient points (usually called the efficient Pareto frontier). In its minimization form, a point or solution S* ∈ F is said to be efficient if there is no other solution S ∈ F such that f_i(S) ≤ f_i(S*) for all i = 1, ..., k and f_j(S) < f_j(S*) for at least one j ∈ {1, ..., k}. In summary, efficiency requires that a solution to a multiobjective problem be such that no single objective can be improved without deteriorating some other objective. In this context, we say that a solution S* dominates another solution S ∈ F if S* is not worse than S for all the objectives and is better for at least one objective. Similarly, we say that S* weakly dominates S if it is not worse than S with respect to all objectives.
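The dominance test at the core of these definitions is a one-liner. This sketch assumes minimization and represents each solution simply by its vector of objective values.

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((3, 5), (4, 5)))  # True: ties one objective, wins the other
print(dominates((3, 6), (4, 5)))  # False: the two vectors are incomparable
print(dominates((4, 5), (4, 5)))  # False: equal vectors do not dominate
```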
Since GRASP is a heuristic, instead of computing an exact Pareto efficient set, we will determine an approximation of this set. The pseudo-code in Figure 5.15 illustrates a multiobjective GRASP. In line 1, the Pareto efficient set  is initialized empty. The iterations take place in the loop from line 2 to 14. The steps are similar to those of the standard single-objective GRASP, except that multiobjective versions of the construction and local search algorithms are used and the approximate Pareto optimal set has to be managed.
Fig. 5.15
Pseudo-code of a multiobjective GRASP heuristic.
Multiobjective construction takes place in line 3, where a solution S is built. If the Pareto efficient set  is empty, then the constructed solution S is added to it in line 4. Otherwise, if S is not dominated by any solution in , then it is added to  in line 7 and any other solution in  that is dominated by S is removed from  in line 9. In line 13, multiobjective improvement takes as input the constructed solution S and the current Pareto efficient set  and returns an improved solution. Instead of verifying only if the solution returned by the improvement procedure should be included in the Pareto efficient set, multiobjective improvement also verifies every solution visited along the search. The algorithm returns an approximate Pareto efficient set in line 15.
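The archive management described above (add a nondominated solution, then purge anything it dominates) can be sketched as a small update function. Tuples stand in for solutions here; this is a minimal illustration of the bookkeeping, not the book's implementation.

```python
def update_pareto_set(pareto, s, value):
    """Insert solution s with objective vector value into the archive,
    unless it is dominated; remove any archived points it dominates."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    if any(dominates(v, value) or v == value for _, v in pareto):
        return pareto  # s is dominated (or duplicated): keep archive as is
    return [(t, v) for t, v in pareto if not dominates(value, v)] + [(s, value)]

archive = []
for sol, val in [("A", (4, 4)), ("B", (3, 5)), ("C", (5, 3)), ("D", (2, 2))]:
    archive = update_pareto_set(archive, sol, val)
print([s for s, _ in archive])  # only "D" survives: (2, 2) dominates the rest
```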
## 5.7 Bibliographical notes
Early proposals for what we called multistart methods in Section 5.1 can be found in the domains of heuristic scheduling (Muth and Thompson, 1963; Crowston et al., 1963), the traveling salesman problem (Held and Karp, 1970; Lawler et al., 1985), and the knapsack problems with single and multiple constraints (Senju and Toyoda, 1968; Wyman, 1973; Kochenberger et al., 1974). The survey by Martí et al. (2013a) gives an account of multistart methods and briefly sketches historical developments that have motivated the field. Focusing on contributions that define the current state of the art, two categories of multistart methods are considered: memory-based and memoryless procedures.
The semi-greedy multistart algorithm (without local search) discussed in Section 5.2 was proposed by Hart and Shogan (1987) and was independently developed by Feo and Resende (1989). In that paper, Feo and Resende described for the first time a GRASP heuristic, not referring to either the name GRASP or to greedy randomized adaptive search procedures, but simply calling the algorithm a probabilistic heuristic.
GRASP as a metaheuristic was presented and discussed in Section 5.3. This acronym was first introduced in the technical report by Feo et al. (1989) that appeared later as a journal paper in Feo et al. (1994). GRASP, as a general-purpose metaheuristic, was introduced by Feo and Resende (1995). Other tutorials on the method were authored by Pitsoulis and Resende (2002), Resende and Ribeiro (2003b; 2010; 2014), Ribeiro (2002), and Resende and Silva (2011). Annotated bibliographies of GRASP appeared in Festa and Resende (2002; 2009a;b).
The GRASP for the maximum covering problem, for which Figures 5.8, 5.9, and 5.11 were originally produced, appeared in Resende (1998). The GRASP for the maximum weighted satisfiability problem, for which Figures 5.4, 5.6, 5.7, and 5.10 were produced, was presented in Resende et al. (1997). See also Resende and Feo (1996) for the first proposal of a GRASP for the weighted satisfiability problem and Festa et al. (2006) for an improved GRASP for the maximum weighted satisfiability problem.
Filtering strategies discussed in Section 5.4 to accelerate the local search phase were originally proposed and applied by Feo et al. (1994) and Martins et al. (2000). Proposed Bayesian stopping rules were not followed by sufficient computational studies to validate their effectiveness or to give evidence of their efficiency (Bartkutė et al., 2006; Bartkutė and Sakalauskas, 2009; Boender and Kan, 1987; Dorea, 1990; Hart, 1998). Orsenigo and Vercellis (2006) developed a Bayesian framework for stopping rules aimed at controlling the number of iterations in a GRASP heuristic. Stopping rules have also been discussed by Duin and Voss (1999) and Voss et al. (2005) in another context. The statistical estimation of optimal values for combinatorial optimization problems as a way to evaluate the performance of heuristics was also addressed by Rardin et al. (2001) and Serifoglu and Ulusoy (2004). Ribeiro et al. (2011; 2013) proposed and developed the probabilistic stopping rules described in Section 5.5. The 2-path network design problem was introduced and proved to be NP-hard by Dahl and Johannessen (2004). The GRASP heuristic used in the computational experiments with the 2-path network design problem was proposed by Ribeiro and Rosseti (2002; 2007).
Martí et al. (2015) surveyed applications of GRASP to multiobjective optimization, such as the multiobjective knapsack problem (Vianna and Arroyo, 2004), the multicriteria minimum spanning tree problem (Arroyo et al., 2008), the multiobjective quadratic assignment problem (Li and Landa-Silva, 2009), a learning classification problem (Ishida et al., 2009), the biobjective path dissimilarity and biorienteering problems (Martí et al., 2015), environmental investment decision making (Higgins et al., 2008), partial classification of databases (Reynolds and de la Iglesia, 2009; Reynolds et al., 2009), flow shop scheduling (Davoudpour and Ashrafi, 2009), biobjective set packing (Delorme et al., 2010), biobjective commercial territory design (Salazar-Aguilar et al., 2013), path dissimilarity (Martí et al., 2009), line balancing (Chica et al., 2010), and locating and sizing capacitors for reactive power compensation (Antunes et al., 2014), among others.
References
C.H. Antunes, E. Oliveira, and P. Lima. A multi-objective GRASP procedure for reactive power compensation planning. Optimization and Engineering, 15:199–215, 2014.
J.E.C. Arroyo, P.S. Vieira, and D.S. Vianna. A GRASP algorithm for the multi-criteria minimum spanning tree problem. Annals of Operations Research, 159:125–133, 2008.
V. Bartkutė and L. Sakalauskas. Statistical inferences for termination of Markov type random search algorithms. Journal of Optimization Theory and Applications, 141:475–493, 2009.
V. Bartkutė, G. Felinskas, and L. Sakalauskas. Optimality testing in stochastic and heuristic algorithms. Technical Report 12, Vilnius Gediminas Technical University, Vilnius, 2006.
C.G.E. Boender and A.H.G. Rinnooy Kan. Bayesian stopping rules for multistart global optimization methods. Mathematical Programming, 37:59–80, 1987.
M. Chica, O. Cordón, S. Damas, and J. Bautista. A multiobjective GRASP for the 1/3 variant of the time and space assembly line balancing problem. In N. García-Pedrajas, F. Herrera, C. Fyfe, J. Benítez, and M. Ali, editors, Trends in applied intelligent systems, volume 6098 of Lecture Notes in Computer Science, pages 656–665. Springer, Berlin, 2010.
W.B. Crowston, F. Glover, G.L. Thompson, and J.D. Trawick. Probabilistic and parametric learning combinations of local job shop scheduling rules. Technical Report 117, Carnegie-Mellon University, Pittsburgh, 1963.
G. Dahl and B. Johannessen. The 2-path network design problem. Networks, 43:190–199, 2004.
H. Davoudpour and M. Ashrafi. Solving multi-objective SDST flexible flow shop using GRASP algorithm. The International Journal of Advanced Manufacturing Technology, 44:737–747, 2009.
X. Delorme, X. Gandibleux, and F. Degoutin. Evolutionary, constructive and hybrid procedures for the bi-objective set packing problem. European Journal of Operational Research, 204:206–217, 2010.
C. Dorea. Stopping rules for a random optimization method. SIAM Journal on Control and Optimization, 28:841–850, 1990.
C. Duin and S. Voss. The Pilot method: A strategy for heuristic repetition with application to the Steiner problem in graphs. Networks, 34:181–191, 1999.
T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67–71, 1989.
T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6:109–133, 1995.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Technical report, AT&T Bell Laboratories, 1989.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Operations Research, 42:860–878, 1994.
P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 325–367. Kluwer Academic Publishers, Boston, 2002.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part I: Algorithms. International Transactions in Operational Research, 16:1–24, 2009a.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part II: Applications. International Transactions in Operational Research, 16:131–172, 2009b.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted MAXSAT problem. ACM Journal of Experimental Algorithmics, 11:1–16, 2006.
J.P. Hart and A.W. Shogan. Semi-greedy heuristics: An empirical study. Operations Research Letters, 6:107–114, 1987.
W.E. Hart. Sequential stopping rules for random optimization methods with applications to multistart local search. SIAM Journal on Optimization, 9:270–290, 1998.
M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees. Operations Research, 18:1138–1162, 1970.
A.J. Higgins, S. Hajkowicz, and E. Bui. A multi-objective model for environmental investment decision making. Computers & Operations Research, 35:253–266, 2008.
C. Ishida, A. Pozo, E. Goldbarg, and M. Goldbarg. Multiobjective optimization and rule learning: Subselection algorithm or meta-heuristic algorithm? In N. Nedjah, L.M. Mourelle, and J. Kacprzyk, editors, Innovative applications in data mining, pages 47–70. Springer, Berlin, 2009.
G.A. Kochenberger, B.A. McCarl, and F.P. Wyman. A heuristic for general integer programming. Decision Sciences, 5:36–41, 1974.
E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, editors. The traveling salesman problem: A guided tour of combinatorial optimization. John Wiley & Sons, New York, 1985.
H. Li and D. Landa-Silva. An elitist GRASP metaheuristic for the multi-objective quadratic assignment problem. In M. Ehrgott, C.M. Fonseca, X. Gandibleux, J.-K. Hao, and M. Sevaux, editors, Evolutionary multi-criterion optimization, volume 5467 of Lecture Notes in Computer Science, pages 481–494. Springer, Berlin, 2009.
R. Martí, J.L. González-Velarde, and A. Duarte. Heuristics for the bi-objective path dissimilarity problem. Computers & Operations Research, 36:2905–2912, 2009.
R. Martí, M.G.C. Resende, and C.C. Ribeiro. Multi-start methods for combinatorial optimization. European Journal of Operational Research, 226:1–8, 2013a.
R. Martí, V. Campos, M.G.C. Resende, and A. Duarte. Multiobjective GRASP with path relinking. European Journal of Operational Research, 240:54–71, 2015.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. A parallel GRASP for the Steiner tree problem in graphs using a hybrid local search strategy. Journal of Global Optimization, 17:267–283, 2000.
J.F. Muth and G.L. Thompson. Industrial scheduling. Prentice-Hall, Boston, 1963.
C. Orsenigo and C. Vercellis. Bayesian stopping rules for greedy randomized procedures. Journal of Global Optimization, 36:365–377, 2006.
L.S. Pitsoulis and M.G.C. Resende. Greedy randomized adaptive search procedures. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 168–183. Oxford University Press, New York, 2002.
R.L. Rardin and R. Uzsoy. Experimental evaluation of heuristic optimization algorithms: A tutorial. Journal of Heuristics, 7:261–304, 2001.
M.G.C. Resende. Computing approximate solutions of the maximum covering problem using GRASP. Journal of Heuristics, 4:161–171, 1998.
M.G.C. Resende and T.A. Feo. A GRASP for satisfiability. In D.S. Johnson and M.A. Trick, editors, Cliques, coloring, and satisfiability: The second DIMACS implementation challenge, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 499–520. American Mathematical Society, Providence, 1996.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 219–249. Kluwer Academic Publishers, Boston, 2003b.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures: Advances and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 293–319. Springer, New York, 2nd edition, 2010.
M.G.C. Resende and C.C. Ribeiro. GRASP: Greedy randomized adaptive search procedures. In E.K. Burke and G. Kendall, editors, Search methodologies: Introductory tutorials in optimization and decision support systems, chapter 11, pages 287–312. Springer, New York, 2nd edition, 2014.
M.G.C. Resende and R.M.A. Silva. GRASP: Greedy randomized adaptive search procedures. In J.J. Cochran, L.A. Cox, Jr., P. Keskinocak, J.P. Kharoufeh, and J.C. Smith, editors, Encyclopedia of operations research and management science, volume 3, pages 2118–2128. Wiley, New York, 2011.
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Approximate solution of weighted MAX-SAT problems using GRASP. In J. Gu and P.M. Pardalos, editors, Satisfiability problems, volume 35 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 393–405. American Mathematical Society, Providence, 1997.
A.P. Reynolds and B. de la Iglesia. A multi-objective GRASP for partial classification. Soft Computing, 13:227–243, 2009.
A.P. Reynolds, D.W. Corne, and B. de la Iglesia. A multiobjective GRASP for rule selection. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pages 643–650, Montreal, 2009. ACM.
C.C. Ribeiro. GRASP: Une métaheuristique gloutonne et probabiliste. In J. Teghem and M. Pirlot, editors, Optimisation approchée en recherche opérationnelle, pages 153–176. Hermès, Paris, 2002.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro and I. Rosseti. Efficient parallel cooperative implementations of GRASP heuristics. Parallel Computing, 33:21–35, 2007.
C.C. Ribeiro, I. Rosseti, and R.C. Souza. Effective probabilistic stopping rules for randomized metaheuristics: GRASP implementations. In C.A.C. Coello, editor, Learning and intelligent optimization, volume 6683, pages 146–160. Springer, Berlin, 2011.
C.C. Ribeiro, I. Rosseti, and R.C. Souza. Probabilistic stopping rules for GRASP heuristics and extensions. International Transactions in Operational Research, 20:301–323, 2013.
M.A. Salazar-Aguilar, R.Z. Ríos-Mercado, and J.L. González-Velarde. GRASP strategies for a bi-objective commercial territory design problem. Journal of Heuristics, 19:179–200, 2013.
S. Senju and Y. Toyoda. An approach to linear programming with 0-1 variables. Management Science, 15:196–207, 1968.
F.S. Serifoglu and G. Ulusoy. Multiprocessor task scheduling in multistage hybrid flow-shops: A genetic algorithm approach. Journal of the Operational Research Society, 55:504–512, 2004.
D.S. Vianna and J.E.C. Arroyo. A GRASP algorithm for the multi-objective knapsack problem. In Proceedings of the 24th International Conference of the Chilean Computer Science Society, pages 69–75, Arica, 2004. IEEE.
S. Voss, A. Fink, and C. Duin. Looking ahead with the Pilot method. Annals of Operations Research, 136:285–302, 2005.
F.P. Wyman. Binary programming: A decision rule for selecting optimal vs. heuristic techniques. The Computer Journal, 16:135–140, 1973.
# 6. Runtime distributions
Runtime distributions or time-to-target plots display on the ordinate axis the probability that an algorithm will find a solution at least as good as a given target value within a given running time, shown on the abscissa axis. They provide a very useful tool to characterize the running times of stochastic algorithms for combinatorial optimization problems and to compare different algorithms or strategies for solving a given problem. They have been widely used as a tool for algorithm design and comparison.
## 6.1 Time-to-target plots
Let  be an optimization problem and  a randomized heuristic for this problem. Furthermore, let  be a specific instance of  and let look4 be a solution cost target value for this instance.
Heuristic  is run N times on the fixed instance  and the algorithm is made to stop as soon as a solution whose objective function is at least as good as the given target value look4 is found. For each of the N runs, the random number generator used in the implementation of the heuristic is initialized with a distinct seed and, therefore, the runs are assumed to be independent. The solution time of each run is recorded and saved. To compare their empirical and theoretical distributions, we follow a standard graphical methodology for data analysis. This methodology is used to produce the time-to-target plots (TTT-plots) and is described next.
After concluding the N independent runs, solution times are sorted in increasing order. We associate with the i-th sorted solution time t_i a probability p_i = (i − 1/2)/N, and plot the points z_i = (t_i, p_i), for i = 1, ..., N. We comment on this choice of p_i later in Section 6.2. Figure 6.1 illustrates this estimated cumulative probability distribution plot for problem , a GRASP heuristic , instance , and target look4. We can see that the probability that the heuristic finds a solution at least as good as the target value in at most 416 seconds is about 50%, in at most 1064 seconds is about 80%, and in at most 1569 seconds is about 90%.
Fig. 6.1
Cumulative probability distribution plot of measured data.
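Building the points of such a plot is straightforward. The sketch below uses eight hypothetical solution times (in seconds) for one instance/target pair; only the pairing of sorted times with p_i = (i − 1/2)/N matters.

```python
def ttt_points(times):
    """Build the empirical runtime distribution: sort the N measured times
    and pair the i-th smallest time with p_i = (i - 1/2) / N."""
    n = len(times)
    return [(t, (i + 0.5) / n) for i, t in enumerate(sorted(times))]

# hypothetical solution times (seconds) from N = 8 independent runs
times = [416.0, 73.0, 1064.0, 210.0, 1569.0, 598.0, 320.0, 840.0]
points = ttt_points(times)
print(points[0], points[-1])  # (73.0, 0.0625) and (1569.0, 0.9375)
```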
## 6.2 Runtime distribution of GRASP
The plot in Figure 6.1 appears to fit an exponential distribution or, more generally, a shifted exponential distribution. To estimate the parameters of this two-parameter exponential distribution, we first draw the theoretical quantile-quantile plot (or Q-Q plot) for the data. To describe Q-Q plots, we recall that the cumulative distribution function for the two-parameter exponential distribution is given by F(t) = 1 − e^{−(t−μ)/λ}, where λ is the mean of the distribution data (and also the standard deviation of the data) and μ is the shift of the distribution with respect to the ordinate axis.
The quantiles of the data of an empirical distribution are derived from the (sorted) raw data, which in our case are N measured (sorted) running times. Quantiles are cutpoints that group a set of sorted observations into classes of equal (or approximately equal) size. With each value p_i, i = 1, ..., N, we associate a p_i-quantile q(p_i) of the theoretical distribution. For each p_i-quantile we have, by definition, that F(q(p_i)) = p_i. Hence, q(p_i) = F^{−1}(p_i) and, therefore, for the two-parameter exponential distribution we have q(p_i) = −λ · ln(1 − p_i) + μ. Note that if we were to use p_i = i/N, for i = 1, ..., N, then q(p_N) would be undefined, since p_N = 1 and F^{−1}(1) is unbounded.
A theoretical quantile-quantile plot (or theoretical Q-Q plot) is obtained by plotting the quantiles of the data of an empirical distribution against the quantiles of a theoretical distribution. This involves three steps. First, the data (in this case, the measured solution times) are sorted in ascending order. Second, the quantiles of the theoretical exponential distribution are obtained. Finally, a plot of the data against the theoretical quantiles is made.
In a situation where the theoretical distribution is a close approximation of the empirical distribution, the points in the Q-Q plot will have a nearly straight configuration. In a plot of the data against a two-parameter exponential distribution with λ = 1 and μ = 0, the points would tend to follow the line y = λ · x + μ. Consequently, parameters λ and μ of the two-parameter exponential distribution can be estimated, respectively, by the slope λ̂ and the intercept μ̂ of the line depicted in the Q-Q plot.
The Q-Q plot shown in Figure 6.2 is obtained by plotting the measured times in the ordinate against the quantiles of a two-parameter exponential distribution with λ = 1 and μ = 0 in the abscissa, given by q(p_i) = −ln(1 − p_i), for i = 1, ..., N. To avoid possible distortions caused by outliers, we do not estimate the distribution mean by the data mean or by linear regression on the points of the Q-Q plot. Instead, we estimate the slope λ̂ of the line y = λ · x + μ using the upper quartile q_u and the lower quartile q_l of the data, which are, respectively, the q(3/4) and q(1/4) quantiles. We take λ̂ = (z_u − z_l)/(q_u − q_l) as an estimate of the slope, where z_u and z_l are the u-th and l-th points of the ordered measured times, respectively. This informal estimation of the mean of the distribution of the measured data is robust, since it is not distorted by a few outliers. Consequently, the estimate for the shift is μ̂ = z_l − λ̂ · q_l.
Fig. 6.2
Q-Q plot showing fitted line.
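The quartile-based estimation just described can be sketched as follows. The synthetic runtimes are generated from a known shifted exponential (shift 100 s, mean 400 s) so the recovered parameters can be checked; with real data, the measured times would be used instead.

```python
import math
import random

def fit_shifted_exponential(times):
    """Estimate (lam, mu) of a shifted exponential from measured runtimes,
    using quartiles of the theoretical quantiles q(p) = -ln(1 - p) so the
    fit is robust to outliers."""
    z = sorted(times)
    n = len(z)
    p = [(i + 0.5) / n for i in range(n)]
    q = [-math.log(1.0 - pi) for pi in p]
    l, u = n // 4, (3 * n) // 4           # lower/upper quartile positions
    lam = (z[u] - z[l]) / (q[u] - q[l])   # slope of the fitted line
    mu = z[l] - lam * q[l]                # shift (intercept)
    return lam, mu

random.seed(3)
# synthetic runtimes: shift mu = 100 s plus an exponential with mean 400 s
sample = [100.0 + random.expovariate(1.0 / 400.0) for _ in range(1000)]
lam, mu = fit_shifted_exponential(sample)
print(lam, mu)  # estimates should land close to 400 and 100
```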
To analyze the straightness of the Q-Q plots, we superimpose them with variability information. For each plotted point, we show plus and minus one standard deviation in the vertical direction from the line fitted to the plot. An estimate of the standard deviation for point z_i, i = 1, ..., N, of the Q-Q plot is σ̂ = λ̂ · [p_i / ((1 − p_i) · N)]^{1/2}. Figure 6.3 shows an example of a Q-Q plot with superimposed variability information.
Fig. 6.3
Q-Q plot with variability information.
When observing a theoretical quantile-quantile plot with superimposed standard deviation information, one should avoid turning such information into a formal test. One important fact to keep in mind is that the natural variability of the data generates departures from straightness even if the model of the distribution is valid. The most important reason for portraying standard deviations is that they give a sense of the relative variability of the points in the different regions of the plot. However, since one is trying to make simultaneous inferences from many individual inferences, it is difficult to use standard deviations to judge departures from the reference distribution. For example, the probability that a particular point deviates from the reference line by more than two standard deviations is small, but the probability that any of the points deviates from the line by two standard deviations is probably much greater. In order statistics, this is made more difficult by the high correlation between neighboring points: if one plotted point deviates by more than one standard deviation, there is a good chance that many neighboring points will as well. Another point to keep in mind is that standard deviations vary substantially across the Q-Q plot. As one can observe in Figure 6.3, the standard deviation of the points near the high end is substantially larger than that of the points near the other end.
Once the two parameters of the distribution have been estimated, a superimposed plot of the empirical and theoretical distributions can be made. Figure 6.4 depicts the superimposed empirical and theoretical distributions corresponding to the Q-Q plot in Figure 6.3.
Fig. 6.4
Superimposed empirical runtime distribution and best exponential fit.
The runtime distribution of a pure GRASP heuristic has been shown experimentally to behave as a random variable that fits an exponential distribution. Later in this book, we will discuss the implication of this observation with respect to parallel implementations of GRASP and restart strategies.
However, in the case of more elaborate heuristics where setup times are not negligible, the runtimes fit a two-parameter or shifted exponential distribution.
Therefore, the probability density function of the time-to-target random variable is given by f(t) = (1∕λ) ⋅ e −t∕λ in the first case (exponential distribution) and by f(t) = (1∕λ) ⋅ e −(t−μ)∕λ in the second (shifted exponential distribution), with the parameters λ and μ being associated with the shape and the shift of the exponential function, respectively. Figure 6.4 illustrates this result, depicting the superimposed empirical and theoretical distributions observed for an instance of the maximum covering problem in which one wants to choose 500 out of 1000 facility locations such that, of the 10,000 customers, the sum of the weights of those that are covered is maximized. The best known solution for this instance is 33,343,542 and the target solution value used was 33,339,175 (about 0.01% off the best known solution).
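The chapter fits the two parameters graphically from the Q-Q plot; purely as an illustration, a rough moment-style fit of the shifted exponential f(t) = (1∕λ) ⋅ e −(t−μ)∕λ can be sketched as follows (the function name and this estimation shortcut are our own, not the fitting procedure used in the book):

```python
def fit_shifted_exponential(times):
    """Crude fit of a shifted exponential f(t) = (1/lam) * exp(-(t - mu)/lam).

    mu is estimated by the smallest observed time-to-target (the 'setup' shift)
    and lam by the mean excess over mu (the shape). This moment-based shortcut
    only approximates the graphical Q-Q-plot fit described in the text.
    """
    n = len(times)
    mu = min(times)                       # shift: time before any run can finish
    lam = sum(t - mu for t in times) / n  # shape: mean time beyond the shift
    return mu, lam
```

For a pure GRASP (negligible setup time), the estimated μ is close to zero and the fit reduces to the one-parameter exponential case.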
However, if path-relinking is applied as an intensification step at the end of each GRASP iteration (see Chapter in this book), then the iterations are no longer independent and the memoryless characteristic of GRASP is destroyed. This also happens in the case of cooperative parallel implementations of GRASP (see also Chapter in this book). Consequently, the time-to-target random variable may not fit an exponential distribution in such situations. This result is illustrated by two implementations of GRASP with bidirectional path-relinking. The first is an application to the 2-path network design problem. The runtime distribution and the corresponding quantile-quantile plot for an instance with 80 nodes and 800 origin-destination pairs are depicted in Figure 6.5. The second is an application to the three-index assignment problem. Runtime distributions and the corresponding quantile-quantile plots for Balas and Saltzman problems 22.1 (target value set to 8) and 24.1 (target value set to 7) are shown in Figures 6.6 and 6.7, respectively. For both heuristics and these three example instances, we observe that points steadily deviate by more than one standard deviation from the estimate for the upper quantiles in the quantile-quantile plots (i.e., many points associated with large computation times fall outside the plus or minus one standard deviation bounds). Therefore, we cannot say that these runtime distributions are exponentially distributed.
Fig. 6.5
Runtime distribution and quantile-quantile plot for GRASP with bidirectional path-relinking of an instance of the 2-path network design problem with 80 nodes and 800 origin-destination pairs, with target set to 588.
Fig. 6.6
Runtime distribution and quantile-quantile plot for GRASP with bidirectional path-relinking on Balas and Saltzman problem 22.1, with the target value set to 8.
Fig. 6.7
Runtime distribution and quantile-quantile plot for GRASP with bidirectional path-relinking on Balas and Saltzman problem 24.1, with the target value set to 7.
## 6.3 Comparing algorithms with exponential runtime distributions
We assume the existence of two randomized algorithms A 1 and A 2 for the approximate solution of some optimization problem. Furthermore, we assume that their solution times fit exponential (or shifted exponential) distributions. We denote by X 1 (resp. X 2) the continuous random variable representing the time needed by algorithm A 1 (resp. A 2) to find a solution as good as a given target value:

$$f_{X_1}(\tau) = \lambda_1\, e^{-\lambda_1 (\tau - T_1)}, \quad \tau \ge T_1,$$

and

$$f_{X_2}(\tau) = \lambda_2\, e^{-\lambda_2 (\tau - T_2)}, \quad \tau \ge T_2,$$

where T 1, λ 1, T 2, and λ 2 are parameters (λ 1 and λ 2 define the shape of each shifted exponential distribution, whereas T 1 and T 2 denote by how much each of them is shifted). The cumulative probability distribution and the probability density function of X 1 are depicted in Figure 6.8.
Fig. 6.8
Probability density function and cumulative probability distribution of the random variable X 1.
Since both algorithms stop when they find a solution at least as good as the target, we can say that algorithm A 1 performs better than A 2 if the former stops before the latter. Therefore, we must evaluate the probability Pr(X 1 ≤ X 2) that the random variable X 1 takes a value smaller than or equal to X 2. Conditioning on the value of X 2 and applying the total probability theorem, we obtain

$$\Pr(X_1 \le X_2) = \int_{-\infty}^{\infty} \Pr(X_1 \le X_2 \mid X_2 = \tau)\, f_{X_2}(\tau)\, d\tau = \int_{T_2}^{\infty} \Pr(X_1 \le \tau)\, \lambda_2\, e^{-\lambda_2 (\tau - T_2)}\, d\tau.$$

Let ν = τ − T 2. Then, d ν = d τ and

$$\Pr(X_1 \le X_2) = \int_{0}^{\infty} \Pr(X_1 \le \nu + T_2)\, \lambda_2\, e^{-\lambda_2 \nu}\, d\nu.$$

(6.1)
Using the formula of the cumulative probability function of the random variable X 1 (see Figure 6.8), we obtain

$$\Pr(X_1 \le \nu + T_2) = 1 - e^{-\lambda_1 (\nu + T_2 - T_1)}.$$

(6.2)
Replacing (6.2) in (6.1) and solving the integral, we conclude that

$$\Pr(X_1 \le X_2) = 1 - \frac{\lambda_2}{\lambda_1 + \lambda_2}\, e^{-\lambda_1 (T_2 - T_1)}.$$

(6.3)
This result can be better interpreted by rewriting expression (6.3) as

$$\Pr(X_1 \le X_2) = \left[1 - e^{-\lambda_1 (T_2 - T_1)}\right] + e^{-\lambda_1 (T_2 - T_1)}\, \frac{\lambda_1}{\lambda_1 + \lambda_2}.$$

(6.4)
The first term of the right-hand side of equation (6.4) is the probability that 0 ≤ X 1 ≤ T 2, in which case X 1 is clearly less than or equal to X 2. The second term is given by the product of the factors $e^{-\lambda_1 (T_2 - T_1)}$ and λ 1∕(λ 1 +λ 2), in which the former corresponds to the probability that X 1 ≥ T 2 and the latter to the probability that X 1 is less than or equal to X 2, given that X 1 ≥ T 2.
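The decomposition above can be evaluated directly. The sketch below (in Python; the function name is ours) assumes shifted exponential runtimes with λ 1 and λ 2 read as rates, consistent with the λ 1∕(λ 1 +λ 2) factor described in the text, and T 1 ≤ T 2 so that the memoryless tail argument applies:

```python
import math

def prob_x1_leq_x2(lam1, T1, lam2, T2):
    """Pr(X1 <= X2) for shifted exponential runtimes, following the
    two-factor decomposition of expression (6.4).

    Assumes densities lam_i * exp(-lam_i * (t - T_i)) for t >= T_i.
    The main path assumes T1 <= T2; otherwise roles are swapped and
    the complement is returned (ties have probability zero).
    """
    if T2 < T1:
        return 1.0 - prob_x1_leq_x2(lam2, T2, lam1, T1)
    head = 1.0 - math.exp(-lam1 * (T2 - T1))   # Pr(X1 <= T2): X1 finishes before X2 can
    tail = math.exp(-lam1 * (T2 - T1))         # Pr(X1 >= T2): both runs still active
    return head + tail * lam1 / (lam1 + lam2)  # race of two memoryless tails
```

As a sanity check, two identically distributed algorithms tie with probability 1/2, and shifting one distribution to the right only decreases its chance of finishing first.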
To illustrate the above result, we consider two algorithms for solving the server replication for reliable multicast problem. Algorithm A 1 is an implementation of pure GRASP with α = 0.2, while algorithm A 2 is a pure GRASP heuristic with α = 0.9. The runs were performed on an Intel Core2 Quad with a 2.40 GHz clock and 4 GB of RAM. Figure 6.9 depicts the runtime distributions of each algorithm, obtained from 500 runs, with different seeds, on an instance with the target value set at 2830. The parameters of the two distributions are λ 1 = 0.524422349, T 1 = 0.36, λ 2 = 0.190533895, and T 2 = 0.51. Applying expression (6.3), we get Pr(X 1 ≤ X 2) = 0.684125. This probability is consistent with Figure 6.10, in which we superimpose the runtime distributions of the two pure GRASP heuristics for the same instance. The plots in this figure show that the pure GRASP with α = 0.2 outperforms the one with α = 0.9, since the runtime distribution of the former is to the left of that of the latter.
Fig. 6.9
Runtime distributions of an instance of the server replication for reliable multicast problem with m = 25 and the target value set at 2830.
Fig. 6.10
Superimposed runtime distributions of pure GRASP with α = 0. 2 and pure GRASP with α = 0. 9.
If the solution times do not fit exponential (or two-parameter shifted exponential) distributions, as in the case of GRASP with path-relinking heuristics, then the closed form result established in expression (6.3) does not hold. Algorithms in this situation cannot be compared by this approach. The next section extends this approach to general runtime distributions.
## 6.4 Comparing algorithms with general runtime distributions
Let X 1 and X 2 be two continuous random variables, with cumulative probability distributions $F_{X_1}$ and $F_{X_2}$ and probability density functions $f_{X_1}$ and $f_{X_2}$, respectively. Then,

$$\Pr(X_1 \le X_2) = \int_{-\infty}^{\infty} \Pr(X_1 \le \tau)\, f_{X_2}(\tau)\, d\tau = \int_{0}^{\infty} \Pr(X_1 \le \tau)\, f_{X_2}(\tau)\, d\tau,$$

since $f_{X_2}(\tau) = 0$ for any τ < 0. For an arbitrary small real number ɛ, the above expression can be rewritten as

$$\Pr(X_1 \le X_2) = \sum_{i=0}^{\infty} \int_{i\cdot\varepsilon}^{(i+1)\cdot\varepsilon} \Pr(X_1 \le \tau)\, f_{X_2}(\tau)\, d\tau.$$

(6.5)
Since Pr(X 1 ≤ i ⋅ ɛ) ≤ Pr(X 1 ≤ τ) ≤ Pr(X 1 ≤ (i + 1) ⋅ ɛ) for i ⋅ ɛ ≤ τ ≤ (i + 1) ⋅ ɛ, replacing Pr(X 1 ≤ τ) by Pr(X 1 ≤ i ⋅ ɛ) and by Pr(X 1 ≤ (i + 1) ⋅ ɛ) in (6.5) leads to

$$\sum_{i=0}^{\infty} \Pr(X_1 \le i\cdot\varepsilon) \int_{i\cdot\varepsilon}^{(i+1)\cdot\varepsilon} f_{X_2}(\tau)\, d\tau \;\le\; \Pr(X_1 \le X_2) \;\le\; \sum_{i=0}^{\infty} \Pr(X_1 \le (i+1)\cdot\varepsilon) \int_{i\cdot\varepsilon}^{(i+1)\cdot\varepsilon} f_{X_2}(\tau)\, d\tau.$$
Let L(ɛ) and R(ɛ) be the value of the left- and right-hand sides of the above expression, respectively, with Δ(ɛ) = R(ɛ) − L(ɛ) being the difference between the upper and lower bounds to Pr(X 1 ≤ X 2). Then, we have that
$$\varDelta(\varepsilon) = \sum_{i=0}^{\infty} \left[F_{X_1}((i+1)\cdot\varepsilon) - F_{X_1}(i\cdot\varepsilon)\right] \int_{i\cdot\varepsilon}^{(i+1)\cdot\varepsilon} f_{X_2}(\tau)\cdot d\tau.$$
(6.6)
Let $\delta_{\varepsilon} = \max_{i \ge 0}\left[F_{X_1}((i+1)\cdot\varepsilon) - F_{X_1}(i\cdot\varepsilon)\right]$. Since $F_{X_1}((i+1)\cdot\varepsilon) - F_{X_1}(i\cdot\varepsilon) \le \delta_{\varepsilon}$ for i ≥ 0, expression (6.6) turns out to be

$$\varDelta(\varepsilon) \le \delta_{\varepsilon} \sum_{i=0}^{\infty} \int_{i\cdot\varepsilon}^{(i+1)\cdot\varepsilon} f_{X_2}(\tau)\cdot d\tau.$$
Consequently,

$$\varDelta(\varepsilon) \le \delta_{\varepsilon},$$
(6.7)
i.e., the difference Δ(ɛ) between the upper and lower bounds to Pr(X 1 ≤ X 2) (or the absolute error in the integration) is smaller than or equal to $\delta_{\varepsilon}$. Since $F_{X_1}$ is continuous, $\delta_{\varepsilon}$ vanishes as ɛ decreases. Therefore, this difference can be made as small as desired by choosing a sufficiently small value for ɛ.
In order to numerically evaluate a good approximation to Pr(X 1 ≤ X 2), we select an appropriate value of ɛ such that the resulting approximation error Δ(ɛ) is sufficiently small. Next, we compute L(ɛ) and R(ɛ) to obtain the approximation

$$\Pr(X_1 \le X_2) \approx \frac{L(\varepsilon) + R(\varepsilon)}{2}.$$

(6.8)
In practice, the above probability distributions are unknown. Instead of the distributions, the information available is limited to a sufficiently large number N 1 (resp. N 2) of observations of the random variable X 1 (resp. X 2). Since the value of Δ(ɛ) is also unknown beforehand, the appropriate value of ɛ cannot be estimated in advance. We then proceed iteratively as follows.
Let t 1(j) (resp. t 2(j)) be the value of the j-th smallest observation of the random variable X 1 (resp. X 2), for j = 1,..., N 1 (resp. N 2). We set the bounds a = min{t 1(1), t 2(1)} and b = max{t 1(N 1), t 2(N 2)} and choose an arbitrary number h of integration intervals to compute an initial value ɛ = (b − a)∕h for each integration interval. For sufficiently small values of the integration interval ɛ, the probability density function $f_{X_1}$ in the interval [i ⋅ ɛ, (i + 1) ⋅ ɛ] can be approximated by $\hat{f}_{X_1}$, where

$$\hat{f}_{X_1} = \frac{\left\vert \{\, j : i\cdot\varepsilon \le t_1(j) < (i+1)\cdot\varepsilon,\ j = 1,\ldots,N_1 \,\}\right\vert}{N_1 \cdot \varepsilon}.$$

(6.9)
The same approximations hold for random variable X 2.
Finally, the value of Pr(X 1 ≤ X 2) can be computed as in expression (6.8), using the estimates $\hat{f}_{X_1}$ and $\hat{f}_{X_2}$ in the computation of L(ɛ) and R(ɛ). If the approximation error Δ(ɛ) = R(ɛ) − L(ɛ) becomes sufficiently small, then the procedure stops. Otherwise, the value of ɛ is halved and the above steps are repeated until convergence.
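The iterative procedure above can be sketched directly from the two samples of observed runtimes. In the following Python sketch (function names, the default number h of intervals, and the tolerance are our own illustrative choices), each bin's empirical X 2 mass is weighted by the empirical Pr(X 1 ≤ ⋅) at the bin's endpoints to produce L(ɛ) and R(ɛ), and ɛ is halved until Δ(ɛ) = R(ɛ) − L(ɛ) is small enough:

```python
from bisect import bisect_left, bisect_right

def _bounds(t1, t2, a, b, eps):
    """Lower and upper bounds L(eps), R(eps) on Pr(X1 <= X2),
    computed from sorted samples t1, t2 over bins [a + i*eps, a + (i+1)*eps)."""
    n1, n2 = len(t1), len(t2)
    nbins = int((b - a) / eps) + 1
    L = R = 0.0
    for i in range(nbins):
        lo, hi = a + i * eps, a + (i + 1) * eps
        w = (bisect_left(t2, hi) - bisect_left(t2, lo)) / n2  # empirical X2 mass in bin
        if w == 0.0:
            continue
        L += w * bisect_right(t1, lo) / n1  # lower bound uses Pr(X1 <= lo)
        R += w * bisect_right(t1, hi) / n1  # upper bound uses Pr(X1 <= hi)
    return L, R

def estimate_prob(times1, times2, tol=1e-3, h=100, max_halvings=12):
    """Iterative scheme of this section: halve eps until R(eps) - L(eps) <= tol.
    Returns the midpoint estimate (6.8) together with L, R, and the final eps."""
    t1, t2 = sorted(times1), sorted(times2)
    a = min(t1[0], t2[0])
    b = max(t1[-1], t2[-1])
    eps = (b - a) / h or 1.0  # guard against a degenerate zero-width range
    for _ in range(max_halvings):
        L, R = _bounds(t1, t2, a, b, eps)
        if R - L <= tol:
            break
        eps /= 2.0
    return 0.5 * (L + R), L, R, eps
```

Note that when the two samples contain tied values, the gap between L(ɛ) and R(ɛ) need not vanish as ɛ shrinks, which is why the loop is also capped by a maximum number of halvings.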
## 6.5 Numerical applications to sequential algorithms
We illustrate next an application of the procedure described in the previous section for the comparison of randomized algorithms (running on the same instance) on three problems: server replication for reliable multicast, routing and wavelength assignment, and 2-path network design.
### 6.5.1 DM-D5 and GRASP algorithms for server replication
Multicast communication consists of simultaneously delivering the same information to many receivers, from single or multiple sources. Network services specially designed for multicast are needed. The scheme used in current multicast services creates a delivery tree, whose root represents the sender, whose leaves represent the receivers, and whose internal nodes represent network routers or relaying servers. Transmission is performed by creating copies of the data at split points of the tree. An important issue regarding multicast communication is how to provide reliable service, ensuring the delivery of all packets from the sender to receivers. A successful technique to provide reliable multicast service is the server replication approach, in which data is replicated at some of the multicast-capable relaying hosts (also called replicated or repair servers) and each of them is responsible for the retransmission of packets to receivers in its group. The problem consists in selecting the best subset of the multicast-capable relaying hosts to act as replicated servers in a multicast scenario. It is a special case of the p-median problem.
DM-GRASP is a hybrid version of GRASP described in Section 7.8 of this book, which incorporates a data mining process. We compare two heuristics for the server replication problem: algorithm A 1 is an implementation of the DM-D5 version of DM-GRASP, in which the mining algorithm is periodically applied, while A 2 is a pure GRASP heuristic. We present results for two instances using the same network scenario, with m = 25 and m = 50 replication servers.
Each algorithm was run 200 times with different seeds. The target was set at 2,818.925 (the best known solution value is 2,805.89) for the instance with m = 25 and at 2,299.07 (the best known solution value is 2,279.84) for the instance with m = 50. Figures 6.11 and 6.12 depict runtime distributions and quantile-quantile plots for DM-D5, for the instances with m = 25 and m = 50, respectively. Running times of DM-D5 did not fit exponential distributions for any of the instances. GRASP solution times were exponential for both.
Fig. 6.11
Runtime distribution and quantile-quantile plot for algorithm DM-D5 on the instance with m = 25 and the target value set at 2,818.925.
Fig. 6.12
Runtime distribution and quantile-quantile plot for algorithm DM-D5 on the instance with m = 50 and the target value set at 2,299.07.
The empirical runtime distributions of DM-D5 and GRASP are superimposed in Figure 6.13. Algorithm DM-D5 outperformed GRASP, since the runtime distribution of DM-D5 is to the left of the distribution of GRASP on both instances, with m = 25 and m = 50. Consistently, the computations show that Pr(X 1 ≤ X 2) = 0.619763 (with L(ɛ) = 0.619450, R(ɛ) = 0.620075, Δ(ɛ) = 0.000620, and ɛ = 0.009552) for the instance with m = 25, and Pr(X 1 ≤ X 2) = 0.854113 (with L(ɛ) = 0.853800, R(ɛ) = 0.854425, Δ(ɛ) = 0.000625, and ɛ = 0.427722) for the instance with m = 50.
Fig. 6.13
Superimposed runtime distributions of DM-D5 and GRASP: (a) Pr(X 1 ≤ X 2) = 0. 619763, and (b) Pr(X 1 ≤ X 2) = 0. 854113.
We also investigate the convergence of the proposed measure with the sample size (i.e., with the number of independent runs of each algorithm). Convergence with the sample size is illustrated next for the same m = 25 instance of the server replication problem, with the same target 2,818.925 already used in the previous experiment. Once again, algorithm A 1 is the DM-D5 version of DM-GRASP and algorithm A 2 is the pure GRASP heuristic. The estimation of Pr(X 1 ≤ X 2) is computed for N = 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 4000, and 5000 independent runs of each algorithm. Table 6.1 shows the results obtained, which are also displayed in Figure 6.14. We notice that the estimation of Pr(X 1 ≤ X 2) stabilizes as the sample size N increases.
Fig. 6.14
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the m = 25 instance of the server replication problem.
Table 6.1
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the m = 25 instance of the server replication problem.
N | L(ɛ) | Pr(X 1 ≤ X 2) | R(ɛ) | Δ(ɛ) | ɛ
---|---|---|---|---|---
100 | 0.655900 | 0.656200 | 0.656500 | 0.000600 | 0.032379
200 | 0.622950 | 0.623350 | 0.623750 | 0.000800 | 0.038558
300 | 0.613344 | 0.613783 | 0.614222 | 0.000878 | 0.038558
400 | 0.606919 | 0.607347 | 0.607775 | 0.000856 | 0.038558
500 | 0.602144 | 0.602548 | 0.602952 | 0.000808 | 0.038558
600 | 0.596964 | 0.597368 | 0.597772 | 0.000808 | 0.038558
700 | 0.591041 | 0.591440 | 0.591839 | 0.000798 | 0.038558
800 | 0.593197 | 0.593603 | 0.594009 | 0.000812 | 0.042070
900 | 0.593326 | 0.593719 | 0.594113 | 0.000788 | 0.042070
1000 | 0.594849 | 0.595242 | 0.595634 | 0.000785 | 0.042070
2000 | 0.588913 | 0.589317 | 0.589720 | 0.000807 | 0.047694
3000 | 0.583720 | 0.584158 | 0.584596 | 0.000875 | 0.047694
4000 | 0.582479 | 0.582912 | 0.583345 | 0.000866 | 0.047694
5000 | 0.584070 | 0.584511 | 0.584953 | 0.000882 | 0.050604
### 6.5.2 Multistart and tabu search algorithms for routing and wavelength assignment
A point-to-point connection between two endnodes of an optical network is called a lightpath. Two lightpaths may use the same wavelength, provided they do not share any common link. The routing and wavelength assignment problem is that of routing a set of lightpaths and assigning a wavelength to each of them, minimizing the number of wavelengths needed. A decomposition strategy is compared with a multistart greedy heuristic. Two networks are used for benchmarking. The first, instance Brazil, has 27 nodes representing the capitals of the 27 states of Brazil, with 70 links connecting them and 702 lightpaths to be routed. Instance Finland is formed by 31 nodes and 51 links, with 930 lightpaths to be routed. Each algorithm was run 200 times with different seeds. The target was set at 24 (the best known solution value) for instance Brazil and at 50 for instance Finland (the best known solution value is 47). Algorithm A 1 is the multistart heuristic, while A 2 is the tabu search decomposition scheme. The multistart solution times fit exponential distributions for both instances. Figures 6.15 and 6.16 display runtime distributions and quantile-quantile plots for instances Brazil and Finland, respectively.
Fig. 6.15
Runtime distribution and quantile-quantile plot for tabu search on Brazil instance with the target value set at 24.
Fig. 6.16
Runtime distribution and quantile-quantile plot for tabu search on Finland instance with the target value set at 50.
The empirical runtime distributions of the decomposition and multistart strategies are superimposed in Figure 6.17. The direct comparison of the two approaches shows that decomposition clearly outperformed the multistart strategy for instance Brazil, since Pr(X 1 ≤ X 2) = 0.13 in this case (with L(ɛ) = 0.129650, R(ɛ) = 0.130350, Δ(ɛ) = 0.000700, and ɛ = 0.008163). However, the situation changes for instance Finland. Although both algorithms have similar performances, multistart is slightly better with respect to the proposed measure, since Pr(X 1 ≤ X 2) = 0.536787 (with L(ɛ) = 0.536525, R(ɛ) = 0.537050, Δ(ɛ) = 0.000525, and ɛ = 0.008804).
Fig. 6.17
Superimposed runtime distributions of multistart and tabu search: (a) Pr(X 1 ≤ X 2) = 0. 13, and (b) Pr(X 1 ≤ X 2) = 0. 536787.
As done for the server replication problem in Section 6.5.1, we also investigate the convergence of the proposed measure with the sample size (i.e., with the number of independent runs of each algorithm). Convergence with the sample size is illustrated next for the Finland instance of the routing and wavelength assignment problem, with the target set at 49. Once again, algorithm A 1 is the multistart heuristic and algorithm A 2 is the tabu search decomposition scheme. The estimation of Pr(X 1 ≤ X 2) is computed for N = 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 4000, and 5000 independent runs of each algorithm. Table 6.2 shows the results obtained, which are also displayed in Figure 6.18. Once again, we notice that the estimation of Pr(X 1 ≤ X 2) stabilizes as the sample size N increases.
Fig. 6.18
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the Finland instance of the routing and wavelength assignment problem.
Table 6.2
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the Finland instance of the routing and wavelength assignment problem.
N | L(ɛ) | Pr(X 1 ≤ X 2) | R(ɛ) | Δ(ɛ) | ɛ
---|---|---|---|---|---
100 | 0.000001 | 0.000200 | 0.000400 | 0.000400 | 1.964844
200 | 0.000100 | 0.004875 | 0.009650 | 0.009550 | 0.000480
300 | 0.006556 | 0.012961 | 0.019367 | 0.012811 | 0.000959
400 | 0.007363 | 0.013390 | 0.019425 | 0.012063 | 0.000959
500 | 0.007928 | 0.014694 | 0.021460 | 0.013532 | 0.000610
600 | 0.006622 | 0.013069 | 0.019517 | 0.012894 | 0.000610
700 | 0.005722 | 0.011261 | 0.016800 | 0.011078 | 0.000610
800 | 0.005033 | 0.011667 | 0.018302 | 0.013269 | 0.000610
900 | 0.004556 | 0.010461 | 0.016367 | 0.011811 | 0.000610
1000 | 0.004100 | 0.009425 | 0.014750 | 0.010650 | 0.000610
2000 | 0.006049 | 0.011580 | 0.017112 | 0.011063 | 0.000610
3000 | 0.007802 | 0.014395 | 0.020987 | 0.013185 | 0.000610
4000 | 0.007408 | 0.013698 | 0.019988 | 0.012580 | 0.000610
5000 | 0.006791 | 0.013090 | 0.019389 | 0.012598 | 0.000623
### 6.5.3 GRASP algorithms for 2-path network design
Given a connected undirected graph with non-negative weights associated with its edges, together with a set of origin-destination nodes, the 2-path network design problem consists in finding a minimum weighted subset of edges containing a path formed by at most two edges between every origin-destination pair. Applications can be found in the design of communication networks, in which paths with few edges are sought to enforce high reliability and small delays.
#### 6.5.3.1 Instance with 90 nodes
We first compare four GRASP heuristics for solving an instance of the 2-path network design problem with 90 nodes. The first heuristic is a pure GRASP (algorithm A 1). The others integrate different path-relinking strategies (see Chapters and ) for search intensification at the end of each GRASP iteration: forward path-relinking (algorithm A 2), bidirectional path-relinking (algorithm A 3), and backward path-relinking (algorithm A 4).
Each algorithm was run 500 independent times on the benchmark instance with 90 nodes and 900 origin-destination pairs, with the solution target value set at 673 (the best known solution value is 639). The runtime distributions and quantile-quantile plots for the different versions of GRASP with path-relinking are shown in Figures 6.19 to 6.21.
Fig. 6.19
Runtime distribution and quantile-quantile plot for GRASP with forward path-relinking on 90-node instance with the target value set at 673.
Fig. 6.20
Runtime distribution and quantile-quantile plot for GRASP with bidirectional path-relinking on 90-node instance with the target value set at 673.
Fig. 6.21
Runtime distribution and quantile-quantile plot for GRASP with backward path-relinking on 90-node instance with the target value set at 673.
The empirical runtime distributions of the four algorithms are superimposed in Figure 6.22. Algorithm A 2 (as well as A 3 and A 4) performs much better than A 1, as indicated by Pr(X 2 ≤ X 1) = 0. 986604 (with L(ɛ) = 0. 986212, R(ɛ) = 0. 986996, Δ(ɛ) = 0. 000784, and ɛ = 0. 029528). Algorithm A 3 outperforms A 2, as illustrated by the fact that Pr(X 3 ≤ X 2) = 0. 636000 (with L(ɛ) = 0. 630024, R(ɛ) = 0. 641976, Δ(ɛ) = 0. 011952, and ɛ = 1. 354218 × 10−6). Finally, we observe that algorithms A 3 and A 4 behave very similarly, although A 4 performs slightly better for this instance, since Pr(X 4 ≤ X 3) = 0. 536014 (with L(ɛ) = 0. 528560, R(ɛ) = 0. 543468, Δ(ɛ) = 0. 014908, and ɛ = 1. 001358 × 10−6).
Fig. 6.22
Superimposed runtime distributions of pure GRASP and three versions of GRASP with path-relinking.
As for the problems considered in Sections 6.5.1 and 6.5.2, we also investigate the convergence of the proposed measure as a function of the sample size (i.e., of the number of independent runs of each algorithm). Convergence with the sample size is illustrated next for the 90-node instance of the 2-path network design problem, with the same target 673 previously used. In this experiment, algorithm A 1 is the GRASP with backward path-relinking heuristic, while algorithm A 2 is the GRASP with bidirectional path-relinking heuristic. The estimation of Pr(X 1 ≤ X 2) is computed for N = 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 4000, and 5000 independent runs of each algorithm. Table 6.3 shows the results, which are also displayed in Figure 6.23. Once again, the estimation of Pr(X 1 ≤ X 2) stabilizes as the sample size N increases.
Fig. 6.23
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the 90-node instance of the 2-path network design problem.
Table 6.3
Convergence of the estimation of Pr(X 1 ≤ X 2) with the sample size for the 90-node instance of the 2-path network design problem.
N | L(ɛ) | Pr(X 1 ≤ X 2) | R(ɛ) | Δ(ɛ) | ɛ
---|---|---|---|---|---
100 | 0.553300 | 0.559150 | 0.565000 | 0.011700 | 4. 387188 × 10−7
200 | 0.553250 | 0.553850 | 0.554450 | 0.001199 | 4. 501629 × 10−7
300 | 0.551578 | 0.557483 | 0.563389 | 0.011811 | 4. 501629 × 10−7
400 | 0.545244 | 0.551241 | 0.557238 | 0.011994 | 4. 730511 × 10−7
500 | 0.546604 | 0.552420 | 0.558236 | 0.011632 | 5. 035686 × 10−7
600 | 0.538867 | 0.544749 | 0.550631 | 0.011764 | 5. 073833 × 10−7
700 | 0.536320 | 0.542181 | 0.548041 | 0.011720 | 5. 073833 × 10−7
800 | 0.537533 | 0.543298 | 0.549064 | 0.011531 | 5. 073833 × 10−7
900 | 0.533912 | 0.539671 | 0.545430 | 0.011517 | 5. 073833 × 10−7
1000 | 0.531595 | 0.537388 | 0.543180 | 0.011585 | 5. 073833 × 10−7
2000 | 0.528224 | 0.533959 | 0.539698 | 0.011469 | 5. 722427 × 10−7
3000 | 0.530421 | 0.536128 | 0.541835 | 0.011414 | 6. 027603 × 10−7
4000 | 0.532695 | 0.538364 | 0.544033 | 0.011338 | 6. 027603 × 10−7
5000 | 0.530954 | 0.536566 | 0.542178 | 0.011225 | 6. 027603 × 10−7
#### 6.5.3.2 Instance with 80 nodes
We next compare five GRASP heuristics for the 2-path network design problem, with and without path-relinking, for solving an instance with 80 nodes and 800 origin-destination pairs, with target value set at 588 (the best known solution value is 577). In this example, the first algorithm is a pure GRASP (algorithm A 1). The other heuristics integrate different path-relinking strategies at the end of each GRASP iteration (see Chapters and ): forward path-relinking (algorithm A 2), bidirectional path-relinking (algorithm A 3), backward path-relinking (algorithm A 4), and mixed path-relinking (algorithm A 5). As before, each heuristic was run independently 500 times.
The empirical runtime distributions of the five algorithms are superimposed in Figure 6.24. Algorithm A 2 (as well as A 3, A 4, and A 5) performs much better than A 1, as indicated by Pr(X 2 ≤ X 1) = 0.970652 (with L(ɛ) = 0.970288, R(ɛ) = 0.971016, Δ(ɛ) = 0.000728, and ɛ = 0.014257). Algorithm A 3 outperforms A 2, as shown by the fact that Pr(X 3 ≤ X 2) = 0.617278 (with L(ɛ) = 0.610808, R(ɛ) = 0.623748, Δ(ɛ) = 0.012940, and ɛ = 1.220703 × 10−6). Algorithm A 4 performs slightly better than A 3 for this instance, since Pr(X 4 ≤ X 3) = 0.537578 (with L(ɛ) = 0.529404, R(ɛ) = 0.545752, Δ(ɛ) = 0.016348, and ɛ = 1.201630 × 10−6). Algorithms A 5 and A 4 also behave very similarly, but A 5 is slightly better for this instance, since Pr(X 5 ≤ X 4) = 0.556352 (with L(ɛ) = 0.547912, R(ɛ) = 0.564792, Δ(ɛ) = 0.016880, and ɛ = 1.001358 × 10−6).
Fig. 6.24
Superimposed empirical runtime distributions of pure GRASP and four GRASP with path-relinking heuristics.
## 6.6 Comparing and evaluating parallel algorithms
We conclude this chapter by describing the use of the runtime distribution methodology to evaluate and compare parallel implementations of stochastic local search algorithms. Once again, the 2-path network design problem is used to illustrate this application.
Figures 6.25 and 6.26 superimpose the runtime distributions of, respectively, cooperative and independent parallel implementations of GRASP with bidirectional path-relinking for the same problem on 2, 4, 8, 16, and 32 processors, on an instance with 100 nodes and 1000 origin-destination pairs, using 683 as target value. Each algorithm was run independently 200 times. We denote by A 1 k (resp. A 2 k ) the cooperative (resp. independent) parallel implementation running on k processors, for k = 2, 4, 8, 16, 32.
Fig. 6.25
Superimposed empirical runtime distributions of cooperative parallel GRASP with bidirectional path-relinking running on 2, 4, 8, 16, and 32 processors.
Fig. 6.26
Superimposed empirical runtime distributions of independent parallel GRASP with bidirectional path-relinking running on 2, 4, 8, 16, and 32 processors.
Table 6.4 shows the probability that the cooperative parallel implementation performs better than the independent implementation on 2, 4, 8, 16, and 32 processors. We observe that the independent implementation performs better than the cooperative implementation on two processors. In that case, the cooperative implementation does not benefit from the availability of two processors, since only one of them performs iterations, while the other acts as the master. However, as the number of processors increases from two to 32, the cooperative implementation performs progressively better than the independent implementation, since more processors are devoted to perform GRASP iterations. The proposed methodology is clearly consistent with the relative behavior of the two parallel versions for any number of processors. Furthermore, it illustrates that the cooperative implementation becomes progressively better than the independent implementation when the number of processors increases.
Table 6.4
Comparing cooperative (algorithm A 1) and independent (algorithm A 2) parallel implementations.
Processors (k) | Pr(X 1 k ≤ X 2 k )
---|---
2 | 0.309784
4 | 0.597253
8 | 0.766806
16 | 0.860864
32 | 0.944938
Table 6.5 displays the probability that each of the two parallel implementations performs better on 2 j+1 than on 2 j processors, for j = 1, 2, 3, 4. Both implementations scale appropriately as the number of processors grows. Once again, we can see that the performance measure appropriately describes the relative behavior of the two parallel strategies and provides insight on how parallel algorithms scale with the number of processors. The table shows numerical evidence to evaluate the trade-offs between computation times and the number of processors in parallel implementations.
Table 6.5
Comparing the parallel implementations on 2 j+1 (algorithm A 1) and 2 j (algorithm A 2) processors, for j = 1, 2, 3, 4.
Processors (a) | Processors (b) | Pr(X 1 a ≤ X 1 b ) | Pr(X 2 a ≤ X 2 b )
---|---|---|---
4 | 2 | 0.766235 | 0.651790
8 | 4 | 0.753904 | 0.685108
16 | 8 | 0.724398 | 0.715556
32 | 16 | 0.747531 | 0.669660
## 6.7 Bibliographical notes
The time-to-target, or runtime distribution, plots introduced in Section 6.1 were first used by Feo et al. (1994). They have also been advocated by Hoos and Stützle (1998b;a) as a way to characterize the execution times of stochastic algorithms for combinatorial optimization. Aiex et al. (2007) developed a Perl program to create time-to-target plots for measured times that are assumed to fit a shifted exponential distribution, following closely the work of Aiex et al. (2002).
Section 6.2 reports that the runtime distributions of GRASP heuristics follow exponential distributions and shows how the best fittings can be obtained. In fact, Aiex et al. (2002), Battiti and Tecchiolli (1992), Dodd (1990), Eikelder et al. (1996), Hoos (1999), Hoos and Stützle (1999), Osborne and Gillett (1991), Selman et al. (1994), Taillard (1991), Verhoeven and Aarts (1995), and others observed that in many implementations of randomized heuristics, such as simulated annealing, genetic algorithms, iterated local search, tabu search, and GRASP, the random variable time-to-target value (i.e., the runtime) is exponentially distributed or fits a shifted exponential distribution. Hoos and Stützle (1998c; 1999) conjectured that this is true for all methods for combinatorial optimization based on stochastic local search.
Aiex et al. (2002) used TTT-plots to show experimentally that the running times of GRASP heuristics fit shifted exponential distributions, reporting computational results for 2400 runs of GRASP heuristics for each of five different problems: maximum stable set (Feo et al., 1994; Resende et al., 1998), quadratic assignment (Li et al., 1994; Resende et al., 1996), graph planarization (Resende and Ribeiro, 1997; Ribeiro and Resende, 1999; Resende and Ribeiro, 2001), maximum weighted satisfiability (Resende et al., 2000), and maximum covering (Resende, 1998). To compare the empirical and the theoretical runtime distributions, a standard graphical methodology for data analysis was used (Chambers et al., 1983). Experiments with instances of the 2-path network design problem and of the three-index assignment problem were reported to show that implementations of GRASP with path-relinking may not follow exponential distributions. The 2-path network design problem was introduced and proved to be NP-hard by Dahl and Johannessen (2004). The GRASP heuristics used in the computational experiments with this problem were proposed by Ribeiro and Rosseti (2002; 2007). The three-index assignment problem considered in the experiments reported by Aiex et al. (2005) was studied by Balas and Saltzman (1991), from which problem instances 22.1 and 24.1 were taken.
The closed form result developed in Section 6.3 to compare two exponential algorithms and the iterative procedure proposed in Section 6.4 to compare two algorithms following generic distributions were first presented by Ribeiro et al. (2009). This work was extended by Ribeiro et al. (2012) and was also applied in the comparison of parallel heuristics. Ribeiro and Rosseti (2015) developed the code to compare runtime distributions of randomized algorithms.
Different problems and algorithms were used in Sections 6.5 and 6.6 to illustrate the application of the iterative procedure to compare generic runtime distributions of two algorithms. Algorithms for solving the server replication for reliable multicast problem were described by Fonseca et al. (2008) and Santos et al. (2008). The DM-GRASP hybrid version of GRASP that incorporates a data mining process appeared in Santos et al. (2008). Its basic principle consisted in mining for patterns found in high-quality solutions to guide the construction of new solutions. Variant DM-D5 appeared in Fonseca et al. (2008).
Noronha and Ribeiro (2006) proposed a decomposition heuristic for solving the routing and wavelength assignment problem. First, a set of possible routes is precomputed for each lightpath. Next, one of the precomputed routes and a wavelength are assigned to each lightpath by a tabu search heuristic solving an instance of the partition coloring problem. Manohar et al. (2002) developed the multistart greedy heuristic for the same problem. The Finland instance of the routing and wavelength assignment problem came from Hyytiä and Virtamo (1998).
References
R.M. Aiex, M.G.C. Resende, and C.C. Ribeiro. Probability distribution of solution time in GRASP: An experimental investigation. Journal of Heuristics, 8:343–373, 2002.
R.M. Aiex, M.G.C. Resende, P.M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS Journal on Computing, 17:224–247, 2005.
R.M. Aiex, M.G.C. Resende, and C.C. Ribeiro. TTTPLOTS: A perl program to create time-to-target plots. Optimization Letters, 1:355–366, 2007.
E. Balas and M.J. Saltzman. An algorithm for the three-index assignment problem. Operations Research, 39:150–161, 1991.
R. Battiti and G. Tecchiolli. Parallel biased search for combinatorial optimization: Genetic algorithms and tabu. Microprocessors and Microsystems, 16:351–367, 1992.
J.M. Chambers, W.S. Cleveland, B. Kleiner, and P.A. Tukey. Graphical methods for data analysis. Duxbury Press, Boston, 1983.
G. Dahl and B. Johannessen. The 2-path network design problem. Networks, 43:190–199, 2004.
N. Dodd. Slow annealing versus multiple fast annealing runs: An empirical investigation. Parallel Computing, 16:269–272, 1990.
H.T. Eikelder, M. Verhoeven, T. Vossen, and E. Aarts. A probabilistic analysis of local search. In I. Osman and J. Kelly, editors, Metaheuristics: Theory and applications, pages 605–618. Kluwer Academic Publishers, Boston, 1996.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Operations Research, 42:860–878, 1994.
E. Fonseca, R. Fuchsuber, L.F.M. Santos, A. Plastino, and S.L. Martins. Exploring the hybrid metaheuristic DM-GRASP for efficient server replication for reliable multicast. In International Conference on Metaheuristics and Nature Inspired Computing, Hammamet, 2008.
H.H. Hoos. On the run-time behaviour of stochastic local search algorithms for SAT. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 661–666, Orlando, 1999. American Association for Artificial Intelligence.
H.H. Hoos and T. Stützle. On the empirical evaluation of Las Vegas algorithms - Position paper. Technical report, Computer Science Department, University of British Columbia, Vancouver, 1998a.
H.H. Hoos and T. Stützle. Evaluating Las Vegas algorithms – Pitfalls and remedies. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pages 238–245, Madison, 1998b.
H.H. Hoos and T. Stützle. Some surprising regularities in the behaviour of stochastic local search. In M. Maher and J.-F. Puget, editors, Principles and practice of constraint programming, volume 1520 of Lecture Notes in Computer Science, page 470. Springer, Berlin, 1998c.
H.H. Hoos and T. Stützle. Towards a characterisation of the behaviour of stochastic local search algorithms for SAT. Artificial Intelligence, 112:213–232, 1999.
E. Hyytiä and J. Virtamo. Wavelength assignment and routing in WDM networks. In Proceedings of the Fourteenth Nordic Teletraffic Seminar NTS-14, pages 31–40, Lyngby, 1998.
Y. Li, P.M. Pardalos, and M.G.C. Resende. A greedy randomized adaptive search procedure for the quadratic assignment problem. In P.M. Pardalos and H. Wolkowicz, editors, Quadratic assignment and related problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 237–261. American Mathematical Society, Providence, 1994.
P. Manohar, D. Manjunath, and R.K. Shevgaonkar. Routing and wavelength assignment in optical networks from edge disjoint path algorithms. IEEE Communications Letters, 5:211–213, 2002.
T.F. Noronha and C.C. Ribeiro. Routing and wavelength assignment by partition coloring. European Journal of Operational Research, 171:797–810, 2006.
L. Osborne and B. Gillett. A comparison of two simulated annealing algorithms applied to the directed Steiner problem on networks. ORSA Journal on Computing, 3:213–225, 1991.
M.G.C. Resende. Computing approximate solutions of the maximum covering problem using GRASP. Journal of Heuristics, 4:161–171, 1998.
M.G.C. Resende and C.C. Ribeiro. A GRASP for graph planarization. Networks, 29:173–189, 1997.
M.G.C. Resende and C.C. Ribeiro. Graph planarization. In C. Floudas and P.M. Pardalos, editors, Encyclopedia of optimization, volume 2, pages 368–373. Kluwer Academic Publishers, Boston, 2001.
M.G.C. Resende, P.M. Pardalos, and Y. Li. Algorithm 754: Fortran subroutines for approximate solution of dense quadratic assignment problems using GRASP. ACM Transactions on Mathematical Software, 22:104–118, 1996.
M.G.C. Resende, T.A. Feo, and S.H. Smith. Algorithm 787: Fortran subroutines for approximate solution of maximum independent set problems using GRASP. ACM Transactions on Mathematical Software, 24:386–394, 1998.
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Fortran subroutines for computing approximate solutions of MAX-SAT problems using GRASP. Discrete Applied Mathematics, 100:95–113, 2000.
C.C. Ribeiro and M.G.C. Resende. Algorithm 797: Fortran subroutines for approximate solution of graph planarization problems using GRASP. ACM Transactions on Mathematical Software, 25:341–352, 1999.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro and I. Rosseti. Efficient parallel cooperative implementations of GRASP heuristics. Parallel Computing, 33:21–35, 2007.
C.C. Ribeiro and I. Rosseti. tttplots-compare: A perl program to compare time-to-target plots or general runtime distributions of randomized algorithms. Optimization Letters, 9:601–614, 2015.
C.C. Ribeiro, I. Rosseti, and R. Vallejos. On the use of run time distributions to evaluate and compare stochastic local search algorithms. In T. Stützle, M. Birattari, and H.H. Hoos, editors, Engineering stochastic local search algorithms, volume 5752 of Lecture Notes in Computer Science, pages 16–30. Springer, Berlin, 2009.
C.C. Ribeiro, I. Rosseti, and R. Vallejos. Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms. Journal of Global Optimization, 54:405–429, 2012.
L.F. Santos, S.L. Martins, and A. Plastino. Applications of the DM-GRASP heuristic: A survey. International Transactions on Operational Research, 15:387–416, 2008.
B. Selman, H.A. Kautz, and B. Cohen. Noise strategies for improving local search. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 337–343, Seattle, 1994. American Association for Artificial Intelligence.
E.D. Taillard. Robust taboo search for the quadratic assignment problem. Parallel Computing, 17:443–455, 1991.
M.G.A. Verhoeven and E.H.L. Aarts. Parallel local search. Journal of Heuristics, 1:43–66, 1995.
# 7. Extended construction heuristics
In Chapter , we considered cardinality-based and quality-based adaptive greedy algorithms as a generalization of greedy algorithms. Next, we presented semi-greedy algorithms that are obtained by randomizing adaptive greedy algorithms and constitute the main foundation for developing the construction phase of GRASP heuristics. In this chapter, we consider enhancements, extensions, and variants of greedy randomized adaptive construction procedures such as Reactive GRASP, the probabilistic choice of the construction parameter α, random plus greedy and sampled greedy constructions, cost perturbations, bias functions, principles of intelligent construction based on memory and learning, the proximate optimality principle and local search applied to partially constructed solutions, and pattern-based construction strategies using vocabulary building or data mining.
## 7.1 Reactive GRASP
The choice of the parameter α of a quality-based semi-greedy algorithm used in the GRASP construction phase determines the blend of greediness and randomness that is used in the construction. One basic strategy is to use a fixed value for α. Another strategy consists in using a different value chosen at random in each iteration. Reactive GRASP is a strategy in which the algorithm progressively learns and updates the best value of α. It was the first proposal to incorporate a learning mechanism in the otherwise memoryless construction phase of GRASP.
In the context of Reactive GRASP, the value of the restricted candidate list parameter α is not fixed, but instead is randomly selected at each iteration from a discrete set of possible values. This selection is guided by the solution values found during previous iterations. One way to accomplish this is to use a rule that considers a set Ψ = {α_1, ..., α_m} of possible values for α. The probabilities associated with the choice of each value are all initially made equal to p_i = 1∕m, for i = 1, ..., m. Furthermore, let f* be the incumbent solution value and let A_i be the average value of all solutions found using α = α_i, for i = 1, ..., m. The selection probabilities are periodically reevaluated by taking p_i = q_i ∕ ∑_{j=1}^{m} q_j, with q_i = f*∕A_i for i = 1, ..., m. For the case of minimization, the value of q_i will be larger for values of α = α_i leading to the best solutions on average. Larger values of q_i correspond to more suitable values for the parameter α. Therefore, the probabilities associated with the more appropriate values increase when they are reevaluated.
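As a concrete illustration, the probability update rule above can be sketched in Python. The particular set of candidate α values and the average-value bookkeeping are assumptions of the example, not part of any specific implementation:

```python
import random

def reactive_update(f_star, averages):
    """Recompute p_i = q_i / sum_j q_j with q_i = f*/A_i (minimization):
    values of alpha with smaller average A_i get larger probability."""
    q = [f_star / a for a in averages]
    total = sum(q)
    return [qi / total for qi in q]

def choose_alpha(alphas, probs):
    """Draw the RCL parameter alpha according to the current probabilities."""
    return random.choices(alphas, weights=probs, k=1)[0]

# Hypothetical data: incumbent value 100 and the average solution value
# observed so far with each candidate alpha.
alphas = [0.1, 0.3, 0.5, 0.7]
averages = [120.0, 105.0, 110.0, 150.0]
probs = reactive_update(100.0, alphas and averages)
```

Here α = 0.3, whose average solution value is the best (smallest), receives the largest selection probability after the update.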
The reactive approach leads to improvements over the basic GRASP in terms of robustness and solution quality, due to greater diversification and less reliance on parameter tuning.
## 7.2 Probabilistic choice of the RCL parameter
The computational results obtained by Reactive GRASP show that using a single fixed value for the restricted candidate parameter α very often hinders finding high-quality solutions, which could be found if other values were used. These results motivated the study of the behavior of GRASP using alternative strategies for the variation of the value of the restricted candidate list parameter α:
1. (R) α self-tuned according to the Reactive GRASP procedure;
2. (E) α randomly chosen from a uniform discrete probability distribution;
3. (H) α randomly chosen from a decreasing nonuniform discrete probability distribution; and
4. (F) fixed value of α, close to the purely greedy choice.
We summarize the results obtained in experiments incorporating these four strategies into the GRASP procedures developed for four different optimization problems: (P-1) matrix decomposition for traffic assignment in communication satellites, (P-2) set covering, (P-3) weighted MAX-SAT, and (P-4) graph planarization. Let Ψ = {α_1, ..., α_m} be the set of possible values for the parameter α for the first three strategies. The strategy for choosing and self-tuning the value of α in the case of the Reactive GRASP procedure (R) is the one described in Section 7.1. In the case of the strategy (E) based on using a discrete uniform distribution, all choice probabilities are equal to 1∕m. The third case corresponds to a hybrid strategy (H), in which the following probabilities are considered: p(α = 0.1) = 0.5, p(α = 0.2) = 0.25, p(α = 0.3) = 0.125, p(α = 0.4) = 0.03, p(α = 0.5) = 0.03, p(α = 0.6) = 0.03, p(α = 0.7) = 0.01, p(α = 0.8) = 0.01, p(α = 0.9) = 0.01, and p(α = 1.0) = 0.005. In the last strategy (F), the value of α is fixed as tuned and recommended in the original references reporting results for these problems. A subset of instances from the literature was considered for each class of test problems. Numerical results are reported in Table 7.1. For each problem, we first list the number of instances considered. Next, for each strategy, we give the number of instances for which it found the best solution (hits), as well as the average computation time (in seconds) on an IBM 9672 model R34. The number of GRASP iterations was fixed at 10,000.
Table 7.1
Computational results for different strategies for the variation of parameter α. | R | E | H | F
---|---|---|---|---
Problem | Instances | Hits | Time (s) | Hits | Time (s) | Hits | Time (s) | Hits | Time (s)
P-1 | 36 | 34 | 579.0 | 35 | 358.2 | 32 | 612.6 | 24 | 642.8
P-2 | 7 | 7 | 1346.8 | 6 | 1352.0 | 6 | 668.2 | 5 | 500.7
P-3 | 44 | 22 | 2463.7 | 23 | 2492.6 | 16 | 1740.9 | 11 | 1625.2
P-4 | 37 | 28 | 6363.1 | 21 | 7292.9 | 24 | 6326.5 | 19 | 5972.0
Total | 124 | 91 | | 85 | | 78 | | 59 |
Strategy (F) presented the shortest average computation times for three of the four problem types. It was also the one with the least variability in the constructed solutions and, as a consequence, found the best solution the fewest times. The reactive strategy (R) is the one which most often found the best solutions, however, at the cost of computation times that are longer than those of some of the other strategies. The high number of hits observed by strategy (E) also illustrates the effectiveness of strategies based on the variation of α, the parameter that defines the size of the restricted candidate list.
## 7.3 Random plus greedy and sampled greedy
In Section 3.4 of Chapter , we described the semi-greedy scheme used in the construction phase of GRASP to build solutions that serve as starting points for local search. Two alternative randomized greedy approaches that run faster than the semi-greedy algorithm are introduced next. Both were originally applied to the p-median problem.
Instead of combining greediness and randomness at each step of the construction procedure, the random plus greedy scheme applies randomness during the first p construction steps to produce a random partial solution. Next, the algorithm completes the solution with one or more pure adaptive greedy construction steps. The resulting solution is randomized greedy. One can control the balance between greediness and randomness in the construction by changing the value of the parameter p. Larger values of p are associated with solutions that are more random, while smaller values result in greedier solutions.
Similar to the random plus greedy procedure, the sampled greedy construction also combines randomness and greediness, but in a different way. This procedure is also controlled by a parameter p. At each step of the construction process, the procedure builds a restricted candidate list by sampling min{p, | C | } elements of the candidate set C of elements that can be added to the current partial solution. Each element of the restricted candidate list is evaluated by the greedy function. The element with the smallest greedy function value is added to the partial solution. This two-step process is repeated until there are no more candidate elements. The resulting solution is also randomized greedy. Once again, the balance between greediness and randomness can be controlled by changing the value of the parameter p, i.e., the number of candidate elements that are sampled. Small sample sizes lead to more random solutions, while large sample sizes lead to greedier solutions.
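A minimal sketch of the sampled greedy construction, assuming a generic candidate set and a problem-specific greedy cost function supplied by the caller:

```python
import random

def sampled_greedy(candidates, greedy_cost, p):
    """At each step, sample min(p, |C|) candidates, evaluate each with the
    greedy function, and add the cheapest one to the partial solution."""
    solution = []
    C = list(candidates)
    while C:
        sample = random.sample(C, min(p, len(C)))
        best = min(sample, key=greedy_cost)
        solution.append(best)
        C.remove(best)
    return solution
```

With p = 1 each step is purely random, while p ≥ |C| makes every step purely greedy, mirroring the balance between randomness and greediness described above.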
## 7.4 Cost perturbations
The idea of introducing noise into the original costs to change the objective function adds more flexibility to algorithm design. Furthermore, in circumstances where the construction algorithm is not very sensitive to randomization, it can also be more effective than the greedy randomized adaptive construction of the basic GRASP procedure. This is indeed the case for the shortest path heuristic used as one of the main building blocks of the construction phase of GRASP heuristics for the Steiner problem in graphs.
The cost perturbation methods used in a hybrid algorithm for the Steiner problem in graphs incorporate learning mechanisms associated with intensification and diversification strategies. Three distinct weight randomization methods were applied to generate cost perturbations for the shortest path heuristic. At any given GRASP iteration, the modified weight of each edge is randomly selected from a uniform distribution in an interval which depends on the selected weight randomization method applied at that iteration. The different weight randomization methods use frequency information and are used to enforce intensification and diversification strategies. Experimental results show that the strategy combining these three perturbation methods is more robust than any of them used in isolation, leading to the best overall results on a quite broad mix of test instances with different characteristics. The GRASP heuristic using this cost perturbation strategy was among the most effective heuristics available for the Steiner problem in graphs at the time of its development.
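The edge-weight perturbation idea can be sketched as below. The interval bounds and the plain uniform noise are illustrative assumptions; the actual heuristic selects the interval for each edge according to frequency-based intensification and diversification rules:

```python
import random

def perturb_weights(weights, low=1.0, high=2.0, rng=random):
    """Multiply each original edge weight by a factor drawn uniformly
    from [low, high]; the shortest path heuristic is then run on the
    perturbed weights instead of the original ones."""
    return {edge: w * rng.uniform(low, high) for edge, w in weights.items()}

# Hypothetical edge weights for a tiny graph.
original = {("a", "b"): 3.0, ("b", "c"): 1.5}
perturbed = perturb_weights(original)
```

Each GRASP iteration would draw a fresh perturbation, so successive shortest-path constructions explore different regions of the solution space.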
Another situation where cost perturbations can be very effective appears when no greedy algorithm is available for straightforward randomization. A typical situation is the case of a hybrid GRASP developed for the prize-collecting Steiner tree problem, which makes use of a primal-dual approximation algorithm to build initial solutions using perturbed costs. A new solution is built at each iteration using node prizes updated by a perturbation function, according to the structure of the current solution. Two different prize perturbation schemes were used. In perturbation by eliminations, the primal-dual algorithm used in the construction phase is driven to build a new solution without some of the nodes that appeared in the solution constructed in the previous iteration. In perturbation by prize changes, noise is introduced into the node prizes to change the objective function.
## 7.5 Bias functions
In the construction phase of the basic GRASP heuristic, the next element to be introduced in the solution is chosen at random from the elements in the restricted candidate list. The elements of the restricted candidate list are assigned equal probabilities of being chosen. However, as was the case for Reactive GRASP, any probability distribution can be used to bias the selection towards some particular candidates. Another selection mechanism used in the construction phase makes use of a family of such probability distributions. They are based on the rank r(σ) assigned to each candidate element σ, according to its greedy function value. Several bias functions can be used:
* random bias: bias(r(σ)) = 1;
* linear bias: bias(r(σ)) = 1∕r(σ);
* log bias: bias(r(σ)) = 1∕log(r(σ) + 1);
* exponential bias: bias(r(σ)) = e^{−r(σ)}; and
* polynomial bias of order n: bias(r(σ)) = r(σ)^{−n}.
Consider that any one of the above bias functions is being used. Once the rank r(σ) and its corresponding bias, bias(r(σ)), are evaluated for all elements in the candidate set C, the probability π(σ) of selecting element σ is given by
π(σ) = bias(r(σ)) ∕ ∑_{σ′ ∈ C} bias(r(σ′)).
The basic GRASP heuristic uses a random bias function. The evaluation of these bias functions can be applied to all candidate elements or can be limited to the elements of the restricted candidate list.
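A sketch of rank-based selection using these bias functions; ranks start at 1 (best greedy value) and the bias names follow the list above. The polynomial order n = 2 is an arbitrary choice for the example:

```python
import math

BIAS = {
    "random": lambda r: 1.0,
    "linear": lambda r: 1.0 / r,
    "log": lambda r: 1.0 / math.log(r + 1),
    "exponential": lambda r: math.exp(-r),
    "polynomial": lambda r: r ** -2,  # order n = 2, chosen for illustration
}

def selection_probabilities(ranked_candidates, bias_name):
    """pi(sigma) = bias(r(sigma)) / sum of biases over all candidates."""
    bias = BIAS[bias_name]
    values = [bias(r) for r in range(1, len(ranked_candidates) + 1)]
    total = sum(values)
    return [v / total for v in values]
```

With the random bias all candidates are equally likely, recovering the basic GRASP behavior; the other biases concentrate probability on the better-ranked candidates.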
## 7.6 Memory and learning
Flexible and adaptive memory techniques have been the source of a number of developments to improve multistart methods, which otherwise would simply resort to random restarts.
The basic memoryless GRASP heuristic does not make use of information gathered in previously performed iterations. As in tabu search and other multistart heuristics, a long-term memory strategy can be used to add memory to GRASP.
An elite solution is a high-quality solution found during the iterations of a search algorithm. Long-term memory can be implemented by maintaining a pool or set of elite solutions. To become an elite solution, a solution must be either better than the best member of the pool, or better than its worst member and sufficiently different from the other solutions in the pool. For example, one can count identical solution attributes and set a threshold for rejection. In Chapter , we revisit elite sets.
A strongly determined variable is one that cannot be changed without eroding the objective or significantly changing other variables. A consistent variable is one that receives a particular value in a large portion of the elite solution set. Let I(e) be a measure of the strong determination and consistency features of a solution element e of the ground set E. Then, I(e) becomes larger as e appears more often in the pool of elite solutions. The intensity function I(e) is used in the construction phase as follows. Recall that g(e) is the greedy function, i.e., the incremental cost associated with the incorporation of element e ∈ E into the solution under construction. Let K(e) = F(g(e), I(e)) be a composite function of the greedy and the intensification functions, for example, K(e) = λ · g(e) + I(e). The intensification scheme biases selection from the restricted candidate list RCL to those elements e ∈ E with a high value of K(e) by setting the selection probability of element e to p(e) = K(e) ∕ ∑_{s ∈ RCL} K(s). Function K(e) can vary with time by changing the value of λ: for example, λ can be set to a large value when diversification is required and gradually decreased as intensification is called for.
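The biased selection with the composite function K(e) = λ·g(e) + I(e) can be sketched as follows; the greedy values, intensity values, and λ below are hypothetical data for the example:

```python
def selection_probs(rcl, g, I, lam):
    """p(e) = K(e) / sum over the RCL of K(s), with K(e) = lam*g(e) + I(e)."""
    K = {e: lam * g(e) + I(e) for e in rcl}
    total = sum(K.values())
    return {e: K[e] / total for e in rcl}

# Hypothetical data: I(e) counts how often e appears in the elite pool.
g = {"x": 2.0, "y": 1.0, "z": 1.0}.get
I = {"x": 0.0, "y": 3.0, "z": 1.0}.get
probs = selection_probs(["x", "y", "z"], g, I, lam=1.0)
```

Element "y", which appears most often in the elite pool, ends up with the highest selection probability, illustrating the intensification effect.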
## 7.7 Proximate optimality principle in construction
The proximate optimality principle is based on the idea that good solutions at one level of the search are likely to be found close to good solutions at an adjacent level. A GRASP interpretation of this principle suggests that imperfections introduced during steps of the GRASP construction can be ironed-out by applying local search during (and not only at the end of) the construction phase.
Because of efficiency considerations, a practical implementation of the proximate optimality principle to GRASP consists in applying local search a few times during the construction phase, but not necessarily at every construction iteration.
## 7.8 Pattern-based construction
Different strategies have been devised to intelligently explore adaptive memory information. Their main underlying principle consists in exploring information collected and updated along the search to improve the performance of different construction and local search methods.
Vocabulary building is an intensification strategy for creating new solutions from good fragments of high-quality solutions previously found and stored in an elite set. Data mining refers to the automatic extraction of new and potentially useful knowledge from data sets. The extraction of frequent items is often at the core of data mining methods. Frequent items extracted from the elite set represent patterns appearing in high-quality solutions that may be used as building blocks in an adapted construction phase.
Both data mining and vocabulary building can be combined into more efficient implementations of GRASP or other multistart procedures. Since data mining has been further explored in this context, we focus our presentation of pattern-based construction strategies on the hybridization of GRASP with data mining.
The main kinds of rules and patterns mined from data sets are frequent items, association rules, sequential patterns, classification rules, and data clusters. In the context of a data set formed by solutions obtained by GRASP, the extracted frequent items represent patterns that are common to high-quality solutions, i.e., subsets of variables that often appear in the elite set.
Let I = {i_1, ..., i_n} be a set of items. A transaction t is a subset of I and a data set D is a set of transactions. An item set F ⊆ I has support s ∈ [0, 1] if it occurs in a fraction s of the transactions of D. An item set F is said to be frequent if its support is greater than or equal to a minimum threshold specified by the user. The frequent item mining problem consists in extracting all frequent item sets from a data set D with a minimum support specified as a parameter. A frequent item set is maximal if it has no superset that is also frequent. Maximal frequent item sets are useful to avoid mining and using patterns that are subsets of larger patterns.
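A naive frequent item set extractor that follows these definitions directly; enumerating subsets up to a small size keeps the sketch simple, whereas real miners such as Apriori or FP-growth prune the search space:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=3):
    """Return item sets of up to max_size items whose support, i.e., the
    fraction of transactions containing them, is at least min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(t)
        for k in range(1, max_size + 1):
            for subset in combinations(items, k):
                counts[subset] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

# Hypothetical elite set encoded as transactions of solution attributes.
data = [{1, 2}, {1, 3}, {1, 2}]
patterns = frequent_itemsets(data, min_support=2 / 3)
```

In this toy example, the pattern {1, 2} appears in two of the three transactions and is therefore frequent at support 2/3, while {3} is not.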
The principle behind the incorporation of a data mining process in GRASP is that patterns or frequent item sets found in high-quality solutions obtained in earlier iterations can be used in later iterations to improve the search procedure.
The DM-GRASP (Data Mining GRASP) heuristic starts with the elite set generation phase, which consists of executing n_iter pure GRASP iterations to obtain an elite set E formed by a fixed number of the best distinct solutions found along these iterations. Next, a data mining process is applied to extract a set P of common patterns from the solutions in the elite set E. These patterns are subsets of attributes that frequently appear in solutions of the elite set E (or, equivalently, variables that are set at persistent values in these solutions). A frequent pattern mined from the elite set with support s ∈ [0, 1] represents a subset of attributes that occurs in at least a fraction s of the elite solutions. The hybrid phase is the last to be performed. An additional n_iter slightly modified GRASP iterations are executed. The construction phase of each of these modified iterations starts from a pattern selected from the set of mined patterns P, and not from scratch. Therefore, DM-GRASP spends the first half of its iterations in the elite set generation phase and the second half in the hybrid phase, which makes use of the mined frequent patterns. We observe that the number of iterations n_iter may be replaced by any other stopping criterion.
A pseudo-code of the DM-GRASP hybridization for a minimization problem is illustrated in Figure 7.1. The best solution value f* and the elite set E are initialized in lines 1 and 2, respectively. The n_iter pure GRASP iterations of the first phase are carried out in the while loop in lines 3 to 14. A solution S is constructed with a semi-greedy algorithm in line 4. Since a semi-greedy algorithm cannot always generate a feasible solution, a repair procedure may have to be invoked in line 6 to make changes in S so that it becomes feasible (alternatively, the solution S may be simply discarded and followed by a new run of the semi-greedy algorithm, until a feasible solution is built). Local search is then applied starting from the feasible solution provided by the semi-greedy algorithm or by the repair procedure. If the objective function value f(S) of the local minimum produced in line 8 is better than the value f* of the incumbent, then the local minimum is made the incumbent and its objective function value is placed in f* in lines 10 and 11. The elite set E is updated in line 13: if the new solution S is added to E, then a previously obtained elite solution is discarded. Algorithm UPDATE-ELITE-SET described in Section 9.2 receives as inputs the local optimum S and the current elite set E and returns the updated elite set. The data mining algorithm extracts the set of frequent patterns P from the elite set E in line 15. The loop from line 16 to 27 corresponds to the hybridization phase and runs until an additional n_iter iterations are performed. Each iteration starts in line 17 by the selection of a pattern p ∈ P. An adapted construction procedure based on the SemiGreedy algorithm is performed in line 18, starting from a partial solution defined by pattern p, and not from scratch. The feasibility of solution S is tested in line 19. A repair procedure may have to be invoked in line 20 to make changes in S so that it becomes feasible (as before, the solution S may also be simply discarded and followed by a new run of the adapted semi-greedy algorithm, until a feasible solution is built). Local search is applied in line 22 to solution S. If the objective function value f(S) of the local minimum S is better than the value f* of the incumbent, then this new local minimum is made the incumbent in line 24 and its objective function value is placed in f* in line 25. The best solution S* and its cost f(S*) are returned in line 28.
Fig. 7.1
Pseudo-code of a DM-GRASP heuristic for minimization.
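The two-phase flow of Figure 7.1 can be outlined in Python; construction, repair, local search, mining, and elite-set update are problem-specific and appear here only as caller-supplied placeholder callbacks:

```python
def dm_grasp(n_iter, semi_greedy, adapted_construction, is_feasible,
             repair, local_search, cost, mine, update_elite, pick_pattern):
    """Two-phase DM-GRASP skeleton for a minimization problem."""
    best, best_cost = None, float("inf")
    elite = []
    for _ in range(n_iter):                 # elite set generation phase
        S = semi_greedy()
        if not is_feasible(S):
            S = repair(S)
        S = local_search(S)
        if cost(S) < best_cost:
            best, best_cost = S, cost(S)
        elite = update_elite(S, elite)
    patterns = mine(elite)                  # data mining step
    for _ in range(n_iter):                 # hybrid phase
        p = pick_pattern(patterns)
        S = adapted_construction(p)         # start from pattern, not scratch
        if not is_feasible(S):
            S = repair(S)
        S = local_search(S)
        if cost(S) < best_cost:
            best, best_cost = S, cost(S)
    return best, best_cost
```

This mirrors the structure of the figure: the first loop corresponds to lines 3 to 14, the mining call to line 15, and the second loop to lines 16 to 27.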
To illustrate the improvements brought by DM-GRASP to the basic GRASP procedure, we summarize below some computational results obtained for the problem of server replication for reliable multicast, which was introduced in Section 6.5.1.
In computational experiments performed to compare the performance of GRASP and DM-GRASP, both heuristics were implemented in C++ and were run ten times for each problem instance, with different random seeds. The GRASP parameter α was set to 0.7. Each run consisted of 500 iterations. In the hybrid DM-GRASP, the elite set generation phase made use of n_iter = 250 iterations and the hybrid phase performed the remaining n_iter = 250 iterations. The size of the elite set was set at ten, as well as that of the set P of mined patterns. The pattern extraction algorithm used a support value such that a set of nodes is considered a frequent pattern if it appears in at least two elite solutions.
The computational results are shown in Table 7.2. The first two columns summarize the characteristics of the problem instances, showing the multicast scenario and the number m of nodes to be set as replicated servers. The next three columns contain the results obtained by a previously developed GRASP heuristic (best solution value, average solution value, and computation time in seconds), while the last three columns depict the same results for DM-GRASP. The best solution obtained by DM-GRASP improved on the solution obtained by GRASP in 12 out of the 20 instances in this table, while GRASP never obtained a better solution. The best results are indicated in boldface. DM-GRASP obtained better average solution values for 13 out of the 20 instances, while GRASP obtained better average values for only four instances. Furthermore, DM-GRASP was considerably faster for all instances. The last column shows the average reduction in time obtained by DM-GRASP with respect to GRASP. On average, DM-GRASP ran in 36.8% less time than GRASP.
Table 7.2
Comparison between GRASP and DM-GRASP for the reliable multicast problem.
| Scenario | m | GRASP best | GRASP average | GRASP time (s) | DM-GRASP best | DM-GRASP average | DM-GRASP time (s) | Time reduction |
|---|---|---|---|---|---|---|---|---|
| CONF_1 | 5 | 63762.2 | 63762.2 | 28231.5 | 63762.2 | 63762.2 | 20292.0 | 28.1% |
| CONF_1 | 10 | 44480.7 | 44480.7 | 43826.0 | 44480.7 | 44480.7 | 30881.8 | 29.5% |
| CONF_1 | 15 | 31328.6 | 31347.2 | 43374.8 | 31328.6 | 31328.6 | 30058.0 | 30.7% |
| CONF_1 | 20 | 23625.2 | 23775.9 | 43314.2 | 23625.2 | 23763.7 | 26831.3 | 38.0% |
| CONF_2 | 10 | 11894.1 | 11894.1 | 3083.9 | 11894.1 | 11894.1 | 2631.8 | 14.7% |
| CONF_2 | 20 | 10076.3 | 10076.3 | 5239.0 | 10047.1 | 10047.1 | 3280.1 | 37.4% |
| CONF_2 | 30 | 9207.8 | 9208.7 | 7211.5 | 9207.8 | 9211.7 | 4196.7 | 41.9% |
| CONF_2 | 40 | 8668.5 | 8676.6 | 6787.9 | 8642.3 | 8646.3 | 4418.2 | 34.9% |
| CONF_3 | 20 | 11130.1 | 11177.4 | 7518.6 | 11114.5 | 11114.5 | 5661.6 | 24.7% |
| CONF_3 | 40 | 9631.7 | 9652.9 | 15077.0 | 9584.3 | 9596.5 | 8724.9 | 42.1% |
| CONF_3 | 60 | 8855.9 | 8869.3 | 19683.5 | 8848.1 | 8869.4 | 11312.0 | 42.5% |
| CONF_3 | 80 | 8550.4 | 8557.8 | 17747.8 | 8550.4 | 8559.1 | 10628.1 | 40.1% |
| BROAD_1 | 25 | 2818.9 | 2818.9 | 1555.1 | 2807.2 | 2807.2 | 1004.8 | 35.4% |
| BROAD_1 | 50 | 2296.6 | 2299.0 | 3709.2 | 2281.8 | 2287.4 | 2301.7 | 38.0% |
| BROAD_1 | 75 | 2039.3 | 2045.9 | 5530.9 | 2020.9 | 2030.8 | 3366.7 | 39.1% |
| BROAD_1 | 100 | 1873.6 | 1877.5 | 7183.0 | 1873.6 | 1877.9 | 4160.2 | 42.0% |
| BROAD_2 | 50 | 2444.0 | 2444.2 | 5096.8 | 2425.6 | 2431.4 | 2994.0 | 41.3% |
| BROAD_2 | 100 | 2019.0 | 2020.2 | 9246.2 | 2018.9 | 2020.1 | 5188.1 | 43.9% |
| BROAD_2 | 150 | 1836.3 | 1837.0 | 11482.1 | 1836.2 | 1836.9 | 6098.7 | 46.9% |
| BROAD_2 | 200 | 1727.9 | 1729.6 | 14047.3 | 1726.5 | 1729.3 | 7705.9 | 45.1% |

Average reduction in time: 36.8%
## 7.9 Lagrangean GRASP heuristics
### 7.9.1 Lagrangean relaxation and subgradient optimization
Lagrangean relaxation can be used to provide lower bounds for combinatorial optimization problems. However, the primal solutions produced by the algorithms used to solve the Lagrangean dual problem are not necessarily feasible. Lagrangean heuristics exploit dual multipliers to generate primal feasible solutions.
Given a mathematical programming problem $\mathcal{P}$ formulated as
$$\min \; f(x) \tag{7.1}$$
$$\text{subject to} \quad g_i(x) \leq 0, \quad i = 1, \ldots, m, \tag{7.2}$$
$$x \in X, \tag{7.3}$$
its Lagrangean relaxation is obtained by associating dual multipliers $\lambda_i \geq 0$ to each inequality (7.2), for i = 1,..., m. This results in the following Lagrangean relaxation problem LRP(λ)
$$\min \; f'(x) = f(x) + \sum_{i=1}^{m} \lambda_i \, g_i(x) \tag{7.4}$$
$$\text{subject to} \quad x \in X, \tag{7.3}$$
whose optimal solution x(λ) gives a lower bound f′(x(λ)) to the optimal value of the original problem $\mathcal{P}$ defined by (7.1) to (7.3). The best (dual) lower bound is given by the solution of the Lagrangean dual problem
$$\max_{\lambda \in \mathbb{R}_+^m} f'(x(\lambda)). \tag{7.5}$$
Subgradient optimization is used to solve the dual problem defined by (7.5). Subgradient algorithms start from any feasible set of dual multipliers, such as λ_i = 0, for i = 1,..., m, and iteratively generate updated multipliers.
At any iteration q, let λ^q be the current vector of multipliers and let x(λ^q) be an optimal solution to problem LRP(λ^q), whose optimal value is f′(x(λ^q)). Furthermore, let $\bar{f}$ be a known upper bound to the optimal value of problem $\mathcal{P}$ defined by (7.1) to (7.3). Additionally, let $g^q \in \mathbb{R}^m$ be a subgradient of f′(x) at x = x(λ^q), with $g_i^q = g_i(x(\lambda^q))$ for i = 1,..., m. To update the Lagrangean multipliers, the algorithm makes use of the step size
$$\delta^q = \eta \, \frac{\bar{f} - f'(x(\lambda^q))}{\| g^q \|^2}, \tag{7.6}$$
where η ∈ (0, 2]. The multipliers are then updated as
$$\lambda_i^{q+1} = \max\{0, \; \lambda_i^q + \delta^q g_i^q\}, \quad i = 1, \ldots, m, \tag{7.7}$$
and the subgradient algorithm proceeds to iteration q + 1.
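One multiplier update of equations (7.6) and (7.7) can be sketched as follows (a minimal illustration; function and argument names are ours, not the book's):

```python
def subgradient_step(lmbda, g, f_lr, ub, eta=2.0):
    """One subgradient update of the dual multipliers:
    step size  delta = eta * (ub - f'(x(lambda^q))) / ||g^q||^2   (7.6)
    update     lambda_i^{q+1} = max(0, lambda_i^q + delta * g_i^q) (7.7)
    """
    norm2 = sum(gi * gi for gi in g)
    if norm2 == 0:  # zero subgradient: current multipliers are kept
        return lmbda
    delta = eta * (ub - f_lr) / norm2
    return [max(0.0, li + delta * gi) for li, gi in zip(lmbda, g)]

# two constraints: the first violated (g_1 > 0), the second slack (g_2 < 0)
lmbda = subgradient_step([0.0, 0.0], g=[1.0, -2.0], f_lr=8.0, ub=10.0)
```

The projection max{0, ·} keeps the multipliers nonnegative: the multiplier of the violated constraint increases, while the multiplier of the slack constraint is pushed down and clipped at zero.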
### 7.9.2 A template for Lagrangean heuristics
We describe next a template for Lagrangean heuristics that make use of the dual multipliers λ^q and of the optimal solution x(λ^q) to each problem LRP(λ^q) to build feasible solutions to the original problem $\mathcal{P}$ defined by (7.1) to (7.3). In the following, we assume that the objective function and all constraints are linear functions, i.e., $f(x) = \sum_{j=1}^{n} c_j x_j$ and $g_i(x) = \sum_{j=1}^{n} d_{ij} x_j - e_i$, for i = 1,..., m.
Let $\mathcal{H}$ be a primal heuristic that builds a feasible solution x to $\mathcal{P}$, starting from the initial solution x^0 = x(λ^q) at every iteration q of the subgradient algorithm. Heuristic $\mathcal{H}$ is first applied using the original costs c_j, i.e., using the cost function f(x). In any subsequent iteration q of the subgradient algorithm, $\mathcal{H}$ uses either Lagrangean reduced costs $c'_j = c_j - \sum_{i=1}^{m} \lambda_i^q d_{ij}$ or complementary costs.
Consider the solution obtained by heuristic $\mathcal{H}$ using a generic cost vector γ, corresponding either to one of the above modified cost schemes or to the original cost vector. Its cost can be used to update the upper bound $\bar{f}$ to the optimal value of the original problem (7.1) to (7.3). This upper bound can be further improved by local search and is used to adjust the step size defined by equation (7.6).
The algorithm in Figure 7.2 shows the pseudo-code of a Lagrangean heuristic. Lines 1 to 4 initialize the upper and lower bounds, the iteration counter, and the dual multipliers. The iterations of the subgradient algorithm are performed along the loop in lines 5 to 24. The reduced costs are computed in line 6 and the Lagrangean relaxation problem is solved in line 7. In the first iteration of the Lagrangean heuristic, the original cost vector is assigned to γ in line 9, while in subsequent iterations a modified cost vector is assigned to γ in line 11. Heuristic $\mathcal{H}$ is applied in line 13 at the first iteration and after every H iterations thereafter (i.e., whenever the iteration counter q is a multiple of the input parameter H) to produce a feasible solution to problem (7.1) to (7.3). If the cost of this solution is smaller than the current upper bound, then the best solution and its cost are updated in lines 14 to 18. If the lower bound f′(x(λ^q)) is greater than the current lower bound, then the latter is updated in line 19. Line 20 computes a subgradient at x(λ^q) and line 21 computes the step size. The dual multipliers are updated in line 22 and the iteration counter is incremented in line 23. The best solution found and its cost are returned in line 24.
Fig. 7.2
Pseudo-code of a template for a Lagrangean heuristic.
Different choices for the initial solution x^0, for the modified costs γ, and for the primal heuristic $\mathcal{H}$ itself lead to different variants of the above algorithm. The integer parameter H defines the frequency with which $\mathcal{H}$ is applied: the smaller the value of H, the more often $\mathcal{H}$ is applied, and therefore the greater the computation time. In particular, one should set H = 1 if the primal heuristic $\mathcal{H}$ is to be applied at every iteration.
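The template of Figure 7.2 can be condensed into the following sketch, where the problem-specific pieces are supplied as callables and run on a tiny covering-type instance. This is a simplified illustration under our own naming and interface assumptions (it omits the modified-cost switching and the local search of the full template):

```python
import math

def lagrangean_heuristic(solve_lrp, repair, cost, m, H=1, eta=2.0, max_iter=50):
    """Sketch of the Lagrangean-heuristic template of Figure 7.2.
    solve_lrp(lmbda) -> (x(lambda), f'(x(lambda)), subgradient g);
    repair(x, lmbda) -> feasible solution built from x(lambda);
    cost(x) evaluates a solution with the original costs.
    These callables and signatures are illustrative assumptions."""
    ub, lb, best = math.inf, -math.inf, None
    lmbda = [0.0] * m
    for q in range(max_iter):
        x_lr, f_lr, g = solve_lrp(lmbda)      # solve LRP(lambda^q)
        if q % H == 0:                        # primal heuristic every H iters
            x = repair(x_lr, lmbda)
            if cost(x) < ub:                  # improved upper bound
                best, ub = x, cost(x)
        lb = max(lb, f_lr)                    # improved lower bound
        norm2 = sum(gi * gi for gi in g)
        if norm2 == 0 or ub - lb < 1e-9:      # no direction left, or optimal
            break
        delta = eta * (ub - f_lr) / norm2     # step size (7.6)
        lmbda = [max(0.0, li + delta * gi)    # multiplier update (7.7)
                 for li, gi in zip(lmbda, g)]
    return best, lb, ub

# Toy instance: minimize 3*x1 + 5*x2 subject to x1 + x2 >= 1, x binary.
def solve_lrp(lmbda):
    l = lmbda[0]
    x = (1 if 3 - l < 0 else 0, 1 if 5 - l < 0 else 0)
    f = l + (3 - l) * x[0] + (5 - l) * x[1]
    return x, f, [1 - x[0] - x[1]]            # subgradient of 1 - x1 - x2 <= 0

def repair(x, lmbda):
    return x if sum(x) >= 1 else (1, 0)       # cover with the cheaper variable

best, lb, ub = lagrangean_heuristic(
    solve_lrp, repair, cost=lambda x: 3 * x[0] + 5 * x[1], m=1)
```

On this toy instance the loop closes the gap in three iterations: the repaired solution (1, 0) of cost 3 gives the upper bound, and the Lagrangean lower bound rises to meet it, certifying optimality.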
### 7.9.3 Lagrangean GRASP
Different choices for the primal heuristic $\mathcal{H}$ in the template of Algorithm 7.2 lead to distinct Lagrangean heuristics. We consider two variants: the first makes use of a greedy algorithm with local search, while the second uses a GRASP with path-relinking heuristic.
Greedy heuristic: This heuristic repairs the solution x(λ^q) produced in line 7 of the Lagrangean heuristic described in Algorithm 7.2 to make it feasible for problem $\mathcal{P}$. It makes use of the modified costs (either the reduced costs c′ or the complementary costs). Local search can be applied to the resulting solution, using the original cost vector c. We refer to this approach as a greedy Lagrangean heuristic (GLH).
GRASP heuristic: Instead of simply performing one construction step followed by local search as in GLH, this variant applies a GRASP heuristic to repair the solution x(λ^q) produced in line 7 of the Lagrangean heuristic to make it feasible for problem $\mathcal{P}$.
Although the GRASP heuristic produces better solutions than the greedy heuristic, the greedy heuristic is much faster. To appropriately address this trade-off, we adapt line 10 of Algorithm 7.2 to use the GRASP heuristic with probability β and the greedy heuristic with probability 1 −β, where β is a parameter of the algorithm.
We note that this strategy involves three main parameters: the number H of iterations after which the basic heuristic is always applied, the number Q of iterations performed by the GRASP heuristic when it is chosen as the primal heuristic, and the probability β of choosing the GRASP heuristic as $\mathcal{H}$. We shall refer to the Lagrangean heuristic that uses this hybrid strategy as LAGRASP(β, H, Q).
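The probabilistic choice between the two repair heuristics is a one-line decision; a minimal sketch (labels and names are ours, for illustration only):

```python
import random

def pick_heuristic(beta, rng):
    """Choose the primal heuristic for the current call: the GRASP
    repair with probability beta, the much faster greedy repair
    otherwise (labels illustrative)."""
    return "grasp" if rng.random() < beta else "greedy"

# over many calls, roughly a fraction beta of them uses GRASP
rng = random.Random(42)
picks = [pick_heuristic(0.25, rng) for _ in range(100000)]
share = picks.count("grasp") / len(picks)
```

Setting β = 0 recovers the pure greedy variant (GLH) and β = 1 the pure GRASP variant, so β directly trades solution quality against running time.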
We next summarize computational results obtained for 135 instances of the set k-covering problem. These instances have up to 400 constraints and 4000 binary variables.
The first experiment with the GRASP Lagrangean heuristic established the relationship between running times and solution quality for different parameter settings. Parameter β, the probability of GRASP being applied as the heuristic $\mathcal{H}$, was set to 0, 0.25, 0.50, 0.75, and 1. Parameter H, the number of iterations between successive calls to the heuristic $\mathcal{H}$, was set to 1, 5, 10, and 50. Parameter Q, the number of iterations carried out by the GRASP heuristic, was set to 1, 5, 10, and 50. By combining some of these parameter values, 68 variants of the hybrid LAGRASP(β, H, Q) heuristic were created. Each variant was applied eight times to a subset formed by 21 instances, with different initial seeds being given to the random number generator.
The plot in Figure 7.3 summarizes the results for all variants evaluated, displaying points whose coordinates are the values of the average deviation from the best known solution value and the total time in seconds for processing the eight runs on all instances, for each combination of parameter values. Eight variants of special interest are identified and labeled with the corresponding parameters β, H, and Q, in this order. These variants correspond to selected Pareto points in the plot in Figure 7.3. Setting β = 0 and H = 1 corresponds to the greedy Lagrangean heuristic (GLH) or, equivalently, to LAGRASP(0,1,-), whose average deviation from the best value amounted to 0.12% in 4,859.16 seconds of total running time. Table 7.3 shows the average deviation from the best known solution value and the total time for each of the eight selected variants.
Fig. 7.3
Average deviation from the best value and total running time for 68 different variants of LAGRASP on a reduced set of 21 instances of the set k-covering problem: each point represents a unique combination of parameters β, H, and Q.
Table 7.3
Summary of the numerical results obtained with the selected variants of the GRASP Lagrangean heuristic on a reduced set of 21 instances of the set k-covering problem. These values correspond to the coordinates of the selected variants in Figure 7.3. The total time is given in seconds.
Heuristic | Average deviation | Total time (s)
---|---|---
LAGRASP(1,1,50) | 0.09 % | 399,101.14
LAGRASP(0.50,1,1) | 0.11 % | 6,198.46
LAGRASP(0,1,-) | 0.12 % | 4,859.16
LAGRASP(0.25,5,10) | 0.24 % | 4,373.56
LAGRASP(0.25,5,5) | 0.25 % | 2,589.79
LAGRASP(0.25,5,1) | 0.26 % | 1,101.64
LAGRASP(0.25,50,5) | 0.47 % | 292.95
LAGRASP(0,50,-) | 0.51 % | 124.26
In another experiment, all 135 test instances were considered for the comparison of the above selected eight variants of LAGRASP. Table 7.4 summarizes the results obtained by the eight selected variants. It shows that LAGRASP(1,1,50) found the best solutions, with their average deviation from the best values amounting to 0.079%. It also found the best known solutions in 365 executions, again with the best performance when the eight variants are evaluated side by side, although at the cost of the longest running times. On the other hand, the smallest running times were observed for LAGRASP(0,50,-), which was over 3000 times faster than LAGRASP(1,1,50) but found the worst-quality solutions among the eight variants considered.
Table 7.4
Summary of the numerical results obtained with the selected variants of the GRASP Lagrangean heuristic on the full set of 135 instances of the set k-covering problem. The total time is given in seconds.
Heuristic | Average deviation | Hits | Total time (s)
---|---|---|---
LAGRASP(1,1,50) | 0.079 % | 365 | 1,803,283.64
LAGRASP(0.50,1,1) | 0.134 % | 242 | 30,489.17
LAGRASP(0,1,-) | 0.135 % | 238 | 24,274.72
LAGRASP(0.25,5,10) | 0.235 % | 168 | 22,475.54
LAGRASP(0.25,5,5) | 0.247 % | 163 | 11,263.80
LAGRASP(0.25,5,1) | 0.249 % | 164 | 5,347.78
LAGRASP(0.25,50,5) | 0.442 % | 100 | 1,553.35
LAGRASP(0,50,-) | 0.439 % | 97 | 569.30
Figure 7.4 illustrates the merit of the proposed approach for one of the test instances. We first observe that all variants reach the same lower bounds, which is expected since these depend exclusively on the common subgradient algorithm. However, as the lower bound appears to stabilize, the upper bound obtained by LAGRASP(0,1,-) (or GLH) also seems to freeze. The other variants, in contrast, continue to improve the upper bounds, since the randomized GRASP construction helps them escape from locally optimal solutions and find new, improved upper bounds.
Fig. 7.4
Evolution of lower and upper bounds over the iterations for different variants of LAGRASP. The number of iterations taken by each LAGRASP variant depends on the step size, which in turn depends on the upper bounds produced by each heuristic.
Finally, we also report on the comparison of the performance of GRASP with backward path-relinking and LAGRASP when the same time limits are used as the stopping criterion for all heuristics and variants running on all 135 test instances. Eight runs were performed for each heuristic and each instance, using different initial seeds for the random number generator. The results in Table 7.5 show that all variants of LAGRASP outperformed GRASP with backward path-relinking and were able to find solutions whose costs are very close to or as good as the best known solution values, while GRASP with backward path-relinking found solutions whose costs are on average 4.05% larger than the best known solution values.
Table 7.5
Summary of results for the best variants of LAGRASP and GRASP.
Heuristic | Average deviation | Hits
---|---|---
LAGRASP(1,1,50) | 3.30 % | 0
LAGRASP(0.50,1,1) | 0.35 % | 171
LAGRASP(0,1,-) | 0.35 % | 173
LAGRASP(0.25,5,10) | 0.45 % | 138
LAGRASP(0.25,5,5) | 0.45 % | 143
LAGRASP(0.25,5,1) | 0.46 % | 137
LAGRASP(0.25,50,5) | 0.65 % | 97
LAGRASP(0,50,-) | 0.65 % | 93
GRASP with backward path-relinking | 4.05 % | 0
Figure 7.5 displays, for one test instance, the typical behavior of these heuristics. As opposed to the GRASP with path-relinking heuristic, the Lagrangean heuristics are able to keep escaping from local optima and improving their solutions for longer, thereby obtaining the best results.
Fig. 7.5
Evolution of solution costs with time for the best variants of LAGRASP and GRASP with backward path-relinking.
To conclude, we note that an important feature of Lagrangean heuristics is that they provide not only a feasible solution (which gives an upper bound, in the case of a minimization problem), but also a lower bound. Together, the two bounds give an estimate of the optimality gap, which can be used as a stopping criterion.
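For a minimization problem, the gap estimate and the derived stopping rule can be written as a short sketch (the 1% threshold is an arbitrary example, not from the book):

```python
def optimality_gap(lb, ub):
    """Relative optimality gap certified by the Lagrangean lower bound lb
    and the best primal upper bound ub (minimization)."""
    if lb == 0:
        return float("inf")
    return (ub - lb) / abs(lb)

# e.g., stop the heuristic once the certified gap falls below 1%
gap = optimality_gap(lb=100.0, ub=100.5)
stop = gap <= 0.01
```

Note that this gap is certified: even if the true optimum is unknown, the best feasible solution is provably within `gap` of it.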
## 7.10 Bibliographical notes
The Reactive GRASP approach considered in Section 7.1 was developed by Prais and Ribeiro (2000a) in the context of a traffic assignment problem in communication satellites. It has been widely explored and used in a number of successful applications. Computational experiments on this traffic assignment problem, reported in Prais and Ribeiro (2000a), showed that Reactive GRASP found better solutions than the basic algorithm for many test instances. In addition to the applications in Prais and Ribeiro (1999; 2000a;b), this approach was used in power transmission network expansion planning (Bahiense et al., 2001; Binato and Oliveira, 2002), job shop scheduling (Binato et al., 2002), parallel machine scheduling with setup times (Kampke et al., 2009), balancing reconfigurable transfer lines (Essafi et al., 2012), container loading (Parreño et al., 2008), channel assignment in mobile phone networks (Gomes et al., 2001), broadcast scheduling (Butenko et al., 2004; Commander et al., 2004), just-in-time scheduling (Alvarez-Perez et al., 2008), single machine scheduling (Armentano and Araujo, 2006), examination scheduling (Casey and Thompson, 2003), semiconductor manufacturing (Deng et al., 2010), rural road network development (Scaparra and Church, 2005), maximum diversity (Duarte and Martí, 2007; Santos et al., 2005; Silva et al., 2004), max-min diversity (Resende et al., 2010a), capacitated location (Delmaire et al., 1999), locating emergency services (Silva and Serra, 2007), point-feature cartographic label placement (Cravo et al., 2008), set packing (Delorme et al., 2004), strip-packing (Alvarez-Valdes et al., 2008b), biclustering of gene expression data (Dharan and Nair, 2009; Das and Idicula, 2010), constrained two-dimensional nonguillotine cutting (Alvarez-Valdes et al., 2004), capacitated clustering (Deng and Bard, 2011), capacitated multi-source Weber problem (Luis et al., 2011), capacitated location routing (Prins et al., 2005), vehicle routing (Repoussis et al., 2007), family traveling salesperson (Morán-Mirabal et al., 2014), driver scheduling (Leone et al., 2011), portfolio optimization (Anagnostopoulos et al., 2010), automated test case prioritization (Maia et al., 2010), Golomb ruler search (Cotta and Fernández, 2004), commercial territory design motivated by a real-world application in a beverage distribution firm (Ríos-Mercado and Fernández, 2009), combined production-distribution (Boudia et al., 2007), and therapist routing and scheduling (Bard et al., 2014), among others.
The use of probabilistically determined values of the construction parameter α reported in Section 7.2 originally appeared in Prais and Ribeiro (1999; 2000b). The four tested strategies were incorporated into basic GRASP heuristics implemented for matrix decomposition for traffic assignment in communication satellites (Prais and Ribeiro, 2000a), set covering (Feo and Resende, 1989), weighted MAX-SAT (Resende et al., 1997; 2000), and graph planarization (Resende and Ribeiro, 1997; Ribeiro and Resende, 1999).
The two alternative randomized greedy approaches described in Section 7.3 were originally proposed in Resende and Werneck (2004) and compared with the semi-greedy algorithm for the p-median problem.
The idea of introducing perturbations into the original costs discussed in Section 7.4 is similar to that used in the so-called "noising" method of Charon and Hudry (1993; 2002). It was first applied in the context of GRASP to the shortest path heuristic of Takahashi and Matsuyama (1980), which is used as the main building block of the construction phase of the hybrid procedure proposed by Ribeiro et al. (2002) for the Steiner tree problem in graphs.
Another situation where cost perturbations can be very effective appears when no greedy algorithm is available for straightforward randomization. Canuto et al. (2001) made effective use of cost perturbations in their GRASP heuristic for the prize-collecting Steiner tree problem in graphs, for which no greedy algorithm was available to build starting solutions. In that case, the primal-dual algorithm of Goemans and Williamson (1996) was applied to build initial solutions, using different perturbed costs at each iteration of the hybrid GRASP procedure.
In the construction procedure of the basic GRASP, the next element to be introduced in the solution is chosen at random from the restricted candidate list. The elements of the restricted candidate list are assigned equal probabilities of being chosen. However, any probability distribution can be used to bias the selection towards some particular candidates. Bresina (1996) proposed a family of probability distributions to bias the selection mechanism in the construction phase of GRASP towards some particular candidates, as described in Section 7.5, instead of randomly choosing any element in the restricted candidate list (the basic GRASP heuristic uses a random bias function). Bresina's selection procedure applied to elements of the restricted candidate list was used in Binato et al. (2002).
Adaptive memory fundamentals and uses are reported by Rochat and Taillard (1995), Fleurent and Glover (1999), Patterson et al. (1999), Melián et al. (2004), and Martí et al. (2013a). Fleurent and Glover (1999) proposed the use of the long-term memory scheme described in Section 7.6 in multistart heuristics. The function K(e) can vary with time by changing the value of λ. Procedures for changing the value of λ were reported by Binato et al. (2002).
Glover and Laguna (1997) stated the proximate optimality principle as introduced in Section 7.7. Fleurent and Glover (1999) provided its interpretation in the context of GRASP and suggested the application of local search also during the construction phase. Local search was applied by Binato et al. (2002) after 40% and 80% of the construction moves have been taken, as well as at the end of the construction phase.
Section 7.8 presented alternative construction strategies based on the use of frequent patterns that appear in high-quality solutions previously detected. Vocabulary building and data mining can be combined into efficient pattern-based implementations of GRASP or other multistart procedures. Vocabulary building is an intensification strategy originally proposed in Glover and Laguna (1997) and Glover et al. (2000) for creating new solutions from good fragments of high-quality solutions previously found and stored in an elite set. See also Scholl et al. (1998) and Berger et al. (2000) for some successful applications. Aloise and Ribeiro (2011) developed a multistart procedure based on vocabulary building for multicommodity network design.
Data mining refers to the automatic extraction of new and potentially useful knowledge from data sets (Han et al., 2011; Witten et al., 2011). The extracted knowledge, expressed in terms of patterns or rules, represents important features of the data set at hand. The extraction of frequent items is one of the issues involved in the data mining process. Some algorithms exist to efficiently mine frequent items (Agrawal and Srikant, 1994; Han et al., 2000; Orlando et al., 2002; Goethals and Zaki, 2003). The patterns mined in the context of GRASP correspond to subsets of attributes that frequently appear in elite solutions.
The hybridization of GRASP with a data mining process was first introduced and applied to the set packing problem by Ribeiro et al. (2004; 2006). Afterwards, the method was evaluated in the context of three other applications, namely the maximum diversity problem (Santos et al., 2005), the server replication for reliable multicast problem (Santos et al., 2006), and the p-median problem (Plastino et al., 2009), with equally successful outcomes. The DM-GRASP hybrid heuristic, developed by Ribeiro et al. (2006), used a frequent item strategy that enhanced GRASP in terms of both solution quality and computation times. Frequent items extracted from the elite set represent patterns appearing in high-quality solutions, which are then used to perform an adapted construction phase which makes use of them. Frequent patterns are mined by the FPMax* algorithm (Grahne and Zhu, 2003). This hybridization strategy was also successfully applied to other combinatorial optimization problems (Santos et al., 2008; Plastino et al., 2009; 2011).
The MDM-GRASP variant, which performs data mining not only in the first phase, but also along the entire execution of the algorithm whenever the elite set changes, was developed by Barbalho et al. (2013) and Plastino et al. (2014). These references give numerical evidence that DM-GRASP and MDM-GRASP are able to improve not only the basic GRASP heuristic, but also implementations of Reactive GRASP and of GRASP with path-relinking.
Lagrangean relaxation (Beasley, 1993; Fisher, 2004) is a mathematical programming technique that can be used to provide lower bounds for minimization problems. Held and Karp (1970; 1971) were among the first to explore the use of the dual multipliers produced by Lagrangean relaxation to derive lower bounds, applying this idea in the context of the traveling salesman problem. Lagrangean heuristics further explore the use of different dual multipliers to generate feasible solutions. Beasley (1987; 1990b) described a Lagrangean heuristic for set covering, which can be extended to the set k-covering problem. The set multicovering or set k-covering problem is an extension of the classical set covering problem, in which each object is required to be covered at least k times. The problem finds applications in the design of communication networks and in computational biology. Pessoa et al. (2011; 2013) proposed the hybridization of GRASP and Lagrangean relaxation leading to the Lagrangean GRASP heuristic described in Section 7.9. They generated 135 set k-covering instances from 45 set covering instances of the OR-Library (Beasley, 1990a), using three different coverage factors k. The experiments they performed were on a 2.33 GHz Intel Xeon E5410 Quadcore computer running Linux Ubuntu 8.04. All algorithms were implemented in C and compiled with gcc 4.1.2. They used the same strategy proposed by Held et al. (1974) for updating the dual multipliers from one iteration to the next. Beasley (1990b) reported as computationally useful the adjustment of components of the subgradients to zero whenever they do not effectively contribute to the update of the multipliers, i.e., arbitrarily setting g i q = 0 whenever g i q > 0 and λ i q = 0, for i = 1,..., m.
References
R. Agrawal and R. Srikant. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference on Very Large Data Bases, pages 487–499. Morgan Kaufmann Publishers, 1994.
D. Aloise and C.C. Ribeiro. Adaptive memory in multistart heuristics for multicommodity network design. Journal of Heuristics, 17:153–179, 2011.
G.A. Alvarez-Perez, J.L. González-Velarde, and J.W. Fowler. Crossdocking – Just in time scheduling: An alternative solution approach. Journal of the Operational Research Society, 60:554–564, 2008.
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A GRASP algorithm for constrained two-dimensional non-guillotine cutting problems. Journal of the Operational Research Society, 56:414–425, 2004.
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. Reactive GRASP for the strip-packing problem. Computers & Operations Research, 35:1065–1083, 2008b.
K.P. Anagnostopoulos, P.D. Chatzoglou, and S. Katsavounis. A reactive greedy randomized adaptive search procedure for a mixed integer portfolio optimization problem. Managerial Finance, 36:1057–1065, 2010.
V.A. Armentano and O.C.B. Araujo. GRASP with memory-based mechanisms for minimizing total tardiness in single machine scheduling with setup times. Journal of Heuristics, 12:427–446, 2006.
L. Bahiense, G.C. Oliveira, M. Pereira, and S. Granville. A mixed integer disjunctive model for transmission network expansion. IEEE Transactions on Power Systems, 16:560–565, 2001.
H. Barbalho, I. Rosseti, S.L. Martins, and A. Plastino. A hybrid data mining GRASP with path-relinking. Computers & Operations Research, 40:3159–3173, 2013.
J.F. Bard, Y. Shao, and A.I. Jarrah. A sequential GRASP for the therapist routing and scheduling problem. Journal of Scheduling, 17:109–133, 2014.
J.E. Beasley. An algorithm for set-covering problems. European Journal of Operational Research, 31:85–93, 1987.
J.E. Beasley. OR-Library: Distributing test problems by electronic mail. Journal of the Operational Research Society, 41:1069–1072, 1990a.
J.E. Beasley. A Lagrangean heuristic for set-covering problems. Naval Research Logistics, 37:151–164, 1990b.
J.E. Beasley. Lagrangean relaxation. In C.R. Reeves, editor, Modern heuristic techniques for combinatorial problems, pages 243–303. Blackwell Scientific Publications, Oxford, 1993.
D. Berger, B. Gendron, J.-Y. Potvin, S. Raghavan, and P. Soriano. Tabu search for a network loading problem with multiple facilities. Journal of Heuristics, 6:253–267, 2000.
S. Binato and G.C. Oliveira. A reactive GRASP for transmission network expansion planning. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 81–100. Kluwer Academic Publishers, Boston, 2002.
S. Binato, W.J. Hery, D. Loewenstern, and M.G.C. Resende. A GRASP for job shop scheduling. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 59–79. Kluwer Academic Publishers, Boston, 2002.
M. Boudia, M.A.O. Louly, and C. Prins. A reactive GRASP and path relinking for a combined production–distribution problem. Computers & Operations Research, 34:3402–3419, 2007.
J.L. Bresina. Heuristic-biased stochastic sampling. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 271–278, Portland, 1996. Association for the Advancement of Artificial Intelligence.
S.I. Butenko, C.W. Commander, and P.M. Pardalos. A GRASP for broadcast scheduling in ad-hoc TDMA networks. In Proceedings of the International Conference on Computing, Communications, and Control Technologies, volume 5, pages 322–328, Austin, 2004.
S.A. Canuto, M.G.C. Resende, and C.C. Ribeiro. Local search with perturbations for the prize-collecting Steiner tree problem in graphs. Networks, 38:50–58, 2001.
S. Casey and J. Thompson. GRASPing the examination scheduling problem. In E. Burke and P. De Causmaecker, editors, Practice and theory of automated timetabling IV, volume 2740 of Lecture Notes in Computer Science, pages 232–244. Springer, Berlin, 2003.
I. Charon and O. Hudry. The noising method: A new method for combinatorial optimization. Operations Research Letters, 14:133–137, 1993.
I. Charon and O. Hudry. The noising methods: A survey. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 245–261. Kluwer Academic Publishers, Boston, 2002.
C.W. Commander, S.I. Butenko, P.M. Pardalos, and C.A.S. Oliveira. Reactive GRASP with path relinking for broadcast scheduling. In Proceedings of the 40th Annual International Telemetry Conference, pages 792–800, San Diego, 2004.
C. Cotta and A.J. Fernández. A hybrid GRASP–evolutionary algorithm approach to Golomb ruler search. In X. Yao, E.K. Burke, J.A. Lozano, J. Smith, J.J. Merelo-Guervós, J.A. Bullinaria, J.E. Rowe, P. Tiňo, A. Kabán, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature, volume 3242 of Lecture Notes in Computer Science, pages 481–490. Springer, Berlin, 2004.
G.L. Cravo, G.M. Ribeiro, and L.A.N. Lorena. A greedy randomized adaptive search procedure for the point-feature cartographic label placement. Computers & Geosciences, 34:373–386, 2008.
S. Das and S.M. Idicula. Application of reactive GRASP to the biclustering of gene expression data. In Proceedings of the International Symposium on Biocomputing, page 14, Calicut, 2010. ACM.
H. Delmaire, J.A. Díaz, E. Fernández, and M. Ortega. Reactive GRASP and tabu search based heuristics for the single source capacitated plant location problem. INFOR, 37:194–225, 1999.
X. Delorme, X. Gandibleux, and J. Rodriguez. GRASP for set packing problems. European Journal of Operational Research, 153:564–580, 2004.
Y. Deng and J.F. Bard. A reactive GRASP with path relinking for capacitated clustering. Journal of Heuristics, 17:119–152, 2011.
Y. Deng, J.F. Bard, G.R. Chacon, and J. Stuber. Scheduling back-end operations in semiconductor manufacturing. IEEE Transactions on Semiconductor Manufacturing, 23:210–220, 2010.
S. Dharan and A.S. Nair. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure. BMC Bioinformatics, 10 (Suppl 1):S27, 2009.
A. Duarte and R. Martí. Tabu search and GRASP for the maximum diversity problem. European Journal of Operational Research, 178:71–84, 2007.
M. Essafi, X. Delorme, and A. Dolgui. A reactive GRASP and path relinking for balancing reconfigurable transfer lines. International Journal of Production Research, 50:5213–5238, 2012.
T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67–71, 1989.
M.L. Fisher. The Lagrangean relaxation method for solving integer programming problems. Management Science, 50:1861–1871, 2004.
C. Fleurent and F. Glover. Improved constructive multistart strategies for the quadratic assignment problem using adaptive memory. INFORMS Journal on Computing, 11:198–204, 1999.
F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, Boston, 1997.
F. Glover, M. Laguna, and R. Martí. Fundamentals of scatter search and path relinking. Control and Cybernetics, 39:653–684, 2000.
M.X. Goemans and D.P. Williamson. The primal dual method for approximation algorithms and its application to network design problems. In D. Hochbaum, editor, Approximation algorithms for NP-hard problems, pages 144–191. PWS Publishing Company, Boston, 1996.
B. Goethals and M.J. Zaki. Advances in frequent itemset mining implementations: Introduction to FIMI03. In B. Goethals and M.J. Zaki, editors, Proceedings of the IEEE ICDM 2003 Workshop on Frequent Itemset Mining Implementations, pages 1–12, Melbourne, 2003.
F.C. Gomes, P. Pardalos, C.S. Oliveira, and M.G.C. Resende. Reactive GRASP with path relinking for channel assignment in mobile phone networks. In Proceedings of the 5th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 60–67, Rome, 2001. ACM Press.
G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets, 2003. URL http://bit.ly/1qxiKbl. Last visited on April 16, 2016.
J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, pages 1–12, Dallas, 2000. ACM.
J. Han, M. Kamber, and J. Pei. Data mining: Concepts and techniques. Morgan Kaufmann Publishers, San Francisco, 3rd edition, 2011.MATH
M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees. Operations Research, 18:1138–1162, 1970.MathSciNetCrossRefMATH
M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees: Part II. Mathematical Programming, 1:6–25, 1971.MathSciNetCrossRefMATH
M. Held, P. Wolfe, and H.P. Crowder. Validation of subgradient optimization. Mathematical Programming, 6:62–88, 1974.MathSciNetCrossRefMATH
E.H. Kampke, J.E.C. Arroyo, and A.G. Santos. Reactive GRASP with path relinking for solving parallel machines scheduling problem with resource-assignable sequence dependent setup times. In Proceedings of the World Congress on Nature and Biologically Inspired Computing, pages 924–929, Coimbatore, 2009. IEEE.
R. De Leone, P. Festa, and E. Marchitto. Solving a bus driver scheduling problem with randomized multistart heuristics. International Transactions in Operational Research, 18:707–727, 2011.MathSciNetCrossRefMATH
M. Luis, S. Salhi, and G. Nagy. A guided reactive GRASP for the capacitated multi-source Weber problem. Computers & Operations Research, 38:1014–1024, 2011.MathSciNetCrossRefMATH
C.L.B. Maia, R.A.F. Carmo, F.G. Freitas, G.A.L. Campos, and J.T. Souza. Automated test case prioritization with reactive GRASP. Advances in Software Engineering, 2010, 2010. doi: 10.1155/2010/428521. Article ID 428521.
R. Martí, M.G.C. Resende, and C.C. Ribeiro. Multi-start methods for combinatorial optimization. European Journal of Operational Research, 226:1–8, 2013a.
B. Melián, M. Laguna, and J.A. Moreno-Pérez. Capacity expansion of fiber optic networks with WDM systems: Problem formulation and comparative analysis. Computers & Operations Research, 31:461–472, 2004.CrossRef00011-X)MATH
L.F. Morán-Mirabal, J.L. González-Velarde, and M.G.C. Resende. Randomized heuristics for the family traveling salesperson problem. International Transactions in Operational Research, 21:41–57, 2014.MathSciNetCrossRefMATH
S. Orlando, P. Palmerini, and R. Perego. Adaptive and resource-aware mining of frequent sets. In Proceedings of the 2002 IEEE International Conference on Data Mining, pages 338–345, Maebashi City, 2002. IEEE.
F. Parreño, R. Alvarez-Valdes, J.M. Tamarit, and J.F. Oliveira. A maximal-space algorithm for the container loading problem. INFORMS Journal on Computing, 20:412–422, 2008.MathSciNetCrossRefMATH
R.A. Patterson, H. Pirkul, and E. Rolland. A memory adaptive reasoning technique for solving the capacitated minimum spanning tree problem. Journal of Heuristics, 5:159–180, 1999.CrossRefMATH
L.S. Pessoa, M.G.C. Resende, and C.C. Ribeiro. Experiments with the LAGRASP heuristic for set k-covering. Optimization Letters, 5:407–419, 2011.MathSciNetCrossRefMATH
L.S. Pessoa, M.G.C. Resende, and C.C. Ribeiro. A hybrid Lagrangean heuristic with GRASP and path-relinking for set k-covering. Computers & Operations Research, 40:3132–3146, 2013.MathSciNetCrossRef
A. Plastino, E.R. Fonseca, R. Fuchshuber, S.L. Martins, A.A. Freitas, M. Luis, and S. Salhi. A hybrid data mining metaheuristic for the p-median problem. In H. Park, S. Parthasarathy, H. Liu, and Z. Obradovic, editors, Proceedings of the 9th SIAM International Conference on Data Mining, pages 305–316, Sparks, 2009. SIAM.
A. Plastino, R. Fuchshuber, S.L. Martins, A.A. Freitas, and S. Salhi. A hybrid data mining metaheuristic for the p-median problem. Statistical Analysis and Data Mining, 4:313–335, 2011.MathSciNetCrossRef
A. Plastino, H. Barbalho, L.F.M. Santos, R. Fuchshuber, and S.L. Martins. Adaptive and multi-mining versions of the DM-GRASP hybrid metaheuristic. Journal of Heuristics, 20:39–74, 2014.CrossRef
M. Prais and C.C. Ribeiro. Parameter variation in GRASP implementations. In C.C. Ribeiro and P. Hansen, editors, Extended Abstracts of the Third Metaheuristics International Conference, pages 375–380, Angra dos Reis, 1999.
M. Prais and C.C. Ribeiro. Reactive GRASP: An application to a matrix decomposition problem in TDMA traffic assignment. INFORMS Journal on Computing, 12:164–176, 2000a.
M. Prais and C.C. Ribeiro. Parameter variation in GRASP procedures. Investigación Operativa, 9:1–20, 2000b.
C. Prins, C. Prodhon, and R.Wolfler-Calvo. A reactive GRASP and path relinking algorithm for the capacitated location routing problem. In Proceedings of the International Conference on Industrial Engineering and Systems Management, Marrakech, 2005. I4E2. ISBN 2-9600532-0-6.
P.P. Repoussis, C.D. Tarantilis, and G. Ioannou. A hybrid metaheuristic for a real life vehicle routing problem. In T. Boyanov, S. Dimova, K. Georgiev, and G. Nikolov, editors, Numerical methods and applications, volume 4310 of Lecture Notes in Computer Science, pages 247–254. Springer, Berlin, 2007.
M.G.C. Resende and C.C. Ribeiro. A GRASP for graph planarization. Networks, 29:173–189, 1997.CrossRef1097-0037\(199705\)29%3A3<173%3A%3AAID-NET5>3.0.CO%3B2-E)MATH
M.G.C. Resende and R.F. Werneck. A hybrid heuristic for the p-median problem. Journal of Heuristics, 10:59–88, 2004.CrossRefMATH
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Approximate solution of weighted MAX-SAT problems using GRASP. In J. Gu and P.M. Pardalos, editors, Satisfiability problems, volume 35 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 393–405. American Mathematical Society, Providence, 1997.
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Fortran subroutines for computing approximate solutions of MAX-SAT problems using GRASP. Discrete Applied Mathematics, 100:95–113, 2000.CrossRef00171-7)MATH
M.G.C. Resende, R. Martí, M. Gallego, and A. Duarte. GRASP and path relinking for the max-min diversity problem. Computers & Operations Research, 37: 498–508, 2010a.
C.C. Ribeiro and M.G.C. Resende. Algorithm 797: Fortran subroutines for approximate solution of graph planarization problems using GRASP. ACM Transactions on Mathematical Software, 25:341–352, 1999.CrossRefMATH
C.C. Ribeiro, E. Uchoa, and R.F. Werneck. A hybrid GRASP with perturbations for the Steiner problem in graphs. INFORMS Journal on Computing, 14:228–246, 2002.MathSciNetCrossRefMATH
M.H.F. Ribeiro, V.F. Trindade, A. Plastino, and S.L. Martins. Hybridization of GRASP metaheuristic with data mining techniques. In Proceedings of the ECAI Workshop on Hybrid Metaheuristics, pages 69–78, Valencia, 2004.
M.H.F. Ribeiro, A. Plastino, and S.L. Martins. Hybridization of GRASP metaheuristic with data mining techniques. Journal of Mathematical Modelling and Algorithms, 5:23–41, 2006.MathSciNetCrossRefMATH
R.Z. Ríos-Mercado and E. Fernández. A reactive GRASP for a commercial territory design problem with multiple balancing requirements. Computers & Operations Research, 36:755–776, 2009.CrossRefMATH
Y. Rochat and É. Taillard. Probabilistic diversification and intensification in local search for vehicle routing. Journal of Heuristics, 1:147–167, 1995.CrossRefMATH
L.F. Santos, M.H.F. Ribeiro, A. Plastino, and S.L. Martins. A hybrid GRASP with data mining for the maximum diversity problem. In M.J. Blesa, C. Blum, A. Roli, and M. Sampels, editors, Hybrid metaheuristics, volume 3636 of Lecture Notes in Computer Science, pages 116–127. Springer, Berlin, 2005.
L.F. Santos, C.V. Albuquerque, S.L. Martins, and A. Plastino. A hybrid GRASP with data mining for efficient server replication for reliable multicast. In Proceedings of the 49th Annual IEEE GLOBECOM Technical Conference, pages 1–6, San Francisco, 2006. IEEE. doi: 10.1109/ GLOCOM.2006.246.
L.F. Santos, S.L. Martins, and A. Plastino. Applications of the DM-GRASP heuristic: A survey. International Transactions on Operational Research, 15:387–416, 2008.MathSciNetCrossRefMATH
M. Scaparra and R. Church. A GRASP and path relinking heuristic for rural road network development. Journal of Heuristics, 11:89–108, 2005.CrossRefMATH
A. Scholl, R. Klein, and W. Domschke. Pattern based vocabulary building for effectively sequencing mixed-model assembly lines. Journal of Heuristics, 4:359–381, 1998.CrossRefMATH
F. Silva and D. Serra. Locating emergency services with different priorities: The priority queuing covering location problem. Journal of the Operational Research Society, 59:1229–1238, 2007.CrossRefMATH
G.C. Silva, L.S. Ochi, and S.L. Martins. Experimental comparison of greedy randomized adaptive search procedures for the maximum diversity problem. In C.C. Ribeiro and S.L. Martins, editors, Experimental and efficient algorithms, volume 3059 of Lecture Notes in Computer Science, pages 498–512. Springer, Berlin, 2004.
H. Takahashi and A. Matsuyama. An approximate solution for the Steiner problem in graphs. Mathematica Japonica, 24:573–577, 1980.MathSciNetMATH
I.H. Witten, E. Frank, and M.A. Hall. Data mining: Practical machine learning tools and techniques. Morgan Kaufmann, San Francisco, 3rd edition, 2011.
# 8. Path-relinking
Path-relinking is a search intensification strategy. As a major enhancement to heuristic search methods for solving combinatorial optimization problems, its hybridization with other metaheuristics has led to significant improvements in both solution quality and running times of hybrid heuristics. In this chapter, we review the fundamentals of path-relinking, implementation issues and strategies, and the use of randomization in path-relinking.
## 8.1 Template and mechanics of path-relinking
Path-relinking is an intensification strategy to explore trajectories connecting elite solutions (i.e., high-quality solutions) of combinatorial optimization problems. In this section, we focus on the path-relinking operator, including its template and mechanics.
As introduced in Chapter , we consider the search space graph  associated with a combinatorial optimization problem. The nodes of this graph correspond to the set F of feasible solutions. There is an edge (S, S′) ∈ M if and only if S ∈ F, S′ ∈ F, S′ ∈ N(S), and S ∈ N(S′), where N(S) ⊆ F is the neighborhood of S. Path-relinking is usually carried out between two solutions in F: one is the initial solution S i , while the other is the guiding solution S g . One or more paths connecting these solutions in the search space graph can be explored by path-relinking in the search for better solutions. Local search is often applied to the best solution in each of these paths since there is no guarantee that this solution is locally optimal.
### 8.1.1 Restricted neighborhoods
Let S ∈ F be any solution (i.e., a node) on a path in  leading from the initial solution S i ∈ F to the guiding solution S g ∈ F. Not all solutions in the neighborhood N(S) are allowed to follow S on this path from S i to S g . Path-relinking restricts its possible choices to the feasible solutions in N(S) that are more similar to S g than S is (measures of solution similarity will be discussed later). We denote by N(S: S g ) ⊆ N(S) this restricted neighborhood, which is therefore defined exclusively by moves that introduce in S attributes of the guiding solution S g that do not appear in S.
The elements of the ground set E that appear in S but not in S g are those that must be removed from the current solution S in a path leading to S g . Similarly, the elements of the ground set E that appear in S g but not in S are those that must be incorporated into S in a path leading to S g . The restricted neighborhood N(S: S g ) is formed by all feasible solutions in N(S) that may appear in a path from S to S g .
Therefore, path-relinking may be viewed as a strategy that seeks to incorporate attributes of a guiding solution (which is often a high-quality solution) into the current solution, by favoring these attributes in the selected moves. After evaluating each potential move leading to a feasible solution in N(S: S g ), the most common strategy is a greedy approach, where one selects the move resulting in a best-quality restricted neighbor of S that is closer to S g than S is.
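Before turning to the examples, the mechanics above can be made concrete in code. The following is a minimal sketch (ours, not the book's), assuming solutions are represented as sets of ground-set elements and that feasibility is given by a problem-specific predicate `is_feasible` supplied by the caller:

```python
# Sketch of the restricted neighborhood N(S : Sg) for set-based solutions:
# each move removes one element of S \ Sg and adds one element of Sg \ S,
# so every neighbor is one step closer to the guiding solution Sg.
# 'is_feasible' is a problem-specific predicate (an assumption here).

def restricted_neighborhood(S, Sg, is_feasible):
    """Enumerate the feasible neighbors of S that are closer to Sg."""
    for out in S - Sg:            # elements to be removed from S
        for into in Sg - S:       # elements to be incorporated into S
            neighbor = (S - {out}) | {into}
            if is_feasible(neighbor):
                yield neighbor

# Toy usage: ground set {1, 2, 3, 4}; any 2-element subset is "feasible".
neighbors = list(restricted_neighborhood({1, 2}, {3, 4},
                                         lambda x: len(x) == 2))
```

In this toy setting all four swap combinations are feasible; in the structured problems below, feasibility prunes several of them.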
We next illustrate the restricted neighborhoods used by path-relinking with three examples.
Minimum spanning tree problem – Restricted neighborhood
Consider the weighted graph depicted in Figure 8.1(a) and two of its spanning trees in Figures 8.1(b) and 8.1(c). Suppose the spanning tree in Figure 8.1(b) is the current solution S and the one in Figure 8.1(c) is the guiding solution S g . The total weight of solution S is 35, while that of S g is 32.
Fig. 8.1
Current and guiding solutions for path-relinking applied to the minimum spanning tree problem.
Edges (3, 4), (1, 3), and (2, 5) are present in S g but not in S, while edges (2, 4), (1, 2), and (1, 5) appear in S but not in S g . Therefore, to transform solution S into S g it is necessary that all edges (2, 4), (1, 2), and (1, 5) be removed from S and be replaced by the three edges (3, 4), (1, 3), and (2, 5) that originally appear only in S g . If N is a swap neighborhood, then there are nine moves associated with ordered pairs of edges such that the first belongs to S but not to S g (edge to be removed from S), while the second belongs to S g but not to S (edge to be added to S). However, note that only four of the solutions resulting from these moves are feasible, corresponding to four ordered pairs of edges to be swapped: swap edge (2,4) with (3,4), edge (1,2) with (1,3), edge (1,5) with (2,5), and edge (1,2) with (2,5). Swapping, e.g., edge (1,2) with (3,4) would lead to an infeasible solution corresponding to a graph with two connected components (the first would be the cycle formed by nodes 2, 3, and 4, with the second being the edge connecting nodes 1 and 5).
This situation is illustrated in Figure 8.2, in which only four moves lead to the feasible solutions in the restricted neighborhood N(S: S g ). If edge (3, 4) is added to S, then the cycle (3, 4) − (2, 4) − (2, 3) is created. This is followed by the removal of edge (2, 4), leading to solution A. If edge (1, 3) is added to S, then the cycle (1, 3) − (2, 3) − (1, 2) is created. In this case, edge (1, 2) is removed from S and solution B is obtained. If edge (2, 5) is added to S, then the cycle (2, 5) − (1, 5) − (1, 2) is created. One possible edge to be removed is (1, 5), leading to solution C. Finally, we note that edge (1, 2) may also be removed following the addition of edge (2, 5), in which case the feasible solution D is obtained.
Fig. 8.2
Minimum spanning tree problem: four swap moves from the current solution S towards the guiding solution S g .
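The feasibility count of this example can be verified mechanically. The script below is our own sketch, not the book's code; the edge sets are reconstructed from the example, with the shared edge (2, 3) inferred from the cycles mentioned in the text:

```python
# Enumerate the swap moves from S toward Sg that keep a spanning tree.

def is_spanning_tree(edges, nodes={1, 2, 3, 4, 5}):
    # |V| - 1 edges plus connectivity characterizes a spanning tree.
    if len(edges) != len(nodes) - 1:
        return False
    reached, frontier = set(), {next(iter(nodes))}
    while frontier:
        v = frontier.pop()
        reached.add(v)
        for (a, b) in edges:
            if a == v and b not in reached:
                frontier.add(b)
            elif b == v and a not in reached:
                frontier.add(a)
    return reached == nodes

S = {(2, 4), (1, 2), (1, 5), (2, 3)}   # current solution
Sg = {(3, 4), (1, 3), (2, 5), (2, 3)}  # guiding solution

feasible_swaps = [(out, into)
                  for out in S - Sg        # edge removed from S
                  for into in Sg - S       # edge added from Sg
                  if is_spanning_tree((S - {out}) | {into})]
```

Of the nine candidate swaps, only four survive the spanning-tree check, matching solutions A through D of the figure.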
Traveling salesman problem – Restricted neighborhood
We now consider the traveling salesman problem associated with the weighted graph in Figure 8.3(a). Two of its Hamiltonian cycles are depicted in Figures 8.3(b) and 8.3(c). Suppose the tour in Figure 8.3(b) is the current solution represented by the linear permutation S = (1, 2, 3, 4, 5), while that in Figure 8.3(c) is the guiding solution associated with the linear permutation S g = (1, 3, 5, 2, 4). We note that these two linear permutations correspond to two different circular permutations of the five cities, i.e., they are associated with two different tours. The total length of solution S is 17, while that of S g is 18.
Fig. 8.3
Current solution S = (1, 2, 3, 4, 5) and guiding solution S g = (1, 3, 5, 2, 4) for path-relinking applied to the traveling salesman problem.
We observe that these two solutions only match in the first component of their associated linear permutations, i.e., both start from vertex 1: S(1) = S g (1) = 1. Therefore, there are four misplaced cities between S and S g , each of them corresponding to a position in the tour for which the two linear permutations originally differ.
Let us suppose that neighborhood N 2 defined for the traveling salesman problem in Section 4.1 is being used. Each neighbor is obtained by a move consisting of the exchange of two cities in different positions of the current solution S. Therefore, solution S = (1, 2, 3, 4, 5) has six neighbors if node 1 is fixed as the first: (1, 3, 2, 4, 5), (1, 2, 4, 3, 5), (1, 2, 3, 5, 4), (1, 4, 3, 2, 5), (1, 2, 5, 4, 3), and (1, 5, 3, 4, 2). Since S g = (1, 3, 5, 2, 4) is the guiding solution, four out of these six solutions in neighborhood N 2(S) also belong to the restricted neighborhood N(S: S g ), each of them making the position of a new city coincide in the new solution and in S g : solution A = (1, 3, 2, 4, 5) makes node 3 the second in the tour, solution B = (1, 2, 5, 4, 3) makes node 5 the third in the tour, solution C = (1, 4, 3, 2, 5) makes node 2 the fourth in the tour, and solution D = (1, 2, 3, 5, 4) makes node 4 the last in the tour.
The restricted neighborhood N(S: S g ) is shown in Figure 8.4, where we denote by swap(i, j) the move that swaps the cities in positions i and j of the linear permutation associated with the current solution S and makes at least one of them coincide with its final position in the linear permutation corresponding to the guiding solution S g . ■
Fig. 8.4
Traveling salesman problem: four moves from the current solution S towards the guiding solution S g .
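This restricted neighborhood can also be enumerated mechanically. The sketch below (ours, not the book's) keeps city 1 fixed in the first position and accepts a swap only if it places at least one city in its position in the guiding permutation:

```python
S  = (1, 2, 3, 4, 5)   # current tour (linear permutation, city 1 fixed)
Sg = (1, 3, 5, 2, 4)   # guiding tour

def restricted_swaps(S, Sg):
    """Swaps of two positions that match at least one city with Sg."""
    moves = []
    for i in range(1, len(S)):          # position 0 (city 1) stays fixed
        for j in range(i + 1, len(S)):
            # After swapping, position i holds S[j] and position j holds S[i].
            if S[j] == Sg[i] or S[i] == Sg[j]:
                t = list(S)
                t[i], t[j] = t[j], t[i]
                moves.append(tuple(t))
    return moves

neighbors = restricted_swaps(S, Sg)
```

Of the six swap neighbors of S, exactly the four solutions A through D of the figure pass the restriction.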
Knapsack problem – Restricted neighborhood
We consider the optimization version of the knapsack problem, as introduced in Section 1.2. In this problem, one has a set I = { 1,..., n} of items to be placed in a knapsack. Integer numbers a i and c i represent, respectively, the weight and the utility of each item i ∈ I. We assume that each item fits in the knapsack by itself and denote by b the maximum total weight that can be taken in the knapsack. We have seen in Section 4.1 that every solution S of the knapsack problem can be represented by a binary vector (x 1,..., x n ), in which x i = 1 if item i is selected, x i = 0 otherwise, for every i = 1,..., n. A solution S = (x 1,..., x n ) is feasible if ∑ i ∈ I a i ⋅ x i ≤ b.
We recall the example in Figure 2.5, where four items are available to be placed in a knapsack of capacity 19. The weights of the yellow and green items are each equal to 10 and those of the blue and red items are both equal to 5. Therefore, only two of the four items fit together in the knapsack. The two heaviest items have utilities 20 and 10 to the hiker, while the two items with least weights have utilities 10 and 5. We consider the red, green, blue, and yellow items indexed by 1, 2, 3, and 4, respectively. The four items are illustrated in Figure 8.5(a) and two feasible solutions appear in Figures 8.5(b) and 8.5(c). Suppose the solution in Figure 8.5(b) is the current solution represented by vector S = (1, 1, 0, 0), while the solution in Figure 8.5(c) is the guiding solution associated with vector S g = (0, 0, 1, 1).
Fig. 8.5
Current and guiding solutions for a knapsack problem with four items.
These two solutions differ in all elements. Therefore, there are four moves in a path leading from S to S g , each of them corresponding to an item that appears in one solution, but not in the other. Following the solution representation proposed for the knapsack problem in Section 4.1, there are four possible neighbors in N(S), each of them corresponding to flipping the value of one variable of the current solution S. We denote by flip(j) the move that replaces the value of x j by 1 − x j , for j = 1,..., n: flip(1) sets x 1 = 0, flip(2) sets x 2 = 0, flip(3) sets x 3 = 1, and flip(4) sets x 4 = 1. However, since the last two moves lead to infeasible solutions, the restricted neighborhood N(S: S g ) illustrated in Figure 8.6 contains only two solutions: solution A = (0, 1, 0, 0) and solution B = (1, 0, 0, 0), corresponding, respectively, to the moves flip(1) that sets x 1 = 0 and flip(2) that sets x 2 = 0. ■
Fig. 8.6
Knapsack problem: two moves from the current solution S towards the guiding solution S g .
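The two feasible flips can be checked with a few lines of code. This is our own sketch, not the book's code; it uses the weights 5, 10, 5, 10 (red, green, blue, yellow) and capacity 19 given in the example:

```python
a = [5, 10, 5, 10]   # item weights: red, green, blue, yellow
b = 19               # knapsack capacity

def restricted_flips(S, Sg):
    """Feasible solutions obtained by flipping one bit of S toward Sg."""
    out = []
    for j in range(len(S)):
        if S[j] != Sg[j]:                 # flip only differing variables
            x = list(S)
            x[j] = 1 - x[j]
            if sum(ai * xi for ai, xi in zip(a, x)) <= b:
                out.append(tuple(x))
    return out

neighbors = restricted_flips((1, 1, 0, 0), (0, 0, 1, 1))
```

Flips 3 and 4 overload the knapsack, so only solutions A and B remain, as in the figure.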
### 8.1.2 A template for forward path-relinking
The algorithm in Figure 8.7 is an implementation of forward path-relinking for a minimization problem, where S i ∈ F is the initial solution and S g ∈ F is the guiding solution. We assume that the guiding solution S g is at least as good as (and possibly better than) the initial solution S i , hence the qualification of this strategy as forward. The current solution, the incumbent, and their cost are initialized in lines 1 to 3.
Fig. 8.7
Pseudo-code for a template of a forward path-relinking algorithm for minimization problems.
In line 4, the algorithm checks if the restricted neighborhood N(S: S g ) ⊆ N(S) contains at least one feasible solution. In most cases, the restricted neighborhood N(S: S g ) does not have to be explicitly computed and stored: instead, its elements may only be implicitly enumerated on-the-fly.
As the algorithm traverses the path from S to S g , the best restricted neighbor solution of the current solution is selected at each iteration. A path from the initial solution S i to the guiding solution S g is created in the loop going from line 4 to 10. The best restricted neighbor S is selected in line 5. Lines 6 to 9 update the best solution S ∗ and its cost if a new best-quality solution is found.
Since the best solution found along the path from S i to S g may not be locally optimal, local search is applied to it in line 11 and the final solution obtained by forward path-relinking and its cost are returned in line 12.
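The template of Figure 8.7 can be sketched in a few lines of Python. This is our own rendering, under the assumptions that `neighbors(S, Sg)` enumerates the restricted neighborhood N(S : S g), that every move strictly reduces the distance to S g (so the loop terminates), and that `local_search` is supplied by the caller:

```python
# Minimal sketch of the forward path-relinking template (minimization).

def forward_path_relinking(Si, Sg, f, neighbors, local_search=lambda s: s):
    S, best = Si, Si
    while True:
        candidates = list(neighbors(S, Sg))
        if not candidates:               # the guiding solution was reached
            break
        S = min(candidates, key=f)       # greedy: best restricted neighbor
        if f(S) < f(best):
            best = S
    return local_search(best)            # line 11 of the template

# Toy usage on bit strings with flip moves and a linear objective.
w = [3, -1, 4, -2]
f = lambda x: sum(wi * xi for wi, xi in zip(w, x))
flips = lambda S, Sg: [tuple(1 - v if k == j else v for k, v in enumerate(S))
                       for j in range(len(S)) if S[j] != Sg[j]]
best = forward_path_relinking((1, 1, 0, 0), (0, 0, 1, 1), f, flips)
```

In this toy run both endpoints have value 2, yet the best solution found on the path, (0, 1, 0, 1), has value −3: an intermediate solution improves on both extremities, which is precisely the payoff of path-relinking.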
Knapsack problem – Forward path-relinking
Figure 8.8 illustrates the full application of forward path-relinking to the same instance of the knapsack problem with four items that was used in the last example presented in the previous section. As before, we consider the red, green, blue, and yellow items indexed by 1, 2, 3, and 4, respectively, and we denote by flip(j) the move that replaces the current value of x j by 1 − x j . The initial solution is S i = (1, 1, 0, 0) and the guiding solution is S g = (0, 0, 1, 1). We recall that the knapsack capacity is 19.
Fig. 8.8
Example of forward path-relinking applied to an instance of the knapsack problem with four items: the path from the initial solution S i = (1, 1, 0, 0) to the guiding solution S g = (0, 0, 1, 1) has exactly four moves, corresponding to the number of items by which the initial and guiding solutions differ. In this example, the best solution along the generated path coincides with the guiding solution S g .
The first iteration of path-relinking corresponds to the example in Section 8.1. The initial solution is S = S i = (1, 1, 0, 0) and there are two possible moves in the restricted neighborhood: flip(1) sets x 1 = 0 and leads to solution A = (0, 1, 0, 0), whose utility is 10, while flip(2) sets x 2 = 0 and leads to solution B = (1, 0, 0, 0), whose utility is 5. Since we are facing a maximization problem, move flip(1) is selected. Item 1 is removed from the knapsack and path-relinking proceeds from solution A. The second iteration begins with two possible moves to incorporate attributes of the guiding solution S g = (0, 0, 1, 1) into the current solution A: flip(2) sets x 2 = 0 and leads to solution C = (0, 0, 0, 0), whose utility is 0, while flip(3) sets x 3 = 1 and leads to solution D = (0, 1, 1, 0), whose utility is 20. Move flip(3) is selected, item 3 is included in the knapsack, and path-relinking moves to solution D. At this time, there is only one possible remaining move to be applied to solution D that makes the resulting solution closer, or more similar, to the guiding solution: flip(2) sets x 2 = 0 and leads to solution E = (0, 0, 1, 0), whose utility is 10. Finally, once again there is only one possible move to be performed at the fourth and last iteration: flip(4) sets x 4 = 1 and leads to solution F = (0, 0, 1, 1), which coincides with S g and whose utility is 30.
The initial and guiding solutions differ by all four elements, each of them corresponding to a move that would make the current solution closer to the guiding solution. As expected, path-relinking reaches the guiding solution after exactly four moves. In this particular example, the best solution along the generated path coincided with the guiding solution S g . However, in many cases, the best solution found improves both the initial and the guiding solutions, as will be illustrated later in this chapter. ■
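The walkthrough above can be replayed in code. The sketch below is ours, not the book's; the weights 5, 10, 5, 10 and capacity 19 come from the text, and the utilities 5, 10, 10, 20 for the red, green, blue, and yellow items are the assignment implied by the utility values quoted along the path. Since this is a maximization problem, each step greedily takes the feasible flip of highest utility (the sketch assumes, as holds in this instance, that a feasible flip always exists):

```python
a = [5, 10, 5, 10]    # weights: red, green, blue, yellow
c = [5, 10, 10, 20]   # utilities, as implied by the example
b = 19                # knapsack capacity

def utility(x): return sum(ci * xi for ci, xi in zip(c, x))
def weight(x):  return sum(ai * xi for ai, xi in zip(a, x))

def forward_path(Si, Sg):
    """Greedy forward path-relinking trail from Si to Sg (maximization)."""
    S, trail = Si, [Si]
    while S != Sg:
        moves = [tuple(1 - v if k == j else v for k, v in enumerate(S))
                 for j in range(len(S)) if S[j] != Sg[j]]
        moves = [x for x in moves if weight(x) <= b]   # feasibility filter
        S = max(moves, key=utility)                    # best feasible flip
        trail.append(S)
    return trail

trail = forward_path((1, 1, 0, 0), (0, 0, 1, 1))
```

The trail reproduces the path S i → A → D → E → F of Figure 8.8, ending at the guiding solution of utility 30.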
## 8.2 Other implementation strategies for path-relinking
Path-relinking can be implemented using different strategies, as illustrated in Figure 8.9. These include not only forward path-relinking, as seen in Section 8.1.2, but also backward, back-and-forward, mixed, truncated, greedy randomized adaptive, external, and evolutionary path-relinking, together with their hybrids. All these strategies involve trade-offs between computation time and solution quality.
Fig. 8.9
Different implementations of path-relinking: (a) Forward path-relinking: a path is traversed from the initial solution S i to a guiding solution S g at least as good as S i . (b) Backward path-relinking: a path is traversed from the initial solution S i to a guiding solution S g that is not better than S i . (c) Mixed path-relinking: two subpaths are traversed, one starting at S i and the other at S g , which eventually meet in the middle of the trajectory connecting S i and S g .
### 8.2.1 Backward and back-and-forward path-relinking
Suppose that path-relinking is applied to a minimization problem between two solutions S 1 and S 2 such that f(S 1) ≤ f(S 2), where f(S) denotes the value of solution S for the objective function to be minimized. Path-relinking is always carried out from an initial solution S i to a guiding solution S g . We have seen in Section 8.1.2 that in the case of forward path-relinking, the initial and guiding solutions are set as S i = S 2 and S g = S 1: in this case, the initial solution is not better than the guiding solution.
Conversely, in backward path-relinking, we set S i = S 1 and S g = S 2: now, the guiding solution is not better than the initial solution. In back-and-forward path-relinking, backward path-relinking is applied first, followed by forward path-relinking. Path-relinking explores the restricted neighborhood of the initial solution more thoroughly than the restricted neighborhood of the guiding solution because, as it moves along the path, the size of the restricted neighborhood progressively decreases. If one of the solutions S 1 or S 2 is strictly better than the other, then backward path-relinking explores more thoroughly the restricted neighborhood of the solution which is the best among S 1 and S 2. Since it is more likely to find an improving solution in the restricted neighborhood of the better solution than in that of the worse, backward path-relinking usually tends to perform better than forward path-relinking. Back-and-forward path-relinking does at least as well as either backward or forward path-relinking, but takes about twice as long to compute, since two (usually distinct) paths of the same length are traversed. Computational experiments have confirmed that backward path-relinking usually outperforms forward path-relinking in terms of solution quality, while back-and-forward path-relinking finds solutions at least as good as forward or backward path-relinking, but at the expense of longer running times. Figure 8.10 illustrates this behavior on an instance of a routing problem in private virtual networks.
Fig. 8.10
Time-to-target plots for pure GRASP and three variants of GRASP with path-relinking (forward, backward, and back-and-forward) on an instance of a routing problem in private virtual networks. The plots show that GRASP with backward path-relinking outperformed the other path-relinking variants as well as the pure GRASP heuristic, which was the slowest to find a solution whose value is at least as good as the target value.
### 8.2.2 Mixed path-relinking
In applying mixed path-relinking between two feasible solutions S i and S g , the connecting path is explored from both extremities. At each iteration of path-relinking, the closest extremity to the new current solution alternates between the original initial solution S i and the original guiding solution S g . The search behaves as if solutions in two different subpaths were visited alternately: the first of these subpaths leaves from the initial solution S i and leads to the guiding solution S g , while the second emanates from S g and develops towards S i . These two subpaths meet at some feasible solution in the middle of the trajectory, thus connecting S i and S g with a single path. We observe that, in this case, the qualification of a solution as being the initial or the guiding solution is meaningless, since the procedure behaves as if they keep permanently interchanging their role until the end. Figure 8.11 illustrates the steps of the application of mixed path-relinking to two solutions S i and S g for which the path connecting them is formed by five arcs. Moves alternate between the subpath leaving from the left and the subpath leaving from the right.
Fig. 8.11
Mixed path-relinking between two solutions S i and S g for which the path connecting them is formed by five arcs: numbers above the arrows represent the order in which the moves are performed. Moves alternate between the subpath leaving from the left and the subpath leaving from the right.
Figure 8.12 shows the pseudo-code of a template for a mixed path-relinking algorithm between solutions S i and S g for a minimization problem. The pseudo-code of algorithm MIXED-PR is basically the same as that of algorithm FORWARD-PR, except for lines 10 to 12, in which the direction of the path is reversed by the exchange of the roles of the guiding and current solutions.
Fig. 8.12
Pseudo-code for a template of a mixed path-relinking algorithm for minimization problems.
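The role exchange of lines 10 to 12 can be sketched as follows (our code, not the book's, under the same assumptions as the forward sketch: `neighbors(S, G)` enumerates N(S : G) and every move strictly reduces the distance to G):

```python
# Minimal sketch of mixed path-relinking (minimization): after each greedy
# step the current and guiding solutions exchange roles, so the two
# subpaths grow alternately from both extremities until they meet.

def mixed_path_relinking(Si, Sg, f, neighbors, local_search=lambda s: s):
    S, G = Si, Sg
    best = min(Si, Sg, key=f)
    while True:
        candidates = list(neighbors(S, G))
        if not candidates:               # the two subpaths have met
            break
        S = min(candidates, key=f)       # greedy step toward G
        if f(S) < f(best):
            best = S
        S, G = G, S                      # lines 10-12: reverse direction
    return local_search(best)

# Toy usage on bit strings with flip moves and a linear objective.
w = [3, -1, 4, -2]
f = lambda x: sum(wi * xi for wi, xi in zip(w, x))
flips = lambda S, G: [tuple(1 - v if k == j else v for k, v in enumerate(S))
                      for j in range(len(S)) if S[j] != G[j]]
best = mixed_path_relinking((1, 1, 0, 0), (0, 0, 1, 1), f, flips)
```

On this toy instance the alternating subpaths meet in the middle and still uncover the intermediate solution (0, 1, 0, 1) of value −3, better than both extremities.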
While back-and-forward path-relinking thoroughly explores both restricted neighborhoods of S i and S g , the mixed variant explores the entire restricted neighborhood of S i and all but one solution of the restricted neighborhood of S g . This is in contrast with both forward and backward path-relinking, which each fully explore only one of the restricted neighborhoods.
Furthermore, mixed path-relinking explores half as many restricted neighbors as back-and-forward path-relinking and the same number of neighbors as either the backward or forward variants. Figure 8.13 illustrates the comparison of a pure GRASP heuristic with four of its variants combined with path-relinking and applied to an instance of the 2-path network design problem: forward, backward, back-and-forward, and mixed path-relinking. The time-to-target plots show that GRASP with mixed path-relinking has the best runtime profile among the variants compared.
Fig. 8.13
Time-to-target plots for pure GRASP and four variants of GRASP with path-relinking (forward, backward, back-and-forward, and mixed) on an instance of the 2-path network design problem. The plot on the bottom compares only the variants that include path-relinking.
### 8.2.3 Truncated path-relinking
One can expect most of the best solutions produced by path-relinking to come from subpaths that are close to either the initial or the guiding solution.
Figure 8.14 illustrates this observation for 80 instances of the max-min diversity problem, on each of which a GRASP with back-and-forward path-relinking was run for two minutes. In each application of path-relinking, the step that produced the best path-relinking solution was recorded. For each instance, the number of best path-relinking solutions found in each tenth of the traversed paths was totaled, and the averages over all instances were computed. Experiments showed that exploring only the subpaths near the extremities often produces solutions as good as those found by exploring the entire path, since better solutions are concentrated close to the initial and guiding solutions. The figure shows that most of the best solutions obtained by path-relinking are found near the initial and guiding solutions: in only 15% of the calls to path-relinking would it be necessary to explore subpaths longer than 20% of the total number of moves.
Fig. 8.14
Fraction of the best solutions found by GRASP with back-and-forward path-relinking that appear in each range of the path length from the initial to the guiding solutions on two-minute runs over 80 instances of the max-min diversity problem. Fifty-four percent of the best solutions were found in subpaths that originate at the initial solutions and appear within the first 20% of the total number of moves performed, while 31% are close to the guiding solutions and appear in the last 20% of the moves performed in each path.
It is straightforward to adapt path-relinking to explore only the restricted neighborhoods that are close to the extremities. Truncated path-relinking can be applied to either forward, backward, backward-and-forward, or mixed path-relinking: instead of exploring the entire path, it just explores a fraction of the path and, consequently, takes a fraction of the running time.
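Truncation can be sketched on top of the forward variant. The sketch below is illustrative: the attribute-vector encoding, the greedy move selection, and the default fraction of 0.2 are assumptions, not the book's template.

```python
import math

def truncated_forward_pr(s_initial, s_guiding, cost, fraction=0.2):
    """Truncated forward path-relinking sketch: only the first
    ceil(fraction * d) greedy moves are applied, where d is the number
    of positions in which the two attribute vectors differ, so roughly
    a `fraction` of the full path (and of its running time) is explored."""
    current = list(s_initial)
    diff = [i for i in range(len(current)) if current[i] != s_guiding[i]]
    best, best_cost = None, float("inf")
    for _ in range(math.ceil(fraction * len(diff))):
        # cheapest restricted move toward the guiding solution
        i = min(diff, key=lambda j: cost(
            current[:j] + [s_guiding[j]] + current[j + 1:]))
        current[i] = s_guiding[i]
        diff.remove(i)
        c = cost(current)
        if c < best_cost:
            best, best_cost = list(current), c
    return best, best_cost
```

A truncated backward, back-and-forward, or mixed variant would limit its move count in the same way, near each extremity of the path.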
## 8.3 Minimum distance required for path-relinking
We assume that we want to connect two locally optimal solutions S 1 and S 2 with path-relinking. If S 1 and S 2 differ by only one of their components, then the path directly connects the two solutions and no solution, other than S 1 and S 2, is visited.
Since S 1 and S 2 are both local minima, then f(S 1) ≤ f(S) for all S ∈ N(S 1) and f(S 2) ≤ f(S) for all S ∈ N(S 2), where N(S) denotes the neighborhood of solution S. If S 1 and S 2 differ by exactly two moves, then any path between S 1 and S 2 visits exactly one intermediary solution S ∈ N(S 1) ∩ N(S 2). Consequently, solution S cannot be better than either S 1 or S 2.
Likewise, if S 1 and S 2 differ by exactly three moves, then any path between them visits two intermediary solutions S ∈ N(S 1) and S′ ∈ N(S 2) and, consequently, neither S nor S′ can be better than both S 1 and S 2.
Therefore, things only get interesting when the two solutions S 1 and S 2 differ by at least four moves. Consequently, we can discard the application of path-relinking to pairs of solutions differing by fewer than four moves.
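This rule amounts to a simple filter before invoking path-relinking. The sketch below assumes solutions encoded as equal-length attribute vectors, with the distance taken as the number of differing positions.

```python
def should_relink(s1, s2, min_moves=4):
    """Minimum-distance filter sketch: path-relinking between two local
    optima can only visit an intermediate solution better than both when
    the solutions differ by at least four moves, so closer pairs are
    skipped."""
    delta = sum(1 for a, b in zip(s1, s2) if a != b)
    return delta >= min_moves
```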
## 8.4 Dealing with infeasibilities in path-relinking
So far in our discussion of path-relinking, we assumed that at least one restricted neighbor of a solution S with respect to a target guiding solution S g was feasible. Consider line 5 of both path-relinking templates shown earlier in this chapter, where we minimize f(S), for S ∈ F. This step selects the best restricted neighbor of the current solution as argmin{f(S′) : S′ ∈ N(S: S g )}. However, it may occur that all moves from the current solution S lead to infeasible solutions, i.e., N(S: S g ) ∩ F = ∅, and the result of the argmin operator is undefined. In this situation, path-relinking would have to stop.
Consider the example in Figures 8.15 to 8.17, where we are given a bipartite graph with six nodes, A, B, C, D, E, and F, and seek a maximum independent set, i.e., a set of mutually nonadjacent nodes of maximum cardinality. Suppose we are given the initial solution S i = { A, B, C} and the guiding solution S g = { D, E, F}, and that we consider a neighborhood characterized by moves defined as swap(out, in), where the node out is replaced by the node in in the solution. Since all nodes in the initial solution must be removed from it and all nodes in the guiding solution S g must be inserted, there are nine moves that might be applied to build a path from S i to S g : swap(A, D), swap(A, E), swap(A, F), swap(B, D), swap(B, E), swap(B, F), swap(C, D), swap(C, E), and swap(C, F). Applying any of these moves to S i results in one of nine infeasible solutions. Infeasibilities correspond to edges connecting pairs of nodes in a candidate independent set, i.e., they are associated with conflicting edges. In the figures, infeasibilities are indicated by edges in red. Of the nine moves, six lead to solutions that have a single infeasibility, while three lead to solutions that have two conflicting edges. In such situations, one possible strategy that might be applied is a greedy path-relinking operator that proceeds by moving to a least-infeasible solution.
Fig. 8.15
First iteration of path-relinking for a 6-node maximum independent set problem where all nine restricted neighbors of the initial solution are infeasible.
Fig. 8.16
Second iteration of path-relinking for a 6-node maximum independent set problem where all four restricted neighbors of the current solution are infeasible.
Fig. 8.17
Third iteration of path-relinking for a 6-node maximum independent set problem, in which the path finally reaches the guiding solution.
In this example, suppose solution {B, C, D}, with a single infeasibility corresponding to edge (B, D), is chosen, i.e., the algorithm moves to solution S = { B, C, D}. Now, there are only four moves that might be applied to S to build a path to S g : swap(B, E), swap(B, F), swap(C, E), and swap(C, F). Again, all moves lead to infeasible solutions, but two lead to a single infeasibility, while the others lead to two infeasibilities. Suppose the greedy choice is to apply move swap(B, E), resulting in solution S′ = { C, D, E} with a single infeasibility corresponding to edge (C, E). There is now only the move swap(C, F) leading from S′ to the guiding solution S g . In this example, all restricted neighbors on all paths from the initial solution to the target solution are infeasible. In general, however, some may be feasible and some infeasible.
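The infeasibility measure and the greedy least-infeasible step can be sketched as follows for the independent set setting. The edge list used in the usage example below is hypothetical and does not reproduce the graph of Figures 8.15 to 8.17; the function names and encodings are likewise assumptions.

```python
def infeasibility(solution, edges):
    """Number of conflicting edges: edges with both endpoints inside the
    candidate independent set (zero means the solution is feasible)."""
    s = set(solution)
    return sum(1 for u, v in edges if u in s and v in s)

def relink_step(current, guiding, edges):
    """One greedy path-relinking step that tolerates infeasibility: among
    all swap(out, in) moves toward the guiding solution, return a
    neighbor with the fewest conflicting edges."""
    outs = [v for v in current if v not in guiding]
    ins = [v for v in guiding if v not in current]
    best, best_inf = None, float("inf")
    for out in outs:
        for into in ins:
            trial = [v for v in current if v != out] + [into]
            conflicts = infeasibility(trial, edges)
            if conflicts < best_inf:
                best, best_inf = trial, conflicts
    return best, best_inf
```

When at least one swap yields a feasible neighbor, this step returns a solution with zero conflicts; otherwise it returns a least-infeasible one, as in the greedy strategy described above.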
In a revised path-relinking operator that allows moves to infeasible solutions, each visited solution S may be in one of two possible situations: either at least one move from S leads to a feasible solution, in which case | N(S: S g ) | ≥ 1, or all restricted moves lead to infeasible solutions and the restricted neighborhood N(S: S g ) becomes empty before the guiding solution is reached. In the first case, a greedy version of path-relinking selects a move that leads to a least-cost feasible neighbor of S. Otherwise, the selected move is one that leads to an infeasible neighbor of S with minimum infeasibility.
The pseudo-code in Figure 8.18 presents a revised mixed path-relinking procedure that allows both feasible and infeasible moves. It is very similar to the template presented in Figure 8.12, with the main difference corresponding to lines 4 to 10. Both feasible and infeasible moves are allowed in the neighborhood N(S). As already observed for algorithm FORWARD-PR in Figure 8.7, the neighborhood N(S) does not necessarily have to be explicitly enumerated. If the algorithm detects in line 5 that there is at least one move that, once applied to S, leads to a feasible solution closer to S g than S is, then the best restricted neighbor is selected in line 6. Otherwise, the restricted neighborhood is empty. Denoting by infeasibility(S) a measure of the degree of infeasibility of a solution S, line 8 selects the best infeasible neighbor of the current solution, i.e., the one with the smallest measure of infeasibility. The updates in lines 11 and 12 are performed if the new incumbent solution S is feasible and improves the best solution previously known.
Fig. 8.18
Pseudo-code for a revised template of a mixed path-relinking algorithm for minimization problems, with feasible and infeasible moves.
## 8.5 Randomization in path-relinking
All previously described path-relinking strategies follow a greedy criterion to select the best move at each of their iterations. Therefore, path-relinking is limited to exploring a single path from a set of exponentially many paths between any pair of solutions. By adding randomization to path-relinking, greedy randomized adaptive path-relinking is not constrained to explore a single path. Instead of always selecting the move that results in the best solution, a restricted candidate list is constructed with the moves that result in promising solutions with costs in an interval that depends on the values of the best and worst moves, as well as on a parameter in the interval [0, 1]. A move is selected at random from this set to produce the next solution in the path.
By applying this strategy several times to the initial and guiding solutions, several paths can be explored. This strategy is useful when path-relinking is applied more than once to the same pair of solutions as it may occur in evolutionary path-relinking, which we will introduce in the next chapter.
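The randomized move selection at the heart of greedy randomized adaptive path-relinking can be sketched as a restricted-candidate-list draw. The sketch below is illustrative: it assumes the costs of all restricted moves have already been evaluated and stored in a dictionary, and uses a quality threshold controlled by a parameter alpha in [0, 1].

```python
import random

def grasp_randomized_step(move_costs, alpha=0.3, rng=random):
    """RCL-based move selection sketch for greedy randomized adaptive
    path-relinking (minimization): moves whose cost is within
    alpha * (worst - best) of the best move form the restricted
    candidate list, and one of them is drawn at random."""
    best, worst = min(move_costs.values()), max(move_costs.values())
    threshold = best + alpha * (worst - best)
    rcl = [move for move, c in move_costs.items() if c <= threshold]
    return rng.choice(rcl)
```

With alpha = 0 the selection reduces to the purely greedy step, while larger values admit more moves into the candidate list, so repeated runs between the same pair of solutions can trace different paths.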
## 8.6 External path-relinking and diversification
So far in this chapter, we have considered variants of path-relinking in which a path in the search space graph  connects two feasible solutions S, T ∈ F by progressively introducing in one of them (the initial solution) attributes of the other (the guiding solution). Since attributes common to both solutions are not changed and all solutions visited belong to a path between the two solutions, we may also refer to this type of path-relinking as internal path-relinking.
External path-relinking extends any path connecting S and T in  beyond its extremities. To extend such a path beyond S, attributes not present in either S or T are introduced in S. Symmetrically, to extend it beyond T, attributes not present in either S or T are introduced in T. In its greedy variant, all moves are evaluated and the solution chosen to be next in the path is one with best cost or, in case they are all infeasible, the one with least infeasibility. In either direction, the procedure stops when all attributes that do not appear in either S or T have been tested for extending the path. Once both paths are complete, local search may be applied to the best solution in each of them. The best of the two local minima is returned as the solution produced by the external path-relinking procedure.
Figure 8.19 illustrates internal and external path-relinking. The path with red nodes and edges results from internal path-relinking applied with S as the initial solution and T as the guiding solution. We observe that the orientation introduced by the arcs in this path is due only to the choice of the initial and guiding solutions: had the roles of solutions S and T been interchanged, the path could have been generated in the reverse direction. The same figure also illustrates two paths obtained by external path-relinking, one emanating from S and the other from T, both represented with blue nodes and edges. The orientations of the arcs in each of these paths indicate that they necessarily emanate from either solution S or T.
Fig. 8.19
An internal path (red arcs, red nodes) from solution S to solution T and two external (blue arcs, blue nodes) paths, one emanating from solution S and the other from solution T. These paths are produced by internal and external path-relinking.
To conclude, we establish a parallel between internal and external path-relinking. Since internal path-relinking works by fixing all attributes common to the initial and guiding solutions and searches for paths between them satisfying this property, it is clearly an intensification strategy. By contrast, external path-relinking progressively removes common attributes and replaces them with attributes that appear in neither the initial nor the guiding solution. It can therefore be seen as a diversification strategy, producing solutions increasingly farther from both the initial and the guiding solutions.
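One of the two outward extensions can be sketched for fixed-cardinality set solutions. This is an illustrative sketch under stated assumptions: the move replaces an attribute common to both endpoints by one absent from both, and the final local search applied to the best solution on each path (described above) is omitted.

```python
def external_pr(sol, other, ground, cost):
    """Greedy external path-relinking sketch for fixed-cardinality set
    solutions (minimization): each step replaces an attribute common to
    both endpoints by one absent from both, so the path moves away from
    the endpoints; the best solution on the outward path is returned."""
    current = set(sol)
    common = set(sol) & set(other)
    fresh = set(ground) - set(sol) - set(other)
    best, best_cost = None, float("inf")
    while common and fresh:
        candidates = []
        for out in common:
            for into in fresh:
                trial = (current - {out}) | {into}
                candidates.append((cost(trial), out, into, trial))
        c, out, into, trial = min(candidates, key=lambda t: t[0])
        current = trial
        common.discard(out)
        fresh.discard(into)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost
```

The procedure stops when every attribute absent from both endpoints has been used, mirroring the stopping rule described above; the symmetric extension beyond the other endpoint is obtained by exchanging the arguments.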
## 8.7 Bibliographical notes
Path-relinking, as introduced in Section 8.1, was originally proposed by Glover (1996b) as an intensification strategy to explore trajectories connecting elite solutions obtained by tabu search or scatter search (Glover and Laguna, 1997; Glover, 2000; Glover et al., 2000; 2003; 2004). Accounts and surveys of path-relinking, mostly in the context of GRASP applications, were authored by Resende and Ribeiro (2005a), Resende et al. (2010b), and Ribeiro and Resende (2012). Forward path-relinking corresponds to the original proposal for the implementation strategy.
Section 8.2 discussed other, more elaborate implementation strategies. The concepts of backward as well as of back-and-forward path-relinking appeared first in Ribeiro et al. (2002), with both names being later introduced in Aiex et al. (2005). Computational experiments reported in Ribeiro et al. (2002) and Resende and Ribeiro (2003a) were the first to show that backward path-relinking usually outperforms forward path-relinking, while back-and-forward path-relinking finds solutions at least as good as forward or backward path-relinking, but at the expense of longer running times.
Mixed path-relinking was suggested by Glover (1996b) and was first implemented and tested in the context of the 2-path network design problem by Rosseti (2003), followed by results in Resende and Ribeiro (2005a) and Ribeiro and Rosseti (2009), where it was shown that mixed path-relinking usually outperforms forward, backward, and back-and-forward path-relinking.
Resende et al. (2010a) showed empirically, for instances of the max-min diversity problem, that most of the best solutions obtained by path-relinking are found near the initial and guiding solutions, and that more are found near the best of these two solutions. Andrade and Resende (2007a) and Resende et al. (2010a) were the first to apply truncated path-relinking.
Sections 8.3 to 8.6 introduced several extensions of path-relinking. The requirement of a minimum distance between the initial and guiding solutions for the application of path-relinking appeared originally in Festa et al. (2005) and Festa et al. (2006). Infeasibility in path-relinking was first addressed by Mateus et al. (2011). Morán-Mirabal et al. (2013b) developed the approach to deal with infeasibility in path-relinking. Greedy randomized adaptive path-relinking was proposed by Faria Jr. et al. (2005). External path-relinking was introduced by Glover (2014) and first applied by Duarte et al. (2015) in a heuristic for differential dispersion minimization.
References
R.M. Aiex, M.G.C. Resende, P.M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS Journal on Computing, 17:224–247, 2005.
D.V. Andrade and M.G.C. Resende. GRASP with path-relinking for network migration scheduling. In Proceedings of the International Network Optimization Conference, Spa, 2007a. URL http://bit.ly/1NfaTK0. Last visited on April 16, 2016.
A. Duarte, J. Sánchez-Oro, M.G.C. Resende, F. Glover, and R. Martí. GRASP with exterior path relinking for differential dispersion minimization. Information Sciences, 296:46–60, 2015.
H. Faria Jr., S. Binato, M.G.C. Resende, and D.J. Falcão. Transmission network design by a greedy randomized adaptive path relinking approach. IEEE Transactions on Power Systems, 20:43–49, 2005.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted maximum satisfiability problem. Lecture Notes in Computer Science, 3503:367–379, 2005.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted MAXSAT problem. ACM Journal of Experimental Algorithmics, 11:1–16, 2006.
F. Glover. Tabu search and adaptive memory programming – Advances, applications and challenges. In R.S. Barr, R.V. Helgason, and J.L. Kennington, editors, Interfaces in computer science and operations research, pages 1–75. Kluwer Academic Publishers, Boston, 1996b.
F. Glover. Multi-start and strategic oscillation methods – Principles to exploit adaptive memory. In M. Laguna and J.L. González-Velarde, editors, Computing tools for modeling, optimization and simulation: Interfaces in computer science and operations research, pages 1–24. Kluwer Academic Publishers, Boston, 2000.
F. Glover. Exterior path relinking for zero-one optimization. International Journal of Applied Metaheuristic Computing, 5(3):1–8, 2014.
F. Glover and M. Laguna. Tabu search. Kluwer Academic Publishers, Boston, 1997.
F. Glover, M. Laguna, and R. Martí. Fundamentals of scatter search and path relinking. Control and Cybernetics, 39:653–684, 2000.
F. Glover, M. Laguna, and R. Martí. Scatter search and path relinking: Advances and applications. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 1–35. Kluwer Academic Publishers, Boston, 2003.
F. Glover, M. Laguna, and R. Martí. Scatter search and path relinking: Foundations and advanced designs. In G.C. Onwubolu and B.V. Babu, editors, New optimization techniques in engineering, volume 141 of Studies in Fuzziness and Soft Computing, pages 87–100. Springer, Berlin, 2004.
G.R. Mateus, M.G.C. Resende, and R.M.A. Silva. GRASP with path-relinking for the generalized quadratic assignment problem. Journal of Heuristics, 17:527–565, 2011.
L.F. Morán-Mirabal, J.L. González-Velarde, M.G.C. Resende, and R.M.A. Silva. Randomized heuristics for handover minimization in mobility networks. Journal of Heuristics, 19:845–880, 2013b.
M.G.C. Resende and C.C. Ribeiro. A GRASP with path-relinking for private virtual circuit routing. Networks, 41:104–114, 2003a.
M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 29–63. Springer, New York, 2005a.
M.G.C. Resende, R. Martí, M. Gallego, and A. Duarte. GRASP and path relinking for the max-min diversity problem. Computers & Operations Research, 37: 498–508, 2010a.
M.G.C. Resende, C.C. Ribeiro, F. Glover, and R. Martí. Scatter search and path-relinking: Fundamentals, advances, and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 87–107. Springer, New York, 2nd edition, 2010b.
C.C. Ribeiro and M.G.C. Resende. Path-relinking intensification methods for stochastic local search algorithms. Journal of Heuristics, 18:193–214, 2012.
C.C. Ribeiro and I. Rosseti. Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms. In Proceedings of the VIII Metaheuristics International Conference, Hamburg, 2009.
C.C. Ribeiro, E. Uchoa, and R.F. Werneck. A hybrid GRASP with perturbations for the Steiner problem in graphs. INFORMS Journal on Computing, 14:228–246, 2002.
I. Rosseti. Sequential and parallel strategies of GRASP with path-relinking for the 2-path network design problem. PhD thesis, Department of Computer Science, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, 2003. In Portuguese.
# 9. GRASP with path-relinking
Path-relinking is a major enhancement to GRASP, adding a long-term memory mechanism to GRASP heuristics. GRASP with path-relinking implements long-term memory using an elite set of diverse high-quality solutions found during the search. In its most basic implementation, at each iteration the path-relinking operator is applied between the solution found at the end of the local search phase and a randomly selected solution from the elite set. The solution resulting from path-relinking is a candidate for inclusion in the elite set. In this chapter we examine elite sets, their integration with GRASP, the basic GRASP with path-relinking procedure, several variants of the basic scheme, including evolutionary path-relinking, and restart strategies for GRASP with path-relinking heuristics.
## 9.1 Memoryless GRASP
The basic GRASP heuristic, as presented in Chapter , searches the solution space by repeatedly applying independent searches in the solution space graph , each search starting from a different greedy randomized solution. Each independent search uses no information produced by any other search performed at previous iterations. The choices of starting solutions for local search are not influenced by information produced during the search. However, Reactive GRASP and adaptive memory techniques (introduced in Sections 7.1 and 7.6, respectively) do make use of information produced during the search. Reactive GRASP does so to select the blend of randomness and greediness used in the construction of the starting solutions for local search, while programming with adaptive memory determines the amount of intensification and diversification in the construction phase.
The memoryless nature of basic, or pure, GRASP is in contrast with many successful metaheuristics, such as tabu search, genetic algorithms, and ant colony optimization, which make extensive use of information gathered during the search process to guide their choice of the region of the solution space to explore.
In this chapter, we show how path-relinking can be used with any GRASP heuristic to result in a hybrid procedure with a long-term memory mechanism. Given the same running time, this hybridization almost always produces better solutions than pure GRASP. Alternatively, given a target value, it almost always finds a solution at least as good as this target in less running time than pure GRASP.
## 9.2 Elite sets
An elite set  of solutions is a set formed by at most a fixed number  of diverse, high-quality solutions found during the run of a heuristic. The elite solutions should represent distinct promising regions of the solution space and therefore should not include solutions that are too similar, even if they are of high quality.
A basic scheme to maintain an elite set  for a minimization problem is outlined in the algorithm of Figure 9.1. The algorithm is given a candidate solution S and determines if S should be added to  and, if so, which solution, if any, should be removed from .
Fig. 9.1
Pseudo-code of a template for the maintenance of the elite set  of at most  elements in the context of a minimization problem.
If line 1 determines that the elite set  is not full, i.e., if , then a candidate solution S is always added to  if it is different from any solution currently in the set. This case is treated in lines 2 to 7 of the pseudo-code. In line 3, S is added to  if the elite set is empty. Let the symmetric difference Δ(S, S′) be formed by the ground set elements that belong to either S or S′, but not to both. In line 5, the minimum cardinality δ among the symmetric differences between S and the elements of  is computed. If S is different from all elite solutions, then it is added to  in line 6.
Otherwise, if the elite set is full (i.e., if ), then any time a solution is added to the set, another solution must be removed from it, thus maintaining the size of  equal to . Our goal is to first improve the average quality of the elite set, and then maximize the diversity of its elements, which amounts to maximizing the cardinalities of the symmetric differences between all pairs of solutions in the set. This case is treated in lines 9 to 14. In line 9, the cost f + of the worst-valued elite set solution is computed, while in line 10 the minimum cardinality δ among the symmetric differences between S and any element of  is determined. S is added to  if it is better than the worst solution in the elite set and if it is different from all elite solutions, i.e., if f(S) < f + and δ > 0 in line 11. This is accomplished in lines 12 and 13. Line 12 determines, among all elite set solutions valued no better than S, one which is most similar to S, i.e., one which minimizes the cardinality of its symmetric difference with respect to S. This solution, S −, is removed from  in line 13. The new elite solution S is inserted in the pool as a replacement for S − at the same line. The updated elite set is returned in line 16.
The algorithm in Figure 9.1 can be modified to increase the diversity of the elite set solutions by modifying lines 6 and 11, where the condition δ > 0 can be changed to δ ≥ δ min, where δ min > 0 is a parameter. In this case, instead of requiring that S only be different from all other elite set solutions, we now require that it be sufficiently different, by at least a given number of attributes.
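The maintenance scheme just described can be sketched in Python. This is an illustrative sketch of the template, assuming solutions encoded as sets of ground set elements, a caller-supplied cost function for minimization, and a distance function returning the cardinality of the symmetric difference.

```python
def update_elite_set(candidate, elite, cost, max_size, dist):
    """Elite-set maintenance sketch: add `candidate` to `elite` if the
    set is not full and the candidate differs from every member, or, if
    full, when it improves on the worst member and differs from all of
    them; in that case it replaces the most similar member among those
    valued no better than the candidate."""
    if len(elite) < max_size:
        if not elite or min(dist(candidate, e) for e in elite) > 0:
            elite.append(candidate)
        return elite
    worst_cost = max(cost(e) for e in elite)
    delta = min(dist(candidate, e) for e in elite)
    if cost(candidate) < worst_cost and delta > 0:
        # remove the most similar elite solution among those no better
        victims = [e for e in elite if cost(e) >= cost(candidate)]
        victim = min(victims, key=lambda e: dist(candidate, e))
        elite.remove(victim)
        elite.append(candidate)
    return elite
```

Raising the acceptance condition from delta > 0 to a larger threshold yields the diversity-enforcing variant discussed above.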
## 9.3 Hybridization of GRASP with path-relinking
Path-relinking is a major enhancement to GRASP, equipping GRASP heuristics with a long-term memory mechanism and enabling search intensification beyond simple local search. In this section, we show how to hybridize path-relinking with GRASP.
To implement GRASP with path-relinking, we make use of an elite set , such as the one introduced in Section 9.2, to collect a diverse set of high-quality solutions found during the search. The elite set starts empty and is constrained to have at most  solutions. Each new locally optimal solution produced by the GRASP local search phase is relinked with one or more solutions from the elite set. Each solution resulting from path-relinking is considered as a candidate to be inserted in the elite set according to algorithm UPDATE-ELITE-SET of Figure 9.1.
The pseudo-code of Figure 9.2 outlines the main steps of a GRASP with path-relinking heuristic for minimization. This simple variant relinks the locally optimal solution produced in each GRASP iteration with a single, randomly chosen, solution from the elite set, following the forward path-relinking strategy described in Section 8.1. The output of the path-relinking operator is a candidate for inclusion in the elite set.
Fig. 9.2
Pseudo-code of a template of a basic GRASP with path-relinking heuristic for minimization.
Line 1 of the pseudo-code initializes the elite set  as empty. The loop from line 2 to line 13 makes up the steps of GRASP with path-relinking. Lines 3 to 7 correspond to the semi-greedy construction, repair (in case of infeasibility), and local search phases of a basic GRASP heuristic. Forward path-relinking is performed in lines 9 and 10 in case the elite set is not empty: in line 9, an elite set solution S′ is selected at random from  while, in line 10, S′ is relinked with the locally optimal solution S produced in line 7. The resulting solution, S, is tested for inclusion in the elite set in line 12, which updates  by applying algorithm UPDATE-ELITE-SET of Figure 9.1. The algorithm returns the best-valued elite solution in line 14, after a stopping criterion is met.
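The loop just described can be condensed into a short skeleton. The sketch below is illustrative only: all problem-specific phases (construction, local search, relinking, elite-set update, and cost) are assumed to be supplied by the caller, and the repair step for infeasible constructions is omitted.

```python
import random

def grasp_pr(construct, local_search, relink, update_elite, cost,
             iterations, rng=random):
    """Skeleton of basic GRASP with path-relinking (minimization): each
    iteration builds a greedy randomized solution, improves it by local
    search, relinks it with a randomly chosen elite solution, and offers
    the result to the elite set."""
    elite = []
    for _ in range(iterations):
        s = local_search(construct())
        if elite:
            s = relink(s, rng.choice(elite))
        elite = update_elite(s, elite)
    return min(elite, key=cost)
```

The skeleton returns the best-valued elite solution once the iteration budget is exhausted, mirroring line 14 of the template.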
Enhancing GRASP with path-relinking almost always improves the performance of the heuristic. As an illustration, Figures 9.3 and 9.4 show time-to-target plots (or runtime distributions) for GRASP with and without path-relinking for four different applications. These plots show the empirical cumulative probability distributions of the time-to-target random variable, i.e., the time needed to find a solution at least as good as a given target value. For all problems, the plots show that GRASP with path-relinking is able to find target solutions faster than the memoryless basic algorithm.
Fig. 9.3
Time-to-target plots comparing running times of GRASP with and without path-relinking on distinct problems: three-index assignment and maximum satisfiability. Forward path-relinking was used in these two examples.
Fig. 9.4
Time-to-target plots comparing running times of GRASP with and without path-relinking on distinct problems: bandwidth packing and quadratic assignment. Forward path-relinking was used in these two examples. In addition, on the bandwidth packing example, plots for GRASP with backward and back-and-forward path-relinking are also shown.
## 9.4 Evolutionary path-relinking
As aforementioned, GRASP with path-relinking heuristics maintain an elite set of high-quality solutions. In the variant of GRASP with path-relinking introduced in Section 9.3, locally optimal solutions produced by local search are relinked with elite set solutions. Path-relinking can also be applied to pairs of elite set solutions to search for new high-quality solutions and to improve the quality of the elite set. This procedure, called evolutionary path-relinking (EvPR), can be applied as a post-optimization phase of GRASP, after the main heuristic stops, or periodically, when the main heuristic is still running.
The pseudo-codes in Figures 9.5 and 9.6 correspond to the post-processing and periodic variants, respectively. The pseudo-code in Figure 9.5 is identical to that of the GRASP with path-relinking of Figure 9.2, with an additional step in line 15 where EvPR is applied.
Fig. 9.5
Pseudo-code of a template of a GRASP with evolutionary path-relinking heuristic where evolutionary path-relinking is applied at a post-processing step.
Fig. 9.6
Pseudo-code of a template of a GRASP with evolutionary path-relinking heuristic where evolutionary path-relinking is applied periodically during the search.
The pseudo-code of Figure 9.6 adds lines 3 and 15 to 19 to manage the periodic application of EvPR. Line 3 initializes it2evPR, a counter of iterations to EvPR, with evPRfreq being the number of GRASP iterations between consecutive calls to EvPR. If evPRfreq iterations have passed without the application of EvPR, then in line 16 it is applied and the counter it2evPR is reinitialized in line 17. Finally, in line 19, it2evPR is decreased by one iteration.
Evolutionary path-relinking takes as input the elite set and returns either the same elite set or a renewed one with an improved average cost. This approach is outlined in the pseudo-code of Figure 9.7. While there exists a pair of solutions in the elite set for which path-relinking has not yet been applied, the two solutions are combined with path-relinking and the resulting solution is tested for membership in the elite set. If it is accepted, it then replaces the elite solution most similar to it among all solutions having worse cost. To explore more than one path connecting two solutions, evolutionary path-relinking can apply greedy randomized adaptive path-relinking a fixed number of times between each pair of elite solutions.
Fig. 9.7
Pseudo-code of a template of the evolutionary path-relinking strategy.
This strategy outperformed several other heuristics for the max-min diversity problem, including GRASP with path-relinking, simulated annealing, tabu search, and a multistart strategy. Figure 9.8 shows the evolution of the best solution found by the multistart strategy, pure GRASP, and GRASP with evolutionary path-relinking for a 500-element max-min diversity instance.
Fig. 9.8
Percent deviation from best known solution value for GRASP with evolutionary path-relinking, pure GRASP, and a multistart algorithm for a 500-element instance of a max-min diversity problem with a time limit of 60 minutes.
## 9.5 Restart strategies
Figure 9.9 shows a typical iteration count distribution for a GRASP with path-relinking heuristic. Observe in this example that for most of the independent runs whose iteration counts make up the plot, the algorithm finds a target solution in relatively few iterations: about 25% of the runs take at most 101 iterations; about 50% take at most 192 iterations; and about 75% take at most 345. However, some runs take much longer: 10% take over 1000 iterations; 5% over 2000; and 2% over 9715 iterations. The longest run took 11607 iterations to find a solution at least as good as the target. These long tails contribute to a large average iteration count as well as to a high standard deviation. This section proposes strategies to reduce the tail of the distribution, consequently reducing the average iteration count and its standard deviation.
Fig. 9.9
Typical iteration count distribution of GRASP with path-relinking.
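The percentages quoted above are empirical quantiles of the per-run iteration counts. A small sketch (illustrative only, not the code used to produce Figure 9.9) of how such quantiles are read off a list of iteration counts:

```python
import math

def empirical_quantile(iteration_counts, q):
    """Smallest count v such that at least a fraction q of the runs
    needed at most v iterations to reach the target."""
    s = sorted(iteration_counts)
    idx = max(0, math.ceil(q * len(s)) - 1)  # index of the q-quantile run
    return s[idx]

# With 100 runs, empirical_quantile(counts, 0.25) returns the iteration
# count of the 25th fastest run.
```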
Consider again the distribution in Figure 9.9. The distribution shows that each run takes over 345 iterations with probability of about 25%. Therefore, whenever the algorithm is restarted after 345 iterations, the new run will again take more than 345 iterations with probability of about 25%. Consequently, the probability that the algorithm is still running after 345 + 345 = 690 iterations is the probability that the first run exceeds 345 iterations multiplied by the probability that the restarted run also exceeds 345 iterations, i.e., about (1∕4) × (1∕4) = (1∕4)². It follows by induction that the probability that the algorithm is still running after k periods of 345 iterations is about (1∕4)^k. In this example, the probability that the algorithm is still running after 1725 iterations (five periods) is about 0.1%, i.e., much less than the 5% probability that the algorithm without restarts takes over 2000 iterations.
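The induction above is easy to check numerically, under the stated assumption that each restarted period of 345 iterations independently fails to reach the target with probability about 1/4:

```python
# Assumption from the text: each restarted period of 345 iterations
# independently fails to reach the target with probability about 0.25.
p_fail = 0.25

def p_still_running(k):
    """Probability that the restarted algorithm is still running
    after k periods of 345 iterations."""
    return p_fail ** k

print(p_still_running(2))  # 0.0625, i.e., (1/4)^2 after 690 iterations
print(p_still_running(5))  # 0.0009765625, about 0.1% after 1725 iterations
```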
A restart strategy is defined as an infinite sequence of time intervals τ1, τ2, τ3, ..., which define epochs τ1, τ1 + τ2, τ1 + τ2 + τ3, ... at which the algorithm is restarted from scratch. It can be shown that the optimal restart strategy uses τ1 = τ2 = ⋯ = τ*, where τ* is some (unknown) constant.
Implementing the optimal strategy may be difficult in practice because it requires inputting the constant value τ*. Runtimes can vary greatly for different combinations of algorithm, instance, and solution quality sought. Since one usually has no prior information about the runtime distribution of the stochastic search algorithm for the optimization problem under consideration, one runs the risk of choosing a value of τ* that is either too small or too large. On the one hand, a value that is too small can cause the restart variant of the algorithm to take much longer to converge than the no-restart variant. On the other hand, a value that is too large may never lead to a restart, causing the restart variant of the algorithm to take as long to converge as the no-restart variant. Figure 9.10 illustrates restart strategies with time-to-target plots for the maximum cut instance G12, on an 800-node graph with edge density of 0.63%, with target solution value 554, for τ = 6, 9, 12, 18, 24, 30, and 42 seconds. For each value of τ, 100 independent runs of a GRASP with path-relinking heuristic with restarts were performed. The variant with τ = ∞ corresponds to the heuristic without restarts. The figure shows that, for some values of τ, the resulting heuristic outperformed its counterpart with no restarts by a large margin.
Fig. 9.10
Time-to-target plot for target solution value of 554 for maximum cut instance G12 using different values of τ.
In GRASP with path-relinking, the number of iterations between improvements of the incumbent (or best so far) solution tends to vary less than the runtimes for different combinations of instance and solution quality sought. If one takes this into account, a simple and effective restart strategy for GRASP with path-relinking is to keep track of the last iteration when the incumbent solution was improved and restart the GRASP with path-relinking heuristic if κ iterations have gone by without improvement. We shall call such a strategy restart(κ). A restart consists in saving the incumbent and emptying out the elite set.
The pseudo-code shown in Figure 9.11 summarizes the steps of a GRASP with path-relinking heuristic using the restart(κ) strategy for a minimization problem. The algorithm keeps track of the current iteration (CurrentIter), as well as of the last iteration when an improving solution was found (LastImprov). If an improving solution is detected in line 16, then this solution and its cost are saved in lines 17 and 18, respectively, and the iteration of last improvement is set to the current iteration in line 19. If, in line 21, it is determined that more than κ iterations have gone by since the last improvement of the incumbent, then a restart is triggered, emptying out the elite set in line 22 and resetting the iteration of last improvement to the current iteration in line 23. If restart is not triggered, then in line 25 the current solution S is tested for inclusion in the elite set and the set is updated if S is accepted. The best overall solution found S ∗ is returned in line 28 after the stopping criterion is satisfied.
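A compact sketch of the restart(κ) template follows. The helper callbacks (`construct`, `local_search`, `relink`, `cost`, `update_elite`) are hypothetical placeholders standing in for the corresponding steps of Figure 9.11, not the book's actual code:

```python
import random

def grasp_pr_with_restarts(kappa, max_iters, construct, local_search,
                           relink, cost, update_elite):
    """Sketch of GRASP with path-relinking and the restart(kappa)
    strategy for minimization: empty the elite set whenever kappa
    iterations pass without improving the incumbent."""
    best, best_cost = None, float("inf")
    elite = []
    last_improv = 0
    for current_iter in range(1, max_iters + 1):
        s = local_search(construct())            # construction + local search
        if elite:
            s = relink(s, random.choice(elite))  # relink with a random elite solution
        if cost(s) < best_cost:                  # improving solution found
            best, best_cost = s, cost(s)
            last_improv = current_iter
        if current_iter - last_improv > kappa:   # restart(kappa) triggered
            elite = []                           # keep incumbent, empty elite set
            last_improv = current_iter
        else:
            update_elite(elite, s)               # test s for elite membership
    return best
```

Note that a restart keeps the incumbent but discards the elite set, so subsequent path-relinking operates on freshly generated solutions.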
Fig. 9.11
Pseudo-code of a template of a GRASP with path-relinking heuristic with restarts for a minimization problem.
As an illustration of the use of the restart(κ) strategy within a GRASP with path-relinking heuristic, consider the maximum cut instance G12. For the values κ = 50, 100, 200, 300, 500, 1000, 2000, and 5000, the heuristic was run independently 100 times, stopping when a cut of weight 554 or higher was found. A strategy without restarts was also implemented. Figures 9.12 and 9.13, as well as Table 9.1, summarize these runs, showing the average time to target solution as a function of the value of κ and the time-to-target plots for different values of κ. These figures illustrate well the effect on running time of selecting a value of κ that is either too small (κ = 50, 100) or too large (κ = 2000, 5000). They further show that there is a wide range of κ values (κ = 200, 300, 500, 1000) that result in lower runtimes when compared to the strategy without restarts.
Fig. 9.12
Average time-to-target solution for maximum cut instance G12 using different values of κ. All runs of all strategies have found a solution at least as good as the target value of 554.
Fig. 9.13
Time-to-target plots for maximum cut instance G12 using different values of κ. The figure also shows the time-to-target plot for the strategy without restarts. All runs of all strategies found a solution at least as good as the target value of 554.
Table 9.1

Summary of computational results on maximum cut instance G12 with four strategies. For each strategy, 100 independent runs were executed, each stopped when a solution as good as the target solution value 554 was found. For each strategy, the table shows the distribution of the number of iterations by quartile. For each quartile, the table gives the maximum number of iterations taken by all runs in that quartile, i.e., the slowest of the fastest 25% (1st), 50% (2nd), 75% (3rd), and 100% (4th) of the runs. The average number of iterations over the 100 runs and the standard deviation (st.dev.) are also given for each strategy.

| Strategy | 1st | 2nd | 3rd | 4th | Average | st.dev. |
| --- | --- | --- | --- | --- | --- | --- |
| Without restarts | 326 | 550 | 1596 | 68813 | 4525.1 | 11927.0 |
| restart(1000) | 326 | 550 | 1423 | 5014 | 953.2 | 942.1 |
| restart(500) | 326 | 550 | 1152 | 4178 | 835.0 | 746.1 |
| restart(100) | 509 | 1243 | 3247 | 8382 | 2055.0 | 2005.9 |
Figure 9.14 further illustrates the behavior of the restart(100), restart(500), and restart(1000) strategies for the previous example, when compared with the strategy without restarts on the same maximum cut instance G12. In this figure, however, for each strategy we plot the number of iterations to the target solution value. It is interesting to note that, as expected, each strategy restart(κ) behaves exactly like the strategy without restarts for the first κ iterations, for κ = 100, 500, 1000. After this point, each trajectory deviates from that of the strategy without restarts. Among these strategies, restart(500) is the one with the best performance.
Fig. 9.14
Comparison of the iterations-to-target plots for maximum cut instance G12 using strategies restart(100), restart(500), and restart(1000). The figure also shows the iterations-to-target plot for the strategy without restarts. All runs of all strategies found a solution at least as good as the target value of 554.
We conclude this chapter with some observations about these experiments. The effect of the restart strategies can be mainly observed in the column corresponding to the fourth quartile of Table 9.1, whose entries correspond to the heavy tails of the distributions. The restart strategies in general did not affect the other quartiles of the distributions, which is a desirable characteristic. Compared to the no-restart strategy, all three restart strategies reduced the maximum number of iterations, as well as the average number of iterations and its standard deviation, although restart(100) did so by smaller margins than restart(500) and restart(1000). Restart strategies restart(500) and restart(1000) were clearly the best of those tested.
## 9.6 Bibliographical notes
GRASP with path-relinking as proposed in Section 9.3 was first introduced by Laguna and Martí (1999), where a forward path-relinking operator from the solution found by local search to a randomly selected elite solution was applied. This was followed by a number of applications of GRASP with path-relinking, e.g., to maximum cut (Festa et al., 2002), 2-path network design (Ribeiro and Rosseti, 2002), Steiner problem in graphs (Ribeiro et al., 2002), job-shop scheduling (Aiex et al., 2003), private virtual circuit routing (Resende and Ribeiro, 2003a), p-median (Resende and Werneck, 2004), quadratic assignment (Oliveira et al., 2004), set packing (Delorme et al., 2004), three-index assignment (Aiex et al., 2005), p-hub median (Pérez et al., 2005), uncapacitated facility location (Resende and Werneck, 2006), project scheduling (Alvarez-Valdes et al., 2008a), maximum weighted satisfiability (Festa et al., 2006), maximum diversity (Silva et al., 2007), network migration scheduling (Andrade and Resende, 2007a), capacitated arc routing (Labadi et al., 2008; Usberti et al., 2013), disassembly sequencing (Adenso-Díaz et al., 2008), flowshop scheduling (Ronconi and Henriques, 2009), multi-plant capacitated lot sizing (Nascimento et al., 2010), workover rig scheduling (Pacheco et al., 2010), max-min diversity (Resende et al., 2010a), biobjective orienteering (Martí et al., 2015), biobjective path dissimilarity (Martí et al., 2015), generalized quadratic assignment (Mateus et al., 2011), antibandwidth (Duarte et al., 2011), capacitated clustering (Deng and Bard, 2011), linear ordering (Chaovalitwongse et al., 2011), data clustering (Frinhani et al., 2011), two-echelon location routing (Nguyen et al., 2012), image registration (Santamaría et al., 2012), drawing proportional symbols in maps (Cano et al., 2013), family traveling salesperson (Morán-Mirabal et al., 2014), handover minimization in mobility networks (Morán-Mirabal et al., 2013b), facility layout (Silva et al., 2013b), 
survivable network design (Pedrola et al., 2013), equitable dispersion (Martí and Sandoya, 2013), 2D and 3D bin packing (Alvarez-Valdes et al., 2013), microarray data analysis (Cordone and Lulli, 2013), community detection (Nascimento and Pitsoulis, 2013), set k-covering (Pessoa et al., 2013), network load balancing (Santos et al., 2013), power optimization in ad hoc networks (Moraes and Ribeiro, 2013), capacitated vehicle routing (Sörensen and Schittekat, 2013), and symmetric Euclidean clustered traveling salesman (Mestria et al., 2013).
Surveys on GRASP with path-relinking can be found in Resende and Ribeiro (2005a), Aiex and Resende (2005), Resende (2008), Resende et al. (2010b), Resende and Ribeiro (2010), Ribeiro and Resende (2012), and Festa and Resende (2013). A special issue of Computers & Operations Research (Martí et al., 2013b) was dedicated to GRASP with path-relinking.
Section 9.4 discussed evolutionary path-relinking, which was originally proposed by Resende and Werneck (2004), where it was used as a post-processing phase for a GRASP with path-relinking for the p-median problem. Andrade and Resende (2007a) were the first to apply evolutionary path-relinking periodically during the search. The term evolutionary path-relinking was introduced by Andrade and Resende (2007b). This was followed by a number of applications of GRASP with evolutionary path-relinking, e.g., to uncapacitated facility location (Resende and Werneck, 2006), max-min diversity (Resende et al., 2010a), image registration (Santamaría et al., 2010; Santamaría et al., 2012), power transmission network expansion planning (Rahmani et al., 2010), vehicle routing with trailers (Villegas, 2010), antibandwidth minimization (Duarte et al., 2011), truck and trailer routing (Villegas et al., 2011), parallel machine scheduling (Rodriguez et al., 2012), linear ordering (Duarte et al., 2012), family traveling salesperson (Morán-Mirabal et al., 2014), handover minimization in mobility networks (Morán-Mirabal et al., 2013b), set covering (Morán-Mirabal et al., 2013a), maximum cut (Morán-Mirabal et al., 2013a), node capacitated graph partitioning (Morán-Mirabal et al., 2013a), capacitated arc routing (Usberti et al., 2013), and 2D and 3D bin packing (Alvarez-Valdes et al., 2013).
Figures 9.3 and 9.4 show time-to-target plots comparing pure GRASP and GRASP with path-relinking implementations on instances of the three-index assignment problem (Aiex et al., 2005), maximum satisfiability (Festa et al., 2006), bandwidth packing (Resende and Ribeiro, 2003a), and the quadratic assignment problem (Oliveira et al., 2004).
Figure 9.8 shows results from Resende et al. (2010a), where a GRASP and GRASP with evolutionary path-relinking for max-min diversity were proposed. The simulated annealing and multistart algorithms were the ones described in Kincaid (1992) and Ghosh (1996), respectively.
The restart(κ) strategy for GRASP with path-relinking discussed in Section 9.5 was proposed by Resende and Ribeiro (2011). Besides the experiments presented in this chapter for the maximum cut instance G12, that paper also considered five other instances of maximum cut, maximum weighted satisfiability, and bandwidth packing. Strategies for speeding up stochastic local search algorithms using restarts were first proposed by Luby et al. (1993), where they proved the result for an optimal restart strategy. Restart strategies in metaheuristics have been addressed in D'Apuzzo et al. (2006), Kautz et al. (2002), Nowicki and Smutnicki (2005), Palubeckis (2004), and Sergienko et al. (2004). Further work on restart strategies can be found in Shylo et al. (2011a) and Shylo et al. (2011b).
References
B. Adenso-Díaz, S. García-Carbajal, and S.M. Gupta. A path-relinking approach for a bi-criteria disassembly sequencing problem. Computers & Operations Research, 35:3989–3997, 2008.
R.M. Aiex and M.G.C. Resende. Parallel strategies for GRASP with path-relinking. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 301–331. Springer, New York, 2005.
R.M. Aiex, S. Binato, and M.G.C. Resende. Parallel GRASP with path-relinking for job shop scheduling. Parallel Computing, 29:393–430, 2003.
R.M. Aiex, M.G.C. Resende, P.M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS Journal on Computing, 17:224–247, 2005.
R. Alvarez-Valdes, E. Crespo, J.M. Tamarit, and F. Villa. GRASP and path relinking for project scheduling under partially renewable resources. European Journal of Operational Research, 189:1153–1170, 2008a.
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A GRASP/path relinking algorithm for two- and three-dimensional multiple bin-size bin packing problems. Computers & Operations Research, 40:3081–3090, 2013.
D.V. Andrade and M.G.C. Resende. GRASP with path-relinking for network migration scheduling. In Proceedings of the International Network Optimization Conference, Spa, 2007a. URL http://bit.ly/1NfaTK0. Last visited on April 16, 2016.
D.V. Andrade and M.G.C. Resende. GRASP with evolutionary path-relinking. In Proceedings of the Seventh Metaheuristics International Conference, Montreal, 2007b.
R.G. Cano, G. Kunigami, C.C. de Souza, and P.J. de Rezende. A hybrid GRASP heuristic to construct effective drawings of proportional symbol maps. Computers & Operations Research, 40:1435–1447, 2013.
W.A. Chaovalitwongse, C.A.S. Oliveira, B. Chiarini, P.M. Pardalos, and M.G.C. Resende. Revised GRASP with path-relinking for the linear ordering problem. Journal of Combinatorial Optimization, 22:572–593, 2011.
R. Cordone and G. Lulli. A GRASP metaheuristic for microarray data analysis. Computers & Operations Research, 40:3108–3120, 2013.
M.M. D'Apuzzo, A. Migdalas, P.M. Pardalos, and G. Toraldo. Parallel computing in global optimization. In E. Kontoghiorghes, editor, Handbook of parallel computing and statistics. Chapman & Hall / CRC, Boca Raton, 2006.
X. Delorme, X. Gandibleux, and J. Rodriguez. GRASP for set packing problems. European Journal of Operational Research, 153:564–580, 2004.
Y. Deng and J.F. Bard. A reactive GRASP with path relinking for capacitated clustering. Journal of Heuristics, 17:119–152, 2011.
A. Duarte, R. Martí, M.G.C. Resende, and R.M.A. Silva. GRASP with path relinking heuristics for the antibandwidth problem. Networks, 58:171–189, 2011.
A. Duarte, R. Martí, A. Álvarez, and F. Ángel-Bello. Metaheuristics for the linear ordering problem with cumulative costs. European Journal of Operational Research, 216:270–277, 2012.
P. Festa and M.G.C. Resende. Hybridizations of GRASP with path-relinking. In E-G. Talbi, editor, Hybrid metaheuristics, volume 434 of Studies in Computational Intelligence, pages 135–155. Springer, New York, 2013.
P. Festa, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Randomized heuristics for the MAX-CUT problem. Optimization Methods and Software, 7:1033–1058, 2002.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted MAXSAT problem. ACM Journal of Experimental Algorithmics, 11:1–16, 2006.
R.D. Frinhani, R.M. Silva, G.R. Mateus, P. Festa, and M.G.C. Resende. GRASP with path-relinking for data clustering: A case study for biological data. In P.M. Pardalos and S. Rebennack, editors, Experimental algorithms, volume 6630 of Lecture Notes in Computer Science, pages 410–420. Springer, Berlin, 2011.
J.B. Ghosh. Computational aspects of the maximum diversity problem. Operations Research Letters, 19:175–181, 1996.
H. Kautz, E. Horvitz, Y. Ruan, C. Gomes, and B. Selman. Dynamic restart policies. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pages 674–681, Edmonton, 2002. American Association for Artificial Intelligence.
R.K. Kincaid. Good solutions to discrete noxious location problems via metaheuristics. Annals of Operations Research, 40:265–281, 1992.
N. Labadi, C. Prins, and M. Reghioui. GRASP with path relinking for the capacitated arc routing problem with time windows. In A. Fink and F. Rothlauf, editors, Advances in computational intelligence in transport, logistics, and supply chain management, pages 111–135. Springer, Berlin, 2008.
M. Laguna and R. Martí. GRASP and path relinking for 2-layer straight line crossing minimization. INFORMS Journal on Computing, 11:44–52, 1999.
M. Luby, A. Sinclair, and D. Zuckerman. Optimal speedup of Las Vegas algorithms. Information Processing Letters, 47:173–180, 1993.
R. Martí and F. Sandoya. GRASP and path relinking for the equitable dispersion problem. Computers & Operations Research, 40:3091–3099, 2013.
R. Martí, M.G.C. Resende, and C.C. Ribeiro. Special issue of Computers & Operations Research: GRASP with path relinking: Developments and applications. Computers & Operations Research, 40:3080, 2013b.
R. Martí, V. Campos, M.G.C. Resende, and A. Duarte. Multiobjective GRASP with path relinking. European Journal of Operational Research, 240:54–71, 2015.
G.R. Mateus, M.G.C. Resende, and R.M.A. Silva. GRASP with path-relinking for the generalized quadratic assignment problem. Journal of Heuristics, 17:527–565, 2011.
M. Mestria, L.S. Ochi, and S.L. Martins. GRASP with path relinking for the symmetric Euclidean clustered traveling salesman problem. Computers & Operations Research, 40:3218–3229, 2013.
R.E.N. Moraes and C.C. Ribeiro. Power optimization in ad hoc wireless network topology control with biconnectivity requirements. Computers & Operations Research, 40:3188–3196, 2013.
L.F. Morán-Mirabal, J.L. González-Velarde, and M.G.C. Resende. Automatic tuning of GRASP with evolutionary path-relinking. In M.J. Blesa, C. Blum, P. Festa, A. Roli, and M. Sampels, editors, Hybrid metaheuristics, volume 7919 of Lecture Notes in Computer Science, pages 62–77. Springer, Berlin, 2013a.
L.F. Morán-Mirabal, J.L. González-Velarde, M.G.C. Resende, and R.M.A. Silva. Randomized heuristics for handover minimization in mobility networks. Journal of Heuristics, 19:845–880, 2013b.
L.F. Morán-Mirabal, J.L. González-Velarde, and M.G.C. Resende. Randomized heuristics for the family traveling salesperson problem. International Transactions in Operational Research, 21:41–57, 2014.
M.C.V. Nascimento and L. Pitsoulis. Community detection by modularity maximization using GRASP with path relinking. Computers & Operations Research, 40:3121–3131, 2013.
M.C.V. Nascimento, M.G.C. Resende, and F.M.B. Toledo. GRASP heuristic with path-relinking for the multi-plant capacitated lot sizing problem. European Journal of Operational Research, 200:747–754, 2010.
V.-P. Nguyen, C. Prins, and C. Prodhon. Solving the two-echelon location routing problem by a GRASP reinforced by a learning process and path relinking. European Journal of Operational Research, 216:113–126, 2012.
E. Nowicki and C. Smutnicki. An advanced tabu search algorithm for the job shop problem. Journal of Scheduling, 8:145–159, 2005.
C.A. Oliveira, P.M. Pardalos, and M.G.C. Resende. GRASP with path-relinking for the quadratic assignment problem. In C.C. Ribeiro and S.L. Martins, editors, Experimental and efficient algorithms, volume 3059, pages 356–368. Springer, Berlin, 2004.
A.V.F. Pacheco, G.M. Ribeiro, and G.R. Mauri. A GRASP with path-relinking for the workover rig scheduling problem. International Journal of Natural Computing Research, 1:1–14, 2010.
G. Palubeckis. Multistart tabu search strategies for the unconstrained binary quadratic optimization problem. Annals of Operations Research, 131:259–282, 2004.
O. Pedrola, M. Ruiz, L. Velasco, D. Careglio, O. González de Dios, and J. Comellas. A GRASP with path-relinking heuristic for the survivable IP/MPLS-over-WSON multi-layer network optimization problem. Computers & Operations Research, 40:3174–3187, 2013.
M. Pérez, F. Almeida, and J.M. Moreno-Vega. A hybrid GRASP-path relinking algorithm for the capacitated p-hub median problem. In M.J. Blesa, C. Blum, A. Roli, and M. Sampels, editors, Hybrid metaheuristics, volume 3636 of Lecture Notes in Computer Science, pages 142–153. Springer, Berlin, 2005.
L.S. Pessoa, M.G.C. Resende, and C.C. Ribeiro. A hybrid Lagrangean heuristic with GRASP and path-relinking for set k-covering. Computers & Operations Research, 40:3132–3146, 2013.
M. Rahmani, M. Rashidinejad, E.M. Carreno, and R.A. Romero. Evolutionary multi-move path-relinking for transmission network expansion planning. In 2010 IEEE Power and Energy Society General Meeting, pages 1–6, Minneapolis, 2010. IEEE.
M.G.C. Resende. Metaheuristic hybridization with greedy randomized adaptive search procedures. In Zhi-Long Chen and S. Raghavan, editors, Tutorials in Operations Research, pages 295–319. INFORMS, 2008.
M.G.C. Resende and C.C. Ribeiro. A GRASP with path-relinking for private virtual circuit routing. Networks, 41:104–114, 2003a.
M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 29–63. Springer, New York, 2005a.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures: Advances and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 293–319. Springer, New York, 2nd edition, 2010.
M.G.C. Resende and C.C. Ribeiro. Restart strategies for GRASP with path-relinking heuristics. Optimization Letters, 5:467–478, 2011.
M.G.C. Resende and R.F. Werneck. A hybrid heuristic for the p-median problem. Journal of Heuristics, 10:59–88, 2004.
M.G.C. Resende and R.F. Werneck. A hybrid multistart heuristic for the uncapacitated facility location problem. European Journal of Operational Research, 174:54–68, 2006.
M.G.C. Resende, R. Martí, M. Gallego, and A. Duarte. GRASP and path relinking for the max-min diversity problem. Computers & Operations Research, 37:498–508, 2010a.
M.G.C. Resende, C.C. Ribeiro, F. Glover, and R. Martí. Scatter search and path-relinking: Fundamentals, advances, and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 87–107. Springer, New York, 2nd edition, 2010b.
C.C. Ribeiro and M.G.C. Resende. Path-relinking intensification methods for stochastic local search algorithms. Journal of Heuristics, 18:193–214, 2012.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro, E. Uchoa, and R.F. Werneck. A hybrid GRASP with perturbations for the Steiner problem in graphs. INFORMS Journal on Computing, 14:228–246, 2002.
F.J. Rodriguez, C. Blum, C. García-Martínez, and M. Lozano. GRASP with path-relinking for the non-identical parallel machine scheduling problem with minimising total weighted completion times. Annals of Operations Research, 201:383–401, 2012.
D.P. Ronconi and L.R.S. Henriques. Some heuristic algorithms for total tardiness minimization in a flowshop with blocking. Omega, 37:272–281, 2009.
J. Santamaría, O. Cordón, S. Damas, R. Martí, and R.J. Palma. GRASP & evolutionary path relinking for medical image registration based on point matching. In 2010 IEEE Congress on Evolutionary Computation, pages 1–8. IEEE, 2010.
J. Santamaría, O. Cordón, S. Damas, R. Martí, and R.J. Palma. GRASP and path relinking hybridizations for the point matching-based image registration problem. Journal of Heuristics, 18:169–192, 2012.
D. Santos, A. de Sousa, and F. Alvelos. A hybrid column generation with GRASP and path relinking for the network load balancing problem. Computers & Operations Research, 40:3147–3158, 2013.
I.V. Sergienko, V.P. Shilo, and V.A. Roshchin. Optimization parallelizing for discrete programming problems. Cybernetics and Systems Analysis, 40:184–189, 2004.
O.V. Shylo, T. Middelkoop, and P.M. Pardalos. Restart strategies in optimization: Parallel and serial cases. Parallel Computing, 37:60–68, 2011a.
O.V. Shylo, O.A. Prokopyev, and J. Rajgopal. On algorithm portfolios and restart strategies. Operations Research Letters, 39:49–52, 2011b.
G.C. Silva, M.R.Q. de Andrade, L.S. Ochi, S.L. Martins, and A. Plastino. New heuristics for the maximum diversity problem. Journal of Heuristics, 13:315–336, 2007.
R.M.A. Silva, M.G.C. Resende, P.M. Pardalos, G.R. Mateus, and G. de Tomi. GRASP with path-relinking for facility layout. In B.I. Goldengorin, V.A. Kalyagin, and P.M. Pardalos, editors, Models, algorithms, and technologies for network analysis, volume 59 of Springer Proceedings in Mathematics & Statistics, pages 175–190. Springer, Berlin, 2013b.
K. Sörensen and P. Schittekat. Statistical analysis of distance-based path relinking for the capacitated vehicle routing problem. Computers & Operations Research, 40:3197–3205, 2013.
F.L. Usberti, P.M. França, and A.L.M. França. GRASP with evolutionary path-relinking for the capacitated arc routing problem. Computers & Operations Research, 40:3206–3217, 2013.
J.G. Villegas. Vehicle routing problems with trailers. PhD thesis, Universite de Technologie de Troyes, Troyes, 2010.
J.G. Villegas, C. Prins, C. Prodhon, A.L. Medaglia, and N. Velasco. A GRASP with evolutionary path relinking for the truck and trailer routing problem. Computers & Operations Research, 38:1319–1334, 2011.
# 10. Parallel GRASP heuristics
Parallel computers and parallel algorithms have increasingly found their way into metaheuristics. Most parallel implementations of GRASP found in the literature consist in partitioning either the search space or the GRASP iterations and assigning each partition to a processor. GRASP is applied to each partition in parallel. These implementations can be categorized as multiple-walk independent-thread, with the communication among processors during GRASP iterations being limited to the detection of program termination and the gathering of the best solution found over all processors. Approaches for the parallelization of GRASP with path-relinking can be categorized as either multiple-walk independent-thread or multiple-walk cooperative-thread, with processors sharing and exchanging information about elite solutions visited during the GRASP iterations. This chapter is an introduction to parallel GRASP heuristics, covering multiple-walk independent-thread strategies, multiple-walk cooperative-thread strategies, and some applications of parallel GRASP and parallel GRASP with path-relinking.
## 10.1 Multiple-walk independent-thread strategies
Most parallel implementations of GRASP follow the multiple-walk independent-thread strategy, based on the distribution of the iterations among the processors. In general, each search thread performs MaxIterations∕p iterations, where p and MaxIterations are, respectively, the number of processors and the total number of iterations. Each processor has a copy of the sequential algorithm, a copy of the problem data, and an independent seed to generate its own pseudo-random number sequence; the distinct sequences keep the processors from retracing the same solutions. A single global variable is required to store the best solution found over all processors. One of the processors acts as the master, reading and distributing the problem data, generating the seeds used by the pseudo-random number generator at each processor, distributing the iterations, and collecting the best solution found by each processor. Since the iterations are independent and very little information is exchanged, linear speedups are easily obtained provided that no major load imbalance occurs. The speedup of a parallel GRASP heuristic running on p processors is the ratio of the time taken by the sequential GRASP heuristic to the time taken by the parallel heuristic running on p processors. To improve load balancing, the iterations can be distributed evenly over the processors or according to their demands.
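The iteration-splitting scheme above can be sketched in a few lines. This is only a hedged illustration, not any of the implementations reported in this chapter: the objective, construction, and local search are toy placeholders, and Python worker threads stand in for the p processors.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy one-dimensional problem standing in for a combinatorial
# optimization problem (hypothetical; used only to illustrate the
# master/worker structure of the parallel scheme).
def cost(x):
    return (x - 42) ** 2

def construct(rng):
    # Greedy randomized construction placeholder: sample a candidate.
    return rng.randint(0, 99)

def local_search(x):
    # Hill climbing in the -1/+1 neighborhood until no neighbor improves.
    while True:
        best = min((x - 1, x, x + 1), key=cost)
        if best == x:
            return x
        x = best

def grasp_walk(seed, iterations):
    # One independent walk: its own seed, hence its own
    # pseudo-random number sequence.
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        s = local_search(construct(rng))
        if best is None or cost(s) < cost(best):
            best = s
    return best

def parallel_grasp(max_iterations, p):
    # Master: distributes MaxIterations/p iterations and a distinct seed
    # to each worker, then collects the best solution over all workers.
    with ThreadPoolExecutor(max_workers=p) as ex:
        results = ex.map(grasp_walk, range(p), [max_iterations // p] * p)
    return min(results, key=cost)
```

Since the walks never communicate, the only synchronization point is the final reduction that keeps the best solution.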
Implementations of this strategy in machines with different architectures and using different software platforms have shown linear or almost-linear speedups for a number of applications. We illustrate the case for independent-thread strategies with two examples of parallel implementations.
The first example is of a parallel GRASP for the MAX-SAT problem running on a cluster of SUN-SPARC 10 workstations, sharing the same file system, with communication done using Parallel Virtual Machine (PVM). The parallel GRASP was applied to each test instance using 1, 5, 10, and 15 processors, and the maximum number of iterations was set at 1000, 200, 100, and 66, respectively. The computation time required to perform the specified number of iterations and the best solution found were recorded. Since communication was kept to a minimum, linear speedups were expected. Figure 10.1 shows individual speedups and average speedups for these runs. Figure 10.2 shows that the average quality of the solutions found was not greatly affected by the number of processors used.
Fig. 10.1
Average speedups on 5, 10, and 15 processors for the maximum satisfiability problem.
Fig. 10.2
Error on 1, 5, 10, and 15 processors for the maximum satisfiability problem.
The second example is an implementation of a parallel GRASP for the Steiner problem in graphs. Parallelization was achieved by the distribution of 512 iterations over the processors, with the value of the restricted candidate list parameter α randomly chosen in the interval [0.0, 0.3] at each iteration. The algorithm was tested on an IBM SP-2 machine with 32 processors, using the Message Passing Interface (MPI) library for communication. The 60 problems from series C, D, and E of the OR-Library were used for the computational experiments. The parallel implementation obtained 45 optimal solutions over the 60 test instances. The relative deviation with respect to the optimal value was never larger than 4%. Almost-linear speedups were observed for 2, 4, 8, and 16 processors with respect to the sequential implementation and are illustrated in Figure 10.3.
Fig. 10.3
Average speedups on 2, 4, 8, and 16 processors on Steiner tree problem in graphs.
Path-relinking may also be used in conjunction with independent-thread parallel implementations of GRASP. An independent-thread implementation for the job shop scheduling problem keeps local sets (or pools) of elite solutions in each processor, and path-relinking is applied to pairs of elite solutions stored in each local pool. Computational results for a similar independent-thread implementation, using MPI on an SGI Challenge computer with 28 R10000 processors, showed linear speedups for the 3-index assignment problem.
Multiple-walk independent-thread approaches for the parallelization of GRASP may benefit from load balancing techniques whenever heterogeneous processors are used or the parallel machine is simultaneously shared by several users. In this case, almost-linear speedups can be obtained with a heterogeneous distribution of the iterations among the p processors in q ≥ p packets. Each processor starts by performing one packet of ⌈MaxIterations∕q⌉ iterations and informs the master when it finishes its packet of iterations. The master stops the execution of each worker processor when there are no more iterations to be performed and collects the best solution found. Faster or less loaded processors will perform more iterations than the others. In the case of the parallel GRASP implemented for the problem of traffic assignment discussed elsewhere in this book, this dynamic load balancing strategy allows reductions in the elapsed times of up to 15% with respect to the times observed for the static strategy, in which the iterations are uniformly distributed over the processors.
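The packet scheme can be sketched as follows, with a shared work queue playing the role of the master that hands out packets on demand; `run_packet` is a placeholder for a packet of GRASP iterations.

```python
import math
import queue
import threading

def run_with_packets(max_iterations, p, q, run_packet):
    # Split the iterations into q >= p packets of ceil(MaxIterations/q)
    # iterations each; a worker pulls a new packet as soon as it finishes
    # the current one, so faster or less loaded processors end up
    # performing more iterations than the others.
    packet = math.ceil(max_iterations / q)
    todo = queue.Queue()
    for _ in range(q):
        todo.put(packet)
    performed = []                       # (worker id, iterations performed)

    def worker(wid):
        count = 0
        while True:
            try:
                n = todo.get_nowait()    # request the next packet
            except queue.Empty:
                break                    # no more iterations: worker stops
            run_packet(wid, n)
            count += n
        performed.append((wid, count))

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(p)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return performed
```

With heterogeneous processors, the per-worker counts returned by the sketch become uneven, which is exactly the intended balancing effect.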
The efficiency of multiple-walk independent-thread parallel implementations of metaheuristics (based on running multiple copies of the same sequential algorithm) has been addressed in the literature. The efficiency of the parallel heuristic running on p processors is given by its speedup divided by p. We have seen in Section 6.2 of this book that the time taken by a pure GRASP heuristic to find a solution with cost at least as good as a certain target value has been shown experimentally to behave as a random variable that fits an exponential distribution. In the case where the setup times are not negligible, the runtimes fit a two-parameter shifted exponential distribution. Therefore, the probability density function of the time-to-target random variable is given by f(t) = (1∕λ) · e^{−t∕λ} in the first case, or by f(t) = (1∕λ) · e^{−(t−μ)∕λ} in the second, with the parameters λ and μ being associated with the shape and the shift of the exponential function, respectively.
Recall that the speedup of a parallel GRASP heuristic running on p processors measures the ratio between the time needed to find a solution with value at least as good as the target value using a sequential algorithm and that taken by a parallel implementation with p processors. The linear speedups of parallel GRASP implementations with negligible setup times naturally follow from the expression of the probability density function f(t) = (1∕λ) · e^{−t∕λ} of the exponentially distributed time-to-target random variable, as illustrated with the previous examples. For example, suppose t_1, t_2, …, t_p are p independent exponentially distributed runtimes, each with parameter λ, where t_i is the runtime on processor i = 1, 2, …, p. By definition, the expected value of t_i is E(t_i) = λ, for i = 1, 2, …, p. Define τ to be the runtime of the parallel process, i.e., the time taken by a parallel implementation with p processors:

τ = min{t_1, t_2, …, t_p},
which is the runtime of the fastest of the p processes. Then,

P(τ > a) = P(t_1 > a, t_2 > a, …, t_p > a) = ∏_{i=1}^{p} P(t_i > a) = (e^{−a∕λ})^p = e^{−pa∕λ}.
Therefore, the cumulative distribution function of τ is given by F_τ(a) = 1 − P(τ > a) = 1 − e^{−a∕(λ∕p)}. Hence, the random variable τ is also exponentially distributed, with expected value E(τ) = λ∕p, showing that with p processors there is an expected linear speedup of p.
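This argument is easy to check numerically. The sketch below estimates E(τ) by drawing p independent exponential runtimes per trial and averaging the fastest one; for λ = 100 and p = 4 the estimate should come out near λ∕p = 25.

```python
import random

def mean_parallel_runtime(lam, p, trials=20000, seed=1):
    # Empirical mean of tau = min{t_1, ..., t_p}, where each t_i is an
    # exponential runtime with mean lam (expovariate takes the rate 1/lam).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.expovariate(1.0 / lam) for _ in range(p))
    return total / trials
```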
Let P_p(t) be the probability of not having found a given (target) solution value in t time units with p independent processors. If P_1(t) = e^{−(t−μ)∕λ}, with λ > 0 and μ ∈ ℝ, i.e., P_1 corresponds to a two-parameter shifted exponential distribution, then P_p(t) = e^{−p(t−μ)∕λ}. This follows from the definition of the two-parameter exponential distribution. It implies that the probability of finding a solution of a given value in time pt with one processor is equal to 1 − e^{−(pt−μ)∕λ}, while the probability of finding a solution at least as good as that given target value in time t with p independent parallel processors is 1 − e^{−p(t−μ)∕λ}. Note that if μ = 0, corresponding to the case of nonshifted exponential distributions, then both probabilities are equal. Furthermore, since p ≥ 1, the two probabilities are approximately equal if p|μ| ≪ λ, and it is possible to approximately achieve linear speedup in solution time-to-target value using multiple independent processors.
The observation above suggests a test using a one-processor, sequential implementation to determine whether it is likely that a parallel implementation using multiple independent processors will be efficient. We say a parallel implementation is efficient if it achieves linear speedup (with respect to wall, or elapsed, time) to find a solution at least as good as a given target value. The test consists in performing a large number of independent runs of the sequential program to build a Q-Q plot and estimate the parameters μ and λ of the shifted exponential distribution. If p|μ| ≪ λ, then we can predict that the parallel implementation will be efficient. Later in this chapter, we illustrate this test.
## 10.2 Multiple-walk cooperative-thread strategies
In this section, we focus on the use of path-relinking as a mechanism for implementing GRASP as a multiple-walk cooperative-thread strategy, in which processors share and exchange information (in this case, about elite solutions previously visited) collected during previous GRASP iterations.
Path-relinking and its hybridization with GRASP heuristics have been extensively discussed in earlier chapters of this book. The algorithm in Figure 10.4 recalls the pseudo-code of a hybrid GRASP with path-relinking for minimization, as already presented in Section 9.3.
Fig. 10.4
Pseudo-code of a template of a basic GRASP with path-relinking heuristic for minimization (revisited).
Two basic mechanisms may be used to implement a multiple-walk cooperative-thread GRASP with path-relinking heuristic. In distributed strategies, each thread maintains its own pool of elite solutions. Each iteration of each thread consists initially of a GRASP construction, followed by local search. Then, the local optimum is combined with a randomly selected element of the thread's pool using path-relinking. The output of path-relinking is finally tested for insertion into the pool. If accepted for insertion, the solution is sent to the other threads, where it is tested for insertion into the other pools. Collaboration takes place at this point. Though there may be some communication overhead in the early iterations, this tends to ease up as pool insertions become less frequent.
The second mechanism corresponds to centralized strategies based on a single pool of elite solutions. As before, each GRASP iteration performed by each thread starts with the construction and local search phases. Next, an elite solution is requested and received from the centralized pool. Once path-relinking has been performed, the solution obtained as the output is sent to the pool and tested for insertion. Collaboration takes place when an elite solution is sent from a pool to a processor distinct from the one in which the solution was originally computed.
We note that, in both the distributed and the centralized strategies, each processor has a copy of the sequential algorithm and a copy of the data. One processor acts as the master, reading and distributing the problem data, generating the seeds used by the pseudo-random number generators at each processor, distributing the iterations, and collecting the best solution found by each processor. In the case of a distributed strategy, each processor has its own pool of elite solutions and all available processors perform GRASP iterations. In a centralized strategy, by contrast, one particular processor does not perform GRASP iterations and is used exclusively to store the pool and to handle all operations involving communication requests between the pool and the workers. In the next section, we describe three examples of parallel implementations of GRASP with path-relinking.
## 10.3 Some parallel GRASP implementations
In this section, we report comparisons of multiple-walk independent-thread and multiple-walk cooperative-thread strategies for GRASP with path-relinking for the three-index assignment problem, the job shop scheduling problem, and the 2-path network design problem. For each problem, we first state the problem and describe the construction, local search, and path-relinking procedures. Next, we show numerical results comparing the different parallel implementations.
The experiments described in Sections 10.3.1 and 10.3.2 were done on an SGI Challenge computer (16 196-MHz MIPS R10000 processors and 12 194-MHz MIPS R10000 processors) with 7.6 Gb of memory. The algorithms were coded in Fortran and were compiled with the SGI MIPSpro F77 compiler using flags -O3 -static -u. The parallel codes used SGI's Message Passing Toolkit 1.4, which contains a fully compliant implementation of version 1.2 of the Message Passing Interface (MPI) specification. In the parallel experiments, wall clock times were measured with the MPI function MPI_Wtime. This was also the case for runs with a single processor that are compared to multiple-processor runs. Timing in the parallel runs excludes the time to read the problem data, to initialize the random number generator seeds, and to output the solution.
In the experiments described in Section 10.3.3, both variants of the parallel GRASP with path-relinking heuristic were coded in C and were compiled with version egcs-2.91.66 of the gcc compiler. MPI LAM 6.3.2 was used in the implementation. Computational experiments were performed on a cluster of 32 Pentium II 400-MHz processors with 32 Mbytes of RAM each, running under the Red Hat 6.2 implementation of Linux. Processors were connected by a 10 Mbits/s IBM 8274 switch.
### 10.3.1 Three-index assignment
#### 10.3.1.1 Problem formulation
The three-index assignment problem (AP3) is a straightforward extension of the classical two-dimensional assignment problem and can be formulated as follows. Given three disjoint sets I, J, and K, with |I| = |J| = |K| = n, and a weight c_{ijk} associated with each ordered triplet (i, j, k) ∈ I × J × K, find a minimum weight collection of n disjoint triplets (i, j, k) ∈ I × J × K. Another way to formulate the AP3 is with permutations. There are n³ cost elements. The optimal solution consists of the n triplets with the smallest total cost, such that the constraints are not violated. The constraints are enforced if one assigns to each set I, J, and K the numbers 1, 2, …, n and none of the chosen triplets (i, j, k) is allowed to have the same value for indices i, j, and k as another. The permutation-based formulation for the AP3 is

min_{p, q ∈ π_N} Σ_{i=1}^{n} c_{i p(i) q(i)},

where π_N denotes the set of all permutations of the set of integers N = {1, 2, …, n}.
#### 10.3.1.2 GRASP construction
The construction phase selects n triplets, one at a time, to form a three-index assignment S. A random choice in the interval [0, 1] for the restricted candidate list parameter α is made at each GRASP iteration, and the value remains constant during the entire construction phase. Construction begins with an empty solution S. The initial set C of candidate triplets consists of the set of all triplets. Let c̲ and c̄ denote, respectively, the values of the smallest and largest cost triplets in C. All triplets (i, j, k) in the candidate set C having cost c_{ijk} ≤ c̲ + α · (c̄ − c̲) are placed in the restricted candidate list C′. A triplet (i_p, j_p, k_p) ∈ C′ is chosen at random and is added to the solution, i.e., S = S ∪ {(i_p, j_p, k_p)}. Once (i_p, j_p, k_p) is selected, any triplet (i, j, k) ∈ C such that i = i_p or j = j_p or k = k_p is removed from C. After n − 1 triplets have been selected, the set C of candidate triplets contains one last triplet, which is added to S, thus completing the construction phase.
#### 10.3.1.3 Local search
If the solution of AP3 is represented by a pair of permutations (p, q), then the solution space consists of all (n!)² possible combinations of permutations. If p is a permutation vector, then a 2-exchange permutation of p is a permutation vector that results from swapping two elements of p. In the 2-exchange neighborhood scheme used in this local search, the neighborhood of a solution (p, q) consists of all 2-exchange permutations of p plus all 2-exchange permutations of q. In the local search, the cost of each neighbor solution is compared with the cost of the current solution. If the cost of the neighbor is lower, then the current solution is updated, the search is halted, and a search in the neighborhood of the new solution is initialized. The local search ends when no neighbor of the current solution has a lower cost than the current solution.
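A sketch of this first-improvement local search, using the same permutation-pair representation and an auxiliary cost function, follows; all names are illustrative.

```python
def ap3_cost(costs, p, q):
    # Cost of the assignment that pairs each i with p[i] and q[i].
    return sum(costs[i][p[i]][q[i]] for i in range(len(p)))

def first_improving_move(costs, p, q, best):
    # Scan the 2-exchange neighbors of (p, q); as soon as a swap lowers
    # the cost, keep it (in place) and report the improved cost.
    for perm in (p, q):
        n = len(perm)
        for a in range(n):
            for b in range(a + 1, n):
                perm[a], perm[b] = perm[b], perm[a]
                c = ap3_cost(costs, p, q)
                if c < best:
                    return c                          # halt: move kept
                perm[a], perm[b] = perm[b], perm[a]   # undo and continue
    return None

def two_exchange_local_search(costs, p, q):
    # Restart the neighborhood search after each improving move;
    # stop when no neighbor has lower cost than the current solution.
    best = ap3_cost(costs, p, q)
    while True:
        c = first_improving_move(costs, p, q, best)
        if c is None:
            return p, q, best
        best = c
```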
#### 10.3.1.4 Path-relinking
A solution of AP3 can be represented by two permutation arrays of the numbers 1, 2, …, n, one for set J and one for set K. Path-relinking is done between an initial solution

S = (p^S, q^S), with p^S = (p^S_1, …, p^S_n) and q^S = (q^S_1, …, q^S_n),
and a guiding solution

T = (p^T, q^T), with p^T = (p^T_1, …, p^T_n) and q^T = (q^T_1, …, q^T_n).
Let the difference between S and T be defined by the two sets of indices

δ_p^{S,T} = {i = 1, …, n : p^S_i ≠ p^T_i} and δ_q^{S,T} = {i = 1, …, n : q^S_i ≠ q^T_i}.
During a path-relinking move, a permutation array π (for either p or q) in S, given by

π^S = (π^S_1, …, π^S_i, …, π^S_j, …, π^S_n),
is replaced by a permutation array

π^{S′} = (π^S_1, …, π^S_j, …, π^S_i, …, π^S_n)
by exchanging the permutation elements π^S_i and π^S_j, where i ∈ δ_π^{S,T} and j ∈ {1, 2, …, n} are such that π^T_j = π^S_i.
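The move above can be turned into a full path-relinking sketch: walk from the initial permutation toward the guiding one, applying one exchange per step, and keep the best solution evaluated along the path. Choosing the first remaining difference at each step is a simplification of greedier move-selection rules.

```python
def path_relinking(pi_s, pi_t, cost):
    # Walk from the initial permutation pi_s toward the guiding
    # permutation pi_t. Each move exchanges elements i and j with
    # pi_t[j] == current[i], so position j then agrees with the guiding
    # solution and the number of differences strictly decreases.
    current = list(pi_s)
    best, best_cost = list(current), cost(current)
    while True:
        delta = [i for i in range(len(current)) if current[i] != pi_t[i]]
        if not delta:
            return best, best_cost
        i = delta[0]                     # first remaining difference
        j = pi_t.index(current[i])       # position with pi_t[j] == current[i]
        current[i], current[j] = current[j], current[i]
        c = cost(current)
        if c < best_cost:
            best, best_cost = list(current), c
```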
#### 10.3.1.5 Parallel independent-thread GRASP with path-relinking for AP3
We study the parallel efficiency of the multiple-walk independent-thread GRASP with path-relinking on AP3 instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1, using 7, 8, 7, and 8 as target solution values, respectively. Table 10.1 shows the estimated shifted exponential distribution parameters for the multiple-walk independent-thread GRASP with path-relinking strategy, obtained from 200 independent runs of a sequential variant of the algorithm. In addition to the sequential variant, 60 independent runs of 2-, 4-, 8-, and 16-thread variants were run on the four test problems. Average speedups were computed by dividing the sum of the execution times of the independent parallel program executing on one processor by the sum of the execution times of the parallel program on 2, 4, 8, and 16 processors, over the 60 runs. The execution times of the independent parallel implementation executing on one processor and the execution times of the sequential program are approximately the same. The average speedups can be seen in Table 10.2 and Figure 10.5.
Table 10.1
Estimated shifted exponential distribution parameters μ and λ obtained with 200 independent runs of a sequential GRASP with path-relinking on AP3 instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1, with target values 7, 8, 7, and 8, respectively.

| Problem | μ | λ | \|μ\|∕λ |
|---|---|---|---|
| B-S 20.1 | −26.46 | 1223.80 | 0.021 |
| B-S 22.1 | −135.12 | 3085.32 | 0.043 |
| B-S 24.1 | −16.76 | 4004.11 | 0.004 |
| B-S 26.1 | 32.12 | 2255.55 | 0.014 |
| average | | | 0.020 |
Table 10.2
Speedups for multiple-walk independent-thread implementations of GRASP with path-relinking on instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1, with target values 7, 8, 7, and 8, respectively. Speedups are computed as averages over 60 runs.

| Problem | speedup (2 procs) | efficiency (2) | speedup (4 procs) | efficiency (4) | speedup (8 procs) | efficiency (8) | speedup (16 procs) | efficiency (16) |
|---|---|---|---|---|---|---|---|---|
| B-S 20.1 | 1.67 | 0.84 | 3.34 | 0.84 | 6.22 | 0.78 | 10.82 | 0.68 |
| B-S 22.1 | 2.25 | 1.13 | 4.57 | 1.14 | 9.01 | 1.13 | 14.37 | 0.90 |
| B-S 24.1 | 1.71 | 0.86 | 4.00 | 1.00 | 7.87 | 0.98 | 12.19 | 0.76 |
| B-S 26.1 | 2.11 | 1.06 | 3.89 | 0.97 | 6.10 | 0.76 | 11.49 | 0.72 |
| average | 1.94 | 0.97 | 3.95 | 0.99 | 7.30 | 0.91 | 12.21 | 0.77 |
Fig. 10.5
Average speedups on 2, 4, 8, and 16 processors for multiple-walk independent-thread parallel GRASP with path-relinking on AP3 instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1.
#### 10.3.1.6 Parallel cooperative-thread GRASP with path-relinking for AP3
We now study the multiple-walk cooperative-thread strategy for GRASP with path-relinking on AP3. As with the independent-thread GRASP with path-relinking strategy, the target solution values 7, 8, 7, and 8 were used for instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1, respectively. Table 10.3 and Figure 10.6 show super-linear speedups on instances B-S 22.1, B-S 24.1, and B-S 26.1 and about 90% efficiency for B-S 20.1. Super-linear speedups are possible because good elite solutions are shared among the threads and are combined with GRASP solutions, whereas they would not be combined in an independent-thread implementation, making the parallel cooperative-thread GRASP with path-relinking converge faster to the target.
Table 10.3
Speedups for multiple-walk cooperative-thread implementations of GRASP with path-relinking on instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1, with target values 7, 8, 7, and 8, respectively. Average speedups were computed over 60 runs.

| Problem | speedup (2 procs) | efficiency (2) | speedup (4 procs) | efficiency (4) | speedup (8 procs) | efficiency (8) | speedup (16 procs) | efficiency (16) |
|---|---|---|---|---|---|---|---|---|
| B-S 20.1 | 1.56 | 0.78 | 3.47 | 0.88 | 7.37 | 0.92 | 14.36 | 0.90 |
| B-S 22.1 | 1.64 | 0.82 | 4.22 | 1.06 | 8.83 | 1.10 | 18.78 | 1.04 |
| B-S 24.1 | 2.16 | 1.10 | 4.00 | 1.00 | 9.38 | 1.17 | 19.29 | 1.21 |
| B-S 26.1 | 2.16 | 1.08 | 5.30 | 1.33 | 9.55 | 1.19 | 16.00 | 1.00 |
| average | 1.88 | 0.95 | 4.24 | 1.07 | 8.78 | 1.10 | 17.10 | 1.04 |
Fig. 10.6
Average speedups on 2, 4, 8, and 16 processors for multiple-walk cooperative-thread parallel GRASP with path-relinking on AP3 instances B-S 20.1, B-S 22.1, B-S 24.1, and B-S 26.1.
Figure 10.7 compares the average speedup of the two implementations tested in this section, namely the multiple-walk independent-thread and the multiple-walk cooperative-thread GRASP with path-relinking implementations using target solution values 7, 8, 7, and 8, on the same instances. The figure shows that the cooperative variant of GRASP with path-relinking achieves the best parallelization, since the largest speedups are observed for that variant.
Fig. 10.7
Average speedups on 2, 4, 8, and 16 processors for the parallel algorithms tested on instances of AP3: multiple-walk independent-thread GRASP with path-relinking and multiple-walk cooperative-thread GRASP with path-relinking.
### 10.3.2 Job shop scheduling
#### 10.3.2.1 Problem formulation
The job shop scheduling problem (JSP) has long challenged researchers. It consists in processing a finite set of jobs on a finite set of machines. Each job is required to complete a set of operations in a fixed order. Each operation is processed on a specific machine for a fixed duration. Each machine can process at most one job at a time. Once a job initiates processing on a given machine, it must complete processing on that machine without interruption. A schedule is a mapping of the operations to time slots on the machines. The makespan is the maximum completion time of the jobs. The objective of the JSP is to find a schedule that minimizes the makespan.
A feasible solution of the JSP can be built from a permutation of the set of jobs on each of the machines, observing the precedence constraints, the restriction that a machine can process only one operation at a time, and requiring that once started, processing of an operation cannot be interrupted until its completion. Since each set of feasible permutations has a corresponding schedule, the objective of the JSP is to find, among the feasible permutations, the one with the smallest makespan.
#### 10.3.2.2 GRASP construction
Each single operation is a building block of the GRASP construction phase for the JSP. A feasible schedule is built by scheduling individual operations, one at a time, until all operations have been scheduled.
While constructing a feasible schedule, not all operations can be selected at a given stage of the construction. An operation σ_{kj} can only be scheduled if all prior operations of job j have already been scheduled. Therefore, at each construction phase iteration, each job contributes at most one candidate operation, namely its first not-yet-scheduled operation, and these candidates are distinguished from the set of already scheduled operations. Denote the value of the greedy function for a candidate operation σ_{kj} by h(σ_{kj}).
The greedy choice is to next schedule the candidate operation with the smallest greedy function value. Let h̲ and h̄ denote, respectively, the smallest and largest values of h(σ_{kj}) over all candidate operations. Then, the GRASP restricted candidate list (RCL) is defined as

RCL = {candidate operations σ_{kj} : h̲ ≤ h(σ_{kj}) ≤ h̲ + α · (h̄ − h̲)},

where α is a parameter such that 0 ≤ α ≤ 1.
A typical iteration of the GRASP construction is summarized as follows: a partial schedule (which is initially empty) is on hand, and the next operation to be scheduled is selected from the RCL and added to the partial schedule, resulting in a new partial schedule. The selected operation is inserted into the earliest available feasible time slot on the machine that processes it. Construction ends when the partial schedule is complete, i.e., when all operations have been scheduled.
The algorithm uses two greedy functions. Even-numbered iterations use a greedy function based on the makespan resulting from the inclusion of operation σ_{kj} in the set of already-scheduled operations. On odd-numbered iterations, solutions are constructed by favoring operations from jobs having long remaining processing times, with the greedy function measuring the total processing time of the not-yet-scheduled operations of job j. The use of two different greedy functions produces a greater diversity of initial solutions to be used by the local search.
#### 10.3.2.3 Local search
As an attempt to decrease the makespan of the solution produced in the construction phase, we employ a 2-exchange local search procedure based on a disjunctive graph model.
#### 10.3.2.4 Path-relinking
Path-relinking for job shop scheduling is similar to path-relinking for three-index assignment. Whereas in the case of three-index assignment each solution is represented by two permutation arrays, in the job shop scheduling problem each solution is made up of one permutation array per machine, each being a permutation of the jobs processed on that machine.
#### 10.3.2.5 Parallel independent-thread GRASP with path-relinking for JSP
We study the efficiency of the multiple-walk independent-thread GRASP with path-relinking on JSP instances abz6, mt10, orb5, and la21 of ORLib, using 943, 938, 895, and 1100 as target solution values, respectively. Table 10.4 shows the estimated shifted exponential distribution parameters for the multiple-walk independent-thread GRASP with path-relinking strategy, obtained from 200 independent runs of a sequential variant of the algorithm. In addition to the sequential variant, 60 independent runs of 2-, 4-, 8-, and 16-thread variants were run on the four test problems. As before, the average speedups were computed by dividing the sum of the execution times of the independent parallel program executing on one processor by the sum of the execution times of the parallel program on 2, 4, 8, and 16 processors, over the 60 runs. The average speedups can be seen in Table 10.5 and Figure 10.8.
Table 10.4
Estimated shifted exponential distribution parameters μ and λ obtained with 200 independent runs of a sequential GRASP with path-relinking on JSP instances abz6, mt10, orb5, and la21, with target values 943, 938, 895, and 1100, respectively.

| Problem | μ | λ | \|μ\|∕λ |
|---|---|---|---|
| abz6 | 47.67 | 756.56 | 0.06 |
| mt10 | 305.27 | 524.23 | 0.58 |
| orb5 | 130.12 | 395.41 | 0.32 |
| la21 | 175.20 | 407.73 | 0.42 |
| average | | | 0.34 |
Table 10.5
Speedups for multiple-walk independent-thread implementations of GRASP with path-relinking on instances abz6, mt10, orb5, and la21, with target values 943, 938, 895, and 1100, respectively. Speedups are computed as averages over 60 runs.

| Problem | speedup (2 procs) | efficiency (2) | speedup (4 procs) | efficiency (4) | speedup (8 procs) | efficiency (8) | speedup (16 procs) | efficiency (16) |
|---|---|---|---|---|---|---|---|---|
| abz6 | 2.00 | 1.00 | 3.36 | 0.84 | 6.44 | 0.81 | 10.51 | 0.66 |
| mt10 | 1.57 | 0.79 | 2.12 | 0.53 | 3.03 | 0.39 | 4.05 | 0.25 |
| orb5 | 1.95 | 0.98 | 2.97 | 0.74 | 3.99 | 0.50 | 5.36 | 0.34 |
| la21 | 1.64 | 0.82 | 2.25 | 0.56 | 3.14 | 0.39 | 3.72 | 0.23 |
| average | 1.79 | 0.90 | 2.67 | 0.67 | 4.15 | 0.52 | 5.91 | 0.37 |
Fig. 10.8
Average speedups on 2, 4, 8, and 16 processors for multiple-walk independent-thread parallel GRASP with path-relinking on JSP instances abz6, mt10, orb5, and la21.
Compared to the efficiencies observed on the AP3 instances, those for these JSP instances were much worse. While average speedups of 12.2 were observed on 16 processors for AP3, average speedups of only 5.9 occurred for JSP. This is consistent with the test proposed earlier in this chapter, since the average |μ|∕λ values for AP3 and JSP are equal to 0.02 and 0.34, respectively.
#### 10.3.2.6 Parallel cooperative-thread GRASP with path-relinking for JSP
We now study the multiple-walk cooperative-thread strategy for GRASP with path-relinking on the JSP. As with the independent-thread GRASP with path-relinking strategy, the target solution values 943, 938, 895, and 1100 were used for instances abz6, mt10, orb5, and la21, respectively. Table 10.6 and Figure 10.9 show super-linear speedups on instances abz6 and mt10, linear speedup on orb5 and about 70% efficiency for la21. As before, super-linear speedups are possible because good elite solutions are shared among the threads and these elite solutions are combined with GRASP solutions whereas they would not be combined in an independent-thread implementation.
Table 10.6
Speedups for multiple-walk cooperative-thread implementations of GRASP with path-relinking on instances abz6, mt10, orb5, and la21, with target values 943, 938, 895, and 1100, respectively. Average speedups were computed over 60 runs.

| Problem | speedup (2 procs) | efficiency (2) | speedup (4 procs) | efficiency (4) | speedup (8 procs) | efficiency (8) | speedup (16 procs) | efficiency (16) |
|---|---|---|---|---|---|---|---|---|
| abz6 | 2.40 | 1.20 | 4.21 | 1.05 | 11.43 | 1.43 | 23.58 | 1.47 |
| mt10 | 1.75 | 0.88 | 4.58 | 1.15 | 8.36 | 1.05 | 16.97 | 1.06 |
| orb5 | 2.10 | 1.05 | 4.91 | 1.23 | 8.89 | 1.11 | 15.76 | 0.99 |
| la21 | 2.23 | 1.12 | 4.47 | 1.12 | 7.54 | 0.94 | 11.41 | 0.71 |
| average | 2.12 | 1.06 | 4.54 | 1.14 | 9.05 | 1.13 | 16.93 | 1.06 |
Fig. 10.9
Average speedups on 2, 4, 8, and 16 processors for multiple-walk cooperative-thread parallel GRASP with path-relinking on JSP instances abz6, mt10, orb5, and la21.
Figure 10.10 compares the average speedup of the two implementations tested in this section, namely the multiple-walk independent-thread and the multiple-walk cooperative-thread GRASP with path-relinking implementations using target solution values 943, 938, 895, and 1100, on instances abz6, mt10, orb5, and la21, respectively. The figure shows that the cooperative variant of GRASP with path-relinking achieves the best parallelization.
Fig. 10.10
Average speedups on 2, 4, 8, and 16 processors for the parallel algorithms tested on instances of JSP: multiple-walk independent-thread GRASP with path-relinking and multiple-walk cooperative-thread GRASP with path-relinking.
### 10.3.3 2-path network design problem
#### 10.3.3.1 Problem formulation
Let G = (V, U) be a connected graph, where V is the set of nodes and U is the set of edges. A k-path between nodes s, t ∈ V is a sequence of at most k edges connecting s and t. Given a non-negative weight function w: U → ℝ₊ associated with the edges of G and a set D of origin-destination pairs of nodes, the 2-path network design problem (2PNDP) consists in finding a minimum-weight subset of edges U′ ⊆ U containing a 2-path between every origin-destination pair.
Applications of 2PNDP can be found in the design of communications networks, in which paths with few edges are sought to enforce high reliability and small delays.
#### 10.3.3.2 GRASP construction
The construction of a new solution begins by the initialization of modified edge weights with the original edge weights. Each iteration of the construction phase starts by the random selection of an origin-destination pair still in D. A shortest 2-path between the extremities of this pair is computed, using the modified edge weights. The weights of the edges in this 2-path are set to zero until the end of the construction procedure, the origin-destination pair is removed from D, and a new iteration resumes. The construction phase stops when 2-paths have been computed for all origin-destination pairs.
#### 10.3.3.3 Local search
The local search phase seeks to improve each solution built in the construction phase. Each solution may be viewed as a set of 2-paths, one for each origin-destination pair in D. To introduce diversity, so that different applications of the local search are driven to different local optima, the origin-destination pairs are investigated at each GRASP iteration in a circular order, defined by a different random permutation of their original indices.
Each 2-path in the current solution is tentatively eliminated. The weights of the edges used by other 2-paths are temporarily set to zero, while those which are not used by other 2-paths in the current solution are restored to their original values. A new shortest 2-path between the extremities of the origin-destination pair under investigation is computed, using the modified weights. If the new 2-path improves the current solution, then the current solution is updated with the new 2-path; otherwise the previous 2-path is restored. The search stops if the current solution is not improved after a sequence of | D | iterations along which all 2-paths are investigated. Otherwise, the next 2-path in the current solution is investigated for substitution and a new iteration resumes.
#### 10.3.3.4 Path-relinking
A solution to 2PNDP is represented as a set of 2-paths connecting each origin-destination pair. Path-relinking starts by determining all origin-destination pairs whose associated 2-paths are different in the starting and guiding solutions. These computations amount to determining a set of moves which should be applied to the initial solution to reach the guiding solution. Each move is characterized by a pair of 2-paths, one to be inserted and the other to be eliminated from the current solution.
#### 10.3.3.5 Parallel implementations of GRASP with path-relinking for 2PNDP
As for AP3 and JSP, in the case of an independent-thread parallel implementation of GRASP with path-relinking for 2PNDP, each processor has a copy of the sequential algorithm, a copy of the data, and its own pool of elite solutions. One processor acts as the master, reading and distributing the problem data, generating the seeds used by the pseudo-random number generators at each processor, distributing the iterations, and collecting the best solution found by each processor. All the p available processors perform GRASP iterations.
On the other hand, in the case of a cooperative-thread parallel implementation of GRASP with path-relinking for 2PNDP, the master handles a centralized pool of elite solutions, collecting and distributing elite solutions upon request (recall that in the case of AP3 and JSP each processor had its own pool of elite solutions). The p − 1 workers exchange elite solutions found along their search trajectories. In this implementation for 2PNDP, each worker can send up to three different solutions to the master at each iteration: the solution obtained by local search, and solutions w 1 and w 2 obtained by forward and backward path-relinking between the same pair of starting and guiding solutions, respectively.
#### 10.3.3.6 Computational results
The results illustrated in this section are for an instance with 100 nodes, 4950 edges, and 1000 origin-destination pairs. We use the methodology based on time-to-target plots showing empirical runtime distributions of the random variable time to target solution value. To plot the empirical distribution, we fix a solution target value and run each algorithm 200 times, recording the running time when a solution with cost at least as good as the target value is found. For each algorithm, we associate with the i-th sorted running time t i a probability p i  = (i − 1∕2)∕200 and plot the points z i  = (t i , p i ), for i = 1,..., 200.
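The points of the empirical runtime distribution can be computed directly from the recorded times; this helper is a sketch of the methodology just described (the function name is ours):

```python
def ttt_points(times):
    """Empirical runtime distribution for a time-to-target plot.

    times: running times of the independent runs that reached the target.
    Associates with the i-th sorted time t_i the probability
    p_i = (i - 1/2) / n, where n is the number of runs.
    """
    n = len(times)
    return [(t, (i - 0.5) / n) for i, t in enumerate(sorted(times), start=1)]
```

Plotting the returned pairs yields the cumulative probability of reaching the target within a given time, which is how Figures 10.11 and 10.12 are produced.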
Results obtained for both the independent-thread and the cooperative-thread parallel implementations of GRASP with path-relinking on the above instance with the target value set at 683 are reported in Figure 10.11. The cooperative implementation is already faster than the independent implementation for eight processors. For fewer processors the independent implementation is naturally faster, since it employs all p processors in the search (while only p − 1 worker processors effectively take part in the computations performed by the cooperative implementation).
Fig. 10.11
Running times for 200 runs of the multiple-walk independent-thread and the multiple-walk cooperative-thread implementations of GRASP with path-relinking using (a) two processors and (b) eight processors, with the target solution value set at 683.
Three strategies further improve the performance of the cooperative-thread implementation, by reducing the cost of the communication between the master and the workers when the number of processors increases:
* Strategy 1: Each send operation is broken into two parts. First, the worker sends only the cost of the solution to the master. If this solution is better than the worst solution in the pool, then the full solution is sent. The number of messages increases, but most of them are very small, with light memory requirements.
* Strategy 2: Only one solution is sent to the pool at each GRASP iteration.
* Strategy 3: A distributed implementation, in which each worker handles its own pool of elite solutions. Every time a processor finds a new elite solution, the newly found elite solution is broadcast to the other processors.
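As an illustration of Strategy 1, the following is a minimal sketch of the master-side test that decides whether the full solution should be requested after only its cost has been received; the function name and pool representation are assumptions:

```python
def should_request_full_solution(candidate_cost, pool_costs, pool_size):
    """Strategy 1, master side: the worker first sends only the cost of
    its candidate; the full (heavy) solution is requested only if it
    would enter the pool, i.e., if the pool is not yet full or the
    candidate beats the worst pooled cost (minimization)."""
    if len(pool_costs) < pool_size:
        return True
    return candidate_cost < max(pool_costs)
```

Only when this test succeeds is the large message carrying the full solution exchanged, which is what keeps most messages small.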
Comparative results for these three strategies on the same problem instance are plotted in Figure 10.12. The first strategy outperforms the others.
Fig. 10.12
Strategies for improving the performance of the centralized multiple-walk cooperative-thread implementation on eight processors.
Table 10.7 lists the average computation times and the best solutions found over ten runs of each strategy when the total number of GRASP iterations is set at 3200. There is a clear degradation in solution quality for the independent-thread strategy when the number of processors increases, despite the fact that speedups are high, of the same order as the number of processors used in the computations. As fewer iterations are performed by each processor, the pool of elite solutions gets poorer with the increase in the number of processors. Since the processors do not communicate, the overall solution quality is worse. In the case of the cooperative strategy, the information shared by the processors guarantees the high quality of the solutions in the pool. The cooperative implementation is more robust: solution quality does not deteriorate and very good solutions are obtained as the number of processors increases. Smaller speedups than those obtained with the independent-thread strategy are observed. However, the efficiency remains close to one for up to 16 processors.
Table 10.7
Average times and best solutions over ten runs of 2PNDP.

| Processors | Independent best value | Independent avg. time (s) | Independent speedup | Cooperative best value | Cooperative avg. time (s) | Cooperative speedup | Cooperative efficiency |
|---|---|---|---|---|---|---|---|
| 1 | 673 | 1310.1 | — | — | — | — | — |
| 2 | 676 | 686.8 | 1.91 | 676 | 1380.9 | 0.95 | 0.48 |
| 4 | 680 | 332.7 | 3.94 | 673 | 464.1 | 2.82 | 0.71 |
| 8 | 687 | 164.1 | 7.98 | 676 | 200.9 | 6.52 | 0.82 |
| 16 | 692 | 81.7 | 16.04 | 674 | 97.5 | 13.44 | 0.84 |
| 32 | 702 | 41.3 | 31.72 | 678 | 74.6 | 17.56 | 0.55 |
## 10.4 Bibliographical notes
Metaheuristics, such as GRASP, have found their way into the standard toolkit of combinatorial optimization methods, and parallel computers have become an increasingly common platform on which to run them. Verhoeven and Aarts (1995), Cung et al. (2002), Duni Ekşioğlu et al. (2002), Alba (2005), and Talbi (2009) presented good accounts of parallel implementations of metaheuristics.
Most multiple-walk independent-thread parallel implementations of GRASP (with or without path-relinking) described in Section 10.1 are based on partitioning the search space or the iterations among a number of processors and appeared in Alvim and Ribeiro (1998), Canuto et al. (2001), Feo et al. (1994), Drummond et al. (2002), Li et al. (1994), Martins et al. (1998), Martins et al. (1999), Martins et al. (2000), Martins et al. (2004), Murphey et al. (1998), Pardalos et al. (1995), Pardalos et al. (1996), Resende et al. (1998), and Ribeiro and Rosseti (2002), among other references. Linear speedups can be expected in parallel multiple-walk independent-thread implementations. This was illustrated with applications to the maximum satisfiability problem and to the Steiner problem in graphs. Pardalos et al. (1996) implemented a parallel GRASP for the MAX-SAT problem using PVM (Geist et al., 1994). Martins et al. (1998) implemented a parallel GRASP for the Steiner problem in graphs using MPI (Snir et al., 1998) on test problems taken from the OR-Library (Beasley, 1990a).
In the case of the multiple-walk independent-thread implementation described by Aiex et al. (2005) for the 3-index assignment problem and by Aiex et al. (2003) for the job shop scheduling problem, each processor applies path-relinking to pairs of elite solutions stored in a local pool. The test for predicting whether a parallel implementation using multiple independent processors will be efficient was proposed in Aiex and Resende (2005).
Path-relinking has been increasingly used to introduce memory in the otherwise memoryless original GRASP procedure and was also used in conjunction with parallel implementations of GRASP. The hybridization of GRASP and path-relinking led to some effective multiple-walk cooperative-thread implementations. Collaboration between the threads is usually achieved by sharing elite solutions, either in a single centralized pool or in distributed pools. In some of these implementations, super-linear speedups were achieved even for cases where small speedups occurred in multiple-walk independent-thread variants.
Section 10.2 dealt with multiple-walk cooperative-thread implementations of GRASP with path-relinking using distributed strategies that appeared in Aiex et al. (2003) and Aiex and Resende (2005), in which each thread maintains its own pool of elite solutions. Centralized strategies appeared in Martins et al. (2004) and Ribeiro and Rosseti (2002), in which only a single pool of elite solutions was used.
The three-index assignment problem (AP3) (Pierskalla, 1967) is a straightforward NP-hard (Frieze, 1983; Garey and Johnson, 1979) extension of the classical two-dimensional assignment problem. The parallel implementations and the computational experiments reported in Section 10.3.1 appeared originally in Aiex et al. (2005). Exact and heuristic algorithms exist for this problem in the literature (Balas and Saltzman, 1991; Burkard and Fröhlich, 1980; Burkard and Rudolf, 1993; Burkard et al., 1996; Crama and Spieksma, 1992; Hansen and Kaufman, 1973; Leue, 1972; Pardalos and Pitsoulis, 2000; Pierskalla, 1967; 1968; Vlach, 1967; Voss, 2000). Test instances were described in Balas and Saltzman (1991), Crama and Spieksma (1992), and Burkard et al. (1996).
The job shop scheduling problem (JSP) considered in Section 10.3.2 was proved to be NP-hard by Lenstra and Rinnooy Kan (1979). The GRASP construction phase is the one proposed in Binato et al. (2002) and Aiex et al. (2003). The 2-exchange local search is used in Aiex et al. (2003), Binato et al. (2002), and Taillard (1991), and is based on the disjunctive graph model of Roy and Sussmann (1964). We also refer to Aiex et al. (2003) and Binato et al. (2002) for a description of the implementation of the local search procedure. Test instances are available from the OR-Library (Beasley, 1990a).
Applications of the 2-path network design problem (2PNDP) introduced in Section 10.3.3 can be found in the design of communications networks, in which paths with few edges are sought to enforce high reliability and small delays. The problem was shown to be NP-hard by Dahl and Johannessen (2004). Ribeiro and Rosseti (2002; 2007) developed parallel GRASP heuristics for 2PNDP.
References
R.M. Aiex and M.G.C. Resende. Parallel strategies for GRASP with path-relinking. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 301–331. Springer, New York, 2005.
R.M. Aiex, S. Binato, and M.G.C. Resende. Parallel GRASP with path-relinking for job shop scheduling. Parallel Computing, 29:393–430, 2003.
R.M. Aiex, M.G.C. Resende, P.M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS Journal on Computing, 17:224–247, 2005.
E. Alba. Parallel metaheuristics: A new class of algorithms. Wiley, New York, 2005.
A.C. Alvim and C.C. Ribeiro. Load balancing for the parallelization of the GRASP metaheuristic. In Proceedings of the X Brazilian Symposium on Computer Architecture, pages 279–282, Búzios, 1998.
E. Balas and M.J. Saltzman. An algorithm for the three-index assignment problem. Operations Research, 39:150–161, 1991.
J.E. Beasley. OR-Library: Distributing test problems by electronic mail. Journal of the Operational Research Society, 41:1069–1072, 1990a.
S. Binato, W.J. Hery, D. Loewenstern, and M.G.C. Resende. A GRASP for job shop scheduling. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 59–79. Kluwer Academic Publishers, Boston, 2002.
R.E. Burkard and K. Fröhlich. Some remarks on 3-dimensional assignment problems. Methods of Operations Research, 36:31–36, 1980.
R.E. Burkard and R. Rudolf. Computational investigations on 3-dimensional axial assignment problems. Belgian Journal of Operational Research, Statistics and Computer Science, 32:85–98, 1993.
R.E. Burkard, R. Rudolf, and G.J. Woeginger. Three-dimensional axial assignment problems with decomposable cost coefficients. Discrete Applied Mathematics, 65:123–139, 1996.
S.A. Canuto, M.G.C. Resende, and C.C. Ribeiro. Local search with perturbations for the prize-collecting Steiner tree problem in graphs. Networks, 38:50–58, 2001.
Y. Crama and F.C.R. Spieksma. Approximation algorithms for three-dimensional assignment problems with triangle inequalities. European Journal of Operational Research, 60:273–279, 1992.
V.-D. Cung, S.L. Martins, C.C. Ribeiro, and C. Roucairol. Strategies for the parallel implementation of metaheuristics. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 263–308. Kluwer Academic Publishers, Boston, 2002.
G. Dahl and B. Johannessen. The 2-path network design problem. Networks, 43:190–199, 2004.
L.M.A. Drummond, L.S. Vianna, M.B. Silva, and L.S. Ochi. Distributed parallel metaheuristics based on GRASP and VNS for solving the traveling purchaser problem. In Proceedings of the Ninth International Conference on Parallel and Distributed Systems, pages 257–263, Chungli, 2002. IEEE.
S. Duni Ekşioğlu, P.M. Pardalos, and M.G.C. Resende. Parallel metaheuristics for combinatorial optimization. In R. Corrêa, I. Dutra, M. Fiallos, and F. Gomes, editors, Models for parallel and distributed computation – Theory, algorithmic techniques and applications, pages 179–206. Kluwer Academic Publishers, Boston, 2002.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Operations Research, 42:860–878, 1994.
A.M. Frieze. Complexity of a 3-dimensional assignment problem. European Journal of Operational Research, 13:161–164, 1983.
M.R. Garey and D.S. Johnson. Computers and intractability. Freeman, San Francisco, 1979.
A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Mancheck, and V. Sunderam. PVM: Parallel virtual machine, A user's guide and tutorial for networked parallel computing. Scientific and Engineering Computation. MIT Press, Cambridge, 1994.
P. Hansen and L. Kaufman. A primal-dual algorithm for the three-dimensional assignment problem. Cahiers du CERO, 15:327–336, 1973.
J.K. Lenstra and A.H.G. Rinnooy Kan. Computational complexity of discrete optimization problems. Annals of Discrete Mathematics, 4:121–140, 1979.
O. Leue. Methoden zur Lösung dreidimensionaler Zuordnungsprobleme. Angewandte Informatik, 14:154–162, 1972.
Y. Li, P.M. Pardalos, and M.G.C. Resende. A greedy randomized adaptive search procedure for the quadratic assignment problem. In P.M. Pardalos and H. Wolkowicz, editors, Quadratic assignment and related problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 237–261. American Mathematical Society, Providence, 1994.
S.L. Martins, C.C. Ribeiro, and M.C. Souza. A parallel GRASP for the Steiner problem in graphs. In A. Ferreira, J. Rolim, H. Simon, and S.-H. Teng, editors, Solving irregularly structured problems in parallel, volume 1457 of Lecture Notes in Computer Science, pages 285–297. Springer, Berlin, 1998.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Greedy randomized adaptive search procedures for the Steiner problem in graphs. In P.M. Pardalos, S. Rajasejaran, and J. Rolim, editors, Randomization methods in algorithmic design, volume 43 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 133–145. American Mathematical Society, Providence, 1999.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. A parallel GRASP for the Steiner tree problem in graphs using a hybrid local search strategy. Journal of Global Optimization, 17:267–283, 2000.
S.L. Martins, C.C. Ribeiro, and I. Rosseti. Applications and parallel implementations of metaheuristics in network design and routing. In S. Manandhar, J. Austin, U. Desai, Y. Oyanagi, and A.K. Talukder, editors, Applied computing, volume 3285 of Lecture Notes in Computer Science, pages 205–213. Springer, Berlin, 2004.
R.A. Murphey, P.M. Pardalos, and L.S. Pitsoulis. A parallel GRASP for the data association multidimensional assignment problem. In P.M. Pardalos, editor, Parallel processing of discrete problems, volume 106 of The IMA Volumes in Mathematics and Its Applications, pages 159–180. Springer, New York, 1998.
P.M. Pardalos and L.S. Pitsoulis. Nonlinear assignment problems: Algorithms and applications. Kluwer Academic Publishers, Boston, 2000.
P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. A parallel GRASP implementation for the quadratic assignment problem. In A. Ferreira and J. Rolim, editors, Parallel algorithms for irregular problems: State of the art, pages 115–133. Kluwer Academic Publishers, Boston, 1995.
P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. A parallel GRASP for MAX-SAT problems. In J. Waśniewski, J. Dongarra, K. Madsen, and D. Olesen, editors, Applied parallel computing industrial computation and optimization, volume 1184 of Lecture Notes in Computer Science, pages 575–585. Springer, Berlin, 1996.
W.P. Pierskalla. The tri-substitution method for the three-multidimensional assignment problem. Journal of the Canadian Operational Research Society, 5:71–81, 1967.
W.P. Pierskalla. The multidimensional assignment problem. Operations Research, 16:422–431, 1968.
M.G.C. Resende, T.A. Feo, and S.H. Smith. Algorithm 787: Fortran subroutines for approximate solution of maximum independent set problems using GRASP. ACM Transactions on Mathematical Software, 24:386–394, 1998.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro and I. Rosseti. Efficient parallel cooperative implementations of GRASP heuristics. Parallel Computing, 33:21–35, 2007.
B. Roy and B. Sussmann. Les problèmes d'ordonnancement avec contraintes disjonctives. Technical Report Note DS no. 9 bis, SEMA, Montrouge, 1964.
M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra. MPI – The complete reference, Volume 1 – The MPI core. MIT Press, Cambridge, 1998.
E.D. Taillard. Robust taboo search for the quadratic assignment problem. Parallel Computing, 17:443–455, 1991.
E.-G. Talbi. Metaheuristics: From design to implementation. Wiley, New York, 2009.
M.G.A. Verhoeven and E.H.L. Aarts. Parallel local search. Journal of Heuristics, 1:43–66, 1995.
M. Vlach. Branch and bound method for the three index assignment problem. Ekonomicko-Mathematický Obzor, 3:181–191, 1967.
S. Voss. Heuristics for nonlinear assignment problems. In P.M. Pardalos and L.S. Pitsoulis, editors, Nonlinear assignment problems: Algorithms and applications, pages 175–215. Kluwer Academic Publishers, Boston, 2000.
# 11. GRASP for continuous optimization
Continuous GRASP, or C-GRASP, extends GRASP to the domain of continuous box-constrained global optimization. The algorithm searches the solution space over a dynamic grid. Each iteration of C-GRASP consists of two phases. In the construction (or diversification) phase, a greedy randomized solution is constructed. In the local search (or intensification) phase, a local search algorithm starts from the first phase solution and produces an approximate locally optimal solution. A deterministic rule triggers a restart after each C-GRASP iteration. This chapter addresses the construction phase and the restart strategy, and presents a local search procedure for continuous GRASP.
## 11.1 Box-constrained global optimization
Continuous global optimization seeks a minimum or maximum of a multimodal function over a continuous domain. In its minimization form, global optimization can be stated as finding a global minimum S ∗ ∈ F such that f(S ∗) ≤ f(S) for all S ∈ F, where F is some region of ℝ n and the multimodal objective function is defined by f: ℝ n  → ℝ. In this chapter, we limit ourselves to box constraints: the domain is a hyper-rectangle F = { S = (S 1,..., S n ) ∈ ℝ n : ℓ ≤ S ≤ u}, where ℓ, u ∈ ℝ n such that ℓ i ≤ u i , for i = 1,..., n. Therefore, the minimization problem considered here consists in finding S ∗ = argmin{f(S): ℓ ≤ S ≤ u}, where ℓ ∈ ℝ n , u ∈ ℝ n , and S ∈ ℝ n .
Five examples of classical box-constrained continuous global optimization problems are
* Ackley function:

$$\min \; A(x) = -20\exp \Bigl (-0.2\sqrt{ n^{-1}\sum _{i=1}^{n}x_{i}^{2}}\Bigr ) -\exp \Bigl (n^{-1}\sum _{i=1}^{n}\cos (2\pi x_{i})\Bigr ) + 20 + e,$$

where (x 1,..., x n ) ∈ [−15, 30] n .
* Bohachevsky function:

$$\min \; B(x) = x_{1}^{2} + 2x_{2}^{2} - 0.3\cos (3\pi x_{1}) - 0.4\cos (4\pi x_{2}) + 0.7,$$

where (x 1, x 2) ∈ [−50, 100]2.
* Schwefel function:

$$\min \; SC(x) = -\sum _{i=1}^{n}x_{i}\sin (\sqrt{\vert x_{i}\vert }),$$

where (x 1,..., x n ) ∈ [−500, 500] n .
* Shekel function:
$$\min \; S_{4,m}(x) = -\sum _{i=1}^{m}\bigl [(x - a_{i})^{T}(x - a_{i}) + c_{i}\bigr ]^{-1},$$
where (x 1, x 2, x 3, x 4) ∈ [0, 10]4,
$$a = \left [\begin{array}{cccc} 4.0&4.0&4.0&4.0\\ 1.0&1.0&1.0&1.0\\ 8.0&8.0&8.0&8.0\\ 6.0&6.0&6.0&6.0\\ 7.0&3.0&7.0&3.0\\ 2.0&9.0&2.0&9.0\\ 5.0&5.0&3.0&3.0\\ 8.0&1.0&8.0&1.0\\ 6.0&2.0&6.0&2.0\\ 7.0&2.6&7.0&3.6 \end{array}\right ],$$
and c = (0. 1, 0. 2, 0. 2, 0. 4, 0. 4, 0. 6, 0. 3, 0. 7, 0. 5, 0. 5).
* Shubert function:
$$\min \; SH(x) ={\Bigl [\sum _{i=1}^{5}i\cos [(i + 1)x_{1} + i]\Bigr ]}{\Bigl [\sum _{i=1}^{5}i\cos [(i + 1)x_{2} + i]\Bigr ]},$$
where (x 1, x 2) ∈ [−10, 10]2.
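For concreteness, two of the functions above can be coded directly. This is a sketch using the standard formulas for the Bohachevsky and Shubert functions:

```python
import math

def bohachevsky(x1, x2):
    """B(x) = x1^2 + 2*x2^2 - 0.3*cos(3*pi*x1) - 0.4*cos(4*pi*x2) + 0.7;
    the global minimum value 0 is attained at (0, 0)."""
    return (x1**2 + 2*x2**2 - 0.3*math.cos(3*math.pi*x1)
            - 0.4*math.cos(4*math.pi*x2) + 0.7)

def shubert(x1, x2):
    """SH(x): product of two 5-term cosine sums, one per coordinate."""
    s1 = sum(i*math.cos((i + 1)*x1 + i) for i in range(1, 6))
    s2 = sum(i*math.cos((i + 1)*x2 + i) for i in range(1, 6))
    return s1 * s2
```

Both are multimodal, which is what makes them useful stress tests for a global optimization heuristic such as C-GRASP.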
## 11.2 C-GRASP for continuous box-constrained global optimization
Continuous GRASP, or C-GRASP, extends GRASP to the domain of continuous box-constrained global optimization. The algorithm searches the solution space over a dynamic grid with hypercube cells, each of side h, fully contained in the domain. The initial grid is formed by hypercubes with sides of size h s . As each approximate local minimum is found during local search, the grid density is increased by halving the side of the current hypercube. When the size of the grid side becomes very small, i.e., when h < h e , for some given minimum grid size h e , a restart of the search is triggered.
Figure 11.1 shows a hyper-rectangle approximated by three grids: a sparse grid on top (red grid); a medium-density grid in the middle (green grid); and a dense grid in the bottom (blue grid). In all hyper-rectangles, an optimal solution is represented as a point in the upper right-hand corner of the feasible domain. As the grid density increases, more grid points are placed in the hyper-rectangle and the upper right point on the grid becomes an increasingly better approximation of the solution.
Fig. 11.1
This figure illustrates the effect of changing the resolution of the dynamic grid. From top (red) to bottom (blue), both the grid density and the search resolution increase. As the grid density increases, the closest distance from a point on the grid to the solution represented by a point in the upper-right corner of the domain decreases and a point on the grid better approximates this solution.
The initial solution S of the algorithm, as well as the sequence of initial solutions right after each restart, are randomly generated points in the interior of the hyper-rectangle, i.e., ℓ i < S i < u i for i = 1,..., n. Each iteration of C-GRASP consists of two phases. In the construction (or diversification) phase, a greedy randomized solution is constructed, while in the local search (or intensification) phase, a local search algorithm is applied, starting from the first phase solution and producing an approximate locally optimal solution. A deterministic rule can trigger a restart after each C-GRASP iteration.
The pseudo-code in Figure 11.2 shows the template of a continuous GRASP heuristic for the minimization of f(S), with ℓ ≤ S ≤ u, where . The value f ∗ of the best solution found is initialized in line 1. The loop from line 2 to 15 is repeated until some predefined stopping criterion is satisfied. In line 3, the current iterate S is initialized (or reinitialized) with a point from the interior of the hyper-rectangle defined by the n-vectors ℓ and u, drawn randomly by procedure RANDOM-IN-BOX(ℓ, u). The grid size h is initialized in line 4. The loop from line 5 to 14 is repeated until an approximate global minimum is found, i.e., while the grid size is not too small. The current solution is saved in line 6. In lines 7 and 8, the construction and local search phases of C-GRASP are applied, always starting from the current solution S and using the grid size value h. If the current iterate S obtained by local search is better than the best solution found so far, then the best found solution and its objective function value are updated in lines 10 and 11, respectively. Otherwise, if no improvement was found by either the construction or the local search phases, then the grid size is halved in line 13. If a stopping criterion is satisfied in line 2, then the best solution found S ∗ (together with its objective function value f(S ∗)) is returned as an approximate globally optimal solution in line 16.
Fig. 11.2
Pseudo-code of the basic C-GRASP heuristic for box-constrained continuous global minimization.
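The template just described can be sketched as follows; the procedure signatures and the iteration-count stopping criterion are assumptions made for illustration:

```python
import random

def c_grasp(f, lower, upper, h_s, h_e, max_iters,
            construct, local_search, rng=random):
    """Sketch of the basic C-GRASP template for minimizing f over the
    box [lower, upper]. construct and local_search are the two phase
    procedures, each taking (f, S, h, lower, upper) and returning an
    iterate; h_s and h_e are the initial and minimum grid sizes."""
    best, f_best = None, float('inf')
    for _ in range(max_iters):                  # stopping criterion (assumed)
        # (re)start from a random point in the interior of the box
        S = [rng.uniform(l, u) for l, u in zip(lower, upper)]
        h = h_s                                 # initial (sparse) grid
        while h >= h_e:                         # until the grid gets too fine
            f_saved = f(S)                      # save the current solution value
            S = construct(f, S, h, lower, upper)
            S = local_search(f, S, h, lower, upper)
            if f(S) < f_best:                   # new best found solution
                best, f_best = list(S), f(S)
            if f(S) >= f_saved:                 # no improvement: densify grid
                h = h / 2
    return best, f_best
```

The restart in the outer loop plays the diversification role, while grid halving intensifies the search around the current iterate.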
## 11.3 C-GRASP construction phase
The construction phase of C-GRASP mimics the construction phase of GRASP. The main difference is that while the GRASP construction starts from scratch, the construction in C-GRASP starts from a given initial solution S.
At each of its iterations, the construction modifies one of the n components of S. It does so by performing a discrete line search in each yet unmodified canonical basis direction to build a restricted candidate list (RCL) of canonical components. A discrete line search is an approximate search that evaluates the objective function for a discrete set of points, all laying on a line, defined by a given canonical basis direction, that passes through the current iterate. The line search returns the point with the best objective function value. The restriction for the RCL is by value and is based on a parameter α. A component is selected at random from the RCL, its value is set to the value found in the discrete line search, and its index is removed from further consideration. This is repeated until all components are examined and possibly modified.
The pseudo-code in Figure 11.3 summarizes the steps of the construction procedure of C-GRASP. In line 1, the set of yet unconsidered indices of search directions (which corresponds to the set of all still unfixed components) is initialized to correspond to all directions. In line 2, the RCL parameter α is assigned to a random real value in the interval [0, 1]. Each iteration of the loop from lines 3 to 21 potentially modifies the value of one component of S. In lines 4 and 5, the best and worst values that will be obtained by line search over all possible directions are initialized. The for loop in lines 6 to 12 evaluates the objective function, for all still unfixed components of the solution being constructed. Since the line search is always performed along one of the directions of the canonical basis, it can modify at most one component of the solution. Line 7 invokes DISCRETE-LINE-SEARCH and returns the potentially modified component S i ∗ that minimizes f(S) along the canonical direction e i . In line 8, the current iterate is tentatively modified with its i-th component taking on the value S i ∗. The tentative solution is evaluated in line 9 and the lower and upper bounds on the solutions obtained by line search are updated in lines 10 and 11, if necessary. These values, along with α and the objective function values produced by the line searches, are used in lines 13 to 17 to set up the RCL. In line 18, an index j is selected at random from the RCL and the j-th component of S is set, in line 19, to the value S j ∗ found in the line search corresponding to the j-th canonical basis direction. In line 20, index j is removed from the set of unfixed components. Finally, in line 22 the constructed solution S and its objective function value f(S) are returned.
Fig. 11.3
Pseudo-code of the C-GRASP construction phase.
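A sketch of the construction procedure follows. The inline line search simply sweeps the grid from the lower to the upper bound, a simplification of the discrete line search described in Section 11.4, and all names are assumptions:

```python
import random

def c_grasp_construction(f, S, h, lower, upper, rng=random):
    """Sketch of the C-GRASP construction phase: one line search per
    unfixed canonical direction, an RCL restricted by value with a
    random alpha, and a random choice of which component to fix."""
    n = len(S)
    S = list(S)
    unfixed = set(range(n))
    alpha = rng.random()                  # RCL parameter drawn in [0, 1]
    while unfixed:
        results = {}
        for i in unfixed:
            # line search along e_i (simplified: sweep the whole grid)
            best_x, best_f = S[i], f(S)
            trial = list(S)
            x = lower[i]
            while x <= upper[i]:
                trial[i] = x
                if f(trial) < best_f:
                    best_x, best_f = x, f(trial)
                x += h
            results[i] = (best_x, best_f)
        g_min = min(g for _, g in results.values())
        g_max = max(g for _, g in results.values())
        # value-based RCL over the candidate directions
        rcl = [i for i, (_, g) in results.items()
               if g <= g_min + alpha * (g_max - g_min)]
        j = rng.choice(rcl)               # pick a direction at random
        S[j] = results[j][0]              # fix its component
        unfixed.remove(j)
    return S
```

Each pass through the while loop fixes one component, so the procedure performs at most n(n + 1)∕2 line searches per construction.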
The randomized greedy procedure CONTINUOUS-RANDOMIZED-GREEDY of Figure 11.3 can be made more efficient by noting that, right after line 18, the values of S j and S j ∗ may be identical. In this case, there will be no change, or movement, of solution S with the assignment made in line 19. Consequently, in the next iteration of the while loop from line 3 to line 21, the values returned by the line search procedure DISCRETE-LINE-SEARCH in line 7 will be identical to those produced in the current iteration. The pair (S i ∗, g i ) computed in the current iteration can therefore be reused in the next iteration, where there will be no need to compute lines 7 to 9.
## 11.4 Approximate discrete line search
At each iteration of algorithm CONTINUOUS-RANDOMIZED-GREEDY in Figure 11.3, the approximate discrete line search DISCRETE-LINE-SEARCH procedure is applied several times along different directions (line 7 of the pseudo-code of Figure 11.3). This line search evaluates a discrete set of points on the line determined by the starting solution S and one of the canonical search directions e i , i = 1,..., n. The set of visited points depends on the current iterate S, the grid size h, and the upper and lower bounds defined by the box constraints.
Figures 11.4 and 11.5 show two iterations of the randomized construction procedure, where the discrete line search is applied to a two-dimensional function. In the first iteration (shown in Figure 11.4), the line search is applied in two directions, each defined by the current iterate (or starting solution) and a canonical basis direction (e 1 = (1, 0) or e 2 = (0, 1)). Note that the points in which the function is evaluated are determined by the initial solution, by the grid size h, and by the upper and lower bounds, as well as by the canonical basis directions. Once a solution is chosen in one of the line searches, this point becomes the new initial solution and another line search is performed (see Figure 11.5).
Fig. 11.4
First iteration of a bi-dimensional randomized greedy construction: discrete line searches start from a solution represented by the blue point and are performed in both directions to populate the RCL with solutions represented by the green points. Each line search starts at the blue point (initial solution) and evaluates the blue point and the black points determined by the initial solution, each search direction, the grid size h, and the upper and lower bounds. Suppose the green point furthest to the right is selected at random from the RCL. It will be represented as the blue point in Figure 11.5 and will act as the new initial solution for the discrete line search of the second iteration of the bi-dimensional randomized greedy construction.
Fig. 11.5
Second (last) iteration of a bi-dimensional randomized greedy construction: the blue point in this figure is the green point chosen at random from the RCL in Figure 11.4. It is the starting point for the discrete line search in the last direction of the randomized construction. As for the line search of Figure 11.4, the blue point and black points are evaluated. The green point is the best solution among the evaluated points and is the final solution produced by the randomized greedy construction procedure.
Figure 11.6 shows the pseudo-code of algorithm DISCRETE-LINE-SEARCH, which performs a discrete line search for minimization. Lines 1 and 2 initialize, respectively, the best objective function value f ∗ and the distance Δ to the next point to be visited. The loop in lines 3 to 9 performs the search from the initial solution S along the canonical direction e i while the upper bound u i is not violated, using h as the step size. If the test in line 4 detects that a newly visited point improves the best solution along this direction, then S ∗ and its cost f ∗ are updated in lines 5 and 6, respectively. The step size is updated in line 8 and a new iteration begins. Lines 10 to 17 perform the same search from the initial solution S along the opposite direction, while the lower bound ℓ i is not violated, once again using h as the step size. Line 18 returns S i ∗, i.e., the i-th component of the best solution found S ∗.
Fig. 11.6
Pseudo-code of the approximate discrete line search algorithm.
Example of approximate discrete line search
Suppose we want to minimize , with 0 ≤ x 1 ≤ 1 and 0 ≤ x 2 ≤ 2. We use C-GRASP and consider that at some iteration we wish to perform an approximate discrete line search starting from (0.25, 0.25) along the canonical direction e 1 = (1, 0), with the grid parameter h = 0.15. The points to be evaluated along the line search are all defined by (0.25, 0.25) + k ⋅ h ⋅ e 1 = (0.25, 0.25) + k ⋅ h ⋅ (1, 0), for k = −1, 0, 1, 2, 3, 4, 5, i.e., (0.10, 0.25), (0.25, 0.25), (0.40, 0.25), (0.55, 0.25), (0.70, 0.25), (0.85, 0.25), and (1.00, 0.25). Note that any other point along this direction will violate the constraint 0 ≤ x 1 ≤ 1. Of the seven trial points, (x 1 ∗, x 2 ∗) = (0.10, 0.25) is the one minimizing . Then, algorithm DISCRETE-LINE-SEARCH will return 0.10 as the best value for the first component. ■
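As an illustration, the line search of Figure 11.6 can be sketched in Python as follows. The names and the representation of the bounds are ours; unlike the pseudo-code, this sketch also returns the objective value attained at the best coordinate.

```python
def discrete_line_search(f, s, i, h, lower, upper):
    """Approximate discrete line search along the canonical direction e_i.

    Evaluates f at s + k*h*e_i for every integer k that keeps the i-th
    coordinate inside [lower[i], upper[i]], and returns the best value
    found for that coordinate together with its objective value.
    """
    best_x, best_f = s[i], f(s)
    trial = list(s)
    for direction in (+1, -1):
        # Walk away from s in steps of size h until a bound is violated.
        k = 1
        while lower[i] <= s[i] + direction * k * h <= upper[i]:
            trial[i] = s[i] + direction * k * h
            fx = f(trial)
            if fx < best_f:
                best_x, best_f = trial[i], fx
            k += 1
    return best_x, best_f
```

With a stand-in objective such as f(x) = (x 1 − 0.1)² + (x 2 − 0.25)² (the function of the example above is not reproduced here), a search from (0.25, 0.25) with h = 0.15 visits exactly the seven points listed in the example and returns 0.10 for the first coordinate.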
## 11.5 C-GRASP local search
The local search procedure CONTINUOUS-LOCAL-SEARCH is called from line 8 of the pseudo-code of algorithm CONTINUOUS-GRASP in Figure 11.2 as an attempt to improve the constructed solution S with a search on the largest grid of size h that fits in the domain F and for which one of its grid points coincides with the current iterate S.
The local search described in this section makes no use of derivatives. Though derivatives can be easily computed for many functions, there are others for which they cannot be computed or are computationally expensive to compute. The approach described in this section can be seen as approximating the role of the gradient of the objective function f.
From a given input point S ∈ F, the local improvement algorithm generates a neighborhood and determines at which points in the neighborhood, if any, the objective function improves. If an improving point is found, then it is made the current point and the local search continues from this new solution.
Let S be the current solution and h be the current grid discretization parameter. Define
F h (S) = {T ∈ F: T = S + h ⋅ (k 1,..., k n ), with k i ∈ ℤ, for i = 1,..., n}
to be a lattice of points in F whose coordinates are integer steps (of size h) away from those of S (see Figure 11.7(a)). Let
B h (S) = {S + h ⋅ (T − S)∕∥T − S∥: T ∈ F h (S)∖{S}}
be the projection of the points in F h (S)∖{S} onto the ball of radius h centered at S (see Figure 11.7(a)). The h-neighborhood of solution S is defined as the set of points in B h (S). The size of this neighborhood is bounded from above by ∏ i = 1 n ⌈(u i − ℓ i )∕h + 1⌉. If all of these points are examined and no improving solution is found, then the current solution S ∗ is called an h-local minimum. Since the number of points in B h (S) can be huge, it may be feasible to evaluate the objective function on only a subset of them. If a subset of these points is examined and no improving point is found, then the current solution S ∗ is considered an approximate h-local minimum.
Fig. 11.7
One step of procedure CONTINUOUS-LOCAL-SEARCH: (a) All gridpoints in F are projected onto the ball of radius h centered at the current solution S; (b) A subset of the projected points on the ball is sampled; (c) Best sampled point is determined (in green); and (d) Solution S is moved to best point, the grid is shifted to coincide with the new solution, and new ball of radius h is centered at the new current solution.
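The projection step above can be illustrated with a short sketch: a random grid point T of F h (S) other than S is drawn and projected onto the sphere of radius h centered at S. All names are ours; as in the pseudo-code, feasibility of the projected point is left to the caller.

```python
import math
import random

def sample_h_neighbor(s, h, lower, upper, rng=random):
    """Sample one point of B_h(S): draw a grid point T of F_h(S) other
    than S and project it onto the sphere of radius h centered at S.
    The projected point may fall outside the box; feasibility is
    checked by the caller, as in the local search pseudo-code."""
    n = len(s)
    while True:
        # Random grid point: each coordinate is an integer number of
        # steps of size h away from s[i], staying inside [l_i, u_i].
        t = [s[i] + rng.randint(math.ceil((lower[i] - s[i]) / h),
                                math.floor((upper[i] - s[i]) / h)) * h
             for i in range(n)]
        if t != s:
            break
    norm = math.sqrt(sum((ti - si) ** 2 for ti, si in zip(t, s)))
    return [si + h * (ti - si) / norm for si, ti in zip(s, t)]
```

Every point returned lies at distance exactly h from S, which is what makes the h-neighborhood shrink as the grid is refined.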
The pseudo-code of the algorithm that performs the local search phase is shown in Figure 11.8. It takes as input the current solution S ∈ F, the grid size h, and the maximum number of points to be sampled in each neighborhood B h (S). The current best solution S ∗ is initialized to S in line 1. The cost of the best known solution is set in line 2. The number of points sampled in the neighborhood is initialized in line 3. Starting from S ∗, the loop in lines 4 to 12 investigates at most k max neighbors of the current solution. A new neighbor S ∈ B h (S ∗) is selected in line 5 and the number of solutions sampled in this neighborhood is incremented by one in line 6. If line 7 detects that the newly selected neighbor S is feasible and better than S ∗, then a move is performed: solution S ∗ is set to S in line 8, the cost of the best solution is updated in line 9, and the process restarts with S ∗ as the new best solution after the counter k is reset to 0 in line 10. Local improvement terminates if an approximate h-local minimum solution S ∗ is found. At that point, S ∗ is returned in line 13 as the solution produced by local search.
Fig. 11.8
Pseudo-code of the C-GRASP local search phase.
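The sampling loop of Figure 11.8 can be sketched as below. The neighbor generator here (a random grid offset projected onto the ball of radius h) is a simplified stand-in for B h (S), and all names are ours.

```python
import math
import random

def continuous_local_search(f, s, h, lower, upper, k_max, rng=random):
    """Sketch of the C-GRASP local search: sample up to k_max points of
    an approximate h-neighborhood of the current best solution, and
    reset the counter whenever an improving feasible neighbor is found
    (an approximate h-local minimum is returned)."""
    n = len(s)
    s_best, f_best, k = list(s), f(s), 0
    while k < k_max:
        # Simplified stand-in for B_h: a random grid offset projected
        # onto the ball of radius h centered at the current solution.
        offset = [rng.randint(-2, 2) for _ in range(n)]
        norm = math.sqrt(sum(o * o for o in offset))
        if norm == 0:
            continue
        cand = [x + h * o / norm for x, o in zip(s_best, offset)]
        k += 1
        if all(l <= x <= u for x, l, u in zip(cand, lower, upper)):
            fc = f(cand)
            if fc < f_best:
                # Improving move: accept and restart the sample count.
                s_best, f_best, k = cand, fc, 0
    return s_best
```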
## 11.6 Computing global optima with C-GRASP
We conclude this chapter by showing the results of running an implementation of C-GRASP on the five functions listed in Section 11.1 of this chapter: Ackley (for n = 10), Bohachevsky, Schwefel (for n = 10), Shekel, and Shubert. These functions have global optimal objective function values of, respectively, 0, 0, 0, −10.5364, and −186.7309. Since all global optima are known, the inner loop of the algorithm is made to stop either when the grid size h ≤ h e (as in the case in which the global optimum is unknown) or when the gap satisfies

| f(S) − f(S ∗) | ≤ ε ⋅ max{1, | f(S ∗) | },

(11.1)

where S is the current best solution found by the heuristic, S ∗ is the known global minimum solution, and ε = 0.001.
Each run used the same parameter values: initial grid size h s = 0.5, final grid size h e = 0.0001, and local search maximum sampling parameter k max = 100.
For all runs, the algorithm stopped because of stopping rule (11.1), before the size of the grid h became less than 0.0001. Therefore, only a single outer iteration was carried out. Figures 11.9 to 11.14 show the convergence of C-GRASP on the five test functions. Convergence is shown both as a function of the number of function evaluations and of the computation time (in seconds).
Fig. 11.9
Best objective function value as a function of the number of function evaluations for C-GRASP runs on three functions: Ackley, Bohachevsky, and Schwefel. All three functions have global optima of value zero.
Figures 11.9 and 11.10 show, respectively, the convergence of the algorithm with respect to the number of function evaluations and the computation time for the three functions whose global minima are zero, i.e., for Ackley, Bohachevsky, and Schwefel. The true optimum for Ackley is S ∗ = (0,..., 0) with f(S ∗) = 0, while C-GRASP stopped with

and f(S) = 0.000708. The true optimum for Bohachevsky is S ∗ = (0, 0) with f(S ∗) = 0, while C-GRASP stopped with S = (−0.004350, −0.003859) and f(S) = 0.000771. The true optimum for Schwefel is S ∗ = (420.9687,..., 420.9687) with f(S ∗) = 0, while C-GRASP stopped with

and f(S) = 0.000321. Of those three functions, Ackley was the most difficult to optimize, requiring 1,681,424 function evaluations and 20.9 seconds of running time, while Bohachevsky required only 38,415 evaluations and 0.22 seconds. The final grid sizes for Ackley, Bohachevsky, and Schwefel were, respectively, 0.000977, 0.015625, and 0.062500.
Fig. 11.10
Best objective function value as a function of the computation time (in seconds) for C-GRASP runs on three functions: Ackley, Bohachevsky, and Schwefel. All three functions have global optima of value zero.
Figures 11.11 and 11.12 illustrate, respectively, the convergence of the algorithm with respect to the number of function evaluations and the computation time for the function Shekel with m = 10. The true optimum for Shekel is S ∗ = (4, 4, 4, 4) with f(S ∗) = −10.5364, while C-GRASP stopped with

and f(S) = −10.529192. C-GRASP required 6,565 function evaluations and 89.6 seconds to find this solution. The final grid size was 0.015625.
Fig. 11.11
Best objective function value f(S) as a function of the number of function evaluations for one C-GRASP run on function Shekel whose global optimum objective function value is f(S ∗) = −10.5364.
Fig. 11.12
Best shifted objective function value | f(S) − f(S ∗) | as a function of the computation time (in seconds) for one C-GRASP run on function Shekel whose global optimum objective function value is f(S ∗) = −10.5364. Optimization ends when | f(S) − f(S ∗) | < ε ⋅ | f(S ∗) | = 0.0105364.
Finally, Figures 11.13 and 11.14 show, respectively, the convergence of the algorithm with respect to the number of function evaluations and computation time for the function Shubert. The function Shubert has many global minima, all with f(S ∗) = −186.7309. C-GRASP stopped with S = (4.859558, 5.483684) and f(S) = −186.724170. C-GRASP required 5,550 function evaluations and 0.06 seconds to find this solution. The final grid size was 0.015625.
Fig. 11.13
Best objective function value f(S) as a function of the number of function evaluations for one C-GRASP run on function Shubert whose global optimum objective function value is f(S ∗) = −186.7309.
Fig. 11.14
Best shifted objective function value | f(S) − f(S ∗) | as a function of the computation time (in seconds) for one C-GRASP run on function Shubert whose global optimum objective function value is f(S ∗) = −186.7309. Optimization ends when | f(S) − f(S ∗) | < ε ⋅ | f(S ∗) | = 0.1867309.
In each of the five runs of C-GRASP, procedure RANDOM-IN-BOX(ℓ, u) was called only once, since the grid size h never became smaller than or equal to h e = 0.0001.
## 11.7 Bibliographical notes
C-GRASP was first introduced by Hirsch et al. (2007b) and in the Ph.D. thesis of Hirsch (2006). Hirsch et al. (2010) made several observations to speed up the computations of C-GRASP, including the reuse of line search results. The local search algorithm GENCAN (Birgin and Martínez, 2002), an active-set method for bound-constrained local minimization, was used by Birgin et al. (2010) to play the role of local search in a C-GRASP for minimization of functions for which gradients can be computed. Martin et al. (2013) proposed improvements to C-GRASP, including the use of direct searches in the local search phase. Araújo et al. (2015) presented several direct search procedures that are used in a C-GRASP heuristic. They named their algorithm DC-GRASP. Silva et al. (2013a) described libcgrpp, a GNU-style dynamic shared Python/C library for quick implementation of C-GRASP heuristics.
C-GRASP has been applied to a range of problems. These include sensor registration in a sensor network (Hirsch et al., 2006), finding correspondence of projected 3D points and lines (Hirsch et al., 2011), solving systems of nonlinear equations (Hirsch et al., 2009), determining the relationship between drug combinations and adverse reactions (Hirsch et al., 2007a), economic dispatch of thermal units (Vianna Neto et al., 2010), robot path planning (Macharet et al., 2011), thermodynamics (Guedes et al., 2011), target tracking (Hirsch et al., 2012), and finding the largest ellipse, with prescribed eccentricity, inscribed in a nonconvex polygon (da Silva et al., 2012).
The global minima for the five test functions used in Section 11.6 were computed with the Python/C library of Silva et al. (2013a). Andrade et al. (2014) presented a parallel implementation of C-GRASP construction using a GPU. Speedups of up to 1.56 were measured, even though construction only accounts for 10 to 40% of the execution time in C-GRASP.
References
L.M.M.S. Andrade, R.B. Xavier, L.A.F. Cabral, and A.A. Formiga. Parallel construction for continuous GRASP optimization on GPUs. In Anais do XLVI Simpósio Brasileiro de Pesquisa Operacional, pages 2393–2404, Salvador, 2014. URL http://bit.ly/1SS3lte. Last visited on April 16, 2016.
T.M.U. Araújo, L.M.M.S. Andrade, C. Magno, L.A.F. Cabral, R.Q. Nascimento, and C.N. Meneses. DC-GRASP: Directing the search on continuous-GRASP. Journal of Heuristics, 2015. doi: 10.1007/s10732-014-9278-6. Published online on 6 January 2015.
E.G. Birgin and J.M. Martínez. Large-scale active-set box-constrained optimization method with spectral projected gradients. Computational Optimization and Applications, 23:101–125, 2002.
E.G. Birgin, E.M. Gozzi, M.G.C. Resende, and R.M.A. Silva. Continuous GRASP with a local active-set method for bound-constrained global optimization. Journal of Global Optimization, 48:289–310, 2010.
V.B. da Silva, M. Ritt, J.B. da Paz Carvalho, M.J. Brusso, and J.T. da Silva. Identificação da maior elipse com excentricidade prescrita inscrita em um polígono não convexo através do Continuous GRASP. Revista Brasileira de Computação Aplicada, 4:61–70, 2012.
A.L. Guedes, F.D. Moura Neto, and G.M. Platt. Double Azeotropy: Calculations with Newton-like methods and continuous GRASP (C-GRASP). International Journal of Mathematical Modelling and Numerical Optimisation, 2:387–404, 2011.
M.J. Hirsch. GRASP-based heuristics for continuous global optimization problems. PhD thesis, Department of Industrial and Systems Engineering, University of Florida, Gainesville, 2006.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Sensor registration in a sensor network by continuous GRASP. In IEEE Conference on Military Communications, pages 501–506, Washington, DC, 2006.
M.J. Hirsch, C.N. Meneses, P.M. Pardalos, M.A. Ragle, and M.G.C. Resende. A continuous GRASP to determine the relationship between drugs and adverse reactions. In O. Seref, O. Erhun Kundakcioglu, and P.M. Pardalos, editors, Data mining, systems analysis and optimization in biomedicine, volume 953 of AIP Conference Proceedings, pages 106–121. Springer, 2007a.
M.J. Hirsch, C.N. Meneses, P.M. Pardalos, and M.G.C. Resende. Global optimization by continuous GRASP. Optimization Letters, 1:201–212, 2007b.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Solving systems of nonlinear equations with continuous GRASP. Nonlinear Analysis: Real World Applications, 10:2000–2006, 2009.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Speeding up continuous GRASP. European Journal of Operational Research, 205:507–521, 2010.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Correspondence of projected 3D points and lines using a continuous GRASP. International Transactions in Operational Research, 18:493–511, 2011.
M.J. Hirsch, H. Ortiz-Pena, and C. Eck. Cooperative tracking of multiple targets by a team of autonomous UAVs. International Journal of Operations Research and Information Systems, 3:53–73, 2012.
D.G. Macharet, A.A. Neto, V.F. da Camara Neto, and M.F.M. Campos. Nonholonomic path planning optimization for Dubins' vehicles. In 2011 IEEE International Conference on Robotics and Automation, pages 4208–4213, Shanghai, 2011. IEEE.
B. Martin, X. Gandibleux, and L. Granvilliers. Continuous-GRASP revisited. In P. Siarry, editor, Heuristics: Theory and applications, chapter 1. Nova Science Publishers, Hauppauge, 2013.
R.M.A. Silva, M.G.C. Resende, P.M. Pardalos, and M.J. Hirsch. A Python/C library for bound-constrained global optimization with continuous GRASP. Optimization Letters, 7:967–984, 2013a.
J.X. Vianna Neto, D.L.A. Bernert, and L.S. Coelho. Continuous GRASP algorithm applied to economic dispatch problem of thermal units. In Proceedings of the 13th Brazilian Congress of Thermal Sciences and Engineering, Uberlandia, 2010.
# 12. Case studies
In this final chapter of the book, we consider four case studies to illustrate the application and implementation of GRASP heuristics. These heuristics are for 2-path network design, graph planarization, unsplittable multicommodity flows, and maximum cut in a graph. The key point here is not to show numerical results or to compare these GRASP heuristics with other approaches, but simply to show how to customize the GRASP metaheuristic for each particular problem.
## 12.1 2-path network design problem
Let G = (V, U) be a connected undirected graph, where V is the set of nodes and U is the set of edges. A k-path between nodes s, t ∈ V is a sequence of at most k edges connecting s and t. Given a non-negative weight function w: U → R + associated with the edges of G and a set D of pairs of origin-destination nodes, the 2-path network design problem (2PNDP) consists in finding a minimum-weight subset of edges U′ ⊆ U containing a 2-path between every origin-destination pair.
### 12.1.1 GRASP with path-relinking for 2-path network design
In the remainder of this section, we customize a parallel GRASP heuristic for the 2-path network design problem. We describe the construction and local search procedures, as well as a path-relinking intensification strategy.
#### 12.1.1.1 Solution construction
The construction of a new solution begins with the initialization of modified edge weights with their original weights. Each iteration of the construction phase starts by the random selection of an origin-destination pair still in D. A shortest 2-path between the extremities of this pair is computed, using the modified edge weights. The weights of the edges in this 2-path are set to zero until the end of the construction procedure, the origin-destination pair is removed from D, and a new iteration begins. The construction phase stops when 2-paths have been computed for all origin-destination pairs.
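The construction just described can be sketched as follows. Since a 2-path from s to t is either the edge (s, t) itself or two edges through an intermediate node, the shortest 2-path can be found by direct enumeration. The names and the edge-weight representation (a dict keyed by unordered vertex pairs) are ours.

```python
import random

def shortest_2path(w, s, t, nodes):
    """Cheapest path with at most two edges from s to t.
    w maps frozenset({a, b}) -> edge weight; missing edges cost infinity."""
    inf = float("inf")
    best, path = w.get(frozenset((s, t)), inf), [(s, t)]
    for v in nodes:
        if v in (s, t):
            continue
        c = w.get(frozenset((s, v)), inf) + w.get(frozenset((v, t)), inf)
        if c < best:
            best, path = c, [(s, v), (v, t)]
    return best, path

def construct_2pndp(nodes, weights, pairs, rng=random):
    """Randomized construction: origin-destination pairs are served in
    random order; edges already in the solution become free (weight 0)
    for the remaining pairs, encouraging edge reuse."""
    modified = dict(weights)
    solution = set()
    for s, t in rng.sample(list(pairs), len(pairs)):
        _, path = shortest_2path(modified, s, t, nodes)
        for a, b in path:
            e = frozenset((a, b))
            solution.add(e)
            modified[e] = 0.0
    return solution
```

Zeroing the weights of used edges is what makes the greedy choice adaptive: later pairs are attracted to edges already paid for.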
#### 12.1.1.2 Local search
The local search phase seeks to improve each solution built in the construction phase. Each solution can be viewed as a set of 2-paths, one for each origin-destination pair in D. To introduce diversity by driving different applications of the local search to different local optima, the origin-destination pairs are investigated at each GRASP iteration in a circular order defined by a different random permutation of their original indices.
Each 2-path in the current solution is tentatively eliminated. The weights of the edges used by other 2-paths are temporarily set to zero, while those that are not used by other 2-paths in the current solution are restored to their original values. A new shortest 2-path between the extremities of the origin-destination pair under investigation is computed, using the modified weights. If the new 2-path improves the current solution, then the current solution is modified; otherwise the previous 2-path is restored. The search stops if the current solution is not improved after a sequence of | D | iterations along which all 2-paths are investigated. Otherwise, the next 2-path in the current solution is investigated for substitution and a new iteration begins.
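The elimination-and-reconstruction step above can be sketched as follows, with the shortest 2-path again found by direct enumeration. Names and data representation are ours; for brevity, this sketch iterates until no 2-path can be improved rather than counting | D | consecutive failures.

```python
import random

def local_search_2pndp(nodes, weights, paths, rng=random):
    """Sketch of the 2PNDP local search. `paths` maps each
    origin-destination pair to its current 2-path (a list of edges).
    Each pair's 2-path is tentatively replaced by the cheapest 2-path
    computed with the edges used by the other pairs made free."""
    inf = float("inf")

    def cost(path, w):
        return sum(w.get(frozenset(e), inf) for e in path)

    improved = True
    while improved:
        improved = False
        order = list(paths)
        rng.shuffle(order)  # a different permutation at each call
        for s, t in order:
            # Edges used by the other 2-paths are free; the rest keep
            # their original weights.
            used = {frozenset(e) for p, pth in paths.items()
                    if p != (s, t) for e in pth}
            w = {e: (0.0 if e in used else c) for e, c in weights.items()}
            best, best_path = w.get(frozenset((s, t)), inf), [(s, t)]
            for v in nodes:
                if v in (s, t):
                    continue
                c = w.get(frozenset((s, v)), inf) + w.get(frozenset((v, t)), inf)
                if c < best:
                    best, best_path = c, [(s, v), (v, t)]
            if best < cost(paths[(s, t)], w):
                paths[(s, t)] = best_path
                improved = True
    return paths
```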
#### 12.1.1.3 Path-relinking
Path-relinking is applied to pairs of solutions composed of an initial solution, chosen at random from a pool formed by a limited number of previously found elite solutions, and the solution produced by local search, which we call the guiding solution. The pool is initially empty. Each locally optimal solution is considered a candidate to be inserted into the pool if it is different from every other solution currently in the pool. If the pool is full and the candidate is better than the worst elite solution, then the candidate replaces the worst elite solution. If the pool is not full, then the candidate is simply inserted.
The algorithm starts by determining all origin-destination pairs whose associated 2-paths are different in the initial and guiding solutions. These computations amount to determining a set of moves that should be applied to the initial solution to reach the guiding solution. Each move is characterized by a pair of 2-paths, one to be inserted and the other to be eliminated from the current solution. The best solution is initialized with the initial solution. At each path-relinking iteration, the best yet unselected move is applied to the current solution, the incumbent solution is updated, and the selected move is removed from the set of candidate moves, until the guiding solution is reached. The incumbent is returned as the best solution found by path-relinking and is inserted into the pool if it satisfies the membership conditions.
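The relinking loop above can be sketched as follows, where a move switches one pair's 2-path from the one in the current solution to the one in the guiding solution, and the cost of a solution is the total weight of the distinct edges it uses. Names are ours.

```python
def relink_2pndp(weights, initial, guiding):
    """Sketch of path-relinking for 2PNDP. `initial` and `guiding` map
    each origin-destination pair to its 2-path (a list of edges). At
    each step, the move (switching one pair's 2-path to the guiding
    one) yielding the cheapest intermediate solution is applied."""
    inf = float("inf")

    def cost(paths):
        edges = {frozenset(e) for path in paths.values() for e in path}
        return sum(weights.get(e, inf) for e in edges)

    current = dict(initial)
    best, best_cost = dict(current), cost(current)
    # Moves: pairs whose 2-paths differ between the two solutions.
    candidates = {p for p in initial if initial[p] != guiding[p]}
    while candidates:
        move = min(candidates, key=lambda p: cost({**current, p: guiding[p]}))
        current[move] = guiding[move]
        candidates.remove(move)
        if cost(current) < best_cost:
            best, best_cost = dict(current), cost(current)
    return best, best_cost
```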
#### 12.1.1.4 Parallel GRASP implementation and numerical results
Parallel implementations of metaheuristics such as GRASP are more robust than their sequential versions. We describe a parallel implementation of the sequential GRASP heuristic described in the previous sections, corresponding to a typical multiple-walk independent-thread strategy introduced earlier in this book. The iterations are evenly distributed over the processors. However, to improve load balancing, the iterations could also be distributed on demand, with faster processors performing more iterations than slower processors.
The processors perform MaxIterations∕p iterations each, where p is the number of processors and MaxIterations is the total number of iterations. Each processor has a copy of the sequential GRASP algorithm, a copy of the problem data, and its own pool of elite solutions. One of the processors acts as the master, reading and distributing the problem data, generating the seeds that are used by the pseudo-random number generators at each processor, distributing the iterations, and collecting the best solution found by each processor.
The results of the parallel GRASP algorithm are compared with those obtained by a greedy heuristic, using two samples of solution values and Student's t-test for unpaired observations. The main statistics are summarized in Table 12.1. These results show with 40% confidence level that GRASP finds better solutions than the greedy heuristic. The average value of the solutions obtained by GRASP was 2.2% smaller than that of the solutions obtained by the greedy heuristic. The dominance of GRASP is even stronger when harder or larger instances are considered. The parallel GRASP was applied to problems with up to 400 nodes, 79,800 edges, and 4,000 origin-destination pairs, while the greedy heuristic solved problems with no more than 120 nodes, 7,140 edges, and 60 origin-destination pairs.
Table 12.1
Statistics for GRASP (sample A) and the greedy heuristic (sample B).

| | Parallel GRASP (sample A) | Greedy (sample B) |
|---|---|---|
| Size | n A = 100 | n B = 30 |
| Mean | μ A = 443.73 | μ B = 453.67 |
| Standard deviation | S A = 40.64 | S B = 61.56 |
## 12.2 Graph planarization
A graph is said to be planar if it can be drawn on the plane in such a way that no two of its edges cross. Given a graph G = (V, E) with vertex set V and edge set E, the objective of graph planarization is to find a minimum cardinality subset of edges F ⊆ E such that the graph G′ = (V, E ∖ F), resulting from the removal of the edges in F from G, is planar. This problem is also known as the maximum planar subgraph problem. A maximal planar subgraph is a planar subgraph G′ = (V ′, E′) of G = (V, E), such that the addition of any edge e ∈ E ∖ E′ to G′ destroys the planarity of the subgraph. Applications of graph planarization include graph drawing and numerous layout problems. Graph planarization is known to be NP-hard.
We begin with a review of a two-phase heuristic used as part of the GRASP heuristic for graph planarization. Then, the GRASP heuristic itself is described. Finally, we describe a post-optimization algorithm to further improve the solution obtained by GRASP.
### 12.2.1 Two-phase heuristic
In this section, we review the main components of the GT two-phase heuristic for graph planarization. The first phase of this heuristic is depicted in Figure 12.1 and consists in devising a sequence Π of the set of vertices V of the input graph G. Next, the vertices of G are placed on a line according to the sequence Π. Let π(v) denote the relative position of vertex v ∈ V within vertex sequence Π. Furthermore, let e 1 = (a, b) and e 2 = (c, d) be two edges of G, such that, without loss of generality, π(a) < π(b) and π(c) < π(d). These edges are said to cross with respect to sequence Π if π(a) < π(c) < π(b) < π(d) or π(c) < π(a) < π(d) < π(b). Basically, the second phase of GT partitions the edge set E of G into subsets B, R, and P in such a way that | B ∪ R | is large (or ideally maximum) and no two edges both in B or both in R cross with respect to the sequence Π devised in the first phase.
Fig. 12.1
Pseudo-code of the first phase of the GT heuristic.
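The crossing condition defined above can be checked directly in code (a sketch; names are ours):

```python
def cross(e1, e2, pos):
    """Edges e1 = (a, b) and e2 = (c, d) cross with respect to the
    sequence encoded by pos (pos[v] = position of v in the sequence)
    iff their position intervals strictly interleave."""
    a, b = sorted(e1, key=pos.__getitem__)
    c, d = sorted(e2, key=pos.__getitem__)
    return pos[a] < pos[c] < pos[b] < pos[d] or pos[c] < pos[a] < pos[d] < pos[b]
```

Note that nested intervals (one edge's endpoints both between the other's) do not cross: only strict interleaving forces the drawn arcs to intersect.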
Let H = (E, I) be a graph where each of its vertices corresponds to an edge of the input graph G. Vertices e 1 and e 2 of H are connected by an edge if the corresponding edges of G cross with respect to sequence Π. A graph is called an overlap graph if its vertices can be placed in one-to-one correspondence with a family of intervals on a line. Two intervals are said to overlap if they cross and none is contained in the other. Two vertices of the overlap graph are connected by an edge if and only if their corresponding intervals overlap. Hence, the graph H as constructed above is the overlap graph associated with the representation of G defined by sequence Π.
The second phase of the GT two-phase heuristic consists in two-coloring a maximum number of vertices of the overlap graph H such that each of the two color classes B (blue) and R (red) forms an independent set. Equivalently, the second phase seeks a maximum induced bipartite subgraph of the overlap graph H, i.e., a bipartite subgraph having the largest number of vertices. This problem is equivalent to drawing the edges of the input graph G above or below the line where its vertices have been placed according to sequence Π. Since the decision version of the problem of finding a maximum induced bipartite subgraph of an overlap graph is NP-complete, a greedy algorithm is used in the GT heuristic to construct a maximal induced bipartite subgraph of the overlap graph. This algorithm finds a maximum independent set B of the overlap graph H = (E, I), reduces this overlap graph by removing from the vertex set E all vertices in B and from the edge set I all edges incident to vertices in B, and then finds a maximum independent set R in the resulting overlap graph. The two independent sets obtained induce a bipartite subgraph of the original overlap graph, not necessarily with a maximum number of vertices. This procedure has polynomial-time complexity, since finding a maximum independent set of an overlap graph is polynomially solvable in time O(| E |3), where | E | is the number of vertices of the overlap graph H = (E, I). The pseudo-code of the second phase of heuristic GT is given in Figure 12.2. The set B ∪ R corresponds to the edges that can be drawn without crossings.
Fig. 12.2
Pseudo-code of the second phase of the GT heuristic.
This two-phase algorithm is not guaranteed to produce an optimal (i.e., maximum) planar subgraph. Furthermore, even under a simple neighborhood definition, it does not necessarily produce a locally optimal solution. The first phase of GT is based on an adaptive greedy algorithm to produce a vertex sequence. This vertex sequence appears to affect the size of the planar subgraph found in the second phase of GT. However, it is not clear that the sequence produced by the adaptive greedy algorithm is the best. To produce other, possibly better, sequences, randomization and local search can be introduced in the adaptive greedy algorithm. We next explore these ideas and describe a GRASP heuristic for graph planarization that finds a locally optimal planar subgraph, often improving on the solution found by GT.
### 12.2.2 GRASP for graph planarization
The two-phase heuristic presented in the previous section uses an adaptive greedy algorithm to produce the vertex sequencing of its first phase. In the following, we show an alternative to the adaptive greedy algorithm: a GRASP for the first phase vertex sequencing problem. The construction phase of this GRASP heuristic is described in the pseudo-code of Figure 12.3.
Fig. 12.3
Pseudo-code of the GRASP construction phase (vertex sequencing).
The procedure takes as input the graph G = (V, E) to be planarized, the restricted candidate list (RCL) parameter 0 ≤ α ≤ 1, and a seed for the pseudo-random number generator. Let deg G (v) be the degree of vertex v with respect to G, d = min v ∈ V {deg G (v)}, and d̄ = max v ∈ V {deg G (v)}. The first vertex in the sequence is determined in lines 1 to 4, where all vertices having degree in the range [d, α ⋅ (d̄ − d) + d] are placed in the RCL and a single vertex is selected at random from the list. The working vertex set and the graph G 1 are defined in lines 5 and 6.

The loop from lines 7 to 18 determines the sequence of the remaining | V | − 1 vertices. To assign the k-th vertex (iteration k of the loop), two cases can occur. Define G k to be the graph induced on G by V ∖{v 1, v 2,..., v k } and let A k be the set of vertices of G k−1 adjacent to v k−1 in G. If A k ≠ ∅, the RCL is made up of all vertices in A k having degree in the range [d, α ⋅ (d̄ − d) + d] in G k . Otherwise, if A k = ∅, the RCL is made up of all unselected vertices having degree in that range in G k . In line 15, the k-th vertex in the sequence is determined by selecting a vertex, at random, from the RCL. The working vertex set and the working graph are updated in lines 16 and 17. The vertex sequence Π = (v 1,..., v | V|) is returned in line 19.
The first phase of the GT heuristic seeks a sequence of the vertices, followed by a second phase minimizing the number of edges that need to be removed to eliminate all edge crossings with respect to the first phase sequence. One possible strategy (not taken in GT) is to attempt to reduce the number of crossing edges by locally searching a neighborhood of the current vertex sequence prior to the second phase. The local search procedure makes use of a neighborhood N(Π) of the vertex sequence Π that is formed by all vertex sequences Π′ differing from Π in exactly two positions, i.e., those obtained from Π by exchanging the positions of two vertices u and v, so that π′(u) = π(v), π′(v) = π(u), and π′(w) = π(w) for all w ∈ V ∖{u, v}.
Let χ(Π) be the number of pairs of edges that cross if the vertex sequence Π is adopted. The pseudo-code in Figure 12.4 describes the local search procedure used in the GRASP heuristic, based on a slightly more restricted neighborhood that only considers the exchange of consecutive vertices.
Fig. 12.4
Pseudo-code of the GRASP local search phase for graph planarization.
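The crossing count χ(Π) and the restricted local search over swaps of consecutive vertices can be sketched as below. One simplifying assumption: crossings are counted as if every edge were drawn as an arc on a single side of the vertex line (the GT heuristic distributes edges over two half-planes), so the sketch illustrates the neighborhood mechanics rather than the exact objective.

```python
def crossings(sequence, edges):
    """Number of edge pairs that cross when the vertices are placed on a
    line in the given order and all edges are drawn as arcs on one side:
    two edges cross iff their endpoints interleave in the sequence."""
    pos = {v: i for i, v in enumerate(sequence)}
    arcs = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    count = 0
    for i in range(len(arcs)):
        a, b = arcs[i]
        for c, d in arcs[i + 1:]:
            if a < c < b < d or c < a < d < b:
                count += 1
    return count

def local_search(sequence, edges):
    """First-improving local search over swaps of consecutive vertices."""
    seq = list(sequence)
    improved = True
    while improved:
        improved = False
        best = crossings(seq, edges)
        for i in range(len(seq) - 1):
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            if crossings(seq, edges) < best:
                improved = True
                break               # accept the first improving swap
            seq[i], seq[i + 1] = seq[i + 1], seq[i]   # undo the swap
    return seq
```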
Putting together the randomized vertex sequencing procedure displayed in Figure 12.3, the local search algorithm displayed in Figure 12.4, and the second phase of the GT heuristic provided in Figure 12.2, we obtain a GRASP for graph planarization, whose pseudo-code is given in Figure 12.5.
Fig. 12.5
Pseudo-code of the GRASP heuristic for graph planarization.
The number of edges in the maximal planar subgraph corresponding to the best solution found is initialized in line 1. The iterative GRASP procedure in lines 2 to 11 is repeated MaxIter times. In each iteration, a greedy randomized solution (vertex sequence Π) is constructed in line 3. In line 4, the local search phase attempts to produce a vertex sequence with fewer crossings of pairs of edges than the one generated in line 3. The vertex sequence Π is given as input to the second phase heuristic of GT in line 5 to produce a planar subgraph of G. If the new solution increases the number of edges in the planar subgraph, then the best solution found is updated in lines 7 to 9. The best solution found is returned in line 12.
### 12.2.3 Enlarging the planar subgraph
As already observed, there is no guarantee that the planar subgraph produced by SecondPhaseGT is optimal. Three edge sets are output: B (the blue edges), R (the red edges), and P (the remaining edges, which we refer to as the pale edges). By construction, B, R, and P are such that no red or pale edge can be colored blue. Likewise, pale edges cannot be colored red. However, if there exists a pale edge p such that all blue edges that cross with p (let B p be the set of such blue edges) do not cross with any red edge, then all blue edges in B p can be colored red and p can be colored blue. Consequently, this reassignment of color classes increases the size of the planar subgraph by one edge.
Figure 12.6 shows the pseudo-code of procedure EnlargePlanarGraph, which seeks pale and blue edges allowing the above color class reassignment and enlarges the planar subgraph whenever such edges are encountered. The pale edges are scanned in the loop in lines 1 to 17. The set B p of blue edges crossing the pale edge p is initialized in line 2 and p is temporarily made a candidate to be recolored by setting the variable enlarge to .TRUE. in line 3. The loop in lines 4 to 11 scans the blue edges to construct the set B p for each pale edge p. Any blue edge that crosses with the pale edge p is added to the candidate set B p in line 6. Red edges are scanned in the loop in lines 7 to 9. If a blue edge in B p crosses any red edge, then the pale edge p will be discarded by setting the variable enlarge to .FALSE. in line 8. If none of the blue edges in B p crosses a red edge, then all blue edges in B p are recolored red and the pale edge p is colored blue in lines 13 to 15. The possibly enlarged solution is returned in line 18.
Fig. 12.6
Pseudo-code of the improvement procedure to enlarge a planar subgraph.
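The color-class reassignment can be sketched as follows; `cross` is a hypothetical user-supplied predicate reporting whether two edges cross in the current embedding (it is not part of the original pseudo-code), and the edge labels are arbitrary hashable identifiers.

```python
def enlarge(blue, red, pale, cross):
    """One-edge-at-a-time improvement: a pale edge p becomes blue when
    every blue edge crossing p (the set 'conflicting') crosses no red
    edge; those blue edges are then recolored red."""
    blue, red, pale = set(blue), set(red), set(pale)
    for p in sorted(pale):
        conflicting = {b for b in blue if cross(p, b)}
        # p may turn blue only if no conflicting blue edge crosses a red edge
        if all(not cross(b, r) for b in conflicting for r in red):
            blue -= conflicting
            red |= conflicting
            blue.add(p)
            pale.remove(p)
    return blue, red, pale
```

Each successful reassignment grows the planar (blue) subgraph by exactly one edge, as in the text.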
The improvement procedure EnlargePlanarGraph can be applied to each solution obtained in line 5 of the GRASP heuristic displayed in Figure 12.5 or, alternatively, exclusively to the best solution found returned by GRASP-GP in line 12.
## 12.3 Unsplittable multicommodity network flow: Application to bandwidth packing
Telecommunication service providers offer virtual private networks to customers by provisioning a set of permanent (long-term) private virtual circuits (PVCs) between endpoints on a large backbone network. During the provisioning of a PVC, routing decisions are made either automatically by the routing equipment (the router) or by the network operator, through the use of preferred routing assignments and without any knowledge of future requests. Over time, these decisions usually cause inefficiencies in the network and occasional rerouting of the PVCs is needed. The new routing scheme is then implemented on the network through preferred routing assignments. Given a preferred routing assignment, the switch will move the PVC from its current route to the new preferred route as soon as this move becomes feasible.
One possible way to create preferred routing assignments is to appropriately order the set of PVCs currently in the network and apply an algorithm that mimics the routing algorithm used by the router to each PVC in that order. However, more elaborate routing algorithms, which take into account factors not considered by the router, could further improve the efficiency of network resource utilization.
Typically, the routing scheme used by the routers to automatically provision PVCs is also used to reroute the PVCs in the case of trunk or card failures. Therefore, this routing algorithm should be efficient in terms of running time, a requirement that can be traded off for improved network resource utilization when building preferred routing assignments offline.
We discuss variants of a GRASP with path-relinking algorithm for the problem of routing offline a set of PVC demands over a backbone network, such that a combination of the delays due to propagation and congestion is minimized. This problem and its variants are also known in the literature as bandwidth packing problems. The set of PVCs to be routed can include not only all or a subset of the PVCs currently in the network, but also possibly a set of forecast PVCs. The explicit handling of propagation delays, as opposed to just handling the number of hops, is particularly important in international networks, where distances between backbone nodes vary considerably. The minimization of network congestion is important for providing the maximum flexibility to handle overbooking (which is typically used by network operators to account for noncoincidence of traffic), rerouting (due to link or card failures), and bursting above the committed rate (which is not only allowed, but sold to customers as one of the attractive features of the service).
We next formulate the offline PVC routing problem as an integer multicommodity flow problem with additional constraints and a hybrid objective function, which takes into account delays due to propagation as well as delays due to network congestion. Minimum cost multicommodity network flow problems are characterized by a set of commodities flowing through an underlying network, each commodity having an associated integral demand that must flow from its source to its destination. The flows are simultaneous and the commodities share network resources. We conclude this section by describing variants of a GRASP with path-relinking heuristic for this problem.
### 12.3.1 Problem formulation
Let G = (V, E) be an undirected graph representing a backbone network. Denote by V = { 1,..., n} the set of backbone nodes where routers reside, while E is the set of trunks (or edges) that connect the backbone nodes, with | E | = m. Parallel trunks are allowed. Since G is an undirected graph, flows through each trunk (i, j) ∈ E have two components to be summed up, one in each direction. However, for modeling purposes, costs and capacities are associated only with ordered pairs (i, j) ∈ E satisfying i < j. For each trunk (i, j) ∈ E, we denote by b ij its maximum allowed bandwidth (in kbits/second), while c ij denotes the maximum number of PVCs that can be routed through it and d ij is the propagation (or hopping) delay associated with the trunk. Each commodity k ∈ K = { 1,..., p} is a PVC to be routed, associated with an origin-destination pair and with a bandwidth requirement r k (or demand, also known as its effective bandwidth). It takes into account the actual bandwidth required by the customer in the forward and reverse directions, as well as an overbooking factor.
The ultimate objective of the offline PVC routing problem is to minimize propagation delays or network congestion, subject to several technological constraints. Queuing delays are often associated with network congestion and in some networks account for a large part of the total delay. In other networks, distances can be long and loads low, causing the propagation delay to account for a large part of the total delay. Two common measures of network congestion are the load on the most utilized trunk and the average delay in a network of independent M∕M∕1 queues. Another measure, which we use here, is a cost function that penalizes heavily loaded trunks. This function resembles the average delay function, except that it allows loads to exceed trunk capacities. Routing assignments with minimum propagation delays may not achieve the least network congestion. Likewise, routing assignments having the least congestion may not minimize propagation delays. A compromise is to route the PVCs such that a desired point on the trade-off curve between propagation delays and network congestion is achieved.
The upper bound on the number of PVCs allowed on a trunk depends on the technology used to implement it. A set of routing assignments is feasible if and only if, for every trunk (i, j) ∈ E, the total PVC effective bandwidth requirements routed through it does not exceed its maximum bandwidth b ij and the number of PVCs routed through it is not greater than c ij .
Let $x_{ij}^k$ be a 0-1 variable such that $x_{ij}^k = 1$ if and only if trunk (i, j) ∈ E is used to route commodity k ∈ K from node i to node j. The following linear integer program models the problem:

$$\min \sum_{(i,j)\in E:\, i<j} \phi_{ij}(x_{ij}^1,\cdots,x_{ij}^p, x_{ji}^1,\cdots,x_{ji}^p) \tag{12.1}$$

subject to

$$\sum_{k\in K} r_k\,(x_{ij}^k + x_{ji}^k) \leq b_{ij}, \quad \forall (i,j)\in E:\ i<j, \tag{12.2}$$

$$\sum_{k\in K} (x_{ij}^k + x_{ji}^k) \leq c_{ij}, \quad \forall (i,j)\in E:\ i<j, \tag{12.3}$$

$$\sum_{j:\,(i,j)\in E} x_{ij}^k - \sum_{j:\,(j,i)\in E} x_{ji}^k = a_i^k, \quad \forall i\in V,\ \forall k\in K, \tag{12.4}$$

$$x_{ij}^k \in \{0,1\}, \quad \forall (i,j)\in E,\ \forall k\in K. \tag{12.5}$$
Constraints of type (12.2) limit the total flow on each trunk to at most its capacity. Constraints of type (12.3) enforce the limit on the number of PVCs routed through each trunk. Constraints of type (12.4) are flow conservation equations, which together with constraints (12.5) state that the flow associated with each PVC cannot be split, where a i k = 1 if node i is the source for commodity k, a i k = −1 if node i is the destination for commodity k, and a i k = 0 otherwise.
The cost function ϕ ij (x ij 1, ⋯ , x ij p , x ji 1, ⋯ , x ji p ) associated with each trunk (i, j) ∈ E with i < j is the linear combination of a trunk propagation delay component and a trunk congestion component. The propagation delay component is defined as

$$\phi^{(1)}_{ij} = d_{ij} \sum_{k\in K} \rho_k\,(x_{ij}^k + x_{ji}^k), \tag{12.6}$$
where coefficients ρ k are used to model two plausible delay functions:
* If ρ k = 1, then this component leads to the minimization of the number of hops weighted by the propagation delay on each trunk.
* If ρ k = r k , then the minimization takes into account the effective bandwidth routed through each trunk weighted by its propagation delay.
Let $y_{ij} = \sum_{k\in K} r_k (x_{ij}^k + x_{ji}^k)$ be the total flow through trunk (i, j) ∈ E with i < j. The trunk congestion component depends on the utilization rates $u_{ij} = y_{ij}/b_{ij}$ of each trunk (i, j) ∈ E with i < j. This piecewise linear function $\phi^{(2)}_{ij}$,

$$\phi^{(2)}_{ij} = \begin{cases} y_{ij}, & u_{ij} \in [0, 1/3),\\ 3y_{ij} - \tfrac{2}{3}\, b_{ij}, & u_{ij} \in [1/3, 2/3),\\ 10y_{ij} - \tfrac{16}{3}\, b_{ij}, & u_{ij} \in [2/3, 9/10),\\ 70y_{ij} - \tfrac{178}{3}\, b_{ij}, & u_{ij} \in [9/10, 1),\\ 500y_{ij} - \tfrac{1468}{3}\, b_{ij}, & u_{ij} \in [1, 11/10),\\ 5000y_{ij} - \tfrac{16318}{3}\, b_{ij}, & u_{ij} \geq 11/10, \end{cases} \tag{12.7}$$

depicted in Figure 12.7, increasingly penalizes flows approaching or violating the capacity limits. The value

$$u_{\max} = \max\{u_{ij} : (i,j)\in E,\ i<j\}$$

is a global measure of the maximum congestion in the network.
Fig. 12.7
Piecewise linear load balance cost component associated with each trunk.
Let weights (1 −δ) and δ correspond, respectively, to the propagation delay and to the network congestion components, with δ ∈ [0, 1]. The cost function

$$\phi_{ij}(x_{ij}^1,\cdots,x_{ij}^p, x_{ji}^1,\cdots,x_{ji}^p) = (1-\delta)\,\phi^{(1)}_{ij} + \delta\,\phi^{(2)}_{ij} \tag{12.8}$$
is associated with each trunk (i, j) ∈ E with i < j. Note that if δ > 0, then the network congestion component is present in the objective function, which allows us to relax capacity constraints (12.2). This will be assumed in the algorithms discussed in Section 12.3.2.
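For illustration, a congestion penalty of the shape described by (12.7) together with the combined trunk cost (12.8) can be coded as below. The particular breakpoints and slopes used here are the widely used Fortz–Thorup load cost, adopted as a plausible instantiation rather than taken from the text; continuity at the breakpoints is easy to verify numerically.

```python
def load_cost(y, b):
    """Piecewise-linear congestion penalty phi^(2) for flow y on a trunk
    of bandwidth b.  Breakpoints/slopes are the Fortz-Thorup load cost,
    an assumed instantiation; loads above capacity are allowed but are
    penalized very heavily."""
    u = y / b
    if u < 1/3:   return y
    if u < 2/3:   return 3*y - (2/3)*b
    if u < 9/10:  return 10*y - (16/3)*b
    if u < 1:     return 70*y - (178/3)*b
    if u < 11/10: return 500*y - (1468/3)*b
    return 5000*y - (16318/3)*b

def trunk_cost(y, b, prop_delay, delta):
    """Convex combination of the propagation delay component and the
    congestion component, as in (12.8)."""
    return (1 - delta) * prop_delay + delta * load_cost(y, b)
```

With δ = 0 only propagation delay matters; with δ = 1 only congestion does.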
Model (12.1)–(12.5) proposed in this section has two distinctive features. First, it takes into account a two component objective function, which is able to handle both delays and load balance. Second, it enforces constraints that limit the maximum number of PVCs that can be routed through any trunk. A GRASP with path-relinking heuristic for its solution is described in the next section.
### 12.3.2 GRASP with path-relinking for PVC routing
In the remainder of this section, we customize a GRASP heuristic for the offline PVC routing problem. We describe construction and local search procedures, as well as a path-relinking intensification strategy.
#### 12.3.2.1 Construction phase
In the construction phase, the routes are determined, one at a time. In each iteration of the construction, a new PVC is selected to be routed. To reduce the computation times, we make use of a combination of the strategies usually employed by GRASP and heuristic-biased stochastic sampling. We create a restricted candidate list (RCL) with a fixed number of elements n c . At each iteration, the RCL is formed by the n c unrouted PVC pairs with the largest demands. An element ℓ is selected at random from this list with probability π(ℓ) = r ℓ ∕∑ k ∈ RCL r k .
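The demand-biased choice from the RCL can be sketched as follows; `select_pvc`, `demand`, and `nc` are illustrative names of ours, not identifiers from the pseudo-code.

```python
import random

def select_pvc(unrouted, demand, nc, rng):
    """Pick the next PVC to route: the RCL holds the nc unrouted PVCs
    with the largest demands; one is chosen with probability
    proportional to its demand (roulette-wheel selection)."""
    rcl = sorted(unrouted, key=lambda k: -demand[k])[:nc]
    total = sum(demand[k] for k in rcl)
    x, acc = rng.random() * total, 0.0
    for k in rcl:
        acc += demand[k]
        if x <= acc:
            return k
    return rcl[-1]   # guard against floating-point round-off
```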
Once a PVC ℓ ∈ K is selected, it is routed on a shortest path from its origin to its destination. The capacity constraints (12.2) are relaxed and handled via the penalty function introduced by the load balance component (12.7) of the edge weights. The constraints of type (12.3) are explicitly taken into account by forbidding routing through trunks that already carry their maximum number of PVCs. The weight Δ ϕ ij of each edge (i, j) ∈ E is given by the increment in the cost function value ϕ ij (x ij 1, ⋯ , x ij p , x ji 1, ⋯ , x ji p ) associated with routing r ℓ additional units of demand through edge (i, j).
More precisely, let $\bar{K} \subseteq K$ be the set of previously routed PVCs and $\bar{K}_{ij} \subseteq \bar{K}$ be the subset of PVCs that are routed through trunk (i, j) ∈ E. Likewise, let $\hat{K} = \bar{K} \cup \{\ell\}$ be the new set of routed PVCs and $\hat{K}_{ij}$ be the new subset of PVCs that are routed through trunk (i, j). Then, we define $x_{ij}^k = 1$ if PVC $k \in \bar{K}$ is routed through trunk (i, j) ∈ E from i to j, $x_{ij}^k = 0$ otherwise. Similarly, we define $\hat{x}_{ij}^k = 1$ if PVC $k \in \hat{K}$ is routed through trunk (i, j) ∈ E from i to j, $\hat{x}_{ij}^k = 0$ otherwise. According to (12.8), the cost associated with each edge (i, j) ∈ E in the current solution is given by $\phi_{ij}(x_{ij}^1,\cdots,x_{ij}^p, x_{ji}^1,\cdots,x_{ji}^p)$. In the same manner, the cost associated with each edge (i, j) ∈ E after routing PVC ℓ will be $\phi_{ij}(\hat{x}_{ij}^1,\cdots,\hat{x}_{ij}^p, \hat{x}_{ji}^1,\cdots,\hat{x}_{ji}^p)$. Then, the incremental edge weight Δ ϕ ij associated with routing PVC ℓ ∈ K through edge (i, j) ∈ E, used in the shortest path computations, is given by

$$\Delta\phi_{ij} = \phi_{ij}(\hat{x}_{ij}^1,\cdots,\hat{x}_{ij}^p, \hat{x}_{ji}^1,\cdots,\hat{x}_{ji}^p) - \phi_{ij}(x_{ij}^1,\cdots,x_{ij}^p, x_{ji}^1,\cdots,x_{ji}^p). \tag{12.9}$$
The enforcement of type (12.3) constraints may lead to unroutable demand pairs. In this case, the current solution is discarded and a new construction phase starts.
#### 12.3.2.2 Local search
Each solution built in the first phase may be viewed as a set of routes, one for each PVC. The local search procedure seeks to improve each route in the current solution. For each PVC k ∈ K, we start by removing r k units of flow from each edge in its current route. Next, we compute incremental edge weights Δ ϕ ij associated with routing this demand through each trunk (i, j) ∈ E according to equation (12.9), as described in Section 12.3.2.1. A tentative new shortest path route is computed using the incremental edge weights. If the new route improves the solution, then it replaces the current route of PVC k. This is continued until no improving route can be found.
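A minimal sketch of one pass of this rerouting local search is given below, with a small Dijkstra routine standing in for the shortest path computations. The callback `cost(edge, load)` is a stand-in for the trunk cost ϕ ij of the text; any convex congestion cost exhibits the behavior described above. All names are ours.

```python
import heapq

def shortest_path(weights, s, t):
    """Dijkstra on an undirected graph; weights maps frozenset({u, v})
    edges to nonnegative costs.  Returns (path, cost)."""
    adj = {}
    for e, w in weights.items():
        u, v = tuple(e)
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev, heap = {s: 0.0}, {}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[t]

def reroute_once(edges, routes, demands, cost):
    """One pass of the local search: for each PVC remove its flow,
    recompute incremental edge weights (the Delta-phi of (12.9)) from
    the congestion cost, and adopt a shortest reroute if it is cheaper."""
    load = {frozenset(e): 0 for e in edges}
    for k, path in routes.items():
        for e in zip(path, path[1:]):
            load[frozenset(e)] += demands[k]
    changed = False
    for k in routes:
        path, d = routes[k], demands[k]
        for e in zip(path, path[1:]):
            load[frozenset(e)] -= d       # remove this PVC's flow
        w = {f: cost(f, load[f] + d) - cost(f, load[f]) for f in load}
        old = sum(w[frozenset(e)] for e in zip(path, path[1:]))
        new_path, new = shortest_path(w, path[0], path[-1])
        if new < old - 1e-12:
            routes[k], changed = new_path, True
        for e in zip(routes[k], routes[k][1:]):
            load[frozenset(e)] += d       # re-add flow on the kept route
    return changed
```

Repeating `reroute_once` until it returns `False` matches the "until no improving route can be found" stopping rule.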
#### 12.3.2.3 Path-relinking
Path-relinking for bandwidth packing is applied to pairs {y, z} of solutions, where one solution is the local optimum obtained after local search and the other is randomly chosen from an elite set formed by a limited number of elite solutions found along the search. This elite pool is initially empty. Each locally optimal solution obtained by local search is considered as a candidate to be inserted into the pool if it differs by at least one trunk in one route from every other solution currently in the pool. If the pool is already full and the candidate is better than the worst solution in the pool, then the candidate replaces the worst solution. If the pool is not full, the candidate is simply inserted in the pool.
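The pool membership rules can be sketched as below; equality of the encoded candidate stands in for the "differs by at least one trunk in one route" test, and lower cost is taken to be better, matching the minimization objective of this section. All names are illustrative.

```python
def update_pool(pool, candidate, cost, max_size):
    """Elite-pool update sketch.  'pool' is a list of (solution, cost)
    pairs; 'candidate' is any hashable encoding of the routes.  Returns
    True when the candidate enters the pool."""
    if any(candidate == sol for sol, _ in pool):
        return False                      # identical to an elite solution
    if len(pool) < max_size:
        pool.append((candidate, cost))
        return True
    worst = max(range(len(pool)), key=lambda i: pool[i][1])
    if cost < pool[worst][1]:
        pool[worst] = (candidate, cost)   # replace the worst elite solution
        return True
    return False
```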
Either y or z is selected to be the initial solution, while the other will be the guiding solution. The algorithm starts by computing the set of moves that should be applied to the initial solution to reach the guiding solution. Starting from the initial solution, the best move still not performed is applied to the current solution, until the guiding solution is attained. The best solution found along this trajectory is also considered as a candidate for insertion in the pool and the incumbent is updated. Several alternatives have been considered and combined to explore trajectories connecting y and z. All these alternatives involve trade-offs between computation time and solution quality, as already discussed in earlier chapters.
In this application of path-relinking, the set of moves between any pair {y, z} of solutions is the subset K y, z ⊆ K of PVCs routed through different routes in y and z. Without loss of generality, let us suppose that path-relinking starts from any elite solution z in the pool and uses the locally optimal solution y as the guiding solution.
The best solution $\bar{y}$ along the new path to be constructed is initialized with z. For each PVC k ∈ K y, z , the same shortest path computations described in Sections 12.3.2.1 and 12.3.2.2 are used to evaluate the cost of the new solution obtained by rerouting the demand associated with PVC k through the route used in the guiding solution y instead of the route used in the current solution originated from z. The best move is selected and removed from K y, z . The new solution obtained by rerouting the above selected PVC is computed, the incumbent $\bar{y}$ is updated, and a new iteration resumes. These steps are repeated until the guiding solution y is reached. The incumbent $\bar{y}$ is returned as the best solution found by path-relinking and is inserted into the pool if it satisfies the membership conditions.
The pseudo-code with the complete description of procedure GRASP+PR-BPP for the bandwidth packing problem arising in the context of offline PVC rerouting is given in Figure 12.8. This description incorporates the construction, local search, and path-relinking phases.
Fig. 12.8
Pseudo-code of the GRASP with path-relinking procedure for the bandwidth packing problem.
## 12.4 Maximum cut in a graph
Given an undirected graph G = (V, U), where V is the set of vertices and U is the set of edges, and weights w uv associated with each edge (u, v) ∈ U, the maximum cut (MAX-CUT) problem consists in finding a nonempty proper subset of vertices S ⊂ V (i.e., $S \neq \emptyset$ and $S \neq V$), such that the weight of the cut $(S, \bar{S})$, where $\bar{S} = V \setminus S$, given by

$$w(S, \bar{S}) = \sum_{u \in S,\ v \in \bar{S}} w_{uv},$$

is maximized.
Figure 12.9 shows four cuts having different weights on a graph with five nodes and seven edges. The maximum cut has S = { 1, 2, 4} and $\bar{S} = \{3, 5\}$, with weight $w(S, \bar{S}) = 50$.
Fig. 12.9
Example of the maximum cut problem on a graph with five vertices and seven edges. Four cuts are shown. The maximum cut is $(\{1, 2, 4\}, \{3, 5\})$ and has weight 50.
### 12.4.1 GRASP with path-relinking for the maximum cut problem
A GRASP with path-relinking heuristic for the MAX-CUT problem consists in repeatedly constructing a cut $(S, \bar{S})$ with a semi-greedy algorithm, applying local search from $(S, \bar{S})$ to produce a locally maximal solution, and applying path-relinking from this local maximum to a solution selected at random from a pool of elite solutions. The best local maximum found over all GRASP iterations is returned as the GRASP solution.
The pseudo-code of a GRASP with path-relinking heuristic for the MAX-CUT problem is shown in Figure 12.10. In line 1, the value w ∗ of the best cut found is initialized to −∞ and in line 2 the pool of elite solutions is initialized empty. The while loop from line 3 to line 18 carries out the GRASP with path-relinking iterations. The algorithm terminates when a stopping criterion is satisfied. In line 4, a semi-greedy solution is constructed and, in line 5, it is tentatively improved with local search, which produces a locally maximal cut $(S, \bar{S})$. Path-relinking is applied if the pool has at least one elite solution. In that case, a guiding solution is selected at random from the pool in line 7 and the path-relinking operator is applied from the locally maximal cut $(S, \bar{S})$ to the guiding solution in line 8, with the resulting solution saved in $(S, \bar{S})$. If the solution $(S, \bar{S})$, obtained by either local search or path-relinking, satisfies the membership conditions, then the pool of elite solutions is updated in line 11. Though the cut weight $w(S, \bar{S})$ is computed in the local search and path-relinking procedures, its computation is shown in line 13 of the pseudo-code. If the weight of the local maximum is greater than the weight w ∗ of the best cut found so far, then the best cut and its weight are updated in lines 15 and 16, respectively. The best solution found and its weight are returned in line 19.
Fig. 12.10
Pseudo-code of a GRASP with path-relinking for the MAX-CUT problem.
In the remainder of this chapter we describe the components of this GRASP with path-relinking heuristic in more detail.
#### 12.4.1.1 A greedy algorithm for the maximum cut problem
The construction phase of the GRASP for the MAX-CUT problem described here is a semi-greedy algorithm. Recall that we wish to build a proper subset S ⊂ V, such that $(S, \bar{S})$ forms a partition of V, i.e., $S \cup \bar{S} = V$ and $S \cap \bar{S} = \emptyset$. The ground set for the MAX-CUT problem is the set V of vertices of graph G = (V, U).
We first describe a greedy algorithm that is the basis for this GRASP. It builds a solution incrementally in sets X and Y by assigning vertices from the ground set V to either X or Y. Initially, sets X and Y each contain an endpoint of a largest-weight edge. At each other step of the construction, a new ground set element v ∈ V is added to either set X or set Y of the partial solution. This is repeated until X ∪ Y = V, at which point we set S to X and $\bar{S}$ to Y, and a feasible solution $(S, \bar{S})$ is on hand.
At each iteration of this greedy construction, an element is selected from a candidate list whose elements are the yet-unassigned ground set elements, i.e., V ∖(X ∪ Y ), according to an adaptive greedy function described next.
The greedy function takes into account the contribution to the objective function (the weight of the partial cut) achieved by assigning a particular element to either set X or set Y. Formally, let (X, Y ) be the partial solution under construction. Recall that, for any partial solution, X ∪ Y ⊂ V. For each yet-unassigned vertex v ∈ V ∖(X ∪ Y ), define

$$\sigma_X(v) = \sum_{u \in Y:\ (u,v) \in U} w_{uv} \tag{12.10}$$
and

$$\sigma_Y(v) = \sum_{u \in X:\ (u,v) \in U} w_{uv} \tag{12.11}$$
to be, respectively, the incremental contributions to the cut weight resulting from the assignment of node v to sets X and Y of the partial partition (X, Y ). The greedy function

$$g(v) = \max\{\sigma_X(v), \sigma_Y(v)\},$$
for v ∈ V ∖(X ∪ Y ), measures how much additional weight results from the assignment of vertex v to X or Y. The greedy choice is

$$v^* = \operatorname{argmax}\{g(v) : v \in V \setminus (X \cup Y)\}.$$
Vertex v ∗ is assigned to set X if σ X (v ∗) > σ Y (v ∗), or to set Y otherwise.
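The adaptive greedy construction can be sketched compactly in Python; the names are ours and ties are broken by vertex order, which the text leaves unspecified.

```python
def greedy_maxcut(vertices, w):
    """Adaptive greedy construction for MAX-CUT: seed X and Y with the
    endpoints of a largest-weight edge, then repeatedly assign the
    unassigned vertex with the largest greedy value
    g(v) = max(sigma_X(v), sigma_Y(v)) to its better side."""
    i, j = max(w, key=w.get)              # largest-weight edge
    X, Y = {i}, {j}
    wt = lambda u, v: w.get((u, v), w.get((v, u), 0))
    while len(X) + len(Y) < len(vertices):
        rest = [v for v in vertices if v not in X and v not in Y]
        def sx(v): return sum(wt(v, u) for u in Y)   # gain if v joins X
        def sy(v): return sum(wt(v, u) for u in X)   # gain if v joins Y
        v = max(rest, key=lambda v: max(sx(v), sy(v)))
        (X if sx(v) > sy(v) else Y).add(v)
    return X, Y
```

On the five-node example of the next section this reproduces the trace in the text.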
Maximum cut problem – Adaptive greedy algorithm to find a large-weight cut
Consider the following example on the five-node graph G = (V, U) of Figure 12.11, for which we seek a large-weight cut. We build a partition (X, Y ) of the nodes of G incrementally, with the greedy algorithm described above. Initially, sets X and Y are such that each contains an endpoint of a largest-weight edge of G. Since edge (3, 4) is the one with the largest weight, then X = { 3}, Y = { 4}, and the weight of the partial cut is w(X, Y ) = w 3, 4 = 15.
Fig. 12.11
Five-node graph for maximum cut problem.
To select the next node to be added to the partial cut, we consider only nodes 2 and 5, since node 1 is not adjacent to any node in X ∪ Y and, consequently, σ X (1) = σ Y (1) = 0. Consider first node 2. Its contribution to the cut if added to set Y is σ Y (2) = ∑ u ∈ X w 2, u = w 2, 3 = 9. Since it is not adjacent to any node in set Y, then it will not contribute to the partial cut if added to X, i.e., σ X (2) = ∑ u ∈ Y w 2, u = 0. Now consider node 5. Its contribution to the cut if added to sets X and Y are, respectively, σ X (5) = ∑ u ∈ Y w 5, u = w 5, 4 = 10 and σ Y (5) = ∑ u ∈ X w 5, u = w 5, 3 = 2. Since the greedy function values are

$$g(2) = \max\{\sigma_X(2), \sigma_Y(2)\} = \max\{0, 9\} = 9 \quad \text{and} \quad g(5) = \max\{\sigma_X(5), \sigma_Y(5)\} = \max\{10, 2\} = 10,$$
then the greedy choice is node 5, because

$$g(5) = 10 > g(2) = 9.$$
Furthermore, since σ X (5) > σ Y (5), then node 5 is assigned to set X. The partial cut becomes (X, Y ) = ({3, 5}, {4}) with weight 25.
The remaining nodes are 1 and 2. Consider first node 1. Its contribution to the cut if added to set Y is σ Y (1) = ∑ u ∈ X w 1, u = w 1, 5 = 6. Since node 1 is not adjacent to node 4, the only node in Y, then σ X (1) = ∑ u ∈ Y w 1, u = 0. Now, consider node 2. Its contribution to the cut if added to set Y is σ Y (2) = ∑ u ∈ X w 2, u = w 2, 5 \+ w 2, 3 = 19. Since node 2 is not adjacent to node 4, the only node in Y, then σ X (2) = ∑ u ∈ Y w 2, u = 0. Since the greedy function values are

$$g(1) = \max\{0, 6\} = 6 \quad \text{and} \quad g(2) = \max\{0, 19\} = 19,$$
then the greedy choice is node 2, because

$$g(2) = 19 > g(1) = 6.$$
Furthermore, since σ Y (2) > σ X (2), then node 2 is assigned to set Y. The partial cut becomes (X, Y ) = ({3, 5}, {2, 4}) with weight 44.
Finally, consider node 1, the last remaining node. Its contributions to the cut if added to sets X and Y are, respectively, σ X (1) = ∑ u ∈ Y w 1, u = w 1, 2 = 5 and σ Y (1) = ∑ u ∈ X w 1, u = w 1, 5 = 6. Since σ Y (1) > σ X (1), node 1 is assigned to set Y. The final cut is $(X, Y) = (\{3, 5\}, \{1, 2, 4\})$ with weight 50. This cut is the best of the four shown in Figure 12.9. ■
#### 12.4.1.2 A semi-greedy algorithm for the maximum cut problem
An earlier chapter introduced the randomization of greedy algorithms to construct their semi-greedy variants. We next present a semi-greedy variant of the greedy algorithm for maximum cut described in Section 12.4.1.1.
To start the construction process, a large-weight edge (i ∗, j ∗) ∈ U is selected and each of its endpoints is assigned to a different subset of the partial solution, e.g., i ∗ is assigned to X and j ∗ to Y. To add variability to this choice process, one adopts a greedy randomized approach by building a restricted candidate list with all edges in U having weights above the cutoff threshold μ = w min +α ⋅ (w max − w min ), where w min and w max are, respectively, the smallest and largest edge weights of edges in U and α is a real number in the interval [0, 1]. Edge (i ∗, j ∗) is randomly selected from this initial restricted candidate list.
To define the construction mechanism for the restricted candidate list used at each iteration, let

$$w_{\min} = \min\{g(v) : v \in V'\}$$

and

$$w_{\max} = \max\{g(v) : v \in V'\},$$

where $g(v) = \max\{\sigma_X(v), \sigma_Y(v)\}$ is the greedy function and V ′ = V ∖(X ∪ Y ) is the set of nodes that are not yet assigned to either subset X or subset Y. Denoting by μ = w min +α ⋅ (w max − w min ) the cutoff value, where α is a parameter such that 0 ≤ α ≤ 1, the restricted candidate list is made up of all nodes whose greedy function value is greater than or equal to μ. A node v is randomly selected from the list. If σ X (v) > σ Y (v), then node v ∈ V ′ is placed in X; otherwise it is placed in Y.
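A sketch of the full semi-greedy construction, covering both the initial edge RCL and the per-iteration vertex RCL, follows; all names are ours. With α = 1 it reduces to the purely greedy algorithm of Section 12.4.1.1.

```python
import random

def semigreedy_maxcut(vertices, w, alpha, rng):
    """Semi-greedy MAX-CUT construction: the initial edge is drawn from
    an RCL of large-weight edges; afterwards, the RCL holds the
    unassigned vertices whose greedy value g(v) reaches
    w_min + alpha*(w_max - w_min), and one is chosen at random."""
    wt = lambda u, v: w.get((u, v), w.get((v, u), 0))
    # initial RCL over edges, by weight
    wmin, wmax = min(w.values()), max(w.values())
    mu = wmin + alpha * (wmax - wmin)
    i, j = rng.choice(sorted(e for e in w if w[e] >= mu))
    X, Y = {i}, {j}
    while len(X) + len(Y) < len(vertices):
        rest = [v for v in vertices if v not in X and v not in Y]
        g = {v: max(sum(wt(v, u) for u in Y),    # sigma_X(v)
                    sum(wt(v, u) for u in X))    # sigma_Y(v)
             for v in rest}
        lo, hi = min(g.values()), max(g.values())
        rcl = [v for v in rest if g[v] >= lo + alpha * (hi - lo)]
        v = rng.choice(rcl)
        if sum(wt(v, u) for u in Y) > sum(wt(v, u) for u in X):
            X.add(v)        # sigma_X(v) > sigma_Y(v)
        else:
            Y.add(v)
    return X, Y
```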
The pseudo-code of the semi-greedy GRASP construction procedure for the maximum cut problem is shown in Figure 12.12. The restricted candidate list parameter α is generated at random in line 1. The initial edge of the cut is determined in lines 2 to 8. Lines 2 and 3 determine the smallest and largest edge weights w min and w max , respectively. The cutoff value μ is computed in line 4 and the restricted candidate list RCL e is set up in line 5. Finally, in line 6, edge (i ∗, j ∗) is randomly selected from RCL e and each endpoint of the selected edge is assigned in lines 7 and 8.
Fig. 12.12
Pseudo-code of the semi-greedy GRASP construction phase algorithm for the MAX-CUT problem.
The while loop in lines 9 to 25 builds the remainder of the cut. It stops when a cut (X, Y ) is on hand, i.e., when X ∪ Y = V. In line 10, the set V ′ of candidate vertices still to be added to each side of the cut under construction is determined. In lines 11 to 14, the incremental contributions σ X (v) and σ Y (v) associated with the addition of each vertex v ∈ V ′ to subsets X and Y, respectively, are computed. Lines 15 and 16 compute, respectively, the smallest and largest contributions over all vertices v ∈ V ′. Line 17 computes the cutoff value for membership in the restricted candidate list RCL v , which is set up in line 18. The next vertex, v ∗, to be added to X or Y is selected at random from RCL v in line 19. If it contributes more to the cut by being added to X, then it is added to that set in line 21. Otherwise, it is added to Y in line 23. In lines 26 and 27, X and Y are assigned, respectively, to sets S and $\bar{S}$ that form the cut $(S, \bar{S})$. Line 28 returns the constructed cut $(S, \bar{S})$ and its weight $w(S, \bar{S})$.
#### 12.4.1.3 Local search for the maximum cut problem
Since a solution $(S, \bar{S})$ generated with the semi-greedy algorithm of Section 12.4.1.2 is not guaranteed to be locally optimal with respect to any neighborhood structure, a local search algorithm may improve its weight. We base the local search algorithm presented next on the following neighborhood structure. To each vertex v ∈ V, we associate either the neighbor $(S \setminus \{v\}, \bar{S} \cup \{v\})$ if v ∈ S, or the neighbor $(S \cup \{v\}, \bar{S} \setminus \{v\})$, otherwise. In other words, we move vertex v from one side of the cut to the other. Let

$$\sigma_S(v) = \sum_{u \in \bar{S}:\ (u,v) \in U} w_{uv} \tag{12.12}$$

be the sum of the weights of the edges incident to v that have their other endpoint in $\bar{S}$, and

$$\sigma_{\bar{S}}(v) = \sum_{u \in S:\ (u,v) \in U} w_{uv} \tag{12.13}$$

be the sum of the weights of the edges incident to v that have their other endpoint in S. The value

$$\delta(v) = \begin{cases} \sigma_{\bar{S}}(v) - \sigma_S(v), & \text{if } v \in S,\\ \sigma_S(v) - \sigma_{\bar{S}}(v), & \text{if } v \in \bar{S}, \end{cases}$$

represents the change in the objective function associated with moving vertex v from one subset of the cut to the other. All possible moves are investigated. The current solution is replaced by the first improving neighbor found. The search stops after all possible moves have been evaluated and no improving neighbor is found.
The pseudo-code of the local search procedure is given in Figure 12.13. The loop from line 2 to line 16 is repeated until no improving move is possible. In line 3, the move indicator variable change is initialized to indicate that no move has been made. The for loop from line 4 to line 15 scans all vertices and attempts to move vertices from one set of the cut to the other. The loop concludes scanning the vertices either if all moves have been tested and none improves the total weight of the cut, or if an improving move is found. Lines 5 to 7 move node v from S to $\bar{S}$ if $\sigma_{\bar{S}}(v) - \sigma_S(v) > 0$, where σ S (v) and $\sigma_{\bar{S}}(v)$ are defined, respectively, in equations (12.12) and (12.13). Line 8 sets the indicator variable change to indicate that a move has been made. Lines 10 to 12 move node v from $\bar{S}$ to S if $\sigma_S(v) - \sigma_{\bar{S}}(v) > 0$. Line 13 sets the indicator variable change to indicate that a move has been made. A locally maximal cut $(S, \bar{S})$ and its weight $w(S, \bar{S})$ are returned by the local search procedure in line 17.
Fig. 12.13
Pseudo-code of the GRASP local search phase algorithm for the MAX-CUT problem.
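The first-improving flip local search can be sketched as below; the scan order over vertices is ours, since the pseudo-code leaves it unspecified.

```python
def maxcut_local_search(S, Sbar, w):
    """First-improving flip local search for MAX-CUT: move a vertex
    across the cut whenever the move increases the cut weight, and
    restart the scan after every accepted move."""
    wt = lambda u, v: w.get((u, v), w.get((v, u), 0))
    S, Sbar = set(S), set(Sbar)
    changed = True
    while changed:
        changed = False
        for v in sorted(S | Sbar):
            inside, outside = (S, Sbar) if v in S else (Sbar, S)
            stay = sum(wt(v, u) for u in outside)            # current contribution
            move = sum(wt(v, u) for u in inside if u != v)   # contribution if flipped
            if move > stay:
                inside.remove(v)
                outside.add(v)
                changed = True
                break                    # first-improving: rescan from the start
    return S, Sbar
```

Starting from the poor cut ({1}, {2, 3, 4, 5}) on the five-node example, this sketch reaches the weight-50 cut of the text.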
#### 12.4.1.4 GRASP with path-relinking for maximum cut
As we saw earlier, during GRASP with path-relinking, trajectories connecting high-quality solutions in the search space graph are explored in search of other high-quality solutions. In the GRASP with path-relinking algorithm described here, we apply the usual strategy of saving high-quality, or elite, solutions in an elite set of limited size. This elite set is initially empty and is populated during the initial GRASP iterations. Then, after each GRASP iteration, one or more paths connecting the solution produced by the local search procedure and one of the elite solutions are explored in search of other high-quality solutions. In a forward path-relinking scheme, we specify the local search solution to be the initial solution and a randomly selected elite solution to be the guiding solution. A series of moves takes the current solution from the initial solution to the guiding solution. Each move along the path introduces attributes contained in the guiding solution into the current solution. At each step, the chosen move to a solution in a restricted neighborhood of the current solution is usually one that maximizes some greedy criterion. As we saw earlier, choices other than a greedy choice are possible. However, in this discussion we assume that the move made is one that increases the weight of the current solution the most or, if no move increases its weight, one that decreases it the least.
We next describe a forward path-relinking procedure for the MAX-CUT problem, going from an initial solution  to a guiding elite solution . Note that other forms of path-relinking, such as backward and mixed path-relinking, can be applied in place of forward path-relinking. Also note that the guiding solution can be represented not only as , but also as . Since different solutions can be traversed in the path from an initial solution  to a guiding solution  and in the path from  to , traversing both paths may enable the algorithm to find better solutions.
The path-relinking procedure starts by initializing the current solution with the initial solution, i.e., setting  and computing the restricted neighborhood

where

and

Set  is the neighborhood formed by solutions that result from moving from S̄ to S a vertex v that belongs to S g (but not to S). Conversely, set  corresponds to the neighborhood formed by solutions that result from moving from S to S̄ a vertex v that does not belong to S g (but belongs to S).
Maximum cut problem – Path-relinking iteration
Consider the following example on the graph in Figure 12.11, where V = {1, 2, 3, 4, 5}. Let the current solution (S, S̄) be such that S = {1, 2} and S̄ = {3, 4, 5}, and let the guiding solution (S g , S̄ g ) be such that S g = {1, 3, 5} and S̄ g = {2, 4}. Since S ∪ S g = {1, 2, 3, 5} and S ∩ S g = {1}, then
S g ∖ S = {3, 5}
and
S ∖ S g = {2}.
Consequently,
the restricted neighborhood of (S, S̄) consists of the three solutions ({1, 2, 3}, {4, 5}), ({1, 2, 5}, {3, 4}), and ({1}, {2, 3, 4, 5}).
Since w({1, 2, 3}, {4, 5}) = 33, w({1, 2, 5}, {3, 4}) = 21, and w({1}, {2, 3, 4, 5}) = 11, path-relinking makes the greedy choice and moves to the solution
(S, S̄) = ({1, 2, 3}, {4, 5}).
■
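The candidate moves in the restricted neighborhood are exactly the vertices on which S and S g disagree. A brief sketch (with illustrative names) reproduces the candidate moves of the example above:

```python
def restricted_moves(S, S_g):
    """Candidate path-relinking moves toward the guiding set S_g.

    Returns the vertices to bring into S (those in S_g but not in S)
    and the vertices to send out of S (those in S but not in S_g).
    """
    S, S_g = set(S), set(S_g)
    return S_g - S, S - S_g

# The example above: S = {1, 2} and S_g = {1, 3, 5} on V = {1, ..., 5}
into_S, out_of_S = restricted_moves({1, 2}, {1, 3, 5})
# into_S == {3, 5} and out_of_S == {2}: flipping these vertices yields the
# cuts ({1,2,3},{4,5}), ({1,2,5},{3,4}), and ({1},{2,3,4,5})
```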
The best solution in the restricted neighborhood  is selected, the current solution is updated, and a new path-relinking iteration is performed. This process is repeated until the guiding solution is reached. The total number of iterations performed by path-relinking is

where  is the starting solution and  and  are the two representations of the guiding solution.
Figure 12.14 shows the pseudo-code for a forward path-relinking algorithm for the maximum cut problem. The procedure takes as input an initial solution  and two representations of a guiding solution,  and , and returns a locally maximum cut  in line 25. Lines 1 through 12 traverse a path from  to , while in lines 13 through 24 a path from  to  is traversed. Lines 1 and 2 initialize the current solution  with the initial solution  and line 3 initializes the largest cut weight to −∞. Traversal of the path from  to  in the solution space graph takes place in the while loop from line 4 to line 11. This loop is applied until the guiding solution is reached, i.e., until  becomes empty. In line 5, a best solution among all solutions in the restricted neighborhood  is assigned to  and, in line 6, the move to  is made. If the weight of the cut corresponding to the new solution is the largest seen so far in this path, then the cut and its weight are saved in  (line 8) and in w ∗ (line 9), respectively. Since there is no guarantee that the solutions in the path traversed by path-relinking are local maxima, local search is applied to  in line 12 and the local maximum found is . Traversal of the second path is similar and takes place in lines 13 to 24. The local maximum found starting from the best solution in this path is  and is computed in line 24. The best solution  returned in line 25 is the solution with the largest cut weight among  and .
Fig. 12.14
Pseudo-code of the forward path-relinking procedure for the MAX-CUT problem.
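The forward traversal of Figure 12.14 can be sketched in Python as follows, under an assumed edge-weight dictionary like the one used in the local search sketch. The second path toward the complement representation (S̄ g , S g ) and the final local search calls are omitted for brevity; all names are illustrative.

```python
def forward_path_relinking(w, S, S_g):
    """Greedy forward path-relinking for MAX-CUT (a sketch).

    Walks from the cut defined by S toward the guiding set S_g, flipping
    at each step the disagreeing vertex that yields the heaviest
    intermediate cut, and returns the best cut visited along the path.
    """
    def cut_weight(side):
        # weight of the edges with exactly one endpoint in `side`
        return sum(wt for e, wt in w.items() if len(e & side) == 1)

    S, S_g = set(S), set(S_g)
    best, best_w = set(S), float('-inf')
    while S != S_g:
        disagree = (S_g - S) | (S - S_g)                      # restricted neighborhood
        v = max(disagree, key=lambda u: cut_weight(S ^ {u}))  # greedy choice
        S ^= {v}                                              # make the move
        if cut_weight(S) > best_w:
            best, best_w = set(S), cut_weight(S)
    return best, best_w
```

Note that intermediate solutions are accepted even when they decrease the cut weight, since the walk must reach the guiding solution.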
Once a new solution  is produced by path-relinking, the GRASP with path-relinking procedure verifies in lines 10 to 12 of the pseudo-code in Figure 12.10 whether this solution can be inserted into the elite set . Denote by  the maximum number of elite elements. If , then  is simply inserted into . Otherwise, if , then two cases can arise. In the first case, if  is greater than the largest cut weight in , then  is inserted into the pool . In the second case, it is added to the pool  only if it is sufficiently different from all pool elements. In both cases, an elite solution  of smaller weight than  is replaced by  in the pool. There can be one or more such lower-weight solutions in the pool; among those, the chosen solution  is the one most similar to .
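The elite-pool update rule can be sketched as follows. The symmetric-difference similarity measure and the `min_diff` threshold are illustrative assumptions, since the text leaves the notion of "sufficiently different" generic.

```python
def update_elite_set(pool, S, w_S, max_size, min_diff=2):
    """Elite-set update for GRASP with path-relinking (a sketch).

    pool     : list of (elite_cut_set, cut_weight) pairs
    S, w_S   : candidate solution (a vertex set) and its cut weight
    min_diff : symmetric-difference size below which two cuts are
               considered too similar (an illustrative threshold)
    """
    if len(pool) < max_size:                 # pool not yet full: just insert
        pool.append((set(S), w_S))
        return pool
    best_w = max(wt for _, wt in pool)
    too_similar = any(len(S ^ E) < min_diff for E, _ in pool)
    # accept if S beats the best elite weight, or differs enough from all
    if w_S > best_w or not too_similar:
        worse = [(E, wt) for E, wt in pool if wt < w_S]
        if worse:
            # replace the lower-weight elite solution most similar to S
            victim = min(worse, key=lambda p: len(S ^ p[0]))
            pool.remove(victim)
            pool.append((set(S), w_S))
    return pool
```

If no elite solution has smaller weight than the candidate, the pool is left unchanged, so the pool weights never decrease.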
## 12.5 Bibliographical notes
Applications of the 2-path network design problem can be found in the design of communication networks, in which paths with few edges are sought to enforce high reliability and small delays. Dahl and Johannessen (2004) proved that the decision version of the 2-path network design problem is NP-complete and proposed a greedy heuristic based on the linear relaxation of its integer programming formulation. They also gave an exact cutting plane algorithm and presented computational results for randomly generated problems with up to 120 nodes, 7,140 edges, and 60 origin-destination pairs. The numerical results reported in Section 12.1 appeared originally in Ribeiro and Rosseti (2002), where 100 test instances with 70 nodes and 35 origin-destination pairs were randomly generated with the same parameters as those used by Dahl and Johannessen (2004), since neither the instances nor the code of the greedy heuristic were available for a straightforward comparison with the parallel GRASP algorithm. Student's t-test for unpaired observations was applied as in Jain (1991). Efficient parallel cooperative implementations of GRASP heuristics for the 2-path network design problem appeared in Ribeiro and Rosseti (2007), as did the implementation details of the algorithms used in the computational experiments.
The graph planarization problem discussed in Section 12.2 is NP-hard, see Liu and Geldmacher (1977). Applications of graph planarization include graph drawing, such as in CASE tools (Tamassia and Di Battista, 1988), automated graphical display systems, and numerous layout problems, such as circuit layout and the layout of industrial facilities (Hassan and Hogg, 1987). A survey of some of these applications appeared in Mutzel (1994). The two-phase GT heuristic for graph planarization was originally proposed by Goldschmidt and Takvorian (1994). Since the decision version of the problem of finding a maximum induced bipartite subgraph of an overlap graph is NP-complete (Sarrafzadeh and Lee, 1989), a greedy algorithm is used by the two-phase heuristic GT to construct a maximal induced bipartite subgraph of the overlap graph. Finding the maximum independent set of an overlap graph has been shown by Gavril (1973) to be polynomially solvable in time O( | E |3), where | E | is the number of vertices of the overlap graph H = (E, I) (see Golumbic (1980) for another description of this algorithm). The GRASP heuristic for graph planarization was developed by Resende and Ribeiro (1997), where extensive computational results are reported. The code is detailed in, and available from, Ribeiro and Resende (1999).
The bandwidth packing problem introduced in Section 12.3 can be solved in polynomial time (Ouorou et al., 2000) if the cost function on each edge is convex. However, the problem is NP-hard if the flows are required to be integral (Even et al., 1976) or if each commodity is required to follow a single path from its source to its destination (Chlamtac et al., 1994). The cost function adopted in model (12.1)-(12.5) is the same piecewise linear function originally proposed by Fortz and Thorup (2000).
Several heuristics have been proposed for different variants of the bandwidth packing problem. One of the first algorithms for routing virtual circuits in communication networks was proposed by Yee and Lin (1992). Resende and Resende (1999) proposed a GRASP for frame relay permanent virtual circuit routing that differs from the one described in this chapter. Sung and Park (1995) developed a Lagrangean heuristic for a similar variant of this problem. Laguna and Glover (1993) considered a bandwidth packing problem in which calls are assigned to paths in a capacitated graph, such that capacities are not violated and some measure of the total profit is maximized. They developed a tabu search algorithm that makes use of an efficient implementation of a k-shortest path algorithm.
Amiri et al. (1999) proposed another formulation for the bandwidth packing problem, considering both revenue losses and costs associated with communication delays as part of the objective. A heuristic procedure based on Lagrangean relaxation was applied to find bounds and solutions. Shyur and Wen (2001) proposed a tabu search algorithm for optimizing the system of virtual paths. The objective was to minimize the maximum link load, while requiring that each route visit the minimum number of hubs. The load of a link is defined as the sum of the demands of the virtual paths that traverse it.
A number of exact approaches for solving variants of the bandwidth packing problem have also appeared in the literature. Parker and Ryan (1994) described a branch-and-bound procedure for optimally solving a bandwidth packing problem, in which the linear relaxation of the associated integer programming problem is solved by using column generation. LeBlanc et al. (1999) addressed packet switched telecommunication networks, considering restrictions on paths and flows: hop limits, node and link capacity constraints, and high- and low-priority flows. They minimize the expected queuing time and do not impose integrality constraints on the flows. Dahl et al. (1999) studied a network configuration problem in telecommunications, searching for paths in a capacitated network to accommodate a given traffic demand matrix. Their model also involves an intermediate pipe layer. The problem is formulated as an integer linear program. An associated integral polytope is studied and different classes of facets are described. Barnhart et al. (2000) proposed a branch-and-cut-and-price algorithm for origin-destination integer multicommodity flow problems. This problem is a constrained version of the linear multicommodity network flow problem, in which each flow can use only one path from its origin to its destination.
The GRASP algorithm for the bandwidth packing problem discussed in this chapter was originally proposed by Resende and Ribeiro (2003a), where computational results illustrating the trade-offs between different implementation strategies and their application in practice are reported in detail. The construction mechanism based on heuristic-biased stochastic sampling was introduced by Bresina (1996), in which the candidates are ranked according to the greedy function. Binato et al. (2002) also used this selection procedure, but restricted to elements of the RCL.
The GRASP with path-relinking heuristic for the maximum cut problem, discussed in this chapter, was originally proposed in Festa et al. (2002). In that paper, a variable neighborhood search (VNS) heuristic for MAX-CUT and its hybridization with path-relinking were also proposed.
The decision version of the MAX-CUT problem was proved to be NP-complete by Karp (1972). Applications of MAX-CUT are found in VLSI design and statistical physics, see, e.g., Barahona et al. (1988), Chang and Du (1987), Chen et al. (1983), and Pinter (1984), among others. The reader is referred to Poljak and Tuza (1995) for an introductory survey of MAX-CUT.
The idea that the MAX-CUT problem can be naturally relaxed to a semidefinite programming problem was first observed by Lovász (1979) and Shor (1987). Goemans and Williamson (1995) proposed a randomized algorithm that uses semidefinite programming to achieve a performance guarantee of 0.87856 if the weights are non-negative. Algorithms for solving the semidefinite programming relaxation of MAX-CUT are particularly efficient because they explore the structure of the problem. One approach along this line is the use of interior point methods (Benson et al., 2000; Fujisawa et al., 1997; 2000).
Other nonlinear programming approaches have also been presented for the MAX-CUT semidefinite programming relaxation; see Helmberg and Rendl (2000) and Homer and Peinado (1997). Homer and Peinado (1997) reformulated the constrained problem as an unconstrained one and used the standard steepest-ascent method on the latter. A variant of the Homer and Peinado algorithm was proposed by Burer and Monteiro (2001). Their idea is based on a constrained nonlinear programming reformulation of the MAX-CUT semidefinite programming relaxation obtained by a change of variables.
Burer et al. (2001) proposed a rank-2 relaxation heuristic for MAX-CUT and described the circut computer code that produces better solutions in practice than the randomized algorithm of Goemans and Williamson.
Following the paper by Festa et al. (2002) that proposed GRASP, path-relinking, and variable neighborhood search for the MAX-CUT, other heuristics based on metaheuristic concepts were proposed, including a hierarchical social heuristic (Duarte et al., 2004), ant colony optimization (Gao et al., 2008), scatter search (Martí et al., 2009), memetic algorithms (Wu and Hao, 2012), genetic algorithms (Seo et al., 2012), tabu search (Kochenberger et al., 2013), and breakout local search (Benlic and Hao, 2013).
References
A. Amiri, E. Rolland, and R. Barkhi. Bandwidth packing with queueing delay costs: Bounding and heuristic solution procedures. European Journal of Operational Research, 112:635–645, 1999.
F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt. An application of combinatorial optimization to statistical physics and circuit layout design. Operations Research, 36:493–513, 1988.
C. Barnhart, C.A. Hane, and P.H. Vance. Using branch-and-price-and-cut to solve origin-destination integer multicommodity flow problems. Operations Research, 48:318–326, 2000.
U. Benlic and J.-K. Hao. Breakout local search for the Max-Cut problem. Engineering Applications of Artificial Intelligence, 26:1162–1173, 2013.
S. Benson, Y. Ye, and X. Zhang. Solving large-scale sparse semidefinite programs for combinatorial optimization. SIAM Journal on Optimization, 10:443–461, 2000.
S. Binato, W.J. Hery, D. Loewenstern, and M.G.C. Resende. A GRASP for job shop scheduling. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics, pages 59–79. Kluwer Academic Publishers, Boston, 2002.
J.L. Bresina. Heuristic-biased stochastic sampling. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 271–278, Portland, 1996. Association for the Advancement of Artificial Intelligence.
S. Burer and R.D.C. Monteiro. A projected gradient algorithm for solving the Max-Cut SDP relaxation. Optimization Methods and Software, 15:175–200, 2001.
S. Burer, R.D.C. Monteiro, and Y. Zhang. Rank-two relaxation heuristics for MAX-CUT and other binary quadratic programs. SIAM Journal on Optimization, 12:503–521, 2001.
K.C. Chang and D.-Z. Du. Efficient algorithms for layer assignment problems. IEEE Transactions on Computer-Aided Design, CAD-6:67–78, 1987.
R. Chen, Y. Kajitani, and S. Chan. A graph-theoretic via minimization algorithm for two-layer printed circuit boards. IEEE Transactions on Circuits and Systems, CAS-30:284–299, 1983.
I. Chlamtac, A. Faragó, and T. Zhang. Optimizing the system of virtual paths. IEEE/ACM Transactions on Networking, 2:581–587, 1994.
G. Dahl and B. Johannessen. The 2-path network design problem. Networks, 43:190–199, 2004.
G. Dahl, A. Martin, and M. Stoer. Routing through virtual paths in layered telecommunication networks. Operations Research, 47:693–702, 1999.
A. Duarte, F. Fernández, Á. Sánchez, and A. Sanz. A hierarchical social metaheuristic for the Max-Cut problem. In J. Gottlieb and G.R. Raidl, editors, Evolutionary computation in combinatorial optimization, volume 3004 of Lecture Notes in Computer Science, pages 84–94. Springer, Berlin, 2004.
S. Even, A. Itai, and A. Shamir. On the complexity of timetable and multicommodity flow problems. SIAM Journal on Computing, 5:691–703, 1976.
P. Festa, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Randomized heuristics for the MAX-CUT problem. Optimization Methods and Software, 17:1033–1058, 2002.
B. Fortz and M. Thorup. Increasing Internet capacity using local search. Computational Optimization and Applications, 29:13–48, 2000.
K. Fujisawa, M. Kojima, and K. Nakata. Exploiting sparsity in primal-dual interior-point methods for semidefinite programming. Mathematical Programming, 79:235–253, 1997.
K. Fujisawa, M. Fukuda, M. Kojima, and K. Nakata. Numerical evaluation of SDPA (Semidefinite Programming Algorithm). In H. Frenk, K. Roos, T. Terlaky, and S. Zhang, editors, High performance optimization, pages 267–301. Springer, Boston, 2000.
L. Gao, Y. Zeng, and A. Dong. An ant colony algorithm for solving Max-cut problem. Progress in Natural Science, 18:1173–1178, 2008.
F. Gavril. Algorithms for a maximum clique and a maximum independent set of a circle graph. Networks, 3:261–273, 1973.
M.X. Goemans and D.P. Williamson. Improved approximation algorithms for Max-Cut and Satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115–1145, 1995.
O. Goldschmidt and A. Takvorian. An efficient graph planarization two-phase heuristic. Networks, 24:69–73, 1994.
M.C. Golumbic. Algorithmic graph theory and perfect graphs. Academic Press, New York, 1980.
M.M. Hassan and G.L. Hogg. A review of graph theory applications to the facilities layout problem. Omega, 15:291–300, 1987.
C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10:673–696, 2000.
S. Homer and M. Peinado. Two distributed memory parallel approximation algorithms for Max-Cut. Journal of Parallel and Distributed Computing, 46:48–61, 1997.
R. Jain. The art of computer systems performance analysis: Techniques for experimental design, measurement, simulation, and modeling. Wiley, New York, 1991.
R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of computer computations. Plenum Press, New York, 1972.
G.A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover. Solving large scale Max Cut problems via tabu search. Journal of Heuristics, 19:565–571, 2013.
M. Laguna and F. Glover. Bandwidth packing: A tabu search approach. Management Science, 39:492–500, 1993.
L.J. LeBlanc, J. Chifflet, and P. Mahey. Packet routing in telecommunication networks with path and flow restrictions. INFORMS Journal on Computing, 11:188–197, 1999.
P.C. Liu and R.C. Geldmacher. On the deletion of nonplanar edges of a graph. In Proceedings of the 10th Southeastern Conference on Combinatorics, Graph Theory and Computing, pages 727–738, Boca Raton, 1977.
L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, IT-25:1–7, 1979.
R. Martí, A. Duarte, and M. Laguna. Advanced scatter search for the MAX-CUT problem. INFORMS Journal on Computing, 21:26–38, 2009.
P. Mutzel. The maximum planar subgraph problem. PhD thesis, Universität zu Köln, Cologne, 1994.
A. Ouorou, P. Mahey, and J.P. Vial. A survey of algorithms for convex multicommodity flow problems. Management Science, 46:126–147, 2000.
M. Parker and J. Ryan. A column generation algorithm for bandwidth packing. Telecommunication Systems, 2:185–195, 1994.
R.Y. Pinter. Optimal layer assignment for interconnect. Advances in VLSI and Computer Systems, 1:123–137, 1984.
S. Poljak and Z. Tuza. Maximum cuts and largest bipartite subgraphs. In W. Cook, L. Lovász, and P. Seymour, editors, Papers from the special year on Combinatorial Optimization, volume 20 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 181–244. American Mathematical Society, Providence, 1995.
L.I.P. Resende and M.G.C. Resende. A GRASP for frame relay permanent virtual circuit routing. In C.C. Ribeiro and P. Hansen, editors, Extended Abstracts of the III Metaheuristics International Conference, pages 397–401, Angra dos Reis, 1999.
M.G.C. Resende and C.C. Ribeiro. A GRASP for graph planarization. Networks, 29:173–189, 1997.
M.G.C. Resende and C.C. Ribeiro. A GRASP with path-relinking for private virtual circuit routing. Networks, 41:104–114, 2003a.
C.C. Ribeiro and M.G.C. Resende. Algorithm 797: Fortran subroutines for approximate solution of graph planarization problems using GRASP. ACM Transactions on Mathematical Software, 25:341–352, 1999.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro and I. Rosseti. Efficient parallel cooperative implementations of GRASP heuristics. Parallel Computing, 33:21–35, 2007.
M. Sarrafzadeh and D. Lee. A new approach to topological via minimization. IEEE Transactions on Computer-Aided Design, 8:890–900, 1989.
K. Seo, S. Hyun, and Y.-H. Kim. A spanning tree-based encoding of the MAX CUT problem for evolutionary search. In C.A.C. Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, editors, Parallel problem solving from nature - Part I, volume 7491 of Lecture Notes in Computer Science, pages 510–518. Springer, Berlin, 2012.
N.Z. Shor. Quadratic optimization problems. Soviet Journal of Computer and Systems Science, 25:1–11, 1987.
C.-C. Shyur and U.-P. Wen. Optimizing the system of virtual paths by tabu search. European Journal of Operational Research, 129:650–662, 2001.
C.S. Sung and S.K. Park. An algorithm for configuring embedded networks in reconfigurable telecommunication networks. Telecommunication Systems, 4:241–271, 1995.
R. Tamassia and G. Di Battista. Automatic graph drawing and readability of diagrams. IEEE Transactions on Systems, Man, and Cybernetics, 18:61–79, 1988.
Q. Wu and J.-K. Hao. A memetic approach for the Max-Cut problem. In C.A.C. Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, editors, Parallel problem solving from nature - Part II, volume 7492 of Lecture Notes in Computer Science, pages 297–306. Springer, Berlin, 2012.
J.R. Yee and F.Y.S. Lin. A routing algorithm for virtual circuit data networks with multiple sessions per O-D pair. Networks, 22:185–208, 1992.
References
E.H.L. Aarts and J. Korst. Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing . Wiley, New York, 1989. MATH
B. Adenso-Díaz, S. García-Carbajal, and S.M. Gupta. A path-relinking approach for a bi-criteria disassembly sequencing problem. Computers & Operations Research , 35:3989–3997, 2008. MATH
R. Agrawal and R. Srikant. Fast algorithms for mining association rules. In Proceedings of the 20th International Conference on Very Large Data Bases , pages 487–499. Morgan Kaufmann Publishers, 1994.
R.M. Aiex and M.G.C. Resende. Parallel strategies for GRASP with path-relinking. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers , pages 301–331. Springer, New York, 2005.
R.M. Aiex, M.G.C. Resende, and C.C. Ribeiro. Probability distribution of solution time in GRASP: An experimental investigation. Journal of Heuristics , 8:343–373, 2002. MATH
R.M. Aiex, S. Binato, and M.G.C. Resende. Parallel GRASP with path-relinking for job shop scheduling. Parallel Computing , 29:393–430, 2003. MathSciNet
R.M. Aiex, M.G.C. Resende, P.M. Pardalos, and G. Toraldo. GRASP with path relinking for three-index assignment. INFORMS Journal on Computing , 17: 224–247, 2005. MathSciNetMATH
R.M. Aiex, M.G.C. Resende, and C.C. Ribeiro. TTTPLOTS: A perl program to create time-to-target plots. Optimization Letters , 1:355–366, 2007. MathSciNetMATH
E. Alba. Parallel metaheuristics: A new class of algorithms . Wiley, New York, 2005. MATH
D. Aloise and C.C. Ribeiro. Adaptive memory in multistart heuristics for multicommodity network design. Journal of Heuristics , 17:153–179, 2011. MATH
G.A. Alvarez-Perez, J.L. González-Velarde, and J.W. Fowler. Crossdocking – Just in time scheduling: An alternative solution approach. Journal of the Operational Research Society , 60:554–564, 2008. MATH
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A GRASP algorithm for constrained two-dimensional non-guillotine cutting problems. Journal of the Operational Research Society , 56:414–425, 2004. MATH
R. Alvarez-Valdes, E. Crespo, J.M. Tamarit, and F. Villa. GRASP and path relinking for project scheduling under partially renewable resources. European Journal of Operational Research , 189:1153–1170, 2008a. MATH
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. Reactive GRASP for the strip-packing problem. Computers & Operations Research , 35:1065–1083, 2008b. MATH
R. Alvarez-Valdes, F. Parreño, and J.M. Tamarit. A GRASP/path relinking algorithm for two- and three-dimensional multiple bin-size bin packing problems. Computers & Operations Research , 40:3081–3090, 2013. MathSciNet
A.C. Alvim and C.C. Ribeiro. Load balancing for the parallelization of the GRASP metaheuristic. In Proceedings of the X Brazilian Symposium on Computer Architecture , pages 279–282, Búzios, 1998.
A. Amiri, E. Rolland, and R. Barkhi. Bandwidth packing with queueing delay costs: Bounding and heuristic solution procedures. European Journal of Operational Research , 112:635–645, 1999. MATH
K.P. Anagnostopoulos, P.D. Chatzoglou, and S. Katsavounis. A reactive greedy randomized adaptive search procedure for a mixed integer portfolio optimization problem. Managerial Finance , 36:1057–1065, 2010.
D.V. Andrade and M.G.C. Resende. GRASP with path-relinking for network migration scheduling. In Proceedings of the International Network Optimization Conference , Spa, 2007a. URL http://bit.ly/1NfaTK0 . Last visited on April 16, 2016.
D.V. Andrade and M.G.C. Resende. GRASP with evolutionary path-relinking. In Proceedings of the Seventh Metaheuristics International Conference , Montreal, 2007b.
L.M.M.S. Andrade, R.B. Xavier, L.A.F. Cabral, and A.A. Formiga. Parallel construction for continuous GRASP optimization on GPUs. In Anais do XLVI Simpósio Brasileiro de Pesquisa Operacional , pages 2393–2404, Salvador, 2014. URL http://bit.ly/1SS3lte . Last visited on April 16, 2016.
A.A. Andreatta and C.C. Ribeiro. Heuristics for the phylogeny problem. Journal of Heuristics , 8:429–447, 2002. MATH
C.H. Antunes, E. Oliveira, and P. Lima. A multi-objective GRASP procedure for reactive power compensation planning. Optimization and Engineering , 15: 199–215, 2014. MATH
D.L. Applegate, R.E. Bixby, V. Chvátal, and W.J. Cook. The traveling salesman problem: A computational study . Princeton University Press, Princeton, 2006. MATH
T.M.U. Araújo, L.M.M.S. Andrade, C. Magno, L.A.F. Cabral, R.Q. Nascimento, and C.N. Meneses. DC-GRASP: Directing the search on continuous-GRASP. Journal of Heuristics , 2015. doi: 10.1007/ s10732-014-9278-6. Published online on 6 January 2015.
V.A. Armentano and O.C.B. Araujo. GRASP with memory-based mechanisms for minimizing total tardiness in single machine scheduling with setup times. Journal of Heuristics , 12:427–446, 2006.
J.E.C. Arroyo, P.S. Vieira, and D.S. Vianna. A GRASP algorithm for the multi-criteria minimum spanning tree problem. Annals of Operations Research , 159:125–133, 2008. MathSciNetMATH
L. Bahiense, G.C. Oliveira, M. Pereira, and S. Granville. A mixed integer disjunctive model for transmission network expansion. IEEE Transactions on Power Systems , 16:560–565, 2001.
E. Balas and M.J. Saltzman. An algorithm for the three-index assignment problem. Operations Research , 39:150–161, 1991. MathSciNetMATH
J. Bang-Jensen, G. Gutin, and A. Yeo. When the greedy algorithm fails. Discrete Optimization , 1:121–127, 2004. MathSciNetMATH
F. Barahona, M. Grötschel, M. Jürgen, and G. Reinelt. An application of combinatorial optimization to statistical optimization and circuit layout design. Operations Research , 36:493–513, 1988. MATH
H. Barbalho, I. Rosseti, S.L. Martins, and A. Plastino. A hybrid data mining GRASP with path-relinking. Computers & Operations Research , 40:3159–3173, 2013.
J.F. Bard, Y. Shao, and A.I. Jarrah. A sequential GRASP for the therapist routing and scheduling problem. Journal of Scheduling , 17:109–133, 2014. MathSciNetMATH
C. Barnhart, C.A. Hane, and P.H. Vance. Using branch-and-price-and-cut to solve origin-destination integer multicommodity flow problems. Operations Research , 48:318–326, 2000.
V. Bartkutė and L. Sakalauskas. Statistical inferences for termination of Markov type random search algorithms. Journal of Optimization Theory and Applications , 141:475–493, 2009. MathSciNetMATH
V. Bartkutė, G. Felinskas, and L. Sakalauskas. Optimality testing in stochastic and heuristic algorithms. Technical Report 12, Vilnius Gediminas Technical University, Vilnius, 2006.
R. Battiti and G. Tecchiolli. Parallel biased search for combinatorial optimization: Genetic algorithms and tabu. Microprocessors and Microsystems , 16:351–367, 1992.
J.E. Beasley. An algorithm for set-covering problems. European Journal of Operational Research , 31:85–93, 1987. MathSciNetMATH
J.E. Beasley. OR-Library: Distributing test problems by electronic mail. Journal of the Operational Research Society , 41:1069–1072, 1990a.
J.E. Beasley. A Lagrangean heuristic for set-covering problems. Naval Research Logistics , 37:151–164, 1990b. MathSciNetMATH
J.E. Beasley. Lagrangean relaxation. In C.R. Reeves, editor, Modern heuristic techniques for combinatorial problems , pages 243–303. Blackwell Scientific Publications, Oxford, 1993.
U. Benlic and J.-K. Hao. Breakout local search for the Max-Cut problem. Engineering Applications of Artificial Intelligence , 26:1162–1173, 2013. MATH
S. Benson, Y. Ye, and X. Zhang. Solving large-scale sparse semidefinite programs for combinatorial optimization. SIAM Journal on Optimization , 10:443–461, 2000. MathSciNetMATH
D. Berger, B. Gendron, J.-Y Potvin, S. Raghavan, and P. Soriano. Tabu search for a network loading problem with multiple facilities. Journal of Heuristics , 6:253–267., 2000.
D. Bertsimas and R. Weismantel. Optimization over integers . Dynamic Ideas, Belmont, 2005.
S. Binato and G.C. Oliveira. A reactive GRASP for transmission network expansion planning. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 81–100. Kluwer Academic Publishers, Boston, 2002.
S. Binato, W.J. Hery, D. Loewenstern, and M.G.C. Resende. A GRASP for job shop scheduling. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 59–79. Kluwer Academic Publishers, Boston, 2002.
E.G. Birgin and J.M. Martínez. Large-scale active-set box-constrained optimization method with spectral projected gradients. Computational Optimization and Applications , 23:101–125, 2002. MathSciNetMATH
E.G. Birgin, E.M. Gozzi, M.G.C. Resende, and R.M.A. Silva. Continuous GRASP with a local active-set method for bound-constrained global optimization. Journal of Global Optimization , 48:289–310, 2010. MathSciNetMATH
C.G.E. Boender and A.H.G. Rinnooy Kan. Bayesian stopping rules for multistart global optimization methods. Mathematical Programming , 37: 59–80, 1987. MathSciNetMATH
J.A. Bondy and U.S.R. Murty. Graph theory with applications . Elsevier, 1976. MATH
M. Boudia, M.A.O. Louly, and C. Prins. A reactive GRASP and path relinking for a combined production–distribution problem. Computers & Operations Research , 34:3402–3419, 2007. MATH
J.L. Bresina. Heuristic-biased stochastic sampling. In Proceedings of the Thirteenth National Conference on Artificial Intelligence , pages 271–278, Portland, 1996. Association for the Advancement of Artificial Intelligence.
S. Burer and R.D.C. Monteiro. A projected gradient algorithm for solving the Max-Cut SDP relaxation. Optimization Methods and Software , 15: 175–200, 2001.
S. Burer, R.D.C. Monteiro, and Y. Zhang. Rank-two relaxation heuristics for MAX-CUT and other binary quadratic programs. SIAM Journal on Optimization , 12:503–521, 2001.
R.E. Burkard and K. Fröhlich. Some remarks on 3-dimensional assignment problems. Methods of Operations Research , 36:31–36, 1980.
R.E. Burkard and R. Rudolf. Computational investigations on 3-dimensional axial assignment problems. Belgian Journal of Operational Research, Statistics and Computer Science , 32:85–98, 1993.
R.E. Burkard, R. Rudolf, and G.J. Woeginger. Three-dimensional axial assignment problems with decomposable cost coefficients. Discrete Applied Mathematics , 65:123–139, 1996.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques . Springer, New York, 2005.
E.K. Burke and G. Kendall, editors. Search methodologies: Introductory tutorials in optimization and decision support techniques . Springer, New York, 2nd edition, 2014.
S.I. Butenko, C.W. Commander, and P.M. Pardalos. A GRASP for broadcast scheduling in ad-hoc TDMA networks. In Proceedings of the International Conference on Computing, Communications, and Control Technologies , volume 5, pages 322–328, Austin, 2004.
R.G. Cano, G. Kunigami, C.C. de Souza, and P.J. de Rezende. A hybrid GRASP heuristic to construct effective drawings of proportional symbol maps. Computers & Operations Research , 40:1435–1447, 2013.
S.A. Canuto, M.G.C. Resende, and C.C. Ribeiro. Local search with perturbations for the prize-collecting Steiner tree problem in graphs. Networks , 38:50–58, 2001.
B. Cao and F. Glover. Tabu search and ejection chains – Application to a node weighted version of the cardinality-constrained TSP. Management Science , 43:908–921, 1997.
S. Casey and J. Thompson. GRASPing the examination scheduling problem. In E. Burke and P. De Causmaecker, editors, Practice and theory of automated timetabling IV , volume 2740 of Lecture Notes in Computer Science , pages 232–244. Springer, Berlin, 2003.
L. Cavique, C. Rego, and I. Themido. Subgraph ejection chains and tabu search for the crew scheduling problem. Journal of the Operational Research Society , 50:608–616, 1999.
J.M. Chambers, W.S. Cleveland, B. Kleiner, and P.A. Tukey. Graphical methods for data analysis . Duxbury Press, Boston, 1983.
K.C. Chang and D.-Z. Du. Efficient algorithms for layer assignment problems. IEEE Transactions on Computer-Aided Design , CAD-6:67–78, 1987.
W.A. Chaovalitwongse, C.A.S Oliveira, B. Chiarini, P.M. Pardalos, and M.G.C. Resende. Revised GRASP with path-relinking for the linear ordering problem. Journal of Combinatorial Optimization , 22:572–593, 2011.
I. Charon and O. Hudry. The noising method: A new method for combinatorial optimization. Operations Research Letters , 14:133–137, 1993.
I. Charon and O. Hudry. The noising methods: A survey. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 245–261. Kluwer Academic Publishers, Boston, 2002.
R. Chen, Y. Kajitani, and S. Chan. A graph-theoretic via minimization algorithm for two-layer printed circuit boards. IEEE Transactions on Circuits and Systems , CAS-30:284–299, 1983.
M. Chica, O. Cordón, S. Damas, and J. Bautista. A multiobjective GRASP for the 1/3 variant of the time and space assembly line balancing problem. In N. García-Pedrajas, F. Herrera, C. Fyfe, J. Benítez, and M. Ali, editors, Trends in applied intelligent systems , volume 6098 of Lecture Notes in Computer Science , pages 656–665. Springer, Berlin, 2010.
I. Chlamtac, A. Faragó, and T. Zhang. Optimizing the system of virtual paths. IEEE/ACM Transactions on Networking , 2:581–587, 1994.
V. Chvátal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research , 4:233–235, 1979.
A. Cobham. The intrinsic computational difficulty of functions. In Y. Bar-Hillel, editor, Proceedings of the 1964 International Congress for Logical Methodology and Philosophy of Science , pages 24–30, Amsterdam, 1964. North Holland.
C.W. Commander, S.I. Butenko, P.M. Pardalos, and C.A.S. Oliveira. Reactive GRASP with path relinking for broadcast scheduling. In Proceedings of the 40th Annual International Telemetry Conference , pages 792–800, San Diego, 2004.
S.A. Cook. The complexity of theorem-proving procedures. In M.A. Harrison, R.B. Banerji, and J.D. Ullman, editors, Proceedings of the Third Annual ACM Symposium on Theory of Computing , pages 151–158, New York, 1971. ACM.
R. Cordone and G. Lulli. A GRASP metaheuristic for microarray data analysis. Computers & Operations Research , 40:3108–3120, 2013.
T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms . MIT Press, Cambridge, 3rd edition, 2009.
C. Cotta and A.J. Fernández. A hybrid GRASP–evolutionary algorithm approach to Golomb ruler search. In X. Yao, E.K. Burke, J.A. Lozano, J. Smith, J.J. Merelo-Guervós, J.A. Bullinaria, J.E. Rowe, P. Tiňo, A. Kabán, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature , volume 3242 of Lecture Notes in Computer Science , pages 481–490. Springer, Berlin, 2004.
Y. Crama and F.C.R. Spieksma. Approximation algorithms for three-dimensional assignment problems with triangle inequalities. European Journal of Operational Research , 60:273–279, 1992.
G.L. Cravo, G.M. Ribeiro, and L.A.N. Lorena. A greedy randomized adaptive search procedure for the point-feature cartographic label placement. Computers & Geosciences , 34:373–386, 2008.
G.A. Croes. A method for solving traveling-salesman problems. Operations Research , 6:791–812, 1958.
W.B. Crowston, F. Glover, G.L. Thompson, and J.D. Trawick. Probabilistic and parametric learning combinations of local job shop scheduling rules. Technical Report 117, Carnegie-Mellon University, Pittsburgh, 1963.
V.-D. Cung, S.L. Martins, C.C. Ribeiro, and C. Roucairol. Strategies for the parallel implementation of metaheuristics. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 263–308. Kluwer Academic Publishers, Boston, 2002.
V.B. da Silva, M. Ritt, J.B. da Paz Carvalho, M.J. Brusso, and J.T. da Silva. Identificação da maior elipse com excentricidade prescrita inscrita em um polígono não convexo através do Continuous GRASP. Revista Brasileira de Computação Aplicada , 4:61–70, 2012.
G. Dahl and B. Johannessen. The 2-path network design problem. Networks , 43:190–199, 2004.
G. Dahl, A. Martin, and M. Stoer. Routing through virtual paths in layered telecommunication networks. Operations Research , 47:693–702, 1999.
G.B. Dantzig. Linear programming and extensions . Princeton University Press, Princeton, 1963.
M.M. D'Apuzzo, A. Migdalas, P.M. Pardalos, and G. Toraldo. Parallel computing in global optimization. In E. Kontoghiorghes, editor, Handbook of parallel computing and statistics . Chapman & Hall / CRC, Boca Raton, 2006.
S. Das and S.M. Idicula. Application of reactive GRASP to the biclustering of gene expression data. In Proceedings of the International Symposium on Biocomputing , page 14, Calicut, 2010. ACM.
H. Davoudpour and M. Ashrafi. Solving multi-objective SDST flexible flow shop using GRASP algorithm. The International Journal of Advanced Manufacturing Technology , 44:737–747, 2009.
H. Delmaire, J.A. Díaz, E. Fernández, and M. Ortega. Reactive GRASP and tabu search based heuristics for the single source capacitated plant location problem. INFOR , 37:194–225, 1999.
X. Delorme, X. Gandibleux, and J. Rodriguez. GRASP for set packing problems. European Journal of Operational Research , 153:564–580, 2004.
X. Delorme, X. Gandibleux, and F. Degoutin. Evolutionary, constructive and hybrid procedures for the bi-objective set packing problem. European Journal of Operational Research , 204:206–217, 2010.
Y. Deng and J.F. Bard. A reactive GRASP with path relinking for capacitated clustering. Journal of Heuristics , 17:119–152, 2011.
Y. Deng, J.F. Bard, G.R. Chacon, and J. Stuber. Scheduling back-end operations in semiconductor manufacturing. IEEE Transactions on Semiconductor Manufacturing , 23:210–220, 2010.
S. Dharan and A.S. Nair. Biclustering of gene expression data using reactive greedy randomized adaptive search procedure. BMC Bioinformatics , 10 (Suppl 1):S27, 2009.
R. Diestel. Graph theory . Springer, New York, 2010.
N. Dodd. Slow annealing versus multiple fast annealing runs: An empirical investigation. Parallel Computing , 16:269–272, 1990.
C. Dorea. Stopping rules for a random optimization method. SIAM Journal on Control and Optimization , 28:841–850, 1990.
U. Dorndorf and E. Pesch. Fast clustering algorithms. INFORMS Journal on Computing , 6:141–153, 1994.
S.E. Dreyfus and R.A. Wagner. The Steiner problem in graphs. Networks , 1: 195–201, 1972.
L.M.A. Drummond, L.S. Vianna, M.B. Silva, and L.S. Ochi. Distributed parallel metaheuristics based on GRASP and VNS for solving the traveling purchaser problem. In Proceedings of the Ninth International Conference on Parallel and Distributed Systems , pages 257–263, Chungli, 2002. IEEE.
A. Duarte and R. Martí. Tabu search and GRASP for the maximum diversity problem. European Journal of Operational Research , 178:71–84, 2007.
A. Duarte, F. Fernández, Á. Sánchez, and A. Sanz. A hierarchical social metaheuristic for the Max-Cut problem. In J. Gottlieb and G.R. Raidl, editors, Evolutionary computation in combinatorial optimization , volume 3004 of Lecture Notes in Computer Science , pages 84–94. Springer, Berlin, 2004.
A. Duarte, R. Martí, M.G.C. Resende, and R.M.A. Silva. GRASP with path relinking heuristics for the antibandwidth problem. Networks , 58: 171–189, 2011.
A. Duarte, R. Martí, A. Álvarez, and F. Ángel-Bello. Metaheuristics for the linear ordering problem with cumulative costs. European Journal of Operational Research , 216:270–277, 2012.
A. Duarte, J. Sánchez-Oro, M.G.C. Resende, F. Glover, and R. Martí. GRASP with exterior path relinking for differential dispersion minimization. Information Sciences , 296:46–60, 2015.
A.R. Duarte, C.C. Ribeiro, and S. Urrutia. A hybrid ILS heuristic to the referee assignment problem with an embedded MIP strategy. In T. Bartz-Beielstein, M.J.B. Aguilera, C. Blum, B. Naujoks, A. Roli, G. Rudolph, and M. Sampels, editors, Hybrid metaheuristics , volume 4771 of Lecture Notes in Computer Science , pages 82–95. Springer, Berlin, 2007a.
A.R. Duarte, C.C. Ribeiro, S. Urrutia, and E.H. Haeusler. Referee assignment in sports leagues. In E.K. Burke and H. Rudová, editors, Practice and theory of automated timetabling VI , volume 3867 of Lecture Notes in Computer Science , pages 158–173. Springer, Berlin, 2007b.
C. Duin and S. Voss. The Pilot method: A strategy for heuristic repetition with application to the Steiner problem in graphs. Networks , 34:181–191, 1999.
S. Duni Ekşioğlu, P.M. Pardalos, and M.G.C. Resende. Parallel metaheuristics for combinatorial optimization. In R. Corrêa, I. Dutra, M. Fiallos, and F. Gomes, editors, Models for parallel and distributed computation – Theory, algorithmic techniques and applications , pages 179–206. Kluwer Academic Publishers, Boston, 2002.
K. Easton, G. Nemhauser, and M.A. Trick. The travelling tournament problem: Description and benchmarks. In T. Walsh, editor, Principles and practice of constraint programming , volume 2239 of Lecture Notes in Computer Science , pages 580–585. Springer, Berlin, 2001.
J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics , 17: 449–467, 1965.
J. Edmonds. Matroids and the greedy algorithm. Mathematical Programming , 1:125–136, 1971.
J. Edmonds. Minimum partition of a matroid into independent subsets. Journal of Research, National Bureau of Standards , 69B:67–72, 1965.
H.T. Eikelder, M. Verhoeven, T. Vossen, and E. Aarts. A probabilistic analysis of local search. In I. Osman and J. Kelly, editors, Metaheuristics: Theory and applications , pages 605–618. Kluwer Academic Publishers, Boston, 1996.
M. Essafi, X. Delorme, and A. Dolgui. A reactive GRASP and path relinking for balancing reconfigurable transfer lines. International Journal of Production Research , 50:5213–5238, 2012.
S. Even, A. Itai, and A. Shamir. On the complexity of timetable and multicommodity flow problems. SIAM Journal on Computing , 5:691–703, 1976.
H. Faria Jr., S. Binato, M.G.C. Resende, and D.J. Falcão. Transmission network design by a greedy randomized adaptive path relinking approach. IEEE Transactions on Power Systems , 20:43–49, 2005.
T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters , 8:67–71, 1989.
T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization , 6:109–133, 1995.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Technical report, AT&T Bell Laboratories, 1989.
T.A. Feo, M.G.C. Resende, and S.H. Smith. A greedy randomized adaptive search procedure for maximum independent set. Operations Research , 42: 860–878, 1994.
P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 325–367. Kluwer Academic Publishers, Boston, 2002.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part I: Algorithms. International Transactions in Operational Research , 16:1–24, 2009a.
P. Festa and M.G.C. Resende. An annotated bibliography of GRASP, Part II: Applications. International Transactions in Operational Research , 16:131–172, 2009b.
P. Festa and M.G.C. Resende. Hybridizations of GRASP with path-relinking. In E-G. Talbi, editor, Hybrid metaheuristics , volume 434 of Studies in Computational Intelligence , pages 135–155. Springer, New York, 2013.
P. Festa, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Randomized heuristics for the MAX-CUT problem. Optimization Methods and Software , 17:1033–1058, 2002.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted maximum satisfiability problem. Lecture Notes in Computer Science , 3503:367–379, 2005.
P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted MAXSAT problem. ACM Journal of Experimental Algorithmics , 11:1–16, 2006.
M.L. Fisher. The Lagrangean relaxation method for solving integer programming problems. Management Science , 50:1861–1871, 2004.
C. Fleurent and F. Glover. Improved constructive multistart strategies for the quadratic assignment problem using adaptive memory. INFORMS Journal on Computing , 11:198–204, 1999.
E. Fonseca, R. Fuchsuber, L.F.M. Santos, A. Plastino, and S.L. Martins. Exploring the hybrid metaheuristic DM-GRASP for efficient server replication for reliable multicast. In International Conference on Metaheuristics and Nature Inspired Computing , Hammamet, 2008.
B. Fortz and M. Thorup. Increasing Internet capacity using local search. Computational Optimization and Applications , 29:13–48, 2004.
A.M. Frieze. Complexity of a 3-dimensional assignment problem. European Journal of Operational Research , 13:161–164, 1983.
R.D. Frinhani, R.M. Silva, G.R. Mateus, P. Festa, and M.G.C. Resende. GRASP with path-relinking for data clustering: A case study for biological data. In P.M. Pardalos and S. Rebennack, editors, Experimental algorithms , volume 6630 of Lecture Notes in Computer Science , pages 410–420. Springer, Berlin, 2011.
K. Fujisawa, M. Kojima, and K. Nakata. Exploiting sparsity in primal-dual interior-point methods for semidefinite programming. Mathematical Programming , 79:235–253, 1997.
K. Fujisawa, M. Fukuda, M. Kojima, and K. Nakata. Numerical evaluation of SDPA (Semidefinite Programming Algorithm). In H. Frenk, K. Roos, T. Terlaky, and S. Zhang, editors, High performance optimization , pages 267–301. Springer, Boston, 2000.
L. Gao, Y. Zeng, and A. Dong. An ant colony algorithm for solving Max-cut problem. Progress in Natural Science , 18:1173–1178, 2008.
M.R. Garey and D.S. Johnson. Approximation algorithms for combinatorial problems: An annotated bibliography. In J.F. Traub, editor, Algorithms and complexity: New directions and recent results , pages 41–52. Academic Press, Orlando, 1976.
M.R. Garey and D.S. Johnson. Strong NP-completeness results: Motivation, examples, and implications. Journal of the ACM , 25:499–508, 1978.
M.R. Garey and D.S. Johnson. Computers and intractability . Freeman, San Francisco, 1979.
F. Gavril. Algorithms for a maximum clique and a maximum independent set of a circle graph. Networks , 3:261–273, 1973.
A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM: Parallel virtual machine, A user's guide and tutorial for networked parallel computing . Scientific and Engineering Computation. MIT Press, Cambridge, 1994.
M. Gendreau and J.-Y. Potvin, editors. Handbook of metaheuristics . Springer, New York, 2nd edition, 2010.
J.B. Ghosh. Computational aspects of the maximum diversity problem. Operations Research Letters , 19:175–181, 1996.
F. Glover. Tabu Search - Part I. ORSA Journal on Computing , 1:190–206, 1989.
F. Glover. Tabu Search - Part II. ORSA Journal on Computing , 2:4–32, 1990.
F. Glover. Multilevel tabu search and embedded search neighborhoods for the traveling salesman problem. Technical report, University of Colorado, Boulder, 1991.
F. Glover. Ejection chains, reference structures and alternating path methods for traveling salesman problems. Discrete Applied Mathematics , 65: 223–253, 1996a.
F. Glover. Tabu search and adaptive memory programming – Advances, applications and challenges. In R.S. Barr, R.V. Helgason, and J.L. Kennington, editors, Interfaces in computer science and operations research , pages 1–75. Kluwer Academic Publishers, Boston, 1996b.
F. Glover. Multi-start and strategic oscillation methods – Principles to exploit adaptive memory. In M. Laguna and J.L. González-Velarde, editors, Computing tools for modeling, optimization and simulation: Interfaces in computer science and operations research , pages 1–24. Kluwer Academic Publishers, Boston, 2000.
F. Glover. Exterior path relinking for zero-one optimization. International Journal of Applied Metaheuristic Computing , 5(3):1–8, 2014.
F. Glover and G. Kochenberger, editors. Handbook of metaheuristics . Kluwer Academic Publishers, Boston, 2003.
F. Glover and M. Laguna. Tabu search . Kluwer Academic Publishers, Boston, 1997.
F. Glover and A.P. Punnen. The travelling salesman problem: New solvable cases and linkages with the development of approximation algorithms. Journal of the Operational Research Society , 48:502–510, 1997.
F. Glover, M. Laguna, and R. Martí. Fundamentals of scatter search and path relinking. Control and Cybernetics , 39:653–684, 2000.
F. Glover, M. Laguna, and R. Martí. Scatter search and path relinking: Advances and applications. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics , pages 1–35. Kluwer Academic Publishers, Boston, 2003.
F. Glover, M. Laguna, and R. Martí. Scatter search and path relinking: Foundations and advanced designs. In G.C. Onwubolu and B.V. Babu, editors, New optimization techniques in engineering , volume 141 of Studies in Fuzziness and Soft Computing , pages 87–100. Springer, Berlin, 2004.
M.X. Goemans and Y. Myung. A catalog of Steiner tree formulations. Networks , 23:19–28, 1993.
M.X. Goemans and D.P. Williamson. Improved approximation algorithms for Max-Cut and Satisfiability problems using semidefinite programming. Journal of the ACM , 42:1115–1145, 1995.
M.X. Goemans and D.P. Williamson. The primal-dual method for approximation algorithms and its application to network design problems. In D. Hochbaum, editor, Approximation algorithms for NP-hard problems , pages 144–191. PWS Publishing Company, Boston, 1996.
B. Goethals and M.J. Zaki. Advances in frequent itemset mining implementations: Introduction to FIMI03. In B. Goethals and M.J. Zaki, editors, Proceedings of the IEEE ICDM 2003 Workshop on Frequent Itemset Mining Implementations , pages 1–12, Melbourne, 2003.
D.E. Goldberg. Genetic algorithms in search, optimization and machine learning . Addison-Wesley, Reading, 1989.
O. Goldschmidt and A. Takvorian. An efficient graph planarization two-phase heuristic. Networks , 24:69–73, 1994.
M.C. Golumbic. Algorithmic graph theory and perfect graphs . Academic Press, New York, 1980.
F.C. Gomes, P. Pardalos, C.S. Oliveira, and M.G.C. Resende. Reactive GRASP with path relinking for channel assignment in mobile phone networks. In Proceedings of the 5th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications , pages 60–67, Rome, 2001. ACM Press.
G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets, 2003. URL http://bit.ly/1qxiKbl . Last visited on April 16, 2016.
M. Grötschel, L. Lovász, and A. Schrijver. Polynomial algorithms for perfect graphs. Annals of Discrete Mathematics , 21:325–356, 1984.
A.L. Guedes, F.D. Moura Neto, and G.M. Platt. Double Azeotropy: Calculations with Newton-like methods and continuous GRASP (C-GRASP). International Journal of Mathematical Modelling and Numerical Optimisation , 2:387–404, 2011.
G. Gutin and A.P. Punnen, editors. The traveling salesman problem and its variations . Kluwer Academic Publishers, Boston, 2002.
S.L. Hakimi. Steiner's problem in graphs and its applications. Networks , 1: 113–133, 1971.
J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data , pages 1–12, Dallas, 2000. ACM.
J. Han, M. Kamber, and J. Pei. Data mining: Concepts and techniques . Morgan Kaufmann Publishers, San Francisco, 3rd edition, 2011.
P. Hansen. The steepest ascent mildest descent heuristic for combinatorial programming. In Proceedings of the Congress on Numerical Methods in Combinatorial Optimization , pages 70–145, Capri, 1986.
P. Hansen and L. Kaufman. A primal-dual algorithm for the three-dimensional assignment problem. Cahiers du CERO , 15:327–336, 1973.
P. Hansen and N. Mladenović. An introduction to variable neighbourhood search. In S. Voss, S. Martello, I.H. Osman, and C. Roucairol, editors, Metaheuristics: Advances and trends in local search procedures for optimization , pages 433–458. Kluwer Academic Publishers, Boston, 1999.
P. Hansen and N. Mladenović. Developments of variable neighborhood search. In C.C. Ribeiro and P. Hansen, editors, Essays and surveys in metaheuristics , pages 415–439. Kluwer Academic Publishers, Boston, 2002.
P. Hansen and N. Mladenović. Variable neighborhood search. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics , pages 145–184. Kluwer Academic Publishers, Boston, 2003.
J.P. Hart and A.W. Shogan. Semi-greedy heuristics: An empirical study. Operations Research Letters , 6:107–114, 1987.
W.E. Hart. Sequential stopping rules for random optimization methods with applications to multistart local search. SIAM Journal on Optimization , 9: 270–290, 1998.
M.M. Hassan and G.L. Hogg. A review of graph theory applications to the facilities layout problem. Omega , 15:291–300, 1987.
M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees. Operations Research , 18:1138–1162, 1970.
M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees: Part II. Mathematical Programming , 1:6–25, 1971.
M. Held, P. Wolfe, and H.P. Crowder. Validation of subgradient optimization. Mathematical Programming , 6:62–88, 1974.
C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization , 10:673–696, 2000.
A.J. Higgins, S. Hajkowicz, and E. Bui. A multi-objective model for environmental investment decision making. Computers & Operations Research , 35:253–266, 2008.
M.J. Hirsch. GRASP-based heuristics for continuous global optimization problems . PhD thesis, Department of Industrial and Systems Engineering, University of Florida, Gainesville, 2006.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Sensor registration in a sensor network by continuous GRASP. In IEEE Conference on Military Communications , pages 501–506, Washington, DC, 2006.
M.J. Hirsch, C.N. Meneses, P.M. Pardalos, M.A. Ragle, and M.G.C. Resende. A continuous GRASP to determine the relationship between drugs and adverse reactions. In O. Seref, O. Erhun Kundakcioglu, and P.M. Pardalos, editors, Data mining, systems analysis and optimization in biomedicine , volume 953 of AIP Conference Proceedings , pages 106–121. Springer, 2007a.
M.J. Hirsch, C.N. Meneses, P.M. Pardalos, and M.G.C. Resende. Global optimization by continuous GRASP. Optimization Letters , 1:201–212, 2007b.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Solving systems of nonlinear equations with continuous GRASP. Nonlinear Analysis: Real World Applications , 10:2000–2006, 2009.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Speeding up continuous GRASP. European Journal of Operational Research , 205:507–521, 2010.
M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Correspondence of projected 3D points and lines using a continuous GRASP. International Transactions in Operational Research , 18:493–511, 2011.
M.J. Hirsch, H. Ortiz-Pena, and C. Eck. Cooperative tracking of multiple targets by a team of autonomous UAVs. International Journal of Operations Research and Information Systems , 3:53–73, 2012.
J.H. Holland. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence . University of Michigan Press, Ann Arbor, 1975.
S. Homer and M. Peinado. Two distributed memory parallel approximation algorithms for Max-Cut. Journal of Parallel and Distributed Computing , 1:48–61, 1997.
H.H. Hoos. On the run-time behaviour of stochastic local search algorithms for SAT. In Proceedings of the Sixteenth National Conference on Artificial Intelligence , pages 661–666, Orlando, 1999. American Association for Artificial Intelligence.
H.H. Hoos and T. Stützle. On the empirical evaluation of Las Vegas algorithms - Position paper. Technical report, Computer Science Department, University of British Columbia, Vancouver, 1998a.
H.H. Hoos and T. Stützle. Evaluating Las Vegas algorithms – Pitfalls and remedies. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence , pages 238–245, Madison, 1998b.
H.H. Hoos and T. Stützle. Some surprising regularities in the behaviour of stochastic local search. In M. Maher and J.-F. Puget, editors, Principles and practice of constraint programming , volume 1520 of Lecture Notes in Computer Science , page 470. Springer, Berlin, 1998c.
H.H. Hoos and T. Stützle. Towards a characterisation of the behaviour of stochastic local search algorithms for SAT. Artificial Intelligence , 112: 213–232, 1999.
H.H. Hoos and T. Stützle. Stochastic local search: Foundations and applications . Elsevier, New York, 2005.
F.K. Hwang, D.S. Richards, and P. Winter. The Steiner tree problem . North-Holland, Amsterdam, 1992.
E. Hyytiä and J. Virtamo. Wavelength assignment and routing in WDM networks. In Proceedings of the Fourteenth Nordic Teletraffic Seminar NTS-14 , pages 31–40, Lyngby, 1998.
C. Ishida, A. Pozo, E. Goldbarg, and M. Goldbarg. Multiobjective optimization and rule learning: Subselection algorithm or meta-heuristic algorithm? In N. Nedjah, L.M. Mourelle, and J. Kacprzyk, editors, Innovative applications in data mining , pages 47–70. Springer, Berlin, 2009.
R. Jain. The art of computer systems performance analysis: Techniques for experimental design, measurement, simulation, and modeling . Wiley, New York, 1991.
V. Jarník. O jistém problému minimálním. Práce Moravské Přírodovědecké Společnosti , 6:57–63, 1930.
D.S. Johnson. Near-optimal bin-packing algorithms . PhD thesis, Massachusetts Institute of Technology, Cambridge, 1973.
D.S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences , 9:256–278, 1974.
E.H. Kampke, J.E.C. Arroyo, and A.G. Santos. Reactive GRASP with path relinking for solving parallel machines scheduling problem with resource-assignable sequence dependent setup times. In Proceedings of the World Congress on Nature and Biologically Inspired Computing , pages 924–929, Coimbatore, 2009. IEEE.
R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of computer computations . Plenum Press, New York, 1972.
R.M. Karp. On the computational complexity of combinatorial problems. Networks , 5:45–68, 1975.
H. Kautz, E. Horvitz, Y. Ruan, C. Gomes, and B. Selman. Dynamic restart policies. In Proceedings of the Eighteenth National Conference on Artificial intelligence , pages 674–681, Edmonton, 2002. American Association for Artificial Intelligence.
G. Kendall, S. Knust, C.C. Ribeiro, and S. Urrutia. Scheduling in sports: An annotated bibliography. Computers & Operations Research , 37:1–19, 2010.
B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal , 49:291–307, 1970.
R.K. Kincaid. Good solutions to discrete noxious location problems via metaheuristics. Annals of Operations Research , 40:265–281, 1992.
S. Kirkpatrick, C.D. Gelatt Jr., and M.P. Vecchi. Optimization by simulated annealing. Science , 220(4598):671–680, 1983.
G.A. Kochenberger, B.A. McCarl, and F.P. Wyman. A heuristic for general integer programming. Decision Sciences , 5:36–41, 1974.
G.A. Kochenberger, J.-K. Hao, Z. Lu, H. Wang, and F. Glover. Solving large scale Max Cut problems via tabu search. Journal of Heuristics , 19: 565–571, 2013.
M.R. Krom. The decision problem for a class of first-order formulas in which all disjunctions are binary. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik , 13:15–20, 1967.
J.B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society , 7: 48–50, 1956.
K. Kuratowski. Sur le problème des courbes gauches en topologie. Fundamenta Mathematicae , 15:271–283, 1930.
N. Labadi, C. Prins, and M. Reghioui. GRASP with path relinking for the capacitated arc routing problem with time windows. In A. Fink and F. Rothlauf, editors, Advances in computational intelligence in transport, logistics, and supply chain management , pages 111–135. Springer, Berlin, 2008.
M. Laguna and F. Glover. Bandwidth packing: A tabu search approach. Management Science , 39:492–500, 1993.
M. Laguna and R. Martí. GRASP and path relinking for 2-layer straight line crossing minimization. INFORMS Journal on Computing , 11:44–52, 1999.
M. Laguna, J.P. Kelly, J.L. González-Velarde, and F. Glover. Tabu search for the multilevel generalized assignment problem. European Journal of Operational Research , 82:176–189, 1995.
E.L. Lawler. Combinatorial optimization: Networks and matroids . Holt, Rinehart and Winston, New York, 1976.
E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, editors. The traveling salesman problem: A guided tour of combinatorial optimization . John Wiley & Sons, New York, 1985.
L.J. LeBlanc, J. Chifflet, and P. Mahey. Packet routing in telecommunication networks with path and flow restrictions. INFORMS Journal on Computing , 11:188–197, 1999.
J.K. Lenstra and A.H.G. Rinnooy Kan. Computational complexity of discrete optimization problems. Annals of Discrete Mathematics , 4:121–140, 1979.
R. De Leone, P. Festa, and E. Marchitto. Solving a bus driver scheduling problem with randomized multistart heuristics. International Transactions in Operational Research , 18:707–727, 2011.
O. Leue. Methoden zur Lösung dreidimensionaler Zuordnungsprobleme. Angewandte Informatik , 14:154–162, 1972.
H. Li and D. Landa-Silva. An elitist GRASP metaheuristic for the multi-objective quadratic assignment problem. In M. Ehrgott, C.M. Fonseca, X. Gandibleux, J.-K. Hao, and M. Sevaux, editors, Evolutionary multi-criterion optimization , volume 5467 of Lecture Notes in Computer Science , pages 481–494. Springer, Berlin, 2009.
Y. Li, P.M. Pardalos, and M.G.C. Resende. A greedy randomized adaptive search procedure for the quadratic assignment problem. In P.M. Pardalos and H. Wolkowicz, editors, Quadratic assignment and related problems , volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science , pages 237–261. American Mathematical Society, Providence, 1994.
S. Lin. Computer solutions of the traveling salesman problem. Bell System Technical Journal , 44:2245–2260, 1965. MathSciNetMATH
S. Lin and B.W. Kernighan. An effective heuristic algorithm for the traveling-salesman problem. Operations Research , 21:498–516, 1973. MathSciNetMATH
P.C. Liu and R.C. Geldmacher. On the deletion of nonplanar edges of a graph. In Proceedings of the 10th Southeastern Conference on Combinatorics, Graph Theory and Computing , pages 727–738, Boca Raton, 1977.
H.R. Lourenço, O. Martin, and T. Stützle. Iterated local search. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics , pages 321–353. Kluwer Academic Publishers, Boston, 2003.
L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory , IT-25:1–7, 1979. MathSciNetMATH
M. Luby, A. Sinclair, and D. Zuckerman. Optimal speedup of Las Vegas algorithms. Information Processing Letters , 47:173–180, 1993. MathSciNetMATH
M. Luis, S. Salhi, and G. Nagy. A guided reactive GRASP for the capacitated multi-source Weber problem. Computers & Operations Research , 38:1014–1024, 2011. MathSciNetMATH
D.G. Macharet, A.A. Neto, V.F. da Camara Neto, and M.F.M. Campos. Nonholonomic path planning optimization for Dubins' vehicles. In 2011 IEEE International Conference on Robotics and Automation , pages 4208–4213, Shanghai, 2011. IEEE.
N. Maculan. The Steiner problem in graphs. Annals of Discrete Mathematics , 31:182–212, 1987. MathSciNetMATH
C.L.B. Maia, R.A.F. Carmo, F.G. Freitas, G.A.L. Campos, and J.T. Souza. Automated test case prioritization with reactive GRASP. Advances in Software Engineering , 2010, 2010. doi: 10.1155/2010/428521. Article ID 428521.
P. Manohar, D. Manjunath, and R.K. Shevgaonkar. Routing and wavelength assignment in optical networks from edge disjoint path algorithms. IEEE Communications Letters , 5:211–213, 2002.
S. Martello and P. Toth. Knapsack problems: Algorithms and computer implementations . John Wiley & Sons, New York, 1990. MATH
R. Martí and F. Sandoya. GRASP and path relinking for the equitable dispersion problem. Computers & Operations Research, 40:3091–3099, 2013.
R. Martí, A. Duarte, and M. Laguna. Advanced scatter search for the MAX-CUT problem. INFORMS Journal on Computing, 21:26–38, 2009.
R. Martí, J.L. González-Velarde, and A. Duarte. Heuristics for the bi-objective path dissimilarity problem. Computers & Operations Research, 36:2905–2912, 2009.
R. Martí, M.G.C. Resende, and C.C. Ribeiro. Multi-start methods for combinatorial optimization. European Journal of Operational Research, 226:1–8, 2013a.
R. Martí, M.G.C. Resende, and C.C. Ribeiro. Special issue of Computers & Operations Research: GRASP with path relinking: Developments and applications. Computers & Operations Research, 40:3080, 2013b.
R. Martí, V. Campos, M.G.C. Resende, and A. Duarte. Multiobjective GRASP with path relinking. European Journal of Operational Research, 240:54–71, 2015.
B. Martin, X. Gandibleux, and L. Granvilliers. Continuous-GRASP revisited. In P. Siarry, editor, Heuristics: Theory and applications, chapter 1. Nova Science Publishers, Hauppauge, 2013.
O. Martin and S.W. Otto. Combining simulated annealing with local search heuristics. Annals of Operations Research, 63:57–75, 1996.
O. Martin, S.W. Otto, and E.W. Felten. Large-step Markov chains for the traveling salesman problem. Complex Systems, 5:299–326, 1991.
S.L. Martins, C.C. Ribeiro, and M.C. Souza. A parallel GRASP for the Steiner problem in graphs. In A. Ferreira, J. Rolim, H. Simon, and S.-H. Teng, editors, Solving irregularly structured problems in parallel, volume 1457 of Lecture Notes in Computer Science, pages 285–297. Springer, Berlin, 1998.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Greedy randomized adaptive search procedures for the Steiner problem in graphs. In P.M. Pardalos, S. Rajasekaran, and J. Rolim, editors, Randomization methods in algorithmic design, volume 43 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 133–145. American Mathematical Society, Providence, 1999.
S.L. Martins, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. A parallel GRASP for the Steiner tree problem in graphs using a hybrid local search strategy. Journal of Global Optimization, 17:267–283, 2000.
S.L. Martins, C.C. Ribeiro, and I. Rosseti. Applications and parallel implementations of metaheuristics in network design and routing. In S. Manandhar, J. Austin, U. Desai, Y. Oyanagi, and A.K. Talukder, editors, Applied computing, volume 3285 of Lecture Notes in Computer Science, pages 205–213. Springer, Berlin, 2004.
G.R. Mateus, M.G.C. Resende, and R.M.A. Silva. GRASP with path-relinking for the generalized quadratic assignment problem. Journal of Heuristics, 17:527–565, 2011.
K. Mehlhorn. A faster approximation algorithm for the Steiner problem in graphs. Information Processing Letters, 27:125–128, 1988.
B. Melián, M. Laguna, and J.A. Moreno-Pérez. Capacity expansion of fiber optic networks with WDM systems: Problem formulation and comparative analysis. Computers & Operations Research, 31:461–472, 2004.
M. Mestria, L.S. Ochi, and S.L. Martins. GRASP with path relinking for the symmetric Euclidean clustered traveling salesman problem. Computers & Operations Research, 40:3218–3229, 2013.
Z. Michalewicz. Genetic algorithms + Data structures = Evolution programs. Springer, Berlin, 1996.
W. Michiels, E.H.L. Aarts, and J. Korst. Theoretical aspects of local search. Springer, Berlin, 2007.
N. Mladenović and P. Hansen. Variable neighborhood search. Computers & Operations Research, 24:1097–1100, 1997.
R.E.N. Moraes and C.C. Ribeiro. Power optimization in ad hoc wireless network topology control with biconnectivity requirements. Computers & Operations Research, 40:3188–3196, 2013.
L.F. Morán-Mirabal, J.L. González-Velarde, and M.G.C. Resende. Automatic tuning of GRASP with evolutionary path-relinking. In M.J. Blesa, C. Blum, P. Festa, A. Roli, and M. Sampels, editors, Hybrid metaheuristics, volume 7919 of Lecture Notes in Computer Science, pages 62–77. Springer, Berlin, 2013a.
L.F. Morán-Mirabal, J.L. González-Velarde, M.G.C. Resende, and R.M.A. Silva. Randomized heuristics for handover minimization in mobility networks. Journal of Heuristics, 19:845–880, 2013b.
L.F. Morán-Mirabal, J.L. González-Velarde, and M.G.C. Resende. Randomized heuristics for the family traveling salesperson problem. International Transactions in Operational Research, 21:41–57, 2014.
R.A. Murphey, P.M. Pardalos, and L.S. Pitsoulis. A parallel GRASP for the data association multidimensional assignment problem. In P.M. Pardalos, editor, Parallel processing of discrete problems, volume 106 of The IMA Volumes in Mathematics and Its Applications, pages 159–180. Springer, New York, 1998.
J.F. Muth and G.L. Thompson. Industrial scheduling. Prentice-Hall, Boston, 1963.
P. Mutzel. The maximum planar subgraph problem. PhD thesis, Universität zu Köln, Cologne, 1994.
M.C.V. Nascimento and L. Pitsoulis. Community detection by modularity maximization using GRASP with path relinking. Computers & Operations Research, 40:3121–3131, 2013.
M.C.V. Nascimento, M.G.C. Resende, and F.M.B. Toledo. GRASP heuristic with path-relinking for the multi-plant capacitated lot sizing problem. European Journal of Operational Research, 200:747–754, 2010.
G.L. Nemhauser and L.A. Wolsey. Integer and combinatorial optimization. Wiley, New York, 1988.
V.-P. Nguyen, C. Prins, and C. Prodhon. Solving the two-echelon location routing problem by a GRASP reinforced by a learning process and path relinking. European Journal of Operational Research, 216:113–126, 2012.
N.J. Nilsson. Problem-solving methods in artificial intelligence. McGraw-Hill, New York, 1971.
N.J. Nilsson. Principles of artificial intelligence. Springer, Berlin, 1982.
H. Nishimura and S. Kuroda, editors. A lost mathematician, Takeo Nakasawa: The forgotten father of matroid theory. Birkhäuser Verlag, Basel, 2009.
T.F. Noronha and C.C. Ribeiro. Routing and wavelength assignment by partition coloring. European Journal of Operational Research, 171:797–810, 2006.
E. Nowicki and C. Smutnicki. An advanced tabu search algorithm for the job shop problem. Journal of Scheduling, 8:145–159, 2005.
C.A. Oliveira, P.M. Pardalos, and M.G.C. Resende. GRASP with path-relinking for the quadratic assignment problem. In C.C. Ribeiro and S.L. Martins, editors, Experimental and efficient algorithms, volume 3059 of Lecture Notes in Computer Science, pages 356–368. Springer, Berlin, 2004.
S. Orlando, P. Palmerini, and R. Perego. Adaptive and resource-aware mining of frequent sets. In Proceedings of the 2002 IEEE International Conference on Data Mining, pages 338–345, Maebashi City, 2002. IEEE.
C. Orsenigo and C. Vercellis. Bayesian stopping rules for greedy randomized procedures. Journal of Global Optimization, 36:365–377, 2006.
L. Osborne and B. Gillett. A comparison of two simulated annealing algorithms applied to the directed Steiner problem on networks. ORSA Journal on Computing, 3:213–225, 1991.
A. Ouorou, P. Mahey, and J.P. Vial. A survey of algorithms for convex multicommodity flow problems. Management Science, 46:126–147, 2000.
A.V.F. Pacheco, G.M. Ribeiro, and G.R. Mauri. A GRASP with path-relinking for the workover rig scheduling problem. International Journal of Natural Computing Research, 1:1–14, 2010.
G. Palubeckis. Multistart tabu search strategies for the unconstrained binary quadratic optimization problem. Annals of Operations Research, 131:259–282, 2004.
C.H. Papadimitriou. Computational complexity. Addison-Wesley, Reading, 1994.
C.H. Papadimitriou and K. Steiglitz. Combinatorial optimization: Algorithms and complexity. Prentice Hall, Englewood Cliffs, 1982.
P.M. Pardalos and L.S. Pitsoulis. Nonlinear assignment problems: Algorithms and applications. Kluwer Academic Publishers, Boston, 2000.
P.M. Pardalos and J. Xue. The maximum clique problem. Journal of Global Optimization, 4:301–328, 1994.
P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. A parallel GRASP implementation for the quadratic assignment problem. In A. Ferreira and J. Rolim, editors, Parallel algorithms for irregular problems: State of the art, pages 115–133. Kluwer Academic Publishers, Boston, 1995.
P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. A parallel GRASP for MAX-SAT problems. In J. Waśniewski, J. Dongarra, K. Madsen, and D. Olesen, editors, Applied parallel computing: Industrial computation and optimization, volume 1184 of Lecture Notes in Computer Science, pages 575–585. Springer, Berlin, 1996.
M. Parker and J. Ryan. A column generation algorithm for bandwidth packing. Telecommunication Systems, 2:185–195, 1994.
F. Parreño, R. Alvarez-Valdes, J.M. Tamarit, and J.F. Oliveira. A maximal-space algorithm for the container loading problem. INFORMS Journal on Computing, 20:412–422, 2008.
R.A. Patterson, H. Pirkul, and E. Rolland. A memory adaptive reasoning technique for solving the capacitated minimum spanning tree problem. Journal of Heuristics, 5:159–180, 1999.
J. Pearl. Heuristics: Intelligent search strategies for computer problem solving. Addison-Wesley, Reading, 1985.
O. Pedrola, M. Ruiz, L. Velasco, D. Careglio, O. González de Dios, and J. Comellas. A GRASP with path-relinking heuristic for the survivable IP/MPLS-over-WSON multi-layer network optimization problem. Computers & Operations Research, 40:3174–3187, 2013.
M. Pérez, F. Almeida, and J.M. Moreno-Vega. A hybrid GRASP-path relinking algorithm for the capacitated p-hub median problem. In M.J. Blesa, C. Blum, A. Roli, and M. Sampels, editors, Hybrid metaheuristics, volume 3636 of Lecture Notes in Computer Science, pages 142–153. Springer, Berlin, 2005.
E. Pesch and F. Glover. TSP ejection chains. Discrete Applied Mathematics, 76:165–181, 1997.
L.S. Pessoa, M.G.C. Resende, and C.C. Ribeiro. Experiments with the LAGRASP heuristic for set k-covering. Optimization Letters, 5:407–419, 2011.
L.S. Pessoa, M.G.C. Resende, and C.C. Ribeiro. A hybrid Lagrangean heuristic with GRASP and path-relinking for set k-covering. Computers & Operations Research, 40:3132–3146, 2013.
W.P. Pierskalla. The tri-substitution method for the three-multidimensional assignment problem. Journal of the Canadian Operational Research Society, 5:71–81, 1967.
W.P. Pierskalla. The multidimensional assignment problem. Operations Research, 16:422–431, 1968.
R.Y. Pinter. Optimal layer assignment for interconnect. Advances in VLSI and Computer Systems, 1:123–137, 1984.
L.S. Pitsoulis. Topics in matroid theory. SpringerBriefs in Optimization. Springer, 2014.
L.S. Pitsoulis and M.G.C. Resende. Greedy randomized adaptive search procedures. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization, pages 168–183. Oxford University Press, New York, 2002.
A. Plastino, E.R. Fonseca, R. Fuchshuber, S.L. Martins, A.A. Freitas, M. Luis, and S. Salhi. A hybrid data mining metaheuristic for the p-median problem. In H. Park, S. Parthasarathy, H. Liu, and Z. Obradovic, editors, Proceedings of the 9th SIAM International Conference on Data Mining, pages 305–316, Sparks, 2009. SIAM.
A. Plastino, R. Fuchshuber, S.L. Martins, A.A. Freitas, and S. Salhi. A hybrid data mining metaheuristic for the p-median problem. Statistical Analysis and Data Mining, 4:313–335, 2011.
A. Plastino, H. Barbalho, L.F.M. Santos, R. Fuchshuber, and S.L. Martins. Adaptive and multi-mining versions of the DM-GRASP hybrid metaheuristic. Journal of Heuristics, 20:39–74, 2014.
S. Poljak and Z. Tuza. Maximum cuts and largest bipartite subgraphs. In W. Cook, L. Lovász, and P. Seymour, editors, Papers from the special year on Combinatorial Optimization, volume 20 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 181–244. American Mathematical Society, Providence, 1995.
M. Prais and C.C. Ribeiro. Parameter variation in GRASP implementations. In C.C. Ribeiro and P. Hansen, editors, Extended Abstracts of the Third Metaheuristics International Conference, pages 375–380, Angra dos Reis, 1999.
M. Prais and C.C. Ribeiro. Reactive GRASP: An application to a matrix decomposition problem in TDMA traffic assignment. INFORMS Journal on Computing, 12:164–176, 2000a.
M. Prais and C.C. Ribeiro. Parameter variation in GRASP procedures. Investigación Operativa, 9:1–20, 2000b.
R.C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36:1389–1401, 1957.
C. Prins, C. Prodhon, and R. Wolfler-Calvo. A reactive GRASP and path relinking algorithm for the capacitated location routing problem. In Proceedings of the International Conference on Industrial Engineering and Systems Management, Marrakech, 2005. I4E2. ISBN 2-9600532-0-6.
M. Rahmani, M. Rashidinejad, E.M. Carreno, and R.A. Romero. Evolutionary multi-move path-relinking for transmission network expansion planning. In 2010 IEEE Power and Energy Society General Meeting, pages 1–6, Minneapolis, 2010. IEEE.
R.L. Rardin and R. Uzsoy. Experimental evaluation of heuristic optimization algorithms: A tutorial. Journal of Heuristics, 7:261–304, 2001.
C. Reeves and J.E. Rowe. Genetic algorithms: Principles and perspectives. Springer, Berlin, 2002.
C.R. Reeves. Modern heuristic techniques for combinatorial problems. Blackwell, London, 1993.
C. Rego. Relaxed tours and path ejections for the traveling salesman problem. European Journal of Operational Research, 106:522–538, 1998.
C. Rego and F. Glover. Local search and metaheuristics. In G. Gutin and A.P. Punnen, editors, The traveling salesman problem and its variations, pages 309–368. Kluwer Academic Publishers, Boston, 2002.
P.P. Repoussis, C.D. Tarantilis, and G. Ioannou. A hybrid metaheuristic for a real life vehicle routing problem. In T. Boyanov, S. Dimova, K. Georgiev, and G. Nikolov, editors, Numerical methods and applications, volume 4310 of Lecture Notes in Computer Science, pages 247–254. Springer, Berlin, 2007.
L.I.P. Resende and M.G.C. Resende. A GRASP for frame relay permanent virtual circuit routing. In C.C. Ribeiro and P. Hansen, editors, Extended Abstracts of the III Metaheuristics International Conference, pages 397–401, Angra dos Reis, 1999.
M.G.C. Resende. Computing approximate solutions of the maximum covering problem using GRASP. Journal of Heuristics, 4:161–171, 1998.
M.G.C. Resende. Metaheuristic hybridization with greedy randomized adaptive search procedures. In Zhi-Long Chen and S. Raghavan, editors, Tutorials in Operations Research, pages 295–319. INFORMS, 2008.
M.G.C. Resende and T.A. Feo. A GRASP for satisfiability. In D.S. Johnson and M.A. Trick, editors, Cliques, coloring, and satisfiability: The second DIMACS implementation challenge, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 499–520. American Mathematical Society, Providence, 1996.
M.G.C. Resende and J.L. González-Velarde. GRASP: Procedimientos de búsqueda miope aleatorizado y adaptativo. Inteligencia Artificial, 19:61–76, 2003.
M.G.C. Resende and C.C. Ribeiro. A GRASP for graph planarization. Networks, 29:173–189, 1997.
M.G.C. Resende and C.C. Ribeiro. Graph planarization. In C. Floudas and P.M. Pardalos, editors, Encyclopedia of optimization, volume 2, pages 368–373. Kluwer Academic Publishers, Boston, 2001.
M.G.C. Resende and C.C. Ribeiro. A GRASP with path-relinking for private virtual circuit routing. Networks, 41:104–114, 2003a.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures. In F. Glover and G. Kochenberger, editors, Handbook of metaheuristics, pages 219–249. Kluwer Academic Publishers, Boston, 2003b.
M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as real problem solvers, pages 29–63. Springer, New York, 2005a.
M.G.C. Resende and C.C. Ribeiro. Parallel greedy randomized adaptive search procedures. In E. Alba, editor, Parallel metaheuristics: A new class of algorithms, pages 315–346. Wiley-Interscience, Hoboken, 2005b.
M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures: Advances and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 293–319. Springer, New York, 2nd edition, 2010.
M.G.C. Resende and C.C. Ribeiro. Restart strategies for GRASP with path-relinking heuristics. Optimization Letters, 5:467–478, 2011.
M.G.C. Resende and C.C. Ribeiro. GRASP: Greedy randomized adaptive search procedures. In E.K. Burke and G. Kendall, editors, Search methodologies: Introductory tutorials in optimization and decision support systems, chapter 11, pages 287–312. Springer, New York, 2nd edition, 2014.
M.G.C. Resende and R.M.A. Silva. GRASP: Greedy randomized adaptive search procedures. In J.J. Cochran, L.A. Cox, Jr., P. Keskinocak, J.P. Kharoufeh, and J.C. Smith, editors, Encyclopedia of operations research and management science, volume 3, pages 2118–2128. Wiley, New York, 2011.
M.G.C. Resende and R.M.A. Silva. GRASP: Procedimentos de busca gulosos, aleatórios e adaptativos. In H.S. Lopes, L.C.A. Rodrigues, and M.T.A. Steiner, editors, Meta-heurísticas em pesquisa operacional, chapter 1, pages 1–20. Omnipax Editora, Curitiba, 2013.
M.G.C. Resende and R.F. Werneck. A hybrid heuristic for the p-median problem. Journal of Heuristics, 10:59–88, 2004.
M.G.C. Resende and R.F. Werneck. A hybrid multistart heuristic for the uncapacitated facility location problem. European Journal of Operational Research, 174:54–68, 2006.
M.G.C. Resende, P.M. Pardalos, and Y. Li. Algorithm 754: Fortran subroutines for approximate solution of dense quadratic assignment problems using GRASP. ACM Transactions on Mathematical Software, 22:104–118, 1996.
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Approximate solution of weighted MAX-SAT problems using GRASP. In J. Gu and P.M. Pardalos, editors, Satisfiability problems, volume 35 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 393–405. American Mathematical Society, Providence, 1997.
M.G.C. Resende, T.A. Feo, and S.H. Smith. Algorithm 787: Fortran subroutines for approximate solution of maximum independent set problems using GRASP. ACM Transactions on Mathematical Software, 24:386–394, 1998.
M.G.C. Resende, L.S. Pitsoulis, and P.M. Pardalos. Fortran subroutines for computing approximate solutions of MAX-SAT problems using GRASP. Discrete Applied Mathematics, 100:95–113, 2000.
M.G.C. Resende, R. Martí, M. Gallego, and A. Duarte. GRASP and path relinking for the max-min diversity problem. Computers & Operations Research, 37:498–508, 2010a.
M.G.C. Resende, C.C. Ribeiro, F. Glover, and R. Martí. Scatter search and path-relinking: Fundamentals, advances, and applications. In M. Gendreau and J.-Y. Potvin, editors, Handbook of metaheuristics, pages 87–107. Springer, New York, 2nd edition, 2010b.
M.G.C. Resende, G.R. Mateus, and R.M.A. Silva. GRASP: Busca gulosa, aleatorizada e adaptativa. In A. Gaspar-Cunha, R. Takahashi, and C.H. Antunes, editors, Manual da computação evolutiva e metaheurística, pages 201–213. Coimbra University Press, Coimbra, 2012.
A.P. Reynolds and B. de la Iglesia. A multi-objective GRASP for partial classification. Soft Computing, 13:227–243, 2009.
A.P. Reynolds, D.W. Corne, and B. de la Iglesia. A multiobjective GRASP for rule selection. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, pages 643–650, Montreal, 2009. ACM.
C.C. Ribeiro. GRASP: Une métaheuristique gloutonne et probabiliste. In J. Teghem and M. Pirlot, editors, Optimisation approchée en recherche opérationnelle, pages 153–176. Hermès, Paris, 2002.
C.C. Ribeiro. Sports scheduling: Problems and applications. International Transactions in Operational Research, 19:201–226, 2012.
C.C. Ribeiro and M.G.C. Resende. Algorithm 797: Fortran subroutines for approximate solution of graph planarization problems using GRASP. ACM Transactions on Mathematical Software, 25:341–352, 1999.
C.C. Ribeiro and M.G.C. Resende. Path-relinking intensification methods for stochastic local search algorithms. Journal of Heuristics, 18:193–214, 2012.
C.C. Ribeiro and I. Rosseti. A parallel GRASP heuristic for the 2-path network design problem. In B. Monien and R. Feldmann, editors, Euro-Par 2002 Parallel Processing, volume 2400 of Lecture Notes in Computer Science, pages 922–926. Springer, Berlin, 2002.
C.C. Ribeiro and I. Rosseti. Efficient parallel cooperative implementations of GRASP heuristics. Parallel Computing, 33:21–35, 2007.
C.C. Ribeiro and I. Rosseti. Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms. In Proceedings of the VIII Metaheuristics International Conference, Hamburg, 2009.
C.C. Ribeiro and I. Rosseti. tttplots-compare: A perl program to compare time-to-target plots or general runtime distributions of randomized algorithms. Optimization Letters, 9:601–614, 2015.
C.C. Ribeiro and S. Urrutia. Heuristics for the mirrored traveling tournament problem. European Journal of Operational Research, 179:775–787, 2007.
C.C. Ribeiro, E. Uchoa, and R.F. Werneck. A hybrid GRASP with perturbations for the Steiner problem in graphs. INFORMS Journal on Computing, 14:228–246, 2002.
C.C. Ribeiro, I. Rosseti, and R. Vallejos. On the use of run time distributions to evaluate and compare stochastic local search algorithms. In T. Stützle, M. Birattari, and H.H. Hoos, editors, Engineering stochastic local search algorithms, volume 5752 of Lecture Notes in Computer Science, pages 16–30. Springer, Berlin, 2009.
C.C. Ribeiro, I. Rosseti, and R.C. Souza. Effective probabilistic stopping rules for randomized metaheuristics: GRASP implementations. In C.A.C. Coello, editor, Learning and intelligent optimization, volume 6683 of Lecture Notes in Computer Science, pages 146–160. Springer, Berlin, 2011.
C.C. Ribeiro, I. Rosseti, and R. Vallejos. Exploiting run time distributions to compare sequential and parallel stochastic local search algorithms. Journal of Global Optimization, 54:405–429, 2012.
C.C. Ribeiro, I. Rosseti, and R.C. Souza. Probabilistic stopping rules for GRASP heuristics and extensions. International Transactions in Operational Research, 20:301–323, 2013.
M.H.F. Ribeiro, V.F. Trindade, A. Plastino, and S.L. Martins. Hybridization of GRASP metaheuristic with data mining techniques. In Proceedings of the ECAI Workshop on Hybrid Metaheuristics , pages 69–78, Valencia, 2004.
M.H.F. Ribeiro, A. Plastino, and S.L. Martins. Hybridization of GRASP metaheuristic with data mining techniques. Journal of Mathematical Modelling and Algorithms , 5:23–41, 2006. MathSciNetMATH
R.Z. Ríos-Mercado and E. Fernández. A reactive GRASP for a commercial territory design problem with multiple balancing requirements. Computers & Operations Research , 36:755–776, 2009. MATH
Y. Rochat and É. Taillard. Probabilistic diversification and intensification in local search for vehicle routing. Journal of Heuristics , 1:147–167, 1995. MATH
F.J. Rodriguez, C. Blum, C. García-Martínez, and M. Lozano. GRASP with path-relinking for the non-identical parallel machine scheduling problem with minimising total weighted completion times. Annals of Operations Research , 201:383–401, 2012. MathSciNetMATH
D.P. Ronconi and L.R.S. Henriques. Some heuristic algorithms for total tardiness minimization in a flowshop with blocking. Omega , 37:272–281, 2009.
I. Rosseti. Sequential and parallel strategies of GRASP with path-relinking for the 2-path network design problem . PhD thesis, Department of Computer Science, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, 2003. In Portuguese.
B. Roy and B. Sussmann. Les problèmes d'ordonnancement avec contraintes disjonctives. Technical Report Note DS no. 9 bis, SEMA, Montrouge, 1964.
M.A. Salazar-Aguilar, R.Z. Ríos-Mercado, and J.L. González-Velarde. GRASP strategies for a bi-objective commercial territory design problem. Journal of Heuristics , 19:179–200, 2013.
J. Santamaría, O. Cordón, S. Damas, R. Martí, and R.J. Palma. GRASP & evolutionary path relinking for medical image registration based on point matching. In 2010 IEEE Congress on Evolutionary Computation , pages 1–8. IEEE, 2010.
J. Santamaría, O. Cordón, S. Damas, R. Martí, and R.J. Palma. GRASP and path relinking hybridizations for the point matching-based image registration problem. Journal of Heuristics , 18:169–192, 2012.
D. Santos, A. de Sousa, and F. Alvelos. A hybrid column generation with GRASP and path relinking for the network load balancing problem. Computers & Operations Research , 40:3147–3158, 2013. MathSciNet
L.F. Santos, M.H.F. Ribeiro, A. Plastino, and S.L. Martins. A hybrid GRASP with data mining for the maximum diversity problem. In M.J. Blesa, C. Blum, A. Roli, and M. Sampels, editors, Hybrid metaheuristics , volume 3636 of Lecture Notes in Computer Science , pages 116–127. Springer, Berlin, 2005.
L.F. Santos, C.V. Albuquerque, S.L. Martins, and A. Plastino. A hybrid GRASP with data mining for efficient server replication for reliable multicast. In Proceedings of the 49th Annual IEEE GLOBECOM Technical Conference , pages 1–6, San Francisco, 2006. IEEE. doi: 10.1109/ GLOCOM.2006.246.
L.F. Santos, S.L. Martins, and A. Plastino. Applications of the DM-GRASP heuristic: A survey. International Transactions on Operational Research , 15:387–416, 2008. MathSciNetMATH
M. Sarrafzadeh and D. Lee. A new approach to topological via minimization. IEEE Transactions on Computer-Aided Design , 8:890–900, 1989.
M. Scaparra and R. Church. A GRASP and path relinking heuristic for rural road network development. Journal of Heuristics , 11:89–108, 2005. MATH
A. Scholl, R. Klein, and W. Domschke. Pattern based vocabulary building for effectively sequencing mixed-model assembly lines. Journal of Heuristics , 4:359–381, 1998. MATH
A. Schrijver. Theory of linear and integer programming . Wiley, New York, 1986. MATH
B. Selman, H.A. Kautz, and B. Cohen. Noise strategies for improving local search. In Proceedings of the Twelfth National Conference on Artificial Intelligence , pages 337–343, Seattle, 1994. American Association for Artificial Intelligence.
S. Senju and Y. Toyoda. An approach to linear programming with 0-1 variables. Management Science , 15:196–207, 1968.
K. Seo, S. Hyun, and Y.-H. Kim. A spanning tree-based encoding of the MAX CUT problem for evolutionary search. In C.A.C. Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, editors, Parallel problem solving from nature - Part I , volume 7491 of Lecture Notes in Computer Science , pages 510–518. Springer, Berlin, 2012.
I.V. Sergienko, V.P. Shilo, and V.A. Roshchin. Optimization parallelizing for discrete programming problems. Cybernetics and Systems Analysis, 40:184–189, 2004.
F.S. Serifoglu and G. Ulusoy. Multiprocessor task scheduling in multistage hybrid flow-shops: A genetic algorithm approach. Journal of the Operational Research Society, 55:504–512, 2004.
N.Z. Shor. Quadratic optimization problems. Soviet Journal of Computer and Systems Science, 25:1–11, 1987.
O.V. Shylo, T. Middelkoop, and P.M. Pardalos. Restart strategies in optimization: Parallel and serial cases. Parallel Computing, 37:60–68, 2011a.
O.V. Shylo, O.A. Prokopyev, and J. Rajgopal. On algorithm portfolios and restart strategies. Operations Research Letters, 39:49–52, 2011b.
C.-C. Shyur and U.-E. Wen. Optimizing the system of virtual paths by tabu search. European Journal of Operational Research, 129:650–662, 2001.
F. Silva and D. Serra. Locating emergency services with different priorities: The priority queuing covering location problem. Journal of the Operational Research Society, 59:1229–1238, 2007.
G.C. Silva, L.S. Ochi, and S.L. Martins. Experimental comparison of greedy randomized adaptive search procedures for the maximum diversity problem. In C.C. Ribeiro and S.L. Martins, editors, Experimental and efficient algorithms , volume 3059 of Lecture Notes in Computer Science , pages 498–512. Springer, Berlin, 2004.
G.C. Silva, M.R.Q. de Andrade, L.S. Ochi, S.L. Martins, and A. Plastino. New heuristics for the maximum diversity problem. Journal of Heuristics, 13:315–336, 2007.
R.M.A. Silva, M.G.C. Resende, P.M. Pardalos, and M.J. Hirsch. A Python/C library for bound-constrained global optimization with continuous GRASP. Optimization Letters, 7:967–984, 2013a.
R.M.A. Silva, M.G.C. Resende, P.M. Pardalos, G.R. Mateus, and G. de Tomi. GRASP with path-relinking for facility layout. In B.I. Goldengorin, V.A. Kalyagin, and P.M. Pardalos, editors, Models, algorithms, and technologies for network analysis , volume 59 of Springer Proceedings in Mathematics & Statistics , pages 175–190. Springer, Berlin, 2013b.
M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra. MPI – The complete reference, Volume 1 – The MPI core . MIT Press, Cambridge, 1998.
K. Sörensen. Metaheuristics – The metaphor exposed. International Transactions in Operational Research, 22:1–16, 2015.
K. Sörensen and P. Schittekat. Statistical analysis of distance-based path relinking for the capacitated vehicle routing problem. Computers & Operations Research, 40:3197–3205, 2013.
M.C. Souza, C. Duhamel, and C.C. Ribeiro. A GRASP heuristic for the capacitated minimum spanning tree problem using a memory-based local search strategy. In M.G.C. Resende and J. Souza, editors, Metaheuristics: Computer decision-making , pages 627–658. Kluwer Academic Publishers, Boston, 2004.
C.S. Sung and S.K. Park. An algorithm for configuring embedded networks in reconfigurable telecommunication networks. Telecommunication Systems, 4:241–271, 1995.
E.D. Taillard. Robust taboo search for the quadratic assignment problem. Parallel Computing, 17:443–455, 1991.
H. Takahashi and A. Matsuyama. An approximate solution for the Steiner problem in graphs. Mathematica Japonica, 24:573–577, 1980.
E.-G. Talbi. Metaheuristics: From design to implementation. Wiley, New York, 2009.
R. Tamassia and G. Di Battista. Automatic graph drawing and readability of diagrams. IEEE Transactions on Systems, Man, and Cybernetics, 18:61–79, 1988.
F.L. Usberti, P.M. França, and A.L.M. França. GRASP with evolutionary path-relinking for the capacitated arc routing problem. Computers & Operations Research, 40:3206–3217, 2013.
P.J.M. van Laarhoven and E. Aarts. Simulated annealing: Theory and applications. Kluwer Academic Publishers, Boston, 1987.
V.V. Vazirani. Approximation algorithms. Springer, Berlin, 2001.
M.G.A. Verhoeven and E.H.L. Aarts. Parallel local search. Journal of Heuristics, 1:43–66, 1995.
D.S. Vianna and J.E.C. Arroyo. A GRASP algorithm for the multi-objective knapsack problem. In Proceedings of the 24th International Conference of the Chilean Computer Science Society , pages 69–75, Arica, 2004. IEEE.
J.X. Vianna Neto, D.L.A. Bernert, and L.S. Coelho. Continuous GRASP algorithm applied to economic dispatch problem of thermal units. In Proceedings of the 13th Brazilian Congress of Thermal Sciences and Engineering , Uberlandia, 2010.
J.G. Villegas. Vehicle routing problems with trailers. PhD thesis, Université de Technologie de Troyes, Troyes, 2010.
J.G. Villegas, C. Prins, C. Prodhon, A.L. Medaglia, and N. Velasco. A GRASP with evolutionary path relinking for the truck and trailer routing problem. Computers & Operations Research, 38:1319–1334, 2011.
M. Vlach. Branch and bound method for the three index assignment problem. Ekonomicko-Matematický Obzor, 3:181–191, 1967.
S. Voss. Steiner's problem in graphs: Heuristic methods. Discrete Applied Mathematics, 40:45–72, 1992.
S. Voss. Heuristics for nonlinear assignment problems. In P.M. Pardalos and L.S. Pitsoulis, editors, Nonlinear assignment problems: Algorithms and applications , pages 175–215. Kluwer Academic Publishers, Boston, 2000.
S. Voss, A. Fink, and C. Duin. Looking ahead with the Pilot method. Annals of Operations Research, 136:285–302, 2005.
D.B. West. Introduction to graph theory . Pearson, 2001.
H. Whitney. On the abstract properties of linear dependence. American Journal of Mathematics, 57:509–533, 1935.
D.P. Williamson and D.B. Shmoys. The design of approximation algorithms. Cambridge University Press, New York, 2011.
P. Winter. Steiner problem in networks: A survey. Networks, 17:129–167, 1987.
I.H. Witten, E. Frank, and M.A. Hall. Data mining: Practical machine learning tools and techniques . Morgan Kaufmann, San Francisco, 3rd edition, 2011.
L.A. Wolsey. Integer programming. Wiley, New York, 1998.
Q. Wu and J.-K. Hao. A memetic approach for the Max-Cut problem. In C.A.C. Coello, V. Cutello, K. Deb, S. Forrest, G. Nicosia, and M. Pavone, editors, Parallel problem solving from nature - Part II , volume 7492 of Lecture Notes in Computer Science , pages 297–306. Springer, Berlin, 2012.
F.P. Wyman. Binary programming: A decision rule for selecting optimal vs. heuristic techniques. The Computer Journal, 16:135–140, 1973.
M. Yagiura and T. Ibaraki. Local search. In P.M. Pardalos and M.G.C. Resende, editors, Handbook of applied optimization , pages 104–123. Oxford University Press, 2002.
M. Yagiura, T. Ibaraki, and F. Glover. An ejection chain approach for the generalized assignment problem. INFORMS Journal on Computing, 16:133–151, 2004.
M. Yannakakis. Computational complexity. In E.H.L. Aarts and J.K. Lenstra, editors, Local search in combinatorial optimization , chapter 2, pages 19–55. Wiley, Chichester, 2007.
J.R. Yee and F.Y.S. Lin. A routing algorithm for virtual circuit data networks with multiple sessions per O-D pair. Networks, 22:185–208, 1992.
Index
A
A∗ search
adaptive memory
ant colony optimization
maximum cut problem
approximate methods
approximation algorithms
artificial intelligence
B
backtracking
bandwidth packing
branch-and-bound
branch-and-cut-and-price
complexity
exact approaches
GRASP with path-relinking
construction
local search
path-relinking
pseudo-code
heuristics
Lagrangean relaxation
piecewise-linear cost function
polytope
problem formulation
tabu search
best-improving neighborhood search
bias functions
bin packing problem
box-constrained continuous global optimization
continuous GRASP
examples
Ackley function
Bohachevsky function
Schwefel function
Shekel function
Shubert function
branch-and-bound
branch-and-cut
branch-and-price
C
C-GRASP, see continuous GRASP
combinatorial optimization
cost function algorithm
knapsack problem
maximum clique problem
minimum spanning tree problem
shortest path problem
traveling salesman problem
ground set
knapsack problem
maximum clique problem
minimum spanning tree problem
shortest path problem
Steiner tree problem in graphs
traveling salesman problem
recognition algorithm
knapsack problem
maximum clique problem
minimum spanning tree problem
shortest path problem
traveling salesman problem
solution approaches
approximation algorithms
constructive heuristics
dynamic programming
genetic algorithms
greedy algorithm
greedy randomized adaptive search procedures
heuristics
integer programming
local search
metaheuristics
multistart
polynomially solvable special cases
pseudo-polynomial algorithms
semi-greedy algorithms
simulated annealing
tabu search
combinatorial optimization problems
feasible solutions
input size
instance
objective function
polynomially solvable special cases
2-SAT
planar clique
problem versions
decision
equivalence
evaluation
optimization
computational complexity
classes
co-NP
NP
NP-complete
NP-hard
P
polynomial hierarchy
PSPACE
strongly NP-complete
foundations
computational problem
constructive heuristics
continuous global optimization
continuous GRASP
applications
computational geometry
correspondence of projected 3D points and lines
drug combinations and adverse reactions
economic dispatch of thermal units
robot path planning
sensor registration
system of nonlinear equations
target tracking
thermodynamics
approximate discrete line search
approximate h-local minimum
canonical basis direction
construction phase
DC-GRASP variant
directed search
diversification phase, see construction phase
dynamic grid
examples
GENCAN local search
grid density mechanics
h-local minimum
h-neighborhood
hyper-rectangle approximation
hypercubed cells
improvements
initial grid
initial solution
initial solution after restart
intensification phase, see local search phase
local search phase
parallel implementation
pseudo-code
C-GRASP
construction phase
continuous local search
discrete line search
Python/C library
RCL parameter
restart trigger
restricted candidate list
reuse of line search results
test functions
Ackley
Bohachevsky
Schwefel
Shekel
Shubert
cutting planes
D
decision problem
complement
concise certificate
knapsack problem
maximum clique problem
traveling salesman problem
definitions
graph coloring
graph connectedness
graph planarity
Hamiltonian cycle
integer programming
knapsack
linear programming
maximum clique
maximum independent set
maximum planar subgraph
minimum spanning tree
satisfiability
shortest path
Steiner tree in graphs
traveling salesman
polynomial-time reduction
polynomial-time transformation
diversification
dynamic programming
E
efficient algorithms
ejection chains
elite pool, see elite set
elite set
maintenance
pseudo-code
elite solution
evaluation problem
evolutionary path-relinking
greedy randomized adaptive path-relinking
exact method
exact optimal solution, see global optimum
F
feasible set
feasible solutions
filtering
first-improving neighborhood search
G
genetic algorithms
maximum cut problem
global optimal solution, see global optimum
global optimization
global optimum
graph
graph planarization
applications
complexity
GRASP
code
computational results
improvement procedure
pseudo-codes
first phase of GT heuristic
GRASP
GRASP construction
GRASP local search
planar subgraph enlargement
second phase of GT heuristic
two-phase heuristic
graphs
chromatic number
clique
clique cover
complete
connected
directed
Hamiltonian cycle
Hamiltonian path
Hamiltonian tour
independent set
induced
maximal planar subgraph
maximum clique
maximum induced bipartite subgraph
complexity
overlap
path in a directed graph
path in an undirected graph
perfect
planar
planarization
complexity
spanning tree
stable set
strongly connected
subgraph
tour
two-coloring
undirected
GRASP, see greedy randomized adaptive search procedures
GRASP with path-relinking
applications
elite pool, see elite set
elite set
maintenance
elite solution
evolutionary
hybridization
pseudo-code
evolutionary
with restarts
restart strategies
runtime distributions
runtime distributions
greedy algorithm
adaptive greedy
maximum clique problem
maximum independent set problem
minimum cardinality set covering problem
minimum spanning tree problem
Steiner tree problem in graphs
tie-breaking rule
traveling salesman problem
candidate element selection
connection with matroids
greedy choice function
knapsack problem
matroids
minimum spanning tree problem
pseudo-codes
adaptive greedy algorithm
semi-greedy algorithm
repair procedures
semi-greedy
Steiner tree problem in graphs
greedy choice function
greedy randomized adaptive search procedures
accelerating
applications
2-path network design problem
3-index assignment
automated test case prioritization
balancing reconfigurable transfer lines
bandwidth packing
biclustering
biobjective commercial territory design
biobjective path dissimilarity problem
biobjective set packing problem
biorienteering problem
broadcast scheduling
capacitated clustering
capacitated location
capacitated location routing
capacitated minimum spanning tree
capacitated multi-source Weber problem
capacity expansion of fiber optic networks
channel assignment in mobile phone networks
classification of databases
combined production-distribution
commercial territory design
constrained two-dimensional
nonguillotine cutting
container loading
driver scheduling
environmental investment decision making
examination scheduling
family traveling salesman
flow shop scheduling
Golomb ruler search
graph planarization
job shop scheduling
just-in-time scheduling
learning classification problem
line balancing
locating emergency services
matrix decomposition for traffic assignment in communication satellites
max-min diversity
maximum covering
maximum cut problem
maximum diversity
maximum stable set problem
maximum weighted satisfiability
multicommodity network design
multicriteria minimum spanning tree problem
multiobjective knapsack problem
multiobjective quadratic assignment problem
p-median
parallel machine scheduling with setup times
path dissimilarity
point-feature cartographic label placement
portfolio optimization
power compensation
power transmission network expansion planning
prize-collecting Steiner tree problem in graphs
quadratic assignment problem
rural road network development
semiconductor manufacturing
server replication for reliable multicast
set covering
set k-covering, see set multicovering
set multicovering
set packing
single machine scheduling
Steiner tree problem in graphs
strip packing
therapist routing and scheduling
unsplittable multicommodity flow
vehicle routing
weighted maximum satisfiability
cost perturbations in construction
perturbation by eliminations
perturbation by prize changes
distribution of solution values
diversification
filtering
intensification
Lagrangean GRASP heuristics
Lagrangean relaxation and subgradient optimization
LAGRASP
pseudo-code
template
local search
memory and learning in construction
consistent variable
elite pool, see elite set
elite set
elite solution
intensity function
long-term memory
strongly determined variable
multiobjective optimization
dominance
efficient solution
Pareto frontier
pseudo-code
weak dominance
pattern-based construction
data mining
pseudo-code for GRASP with data mining
vocabulary building
probabilistic choice of RCL parameter
decreasing
greedy
reactive
uniform
proximate optimality principle in construction
pseudo-codes
GRASP
GRASP construction for MAX-CUT
GRASP for bandwidth packing
GRASP for graph planarization
GRASP for multiobjective optimization
GRASP local search for MAX-CUT
GRASP with data mining
GRASP with evolutionary path-relinking
GRASP with path-relinking
GRASP with path-relinking for MAX-CUT
GRASP with path-relinking with restarts
GRASP with probabilistic stopping rule
path-relinking for MAX-CUT
random multistart
semi-greedy multistart
random plus greedy construction
reactive
diversification
dynamic restricted candidate list parameter
parameter tuning
robustness
selection probabilities
runtime distribution
exponential
shifted exponential
two-parameter exponential, see shifted exponential
sampled greedy construction
stopping
Gaussian approximation for GRASP iterations
implementation of stopping rule
probabilistic stopping rule
pseudo-code
with path-relinking
applications
elite pool, see elite set
elite set
elite set maintenance
elite solution
evolutionary
hybridization
pseudo-code
restart strategies
runtime distributions
with restarts
ground set
H
heuristics
traveling salesman problem
I
implicit enumeration
input size
integer programming
intensification
intractable problems
iterated local search
iterative improvement, see first-improving neighborhood search
K
k-shortest path algorithm
knapsack problem
characterization
concise certificate
cost function update
formulation
forward path-relinking
greedy algorithm
ground set
motivation
neighborhood
restricted neighborhood
solution representation
versions
L
Lagrangean GRASP heuristics
Lagrangean relaxation and subgradient optimization
dual problem
multipliers
relaxed problem
LAGRASP
pseudo-code
template
local optimum
escaping
short-term tabu search
variable neighborhood descent
local search
ejection chains
graph partitioning
history
implementation strategies
candidate lists
circular search
cost function update
neighborhood search
iterative improvement, see first-improving neighborhood search
local optimum
move
neighborhood
neighborhood search
best-improving
first-improving
variable neighborhood descent
perturbations, see ejection chains
pseudo-codes
VND local search
restricted neighborhood
search space graph
simplex method
theory
traveling salesman problem
2-opt neighborhood
3-opt neighborhood
M
matroid
connection with greedy algorithm
properties
weighted
MAX-CUT, see maximum cut problem
maximum clique problem
adaptive greedy algorithm
pseudo-code
characterization
concise certificate
formulation
ground set
motivation
solution representation
versions
maximum covering problem
maximum cut problem
ant colony optimization
applications
approximation algorithm
breakout local search
complexity
example
genetic algorithm
GRASP
construction phase
local search phase
path-relinking
pseudo-code for GRASP construction
pseudo-code for GRASP local search
pseudo-code for path-relinking
GRASP with path-relinking
pseudo-code
interior point methods
memetic algorithm
nonlinear programming approaches
path-relinking
pseudo-codes
GRASP construction
GRASP local search
GRASP with path-relinking
path-relinking
rank-2 heuristic
scatter search
semidefinite programming relaxation
tabu search
variable neighborhood search
variable neighborhood search with path-relinking
maximum independent set of an overlap graph
complexity
maximum induced bipartite subgraph
complexity
maximum planar subgraph problem, see graph planarization
maximum weighted satisfiability problem
memetic algorithms
maximum cut problem
metaheuristics
ant colony optimization
genetic algorithms
greedy randomized adaptive search procedures
iterated local search
particle swarm optimization
scatter search
simulated annealing
tabu search
variable neighborhood search
minimum cardinality set covering problem
adaptive greedy algorithm
pseudo-code
minimum spanning tree problem
adaptive greedy algorithm
pseudo-code
characterization
formulation
greedy algorithm
pseudo-code
ground set
motivation
restricted neighborhood
move
MST, see minimum spanning tree problem
multimodal function
multiobjective optimization
dominance
efficient solution
Pareto frontier
pseudo-code
weak dominance
multistart
random
pseudo-code
semi-greedy
pseudo-code
N
neighborhood
neighborhood search
best-improving
first-improving
iterative improvement, see first-improving neighborhood search
variable neighborhood descent
O
objective function
optimization method
optimization problem
combinatorial
feasible set
feasible solutions
global
maximization
minimization
overlap graph
P
parallel GRASP heuristics, see parallel GRASP implementations
parallel GRASP implementations
computational results
2-path network design problem
job shop scheduling
three-index assignment
efficiency
multiple-walk cooperative-thread
centralized strategies
distributed strategies
multiple-walk independent-thread
load balancing
MAX-SAT problem
path-relinking
speedup
Steiner problem in graphs
speedup
parallel metaheuristics
particle swarm optimization
path-relinking
back-and-forward
backward
diversification
elite pool, see elite set
elite set
maintenance
pseudo-code
elite solution
evolutionary
pseudo-code
external
forward
knapsack problem
greedy randomized adaptive
ground set
guiding solution
implementation strategies
infeasibilities in
initial solution
intensification
minimum distance in
mixed
pseudo-codes
elite set maintenance
evolutionary
forward
mixed
mixed with feasible and infeasible moves
with GRASP
with GRASP with restarts
randomization in
restricted neighborhood
knapsack problem
minimum spanning tree problem
traveling salesman problem
search space graph
truncated
pattern-based construction
data mining
pseudo-code for GRASP with data mining
vocabulary building
penalty function
polynomial-time algorithm
private virtual circuit
private virtual circuit routing, see bandwidth packing
pseudo-codes
adaptive greedy algorithm
maximum clique problem
minimum cardinality set covering
nearest neighbor heuristic for the traveling salesman problem
Prim's algorithm for minimum spanning tree
Steiner tree problem in graphs
continuous GRASP
approximate discrete line search
construction phase
continuous local search
elite set maintenance
graph planarization
first phase of GT heuristic
GRASP
GRASP construction
GRASP local search
planar subgraph enlargement
second phase of GT heuristic
GRASP for minimization
GRASP for multiobjective optimization
GRASP with data mining
GRASP with evolutionary path-relinking
GRASP with path-relinking
GRASP with path-relinking (revisited)
GRASP with path-relinking for bandwidth packing
GRASP with path-relinking with restarts
GRASP with probabilistic stopping rule
greedy algorithm
distance network heuristic for the Steiner tree problem in graphs
Kruskal's algorithm for minimum spanning tree
maximum-weight independent set of a weighted matroid
Lagrangean heuristic
maximum cut problem
GRASP construction
GRASP local search
GRASP with path-relinking
path-relinking
path-relinking
evolutionary
forward
mixed
mixed with feasible and infeasible moves
random multistart
semi-greedy algorithm
semi-greedy multistart
variable neighborhood descent local search
pseudo-polynomial algorithms
PVC, see private virtual circuit
PVC routing, see private virtual circuit routing
R
randomized-greedy, see semi-greedy
RCL, see restricted candidate list
reactive GRASP
diversification
dynamic restricted candidate list parameter
parameter tuning
robustness
selection probabilities
repair procedures
restart strategies
pseudo-code
runtime distributions
restricted candidate list
bias functions
exponential bias
linear bias
log bias
polynomial bias
random bias
cardinality based
cardinality-based parameter
distribution of solution values
quality based
quality-based parameter
restricted neighborhood
knapsack problem
minimum spanning tree problem
traveling salesman problem
routing and wavelength assignment problem
runtime distributions
comparing algorithms with exponential runtime distributions
comparing algorithms with general runtime distributions
2-path network design problem
routing and wavelength assignment problem
server replication for reliable multicast problem
comparing parallel algorithms
graphical methodology for data analysis
GRASP
exponential distribution
shifted exponential distribution
two-parameter exponential distribution, see shifted exponential distribution
GRASP with path-relinking
lower quartile
outliers
Q-Q plots, see quantile-quantile plots
quantile-quantile plots
quantiles
shift estimate
slope estimate
target value
upper quartile
variability information
S
scatter search
maximum cut problem
search space graph
semi-greedy algorithm
cardinality-based RCL
distribution of solution values
greedy construction
probabilistic choice of RCL parameter
decreasing
greedy
reactive
uniform
pseudo-code
quality-based RCL
random construction
random plus greedy construction
reactive construction
restricted candidate list
bias functions
sampled greedy construction
server replication for reliable multicast problem
short-term memory tabu search
shortest path problem
characterization
formulation
ground set
motivation
simulated annealing
solution approaches
approximation algorithms
constructive heuristics
dynamic programming
genetic algorithms
greedy algorithm
greedy randomized adaptive search procedures
heuristics
integer programming
local search
metaheuristics
multistart
polynomially solvable special cases
2-SAT
planar clique
pseudo-polynomial algorithms
semi-greedy algorithms
simulated annealing
tabu search
solution representation
incidence vector
generalized incidence vector
permutation
steepest-ascent mildest-descent
Steiner tree problem in graphs
adaptive greedy algorithm
pseudo-code
formulation
greedy algorithm
pseudo-code
ground set
motivation
solution representation
Steiner nodes
Steiner tree
symmetric difference
T
tabu search
bandwidth packing
maximum cut problem
three-index assignment problem
time-to-target plots, see runtime distributions
traveling salesman problem
adaptive greedy algorithm
pseudo-code
characterization
concise certificate
cost function update
2-opt
3-opt
formulation
ground set
heuristics
motivation
neighborhood
restricted neighborhood
solution representation
versions
TTT-plots, see time-to-target plots
2-path network design problem
applications
complexity
GRASP
construction
local search
parallel implementation
path-relinking
greedy heuristic
2-SAT
V
variable neighborhood descent
variable neighborhood search
virtual circuit routing
Lagrangean heuristic
virtual private network
VND, see variable neighborhood descent
VNS, see variable neighborhood search
|
TEXAS COURT OF APPEALS, THIRD DISTRICT, AT AUSTIN
NO. 03-00-00243-CR
Edward Bell, Appellant
v.
The State of Texas, Appellee
FROM THE DISTRICT COURT OF TOM GREEN COUNTY, 119TH JUDICIAL DISTRICT
NO. B-98-0433-S, HONORABLE DICK ALCALA, JUDGE PRESIDING
Appellant Edward Bell appeals his conviction for capital murder. Tex. Pen. Code
Ann. § 19.03(a)(3) (West 1994). After the jury found appellant guilty, the trial court assessed
punishment at life imprisonment, the State having waived the death penalty.
Points of Error
Appellant advances three points of error. First, appellant challenges the legal
sufficiency of the evidence to support his conviction for capital murder. Second, appellant urges that
the trial court erred in overruling his objection to an application paragraph of the jury charge relating
to parties in this capital murder case for remuneration. Appellant contends that he could not be
guilty as a party when the primary actor, Luis Ramirez, was not guilty of this aggravated element
(remuneration) of the offense charged. Third, appellant contends that under Rule 403 of the Texas
Rules of Evidence, the trial court erred in admitting into evidence a handwritten note of Luis
Ramirez found in appellant's wallet because its probative value was outweighed by the danger of
unfair prejudice and the needless presentation of cumulative evidence. We will affirm the
conviction.
Indictment
The indictment in pertinent part charged that appellant on or about April 8, 1998 "did
then and there intentionally cause the death of an individual, namely, Nemecio Nandin, by shooting
the said Nemecio Nandin with a deadly weapon, to wit: a firearm, for remuneration and the promise
of remuneration from Luis Ramirez."
Background
The facts of the case are important in view of appellant's claim that the evidence is
legally insufficient to sustain the capital murder conviction, and to place the other points of error in
proper perspective.
On April 16, 1998, the decomposing body of twenty-nine-year-old Nemecio Nandin
was found buried in a shallow grave near the home of Richard and Lana Riordon on a fifty-acre tract
between Orient and Tennyson in far northeast Tom Green County. The medical examiner
determined that the cause of death was two shotgun blasts to the head, at least one of which was
caused by a twenty-gauge shotgun.
Nandin was a firefighter for the San Angelo Fire Department. His off-duty business
was repairing washing machines and dryers. Nandin was last heard from around noon on April 8,
1998, when he called his girlfriend, Carla (1) Bewick, to tell her he was headed to a business call to
repair a washer and a dryer, and that he would call her when he got back into San Angelo that
evening. Bewick, who testified that her relationship with Nandin was an on-again, off-again one,
related that when Nandin did not call as promised and did not respond to her calls to his cell phone
or pager, she became concerned. Later in the day, when she went to the Wal-Mart store on North
Bryant Street in San Angelo, she saw Nandin's pick-up truck with a washer and dryer parked near
the rear of the store. The truck was unlocked. Nandin was not to be found in the store. When
Bewick returned to the store later in the evening, the truck was still there. She was unable to contact
Nandin the next morning at the fire department. He had not reported for work. His co-workers
organized a search. As noted, his body was found on April 16, 1998.
Dawn Ramirez Holquin had remarried after the offense but prior to the time of the
trial. She testified that she married Luis Ramirez in 1985. They had two children and lived in Las
Vegas, Nevada, before moving to San Angelo in 1992. She related that Luis was an angry, jealous,
and domineering man who had to be in control of everything; that their marriage was very turbulent,
very violent and "not happy"; and that she got a divorce in November 1995. Dawn testified that she
did not date for a long period after the divorce because of Luis's jealousy. He told her that "it drove
him crazy to think that you are with other men."
Dawn related that she first met Nemecio Nandin in June 1995 when he repaired her
dryer. She encountered him again in October 1997 when he came to the Town and Country store,
where she was employed, to purchase hot chocolate. He called her at work that evening to thank her
for the hot chocolate. Luis was in the store at the time. She and Nandin began dating. Dawn
decided to move to Austin to get away from Luis. On Saturday, December 20, 1997, Nandin came
by her San Angelo home at 7:30 a.m. to tell her goodbye. While Nandin was there, Luis began
telephoning, telling Dawn to get rid of whomever was there. She learned that Luis was calling from
a grocery store two blocks away. After Luis had called seven or eight times, Nandin took the phone
and told Luis to leave Dawn alone and that her life was none of Luis's business. Later, Dawn heard
a noise and opened the front door to find her children there. She saw Luis driving away. It was
Luis's weekend with the children and he had only picked them up the night before.
Dawn revealed that she saw Nandin again on December 26, 1997, when she brought
the children to San Angelo for their visitation with their father; that on the last week-end in March
1998 Nandin came to Austin to see her; that the following week-end in early April 1998, her children
returned from their visitation with their father; that her daughter reported that her father (Luis) had
become upset when told the children liked Nandin; and that Luis had stated he would take care of
the problem. Dawn related that Luis had once told her that "if he found out that I was with another
man, he would kill him and then come after me."
From around Thanksgiving Day of 1997 until the early part of April 1998, appellant
Bell, his girlfriend, Lisa McDowell, and a young child lived with the Riordons on their property near
the Tennyson Road where Nandin's body was later found. Due to the objections of the Riordons'
landlord, appellant, McDowell, and the child moved in with Timothy and Nicole Hoogstra in their
house in San Angelo. Hoogstra and appellant had been co-workers on several different construction
sites. Neither appellant nor McDowell had jobs at the time they moved. Shortly after the move
when Hoogstra was working on his truck, appellant told him that a man named "Luis" was going to
hire him (appellant) to kill a fireman for $1,000.00. Later, appellant told Hoogstra that he "had done
it" and to watch for it on the news. When Hoogstra learned that Nandin, a fireman, was missing, he
questioned appellant about the matter. Appellant told Hoogstra that "they" (appellant Bell and Luis
Ramirez) had lured Nandin to a location on Tennyson Road to service a washer and dryer; that
"they" put a gun to Nandin's head and placed handcuffs on Nandin; and that "they" blew Nandin's
brains out with a shotgun and buried Nandin's body behind a chicken house. Appellant explained
that the ground was "real hard," Nandin was a big man, and they had a difficult time in burying the
body. Hoogstra recalled that appellant "giggled" when relating that "they" blew the fireman's brains
out. While appellant did not specifically say who fired the shots, he later told Hoogstra that Luis
Ramirez had pulled the trigger. Appellant explained that he and Luis had done this to Nandin
because the fireman was seeing Luis's wife, or Luis was seeing the fireman's wife, "one way or the
other."
Appellant also told Hoogstra that after the burial, he put on gloves and drove
Nandin's pick-up truck back to San Angelo and parked it at the rear of the old Wal-Mart store on
North Bryant. Appellant left the washer and dryer in the truck, but threw away Nandin's cell phone
and pager. Later, appellant gave Hoogstra a pair of handcuffs which he said had been used on
Nandin. After appellant left for Tyler, Hoogstra gave the handcuffs to the police. These handcuffs
were shown to be the same brand of handcuffs purchased by Luis Ramirez in 1991 in Las Vegas,
Nevada, where Ramirez worked briefly as a security guard.
On April 14 or 15, 1998, Hoogstra went with Lee Westerman to do construction work
on a lake house in Brownwood. Hoogstra told Westerman of his conversation with appellant.
Westerman had Hoogstra call Crime Stoppers from Brownwood. Either that day or another in April
when Westerman brought Hoogstra home, Westerman met appellant. Appellant told Westerman that
the police had been at Hoogstra's home that day, and he did not know whether it was because of a
fight with his girlfriend, or whether it was for the "other thing" that would "get him either life or
death in prison."
Lisa McDowell, appellant's girlfriend, related that early on the morning of April 8,
1998, she and appellant had gone to the Riordons' home on Tennyson Road to pick up some clothes
they had left behind when they moved. After the Riordons went to work, she got into an argument
with appellant. She left him behind and took her infant son and drove to her aunt's house in San
Angelo. That afternoon, Luis Ramirez dropped appellant off at the aunt's house. Appellant and
McDowell then drove to the Miles High School to pick up the Riordon children from school as
McDowell had promised. On the way, McDowell observed appellant throw a latex glove or gloves
out of the car window. Sometime later, McDowell returned to this particular stretch of highway with
police officers. A latex glove was found as well as the keys to Nandin's pick-up truck.
On April 16, 1998, appellant and McDowell left San Angelo for Tyler. He
supposedly had a construction job in the area. Appellant had $200 which he told McDowell had
been given to him by Luis Ramirez. Appellant was arrested in a trailer park near Whitehouse, south
of Tyler. Appellant and McDowell both signed consent forms giving the officers permission to
search the Mercury Cougar automobile in which they had traveled. In the search, the officers found
appellant's wallet near the front seat. In it were two business cards of Luis Ramirez, and a
handwritten note containing directions to two residences in Austin and descriptions of three motor
vehicles. According to Dawn Ramirez Holquin, Ramirez's ex-wife, the note was in Luis Ramirez's
handwriting and the residences and vehicles described belonged to her and her uncle.
In the trunk of the searched automobile, a pair of Bugle Boy jeans were found. After
testing, it was determined that the DNA from the blood stains on the jeans matched the DNA of the
deceased, Nemecio Nandin. Ginger Herring, Luis Ramirez's girlfriend, testified that Ramirez owned
a pair of Bugle Boy jeans. McDowell testified that when she and appellant were throwing things into
a trash dumpster before leaving San Angelo, she found the jeans in the trunk of the car, that they
were not appellant's, but he would not let her throw them away.
A search of the Riordon house uncovered three shotguns--two twenty-gauge
shotguns and one sixteen-gauge shotgun. Lana Riordon testified that the two twenty-gauge shotguns
were in working order, but not the sixteen-gauge shotgun. She related that appellant and McDowell
had arrived at her house early on the morning of April 8, 1998, and when she returned that evening
one of the twenty-gauge shotguns had been moved from its regular place and left in another room.
After his arrest, appellant wrote a letter to McDowell from the Tom Green County
jail. McDowell kept the letter two months before revealing it to a friend. The letter was introduced
into evidence describing, inter alia, how Nandin was killed by Ramirez in appellant's presence, how
the body was buried, how he was given money by Ramirez which had been taken out of Nandin's
wallet, and how he drove Nandin's truck to the Wal-Mart store. Appellant relied upon some of the
self-serving statements in the letter introduced by the State to support his defense of duress.
Appellant offered no defensive testimony.
Jury Charge--Parties
In submitting the case to the jury, the trial court inter alia, instructed the jury
abstractly on the law of parties. In paragraph 4, an application paragraph, the trial court charged the
jury on capital murder in accordance with the allegations of the indictment. Then separating the
paragraph with an "OR," the trial court applied the law of parties to the capital murder. In paragraph
5, the trial court likewise submitted the lesser included offense of murder in the same fashion. The
trial court also charged the jury on the affirmative defense of duress.
Legal Sufficiency
In his first point of error, appellant contends that the evidence is legally insufficient
to support the capital murder conviction because (1) the evidence is legally insufficient to show that
"he was the triggerman," and (2) insufficient to show his guilt as a party "because there was no
evidence that the principal, Luis Ramirez, committed the murder for remuneration." By his
expressed contention, appellant challenges only the legal sufficiency of the evidence.
The Standard of Review
The standard for reviewing the legal sufficiency of evidence is whether, viewing the
evidence in the light most favorable to the jury's verdict, any rational trier of fact could have found
beyond a reasonable doubt all the essential elements of the offense charged. Jackson v. Virginia, 443
U.S. 307, 319 (1979); Skillern v. State, 890 S.W.2d 849, 879 (Tex. App.--Austin 1994, pet. ref'd).
The standard of review is the same in both direct and circumstantial evidence cases. King v. State,
895 S.W.2d 701, 703 (Tex. Crim. App. 1995); Green v. State, 840 S.W.2d 394, 401 (Tex. Crim.
App. 1992). The State may prove its case by circumstantial evidence if it proves all of the elements
of the charged offense beyond a reasonable doubt. Easley v. State, 986 S.W.2d 264, 271 (Tex.
App.--San Antonio 1998, no pet.) (citing Jackson, 443 U.S. at 319). The sufficiency of the evidence
is determined from the cumulative effect of all the evidence; each fact in isolation need not establish
the guilt of the accused. Alexander v. State, 740 S.W.2d 749, 758 (Tex. Crim. App. 1987). It is
important to remember that all the evidence the jury was permitted, properly or improperly, to
consider must be taken into account in determining the legal sufficiency of the evidence. Garcia v.
State, 919 S.W.2d 370, 378 (Tex. Crim. App. 1994); Johnson v. State, 871 S.W.2d 183, 186 (Tex.
Crim. App. 1993); Rodriguez v. State, 939 S.W.2d 211, 218 (Tex. App.--Austin 1997, no pet.).
The jury is the exclusive judge of the facts proved, the weight to be given the
testimony, and the credibility of the witnesses. See Tex. Code Crim. Proc. Ann. art. 38.04 (West
1979); Alvarado v. State, 912 S.W.2d 199, 207 (Tex. Crim. App. 1995); Adelman v. State, 828
S.W.2d 418, 421 (Tex. Crim. App. 1992). The jury is free to accept or reject any or all of the
evidence presented by either party. Saxton v. State, 804 S.W.2d 910, 914 (Tex. Crim. App. 1991).
The jury maintains the power to draw reasonable inferences from basic facts to ultimate facts. Welch
v. State, 993 S.W.2d 690, 693 (Tex. App.--San Antonio 1999, no pet.); Hernandez v. State, 939
S.W.2d 692, 693 (Tex. App.--Fort Worth 1997, pet. ref'd). Moreover, the reconciliation of
evidentiary conflicts is solely within the province of the jury. Heiselbetz v. State, 906 S.W.2d 500,
504 (Tex. Crim. App. 1995).
Under the Jackson standard, the reviewing court is not to position itself as a thirteenth
juror in assessing the evidence. Rather, it is to position itself as a final due process safeguard
insuring only the rationality of the fact finder. Moreno v. State, 755 S.W.2d 866, 867 (Tex. Crim.
App. 1988). It is not the reviewing court's duty to disregard, realign, or weigh the evidence. Id. The
jury's verdict must stand unless it is found to be irrational or unsupported by more than a "mere
modicum" of evidence, with such evidence being viewed in the Jackson light. Id. The legal
sufficiency of the evidence is a question of law. McCoy v. State, 932 S.W.2d 720, 724 (Tex.
App.--Fort Worth 1996, pet. ref'd).
Law of Parties
"A person is criminally responsible as a party to an offense if the offense is committed
by his own conduct, by the conduct of another for which he is criminally responsible, or by both."
Tex. Pen. Code Ann. § 7.01(a) (West 1994); Goff v. State, 931 S.W.2d 537, 544 (Tex. Crim. App.
1996). Thus, under the law of parties, the State is able to enlarge a defendant's criminal
responsibility to acts in which he may not be the primary actor. Tex. Pen. Code Ann. § 7.01(b)
(West 1994); Romo v. State, 568 S.W.2d 298, 300 (Tex. Crim. App. 1978) (op. on reh'g); Rosillo
v. State, 953 S.W.2d 808, 812 (Tex. App.--Corpus Christi 1997, pet. ref'd). The law of parties may
be applied even though no such allegation is contained in the indictment. Jackson v. State, 898
S.W.2d 896, 898 (Tex. Crim. App. 1995); Montoya v. State, 810 S.W.2d 160, 165 (Tex. Crim. App.
1989); Pitts v. State, 569 S.W.2d 898, 900 (Tex. Crim. App. 1978); Howard v. State, 966 S.W.2d
821, 824 (Tex. App.--Austin 1998, pet. ref'd).
Section 7.02 of the Texas Penal Code provides in pertinent part:
a person is criminally responsible for an offense committed by the conduct of
another if
* * *
acting with intent to promote or assist the commission of the offense, he
solicits, encourages, directs, aids, or attempts to aid the other person to
commit the offense.
Tex. Pen. Code Ann. § 7.02(a)(2) (West 1994).
The test for determining when an instruction should be submitted to the jury on the
law of parties was set forth in McCuin v. State, 505 S.W.2d 827, 830 (Tex. Crim. App. 1974), and
quoted in Goff, 931 S.W.2d at 544-45. It need not be restated here. When the evidence is sufficient
to support both the primary actor and party theories of liability, as in the instant case, the trial court
does not err in submitting an instruction on the law of parties. Ransom v. State, 920 S.W.2d 288,
302 (Tex. Crim. App. 1994); Webb v. State, 760 S.W.2d 263, 267, 275 (Tex. Crim. App. 1988);
Rosillo, 953 S.W.2d at 814. Evidence is sufficient to convict under the law of parties if the
defendant is physically present at the commission of the offense and encourages its commission by
words or other agreement. Ransom, 920 S.W.2d at 302; Rosillo, 953 S.W.2d at 814. Clearly, there
was more evidence here than encouragement by appellant Bell so as to constitute him a party, if in
fact Luis Ramirez was the primary actor.
The trial court submitted the theory to the jury that appellant was the primary actor
in killing Nandin for remuneration as alleged in the indictment. In the alternative, the trial court
submitted the theory of parties if the jury found that Luis Ramirez was the primary actor and that
appellant for remuneration aided Ramirez in the killing.
The jury returned a general verdict--"We, the jury find the defendant EDWARD
BELL, guilty of CAPITAL MURDER as charged in the indictment."
Appellant argues, however, that the evidence is insufficient to show that he was the
primary actor as alleged. Appellant told Hoogstra that he had been hired to kill a fireman for $1,000.
He later told Hoogstra that "he (appellant) had done it" and to watch the news. Still later, appellant
told Hoogstra that "they" had lured Nandin to the location of the killing, "they" had handcuffed
Nandin, and "they" had blown Nandin's brains out with a shotgun. Nandin died from two shotgun
blasts, one of which was a twenty-gauge shotgun. The medical examiner, because of the body's
condition, could not determine what type of shotgun inflicted the other wound. Two twenty-gauge
shotguns in working condition were found at the location of the murder. The medical examiner was
unable to determine if the same shotgun was used to inflict both wounds. The deadly blasts could
have been fired from separate shotguns, one by appellant and one by Ramirez, or could have been
fired by the same shotgun. The jury as the trier of fact could have found that appellant was a primary
actor. In fact, the jury could have found from the evidence that appellant and Ramirez were both
primary actors. It was only later that appellant told Hoogstra that Ramirez pulled the trigger. This
fact was reiterated in appellant's jail letter to McDowell. The jury was not required to accept this
evidence that Ramirez was the lone gunman. Given all the circumstances, we reject appellant's
claim that the evidence was insufficient for a rational jury to find beyond a reasonable doubt that
appellant was a primary actor in this capital murder.
Appellant also urges that if the evidence was sufficient to show that he was a party
to the offense for remuneration, the evidence is insufficient to show that he was guilty of capital
murder. Appellant argues that this is so because the indictment's allegation of an aggravating
element making murder a capital offense was remuneration or the promise of remuneration.
Appellant contends that the primary actor was Ramirez and that the court's charge
did not require the jury to find that Ramirez murdered Nandin for remuneration or the promise of
remuneration, and that without that requirement and a finding by the jury, the offense was not a
capital offense. Therefore, appellant asserts that if he is guilty as a party to the offense, it is not a
capital offense of which he is guilty and that the conviction imposed cannot stand. Appellant
overlooks the law of parties and the fact that section 19.03(a)(3) provides that a person who commits
murder and "commits the murder for remuneration or the promise of remuneration or employs
another to commit the murder for remuneration or the promise of remuneration" is guilty of capital
murder. Tex. Pen. Code Ann. § 19.03(a)(3) (West 1994). (Emphasis added.) We do not understand
appellant to challenge the remuneration issue. In this regard, appellant cites no authorities and there
is no argument except that inherent in his assertion. This briefing fails to comply with Rule 38.1(h)
of the Texas Rules of Appellate Procedure.
When different theories are submitted to the jury in the disjunctive as in the instant
case, a general verdict is sufficient if the evidence supports one of the theories. Fuller v. State, 827
S.W.2d 919, 931 (Tex. Crim. App. 1992); Kitchens v. State, 823 S.W.2d 256, 257-58 (Tex. Crim.
App. 1991); see also Ladd v. State, 3 S.W.3d 547, 557 (Tex. Crim. App. 1999); Rabbani v. State,
847 S.W.2d 555, 558-59 (Tex. Crim. App. 1992). We conclude that a rational jury could have found
beyond a reasonable doubt all the essential elements of the offense charged under either of the two
theories submitted to the jury. The first point of error is overruled.
Jury Charge Error Claimed
In his second point of error, appellant contends that the "trial court committed
reversible error by overruling appellant's objection to the application paragraph of the charge." In
his brief, appellant directs us to the application of the law of parties to the facts in the second part
of paragraph 4 of the jury charge. Appellant, in his trial objection, contended that the State was not
entitled to a charge on the law of parties, that appellant was entitled to a "straight up" charge--"he
either did capital murder or he did not, under this indictment." Appellant's argument was that he
was charged with capital murder for remuneration or promise of remuneration and that Luis
Ramirez, the primary actor in any parties charge, was not shown to have committed murder for
remuneration or promise of remuneration. Therefore, appellant contends that he could not be a party
to the offense committed by Ramirez. This is the same contention that appellant advanced in his
challenge to the sufficiency of the evidence. Murder is defined in section 19.02(b)(1). Tex. Pen.
Code Ann. § 19.02(b)(1) (West 1994). Murder becomes capital murder if it is committed under
certain circumstances, limited and set forth in section 19.03. Id. § 19.03. Appellant was charged
under section 19.03(a)(3) which provides that murder becomes a capital offense if "the person
commits the murder for remuneration or promise of remuneration or employs another to commit the
murder for remuneration or promise of remuneration." (Emphasis added.) As noted earlier, the law
of parties may be applied even without any parties allegation in the indictment. Pitts, 569 S.W.2d
at 900. There was evidence that Ramirez employed appellant to murder Nandin for remuneration
or promise of remuneration, thus indicating his guilt under the very same subsection of section 19.03
under which appellant was charged. In addition, the evidence shows Ramirez was a primary actor
in the murder of Nandin along with his hired help--appellant. Appellant cites no authorities
supporting his contention (2) and his argument is of little persuasion. We find no error in the trial
court's action in overruling the objection to the jury charge. The second point of error is overruled.
The Note
In the third point of error, appellant contends that the "trial court erred in admitting
into evidence an extraneous offense or bad act, in violation of Rule 403 of the Rules of Evidence."
Despite the language, appellant presents an issue under Rule 403, not Rule 404(b). (3) Appellant has
reference to the introduction of a handwritten note found in appellant's wallet during the search of
the Mercury Cougar automobile in east Texas when appellant was arrested.
Before the jury, Jack Allen, Special Investigator of the Texas Department of Public
Safety, identified the note on a yellow piece of paper and two business cards of Luis Ramirez as
being found in the wallet. When the State sought to introduce the note into evidence during the redirect examination of Dawn Ramirez Holquin, a hearing in the absence of the jury was conducted.
The trial court overruled appellant's Rule 403 objection and his subsequently urged Rule 404(b)
objection.
Appellant complains that the introduction of the note "permitted the jury to speculate
about an unaccomplished extraneous offense, the murder of Luis Ramirez's ex-wife." Appellant
relies upon the note and a threat made by Ramirez to his ex-wife, Dawn. Appellant concedes,
however, that the note is relevant evidence. (4)
He argues that in light of Rule 403 the note's probative
value was outweighed by the danger of unfair prejudice and the needless presentation of cumulative
evidence.
The note in question, which was in the handwriting of Ramirez according to Dawn,
gave highway directions from Brady to Austin and further directions to two different street addresses
in Austin and a general description of three motor vehicles. Dawn testified that the street addresses
were those of her home and her uncle's home. Two of the vehicles described belonged to Dawn's
uncle and the other was hers.
Dawn had earlier testified that Ramirez had told her it drove him crazy to think of her
with another man, and that if he found her with another man that he would kill him "and come after
her."
Rule 403 provides:
Although relevant, evidence may be excluded if its probative value is substantially
outweighed by the danger of unfair prejudice, confusion of the issues, or misleading
the jury, or by consideration of undue delay, or needless presentation of cumulative
evidence.
Tex. R. Evid. 403.
The factors listed in Rule 403 are the only ones to be balanced against probative
value. The absence of "such as" or a similar introductory phrase indicates that the
listed factors are not merely examples. The first three factors--unfair prejudice,
confusion and misleading--are termed "dangers," whereas delay and cumulativeness
are referred to as "considerations." The former are weightier because they threaten
the integrity of the fact-finding process, whereas the latter are concerned with
efficiency.
2A Steven Goode, et al., Texas Practice: Courtroom Handbook on Texas Evidence 283 (2001 ed.).
Rule 401 defines "relevant evidence." Tex. R. Evid. 401. (5) Rule 402 pronounces all
relevant evidence admissible unless barred by certain constitutional and statutory provisions. Tex.
R. Evid. 402. Rule 403 places discretion in the trial court to temper somewhat the breadth of Rule
402. The trial court is authorized to weigh the probative value of relevant evidence against the
countervailing dangers and considerations set forth in Rule 403. Rule 403 favors the admissibility
of evidence in close cases in keeping with the presumptions of admissibility of relevant evidence.
Moreno v. State, 22 S.W.3d 482, 487 (Tex. Crim. App. 1999) (quoting Montgomery, 810 S.W.2d
at 389); Mozon v. State, 991 S.W.2d 841, 847 (Tex. Crim. App. 1999). The standard of review of
a trial court's decision under Rule 403 is an abuse of discretion. The trial court's ruling will be
upheld so long as it is within "the zone of reasonable disagreement." Lane v. State, 933 S.W.2d 504,
520 (Tex. Crim. App. 1996); see also Moreno, 22 S.W.3d at 487; Poole v. State, 974 S.W.2d 892,
897 (Tex. App.--Austin 1998, pet. ref'd).
Unfair Prejudice
We turn now to appellant's claim that the probative value of the note was
substantially outweighed by the danger of unfair prejudice. Any evidence presented by the
prosecution will generally be prejudicial to the defendant in a criminal case. Wyatt v. State, 25
S.W.3d 18, 26 (Tex. Crim. App. 2000); Ford v. State, 26 S.W.3d 669, 675 (Tex. App.--Corpus
Christi 2000, no pet.). "In one sense, almost every piece of evidence introduced at trial can be said
to be prejudicial to one side or the other." Blakeney v. State, 911 S.W.2d 508, 516 (Tex.
App.--Austin 1995, no pet.). Consequently, only evidence that is unfairly prejudicial is excluded.
Cabellero v. State, 919 S.W.2d 919, 922 (Tex. App.--Houston [14th Dist.] 1996, pet. ref'd);
Blakeney, 911 S.W.2d at 516. Unfairly prejudicial evidence is that which has an undue tendency to
suggest a decision be made on an improper basis, commonly an emotional one. Montgomery, 810
S.W.2d at 389; Cabellero, 919 S.W.2d at 922.
Appellant contends that the note in Ramirez's handwriting and found in appellant's
possession, when taken with Ramirez's threat to Dawn, could have caused the jury to speculate a
second conspiracy was afoot. Thus, the introduction of the note resulted in unfair prejudice. The
note was not signed and not dated. It was not addressed to anyone. Dawn moved to Austin in
December 1997. She testified that Ramirez had visitation rights with their children and he had
returned them to Austin one weekend. Dawn related that Nandin visited her in Austin the last
weekend in March 1998 before his death on April 8, 1998. The time of the threat against all of
Dawn's boyfriends was not established, but it was apparently after the Ramirez divorce in 1995.
Appellant urges that we apply the four-pronged relevance criteria set forth in
Montgomery, 810 S.W.2d at 389-91, for determining whether the probative value of an extraneous
offense is substantially outweighed by the danger of unfair prejudice. Appellant overlooks the fact
that the introduction of the note, even taken together with the earlier threat by Ramirez, does not
constitute an extraneous offense by appellant Bell (6) to which the relevance criteria would apply. (7)
The probative value of the note found in appellant's wallet with Ramirez's business
cards was not substantially outweighed by the danger of unfair prejudice.
Cumulative Evidence
Unlike the "danger" of unfair prejudice, the "consideration" of the needless
presentation of cumulative evidence found in Rule 403 is concerned with the efficiency of judicial
proceedings rather than the threat of inaccurate decisions. See Alvarado v. State, 912 S.W.2d 199,
212-13 (Tex. Crim. App. 1995). "Cumulativeness" alone is not a basis for the exclusion of evidence
under Rule 403. The rule speaks to the "needless presentation" of such evidence. In this regard, trial
courts should be sensitive to a party's "right" to make its case in the most persuasive manner
possible. Etheridge v. State, 903 S.W.2d 1, 20-21 (Tex. Crim. App. 1994); see also generally 1
Steven Goode, et al., Texas Practice: Texas' Rules of Evidence: Civil and Criminal § 403.2 (1993).
Cumulative evidence suggests that other evidence on the same point has already been
received. But this alone is not a basis for exclusion. The cumulative effect of the evidence may
heighten rather than reduce its probative value. Further, evidence that may be cumulative as to one
point may not have that characteristic as to another material point. Goode, § 403.2.
In this "murder for hire" scenario, the State bore the burden of proof beyond a
reasonable doubt and the link between appellant and Ramirez was of vital importance. The strongest
evidence of a link came from Hoogstra and appellant's jail letter introduced by the State. Appellant
impeached and discredited Hoogstra by showing, inter alia, that Hoogstra had received hundreds of
dollars from several law enforcement agencies early on in the investigation. Appellant also used
statements in his jail letter to support his defense of duress. If it can be argued that appellant's
possession of the note written by Ramirez was cumulative evidence of the link between the two, it
was not inadmissible on the basis of needless presentation of cumulative evidence. Moreover, the State
urges that the note was admissible to rebut the defense of duress.
We conclude that the trial court did not abuse its discretion in overruling appellant's
Rule 403 objection. The third point of error is overruled.
The judgment is affirmed.
John F. Onion, Jr., Justice
Before Justices Kidd, Puryear and Onion*
Affirmed
Filed: November 29, 2001
Do Not Publish
* Before John F. Onion, Jr., Presiding Judge (retired), Court of Criminal Appeals, sitting by
assignment. See Tex. Gov't Code Ann. § 74.003(b) (West 1998).
1. The name was also spelled "Karla" in the record.
2. Appellant does cite Hutch v. State, 922 S.W.2d 166, 170-71 (Tex. Crim. App. 1996), for
the nature of errors which result in "egregious harm." "Egregious harm" applies to a review of
charge error which was not preserved by timely objection. See Almanza v. State, 686 S.W.2d 157,
171 (Tex. Crim. App. 1985) (op. on reh'g). Here, appellant timely objected to the court's charge.
3. Tex. R. Evid. 404(b).
4. Appellant's brief states:
The note itself was undoubtedly relevant evidence establishing a link between
Ramirez and appellant.
5. Rule 401 states:
"Relevant evidence" means evidence having any tendency to make the existence
of any fact that is of consequence to the determination of the action more
probable or less probable than it would be without the evidence.
6. See Brown v. State, 6 S.W.3d 571, 575 (Tex. App.--Tyler 1999, pet. ref'd); Conner v.
State, 891 S.W.2d 668, 671 (Tex. App.--Houston [1st Dist.] 1994, no pet.).
7. Appellant acknowledges that subsequent to the introduction of the note, the trial court
refused to admit the testimony of Ginger Herring, Ramirez's girlfriend, to the effect that she
overheard Ramirez and appellant talking about going to Austin and killing Ramirez's ex-wife, her
uncle and possibly her mother. This disallowed evidence was never before the jury.
|
Growth through embracing discomfort… I’ve had a wonderful winter back in the Gunnison Valley after 3+ years in Hawaii. My time in Hawaii gave me many unique opportunities to embrace discomfort. As a professional I progressively advanced my fundraising and public speaking skills and reconnected with my youth in poverty through the most powerful mission I’ve …
|
Trend Alert: Top Designer Wallpaper Sources
Posted by Deborah at dvdInteriorDesign, a Home Decor Website and Connecticut-based Design Blog and portfolio collection of the latest trends and products as seen at trade shows and in the marketplace, including ideas for stylish living and inspirations.
It's Back! Yes, wallpaper is back and it's back in a big way! Despite all those tiring memories of scraping that old yellow wallpaper from the past (1970?) off of many walls to freshen them up with a new application of paint, I am excited to add paper as an instant update to any room. The options available today have me forgetting those painful memories and wanting more. It's looking great in the entry, the bathroom, and on the ceiling. It is a welcome trend shift for an update.
So here are the latest prints and designs, let's take a look at everything from florals, to wood textures, murals and more.
Wallpaper is a visual obsession lately. Fashion is first, and home decor usually follows. I'm a little obsessed with wallpaper, so indulge me a little bit. I like patterns, color, and different motifs.
image via HGTV
Drop it Modern is the added surprise that completes this luxurious Bath for HGTV House of Bryon.
Green and white is a very popular color choice this year. (Design note) This reflects not only the color of the year, "Greenery," but also a renewed interest in motifs and a desire to connect with nature.
My go to company for bold graphic and fragmented patterns in wallpaper and fabrics is definitely Thibaut. They have many colors and usually a coordinating fabric option for upholstery and drapery.
interior design by Annie Anderson
The garden inspired wallpaper in this dining room/ library is sophisticated, warm and welcoming. (Design Tip) Notice the extra drama added by the orange banding, the finishing touch on this wonderful room.
WOW! Sometimes it takes a show house for a designer to show their creativity. I am in love with the color explosion in this room. The papered ceiling is the final detail that brings it all together with continuity.
With the advancements in digital printing, a new era of paper is available. This custom wallpaper was spotted in the Holiday House Stairway by Iris Dankner with Brooklyn-based Wallpaper Projects: a boutique design studio specializing in custom made wallpaper for Architects and Designers.
I especially like patterns and exotics, and lately find myself enamored with sexy, moody florals (as seen @elliecashmandesign). With the abilities offered through digital media, one can take a print and apply it to a variety of products.
Seen At the Architectural Digest Design Show
@flavorpaper rocks the house with this collaboration with @OvandoNY a wonderful NY florist. #adds2016 great wall drama #designhounds by @dvdinteriordesign
Because of the surface area it consumes, wallpaper packs a big punch. A wall covering is one of the easiest ways to add interest to any ... Here is a highlight from Instagram at the Architectural Digest Home Show. Flavor Paper
AP 7 - BLUSH
Area Environments: Artists creating environments with wallpaper.
Yes, this is art. Area Environments has been in business for over 25 years and has evolved into a booming art, fabrication, and print business that employs a growing number of artists and tradespeople.
"What sets us apart from other companies is that we are all artists creating wallpaper with artists' eyes. The care and time we take to create the best, most visually appealing paper has paid off for us; our customers recognize this as well and keep coming back for more. Starting Area is one of the most exciting things I've ever done. As we continue to seek out artists from around the world, our collection of unique designs will inspire new ways of relating to and using wallpaper. It is truly rewarding to share our love of art."
Great pattern for a ceiling.
This is a great way to add a graphic pattern to your space. The ceiling is often overlooked when it comes to thinking about pattern or color. The wall space has also been used wisely for storage and for maintaining a clean look.
Add personality to a bookcase.
This wallpaper with its bold pattern adds personality and a pop to an otherwise nondescript bookcase.
"Lindsay Cowles Voted one of the “Top Emerging Artists” by Art Business News."
This is the paper you saw! Often someone will ask for a paper recommendation and while I have no favorites, this is one of my favorites. Lindsay is a contemporary abstract expressionist based in Richmond, VA. She paints large-scale abstract paintings with bright, bold color and energetic movement.
Root Cellar Designs is a boutique Print Studio of wallpaper and fabrics founded and curated by friend and fellow interior designer Tamara Stephenson and her partner Susan Young. Shown above is a current installation in a powder room for Holiday House, NYC.
Beautiful installation by Zoe Design. This looks like a handpainted mural, but it is finely detailed and 1/2 the cost since it is printed.
Another method for a fabric wall finish is stenciling! This Fortuny Wall Stencil is so named because the pattern is reminiscent of ones used in the famed Fortuny silks from Italy. It is perfect for creating fabric finishes, like the subtle stria technique shown here, done with solid, high contrast, or with the "lost and found" edges technique of the Faded Damask.
Watercolor leaf pattern by Pixersize
Eco-friendly and with free delivery, Pixersize not only has beautiful arrangements, they are also a great resource for furniture makeovers, temporary papers, and custom arrangements!
Yes, the use of toile is beautiful in new places such as the bedroom, with a romantic feeling accented with love birds and your lover's name carved into the trunk of one of the trees. This romantic scenery is from the design studio of Rebel Walls.
Also, come join our Facebook group where we chat all things HOME! You can find it right here. Share your favorite wallpaper or let us know if you purchased one of the latest introductions for your home.
Please Note: dvd Interior Design participates in affiliate advertising programs designed to provide a small source of income to support my site. This is at no additional cost to you, my reader, but very helpful in providing me with an opportunity to continue this site. Thank you for your support. Deborah
Change your Home, Change your Life.
Interested in seeing how we can transform your home?
At dvd Interior Design, we help you create your personal paradise, one that makes you happy to be home every day. We offer full interior design and home outfitting services. We furnish and decorate, but there's so much more to what our interior design team will do. We can fully outfit your home with all the essentials, from furniture, lighting, and rugs to bed linens, dishes, silverware, and towels. Everything you need for a fully custom luxury living experience.
Contact us today to discuss how we can help!
email : dvd2design@gmail.com
Recent Posts
Seen and heard at High Point Market. As I toured around High Point Market and visited with a variety of our vendors, I couldn't help but feel the traditional induction into the insiders market. The journey to North Carolina for High Point Market is a tradition for Interior Designers as we.... can make your head spin a little...
Designing the interior of your home should be enjoyable. Unfortunately, it is not that easy to do well without the experience and understanding for interior spaces, how they should flow, the best construction investments and design direction for a home that suits your personality and the needs of your lifestyle. I am surprised by..
HOUZZ
LET'S GET STARTED!
FULL-SERVICE INTERIOR DESIGN
Based just outside of New York City with projects ranging from New York to Connecticut to Los Angeles, dvd Interior Design Studio is a full-service boutique interior design firm specializing in the design of entire houses and individual rooms. We work closely with architects and builders to create bespoke homes that become the backdrop for our clients’ lives.
We know interior design is not a one-size-fits-all solution, so our goal is to help you feel comfortable with the process and to find your unique style, budget, and needs. Together we work to enhance your environment and increase the quality of life within it. We believe in taking some time to create "your home," a place you will never want to leave. Working with you to create your personal sanctuary is the most important part of our job.
DISCLOSURE STATEMENT:
dvd Interior Design Blog contains paid advertising banners, sponsored content, and some contextual affiliate links. These are kinda cool; they allow us to earn money for some of the items we showcase and support all of the time we put into this blog. We do our best to make sure that each link is something we support personally.
CT / NY FULL-SERVICE INTERIOR DESIGN
MORE ON HOUZZ
2 HOUR INTERIOR DESIGN CONSULT $285
Your home influences your life. Good organization and quality design are the basis for a happy home. Having honed her design skills for more than a decade, Deborah is also available to provide you with a 2-hour in-home consult on everything from color selections to furniture layouts. We work with you to create a timeless, balanced interior that will become the backdrop for your life.
CONTACT US
1/2 HOUR VIRTUAL CONSULT $125
This is a 1/2 hour consult done via email. This is an online design service for the design-savvy client who knows what they want but has a single question about a specific detail and wants some input before completion. You will tell me what you need advice on, and I will answer your query with my best-practices insights for interior design. In 4 easy steps, I can help you create your dream home.
|
7 smart and sassy crime fiction writers dish on writing and life.
It's The View. With bodies.
Thursday, March 3, 2016
A Question of Class #mystery @barbross
LUCY BURDETTE: I love love love Barbara Ross's Maine Clambake mystery series. Her characters feel so real, and the setting is interesting and unique. As she was preparing to launch her fourth book, Fogged Inn, I persuaded her to visit us here. And we not only get her smart blog post, we get her husband Bill's fabulous photographs.
Barbara Ross: Thank you so much to the Jungle Reds
for having me. I had dinner with two of the Reds in Key West this week (Hi
Lucy! Hi Hallie!), but sadly now my husband and I are making the long drive
back to New England.
The latest book in my Maine Clambake Mystery series, Fogged Inn, was released last week. I
love writing this series about the Maine coast and the complexities of life and
society in Busman’s Harbor, a small Maine town dependent on lobstering and the
tourist dollars it can generate in its short summer season. And I love writing about
my protagonist, Julia Snowden, a young woman who returns to town to save her
family’s failing clambake business from bankruptcy.
As I’ve written the Maine Clambake series, I’ve thought a lot about the question of
class and the complexity of that topic across American life. I suspect, like a
lot of authors, I find it to be a minefield.
For one
thing, there’s the general role of class in American life, which is often
contradictory and hard to understand. It has to do with money, or perhaps more
broadly with resources, but not exclusively, and also with outlook, aspiration,
opportunity, and peers (who are, in some cases, resources).
Julia Snowden
is the product of a marriage between a summer person mother, whose family owns
a mansion on a private island, and a dad who as a teen delivered groceries to
the island on his skiff. By the time Julia’s parents marry, there isn’t much
economic difference between their families. Julia’s mother’s family fortune is
long gone. Though they’ve hung onto the island, the mansion is empty and in
disrepair. Julia’s grandfather on her father’s side is a successful lobsterman.
But, as Julia
says in Clammed Up, the first book in
the series: “A town person
marrying a summer person was still rare, but had been even rarer when my
parents married thirty-two years ago. Especially a marriage between a
high-school educated boy and a girl from a family that owned an island. As a
result I’ve always felt a little apart. Neither a local nor a summer person, I
didn’t fit in anywhere. I went to elementary school and junior high in the
harbor, but always knew I’d go away for high school. It wasn’t a financial
thing. During my childhood there was still good money to be made from
lobstering, fishing and construction. I was separated by a mother From Away,
and my parents’ expectations for me.”
In some ways, the
complexities of Julia’s family echo those of any resort town. As she explains in Boiled Over, “Oh geez, the
socio-dynamics of a resort town. The natives look down on the seasonal
homeowners, who look down on the monthly house-renters, who look down on the
weekly hotel-stayers, who look down on the weekenders, who look down on the
day-tripping tourists, who look down on the natives, in an endless cycle of
misunderstanding.”
As I write the characters that populate my Maine town, the
locals, the summer people, the retirees, and the tourists, I want to give them
all their due—to recognize their struggles, honor their perspectives and not
judge their choices (or in some cases, lack of choice). I find the best way is
to be specific—to write about specific people, with specific histories, in a
specific place. The road to stereotypes is paved with generic characters and
settings, and I find myself attracted to stories that recognize the
complexities and contradictions of real lives.
Readers, how do you
react to the social structures occupied by the characters in books you read?
Writers, how do you negotiate the minefield of class in America in the
characters you create?
Barbara Ross is the author of the Maine Clambake Mysteries, Clammed Up,
Boiled Over, Musseled Out and Fogged Inn. Clammed Up was
nominated for an Agatha Award for Best Contemporary Novel and
was a finalist for the Maine Literary Award for Crime Fiction.
Barbara blogs with a wonderful group of Maine mystery authors at Maine Crime Writers and with a
group of writers of New England-based cozy mysteries at Wicked Cozy Authors.
30 comments:
It certainly seems as if folks living in places that attract summer visitors or seasonal tourists feel a bit of proprietorship about their town; I know there are always grumblings about the summer folks at the shore even though the economy of the area depends on the summer beachgoers. It is a bit of a conundrum. As for the social structures of the characters in books, if it's honest and well-drawn, it's good . . . stereotypical characters and settings are frustrating and annoying.
I live in a place where the population ebbs and flows with the seasons (and where one very popular bumper sticker used to be "If we can't shoot 'em why do they call it tourist season?"). There's the legitimate argument that the economy is entwined with the seasonal people and the tourists-- but also the other side that some of us do live here, and get very frustrated with people who treat this as a playground to be careless with and endlessly criticise. People who let their kids harass marine life, who feed the seagulls, who leave their cigarette butts in the sand, who don't tip at restaurants because they won't be back...they all make it hard to be tolerant of the people who do nothing more heinous than clog our favorite shops and our roads half the year.
On the flip side, I think it makes me more cognisant of these issues when I travel. We were with a high school group outside the US last year, and every time someone in the group was a stereotypical Ugly American, I think I died a little inside.
We have a summer influx as well--and with the growing popularity of indoor water resorts, there's a continual influx of people into those places year-round. Besides all of the issues noted by Joan and Jennifer above, some issues go deeper, as Barb Ross noted. We get plenty of speeches by the politicians about how the economy has improved here--all the great new jobs--BUT those jobs are service industry jobs that typically don't pay a living wage and often get filled--especially in the summer--by people with summer green cards.
As for characters in books--make them real, honest, no matter their position in life, and I'll be in it for the duration of the story--the series, I'm hooked!
You do such a good job with making your characters real, Barb. I just finished reading FOGGED INN - another hit!
Readers, one of Barb's local characters speaks today as my guest over on the Killer Characters blog! Take a listen to Officer Jamie Dawes here: http://www.killercharacters.com/2016/03/my-name-is-jamie-dawes.html
Boy do these issues resonate with my setting (and also the place I live), Key West. Because it's so warm all year, we have not only tourists, but a large number of homeless folks. As you can imagine, major conflicts ensue between homeless, seasonal tourists, cruise ship visitors, and locals (called conchs.)
I'm almost to the end of Fogged Inn and enjoying it so much! There's another interesting theme in your book--people who have "escaped" the small town, but have returned home for one reason or another and struggle with how it feels to be back.
Another shout-out from me - LOVE this series! So happy to see the new one, which I have right on my bedside table as I speak.
Reading your comments about town/summer relationships, it strikes me how similar it is to town/gown -- the relationship between college students and townies... and the friction between the factions a la Romeo and Juliet. They say CONFLICT is the most important seasoning for any story, and there you have it built into the very fabric of your story.
Mornin', Barb!! Fun to find you here this morning. I love your series, and it's one I relate to.
Boone, where we live, is much like what you describe.
The locals are, for the most part, families who have been here and on family land for generations. The land is not as many acres as it once was but they try to hold on to it as long and as tightly as they can. There's some resentment in some quarters towards those who have moved here for any number of reasons, and who are, for the most part, employed by the university here. Then there are the "summer people" who come up from Florida, have bought homes at ridiculous prices and have caused property prices to be at a level those of us living here can no longer afford. We have seasonal skiers and weekend leaf lookers.
We all depend on one another for different aspects of our lives.
But.
It does not stop the underlying currents that are constantly eddying and occasionally erupting. Especially in zoning issues.
My father-in-law spends his summers up at Sherkston Beach, Ontario. He owns his trailer and lives there all summer. And there is always a bit of a "hmphf" when it comes to the folks who come to the park for the day or have weekend/weeklong rentals. Even though there is probably no difference in class/economics/social status between the two groups. I guess it's a "we live here and you're just visiting" mentality.
And Hallie, yes! College towns. My university was the biggest "thing" (and probably the second biggest employer) in the Olean, NY area. The businesses in the little neighboring town of Allegany both loved and hated the college students. They loved us because we went to the bars and restaurants and ordered pizza. They hated us because we were "outsiders" who only lived in the area for 10 months of the year, made a mess, and went home. And for our part, we called them "townies" and some (not me) looked down on them because, well, they weren't university students.
I agree. If you draw the character realistically you're fine. It's when you sink into cliche that you get in trouble.
This post exemplifies her intelligent approach to mystery writing. Threaded through Barb's stories are the kinds of conflicts faced by real people, and she writes with such deep respect for her characters.
In my Joe Gale series I've tried to acknowledge how class informs human interaction. In Quick Pivot, longtime reporter Paulie Finnegan's background is contrasted with that of the scion of the Preble family:
Valedictorian of the 1958 graduating class at Riverside High School, Preble was a track star and senior prom king. He was the first local boy accepted to Harvard since the mid-forties, not that anyone should have been surprised, given the Preble family legacy. He came home seven years later, with two degrees to hang on the wall and a couple of years of world travel under his belt. The Chronicle did a front page story about his homecoming.
Paulie got his South Portland High School diploma in 1956, barely having met the requirements for graduation. Thank God for shop class. Then he did some travel of his own, but it was on Uncle Sam’s dime. Paulie was pretty sure nobody in his Ferry Village neighborhood noticed when he was honorably discharged by the Coast Guard.
Count me among Barb's many fans, precisely because her Clambake series, while cozy, doesn't brush aside the real issues of Maine coastal towns. I can always tell when I'm reading a book by an author whose closest relationship with our state was a weekend trip or watching the Murder, She Wrote series. The words 'quaint', 'crusty' (referring to an old lobsterman) and 'idyllic' will all be used. Natives will dress like an LL Bean catalogue (except for the crusty lobsterman, who wears old pants that smell of fish and a turtleneck sweater) and the people from away (who are not called people from away) have just walked out of the Brooks Brothers summer clothing sale.
Maine is complicated, with real issues about class, property values, poverty, zoning, nativism, management of forests and fisheries...dozens of things that aren't touched on by writers using the state as window dressing. Barb never does that, and I appreciate it.
That being said, Barb, when do we get a book where governor "Lou LePlage" is found floating face down in the waters off Busman's Harbor? That has the potential to be a real bestseller! :-)
Welcome, Barb! I'm so intrigued by the "real Maine" and all of its issues — that plus a cat on the cover! Sign me up! As a New Yorker, I kinda like the tourists (although I stay away from Times Square, as a rule).
I've had my eye on Fogged Inn for a while now, as the cover draws me in so. And, well, I'm a fog fan. The timing here on the Reds blog is always so perfect in my reading life. I am currently reading a series that is set in San Francisco with fog as a recurring element, I just received a book some of you might be familiar with entitled Time of Fog and Fire, and here today is Fogged Inn. I'm thinking maybe I should just make March my "fog" month.
Barbara, Fogged Inn will be my first Clambake mystery read, but I always go back and pick up the previous books. After reading your post and all the comments about how authentic your series is, I'm looking forward to reading your stories set in a place I'd love to visit. And your attention to detail about class and the different peoples that inhabit a place has me thinking about being more aware of that element. I've always appreciated and gravitated toward books that portrayed the interacting of different groups of people in a realistic manner, but now I will probably be thinking of it even more.
I'm on the road back from Key West. Today's leg, Fayetteville, NC to Alexandria, VA. The internet was annoyingly terrible at our hotel in NC this morning. Now I'm in a diner in Emporia, VA.
Thanks everyone for your kind comments about the Maine Clambake Mysteries. It makes me so happy people are enjoying them.
Julia, thanks for your comments about the real Maine. The last one made me laugh and then made me want to cry! I have considered murdering the crony who is buying up great swaths of my little Maine town. (Fictionally, of course.)
I'm sure it's different out in the neighborhood enclaves in the far flung boroughs, but I love the way in NYC you're a native as soon as you master the subway system and ordering a sandwich at the deli. (Hesitate and you die--works for both.) None of this "from away" stuff.
The conflicts of your series, and the conflicts Julia feels, are some of the reasons I love it. I have spent a lot of time on the Cape, and also worked for Universities. Tensions between folks are part of life there, but everywhere. Julia is an outsider insider, and I love her take on things. I also LOVE this series.
I love not only the series, but reading about these sorts of real-life issues. When I moved to CT a few years ago and landed in Norwich (not a tourist mecca by any stretch of the imagination), I was still struck by the attitudes of people "born and raised" here and how others didn't measure up. I remember an interview I did for the paper I worked for back then. The man I was speaking to had spent 25 years in the town, had invested in it and bought property and tried to make it more appealing for people to visit and live, and he was still referred to as an outsider. Crazy stuff.
A tourist season. I always wondered why there was no bag limit :). There was another popular bumper sticker in Miami, it read "Welcome to Miami. Leave your money. Go." That was a different Miami. There's not much of a tourist season there anymore. It does often seem that tourists are not on their best behavior though. Living in Florida I've had them come into my yard and pick fruit from the trees and watched people scurry from commercial groves with laundry baskets of fruit. Strikes me as odd. Do they do that at home? Don't know. It's dangerous to paint with a broad brush as a writer. I like Barbara's approach of giving each character its due. There's a lot of wonderful things to be found that way.
Love the conversation here. I moved to a resort town fourteen years ago to run a B&B and am still considered "new." There is definitely tension here. However, the summer locals tend to cause more of it than the tourists. Most tourists seem happy to be here and are usually easy to get along with (although sometimes clueless - can you PLEASE use the cross walks??) The summer locals have buckets of money and can be somewhat vocal about "you wouldn't have a job if I didn't come to your town every summer" and "what on earth do you people do here in the winter." Most year-round locals suck it up and wait to "get their town back" at the end of season. Hallie is right - plenty of tension to include in a story (and definitely a lot of motives for murder mysteries!)
I remember the fun we had as children in Marblehead as we planned our class-based revenge on the tourists who invaded our town during the season. It was humiliating to be photographed doing our childhood thing then asked to repeat the activity so strangers could record it.
During summers in Marblehead, parents and adult neighbors were most helpful in passing along the stories of their own "gawkers' special" tricks. When an outsider complained? They were at their exemplary best, as they shut down their coffee shops and fried clam concessions or made the main street "One Way" in both directions.
Some of these stories go back generations to the beginning of colonial settlement, the favorites though cluster during revolutionary times. Knowing that your ancestors were neighbors with your best friends' ancestors--or your girl or boyfriend's--had a solid way of shaping your behavior.
The Clambake series sounds wonderful. I hate the general assumptions made by people about each other based on where they're from. We need a national plague of courtesy and friendliness to sweep through. Let's face it: people can be pretty damn stupid and thoughtless.
When I taught in Jamaica, a friend talked about taxi drivers and other service workers neglecting or over-charging locals during tourist season. She also said locals took note of who did that when deciding who to do business with the rest of the year.
Linda--my mother-in-law ran a B&B in our Maine town for 15 years, so I totally get it. She felt the tourists who stayed in B&Bs, i.e., the people who didn't need TVs in every room or a pool, were the best tourists. (Of course, that was her take.)
Pat-I am all for a national plague of courtesy, though as a New Englander, I have to draw the line at friendliness. :-)
Reine--Marblehead is a perfect example of a town that is a suburb and a summer place (decreasingly so) and its own place all in one.
Kait, wow, I can't believe the nerve of going into someone's yard or orchard and stealing fruit. In Key West, the worst visitor behavior has to do with drunkenness and lack of clothing--more on that to come in KILLER TAKEOUT. But in a milder way, tourists do things like simply ignoring traffic lights. Since when did it become ok to just cross the street even if your light is red? And then glare at the rest of us? John and I get strong urges to go around correcting people, but we mostly hold back:)
Roberta, although you can get ticketed for jay walking, I'm afraid it's de rigueur in the Boston area. To be fair I haven't been there in 2 or 3 years but it was like that even that recently. People will look the other way and even hold their hands like a traffic cop as if that guarantees you will stop. Scary. I've only seen one naked person crossing the street there, however. That was 1996 in the theatre district at Tremont and Stuart.
Thanks, all. As Roberta says, crimes in Key West tend toward drunkenness and all the associated behaviors, like walking into a stranger's house and falling asleep on their couch. What always amazes me is, per the newspaper crime report, how many of these drunk tourists are cops or school principals back home in Michigan or Minnesota or wherever.
|
Compare Prices on Crown Princess Southern Caribbean Cruises
Cheaper to cruise than to stay in a nursing home....
Sail Date:
January 2014
Destination:
Southern Caribbean
Embarkation:
Fort Lauderdale (Port Everglades)
I will start off by saying we enjoyed the cruise very much. We were in an inside cabin on the Riviera deck. We boarded the boat around 2:30 pm. No lineup. First night we had no air in the cabin. Called and this was fixed immediately! Food was excellent. Movies Under The Stars was great. Lots of brand new movies. Princess Patter kept us up to date on everything... Bingo was fun, although a bit pricey.
Clientele..... We're in our 30s, and I think I was the youngest on the ship. We figured it out and it literally is cheaper to cruise than for the geriatric population to stay in a nursing home. Between the walkers, canes, and electric scooters running around, you really had to watch yourself. It never dawned on me that I was on a nursing home cruise till I read the sign at the pools..... No adult diapers to be worn in pool..... Jesus......
Ship.... Now I didn't do much research, I left it up to my GF and she booked it all. I had no idea about the ship, etc. OK, now I
honestly thought I was on the original Love Boat. Very, very bland, dated ship. No frills, but they are appealing to the geriatric population, not people in their 30s or even 40s. I found out that the ship was released to Princess in 2006. I personally thought 1986..... If you like the "wood" look, then you will like this ship. I was told that all the Princess ships are the same.
Casino.... There was a rude player at the Casino who was playing Blackjack and throwing money around like it grew on trees. He lost so much money and was swearing at dealers, pit bosses, etc. They did not kick him out; they allowed him to keep playing. I get that they want to take him for all his money, but at the expense of other paying customers having to listen to this???
No points accumulate on your card if you play Blackjack. You could play 8 hours a day and still have 0 points. There is no program, but play the slots for 20 minutes and you will have enough for a drink... What a joke....
Food.... Awesome... Crown Grille.... Awesome...
I truly hope Princess starts catering to the younger crowd, as I felt out of place with some of the clientele that have their noses in the air. And please, Princess, get with the times with the décor. We are not in the 80s anymore...
|
The onset of diabetes and poor metabolic control increases gingival bleeding in children and adolescents with insulin-dependent diabetes mellitus.
Gingival health (bleeding on probing) and oral hygiene (plaque percentage) were assessed in two groups of children and adolescents with insulin-dependent diabetes mellitus (IDDM). The first study group included 12 newly diagnosed diabetic children and adolescents (age range 6.3-14.0 years, 5 boys and 7 girls). They were examined on the 3rd day after initial hospital admission and at 2 weeks and 6 weeks after initiation of insulin treatment. Gingival bleeding decreased after 2 weeks of insulin treatment (37.8% versus 19.0%, p < 0.001, paired t-test) and remained at the same level when examined 1 month later, while glucose balance was excellent. The second group (n = 80) of insulin-dependent diabetic children and adolescents (age range 11.7-18.4 years, 44 boys and 36 girls), with a mean duration of diabetes of 6.0 years (range 0.3-15.0 years), was examined twice at 3-month intervals. Subjects with poor blood glucose control (glycosylated haemoglobin, HbA1, values over 13%) had more gingival bleeding (46.3% on examination 1, 41.7% on examination 2) than subjects with HbA1 values below 10% (mean gingival bleeding 35.2% and 26.9%, respectively) or subjects with HbA1 values between 10 and 13% (mean gingival bleeding 35.6% and 33.4%, respectively). Differences were significant on both examinations (p < 0.05, ANOVA) and remained significant after controlling for group differences in age, age at onset of diabetes, duration of diabetes and pubertal stage (ANCOVA). Results were not related to differences or changes in dental plaque status, supporting the concept that the imbalance of glucose metabolism associated with diabetes predisposes to gingival inflammation. The increase in gingival bleeding in association with hyperglycaemia suggests that hyperglycaemia-associated biological alterations, which lower host resistance toward plaque, have apparently taken place.
Consequently, although not all gingivitis progresses to destructive periodontal disease, prevention of plaque-induced gingival inflammation should be emphasised, particularly in children and adolescents with poorly controlled diabetes.
|
created_by: Created by
about: Description
add_description: Add description
phone: Mobile phone
email: Email
|
class Testp
convert
A '2'
B '3'
end
prechigh
left B
preclow
rule
/* comment */
target: A B C nonterminal { action "string" == /regexp/o
1 /= 3 }
; # comment
nonterminal: A '+' B = A;
/* end */
end
---- driver
# driver is old name
|
Presenter Information
Location
Room 129
Type of Presentation
Individual paper/presentation (20 minute presentation)
Target Audience
Higher Education
Abstract
Librarians face numerous challenges when designing effective, sustainable assessment methods for student learning outcomes in one-shot, course-integrated library instruction sessions. In this presentation, we will share how librarians at Virginia Commonwealth University (VCU) use a rubric to assess students’ authentic learning products from one-shot instruction sessions for a research and writing course required for all undergraduate students. We will share how rubric-based assessment enhances student learning and explain how we use this type of assessment to demonstrate our information literacy program’s effectiveness.
University 200: Inquiry and the Craft of Argument is a sophomore-level writing and research course required for all VCU students. Information literacy is a stated core competency for UNIV 200 and librarians provide instruction for all of the approximately 80 sections of the course offered each semester. To assess student learning in these sessions, we developed a worksheet and a rubric based on information literacy learning outcomes defined with UNIV 200 faculty. The worksheet serves as both an applied learning exercise for students and an assessment object.
In this presentation, participants will learn about the benefits of using rubrics for learning assessment in a one-shot environment, the mechanics of how we employed rubric-based assessment programmatically for UNIV 200 at VCU, and our findings on students’ achievement of information literacy learning outcomes. Additionally, we will discuss how we use our findings to improve librarian teaching, and how a rubric-based assessment model can be translated to any one-shot environment regardless of content being taught.
Presentation Description
The presenters will explain the benefits of using rubrics to assess information literacy learning outcomes in a one-shot library instruction environment. Participants will also learn how the rubric-based assessment model can be translated to any one-shot environment regardless of the content being taught.
|
QS prev_prev_son==- { "b^*","ch^*","d^*","dh^*","f^*","g^*","hh^*","jh^*","k^*","p^*","s^*","sh^*","t^*","th^*","v^*","z^*","zh^*" }
QS prev_prev_vc==+ { "aa^*","ae^*","ah^*","ao^*","aw^*","ax^*","ay^*","eh^*","er^*","ey^*","ih^*","iy^*","ow^*","oy^*","uh^*","uw^*" }
QS prev_prev_cvox==+ { "b^*","d^*","dh^*","g^*","jh^*","l^*","m^*","n^*","ng^*","r^*","v^*","w^*","y^*","z^*","zh^*" }
QS prev_prev_ccor==+ { "ch^*","d^*","dh^*","jh^*","l^*","n^*","r^*","s^*","sh^*","t^*","th^*","z^*","zh^*" }
QS prev_prev_cont==+ { "dh^*","f^*","hh^*","l^*","r^*","s^*","sh^*","th^*","v^*","w^*","y^*","z^*","zh^*" }
QS prev_prev_vrnd==- { "aa^*","ae^*","ah^*","aw^*","ax^*","ay^*","eh^*","er^*","ey^*","ih^*","iy^*" }
QS prev_prev_ccor==+&&son==- { "ch^*","d^*","dh^*","jh^*","s^*","sh^*","t^*","th^*","z^*","zh^*" }
QS prev_prev_cvox==- { "ch^*","f^*","hh^*","k^*","p^*","s^*","sh^*","t^*","th^*" }
QS prev_prev_cont==+&&ccor==+ { "dh^*","l^*","r^*","s^*","sh^*","th^*","z^*","zh^*" }
QS prev_prev_cont==-&&cvox==+ { "b^*","d^*","g^*","jh^*","m^*","n^*","ng^*" }
QS prev_prev_csib==+ { "ch^*","jh^*","s^*","sh^*","z^*","zh^*" }
QS prev_prev_vlng==s { "ae^*","ah^*","eh^*","ih^*","uh^*" }
QS prev_prev_vfront==2 { "ah^*","aw^*","ax^*","ay^*","er^*" }
QS prev_prev_cvox==-&&ctype==f { "f^*","hh^*","s^*","sh^*","th^*" }
QS prev_prev_cvox==+&&cplace==a { "d^*","l^*","n^*","r^*","z^*" }
QS prev_prev_cplace==l { "b^*","m^*","p^*","w^*" }
QS prev_prev_vheight==1 { "ih^*","iy^*","uh^*","uw^*" }
QS prev_prev_vlng==l { "aa^*","ao^*","iy^*","uw^*" }
QS prev_prev_vrnd==-&&vheight==3 { "aa^*","ae^*","aw^*","ay^*" }
QS prev_prev_cvox==+&&clab==+ { "b^*","m^*","v^*","w^*" }
QS prev_prev_cont==+&&cplace==a { "l^*","r^*","s^*","z^*" }
QS prev_prev_vfront==2&&vheight==2 { "ah^*","ax^*","er^*" }
QS prev_prev_ctype==s&&cplace==a { "d^*","t^*" }
QS prev_prev_vfront==1&&vheight==2 { "eh^*","ey^*" }
QS prev_prev_cvox==-&&cplace==p { "ch^*","sh^*" }
QS prev_prev_name==ey { "ey^*" }
QS prev_prev_name==f { "f^*" }
QS prev_prev_name==pau { "pau^*" }
QS prev_vc==- { "*^b-*","*^ch-*","*^d-*","*^dh-*","*^f-*","*^g-*","*^hh-*","*^jh-*","*^k-*","*^l-*","*^m-*","*^n-*","*^ng-*","*^p-*","*^r-*","*^s-*","*^sh-*","*^t-*","*^th-*","*^v-*","*^w-*","*^y-*","*^z-*","*^zh-*" }
QS prev_son==- { "*^b-*","*^ch-*","*^d-*","*^dh-*","*^f-*","*^g-*","*^hh-*","*^jh-*","*^k-*","*^p-*","*^s-*","*^sh-*","*^t-*","*^th-*","*^v-*","*^z-*","*^zh-*" }
QS prev_vc==+ { "*^aa-*","*^ae-*","*^ah-*","*^ao-*","*^aw-*","*^ax-*","*^ay-*","*^eh-*","*^er-*","*^ey-*","*^ih-*","*^iy-*","*^ow-*","*^oy-*","*^uh-*","*^uw-*" }
QS prev_cvox==+ { "*^b-*","*^d-*","*^dh-*","*^g-*","*^jh-*","*^l-*","*^m-*","*^n-*","*^ng-*","*^r-*","*^v-*","*^w-*","*^y-*","*^z-*","*^zh-*" }
QS prev_ccor==+ { "*^ch-*","*^d-*","*^dh-*","*^jh-*","*^l-*","*^n-*","*^r-*","*^s-*","*^sh-*","*^t-*","*^th-*","*^z-*","*^zh-*" }
QS prev_cont==+ { "*^dh-*","*^f-*","*^hh-*","*^l-*","*^r-*","*^s-*","*^sh-*","*^th-*","*^v-*","*^w-*","*^y-*","*^z-*","*^zh-*" }
QS prev_vrnd==- { "*^aa-*","*^ae-*","*^ah-*","*^aw-*","*^ax-*","*^ay-*","*^eh-*","*^er-*","*^ey-*","*^ih-*","*^iy-*" }
QS prev_cont==- { "*^b-*","*^ch-*","*^d-*","*^g-*","*^jh-*","*^k-*","*^m-*","*^n-*","*^ng-*","*^p-*","*^t-*" }
QS prev_ccor==+&&son==- { "*^ch-*","*^d-*","*^dh-*","*^jh-*","*^s-*","*^sh-*","*^t-*","*^th-*","*^z-*","*^zh-*" }
QS prev_cvox==- { "*^ch-*","*^f-*","*^hh-*","*^k-*","*^p-*","*^s-*","*^sh-*","*^t-*","*^th-*" }
QS prev_ctype==f { "*^dh-*","*^f-*","*^hh-*","*^s-*","*^sh-*","*^th-*","*^v-*","*^z-*","*^zh-*" }
QS prev_cvox==+&&son==- { "*^b-*","*^d-*","*^dh-*","*^g-*","*^jh-*","*^v-*","*^z-*","*^zh-*" }
QS prev_cont==+&&ccor==+ { "*^dh-*","*^l-*","*^r-*","*^s-*","*^sh-*","*^th-*","*^z-*","*^zh-*" }
QS prev_cont==+&&cvox==+ { "*^dh-*","*^l-*","*^r-*","*^v-*","*^w-*","*^y-*","*^z-*","*^zh-*" }
QS prev_ccor==+&&cvox==+ { "*^d-*","*^dh-*","*^jh-*","*^l-*","*^n-*","*^r-*","*^z-*","*^zh-*" }
QS prev_son==+ { "*^l-*","*^m-*","*^n-*","*^ng-*","*^r-*","*^w-*","*^y-*" }
QS prev_cont==-&&cvox==+ { "*^b-*","*^d-*","*^g-*","*^jh-*","*^m-*","*^n-*","*^ng-*" }
QS prev_cplace==a { "*^d-*","*^l-*","*^n-*","*^r-*","*^s-*","*^t-*","*^z-*" }
QS prev_ccor==+&&ctype==f { "*^dh-*","*^s-*","*^sh-*","*^th-*","*^z-*","*^zh-*" }
QS prev_csib==+ { "*^ch-*","*^jh-*","*^s-*","*^sh-*","*^z-*","*^zh-*" }
QS prev_vfront==3 { "*^aa-*","*^ao-*","*^ow-*","*^oy-*","*^uh-*","*^uw-*" }
QS prev_vrnd==+ { "*^ao-*","*^ow-*","*^oy-*","*^uh-*","*^uw-*" }
QS prev_vfront==1 { "*^ae-*","*^eh-*","*^ey-*","*^ih-*","*^iy-*" }
QS prev_cont==+&&ccor==+&&cvox==+ { "*^dh-*","*^l-*","*^r-*","*^z-*","*^zh-*" }
QS prev_ccor==+&&cvox==- { "*^ch-*","*^s-*","*^sh-*","*^t-*","*^th-*" }
QS prev_cont==-&&cvox==- { "*^ch-*","*^k-*","*^p-*","*^t-*" }
QS prev_son==-&&clab==+ { "*^b-*","*^f-*","*^p-*","*^v-*" }
QS prev_vrnd==-&&vlng==s { "*^ae-*","*^ah-*","*^eh-*","*^ih-*" }
QS prev_vlng==l { "*^aa-*","*^ao-*","*^iy-*","*^uw-*" }
QS prev_cont==+&&son==+ { "*^l-*","*^r-*","*^w-*","*^y-*" }
QS prev_ctype==r { "*^er-*","*^r-*","*^w-*","*^y-*" }
QS prev_cvox==-&&ctype==s { "*^k-*","*^p-*","*^t-*" }
QS prev_vrnd==-&&vlng==d { "*^aw-*","*^ay-*","*^ey-*" }
QS prev_son==+&&cplace==a { "*^l-*","*^n-*","*^r-*" }
QS prev_ctype==n { "*^m-*","*^n-*","*^ng-*" }
QS prev_cont==-&&cplace==a { "*^d-*","*^n-*","*^t-*" }
QS prev_ctype==r&&son==+ { "*^r-*","*^w-*","*^y-*" }
QS prev_cvox==-&&csib==+ { "*^ch-*","*^s-*","*^sh-*" }
QS prev_cont==+&&cvox==+&&cplace==a { "*^l-*","*^r-*","*^z-*" }
QS prev_ccor==+&&cvox==-&&ctype==f { "*^s-*","*^sh-*","*^th-*" }
QS prev_vheight==1&&vlng==s { "*^ih-*","*^uh-*" }
QS prev_son==+&&clab==+ { "*^m-*","*^w-*" }
QS prev_cvox==-&&clab==+ { "*^f-*","*^p-*" }
QS prev_cvox==+&&son==-&&clab==+ { "*^b-*","*^v-*" }
QS prev_ctype==s&&cplace==a { "*^d-*","*^t-*" }
QS prev_cvox==-&&cplace==a { "*^s-*","*^t-*" }
QS prev_cont==-&&cvox==+&&clab==+ { "*^b-*","*^m-*" }
QS prev_cont==+&&son==+&&cplace==a { "*^l-*","*^r-*" }
QS prev_vfront==1&&vheight==2 { "*^eh-*","*^ey-*" }
QS prev_vheight==2&&vlng==s { "*^ah-*","*^eh-*" }
QS prev_ctype==a { "*^ch-*","*^jh-*" }
QS prev_cont==-&&cvox==+&&cplace==a { "*^d-*","*^n-*" }
QS prev_cvox==+&&cplace==v { "*^g-*","*^ng-*" }
QS prev_cont==+&&cvox==+&&cplace==p { "*^y-*","*^zh-*" }
QS prev_name==ao { "*^ao-*" }
QS prev_name==ax { "*^ax-*" }
QS prev_name==dh { "*^dh-*" }
QS prev_name==hh { "*^hh-*" }
QS prev_name==l { "*^l-*" }
QS prev_name==n { "*^n-*" }
QS prev_name==pau { "*^pau-*" }
QS prev_name==r { "*^r-*" }
QS prev_name==sh { "*^sh-*" }
QS prev_name==t { "*^t-*" }
QS prev_name==x { "*^x-*" }
QS prev_name==z { "*^z-*" }
QS vc==- { "*-b+*","*-ch+*","*-d+*","*-dh+*","*-f+*","*-g+*","*-hh+*","*-jh+*","*-k+*","*-l+*","*-m+*","*-n+*","*-ng+*","*-p+*","*-r+*","*-s+*","*-sh+*","*-t+*","*-th+*","*-v+*","*-w+*","*-y+*","*-z+*","*-zh+*" }
QS son==- { "*-b+*","*-ch+*","*-d+*","*-dh+*","*-f+*","*-g+*","*-hh+*","*-jh+*","*-k+*","*-p+*","*-s+*","*-sh+*","*-t+*","*-th+*","*-v+*","*-z+*","*-zh+*" }
QS vc==+ { "*-aa+*","*-ae+*","*-ah+*","*-ao+*","*-aw+*","*-ax+*","*-ay+*","*-eh+*","*-er+*","*-ey+*","*-ih+*","*-iy+*","*-ow+*","*-oy+*","*-uh+*","*-uw+*" }
QS cvox==+ { "*-b+*","*-d+*","*-dh+*","*-g+*","*-jh+*","*-l+*","*-m+*","*-n+*","*-ng+*","*-r+*","*-v+*","*-w+*","*-y+*","*-z+*","*-zh+*" }
QS ccor==+ { "*-ch+*","*-d+*","*-dh+*","*-jh+*","*-l+*","*-n+*","*-r+*","*-s+*","*-sh+*","*-t+*","*-th+*","*-z+*","*-zh+*" }
QS cont==+ { "*-dh+*","*-f+*","*-hh+*","*-l+*","*-r+*","*-s+*","*-sh+*","*-th+*","*-v+*","*-w+*","*-y+*","*-z+*","*-zh+*" }
QS vrnd==- { "*-aa+*","*-ae+*","*-ah+*","*-aw+*","*-ax+*","*-ay+*","*-eh+*","*-er+*","*-ey+*","*-ih+*","*-iy+*" }
QS cont==- { "*-b+*","*-ch+*","*-d+*","*-g+*","*-jh+*","*-k+*","*-m+*","*-n+*","*-ng+*","*-p+*","*-t+*" }
QS ccor==+&&son==- { "*-ch+*","*-d+*","*-dh+*","*-jh+*","*-s+*","*-sh+*","*-t+*","*-th+*","*-z+*","*-zh+*" }
QS cvox==- { "*-ch+*","*-f+*","*-hh+*","*-k+*","*-p+*","*-s+*","*-sh+*","*-t+*","*-th+*" }
QS cont==+&&cvox==+ { "*-dh+*","*-l+*","*-r+*","*-v+*","*-w+*","*-y+*","*-z+*","*-zh+*" }
QS cplace==a { "*-d+*","*-l+*","*-n+*","*-r+*","*-s+*","*-t+*","*-z+*" }
QS vheight==2 { "*-ah+*","*-ax+*","*-eh+*","*-er+*","*-ey+*","*-ow+*","*-oy+*" }
QS clab==+ { "*-b+*","*-f+*","*-m+*","*-p+*","*-v+*","*-w+*" }
QS csib==+ { "*-ch+*","*-jh+*","*-s+*","*-sh+*","*-z+*","*-zh+*" }
QS vfront==3 { "*-aa+*","*-ao+*","*-ow+*","*-oy+*","*-uh+*","*-uw+*" }
QS ctype==s { "*-b+*","*-d+*","*-g+*","*-k+*","*-p+*","*-t+*" }
QS vfront==1 { "*-ae+*","*-eh+*","*-ey+*","*-ih+*","*-iy+*" }
QS vlng==d { "*-aw+*","*-ay+*","*-ey+*","*-ow+*","*-oy+*" }
QS vlng==s { "*-ae+*","*-ah+*","*-eh+*","*-ih+*","*-uh+*" }
QS cvox==-&&ctype==f { "*-f+*","*-hh+*","*-s+*","*-sh+*","*-th+*" }
QS cplace==p { "*-ch+*","*-jh+*","*-sh+*","*-y+*","*-zh+*" }
QS vheight==3 { "*-aa+*","*-ae+*","*-ao+*","*-aw+*","*-ay+*" }
QS ctype==f&&csib==+ { "*-s+*","*-sh+*","*-z+*","*-zh+*" }
QS vlng==l { "*-aa+*","*-ao+*","*-iy+*","*-uw+*" }
QS cont==+&&son==+ { "*-l+*","*-r+*","*-w+*","*-y+*" }
QS ctype==r { "*-er+*","*-r+*","*-w+*","*-y+*" }
QS cvox==+&&clab==+ { "*-b+*","*-m+*","*-v+*","*-w+*" }
QS cvox==+&&ctype==f { "*-dh+*","*-v+*","*-z+*","*-zh+*" }
QS cplace==v { "*-g+*","*-k+*","*-ng+*" }
QS son==+&&cplace==a { "*-l+*","*-n+*","*-r+*" }
QS cvox==+&&csib==+ { "*-jh+*","*-z+*","*-zh+*" }
QS cont==+&&son==+&&cplace==a { "*-l+*","*-r+*" }
QS cont==-&&cvox==+&&cplace==a { "*-d+*","*-n+*" }
QS cont==+&&cvox==+&&cplace==p { "*-y+*","*-zh+*" }
QS vrnd==+&&vheight==1 { "*-uh+*","*-uw+*" }
QS name==aa { "*-aa+*" }
QS name==aw { "*-aw+*" }
QS name==ax { "*-ax+*" }
QS name==ay { "*-ay+*" }
QS name==ch { "*-ch+*" }
QS name==dh { "*-dh+*" }
QS name==f { "*-f+*" }
QS name==g { "*-g+*" }
QS name==hh { "*-hh+*" }
QS name==ih { "*-ih+*" }
QS name==k { "*-k+*" }
QS name==l { "*-l+*" }
QS name==n { "*-n+*" }
QS name==ow { "*-ow+*" }
QS name==oy { "*-oy+*" }
QS name==p { "*-p+*" }
QS name==r { "*-r+*" }
QS name==s { "*-s+*" }
QS name==t { "*-t+*" }
QS name==v { "*-v+*" }
QS name==z { "*-z+*" }
QS next_vc==- { "*+b=*","*+ch=*","*+d=*","*+dh=*","*+f=*","*+g=*","*+hh=*","*+jh=*","*+k=*","*+l=*","*+m=*","*+n=*","*+ng=*","*+p=*","*+r=*","*+s=*","*+sh=*","*+t=*","*+th=*","*+v=*","*+w=*","*+y=*","*+z=*","*+zh=*" }
QS next_son==- { "*+b=*","*+ch=*","*+d=*","*+dh=*","*+f=*","*+g=*","*+hh=*","*+jh=*","*+k=*","*+p=*","*+s=*","*+sh=*","*+t=*","*+th=*","*+v=*","*+z=*","*+zh=*" }
QS next_vc==+ { "*+aa=*","*+ae=*","*+ah=*","*+ao=*","*+aw=*","*+ax=*","*+ay=*","*+eh=*","*+er=*","*+ey=*","*+ih=*","*+iy=*","*+ow=*","*+oy=*","*+uh=*","*+uw=*" }
QS next_cvox==+ { "*+b=*","*+d=*","*+dh=*","*+g=*","*+jh=*","*+l=*","*+m=*","*+n=*","*+ng=*","*+r=*","*+v=*","*+w=*","*+y=*","*+z=*","*+zh=*" }
QS next_ccor==+ { "*+ch=*","*+d=*","*+dh=*","*+jh=*","*+l=*","*+n=*","*+r=*","*+s=*","*+sh=*","*+t=*","*+th=*","*+z=*","*+zh=*" }
QS next_cont==+ { "*+dh=*","*+f=*","*+hh=*","*+l=*","*+r=*","*+s=*","*+sh=*","*+th=*","*+v=*","*+w=*","*+y=*","*+z=*","*+zh=*" }
QS next_vrnd==- { "*+aa=*","*+ae=*","*+ah=*","*+aw=*","*+ax=*","*+ay=*","*+eh=*","*+er=*","*+ey=*","*+ih=*","*+iy=*" }
QS next_cont==- { "*+b=*","*+ch=*","*+d=*","*+g=*","*+jh=*","*+k=*","*+m=*","*+n=*","*+ng=*","*+p=*","*+t=*" }
QS next_ccor==+&&son==- { "*+ch=*","*+d=*","*+dh=*","*+jh=*","*+s=*","*+sh=*","*+t=*","*+th=*","*+z=*","*+zh=*" }
QS next_cvox==- { "*+ch=*","*+f=*","*+hh=*","*+k=*","*+p=*","*+s=*","*+sh=*","*+t=*","*+th=*" }
QS next_ctype==f { "*+dh=*","*+f=*","*+hh=*","*+s=*","*+sh=*","*+th=*","*+v=*","*+z=*","*+zh=*" }
QS next_cont==+&&cvox==+ { "*+dh=*","*+l=*","*+r=*","*+v=*","*+w=*","*+y=*","*+z=*","*+zh=*" }
QS next_ccor==+&&cvox==+ { "*+d=*","*+dh=*","*+jh=*","*+l=*","*+n=*","*+r=*","*+z=*","*+zh=*" }
QS next_cont==-&&son==- { "*+b=*","*+ch=*","*+d=*","*+g=*","*+jh=*","*+k=*","*+p=*","*+t=*" }
QS next_son==+ { "*+l=*","*+m=*","*+n=*","*+ng=*","*+r=*","*+w=*","*+y=*" }
QS next_cont==-&&cvox==+ { "*+b=*","*+d=*","*+g=*","*+jh=*","*+m=*","*+n=*","*+ng=*" }
QS next_vheight==2 { "*+ah=*","*+ax=*","*+eh=*","*+er=*","*+ey=*","*+ow=*","*+oy=*" }
QS next_clab==+ { "*+b=*","*+f=*","*+m=*","*+p=*","*+v=*","*+w=*" }
QS next_ctype==s { "*+b=*","*+d=*","*+g=*","*+k=*","*+p=*","*+t=*" }
QS next_vfront==1 { "*+ae=*","*+eh=*","*+ey=*","*+ih=*","*+iy=*" }
QS next_vfront==2 { "*+ah=*","*+aw=*","*+ax=*","*+ay=*","*+er=*" }
QS next_ccor==+&&cvox==+&&son==- { "*+d=*","*+dh=*","*+jh=*","*+z=*","*+zh=*" }
QS next_vheight==1 { "*+ih=*","*+iy=*","*+uh=*","*+uw=*" }
QS next_cont==-&&ccor==+&&son==- { "*+ch=*","*+d=*","*+jh=*","*+t=*" }
QS next_ctype==f&&csib==+ { "*+s=*","*+sh=*","*+z=*","*+zh=*" }
QS next_cont==+&&son==+ { "*+l=*","*+r=*","*+w=*","*+y=*" }
QS next_ctype==r { "*+er=*","*+r=*","*+w=*","*+y=*" }
QS next_cvox==+&&clab==+ { "*+b=*","*+m=*","*+v=*","*+w=*" }
QS next_cont==+&&cplace==a { "*+l=*","*+r=*","*+s=*","*+z=*" }
QS next_cvox==-&&ctype==s { "*+k=*","*+p=*","*+t=*" }
QS next_son==+&&cplace==a { "*+l=*","*+n=*","*+r=*" }
QS next_ctype==n { "*+m=*","*+n=*","*+ng=*" }
QS next_vfront==1&&vlng==s { "*+ae=*","*+eh=*","*+ih=*" }
QS next_ctype==r&&son==+ { "*+r=*","*+w=*","*+y=*" }
QS next_cvox==+&&ctype==s { "*+b=*","*+d=*","*+g=*" }
QS next_cont==+&&cvox==+&&cplace==a { "*+l=*","*+r=*","*+z=*" }
QS next_vfront==3&&vlng==l { "*+aa=*","*+ao=*","*+uw=*" }
QS next_cplace==d { "*+dh=*","*+th=*" }
QS next_son==+&&clab==+ { "*+m=*","*+w=*" }
QS next_vrnd==-&&vheight==1 { "*+ih=*","*+iy=*" }
QS next_ctype==s&&cplace==a { "*+d=*","*+t=*" }
QS next_cvox==-&&cplace==a { "*+s=*","*+t=*" }
QS next_vrnd==-&&vlng==l { "*+aa=*","*+iy=*" }
QS next_cont==+&&son==+&&cplace==a { "*+l=*","*+r=*" }
QS next_cvox==+&&cplace==v { "*+g=*","*+ng=*" }
QS next_cont==-&&ccor==+&&cvox==- { "*+ch=*","*+t=*" }
QS next_cvox==+&&son==-&&cplace==a { "*+d=*","*+z=*" }
QS next_son==-&&cplace==l { "*+b=*","*+p=*" }
QS next_vrnd==+&&vlng==l { "*+ao=*","*+uw=*" }
QS next_csib==+&&cplace==a { "*+s=*","*+z=*" }
QS next_name==ax { "*+ax=*" }
QS next_name==dh { "*+dh=*" }
QS next_name==hh { "*+hh=*" }
QS next_name==iy { "*+iy=*" }
QS next_name==l { "*+l=*" }
QS next_name==n { "*+n=*" }
QS next_name==ng { "*+ng=*" }
QS next_name==p { "*+p=*" }
QS next_name==r { "*+r=*" }
QS next_name==s { "*+s=*" }
QS next_name==t { "*+t=*" }
QS next_name==w { "*+w=*" }
QS next_name==y { "*+y=*" }
QS next_next_son==- { "*=b@*","*=ch@*","*=d@*","*=dh@*","*=f@*","*=g@*","*=hh@*","*=jh@*","*=k@*","*=p@*","*=s@*","*=sh@*","*=t@*","*=th@*","*=v@*","*=z@*","*=zh@*" }
QS next_next_ccor==+&&son==- { "*=ch@*","*=d@*","*=dh@*","*=jh@*","*=s@*","*=sh@*","*=t@*","*=th@*","*=z@*","*=zh@*" }
QS next_next_cont==-&&son==- { "*=b@*","*=ch@*","*=d@*","*=g@*","*=jh@*","*=k@*","*=p@*","*=t@*" }
QS next_next_cont==-&&cvox==+ { "*=b@*","*=d@*","*=g@*","*=jh@*","*=m@*","*=n@*","*=ng@*" }
QS next_next_clab==+ { "*=b@*","*=f@*","*=m@*","*=p@*","*=v@*","*=w@*" }
QS next_next_cont==-&&ccor==+ { "*=ch@*","*=d@*","*=jh@*","*=n@*","*=t@*" }
QS next_next_vheight==3 { "*=aa@*","*=ae@*","*=ao@*","*=aw@*","*=ay@*" }
QS next_next_vlng==l { "*=aa@*","*=ao@*","*=iy@*","*=uw@*" }
QS next_next_ctype==r { "*=er@*","*=r@*","*=w@*","*=y@*" }
QS next_next_son==-&&cplace==a { "*=d@*","*=s@*","*=t@*","*=z@*" }
QS next_next_cvox==+&&ctype==f { "*=dh@*","*=v@*","*=z@*","*=zh@*" }
QS next_next_cont==+&&cplace==a { "*=l@*","*=r@*","*=s@*","*=z@*" }
QS next_next_vlng==a { "*=ax@*","*=er@*" }
QS next_next_vheight==1&&vlng==l { "*=iy@*","*=uw@*" }
QS next_next_vrnd==-&&vlng==l { "*=aa@*","*=iy@*" }
QS next_next_name==er { "*=er@*" }
QS next_next_name==n { "*=n@*" }
QS next_next_name==x { "*=x@*" }
QS pos_in_syl_fw==1 { "*@1_*" }
QS pos_in_syl_fw==4 { "*@4_*" }
QS pos_in_syl_fw<=1 { "*@x_*","*@1_*" }
QS pos_in_syl_bw==1 { "*_1/A:*" }
QS pos_in_syl_bw==2 { "*_2/A:*" }
QS pos_in_syl_bw==3 { "*_3/A:*" }
QS prev_syl_stress==1 { "*/A:1_*" }
QS prev_syl_accented==0 { "*_0_*" }
QS prev_syl_length==0 { "*_0/B:*" }
QS prev_syl_length==3 { "*_3/B:*" }
QS prev_syl_length<=0 { "*_0/B:*" }
QS syl_stress==0 { "*/B:0-*" }
QS syl_stress==1 { "*/B:1-*" }
QS syl_length==2 { "*-2@*" }
QS syl_length<=2 { "*-x@*","*-1@*","*-2@*" }
QS syl_length<=3 { "*-x@*","*-1@*","*-2@*","*-3@*" }
QS syl_pos_in_word_fw==1 { "*@1-*" }
QS syl_pos_in_word_fw<=1 { "*@x-*","*@1-*" }
QS syl_pos_in_word_bw==1 { "*-1&*" }
QS syl_pos_in_word_bw==2 { "*-2&*" }
QS syl_pos_in_word_bw<=2 { "*-x&*","*-1&*","*-2&*" }
QS syl_pos_in_phrase_fw<=1 { "*&x-*","*&1-*" }
QS syl_pos_in_phrase_bw==5 { "*-5#*" }
QS syl_pos_in_phrase_bw<=1 { "*-x#*","*-1#*" }
QS syl_pos_in_phrase_bw<=4 { "*-x#*","*-1#*","*-2#*","*-3#*","*-4#*" }
QS syl_pos_in_phrase_bw<=9 { "*-x#*","*-1#*","*-2#*","*-3#*","*-4#*","*-5#*","*-6#*","*-7#*","*-8#*","*-9#*" }
QS num_stressed_syls_in_phrase_before_this_syl<=5 { "*#x-*","*#1-*","*#2-*","*#3-*","*#4-*","*#5-*" }
QS num_stressed_syls_in_phrase_after_this_syl<=8 { "*-x$*","*-1$*","*-2$*","*-3$*","*-4$*","*-5$*","*-6$*","*-7$*","*-8$*" }
QS num_accented_syls_in_phrase_before_this_syl<=2 { "*$x-*","*$1-*","*$2-*" }
QS dist_to_prev_stressed_syl_in_phrase==1 { "*!1-*" }
QS dist_to_prev_stressed_syl_in_phrase==2 { "*!2-*" }
QS dist_to_next_stressed_syl_in_phrase==1 { "*-1;*" }
QS dist_to_prev_accented_syl_in_phrase<=3 { "*;x-*","*;0-*","*;1-*","*;2-*","*;3-*" }
QS dist_to_next_accented_syl_in_phrase<=2 { "*-x|*","*-0|*","*-1|*","*-2|*" }
QS syl_vowel_vfront==3 { "*|aa/C:*","*|ao/C:*","*|ow/C:*","*|oy/C:*","*|uh/C:*","*|uw/C:*" }
QS syl_vowel_vheight==1 { "*|ih/C:*","*|iy/C:*","*|uh/C:*","*|uw/C:*" }
QS syl_vowel_vrnd==-&&vlng==s { "*|ae/C:*","*|ah/C:*","*|eh/C:*","*|ih/C:*" }
QS syl_vowel_vlng==l { "*|aa/C:*","*|ao/C:*","*|iy/C:*","*|uw/C:*" }
QS syl_vowel_vfront==1&&vlng==s { "*|ae/C:*","*|eh/C:*","*|ih/C:*" }
QS syl_vowel_vheight==1&&vlng==s { "*|ih/C:*","*|uh/C:*" }
QS syl_vowel_vlng==a { "*|ax/C:*","*|er/C:*" }
QS syl_vowel_vheight==1&&vlng==l { "*|iy/C:*","*|uw/C:*" }
QS syl_vowel_vrnd==-&&vlng==l { "*|aa/C:*","*|iy/C:*" }
QS syl_vowel_vheight==3&&vlng==d { "*|aw/C:*","*|ay/C:*" }
QS syl_vowel==aa { "*|aa/C:*" }
QS syl_vowel==ae { "*|ae/C:*" }
QS syl_vowel==ao { "*|ao/C:*" }
QS syl_vowel==eh { "*|eh/C:*" }
QS syl_vowel==er { "*|er/C:*" }
QS syl_vowel==ih { "*|ih/C:*" }
QS syl_vowel==iy { "*|iy/C:*" }
QS syl_vowel==ow { "*|ow/C:*" }
QS syl_vowel==oy { "*|oy/C:*" }
QS syl_vowel==uw { "*|uw/C:*" }
QS next_syl_accented==1 { "*+1+*" }
QS next_syl_length<=2 { "*+0/D:*","*+1/D:*","*+2/D:*" }
QS prev_word_gpos==content { "*/D:content_*" }
QS num_syls_in_prev_word==2 { "*_2/E:*" }
QS num_syls_in_prev_word<=0 { "*_0/E:*" }
QS word_gpos==aux { "*/E:aux+*" }
QS word_gpos==cc { "*/E:cc+*" }
QS word_gpos==content { "*/E:content+*" }
QS word_gpos==det { "*/E:det+*" }
QS word_gpos==in { "*/E:in+*" }
QS word_gpos==pps { "*/E:pps+*" }
QS word_gpos==wp { "*/E:wp+*" }
QS num_syls_in_word<=2 { "*+x@*","*+1@*","*+2@*" }
QS word_pos_in_phrase_fw==1 { "*@1+*" }
QS word_pos_in_phrase_fw<=1 { "*@x+*","*@1+*" }
QS word_pos_in_phrase_fw<=2 { "*@x+*","*@1+*","*@2+*" }
QS word_pos_in_phrase_fw<=3 { "*@x+*","*@1+*","*@2+*","*@3+*" }
QS num_content_words_in_phrase_before_this_word==2 { "*&2+*" }
QS num_content_words_in_phrase_before_this_word==5 { "*&5+*" }
QS dist_to_prev_content_word_in_phrase<=0 { "*#x+*","*#0+*" }
QS dist_to_prev_content_word_in_phrase<=2 { "*#x+*","*#0+*","*#1+*","*#2+*" }
QS dist_to_next_content_word_in_phrase==1 { "*+1/F:*" }
QS next_word_gpos==0 { "*/F:0_*" }
QS next_word_gpos==det { "*/F:det_*" }
QS next_word_gpos==in { "*/F:in_*" }
QS next_word_gpos==wp { "*/F:wp_*" }
QS num_syls_in_next_word==1 { "*_1/G:*" }
QS num_syls_in_next_word<=3 { "*_0/G:*","*_1/G:*","*_2/G:*","*_3/G:*" }
QS num_syls_in_prev_phrase==0 { "*/G:0_*" }
QS num_words_in_prev_phrase==1 { "*_1/H:*" }
QS num_words_in_prev_phrase==2 { "*_2/H:*" }
QS num_words_in_prev_phrase<=1 { "*_0/H:*","*_1/H:*" }
QS num_syls_in_phrase==2 { "*/H:2=*" }
QS num_syls_in_phrase<=8 { "*/H:x=*","*/H:1=*","*/H:2=*","*/H:3=*","*/H:4=*","*/H:5=*","*/H:6=*","*/H:7=*","*/H:8=*" }
QS num_syls_in_phrase<=11 { "*/H:x=*","*/H:1=*","*/H:2=*","*/H:3=*","*/H:4=*","*/H:5=*","*/H:6=*","*/H:7=*","*/H:8=*","*/H:9=*","*/H:10=*","*/H:11=*" }
QS num_syls_in_phrase<=13 { "*/H:x=*","*/H:1=*","*/H:2=*","*/H:3=*","*/H:4=*","*/H:5=*","*/H:6=*","*/H:7=*","*/H:8=*","*/H:9=*","*/H:10=*","*/H:11=*","*/H:12=*","*/H:13=*" }
QS num_syls_in_phrase<=17 { "*/H:x=*","*/H:1=*","*/H:2=*","*/H:3=*","*/H:4=*","*/H:5=*","*/H:6=*","*/H:7=*","*/H:8=*","*/H:9=*","*/H:10=*","*/H:11=*","*/H:12=*","*/H:13=*","*/H:14=*","*/H:15=*","*/H:16=*","*/H:17=*" }
QS num_words_in_phrase<=4 { "*=x^*","*=1^*","*=2^*","*=3^*","*=4^*" }
QS num_words_in_phrase<=11 { "*=x^*","*=1^*","*=2^*","*=3^*","*=4^*","*=5^*","*=6^*","*=7^*","*=8^*","*=9^*","*=10^*","*=11^*" }
QS phrase_pos_in_utt_fw<=2 { "*^x=*","*^1=*","*^2=*" }
QS phrase_end_tone==NONE { "*|NONE/I:*" }
QS num_syls_in_next_phrase==4 { "*/I:4=*" }
QS num_syls_in_next_phrase<=0 { "*/I:0=*" }
QS num_syls_in_next_phrase<=2 { "*/I:0=*","*/I:1=*","*/I:2=*" }
QS num_syls_in_next_phrase<=4 { "*/I:0=*","*/I:1=*","*/I:2=*","*/I:3=*","*/I:4=*" }
QS num_words_in_next_phrase<=0 { "*=0/J:*" }
QS num_words_in_next_phrase<=3 { "*=0/J:*","*=1/J:*","*=2/J:*","*=3/J:*" }
QS num_words_in_next_phrase<=4 { "*=0/J:*","*=1/J:*","*=2/J:*","*=3/J:*","*=4/J:*" }
QS num_words_in_next_phrase<=5 { "*=0/J:*","*=1/J:*","*=2/J:*","*=3/J:*","*=4/J:*","*=5/J:*" }
QS num_syls_in_utt==8 { "*/J:8+*" }
QS num_syls_in_utt==16 { "*/J:16+*" }
QS num_syls_in_utt<=8 { "*/J:1+*","*/J:2+*","*/J:3+*","*/J:4+*","*/J:5+*","*/J:6+*","*/J:7+*","*/J:8+*" }
QS num_syls_in_utt<=10 { "*/J:1+*","*/J:2+*","*/J:3+*","*/J:4+*","*/J:5+*","*/J:6+*","*/J:7+*","*/J:8+*","*/J:9+*","*/J:10+*" }
QS num_syls_in_utt<=14 { "*/J:1+*","*/J:2+*","*/J:3+*","*/J:4+*","*/J:5+*","*/J:6+*","*/J:7+*","*/J:8+*","*/J:9+*","*/J:10+*","*/J:11+*","*/J:12+*","*/J:13+*","*/J:14+*" }
QS num_words_in_utt==7 { "*+7-*" }
QS num_words_in_utt==12 { "*+12-*" }
QS num_words_in_utt<=5 { "*+1-*","*+2-*","*+3-*","*+4-*","*+5-*" }
QS pos_in_word_fw==1 { "*/K:1_*" }
QS pos_in_word_fw==2 { "*/K:2_*" }
QS pos_in_word_fw<=1 { "*/K:x_*","*/K:1_*" }
QS pos_in_word_fw<=2 { "*/K:x_*","*/K:1_*","*/K:2_*" }
QS pos_in_word_fw<=3 { "*/K:x_*","*/K:1_*","*/K:2_*","*/K:3_*" }
QS pos_in_word_bw==1 { "*_1" }
QS pos_in_word_bw==3 { "*_3" }
QS pos_in_word_bw==5 { "*_5" }
QS pos_in_word_bw<=1 { "*_x","*_1" }
QS pos_in_word_bw<=3 { "*_x","*_1","*_2","*_3" }
QS pos_in_word_bw<=4 { "*_x","*_1","*_2","*_3","*_4" }
QS pos_in_word_bw<=5 { "*_x","*_1","*_2","*_3","*_4","*_5" }
{*}[2]
{
0 syl_pos_in_phrase_bw<=1 -1 -3
-1 cont==-&&cvox==+&&cplace==a -2 -15
-2 name==ax -4 -18
-3 prev_name==x -9 -51
-4 ctype==s -5 -16
-5 vlng==d -6 -38
-6 pos_in_syl_fw==1 -8 -7
-7 name==dh -13 -42
-8 name==ih -10 -36
-9 vc==- -12 -11
-10 vheight==3 -28 -20
-11 ctype==s -23 -37
-12 name==ax -21 -81
-13 cvox==- -19 -14
-14 name==hh -30 -53
-15 name==n -26 -91
-16 pos_in_syl_fw<=1 -27 -17
-17 cvox==- -33 -25
-18 prev_cont==+&&cvox==+ -45 -83
-19 cvox==+&&clab==+ -34 -35
-20 word_gpos==aux -29 -41
-21 phrase_end_tone==NONE -22 -24
-22 prev_ccor==+&&ctype==f -44 -59
-23 cvox==-&&ctype==f -32 -61
-24 vlng==s -63 -137
-25 name==t -105 -70
-26 prev_name==n -73 -66
-27 prev_son==- -46 -48
-28 cont==+&&cvox==+ -31 -39
-29 next_cont==+&&son==+&&cplace==a -50 -111
-30 prev_son==- -64 -96
-31 ctype==r -43 -234
-32 pos_in_word_bw==1 -88 -62
-33 syl_pos_in_phrase_fw<=1 -187 -228
-34 vlng==s -94 -67
-35 cont==+&&son==+ -128 -112
-36 prev_cont==+&&son==+ -97 -189
-37 name==t -100 -54
-38 pos_in_word_fw==1 -78 -194
-39 prev_cvox==- -40 -80
-40 cvox==+&&ctype==f -69 -49
-41 prev_son==- -173 -222
-42 prev_vc==+ -65 -193
-43 prev_cont==-&&cvox==+ -47 -166
-44 num_words_in_next_phrase<=0 -135 -52
-45 pos_in_word_fw==1 -86 -237
-46 name==t -172 -90
-47 vrnd==+&&vheight==1 -56 -77
-48 name==t "dur_s2_1" -200
-49 next_cvox==- -115 -85
-50 syl_vowel==aa -76 -298
-51 next_vc==+ -214 -152
-52 prev_cvox==-&&cplace==a -71 -162
-53 syl_vowel_vheight==1&&vlng==l -74 -79
-54 pos_in_syl_fw<=1 -55 -252
-55 prev_cvox==- -230 -460
-56 clab==+ -57 -258
-57 vheight==2 -58 -60
-58 prev_name==hh -102 -293
-59 num_words_in_next_phrase<=0 -279 -95
-60 prev_cont==+&&son==+&&cplace==a -82 -84
-61 prev_son==- -98 -147
-62 cont==+&&son==+&&cplace==a -93 -568
-63 syl_vowel_vlng==l -169 -109
-64 prev_syl_length<=0 -72 -206
-65 prev_cont==-&&cvox==+&&cplace==a -132 -516
-66 prev_prev_vrnd==-&&vheight==3 -318 -89
-67 syl_vowel_vheight==1&&vlng==s -68 -110
-68 word_gpos==content -140 -259
-69 ctype==r -163 -117
-70 syl_stress==0 -123 -144
-71 prev_cvox==-&&clab==+ -180 -465
-72 next_son==+ -106 "dur_s2_2"
-73 pos_in_word_fw==1 -129 -127
-74 prev_name==pau -75 -266
-75 next_vrnd==-&&vheight==1 -372 -317
-76 next_ctype==n -148 -141
-77 word_gpos==content -122 -167
-78 prev_cont==+&&ccor==+&&cvox==+ -142 -146
-79 num_syls_in_prev_word<=0 -219 "dur_s2_3"
-80 next_vrnd==- -301 -159
-81 next_son==+ -467 -327
-82 next_name==r -226 "dur_s2_4"
-83 next_son==+ -248 -312
-84 prev_prev_vlng==l -118 "dur_s2_5"
-85 ccor==+ "dur_s2_6" -120
-86 syl_pos_in_word_bw==1 -360 -87
-87 next_son==+&&cplace==a -423 -131
-88 cont==+&&son==+ -121 -92
-89 next_cvox==- -321 -437
-90 next_ccor==+&&son==- -134 -633
-91 next_ctype==s&&cplace==a -113 -305
-92 prev_son==- -398 -525
-93 cplace==v -136 "dur_s2_7"
-94 vc==+ -116 -124
-95 prev_cvox==-&&cplace==a -196 -571
-96 name==f -157 -304
-97 next_name==ng -108 -204
-98 name==hh -99 -256
-99 next_next_name==x -154 -401
-100 cvox==+ -179 -101
-101 pos_in_word_bw<=1 -217 -195
-102 cont==- -103 -277
-103 prev_vc==- -104 -211
-104 next_cvox==-&&cplace==a "dur_s2_9" "dur_s2_8"
-105 next_ccor==+ -233 -197
-106 name==ch -107 "dur_s2_10"
-107 prev_vrnd==- -390 -278
-108 prev_name==hh -186 -349
-109 next_ctype==r -183 -410
-110 pos_in_word_fw==1 "dur_s2_11" -174
-111 prev_cvox==-&&clab==+ -188 -587
-112 syl_pos_in_phrase_fw<=1 -133 -441
-113 next_vc==+ -150 -114
-114 pos_in_word_fw==1 -647 -350
-115 next_name==dh -165 -392
-116 prev_vc==+ -224 -119
-117 prev_vlng==l -253 -184
-118 word_gpos==content "dur_s2_12" -325
-119 cvox==+&&csib==+ -238 -336
-120 csib==+ "dur_s2_13" -319
-121 name==dh -125 -294
-122 next_cont==- -249 -338
-123 prev_cont==-&&cplace==a -210 -376
-124 syl_stress==1 -330 -130
-125 cvox==+&&clab==+ -126 "dur_s2_14"
-126 cplace==a -316 -156
-127 prev_name==pau -314 "dur_s2_15"
-128 name==v -257 -199
-129 next_cont==+&&cplace==a -145 -504
-130 word_gpos==aux -164 -299
-131 next_name==l -261 -356
-132 prev_son==- -326 -216
-133 prev_cvox==- -139 -177
-134 prev_prev_vfront==1&&vheight==2 -207 "dur_s2_16"
-135 next_vc==- -182 -254
-136 csib==+ -220 -225
-137 syl_vowel==ih -151 -138
-138 pos_in_word_fw==1 -161 -492
-139 prev_syl_length==3 -383 -255
-140 dist_to_prev_content_word_in_phrase<=0 -153 -176
-141 syl_vowel==ae -292 -191
-142 name==oy -143 -609
-143 syl_vowel_vheight==3&&vlng==d -198 -215
-144 prev_son==- -149 "dur_s2_17"
-145 next_ccor==+ -171 -598
-146 name==ow -231 -155
-147 name==hh -359 -574
-148 prev_cont==+&&son==+&&cplace==a -269 -341
-149 pos_in_word_fw==1 -291 "dur_s2_18"
-150 next_cplace==d -168 "dur_s2_19"
-151 next_cont==+&&son==+ -181 -377
-152 next_vheight==1 -438 -389
-153 next_cvox==+&&clab==+ -232 -160
-154 next_vc==+ -208 -628
-155 prev_name==dh "dur_s2_21" "dur_s2_20"
-156 next_cont==-&&ccor==+&&cvox==- -190 "dur_s2_22"
-157 next_son==+ -158 "dur_s2_23"
-158 prev_cont==+&&cvox==+ -241 -284
-159 word_gpos==content "dur_s2_24" -178
-160 prev_son==- -409 -391
-161 next_name==ng -308 "dur_s2_25"
-162 prev_prev_cvox==-&&ctype==f -346 "dur_s2_26"
-163 next_son==- -322 -250
-164 vrnd==- -306 -418
-165 ccor==+&&son==- -271 -236
-166 son==- -175 "dur_s2_27"
-167 prev_ctype==r -192 -205
-168 next_name==l -311 -429
-169 name==ay -170 -276
-170 pos_in_word_bw<=1 -329 -280
-171 prev_vc==- -223 -613
-172 name==k -513 -209
-173 next_word_gpos==in "dur_s2_29" "dur_s2_28"
-174 word_pos_in_phrase_fw<=1 -309 -527
-175 word_gpos==cc -267 -498
-176 pos_in_syl_bw==3 "dur_s2_30" -243
-177 prev_name==t -524 -553
-178 name==r -552 -221
-179 pos_in_syl_fw==1 -273 -547
-180 prev_ctype==n -264 "dur_s2_31"
-181 syl_vowel_vfront==1&&vlng==s -452 -282
-182 next_vfront==1&&vlng==s -419 -339
-183 word_gpos==aux -246 "dur_s2_32"
-184 next_son==- "dur_s2_33" -185
-185 word_gpos==content -382 -285
-186 next_name==r -262 "dur_s2_34"
-187 prev_csib==+ -203 -497
-188 prev_cont==+&&cvox==+&&cplace==p -251 "dur_s2_35"
-189 word_gpos==content -387 -202
-190 next_son==- -501 -343
-191 word_gpos==content "dur_s2_37" "dur_s2_36"
-192 syl_vowel==uw -388 -365
-193 syl_vowel==er -536 "dur_s2_38"
-194 num_syls_in_prev_word<=0 -337 "dur_s2_39"
-195 prev_name==ax -395 "dur_s2_40"
-196 prev_name==z "dur_s2_41" -584
-197 clab==+ "dur_s2_42" -462
-198 pos_in_word_bw==1 -239 -469
-199 prev_vrnd==- -272 -470
-200 next_son==- -335 -201
-201 num_words_in_prev_phrase==1 "dur_s2_44" "dur_s2_43"
-202 next_cvox==+&&cplace==v -240 "dur_s2_45"
-203 syl_stress==1 -320 -402
-204 syl_stress==0 "dur_s2_46" -619
-205 next_clab==+ -244 -475
-206 csib==+ "dur_s2_48" "dur_s2_47"
-207 next_next_son==-&&cplace==a -434 "dur_s2_49"
-208 next_son==- "dur_s2_51" "dur_s2_50"
-209 next_son==+ -413 "dur_s2_52"
-210 prev_ctype==f -340 "dur_s2_53"
-211 prev_name==r -212 -213
-212 name==s -218 -533
-213 syl_stress==1 -461 "dur_s2_54"
-214 next_ctype==f -286 -290
-215 next_vrnd==- -302 "dur_s2_55"
-216 prev_cvox==+&&son==-&&clab==+ "dur_s2_56" -654
-217 prev_name==pau -281 "dur_s2_57"
-218 next_vrnd==-&&vlng==l -427 "dur_s2_58"
-219 dist_to_prev_stressed_syl_in_phrase==1 -576 -357
-220 son==- -520 -509
-221 prev_son==-&&clab==+ -447 -468
-222 num_syls_in_next_phrase==4 "dur_s2_60" "dur_s2_59"
-223 next_ctype==s -425 "dur_s2_61"
-224 name==l -562 -229
-225 prev_son==- -313 "dur_s2_62"
-226 word_gpos==wp -227 -643
-227 next_son==- -369 -471
-228 clab==+ "dur_s2_64" "dur_s2_63"
-229 next_name==iy -542 -345
-230 num_syls_in_next_word==1 -412 "dur_s2_65"
-231 next_cvox==-&&ctype==s -393 -245
-232 next_name==n -611 -508
-233 syl_pos_in_phrase_fw<=1 -260 "dur_s2_66"
-234 next_vc==+ -235 -606
-235 syl_stress==1 -315 -566
-236 prev_ccor==+&&son==- -242 "dur_s2_67"
-237 prev_cplace==a -328 -495
-238 prev_prev_name==f -287 "dur_s2_68"
-239 prev_cvox==-&&ctype==s -478 -380
-240 num_words_in_prev_phrase==2 -283 "dur_s2_69"
-241 ctype==f&&csib==+ "dur_s2_71" "dur_s2_70"
-242 name==z -494 -554
-243 num_words_in_prev_phrase<=1 "dur_s2_73" "dur_s2_72"
-244 vlng==l -640 -247
-245 name==ay "dur_s2_75" "dur_s2_74"
-246 name==aa -303 "dur_s2_76"
-247 next_cont==- -367 "dur_s2_77"
-248 prev_cplace==a "dur_s2_78" -421
-249 next_ctype==f -558 -405
-250 next_ctype==f&&csib==+ -531 "dur_s2_79"
-251 prev_son==+&&clab==+ -347 -507
-252 syl_stress==1 -275 "dur_s2_80"
-253 prev_cont==-&&cvox==+ -263 -378
-254 next_cvox==+&&ctype==s -295 "dur_s2_81"
-255 prev_prev_vrnd==- "dur_s2_83" "dur_s2_82"
-256 next_next_cont==-&&cvox==+ -583 "dur_s2_84"
-257 prev_cont==-&&cplace==a -463 "dur_s2_85"
-258 son==- -307 -265
-259 syl_stress==1 -416 -433
-260 syl_stress==0 -577 -368
-261 num_syls_in_next_word<=3 "dur_s2_87" "dur_s2_86"
-262 prev_cvox==+&&son==-&&clab==+ -394 -569
-263 next_son==+&&clab==+ -614 -596
-264 prev_son==- "dur_s2_88" -274
-265 next_son==- "dur_s2_90" "dur_s2_89"
-266 next_vrnd==-&&vheight==1 "dur_s2_92" "dur_s2_91"
-267 vlng==l -268 -453
-268 next_name==n "dur_s2_93" -624
-269 prev_name==dh -354 -270
-270 prev_word_gpos==content "dur_s2_94" -600
-271 next_son==+ "dur_s2_96" "dur_s2_95"
-272 syl_vowel_vlng==a "dur_s2_98" "dur_s2_97"
-273 next_next_name==x -543 "dur_s2_99"
-274 prev_cont==-&&cvox==+ "dur_s2_100" -457
-275 prev_prev_vc==+ "dur_s2_102" "dur_s2_101"
-276 prev_name==l -473 "dur_s2_103"
-277 name==ch -404 "dur_s2_104"
-278 next_vc==+ "dur_s2_105" -300
-279 prev_word_gpos==content "dur_s2_106" -342
-280 syl_stress==0 "dur_s2_108" "dur_s2_107"
-281 syl_stress==1 -440 -472
-282 next_cvox==+ -397 -540
-283 next_ccor==+&&cvox==+ -358 "dur_s2_109"
-284 prev_cplace==a "dur_s2_110" -491
-285 prev_name==ao "dur_s2_112" "dur_s2_111"
-286 next_son==-&&cplace==l -324 -502
-287 name==l -288 -289
-288 prev_vheight==2&&vlng==s -431 "dur_s2_113"
-289 next_vc==- -506 "dur_s2_114"
-290 next_next_vheight==1&&vlng==l -456 -489
-291 prev_name==l -351 "dur_s2_115"
-292 prev_son==- "dur_s2_117" "dur_s2_116"
-293 prev_syl_length==0 -589 "dur_s2_118"
-294 prev_cont==-&&cvox==+ "dur_s2_120" "dur_s2_119"
-295 prev_cvox==-&&ctype==s -503 -296
-296 prev_prev_vfront==2 -297 "dur_s2_121"
-297 next_word_gpos==det -344 "dur_s2_122"
-298 prev_cont==+&&cvox==+&&cplace==a -557 "dur_s2_123"
-299 num_syls_in_prev_phrase==0 "dur_s2_125" "dur_s2_124"
-300 ccor==+ "dur_s2_126" -483
-301 syl_pos_in_word_bw<=2 "dur_s2_127" -400
-302 next_name==l -374 "dur_s2_128"
-303 prev_cont==- -355 "dur_s2_129"
-304 prev_cont==-&&cplace==a "dur_s2_131" "dur_s2_130"
-305 pos_in_syl_bw==2 "dur_s2_132" -379
-306 word_gpos==det -439 "dur_s2_133"
-307 next_name==p -399 "dur_s2_134"
-308 next_ccor==+&&cvox==+&&son==- -477 "dur_s2_135"
-309 next_cont==-&&son==- -323 -310
-310 syl_pos_in_phrase_bw==5 "dur_s2_137" "dur_s2_136"
-311 next_name==hh -517 "dur_s2_138"
-312 next_cont==+ "dur_s2_140" "dur_s2_139"
-313 num_syls_in_next_phrase<=0 "dur_s2_141" -537
-314 next_ctype==r&&son==+ -455 "dur_s2_142"
-315 next_cont==+ "dur_s2_143" -484
-316 prev_son==- -604 "dur_s2_144"
-317 prev_son==+ -430 -570
-318 pos_in_word_fw<=2 -381 "dur_s2_145"
-319 next_name==hh -385 -331
-320 prev_cvox==+&&cplace==v -653 "dur_s2_146"
-321 pos_in_word_fw<=3 "dur_s2_148" "dur_s2_147"
-322 next_name==r -334 "dur_s2_149"
-323 prev_vfront==1 -371 "dur_s2_150"
-324 next_word_gpos==wp -560 "dur_s2_151"
-325 dist_to_next_stressed_syl_in_phrase==1 "dur_s2_152" -450
-326 prev_syl_stress==1 -428 "dur_s2_153"
-327 next_cont==+ "dur_s2_154" -649
-328 prev_vrnd==+ -549 "dur_s2_155"
-329 pos_in_syl_fw==1 -332 "dur_s2_156"
-330 syl_pos_in_word_fw<=1 -515 -364
-331 pos_in_syl_fw==4 "dur_s2_158" "dur_s2_157"
-332 syl_vowel==er -333 "dur_s2_159"
-333 prev_cont==+&&ccor==+&&cvox==+ -650 "dur_s2_160"
-334 prev_vfront==3 -572 -605
-335 pos_in_syl_bw==1 "dur_s2_162" "dur_s2_161"
-336 cont==+ "dur_s2_164" "dur_s2_163"
-337 word_gpos==pps -442 -595
-338 prev_name==t "dur_s2_166" "dur_s2_165"
-339 prev_ccor==+&&cvox==- -490 "dur_s2_167"
-340 syl_pos_in_phrase_fw<=1 -415 "dur_s2_168"
-341 prev_prev_vfront==2&&vheight==2 "dur_s2_170" "dur_s2_169"
-342 next_syl_accented==1 -348 "dur_s2_171"
-343 next_name==s -518 "dur_s2_172"
-344 num_syls_in_next_phrase<=4 "dur_s2_173" -639
-345 prev_cont==+&&cvox==+ -631 "dur_s2_174"
-346 prev_prev_vlng==s -612 "dur_s2_175"
-347 vfront==3 "dur_s2_176" -486
-348 prev_cvox==-&&csib==+ -539 "dur_s2_177"
-349 pos_in_word_bw<=5 "dur_s2_178" -448
-350 syl_pos_in_phrase_bw<=9 "dur_s2_180" "dur_s2_179"
-351 next_name==r -352 "dur_s2_181"
-352 prev_vrnd==-&&vlng==d -353 "dur_s2_182"
-353 prev_ctype==r -510 "dur_s2_183"
-354 prev_cont==-&&cvox==- "dur_s2_185" "dur_s2_184"
-355 syl_stress==1 -406 -555
-356 syl_length<=2 "dur_s2_187" "dur_s2_186"
-357 num_syls_in_phrase<=13 "dur_s2_189" "dur_s2_188"
-358 syl_length<=3 "dur_s2_190" -532
-359 next_next_name==x -449 "dur_s2_191"
-360 next_cont==+&&son==+ -361 "dur_s2_192"
-361 prev_cvox==- "dur_s2_193" -362
-362 next_next_cont==-&&son==- "dur_s2_194" -363
-363 syl_pos_in_word_bw==2 "dur_s2_196" "dur_s2_195"
-364 next_cont==+&&cvox==+ "dur_s2_198" "dur_s2_197"
-365 prev_ccor==+&&son==- -366 "dur_s2_199"
-366 next_cont==+&&cvox==+&&cplace==a "dur_s2_201" "dur_s2_200"
-367 num_stressed_syls_in_phrase_after_this_syl<=8 "dur_s2_203" "dur_s2_202"
-368 pos_in_word_fw<=1 -585 "dur_s2_204"
-369 syl_vowel==eh -370 -528
-370 next_clab==+ "dur_s2_206" "dur_s2_205"
-371 prev_vc==- "dur_s2_208" "dur_s2_207"
-372 syl_vowel_vfront==3 -373 "dur_s2_209"
-373 prev_cont==-&&cvox==+ -407 -638
-374 name==aw -375 "dur_s2_210"
-375 prev_cont==-&&cvox==+&&clab==+ -420 "dur_s2_211"
-376 prev_prev_cont==-&&cvox==+ -424 "dur_s2_212"
-377 syl_vowel_vfront==1&&vlng==s "dur_s2_214" "dur_s2_213"
-378 prev_cont==-&&cvox==+&&clab==+ -593 "dur_s2_215"
-379 num_words_in_next_phrase<=3 "dur_s2_216" -534
-380 next_cont==+&&son==+ "dur_s2_218" "dur_s2_217"
-381 dist_to_prev_content_word_in_phrase<=2 "dur_s2_220" "dur_s2_219"
-382 next_cont==+&&cvox==+ "dur_s2_222" "dur_s2_221"
-383 next_next_cvox==+&&ctype==f -384 -411
-384 prev_vrnd==+ "dur_s2_224" "dur_s2_223"
-385 next_ctype==f&&csib==+ "dur_s2_225" -386
-386 word_gpos==content "dur_s2_227" "dur_s2_226"
-387 next_son==- "dur_s2_229" "dur_s2_228"
-388 next_cont==+ -474 "dur_s2_230"
-389 num_words_in_next_phrase<=5 "dur_s2_231" -476
-390 syl_pos_in_word_fw==1 "dur_s2_232" -482
-391 prev_cont==-&&cvox==+ "dur_s2_234" "dur_s2_233"
-392 pos_in_word_fw<=3 "dur_s2_236" "dur_s2_235"
-393 name==ay "dur_s2_238" "dur_s2_237"
-394 next_name==l -417 -499
-395 prev_ctype==n -599 -396
-396 num_syls_in_next_phrase<=2 "dur_s2_240" "dur_s2_239"
-397 syl_vowel==eh -426 "dur_s2_241"
-398 prev_vc==+ -548 -511
-399 next_cvox==+&&clab==+ -541 "dur_s2_242"
-400 prev_ccor==+ "dur_s2_243" -618
-401 name==f -634 "dur_s2_244"
-402 clab==+ "dur_s2_245" -403
-403 prev_vc==- "dur_s2_246" -580
-404 next_son==- -651 -459
-405 next_clab==+ -487 "dur_s2_247"
-406 word_gpos==det "dur_s2_249" "dur_s2_248"
-407 word_gpos==aux "dur_s2_250" -408
-408 prev_vrnd==- -573 "dur_s2_251"
-409 num_syls_in_utt==16 -505 "dur_s2_252"
-410 syl_vowel==ao "dur_s2_254" "dur_s2_253"
-411 word_pos_in_phrase_fw<=2 -567 "dur_s2_255"
-412 prev_prev_cont==+&&ccor==+ "dur_s2_257" "dur_s2_256"
-413 next_cont==- -414 -526
-414 syl_vowel==eh "dur_s2_259" "dur_s2_258"
-415 prev_cont==- -545 "dur_s2_260"
-416 next_word_gpos==0 -597 "dur_s2_261"
-417 word_gpos==det -632 "dur_s2_262"
-418 syl_pos_in_word_fw<=1 "dur_s2_263" -451
-419 prev_cvox==- -446 "dur_s2_264"
-420 pos_in_word_bw<=3 "dur_s2_266" "dur_s2_265"
-421 next_name==s -422 "dur_s2_267"
-422 prev_son==- "dur_s2_269" "dur_s2_268"
-423 next_cont==+ "dur_s2_271" "dur_s2_270"
-424 prev_ctype==n "dur_s2_273" "dur_s2_272"
-425 next_cvox==+ -454 "dur_s2_274"
-426 prev_cont==+&&son==+ "dur_s2_276" "dur_s2_275"
-427 prev_son==- "dur_s2_277" -466
-428 dist_to_next_accented_syl_in_phrase<=2 "dur_s2_279" "dur_s2_278"
-429 next_next_vlng==l "dur_s2_281" "dur_s2_280"
-430 prev_ctype==f -575 "dur_s2_282"
-431 syl_pos_in_word_fw<=1 "dur_s2_283" -432
-432 ccor==+ -538 "dur_s2_284"
-433 pos_in_word_fw==1 -622 "dur_s2_285"
-434 prev_syl_length<=0 -435 -436
-435 next_cvox==- -615 -546
-436 prev_prev_name==pau "dur_s2_287" "dur_s2_286"
-437 next_name==hh -592 "dur_s2_288"
-438 num_syls_in_utt==8 "dur_s2_290" "dur_s2_289"
-439 phrase_pos_in_utt_fw<=2 "dur_s2_291" -544
-440 prev_cont==+ "dur_s2_293" "dur_s2_292"
-441 word_gpos==in "dur_s2_295" "dur_s2_294"
-442 next_name==l -443 "dur_s2_296"
-443 name==ay -444 "dur_s2_297"
-444 num_syls_in_word<=2 -445 -594
-445 next_vheight==2 "dur_s2_299" "dur_s2_298"
-446 prev_word_gpos==content "dur_s2_301" "dur_s2_300"
-447 prev_prev_cont==+&&cplace==a -535 "dur_s2_302"
-448 dist_to_next_content_word_in_phrase==1 "dur_s2_304" "dur_s2_303"
-449 prev_cont==+&&ccor==+ -586 "dur_s2_305"
-450 prev_name==l "dur_s2_307" "dur_s2_306"
-451 pos_in_word_bw==1 "dur_s2_309" "dur_s2_308"
-452 prev_prev_ctype==s&&cplace==a "dur_s2_311" "dur_s2_310"
-453 next_name==y -481 "dur_s2_312"
-454 pos_in_word_fw==2 -464 "dur_s2_313"
-455 prev_cont==- "dur_s2_315" "dur_s2_314"
-456 next_cvox==-&&cplace==a "dur_s2_317" "dur_s2_316"
-457 prev_prev_ccor==+&&son==- -458 "dur_s2_318"
-458 prev_prev_cvox==+ "dur_s2_319" -621
-459 ccor==+ "dur_s2_321" "dur_s2_320"
-460 next_next_name==x "dur_s2_323" "dur_s2_322"
-461 next_next_vrnd==-&&vlng==l "dur_s2_325" "dur_s2_324"
-462 prev_vc==+ "dur_s2_327" "dur_s2_326"
-463 prev_prev_cvox==+ -519 -637
-464 syl_vowel_vrnd==-&&vlng==l "dur_s2_329" "dur_s2_328"
-465 prev_cont==- "dur_s2_331" "dur_s2_330"
-466 prev_name==sh "dur_s2_333" "dur_s2_332"
-467 next_clab==+ -500 "dur_s2_334"
-468 pos_in_word_bw<=5 "dur_s2_336" "dur_s2_335"
-469 syl_vowel==ow "dur_s2_338" "dur_s2_337"
-470 syl_vowel_vfront==1&&vlng==s -652 "dur_s2_339"
-471 next_cont==-&&ccor==+&&son==- "dur_s2_341" "dur_s2_340"
-472 pos_in_syl_fw<=1 "dur_s2_342" -642
-473 next_cvox==- -644 "dur_s2_343"
-474 prev_son==- "dur_s2_345" "dur_s2_344"
-475 next_cont==+&&son==+ "dur_s2_347" "dur_s2_346"
-476 num_words_in_utt<=5 "dur_s2_349" "dur_s2_348"
-477 prev_cont==+&&son==+ "dur_s2_351" "dur_s2_350"
-478 prev_ccor==+&&cvox==+ -479 "dur_s2_352"
-479 next_name==l "dur_s2_353" -480
-480 vrnd==- "dur_s2_355" "dur_s2_354"
-481 prev_son==+&&cplace==a -625 -641
-482 next_vfront==2 -578 "dur_s2_356"
-483 syl_vowel_vrnd==-&&vlng==s "dur_s2_358" "dur_s2_357"
-484 next_next_ctype==r -485 "dur_s2_359"
-485 next_name==hh "dur_s2_361" "dur_s2_360"
-486 pos_in_word_bw<=4 "dur_s2_362" -629
-487 prev_ccor==+&&son==- "dur_s2_363" -488
-488 next_ccor==+ "dur_s2_365" "dur_s2_364"
-489 num_syls_in_utt<=8 -635 "dur_s2_366"
-490 prev_prev_cont==+ -579 "dur_s2_367"
-491 pos_in_word_bw==3 "dur_s2_369" "dur_s2_368"
-492 next_son==+&&cplace==a -493 "dur_s2_370"
-493 prev_prev_vheight==1 "dur_s2_372" "dur_s2_371"
-494 next_cvox==+ "dur_s2_374" "dur_s2_373"
-495 next_name==l -496 "dur_s2_375"
-496 prev_cvox==+ "dur_s2_377" "dur_s2_376"
-497 name==g "dur_s2_379" "dur_s2_378"
-498 next_next_clab==+ "dur_s2_381" "dur_s2_380"
-499 pos_in_word_bw<=3 "dur_s2_383" "dur_s2_382"
-500 next_cvox==+&&son==-&&cplace==a "dur_s2_385" "dur_s2_384"
-501 num_syls_in_phrase==2 "dur_s2_387" "dur_s2_386"
-502 num_words_in_next_phrase<=4 "dur_s2_389" "dur_s2_388"
-503 prev_cvox==- -620 "dur_s2_390"
-504 next_csib==+&&cplace==a "dur_s2_392" "dur_s2_391"
-505 syl_pos_in_phrase_bw==5 -530 "dur_s2_393"
-506 syl_vowel_vheight==1&&vlng==l "dur_s2_395" "dur_s2_394"
-507 pos_in_syl_bw==1 "dur_s2_397" "dur_s2_396"
-508 next_next_vlng==l "dur_s2_399" "dur_s2_398"
-509 name==v "dur_s2_401" "dur_s2_400"
-510 prev_prev_cvox==- -582 "dur_s2_402"
-511 ctype==r -512 "dur_s2_403"
-512 prev_vheight==1&&vlng==s "dur_s2_405" "dur_s2_404"
-513 name==p -630 -514
-514 next_cvox==+ "dur_s2_407" "dur_s2_406"
-515 syl_pos_in_word_bw==2 "dur_s2_409" "dur_s2_408"
-516 prev_prev_ccor==+ -581 "dur_s2_410"
-517 next_cont==-&&cvox==+ -551 "dur_s2_411"
-518 num_syls_in_next_phrase==4 "dur_s2_413" "dur_s2_412"
-519 prev_name==l "dur_s2_415" "dur_s2_414"
-520 num_syls_in_next_phrase<=0 "dur_s2_416" -521
-521 prev_prev_cvox==-&&cplace==p -522 "dur_s2_417"
-522 name==n -523 "dur_s2_418"
-523 prev_vrnd==- "dur_s2_420" "dur_s2_419"
-524 word_gpos==content "dur_s2_422" "dur_s2_421"
-525 prev_ctype==s&&cplace==a "dur_s2_424" "dur_s2_423"
-526 num_words_in_phrase<=11 "dur_s2_426" "dur_s2_425"
-527 next_ccor==+&&son==- "dur_s2_428" "dur_s2_427"
-528 next_cont==+ "dur_s2_429" -529
-529 prev_ctype==r&&son==+ "dur_s2_431" "dur_s2_430"
-530 prev_name==l "dur_s2_433" "dur_s2_432"
-531 syl_vowel==ow -601 "dur_s2_434"
-532 num_content_words_in_phrase_before_this_word==2 "dur_s2_436" "dur_s2_435"
-533 num_content_words_in_phrase_before_this_word==5 "dur_s2_438" "dur_s2_437"
-534 next_next_cont==-&&ccor==+ -626 "dur_s2_439"
-535 prev_cplace==a "dur_s2_441" "dur_s2_440"
-536 num_syls_in_phrase<=11 "dur_s2_443" "dur_s2_442"
-537 cplace==p "dur_s2_445" "dur_s2_444"
-538 prev_vlng==l "dur_s2_447" "dur_s2_446"
-539 num_syls_in_utt<=14 "dur_s2_449" "dur_s2_448"
-540 syl_vowel==eh "dur_s2_451" "dur_s2_450"
-541 word_gpos==in -610 "dur_s2_452"
-542 prev_cont==+&&ccor==+&&cvox==+ "dur_s2_454" "dur_s2_453"
-543 prev_vc==+ "dur_s2_456" "dur_s2_455"
-544 next_next_cont==+&&cplace==a "dur_s2_458" "dur_s2_457"
-545 syl_vowel_vheight==1 "dur_s2_460" "dur_s2_459"
-546 next_syl_accented==1 "dur_s2_462" "dur_s2_461"
-547 next_ctype==r&&son==+ -603 "dur_s2_463"
-548 cplace==p "dur_s2_465" "dur_s2_464"
-549 next_name==n -550 "dur_s2_466"
-550 word_pos_in_phrase_fw<=3 "dur_s2_468" "dur_s2_467"
-551 next_next_vlng==a "dur_s2_470" "dur_s2_469"
-552 pos_in_word_bw<=3 "dur_s2_472" "dur_s2_471"
-553 prev_prev_cvox==- "dur_s2_474" "dur_s2_473"
-554 word_gpos==content "dur_s2_475" -602
-555 syl_vowel==iy -556 -655
-556 prev_son==+ "dur_s2_477" "dur_s2_476"
-557 prev_ccor==+&&cvox==+ "dur_s2_479" "dur_s2_478"
-558 prev_ccor==+&&son==- "dur_s2_480" -559
-559 next_cont==+&&son==+ "dur_s2_482" "dur_s2_481"
-560 next_name==t -561 "dur_s2_483"
-561 next_name==l "dur_s2_485" "dur_s2_484"
-562 son==+&&cplace==a -564 -563
-563 prev_name==l -565 "dur_s2_486"
-564 prev_son==- "dur_s2_487" -590
-565 next_vrnd==-&&vheight==1 "dur_s2_489" "dur_s2_488"
-566 pos_in_word_bw==1 "dur_s2_491" "dur_s2_490"
-567 prev_vc==+ "dur_s2_493" "dur_s2_492"
-568 ctype==r "dur_s2_495" "dur_s2_494"
-569 syl_stress==1 "dur_s2_497" "dur_s2_496"
-570 prev_ctype==r "dur_s2_499" "dur_s2_498"
-571 num_words_in_utt==7 "dur_s2_501" "dur_s2_500"
-572 next_clab==+ -627 "dur_s2_502"
-573 prev_son==- "dur_s2_504" "dur_s2_503"
-574 next_vrnd==-&&vheight==1 "dur_s2_506" "dur_s2_505"
-575 prev_prev_csib==+ -617 "dur_s2_507"
-576 dist_to_prev_stressed_syl_in_phrase==2 "dur_s2_509" "dur_s2_508"
-577 prev_vc==- "dur_s2_511" "dur_s2_510"
-578 word_pos_in_phrase_fw==1 "dur_s2_513" "dur_s2_512"
-579 prev_cont==- "dur_s2_515" "dur_s2_514"
-580 prev_prev_son==- "dur_s2_517" "dur_s2_516"
-581 prev_name==n -608 "dur_s2_518"
-582 prev_prev_cvox==+&&clab==+ "dur_s2_520" "dur_s2_519"
-583 dist_to_prev_stressed_syl_in_phrase==1 "dur_s2_522" "dur_s2_521"
-584 prev_prev_vrnd==- "dur_s2_524" "dur_s2_523"
-585 prev_cont==-&&cvox==+ "dur_s2_526" "dur_s2_525"
-586 next_vfront==3&&vlng==l "dur_s2_528" "dur_s2_527"
-587 word_gpos==in "dur_s2_529" -588
-588 prev_prev_cvox==+&&cplace==a "dur_s2_531" "dur_s2_530"
-589 prev_prev_csib==+ "dur_s2_533" "dur_s2_532"
-590 cont==+ -591 "dur_s2_534"
-591 pos_in_word_bw<=3 "dur_s2_536" "dur_s2_535"
-592 next_syl_accented==1 "dur_s2_538" "dur_s2_537"
-593 prev_cvox==+&&son==- "dur_s2_540" "dur_s2_539"
-594 next_next_vrnd==-&&vlng==l "dur_s2_542" "dur_s2_541"
-595 prev_prev_cvox==+ "dur_s2_544" "dur_s2_543"
-596 prev_prev_ccor==+&&son==- "dur_s2_546" "dur_s2_545"
-597 next_syl_length<=2 "dur_s2_548" "dur_s2_547"
-598 num_accented_syls_in_phrase_before_this_syl<=2 "dur_s2_550" "dur_s2_549"
-599 prev_cvox==+&&son==- "dur_s2_552" "dur_s2_551"
-600 num_syls_in_phrase<=17 "dur_s2_554" "dur_s2_553"
-601 next_cvox==+&&cplace==v "dur_s2_556" "dur_s2_555"
-602 prev_prev_cplace==l "dur_s2_558" "dur_s2_557"
-603 syl_stress==1 "dur_s2_560" "dur_s2_559"
-604 cvox==+ "dur_s2_561" -623
-605 next_clab==+ "dur_s2_563" "dur_s2_562"
-606 next_vfront==1&&vlng==s -607 "dur_s2_564"
-607 next_next_son==-&&cplace==a "dur_s2_566" "dur_s2_565"
-608 syl_pos_in_phrase_bw<=4 "dur_s2_568" "dur_s2_567"
-609 prev_ctype==a "dur_s2_570" "dur_s2_569"
-610 prev_cont==+&&ccor==+ "dur_s2_572" "dur_s2_571"
-611 dist_to_prev_accented_syl_in_phrase<=3 "dur_s2_574" "dur_s2_573"
-612 prev_prev_name==ey "dur_s2_576" "dur_s2_575"
-613 prev_name==r "dur_s2_578" "dur_s2_577"
-614 num_syls_in_prev_word<=0 "dur_s2_580" "dur_s2_579"
-615 next_syl_length<=2 "dur_s2_581" -616
-616 next_vfront==1 "dur_s2_583" "dur_s2_582"
-617 num_syls_in_prev_word==2 "dur_s2_585" "dur_s2_584"
-618 next_next_son==- "dur_s2_587" "dur_s2_586"
-619 prev_ccor==+&&cvox==-&&ctype==f "dur_s2_589" "dur_s2_588"
-620 next_ctype==f&&csib==+ "dur_s2_591" "dur_s2_590"
-621 num_syls_in_utt<=10 "dur_s2_593" "dur_s2_592"
-622 next_next_name==er "dur_s2_595" "dur_s2_594"
-623 cont==+&&cvox==+&&cplace==p "dur_s2_597" "dur_s2_596"
-624 vfront==1 "dur_s2_599" "dur_s2_598"
-625 syl_length==2 "dur_s2_601" "dur_s2_600"
-626 num_words_in_phrase<=4 -645 "dur_s2_602"
-627 pos_in_syl_bw==1 "dur_s2_604" "dur_s2_603"
-628 next_next_name==n "dur_s2_606" "dur_s2_605"
-629 next_next_ccor==+&&son==- "dur_s2_608" "dur_s2_607"
-630 prev_vfront==1&&vheight==2 "dur_s2_610" "dur_s2_609"
-631 pos_in_word_fw<=3 "dur_s2_612" "dur_s2_611"
-632 next_name==s "dur_s2_614" "dur_s2_613"
-633 prev_name==n "dur_s2_616" "dur_s2_615"
-634 prev_cont==-&&cvox==+ "dur_s2_618" "dur_s2_617"
-635 num_words_in_utt==7 -636 "dur_s2_619"
-636 num_words_in_utt==12 "dur_s2_621" "dur_s2_620"
-637 next_vrnd==+&&vlng==l "dur_s2_623" "dur_s2_622"
-638 syl_vowel==ae "dur_s2_625" "dur_s2_624"
-639 prev_syl_accented==0 "dur_s2_627" "dur_s2_626"
-640 prev_prev_ctype==s&&cplace==a "dur_s2_629" "dur_s2_628"
-641 prev_prev_vlng==s "dur_s2_631" "dur_s2_630"
-642 pos_in_word_bw==5 "dur_s2_633" "dur_s2_632"
-643 vfront==1 "dur_s2_635" "dur_s2_634"
-644 prev_son==+&&clab==+ "dur_s2_637" "dur_s2_636"
-645 prev_vrnd==-&&vlng==s "dur_s2_638" -646
-646 next_next_vheight==3 "dur_s2_640" "dur_s2_639"
-647 next_vheight==1 -648 "dur_s2_641"
-648 next_name==ax "dur_s2_643" "dur_s2_642"
-649 prev_cont==-&&cplace==a "dur_s2_645" "dur_s2_644"
-650 syl_vowel==oy "dur_s2_647" "dur_s2_646"
-651 next_name==w "dur_s2_649" "dur_s2_648"
-652 pos_in_word_fw==1 "dur_s2_651" "dur_s2_650"
-653 num_stressed_syls_in_phrase_before_this_syl<=5 "dur_s2_653" "dur_s2_652"
-654 num_syls_in_phrase<=8 "dur_s2_655" "dur_s2_654"
-655 next_cont==-&&ccor==+&&son==- "dur_s2_657" "dur_s2_656"
}
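The braced block above is a decision tree whose interior nodes reference the QS questions by name and whose quoted entries (e.g. "dur_s2_1") are leaf names. A hedged sketch of walking such a tree, under the assumptions that every interior line has four fields, that quoted children are leaves, and that the branch order used here is illustrative (which child is taken on a "yes" answer varies by toolkit):

```python
def parse_tree(lines):
    """Parse '<id> <question> <child> <child>' node lines into
    a dict mapping node id -> (question, child_a, child_b)."""
    nodes = {}
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip braces and malformed lines
        node_id, question, child_a, child_b = parts
        nodes[int(node_id)] = (question, child_a, child_b)
    return nodes

def walk(nodes, answer, node_id=0):
    """Descend from the root until a quoted leaf name is reached.
    `answer(question)` returns True/False for each question asked.
    Branch order (first child on 'yes') is an assumption here."""
    while True:
        question, child_a, child_b = nodes[node_id]
        child = child_a if answer(question) else child_b
        if child.startswith('"'):
            return child.strip('"')
        node_id = int(child)

# A tiny illustrative tree in the same shape as the dump above.
sample = [
    '0 syl_stress==1 -1 "leaf_b"',
    '-1 vc==+ "leaf_a" "leaf_c"',
]
tree = parse_tree(sample)
print(walk(tree, lambda q: True))  # follows the always-yes path
```

In the real dump each leaf name would index a duration model for state 2 (the `{*}[2]` header), and `answer` would test the QS glob patterns against a full-context label.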
|
As surely as the toll echoes from Big Ben, every nationwide election in Britain for more than a century has been won by one of two parties: Labor or the Conservatives.
Next week, that august record is likely to come crashing down, courtesy of a far-right insurgent party that has seized on a pervasive anti-immigrant and anti-establishment mood to rocket to the lead in polls for the European parliamentary election.
The rise of the U.K. Independence Party has shaken up British politics in a way rarely seen here. While far-right parties have long been influential across continental Europe, they have always been relegated to the fringe in this country, which sees itself as open and inclusive.
But the political and economic stars have aligned in UKIP’s favor, and a party that’s dismissed as racist, xenophobic and a bit loony by London sophisticates suddenly is steering the national debate with its calls for Britain to close down borders and leave the European Union. A victory in European elections would confirm its newfound status as a major political player, even though UKIP lacks a single seat in the British Parliament.
The party’s message has resonated particularly well in struggling small towns and decaying industrial centers, where the benefits of a recovering economy are scarcely felt and where mainstream politicians are seen as out of touch with constituents furious over a massive influx of foreign workers.
“We’ve gotta get control of our country back,” said Gordon Harris, a youthful-looking 73-year-old with a skull tattooed on his forearm. “I’ve got nothing against immigration, but it’s just too much. It’s out of control, and we can’t cope.”
For decades, Harris was a truck driver and a Labor voter. But one recent night, he turned out at a conference center in this genteel Cambridgeshire market town to cheer Nigel Farage, who as UKIP’s leader has become the nation’s preeminent channeler of anti-establishment vitriol.
The party’s emergence doesn’t just challenge the ruling Conservatives, who have scrambled to the right on immigration and environmental policies to keep from being outflanked. As Harris’s conversion shows, it also threatens to eat into support for Labor, which risks losing the backing of working-class voters alienated by the party’s progressivism.
UKIP’s appeals to the Reagan Democrats of Britain are hardly subtle: On one campaign billboard, a dejected worker sits on the curb with a coin cup at his feet. “British workers are hit hard by unlimited cheap labor,” reads the ad’s text.
The message is typical of a European election campaign that has been dominated in Britain by voters’ fears, not their hopes.
“UKIP promises a better yesterday,” said Peter Kellner, president of the polling firm YouGov. “The appeal is to people who feel that Britain has become a less attractive, less secure and more frightening place. They dislike the modern world and want to get off.”
The party has traditionally done well in European elections, which are held every five years to select the 751 members of the European Parliament. The elections are marked by low turnout, and UKIP has struggled to translate its success in the European vote into success where it most counts: British parliamentary elections. But the party, which has been around for two decades, has never done as well in the polls as it is doing now.
‘Different Britains’
Farage, UKIP’s leader, is a somewhat unlikely champion of the working man. He made a small fortune as a commodities broker in London’s financial district, and for the past 15 years has been employed on the taxpayer’s dime as a member of the European Parliament — a job he wants to eliminate. His German-born wife also earns a government salary working as his secretary.
But Farage is a gifted salesman whose breezy style and ear for a sound bite would not be out of place on American talk radio — or in the local pub, where he often campaigns, pint in hand. That sort of everyman quality places him in marked contrast with the stiff and remote Oxbridge-educated politicians who dominate the Labor and Conservative parties.
And it serves him particularly well when talking about immigration, an issue that polarizes the British electorate like few others.
In cosmopolitan London, immigration is widely seen as a virtuous driver of economic growth and cultural vitality. But here in rural eastern England — where jobs working the fields have been a magnet for Lithuanians, Portuguese and others from across Europe — immigrants are seen as a drain on public services and as competition for housing and employment.
“There are two completely different Britains. There’s London, and there’s the rest of Britain. Attitudes are very different,” Farage said in an interview before taking the stage for the UKIP rally here. “Nobody in this country has voted for 4 million immigrants to come here in the last 15 years, and for probably another 3 million to come between now and 2020. There’s unrecognizable change happening in our country. The life prospects and job prospects, particularly of working-class people, have been severely dented.”
Farage’s solution is for Britain to exit the European Union, a body that by law allows citizens of all 28 members to move freely across the bloc. In the past decade, the expansion of the E.U. into eastern Europe and the economic crisis that has roiled southern Europe have made Britain an especially attractive destination, and millions have made the journey.
The influx began during the Labor government of Tony Blair and has continued unabated under Tory Prime Minister David Cameron, despite pledges by Cameron to sharply reduce the flow. Farage argues that until Britain leaves the E.U., no government will be able to claim control of the nation’s borders.
“What we’ve done is say to 485 million people, ‘You can all come, every one of you,’ ” Farage said, referencing the total population of the E.U. “ ‘You’re unemployed? You’ve got a criminal record? Please come. You’ve got 19 children? Please come.’ We’ve lost any sense of perspective on this.”
Getting valuable momentum
Critics accuse Farage of pandering to anti-foreigner sentiment, and the party has come under assault in recent weeks for overtly racist comments made by its members. In one case, a local council candidate tweeted that a black comedian should emigrate to “a black country.”
On Tuesday, a prominent young UKIP activist who is of Indian heritage abruptly resigned, saying the party had lost its way by blaming foreigners for the struggles of ordinary Britons.
“The direction in which the party is going is terrifying,” wrote Sanya-Jeet Thandi in an article for the Guardian newspaper. “UKIP has descended into a form of racist populism that I cannot bring myself to vote for.”
Farage has been forced repeatedly to deny that UKIP has a problem with racism. He has said that UKIP will not form a coalition with other far-right parties in Europe — several of which also are expected to do well in this month’s vote — because he disapproves of their policies and language on race.
The criticism has done little to dampen enthusiasm for the party, and indeed may be feeding it by confirming for many alienated Brits that the elites are out of touch.
Matthew Goodwin, author of a book that chronicles UKIP’s rise, “Revolt on the Right,” said that if the party finishes first in the European elections, the outcome “will amount to a complete rejection of the British political class.”
It also will give UKIP valuable momentum going into next year’s elections for the British Parliament — momentum that Farage already seemed to sense as his speech in St. Ives neared its finale.
“Are you prepared to join our people’s army that will topple the establishment on May 22?” he shouted to a packed auditorium.
The crowd of 700 backers clad in UKIP purple rose to its feet and roared its response.
|
File Formats

When importing uncompressed audio files into Audacity:
- Make a copy of the file before editing (safer) - Selecting this means that Audacity will take longer to import files, but it will always have its own copy of any audio you are using in a project. You can move, change, or throw away your files immediately after you open or import them into Audacity.
- Read directly from the original file (faster) - Selecting this means that Audacity depends on your original audio files being there, and only stores changes you make to these files. If you move, change, or throw away one of the files you imported into Audacity, your project may become unusable. However, because Audacity doesn't need to make copies of everything first, it can import files in much less time.
- Uncompressed Export Format - This lets you select the format that Audacity will use when you export uncompressed files. 11 common options are displayed in the list, but you can also select "Other" and choose a nonstandard file format for Audacity to export.
- Ogg Export Setup - Use this control to set the quality of Ogg Vorbis exporting. Ogg Vorbis is a compressed audio format similar to MP3, but free of patents and licensing fees. A normal quality Ogg Vorbis file is encoded with a quality setting of "5". Note that unlike MP3 encoding, Ogg Vorbis does not let you set a bitrate, because some audio clips are easier to compress than others. Increasing the quality will always increase the file size, however.
- MP3 Export Setup - Use these controls to locate your MP3 encoder and set the quality of MP3 encoding. Higher quality files take up more space, so you will need to find the level of quality you feel is the best compromise. For more information, see Exporting MP3 Files.
Spectrograms
You can view any audio track as a Spectrogram instead of a Waveform by selecting one of the Spectral views from the Track Pop-Down Menu. This dialog lets you adjust some of the settings for these spectrograms.
- FFT Size - The size of the Fast Fourier Transform (FFT) affects how much vertical (frequency) detail you see. Larger FFT sizes give you more bass resolution and less temporal (timing) resolution, and they are slower.
- Grayscale - Select this for gray spectrograms instead of colored ones.
- Maximum Frequency - Set this value anywhere from a couple of hundred hertz to half the sample rate (i.e. 22050 Hz if the sample rate is 44100 Hz). For some applications, such as speech recognition or pitch extraction, very high frequencies are not important (visually), so this allows you to hide these and only focus on the ones you care about.
Directories
Use this panel to set the location of Audacity's temporary directory (folder). Audacity uses this directory whenever you work on a project that you haven't saved as an Audacity Project (AUP file) yet. You have to restart Audacity (close and open it again) for changes to the temporary directory to take effect.

Interface
- Autoscroll while playing - Scrolls the window for you while playing, so that the playback cursor is always in the window. This can hurt playback performance on slower computers.
- Always allow pausing - Normally the Pause button is only enabled while you are playing or recording. Checking this box allows you to set the pause button anytime, which allows you to press Record and not have the recording start until you unpause it. Sometimes starting a paused recording can be faster than starting to record in the first place.
- Update spectrogram while playing - Because spectrograms are slower to draw, normally they are not drawn during playback, but this option lets you draw the spectrograms anyway.
- Enable Edit Toolbar - Sets whether or not you want to display the Edit Toolbar, which has some common shortcuts for editing commands.
- Enable Mixer Toolbar - Sets whether or not you want to display the Mixer Toolbar, which lets you set the volume levels and input source.
- Enable Meter Toolbar - Sets whether or not you want to display the Meter Toolbar for setting audio recording and playback levels.
- Quit Audacity upon closing last window - By default on Windows and X-Windows (but not Mac OS), Audacity quits when you close the last project window. If you uncheck this box, Audacity will open a new blank document instead of quitting. To quit Audacity in this case, you must specifically select Exit (or Quit) from the File menu.
- Enable dragging of left and right selection edges - Normally, when you move the mouse over the left and right edge of a selection, the cursor changes to a left or right pointer, and you can adjust that edge of the selection independently. If you don't like this feature, uncheck this box, and then clicking will always create a new selection (unless you hold down Shift to extend an existing selection).
- Language - Sets the language used by Audacity. Language files are named "audacity.mo" and are found in the "Languages" folder on Windows and Mac OS X, or in /usr/share/locale or /usr/local/share/locale on most Unix systems. Audacity will detect new languages the next time you start it.
Keyboard
This panel lets you change keyboard shortcuts. All of the commands that appear in Audacity menus appear on the left, along with a few other buttons that can get keyboard shortcuts. To change a command, first click on the command you want to change. Then type the new keyboard shortcut on your keyboard. Verify that the correct shortcut appears in the box below. If it's what you want, press the Set button. Or to get rid of a keyboard shortcut, press Clear.

To reset to Audacity's defaults, press the Defaults button. This will get rid of any changes you have made.

If you have customized your keyboard layout and want to share it with someone else, you can press Save... and save your complete keyboard layout as an XML file that you can share. To load an existing layout, press the Load... button and locate the XML file.

Mouse
This panel doesn't let you change anything, but it lets you view all of the commands and actions that you can do using the mouse, many by holding down extra modifier keys.
|
Posts Tagged ‘Treasury of Tennessee Treats’
I just realized that I haven’t posted on this blog in February. Luckily, I just made something that was definitely blogworthy so I will squeak in a February post.
Wednesday evening I spoke to a group in Alexandria, Virginia, about my book Pulling Taffy. The fun, interested and interesting crowd included one of my college dorm mates, Jo-Ann McNally (as gorgeous and peppy as ever); a man who had known and loved my darling honorary godmother Dagny Johnson; the wonderful Joan Sutton, my mother’s geriatric adviser; and a number of people who had lived through dementia care themselves. I had a wonderful time and came home with a gift from my hosts as well as money from book sales. (I love money!)
Family members also came—and I wanted to have something easy yet tasty on hand to serve them after the program. It was snowing that morning, and I really didn’t feel like taking the Tinkymobile to the grocery store to purchase any exotic ingredients. Fortunately, I thought of Keith Brownies.
This brownie recipe may be found in a book called Treasury of Tennessee Treats, published by the Keith Memorial Church in Athens, Tennessee, home of my college roommate Kelly Boyd. I wish I had a photo of Kelly and me at Mount Holyoke to show you, but all of those photos are in another state. Picture two long-haired, short, slightly plump, astronomy-and-film-loving young girls with big smiles, and you won’t be far off.
Kelly and I made these brownies back in the day—and a couple of years ago when I asked her for the recipe she sent me her late Aunt Lucile’s copy of the cookbook. Lucile Mitchell made the first and the best cream candy I ever tasted, and I am honored to have her cookbook in my collection.
In addition to the brownies and many other dishes, the Keith Cookbook features one of those charming, sentimental “recipes” for a good life favored by community-cookbook committees in generations past. (The copy I have, the book’s second edition, was published in 1962.) I’m sure the ladies wouldn’t mind my reprinting it. Its message is sappy but inspiring.
To tell you the truth, the brownies didn’t QUITE live up to my memory of them. (It’s very hard for anything to live up to a memory.) They were still extremely tasty, however—somewhere between fudgy and cakey in consistency—and no one seemed to have any trouble eating them!
Best of all, they took no time at all to make and used ingredients I ALWAYS have in the house. I will definitely keep them in my repertoire. I hope you enjoy them, too.
|
Closure and relocation of Stockton and family law courthouses
Effective July 28, 2017 at 12:00 p.m. (noon), the Stockton Courthouse located at 222 E. Weber Ave. and the Family Law Courthouse located at 540 E. Main St. will be permanently closed. All services currently provided to the public by the San Joaquin Superior Court at the Stockton and Family Law Courthouses will be relocated to the soon to be completed new Stockton Courthouse at 180 E. Weber Ave. in Stockton. Public services will resume at the new Stockton Courthouse on July 31, 2017 at 1:00 p.m.
|
Money Managers: How to Teach Your Children About Money
Learning about money is one of those “It’s never too early” moments when it comes to your kids. In fact, it’s better to teach them about money, how to manage it and budgeting early so it’s ingrained by the time they’re adults. This way they’ll make responsible financial decisions down the road.
Provide an Allowance: When it comes to money, sometimes kids really do think it grows on trees. By providing an allowance and letting them pay for things, they really start to learn what the true value of a dollar is. It doesn’t really matter how much the allowance is but it should be enough for them to be able to save it over time and buy something substantial.
Saving Money: A piggy bank is an excellent way to teach young children about the value of saving money. They will learn that when you save money, you can get something you really want or save it for something you really need rather than spending it all at once.
Developing a Budget: When teaching your kids about budgeting, make sure you include both long term and short term goals (e.g. money for video games, money for investing and saving, money for clothes etc.).
Keep a Check: Just as adults have a checkbook to keep track of money going out, you can also create a ledger for your kids so they too can learn about balancing money they have versus money they spend. It’s a great way to teach the value of a dollar, teach them responsibility and let them see just how they spend their own money every month. These ledgers can be made by hand or printed out online. Every single time they buy something new or put more money into their piggy bank, the information can be added to the ledger.
Robin Williams is an Executive at CashOne, a leading provider of online payday loans and instant payday loans. Serving the entire United States, CashOne is a preferred partner to help people get through their short-term financial crunches through fast approval and simple terms and conditions.
|
Dr. Joanna Slusky’s Gift Guide
What are the best gifts of all? The ones that come with a meaning and a sense of care that will be remembered far beyond the holiday season. Here are some top gifts of SIGHT, STYLE and SAFER BEAUTY from Chicago’s Favorite Optometrist...
1. NISANTASI SUN & 2. HAMPSTEAD SUN
Designed with 100% natural and malleable acetate material made from cotton, a flex hinge that adapts perfectly to the face, and mineral crystal lenses that provide polarized HD clarity with maximum UV protection. With a hint of gold or the classic black these are perfect for your holiday ensemble. $265
Flaunt festive lips all season long with a lipstick collection of five universally flattering satin hues and fresh peppermint flavor. Perfect to give or get. $88
5. SMOKY EYE TRIO
Get party-perfect for your holiday events! This set is made with safer ingredients and includes a Volumizing Mascara, precise Liquid Eyeliner in black, and a new Silk Cream Eyeshadow that glides on seamlessly. $38
|
Providing free, reliable birth control to women could prevent between 41 percent and 71 percent of abortions in the United States, new research finds.
In a study published today (Oct. 4) in the journal Obstetrics and Gynecology, researchers provided free methods of reversible, reliable contraception to more than 9,000 teens and women in the St. Louis area. They found that the program reduced the abortion rate among these women by 62 percent to 78 percent.
"The impact of providing no-cost birth control was far greater than we expected in terms of unintended pregnancies," lead author Jeff Peipert, a professor of obstetrics and gynecology at the Washington University School of Medicine, said in a statement. "We think improving access to birth control, particularly IUDs [intrauterine devices] and [hormone] implants, coupled with education on the most effective methods, has the potential to significantly decrease the number of unintended pregnancies and abortions in this country."
The findings have implications for public policy, especially given that President Obama's health-care plan requires employers to offer plans that include birth control coverage. This requirement has been a point of controversy in the lead-up to the 2012 election.
Between 2006 and 2008, 49 percent of all pregnancies in America were unplanned, according to the CDC's National Survey of Family Growth. About 43 percent of these unintended pregnancies ended in abortion. Meanwhile, a 2011 study in the journal Contraception estimated that unintended births cost U.S. taxpayers about $11 billion a year.
To see if access to free contraception could budge those numbers, Peipert and his colleagues recruited 9,256 women ages 14 to 45 living in the St. Louis area through flyers, doctors and word-of-mouth. They also recruited patients from the city's two abortion clinics. Participants were given the option of using any reversible birth control method, from the birth control pill to a hormonal birth control patch to a long-lasting IUD or hormonal implant. [7 Surprising Facts About the Pill]
More than half of the women chose IUDs, 17 percent picked hormonal implants (tiny rods placed under the skin that release hormones), and the rest chose pills, patches and other hormonal methods. As a result, the researchers found, both teen births and overall abortion rates plummeted.
Among women in the free contraceptive program, the teen birth rate was 6.3 per 1,000 women, a huge difference from the national teen birth rate of 34.3 per 1,000 women.
Likewise, the abortion rate among women in the program was 4.4 to 7.5 per 1,000 between 2008 and 2010. Nationally, there are 19.6 abortions per every thousand women, a 62 percent to 78 percent difference. In the St. Louis area, the overall abortion rate in that time frame was between 13.4 and 17 abortions per 1,000 women.
The study highlights the importance of long-acting contraception methods such as the IUD, researchers said. Birth control pills have a higher failure rate than these methods, because women have to remember to take a pill at the same time every day. But IUDs, which last about 10 years, can cost more than $800, the researchers said, putting them out of reach for many lower-income women who may not be able to come up with that kind of money in one lump sum.
"Unintended pregnancy remains a major health problem in the United States, with higher proportions among teenagers and women with less education and lower economic status," Peipert said. "The results of this study demonstrate that we can reduce the rate of unintended pregnancy and this is key to reducing abortions in this country."
Stephanie Pappas
Stephanie Pappas is a contributing writer for Live Science. She covers the world of human and animal behavior, as well as paleontology and other science topics. Stephanie has a Bachelor of Arts in psychology from the University of South Carolina and a graduate certificate in science communication from the University of California, Santa Cruz. She has ducked under a glacier in Switzerland and poked hot lava with a stick in Hawaii. Stephanie hails from East Tennessee, the global center for salamander diversity.
|
Welcome, friend! Are you looking to kickstart your prayer life? Be sure to check out the Teach Me to Pray Journal, in addition to the tips in this post.
Knowing how to pray is one of those things you think should come naturally, right? Especially if you’ve been a Christian for a while?
But if you’re like thousands of other people who’ve landed on this post, I bet you’ve run into the same problem: you can’t focus. You don’t know what to say or even where to start sometimes. It just doesn’t come naturally for everyone.
I’ve been praying in some capacity since I was a teen. I remember curling up in my bed at night when my life felt confusing, asking a big mysterious God for guidance and strength. I drifted in and out of a youth group and felt guilty for my lack of commitment, although I wasn’t even sure what “commitment” to him should look like. Finally I promised this God that I would go to church when I went to college.
I kept that promise and my life turned upside down. After deciding to follow Jesus I never looked back. Early on I learned that if I was going to keep this up, prayer was going to be an essential part of my life.
But here’s the thing. I suck at praying.
I say that a little tongue in cheek because at least I’m trying, and I’m pretty sure God listens to whatever jumbled mess of thoughts I throw his way. But let me give an example of what my mornings can look like…
My alarm goes off, I grumble, hit snooze a couple of times.
I finally turn on my phone and start rifling through emails and notifications, to wake up my brain. I start thinking about my day.
Depending on the day, I either read some of the Bible or go work out. Or put it off and sleep more.
At some point I turn to God like I know I should (and want to):
“Good morning God, thank you for the beautiful sunrise, thank you for guiding our family…oh I wonder how Jonathan’s cough is this morning. I’ll need to give him his medicine, but first I’ll need to make breakfast…oh no, I hope we’re not out of bread…oh sorry God, I mean, uh, please help Jonathan feel better…is he well enough to go to the library? Ugh, I really need to deposit those checks on the way home, Marc needs to sign them before he leaves…oh hi God, sorry, ummmm where was I? Ugh, I’m so tired, can’t focus…”
I could blame the stage of life I’m in, but the truth is I have always struggled with this. I’m a Type A, always planning, always ten steps ahead of where I’m at.
I have difficulty being in the moment.
When I pray, I just can’t focus.
Some people naturally pour their hearts out to God every time they turn to him. Others have to learn it through practice and habit. Guess which category I fall into.
I’ve made the mistake of assuming that knowing how to pray is a skill that everyone should know automatically, but it doesn’t quite work that way. As someone constantly grappling with grace I know that my prayer life is not something to be ashamed of, but it reflects a weakness in character that needs strengthening.
Over the years I’ve learned that there are a lot of ways to connect with God in a meaningful way. I may not be the 21st century Psalmist, but I can pray faithfully and powerfully even with my disjointed, distracted train of thought. If you relate to this, I hope you don’t feel guilty. Just know that God is listening, no matter what you try. And if you feel like you don’t know how to pray, don’t worry. It’s never too late to learn.
How To Pray When You Just Can’t Focus
Here are some simple tips about how to pray and connect with God if you struggle with consistency and focus. Also be sure check out the free journal I developed as a result of the popularity of this post, which is a part of my free resource collection.
Pray out loud
Yes, even when you’re by yourself. Or not by yourself. When I was a college student I would pretend I was on my phone while I prayed during my walk to class so people wouldn’t think I was crazy! I think of the story of Daniel. A Babylonian law forbade anyone to pray to any god but the king, yet Daniel continued to pray visibly and loudly enough to get arrested. Why didn’t he whisper or do it in his head? (See Daniel 6.) I find that when I make my thoughts verbal, they’re less likely to trail off.
Sing hymns
No need to come up with eloquent words when they’ve already been written. Don’t turn on music and zone out; say the words and mean them. I’m a fan of old-timey hymns with rich lyrics as opposed to saying “hallelujah” over and over. Get a songbook/hymnal, or print out some lyrics and try it!
Sing and make music from your heart to the Lord —Ephesians 5:19
Start with the Lord’s Prayer
Sometimes there is great value in ritual. It can keep us on target. Jesus’ disciples were with him constantly and must have seen his relationship with his Father, and yet they still asked, “how do we pray?” Jesus laid a foundation in Matthew 6:9–13:
This, then, is how you should pray: ’Our Father in heaven, hallowed be your name, your kingdom come, your will be done, on earth as it is in heaven. Give us today our daily bread. And forgive us our debts, as we also have forgiven our debtors. And lead us not into temptation, but deliver us from the evil one.’
When I don’t know what to say, I know I can’t go wrong with honoring God, asking that his will be done; asking for what I need, for forgiveness and for help through my weaknesses.
Imitate great prayers in the Bible
Just as Jesus set an example in prayer, so did many other people in the Bible. Pick one. I love Hannah’s prayer in 1 Samuel 2:1–10 for starters.
Fast
It’s Lent as I write this so fasting is on a lot of people’s minds; however, it doesn’t have to be a special occasion to get your heart and mind spiritually focused. I know that whenever I have practiced a traditional fast by giving up food, the hunger is a constant, humbling reminder that my strength comes from God alone. That helps me focus.
Pray continually
This tip is for you, parents—yeah you who don’t have more than five quiet minutes to yourselves ever. You’re probably already aware of this, but praying doesn’t have to be a formal event. God’s listening all the time. Say a quick thanks, a shout out for your friend, praise, or a request for help whenever you think of it. Nehemiah did it constantly, if you want inspiration.
Rejoice always, pray continually, give thanks in all circumstances; for this is God’s will for you in Christ Jesus. —1 Thessalonians 5:16–18
Pray with your kids
This seems obvious, but is it? I regularly forget to even pray at meals. Yet I nonetheless try to have regular times throughout the day. This practice is not only for them—it helps me too. We pray when we load up the car (usually because this is stressful and I need the reset button). We pray at the beginning of our school day. We pray whenever someone is scared or sick or is being disciplined. We pray when kids throw fits and we pray when Mommy throws fits. We pray when something awesome happens and we want to thank God. We pray as a family at bedtime. These are quick and may or may not always be super heartfelt, but you know what? They add up. And what’s more, you’re teaching them how to pray!
Pray with your spouse
Sometimes this overlaps with the praying with the kids thing, but we try to spend at least some time in prayer together daily. Want to try something super humbling? Stop to pray in the middle of a fight.
Go for a prayer walk or drive
I did this a lot more before I had kids and when they were stroller size. Just thinking about it makes me want to get back in the habit. There is something about being outside that clears the mind. When you’re out walking you’re less likely to be distracted with your to-do list. Plus, there’s nature.
The heavens declare the glory of God; the skies proclaim the work of his hands. —Psalm 19:1
Pray the Psalms
I remember the first time I read through the book of Psalms; I was completely bored. Yeah, really. At that time I was trying to absorb the knowledge that the Bible offered, and I was falling asleep getting through this very long collection of poetry. But years later, I love the Psalms because they have guided me in prayer on so many occasions. You know those times when your mind is a flurry and you’re either sobbing uncontrollably or you’re so numb that you can’t even do that? You want to pray but you don’t even know where to begin? Pray a Psalm. A couple of my favorites include Psalm 23 and Psalm 63.
Friends, I don’t call myself a prayer warrior, but even as I write this I feel empowered because I know that in spite of my weaknesses, I do speak and God hears my prayers. If you have a hard time focusing, I hope this list empowers you too.
If you found these tips helpful, check out the free prayer journal in my free resource collection, which spends a week going through the teachings of Jesus. You’ll also receive a free email course about how to make prayer a daily habit:
Plus, check out my Facebook Live video on how I pray when I’m a hot mess:
Do you know how to pray when you’re having trouble focusing? What do you do?
Disclosure: this post may contain affiliate links, which won’t change your price but will share some commission. See here for more information.
The Proverbs 31 Woman—10 Myths Explained
We’re all living in the shadow of that infamous icon, “The Proverbs 31 Woman,” whose life is so busy I wonder, when does she have time for friendships, for taking walks, or reading good books? Her light never goes out at night? When does she have sex? Somehow she has sanctified the shame most women live under, biblical proof that yet again we don’t measure up. Is that supposed to be godly—that sense that you are a failure as a woman?
Oh, the “Proverbs 31 woman.” I have a bit of a fascination with her, and so do others. Ministries and businesses are named after her.
Some women love her. Many aspire to be her. Others feel guilt and worthlessness; some despise her.
Do we really understand her?
Captivating (quoted above), by John and Stasi Eldredge, is all about women living to their full potential, the way God created them to be. It’s worth reading if you want to explore the heart of biblical femininity. The quote pretty much slams the near-goddess status of Mrs. 31. It says, “she has sanctified the shame.”
But has she? Or have we?
I don’t think the Eldredges’ intent was to pick apart the woman herself (this is in the Bible, after all), but rather to pick apart the common way she is perceived. And I agree that this idea of a woman who is too good for the rest of us to imitate is a false understanding of biblical womanhood.
The more I’ve dug into Proverbs 31, I’ve discovered that I was missing much of what it communicates. It is difficult to understand at face value, which is why I think so many of us have a knee-jerk response when we read it.
Want to understand this passage better, and ditch the guilt and shame? Throw out some of the following myths.
Want to do some more in-depth study? Be sure to check out this 7-day devotional on the Proverbs 31 woman called Woman of Strength, which you can find in my free resource collection.
1. Proverbs 31 was written for young wives.
Who do you think this passage was originally written for? You might be in for a shock. It was written for a young man. It is not an instruction manual for wives; its purpose is to provide a young man a vision for what he should look for in a wife.
It makes a lot more sense when you view Proverbs 31 as what it was intended to be—an epilogue, a conclusion—rather than a stand-alone passage. Proverbs is a collection of sayings that contrast wisdom with folly; many of them are written as warnings to young men to let wisdom and godliness guide them instead of their lusts.
That’s not to say that there aren’t applications for women; why shouldn’t we aspire to the godly characteristics displayed here? But the purpose of the passage isn’t to provide an impossible standard; it’s to provide inspiration for the possibilities.
2. She was a real person (wasn’t she?).
Wouldn’t we like to know whether the person portrayed here was a flesh-and-blood being? I think it’s possible that someone, or more than one person, was the real-life inspiration for these words—otherwise, why would you instruct a man to marry an ideal woman who couldn’t exist? But on the other hand, the Proverbs 31 woman is the fourth somewhat allegorical female personage in the book of Proverbs (following Wisdom, Folly and the Adulteress). So there may have been some creative liberties in describing her, even if she was real.
If you’re wrestling with this question of whether or not she was real, my question to you is, does it matter? Sometimes the Bible uses real people to communicate truth and sometimes it uses symbols and parables. That doesn’t change the core principles of the message being communicated, which we’ll explore.
3. She has always had her act together.
When I was a newlywed I barely knew how cook. But I wanted to be great at it because I knew it would make my husband happy. One time I made blueberry muffins—my husband’s favorite—and left out the baking powder. Ewwww. Rookie mistake. I was devastated and he was bewildered as to why his wife was crying over something so trivial. Fortunately, I rarely (not never!) make that same mistake anymore.
Being a competent wife/mother/homemaker/whatever takes time. It doesn’t matter how hard you try or even how naturally talented you are; you will only learn through experience. Proverbs 31 is not a snapshot of a newlywed. This woman has been married long enough to have multiple children and run a couple of side businesses while skillfully managing her household. In other words, she’s older.
Remember, if this was written to a young man, he probably wasn’t going to go out looking for a forty-year-old with lots of life experience. He was going to be looking for someone young who had the potential to grow into that mature, godly wife. Keep that in mind as we continue.
4. She rarely sleeps.
How in the world can someone get up while it is still dark to make everyone breakfast (v. 15), and yet “her lamp does not go out at night” (v. 18)??? Certainly every mom has some sleepless nights like that, but the passage seems to imply an ongoing state.
This is where cultural insight is useful. How would you get around at night if you didn’t have electricity? An oil lamp would come in handy if you had to use the latrine. It’s common in societies without electricity to leave a lamp burning at night, even while you sleep, so people don’t have to fumble around in complete blackness.
So why is it significant that her lamp does not go out? Consider verse 18 in its entirety: “She sees that her trading is profitable, and her lamp does not go out at night.” Her side hustles are making enough money to keep everyone comfortable! She is also wise enough to ration the oil so it doesn’t run out (see the Parable of the Ten Virgins in Matthew 25).
5. Every wife and mother should aspire to live like her.
Consider this: the family in Proverbs 31 is making enough money to employ servants and have nice clothes. This is in part because of Mrs. 31’s side businesses, but it’s also evident that her husband is a man of standing in the community, hanging out with the elders (v. 23). She is also in excellent health as far as we can tell.
Not everyone is blessed in these ways—with wealth, health and privilege. So would this passage not be relevant for them (most women in the world, actually)? Are you any less godly if you’re poor or in bad health? Of course not.
You have to look past the specifics and more at the principles.
6. She does everything.
Does she really “do it all”? Nope. This lady has servants.
I used to think, “Well, if I had servants, maybe I could do all of that too.” Granted, many of us in Western society have “servants” in the form of modern appliances, but if you’re dwelling on these thoughts then you’re missing the point of the passage. Most people in the world do not have servants, yet this passage is relevant for them. Again, you have to look past the specifics and more at the principles.
7. She never stops to rest.
She works 24 hours a day, seven days a week, right? Wrong!
How do I know? I’m fairly confident that someone upheld as a God-fearing woman in the Old Testament would observe the Sabbath. This didn’t mean just going to church on Sunday; this meant complete rest. No cooking. No cleaning. No gardening. No sewing. You couldn’t even walk more than half a mile! You had to sit and enjoy being with your family all day. Just imagine.
Have you ever tried observing a Sabbath like that ever, let alone regularly? While I don’t believe Christians are required to observe this old law, we could probably learn a thing or two about chilling out from Mrs. 31.
8. Proverbs 31 has no relevance for single women.
As a single woman more than a few years back, I was completely befuddled by this passage and assumed it wasn’t for me anyway. So I essentially ignored it. That was my loss.
Is this passage only for wives or only for women who want to be married? Absolutely not! Let’s start talking more about the principles of this passage, which are about character, not deeds. More specifically, “noble character.”
The only other time this phrase appears in the Bible outside of Proverbs is in describing Ruth (Ruth 3:11). Who was Ruth? It’s a quick read in the Old Testament, so I’d recommend checking it out if you never have. She was a real person, she was single and she was dirt poor. She wasn’t trying to catch a husband; she was doing whatever she could to survive.
Hardly the same circumstances as Mrs. 31. And yet these women have identical character qualities. Single or married, we should focus on those.
9. This passage is irrelevant to the modern woman.
I hope it’s evident by now that it shouldn’t matter whether you’re rich or poor, married or single, young or old, living in modern America or in a hut in the jungle; when you’re focusing on the principles of the passage and not the circumstances, there is much to take away. These timeless, cultureless principles include love, generosity, work ethic and faithfulness.
10. We should wonder, “How does she do it?”
One question that might hinder you when you consider Mrs. 31 is, “How does she do it?” But unless you enjoy feeling insecure about your talents and stamina, that’s the worst kind of question you can ask.
We should be asking, “Who is she?” Not her name or her place in history, but what is the essence of her being? That question is the right one when we’re trying to determine the core principles of the passage. And the answer is quite simple. She is a woman who loves God (v. 30). Her character, her wisdom, her self-discipline—all of these qualities flow from this fact.
I hope you’ve found these insights useful as you figure out who you are. Whether you’ve loved her or hated her, I also hope and pray that you can find the incredible inspiration at the core of Proverbs 31.
Want to learn more about what it means to be a woman of strength like Mrs. 31? Check out the Woman of Strength devotional, one of the many resources you’ll find in my free resource collection.
What do you think about the Proverbs 31 woman?
Do you wrestle with mommy guilt—that nagging feeling that you’re not doing enough or that you’re screwing up? Then you’re in the right place! Be sure to grab the printable download of these Bible verses in my free resource collection.
I screwed up again. My eight-year-old’s eyes spilled over in tears and he turned away, a little embarrassed. “We just did what you wanted to do, not what I wanted,” he confessed. “It wasn’t fun.”
Mixed feelings of guilt, shame and anger heated my face. I had taken him to the mall on a special outing with just the two of us to get him some new shoes and a special snack. It didn’t turn out as planned because our local mall is shutting down stores like crazy. The shoe choices and the snack options were very limited.
I was hurt that he wasn’t more thankful, yet at the same time I felt horrible that I had let him down. My efforts weren’t enough.
Mommy guilt.
It’s this condition I’ve battled with ever since I heard his first cry. This impossible question lingers constantly: Am I doing enough?
I know I’m not the only one with this issue, so I asked members of our Facebook group what made them feel guilty. The answers weren’t surprising.
Banish Mommy Guilt with Scriptural Truth
It’s sneaky because in one sense, it holds just a kernel of truth: we fall short of perfection. So in that sense, no we’re not “enough” and never will be. But on the other hand, because Christ does immeasurably more than we could ask or imagine, we are more than enough through all of our weaknesses.
So the next time you are wallowing in guilt, whether you legitimately screwed up or are worried that you did, meditate on these truths and put their teachings into practice.
On Confession
I’ll be honest: I did not plan on having this section when I started thinking about this topic! But the more I dug into the word “guilt” in the Bible, the more apparent it became that the first step to healing is confession.
So be honest with yourself, with God and with other believers about those things that are weighing on your heart.
James 5:16
Therefore confess your sins to each other and pray for each other so that you may be healed. The prayer of a righteous person is powerful and effective.
1 John 1:9
If we confess our sins, he is faithful and just and will forgive us our sins and purify us from all unrighteousness.
Proverbs 28:13
Whoever conceals their sins does not prosper, but the one who confesses and renounces them finds mercy.
On Being Enough
That feeling of “not enough”? It’s a lie because Christ is more than enough. His love for you and intervention on your behalf give you a brand new start each and every time you feel “less than,” with your parenting and so much more.
Colossians 2:13–14
When you were dead in your sins and in the uncircumcision of your flesh, God made you alive with Christ. He forgave us all our sins, having canceled the charge of our legal indebtedness, which stood against us and condemned us; he has taken it away, nailing it to the cross.
1 John 2:1b
But if anybody does sin, we have an advocate with the Father—Jesus Christ, the Righteous One.
Romans 5:1
Therefore, since we have been justified through faith, we have peace with God through our Lord Jesus Christ…
Hebrews 9:14
How much more, then, will the blood of Christ, who through the eternal Spirit offered himself unblemished to God, cleanse our consciences from acts that lead to death, so that we may serve the living God!
On Moving Forward
I think this is the hardest part for my guilty conscience to accept. I know that I am forgiven and cleansed through Christ, but the truth is I still fall ridiculously short when it comes to loving my kids.
These verses help me remember to live in faith instead of fear, anxiety and guilt. God’s grace is sufficient; he fills in the gaps where I fall short. And because of this, my heart can be at peace.
1 John 4:18
There is no fear in love. But perfect love drives out fear, because fear has to do with punishment. The one who fears is not made perfect in love.
2 Corinthians 12:9
But he said to me, “My grace is sufficient for you, for my power is made perfect in weakness.” Therefore I will boast all the more gladly about my weaknesses, so that Christ’s power may rest on me.
Hebrews 10:22
[L]et us draw near to God with a sincere heart and with the full assurance that faith brings, having our hearts sprinkled to cleanse us from a guilty conscience and having our bodies washed with pure water.
Philippians 4:6–7
Do not be anxious about anything, but in every situation, by prayer and petition, with thanksgiving, present your requests to God. And the peace of God, which transcends all understanding, will guard your hearts and your minds in Christ Jesus.
Want to keep these Bible verses handy? You can access them and other Bible verses for moms! Just click below:
Leave a comment: when do you struggle with “mommy guilt”?
This is a book review for When God Says “Go” by Elizabeth Laing Thompson, which is all about rising to challenge and change without losing your confidence, your courage or your cool. Read on to learn how you can win a signed copy!
Disclosure: this post may contain affiliate links, which won’t change your price but will share some commission. See here for more information.
My hand shook as I held my cup of water. Was my heart racing too? It was late afternoon and I wondered if perhaps I’d had a little too much caffeine.
I couldn’t focus. I was meeting my friend at a local coffee shop to talk about her, but all I could think about was myself. I was worried, replaying conversations in my head and calculating worst-case scenarios.
She sat down with her salad and looked up at me. I realized at that moment that this conversation wasn’t going to go at all the way I had planned.
It wasn’t the caffeine. It was anxiety, which was starting to boil over into full panic.
Tears filled my eyes as I decided to forget agendas and just be real. Big things were unfolding in my life. They were good things. But, they were big, scary things like moving forward in an international adoption and making weighty decisions in my writing career. I was scared.
I wish I could say that this was months ago and that I have figured out how to deal with my fears. But this was just a few days ago.
I’m still scared.
Sometimes, especially when your days are full of diaper changes, wiping noses, schoolwork help and dishes, the speed of life can feel like a crawl. But at other times you find yourself on a roller coaster, or about to get on one, and there will be a point when it will be too late to get off.
This is what it feels like when God says, “Go.”
It just so happened that the rumblings in my life started happening right when I received this book in the mail:
When God Says “Go”
My friend Elizabeth is a funny, passionate storyteller who brilliantly looks into the Bible and brings to life what it was like for everyday people to interact with God. Through their imperfect, emotional and often comical reactions, he brought his ultimate plans to fruition.
The people we read about in the Bible are just as human as you and I. And the God who worked with them is the same one who works with all of us. Elizabeth relates their stories to everyone’s.
To be clear, a call from God today isn’t an audible command. Usually it’s a combination of circumstance, advice, biblical truth and gut feeling/Spirit prompting. (The book explains in further detail about how to clarify whether God is prompting you or whether you’re just having indigestion.)
Whether God is saying “go,” “stop,” “stay” or “proceed with caution” to you right now, Elizabeth’s deep digging into the biblical narrative has some thoughtful and encouraging insight. While there were many points that were helpful to me personally, three especially stood out.
1. It’s Not About You
I’ve noticed a trend when I hear people talk about “God’s call” for their lives (and I fall into this trap as well). We can focus a lot on what he wants for us, without spending much time thinking about what he wants, period. We can get sucked into obsessing about our own gifts and dreams and whether we’re living to our “full potential.” Those aren’t bad things to wonder about, but they’re really secondary to God’s ultimate purpose: to redeem and restore all of creation. He’ll do that with or without our help.
There were more than a couple of people in the Bible who objected when God came knocking: “But God, I’m too young/old/wounded/fearful/inadequate.” Elizabeth talks about how Moses was terrified to go back to Egypt and help free the Israelites. What’s interesting about God’s pep talk to him is that it wasn’t about how great Moses was. Instead, God said, “I will be with you.” And that was all that mattered.
As I take my big scary steps forward in the direction I think God is calling me, I already know that I will be inadequate for the task. However, this is irrelevant. I can be faithfully confident because of Jesus’ promise in Matthew 28:20 (“surely I am with you always”).
2. It’s Normal to Have a Range of Emotions and Responses
I know there are a couple of instances of ridiculously holy people in the Bible who responded to God’s call with enthusiasm and confidence. However, there are many who did not—in fact, I’d say most didn’t.
Some people had even had triumphs of faithfulness in the past, but because of the trials and pain of life, they faltered later. Mary, Jesus’ own mother who had humbly accepted her role to give birth to the savior, later doubted and questioned Jesus’ ministry—after all she had seen in her miracle baby! The apostle Peter as well had some pretty epic stumbling, even after having walked with Jesus and given up everything to follow him.
I’ve had times in my life where I didn’t question God’s will as much and cheerfully got on board, like when I had the opportunity to do mission work in Alaska right after I graduated college. At other times I’ve been anxious, doubtful or downright rebellious about the directions God has led me. These are all normal responses and, while we have to battle through them sometimes, our emotions won’t deter God’s plans. The good news is that while we might be a hot mess, he will remain consistent.
3. Sometimes the Only Way Out Is Through
This is a message I’ve been hearing over and over again recently. By nature I avoid conflict, vulnerability and painful circumstances. I’m pretty good at side-stepping problems. But as I’ve learned the hard way, sometimes the only way to get past your fears and problems is to work through them.
Elizabeth gives the example of the story of Esther, the queen of Persia (secretly Jewish), who was living pretty comfortably until she discovered that her own husband had agreed to annihilate her entire people. Her choices were to not act and watch her people die, or to act and possibly die herself without helping the situation.
The story, of course, ended with her courageously taking action, as terrifying as it was. And I think that’s where I am in my current situation. When there’s nowhere to go but forward, your only option is to grow.
We cannot run from these situations. Some are griefs that feel past bearing, past surviving—and yet we must bear them. The only way out is through. And in situations like this, God is saying, “Grow.” He gives us no choice but to move forward. No choice but to change (When God Says “Go,” p. 101).
Is God saying “Go” to you in some way right now? Then I’d like to give you the chance to get a signed copy of Elizabeth Laing Thompson’s latest book!
Leave a comment: how is God saying “go” in your life, and how are you responding?
Having a daily quiet time can be one of the biggest struggles for busy moms (and others!). Whether you’ve been doing it for 20+ years or you’re trying to start a brand new habit, it can be really tough digging into the Bible and praying when you’ve got kid boogers all over your shirt, bags under your eyes, and barely more than two uninterrupted seconds on any given day.
Sometimes we get into a rut not just because we lack the time, but because we lack the focus and motivation to get started.
Even when we’re motivated, however, it can still feel a bit overwhelming or uninspiring: where to even begin?
That’s why I thought it would be fun to compile a bucket list of ways you can connect with God in your personal quiet times. I see lots of posts about date ideas with your spouse; why not have some date ideas with God?
Personally, I need to mix things up sometimes. When you have your whole life to get to know someone, whether he be your earthly husband or Jesus, the same thing day after day after day can get a little dull. While there’s certainly value in routine, my husband and I enjoy taking little adventures together. I want to be that way with God.
Too often I hear people speaking with guilt about how they just can’t get into this “read the Bible and pray” routine they think they should be following. If it’s not working for you, try something else; mix it up. A dynamic relationship with God does not have to fit into any particular box.
This list is for you whether you don’t know where to start or you need to rekindle your passion for God. It’s not a must-do list; these are simply ideas to help you get inspired.
Quiet Time Bucket List: 20 Ways to Build Intimacy with God
Disclosure: this post contains affiliate links. See here for more information.
Studying the Bible
Have you ever tried reading the Bible from the beginning, only to get stuck somewhere around Leviticus? You wouldn’t be the first! Here are some ways you can read the Bible during your quiet times that are more than just…reading the Bible.
Write the Word: It’s pretty simple. Instead of reading, why not try copying passages of scripture as you work through them? This is a good way to read the Bible and pray at the same time (gasp!). I made a little video about how I’ve been using this method in my own life:
The R.E.S.T. Method: Read. Engage. Savor. Take charge. Kaylene Yoder has a ten-day challenge to help you work through it.
Verse Mapping: If you’re scatterbrained and need things to be visual, check out this tip from Arabah Joy. You take one passage and discover new insights by drawing it out. This is a great option for us non-artistic types because it doesn’t have to be pretty!
The Color Method: Color code verses as you read them to help you visualize the message. Check out this guide to help you get started, or invent your own system.
Study Guides: Personally I think there’s a big difference between a fluffy devotional that has sprinklings of biblical teachings and an in-depth guide that helps you dig much deeper. She Reads Truth has some excellent, engaging studies.
Reading Plans: I’m a simple kind of reader, and a simple plan is helpful for me. I’ve gone through several annual Bible reading plans, which only take a few minutes a day. I recommend trying different translations and methods. You can go straight through or use a plan that mixes it up so you’re not camped out in the, uh, less interesting parts for weeks and weeks. I’ve used The Bible App.
Study Buddy or Buddies: One of the most influential experiences in my personal faith was getting together once a week with two other women who knew the Bible better than me so they could teach me what it was all about. Maybe you don’t have someone like that in your life right now, but I’d encourage you to pray about it and just ask someone! I doubt they would say no. If you can’t meet in person, talk over the phone or even email each other!
Take a Class: It’s a pretty simple concept really; if you want to learn about something, sign up for a class, silly! Arabah Joy offers 7 Days in 7 Ways in an online course that will really help you take your Bible study to the next level with some fresh strategies. I was pleasantly surprised with how much I learned!
Memorize Verses: When Deuteronomy 6 says to write the commands on your heart, I think this is what it means. Take some notecards and write down your favorite verses and flip through them regularly. You’ll have them down before you know it…and you might just start quoting them! Check out some of the mama verses in my free resource collection if you need ideas for what verses to use.
Digging Deeper in Prayer
I’ve said it before and I’ll say it again: I suck at praying. Well, at least I thought I did. But I’ve discovered that the cool thing about prayer is that there’s really no wrong way to do it! Here are a couple of tips to help you stay focused:
Fasting: I recently heard a sermon on the power of fasting and Oh. My. Goodness. I was challenged but inspired. You can fast as a way to humble yourself before God, to repent, to seek guidance, to ask for help, and—this is the kicker—to help you focus on God. There are a lot of ways to do it, but traditionally it makes sense to take at least a day to deprive your body of something it wants (like food, certain drinks, etc.). Try it! And try it again!
Prayer Journal: I’m obviously a big fan of this since I wrote the Teach Me To Pray guide. Prompting can help a lot. The Write the Word Journals from Lara Casey are also great resources when they’re available.
Prayer Buddy: strength in numbers, right? I learn so much when I hear the prayers of other people. Find someone you can pray with weekly—over the phone if needed!
Nature: Sometimes when I’m in a funk, I just need to drive outside of town and clear my head. Mountaintops are ideal, but if you live somewhere flat like I do, I’m sure you can nonetheless find inspiration in the beauty of creation.
Meditation: Not to go all woo-woo on you, but personally I find a lot of value in simply being quiet and listening for the Holy Spirit’s promptings. You can meditate on verses, or take in silence.
Keep a List: I like to write down all the prayer requests from my friends and family as well as the biggest items on my heart. Some places you can write it down include your planner, journal or even on a list on your phone. Bonus: since I have been in the habit, I can actually follow up with the people I’ve been praying for and ask them how it’s going!
“War Room”: First, if you haven’t seen the movie it’s definitely worth a watch. The idea is that you have a designated area of your house where you pray. In the movie they put their favorite scriptures and prayers up around the inside of a closet. Personally, I like somewhere with a little more natural light…but do what works for you.
Embracing Your Creativity
Are you a creative type? Then use your passions and talents to connect with God! Think outside the box when it comes to your quiet times.
Sing: Even if you’re not signing a record deal, this is a simple yet powerful way to worship. Grab a hymnal, listen to your favorite artist, start a choir…do what inspires you.
Compose: I know a couple of people with this skill and I’m super jealous. Whether you write poetry, lyrics, play an instrument, sing, or all of the above, can you think of some ways to use your talent that will encourage you and others?
Guided Devotional Art: If you need more guidance or something a little simpler than Bible journaling, there are a bunch of artistic devotionals available. The Scripture Doodle six-week devotional is a great way to get started.
Kids’ Resources: Seriously, there is some phenomenal kids’ material out there that I think is helpful for adults too, especially if you want to get back to basics. In our house we are obsessed with the What’s in the Bible? series, which we stream on JellyTelly. I also highly recommend the Jesus Storybook Bible, which ties together the whole Bible narrative in an engaging and simple way.
I know there are more ideas out there so now I turn it over to you: how do you connect with God? And what’s on your quiet time bucket list?
When you’re a mom of young kids, “overwhelmed” can feel like a state of being. Since we’re constantly meeting others’ needs without always filling up our own tanks, it doesn’t take much for us to snap in anger or pass out in complete exhaustion. Encouraging Bible verses, anyone? Yes, please!
The other day I was going through an old journal and came across an entry over three years old. At the time, I had a 4-year-old, a 2-year-old and an infant:
Dear God,
Today was hard. A lot of days are hard. And then I feel guilty for thinking they’re hard.
Because I know my life is good. Incredibly good. I wouldn’t change a thing.
And so I just keep wrestling with my thoughts, one day after another. I grin and bear it through all the poop messes and the tantrums and the moments when I cry because I can’t find my keys and I just want to run away to somewhere very quiet.
Lord, you know me better than I know myself. I try to cling to you in all my desperate moments, even though I feel like I can’t see straight.
It broke my heart when I read this again, even though I knew that I would obviously pull through and we would all be fine. That was a really hard year for me. I felt desperately lonely and may have even been dealing with some postpartum depression.
It was around that time that I started a very simple practice that kept me grounded in truth rather than the lies of inadequacy that were swirling around in my head. I took a handful of 3×5 notecards and wrote out my favorite encouraging Bible verses on them. I left them in a highly visible place on the countertop in the kitchen. Every time I was having a “mommy moment,” I would whip out those cards and start flipping through them until I found a few that anchored my soul just enough so that I could face the next chaotic moment without screaming.
It didn’t take long before I had most of them memorized. And they remain my go-to verses when I just need to get my head on straight.
10 Encouraging Bible Verses for the Overwhelmed Mama
I’ve compiled many of my favorites in previous posts, which now make up a popular series of encouraging Bible verse lists I call Mama Verses. It seemed only appropriate to add “overwhelmed” to the list!
Want a printable list of these verses for overwhelmed moms and more? You can find them in my free resource collection.
Finding Strength in God
In general, I get most overwhelmed when I’m leaning on my own ability and strength to get through a chaotic day. It never works. Here are some powerful yet encouraging Bible verses that remind me to rely on God for my strength.
Psalm 63:1
You, God, are my God, earnestly I seek you; I thirst for you, my whole being longs for you, in a dry and parched land where there is no water.
When I am feeling completely burnt out and overwhelmed, I have to remind myself that what I’m really thirsty for is the Lord. I need to do whatever I can to fill myself up with him.
Psalm 42:11
Why, my soul, are you downcast? Why so disturbed within me? Put your hope in God, for I will yet praise him, my Savior and my God.
Putting hope in God is the only true remedy to a downcast soul.
Acts 4:13
When they saw the courage of Peter and John and realized that they were unschooled, ordinary men, they were astonished and they took note that these men had been with Jesus.
This is becoming a new favorite of mine; I recommend reading the whole passage to understand the context better. We don’t have to be super moms in order to live courageous, meaningful lives; we just need to hang out with Jesus!
Trusting God
Believing in God and trusting in God are two related but separate things. I might know in my head that I need to rely on his strength, but if I’m not entrusting my burdens to him, I will only continue spinning my wheels.
Psalm 68:19
Praise be to the Lord, to God our Savior, who daily bears our burdens.
Read it: this says God is here for me daily. Do I believe it and turn to him?
Romans 8:28
And we know that in all things God works for the good of those who love him, who have been called according to his purpose.
No matter how hard it gets or how overwhelmed I feel, this truth reminds me that there is a bigger picture I might not see.
Waiting on God
I may rely on God and trust him, but sometimes I still have to wait. He never promises that life will be easy, and sometimes I have to be patient before I see answers to my prayers. These encouraging Bible verses remind me that waiting is not a bad thing.
Psalm 27:14
Wait for the Lord; be strong and take heart and wait for the Lord.
Be strong! Take heart! And wait. He’ll come through.
Psalm 37:7
Be still before the Lord and wait patiently for him; do not fret when people succeed in their ways, when they carry out their wicked schemes.
It’s easy to get worked up and fret over all the things. But here and in many other places, God says “be still.”
Galatians 6:9
Let us not become weary in doing good, for at the proper time we will reap a harvest if we do not give up.
That has to be one of the most encouraging verses in the whole Bible! Don’t give up!
Seeking Help
Don’t you sometimes wish Jesus would just appear in the flesh and give you direct advice (along with a big hug)? Well if that ever happens, I don’t think I need to be worrying about much of anything anymore 🙂 But in the meantime…we’re not alone. God puts people in our lives for a reason.
Galatians 6:2
Carry each other’s burdens, and in this way you will fulfill the law of Christ.
We mamas are carrying a lot of burdens. Other people are not only commanded to help you; they usually want to! We just have to be humble enough to ask: for babysitting, for help with meals, cleaning, you name it.
Ephesians 4:15–16
Instead, speaking the truth in love, we will grow to become in every respect the mature body of him who is the head, that is, Christ. From him the whole body, joined and held together by every supporting ligament, grows and builds itself up in love, as each part does its work.
We need each other. Someday, when your life isn’t so crazy chaotic, you’ll be able to pay it forward to some other overwhelmed mom.
These encouraging Bible verses have saved my life; I hope they are able to help you too.
By the way, you know that journal entry I mentioned at the beginning? It ended with this:
You are good, God. You love me, you help me, and that’s all I need to know. Thank you. Amen.
Want to keep these verses handy? You can now download them in a printable form! Just click below:
What helps you when you feel overwhelmed as a mom? Please leave a comment here or on social media.
It was a fall afternoon as I talked on the phone with my friend and mentor and I realized that I was going to have a big cry.
Sob-fests don’t scare me like they used to, and so I let the tears come. I sat in my feelings for about a week, observing, analyzing, praying.
And the diagnostic word began to surface: disconnected.
The conversation that had started my self-examination was about how I was feeling about some of my relationships in church. Disconnected. But then that feeling started to spill over into other areas of my life: work, parenting, finances. Disconnected. Much of how I had spent my time and energy in recent months was mechanical, and whenever I hit a hiccup, my reaction was to disengage, disconnect. I was more interested in checking boxes and crossing items off lists than I was in getting down on my knees, digging in, getting real and getting dirty.
It was humbling when I was honest with myself. I dropped a few more tears and came to peace with my weakness.
And then I decided to take action. We were heading into the holiday season and I knew that I wanted to shift my focus in the New Year (and even before).
I’ve never done a “word of the year” before with much success. Pick one word that is supposed to guide my life for a whole year? It has felt arbitrary, and honestly a bit contrived.
Maybe I’ll feel that way at the end of this year.
Nonetheless, I’ve learned a lot about goal-setting recently, and so at least for this year, a word that provides singular focus makes sense.
Connection. I put it up in my kitchen: my command center, the heart of my home, family and work space. I pass it multiple times a day, and it keeps me centered when I’m pouring milk, writing out my schedule or filing away receipts.
I think I’m onto something here with this word of the year business.
Disclosure: this post contains affiliate links. See here for more information.
5 Ways I’m Practicing Connection as “Word of the Year”
A word like this is inspirational on my kitchen wall, sure, but it’s empty without a way to live it out. Perhaps that’s why “word of the year” never really worked well for me in the past; I didn’t have any practical way to put it into practice day in and day out.
And while it would be ironically self-defeating if I watered down my word into a bunch of checklist items, I have spent many hours over the last several weeks mapping out exactly what practicing connection looks like. (I chose Cultivate What Matters PowerSheets this year to help me “make it happen.”)
1. Connection in My Home
I’ve actually done decently in this area, as I’ve worked steadily on decluttering and organizing over the past few years. But where I feel like I’d like to connect more is making my home…a home. A haven.
Three, maybe four years ago, we had the toilet and bathtub replaced in our main floor bathroom. We intended to finish remaking the whole room with new paint, laying tile that we already own, and replacing the vanity and the linens. Surely this is not a terribly difficult task.
It’s years later, my friends. Years. I have made zero progress. I felt overwhelmed and uninspired by the project so I disconnected myself from it. We never budgeted for it. I was hoping my husband might take the lead, but it’s just not a high priority to him.
If it’s gonna happen, I must set it in motion.
I’m setting small, attainable goals. I’ll start with a Pinterest board. I’ll make a budget. I’ll block out a Saturday to pick out some paint. And step-by-step, we’ll move forward.
And this is how I want to approach alllll of those little things in my home that just make it a little bit lovelier to be in. Connecting, step by step.
2. Connection in My Family
I homeschool my three kids. We have a good routine and rhythm in our home, which I love, but at the same time I can feel a little…checked out. And my kids can feel it.
This year it’s my goal to be intentional about each of my kids’ love languages (as well as my husband’s). I get one-on-one time with at least one of them each week. During that time I’m going to talk through with them what helps them feel loved, and then I’m going to do it! (My quality time kid will be thrilled.) My goal is to collect and record words, photos and mementos from our times together throughout the year. By Christmas, I’ll have a unique gift for each of them that commemorates how our relationship grew this year.
3. Connection in My Work
Last year was a huge year for me in my online business. I took some courses and worked hard to grow my audience, and I put together some digital products that I’m really proud of.
At the same time, by the end of the year I was feeling exhausted with it and, naturally, a little disconnected from my purpose. I’d gotten lost in numbers and productivity, which is the exact opposite of the message I want to communicate!
While I still have number goals, I’m much more interested in narrowing my focus this year and connecting with you. In my stress management course, Chaos to Calm (which I plan to open for enrollment in May), I’ll be doing more live and interactive coaching. In the Wiping Noses for Jesus is Legit Facebook Group, I’m going to be interacting more frequently and strategically to get to know you. As for the blog, I’m hoping to open up a bit more and share posts just like this one, where I worry less about the perfect title or presentation and just share my heart.
4. Connection in My Community
It’s hard to stay connected in friendships when you’re in this stage of life. It’s something I’ve continued to battle with, and I feel like I want to engage in my friendships in a deeper and more authentic way.
In addition to some changes in the small group I’m a part of at church, my personal prayer this year is to deepen three of my friendships. I know that’s a bit vague, but practically speaking what this looks like for me is choosing one person in my life whom I will pray for daily over the course of a week. I’m not sure where that will lead, but I’ll bet it will be pretty great.
Another goal is to bridge the gap between my online ministry and my “real-life” one. My hope is that by the fall I’ll be able to gather a group of women together in person to explore some of the topics I’m passionate about, particularly biblical stress management and rest.
5. Connection with God’s Word
A few years ago I started the practice of writing the Scriptures that spoke to me most on my hard days on notecards and putting them on my kitchen counter. After a time, most of them were memorized.
But then at some point I either misplaced or damaged the cards, and I fell out of the habit. I’m finding that I’m getting slower at recalling the verses I once knew so well. So this year I’m getting back into the habit, and my goal is to memorize one Bible verse a week. And this time I won’t throw out the cards!
There are a few other goals I’m working towards in business, finance, family and even having fun (like reading more fiction and learning to make sourdough!). I might reach them and I might not…but the point with my word of the year is to maintain the right perspective. No matter what I do, I am choosing connection over checklists and processes and perfection.
Have you picked a word of the year? I’d love to hear it and how you plan to live it out in the comments.
And if you need help learning how to map out some of your goals, be sure to check out this free resource, which is part of the free collection I offer for subscribers:
Hey friends! I’m super excited to share today’s post, which is packed full of info for how to be motivated based on your personality type (and how to motivate other people too). If you enjoy this and are looking for more resources to motivate and strengthen your faith, be sure to check out my whole collection of free resources.
As a coach and encourager, I think about motivation a lot. Whether I’m teaching an online course, leading a small group or even parenting my own kids, I frequently observe that some people follow through with expectations naturally…while others seem to rebel against the very thought of them! And everything in between.
I have a friend whose faith and maturity I admire tremendously. She was looking into ordering a daily prayer journal. While the journal was beautiful and she liked the idea, she nonetheless knew that once she had it in her hands, she would immediately resist using it.
Another friend loves learning about God and reads voraciously when she feels like it, but for the life of her can’t follow through with a daily Bible time, unless she’s in a study group.
Someone else I know is extremely disciplined about pursuing a hobby he cares about and will devote hours to filling out related spreadsheets. Yet a discipline of daily exercise? Not unless he finds a way he is convinced is right for him.
I personally can’t relate to any of these people.
There are other disciplines like eating healthy, family routines, keeping a clean house and so on. Some people seem to have no problem whatsoever keeping up. And some (most?) seem to incessantly struggle with at least one area.
While curious about why some people seem to be self-motivated and some aren’t, I’ve conceded that everyone has a different personality and complex reasons about how they’re motivated. And while that’s true to some extent…it’s not an entirely satisfying answer.
Is there an explanation for how people are motivated that’s actually practical and can help no matter what your natural bent is? There IS, according to the findings of Gretchen Rubin!
Disclosure: this post may contain affiliate links, which won’t change your price but will share some commission. See here for more information.
The Four Tendencies
I was so excited to stumble upon The Four Tendencies by Gretchen Rubin because she has examined my questions with about a million times my intensity, and with a true devotion to research.
She’s boiled down this aspect of human behavior—motivation—to be explained by how we respond to expectations.
Everyone has a tendency when it comes to responding to two types of expectations: internal and external. Internal expectations are self-imposed, like fitness goals or household schedules. External expectations are things like work deadlines, meetings or what you signed up to bring to a potluck.
Based on how you generally respond to internal and external expectations, you can fall into one of four categories: “Upholder,” “Questioner,” “Obliger” or “Rebel.” You have a dominant Tendency as well as a secondary one.
One reason I love this framework is because it doesn’t value one Tendency over another. As Psalm 139:14 says, everyone is “fearfully and wonderfully made.” With an accurate understanding of your Tendency, you can work with your personality to be motivated and follow through, rather than wishing you were different. And by better understanding and considering other people’s Tendencies, you can gracefully accept them for who they are and learn how to communicate with them more effectively.
While The Four Tendencies isn’t explicitly a Christian framework, I see a lot of practical application. In fact, I find it helpful to consider what Tendency people in the Bible are because it sheds some light on how God works through each type.
Motivated by All Expectations: the Upholder
I’ll start with one of the more “extreme” personality types: the Upholder. This is the person who readily responds to both internal and external expectations. Upholders are generally self-starters, easily motivated and reliable. They love checklists and following rules. You want Upholders on your team because they will carry their weight 110% every time.
On the other hand, they can also be rigid, perfectionistic, uptight and impatient. In situations where the expectations aren’t clear, they can feel anxious and uncertain. They will even go so far as to create rules inside the rules. They can be judgmental of others who don’t think the way they do. While Upholders are usually aware of their own need for self-care, they can be susceptible to driving themselves a bit mad with all of their expectations, which may be reasonable or not. They also can be resistant to delegating or trusting others to get the job done.
I am an Upholder, through and through. My greatest strengths are also my greatest weaknesses. Being self-aware helps me recognize when I’m following rules for rules’ sake, and in turn can help me let them go—for myself and for others.
When I think of Upholders in the Bible, the most obvious one is Paul. As a Pharisee he was an extreme rule follower and was so passionate about the rules that he sought to persecute those who didn’t fit inside his box. But when he found grace in Christ, his world was turned upside down. Instead of being passionate about rules, he became infinitely more passionate about grace and the freedom it ultimately brings.
Motivated by Internal Expectations: the Questioner
When I told my husband about The Four Tendencies, he was initially skeptical and expressed his distrust of personality frameworks. And he immediately confirmed my suspicions that he is a Questioner. This Tendency will follow expectations if those expectations make sense. Questioners critically examine all external expectations, and if they are deemed worthy, they will make them internal expectations and follow them.
As an Upholder, I love Questioners because they help me think critically rather than just following all the rules. They tend to do a lot of research and love the concepts of fairness, efficiency and effectiveness. Once they come to an internal conviction, they will stick with that conviction faithfully.
Questioners’ weaknesses are related to their strengths. They can be so data-driven that they reach “analysis paralysis” and avoid making decisions altogether. This can be exhausting. But once they come to an opinion or decision, they can stick to it stubbornly. With their self-directed reasoning, they can also rationalize some strange ideas. To convince them otherwise you have to present them with extensive data, which can be frustrating.
A friend of mine who is a Questioner says that setting deadlines helps her avoid analysis paralysis and decision fatigue, and that has been freeing for her. Understanding this Tendency also helps explain the person who can never “take your word for it” or questions everything.
I believe that David in the Bible was a Questioner. When the Israelites were terrified of Goliath, he immediately questioned their lack of faith and had a firm internal conviction that God would have his back. I often have wondered how this same man fell into extreme sin later in life, like when he took a military census instead of trusting in God’s provision, or when he committed adultery and murder. His rationalization and stubbornness make more sense if you think of him as a Questioner who strayed (and fortunately came back once he saw his errors).
Other possible Questioners of the Bible: Gideon, Jonathan, John the Baptist
Motivated by External Expectations: the Obliger
There’s a reason that accountability and coaching programs are so popular. They work! Many people cannot be self-motivated with tasks and habits they know they need to do for themselves, like maintaining personal health, keeping house or being disciplined about completing a passion project. But if you present an external expectation like a deadline or a consequence for other people if they don’t follow through, Obligers are dutifully responsive. According to Rubin, Obligers are probably the largest group.
Obligers are reliable team players and are very responsive to others’ needs. But, unsurprisingly, they can be especially susceptible to overwork and burnout, as well as exploitation. In fact, if you push Obligers to their limit, they can actually slip into what Rubin terms “Obliger rebellion,” when they just stop showing up. If the outer expectations are too much for them to handle, they crumble because there is not enough internal motivation to carry them through.
While Obligers can naturally feel frustrated with themselves since they lack internal motivation, the great news is that the solution to being motivated is easy to identify! If you’re an Obliger and you want to be motivated, the key is to find an external accountability system. This can look different for every person, but if you can find a way to let someone else down by failing to meet an expectation, you’ll be much more likely to follow through.
I wonder if perhaps some of the more selfless people in the Bible were Obligers, like Ruth and Esther. While it’s difficult to know the motivation behind why they did what they did, they appeared to be very responsive and courageous when others needed them to step up. Perhaps Moses was an Obliger as well. At different stages in his life he appeared to be weak and cowardly, but when he had a clear outside expectation from God as well as from his community he was heroic. (He also seemed to have a couple of instances of Obliger-rebellion when he got pushed to his limit!)
Other notable Obligers of the Bible: Aaron, the Apostle John, Barnabas.
Not Motivated by Expectations: the Rebel
On the opposite extreme from Upholders are Rebels. They resist all expectations, internal and external. This Tendency is fascinating and befuddling to me, as it is my complete opposite. Yet some of my dearest friends are Rebels; I am drawn to them because of their creativity and authentic way of living, as well as their ability to think outside the box.
It can be frustrating to be a Rebel or to work with a Rebel because rules and “shoulds” do not motivate them; in fact, they are demotivating. Some Rebels feel energized by breaking rules just to prove that they can. This does not mean that Rebels are doomed to be slackers and slobs, but it does mean that they need to think about motivation differently than the other Tendencies.
Three things that motivate Rebels are their sense of identity, their ability to be free and the opportunity to step up to a challenge (they love to prove people wrong). For example, a friend of mine is very passionate about her love for her kids; she’ll go to the moon and back for them. But she has to maintain a sense of freedom when running her household; otherwise she doesn’t feel true to herself or her family. So she doesn’t do well with strict routines and schedules, but when the mood strikes she will do a beautiful job cleaning, organizing and decorating. If she finds a particular task challenging or frustrating, she’s wonderfully creative and determined to complete it.
When communicating with a Rebel, you can’t force them to do anything (even when you’re communicating with yourself). Rubin recommends the following sequence: information, consequences, choice. Present the Rebel with their options, explain the consequences of their decisions and then let them choose. If the desired outcome resonates with their identity and they have the freedom to choose it, they’ll come through.
I’m pretty sure that Peter in the Bible was a Rebel. He was obviously resistant to outer expectations in the Gospels and struggled with inner ones as well, most notably when he denied Christ on the night before the crucifixion. But once he found his identity in Christ, he became a force to be reckoned with in the Book of Acts. (His name literally means “Rock.”) I’m sure he got a kick out of resisting the authorities, and it makes sense that he rejoiced when he was persecuted. Tradition holds that when he died he was crucified upside down at his own request. Sounds pretty Rebel-like to me.
Other possible Rebels in the Bible: Jacob, Samson, Jonah, the Prodigal Son, Mary sister of Martha.
I hope you find this framework as insightful and practically helpful as I have. Most people can tell what their Tendency is based on these basic descriptions, but if you’re not sure you can take Gretchen Rubin’s quiz here.
I’d love to hear from you: what do you think your Tendency is, and how do you think understanding this framework can help you be more motivated as you live out your faith?
Are you wrestling with sadness this Christmas? While we’re mostly enjoying the season, our family has a nagging sense of pain and loss as we wait for our international adoption referral. It hurts to think that our future child might be suffering and can’t create memories with us this year.
Christmas Sadness: How to Cope When the Holidays Hurt
The Christmas season is upon us, and for many it’s a time of joy, laughter and family.
But for some, it’s a time of unspoken sadness, grief and remembering loss. There are loved ones you can’t be with, unfulfilled dreams, and painful memories.
Or perhaps it’s a mixed bag; you enjoy the season, but there are still instances of nagging pain.
Whatever your story, it is an emotional time of year, for good or for bad. In the hubbub of the festivities, it can be easy to shove the swell of emotions aside and just power through—and feel completely depleted afterward.
On the other hand, it can be tempting to want to pull away from it all and hibernate until January.
Click over to Equipping Godly Women to read more about how to approach the holidays in a godly way with these complexities of emotion.
Leave a comment: are the holidays painful for you? How do you cope?
Do you have a hard time connecting in prayer? I do too…and that’s why again and again I go back to Jesus’ teachings about prayer whenever I’m in a rut. If you enjoy this, be sure to check out the Teach Me To Pray 7-Day printable journal.
For the longest time, I thought that I pretty much stink at praying.
Don’t get me wrong, I come to God regularly, maybe with a prayer list or journal in hand if I’m really on top of things.
But too many times, daily prayer has just been an item on my spiritual checklist, and as a result it has felt rote, aimless, boring and powerless.
I quickly lose focus and my mind wanders to what it thinks are more interesting pursuits.
I’ve wondered at times, What is wrong with me? Is there a “right” way to pray? Or a wrong way? What exactly does God expect us to say when he already knows our thoughts, anyway?
I think Jesus’ disciples wondered about some of these things. I’m guessing this is why he offered them many lessons on prayer.
Jesus came to a people who were very…religious. The Jewish leaders at the time loved marking all the right boxes, praying long and loud, making a show out of fasting and demonstrating to everyone how extremely godly they were.
And then there were the regular Joes like the rest of us mortals who probably felt a little inadequate and lost when talking to the LORD of the universe.
What made Jesus’ approach to prayer different was that it was an ongoing conversation in an intimate relationship with his Father, rather than a religious act you could check off your daily list.
Along with Jesus’ other teachings, his words on prayer were tough pills to swallow.
And you know what? They’re still tough. But that’s what makes them so effective.
The secret to a powerful prayer life isn’t following some formula or method.
I'm Gina, a happily married mom of three and stress management coach. I help exhausted, overwhelmed moms find peace and purpose in the everyday. Be sure to sign up for tons of free resources that will help you stop just surviving and start thriving! Read More…
“Oh, what will the signal be/For your eyes to see me/Watching offside as I wait/Just in case you need me/So I still will set the stage/Send my thoughts to you/I’m receiving every wave/that sent love, sent love through…”
Summary Capsule: Post-apocalyptic mutant dog rock star wants to summon a demon through the power of rock, and… do you really need to know more?
Deneb’s Rating: Four mutants out of five.
Deneb’s Review: You know, it’s been a while since we’ve had any really weird animated films coming our way.
Think about it. The last one that was truly oddball (that I’m aware of, anyway) was The Triplets of Belleville, and that was A: almost a decade ago, and B: French, so what do you expect? (I love ya, French folks, but you’re tied with the Japanese in the category of ‘World’s Most Bizarre Collective Subconscious’.) Now, of course there’s always the experimental, avant-garde film-festival stuff, but those tend to be about ten minutes long and often made by just one animator. The full-length, mainstream weirdos? Those tend to be somewhat rarer – as in, a lot.
It wasn’t always this way, though. Back in the late ’70’s and early ‘80’s, there was a brief rash of something strange and wonderful in the world of animation. Something bubbled up from the bottom of the cauldron, and gave us things that were dark and rich and new. These were films that dared to experiment, to push the boundaries of what animation could do and get away with, that dared even to suggest that some day, maybe, there could be animated movies that weren’t just for kids.
It didn’t last for long. None of them were terribly successful, and the industry shrugged its collective shoulders and went back to making family-friendly fluff (which, for the record, I like, but still.) But before the bubble burst, we got films like The Secret of NIMH, Heavy Metal, The Black Cauldron – and, oh yes, Rock and Rule, the movie we are about to discuss.
The movie starts out with a narrative scroll explaining that The War finally happened (as just about everyone knew it was going to at the time). The only survivors were street animals, a motley collection of cats, rats and dogs which eventually wound up mutating into a gestalt humanoid species that more or less resemble the Dognoses from Donald Duck comics. Flash forward to a goodly length of time after that, and society has reformed into something more or less resembling the early ‘80’s, albeit with stuff like hover-cars and the like.
One thing that hasn’t changed is the power of Rock ‘n Roll. One of the biggest names in this future’s music industry is Mok (Don Francks), an aging rock star known for his bizarre and theatrical performances. Well-known though he is, however, his career seems to have peaked some time ago, and he’s had trouble regaining momentum. Hmm. Looks like he might have some spare time on his hands. Maybe he should take up a hobby.
Well, how about summoning a demon? Yeah, that’d do. Mok has become obsessed with the notion of bringing forth a creature from the underworld to do his bidding – just what he wants it for is a little unclear, but he’s bound and determined to do it, nonetheless. (None of these are spoilers, by the way, as this is also all described in the opening crawl.)
The trouble is, he can’t do it alone. He requires a specific vocal tone, a unique voice that will complete the ritual and allow him to carry out his dark plans. He’ll know it when he finds it, but he’s been looking all over the place for such a voice, and so far he’s had no luck.
This all changes when he returns to his hometown of Ohmtown and encounters a small-time rock band made up of Omar (Paul Le Mat), Angel (Susan Roman), Dizzy (Dan Hennessey) and Stretch (Greg Duffel). Omar and Angel are the lead singers, and a couple. It’s somewhat of a bumpy relationship – he’s a bit too focused on his career, and not enough on hers – but they do seem to get along well otherwise.
In any case, Angel winds up taking the lead on the night that Mok comes calling. Wouldn’t you know it, she’s the one that he’s been looking for – she’s got the voice! He’s just got to have her, and quickly sets out trying to seduce her into his employ.
Angel, however, isn’t having any of it. Her career with the band may not have gone as smoothly as it might so far, but they’re her friends, and she’s not going to desert them just as they’re starting to have some success. Mok isn’t taking no for an answer, though – if he can’t recruit her willingly, he’ll simply change tactics and make her work for him.
And make her he does, spiriting her off before she has the chance to do anything about it. Omar and the others smell a rat in all this, and follow the two to Nuke York (yes, “Nuke York”), where Mok is hard at work making preparations for a mammoth concert. At this concert Angel will sing, and his demon will be unleashed at last.
Will our heroes succeed in finding her? Will Omar and Angel ever manage to patch up their differences? And can the world’s most evil rock star be stopped? Well… maybe. Yeah, that’s a definite maybe.
There seems to be something about Rock music that draws filmmakers like flies to honey, and causes them to make these grandiose movies themed around it. This is already the third review of this particular subgenre I’ve done for this site (the others being Phantom of the Paradise and Streets of Fire), and I have no doubt that there are many more entries in it out there waiting to be discovered.
What’s different about Rock and Rule, of course, is that it’s animated, which allows the filmmakers to get really out there with the story and visuals – and oh, they are out there; we’ll be getting to them soon enough. But there are other differences besides that; oh yes. Lots and lots.
To start with, Rock and Rule may be the first Rock movie set in its own little universe that plays by a set of rules all its own. One could, of course, point to Heavy Metal as a counterargument, but from what I’ve seen of it, that’s more of an anthology film – it has lots of little stories that are only tenuously fit together. R&R, on the other hand, is one story, one narrative, one world – and oh, what a weird and wild world it is.
That, really, is the key to what makes the film stick in the head – the world. We may technically be dealing with the distant future here, but it feels almost like a nostalgia piece, up until you get into the flying cars and the weird post-apocalyptic stuff and the whole mutated animal thing, and… well. Just about everything else, really.
But that’s the genius about a movie themed, not just around music, but around the feel of music – music creates its own worlds, and ones that are not necessarily tied to strict reality. Music, after all, is not logical, it is emotional, and the more intense the music gets, the more powerful the emotions, and the more fantastic the mental images. And if you tried to capture early-‘80’s Rock and put it onscreen, you might not exactly get Rock and Rule, but you’d probably get something awfully close.
Which brings us, of course, to the music. I’m honestly not too familiar with this type and era of Rock, but if you do happen to be a fan of it, I’m sure you’ll be satisfied. The movie isn’t exactly a musical, per se, but there are a number of original songs written for the movie that are worked into it in a natural sort of way, and sung by some pret-ty well-known people. I mean, Debbie Harry, Cheap Trick, Iggy Pop, Earth Wind and Fire? Even I’ve heard of these guys (I don’t know a hell of a lot about them, but I’ve heard of them), and music-wise, they deliver. Not all of the songs are really my thing – some are a bit too raucous for my liking – but I do like most of them, and they’re all very appropriate in terms of character, mood, tone, etc. In any case, they all fit the film perfectly, which is not something you can say for all soundtracks.
So that’s how it sounds – how does it look? It looks pretty damn awesome. Considering the time when it was made and the tight budget involved, Rock and Rule is a minor triumph of animation. It’s not always perfect, but it’s consistently good, and even when there is the occasional glitch, chances are you’ll be too caught up in the dark, brooding visuals to notice. The cityscape of Nuke York, for instance, is a lovely bit of gritty post-apocalyptic hellhole-ishness, and every time Mok shows up, it’s likely that there’ll be some darn nifty stuff to goggle at. The demon sequence, for instance (oh come on, that’s not a spoiler; it’s all about summoning the thing) looks spectacular, and is worth waiting for.
Right – Mok. Let’s talk about Mok. I know that normally the villain goes second in these reviews, but while he may not technically be the protagonist, the entire film revolves around him, so he’s worth bringing up first (not to mention that there’s a lot to say about him, so better now than later).
Mok is, first and foremost, a really great villain. He’s got all the traits a classic bad guy needs – he’s cunning, manipulative, theatrical, absolutely evil and possessed of enough power to make going up against him a really tough proposition. Moreover, the man runs on pure ego; he’s obsessed with maintaining his rock star image to the point where he has his minions work as a special-effects crew so that he can dissolve into a cloud of sparkles or something if he thinks it’d impress somebody. While it’s never outright stated as such, it’s implied that this is his motivation for the demon-summoning – he may still be one of the biggest names in the industry, but if he can’t be the biggest, he’s going to punish all those wretches who refuse to recognize his magnificence by sending a monster from Hell after them. That’ll show ‘em!
Furthermore, he’s got one of the most distinctive looks I’ve ever seen in an animated character. Conceptually he’s something like an evil hybrid of Mick Jagger and David Bowie, and while that would have worked perfectly well on its own, the animators went a step further and gave him an image that is unmistakably his. He’s tall and cadaverous with great big long fingers and wears a succession of cool I’m-an-evil-rock-star outfits, but the real genius went into his face, or, more specifically, his lips. Mok’s lips are just fascinating – I don’t think I’ve encountered anything like them in animation before. Most characters with noticeable lips tend to possess ones that are pouting or puffy, but not Mok. His lips slope inward, in a manner that looks disturbingly like they were carved into his face with a chisel, and seem to have more articulation in them than some people have in the rest of their bodies combined. It’s difficult to articulate just why this is so mesmerizing; it just is – you’ll have to see it to understand it. Combined with a whopping mouthful of teeth and his oddly rectangular eyes, Mok draws your attention like a magnet every time he’s onscreen, and it doesn’t leave him until the movie is finished. If you remember one thing about this movie, it’ll be him.
Also, one should mention his voice. While I’ve never encountered Don Francks before, I’ll definitely be keeping an eye out for his stuff in future – between this and being a fill-in voice for Dr. Claw in Inspector Gadget, the man has talent. He provides Mok with a resonant, purring grate of a voice that honestly surprised me at first, as I had been expecting something more Tim Curry-ish. Still, what works works, so I ain’t complainin’.
Following him, the “real” protagonist of Rock and Rule would probably be Angel, who is also a pretty good and memorable character. At first glance it might seem like she’s a typical damsel-in-distress type that the hero has to rescue, but really, nothing could be farther from the truth. As voiced by Susan Roman, Angel is gutsy and determined, with a take-no-crap attitude and a refusal to compromise her standards for money or power. She’s loyal to her friends, devoted to her craft, and while she does remain Mok’s prisoner throughout most of the film, that’s because he’s, well, Mok – against a more conventional foe, one gets the impression that she would have just kneed him in the tender parts and gotten away. She is, in short, a genuinely positive female role model, and her helpless situation only serves to accentuate this – it takes a good character to keep one’s interest and respect even while they’re not in an active role.
Next up, we have Omar. A lot of people don’t seem to like Omar very much, and while I can see why, I don’t really agree. Sure, he can come across as a bit of a jerk sometimes, but that’s not really who he is – he’s more of an Angry Young Man. One must remember that for a good chunk of the film he’s semi-convinced that Angel has deserted him for Mok, so while his petulance can get a bit over-the-top at times, it’s a realistic way that someone like him would react; he’s the sort of guy who deals with his problems by angrily going “who cares?” and then going off to kick a wall. The thing is, though, that he does care – he genuinely loves Angel, and while it’s sometimes difficult to understand what she sees in the big meathead, he does ultimately prove himself worthy of her, and as voiced by Paul Le Mat, he’s got a certain James Dean-ish charm. Even if you do want to slug him sometimes, he’s an OK guy.
Moving on to the supporting characters, we have Stretch and Dizzy. Stretch is a jittery goofball, and as such serves as the main comic relief. He’s nothing too revelatory character-wise, but he does have a few good lines here and there, and never crosses the line into outright annoying. Dizzy is kind of an awkward nerd, which also makes him a bit of a stock character, but he serves an ancillary purpose by acting as the conscience of the group. When Stretch is too busy freaking out and Omar is too busy sulking, Dizzy’s the guy who gets things going by saying something like “look, we gotta get moving; Angel needs us!” He’s not terribly deep, but as a supporting character he works fine.
Finally, back on the bad guy side of things, we have the Schlepper Brothers, Toad, Zip and Sleazy. They serve as Mok’s dim-witted goon squad throughout the movie, filling the usual roles of the heavies. However, they are a little bit deeper than that, and ultimately wind up having hidden depths that I won’t go into here. As minions go, they’re fairly memorable.
So, to wrap things up, is Rock and Rule a perfect movie? Well, no – it does have its flaws. For one thing, if you’re expecting that this is something you can watch with the kiddies just because it’s animated, you’re wrong – there’s swearing, some (mild) drug use and implied sex. (Mind you, I’m sure there are plenty of kids who would love it, but it’s really more for early-teens on up.) The story is nothing to write home about, basically being “Mok’s gonna summon a demon, and until he does, here’s stuff that happens”. Also, the characters (aside from Mok) are by-and-large nothing new, and sometimes seem a little overly cartoonish for all the sturm und drang that’s surrounding them. (Oh yes – and the whole “evolved animals” thing? Doesn’t affect the plot in the slightest.) There’s a certain style that the movie has, and if it doesn’t click with you, then you may not like it very much.
However, if it does, you’re in for a treat. I mean, we’re kidding ourselves if we think that people watch movies like this for the plot or the characters; they watch them for the ride, man! And the ride on this one is ultimately pretty cool. The animation was great for its time, and remains darn pretty even today; the soundtrack is fairly impressive even if it’s not your thing, and the whole shebang just has a bizarre rock n’ roll sci-fi edge to it that makes it fairly unique. If you’re in the mood for something dark and rich and weird, then Rock and Rule’s your baby.
Go ahead and check it out. And rock on!
“Oh come now, Angel; you’ve been swayed by false rumors. I mean, I’ll admit that my rise to the top wasn’t ENTIRELY done without a bit of judicious murder and bribery here and there, and yes, I do enjoy a good round of torturing kittens and puppies every now and then, but evil? That’s a bit of a jump, don’t you think?”
Intermission!
There are several scenes in the film that feature what look like vintage computer graphics. In fact, these were largely animated through more traditional means, using overlays lit from underneath.
The film was originally to be named “Drats”, and aimed at a younger audience.
Mok’s full given name was originally ‘Mok Swagger’, something that Mick Jagger’s lawyers objected strongly to. Therefore it was not used, but it was in the comic book adaptation, and many fans of the movie have adopted it as the character’s ‘real’ name. Personally, I think just plain Mok is more elegant, but whatever.
At one point, the band’s car drives under a sign reading “Bridge to Aitch”. If you pause at this point, the rest of the sign can be read: “One Way Only (and this ain’t it). No doing anything on bridge.”
The various shots of the Ohmtown cityscape from above were done using a multi-plane camera, with lights shining through a matte painting during nighttime scenes. The cars driving through it are real model cars traveling along the painted streets.
This was the first animated film made in Canada. It was also the last such film that Nelvana ever made, as it flopped at the box office and nearly bankrupted the studio. Their subsequent efforts have all been less ambitious, more family-friendly fare.
The process of animating the demon involved smearing cow brains on the camera lens.
Groovy Quotes:
Mok: When I want your opinions, I’ll give them to you!
Angel: (singing) Oh, what will the signal be/For your eyes to see me/Watching offside as I wait/Just in case you need me/So I still will set the stage/Send my thoughts to you/I’m receiving every wave/That sent love, sent love through…
Officer Quadhole: (repeated line) Sliiiime!
Dizzy: You’re just nervous. Take a deep breath. (Stretch does so)
Stretch: Hey, it woiked! I’m not noivous! I’m scared!
Mok: No Santa Claus, no Tooth Fairy, and no Uncle Mikey!
Mylar: Fabuloso!
Mok: (singing) My name is Mok, thanks a lot/I know you love the thing I’ve got/You’ve never seen the likes of me/Why, I’m the biggest thing since World War Three!
Omar: Hold onto yer privates, generals!
Mok: What did you think of my last album?
Angel: I loved it!
Omar: I bought it, too. My gerbil uses it for a room divider.
Video game voice: We’ve got company at twelve o’clock.
Stretch: But the house is such a mess!
Angel: I couldn’t leave them for anything.
Mok: I didn’t offer you anything – I offer you everything!
Toad: Ya gonna apologize, rude-boy?
Omar: I’m sorry, dogbreath.
Mok: Yes – good, clean fun! All work and no play makes Mok a dull boy!
Dizzy: Nuke York’s only three days away.
Stretch: It’s gonna take us six days. We only got half a car left.
Mok: Evil spelled backwards is ‘Live’ – and we all want to do that.
Officer Quadhole: What are ya doin’ in a public fountain?
Omar: We give up, Quad – what are we doin’ in a public fountain?
I wouldn’t be surprised – it is kind of surreal that Nelvana devoted themselves pretty much entirely to kiddy stuff after this. It IS kind of their roots, though – they’d never made anything like Rock and Rule before, and they never would again. Kind of makes you wonder what we’d know them for if the movie HAD been a success, doesn’t it?
I picture Rock & Rule only with Care Bears. You *know* someone’s thought of a dark and edgy reboot of the ‘Bears :) Still it is a little sad that something that was obviously so much a labor of love had to be put aside in favor of commercial stuff.
I wouldn’t say THAT exactly – I mean, they did DO it; they did make the thing. It wasn’t put aside, they MADE that puppy. They gave it their best and hoped it would make money, and, well, it didn’t. It’s certainly a shame that it didn’t spark off more projects like it, but at least we got what we got.
That’s actually a callback to a cut scene. The same couple are seen earlier on trying to sell “Mok’s Concert” T-shirts, which nobody is buying – so, naturally, they try again with “I survived Mok’s Concert” shirts, which gets the same results for different reasons.
|
Motor running very rough
JOSH PLETT
MEMBER
2003 BUICK CENTURY
3.1L
V6
2WD
AUTOMATIC
130,000 MILES
My motor is running rough and my check engine light is on. I have changed all the spark plugs, plug wires, and coil packs, and I have double-checked that I have the correct firing order. It runs the same as before I changed all the parts. I went to AutoZone and had a readout done. It originally said that cylinder 3 wasn't firing. When I changed the plugs, the cylinder 3 plug was the only one that had a dark, wet, oily look to it. There is no oil or antifreeze smell, nor is it expelling any smoke, oil, or antifreeze/water out the exhaust. There is no leaking or smell of either liquid when the motor is running or otherwise. It makes a ticking noise when running. I thought that if I replaced all the electrical parts, the roughness, the ticking, and the cutting out when depressing the gas would stop, but nothing has changed. The readout from AutoZone also stated that the oxygen sensors weren't reading. What is the next step in figuring out the problem?
Friday, September 9th, 2016 AT 4:53 PM
1 Reply
KEN
ADMIN
Hello JOSH,
It sounds like you have one of two problems: either a camshaft lobe is flat, which you can check by removing the valve cover, or you have low compression, which you can check by following this guide.
|
New 'Medicare for Beginners' Workshops
The Central Ohio Area Agency on Aging (COAAA), in partnership with the Ohio Senior Health Insurance Information Program (OSHIIP), will offer additional ‘Medicare for Beginners’ Workshop dates this spring and summer. The daytime seminars are May 25 and July 19, both starting at 3:00 pm.
The additional dates will be daytime seminars in the off months when the agency isn’t hosting its regular ‘Medicare for Beginners’ evening event. The seminars will be held at COAAA, 3776 S. High St., Columbus, OH 43207.
‘Medicare for Beginners’ workshops help those who are new to Medicare by offering unbiased advice to help individuals navigate through the Medicare process. The workshops are a valuable resource for those who need help understanding their Medicare options.
|
Replacement Pillow
Removing the pillow insert from the deluxe Kindermat and Daydreamer mat covers before washing helps to maintain their fluffiness and extend their life. However, if you want or need a backup, NapMat.com has these replacements available. These pillows replace the pillow inserts in the deluxe Kindermat and Daydreamer mat covers, not mats by Stephen Joseph, Mint, or other mats.
About Nap Mat
Napmat.com works hard to provide a variety of wonderful personalized gifts and products to help little ones successfully go off to school, to welcome a new baby to the family, and to offer wonderful gifts for family, friends and loved ones.
|
Monday, August 11, 2014
A few days ago I searched for information on the New Orleans Saints 8'' x 20'' Framed Letter Art, so I have to tell you about it.
New Orleans Saints 8'' x 20'' Framed Letter Art
Take a virtual tour of your favorite NFL team's city when you hang this 8'' x 20'' framed letter art in your home or office. This art piece features "Saints" spelled out with photographs of local landmarks and attractions arranged to form letters, making it the perfect piece for any New Orleans native or fan.
It's essential to spend less in today's financial climate. We should be mindful with our funds, but we can still keep shopping. You can find everything you need for less when you shop online. Please keep reading to discover how to get the best information about thrifty online shopping.
|
If you are like the rest of our user community, your IT team is busy. With pressure to deliver projects on time, you don’t have a lot of time to spend making your management tools work. You need network monitoring tools that work for you. You want tools that make it easy to find performance issues before your users do and resolve them before they impact the business. That’s why tens of thousands of customers around the world love WhatsUp Gold.
kses is an HTML/XHTML filter written in PHP. It removes all unwanted HTML elements and attributes, and it also does several checks on attribute values. kses can be used to avoid Cross-Site Scripting (XSS). NOTE: I don't have time for kses right now.
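The allowlist idea that the blurb describes can be sketched in a few lines. This is not kses itself (kses is written in PHP); it is just a minimal Python illustration of the same technique: tags and attributes not on an allowlist are dropped, attribute values that look like `javascript:` URLs are rejected, and script/style bodies are skipped. The `ALLOWED` table below is invented for the example.

```python
from html.parser import HTMLParser

# Hypothetical allowlist for this sketch; real kses takes this as a parameter.
ALLOWED = {"a": {"href"}, "b": set(), "i": set(), "p": set()}

class AllowlistFilter(HTMLParser):
    """Drop tags/attributes not on the allowlist; skip script/style bodies."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0  # nesting depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
            return
        if tag not in ALLOWED or self.skip:
            return  # drop the disallowed tag entirely
        kept = []
        for name, value in attrs:
            value = value or ""
            # keep only allowlisted attributes with safe-looking values
            if name in ALLOWED[tag] and not value.lower().lstrip().startswith("javascript:"):
                kept.append(f' {name}="{value}"')
        self.out.append(f"<{tag}{''.join(kept)}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED and not self.skip:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def sanitize(html: str) -> str:
    f = AllowlistFilter()
    f.feed(html)
    f.close()
    return "".join(f.out)

print(sanitize('<p onclick="x()">Hi <script>alert(1)</script><a href="javascript:evil()">link</a></p>'))
# → <p>Hi <a>link</a></p>
```

A toy like this ignores many corner cases (entity re-encoding, malformed markup, protocol-relative URLs) that a real filter such as kses has to handle.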
Cloud Toolkit .Net provides many useful and good-looking .Net controls for use with .Net applications (VC++, VC#, VB.Net etc). Update (September 2011): The project has been inactive for a few years and will remain so. No support provided!
This library is meant for high-performance calculations for science or 3D games/rasterizers, using SIMD instructions of x86 processors to allow an unparalleled level of optimization. It takes advantage of MMX, 3DNow!, 3DNow!+/MMX+, & SSE/SSE2/SSE3/SSSE3.
ABSim is an Agent-Object-Relationship (AOR) simulation system based on a Java program library. The development of ABSim has terminated. Its successor is <a href="http://oxygen.informatik.tu-cottbus.de/aor/?q=node/2">AOR-JSim </a>
Codewars is a client-server game. You write a robot program in any programming language, which fights against other robots in a simulated arena. A must-see for fans of artificial intelligence and coding.
CorEngine is a work-in-progress, OpenGL-powered 3D game engine designed to help independent game developers with quick prototyping and game/virtual environment creation.
The engine supports a standard set of features, like skeletal animation, post processing, Lua/C programming, physics powered by Bullet Physics, GUI and 2D/3D Audio.
MsraConsole is a remote desktop sharing tool.
This project is inactive. The forked project is MsraCon, which uses Windows Authentication. https://sourceforge.net/projects/msracon/
The tool can be used as a help-support solution in classrooms. It shares the Windows desktop screens of multiple computers with some viewer computers (users). The viewers can take control of the mouse and keyboard.
It is only a programming sample; don't use the software in production environments!
It uses the Microsoft Remote Desktop API (RDPCOMAPILib and AxRDPCOMAPILib); the source code is in C#. Requires .NET Framework 4 or higher to be installed.
Pearl MATE 3.0 (16.04)
!!!!!!! THIS VERSION IS NO LONGER SUPPORTED !!!!!
Please see new location for all versions of Pearl 3.0:
https://sourceforge.net/projects/pearl-os-3-0/files/
>>>>> However <<<<<
You can still use this version if you comment out our repository and
just use the Ubuntu archives for updates. If all you would like to keep is
pretty much the theming etc., this is probably a good choice.
I am very sorry for this issue... totally my fault...humans geeze..lol..
btw, to comment out our repo, simply put a # in front of the "deb http://....." line in the /etc/apt/sources.list file.
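As a concrete illustration of that edit, here is the same change applied with sed to a throwaway copy of the file. The Pearl repo URL is a made-up placeholder, since the real line is elided above; on an actual install you would edit /etc/apt/sources.list itself (with sudo) and then run apt-get update.

```shell
# Demo on a throwaway copy; 'pearl.example.org' is a placeholder,
# not the project's actual repository URL.
printf '%s\n' \
  'deb http://pearl.example.org/repo xenial main' \
  'deb http://archive.ubuntu.com/ubuntu xenial main' > /tmp/sources.list.demo

# put '# ' in front of the Pearl repo line so apt ignores it
sed -i 's|^deb http://pearl|# &|' /tmp/sources.list.demo
cat /tmp/sources.list.demo
```

After the edit, only the Ubuntu archive line remains active; the commented-out line is kept so it can be re-enabled later by deleting the `# `.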
PocketGCC is a port of the well-known GNU C/C++ compiler and Binutils to the ARM-WinCE-PE platform. Both cross-compiler and native builds are provided, allowing you to develop applications for Windows CE devices with an ARM-compatible processor on the go, without a desktop.
ZEngine is designed to provide a powerful yet easy to use 2D game API using OpenGL for fast 2D drawing and SDL for everything else, it is completely cross-platform and the class based design makes it easy to learn and use.
jsXe is the Java Simple XML Editor. Out of the box it provides a tree view, DTD/Schema introspection and validation. Its aim is to provide a framework for XML editing through any number of views that can be loaded at runtime as plugins.
This is the Sokofinity project. The goal of this project is to recreate the classic NES game DuckHunt, only this time in 3D with Virtual Reality. Using an Infinity Box and Flock Of Birds positioning sensors, the game gets a new dimension.
Note: this p
This is the source code repository for my VMLAB User Components. The Project Website contains detailed descriptions of each component. You can also Download Components as pre-compiled DLL files from this site. The "AVR Peripheral" components should be installed to the "mculib" directory; all other components are installed in the "userlib" directory. A few components include a "readme.txt" with additional setup instructions.
Most components are licensed under the LGPLv2 (or higher). A few of the older components are released into the Public Domain.
The Program Killer is a Delphi 6 program that monitors the Process List on Windows 95/98/Me and Windows NT4/2000/XP for unauthorized EXE files (User Definable) and if found, those Processes are Terminated via the Windows API.
AbsoluteX is a free, open source class library primarily developed for use with the X Window System.
AbsoluteX uses object-oriented design and free software LGPL licensing.
It gives you the ability to develop open software, free software, or even commercial software.
|
Jazz Fills the Air Near Brookhaven Accommodations for Annual Winterfest Guests
Wineries Open Their Tasting Rooms to Jazzy Winter Fun
BELLPORT, NY--(Marketwire - Feb 14, 2013) - Jazzing up the winter doldrums, Long Island's Winterfest returns to ignite some fun and relaxation with exceptional music and fine wines every weekend. Winterfest began on February 9 and continues through March 17 with a host of special events. Comfortable Brookhaven hotel accommodations make it easy to create an exciting winter weekend getaway.
Bearing the theme "JAZZ on the Vine" for its fourth consecutive year, Winterfest embraces the area's wine industry by captivating guests with six fun weekends of jazz performances inside local winery tasting rooms and allowing them the opportunity to enjoy remarkable music while celebrating the tasty flavors borne from local vines and the skilled craftsmanship of area vintners.
Centrally located to provide easy access to the impressive list of vineyards participating in the many events of Winterfest 2013, the stylish SpringHill Suites Long Island Brookhaven, NY hotel is an ideal destination for those planning to attend the festivities. Featuring full afternoon schedules in the tasting rooms of some of the area's most noted wineries, as well as a host of other cultural happenings and fine dining opportunities, Winterfest is destined to make winter an exciting time on Long Island. A special attraction at this year's sixth annual event is a series of concerts in the newly renovated Suffolk Theater on Riverhead's Main Street. Celebrating its grand opening on March 2, the theater is opening after a five-year restoration that has transformed it into a magnificent performing arts center with state-of-the-art services.
While attending Winterfest, guests of the Marriott's SpringHill Suites near Medford, NY can enjoy spacious home-like comforts when choosing from the hotel's selection of well-appointed studio suites, all offering amenities such as complimentary Internet access; in-room refrigerator, microwave and coffee maker with tea service; a flat screen LCD TV with cable/satellite service, premium movie channels; and an iPod docking station. Ideal for traveling families, guest accommodations include cotton-rich linens over thick king- or queen-size mattresses and a pullout sofa for more comfortable sleeping arrangements. Providing added savings and a delightful convenience, guests are treated to a breakfast buffet served daily in the lobby of this hotel near Fire Island and have access to a relaxing indoor pool, outdoor patio and 24-hour fitness center.
About the SpringHill Suites Long Island Brookhaven Hotel
The SpringHill Suites Long Island Brookhaven Hotel welcomes travelers with comfortable accommodations designed to provide home-like comforts and convenient access to popular business destinations and nearby attractions including Tanger Outlets shopping, Atlantis Marine World and Baseball Heaven. The stylish lodging also offers well-equipped venues for small business and social gatherings as well as a 24-hour business center for the convenience of traveling executives. Free parking and complimentary airport shuttle service to and from the nearby Islip Airport are among the many amenities that make the SpringHill Suites an ideal destination for those seeking a remarkable Long Island hotel experience.
|
Mammals
More than 50 mammal species are found in Northern Virginia; and Willowsford habitat is suitable for many of them.
Some mammals, such as gray squirrels and white-tailed deer, are often seen. Others, like skunks, foxes, and bears, are more elusive. The smallest mammals, including voles and shrews, are rarely seen because they spend much of their lives underground or hidden under leaves and low-growing plants.
Roles of Mammals in Ecosystems
Animal species have different roles in the habitat or ecosystem, and all species in the ecosystem rely on each other.
Keystone species, such as white-tailed deer, have a disproportionately large impact on an ecological community, affecting many other organisms. As Virginia’s largest herbivore, a relatively small number of deer can have a huge impact on a forest environment. Their excessive browsing removes native tree seedlings, young shrubs and groundcover plants, leaving less food and shelter for other animals.
Predators, such as foxes and coyotes, help manage prey species, such as rodents. And certain mammals—the ecosystem engineers—can influence habitats by modifying their environments. Beavers, for example, build dams, create ponds and wetlands, and alter stream habitats.
|
Alto-Shaam Golf Outing Raises $100K for Lymphoma
The sixth annual Jerry Maahs Memorial Golf Outing held on Aug. 9 at Ironwood Golf Course in Sussex raised more than $100,000 for the Leukemia & Lymphoma Society’s Wisconsin chapter, making the event its largest independent fundraiser in the state.
Alto-Shaam founder Jerry Maahs passed away from lymphoma in 2006. The Jerry Maahs Memorial Golf Outing was founded in 2009 to honor Jerry’s memory and to support finding a cure for this cancer.
“Our goal is to support research that will one day save many lives,” says Steve Maahs, chief operating officer and president of Alto-Shaam. “My family is overwhelmed by the support shown by our employees and extended business community. Not only will my father’s industry legacy continue, but his memory will continue to live on as we support the discovery of a lymphoma cure.”
Finding a cure for lymphoma remains a growing concern. Nearly 71,000 new non-Hodgkin lymphoma cases are expected to be diagnosed in the United States in 2014, according to the National Cancer Institute at the National Institutes of Health. The NCI estimates almost 19,000 people in the U.S. will pass away from the disease this year.
Because of the generous donations from sponsors and the Alto-Shaam family, Jerry’s name will continue to support a research grant to help find a cure for aggressive non-Hodgkin lymphoma. The Jerry Maahs Memorial Golf Outing has raised nearly $225,000 since its inception in September 2009.
The golf outing continues to grow each year. The first event included 56 golfers and six volunteers working to raise $7,680. This year, 187 golfers and 31 volunteers helped raise $100,000. Golfers came from throughout the U.S., including Arizona, New England, Florida, California, Washington, New York, and Georgia.
“We look forward to seeing an end to non-Hodgkin lymphoma,” Steve says. “Every person who helped with the golf outing—whether sponsoring the outing, donating prizes, volunteering at holes, or swinging the club—should be proud of their contribution toward that goal.”
The research portfolio includes seven cutting-edge investigations underway at prestigious research institutions and a strategic alliance with a biotechnology company.
“We would like to congratulate and thank Steve Maahs, the golf committee and the Alto-Shaam, Inc. family on their amazing event and generous donation,” says Mike Havlicek, Wisconsin chapter executive director. “Alto-Shaam has once again proven to be not only an industry leader, but a valued community leader. LLS is grateful to have their support and partnership. The company’s efforts and generosity bring help and hope to patients and their families.”
News and information presented in this release has not been corroborated by FSR, Food News Media, or Journalistic, Inc.
|
Overview of cancers
===================
Bladder cancer
--------------
Bladder cancer refers to any of several types of malignant growths of the urinary bladder, about 90–95% of which are transitional cell carcinomas (TCC); the remainder are squamous cell carcinomas and adenocarcinomas ([@bib50]). Every year in the United Kingdom almost 10,200 people are diagnosed with bladder cancer, with >4800 deaths, accounting for around 1 in 20 of all cancer registrations and 1 in 30 cancer deaths ([@bib12]). In GB, the age-standardised incidence rates increased throughout the 1970s and 1980s to reach a peak in the late 1980s, although the numbers of deaths have remained steady in recent years ([@bib50]). In most European countries, including England and Wales, bladder cancer is at least three times less frequent in women than in men, which has been seen as partly due to different smoking habits and also an indication of an occupational origin ([@bib39]; [@bib47]). Patients with superficial non-penetrating tumours have an excellent prognosis, with 5-year survival rates between 80 and 90%, whereas patients with muscle-invasive tumours have 5-year survival rates of <50%. Population-based bladder cancer survival rates changed very little between the late 1980s and the late 1990s, with men having a persistent 6–10% survival advantage ([@bib53]).
Many studies have suggested ∼40 potentially high-risk occupations ([@bib56]). Despite this, the relationship between many of these occupations and bladder cancer risk is unclear, with evidence of a strong association for a few occupations: aromatic amine manufacturing workers, dyestuffs workers and dye users, painters, leather workers, aluminium workers and truck drivers ([@bib56]).
Tobacco smoking and occupational exposure to aromatic amines (AAs) are two established environmental risk factors for bladder cancer, and controlling exposure to these has been an important contributor to the reduction in mortality, particularly among men ([@bib48]). Up to 40% of all male and 10% of female cases might be ascribable to smoking ([@bib12]); the International Agency for Research on Cancer (IARC) has suggested that the proportion of cases attributable to prolonged smoking in most countries is of the order of 50% in men and 25% in women ([@bib30]). The relative risks (RR) are around 2- to 4-fold ([@bib51]; [@bib68]; [@bib12]).
Kidney cancer
-------------
This can refer to cancer of the renal cells only (renal cell carcinoma; RCC) or can include the less-common cancers of the renal pelvis, ureter and other non-bladder urinary organs such as the urethra (transitional cell carcinoma; TCC). Kidney TCCs are closely associated with bladder cancers ([@bib50]). Although the incidence of kidney cancers has increased in Caucasian populations, RCC has increased more rapidly than TCC. More men are affected than women, and most cases occur in the age range of 50–70 years ([@bib50]). The five-year RCC survival rate is currently about 50%, but if detected early enough may be >80%. Incidence, mortality and survival rates for kidney cancer in Great Britain are in the midpoint of the global range ([@bib50]). Cigarette smoking is the most well-established risk factor associated with RCC and particularly TCC, but other risk factors, which include body weight, diet, pre-existing kidney disease and genetic predisposition, have been identified ([@bib40]).
Although kidney cancer is not generally considered to be occupationally associated, occupational agents/exposure scenarios associated with RCC include asbestos, trichloroethylene (TCE), tetrachloroethylene, polycyclic aromatic hydrocarbons (PAH), diesel engine exhaust (DEE), heavy metals (e.g., cadmium, lead), polychlorinated biphenyls, coke production, oil refining and gasoline/diesel delivery ([@bib22]). Occupational associations for transitional cell carcinomas of the kidney resemble those for bladder cancer ([@bib40]) and include links to dyes or employment in leather and shoe manufacturing. Cancers of the renal pelvis and ureter have also been associated with exposure to coal/coke, natural gas and mineral oils, or employment in dry cleaning, iron and steel, chemical and petroleum refining industries ([@bib36]).
Methods
=======
Occupational risk factors
-------------------------
### Group 1 and 2A human carcinogens
The agents that the IARC has classified as either definite (Group 1) or probable (Group 2A) human carcinogens for urinary cancers are summarised in [Table 1](#tbl1){ref-type="table"}. The IARC has not identified any agent that is common to both kidney and bladder cancers. Causes for kidney cancer include work in coke production (Group 1) and exposure to TCE (Group 2A). For bladder cancer, these include mineral oils (Group 1), magenta manufacture (Group 1), auramine manufacture (Group 1), aluminium production (Group 1), work as a painter (Group 1), work in the rubber industry (Group 1), boot and shoe manufacture/repair (Group 1), exposure to AAs (Group 1/2A), PAHs (Group 1/2A), DEE (Group 2A), work as hairdressers and barbers (Group 2A), intermediates in plastic and rubber manufacturing (Group 2A) and petroleum refining (Group 2A).
Choice of studies providing risk estimates for urinary tract cancers
--------------------------------------------------------------------
Detailed reviews of occupational risk factor studies identified for urinary tract cancer are provided in the relevant Health and Safety Executive (HSE) technical reports ([@bib21], [@bib22]).
Occupational exposures considered for bladder cancer
----------------------------------------------------
### Aluminium production
The predominant exposure in aluminium manufacture is to PAHs, and thus those working in this industry will be considered among the estimates for PAHs. Although high risks of bladder cancer have been reported for those working in the aluminium industry, the causative agent(s) is unknown or unproven ([@bib31]). Most of these workers were concurrently exposed to aluminium dust, or fumes containing other known carcinogens such as tobacco smoke or PAHs. The PAH exposures mostly originate from the evaporation of carbon electrode materials used in the electrolysis process ([@bib26], [@bib27], [@bib28]). For a more comprehensive summary of the studies undertaken, refer to the relevant HSE technical report ([@bib21]). Studies generally report that the incidence of bladder cancer is increased in this industry, but since 1980 there has been a downward trend in incidence. Only studies in Canada demonstrate an exposure--response relationship. In a UK case--control study of 80 urothelial cancer cases, there was an almost doubling of the risk among those involved in aluminium refining/smelting ([@bib61]).
### Aromatic amines
Human exposure to AAs has been associated with an increased risk of bladder cancer, but their use has continued because of their industrial and commercial value. Well-established occupational causes of bladder cancer include the AAs 2-naphthylamine (*β*-naphthylamine), benzidine, 4-aminobiphenyl and chlornaphazine ([@bib31]; [@bib69]; [@bib55]; [@bib11]). Aromatic amines have been used as antioxidants in the production of rubber and in cutting oils, as intermediates in azo dye manufacturing and as pesticides. They are also a common contaminant in several working environments, including the chemical and mechanical industries and aluminium transformation, and are widely used in the textile industry. Occupational exposures to AAs may explain up to 25% of bladder cancers in some Western countries ([@bib70]; [@bib69]).
### Risk estimates for occupational exposure to AAs and bladder cancer
For the present study, a series of risk estimates were used for different groups of workers. These were selected from the study by [@bib61] who investigated occupational exposure to AA and estimated the risk of developing urothelial cancer on the basis of a hospital-based case--control study in the West Midlands in the United Kingdom. Smoking-adjusted relative risks (RR) of \>2.0 were obtained for seven occupations, including dyestuff manufacture (RR=2.61, 95% CI=0.98--7.00), leather work (RR=2.51, 95% CI=1.44--4.35), cable manufacturing (RR=2.46, 95% CI=1.20--5.04) and textile printing and dyeing (RR=2.32, 95% CI=0.98--5.45). Sorahan\'s RR estimates for the manufacture of rubber products (RR=1.89, 95% CI=1.34--2.66), plastics (RR=1.73, 95% CI=1.17--2.55) and organic chemicals (RR=1.70, 95% CI=1.05--2.76) are used for industrial exposure to 4,4′-methylenebis(2-chloroaniline) (MOCA). The authors also provided an estimate of RR for medical and nursing occupations (RR=1.62, 95% CI=1.03--2.55), as well as for laboratory technicians (RR=1.05, 95% CI=0.60--1.86), which have been used for lower-level exposure groups.
In a review of studies, [@bib69] reported considerably increased risks of bladder cancer in workers exposed to 2-naphthylamine, benzidine and 4-aminobiphenyl, but also stated that some of these studies were poorly designed and/or based on very small numbers ([@bib70]; [@bib69]). A review of 11 European case--control studies also concluded that about 5--10% of bladder cancers in men could be attributed to occupational exposures, including, but not specifically, AAs ([@bib37]). Studies of *ortho*-toluidine and aniline have demonstrated elevated risks associated with bladder cancer. The use of 2-naphthylamine, benzidine and other carcinogenic arylamines has now been banned ([@bib44]; [@bib65]).
### Auramine and magenta manufacture
These exposures have been included in the general estimation of AAs and dyestuffs. Although IARC has classified the manufacture of auramine as Group 1, the responsible carcinogens are not known ([@bib31]). The IARC summarises one epidemiological study that suggests auramine manufacture as an occupational bladder cancer risk ([@bib24]). [@bib9] also observed a significant risk among dyestuff workers in the United Kingdom coming into contact with auramine.
[@bib31] also concluded that the manufacture of magenta entails exposures that are carcinogenic, but that overall there is inadequate evidence for human carcinogenicity of this dye. The manufacturing of magenta II involves the use of *ortho*-toluidine, formaldehyde, nitrotoluene, and for magenta the use of aniline, *ortho*- and *para*-toluidines and their hydrochlorides, nitrobenzene and *ortho*-nitrotoluene. In two studies of manufacturers from the United Kingdom and Italy, the risk of bladder cancer was highly significant ([@bib9]; [@bib52]); in both cases there was evidence of *ortho*-toluidine and 4,4′-methylene-bis(2-methyl aniline) exposure, implicating these compounds in the increased rate of bladder cancer mortality observed.
### Diesel engine exhaust
An effect of DEE on the occurrence of bladder cancer is plausible because metabolites of PAH are present in DEE and are concentrated in the urine and may interact with the urothelium of the bladder ([@bib57]). Exposure to DEE occurs in many occupational settings, and levels of PAHs in this fume are highest in emissions from heavy-duty diesel engines and lower (and comparable) in emissions from light-duty diesel engines and from petrol engines without catalytic converters ([@bib5]). Professional drivers, mechanics and people working in other related professions are exposed to elevated levels of emissions from combustion engines ([@bib19]).
### Risk estimates for occupational exposure to DEE and bladder cancer
For this current study, a suitable RR was calculated by the research team as 1.24 (95% CI=1.10--1.41) for the 'high-exposed\' group using estimates from studies reviewed by [@bib6]. The RR was estimated using a random-effects model based on an overall inverse-variance-weighted average of all RRs from the studies based on cancer incidence, but excluding overlapping exposure categories. [@bib6] reviewed 35 European and North American epidemiological studies published between 1977 and 1998 that examined the risk for bladder cancer and exposure to DEE among highly exposed workers (e.g., railroad workers, garage maintenance workers, truck drivers and drivers and operators of heavy machines in ground and road construction). They classified exposure to DEE using a job-exposure matrix (JEM) or an experts\' assessment of individual occupational histories. Most of the cohort studies included did not control for smoking, whereas a majority of the case--control studies did. The studies based on routinely collected data were assumed to have been adjusted for smoking. Because the results were heterogeneous, owing to the studies using different definitions of exposure, [@bib6] did not carry out an overall meta-analysis of the data or offer an overall summary RR; nevertheless, the value they calculated for all studies (RR=1.18, 95% CI=1.08--1.28) was in line with their observation of an overall RR in the range 1.1--1.3.
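The inverse-variance-weighted random-effects pooling described above can be sketched as follows. This is a minimal illustration of the standard DerSimonian--Laird approach, working on the log-RR scale; the input RRs and confidence intervals below are hypothetical, not the actual study data reviewed by [@bib6]:

```python
import math

def pool_random_effects(rrs, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of relative risks.

    Each study's standard error is recovered from its 95% CI on the
    log scale as (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    logs = [math.log(r) for r in rrs]
    ses = [(math.log(h) - math.log(l)) / (2 * 1.96)
           for l, h in zip(ci_los, ci_his)]
    w = [1.0 / s**2 for s in ses]  # fixed-effect inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rrs) - 1)) / c)
    # random-effects weights widen each study's variance by tau^2
    w_re = [1.0 / (s**2 + tau2) for s in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Illustrative (hypothetical) study results:
rr, lo, hi = pool_random_effects([1.3, 1.1, 1.4],
                                 [1.0, 0.9, 1.1],
                                 [1.7, 1.35, 1.8])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

When heterogeneity is low, tau-squared shrinks towards zero and the result approaches the fixed-effect (plain inverse-variance) average used for the low-exposure estimate below.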
For low-exposure groups, an RR (based on a fixed-effects model) of 1.03 (95% CI=0.84--1.26) was estimated. This was based on a reassessment of 6 of 10 studies examined by [@bib6], for which they were able to classify exposure as 'low\' on the basis of JEM. They noted that, among these 10 studies, although there were a few positive results (three above RR=1.1), most were close to unity.
### Hairdressers and barbers
Elevated risk of bladder cancer in association with occupational exposure to hair dyes in hairdressers, barbers, beauticians and cosmetologists has been widely reported; for a wider review of these studies, refer to the relevant HSE report ([@bib21]). Hairdressers have used a wide range of chemical products, including hair colourants and bleaches, shampoos and conditioners (e.g., primarily AAs, aminophenols and hydrogen peroxide; nitro-substituted AAs, aminophenols, aminoanthraquinones and azo dyes; and metal salts). However, the individual chemicals used have varied over time, and only permanent and semi-permanent colourants are now used to a significant extent by hairdressers ([@bib33]).
### Risk estimates for employment as hairdressers and barbers and bladder cancer
In the present study, a standardised incidence ratio (SIR) of 1.22 (95% CI=0.98--1.51) was used for men, and an SIR of 1.09 (95% CI=0.81--1.43) for women. These figures were based on a study by [@bib13] who followed up a large cohort of \>45 000 male and female Swedish hairdressers recruited via the national census. These estimates were not adjusted for smoking. The study observed an increased risk among men and women irrespective of the census period over 39 years. The study reported the highest SIR of 2.56 for urinary bladder cancer in male hairdressers working in 1960 and followed up from 1960 to 1969. However, this risk decreased to 1.25 when these hairdressers were followed up for the whole period of 1960--1998. This reduction may have been because of the reduced use of brilliantine in male hair ([@bib20]; [@bib60]). Other Scandinavian studies of hairdressers have shown a statistically significant increased risk for bladder cancer in Sweden, Norway, Finland and Denmark ([@bib60]).
### Mineral oils
Exposure to mineral oils, and in particular shale oil, is of historical interest, but metalworking fluids (MWF) appear to be the predominant route for exposure to mineral oils during the study period of interest. A comprehensive and systematic NIOSH review examined the association between MWF exposure and bladder cancer, and concluded that this association is well supported by studies from different geographical locations employing different study designs, all of which controlled for smoking ([@bib8]; [@bib42]).
### Risk estimates for occupational exposure to mineral oils and bladder cancer
In the present study, the research team used papers from a review by [@bib66] describing the relationship between mineral oil and cancer to calculate a risk estimate of 1.39 (95% CI=1.20--1.61), assuming a random-effects model owing to significant heterogeneity across the different study results. This RR was used for those in situations of high exposure to MWF. The studies examined by [@bib66] reported substantial dermal and inhalation exposure for occupations such as metal machining, print press operating, and cotton and jute spinning. The overall RR for bladder cancer related to exposure to mineral oils was taken as a weighted average across these case--control and population-based studies reviewed, and reflected incidence of, and mortality from, cancers of the bladder and other urinary tract organs. Owing to the absence of sufficient exposure--response data for mineral oils, an RR of 1 was assigned to the lower and background exposure categories of workers in the Labour Force Survey (LFS) data set.
### Occupation as a painter
Many chemicals are used in paint products such as pigments, extenders, binders, solvents and additives. Painters are commonly exposed by inhalation to solvents and other volatile paint components; inhalation of less volatile and non-volatile compounds is common during spray painting.
### Risk estimates for employment as a painter and bladder cancer
For this study, an overall RR of 1.17 (95% CI=1.11--1.27) was used to estimate the attributable fraction (AF) for work in this occupation. This overall estimate was obtained from a study by [@bib7] who systematically reviewed all epidemiological studies on bladder cancer in painters published after the [@bib32] monograph (this followed changes to paint technology and the nature of the exposures). This study examined cohort and case--control investigations from Europe and the United States, including one from the United Kingdom, up until 2004. Several RR estimates were obtained, including a pooled RR of 1.10 (95% CI=1.03--1.18) for four cohort studies, a pooled RR of 1.23 (95% CI=1.11--1.37) for mortality estimates and a pooled RR of 1.35 (95% CI=1.19--1.53) from 14 case--control studies, which included a pooled analysis of another 11 case--control studies.
Other studies have lent weight to this evidence of an association between bladder cancer and painting as an occupation. [@bib10] undertook a meta-analysis of published papers (1966--1998) from Europe, North America, New Zealand and China, and derived a combined standardised mortality ratio (SMR) of 1.30 (95% CI=1.14--1.50) for bladder cancer, based on 17 follow-up studies of painters. Occupational cohort studies also provided a combined SMR of 1.26 (95% CI=0.98--1.62). In the United Kingdom, a hospital-based case--control study of urothelial cancers by [@bib61] obtained a significant RR of 1.91 (95% CI=1.41--2.91) in occupations specified as manufacturing paints or in the professional use of paints.
### Polycyclic aromatic hydrocarbons
Polycyclic aromatic hydrocarbons are formed by the incomplete combustion of carbon-containing fuels such as wood, coal, diesel, petrol, fat or tobacco. Polycyclic aromatic hydrocarbons are produced in a number of occupational settings including coal gasification, coke production, coal-tar distillation, chimney sweeping, coal tar and pitches, carbon black manufacture, carbon and graphite electrode manufacture, creosotes and others ([@bib26], [@bib27], [@bib28], [@bib29]). Higher risks have been reported for specific categories of painters, metal, textile and electrical workers, miners, transport operators, excavating-machine operators and also for non-industrial workers such as concierges and janitors ([@bib37]). Industries entailing a high risk included salt mining, manufacture of carpets, paints, plastics and industrial chemicals. A number of studies have documented an increased risk among workers exposed to petrochemicals and combustion products in different industries, suggesting an association with PAHs, and to their nitro-derivatives as well as DEE ([@bib49]; [@bib55]).
### Risk estimates for occupational exposure to PAHs and bladder cancer
For the present study, an overall RR of 1.4 (95% CI=1.2--1.7) was based on 26 studies and was applied to those in 'high-exposed\' groups in the manufacturing industry. This overall RR was obtained from [@bib5] who reviewed the cancer risk from occupational and environmental exposure to PAHs, in aluminium production, coal gasification, coke production, iron and steel foundry work, DEE exposure, and workers exposed to coal tars and related products (i.e., tar distillation, shale oil extraction, creosote exposure, carbon black manufacture, carbon and graphite electrode manufacture, chimney sweeps and calcium carbide production). Results from all these sectors, with the exception of DEE exposure (for which AFs are calculated separately) and coke production (for which no evidence was found for a raised risk of bladder cancer), were used to calculate an inverse-variance-weighted combined estimate of RR.
The RR for the 'low-exposed\' group was set to 1 on the basis of combined OR estimates from population-based case--control studies covered in the same review [@bib5]. These included lower relative risk estimates of 0.9 (95% CI=0.8--1.1) for a large Montreal case--control study, and 1.2 (95% CI=1.1--1.4) for a range of other smaller studies (both used a random-effects model). Exposures to DEE and driving, and to mineral oils (cutting fluids), were again excluded from this current analysis.
### Rubber industry
In 1982, IARC classified work in the rubber industry as Group 1 ([@bib25]), with numerous studies providing strong evidence that workers in the rubber industry had elevated risks for bladder cancer ([@bib54]; [@bib71]; [@bib38]; [@bib11]), with risks persisting even among former workers ([@bib18]).
### Risk estimates for employment in the rubber industry and bladder cancer
More recent evidence suggests that the risk for bladder cancer ceased in the rubber industry in GB after 1950, and thus no AF has been calculated. For example, in a recently defined UK cohort of workers employed between 1982 and 1991, \>8000 workers from 41 factories were followed up through 2004 ([@bib16]). For both men and women, incidence was nonsignificantly raised, as was mortality among men. An analysis of 6500 male workers at a UK tyre factory who worked between 1946 and 1960 also found a nonsignificant excess in mortality ([@bib67]).
Mortality studies have been undertaken by the British Rubber Manufacturers\' Association ([@bib45]; [@bib46]; [@bib62]) and by HSE ([@bib3]). Detailed studies have also been undertaken because of inadvertent exposure of some workers to 2-naphthylamine (used as an antioxidant) ([@bib21]). In the most recent of these studies, men employed in the period 1945--1949 were compared with those first employed after January 1950 (when 2-naphthylamine was removed) and were followed up until 1995. Overall, bladder cancer incidence significantly increased (standardised risk ratio (SRR)=1.23, 95% CI=1.02--1.48), especially for those employed in the period 1945--1949 (SRR=1.71, 95% CI=1.30--1.48) compared with those first employed after 1950 (SRR=1.02, 95% CI=0.72--1.39).
Occupational exposures considered for kidney cancer
---------------------------------------------------
### Coke production
Coke production has been classified as an IARC Group 1 carcinogenic occupation ([@bib28]) related to the exposure to PAHs. However, as there are few reports of a direct association between PAH exposure and kidney cancer (unlike for bladder cancer), it is not possible to derive a relative risk estimate for PAH as a causal factor of this cancer ([@bib5]).
### Risk estimates for coke production and kidney cancer
The most appropriate study (considering cohort size and period of follow-up) for coke-oven workers in Britain was provided by [@bib23] who found a risk estimate of 1.16 for coke-oven workers employed in 6 British steel industry plants (2790 men) and a deficit of risk of 0.16 for coke-oven workers at 13 National Smokeless Fuels plants in Britain (3883 men). If these two are combined, an overall SMR of 0.58 is obtained, showing no excess of kidney cancer. As this combined SMR was \<1, no AF calculation was carried out for coke-oven workers. Another study by [@bib14] reported an SMR of 2.52 for kidney cancer but in a much smaller cohort of 610 coke-oven workers.
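Combining SMRs across cohorts, as above, follows from the definition of an SMR as observed deaths divided by expected deaths: the pooled value is total observed over total expected, which weights each cohort by its expected deaths rather than averaging the two ratios. A minimal sketch, using hypothetical observed/expected counts (the text reports only the ratios, so these numbers are illustrative):

```python
def combined_smr(cohorts):
    """Combine standardised mortality ratios (SMRs) across cohorts.

    Each cohort is an (observed_deaths, expected_deaths) pair; the
    pooled SMR is total observed / total expected, so larger cohorts
    (more expected deaths) carry proportionally more weight.
    """
    total_observed = sum(obs for obs, _ in cohorts)
    total_expected = sum(e for _, e in cohorts)
    return total_observed / total_expected

# Hypothetical counts for two cohorts: one with an apparent excess,
# one with a deficit, echoing the pattern described in the text.
cohort_a = (6, 5)    # SMR = 1.20
cohort_b = (2, 12)   # SMR ~ 0.17
print(round(combined_smr([cohort_a, cohort_b]), 2))  # 8/17 ~ 0.47
```

Because the deficit cohort is the larger one, the pooled SMR falls below 1, which is why no AF was calculated for coke-oven workers.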
### Trichloroethylene
Trichloroethylene is not widely used today, but has been used as a metal degreasant/solvent in a range of manufacturing industries including rubber, textile and paint manufacture, and as a dry cleaning solvent ([@bib34]).
### Risk estimates for occupational exposure to TCE and kidney cancer
For the present study, the relative risk estimate for 'higher-exposure\' situations was 1.2 (95% CI=0.8--1.7), based on an average SMR calculated by [@bib72], who provided a robust methodology to evaluate 20 cohort and 40 case--control studies of TCE exposure. They divided these cohort studies into three tiers based on the specificity of the exposure information and consideration of confounding influences from other exposures. For those situations considered as 'low exposures\', the RR has been set to 1 based on a study by [@bib41] that provided a low-exposure SMR of 0.47 (95% CI=0.01--2.62).
The majority of cohort studies of TCE exposure report small, usually nonsignificant, elevations or deficits in kidney cancer associated with TCE exposure ([@bib21]). As all the studies include exposure to a combination of organic solvents, of which TCE may be one component, it is difficult to determine whether TCE itself is the causal factor; exposure is more likely to be to organic solvents in general than to TCE specifically. Renal cell carcinoma is the cancer usually associated with occupational exposure to TCE ([@bib34]), but links have also been made with TCC; thus, the risk estimate can be used for attributable risk estimation for all kidney cancers.
Estimation of numbers ever exposed
----------------------------------
The data sources, major industry sectors and jobs for estimation of numbers ever exposed over the REP, defined as the period during which exposure occurred that was relevant to the development of the cancer in the target year 2005, are given in [Table 1](#tbl1){ref-type="table"}.
For bladder cancer, AFs were estimated for occupational groups as a whole for painters and hairdressers/barbers. Coal tar and pitches, aluminium production, coal gasification, coke production and petroleum refining have all been included within PAHs. Auramine and magenta manufacture have been included under AAs. Exposure to boot and shoe manufacture/repair was considered only up to 1962 and was included with AAs. For the rubber industry, the risk was confined to before 1950 in the United Kingdom.
The following occupations were designated as jobs with known exposure to soluble MWFs in large droplet form: press and machine tool setters; other centre lathe turners; machine tool operators; machine tool setter-operators; press stamping and automatic machine operators; and toolmakers, tool fitters and markers-out. In addition, a number of occupations were assigned a background exposure, including foremen of metal polishers, shot blasters and fettlers/dressers; metal polishers; and fettlers, dressers and shot blasters.
For PAHs, workers involved in the manufacture of industrial chemicals, miscellaneous products of petroleum and coal, as well as other non-metallic mineral products, and workers in the iron and basic steel and non-ferrous metal basic industries were assigned a high-exposure level. For DEE, workers in the metal ore and other mining industries, construction, land transport and services allied to transport were assigned to the high-exposure level.
For TCE, high-exposure-category industries were taken as those manufacturing finished metal products where TCE was likely to have been used as a metal degreasant, as well as the textile industry for similar reasons. The CARcinogen EXposure Database (CAREX) records only 117 workers as being exposed to TCE in clothing manufacture, and thus it was assumed that 99% of these workers were men. For exposed service workers, it was assumed that 25% were men, based on numbers of drycleaners/launderers reported in the UK LFS 1979--2003 (19% male workers in 1979, 25% in 1991 and increasing to 38% in 2003).
Over the current burden period of 1956--1996, the number of workers employed in coke production industries decreased from ∼20,000 to \<500.
Results
=======
Owing to assumptions made about cancer latency and working age range, only cancers in ages 25 years and above in 2005/2004 could be attributable to occupation. In the present study, a latency period of at least 10 years and up to 50 years has been assumed for urinary tract cancers. For bladder cancer, AFs have been calculated for exposure to mineral oil, AAs, PAHs (in coal tar and pitches, aluminium production, coal gasification, coke production and petroleum refining), DEE and for occupation as painters, and hairdressers and barbers. For the rubber industry, the risk for bladder cancer was confined to before 1950 in the United Kingdom; therefore, no AF was calculated. An AF has been calculated for kidney cancer for TCE exposure only. [Table 2](#tbl2){ref-type="table"} provides a summary of the attributable deaths and registrations in Britain for 2005 and 2004 and shows the separate estimates for men and women, respectively.
For all exposure scenarios combined, the estimated overall AF for bladder cancer was 5.28% (95% CI=3.43--7.72%). This resulted in a total of 245 attributable deaths (95% CI=159--358) and 550 attributable registrations (95% CI=357--795).
For all exposure scenarios combined, the estimated overall AF for kidney cancer was 0.04% (95% CI=0.00--0.15%). This resulted in a total of one attributable death (95% CI=0--5) and three (95% CI=0--10) attributable registrations.
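The AFs above combine each exposure's relative risk with the proportion of the population ever exposed; attributable deaths and registrations then follow by applying the AF to the total counts. A minimal sketch of this arithmetic using Levin's population attributable fraction, the standard formula in burden-of-disease work (the inputs below are hypothetical, not the study's actual figures):

```python
def levin_af(p_exposed, rr):
    """Levin's population attributable fraction:
    AF = p(RR - 1) / (p(RR - 1) + 1),
    where p is the proportion of the population ever exposed
    and RR is the relative risk in the exposed group.
    """
    excess = p_exposed * (rr - 1.0)
    return excess / (excess + 1.0)

def attributable_cases(total_cases, p_exposed, rr):
    """Deaths or registrations attributable to the exposure."""
    return levin_af(p_exposed, rr) * total_cases

# Hypothetical inputs: 2% of the population ever exposed at RR = 1.4.
print(f"AF = {levin_af(0.02, 1.4):.2%}")           # AF = 0.79%
print(round(attributable_cases(4600, 0.02, 1.4)))  # ~37 attributable deaths
```

Note how even a modest RR yields a non-trivial attributable count when the exposed proportion or the total case load is large, which is why widespread exposures such as mineral oils and DEE dominate the totals above.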
Exposures affecting bladder cancer
----------------------------------
There were 101 654 men and 94 170 women 'ever exposed\' to AAs over the REP. The overall AF for bladder cancer and exposure to AAs was estimated as 0.67% (95% CI=0.30--1.49%), with 31 (95% CI=14--69) deaths and 66 (95% CI=30--147) registrations, respectively. Male workers in the wire and cable manufacturing and textile finishing industries were most at risk, whereas female workers in the dry cleaning industry were most at risk.
There were an estimated 1 636 322 men and 426 949 women 'ever exposed\' to DEE over the REP. The overall total AF for bladder cancer and exposure to DEE was estimated as 1.00% (95% CI=0.17--2.03%), with 47 (95% CI=8--94) deaths and 106 (95% CI=18--214) registrations, respectively. Among men, workers in construction and land transport were at most risk.
There were in total 4,426,581 men and 466,252 women 'ever exposed\' to mineral oils over the 40-year relevant period. The overall total AF for bladder cancer and exposure to mineral oils was estimated as 2.81% (95% CI=1.47--4.31%), with 131 (95% CI=68--200) attributable deaths and 296 (95% CI=155--452) registrations. Metal workers were at most risk (231 male and 21 female registrations), especially machine tool operators (157 men and 15 women).
There were an estimated 334 339 men and 188 252 women 'ever exposed\' to PAHs. Work with coal tar and pitches, aluminium production, coal gasification, coke production and petroleum refining were all included under PAHs. The overall total AF for bladder cancer and exposure to PAHs was estimated as 0.07% (95% CI=0.03--0.11%), with three (95% CI=1--5) deaths and seven (95% CI=3--11) registrations, respectively. More than half of the total registrations occurred among workers in the iron and steel basic industries.
For work as a painter, 1 118 813 men and 130 630 women were estimated to have 'ever worked\' in the occupation over the REP. The overall total AF for bladder cancer and work as a painter was estimated as 0.67% (95% CI=0.44--0.91%), with 31 (95% CI=20--42) deaths and 71 (95% CI=47--97) registrations, respectively.
There were an estimated 96 041 male and 631 937 female hairdressers and barbers over the REP. The overall total AF for bladder cancer and work as a female hairdresser or barber was estimated as 0.16% (95% CI=0.00--0.63%), with 8 (95% CI=0--29) deaths and 15 (95% CI=0--56) registrations, respectively.
Exposures affecting kidney cancer
---------------------------------
Over the 40-year exposure period (1956--1996), a total of 43 861 male and 42 288 female workers were 'ever exposed\' to TCE, with the majority (∼85%) working in manufacturing industries. For women, there was a roughly equal split between manufacturing and services. Most exposures (\>95% for men and women) were classified as 'high\'. For kidney cancer, the estimated total AF was 0.04% (95% CI=0.00--0.15), with one (95% CI=0--5) attributable death and three (95% CI=0--10) attributable registrations. Attributable deaths/registrations corresponded to exposure in the manufacturing industries for both sexes.
Discussion
==========
Our estimated AF of 7.06% for all bladder cancer in men is lower than the 10% estimated by [@bib15] in their original critical review of the literature, as well as the AF of 14.2% estimated by [@bib43] and the estimates for US white and non-white men of 21--27% ([@bib58], [@bib59]). However, our overall estimate of male and female bladder cancer of 5.3% is higher than the 2% obtained by [@bib17] for Nordic countries, and approximately the same as the 5.5% obtained for France by [@bib4]. It is also within the ranges observed in the qualitative reviews by [@bib70], [@bib35] and [@bib63], and a review of occupational studies in Italy ([@bib70]; [@bib69]; [@bib2]).
Overall, the AF for kidney cancer obtained in this study was 0.04% (men and women), which is substantially lower than the figures quoted by [@bib43] of 4.7% for men and 0.8% for women.
It is possible that the overall AFs for kidney and bladder cancer reported here might be underestimated because certain agents and exposure circumstances were not considered. During 2009, IARC updated the critical review of Group 1 carcinogens as part of the 100th monograph. During this review process, the various committees reclassified arsenic as a Group 1 carcinogen for bladder cancer ([@bib64]), and soots and coal tars as Group 2A carcinogens ([@bib1]). Arsenic was also classified as a Group 2A carcinogen for kidney cancer, as was cadmium.
The lack of consistency in occupational findings for both cancers may partly be explained by small sample size, recall bias, misclassification and low levels of exposures, inadequate adjustment for confounding factors and short duration of follow-up in some studies.
British Occupational Cancer Burden Study Group
==============================================
Lesley Rushton (PI)^\*,1^, Sanjeev Bagga^3^, Ruth Bevan^3^, Terry Brown^3^, John W Cherrie^4^, Gareth S Evans^2^, Lea Fortunato^1^, Phillip Holmes^3^, Sally J Hutchings^1^, Rebecca Slack^5^, Martie Van Tongeren^4^ and Charlotte Young^2^.
^1^Department of Epidemiology and Biostatistics, School of Public Health and MRC-HPA Centre for Environment and Health, Imperial College London, St Mary\'s Campus, Norfolk Place, London W2 3PG, UK; ^2^Health and Safety Laboratory, Harpur Hill, Buxton, Derbyshire SK17 9JN, UK; ^3^Institute of Environment and Health, Cranfield Health, Cranfield University, Cranfield MK43 0AL, UK; ^4^Institute of Occupational Medicine, Research Avenue North, Riccarton, Edinburgh EH14 4AP, UK; ^5^School of Geography, University of Leeds, Leeds LS2 9JT, UK.
See [Appendix](#app1){ref-type="app"} for the members of the British Occupational Cancer Burden Study Group.
The authors declare no conflict of interest.
###### Occupational agents, groups of agents, mixtures and exposure circumstances classified by the IARC monographs, Vols 1--88, into Groups 1 and 2A, which have the kidney and/or bladder as the target organ
| Agents, mixture, circumstance | Main industry, use | Evidence of carcinogenicity in humans | Source of data for estimation of numbers ever exposed over REP | Comments |
| --- | --- | --- | --- | --- |
| **Group 1: Carcinogenic to humans** | | | | |
| **Agents, groups of agents** | | | | |
| Aromatic amine dyes: 4-aminobiphenyl, benzidine, 2-naphthylamine | Production: dyestuffs and pigment manufacture | Bladder *strong* | CoE and LFS | |
| Coal tars and pitches | Production of refined chemicals and coal tar products (patent-fuel); coke production; coal gasification; aluminium production; foundries; road paving and construction (roofers and slaters) | Bladder *suggestive* | CAREX | Included with PAHs |
| Polyaromatic hydrocarbons: benzo(a)pyrene | Work involving combustion of organic matter; foundries; steel mills; fire-fighters; vehicle mechanics | Bladder *suggestive* | CAREX | |
| Mineral oils, untreated and mildly treated | Production; used as lubricant by metal workers, machinists, engineers; printing industry (ink formulation); used in cosmetics, medicinal and pharmaceutical preparations | Bladder *suggestive* | LFS | |
| **Exposure circumstances** | | | | |
| Coke production | Coal-tar fumes | Kidney *suggestive*; Bladder *suggestive* | LFS | AF not calculated for kidney -- RR\<1; included with PAHs for bladder |
| Aluminium production | Pitch volatiles; aromatic amines | Bladder *strong* | CAREX | Included with PAHs |
| Auramine manufacture | 2-naphthylamine; auramine; other chemicals; pigments | Bladder *strong* | CAREX | Included with aromatic amines |
| Magenta manufacture | Magenta; ortho-toluidine; 4,4′-methylene bis(2-methylaniline); orthonitrotoluene | Bladder *strong* | CAREX | Included with aromatic amines |
| Rubber industry | Aromatic amines; solvents | Bladder *strong* | CoE and LFS | Risk confined to pre-1950 in GB |
| Boot and shoe manufacture and repair | Leather dust; benzene and other solvents | Bladder *suggestive* | CAREX | Exposure up to 1962 included with aromatic amines |
| Coal gasification | Coal tar; coal-tar fumes; PAHs | Bladder *strong* | CAREX | Included with PAHs |
| Painters | | Bladder | LFS | |
| **Group 2A: Probably carcinogenic to humans** | | | | |
| **Agents, groups of agents** | | | | |
| Trichloroethylene | Production; dry cleaning; metal degreasing | Renal *suggestive* | CAREX LFS | Included with PAHs -- benzo(a)pyrene above |
| Polyaromatic hydrocarbons: dibenz(a,h)anthracene, cyclopenta(c,d)pyrene, dibenzo(a,l)pyrene | Work involving combustion of organic matter; foundries; steel mills; fire-fighters; vehicle mechanics | Bladder *suggestive* | CAREX | |
| Diesel engine exhaust | Railroad, professional drivers; dock workers; mechanics | Bladder *suggestive* | CAREX | |
| Intermediates in plastics and rubber manufacturing: 4,4′-methylene bis(2-chloroaniline); styrene-7,8-oxide | Production; curing agent for roofing and wood sealing. Production; styrene glycol production; perfume preparation; reactive diluent in epoxy resin formulations; as chemical intermediate for cosmetics, surface coating, and agricultural and biological chemicals; used for treatment of fibres and textiles; in fabricated rubber products | Bladder *suggestive* | CoE and LFS | |
| Aromatic amine dyes: benzidine-based dyes, 4-chloro-*ortho*-toluidine, *ortho*-toluidine | Production; used in textile, paper, leather, rubber, plastics, printing, paint, and lacquer industries. Dye and pigment manufacture; textile industry. Production; manufacture of dyestuffs, pigments, optical brightener, pharmaceuticals, and pesticides; rubber vulcanising; clinical laboratory reagent; cleaners and janitors | Bladder *suggestive* | CAREX | Included in aromatic amines |
| **Exposure circumstances** | | | | |
| Hairdressers and barbers | Dyes (aromatic amines, amino-phenols with hydrogen peroxide); solvents, propellants; aerosols | Bladder *suggestive* | LFS | |
| Petroleum refining | PAHs | Bladder *suggestive* | LFS | Included with PAHs |
Abbreviations: CAREX=CARcinogen EXposure Database; CoE=Census of Employment; IARC=International Agency for Research on Cancer; LFS=Labour Force Survey; REP=relevant exposure period.
###### Urinary cancer burden estimation results for men and women
| Agent | Number of men ever exposed | Number of women ever exposed | Proportion of men ever exposed | Proportion of women ever exposed | AF men (95% CI) | AF women (95% CI) | Attributable deaths (men) (95% CI) | Attributable deaths (women) (95% CI) | Attributable registrations (men) (95% CI) | Attributable registrations (women) (95% CI) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Bladder cancer** | | | | | | | | | | |
| Aromatic amines | 101,654 | 94,170 | 0.0052 | 0.0045 | 0.0070 (0.0034--0.0151) | 0.0060 (0.0024--0.0144) | 21 (10--46) | 10 (4--23) | 49 (24--106) | 17 (7--41) |
| Diesel engine exhaust | 1,636,322 | 426,949 | 0.0843 | 0.0203 | 0.0145 (0.0026--0.0283) | 0.0017 (0.0000--0.0054) | 44 (8--86) | 3 (0--9) | 102 (18--198) | 5 (0--15) |
| Hairdressers/barbers | 96,041 | 631,937 | 0.0050 | 0.0301 | 0.0011 (0.0000--0.0025) | 0.0027 (0.0000--0.0134) | 3 (0--8) | 4 (0--21) | 8 (0--18) | 8 (0--38) |
| Mineral oils | 4,426,581 | 466,252 | 0.2228 | 0.0222 | 0.0392 (0.0205--0.0598) | 0.0073 (0.0038--0.0113) | 119 (62--182) | 12 (6--18) | 275 (144--420) | 21 (11--32) |
| PAHs | 334,339 | 188,252 | 0.0172 | 0.0090 | 0.0008 (0.0004--0.0013) | 0.0004 (0.0002--0.0007) | 2 (1--4) | 1 (0--1) | 6 (3--9) | 1 (1--2) |
| Painters | 1,118,813 | 130,630 | 0.0577 | 0.0062 | 0.0097 (0.0064--0.0132) | 0.0011 (0.0007--0.0014) | 30 (19--40) | 2 (1--2) | 68 (45--92) | 3 (2--4) |
| Totals**[a](#t2-fn2){ref-type="fn"}** | | | | | 0.0706 (0.0457--0.0975) | 0.0189 (0.0128--0.0386) | 215 (139--296) | 30 (21--62) | 496 (321--684) | 54 (37--110) |
| **Kidney cancer** | | | | | | | | | | |
| Trichloroethylene | 43,861 | 42,288 | 0.0023 | 0.0020 | 0.0004 (0.0000--0.0016) | 0.0004 (0.0000--0.0014) | 1 (0--3) | 1 (0--2) | 2 (0--7) | 1 (0--4) |
Abbreviations: AF=attributable fraction; CI=confidence interval; PAHs=polycyclic aromatic hydrocarbons.
Totals are the product sums and are not therefore equal to the sums of the separate estimates of attributable fraction, deaths and registrations for each agent. The difference is especially notable where the constituent AFs are large.
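The footnote's "product sum" can be made concrete: assuming independent exposures, the combined attributable fraction is 1 minus the product of the per-agent (1 - AF) terms, which is why the total falls slightly below the naive sum. A quick check using the men's bladder-cancer AFs from the table above:

```python
# Per-agent attributable fractions for men, bladder cancer (table above).
afs = [0.0070, 0.0145, 0.0011, 0.0392, 0.0008, 0.0097]

unattributed = 1.0
for af in afs:
    unattributed *= 1.0 - af  # fraction not attributable to any agent so far
combined = 1.0 - unattributed

# combined ~= 0.0706, matching the table's total and slightly below
# the simple sum of the AFs, 0.0723.
```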
|
package com.rhwayfun.springboot.configuration.property;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* Prefixed property configuration.
* Uses @ConfigurationProperties to map property entries onto the fields of this class.
*
* Property load order:
*
* 1. Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).
* 2. @TestPropertySource annotations on your tests.
* 3. @SpringBootTest#properties annotation attribute on your tests.
* 4. Command line arguments.
* 5. Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property)
* 6. ServletConfig init parameters.
* 7. ServletContext init parameters.
* 8. JNDI attributes from java:comp/env.
* 9. Java System properties (System.getProperties()).
* 10. OS environment variables.
* 11. A RandomValuePropertySource that only has properties in random.*.
* 12. Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants)
* 13. Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants)
* 14. Application properties outside of your packaged jar (application.properties and YAML variants).
* 15. Application properties packaged inside your jar (application.properties and YAML variants).
* 16. @PropertySource annotations on your @Configuration classes.
* 17. Default properties (specified using SpringApplication.setDefaultProperties).
*
* @author happyxiaofan
* @since 0.0.1
*/
@Configuration
@ConfigurationProperties(prefix = "my.config")
public class SimpleProperty {
private String app;
private String user;
private int age;
private String email;
private String blog;
private String github;
public String getApp() {
return app;
}
public void setApp(String app) {
this.app = app;
}
public String getUser() {
return user;
}
public void setUser(String user) {
this.user = user;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getBlog() {
return blog;
}
public void setBlog(String blog) {
this.blog = blog;
}
public String getGithub() {
return github;
}
public void setGithub(String github) {
this.github = github;
}
}
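For reference, a sketch of what the matching configuration might look like (the key names follow the `my.config` prefix and the field names above; the values are illustrative only):

```properties
# application.properties - hypothetical values; Spring Boot's relaxed
# binding maps my.config.* keys onto the fields of SimpleProperty.
my.config.app=spring-boot-configuration-demo
my.config.user=happyxiaofan
my.config.age=25
my.config.email=someone@example.com
my.config.blog=https://example.com/blog
my.config.github=https://github.com/example
```

Spring Boot resolves each key through the source list above, so for example a command-line argument `--my.config.age=30` (item 4) would override the packaged `application.properties` value (item 15).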
|
The Locked ONProximity Sensing t-shirt is a stone cold radar-detecting piece of apparel. When separated from companions, its radar screen stays in scan mode. But get within a few meters of another Locked ON wearer? Target detected, baby.
The decal on the front of the shirt is removable, since you probably wouldn't want to wash the electronics, and it runs on three AAA batteries. Its radio frequency transmission/detection range is about three meters, so chances are your eyes will spot a Locked ON compatriot before your radar will. At least, though, everyone around you will know you two were made for each other—or at least, your shirts were. [ThinkGeek via Techeblog]
|
All about clinical management of HAE
Clinical management of hereditary angioedema (HAE) is complex and, in addition to trigger avoidance, may include short- and/or long-term prophylaxis, intervention for acute attacks, and emergency treatment. Separate algorithms may be required for each.13
TIP: The World Allergy Organization guidelines state that every patient with HAE should be considered for home therapy and self-administration training.
Long-term prophylaxis
Ongoing treatment used to prevent symptoms in patients not adequately managed with acute therapy13
The treatment options for long-term HAE prophylaxis are C1-INH concentrate or androgens. Both have been shown to reduce HAE attack frequency.13 Choice of treatment depends on contraindications, adverse events, risk factors for adverse effects, tolerance, response, and dose required to control attacks.13
Short-term prophylaxis
Used to prevent edema when a predictable stressor (eg, dental work) is planned13,14
Short-term prophylaxis is generally limited to patients in unusual circumstances, particularly those about to undergo a surgical or dental procedure. Currently, however, there are no therapies approved for short-term prophylaxis.13
The 2012 WAO guidelines list C1-INH concentrate and androgens as short-term prophylactic options. If used, C1-INH should be administered 1 to 6 hours before the procedure. Short-term prophylaxis with an androgen should begin 5 days pre-procedure and continue 2 to 5 days post-procedure.13
Acute treatment
Treats attacks as they occur to reduce morbidity and prevent mortality13
If none of the approved on-demand drugs is available, solvent detergent-treated plasma or frozen plasma (if a safe supply is available) should be used.13
On-demand therapies are typically approved for use in adults and adolescents; one is also approved for use in pediatric patients.
Emergency treatment
Steps to take for laryngeal or abdominal attacks
Emergency treatment during an acute attack can be extremely challenging because, unlike allergic reactions, swelling related to HAE does not respond to epinephrine, antihistamines, or glucocorticoids.2 C1-INH, administered as early as possible, has been proven effective for the emergent treatment of HAE attacks.15
Laryngeal attacks
Acute HAE attacks involving laryngeal swelling are potentially fatal. Attacks of this type must be treated immediately in the hospital—not in a local clinic—in case emergency intubation or tracheotomy is necessary.16
Involvement of the upper airway usually begins slowly. Voice alteration and dysphagia indicate high risk of total airway obstruction. If there is suspicion of airway involvement, begin treatment immediately.9
Abdominal attacks
Acute abdominal HAE attacks can include severe pain that may mimic appendicitis, bowel rupture, or bowel obstruction. It is very important that HAE be correctly diagnosed in such cases.10
Pain management using NSAIDs is often effective.
Quality-of-life issues with HAE
For patients with HAE, suffering goes beyond the physical. Most feel a loss of control due to frequent, unpredictable attacks and the fear that they may face a life-threatening HAE attack. Many patients suffer from depression, fear, and anxiety, especially if they lack understanding of their condition.17
Healthcare professionals should clearly understand and communicate the issues around HAE and the urgency of treatment.
Additional considerations
Monitoring of "trigger" medications13
Because various medications, such as estrogen-containing oral contraceptives, hormone replacement therapy, and ACE inhibitors, can contribute to the onset of attacks, medication history and selection should be carefully reviewed when treating patients with HAE attacks.
Dental or surgical procedures13
Short-term prophylaxis (using C1-INH, for example) should be considered for patients scheduled to undergo a dental or surgical procedure.
Help maintain quality of life across the care continuum
Recognize the key role of the specialist nurse in education and support
Foster effective communication among team members
Ensure dissemination of information to patients and team
Discuss the need for trigger identification and avoidance
Network and share information among all specialties treating angioedema
Encourage patients to connect with the US Hereditary Angioedema Association (HAEA)
HAE clinical resources
View current recommendations for the treatment of HAE, download diagnostic tools, or get the latest news about HAE.
|
1. Introduction
This code implements MDI tab view for easy navigation.
The views are supported on a control bar, which can be floated (of course
docked!).
2. Implemented Features
Control bar-based owner-drawn tab view. Supports fonts and color settings;
the defaults are as follows:
The normal/inactive tab text is painted black,
The active tab text is painted blue,
The tab text of a modified document is painted red.
The control bar can be docked (currently only top and bottom) or floated.
Custom MDI window list dialogs, similar to VC++.
Full screen modes.
Cool menu popup, three types.
From a right click on the tab control itself, as shown above.
From the right click on the tab control bar.
From a right click in the MDI main window client area; just a placeholder
(demo), build your own.
Display of company logo text in the MDI main window client area.
MDI client background text logo and banner painting.
Saves and restores the state of the MDI child window, i.e. normal or
maximize, and the position of the control bars and the main window frame.
Tool tips and tab icons (icons!... just a demo; I do not know what will be
better, let me know your views).
Minimal modifications to existing projects, almost automated! Only a few
changes to the existing main frame class and maybe the application class;
no base class to derive from.
3. Unicode?
There is no reason why it should not work!
4. Files required:
WindowManager.cpp/h: Manages the window list, and subclasses the MDI
client to create the tab view bar. It also contains the class
CDocumentList, which lists all open documents. This is really
where life begins... NOTE: The CDocumentList class can
be very useful for many applications, take a good look.
ViewManager.cpp/h: Manages the views of the application, creating
the view tabs.
Others
tabview.h: Contains the resource ids needed by the controls. (NOTE:
Not used directly, deleted later).
tabview.bmp: The full screen and popup menus bitmap.
5. How to use it?
There is a bit of work involved in integrating the
resource file into your project. Let's do it, the difficult part first...
Move the bitmap file, tabview.bmp to your res directory, and
tabview.rc to the project directory.
Add the resource file tabview.rc to the project's *.rc2
file, and merge the resource id file tabview.h with your resource file,
resource.h. If you have not been manually modifying resource files
then read this...
Identify the last resource type (numbering starts from 128) in your
resource.h file, copy and paste the resource type identifiers (2)
from the tabview.h, give each an incremental id and finally increment
the VC++ object _APS_NEXT_RESOURCE_VALUE by 2, i.e. 1 more than
the last incremental id.
Similarly, copy and paste the dialog control ids (8) (numbering starts
from 1000) and increment the _APS_NEXT_CONTROL_VALUE by 8.
Finally, copy and paste the menu item ids (9) (numbering starts from
32771) and increment the _APS_NEXT_COMMAND_VALUE by 9. The
project should now compile without any problem, and you can now delete the
tabview.h file.
Now, move the files WindowManager.cpp/h, ViewManager.cpp/h,
WindowTabCtrl.cpp/h and PopupMenu.cpp/h to the project directory
and add the implementation files to the project.
Open the WindowManager.h file, and in the Forward
Declaration part change the CMainFrame to your main window frame class
name (see TODO). Also include the header file of your main frame class in the
WindowManager.cpp.
Include header files, WindowManager.h and ViewManager.h in
your main frame header file and declare in a public section the
following:
pViewManager, a pointer to an instance of the viewmanager. The
viewmanager, control bar and tab control, is created by the window manager.
uID, an id of the control bar. With the defaulted value, hiding and
showing of the control bar is already implemented. If, however, you decide
to use a different id-say ID_THE_NEWVALUE, you can easily implement the
hiding and showing of the tab control bar by inserting the following in your
main frame message map. No further code is required. This is what is
done to the default status bar and toolbar created by the AppWizard...
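For illustration, the message-map entries would look something like this (a sketch assuming the `ID_THE_NEWVALUE` id mentioned above; `OnBarCheck` and `OnUpdateControlBarMenu` are the stock CFrameWnd handlers MFC already uses for the AppWizard toolbar and status bar):

```cpp
// In the CMainFrame message map: toggle the tab control bar and keep
// its menu item check-marked when the bar is visible.
ON_COMMAND_EX(ID_THE_NEWVALUE, OnBarCheck)
ON_UPDATE_COMMAND_UI(ID_THE_NEWVALUE, OnUpdateControlBarMenu)
```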
In order for command messages to be handled directly in the
CWindowManager, use the ClassWizard to override the
OnCmdMsg() method of the main frame and modify it to be similar
to the following:
// This function routes commands to window manager, then to rest of system.
BOOL CMainFrame::OnCmdMsg(UINT nID, int nCode,
void* pExtra, AFX_CMDHANDLERINFO* pHandlerInfo)
{
// Without this, the window manager menu commands will be disabled;
// this is because without routing the command to the window manager,
// MFC thinks there is no handler for it.
if (m_MDIClient.OnCmdMsg(nID, nCode, pExtra, pHandlerInfo))
return TRUE;
return CMDIFrameWnd::OnCmdMsg(nID, nCode, pExtra, pHandlerInfo);
}
Tired eh! Just compile and have fun...
Well, you will need this too...build the menus, all the menu ids are
already defined so just select them from the ID combo box of the
property sheet.
On the View menu build the following:
Menu Item ID
MENU ITEM STRING
ID_VIEW_VIEWTAB
Op&en File Tabs
ID_VIEW_FULLSCREEN
F&ull Screen
On the Window menu build the following:
ID_WINDOW_NEXT
Ne&xt Window
ID_WINDOW_PREVIOUS
Pre&vious Window
ID_WINDOW_CLOSE_ALL
C&lose All
ID_WINDOW_SAVE_ALL
&Save All
Note: The Windows... menu is built for you automatically.
Well, well, well...if you need to support the position and control bar
restoration then do this...(unfortunately, neither the main frame class
destructor nor the window manager class destructor is called by the MFC
framework!)
Add a message handler for the WM_CLOSE message for your main
frame, or modify the existing one adding the following single line, calling
the SaveMainFrameState() method...
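A minimal sketch of such a handler, assuming the `m_MDIClient` member and the `SaveMainFrameState()` method referred to elsewhere in this article:

```cpp
void CMainFrame::OnClose()
{
    // Persist the frame position and control bar layout before the
    // frame is destroyed (the destructor is never called by MFC).
    m_MDIClient.SaveMainFrameState();
    CMDIFrameWnd::OnClose();
}
```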
Finally! replace the main frame displaying code at the end of the
InitInstance() of application class with the
RestoreMainFrameState() as
BOOL CDemoApp::InitInstance()
{
.............................
// The main window has been initialized, so show and update it.
// pMainFrame->ShowWindow(m_nCmdShow); ///// <--- we do not need this one!
pMainFrame->m_MDIClient.RestoreMainFrameState(m_nCmdShow);
pMainFrame->UpdateWindow();
return TRUE;
}
6. Code Snippets
The icon support in the tab is application specific.
What I mean is you will need to build a more suitable solution for your
application. If, however, you have some ideas as how to implement something
general let me know.
For the current implementation... The OnCreate() function of
the CViewManager, which created both the tab and itself (the tab
bar), simply creates a place holder icon to fill an image list, which is then
attached to the tab.
LoadStandardIcon() is used to load a system icon there. You
may wish to replace this with the commented code, IDR_DEMOTYPE is
your application specific resource type.
HICON hIcon = AfxGetApp()->LoadIcon(IDR_DEMOTYPE);
In the AddView() function of the same class, the tab image
index is set to 0 (zero), the only image in the image list. Finally, in the
DrawItem() function of the tab, CWindowTabCtrl, the
dummy icon is replaced by the small icon attached to the frame of your child
window, parent of the view.
Initially, I considered getting the icon from the Windows shell, based on
the registered file extension SHGetFileInfo() API. However, it does
not look nice for the view frame icon to be different from the tab view icon. By
the current implementation, all that is needed is good citizenship, like VC++
itself. Let your application's child window system icons reflect the file type and
there will be no need to write extra code.
7. Credits
The code is based on code and ideas shared by the following:
Iuri Apollonio, he wrote the base code using a status bar. What is
his new email address?
Ivan Zhakov, his "MDI Windows Manager dialog" is better than
Iuri's. The Window manager class is now the main engine driving the view
manager.
Chris Maunder, his owner-drawn tab control code snippets are used
to improve Iuri's.
Adolf Szabo, his Full-screen mode idea is simpler than that
implemented by MS and MS's Mike B.
YOU, and many others...
Use this code in any project, there is no restriction!
Write whatever you like or do not like about this code in the comment
section, I will take note of all. Happy coding...
Paul
Selormey, Japan.
8. To Do
Ability to dock on all sides of the main frame, involves more work since
the tab control is owner-drawn.
Improved main frame window position saving and restoration...
Support for multiple monitors-I do not have the OS (Win98/Win2000) to
test this now (if I do, not the video card and monitors!)
Support for screen resolution changes. I have API code for my C/SDK
application, well tested on Win95 using the QuickRes program. I do not use
QuickRes currently, so a bit reluctant to implement it.
Your wishes...
9. Known Issues
The popup menu does not currently support accelerators (but is this
needed?).
The modified flag is not re-drawn immediately, I do not wish to play any
game to introduce flickers! (implementation still good enough!).
Add yours...
10. In this Update
Many parts are rewritten to address most of the issues in the comment
section.
The "tab view" is now a CMDIChildWnd class, so splitters and other features
should work.
Many bug fixes.
Should work with existing code with little or no modification, see Step 5.
|
Database Backup failed - Not enough disk space
I've been getting a pop-up stating that my database backup has been failing. However, when I look, I see that I have plenty of space.
CAUSE:
By default, Kaseya stores the two most recent database backups on the server.
If the disk that the database backups are stored on has only 50 GB of free space and the database itself is 30 GB, then the first database backup will succeed but the second will fail: after the first backup, free space drops to 20 GB (50 GB - 30 GB), which is not enough to hold another 30 GB backup.
RESOLUTION:
In this event, you will need to increase the amount of space allocated to the drive that the database backups are stored on.
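The arithmetic above boils down to a simple check (a hypothetical helper, not part of Kaseya): the drive must have room for every retained copy of the database at once.

```python
def backups_fit(free_gb, db_size_gb, retained=2):
    """Return True if `retained` full database backups fit in `free_gb`.

    Kaseya keeps the two most recent backups by default, so the drive
    needs room for two complete copies of the database, not just one.
    """
    return free_gb >= retained * db_size_gb

# Scenario from above: 50 GB free, 30 GB database.
# A single copy fits, but the default two retained copies need 60 GB.
```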
|
Virginia Minifarm Land For Sale
Sort :
Sort By
Blue Ridge Mountain Mini-Farm with Extensive Trout Stream Frontage. This is an excellent 29.416 acre farm, fenced for horses, goats or cows, 36x40 barn built in 2008 with a 12’ shed off one side, three 10x12 stalls and a tack room. All of this is within 15 minute pull of Virginia Highlands Horse Trails and Fox Creek Horse Camp. This property has...
Blue Ridge Mountain Farm House for Sale. This circa 1841 gem has been well cared for, has many original features and located on 3.25 acres with a wonderful view. Original wide plank flooring is in great condition and located throughout the original part of the home. Doors, woodwork and windows are also in good shape and really take you back in...
Grayson Highlands Mini-Farm with Wonderful Views. This property has 2 houses. The first is an antique cabin circa 1880. This is the real thing. It has a full bathroom, kitchen and woodstove for heat. Spring water services the cabin and there are several strong branches on the property for watering livestock. There is a barn in good condition and...
Blue Ridge Mountain Mini Farm. Very nice custom built home on acreage located in Elk Creek Virginia. This 4 bedroom 2 and 1/2 bath home is custom built with a guest/mother-in-law suite located over the attached 2 car garage. Large open floor plan with cathedral ceiling in living area, extra large bedrooms and spacious closets throughout. Massive...
Blue Ridge Mountain Cabin and Acreage. This 3 bedroom 3 bath cabin is situated on 45 acres and is in very close proximity to Trout fishing on Big Wilson Creek, the New River, Grayson Highlands Park, Jefferson National Forest and all of the recreational opportunities Grayson County has to offer. This 45 acre tract has long frontage on Bear Branch...
Mini Farm.. Travel through beautiful Grayson County country side to this secluded mountain home. Nestled amidst 29 rolling acres, this house is filled with possibilities. Two bedrooms, 1 bath and a full basement, just waiting for you to make it your own. This would make an awesome hunting cabin as well. There is a small musical branch on the...
Welcome to 6157 Pole Green Rd! INVESTMENT POTENTIAL. This 1,784 square foot home sits on 3 level acres just minutes from I-295 and Rt 360. The property offers great potential for a small residential development, mini farm, or someone who's interested in renovating a home that's located on some acreage. The property is on the corner of Pole Green...
Welcome to the 22 Acres along Landora Bridge Rd! The beautiful property offers just about everything. It has privacy, gently rolling pastures, mature timber, a deep well (about 7 yrs old), electricity on site, around 1000 kiwi plants and trellis system, and a drip irrigation system to the plantings. This is an excellent mini farm ready to go for...
This is a special property that has it all. Views, near town, seclusion, wonderful tree lined drive, rock outcroppings, pasture, 4 bedroom, 3 bath home and barn. There are many ATV or walking trails throughout the wooded acreage. The home has been professionally decorated and furnished and needs only your family as new occupants. You may relax on...
Captured in a beautiful setting is this remodeled farm house that has original hardwood floors, first floor bedroom and charm throughout. This property is set up for the perfect mini farm with mostly openland for crops or grazing, fencing for animals, fruit trees, a two car garage, and shed for
This sizeable 7-bedroom, 5-bath Country Home on 11+ acres is nestled in the heart of the Blue Ridge Mountains with easy access to National Parks and outdoor recreation. The house was previously used as a medical clinic on the first floor with the residence on the second floor. The first floor has 3 (bed)rooms, 1 full bath and 2 half baths. The...
Great potential! This circa 1830's brick home is surrounded on 3 sides by Peach Bottom Creek, has beautiful view and is located within walking distance of the New River. This is a special property in a special location. The 2000+ sq. ft. Farmhouse has spacious rooms, original wood work, original hardwood flooring, 4 fire places, wide covered...
Here is a rare opportunity to take a step back in time, while still enjoying all the modern conveniences. If you have ever pictured yourself retiring to a self-sustaining mini-farm in the quiet peacefulness of the country, this is the perfect opportunity. This place is set up for your own homestead, with spring water, plus a nice small, but very...
New River Frontage in Grayson County VA. This is a great building site/mini-farm with level accessible New River Frontage. There is ample open land for gardening, livestock, orchards or haying. The land has a gentle up slope for a great elevated building site. The view across the river is undeveloped and scenic. This type of property is difficult...
|
QUALITY SUPPORT
Manufacturer's Description
Rise of the Argonauts will immerse gamers in a gladiatorial adventure, set in a wondrously imagined vision of ancient Greece. With deep exploration and epic quests, players will live a life of brutal combat as they lead a team of iconic warriors - including Jason, Hercules and Atalanta - through a world ruled by mythological gods.
Unleash devastating attacks in real time with lethal combinations of weapons, powers granted by the gods and the battle-tested loyalty of your Argonauts (Jason's body is broken up into 5 parts to give a wider range of attacking and defensive options)
Encounter deadly foes, fearsome opponents and fantastical creatures as you explore the vast Aegean - a dynamic world of islands filled with shining cities, lush jungles and deep forests
Fighting alongside fabled heroes, including Hercules, Achilles and Atalanta, Jason's transformation from young king to mythological legend is an essential, epic adventure
From the developers of Battle Realms and Dungeons & Dragons: Dragonshard
|
The beginning of the end of blindness.
Daily archives: 11/06/2013
If your interests lie in macular and vitreoretinal disease, then this year’s ARVO meeting surely did not disappoint. For the first time, a full-day retina subspecialty symposium was held just before ARVO—giving retinal specialists and general eye care practitioners an additional opportunity to hear about the latest and greatest in retina from a roster of world-renowned experts.
It was the perfect way to gear up for the many sessions at ARVO that reviewed the constantly evolving landscape of treatments and imaging technology and introduced new avenues of retinal research. Attendees were eager to hear the long-awaited results of the Age-Related Eye Disease Study 2 (See “The Latest on AREDS2 at ARVO 2013.”). Other hot topics this year included the role of Eylea in age-related macular degeneration (AMD), the genetics of AMD and several novel treatment approaches to AMD. Presentations also highlighted a new retinal prosthesis that may offer help to patients with retinitis pigmentosa (RP), as well as new ways to treat macula edema.
Retinal Prosthesis
It was a milestone moment for retinal technology in February when the FDA approved the first implanted device to treat adults with advanced RP. The Argus II Electronic Retinal Prosthesis System (Second Sight Medical Products)—which has been cleared in Europe since 2011—should be commercially available in the US later this year. For now, it’s been granted “humanitarian use,” an approval pathway limited to devices that treat or diagnose fewer than 4,000 people in the US each year.
The Argus II Implant (left) attaches to the retinal surface with a tack. The cable that both powers the chip and conducts the image signal from the episcleral housing is seen temporally. An early frame of a fluorescein angiogram (right) in a patient with the Argus II Implant demonstrates some persistent macular perfusion. Images: Elaine Leibenbaum, Julia Haller, MD, and Carl Regillo, MD.
A few studies evaluated this prosthesis, hopefully paving the way for its more widespread use. One study looked at the safety profile of 16 patients in Europe who had received the implant.1040/C0017 The patients were followed for an average of 6.2 months and reported no serious surgical or device-related adverse effects. Ten patients experienced no surgical or device-related adverse events at all, while the other six reported minor adverse effects, such as IOP elevation, nausea, fainting, conjunctival irritation and a retinal tear.
A second study found that the Argus II implant has good long-term reliability, with only one failure in 30 subjects (each with an average of 4.2 years of use, representing more than 125 cumulative patient-years).1037/C0014 Further, accelerated lifetime testing demonstrated that finished implants have a projected lifetime of more than 10 years. Another study confirmed these results, echoing previous tests that demonstrated the ability of the prosthesis to provide visual function over several years.349
Diabetic Retinopathy
Researchers evaluated 759 patients from the RISE/RIDE Phase III trials to see if Lucentis (intravitreal ranibizumab, Genentech) had an effect on the severity of a patient’s diabetic retinopathy.4028 Results showed that a greater proportion of patients in the ranibizumab arm had a two- or three-step regression of diabetic retinopathy on the ETDRS scale vs. those in the sham group. A three-step improvement was achieved at 36 months in 3.3% of the sham group, compared to 15.0% and 13.2% in the 0.3mg and 0.5mg treated eyes, respectively. Over the course of 36 months, 33.9% of the sham-treated eyes developed proliferative diabetic retinopathy, as opposed to only 12.8% and 15.1% of the ranibizumab-treated eyes.
Another study evaluated the safety and efficacy of Macugen (intravitreal pegaptanib, OSI Pharmaceuticals Inc.) combined with panretinal photocoagulation (PRP) vs. PRP alone in the regression of retinal neovascularization in eyes with high-risk proliferative disease.2439/C0140 At six months, the combination of pegaptanib with PRP showed better preservation of best-corrected vision, a greater decrease in retinal thickness and better maintenance of visual field than PRP alone, but no major difference in neovascular regression.
A second study of 30 patients compared combination therapy of ranibizumab with PRP vs. PRP alone in treatment-naïve proliferative diabetic retinopathy (PDR).5761/D0008 This study uncovered a greater change in best-corrected vision, a larger decrease in central retinal thickness and a lower incidence of vitreous hemorrhage in the combination treated group––again suggesting that anti-VEGF agents in conjunction with PRP may be preferred to PRP alone.
Additionally, a retrospective study of 78 patients seemed to indicate that metformin may reduce the rate of PDR in type 2 diabetes patients.2249/C0150 In the non-metformin group, 15 patients (45.5%) developed PDR, compared with just 12 patients (27.3%) in the metformin-treated group. A larger study is recommended.
Macular Edema
Several studies are investigating alternative approaches to treat macular edema, whether secondary to diabetes or to vein occlusion. The MOZART study evaluated the safety and efficacy of an intravitreal dexamethasone implant (Ozurdex, Allergan) in 59 patients with visual impairment from diabetic macular edema (DME).2387/C0088 Investigators noted that, over the six months, central retinal thickness was reduced and acuity improved—28% of patients had 20/40 or better acuity vs. just 6% at baseline. They observed IOP greater than 25mm Hg in 7% of patients, with 4% of patients developing cataracts. No endophthalmitis was reported.
A second study showed positive results using a different intravitreal dexamethasone implant injection (DEX-I), with a gain of more than 10 letters in 27% of cases at two months and 24% at four months.2382/C0084 However, this study showed that recurrence of edema was observed in 76% of cases at four months, leading to re-treatment in more than one-third of cases.
Diabetic macular edema, as confirmed by optical coherence tomography.
Another study evaluated Ozurdex in patients with macular edema from vein occlusions. Forty eyes were treated with Ozurdex and followed for six to 24 months.254/D0099 Overall, 94% showed initial regression on OCT, lasting an average of 4.2 months, with two lines of improvement. In all, 59% improved 14.2 letters on average, while 10% worsened and 31% remained the same. Approximately 19% had elevated IOP and were treated with drops. In eyes that had not been previously treated, the results were even better: 86% showed improvement. Half of the patients required retreatment, with an average of 1.6 treatments per year. The results seem to indicate that Ozurdex may be an effective treatment in such patients, even those who did not respond well to anti-VEGF agents.
Other research looked at the role of laser, as well as anti-VEGF in combination with laser, in the treatment of macular edema. The LLOMD study evaluated 15 eyes of 13 patients with reduced visual acuity secondary to diabetic macular edema who had a mean VA of 20/100.2396/C0097 At six months, the mean VA gain was 12.6 ETDRS letters, with central retinal thickness decreasing an average of 76.7µm in patients who received laser in combination with ranibizumab. Additionally, 37.5% of patients required a second injection at six months. However, with the addition of laser, the study showed that the number of injections needed over the first year was greatly reduced compared to previous studies of injections alone, in which approximately 10 injections were needed during the first year. The researchers concluded that adding macular grid laser to ranibizumab injection may reduce the economic burden of treatment.
Two additional studies revealed that reduced-energy focal macular photocoagulation could have advantages over traditional focal macular laser.2375/C0076,2416/C0117 Both seemed to indicate that, by reducing the laser exposure when performing the procedure, there were decreases in central retinal thickness and increases in vision, with potentially less collateral damage and inflammation to surrounding viable tissue. More research is needed to investigate whether reduced-energy focal macular photocoagulation could replace more traditional laser therapy as the standard.
Eye on Eylea
Several reports evaluated Eylea (aflibercept, Regeneron Pharmaceuticals), the latest FDA-approved anti-VEGF agent for the treatment of wet AMD. A number of these looked at the role of Eylea in patients whose choroidal neovascularization did not respond to other agents, namely Lucentis (ranibizumab, Genentech) and Avastin (bevacizumab, Genentech/Roche).
One study evaluated 41 eyes of 34 such patients—77% of these patients had a good response to Eylea after one month, demonstrating decrease in central retinal thickness and absorption of subretinal fluid.4176/A0094 Best-corrected visual acuity improved in these patients to 20/74, from 20/122.5 at baseline.
A second study evaluated 60 eyes of 52 patients that did not respond after five consecutive injections of the other agents.3806/B0116 After three Eylea injections, 28 eyes (46.7%) displayed improved acuity, while 18 eyes (30%) showed decreased acuity, and 14 (23.3%) had no change in acuity at three months.
Lastly, a study evaluated 19 eyes of 17 patients receiving Eylea as primary therapy, with dosing as needed.3817/B0127 Over a 20-week period, patients received on average 1.84 injections, with an interval between injections of approximately 11 weeks. Five of these patients were determined to be non-responders to other anti-VEGF agents. Of these five, four responded positively to Eylea, indicating again that Eylea may be an effective alternative for patients who do not respond to other agents. Also, this study seems to indicate that the interval to repeat injections may be longer with Eylea than the other agents.
However, a separate study looked at the costs associated with Eylea.3838/B0148 The researchers hypothesized that, despite fewer injections, the cost of treatment per patient would actually increase. The study reviewed the records of 30 patients treated for wet AMD from 2011 to 2012 at the Cincinnati Eye Institute. The average duration between Avastin or Lucentis injections was 29 days, as opposed to 34 days with Eylea injections. No complications were noted in any group. Total cost over the six months was $3,700 for Avastin, $96,000 for Lucentis and $366,300 for Eylea. This study suggests that while Eylea may reduce the frequency of injections, office visits and possibly complications, it appears to add considerable health care costs per patient.
New AMD Treatments
Several studies evaluated novel treatments for AMD. One study investigated the safety and feasibility of an episcleral brachytherapy device (SMD-1) for wet AMD.3787/B0097 Six patients received radiation for five and a half minutes to the macular CNV using a brachytherapy probe adjacent to the macular sclera via a subtenon retrobulbar approach. Patients also received concomitant anti-VEGF injections, as needed. The procedure was readily performed and well tolerated, with no adverse effects. At three months, all patients experienced improved best-corrected vision, with a mean gain of 19 ETDRS letters. At 12 months, three patients continued to demonstrate improved vision of seven letters on average, and two of those patients did not require any additional injections. All patients had reduced macular thickness compared to baseline, but two patients did demonstrate a reduction in vision.
Another study evaluated the safety of 1% CLT-005 topical eye drops, designed to inhibit Stat3, which has been associated with neovascular and inflammatory processes in animal studies.1716 The researchers determined that the drug was able to deliver the active ingredient to the RPE/choroid in animal eyes, without adverse effects—paving the way for additional studies regarding its role in the treatment of AMD or geographic atrophy (GA).
Australian researchers looked at the progression of early AMD after treatment with nanosecond pulse laser compared to the natural history of AMD.4146/A0064 They treated 48 patients with bilateral high-risk AMD with ultra-low-energy laser in 12 spots around the macula of one eye. At 12 months, three of the 48 treated participants had progressed to GA, while seven of the 70 control participants had. At 24 months, four in the treated group and nine in the control group had progressed to GA, suggesting that a single course of nanosecond laser intervention may potentially reduce the odds of progression to advanced AMD. A larger randomized controlled study is now underway.
Another study evaluated the safety and tolerability of an extrafoveal subretinal injection called rAAV.sFlt-1, an anti-VEGF gene therapy for AMD, in elderly patients.4504 Twelve patients underwent the procedure with minor adverse effects and no evidence of local or systemic toxicity. The researchers noted that this injection should be further evaluated as a potential strategy for long-term anti-VEGF therapy.
Research continues on emixustat HCl, a novel orally administered agent in development for the treatment of GA associated with dry AMD.4506 Emixustat HCl is a rod visual cycle modulator that inhibits isomerase activity and reduces retinal toxins, such as A2E, which damage the RPE and overlying photoreceptors. Four dose levels and two dose regimens were examined in 72 patients who were followed for 90 days. No adverse systemic effects of concern were noted, with just two patients experiencing treatment-related events. All ocular adverse effects were mild and resolved upon drug cessation, with no severe events observed. Results were encouraging, and a long-term Phase II study is now underway to evaluate its role in GA patients.
Other studies looked at using existing therapy more effectively. A team of researchers in Italy evaluated whether ketorolac eye drops combined with ranibizumab intravitreal injections would provide additional efficacy over ranibizumab alone in wet AMD.4175/A0093 Sixty eyes were divided into two groups: one received ranibizumab alone, and one was treated with ranibizumab plus ketorolac BID for six months. At the end of six months, there was no statistically significant difference in best-corrected vision or number of injections required. However, the mean six-month change in central macular thickness was 146.53µm in the combination group, while the change was 106.88µm in the ranibizumab-only group. This is the first study to identify an additional effect of ketorolac eye drops combined with ranibizumab. More studies would be needed before a change in current protocol would be appropriate.
Two separate studies evaluated photodynamic therapy in combination with anti-VEGF injections.4509,3790/B0100 Both indicated that this therapeutic combination might be an effective way of improving acuity in patients with wet AMD, while perhaps reducing the overall number of treatments needed. In one of the studies, 96.2% of eyes lost fewer than 15 letters, and 27.3% gained 15 or more letters.
Genetics in AMD
Genetics in eye care have been garnering a lot of attention lately, specifically the role of genetics in AMD.
One study evaluated data from the 1000 Genomes Project to confirm the contribution of known genetic risk factors for AMD.6166/C0051 This investigation revealed that, in the population of European descent, CFH has the largest attributable risk (25.6%), followed by ARMS2 (22.5%), then C3 (9.1%) and CST3 (5.8%). In other populations, the risk allele in ARMS2 is the major contributor to risk, followed by CFH. In Asian and African populations, CST3 takes precedence over C3 as the third strongest contributor to AMD risk.
This patient is at high risk for AMD due to multiple confluent drusen in both eyes. Perhaps genetic testing could one day identify patients like this earlier.
In Spanish patients, a study found that the CFH and CFB genes, combined with environmental risk factors such as smoking and body mass index, were associated with an increased risk of GA.6183/C0068 A second abstract confirmed the role of the CFH gene in AMD risk in a cohort of Brazilian AMD patients.6175/C0060
In another study evaluating the genetic contribution to AMD in 38 Armenian patients, researchers found no genetic differences in the risk alleles compared to a Caucasian population.6196/C0081 All of this research indicates that the genetic factors that could influence the development of AMD may be very similar across different groups.
Interestingly, some of these same studies seem to suggest that the HDL-related CETP gene may be associated with AMD in African Americans, pointing to a potential risk modifier in lipid pathways.6168/C0053
An abstract submitted by Johanna Seddon, MD, ScM, identified three new genes that may add to the predictive power of risk models for progression to advanced AMD.6178/C0063 They are the R1210C mutation in CFH and variants of the genes COL8A1 and RAD51B. She suggested that these new genes will be useful for AMD surveillance in the future, along with genes that have already been identified and established factors such as drusen size, baseline AMD status, demographics and environmental factors (including smoking, age and body mass index).
Additional studies attempted to see if there was a link between genetic profile and response to treatment. One study evaluated the genetic profile of 835 patients from CATT (the Comparison of AMD Treatments Trials) to determine if certain genotypes responded better to treatment than others.6187/C0072 Results revealed there were no strong associations between the studied genotypes and response to anti-VEGF treatment.
A second study evaluated the IVAN study and was also unable to find any associations between genetic profiles and response to anti-VEGF treatment.6185/C0070 However, another study of 43 patients seemed to indicate that patients with high-risk alleles for AMD responded more poorly to treatments than those with low-risk alleles.6186/C0071
This link of genetic profiles to treatment response may continue to be investigated, as this could bring us closer to personalized treatment of AMD––based on genetic factors and other components.
Retinitis Pigmentosa and Rare Diseases
Usher syndrome (USH) is the most frequent cause of inherited deafness–blindness in humans, accounting for approximately 50% of all cases and affecting one child in 25,000. Today, there is no specific or curative therapy for USH patients, apart from the hearing aids and cochlear implants designed to correct the hearing impairment. Among the three USH clinical subtypes, defined by the severity of the hearing impairment, the presence or absence of vestibular dysfunction, and the age of retinitis pigmentosa (RP) onset, USH1 is the most severe. In the past decade, several strategies to prevent and treat RP have been developed, including retinal implants, pharmacological agents, stem cells, retinal cell transplantation and gene therapy. Clinical trials for some of these strategies are currently underway.
In 2012, UshStat® (developed by Oxford BioMedica using its LentiVector® platform technology) became the first gene therapy for USH1B to move into human studies. Three dose levels of UshStat® are under evaluation for safety, tolerability and aspects of biological activity at the Oregon Health & Science University's Casey Eye Institute, where the study has started, and at the Centre Hospitalier Nationale d'Ophthalmologie des Quinze-Vingts in Paris, where the study should start soon with support from the Foundation Fighting Blindness. This safety study will prepare for future efficacy trials. Further treatment advances will require more scientific effort: 1) to identify all causative genes and determine their function; 2) to develop appropriate animal models; 3) to uncover the mechanisms underlying the retinal defect in USH syndrome; and 4) to engineer appropriate viruses for transfer of genetic material into appropriate cells.
We have started a new clinical trial using gene therapy with an adeno-associated viral (AAV) vector encoding Rab escort protein-1 (REP1) to treat patients suffering from choroideremia (NCT01461213). An AAV2 vector encoding human REP1 driven by a CBA promoter with a woodchuck hepatitis virus post-translational regulatory element (WPRE) was used. The first six patients have reached six months follow-up and a paper detailing the effects of gene therapy is currently undergoing peer review. The formal results of the study will therefore be reported soon. In the meantime we can share the following observations:
Through microperimetry testing, we have observed an underlying functional defect in this disease, similar to Leber congenital amaurosis (LCA), but more subtle. In fact, this observation was made previously by Dr. Sam Jacobson using psychophysical testing in choroideremia patients. This observation is exciting because it implies that we might see improvements in retinal sensitivity (and visual acuity in later stages) as evidence of successful gene transfer.
We have observed no problems in detaching the fovea in these six patients; at the least, any negative effects on their vision have been more than compensated for by gene expression from the vector. Retinal thinning was seen in only one of the six patients, in a non-seeing area where stretching was noted intra-operatively. We believe the problems of foveal thinning in the LCA studies relate to a combination of patients having a thin fovea at baseline and the injection being initiated too close to the fovea and/or too rapidly, thereby causing excessive horizontal stretch of the neurosensory retina. The technique we have developed would be suitable for all the other rod-cone dystrophies in which the peripheral retina is thinner than the central macula.
We have set the study up as multicentre, so that expert ophthalmologists from other UK centres follow the patients up after six months. This spirit of openness is ideal as other experts have the opportunity to examine the patients and therefore provide independent verification of our initial findings. We have also been sharing our data with other centres worldwide to help them submit regulatory applications.
Leber congenital amaurosis (LCA) refers to a form of inherited retinopathy with early-onset, severe loss of vision. A clinical trial of gene augmentation therapy for LCA caused by RPE65 mutations has been ongoing at the University of Pennsylvania and the University of Florida since 2007. Earlier reports from our group, as well as from other groups performing similar clinical trials in parallel, showed that a single surgical procedure introducing the normal version of the RPE65 gene leads to improved vision in a matter of days to weeks. But RPE65-LCA is a complex blindness arising from two pathologies: progressive loss of photoreceptors to degeneration, and malfunction of all surviving photoreceptors. The assumption all along was that correction of the malfunction would halt or slow the photoreceptor degeneration.
To evaluate this natural assumption, we imaged the retina in patients and measured the sublayer within the retina where photoreceptor nuclei reside. This sublayer slowly thins over many years and the rate of thinning should mirror the rate of photoreceptor degeneration. In untreated eyes, the photoreceptor layer measurements were abnormally thin even in the youngest of the patients with ages as early as 3 years, and showed progressive further thinning when examined serially over 5 years. The rate of thinning was about 10% per year. We compared the rates of photoreceptor degeneration in treated regions to untreated regions and found no difference. The treated regions continued along the expected rate of degeneration, even though paradoxically retaining the vision improvement achieved immediately after the gene therapy.
We hypothesized that once initiated, retinal degeneration advances despite successful gene augmentation therapy. We tested this hypothesis in the dog model of the human disease. But first we needed to know the natural history of degeneration in the dogs. Examination of a large number of dog eyes with the same non-invasive imaging tools used in human patients showed that dog retinas are without any degeneration for the first 5 years of their lives (until ~35 human years). Thus RPE65-disease is late onset in the dog compared to the human. When dogs were treated at ages after the onset of degeneration, gene therapy resulted in improved visual function but did not slow down the retinal degeneration – just like the results in patients.
At this stage we do not know why the photoreceptor cells degenerate in RPE65-disease, but our results are most consistent with the following speculative explanation for the paradox observed. Visual function originates from the minority of cells that are functionally potent, whereas degeneration is dominated by the loss of cells that are functionally silent. In order to improve outcomes of gene therapy, we need to start using stages of animal models that truly represent the human condition. We need to assume less and prove more. We need to better understand the pathways of cell loss and find means to augment gene augmentation therapy by inducing cell-protective pathways or inhibiting cell-death mechanisms.
UF-021, isopropyl unoprostone (IU), is an eye drop already approved to treat eyes with glaucoma or ocular hypertension in the USA and Japan. Previous studies showed that topical IU increases human choroidal blood flow, and that an intravitreal injection of IU protects photoreceptors from light damage in rats. It was also shown that apoptosis of cultured photoreceptors was successfully inhibited by unoprostone but not by prostaglandin.
A Phase 2 clinical trial has been completed in Japan, in which 103 Japanese RP patients were randomized into three groups: high-dose, low-dose and placebo. They completed six months of follow-up. The primary endpoint was central retinal sensitivity measured by microperimetry. The secondary endpoints were visual acuity, contrast sensitivity, retinal sensitivity by HFA, and vision-related quality of life assessed by the VFQ-25. In a comparison of changes from baseline within groups, there was a statistically significant increase in central retinal sensitivity threshold for the high-dose group. In a post-hoc analysis that adjusted for baseline differences, a dose-dependent improvement in mean central sensitivity was demonstrated. There were statistically significant differences between the placebo and high-dose groups in the change from baseline in central retinal sensitivity and in the proportion of patients with worsening of retinal sensitivity by ≥4dB (placebo 21.2%, high dose 2.6%). There was also a statistically significant change from baseline in the high-dose group in mean retinal sensitivity by HFA. A subgroup analysis showed that, among subjects whose central retinal sensitivity was <29.4dB, there was a significant difference in the high-dose group. In a within-group comparison of changes from baseline in patients' VFQ-25 total scores, the changes within the high-dose group were statistically significant. In conclusion, this Phase 2 trial shows that UF-021 improves or maintains central retinal sensitivity in RP patients.
Recently, a Phase 3 clinical trial has started in Japan, in which 300 RP patients will be randomized into two groups: high-dose and placebo. The primary endpoint of the Phase 3 trial is central retinal sensitivity measured with HFA. All patients will be followed for 52 weeks. After this period, they will enter an open-label safety study for 52 weeks.
One of the most common causes of irreversible blindness is loss of photoreceptor cells. An exciting new therapeutic strategy for treating these conditions, which include AMD and diabetic retinopathy as well as inherited retinopathies, is photoreceptor cell transplantation.
Some years ago we demonstrated that it is possible to transplant photoreceptor cells into an adult mouse retina, provided the cells are at a particular stage of development: the post-mitotic photoreceptor precursor (MacLaren et al., Nature, 2006). Even though we could only manage to obtain around 1,000 integrated cells, this was an important proof-of-concept and forms the basis of our programme, because this knowledge might be used to generate appropriate cells for transplantation from stem cells.
After five years of optimization, we were able to increase the efficiency of transplantation, and with around 30,000 to 40,000 integrated cells we could demonstrate restoration of vision in a mouse model of stationary night blindness (Pearson et al., Nature, 2012). This is another important proof-of-concept because it demonstrates that the transplanted photoreceptor cells make functional connections and that there is enough plasticity to actually improve vision. Recently, we have also shown that we can transplant photoreceptor precursors in a variety of animal models of retinal degeneration, including models of severe degeneration, and still improve vision (Barber et al., PNAS, 2013).
So far we have focused primarily on studies involving the transplantation of photoreceptor precursors obtained from early post-natal retinas into visually impaired adult mice. In order to develop this into a useful treatment we need a renewable source of cells for transplantation. Embryonic stem cells (ESCs) represent the most promising such source, and considerable progress has been made in differentiating them in the laboratory toward photoreceptor lineages. For some years now we have been trying to differentiate mouse ESCs into photoreceptor precursors efficiently enough to be able to transplant effectively. Until very recently we had not been successful. However, we have now used a new differentiation protocol based on the landmark 2011 Nature paper by Yoshiki Sasai, which demonstrated that it is possible to generate synthetic retinae from mouse ESCs.
We have now optimised this protocol and shown for the first time that, following transplantation, rod precursors from these ESC-derived retinae integrate and mature within adult degenerate retinae. This is an important study because it shows conclusively that ESCs can provide a useful source of photoreceptors for retinal cell transplantation.
Now that we have shown that it is possible to transplant mouse ESC-derived photoreceptors, the next step towards clinical translation is to develop human ESC lines to provide a potentially unlimited source of transplantation-competent photoreceptor precursors. We are now starting to work with hES cells and aim to develop GMP-compliant processes that may enable translation to clinical trials.
The Use of Muller Glial Cells in Retina Repair. Dr. Tom Reh, Dept. of Biological Structure, University of Washington, Seattle, WA, USA.
The field of regenerative medicine attempts to find ways to replace cells in the body that have degenerated, as photoreceptor cells do in retinal degenerative diseases like retinitis pigmentosa and age-related macular degeneration. The use of stem cells is one such method, taking an undifferentiated (usually embryonic) cell and converting it into a mature, functional cell, such as a photoreceptor neuron, that would be found in the adult retina.
Other methods, though, can be used to generate new photoreceptors, one of which utilizes Muller glial cells, the natural support elements of the retina. Over the past 10 years, Dr. Reh and his collaborators have provided evidence that the retinas of some higher species, such as the chicken, have the potential to generate new neurons. In response to an acute insult that extensively damaged the photoreceptors, he found that many of the Muller cells re-entered the cell cycle and began to express biochemical markers associated with embryonic retinal progenitor cells. Work in vitro continues to dissect the various factors involved in the reprogramming of glial elements into neurons. Differentiation of the glial-derived progenitor cells offers several advantages over more traditional forms of transplantation and stem cell implantation. For example, the cells are already in place, with no need to surgically implant new cells. Also, there is the possibility of decreasing or eliminating the deleterious immunological problems evoked by implantation of foreign cells into the retina.
In lower species such as fish and amphibians, there is a well known ability to regenerate retinal neurons after trauma or degeneration induced by other means. Understanding the molecular and biochemical pathways of regeneration could ultimately lead to the ability to reprogram native glial cells into retinal neurons such as photoreceptor cells in the human.
We have previously demonstrated in dogs with CNGB3-achromatopsia that intravitreal bolus injection of CNTF (1) resulted in transient restoration of cone function and day vision, and (2) optimized long-term cone functional response to AAV-mediated gene augmentation therapy. The objective of this study was to determine if sustained intravitreal delivery of CNTF by encapsulated cell technology (ECT) could reverse the disease phenotype of CNGB3-achromatopsia in dogs long-term.
Dogs homozygous for the D262N missense mutation in CNGB3 were unilaterally implanted with CNTF-secreting, encapsulated cell implants. The pre-implant CNTF secretion rate was 15 ng/day. The animals were 3 months (n=2) and 27 months (n=1) of age and were day-blind with no recordable cone ERG prior to surgery. Following implant placement, the dogs were examined weekly by standard full-field electroretinography under general anesthesia and by visual behavioral testing in an obstacle-avoidance course.
In the operated eyes, day vision and cone function were partially restored by 1 week following CNTF-implant placement. The amplitudes of single and flicker cone ERG responses were small (~5-10% of normal) but have been maintained for at least 5 weeks thus far. Scotopic ERG responses were reduced in 2 of the 3 implanted eyes to <30% of the amplitudes recorded in the non-operated fellow eyes. These ERG data were comparable to our observations following a single intravitreal bolus injection of 12 μg CNTF.
In conclusion, sustained intravitreal delivery of CNTF by ECT rescues cone function and day vision in CNGB3-achromatopsia. It remains to be shown if this therapeutic effect can be sustained long-term and if ECT can be combined with AAV-mediated cone-directed gene augmentation to optimize treatment.
To date, several clinical studies have been completed, including a phase 1 study in RP, 2 phase 2 studies in RP (early and late stages) and a phase 2 study in GA. Cone photoreceptor preservation by AOSLO was demonstrated in the RP study.
A phase 1 study in MacTel was completed recently and the study showed that both NT-501 implant and surgical procedure were well tolerated. We are actively planning a multi-center phase 2 study for MacTel in the U.S. and Australia.
The primary endpoint of the phase 2 study will be “change in area of IS/OS loss at 2 years post implant as measured by en face imaging by SDOCT in study eye(s)”.
The study will include 68 subjects
Study duration will be 2 years
We expect to initiate the study in the second half of 2013.
From a regulatory perspective, we have achieved the following objectives:
Obtained the fast track status for RP and GA with the FDA
Obtained the orphan designation for MacTel and RP with both the FDA and EMA.
Received agreement from FDA and EMA regarding the primary endpoint for the MacTel phase 2 study
Actively pursuing agreement/advice from the FDA and EMA regarding using cone preservation by AOSLO as the primary endpoint for the RP phase 3 studies
Note: The new results reported below are taken from a recent press release (May 16, 2013) from Advanced Cell Technology.
Human stem cells have two important characteristics. First, their numbers can be expanded in cell culture to almost unlimited amounts. Secondly, the cells have the potential of developing into any cell type of the body, for example, retinal photoreceptor or retinal pigment epithelial (RPE) cells. Thus, stem cell implantation is attractive as a possible therapy in replacing dead or defective cells in cases of retinal degeneration.
In macular diseases such as Stargardt Disease and Age-Related Macular Degeneration (AMD), early problems in pigment epithelial cell function can lead to death of photoreceptor cells. Thus, replacement of dead or defective RPE cells through stem cell implantation could prolong photoreceptor cell life and even restore their function.
The company Advanced Cell Technology (ACT) is conducting clinical trials in patients with Stargardt Disease and AMD and has already published preliminary results indicating the safety and tolerability of its stem cell implantation. In this report (Lancet 379:713, 2012), the hESC-derived RPE cells showed “no signs of hyperproliferation, tumorigenicity, ectopic tissue formation or apparent rejection after 4 months.” In its most recent press release, ACT reports that the vision of one of its patients “has improved from 20/400 to 20/40 following treatment.” Although ACT adds the disclaimer that “improvement in the patient’s vision reported in this press release may not be indicative of future results….”, the positive results are welcome and give hope for improvement in other patients using this form of Regenerative Medicine.
The eye is an organ that is well-suited for the development and testing of novel therapeutic approaches. It is easily accessible and allows local application of therapeutic agents with reduced risk of systemic effects. A need exists for the development of non-viral therapeutic approaches for ocular diseases. Our lab and others have investigated the potential of nanotechnology for ocular delivery of therapeutic genes. Investigations thus far have highlighted the great potential of nanoparticles as a successful approach for ocular gene delivery.
Our group has focused on the efficacy of compacted DNA nanoparticles for the treatment of different diseases, particularly those associated with the retina and retinal pigment epithelium. We have shown that nanoparticle treatment leads to efficient transfection of ocular cells, long term gene expression, and exerts no toxic effects on the eye even after multiple injections. These nanoparticles mediate significant functional rescue in models of retinitis pigmentosa, Stargardt macular dystrophy and Leber’s congenital amaurosis. They have no limitations on the size of the genetic cargo and effective gene expression has been demonstrated with vectors up to 20 kb in the lung and 14 kb in the eye, making them an ideal complement to AAVs especially for delivery of large genes. Furthermore, in a side-by-side comparison study with AAV, we recently reported that nanoparticles can drive gene expression on a comparable scale and longevity to AAV.
We have synthesized two series of orally active multifunctional antioxidants (MFAOs) possessing distinct free radical scavenging activity and independent metal attenuating activity. Both series demonstrate similar selective metal chelating activity against iron, copper and zinc, as well as similar antioxidant activity against hydroxyl, peroxide and superoxide radicals assessed in human retinal pigmented epithelial (ARPE-19), human neuroblastoma (SH-SY5Y), and SRA human lens epithelial cells. Oral administration to mice indicates that the first MFAO series rapidly accumulates in the lens and retina but not the brain. Rat studies indicate that this MFAO series delays lens changes induced by diabetes, gamma irradiation, and UV irradiation and protects the photoreceptor layer against light damage. In contrast to the first series, oral administration of the second series of MFAOs to mice results in their accumulation in the brain and retina, but not the lens. Both series of MFAOs demonstrate no toxicity when administered by gavage at doses of 1600 mg/kg.
Since mitochondrial dysfunction and amyloid beta (Aβ) neurotoxicity are associated with age-related retinal changes, the effect of MFAOs on these factors has also been investigated in human neuroblastoma and retinal pigmented epithelial cells. Although these compounds chelate iron, they do not adversely affect mitochondrial function. In fact, they actually protect mitochondria from manganese poisoning. Both MFAO series also bind zinc; but zinquin staining indicates that these compounds do not adversely reduce cytoplasmic zinc levels. However, both MFAO series readily remove zinc from the neurotoxic amyloid beta zinc complex which is not readily degraded by matrix metalloproteinase (MMP)-2. The removal of zinc from the neurotoxic amyloid beta zinc complex by MFAOs allows MMP2 to degrade amyloid beta. The interaction of MFAOs with zinc is similar to the “metal attenuation” activity demonstrated by clioquinol and reported for PBT2, an analog of clioquinol undergoing a Phase 3 clinical trial for the treatment of Alzheimer’s dementia.
The Age-Related Eye Disease Study 2 (AREDS2), a multi-center randomized clinical trial, tested the addition of lutein (10 mg)/zeaxanthin (2 mg) and/or omega-3 fatty acids to the original AREDS formulation. The investigators found neither harmful nor beneficial effects of omega-3 fatty acids for the treatment of age-related macular degeneration (AMD). The main effects analyses indicated that lutein/zeaxanthin had beneficial effects for reducing the risk of advanced AMD by 10%, for reducing the risk of advanced AMD by 26% in persons with the lowest dietary intake of lutein/zeaxanthin, and for reducing the risk of progression to neovascular AMD by 22% in the head-to-head comparisons of lutein/zeaxanthin vs. beta-carotene.
The safety of the AREDS formulation was tested with the elimination of beta-carotene. The finding of increased lung cancer in those supplemented with beta-carotene, mostly former smokers, provided compelling data to eliminate beta-carotene. Furthermore, lutein/zeaxanthin was more efficacious than beta-carotene, as indicated above. A safer and more efficacious formulation, which can be defined as the AREDS2 formulation, would eliminate beta-carotene, add lutein (10 mg) and zeaxanthin (2 mg), and retain vitamin C (500 mg), vitamin E (400 international units), zinc (80 mg) and copper (2 mg).
a) Epiretinal implant: The ARGUS II system is produced by Second Sight Medical Products, Sylmar, California, with sixty epiretinal electrodes, goggles with a camera, and an electronic transmission system to the back of the eye. After a study in thirty patients had been completed, the system recently received the CE mark and FDA approval. Twenty more patients have been implanted in the meantime. The maximum visual acuity reported so far is 20/1200, with mobility improving in most patients. Presently, a post-marketing study is planned, and the cost of the device has been quoted as US$150,000 per unit.
The camera has been reported to have some advantage because of the possibility of zooming, but it also has disadvantages: fading of the image occurs more easily and can be compensated for only by head nodding. For further information see Humayun et al. (2012) Ophthalmology 119:779–88.
b) Subretinal implant: The Alpha IMS is produced by Retina Implant AG, Tübingen/Reutlingen, Germany. After a pilot study in eleven patients (2005-2009), a clinical main trial with presently 25 more patients is ongoing in several centers (Oxford, London, Tübingen, Dresden and Hong Kong, among others). This system consists of a light-sensitive chip, similar to a camera chip, that is implanted into the subretinal space in the back of the eye at the position of the degenerated photoreceptors. Each chip consists of 1,500 light-sensitive photodiodes, amplifiers and electrodes. The image is resolved point by point and, depending on the brightness of each point, a current is forwarded to the bipolar cell layer. There is no camera outside the eye because all the electronics are in the eye and move with the eye, except for the power supply coil that is implanted under the skin behind the ear. The best recorded visual acuity is 20/546. Due to microsaccades, fading occurs rarely, and facial recognition has been reported by some patients. For further data see Stingl et al. (rspb.royalsocietypublishing.org/content/280/1757/20130077.full.pdf+html). The study is still ongoing and the observation time so far is 1.5 years.
c) Other developments: In Australia, three patients have received a wire-bound suprachoroidal (Bionic Vision) electrode array with 24 electrodes; no complete implant is available yet. Another new development includes preclinical work with passive elements by Palanker et al., Stanford University, USA (Mathieson et al. (2012) Nature Photonics 6, 391-7). By having three photosensitive elements per pixel in a row, sufficient voltage can be produced to stimulate the retinal neurons. However, this requires an enormous amount of light that can only be produced by special laser-driven goggles.
A newly founded company in Paris (PIXIUM VISION) has now joined forces with former German-Swiss company IMI and the Palanker group to explore further possibilities of the use of passive elements.
For further comparison on the various approaches see Zrenner (2012), Nature Photonics 6: 344–5.
Degenerative blinding diseases such as retinitis pigmentosa and age-related macular degeneration affect millions of patients around the world. These disorders cause the progressive loss of rod and cone photoreceptors in the retina, eventually leading to complete blindness. A number of approaches are being explored to restore vision to blind patients. Our goal is to develop and test novel pharmacological therapies for vision restoration. Here, we demonstrate the restoration of visual function to blind mice following the injection of red-shifted chemical “photoswitch” compounds.
We have created several small molecule photoswitches that can be used to control the activity of neurons by reversibly blocking native ion channels in response to light. In order to evaluate the ability of these photoswitches to restore light sensitivity to blind mice, we have tested them in an rd1 mouse model of retinitis pigmentosa. Our in vitro retinal light response measurements were carried out using a multi-electrode array (MEA). We also tested the restoration of several visually guided behaviors in blind mice in vivo.
We have previously demonstrated that the photoswitch AAQ could drive light responses in formerly blind retinas in vitro as well as restore the pupillary light reflex and light-aversive behaviors in blind mice. Here, we present the restoration of light sensitivity to blind mice in vitro and in vivo with two red-shifted photoswitch molecules, DENAQ and BENAQ. Unlike AAQ, DENAQ and BENAQ do not require the use of ultraviolet light and render a blind retina sensitive to visible (blue-green) light at a light intensity equivalent to ordinary daylight. These red-shifted molecules photosensitize the retinas of blind rd1 mice in vitro. The photoswitches persist up to several weeks in vivo and are well tolerated in the eye. DENAQ and BENAQ are selective for degenerated rather than healthy retinal tissue, suggesting they would not interfere with any remaining photoreceptor mediated vision in patients with retinal diseases. Intravitreal injection of DENAQ also restores light sensitivity to rd1 mice in vivo in an exploratory locomotory behavioral assay. Furthermore, DENAQ-injected rd1 animals are able to reverse the polarity of their naïve light response after appropriate fear conditioning, indicating their restored vision is sufficient for visual learning to take place.
In conclusion, red-shifted chemical photoswitches such as DENAQ and BENAQ, and our pharmacological approach in general, hold great promise for restoring visual function in end-stage degenerative blinding diseases.
Our project aims at restoring vision in patients with blindness resulting from photoreceptor degeneration, as in retinitis pigmentosa, using optogenetic proteins. It is a collaborative program supported by the Foundation Fighting Blindness and Gensight, a start-up company created with the support of Novartis and the Novartis Venture Fund. The project is carried out together with Dr. Roska at the FMI in Basel, Switzerland, and Prof. Sahel and myself at the Vision Institute in Paris, France. Retinal prostheses have shown that it is possible to restore some vision in blind patients, but we work on an alternative strategy, optogenetic therapy, which consists in reactivating residual retinal cells by using gene therapy to express photosensitive ionic pumps or channels after photoreceptor degeneration. We are trying to reactivate photoreceptors that have lost their sensitivity to light using the chloride pump halorhodopsin.
Indeed, we have shown:
1) that some cone PRs remain in blind patients affected by retinitis pigmentosa. These PRs have lost their photosensitive part, the outer segment, which explains why the patient is blind. Similar dormant or non-photosensitive PRs are also found in animal models of the disease.
2) and that these dormant PR in blind mice can be reactivated to light by expressing halorhodopsin, via gene therapy. Visual perception in these blind mice is indicated by light responses in PR and retinal ganglion cells.
3) After showing these results in blind mice, we have used postmortem human retina in culture to show that human cone PRs can express functional halorhodopsin at a sufficient level to polarize PRs.
Our current objective is to demonstrate that this high level of expression can be obtained in vivo in non-human primates and that it does not trigger an immune response, because halorhodopsin is a bacterial protein. We have already tested our AAV viral vector and obtained high and selective expression of the green fluorescent protein (GFP) in cone PRs. When we co-express halorhodopsin and GFP, we still maintain a high protein expression, as indicated by the GFP fluorescence.
We still need to demonstrate that this expression level can activate the retinal tissue by itself, and to measure markers of the immune response, prior to testing the clinical batch of virus. This demonstration is quite complex to obtain because we are using normal monkeys: their PRs have their natural response to light, and we are introducing an additional sensitivity to light. We therefore have to bleach the natural light response to demonstrate an optogenetic-mediated response. To improve our chances, we can also culture the retina for some period of time to lose the natural response and demonstrate the halorhodopsin-elicited response.
Inherited retinal diseases (RD) display an impressive degree of allelic and genetic heterogeneity, as nearly 10,000 mutations in >190 genes have been identified. Mutations in these genes account for 30% to 90% of cases, depending on the type of disease. Comprehensive genotyping of persons with inherited RD improves genetic counseling and the accuracy of disease prognoses. Moreover, genotyping identifies persons who are eligible for novel therapies. We are entering an era of routine testing for RD-associated defects, both in academic and non-academic centers. The identified known and novel variants are not published or deposited in open-access databases. Sharing sequence variants and their associated phenotypes is at the heart of DNA diagnostics, and it is therefore of utmost importance to register this information in publicly available databases.
The structure and use of RD-mutation databases need to meet the following criteria: 1) web-based open access; 2) registration of all published sequence variants; 3) easy upload of new variants; 4) accurate assessment of mutation data; 5) regular updating. We propose the implementation of Leiden Open Variation Databases (LOVDs) for all RD genes in the next five years. LOVDs were previously created for 10 Usher syndrome-associated genes. A team of BS students and staff members in Islamabad will collect all published sequence variants for the remaining RD genes, scrutinize them for proper annotation, and upload them to gene-specific LOVDs. Curators worldwide will check the new entries.
‘Empty’ LOVDs were created for all RD associated genes, and all published variants were registered for AIPL1, LCA5, RDH5, SEMA4A, and TULP1. Other mutation repositories previously were created for CEP290, NDP, and Bardet-Biedl syndrome-associated genes. These will be taken up in LOVDs. In 2013, variants of another 20 RD-associated genes will be deposited in LOVDs and the existing LOVDs will be updated every year.
The long-term success of this endeavor relies on a robust organization of sequence variant updating, proper curation, database maintenance, and a sound financial basis. It will also be vital to introduce compulsory deposition of sequence variants prior to publication submissions, and the compliance of diagnostic facilities worldwide to deposit unpublished variants in LOVDs.
C) RI Announcements, New Business and Conclusion
RD 2014 – First Announcement – Dr. Matthew LaVail
The XVIth International Symposium on Retinal Degeneration will be held in Pacific Grove, California, USA, on July 13-18, 2014. The venue will be the Asilomar Conference Grounds. The organizers are Drs. Catherine Bowes Rickman, Matthew LaVail, Joe G. Hollyfield, Robert E. Anderson, John Ash and Christian Grimm. The RD2014 website is under construction; please continue checking for when it is up and running. It is hoped that up to 30 Travel Awards can be funded for students, Post-doctoral Fellows and junior faculty members below the rank of Associate Professor. Important deadlines are: 1) Travel Award applications: February 3, 2014; 2) Meeting and Hotel Registration: March 17, 2014; 3) Online Abstract submission: March 17, 2014.
4) Final Comments – Ms. C. Fasser
Ms. Fasser thanked all the speakers for their excellent presentations.
She remarked how heartening it is to see all the progress in clinical trials for the inherited retinal degenerative diseases and to hear about new trials to come in the future.
She hoped that all participants would have a very fruitful ARVO meeting and that we would all meet again for the RI meeting next year in Orlando.
|
---
layout: feature
title: 'Tense'
shortdef: 'tense'
udver: '2'
---
### Description
The Tense feature matches the inflectional endings of verbs. Uralic grammars
vary in how they refer to the present/future tense and the past/preterite
tense; in Universal Dependencies we use `Pres` for the common non-past and
`Past` for the common past, unless the language has a more complex tense
system. Many grammars describe e.g. perfect and pluperfect tenses, but if
these are based on auxiliary verb constructions, they are not marked at the
UD level.
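A minimal illustration of the convention, using hypothetical Finnish annotations (glosses and feature strings abbreviated to the relevant value):

```
luen   "I read (now / non-past)"   Tense=Pres
luin   "I read (past)"             Tense=Past
```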
|
This was great for a number of reasons - a nice dinner with my sister, mum and husband, the food is fantastic, I love the restaurant (as you will know from this earlier post), and because it was family I got to try everyone else's stuff!
The giant tube of deliciousness you see in the picture is a Paper Dosa. Everything we had was brilliant, particular shout outs go to the Dosa, the pancakes, the Kozi Chuttathu (Chicken breast chunks marinated with coconut powder, turmeric and yogurt), and Erachi Thengaa (lamb dish cooked in a thick sauce of onion, tomato, ginger and fried coconut).
So I'm hoping to form a habit of visiting this restaurant. They do take away but it's collection only and by the time I got it home it would be cold. If I'm ever passing by with a car though... I can't wait to try the sister North Indian cuisine restaurant The Dhabba!
New things
I managed to fulfil a lifelong dream recently in a random roundabout way.
Rockin' Roy is producing an album at the moment for release later this year and I ended up doing the backing vocals on the single! The sound is choral so I had to sing the part several times using different vocal ranges. I'm not sure what my range actually is but apparently it's not too shabby! It was a bit surreal standing in the little mic booth created in the room of our flat which Rockin' Roy has converted into a recording studio.
I've always wanted to sing on something 'real' and now I have, on a real single which will actually be released out into the world! I'll link to it once the artist launches.
I'm just part of the background of course, but it's nice to be part of something and the song is good and extremely catchy. I had a dream a long time ago about singing in a band and got as far as doing some rehearsals with people but that was it. Being on this track is a small thing really, but while the idea of being on stage is quite exciting, it gives me the FEAR and I'm not sure it's something I'll ever have the guts to do. I get nervous singing even in my own house alone these days so this was a tiny triumph in the wilderness of bizarre nonsensical free floating anxiety. I have a lot of friends (and family) who are performers in various arts and I'm in awe of them all for being able to do it. I'm even in awe of some of them for being really good at it... You mad brave fools!
|
July 15, 2014 By: galadmin
Sun Pharma Recalls 40,000 Bottles of Antidepressant in U.S.
India’s Sun Pharmaceutical Industries Ltd is recalling 41,127 bottles of antidepressant venlafaxine hydrochloride in the United States after the drug failed to properly dissolve, the U.S. Food and Drug Administration (FDA) said.
Sun Pharma’s unit Caraco Pharmaceutical Laboratories Inc. began the voluntary recall in June; the FDA classified it as Class II, meaning that use of or exposure to the drug may cause temporary or medically reversible adverse health consequences.
“Stability results found the product did not meet the drug release dissolution specifications,” the FDA said in a post on its website Friday.
Dissolution tests are commonly conducted to help predict how a drug performs inside the body.
Sun Pharma manufactured the drug at its plant in the western Indian state of Gujarat. A company spokesman in Mumbai declined comment.
The company’s recall of venlafaxine hydrochloride comes three months after Pfizer Inc said it was pulling 104,000 bottles of the same drug, which the U.S. company sells under the brand Effexor XR, after a pharmacist reported that one of the bottles contained a heart drug.
Sun Pharma also began a recall of 200 vials of the chemotherapy drug gemcitabine in the U.S. in April due to a lack of assurance of sterility.
In January, the company pulled 2,528 bottles of its generic version of the diabetes drug Glumetza.
If you purchased this product, return it to the place of purchase for a refund or throw it away. For further information, feel free to contact one of our Gacovino Lake attorneys at 1-800-246-HURT (4878).
|
Breaking Out
Breaking Out was staged around the Arnolfini building. The auditorium became The Basement where there were sounds from folk to pop punk, grime to drum and bass from groups Thistle & Thorn, The Rival, Deep and Conducta.
Screenings of short films were interspersed with live performance in The Box, from live action to animation, the political to the personal. The Lab chill out space included acoustic music, visual artwork and opportunities for creative interaction. Around the building audiences stumbled across audio interventions from Travelling Light Youth Theatre, acro-balancers and free runners from Circomedia and more.
|
Previous Offer
Sorry you missed this offer...
Offer highlights:
Me-ow that’s a good price! Especially considering she’s one of the hottest cabaret acts on the planet.
See the show our Sydney critics called ‘simply electric’, giving it five stars. Spend the night with a ‘cabaret diva of the highest order’ (The New York Post) these holidays.
The multi-award winning show, a dark twist of the classic tale, is available to you for just £15.
More details:
Warm up with a little match girl this Christmas. Destined to set your night on fire with debauchery, heavenly vocals and twisted humour, this Aussie cabaret girl is high quality diva-esque entertainment. Praised as ‘sensational’ by The Times, who also gave her five stars, now is your chance to get out of the cold and enjoy some hot, funny cabaret.
What Time Out says: 'Cabaret star Meow Meow -who was so superb in Kneehigh's 'The Umbrellas of Cherbourg' last year -offers her radical take on Hans Christian Andersen's festive fable, spinning comedy and music into the story, and drawing out its not inconsiderable darker elements.'
Need to know:
Voucher valid for one ticket to 'Meow Meow's Little Match Girl' on selected dates between December 15 - 30.
Please select your preferred date on checkout.
Customers purchasing multiple tickets will be seated together.
Please print your voucher and present it to the box office on arrival.
Cannot be cancelled, exchanged or refunded or used in conjunction with any other offer.
|
<?php
/**
* Efficiently run operations on batches of results for any function
* that supports an options array.
*
* This is usually used with elgg_get_entities() and friends,
* elgg_get_annotations(), and elgg_get_metadata().
*
* If you pass a valid PHP callback, all results will be run through that
* callback. You can still foreach() through the result set after. Valid
* PHP callbacks can be a string, an array, or a closure.
* {@link http://php.net/manual/en/language.pseudo-types.php}
*
* The callback function must accept 3 arguments: an entity, the getter
* used, and the options used.
*
* Results from the callback are stored in callbackResult. If the callback
* returns only booleans, callbackResult will be the combined result of
* all calls. If no entities are processed, callbackResult will be null.
*
* If the callback returns anything else, callbackResult will be an indexed
* array of whatever the callback returns. If returning error handling
* information, you should include enough information to determine which
* result you're referring to.
*
* Don't combine returning bools and returning something else.
*
* Note that returning false will not stop the foreach.
*
* @warning If your callback or foreach loop deletes or disables entities
* you MUST call setIncrementOffset(false) or set that when instantiating.
* This forces the offset to stay what it was in the $options array.
*
* @example
* <code>
* // using foreach
* $batch = new ElggBatch('elgg_get_entities', array());
* $batch->setIncrementOffset(false);
*
* foreach ($batch as $entity) {
* $entity->disable();
* }
*
* // using a callback
* $callback = function($result, $getter, $options) {
* var_dump("Looking at annotation id: $result->id");
* return true;
* };
*
* $batch = new ElggBatch('elgg_get_annotations', array('guid' => 2), $callback);
* </code>
*
* @package Elgg.Core
* @subpackage DataModel
* @link http://docs.elgg.org/DataModel/ElggBatch
* @since 1.8
*/
class ElggBatch
implements Iterator {
/**
* The objects to iterate over.
*
* @var array
*/
private $results = array();
/**
* The options array passed to the getter function.
*
* @var array
*/
private $options = array();
/**
* The function used to get results.
*
* @var mixed A string, array, or closure
*/
private $getter = null;
/**
* The number of results to grab at a time.
*
* @var int
*/
private $chunkSize = 25;
/**
* A callback function to pass results through.
*
* @var mixed A string, array, or closure
*/
private $callback = null;
/**
* Start after this many results.
*
* @var int
*/
private $offset = 0;
/**
* Stop after this many results.
*
* @var int
*/
private $limit = 0;
/**
* Number of processed results.
*
* @var int
*/
private $retrievedResults = 0;
/**
* The index of the current result within the current chunk
*
* @var int
*/
private $resultIndex = 0;
/**
* The index of the current chunk
*
* @var int
*/
private $chunkIndex = 0;
/**
* The number of results iterated through
*
* @var int
*/
private $processedResults = 0;
/**
* Is the getter a valid callback
*
* @var bool
*/
private $validGetter = null;
/**
* The result of running all entities through the callback function.
*
* @var mixed
*/
public $callbackResult = null;
/**
* If false, offset will not be incremented. This is used for callbacks/loops that delete.
*
* @var bool
*/
private $incrementOffset = true;
/**
* Batches operations on any elgg_get_*() or compatible function that supports
* an options array.
*
* Instead of returning all objects in memory, it goes through $chunk_size
* objects, then requests more from the server. This avoids OOM errors.
*
* @param string $getter The function used to get objects. Usually
* an elgg_get_*() function, but can be any valid PHP callback.
* @param array $options The options array to pass to the getter function. If limit is
* not set, 10 is used as the default. In most cases that is not
* what you want.
* @param mixed $callback An optional callback function that all results will be passed
* to upon load. The callback needs to accept $result, $getter,
* $options.
* @param int $chunk_size The number of entities to pull in before requesting more.
* You have to balance this between running out of memory in PHP
* and hitting the db server too often.
* @param bool $inc_offset Increment the offset on each fetch. This must be false for
* callbacks that delete rows. You can set this after the
* object is created with {@see ElggBatch::setIncrementOffset()}.
*/
public function __construct($getter, $options, $callback = null, $chunk_size = 25,
$inc_offset = true) {
$this->getter = $getter;
$this->options = $options;
$this->callback = $callback;
$this->chunkSize = $chunk_size;
$this->setIncrementOffset($inc_offset);
if ($this->chunkSize <= 0) {
$this->chunkSize = 25;
}
// store these so we can compare later
$this->offset = elgg_extract('offset', $options, 0);
$this->limit = elgg_extract('limit', $options, 10);
// if passed a callback, create a new ElggBatch with the same options
// and pass each to the callback.
if ($callback && is_callable($callback)) {
$batch = new ElggBatch($getter, $options, null, $chunk_size, $inc_offset);
$all_results = null;
foreach ($batch as $result) {
if (is_string($callback)) {
$result = $callback($result, $getter, $options);
} else {
$result = call_user_func_array($callback, array($result, $getter, $options));
}
if (!isset($all_results)) {
if ($result === true || $result === false || $result === null) {
$all_results = $result;
} else {
$all_results = array();
}
}
if (($result === true || $result === false || $result === null) && !is_array($all_results)) {
$all_results = $result && $all_results;
} else {
$all_results[] = $result;
}
}
$this->callbackResult = $all_results;
}
}
/**
* Fetches the next chunk of results
*
* @return bool
*/
private function getNextResultsChunk() {
// reset memory caches after first chunk load
if ($this->chunkIndex > 0) {
global $DB_QUERY_CACHE, $ENTITY_CACHE;
$DB_QUERY_CACHE = $ENTITY_CACHE = array();
}
// always reset results.
$this->results = array();
if (!isset($this->validGetter)) {
$this->validGetter = is_callable($this->getter);
}
if (!$this->validGetter) {
return false;
}
$limit = $this->chunkSize;
// if someone passed limit = 0 they want everything.
if ($this->limit != 0) {
if ($this->retrievedResults >= $this->limit) {
return false;
}
// if original limit < chunk size, set limit to original limit
// else if the number of results we'll fetch is greater than the original limit
if ($this->limit < $this->chunkSize) {
$limit = $this->limit;
} elseif ($this->retrievedResults + $this->chunkSize > $this->limit) {
// set the limit to the number of results remaining in the original limit
$limit = $this->limit - $this->retrievedResults;
}
}
if ($this->incrementOffset) {
$offset = $this->offset + $this->retrievedResults;
} else {
$offset = $this->offset;
}
$current_options = array(
'limit' => $limit,
'offset' => $offset
);
$options = array_merge($this->options, $current_options);
$getter = $this->getter;
if (is_string($getter)) {
$this->results = $getter($options);
} else {
$this->results = call_user_func_array($getter, array($options));
}
if ($this->results) {
$this->chunkIndex++;
$this->resultIndex = 0;
$this->retrievedResults += count($this->results);
return true;
} else {
return false;
}
}
/**
* Increment the offset from the original options array? Setting to
* false is required for callbacks that delete rows.
*
* @param bool $increment Set to false when deleting data
* @return void
*/
public function setIncrementOffset($increment = true) {
$this->incrementOffset = (bool) $increment;
}
/**
* Implements Iterator
*/
/**
* PHP Iterator Interface
*
* @see Iterator::rewind()
* @return void
*/
public function rewind() {
$this->resultIndex = 0;
$this->retrievedResults = 0;
$this->processedResults = 0;
// only grab results if we haven't yet or we're crossing chunks
if ($this->chunkIndex == 0 || $this->limit > $this->chunkSize) {
$this->chunkIndex = 0;
$this->getNextResultsChunk();
}
}
/**
* PHP Iterator Interface
*
* @see Iterator::current()
* @return mixed
*/
public function current() {
return current($this->results);
}
/**
* PHP Iterator Interface
*
* @see Iterator::key()
* @return int
*/
public function key() {
return $this->processedResults;
}
/**
* PHP Iterator Interface
*
* @see Iterator::next()
* @return mixed
*/
public function next() {
// if we'll be at the end.
if (($this->processedResults + 1) >= $this->limit && $this->limit > 0) {
$this->results = array();
return false;
}
// if we'll need new results.
if (($this->resultIndex + 1) >= $this->chunkSize) {
if (!$this->getNextResultsChunk()) {
$this->results = array();
return false;
}
$result = current($this->results);
} else {
// the function above resets the indexes, so only inc if not
// getting new set
$this->resultIndex++;
$result = next($this->results);
}
$this->processedResults++;
return $result;
}
/**
* PHP Iterator Interface
*
* @see Iterator::valid()
* @return bool
*/
public function valid() {
if (!is_array($this->results)) {
return false;
}
$key = key($this->results);
return ($key !== NULL && $key !== FALSE);
}
}
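The chunking arithmetic in `getNextResultsChunk()` — fetch `chunk_size` rows at a time, honor an overall limit, and optionally stop incrementing the offset so callbacks that delete rows do not skip records — is easy to get wrong. Here is a minimal, framework-free Python sketch of the same pattern; the names (`batched`, the fake getter) are illustrative, not part of Elgg's API.

```python
def batched(getter, options, chunk_size=25, increment_offset=True):
    """Yield results from getter({'limit': ..., 'offset': ...}) in chunks."""
    base_offset = options.get("offset", 0)
    total_limit = options.get("limit", 10)  # Elgg defaults limit to 10
    retrieved = 0
    while True:
        limit = chunk_size
        if total_limit != 0:  # a limit of 0 means "fetch everything"
            remaining = total_limit - retrieved
            if remaining <= 0:
                return
            limit = min(limit, remaining)
        # When the caller deletes each row, the data set shrinks beneath us,
        # so we keep re-reading from the same offset instead of advancing.
        offset = base_offset + retrieved if increment_offset else base_offset
        chunk = getter(dict(options, limit=limit, offset=offset))
        if not chunk:
            return
        retrieved += len(chunk)
        yield from chunk
```

A fake getter backed by a list is enough to exercise the offset/limit bookkeeping:

```python
data = list(range(100))

def fake_getter(opts):
    return data[opts["offset"]:opts["offset"] + opts["limit"]]

list(batched(fake_getter, {"offset": 5, "limit": 12}, chunk_size=4))
```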
|
Braising involves cooking meat or vegetables slowly in a covered dish with a small amount of liquid — water, stock, and perhaps even a splash of wine. It can be done in the oven in a covered casserole, on the stove top in a Dutch oven, or on your counter in a slow-cooker.
Time Saver: Braising requires very little supervision. No basting, stirring, or flipping required. Most recipes involve just a single step: Meat, vegetables, and seasonings are unceremoniously combined, and the meal basically cooks itself while you go about the rest of your day.
Extra Healthful: Cooking meat at high temperatures or over direct heat (such as char-broiling or grilling) increases the formation of harmful compounds, such as acrylamide, HCAs, and PAHs. Experts believe that potential health risks associated with meat consumption can be vastly reduced by choosing healthier cooking methods like braising. (No need to hang up your barbecue tongs entirely, however! See my previous post for tips on making your barbecues healthier.)
|
Zinc(II)-cyclen polyacrylamide gel electrophoresis for detection of mutations in short Ade/Thy-rich DNA fragments.
We describe an improved gel-based method with an additive Zn(2+)-cyclen complex (cyclen, 1,4,7,10-tetraazacyclododecane), Zn(2+)-cyclen-PAGE, for mutation detection in PCR-amplified DNA fragments that contain more than 65% Ade/Thy bases and fewer than 100 base pairs (bp). Existing techniques have difficulty analyzing such short Ade/Thy-rich fragments because the duplexes are disrupted and become undetectable due to binding of Zn(2+)-cyclen to Thy bases. In this strategy, using a PCR primer with a Gua/Cyt-lined sequence attached at its 5'-end, we successfully detected a mutation in an 86-bp Ade/Thy-rich region of the BRCA1 gene from formalin-fixed, paraffin-embedded breast cancer tissue sections.
|
// Copyright (c) 2012-2014 The Bitcoin developers
// Copyright (c) 2014-2015 The Dash developers
// Copyright (c) 2015-2018 The Luxcore developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_RPCSERVER_H
#define BITCOIN_RPCSERVER_H
#include "amount.h"
#include "rpcprotocol.h"
#include "uint256.h"
#include <list>
#include <map>
#include <stdint.h>
#include <string>
#include <httpserver.h>
#include <boost/function.hpp>
#include "univalue/univalue.h"
class CRPCCommand;
namespace RPCServer
{
void OnStarted(boost::function<void ()> slot);
void OnStopped(boost::function<void ()> slot);
void OnPreCommand(boost::function<void (const CRPCCommand&)> slot);
void OnPostCommand(boost::function<void (const CRPCCommand&)> slot);
}
class CBlockIndex;
class CNetAddr;
/** Wrapper for UniValue::VType, which includes typeAny:
 * used to denote a "don't care" type. Only used by RPCTypeCheckObj. */
struct UniValueType {
UniValueType(UniValue::VType _type) : typeAny(false), type(_type) {}
UniValueType() : typeAny(true) {}
bool typeAny;
UniValue::VType type;
};
class JSONRequest
{
public:
UniValue id;
std::string strMethod;
UniValue params;
bool isLongPolling;
/**
* If using batch JSON request, this object won't get the underlying HTTPRequest.
*/
JSONRequest() {
id = NullUniValue;
params = NullUniValue;
req = NULL;
isLongPolling = false;
};
JSONRequest(HTTPRequest *req);
/**
* Start long-polling
*/
void PollStart();
/**
* Ping long-poll connection with an empty character to make sure it's still alive.
*/
void PollPing();
/**
* Returns whether the underlying long-poll connection is still alive.
*/
bool PollAlive();
/**
* End a long poll request.
*/
void PollCancel();
/**
* Return the JSON result of a long poll request
*/
void PollReply(const UniValue& result);
void parse(const UniValue& valRequest);
// FIXME: make this private?
HTTPRequest *req;
};
class JSONRPCRequest
{
public:
UniValue id;
std::string strMethod;
UniValue params;
bool fHelp;
std::string URI;
std::string authUser;
bool isLongPolling;
/**
* If using batch JSON request, this object won't get the underlying HTTPRequest.
*/
JSONRPCRequest() {
id = NullUniValue;
params = NullUniValue;
fHelp = false;
req = NULL;
isLongPolling = false;
};
JSONRPCRequest(HTTPRequest *_req);
/**
* Start long-polling
*/
void PollStart();
/**
* Ping long-poll connection with an empty character to make sure it's still alive.
*/
void PollPing();
/**
* Returns whether the underlying long-poll connection is still alive.
*/
bool PollAlive();
/**
* End a long poll request.
*/
void PollCancel();
/**
* Return the JSON result of a long poll request
*/
void PollReply(const UniValue& result);
void parse(const UniValue& valRequest);
// FIXME: make this private?
HTTPRequest *req;
};
/** Query whether RPC is running */
bool IsRPCRunning();
/**
 * Set the RPC warmup status. When this is done, all RPC calls will error out
 * immediately with RPC_IN_WARMUP.
 */
void SetRPCWarmupStatus(const std::string& newStatus);
/* Mark warmup as done. RPC calls will be processed from now on. */
void SetRPCWarmupFinished();
/* Returns the current warmup state. */
bool RPCIsInWarmup(std::string* statusOut);
/**
* Type-check arguments; throws JSONRPCError if wrong type given. Does not check that
* the right number of arguments are passed, just that any passed are the correct type.
* Use like: RPCTypeCheck(params, boost::assign::list_of(str_type)(int_type)(obj_type));
*/
void RPCTypeCheck(const UniValue& params,
const std::list<UniValue::VType>& typesExpected,
bool fAllowNull = false);
/**
 * Check for expected keys/value types in an Object.
 * Use like: RPCTypeCheckObj(object, boost::assign::map_list_of("name", str_type)("value", int_type));
 */
void RPCTypeCheckObj(const UniValue& o,
const std::map<std::string, UniValueType>& typesExpected,
bool fAllowNull = false);
/** Opaque base class for timers returned by NewTimerFunc.
* This provides no methods at the moment, but makes sure that delete
* cleans up the whole state.
*/
class RPCTimerBase
{
public:
virtual ~RPCTimerBase() {}
};
/**
* RPC timer "driver".
*/
class RPCTimerInterface
{
public:
virtual ~RPCTimerInterface() {}
/** Implementation name */
virtual const char *Name() = 0;
/** Factory function for timers.
* RPC will call the function to create a timer that will call func in *millis* milliseconds.
* @note As the RPC mechanism is backend-neutral, it can use different implementations of timers.
* This is needed to cope with the case in which there is no HTTP server, but
 * only GUI RPC console, and to break the dependency of rpcserver on httprpc.
*/
virtual RPCTimerBase* NewTimer(boost::function<void(void)>& func, int64_t millis) = 0;
};
/** Set the factory function for timers */
void RPCSetTimerInterface(RPCTimerInterface *iface);
/** Set the factory function for timers, but only if unset */
void RPCSetTimerInterfaceIfUnset(RPCTimerInterface *iface);
/** Unset factory function for timers */
void RPCUnsetTimerInterface(RPCTimerInterface *iface);
/**
* Run func nSeconds from now.
* Overrides previous timer <name> (if any).
*/
void RPCRunLater(const std::string& name, boost::function<void(void)> func, int64_t nSeconds);
typedef UniValue(*rpcfn_type)(const UniValue& params, bool fHelp);
class CRPCCommand
{
public:
std::string category;
std::string name;
rpcfn_type actor;
bool okSafeMode;
bool threadSafe;
bool reqWallet;
};
/**
* LUX RPC command dispatcher.
*/
class CRPCTable
{
private:
std::map<std::string, const CRPCCommand*> mapCommands;
public:
CRPCTable();
const CRPCCommand* operator[](const std::string& name) const;
std::string help(std::string name) const;
/**
* Execute a method.
* @param method Method to execute
* @param params Array of arguments (JSON objects)
* @returns Result of the call.
* @throws an exception (UniValue) when an error happens.
*/
UniValue execute(const std::string& method, const UniValue& params) const;
/**
* Returns a list of registered commands
* @returns List of registered commands.
*/
std::vector<std::string> listCommands() const;
};
extern const CRPCTable tableRPC;
/**
* Utilities: convert hex-encoded Values
* (throws error if not hex).
*/
extern uint256 ParseHashV(const UniValue& v, std::string strName);
extern uint256 ParseHashO(const UniValue& o, std::string strKey);
extern std::vector<unsigned char> ParseHexV(const UniValue& v, std::string strName);
extern std::vector<unsigned char> ParseHexO(const UniValue& o, std::string strKey);
extern int64_t nWalletUnlockTime;
extern CAmount AmountFromValue(const UniValue& value);
extern double GetDifficulty(const CBlockIndex* blockindex = NULL);
extern CBlockIndex* GetLastBlockOfType(const int nPoS);
double GetPoWMHashPS();
double GetPoSKernelPS();
extern std::string HelpRequiringPassphrase();
extern std::string HelpExampleCli(std::string methodname, std::string args);
extern std::string HelpExampleRpc(std::string methodname, std::string args);
extern void EnsureWalletIsUnlocked();
extern UniValue getconnectioncount(const UniValue& params, bool fHelp); // in rpcnet.cpp
extern UniValue getpeerinfo(const UniValue& params, bool fHelp);
extern UniValue ping(const UniValue& params, bool fHelp);
extern UniValue addnode(const UniValue& params, bool fHelp);
//extern UniValue disconnectnode(const UniValue& params, bool fHelp);
extern UniValue getaddednodeinfo(const UniValue& params, bool fHelp);
extern UniValue getnettotals(const UniValue& params, bool fHelp);
extern UniValue setban(const UniValue& params, bool fHelp);
extern UniValue listbanned(const UniValue& params, bool fHelp);
extern UniValue clearbanned(const UniValue& params, bool fHelp);
extern UniValue dumpprivkey(const UniValue& params, bool fHelp); // in rpcdump.cpp
extern UniValue importprivkey(const UniValue& params, bool fHelp);
extern UniValue importaddress(const UniValue& params, bool fHelp);
extern UniValue importpubkey(const UniValue& params, bool fHelp);
extern UniValue dumphdinfo(const UniValue& params, bool fHelp);
extern UniValue dumpwallet(const UniValue& params, bool fHelp);
extern UniValue importwallet(const UniValue& params, bool fHelp);
extern UniValue bip38encrypt(const UniValue& params, bool fHelp);
extern UniValue bip38decrypt(const UniValue& params, bool fHelp);
extern UniValue importprunedfunds(const UniValue& params, bool fHelp);
extern UniValue removeprunedfunds(const UniValue& params, bool fHelp);
extern UniValue setstakesplitthreshold(const UniValue& params, bool fHelp);
extern UniValue getstakesplitthreshold(const UniValue& params, bool fHelp);
extern UniValue getgenerate(const UniValue& params, bool fHelp); // in rpcmining.cpp
extern UniValue setgenerate(const UniValue& params, bool fHelp);
extern UniValue getnetworkhashps(const UniValue& params, bool fHelp);
extern UniValue gethashespersec(const UniValue& params, bool fHelp);
extern UniValue getmininginfo(const UniValue& params, bool fHelp);
extern UniValue prioritisetransaction(const UniValue& params, bool fHelp);
extern UniValue getblocktemplate(const UniValue& params, bool fHelp);
extern UniValue getwork(const UniValue& params, bool fHelp);
extern UniValue submitblock(const UniValue& params, bool fHelp);
extern UniValue estimatefee(const UniValue& params, bool fHelp);
extern UniValue estimatepriority(const UniValue& params, bool fHelp);
extern UniValue estimatesmartfee(const UniValue& params, bool fHelp);
extern UniValue estimatesmartpriority(const UniValue& params, bool fHelp);
extern UniValue getnewaddress(const UniValue& params, bool fHelp); // in rpcwallet.cpp
extern UniValue getaccountaddress(const UniValue& params, bool fHelp);
extern UniValue getrawchangeaddress(const UniValue& params, bool fHelp);
extern UniValue setaccount(const UniValue& params, bool fHelp);
extern UniValue getaccount(const UniValue& params, bool fHelp);
extern UniValue getaddressesbyaccount(const UniValue& params, bool fHelp);
extern UniValue sendtoaddress(const UniValue& params, bool fHelp);
extern UniValue sendtoaddressix(const UniValue& params, bool fHelp);
extern UniValue signmessage(const UniValue& params, bool fHelp);
extern UniValue verifymessage(const UniValue& params, bool fHelp);
extern UniValue getreceivedbyaddress(const UniValue& params, bool fHelp);
extern UniValue getreceivedbyaccount(const UniValue& params, bool fHelp);
extern UniValue getbalance(const UniValue& params, bool fHelp);
extern UniValue getunconfirmedbalance(const UniValue& params, bool fHelp);
extern UniValue movecmd(const UniValue& params, bool fHelp);
extern UniValue sendfrom(const UniValue& params, bool fHelp);
extern UniValue sendmany(const UniValue& params, bool fHelp);
extern UniValue addmultisigaddress(const UniValue& params, bool fHelp);
extern UniValue createmultisig(const UniValue& params, bool fHelp);
extern UniValue createwitnessaddress(const UniValue& params, bool fHelp);
extern UniValue listreceivedbyaddress(const UniValue& params, bool fHelp);
extern UniValue listreceivedbyaccount(const UniValue& params, bool fHelp);
extern UniValue listtransactions(const UniValue& params, bool fHelp);
extern UniValue listaddressgroupings(const UniValue& params, bool fHelp);
extern UniValue listaddressbalances(const UniValue& params, bool fHelp);
extern UniValue listaccounts(const UniValue& params, bool fHelp);
extern UniValue listsinceblock(const UniValue& params, bool fHelp);
extern UniValue gettransaction(const UniValue& params, bool fHelp);
extern UniValue abandontransaction(const UniValue& params, bool fHelp);
extern UniValue backupwallet(const UniValue& params, bool fHelp);
extern UniValue keypoolrefill(const UniValue& params, bool fHelp);
extern UniValue walletpassphrase(const UniValue& params, bool fHelp);
extern UniValue walletpassphrasechange(const UniValue& params, bool fHelp);
extern UniValue walletlock(const UniValue& params, bool fHelp);
extern UniValue encryptwallet(const UniValue& params, bool fHelp);
extern UniValue validateaddress(const UniValue& params, bool fHelp);
extern UniValue getaddressmempool(const UniValue& params, bool fHelp);
extern UniValue getaddressutxos(const UniValue& params, bool fHelp);
extern UniValue getaddressdeltas(const UniValue& params, bool fHelp);
extern UniValue getaddresstxids(const UniValue& params, bool fHelp);
extern UniValue getaddressbalance(const UniValue& params, bool fHelp);
extern UniValue getspentinfo(const UniValue& params, bool fHelp);
extern UniValue purgetxindex(const UniValue& params, bool fHelp);
extern UniValue getinfo(const UniValue& params, bool fHelp);
extern UniValue getstateinfo(const UniValue& params, bool fHelp);
extern UniValue getwalletinfo(const UniValue& params, bool fHelp);
extern UniValue getblockchaininfo(const UniValue& params, bool fHelp);
extern UniValue getnetworkinfo(const UniValue& params, bool fHelp);
extern UniValue setmocktime(const UniValue& params, bool fHelp);
extern UniValue reservebalance(const UniValue& params, bool fHelp);
extern UniValue multisend(const UniValue& params, bool fHelp);
extern UniValue autocombinerewards(const UniValue& params, bool fHelp);
extern UniValue getstakingstatus(const UniValue& params, bool fHelp);
extern UniValue callcontract(const UniValue& params, bool fHelp);
extern UniValue createcontract(const UniValue& params, bool fHelp);
extern UniValue sendtocontract(const UniValue& params, bool fHelp);
extern UniValue getrawtransaction(const UniValue& params, bool fHelp); // in rpcrawtransaction.cpp
extern UniValue listunspent(const UniValue& params, bool fHelp);
extern UniValue lockunspent(const UniValue& params, bool fHelp);
extern UniValue listlockunspent(const UniValue& params, bool fHelp);
extern UniValue createrawtransaction(const UniValue& params, bool fHelp);
extern UniValue fundrawtransaction(const UniValue& params, bool fHelp);
extern UniValue decoderawtransaction(const UniValue& params, bool fHelp);
extern UniValue decodescript(const UniValue& params, bool fHelp);
extern UniValue signrawtransaction(const UniValue& params, bool fHelp);
extern UniValue sendrawtransaction(const UniValue& params, bool fHelp);
extern UniValue gethexaddress(const UniValue& params, bool fHelp);
extern UniValue fromhexaddress(const UniValue& params, bool fHelp);
extern UniValue getblockcount(const UniValue& params, bool fHelp); // in rpcblockchain.cpp
extern UniValue getblockhashes(const UniValue& params, bool fHelp);
extern UniValue getbestblockhash(const UniValue& params, bool fHelp);
extern UniValue getdifficulty(const UniValue& params, bool fHelp);
extern UniValue settxfee(const UniValue& params, bool fHelp);
extern UniValue getmempoolinfo(const UniValue& params, bool fHelp);
extern UniValue getrawmempool(const UniValue& params, bool fHelp);
extern UniValue getblockhash(const UniValue& params, bool fHelp);
extern UniValue getblock(const UniValue& params, bool fHelp);
extern UniValue getblockheader(const UniValue& params, bool fHelp);
extern UniValue gettxoutsetinfo(const UniValue& params, bool fHelp);
extern UniValue gettxout(const UniValue& params, bool fHelp);
extern UniValue verifychain(const UniValue& params, bool fHelp);
extern UniValue getchaintips(const UniValue& params, bool fHelp);
extern UniValue getchaintxstats(const UniValue& params, bool fHelp);
extern UniValue switchnetwork(const UniValue& params, bool fHelp);
extern UniValue invalidateblock(const UniValue& params, bool fHelp);
extern UniValue reconsiderblock(const UniValue& params, bool fHelp);
extern UniValue darksend(const UniValue& params, bool fHelp);
extern UniValue spork(const UniValue& params, bool fHelp);
extern UniValue masternode(const UniValue& params, bool fHelp);
extern UniValue getaccountinfo(const UniValue& params, bool fHelp);
//extern UniValue masternodelist(const UniValue& params, bool fHelp);
//extern UniValue mnbudget(const UniValue& params, bool fHelp);
//extern UniValue mnbudgetvoteraw(const UniValue& params, bool fHelp);
//extern UniValue mnfinalbudget(const UniValue& params, bool fHelp);
//extern UniValue mnsync(const UniValue& params, bool fHelp);
extern UniValue generate(const UniValue& params, bool fHelp);
extern UniValue getstorage(const UniValue& params, bool fHelp);
extern UniValue listcontracts(const UniValue& params, bool fHelp);
extern UniValue gettransactionreceipt(const UniValue& params, bool fHelp);
extern UniValue searchlogs(const UniValue& params, bool fHelp);
extern UniValue waitforlogs(const UniValue& params, bool fHelp);
extern UniValue pruneblockchain(const UniValue& params, bool fHelp);
bool StartRPC();
void InterruptRPC();
void StopRPC();
std::string JSONRPCExecBatch(const UniValue& vReq);
void RPCNotifyBlockChange(bool ibd, const CBlockIndex* pindex);
#endif // BITCOIN_RPCSERVER_H
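The core of this header is the `CRPCTable` pattern: a map from method names to handler function pointers, with a single `execute()` entry point that throws on unknown methods. The following is an illustrative, language-neutral Python sketch of that dispatch pattern — not the Luxcore implementation; names such as `RPCTable` and `RPCError` are invented for the example.

```python
class RPCError(Exception):
    """Raised for an unknown method, mirroring the UniValue throw in execute()."""


class RPCTable:
    """Minimal name -> handler dispatcher in the style of CRPCTable."""

    def __init__(self):
        self._commands = {}

    def register(self, name, actor):
        # Analogous to populating mapCommands with CRPCCommand entries.
        self._commands[name] = actor

    def list_commands(self):
        # Analogous to CRPCTable::listCommands().
        return sorted(self._commands)

    def execute(self, method, params):
        # Analogous to CRPCTable::execute(): look up the actor and call it.
        try:
            actor = self._commands[method]
        except KeyError:
            raise RPCError("Method not found: %s" % method)
        return actor(params)
```

Registering a handler and dispatching by name then looks like:

```python
table = RPCTable()
table.register("getblockcount", lambda params: 42)
table.execute("getblockcount", [])
```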
|
Product Description
Use coupon code "A3S2" to get an extra 25% off this flight controller!
Description: The Eagle A3 Super II is our top of the line flight stabilization system designed specifically for fixed-wing aircraft. The A3 Super II has an integrated high precision 6-axis (3 gyro + 3 accelerometer) MEMS sensor with advanced attitude and PID control algorithms. The gyro can accurately detect the angular velocity and attitude of the aircraft and issue commands to all servos, which enables perfect balance and stability throughout the flight. The A3 Super II has built-in capabilities for delta-wing (flying-wing), v-tail, remote master gain adjustment, and separate dual aileron and elevator control. Finally, two of the most exciting new features on this gyro are the One-Click Auto-Recovery which allows you to return your plane to upright, level flight at the flip of a switch (similar to Horizon's SAFE technology) and the One-Click Hover which allows you to put your plane into a perfect hover at the flip of a switch.
Tech Note: Always make sure you are using the latest firmware and programming software with your Super II gyro. Click here to download the latest firmware.
Important: The A3 Super II is an advanced flight stabilizer. You must have a Windows PC, know how to install USB drivers, and have a strong working knowledge of Windows applications in order to program or update the firmware on this gyro. If you are interested in a 6 axis gyro which does not require a computer for setup and programming, please consider the Eagle A3.
Gyros (also called flight stabilizers) help keep your airplane stable during take-off, flight maneuvers, and landings, which can be helpful on windy days or when learning how to fly RC planes. Advanced gyros, like the A3 Super II, also include accelerometers which allow you to turn your plane "right side up" if you lose orientation of your plane during flight. Gyros also help you master aerobatic maneuvers like knife edges and hovering, as the system can "lock" in a plane's position/trajectory. Once thought to be for beginners only, gyros are now common in all classes of aircraft and are utilized by all levels of flyers. Gyros are great for learning, they let pilots practice advanced aerobatics, they give you peace of mind in less than ideal flight conditions, and they can often help you avoid costly crash damage.
Features:
- Works with all major receiver brands including Spektrum, Futaba, Hitec, JR, Tactic, and more.
- Integrated 6-axis (3 gyro + 3 accelerometer) 32-bit MEMS sensor for self-stability and self-balance.
- Advanced brown-out fast-recovery ability provides better security and reliability.
- 6 flight modes:
  - Normal Mode (2D Mode): the standard flight stabilization mode found on most airplane gyros.
  - 3D Flight Mode: attempts to hold the LAST position the aircraft was in. For example, if you put the airplane into a knife edge and then activate 3D mode, the airplane will hold the knife edge as long as the pilot provides appropriate throttle input.
  - Auto-Recovery Mode (also called Auto Balance Mode): automatically "rights" or balances your aircraft if you lose orientation or if the aircraft becomes inverted. This functionality is similar to Spektrum's SAFE technology.
  - Auto-Hover Mode: holds the airplane in a perfect hover (nose pointed toward the sky) on 3D-capable aircraft.
  - User Defined: allows you to select different modes (Normal, 3D, or Off) for each gyro axis.
  - Gyro Deactivated Mode: turns the gyro off to allow unassisted/non-stabilized flight.
- 2 stick control modes: Manual Mode (MM) and Auto Mode (RR/AR).
- Various wing types: 1AIL+1ELE, 2AIL+1ELE, 1AIL+2ELE, 2AIL+2ELE, delta-wing, and V-tail.
- Independent gyro gain adjustment and gyro ratio selection for each flight mode.
- Separate adjustments for servo travel limits.
- Up to 333Hz servo operating frequency, compatible with all analog and digital servos.
- 5-level response rate setting allows you to use it on gas-powered planes.
- Compatible with HV (7.4V) receivers and servos.
- Flat or upright mounting orientations.
- Newly designed config GUI makes gyro setup simple and intuitive.
|
Nick Carter Revealed His Baby's Gender On 'DWTS' & Fans Are Pretty Psyched About It
It’s a boy! Nick Carter and his wife, Lauren Kitt Carter, revealed their baby's gender live on Dancing with the Stars Monday night, and his fans were super pumped about it. The catch? It was a surprise to both parents, too. After the former Backstreet Boy received his scores with dancing partner Sharna Burgess, Nick's wife Lauren was brought on stage. There, the Carters opened a giant silver box with a glittering purple bow, releasing a cloud of blue balloons. “Ah, you’re having a boy!” co-host Erin Andrews told the happy couple. “It’s a Backstreet boy!”
The fun reveal followed up an awesome night of performances for Nick and his partner Sharna Burgess, including a contemporary dance dedicated to Lauren. Throughout the night, each dancing pair picked an idol to inspire their choreography; and while Nick mentioned that there are many people he admires, he felt that his wife was the obvious choice. “Lauren is the woman that I dreamed of,” he said. “She’s my savior in a lot of ways.” The couple shared their pregnancy story at the beginning of the segment, describing the pain they felt after suffering a miscarriage. Now, after a year of trying, Lauren is 16 weeks pregnant.
On top of Nick's happy baby news, there was this: The performance earned a 10 from all three judges, though Julianne Hough shouted, “I wish I had an eleven!” Bruno Tonioli described the piece as “a love poem perfectly visualized through dance," and Carrie Ann Inaba told the pair, “You took my breath away.” The perfect score earned Nick and Burgess a spot in next week’s episode.
After the show, Nick and Lauren each took to Twitter to follow up on the exciting announcement:
And it wasn't long before congratulatory tweets started flooding in — including some from a few celebs:
|
On February 9, Russian Internet users started sharing a recording of Vesti-Kamchatka news anchor Alexandra Novikova bursting into hysterical laughter while reporting a three-percent increase in certain social-security payments. Novikova loses her composure when explaining that the state's new indexing of payments will leave recipients with "a little more than 1,500 rubles" ($23.50) a month to help with payments for medicines, health resort packages, and "international travel" (sic) to these facilities.
Drawing on a "set of social services," Novikova explains in the report, beneficiaries can spend almost 900 rubles ($14.09) on medicines, 137 rubles ($2.15) on vouchers, and the remaining balance "on international travel." When uttering the last phrase, the news anchor breaks down in laughter and says, "I honestly really tried not to laugh right there." (Laughter begins around 0:48 in the video.)
[Translated from Russian:] "The Vesti anchor on Kamchatka burst out laughing in a recording of the broadcast while discussing the three-percent increase in payments to benefit recipients: of the monthly payments, 264 rubles are allocated for a health resort stay and travel to it.
VGTRK has removed the copy of the original report from YouTube and social networks." https://t.co/YcGGfOP5Wb pic.twitter.com/a6EgmA6YZ4 — TJ (@tjournal) February 10, 2020
Novikova then jokes that she’s happy not to have laughed during a live broadcast. The website TJournal says the footage that leaked online is likely one of several takes that were not included in the final report aired on television. TJournal reports that Alexandra Novikova does work for GTRK Kamchatka and has appeared in other reports aired on Vesti. The studio that recorded the report also belongs to the state television network, according to TJournal.
At the time of this writing, GTRK Kamchatka had removed the original report featuring Novikova from YouTube and VKontakte, though clips of the anchor laughing continue to circulate online. GTRK Kamchatka does not have its own separate website.
TJournal also points out that Novikova incorrectly reported the new social payments for “international” travel. Starting on February 1, 2020, benefits paid to three types of recipients went up by three percent: disabled combat veterans, persons exposed to radiation, and persons honored as Heroes of the Soviet Union and Russia and Heroes of Socialist Labor.
These beneficiaries will now receive 1,155 rubles and 6 kopecks (about $18.08) a month, covering essential medicines (889 rubles and 66 kopecks, or $13.92), health resort packages to prevent major diseases (137 rubles and 63 kopecks, or $2.15), and free travel on the local commuter rail plus roundtrip intercity transport (127 rubles and 77 kopecks, or $2).
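As a sanity check, the three itemized components do sum to the monthly total quoted in the report (amounts in rubles; the variable names are mine):

```python
# Check that the three benefit components quoted in the report
# add up to the stated monthly total of 1,155.06 rubles.
medicines = 889.66   # essential medicines
resort = 137.63      # health resort packages
transport = 127.77   # commuter rail / intercity transport
total = round(medicines + resort + transport, 2)
print(total)  # 1155.06
```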
Alexey Kostelyov, the director of VGTRK’s Kamchatka branch, told the publication Pdmnews that he knows the identity of the individual who leaked the footage on social media. He says he plans to punish the person who was responsible, and he says anchor Alexandra Novikova is not in any trouble.
(c)MEDUZA 2020
|
Episode-569- Listener Feedback 12-13-10
So on a Monday what else would we be doing but answering your emails and questions. Today we have great ones like getting started hunting as a teen, life insurance on kids, more proof that the global warming scare is total bullshit and more of your questions and commentary. To be on a show like this send your email to jack @ thesurvivalpodcast.com with “question for jack” in the subject line.
Remember if you have a question and background information ask the question (in one to two sentences first) followed by your background information. Due to extremely high email rates this will give you the best chance of getting your email on the air.
Join me today as we discuss…
Is there any reason to have life insurance on kids at all
Is water fluoridation required by law or simply local policy
Are gemstones a good way to invest as an alternative to gold and silver
Remember to comment, chime in and tell us your thoughts; this podcast is one man’s opinion, not a lecture or sermon. Also please enter our listener appreciation contest and help spread the word about our show. Also remember you can call in your questions and comments to 866-65-THINK and you might hear yourself on the air.
29 Responses to Episode-569- Listener Feedback 12-13-10
A big part of dental hygiene is eating good food. Avoiding eating too much sugars and grains which are food for bacteria, and eat lots of stuff you need to chew on like vegetables, which will clean your teeth.
Mandatory Laws:
Twelve states, Puerto Rico, and the District of Columbia have laws intended to provide statewide fluoridation. These states and the year that the fluoridation legislation was passed are listed below:
District of Columbia (1952)
Has only one water system and it has been fluoridated since 1952.
2008 UPDATE: Population receiving optimally fluoridated water: 100%
California (1995)
Fluoridation is mandated for communities of 10,000 or more service connections (estimated 25,000 population).
“Outside” funds must be found for purchase, installation, and operations of the fluoridation system.
The law does not address water supply wholesalers.
The law sets a MCL of 2.0 mg/L.
California’s law cannot be enforced unless “outside” funds are made
available to the community for purchase, installation, and operation of the fluoridation system.
UPDATE:
• Implementation of mandatory fluoridation began in 2007, affecting approximately 18 million Southern Californians.
• Recommended range of 0.7 to 0.8 parts per million
• 2006 rates for California population receiving optimally fluoridated water: 27.1% (9,881,390 people)
* see FAN’s California NewsTracker
Connecticut (1965)
set lower limits on the size of the communities which must comply.
Fluoridation is mandated for communities with populations of 20,000 or more and
natural fluoride content of less than 0.8 mg/L.
Fluoridation levels must be maintained between 0.8-1.2 mg/L.
2008 UPDATE: Population receiving optimally fluoridated water: 88.9%
Delaware (1998)
Fluoridation is mandated for all municipalities but not rural water districts. State
funds will pay for fluoridation equipment, but not chemicals, for three years from
date of passage of the law. Delaware, which had previously passed a mandatory law in 1968,
changed it to require a referendum in 1974, then changed it again to a mandatory law in 1998.
Delaware provides funds for fluoridation equipment for 3 years from the date of passage of the law.
2008 UPDATE: Population receiving optimally fluoridated water: 73.6%
Georgia (1973)
Contains provisions which allow a community to exempt itself from compliance with the
State law, if a community decides it does not wish to institute this public health measure.
Georgia’s law cannot be enforced unless money is made available to the community by the state.
Law mandates adding fluoride to all incorporated communities.
The fluoride level must be no greater than 1 ppm.
Exemption to fluoridation is by referendum.
The law provides for “non-compliance” unless the state makes funds available for the
cost of the fluoridation equipment, the installation of such equipment and the
materials and chemicals required for six months.
The law provides tax deduction for cost of device to remove fluoride if person
deemed allergic and advised by physician or approved by the Department of
Human Resources.
2008 UPDATE: Population receiving optimally fluoridated water: 95.8%
• Georgia water worker fired on Nov 20, 2008, for refusing to purchase and add fluoride to water system
Illinois (1967)
The law provides for addition of fluoride according to rules of the Department of Public Health.
The fluoride levels must not be less than 0.9 or more than 1.2 mg/L.
Regulations specify adding fluoride to all water supplies when the fluoride
concentration is less than 0.7 mg/L.
2008 UPDATE: Population receiving optimally fluoridated water: 98.9%
Kentucky (1966)
Kentucky statutes clearly delegate powers to the State Board of Health to adopt regulations
necessary to protect the dental health of the people. Under this law, Kentucky established
standards for approval of public water supplies. These administrative regulations have been
challenged in the courts and upheld. Administrative regulations states that fluoridation is required
for all communities with a population of 1,500 or more.
2008 UPDATE: Population receiving optimally fluoridated water: 99.8%
Louisiana (2008)
UPDATE:
• The state Legislature approved and Gov. Bobby Jindal recently signed into law Act 761, which requires
Louisiana public water systems that serve 5,000 or more customers to add fluoride to drinking water.
• Act 761 states that utilities are not required to move ahead with fluoridation unless the state
identifies sufficient funds to cover those costs.
• The new law also allows residents to opt out of fluoridation through a petition
signed by at least 15 percent of registered voters and a municipal election.
Minnesota (1967)
Fluoridation is mandated for all communities except where natural fluoride content
conforms with established regulations of the Board of Health.
Fluoride levels are to be established by Board of Health regulations.
Regulations set levels at “average concentration of 1.2 mgs. per liter” and
neither less than 0.9 mgs. nor more than 1.5 mgs.
2008 UPDATE: Population receiving optimally fluoridated water: 98.7%
Nebraska (1973)
As of 2000: Contains provisions which allow a community to exempt itself from compliance with the State law,
if a community decides it does not wish to institute this public health measure.
The law mandates adding fluoride to all political subdivisions.
It provides an exemption by adoption of an ordinance by initiative. Fluoride is not
to be added if the drinking water has a concentration of 0.7 mg/L or greater.
Fluorides must be maintained in the range of 0.8-1.5 mg/L; optimum range 1.0-1.3 mg/L.
2008 UPDATE: Population receiving optimally fluoridated water: 69.8%
• In April 2008, the Nebraska Legislature passed LB 245. This piece of legislation requires all cities
with a population greater than 1,000 to add fluoride to their water supply by June 1, 2010.
The legislature included an opt-out provision in LB 245. Either by vote of city council or public petition,
the question of fluoridation can be put to the vote of the people.
• Summary of Nov 4, 2008, Fluoridation Referendums: 80% (49 out of 61) communities voted against fluoridation.
• FAN’s NewsTracker on Nebraska.
Ohio (1969)
Law mandates adding fluoride to systems supplying a population of 5,000 or more when natural
content is less than 0.8 mg/L
The system must maintain a fluoride level between 0.8 and 1.2 mg/L.
Ohio has provisions which allow a community to exempt itself from compliance with
the State law, if a community decides it does not wish to institute this public health measure.
Ohio placed a time limit of 240 days on the period during which a referendum concerning fluoridation could be held.
Ohio provides funds for fluoridation equipment and chemicals.
2008 UPDATE: Population receiving optimally fluoridated water: 89.3%
Puerto Rico (1998)
By the passage of legislation in 1952, Puerto Rico provided money for adding fluoride to the
water of those aqueducts of the Island of Puerto Rico as may be suitable therefor, as a
preventive to dental caries. This, in effect, made fluoridation mandatory in Puerto Rico, but it
was not enforced and as of 1997, there was no water fluoridation in Puerto Rico. In September
1998, the Governor of Puerto Rico signed into law a mandatory requirement for water
fluoridation. It will be implemented in phases and by the year 2000, 75% of the population in
Puerto Rico should be drinking fluoridated water.
2008 UPDATE: While FAN is not aware of any fluoridation scheme in Puerto Rico, in 2006,
the Association of State and Territorial Dental Directors selected the communities of
Barranquitas, Cayey, and Fajardo-Ceiba for the 2006 Community Water Fluoridation Award Recipients.
NOTE: “Water fluoridation was instituted in Puerto Rico during the years 1953 and 1954.
However, during the latter part of the 1980’s, water fluoridation was discontinued due to budgetary constraints…”
South Dakota (1969)
Fluoridation is mandated for all communities of 500 or more except where natural
fluoride content conforms to State Department of Health regulations.
Regulations specify adding fluoride when the natural content is less than 0.9
mg/L and requires the system to maintain the fluoride concentration within a
range of 0.9 mg/L to 1.7 mg/L with an average level of 1.2 mg/L.
Public vote by special election was allowed, if petition filed within 120 days of
passage of the law. Special election to be held within 90-120 days after date of
filing petitions. Provides for reimbursement for actual cost of acquiring and
installing equipment, excluding chemicals.
2008 UPDATE: Population receiving optimally fluoridated water: 95.0%
Nevada (1999)
Nevada passed their law to apply only to counties over 400,000 population and only to water
systems in that county that serve a population of 100,000 or more. This applies to 4 water
systems in Clark County [Las Vegas]. The law also requires that an advisory question be placed
on the ballot in that county at the general election of November 7, 2000, asking whether
fluoridation of the water should cease in any water system in that county. State regulations
required water systems in Clark County to fluoridate by March 1, 2000. Fluoridation passed on
November 7, 2000.
It requires the fluoride level to be maintained between 0.7 mg/L and 1.2 mg/L. It also exempts any
well that is less than 15% of the total average annual water production of the water system.
The law also required a referendum to be held in Clark county on November 7, 2000 to determine
if fluoridation should be discontinued. Fluoridation was approved on November 7, 2000.
Nevada, which had passed a law in 1967 requiring a public vote before fluoridation, changed their
law in 1999 to mandatory fluoridation in all counties with populations greater than 400,000.
2008 UPDATE: Population receiving optimally fluoridated water: 72.0%
OTHER:
Michigan (1968)
passed a mandatory state law in 1968, with a lower limit population of 1,000 on the size
of the community which must comply, but in 1978, changed their law from “shall fluoridate” to “fluoridate.”
2008 UPDATE: Population receiving optimally fluoridated water: 90.9%
Massachusetts
Law enables a community through a Board of Health order to implement fluoridation. Implementation is
subject to a 90-day waiting period during which a petition for referendum may be filed.
2008 UPDATE: Population receiving optimally fluoridated water: 59.1%
Maine (1957)
has a law which requires a public vote before fluoridation can be instituted.
2008 UPDATE: Population receiving optimally fluoridated water: 79.6%
New Hampshire (1959)
has a law which requires a public vote before fluoridation can be instituted.
2008 UPDATE: Population receiving optimally fluoridated water: 42.6%
Utah (1976)
has a law which requires a public vote before fluoridation can be instituted.
2008 UPDATE: Population receiving optimally fluoridated water: 54.3%
Some additional thoughts on life insurance for kids: if you ever lose a child, you’re not likely to want to return to work for a very long time. Life insurance can provide you the umbrella to enable you to stay home as long as needed.
Hey Jack, great show. As a person who has lived with the tankless water heater, I can’t give them high marks. We had one that serviced our 2 bathrooms and a second for the laundry and kitchen. They are temperamental and we ended up going through 4 of them in less than 5 years before we figured out that we needed a pressure regulator to keep from blowing up the heat coils. We finally decided that our best bet was a compromise.. for space reasons, we kept the on-demand heater in the laundry room, but put in a regular water heater in the bath area.
I have the Bosch WR430-3K and have had it for five years now. I like it, but every once in a while I have to fine-tune the screw that determines when the exhaust fan shuts off after the tap is turned off; otherwise it will run for close to an hour instead of a few seconds. A 100 lb tank of propane lasts me a bit over a year. I use the woodstove for my dishwater and even adjust the water level in my washing machine and pour in hot water heated on the stove, which results in a warm wash. I also heat water on my woodstove for dishes six months of the year. Total hot water cost for me is about 70 bucks a year.
Good answer to Aden’s question. Take a look at Appleseed. We take folks from “just picked up this rifle on the way here” to shooting sub 4MOA groups in 2 days.
When the student is ready, the teacher appears.
This is a nonprofit group, and 100% of the work is done by selfless volunteers. We want all Americans to learn to shoot safely, and hear some firsthand tales of the sacrifice our fathers & mothers endured so we would have this right.
If you PM the shoot boss in your area and ask, you can usually get a loaner LTR (Liberty Training Rifle) for use at the event. Laws are different in each state as to age and who can loan you a rifle so just ask the local shoot boss. If you really want to learn, and your folks are cool with it. We’ll make it happen for you.
We had a 7,8,9 and 12 year old at our last event in Oregon. By the end of day 2 the 7 year old was shooting a 135 on the Army Qualifying Test – scoring Marksman.
Being young and flexible, with an open mind and teachable attitude goes a long way to becoming a Rifleman.
Pretty sure CPS was there for the beating allegations and not the organic food or fluoride-free water. I have a good friend who works for the county and let me tell you she has WAY WAY WAY bigger fish to fry and doesn’t give a rat’s bum about organic food or fluoride.
Thanks Jack for featuring my question on tankless water heaters. I see your point, however, other than the extra hot shower when the power goes out, I think that a prepared type person could have better backup water sources and save oodles of money on water heating by going tankless. These might be best in a bugout or cabin situation where you go up for the weekend and don’t need/want to waste all that propane and time to heat up an entire water heater. Then again the best option is to not spend a penny and go solar!
FYI on life insurance for kids: this can be a good financial move for a child in the event that he or she develops a medical condition later which might preclude them getting a policy on their own or cause them to pay much higher rates. The rates on policies for children are very low and a policy already in force cannot be cancelled for a later medical reason; the child can assume payments when they grow up, ensuring coverage.
Hot water: if you have an electric heater, wire in a 24-hour timer and run it for just the hours you need hot water. Extra insulation outside the tank will make a positive difference, as well as pipe insulation for a few feet away from the heater to prevent radiant loss through the piping. This is available at hardware stores, is very cheap and easy to install.
If you have a freestanding hot water heater and you live in an earthquake zone (and many of us do!), secure them to the wall to minimize the amount of property damage or injury caused by seismic activity.
For more great tips, go to http://www.disastersafety.org, type in your zip code, and get a list of the most likely threats in your area along with specific information to prepare for and mitigate likely problems.
Very good advice for the kid looking to get into hunting. At 15 years old, there are few options for people in most states.
Check with your high school. When I was a kid, I joined the JROTC program. If your school has one, (or the program exists in another school you can access), I highly recommend looking into it. Also check with your local college, many will allow high school students into an ROTC program, or similar junior police program.
My instructor was exceptional. At 14, I had hands-on experience with several types of firearms, had been rappelling off of towers and cliffs, and all the other “fun stuff” people associate with military training.
There is a huge difference in the quality of programs, and it all comes down to the dedication of the instructors. As I said, my instructor was great, but there are many who are looking to kill time until retirement by hiding in classrooms. Check them out before signing on. None of the fun stuff, and certainly none of the gun handling will happen in a public school these days, but in my case those experiences were conducted on weekend and summer trips to military bases. My brother didn’t have access to ROTC programs, but did the junior police training and had similar experiences.
Sadly, in many areas these programs are vilified as recruiting pools for the government. That is absolutely false. I never went into military service, specifically because of the program. I guess once you’ve jumped off of 200′ towers and qualified as expert on the range, the recruiting videos lose their luster. But it has instilled in me a profound respect for those who do choose to serve, and the class was by far the most useful bit of education I received during my schooling.
Back to the point, they’ll teach you to shoot a gun, give you plenty of practical experience and safety training. You’ll also meet people with similar interests, making it easier to find supplemental experiences and training. You’ll also build references, which are a necessity in many states to get various permits. In my state of NY for example, you need references from 5 other gun owners to get your concealed carry permit, which they require for essentially any hand gun. Someone who’s asking here about how to get trained probably doesn’t have five gun owners in their life they can go to for references. Hopefully he doesn’t live in a state run by whining anti-gun nazi bitches, but knowing other gun owners helps anyway.
If the primary interest is in hunting, rather than marksmanship or general gun ownership, there may be other options. Depending on your state, there are various permissible trapping methods. As a kid, I snared many a rabbit in the winter. There’s also fishing, and again, depending on the state, bow hunting may be an option at a younger age.
Parental supervision is a given here. It’s not that I believe a 15 year old can’t be trained to behave safely or responsibly, but laws are laws, best to cover your ass. Besides, there are many good reasons not to hunt alone, just for personal safety. As an adult who knows the woods, I don’t go more than about 2 miles into them alone for the simple reason that if I fell or was injured, no one would be able to find me. If someone else is going along on the hunt anyway, it may as well be a parent or experienced adult.
On the Gemstones.
I talked to my friend who works jewelry at a pawn shop. He told me that they will only pay $1.00 per point with a minimum of 10 points for diamonds.
In other words, if it ain’t at least 1 carat they don’t even take it into account.
I also know they give near nothing for most other stones unless they are top quality & large!
Getting vinegar smell out of 5 gallon buckets.
Don’t bother, it isn’t worth the effort, I know!
You can still clean them out & use them for other things where smell is not a factor.
They make great buckets for your seeds to be saved in. They are Food Grade & the seeds don’t care about the smell 🙂
They also make for excellent toilet buckets.
Again, the poop won’t give a crap about the smell 🙂
On the same note, when using buckets for a toilet.
Keep 2 buckets, one for going in & the other to hold content for covering up your excrement.
Some things that work well are sawdust, mulched leaves & good old dirt.
If you use dirt, then the hole you dug can be filled in with your bucket full of Shisnit when it’s full.
This has the makings of Night Soil & can fertilize the area you do it in.
On the tankless water heater.
Besides the things already mentioned, I found out that you must have a water softener hooked up to it to prevent the heating coils from getting scale on them.
Policing your Community.
If you start this before a SHTF situation then when the S does HTF you will already have a system in place.
When I walk my dogs around the neighborhood this is just what I am doing. I keep an eye on things for my neighbors & most of them know I carry a gun since I make it no secret & open carry most of the time. I even had one of my neighbors tell me that she feels safer just knowing I am walking around armed! Made me feel damn good hearing that from her.
It actually did pay off just taking my dogs out when I caught some guys in the act of checking cars for unlocked doors. I talked to some neighbors to let them know what happened & one of them called the police to report that his car had been ransacked but nothing was taken. He told me that he wouldn’t have done anything if he hadn’t been informed that I gave the police the license number of the car I saw that morning.
So people, talk to your neighbors. Let them know if you see anything suspicious going on!!
From our perspective, a BOB is something EVERYONE should have. However, if you step back and look at it from the perspective of a person who frequently sees children being kidnapped by their own relatives and parents fleeing from the law, then this could look like someone who’s about to run away from something. So this social worker wonders why they are so worried about leaving quickly and what they might be fleeing from. The social worker saw it as a red flag, and I think if they were met with education instead of what I imagine was hostility, it wouldn’t have been an issue.
I’m not even going to touch the issue of vaccination because I have very strong views that might be contrary to many here.
Inverter on car – very good option. I have a hose that I can hook up to my exhaust pipe (dryer vent line and a rubber reducer fitting) and run it out my garage window. This solution works well for generators. Further, if you replace your normal car battery with a deep cycle battery (or two) you can easily run most inverters with no problem at all, then just idle the vehicle to keep things charged. Not a bad option if you need to keep something running for a little while until you get your primary generator up and running, or to really juice up a freezer for 30 or 45 mins at a time so you can shut it down for another few hours.
I agree with Donna on the life insurance for kids. You’d be surprised how many people, even kids, are uninsurable. An inexpensive child policy, which is often obtained without a medical if the child is young enough, can save the day when the kid is old enough to start a family and realizes he/she needs to help spare their family from extreme financial burden in the event of their death.
Hi Jack. I would like to suggest composting as a solution to human waste disposal – as an alternative to the solution recommended in this episode (plastic bags and chemicals). When I say ‘composting human waste’ I don’t mean combining human waste with your regular garden/kitchen compost system – I mean a system dedicated to human waste.

Composting toilets can be quite sophisticated if desired, but from a prepping point of view and for a person who is not very interested in many hours of research or practice, the minimum preps are: a barrel of sawdust/woodchips or similar carbonaceous material, a bucket, and a place to leave the material undisturbed such as a large bin with simple aeration or a corner of the backyard (those without backyards will have to be more creative and thoughtful – but in a SHTF scenario when the toilets aren’t working I bet that rubbish collection isn’t either).

For those who have experienced frustrations or failures in composting – composting human waste is absolutely nothing to be intimidated by. It does not need to include any kind of turning as you might be used to with garden compost. Using a sawdust/woodchip based compost toilet also need not be smelly, fly-attracting or in any way offensive. The only smell I have ever noticed with many different styles of compost toilets has been ammonia, which can be reduced/eliminated if the system does not also have to dispose of urine.

Compost toilet designs and info are all over the internet. Many books have been published which can answer all of your questions about design, handling and disease. A classic text which you might find at your library, online or at a bookstore is The Humanure Handbook. IMHO composting human waste is a lot less gross than treating it en masse, flushing it into the ocean, or hoiking it off property in a plastic bag full of chemicals.
Cheers,
Anji and Evan
I have life insurance on my kids but for different reasons than Americans.
Here in Canada it is actually against the law to purchase or provide private health insurance. You have no choice but to use the government run health care system. However, the government doesn’t fund all treatments and the wait lists for treatments can be very long.
The Supreme Court actually ruled that the wait lists were a violation of our rights and unconstitutional. People have actually died waiting for care. Politicians have been too afraid to do anything, though, because as soon as they start talking about reforming health care the socialists, unions and media go nuts.
With this in mind, if my girls are ever sick and need treatment I’ll mortgage my house and sell everything I own to get them treatment ASAP in the USA (thank God for the USA and please don’t ever adopt “Canadian style health care”).
If they turn out ok I’ll gladly accept the debt and pay it off; however, if they don’t make it, at least I’ll have the insurance to pay off the debt.
@Jake to me it is more about the term “double”, the video is well done I admit but the issue here is the following…
1. The presenter points out that if you take the effect of more plants into account with studies that show greater warming, the effect is still a big concern. Fine, but he provides no evidence that these other studies are accurate.
2. All of these alarming numbers revolve around DOUBLING the CO2. When I was in school in the 80s there was about .033 percent of CO2 in the atmosphere. Today the number is .039, so exactly how long would we have to go on totally unabated to double this number? I mean we are talking about 30 years to get an increase of .006%.
3. The entire point of climate change has been to tax us on carbon, as I have stated this doesn’t do anything to actually cut carbon or pollution.
4. These studies seem to ignore the scientific FACT that CO2 has a saturation limit. It only effectively reflects certain wavelengths of UV light, and it is already at a concentration that reflects about 100% of those wavelengths. This is fact, it cannot be disputed.
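For point 2 above, the back-of-envelope extrapolation works out like this (a sketch of the commenter's own linear arithmetic, not a climate model; the .033% and .039% figures over roughly 30 years are taken from the comment, and the variable names are mine):

```python
# Linear extrapolation of the commenter's figures: CO2 went from
# 0.033% to 0.039% of the atmosphere over roughly 30 years.
# At that pace, how many more years until the 0.033% baseline doubles?
start, now, years_elapsed = 0.033, 0.039, 30
rate = (now - start) / years_elapsed        # percentage points per year
years_to_double = (2 * start - now) / rate  # years from today to 0.066%
print(round(years_to_double))  # 135
```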
People please listen to me, we must cut the use of fossil fuels, I totally agree with that; they do cause large amounts of pollution, such as sulfur oxide, mercury, and dioxins, not to mention the damage their extraction does to the land.
So please use your brains: with all of the clearly dangerous pollution from the extraction of gas, oil and coal, why do you think all the focus is on the most benign (even if we believe the bullshit) pollutant?
Seriously even with new controls this stuff still causes acid rain, coal mining literally rusts ground water with sulfur oxide, coal companies strip mine and remove mountain tops, gas drillers do hydraulic fracturing and pollute ground water. Again it just keeps going, there are so many real reasons that we should be doing alternative and clean energy technology. There are so many forms of pollution that no one could ever effectively deny and what does the entire world focus on, the same gas you EXHALE with each breath.
Why? The forces behind this have no intention of reducing fossil fuel consumption, just creating a new fiat currency and derivatives from it by making CO2 which today is a worthless gas into a global commodity. This will create profit for industry, taxes for government and oppression of people. What it won’t do one bit of is cutting real pollution.
@Jake I gave you a scientific reason that the entire thing is bullshit, stop drinking the koolaid. As for science, this so-called science is backed by government and all objecting opinion is persecuted. That is the antithesis of science.
Did you just ignore this FACT?
“These studies seem to ignore the scientific FACT that CO2 has a saturation limit. It only effectively reflects certain wavelengths of UV light, and it is already at a concentration that reflects about 100% of those wavelengths. This is fact, it cannot be disputed.”
Jake
Although I disagree with Jack’s stance on global warming…he’s 100% right on one thing: IT’S ALL ABOUT THE MONEY!
Whether it’s taxation or grant money to scientists…it’s all about the money.
Those who might say that science is objective need to spend some time in the trenches, because it’s anything but. Oft times scientists THINK they know what the outcome will be and thus their data SHOWS that THEY were CORRECT. Scientists have huge egos, end of story.
OK, I have to say this even though I don’t have all the info & can’t remember the show I saw it on.
I watched a show where they were taking ice-core samples. They found that during an Ice Age the carbon levels were not that high. They also found a time when the carbon levels were 800 (eight hundred) times higher. It was hundreds of years before there was any significant change in the temperature of the earth.
For goodness sakes people it’s a Planet! The temperatures go Up & Down with the cycles of the Sun, Asteroid Impacts, Volcano Eruptions….
Do you really think that mankind’s pollution is anything compared to 1 Volcanic eruption?
The ejection from 1 Volcano has changed the temperature of the earth several degrees.
Once again I say “It Is A Planet!” The temperature WILL change & mankind can do NOTHING about it!!!
I asked a question because I don’t “drink the koolaid” but rather like to check my facts and explore disconfirming evidence. So if you could point me to some information about the “FACT that CO2 has a saturation limit” and how this pertains to warming scenarios, that would be great.
I find nothing that has debunked the handbook; I find people claiming that it is simply not true, but zero evidence presented in the article you reference and others. Just because someone who believes in the boogeyman says that a person who wrote a book disproving it is wrong, that is not a debunking.
|
Increased speed limits lead to more deaths
WASHINGTON -- States that raised their speed limits to 70 mph or more have seen a big jump in traffic deaths, according to a report Monday by an auto safety group.
Some 1,880 more people died between 1996 and 1999 in the 22 states with higher speed limits on rural interstates, said the study, compiled by the Insurance Institute for Highway Safety, funded by insurers. It was based on data collected by the Land Transport Safety Authority of New Zealand. Congress repealed the 55 mph national speed limit in November 1995.
An institute researcher said New Zealand did the study because groups are questioning whether to raise the country's speed limit, which is 100 kilometers per hour -- about 62 mph.
"There's a significant societal cost," said Allan Williams, the institute's chief scientist, who said drivers often think a speeding ticket is the worst that can happen.
Supporters of higher speed limits pointed out that federal highway data show the nation's vehicle fatality rate fell each year from 1996 to 1999, from 1.69 deaths per million miles traveled to 1.55 deaths.
"We've moved toward a transportation system where cars are a lot safer and there are better measures like guard rails on highways," said Stephen Moore, a proponent of limited government and president of the Club for Growth. "We've made it safer to drive at faster speeds."
Institute researcher Susan Ferguson agreed that other factors are making highways safer, and that the nation's death rate dropped as a whole. But she said the study expands upon institute studies from the late 1990s, which showed a 12 to 15 percent increase in deaths when speed limits rise.
The study said the 10 states that raised limits to 75 mph -- all in the Midwest and West -- had 38 percent more deaths per million miles driven than states with 65-mph limits. That's approximately 780 more deaths.
The 12 states which raised their limits to 70 mph include California, Florida, North Carolina and Missouri. They saw a 35 percent increase -- some 1,100 additional deaths.
The report didn't examine the effects of other trends, such as the tendency to drive faster in rural states where cities are far apart. Nor did it analyze the increasing number of sport utility vehicles on the road in the late 1990s.
A separate review of six states by the institute found drivers are traveling faster than any time since the institute began collecting data in 1987. Researchers observed in Colorado, which has a 75-mph speed limit, one in four drivers going above 80 mph. In California, where the speed limit is 70 mph, one in five drivers was clocked at 80 mph.
The institute's study of speeds in Georgia, Massachusetts, Maryland, New Mexico, Colorado and California also found that when rates are raised on rural interstates, speeding increased on urban interstates.
Average travel speeds on urban interstates in Atlanta, Boston and Washington were the same as or higher than on rural interstates near those cities, even though the speed limits on those urban interstates were 55 mph. In Atlanta, 78 percent of drivers on one urban interstate exceeded 70 mph, the report found.
Institute President Brian O'Neill said tolerance of speeding and advertising that encourages drivers to speed are part of the problem. He pointed to a Dodge ad that invited consumers to "Burn rubber."
"It's up to drivers to obey speed limits, but the manufacturers aren't helping with ads that equate going fast with having fun," O'Neill said.
|
Toronto third baseman Josh Donaldson and Seattle designated hitter Nelson Cruz have moved ahead in fan voting for starting spots in the All-Star Game, leaving five Kansas City Royals still in the lead.
Very few 21-year-old athletes have been on more of a rollercoaster in the past year than Royals pitcher Brandon Finnegan. He became the first player to pitch in the College World Series and MLB World Series in the same year.
|
Several sharks were involved in a deadly attack on a New Zealand man, a coroner has found.
Adam Strange, 47, from Auckland, was killed in a rare shark attack at Muriwai Beach on Auckland’s west coast in February.
In a ruling on Wednesday, coroner Morag McDowell said the television commercial and film director died of blood loss from multiple shark bites, mostly to his legs, while there was also evidence of drowning.
Mr Strange was swimming in preparation for an upcoming race when the attack happened shortly after 1pm on February 27.
A fisherman noticed Mr Strange was being followed by a lot of birds and there were schools of small fish around him.
Two minutes after spotting the swimmer, the fisherman heard splashing and Mr Strange’s calls for help, and saw a lot of blood in the water.
The fisherman, who was not named, told the inquest he saw more than one shark attacking Mr Strange, who became limp and was face down in the water.
The man called emergency services.
A friend of Mr Strange, who was surfing at Muriwai Beach, also saw the attack, and recalled seeing a number of shark fins.
Surf lifesavers were alerted at 1.30pm and raced to Mr Strange’s aid.
When they reached him, a single shark, measuring about three to four metres, had hold of him.
Lifesavers used their boat and oars to get the shark to release Mr Strange and kept it at bay until police arrived, and shots were fired at the shark, which sank below the water’s surface.
Mr Strange’s body was then recovered.
Surf Lifesaving NZ told the inquest there had not been a recorded shark attack in west Auckland waters for at least 30 years.
Despite the rarity of the attack, the area is a known breeding ground for sharks.
|
pi-top CEED
pi‑topCEED is the easiest way to use your Raspberry Pi. We’ve put what you love about our flagship laptop into a slimmer form factor. Join hundreds of code clubs and classrooms using pi‑topCEED as their solution for computer science and STEAM-based learning.
Promo Price: $99
Plug and Play Ready
Pre-assembled with all required software installed. You’ll be able to take your knowledge to the next level as soon as you take it out of the box.
Hardware expansion
pi-topCEED lets you build amazing creations and bring them to life with code. Use pi-top add-ons by simply sliding one into the Modular Rail, then use pi-topCODER to follow lesson plans that integrate fun physical computing projects.
Merging practicality and design
Designed with ergonomics in mind, pi-topCEED comes with a back stand adjustable up to 180° and can be wall-mounted.
What does pi-topCEED teach?
All pi-topCEEDs come with CEEDuniverse preloaded, a puzzle-filled online game that teaches you how to code, build circuits, and make hardware that interacts with the game in real time.
|
Q:
Get mean of values for each tuple in a list in the format (string, value) in Python
I have a list of tuples like: (A, 1), (B, 2), (C, 3), (A, 9), (B, 8).
How can I get the mean of the values grouped by the first element of each tuple, without knowing how many occurrences of each first element there are?
I want something like:
(A, 5), (B, 5), (C, 3).
A:
Using groupby and itemgetter:
from itertools import groupby
from operator import itemgetter
from statistics import mean
s = [('A', 1), ('B', 2), ('C', 3), ('A', 9), ('B', 8)]
s2 = sorted(s, key=itemgetter(0)) # sorting the tuple based on 0th index
print([(k, int(mean(list(zip(*g))[1]))) for k, g in groupby(s2, itemgetter(0))])
OUTPUT:
[('A', 5), ('B', 5), ('C', 3)]
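For completeness, the same grouping can be done in a single pass with a plain `defaultdict`, avoiding the sort that `groupby` requires (a sketch; floor division mirrors the `int(mean(...))` truncation used in the answer above):

```python
from collections import defaultdict

s = [('A', 1), ('B', 2), ('C', 3), ('A', 9), ('B', 8)]

# Accumulate all values per key in one pass; no sorting needed.
groups = defaultdict(list)
for key, value in s:
    groups[key].append(value)

# Floor division keeps integer results, like int(mean(...)).
result = [(key, sum(vals) // len(vals)) for key, vals in groups.items()]
print(result)  # [('A', 5), ('B', 5), ('C', 3)]
```

Because dictionaries preserve insertion order (Python 3.7+), the keys come out in first-seen order, which happens to match the sorted output here only because the input introduces A, B, C in order.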
|
About
Teams are the source of most of the productivity, creativity, and reliability in organizations. Work and play are both successful – or not – because of the quality of the teams performing the mission at hand. Understanding and developing the behaviors of success requires that team members develop and utilize the seven ESI skills measured by the TESI® and explained in our book, The Emotionally Intelligent Team. Powerful use of these seven skills is the path your team must take to experience the four results shown in the Collaborative Growth Team Model and the universally sought-after benefits of sustainable productivity and emotional and social well-being for the team and its members. This journey can create profound benefits for team members, teams, and their organizations.
|
Adding a place for funeral homes to upload any docs to their file
(I think this is an OA idea, but it wasn't letting me add it there.) Add a place for funeral homes to be able to add any documents regarding their claim. Instead of faxing or emailing them, they would go straight to digiclaim.
|
Weekly gas prices: Fall nationally, near steady in Nebraska
Average retail gasoline prices in Nebraska have fallen 0.8 cents per gallon in the past week, averaging $3.33/g yesterday. This compares with the national average that has fallen 3.7 cents per gallon in the last week to $3.39/g, according to gasoline price website NebraskaGasPrices.com.
Nebraska City News-Press - Nebraska City, NE
Posted Dec. 3, 2012 at 8:23 AM
Updated Dec 3, 2012 at 8:25 AM
Including the change in gas prices in Nebraska during the past week, prices yesterday were 5.7 cents per gallon higher than the same day one year ago and are 3.8 cents per gallon lower than a month ago. The national average has decreased 11.3 cents per gallon during the last month and stands 10.0 cents per gallon higher than this day one year ago.
"While the national average continues to fall, we've done the math: the yearly average in 2012 is all but guaranteed to set all-time record highs," said GasBuddy.com Senior Petroleum Analyst Patrick DeHaan. "We still stand at $3.63/gal for a yearly average, and gasoline prices would need to average no more than $2.35/gallon for every day left in 2012 for the national yearly average to drop below 2011, something that appears impossible. In the meanwhile, we have lowered our forecast for Christmas gasoline prices to average between $3.25-$3.35/gallon," DeHaan said.
|
Friday, November 8, 2013
Morning Visitor
Isn’t he just the cutest…. The dog and I heard something at the door…. Every time I opened the door there was nothing there…. The last time I got up, I saw a squirrel in the bush by the window. So I thought that was the noise… You know, the bush hitting the window… Nope!
It was this little bugger crawling across my screen to get from one side of the porch to the other. I can say I was scared to open the door to shoo him away for fear he’d come in the house. He was there for over 10 minutes with me taking pictures and tapping on the glass. No fear of me!
|
Wasn't the reason Mawae left that he was too much of a locker room lawyer? .....and that was when he was a useful player. Doesn't make sense.
The real story is that when Penguini was given the head coaching position of the NY Jets back in 2006 he called Kevin Mawae into his office for a sit down. They were having a conversation & Kevin mentioned that they were both about the same age when Kevin asked Mangini jokingly "What should I call you? Coach, Eric or Mr. Mangini" and Mangini fired back "I don't care what you call me so long as you call me the boss."
It was right then and there Mawae knew to 911 his agent & immediately start the FA shopping process which led him to his new home in Tennessee.
Now I know more than anyone that the NFL is an occupation & not a career for 90% of the players in the league & only that golden 10% gets to call themselves real "career players", but Kevin Mawae is a career player, President of the NFLPA & put himself thru Wharton School of Business most nights & off seasons in order to further himself. He also gave the Jets 110% of himself only to be cast aside by a new HC with a Napoleon complex.
I say bring him back as a back up C along with a spot on the coaching staff.
Please Coach Ryan - At least take a meeting with Kevin, you may be surprised to find out how useful he can still be to this team. What have you got to lose by just talking to the man?
i'm not for it, but it wouldn't kill me to see him back. and i think it's a little harsh to call him the fat chad pennington. mawae definitely accomplished more individual success at his position than chad could ever dream of.
i'm not for it, but it wouldn't kill me to see him back. and i think it's a little harsh to call him the fat chad pennington. mawae definitely accomplished more individual success at his position than chad could ever dream of.
I was just being funny because, again, here we go with someone having a crush on an old player instead of thinking about what's best for the team.
Pennington, Mawae, Coles......these players were losers, won nothing in any city they've ever been in. For a team like ours, these types are the last thing we need here.
This isn't the Ex-Jet Home For A Cushy Retirement. We want the Super Bowl, time to act like it.
Kevin asked Mangini jokingly "What should I call you? Coach, Eric or Mr. Mangini" and Mangini fired back "I don't care what you call me so long as you call me the boss."
So what you're saying, basically, is that Kevin felt he did not have to respect the new Head Coach's authority? I'm sorry, are you trying to convince us he was right...or that he is a jackass?
The Coach, your age or not, fat doofus or not, IS the boss.
It was right then and there Mawae knew to 911 his agent & immediately start the FA shopping process which led him to his new home in Tennessee.
So not only did Kevin not believe in the standard chain of command, he (not Coach Tub-O-crap) chose to leave the team asap all because the Head Coach dared to make the case that he was the Boss?
Again LL, not making much of a case for your boy here.
He also gave the Jets 110% of himself only to be cast aside by a new HC with a Napoleon complex.
No doubt he gave 110% on the field.
But from your OWN description of events, the only one with a complex appears to be Kevin, a rather obvious "I don't have to respect the Head Coach's authority" complex, and that has no place on an NFL team, and no place on a Rex Ryan/Jets roster or staff.
Kevin Mawae never said that; that was Chris Baker who said "Chad is like an egg back there." And why would we start Mawae in front of Mangold? I mean, we hired back Tony Richardson & exactly how much action did he see last season? I say offer Mawae a back-up center/coaching job and again let him retire a NY Jet.
are you seriously comparing mawae and richardson? t-rich was blocking for jones and greene on a regular basis. mawae's not even wanted by the mighty titans, whose o-line isn't exactly a brick wall. of course, mawae was part of the reason their line was shaky.....
The real story is that when Penguini was given the head coaching position of the NY Jets back in 2006 he called Kevin Mawae into his office for a sit down. They were having a conversation & Kevin mentioned that they were both about the same age when Kevin asked Mangini jokingly "What should I call you? Coach, Eric or Mr. Mangini" and Mangini fired back "I don't care what you call me so long as you call me the boss."
It was right then and there Mawae knew to 911 his agent & immediately start the FA shopping process which led him to his new home in Tennessee.
Now I know more than anyone that the NFL is an occupation & not a career for 90% of the players in the league & only that golden 10% gets to call themselves real "career players", but Kevin Mawae is a career player, President of the NFLPA & put himself thru Wharton School of Business most nights & off seasons in order to further himself. He also gave the Jets 110% of himself only to be cast aside by a new HC with a Napoleon complex.
I say bring him back as a back up C along with a spot on the coaching staff.
Please Coach Ryan - At least take a meeting with Kevin, you may be surprised to find out how useful he can still be to this team. What have you got to lose by just talking to the man?
Worth a shot IMO.
we all know of mangini's issues. he's gone. we know mawae's issues. he's gone too. why bring mawae back. both were bull-headed morons who needed to be shown the door. your love affair with mawae is misguided, but i just hope the jets remember how much mawae sucks.
we all know of mangini's issues. he's gone. we know mawae's issues. he's gone too. why bring mawae back. both were bull-headed morons who needed to be shown the door. your love affair with mawae is misguided, but i just hope the jets remember how much mawae sucks.
Sorry I have to agree to disagree when the guy played with a broken hand, sore back and Lord knows how many bumps & bruises. Plus he holds the record of having played in over 183 consecutive games without ever missing a down.
How many players these days are listed as "day to day" for not playing during the season because they have a cold? Are you kidding me? Yeah, you guys are right, having players like that in our locker room really sucks.
Sorry I have to agree to disagree when the guy played with a broken hand, sore back and Lord knows how many bumps & bruises. Plus he holds the record of having played in over 183 consecutive games without ever missing a down.
Parcells loves the guy, has already hired several of his ex-Jets, and he's even passing on Mawae.
Sorry I have to agree to disagree when the guy played with a broken hand, sore back and Lord knows how many bumps & bruises. Plus he holds the record of having played in over 183 consecutive games without ever missing a down.
How many players these days are listed as "day to day" for not playing during the season because they have a cold? Are you kidding me? Yeah, you guys are right, having players like that in our locker room really sucks.
Nobody ever questioned Mawae's toughness and production on the field. However, he would provide not one iota of that production for the Jets. He's washed up and would never see the field. The only possible tangible effect of signing Mawae would be for him to divide the locker room with more of his politics, anonymous quotes about teammates, and NFLPA BS that should be left to a conference room. From your previous post, the guy clearly has problems taking orders. Maybe he was a leader on the team 5-10 years ago, but he holds no sway with any of the current players on the roster. He has absolutely no place on this team. This is not a retirement home. Bringing in Joe Klecko would have more of a positive impact than bringing back Mawae and his ego.
Limo, don't get me wrong. I loved Mawae as a player for the Jets in his time, but I don't want him back. Not for a NY minute. I believe you are thinking with your heart (and hormones maybe) rather than your brain here. Don't think there's much left in Mawae's tank. Makes more sense to bring in a young kid to learn the position while backing up Nick. Plus, Mawae is a union leader/clubhouse lawyer type, and why would any team elect to bring that baggage in?
|
Increased risk of depressive disorder within 1 year after diagnosis with urinary calculi in Taiwan.
This study investigated the risk of subsequent depressive disorders (DD) following a diagnosis of urinary calculi (UC) in Taiwan. In total, 67,917 adult patients newly diagnosed with UC were recruited, along with 153,951 age-matched enrollees who were used as a comparison group. A stratified Cox proportional hazard regression analysis revealed that the adjusted hazard of DD within a 1-year period following diagnosis with UC was 1.75 times greater for patients with UC than for comparison patients.
|
/*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
package com.facebook.fresco.samples.showcase.drawee;
import android.graphics.drawable.Animatable;
import android.net.Uri;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import androidx.annotation.Nullable;
import com.facebook.common.references.CloseableReference;
import com.facebook.datasource.RetainingDataSourceSupplier;
import com.facebook.drawee.backends.pipeline.Fresco;
import com.facebook.drawee.controller.BaseControllerListener;
import com.facebook.drawee.controller.ControllerListener;
import com.facebook.drawee.view.SimpleDraweeView;
import com.facebook.fresco.samples.showcase.BaseShowcaseFragment;
import com.facebook.fresco.samples.showcase.R;
import com.facebook.imagepipeline.image.CloseableImage;
import com.facebook.imagepipeline.image.ImageInfo;
import com.facebook.imagepipeline.request.ImageRequest;
import java.util.List;
public class RetainingDataSourceSupplierFragment extends BaseShowcaseFragment {
private List<Uri> mSampleUris;
private int mUriIndex = 0;
private final ControllerListener<ImageInfo> controllerListener =
new BaseControllerListener<ImageInfo>() {
@Override
public void onFinalImageSet(
String id, @Nullable ImageInfo imageInfo, @Nullable Animatable anim) {
if (anim != null) {
// app-specific logic to enable animation starting
anim.start();
}
}
};
@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mSampleUris = sampleUris().getSampleGifUris();
}
@Nullable
@Override
public View onCreateView(
LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
return inflater.inflate(R.layout.fragment_drawee_retaining_supplier, container, false);
}
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
final SimpleDraweeView simpleDraweeView = view.findViewById(R.id.drawee_view);
final RetainingDataSourceSupplier<CloseableReference<CloseableImage>> retainingSupplier =
new RetainingDataSourceSupplier<>();
simpleDraweeView.setController(
Fresco.newDraweeControllerBuilder()
.setDataSourceSupplier(retainingSupplier)
.setControllerListener(controllerListener)
.build());
replaceImage(retainingSupplier);
simpleDraweeView.setOnClickListener(
new View.OnClickListener() {
@Override
public void onClick(View view) {
replaceImage(retainingSupplier);
}
});
}
@Override
public int getTitleId() {
return R.string.drawee_retaining_supplier_title;
}
private void replaceImage(
RetainingDataSourceSupplier<CloseableReference<CloseableImage>> retainingSupplier) {
retainingSupplier.replaceSupplier(
Fresco.getImagePipeline()
.getDataSourceSupplier(
ImageRequest.fromUri(getNextUri()), null, ImageRequest.RequestLevel.FULL_FETCH));
}
private synchronized Uri getNextUri() {
int previousIndex = mUriIndex;
mUriIndex = (mUriIndex + 1) % mSampleUris.size();
return mSampleUris.get(previousIndex);
}
}
|
734 A.2d 350 (1996)
STATE of New Jersey
v.
Pedro SOTO, Delores Braswell, Larnie Boddy, Chauncey Davidson, Milton Lumpkin, Alfred S. Poole, Sam Gant, Donald Crews, Kim Harris, Ocie Norman, Antoine Peters, Floyd Porter, Theotis Williams a/k/a Walter Day, Paul Dacosta, Ronnie Lockhart, Terri Monroe and Kevin Jackson, Defendants.
Superior Court of New Jersey, Law Division, Gloucester County.
Decided March 4, 1996.
*351 P. Jeffrey Wintner, Deputy Public Defender I, for defendants Pedro Soto, Delores Braswell, Larnie Boddy, Chauncey Davidson, Milton Lumpkin and Alfred S. Poole (Susan L. Reisner, Public Defender, attorney).
Wayne E. Natale, Deputy Public Defender II, for defendant Sam Gant (Susan L. Reisner, Public Defender, attorney).
Carrie D. Dingle, Assistant Deputy Public Defender I, for defendants Donald Crews, Kim Harris, Ocie Norman, Antoine Peters, Floyd Porter, and Theotis Williams a/k/a Walter Day (Susan L. Reisner, Public Defender, attorney).
William H. Buckman, Moorestown, for defendants Paul DaCosta, Ronnie Lockhart and Terri Monroe.
Justin Loughry, Cherry Hill, for defendant Kevin Jackson (Tomar, Simonoff, *352 Adourian, O'Brien, Kaplan, Jacoby & Graziano, attorneys).
John M. Fahy, Senior Deputy Attorney General (Deborah T. Poritz, Attorney General of New Jersey, attorney) and Brent Hopkins, Assistant Gloucester County Prosecutor (Harris Y. Cotton, Gloucester County Prosecutor, attorney), for the State of New Jersey.
ROBERT E. FRANCIS, J.S.C.
These are consolidated motions to suppress under the equal protection and due process clauses of the Fourteenth Amendment.[1] Seventeen defendants of African ancestry claim that their arrests on the New Jersey Turnpike south of exit 3 between 1988 and 1991 result from discriminatory enforcement of the traffic laws by the New Jersey State Police.[2] After a lengthy hearing, I find defendants have established a prima facie case of selective enforcement which the State has failed to rebut requiring suppression of all contraband and evidence seized.
Defendants base their claim of institutional racism primarily on statistics. During discovery, each side created a database of all stops and arrests by State Police members patrolling the Turnpike between exits 1 and 7A out of the Moorestown Station for thirty-five randomly selected days between April 1988 and May 1991 from arrest reports, patrol charts, radio logs and traffic tickets. The databases are essentially the same. Both sides counted 3060 stops which the State found to include 1212 race identified stops (39.6%), the defense 1146 (37.4%).
To establish a standard against which to compare the stop data, the defense conducted a traffic survey and a violator survey. Dr. John Lamberth, Chairman of the Psychology Department at Temple University who I found is qualified as an expert in statistics and social psychology, designed both surveys.
The traffic survey was conducted over twenty-one randomly selected two and one-half hour sessions between June 11 and June 24, 1993 and between 8:00 a.m. and 8:00 p.m. at four sites, two northbound and two southbound, between exits 1 and 3 of the Turnpike. Teams supervised by Fred Last, Esq., of the Office of the Public Defender observed and recorded the number of vehicles that passed them except for large trucks, tractor-trailers, buses and government vehicles, how many contained a "black" occupant and the state of origin of each vehicle. Of the 42,706 vehicles counted, 13.5% had a black occupant. Dr. Lamberth testified that this percentage is consistent with the 1990 Census figures for the eleven states from where almost 90% of the observed vehicles were registered. He said it is also consistent with a study done by the Triangle Group for the U.S. Department of Transportation with which he was familiar.
The violator survey was conducted over ten sessions in four days in July 1993 by Mr. Last traveling between exits 1 and 3 in his vehicle at sixty miles per hour on cruise control after the speedometer had been calibrated and observing and recording the number of vehicles that passed him, the number of vehicles he passed and how many had a black occupant. Mr. Last counted a total of 2096 vehicles other than large trucks, tractor-trailers, buses and government vehicles of which 2062 or 98.1% passed him going in excess of sixty miles per hour including 306 with a black occupant equaling about 15% of those vehicles clearly speeding. Multiple violators, that is those violating the speed limit and *353 committing some other moving violation like tailgating, also equaled about 15% black. Dr. Lamberth testified that the difference between the percentage of black violators and the percentage of black travelers from the surveys is statistically insignificant and that there is no evidence traffic patterns changed between the period April 1988 to May 1991 in the databases and June-July 1993 when the surveys were done.
Using 13.5% as the standard or benchmark against which to compare the stop data, Dr. Lamberth found that 127 or 46.2% of the race identified stops between exits 1 and 3 were of blacks constituting an absolute disparity of 32.7%, a comparative disparity of 242% (32.7% divided by 13.5%) and 16.35 standard deviations. By convention, something is considered statistically significant if it would occur by chance fewer than five times in a hundred (over two standard deviations). In case I were to determine that the appropriate stop data for comparison with the standard is the stop data for the entire portion of the Turnpike patrolled by the Moorestown Station in recognition of the fact that the same troopers patrol between exits 3 and 7A as patrol between exits 1 and 3, Dr. Lamberth found that 408 or 35.6% of the race identified stops between exits 1 and 7A were of blacks constituting an absolute disparity of 22.1%, a comparative disparity of 164% and 22.1 standard deviations.[3] He opined it is highly unlikely such statistics could have occurred randomly or by chance.[4]
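The disparity arithmetic the court recites can be sketched in a few lines of Python (a rough reconstruction: the stop count of ~275 is inferred from 127 being 46.2% of the race-identified stops between exits 1 and 3, and the one-sample z-test used here is an assumption about the experts' method; it yields roughly 15.9 standard deviations rather than the opinion's 16.35, a gap plausibly due to exact counts or a different test statistic):

```python
from math import sqrt

benchmark = 0.135   # 13.5% black motorists (traffic survey standard)
stop_rate = 0.462   # 46.2% black among race-identified stops, exits 1-3
n_stops = 275       # inferred: 127 black stops / 0.462

absolute_disparity = stop_rate - benchmark               # 32.7%
comparative_disparity = absolute_disparity / benchmark   # 242%

# One-sample z-test of the observed proportion against the benchmark.
std_err = sqrt(benchmark * (1 - benchmark) / n_stops)
z = absolute_disparity / std_err

print(f"{absolute_disparity:.1%}, {comparative_disparity:.0%}, {z:.1f} SD")
```

A result this many standard deviations from the benchmark is the quantitative basis for the testimony that the pattern is highly unlikely to have occurred by chance, since the conventional threshold for statistical significance is about two standard deviations.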
Defendants also presented the testimony of Dr. Joseph B. Kadane, an eminently qualified statistician. Among his many credentials, Dr. Kadane is a full professor of statistics and social sciences at Carnegie Mellon University, headed the Department of Statistics there between 1972 and 1981 and is a Fellow of the American Statistical Association, having served on its board of directors and a number of its committees and held various editorships on its Journal. Dr. Kadane testified that in his opinion both the traffic and violator surveys were well designed, carefully performed and statistically reliable for analysis. From the surveys and the defense database, he calculated that a black was 4.85 times as likely as a white to be stopped between exits 1 and 3. This calculation led him to "suspect" a racially non-neutral stopping policy. While he noted that the surveys were done in 1993 and compared to data from 1988 to 1991, he was nevertheless satisfied that the comparisons were useable and accurate within a few percent. He was not concerned that the violator survey failed to count cars going less than sixty miles per hour and travelling behind Mr. Last when he started a session. He was concerned, however, with the fact that only 37.4% of the stops in the defense database were race identified.[5] In order to determine if the comparisons were sensitive to the missing racial data, he did *354 calculations performed on the log odds of being stopped. Whether he assumed the probability of having one's race recorded if black and stopped is the same as if white and stopped or two or three times as likely, the log odds were still greater than.99 that blacks were stopped at higher rates than whites on the Turnpike between exits 1 and 3 during the period April 1988 to May 1991. He therefore concluded that the comparisons were not sensitive to the missing racial data.
Supposing that the disproportionate stopping of blacks was related to police discretion, the defense studied the traffic tickets issued by State Police members between exits 1 and 7A on the thirty-five randomly selected days broken down by State Police unit.[6] There are 533 racially identified tickets in the databases issued by either the now disbanded Radar Unit, the Tactical Patrol Unit or general road troopers ("Patrol Unit"). The testimony indicates that the Radar Unit focused mainly on speeders using a radar van and chase cars and exercised limited discretion regarding which vehicles to stop. The Tac-Pac concentrates on traffic problems at specific locations and exercises somewhat more discretion as regards which vehicles to stop. Responsible to provide general law enforcement, the Patrol Unit exercises by far the most discretion among the three units. From Mr. Last's count, Dr. Lamberth computed that 18% of the tickets issued by the Radar Unit were to blacks, 23.8% of the tickets issued by the Tac-Pac were to blacks while 34.2% of the tickets issued by the Patrol Unit were to blacks. South of exit 3, Dr. Lamberth computed that 19.4% of the tickets issued by the Radar Unit were to blacks, 0.0% of the tickets issued by the Tac-Pac were to blacks while 43.8% of the tickets issued by the Patrol Unit were to blacks. In his opinion, the Radar Unit percentages are statistically consistent with the standard established by the violator survey, but the differences between the Radar Unit and the Patrol Unit between both exits 1 and 3 and 1 and 7A are statistically significant or well in excess of two standard deviations.
The State presented the testimony of Dr. Leonard Cupingood to challenge or refute the statistical evidence offered by the defense. I found Dr. Cupingood qualified to give expert testimony in the field of statistics based on his Ph.D. in statistics from Temple and his work experience with the Center for Forensic Economic Studies, a for-profit corporation headquartered in Philadelphia. Dr. Cupingood collaborated with Dr. Bernard Siskin, his superior at the Center for Forensic Economic Studies and a former chairman of the Department of Statistics at Temple.
Dr. Cupingood had no genuine criticism of the defense traffic survey. Rather, he centered his criticism of the defense statistical evidence on the violator survey. Throughout his testimony he maintained that the violator survey failed to capture the relevant data, which he opined was the racial mix of those speeders most likely to be stopped, or the "tail of the distribution." He even recommended the State authorize him to design a study to collect this data, but the State declined. He was unclear, though, how he would design a study to ascertain in a safe way the vehicle going the fastest above the speed limit at a given time at a given location and the race of its occupants without involving the credibility of State Police members. In any event, his supposition that maybe blacks drive faster than whites above the speed limit was repudiated by all State Police members called by the State who were questioned about it. Colonel Clinton Pagano, Trooper Donald Nemeth, Trooper Stephen Baumann and Detective Timothy Grant each testified that blacks drive indistinguishably from whites. Moreover, *355 Dr. Cupingood acknowledged that he knew of no study indicating that blacks drive worse than whites. Nor could he reconcile the notion with the evidence that 37% of the unticketed stops between exits 1 and 7A in his database were black and 63% of those between exits 1 and 3. Dr. James Fyfe, a criminal justice professor at Temple whom the defense called in its rebuttal case and whom I found qualified as an expert in police science and police procedures, also testified that there is nothing in the literature or in his personal experience to support the theory that blacks drive differently from whites.[7]
Convinced that the defense 15% standard or benchmark was open to question, Dr. Cupingood attempted to find the appropriate benchmark to compare with the databases. He did three studies of presumably race-blind stops: night stops versus day stops; radar stops versus non-radar stops; and drunk-driving arrests triggered by calls for service.
In his study of night stops versus day stops, he compared the percentage of stops of blacks at night between exits 1 and 7A in the databases with the percentage of stops of blacks during daytime and found that night stops were 37.3% black versus 30.2% for daytime stops. Since he presumed the State Police generally cannot tell race at night, he concluded the higher percentage for night stops of blacks supported a standard well above 15%. His premise that the State Police generally cannot recognize race at night, however, is belied by the evidence. On July 16, 1994 between 9:40 p.m. and 11:00 p.m., Ahmad S. Corbitt, now an assistant deputy public defender, together with Investigator Minor of the Office of the Public Defender, drove on the Turnpike at 55 miles per hour for a while and parked perpendicular to the Turnpike at a rest stop for a while to see if they could make out the races of the occupants of the vehicles they observed. Mr. Corbitt testified that the two could identify blacks versus whites about 80% of the time in the moving mode and close to 100% in the stationary mode. Over and above this proof is the fact that the databases establish that the State Police only stopped an average of eight black occupied vehicles per night between exits 1 and 7A. Dr. Cupingood conceded a trooper could probably identify one or two black motorists per night.
Next, in his study of radar stops versus non-radar stops, Dr. Cupingood focused on the race identified tickets where radar was used in the databases and found that 28.5% of them were issued to blacks. Since he assumed that radar is race neutral, he suggested 28.5% might be the correct standard. As Dr. Kadane said in rebuttal, this study is fundamentally flawed because it assumes what is in question, namely that the people stopped are the best measure of who is eligible to be stopped. If racial prejudice were afoot, the standard would be tainted. In addition, although a radar device is race-blind, the operator may not be. Of far more significance is the defense study comparing the traffic tickets issued by the Radar, Tac-Pac and Patrol Units, which shows again that where radar is used by a unit concerned primarily with speeders and acting with little or no discretion, like the Radar Unit, the percentage of tickets issued to blacks is consistent with their percentage on the highway.
*356 And lastly in his effort to find the correct standard, Dr. Cupingood considered a DUI study done by Lieutenant Fred Madden, Administrative Officer of the Records and Identification Section of the State Police. Lt. Madden tabulated DUI arrests between July 1988 and June 1991 statewide, statewide excluding the State Police, for Troop D of the State Police which patrols the entire length of the Turnpike, for Moorestown Station of Troop D and for Moorestown Station south of exit 3, broken down by race and by patrol related arrests versus calls for service (i.e., accidents, motorist aids, and other instances in which the arrested motorist came to the attention of the State Police through a toll-taker or civilian). Since Dr. Cupingood believed DUI arrests from calls for service were race neutral, he adopted the percentage of DUI arrests of blacks for the Moorestown Station from calls for service, 23%, as a possible standard. Like his radar versus non-radar stop study, his use of the DUI arrest study is fundamentally flawed because he assumed what is in question. Further, he erred in assuming that DUI arrests from calls for service involve no discretion. While the encounters involve no discretion, the arrests surely do. He admitted that race/discretion may explain the following wide spread in the DUI arrest statistics:
Statewide (all departments)                          12% black
Statewide (excluding State Police)                   10.4% black
State Police                                         16% black
Troop D                                              23% black
Moorestown Station                                   34% black
Moorestown Station, patrol related                   41% black
Moorestown Station, patrol related, south of exit 3  50% black
After hearing the testimony of Kenneth Ruff and Kenneth Wilson, two former troopers called by the defense who were not reappointed at the end of their terms and who said they were trained and coached to make race-based "profile" stops to increase their criminal arrests, the State asked Dr. Cupingood to study the race identified stops in his database and see how many possessed the profile characteristics cited by Ruff and Wilson, particularly how many were young (30 or under), black and male. Dr. Cupingood found that only 11.6% of the race identified stops were of young black males and only 6.6% of all stops were of young black males.
The defense then conducted a profile study of its own. It concentrated on the race identified stops of just blacks issued tickets and found that an adult black male was present in 88% of the cases where the gender of all occupants could be determined and that where gender and age could be determined, a black male 30 or younger was present in 63% of the cases. The defense study is more probative because it does concentrate on just stops of blacks issued tickets eliminating misleading comparisons with totals including whites or whites and a 62.6% group of race unknowns. Neither side, of course, could consider whether the blacks stopped and not issued tickets possessed profile characteristics since the databases contain no information about them.
Dr. Cupingood's so-called Mantel-Haenszel analysis ended the statistical evidence. He put forward this calculation of "expected black tickets" in an attempt to disprove the defense study showing the Patrol Unit, the unit with the most discretion, ticketed blacks at a rate not only well above the Radar and Tac-Pac Units, but also well above the standard fixed by the violator survey. The calculation insinuates that the Patrol Unit issued merely 5 excess tickets to blacks beyond what would have been expected. The calculation is worthless. First and foremost, Dr. Cupingood deleted the non-radar tickets, which presumably involved a greater exercise of discretion. The role police discretion played in the issuance of tickets to blacks was the object of the defense study. Under the guise of comparing only things similarly situated, he thereupon divided each of the thirty-five randomly selected days into four time periods and deleted any radar tickets issued in a time period for *357 which there was not at least one race identified radar ticket issued by the Patrol Unit and at least one by the combined Radar, Tac-Pac Unit. He provided no justification for either creating the 140 time periods or combining the tickets of the Radar and Tac-Pac Units. To compound his defective analysis, he pooled the data in each time period into a single number and employed the resultant weighted average of the two units to compute the expected and excess, if any, tickets issued to blacks. By using weighted averages, he once again assumed the answer to the question he purported to address. He assumed the Patrol Unit gave the same number of tickets to blacks as did the Radar, Tac-Pac Unit, rather than test to see if it did. Even after "winnowing" the data, the comparison between the Patrol Unit and the Radar, Tac-Pac Unit is marginally statistically significant. Without winnowing, Dr. Kadane found the comparison of the radar tickets issued by the Patrol Unit to blacks with the radar tickets issued by the Radar, Tac-Pac Unit to blacks constituted 3.78 standard deviations, which is distinctly above the 5% standard of statistical significance.
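The "standard deviations" both experts speak of come from the usual two-proportion comparison. A minimal sketch, using hypothetical ticket counts rather than the figures in the record:

```python
import math

# Illustrative two-proportion comparison expressed in standard deviations,
# the measure both experts used. All counts below are hypothetical.

def std_deviations(black_a, total_a, black_b, total_b):
    """Difference between two black-ticket proportions, in units of the
    pooled-proportion standard error (values above about 2 are
    statistically significant at the 5% level)."""
    p_a = black_a / total_a
    p_b = black_b / total_b
    pooled = (black_a + black_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical: a discretionary unit writes 120 of 350 tickets to blacks
# (34.3%) against a radar unit's 36 of 200 (18.0%).
z = std_deviations(120, 350, 36, 200)
```

A result above 2 corresponds to the two-standard-deviation threshold the opinion treats as the line of statistical significance at the 5% level.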
The defense did not rest on its statistical evidence alone. Along with the testimony of former troopers Kenneth Ruff and Kenneth Wilson about having been trained and coached to make race-based profile stops, testimony weakened by bias related to their not having been reappointed at the end of their terms, the defense elicited evidence through cross-examination of State witnesses and a rebuttal witness, Dr. James Fyfe, that the State Police hierarchy allowed, condoned, cultivated and tolerated discrimination between 1988 and 1991 in its crusade to rid New Jersey of the scourge of drugs.
Conjointly with the passage of the Comprehensive Drug Reform Act of 1987 and to advance the Attorney General's Statewide Action Plan for Narcotics Enforcement issued in January 1988 which "directed that the enforcement of our criminal drug laws shall be the highest priority law enforcement activity", Colonel Pagano formed the Drug Interdiction Training Unit (DITU) in late 1987 consisting of two supervisors and ten other members, two from each Troop selected for their successful seizure statistics, "... to actually patrol with junior road personnel and provide critical on-the-job training in recognizing potential violators." State Police Plan For Action dated July 7, 1987, at p. 14. According to Colonel Pagano, the DITU program was intended to be one step beyond the existing coach program to impart to newer troopers insight into drug enforcement and the "criminal program" (patrol related arrests) in general. DITU was disbanded in or around July 1992.
No training materials remain regarding the training DITU members themselves received, and few training materials remain regarding the training DITU members provided the newer troopers except for a batch of checklists.[8] Just one impact study was ever prepared regarding the effectiveness of the DITU program rather than periodic impact evaluations and studies as required by S.O.P. F4 dated January 12, 1989, but this one undated report marked D-62 in evidence only provided statistics about the number of investigations conducted, the number of persons involved and the quantity and value of drugs seized without indicating the race of those involved or the number of fruitless investigations broken down by race. In the opinion of Dr. Fyfe, retention of training materials is important for review of the propriety of the training and to discern *358 agency policy, and preparation of periodic impact evaluations and studies is important not only to determine the effectiveness of the program from a numbers standpoint, but more than that to enable administration to monitor and control the quality of the program and its impact on the public, especially a crackdown program like DITU which placed so much emphasis on stopping drug transportation by the use of "consents" to search following traffic stops in order to prevent constitutional excesses.
Despite the paucity of training materials and lack of periodic and complete impact evaluations and studies, a glimpse of the work of DITU emerges from the preserved checklists and the testimony of Sergeants Brian Caffrey and David Cobb. Sergeant Caffrey was the original assistant supervisor of DITU and became the supervisor in 1989. Sergeant Cobb was an original member of DITU and became the assistant supervisor in 1989. Sergeant Caffrey left DITU sometime in 1992, Sergeant Cobb sometime in 1991. Both testified that a major purpose of DITU was to teach trainees tip-offs and techniques about what to look for and do to talk or "dig" their way into a vehicle after, not before, a motor vehicle stop to effectuate patrol related arrests. Both denied teaching or using race as a tip-off either before or after a stop. Nevertheless, Sergeant Caffrey condoned a comment by a DITU trainer during the time he was the supervisor of DITU stating:
"Trooper Fash previously had DITU training, and it showed in the way he worked. He has become a little reluctant to stop cars in lieu [sic] of the Channel 9 News Report. He was told as long as he uses Title 39 he can stop any car he wants. He enjoys DITU and would like to ride again."
As the defense observes in its closing brief, "Why would a trooper who is acting in a racially neutral fashion become reluctant to stop cars as a result of a news story charging that racial minorities were being targeted [by the New Jersey State Police]?" Even A.A.G. Ronald Susswein, Deputy Director of the Division of Criminal Justice, acknowledged that this comment is incomplete because it fails to add the caveat, "as long as he doesn't also use race or ethnicity." Further, Sergeant Caffrey testified that "ethnicity is something to keep in mind" albeit not a tip-off and that he taught attendees at both the annual State Police in-service training session in March 1987 and the special State Police in-service training sessions in July and August 1987 that Hispanics are mainly involved in drug trafficking and showed them the film Operation Pipeline wherein the ethnicity of those arrested, mostly Hispanics, is prominently depicted. Dr. Fyfe criticized Sergeant Caffrey's teaching Hispanics are mainly involved and his showing Operation Pipeline as well as the showing of the Jamaican Posse film wherein only blacks are depicted as drug traffickers at the 1989 annual State Police in-service training session saying trainers should not teach what they do not intend their trainees to act upon. At a minimum, teaching Hispanics are mainly involved in drug trafficking and showing films depicting mostly Hispanics and blacks trafficking in drugs at training sessions worked at cross-purposes with concomitant instruction pointing out that neither race nor ethnicity may be considered in making traffic stops.
Key corroboration for finding the State Police hierarchy allowed and tolerated discrimination came from Colonel Pagano. Colonel Pagano was Superintendent of the State Police from 1975 to February 1990. He testified there was a noisy demand in the 1980s to get drugs off the streets. In accord, Attorney General Cary Edwards and he made drug interdiction the number one priority of law enforcement. He helped formulate the Attorney General's Statewide Action Plan for Narcotics Enforcement and established DITU within the State Police. He kept an eye on DITU through conversations with staff officers and Sergeants Mastella and Caffrey and *359 review of reports generated under the traditional reporting system and D-62 in evidence. He had no thought DITU would engage in constitutional violations. He knew all State Police members were taught that they were guardians of the Constitution and that targeting any race was unconstitutional and poor police practice to boot. He recognized it was his responsibility to see that race was not a factor in who was stopped, searched and arrested. When he became Superintendent, he formed the Internal Affairs Bureau to investigate citizen complaints against State Police members to maintain the integrity of the Division. Substantiated deviations from regulations resulted in sanctions, additional training or counseling.
More telling, however, is what Colonel Pagano said and did, or did not do, in response to the Channel 9 exposé entitled "Without Just Cause" which aired in 1989 and which troubled Trooper Fash and what he did not do in response to complaints of profiling from the NAACP and ACLU and these consolidated motions to suppress and similar motions in Warren and Middlesex Counties. He said to Joe Collum of Channel 9 that "[violating rights of motorists was] of serious concern [to him], but no where near the concern that I think we have got to look to in trying to correct some of the problems we find with the criminal element in this State" and "the bottom line is that those stops were not made on the basis of race alone." (emphasis added) Since perhaps these isolated comments were said inadvertently or edited out of context, a truer reflection of his attitude about claims of racism would appear to be his videotaped remarks shown all members of the State Police at roll call in conjunction with the WOR series. Thereon he clearly said that he did not want targeting or discriminatory enforcement and that "[w]hen you put on this uniform, you leave your biases and your prejudices behind." But he also said as regarded the charge of a Trenton school principal named Jones that he had been stopped on the Turnpike and threatened, intimidated and assaulted by a trooper, "We know that the teacher assaulted the trooper. He didn't have a driver's license or a registration for his fancy new Mercedes." (emphasis added) And he called Paul McLemore, the first African-American trooper in New Jersey and now a practicing attorney and who spoke of discrimination within the ranks of the State Police, "an ingrate." And he told the members to "keep the heat on" and then assured them:
"...[H]ere at Division Headquarters we'll make sure that when the wheels start to squeak, we'll do whatever we can to make sure that you're supported out in the field.... Anything that goes toward implementing the Drug Reform Act is important. And, we'll handle the squeaky wheels here."
He admitted the Internal Affairs Bureau was not designed to investigate general complaints, so he could not refer the general complaints of discrimination to it for scrutiny. Yet he never requested the Analytical Unit to investigate stop data from radio logs, patrol charts and tickets or search and seizure data from arrest reports, operations reports, investigation reports and consent to search forms, not even after the Analytical Unit informed him in a report on arrests by region, race and crime that he had requested from it for his use in the WOR series that "... arrests are not a valid reflection of stops (data relative to stops with respect to race is not compiled)." The databases compiled for these motions attest, of course, to the fact that race identified stop data could have been compiled. He testified he could not launch an investigation into every general complaint because of limited resources and that there was insufficient evidence of discrimination in the Channel 9 series, the NAACP and ACLU complaints and the various motions to suppress for him to spend his "precious" resources. In short, he left the issue of discrimination up to the courts and months of testimony in this and other counties at State expense.
The right to be free from discrimination is firmly supported by the Fourteenth *360 Amendment to the United States Constitution and the protections of Article I, paragraphs 1 and 5 of the New Jersey Constitution of 1947. To be sure, "[t]he eradication of the `cancer of discrimination' has long been one of our State's highest priorities." Dixon v. Rutgers, The State University of N.J., 110 N.J. 432, 451, 541 A.2d 1046 (1988). It is indisputable, therefore, that the police may not stop a motorist based on race or any other invidious classification. See State v. Kuhn, 213 N.J.Super. 275, 517 A.2d 162 (1986).
Generally, however, the inquiry for determining the constitutionality of a stop or a search and seizure is limited to "whether the conduct of the law enforcement officer who undertook the [stop or] search was objectively reasonable, without regard to his or her underlying motives or intent." State v. Bruzzese, 94 N.J. 210, 463 A.2d 320 (1983). Thus, it has been said that the courts will not inquire into the motivation of a police officer whose stop of a vehicle was based upon a traffic violation committed in his presence. See United States v. Smith, 799 F.2d 704, 708-709 (11th Cir.1986); United States v. Hollman, 541 F.2d 196, 198 (8th Cir.1976); cf. United States v. Villamonte-Marquez, 462 U.S. 579, 103 S.Ct. 2573, 77 L.Ed.2d 22 (1983). But where objective evidence establishes "that a police agency has embarked upon an officially sanctioned or de facto policy of targeting minorities for investigation and arrest," any evidence seized will be suppressed to deter future insolence in office by those charged with enforcement of the law and to maintain judicial integrity. State v. Kennedy, 247 N.J.Super. 21, 588 A.2d 834 (App.Div. 1991).
Statistics may be used to make out a case of targeting minorities for prosecution of traffic offenses provided the comparison is between the racial composition of the motorist population violating the traffic laws and the racial composition of those arrested for traffic infractions on the relevant roadway patrolled by the police agency. Wards Cove Packing Co. v. Atonio, supra; State v. Kennedy, 247 N.J.Super. at 33-34, 588 A.2d 834. While defendants have the burden of proving "the existence of purposeful discrimination," discriminatory intent may be inferred from statistical proof presenting a stark pattern or an even less extreme pattern in certain limited contexts. McCleskey v. Kemp, 481 U.S. 279, 107 S.Ct. 1756, 95 L.Ed.2d 262 (1987). Kennedy, supra, implies that discriminatory intent may be inferred from statistical proof in a traffic stop context probably because only uniform variables (Title 39 violations) are relevant to the challenged stops and the State has an opportunity to explain the statistical disparity. "[A] selection procedure that is susceptible of abuse... supports the presumption of discrimination raised by the statistical showing." Castaneda v. Partida, 430 U.S. 482, 494, 97 S.Ct. 1272, 51 L.Ed.2d 498 (1977).
Once defendants expose a prima facie case of selective enforcement, the State generally cannot rebut it by merely calling attention to possible flaws or unmeasured variables in defendants' statistics. Rather, the State must introduce specific evidence showing that either there actually are defects which bias the results or the missing factors, when properly organized and accounted for, eliminate or explain the disparity. Bazemore v. Friday, 478 U.S. 385, 106 S.Ct. 3000, 92 L.Ed.2d 315 (1986); EEOC v. General Telephone Co. of Northwest, Inc., 885 F.2d 575 (9th Cir.1989). Nor will mere denials or reliance on the good faith of the officers suffice. Castaneda v. Partida, 430 U.S. at 498 n. 19, 97 S.Ct. 1272, 51 L.Ed.2d 498.
Here, defendants have proven at least a de facto policy on the part of the State Police out of the Moorestown Station of targeting blacks for investigation and arrest between April 1988 and May 1991 both south of exit 3 and between exits 1 and 7A of the Turnpike. Their surveys satisfy Wards Cove, supra. The statistical disparities and standard deviations revealed are indeed stark. The discretion devolved upon general road *361 troopers to stop any car they want as long as Title 39 is used evinces a selection process that is susceptible of abuse. The utter failure of the State Police hierarchy to monitor and control a crackdown program like DITU or investigate the many claims of institutional discrimination manifests its indifference if not acceptance. Against all this, the State submits only denials and the conjecture and flawed studies of Dr. Cupingood.
The eradication of illegal drugs from our State is an obviously worthy goal, but not at the expense of individual rights. As Justice Brandeis so wisely said dissenting in Olmstead v. United States, 277 U.S. 438, 479, 48 S.Ct. 564, 72 L.Ed. 944 (1928):
"Experience should teach us to be most on our guard to protect liberty when the government's purposes are beneficent. Men born to freedom are naturally alert to repel invasion of their liberty by evilminded rulers. The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding."
Motions granted.
NOTES
[1] The motions also include claims under the Fourth Amendment, but they were severed before the hearing to await future proceedings if not rendered moot by this decision.
[2] Originally, twenty-three defendants joined in the motions. On the first day of the hearing, November 28, 1994, I dismissed the motions of Darrell Stanley, Roderick Fitzgerald, Fred Robinson, Charles W. Grayer, Keith Perry and Alton Williams due to their unexplained nonappearances.
[3] Dr. Lamberth erred in using 13.5% as the standard for comparison with the stop data. The violator survey indicates that 14.8%, rounded to 15%, of those observed speeding were black. This is the percentage Dr. Lamberth should have used in making statistical comparisons with the stop data in the databases. Nonetheless, it would appear that whatever the correctly calculated disparities and standard deviations are, they would be nearly equal to those calculated by Dr. Lamberth.
[4] In this opinion I am ignoring the arrest data in the databases and Dr. Lamberth's analysis thereof since neither side produced any evidence identifying the Turnpike population between exits 1 and 3 or 1 and 7A eligible to be arrested for drug offenses or otherwise. See Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 109 S.Ct. 2115, 104 L.Ed.2d 733 (1989).
[5] That 62.6 percent of the stops in the defense database are not race identified is a consequence of both the destruction of the radio logs for ten of the thirty-five randomly selected days in accordance with the State Police document retention policy and the frequent dereliction of State Police members to comply with S.O.P. F3 effective July 13, 1984 requiring them to communicate by radio to their respective stations the race of all occupants of vehicles stopped prior to any contact.
[6] Of the 3060 stops in the databases, 1292 are ticketed stops. Hence, no tickets were issued for nearly 60% of the stops.
[7] During the hearing the State did attempt to introduce some annual speed surveys conducted on the Turnpike by the New Jersey Department of Transportation which the State represented would contradict a conclusion of the violator survey that 98.1% of the vehicles observed travelled in excess of sixty miles per hour. Besides noting that the State knew of these surveys long before the hearing and failed to produce them in discovery, I denied the proffer mainly because the surveys lacked racial data and also because there was a serious issue over their trustworthiness for admission under N.J.R.E. 803(c)(8) since the surveys were done to receive federal highway dollars. The critical information here is the racial mix of those eligible to be stopped, not the percentage of vehicles exceeding the speed limit.
[8] Although DITU kept copies of all arrest, operations and investigation reports and consent to search forms growing out of encounters with the public during training for a time at its office, the copies were destroyed sometime in 1989 or 1990 and before they were sought in discovery. The originals of these reports and forms were filed at the trainee's station and incorporated into the State Police "traditional reporting system" making them impossible to ferret out now.
In a decisive legal victory for the NCAA and the four major U.S. pro sports leagues, U.S. District Judge Michael Shipp late Friday issued a permanent injunction against New Jersey’s latest effort to legalize sports betting. The injunction prevents New Jersey from implementing a law that would have repealed the state’s ban against sports betting and permitted casinos and racetracks to offer sports wagering. While New Jersey has already filed a notice that it will appeal the injunction to the U.S. Court of Appeals for the Third Circuit, the ruling is a major setback for the state's leading advocate of sports betting: Governor Chris Christie.
The turbulent path of New Jersey’s attempt to legalize sports betting
Since 1992, the odds have been stacked against New Jersey legalizing sports betting. In that year, President George H.W. Bush signed the Professional and Amateur Sports Protection Act (“PASPA”) into law. PASPA enjoyed bipartisan support due to concerns about the influence of gambling on sports and the danger of athletes and coaches “throwing” games. These worries ostensibly distinguish sports betting from other types of betting such as slot machines, poker games, lotteries and other “bets” that are regulated and in some cases prohibited by states’ laws.
PASPA is often described as a federal ban on sports betting but is more technically a ban on states' ability to license, sponsor or authorize sports betting. This slight distinction is significant because advocates for states’ rights contend that PASPA, by taking away states’ autonomy on sports betting, violates the U.S. Constitution. Also, while PASPA is a “federal law,” it doesn’t apply equally across the country. PASPA exempts four states -- Nevada, Delaware, Oregon and Montana -- that had already adopted sports betting by 1992. As a result of this exemption, you can legally bet on any NFL game this Sunday in Nevada. But in New Jersey you can’t.
Attitudes about sports betting have become more permissive since 1992. Even NBA commissioner Adam Silver recently opined that Congress should allow states to legalize sports betting under certain conditions. But PASPA remains the law of the land.
In 2011, Christie took on the federal government and signed the New Jersey Sports Wagering Law. This law created a limited regulatory scheme where casinos and racetracks could apply for sports betting licenses. Other types of betting providers, as well as bets on college games played in New Jersey and games played by New Jersey colleges, would remain illegal. New Jersey hoped to generate millions of dollars in tax revenue by taxing sports bets.
The pro leagues and the NCAA then told New Jersey, in so many words, don’t bet on it. They filed a lawsuit against New Jersey in federal court, contending that New Jersey’s law violated PASPA. The U.S. Department of Justice soon joined them in a crusade against sports betting in the Garden State.
The plaintiffs charged that Christie’s law would jeopardize “the public’s faith and confidence” in team sports. They also highlighted that sports betting substantially impacts “interstate commerce,” since big-time pro and college sports constitute national industries. This is a crucial legal point. Under the U.S. Constitution’s Commerce Clause, a federal law can usually restrict states’ decision making so long as the law regulates an economic activity that crosses state lines. Much to the chagrin of states rights’ advocates and those who read the Constitution literally, courts today broadly read the Commerce Clause to uphold federal laws that merely impact an interstate industry. So even if the actual activity -- a person places sports bets at a New Jersey casino -- occurs in just one state, the games subject to the bet might be played in other states, or rivals of those teams might play in other states, thus violating federal law in the broad interpretation.
New Jersey fired back at the lawsuit with a two-pronged argument against PASPA. The first argument was that PASPA unlawfully treats Nevada favorably compared to other states. This argument is grounded in the equal sovereignty doctrine, a controversial principle holding that states are owed equal treatment by the federal government. Some jurists, including John Roberts, the Chief Justice of the U.S. Supreme Court, embrace the equal sovereignty doctrine. Critics, however, stress that it is nowhere to be found in the actual Constitution. Second, New Jersey maintained that PASPA unlawfully barred New Jersey from exercising its authority under the 10th Amendment. The anti-commandeering principle forbids Congress from ordering states to govern or regulate in a particular way where there is no related federal regulatory scheme.
Judge Shipp was assigned the case and in February 2013 held for the leagues and the NCAA. In September 2013, two of the three judges on a U.S. Court of Appeals for the Third Circuit panel also sided with the leagues and the NCAA. In affirming the constitutionality of PASPA, the majority rejected the equal sovereignty doctrine as unrelated to an issue like sports betting. The majority also reasoned that PASPA only prohibited New Jersey from legalizing sports betting, but did not compel New Jersey to maintain its state-law prohibitions against sports betting. This distinction may sound immaterial -- no matter how it is read, PASPA blocks New Jersey from legalizing sports betting -- but it is important for the purposes of the anti-commandeering doctrine. This doctrine only helps New Jersey if the federal government compels it to act, not if it merely prevents the state from acting.
The dissenting voice on the Third Circuit saw the law quite differently. Judge Thomas Vanaskie reasoned that the distinction between prohibiting New Jersey from legalizing sports betting and compelling New Jersey to ban sports betting was “illusory.” He stressed that no matter how it is read, PASPA unlawfully prevented New Jersey from pursuing a right that it enjoys through the anti-commandeering doctrine: the right to legalize sports betting. While Christie hoped the U.S. Supreme Court would consider Vanaskie’s argument, the Court declined to grant certiorari.
But a dedicated gambler is often undeterred by a loss here and there. Enter Christie’s Plan B to legalize sports betting in New Jersey: In September 2014, Christie joined New Jersey Acting Attorney General John Jay Hoffman in filing a motion with Judge Shipp. Christie and Hoffman asked Shipp to hold that the state was not required to criminalize sports betting. They stressed that while the majority of the Third Circuit held against New Jersey, it also held that a “state may repeal its sports wagering ban” without violating PASPA, and that each state is free “to decide what the exact contours of the prohibition [on sports betting] will be.” According to Daniel Wallach, a partner at Becker & Poliakoff, P.A., and a leading expert on gaming law, this language indicated that New Jersey could partially ban sports betting and thus partially legalize it: “In my view, the phrase the ‘exact contours of the prohibition’ suggests that New Jersey is free to decide just how much of a prohibition on sports gambling it wants to maintain on its books.”
Contemporaneously, Hoffman issued a formal opinion that under his office’s interpretation of New Jersey law, privately owned casinos and racetracks could accept sports bets without violating criminal law. With momentum on his side, Christie on Oct. 20 signed New Jersey Senate Bill 2460 into law. The bill partially repealed the state’s ban against sports betting and would have allowed casinos and racetracks to offer sports wagering beginning Oct. 26. The wheels were set in motion for New Jersey to partially legalize sports betting by decriminalizing it.
In October, Judge Shipp delivered disappointing news to Christie and Hoffman when he entered a temporary restraining order barring Monmouth Park from conducting sports wagering and prohibiting the state from implementing Senate Bill 2460. The order lasted until Friday, when Shipp issued a permanent injunction against New Jersey and granted final summary judgment in favor of the leagues.
Was Judge Shipp right to issue a permanent injunction?
Judge Shipp ruled for the NCAA and leagues despite the Third Circuit’s observation that states can repeal their prohibition on sports betting and are permitted to “decide the exact contours” of how they prohibit sports betting. Wallach, for one, takes issue with Shipp’s reasoning. “For Judge Shipp to read the Third Circuit’s language in the manner suggested by the leagues,” Wallach contends, “ignores the plain language and obvious meaning of the words employed by the Third Circuit.” Wallach adds that New Jersey’s repeal law "comports with prior statements made by United States Solicitor General Donald M. Verrilli, Jr., who asserted in a brief filed with the U.S. Supreme Court that New Jersey was free to repeal its ban against sports wagering 'in whole or in part' without violating PASPA."
New Jersey’s law, Wallach emphasizes, "is precisely the kind of law that the Solicitor General said would not be a violation of PASPA." Wallach, who attended Thursday’s oral argument, said that famed lawyer Ted Olson (New Jersey’s legal counsel) stressed that the federal government and the leagues (who were aligned with the federal government in the prior case) "were engaging in a classic 'bait-and-switch' by backtracking from the DOJ’s prior acknowledgment." Wallach expects this to be a key issue on appeal.
For at least two reasons, however, Shipp seemed troubled by the public policy implications of making sports betting lawful in New Jersey. First, Shipp signaled concern that New Jersey would, in violation of PASPA, inevitably regulate sports betting at casinos and racetracks. This is because those casinos and racetracks are already regulated by New Jersey, meaning it may be difficult for New Jersey to avoid also regulating sports books at those same casinos and racetracks.
Second, Shipp seemed worried that letting New Jersey decriminalize sports betting might open the door for other states to circumvent PASPA by decriminalizing sports betting as well. "This case has national implications," Wallach stresses. "It’s not just about New Jersey. If New Jersey prevails, we can expect other states such as Delaware, Pennsylvania and, perhaps, Florida to follow New Jersey’s 'blueprint' and enact similar partial repeal laws that likewise benefit only licensed gaming operators such as casinos and racetracks. Since casinos and racetracks are prevalent in many states, Judge Shipp may have been concerned about the impact that his ruling could have throughout the country and on the continued viability of PASPA." Wallach noted that during Thursday’s oral argument, Judge Shipp pointedly asked Ted Olson if other states could pass similar legislation if New Jersey prevails, which Wallach saw as a "harbinger" of this evening’s decision.
Wallach also believes that Judge Shipp viewed New Jersey’s repeal law as a “work-around” of PASPA, as evidenced by the following question he asked of Ted Olson during oral argument: "Are the federal laws so easily evaded that we can cast a law in such a way to, in essence, get around and do indirectly that which you cannot do directly?" By asking this question, Wallach reasons that Judge Shipp "may have been looking to avoid a result that could have the effect of rendering a federal law (PASPA) meaningless, especially if other states were to implement New Jersey’s approach."
Lastly, during oral arguments on Thursday, Shipp asked numerous questions about the meaning of the dissent by Judge Vanaskie. In the dissent, Vanaskie viewed the choice for states as one where there is no regulation of sports betting or a complete ban of sports betting. Neither scenario fits New Jersey’s desire to partially legalize sports betting. Shipp emphasized that point in his opinion Friday night.
Does New Jersey have a chance on appeal?
Shipp's order grants final summary judgment, which, under federal law, means it is appealable to the appropriate appellate court (the U.S. Court of Appeals for the Third Circuit). SI has learned New Jersey has already appealed Shipp's ruling, and will probably move to expedite the appeal in order to present its case as early as possible. New Jersey employed a similar tactic (with the leagues' consent) in the first lawsuit, and it is expected that the Third Circuit will again grant expedited review. If that happens, all written briefs would be filed by the end of February, and oral argument likely scheduled for late March or early April. Based upon this anticipated timetable, look for the Third Circuit to issue a decision between May 2015 and July 2015. The losing side could then petition for "en banc" review where all of the judges on the Third Circuit would hear the case. The last step of an appeal would be to petition the U.S. Supreme Court to take the case.
New Jersey’s chances of success before the Third Circuit will undoubtedly be greater than they were before Judge Shipp, who most observers expected to rule in favor of the leagues. The outcome of the appeal will ultimately hinge on the Third Circuit’s interpretation of a single paragraph from its opinion in the first case -- that “a state may repeal its sports wagering ban” without violating PASPA, and each state is free “to decide what the exact contours of the prohibition will be.” While New Jersey believes this “exact contours” language allows it to partially repeal its state-law prohibition against sports betting -- especially in view of the U.S. Department of Justice's prior concession on that point -- Wallach is skeptical the Third Circuit intended for its “exact contours” language to be utilized as a pathway for states to avoid the strictures of PASPA. As Wallach explains, “this language was never intended to be a ‘loophole’ for states to exploit, but, rather, it was a rationale expressed by the Third Circuit majority as to why PASPA did not ‘commandeer’ states to maintain unwanted state-law prohibitions against sports betting on its books.”
Wallach expects the Third Circuit judges will shift their positions in this new case. While Judge Vanaskie previously sided with New Jersey in concluding PASPA violates the anti-commandeering doctrine, his dissenting opinion seemed to reject New Jersey’s partial repeal strategy. In a footnote, Vanaskie revealingly wrote that he “fails to discern” how the Third Circuit majority opinion “leaves much room for the states to make their own policy.” He made clear that “the only choice is to allow for completely unregulated sports wagering (a result that Congress did not intend to foster), or to ban sports wagering completely.” Thus, Judge Vanaskie appears to be aligned with Judge Shipp’s “all-or-nothing” approach on just how far a repeal must go to avoid running afoul of PASPA. This is not a good sign for New Jersey, which, ironically, will be looking to the judges who previously ruled against it to side with it this time around. Not a promising prospect according to Wallach, who believes New Jersey’s chances for success on appeal may be no greater than 25 percent in view of Vanaskie’s footnote and the public policy considerations associated with potentially opening the floodgates for nationwide sports gambling.
National politics lurk in the background of New Jersey’s plans. Christie is considered a likely candidate for the 2016 Republican presidential nomination. Will he fight as hard for his sports betting case if his focus is on running for president and if the odds of his case prevailing are low? Only time will tell.
Michael McCann is a Massachusetts attorney and the founding director of the Sports and Entertainment Law Institute at the University of New Hampshire School of Law. He is also the distinguished visiting Hall of Fame Professor of Law at Mississippi College School of Law.
|
The iPhone 5S and iPhone 5C have been announced. So what does that mean for the iPhone 6?
Well, we'll tell you. Or, at least, we'll tell you what we can glean from rumor and speculation - some reliable, some not so much.
iOS 7: Apple's new look for iPhone and iPad
Given the iPhone's history - from the 3G onwards, there's always been a half-step S model before the next numbered iPhone - it was no surprise the 5S came first, and so we're looking at 2014 for a new iPhone 6.
One thing is for sure, with potential refreshes of such super handsets as the Samsung Galaxy S4, Sony Xperia Z and HTC One, the next iPhone will have to seriously up its game.
iPhone 6 release date
The iPhone 6 release date will be in 2014. It will follow the iPhone 5S, which will be released at the end of this week.
Jefferies analyst Peter Misek says that there will be a June 2014 release for the iPhone 6. We reckon it will be later than that, around a year after the 5S. Citi's Glen Yeung also believes that we won't see an iPhone 6 until 2014, although that's no big leap.
Interestingly, in May 2013 Stuff reported it received a photo of the till system at a Vodafone UK store (which it has since removed along with the reference to Vodafone), with '4G iPhone 6' listed.
So could we see both an iPhone 5S and iPhone 6 in quick succession? Some reports suggest a new 5S in the late part of the year before a revamped iPhone 6 very early in 2014.
Apple may have a new roadmap, with new phones every spring and autumn
iPhone 6 casing
It's been suggested that there could even be three size variants of the new iPhone - check out these mocked up images by artist Peter Zigich. He calls the handsets iPhone 6 Mini, iPhone 6 & iPhone 6 XL (these look rather like the iPhone 5C variant though). However, as ZDNet rightly points out, different size variants aren't exactly easy to just magic out of thin air.
Pretty, yes, but also horrifically scratch-prone. Will your next iPhone have a plastic back?
The iPhone 6 will finally do NFC
About time too. Well, that's what iDownloadblog reckons, quoting Jefferies analyst Peter Misek. Many Android phones now boast NFC and Apple appears to have been happy to be left behind here.
See our video below on what Apple needs to do to slay Samsung's Galaxy S4
The iPhone 6 will run iOS 8
With iOS 7 heading out of the traps now, who's betting against the next iPhone coming with iOS 8?
We'd expect a September or October release date for iOS 8 in line with previous releases.
iOS 7: what do you think?
iPhone 6 storage
We've already seen a 128GB iPad, so why not a 128GB iPhone 6? Yes, it'll cost a fortune, but high-spending early adopters love this stuff.
iPhone 6 home button
According to Business Insider, of the many iPhone 6 prototypes Apple has made, one has a giant Retina+ IGZO display and a "new form factor with no home button. Gesture control is also possibly included". Surely it will include Apple's new Touch ID fingerprint tech, though?
iPhone 6 screen
The Retina+ Sharp IGZO display would have a 1080p Full HD resolution. It's also been widely reported that Apple could introduce two handset sizes as it seeks to compete with the plethora of Android devices now on the market.
Take this one with a pinch of salt, because China Times isn't always right: it reckons the codename iPhone Math, which may be a mistranslation of iPhone+, will have a 4.8-inch display. The same report suggests that Apple will release multiple handsets throughout the year over and above the iPhone 5S and 6, which seems a bit far-fetched to us.
Patents show that Apple has been thinking about magical morphing technology that can hide sensors and even cameras. Will it make it into the iPhone 6? Probably not.
Jefferies analyst Peter Misek also says he believes the new iPhone will have a bigger screen. Different sizes also seem rather likely to us - the word on the street after WWDC 2013 was that there would be 4.7 and 5.7-inch versions.
More rumors in September 2013 point to a six-inch display, but this seems a little large to us.
You'll probably still be able to see the camera lens in the iPhone 6
iPhone 6 processor
Not a huge surprise, this one: the next processor will be a quad-core A8 or an evolved A7. The big sell here is more power with better efficiency, which should help battery life.
iPhone 6 camera
Apple's bought camera sensors from Sony before, and this year we're going to see a new, 13-megapixel sensor that takes up less room without compromising image quality.
An Apple patent, uncovered by Apple Insider in May 2013, shows a system where an iPhone can remotely control other illuminating devices - extra flashes. It would work in a similar manner to that seen in professional photography studios. Interesting stuff.
Will the iPhone 6 be handy for pro photographers? [Image credit: Apple Insider]
iPhone 6 eye tracking
One thing seems certain - Apple can't ignore the massive movement towards eye-tracking tech from other vendors, especially Samsung. It seems a shoo-in that Apple will deliver some kind of motion tech within the next iPhone, probably from uMoove.
iPhone 6 wireless charging
Wireless charging still isn't mainstream. Could Apple help give it a push? CP Tech reports that Apple has filed a patent for efficient wireless charging, but then again Apple has filed patents for pretty much anything imaginable.
The tasty bit of this particular patent is that Apple's tech wouldn't just charge one device, but multiple ones. Here are more details on the iPhone 6 wireless charging patent.
Meanwhile, a further Apple patent seems to imply that future iPhones will be able to adjust volume as you move them away from your ear.
And could the iPhone 6 really have 3D? It's unlikely, but the rumours keep on coming.
|
HR Intel: How the Zika Virus Impacts HR
A round up of workplace developments and legal trends to help keep HR ahead of the curve
Now that the Zika virus has made its way up from South and Central America, HR professionals and business owners in the United States need to start paying attention. While the virus is not as severe as Ebola in terms of its immediate risk of death, Zika’s impact will be felt throughout American businesses, particularly those operating in mosquito-friendly climates and whose employees must travel into Zika-affected areas.
Florida has already declared a Public Health Emergency and has about 16 cases of the virus state-wide, all of which are travel-related. More states likely will follow suit as the weather warms up and the virus continues to spread via the aedes aegypti mosquito, travel and sexual transmission. President Obama has asked Congress for $1.8 billion to halt the spread of the virus, which will be used for mosquito control, vaccine research and enhanced medical care for pregnant women.
Beyond the public health concerns, HR professionals and business owners should be on the lookout for a number of issues that could grow more severe in the wake of the Zika crisis:
Travel restrictions may be imposed on individuals with the virus;
Employers may see an uptick in disability, pregnancy-related and workers’ compensation leave claims;
National origin or race discrimination claims related to the virus may also increase;
The spread of the virus may increase employers’ burden with respect to health insurance coverage for employees and their families; and
Employers should use caution before sending their employees into Zika-affected areas.
Note: the Occupational Safety and Health Act may not afford employees the opportunity to legally refuse the work as the virus likely does not represent an “immediate risk of death or serious injury.”
In the next four years, employers and HR professionals will experience something that has never happened before: five separate generations of workers will be in the workplace simultaneously. If you thought the Baby Boomers and Millennials have trouble communicating, just wait until the Traditionalists meet Gen 2020!
Like it or not, the companies that are successful in the future will be the ones that compete for the best talent today, even if that means (shudder) catering to millennial culture. That means more flexible working arrangements, enhanced paid sick leave and parental leave benefits and a clearer connection between performance, rewards, benefits and pay.
Along those lines, a recent Talent Management Rewards Pulse Survey found that only 32% of HR executives said their merit pay program was effective in differentiating pay based on performance. Meanwhile, only 20% of executives found their existing merit pay systems to be effective at driving higher levels of individual performance and only 26% said their managers and employees are satisfied with the performance rating process. There is tremendous opportunity for change and improvement there, starting with an overhaul of the performance management system and training supervisors on the unique challenges of managing several generations at once.
Double-whammy for Yahoo!
While Yahoo goes through the painful process of shedding 15% of its workforce, the beleaguered Internet behemoth is also facing a lawsuit that claims its employee performance and ranking system was rigged to beat the Worker Adjustment and Retraining Notification (WARN) Act. Specifically, the suit accuses Yahoo of using a “forced ranking” system – sort of a bell-curve for HR – that forced certain percentages of the workforce into performance “buckets” that (allegedly) didn’t paint an accurate picture of performance.
The suit also claims that the ranking system enabled Yahoo to terminate employees for “cause,” rather than label terminations as “layoffs,” the latter of which may have triggered both the federal WARN Act and California’s counterpart “mini-WARN” legislation, requiring the company to provide advance notice of layoffs to employees and to compensate them during the notice period.
HR grab bag
Wal-Mart got slammed by a huge $31 million verdict due to a trifecta of claims for discrimination, retaliation and wrongful termination brought by a fired pharmacist. The retail giant allegedly ignored repeated warnings from the employee that patients were getting prescriptions filled improperly due to lack of training and management. After receiving the warnings, Wal-Mart then terminated the pharmacist, explaining that the termination decision was made after the pharmacist misplaced her key to the pharmacy. Wal-Mart (allegedly) retained a male pharmacist who had similarly misplaced his key.
Is your company website accessible to the legally blind? If not, it may violate Title III of the Americans with Disabilities Act (ADA), which requires reasonable modifications to corporate communications to account for disabilities. The Department of Justice (DOJ) is ramping up efforts to target employers with inaccessible websites, and plaintiffs’ attorneys smell blood in the water, targeting employers with letters and phone calls threatening litigation.
Grocery workers are getting legal protection -- no, not “Goodfellas” protection, but their very own (eponymous) piece of legislation called the Grocery Workers Retention Act. The law will go into effect in May 2016, and it will require successor employers of grocery workers (companies that merge with or acquire existing grocery businesses) to retain all existing workers for 90 days. The law will also require successor employers to author written performance reviews of all existing employees and retain the records for three years. The companies are not obligated to retain the workers after the 90-day period.
How is this song relevant to HR?
In the last edition of HR Intel we asked you how “Take on Me” by A-ha is relevant to HR. Take on Me is (in addition to being one of the signature 80s songs) a study in perseverance and adaptability. The version of the song that we all know and love was (by several accounts) the fifth or sixth version of the song released by the band, meaning the song went through several rewrites, remixes, producers and managers before the final product was revealed to the world. The lesson there for HR is to stay focused if things don’t go your way. A minor tweak here and there and you could have yourself a hit.
As to how you might avoid becoming a one-hit wonder, that’s another idea for another column.
|
PEBBLE BEACH, Calif.--It appears as if he qualifies for a career as a porn star, known for dropping his pants to expose his penis as the world labeled him as a serial cheater, attracting nearly all bimbos with his freaky double-life.
From most perspectives, he’s despised for committing malignant transgressions against his family and wife, Elin, after it became evident that he had extramarital affairs, badly ruining his credibility among peers and spectators sitting in the galleries to witness the world’s greatest golfer.
Eventually the bitterness will fade and believability will be mended for the most transcendent athlete on the planet, but he’ll always be described as a sex addict by a resentful populace that cannot stand Tiger Woods for his poor judgment.
Half of the people hate an iconic golfer, once known as the inimitable role model who runs an educational center for children, unforgiving of his sex scandal that has ravaged an idolized career.
He nearly responded with his best performance since the eight-month intermission he spent rehabilitating his surgically repaired knee and weathering the opprobrious sex scandal that unmasked the contemptuous side of Eldrick Woods.
As it happened, he aroused the crowd in the gallery when he fired a remarkable 3-wood shot that soared over the Pacific Ocean, traveled 250 feet, landed on the green and rolled 15 feet past the hole Saturday.
The shot set up a two-putt birdie, Woods’ third straight, for a 5-under 66 that tied the lowest round of the U.S. Open and left him five shots behind Dustin Johnson, a shaky leader who would collapse on the charming surface of Pebble Beach on Father’s Day.
While thousands surrounded the green pulling for a tattered Woods Sunday, he completely lost his balance in an enthralling tournament, bringing back memories of his lousy and hopeless letdown at the Masters, a grievance he carried into the Open.
Faced with the similarities from Augusta, it wasn’t a scene of madness involving the chaotic media when swarming reporters questioned Woods about the status of his irreparable marriage and disgraceful scandals that dented a dynamic career.
For much of the afternoon, the Open belonged to the biggest names, such as Woods, his nemesis Phil Mickelson and Ernie Els, but neither of the high-profile golfers prevailed in the grandest moment of the tournament, faltering on the final day.
This was the happiest ending on the coast for the unlikeliest winner, Graeme McDowell, who defeated runner-up Gregory Havret, two European golfers who outlasted the likes of the tournament’s supreme stars near the beautiful shores of California.
In a competition Woods transcended, like when he once dominated the fairways with a fiery mindset that he could win his first major title this year and move within legend Jack Nicklaus’ record of 14 major titles, he almost rejoiced in triumph before falling out of the spotlight.
It’s clearly easy to fathom that his time to win seemed perfect on Father’s Day, a day he reflects on the memorable moments with his late father, Earl Woods, who would have scolded him and been deeply disillusioned by his son’s sickness of cheating on his wife after raising him to be a family-oriented man.
That’s certainly the truth when his father demanded strong character with his influential principles, but arrogantly, Tiger had an unmanageable and disloyal demeanor, ignoring his father’s ethical structures for his poor conduct.
Even after ballooning to a 4-over 75 that left him tied with Mickelson at 3 over, Woods had begun the day in a gratifying position, with heightened chances of winning the Open.
But unfortunately, he gaffed. He looked unbeatable, but he was beatable. He stared furiously, but he looked petrified. He had it, but he fell. He was a rising star, but crumbled as a fallen star. If he expects to win another major title, he’ll have to close it out strongly and not deteriorate on the final day when competition is vital.
Realistically, it’s the one sport requiring momentum and a tough-driven mental attitude, but he still hasn’t fully escaped or recovered from the tainted scandal, desperately trying to mend his impaired marriage.
There’s a sense that Tiger doesn’t have the urge or mindset, worried about salvaging a damaged relationship. The tabloids are still revealing disheartening chronicles about alleged mistresses, and there’s no doubt that he’s marked as a sexual criminal for the rest of his life with a polluted legacy, which is corroded forever.
This would be the appropriate time to admit that Tiger is gently fading out of the picture, faltering and tarnishing as the invincible and impeccable icon all people admired, including children before he foolishly lived a deceitful and insidious lifestyle. He’s not the same Tiger we once knew after he became known as a Tiger, looking for the women in the Woods.
The craziest thing is that we gaze at him like a villain, the bad man with immoralities and a lack of respect for women. It’s far more fascinating, and even more annoying, that he owns all the limelight for being described as a narcissist and the most polarizing athlete because of his infidelity. He said that he’s practicing Buddhism. So maybe he learned the values of acting as a true family man and not a sex addict.
He spent ample time in rehab to cure his sex-addiction, a mental disturbance that destroyed his legacy, career and family.
So maybe he now avoids pancake waitresses, porn stars, teenage girls, and any other wicked female with nothing better to do but have sexual activity with a star athlete, after losing out on something very priceless.
Besides, he’s an elite athlete often motivated to rise to the biggest occasions, but he continues to struggle at the sport that made him famous, rich and admired.
As Mickelson stumbled in his bid to win a national championship, Woods failed to close out a second straight major with useless play in the final round. He finished tied for fourth, hopeless, disappointed and empty again, on the verge of going a full year without winning a major.
Realizing that he has gone two years without winning a major title makes us believe he’s no longer the most menacing athlete on the planet, but quickly approaching the end of a captivating and epic age when he greatly dominated the courses with his irons and brilliance.
But lately it’s Woods turning in the paltry majors on the fairways. He missed half the greens in his final round, bogeyed the first hole, and was mocked when someone hired a plane to fly a banner overhead that read “TIGER ARE YOU MY DADDY?”
He began the day at 1-under par but slid to 4-over through 13 holes, finally deflating Sunday with an awful 75 when even par might have won it.
It’s sad to say he’s become an irrelevant name, dropping out of contention with blunders and meltdowns. Not even third-round leader Johnson stayed relevant; he seemed in command with a three-shot lead, fearless and unflappable, but gave back six shots after a double bogey and plunged quickly down the leaderboard.
Els was just as bad, squandering an astounding front nine that had him 3-under after eight holes. Then there was Mickelson, who could have won another major but didn’t, making the turn at even par and bogeying three holes on the back nine to finish tied for fourth.
There was only one winner, and his name was the unheralded McDowell, the man from Northern Ireland and the first European to win the Open in 40 years, on an unfriendly course where Tiger’s uninspired outing disappointed an ecstatic crowd eager to restore its trust in a turbulent athlete.
If you thought Woods was finally recovering and rediscovering his identity as the world’s greatest golfer, think again. He’s nowhere near the world’s greatest golfer; the world’s greatest porn star, maybe. Just call 1-800-LAP-DANCE to reach Mr. Woods.
|
{
  "forge_marker": 1,
  "parent": "thebetweenlands:block/log_rotten_bark_carved",
  "textures": {
    "up": "thebetweenlands:blocks/rotten_bark_rotated",
    "down": "thebetweenlands:blocks/rotten_bark_rotated",
    "north": "thebetweenlands:blocks/rotten_bark_carved_11",
    "east": "thebetweenlands:blocks/rotten_bark_carved_11",
    "south": "thebetweenlands:blocks/rotten_bark_carved_11",
    "west": "thebetweenlands:blocks/rotten_bark_carved_11"
  }
}
|
For a more tranquil walk, explore the Edwardian rose garden, ravine garden or luxurious herbaceous borders next to the reflecting lake where a certain Mr Darcy met Miss Bennet in the BBC production of 'Pride and Prejudice'.
Children can let off steam in Crow Wood Playscape with its giant slide, badger den and ropewalks, whilst the nearby Timber Yard Cafe offers delicious hot and cold snacks, soups and a range of cakes.
Opening Times:
Park: 8:30am - 6pm
House: 11am – 5pm (closed until Friday 26 February)
Garden: 11am – 5pm
Book a Minibus, Coach or School Bus with a discounted ticket package for any attraction
|
As some of you might know, the Falcons just completed eBaying practically their full roster's worth of black road jerseys and practice jerseys tonight. How Mitch Fritz and Daniel Corso's jerseys fetched more than Karri Ramo's, I'll never know.
In any event, yours truly just plunked down a couple hundie on young Matt Smaby's #27. No pressure Matt, but you better make the team now.
This is a tough pill to swallow. For the first time in franchise history the team has lost a series where I really believed they were the better team.
NJ-3
TB-2
Johan Holmqvist allowed 3 goals on 26 shots in the loss. He did allow a bit of a softie on the short side for Jersey's first goal, but otherwise he was good. After his horrible Game One performance, I think Johan did a lot to regain the confidence of the franchise going into an offseason where decisions need to be made concerning the netminder's job. Brad Richards had both Lightning goals on the power play. Filip Kuba had a pair of assists in the game to atone for accidentally knocking over Holmqvist after a Tim Taylor turnover that resulted in the game-winning goal. Vaclav Prospal and Vincent Lecavalier also had assists.
Going into the offseason, I feel a lot better about the team's blueline than I did 3-4 months ago. Dan Boyle is one of the best defensemen in the league and Filip Kuba earned every penny of his free agent deal from this summer. I was impressed with how Shane O'Brien quickly factored into the mix and Paul Ranger will be approaching 250-300 pro games by the end of next season so he should start to really hit his comfort zone as a player. It will be difficult if not impossible to retain Cory Sarich and the team might do well to find a younger, more mobile replacement for Nolan Pratt. Now is the time for one of the Lightning's young defensive prospects like Matt Smaby or Vladimir Mihalik or Mike Egener to step up as a player.
In net, Holmqvist probably did enough in the playoffs to earn the starting job going into next season. Marc Denis' nearly $3M per year contract is unsustainable from a budget standpoint and he may be traded or even bought out before next season. That leaves young Karri Ramo in the Lightning's backup role and, mark my words, he could be a Calder contender next season.
I think the most work for this team needs to be done up front this summer. Lecavalier, Richards and St. Louis showed why they're paid premium dollars in this playoff series. The core is sound. The supporting cast, however, is horrible. There was no greater indictment of the Lightning's lack of offensive depth than the fact that Andreas Karlsson spent all of today's game playing on the Lightning's second line and second power play unit. What do you do if you're the Lightning? That's got to be the biggest question of this offseason. They have some decent fourth-line type pieces (Andre Roy, Nick Tarnasky, Evgeny Artyukhin) and some decent third-line type pieces (Ryan Craig, Eric Perrin, Jason Ward should they retain him). The question is whether they can find a couple more forwards with the skill to play on the scoring lines and possibly a natural third-line center.
It's going to be an interesting offseason. If I were a young forward who hadn't fully established himself as a scorer in the league, I'd take a long hard look at Tampa. I have a feeling Lecavalier and Richards could make some young man a lot of money if they latch on here.
Update: Jonathan Boutin is in net tonight, backed up by Bryon Lewis, an emergency goaltender who was signed yesterday by the Springfield Falcons. This points to either an injury to Karri Ramo or a recall to Tampa Bay. Munce is still with Johnstown, apparently, playing backup tonight against Reading.
Falcons president and general manager Bruce Landon met with the Booster Club prior to the game. "We have two years left on our deal with Tampa Bay," Landon said, debunking a rumor that Edmonton was planning to become Springfield's parent team next season. "I told our fans that I feel their pain." Landon also assured the fans the Falcons have not been sold and will operate next season.
As Nigel mentioned in the Greco/AHL discussion, Bill Barber and Claude Loiselle canceled their appearance with Springfield fans to watch Mike Lundin play UMass. If Maine's season is indeed finished, and considering Springfield's injuries to their defensemen (Rosehill, Rullier and now Matt Smaby), Lundin could be signed pretty quickly - pending education issues, I'm sure.
|
Samsung rolls out its ‘Gear 360’
Samsung today announced a new Samsung Gear 360, a 4K resolution-capable 360-degree camera with a refined design for easier use. With enhanced features for high-quality content, the Gear 360 is lightweight and compact, offering an expanded Samsung VR ecosystem.
“As consumers turn more to video to share their experiences, we want to deliver accessible and innovative products to make digital content easier to create, share and stream,” said Younghee Lee, Executive Vice President of Global Marketing and Wearable Business, Mobile Communications at Samsung Electronics. “The updated Gear 360 will continue to expand the horizons of what consumers can experience and share.”
Enhanced Features for High-Quality 360 Content Creation
For the first time, the Gear 360 offers 4K video recording for immersive and realistic digital content. Equipped with 8.4-megapixel image sensors and a Bright Lens F2.2 on both dual fisheye lenses, the Gear 360 can capture high-resolution images.
The Gear 360 introduces real-time content sharing. When the Gear 360 is synced with a compatible smartphone or computer, the new device enables users to share their best experiences with high-quality live broadcasting or direct uploading to platforms such as Facebook, YouTube or Samsung VR.
|
_________________Shattered Memories: A memory long forgotten, fading into the past. How the heart longs, but the memories never last. A broken heart, shattered dreams, forgotten memories; these are the painful things. A heart that pleads with shattered dreams for the lost memories. A memory faded away, never to be remembered from those painful days. Lost in darkness from the painful forgotten dreams, along with my shattered memories. The loveliness of dreams, but now they're just shattered memories. "SAIYO ISHIDA"
me:*smirks and still hyper*.....NOPE...*runs in curles qnd quicle kisses him and runs in curcles more*
me:*kisses back and pulls away and runs in curcles and hyper*.....YAY I LOVE KISSES......*quickle gives him another kiss and runs in curcles again*
me:*hyper and runs in cucles*........WEEEEEEEEEEEEE........*quickle gives him a kiss and runs in curcles*.....YAY.....KISSES ARE FUN.....
me:*hyper and runs in cucles*........YES!!!....SPEEEEEEEEEEEEEEE.....*quickle kisses him and runs in curcles again*
(real me:...if you want I make gaara gets you sugar too and have sex with sam......want him too???....)
me:*smirks and lays beside him and pouts more*...thinking....I want too no fare!!!!!.....hmmmmmm....I need to get him to want it but how........
gaara:*smiles*...ok....*gets you a big thing of brownys from the store and gets back and hands them to you*.........here you go....
me:*smirks and sits on his lap and puts my hands on eather side of his head on the bed and kisses him lustfuly and passionatly*
gaara:*smiles*....no you eat up.....
me:*pouts*..fine...*gets off and pouts more and lays down on bed*.....thinking...no fare!!!!!!!!!!
gaara:*smiles*
me:*still pouting*..ok....thinking...URG F*CKING HORMON......STOP MAKING IT WORSE FOR ME.......URG YOU SON OF B*TCH HORMONS...YOUR DRIING ME CRAZZY!!!!!!!
gaara:*smiles*.....let me know if you want more....
me:*smiles and sits ontop of him and puts my hands on eather side og his head and kieese with lust and passion*
gaara:*smiles*
me:*pulls away and takes shirt off and bra and kisses him again with passion and tugs at his shirt and moans*
gaara:*smirks and kisses back*
me:*moans and press's my chest against his and kisses with more passion and grinds my hip faster*
gaara:*chcukles*....did you want anything alse????
me:*moans into the kiss and takes pants off and press's my chest against his again and kisses with lust and tug at his pant*
gaara:*chuckles*....what do you want......sweets, food, both???
me:*moans as I feel him get hard and takes my panties off and kisses with more lust and presses my chest close to his*
gaara:*blsus and moans into the kiss and takes his shirt off and kisses with passion*
me:*moans and kisses with more lust and presses my chest close to his and grinds my hips fast on him*
gaara:*moans more into the kiss and gets hard and kisses with more passion and lust*
me:*moans and continues to grind my hips faster and harder and touchs his manhood lustfuly*
gaara:*moans and taks off his pants and boxers and thrust in you and moans loud*....AHHHHHHHHHHHHHHHHHHHHH.....
me:*moans and continues to grind my hips faster and harder and touchs his manhood lustfuly*
gaara:*moans and thrust in and out you and moans loud*....AHHHHHHHHHHHHHHHHH.
gaara:*gets harder as he hears you moans his name and thrust deeper and harder and faster and moans more louder*.......SAMMMMMMMMMMMMMMMMMMMMMMM....
|
/*
* CoreShop.
*
* This source file is subject to the GNU General Public License version 3 (GPLv3)
* For the full copyright and license information, please view the LICENSE.md and gpl-3.0.txt
* files that are distributed with this source code.
*
* @copyright Copyright (c) 2015-2020 Dominik Pfaffenbauer (https://www.pfaffenbauer.at)
* @license https://www.coreshop.org/license GNU General Public License version 3 (GPLv3)
*
*/
pimcore.registerNS('coreshop.notification.rule.item');
coreshop.notification.rule.item = Class.create(coreshop.rules.item, {
iconCls: 'coreshop_icon_notification_rule',
url: {
save: '/admin/coreshop/notification_rules/save'
},
getPanel: function () {
var items = this.getItems();
this.panel = new Ext.TabPanel({
activeTab: 0,
title: this.data.name,
closable: true,
deferredRender: false,
forceLayout: true,
iconCls: this.iconCls,
buttons: [{
text: t('save'),
iconCls: 'pimcore_icon_apply',
handler: this.save.bind(this)
}],
items: items
});
if (this.data.type) {
this.reloadTypes(this.data.type);
}
return this.panel;
},
getSettings: function () {
var data = this.data;
var types = [];
this.parentPanel.getConfig().types.forEach(function (type) {
types.push([type, t('coreshop_notification_rule_type_' + type)]);
}.bind(this));
var typesStore = new Ext.data.ArrayStore({
data: types,
fields: ['type', 'typeName'],
idProperty: 'type'
});
this.settingsForm = Ext.create('Ext.form.Panel', {
iconCls: 'coreshop_icon_settings',
title: t('settings'),
bodyStyle: 'padding:10px;',
autoScroll: true,
border: false,
items: [
{
xtype: 'textfield',
name: 'name',
fieldLabel: t('name'),
width: 250,
value: data.name
},
{
xtype: 'checkbox',
name: 'active',
fieldLabel: t('active'),
checked: data.active
},
{
xtype: 'combo',
fieldLabel: t('coreshop_notification_rule_type'),
name: 'type',
displayField: 'type',
valueField: 'type',
store: typesStore,
value: this.data.type,
width: 250,
listeners: {
change: function (combo, value) {
this.reloadTypes(value);
}.bind(this)
}
}
]
});
return this.settingsForm;
},
getItems: function () {
return [
this.getSettings()
];
},
reloadTypes: function (type) {
// Destroy the previously rendered action/condition tabs (if any)
// before rebuilding them for the newly selected rule type.
if (this.actions) {
this.actions.destroy();
}
if (this.conditions) {
this.conditions.destroy();
}
var items = this.getItemsForType(type);
this.panel.add(items);
},
getItemsForType: function (type) {
var actionContainerClass = this.getActionContainerClass();
var conditionContainerClass = this.getConditionContainerClass();
var allowedActions = this.parentPanel.getActionsForType(type);
var allowedConditions = this.parentPanel.getConditionsForType(type);
this.actions = new actionContainerClass(allowedActions, type);
this.conditions = new conditionContainerClass(allowedConditions, type);
var items = [
this.conditions.getLayout(),
this.actions.getLayout()
];
// add saved conditions
if (this.data.conditions) {
Ext.each(this.data.conditions, function (condition) {
var conditionType = condition.type.replace(type + '.', '');
if (allowedConditions.indexOf(conditionType) >= 0) {
this.conditions.addCondition(conditionType, condition, false);
}
}.bind(this));
}
// add saved actions
if (this.data.actions) {
Ext.each(this.data.actions, function (action) {
var actionType = action.type.replace(type + '.', '');
if (allowedActions.indexOf(actionType) >= 0) {
this.actions.addAction(actionType, action, false);
}
}.bind(this));
}
return items;
},
getActionContainerClass: function () {
return coreshop.notification.rule.action;
},
getConditionContainerClass: function () {
return coreshop.notification.rule.condition;
}
});
|
The Flying Samaritans had their beginnings in 1961 when two aviators landed in the remote town of El Rosario on the Baja Peninsula to avoid strong winds and dust storms that had developed. The people of El Rosario were extremely cordial, sharing their food and offering accommodations for the night. They were treated with such kindness that the aviators asked what they could do in return. The locals indicated they were in great need of clothing, especially for the children. One month later the aviators returned with clothing and gifts for families. Among those who made the return trip was a medical doctor who brought his medical bag and asked if anyone needed attention. At the time many of the villagers were unemployed and ineligible for government medical benefits, and the nearest medical services were over 140 miles away. The doctor received permission to return and continue his medical services. Because there were no roads south of Ensenada in those days, he enlisted the help of pilots and owners of small aircraft to expedite the journey. The Flying "Sams" have grown in number with chapters in Arizona and California. They now have well over 1,000 volunteer members who have established and support more than 20 clinics in Baja. The volunteers of the Flying Samaritans pay their own expenses to travel to a clinic in Baja one weekend each month. Since the Flying Samaritans is an all-volunteer organization with no paid staff, every dollar received through donations goes directly to help the work of the Flying Samaritans in Baja Mexico.
How the Phoenix Chapter Began.....
In 1989 seven Flying Samaritans from the Phoenix area were traveling each month to Tucson to attend meetings and leave on trips to Baja. Tucson had the only Arizona Chapter of the Flying Samaritans. By mid-1989 this was getting old, so Marilyn Berton and others started the process of forming a Phoenix Chapter. Their first task was to find a clinic location. Clinic searches were made by landing at small Mexican towns and meeting with the local officials. After several trips they decided on a clinic site. Miguel Aleman was suggested by the Mexican Health Services organization. Miguel Aleman is located on the mainland of Mexico about 40 miles west of Hermosillo. The town consisted mostly of migrant farm workers and was in great need because, being migrants with no local employer paying their wages, they were not entitled to use the local Mexican clinic facility. The first Phoenix-sponsored clinic was held in January 1990.
In October 1990 a donation of $1500 from American Express gave the Phoenix group the financial boost they needed to get started with mail, equipment and so forth. In January 1991 Phoenix became an "official" chapter and selected the name Los Amigos, which was felt truly represented the Chapter's warmest feelings.
In June 1992 the Los Amigos Chapter was advised that they could no longer hold clinics at Miguel Aleman. Although it is not known for sure, it is suspected that on mainland Mexico there were some conflicting political issues.
For the next several months Los Amigos continued to serve the populations in Mexico on a temporary clinic basis at various locations, including Mulegé in Baja Sur. The Chapter was floundering without a permanent location for a clinic. Marilyn worked to hold the group together with events, newsletters and an occasional Mulegé clinic. When a general membership meeting was held in September 1993 and only 4 people showed up, Marilyn suggested that the Los Amigos Chapter be closed. But, when no one supported that idea, it was decided to take no action at that time. In December, on a vacation trip to Mulegé, one of the pilots took some friends to San Juanico to check on their camper and deliver some Christmas presents. While touring the town the pilot saw a small clinic building that was not being used and asked about it. It had been a Flying Samaritans clinic at one time but was no longer used. Doing our homework, we got permission from Flying Samaritans International to take over that clinic. With the help of the Mulegé Rotary organization we were invited to set up a clinic there.
In February 1994 we were invited on a whale-watching trip at Lopez Mateos, where the Rotary and the community leader of Lopez Mateos asked the Los Amigos Chapter to serve their town. They hosted a tour of their clinic and provided a fish fry luncheon complete with turtle soup. The town suffered an 80% unemployment rate because the cannery had reduced its workforce from 1,200 to 100. In September 1994 the Los Amigos Chapter held their first clinic in Lopez Mateos. In 1995 the Flying Samaritans expanded their clinic service to the community of Las Barrancas.
What happened to San Juanico? We served both clinics for almost two years, until there was no runway. Part of the runway was on land owned by the ejido (much like our tribes). An American living there wanted to develop the land on a bluff overlooking the ocean and move the runway up there; enough trouble was stirred up that the existing runway was closed and a new one was never built. The Aero Medicos from Santa Barbara, California maintain a runway and a clinic much like ours in the town of Cadaje, which is within driving distance for the people of San Juanico.
Since 1990 membership in the Los Amigos Chapter has grown from 7 to over 300 volunteers. Golf tournaments, original dinner theater productions, raffles, as well as corporate and personal donations have helped raise the necessary funds to refurbish and build a permanent clinic building on land donated by the cannery. Over the past few years, donations have made it possible to add additional dental units, dental equipment, a chiropractic facility, medical equipment, a pharmacy and supplies needed to make the clinics run efficiently. If you would like to be a part of the giving, go to our web page, Help Us Help, and make a donation.
The Phoenix-based Chapter travels to Baja ten months out of the year, providing general medical, dental, optometry, and chiropractic care for up to 350 patients each month. Occasionally, patients with serious ailments that cannot be treated locally have been transported to the U.S. or elsewhere in Mexico for specialized care.
|
WHY I’M RUNNING
As the son of a Bay Area public school teacher and a University of California professor, my values have guided me to public service, fighting for working families and protecting our environment. In the nearly twenty years I’ve lived in Berkeley, I’ve fought for tenants’ rights to safe and affordable housing, and campaigned to protect open space and clean up our air.
In my eleventh year on the East Bay Municipal Utility District Board, I’ve worked tirelessly so our community could become resilient to drought through increased conservation programs. I’ve fought hard to provide a safety net for low-income households so the most vulnerable can afford water, and I took on big banks when they tried to turn off the water on families during the foreclosure crisis.
Now more than ever, we need a legislator with the experience to champion our community’s progressive values. And that is why I’m running for California State Assembly in the 15th District in 2018.
As a statewide public health advocate, I worked to pass new standards for clean energy, cleaning up trucks and buses, requirements for essential health benefits, and a ban on e-cigarette sales to kids. I fight every day for workers impacted by discrimination, wage theft, unsafe workplaces, and health insurance denials as a workers’ rights attorney. And as a proud member of the LGBT community, I have worked to address health disparities in the LGBT community.
Most recently, I led negotiations resulting in the Berkeley City Council raising the minimum wage to $15 per hour in 2018 and extending earned sick leave protections to working families. I represented Sierra Club in the U.N. climate negotiations, culminating in the landmark Paris Agreement. I’ve been working hard to stop Alta Bates Hospital from closing, which is why Mayor Jesse Arreguin asked me to serve as his Health Commissioner to help lead this difficult fight.
We need leaders in Sacramento who will continue to push California to a strong future for our environment, our schools, and our community. In the Assembly, I will put my experience as a workers’ rights attorney to work for you, so we can fight for single-payer health care, improve our public education system for our kids, and strengthen our climate protection laws. Will you send me to Sacramento to fight for you?
— Andy Katz
Send Andy to the Assembly!
Andy Katz is the best choice for California’s 15th Assembly District. He is a proven leader who will fight effectively for our progressive values in Sacramento, but we need your help to support the grassroots movement to get him elected. Will you donate today and help him be a voice for us?
|
The U.S. Food and Drug Administration (FDA) recommends 2000 calories a day as a reasonable average guideline for most American adults. Click here to learn how you can use the Monday 2000 to reset the calorie budget you have to spend each day. For specific calorie recommendations based on your age, metabolism and medical history, consult your doctor or nutritionist.
|
Returning You Look Like a Comedy Show champion Khadija Hassan is back to take on 5 new contenders, from local legends to heroes abroad. You Look Like is a battle of roasting in 3 rounds: contenders trash each other for the audience's enjoyment, and this audience does enjoy! This month's new opponents vying to unseat Khadija are comedians Josh McLane of Memphis, Ozzy Jackson of Little Rock, Bunny from Nashville, along with Jada Brisentine and Harold King, both of Memphis. This episode is not for the easily offended. This episode of You Look Like a Comedy Show was recorded live at the P&H Cafe in Memphis, TN on July 18th, 2015. Hosted by Tommy Oler and Katrina Coleman.
|
Pages
Fractal analysis of the main currency pairs for May 11
Dear colleagues.
For the EUR / USD pair, we follow the formation of the initial conditions for the top of May 9. For the GBP / USD pair, we expect a correction to take place after the breakdown at 1.3614. For the USD / CHF pair, we follow the formation of the downward structure from May 10. The potential for the top is limited by the level of 1.0056. For the USD / JPY pair, the continuation of the upward movement is expected after the breakdown of 109.96. For the EUR / JPY pair, the price is in correction and forms the potential for the top of May 8. For the GBP / JPY pair, the price forms the potential for the top of May 8.
The forecast for May 11:
Analytical review of currency pairs in the scale of H1:
For the EUR / USD pair, the key levels on the scale of H1 are: 1.2060, 1.2031, 1.1985, 1.1947, 1.1894, 1.1867 and 1.1822. Here, the price forms the potential initial conditions for the top of May 9. The continuation of the upward movement is expected after the breakdown of 1.1947. In this case, the target is 1.1985. Near this level is the consolidation of the price. The breakdown at the level of 1.1985 should be accompanied by a pronounced movement towards the level of 1.2031. Upon reaching this level, we expect the consolidation of the price. The potential value for the top is the level of 1.2060.
Short-term downward movement is possible in the area of 1.1894-1.1867. The breakdown of the last value will lead to the development of the downward structure. In this case, the target is 1.1822.
The main trend is the formation of the upward potential of May 9.
Trading recommendations:
Buy: 1.1947 Take profit: 1.1983
Buy 1.1987 Take profit: 1.2030
Sell: 1.1892 Take profit: 1.1870
Sell: 1.1864 Take profit: 1.1825
For the GBP / USD pair, the key levels on the scale of H1 are 1.3847, 1.3752, 1.3684, 1.3614, 1.3614 and 1.3482. Here, we expect a move towards correction. The upward movement is expected after the breakdown of 1.3614. In this case, the target is 1.3684. Short-term upward movement is possible in the area of 1.3684 - 1.3752. The breakdown of the last value will lead to movement. Here, the target is 1.3847. Upon reaching this level, the design of potential initial conditions for the top is possible.
For the downward movement, we have not yet determined the subsequent goals.
The main trend is a downward structure from April 17. We expect a correction.
Trading recommendations:
Buy: 1.3616 Take profit: 1.3682
Buy: 1.3684 Take profit: 1.3750
Sell: Take profit:
Sell: Take profit:
For the USD / CHF pair, the key levels on the scale of H1 are: 1.0056, 1.0039, 0.9991, 0.9972, 0.9949 and 0.9933. Here, we follow the formation of a downward structure from May 10. The continuation of the downward movement is expected after the breakdown of 0.9991. In this case, the target is 0.9972. Near this level is the consolidation of the price. The breakdown at the level of 0.9972 should be accompanied by a pronounced movement towards the level of 0.9949. The potential value for the bottom is the level of 0.9933. Upon reaching this level, we expect a rollback to the top.
Short-term upward movement is possible in the area of 1.0039 - 1.0056. Further objectives for the top are not yet considered.
The main trend is the formation of a downward structure from May 10.
Trading recommendations:
Buy: 1.0040 Take profit: 1.0053
Buy: Take profit:
Sell: 0.9990 Take profit: 0.9974
Sell: 0.9968 Take profit: 0.9950
For the USD / JPY pair, the key levels on the scale of H1 are: 110.82, 110.66, 110.20, 109.96, 109.56, 109.35, 108.99 and 108.70. Here, we follow the formation of the upward structure of May 4. The continuation of the upward movement is expected after the breakdown of 109.96. In this case, the target is 110.20. Near this level is the consolidation of the price. The breakdown at the level of 110.20 should be accompanied by a pronounced movement towards the level of 110.66. The potential value for the top is the level of 110.82. Upon reaching this level, we expect a pullback downwards.
Consolidation is possible in the area of 109.56 - 109.35. The breakdown of the last value will lead to in-depth correction. Here, the target is 108.99. This level is the key support for the upward structure of May 4.
The main trend is the formation of the upward structure of May 4.
Trading recommendations:
Buy: 109.96 Take profit: 110.20
Buy: 110.24 Take profit: 110.64
Sell: 109.30 Take profit: 109.05
Sell: 108.96 Take profit: 108.74
For the USD / CAD pair, the key levels on the H1 scale are: 1.2886, 1.2863, 1.2815, 1.2787, 1.2751, 1.2701, 1.2660 and 1.2603. Here, we follow the development of the downward structure of May 8. The continuation of the movement downwards is expected after the breakdown of 1.2751. In this case, the target is 1.2701. The breakdown of this level will allow us to count on the movement towards the level of 1.2660. Near this level is the consolidation of the price. The potential value for the bottom is the level of 1.2603. From this level, we expect a rollback to the top.
Short-term upward movement is possible in the area of 1.2787 - 1.2815. The breakdown of the last value will lead to in-depth correction. Here, the target is 1.2863. The range of 1.2863 - 1.2886 is the key support for the downward structure from May 8.
The main trend is the downward structure of May 8.
Trading recommendations:
Buy: 1.2787 Take profit: 1.2813
Buy: 1.2817 Take profit: 1.2860
Sell: 1.2748 Take profit: 1.2705
Sell: 1.2698 Take profit: 1.2660
For the AUD / USD pair, the key H1 scale levels are: 0.7635, 0.7587, 0.7569, 0.7547, 0.7507, 0.7488 and 0.7459. Here, we follow the formation of the upward structure of May 9. The continuation of the upward movement is expected after the breakdown of 0.7547. In this case, the target is 0.7569. In the area of 0.7569 - 0.7587 is the consolidation of the price. The potential value for the top is the level of 0.7635. The movement towards this level is expected after the breakdown of 0.7587.
Short-term downward movement is possible in the area of 0.7507 - 0.7488. The breakdown of the last value will lead to in-depth correction. Here, the target is 0.7460. This level is the key support for the upward structure.
The main trend is the formation of the upward structure of May 9.
Trading recommendations:
Buy: 0.7547 Take profit: 0.7569
Buy: 0.7590 Take profit: 0.7635
Sell: 0.7507 Take profit: 0.7488
Sell: 0.7485 Take profit: 0.7461
For the EUR / JPY pair, the key levels on the scale of H1 are: 131.64, 131.16, 130.62, 130.32, 129.70, 129.37, 128.91 and 128.62. Here, we follow the downward structure from April 24. At the moment, the price is in correction and forms the potential for the top of May 8. Short-term downward movement is possible in the area of 129.70 - 129.37. The breakdown of the last value should be accompanied by a pronounced movement towards the level of 128.91. In the area of 128.91 - 128.62 is the consolidation of the price and from here, we expect a rollback towards correction.
Short-term upward movement is possible in the area of 130.32 - 130.62. The breakdown of the last value will lead to in-depth correction. Here, the target is 131.16. This level is the key support for the downward structure. Its breakdown will lead to the formation of an upward structure. Here, the potential target is 131.64.
The main trend is the downward structure from April 24, the correction stage.
Trading recommendations:
Buy: 130.32 Take profit: 130.60
Buy: 130.66 Take profit: 131.16
Sell: 129.70 Take profit: 129.40
Sell: 129.33 Take profit: 128.95
For the GBP / JPY pair, the key levels on the scale of H1 are: 149.81, 149.18, 148.67, 147.63, 146.88, 145.75 and 145.12. Here, we follow the local downward structure of April 26. At the moment, the price is in correction and forms the potential for the upward movement of May 8. The continuation of the downward movement is expected after the breakdown of 147.63. In this case, the target is 146.88. Near this level is the consolidation of the price. The breakdown of 146.88 should be accompanied by a pronounced downward movement. Here, the target is 145.75. The potential value for the bottom is the level of 145.12. From this level, we expect a rollback towards correction.
Short-term upward movement is possible in the area of 148.67 - 149.18. The breakdown of the last value will lead to in-depth correction. Here, the target is 149.81.
The main trend is a local downward structure from April 26, the correction stage.
|
Notices are published
each Wednesday to alert the public regarding state agency rule-making.
You may obtain a copy of any rule by notifying the agency contact person.
You may also comment on the rule, and/or attend the public hearing.
If no hearing is scheduled, you may request one -- the agency may then
schedule a hearing, and must do so if 5 or more persons request it.
If you are disabled or need special services to attend a hearing, please
notify the agency contact person at least 7 days prior to it. Petitions:
you can petition an agency to adopt, amend, or repeal any rule; the
agency must provide you with petition forms, and must respond to your
petition within 60 days. The agency must enter rule-making if the petition
is signed by 150 or more registered voters, and may begin rule-making
if there are fewer. You can also petition the Legislature to review
a rule; the Executive Director of the Legislative Council (115 State
House Station, Augusta, ME 04333, phone 207/287-1615) will provide you
with the necessary petition forms. The appropriate legislative committee
will review a rule upon receipt of a petition from 100 or more registered
voters, or from "...any person who may be directly, substantially and
adversely affected by the application of a rule..." (Title 5 Section
11112). World-Wide Web: Copies of the weekly notices and the full texts
of adopted rule chapters may be found on the World-Wide Web at: http://www.maine.gov/sos/cec/rcn/apa/.
PROPOSALS
AGENCY:
26-239 - Maine Attorney General
RULE TITLE OR SUBJECT: Ch. 106, Rules for Administering the Maine
Lemon Law Arbitration Program
PROPOSED RULE NUMBER: 2003-P296
CONCISE SUMMARY: These amendments to the current Maine Lemon Law Arbitration
Program Rules reflect recent legislative changes and will improve administration
of the program. Among the amendments are a necessary change to the definition
of "Lemon Law term of protection" and informative additions to the Lemon
Law brochure and the Lemon Law notice that is required for all warranty
booklets.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: 10 MRSA §1169(3) (Administered by Attorney General)
PUBLIC HEARING: None scheduled; one may be requested.
DEADLINE FOR COMMENTS: January 26, 2004
AGENCY CONTACT PERSON: Julie Shanahan
AGENCY NAME: Attorney General's Lemon Law Arbitration Program
ADDRESS: 6 State House Station, Augusta, ME 04333
TELEPHONE: (207) 626-8848
AGENCY:
10-144 - Department of Human Services, Bureau of Medical Services
RULE TITLE OR SUBJECT: Ch. 101, MaineCare Benefits Manual: Ch.
III Section 97 and its Appendices A, B, C, D, E, and F (Private
Non Medical Institutions)
PROPOSED RULE NUMBER: 2003-P297
CONCISE SUMMARY: This rule is a major substantive rule that proposes
to permanently adopt two separate emergency rulemakings currently in
effect (2003-98, effective April 10, 2003, and 2003-222, effective July
1, 2003) as well as propose new policy. The Department is proposing
new policy that includes reductions in reimbursement. These reductions
have been determined necessary because unless the Department takes corrective
action, the funding available to the Department will soon be inadequate
to meet various expenditures of the Department. The Department has proposed
changes that will restrict reimbursement of bed-hold at 75% to all PNMI
facilities. The Department is also proposing to reduce reimbursement
to all PNMI providers by 1%. The Department has also proposed other
changes in the rule including making stipends for some foreign students
allowable under certain circumstances.
In the emergency rules, the Department is proposing to permanently delete Appendix
A from the rules, as Appendix A is an outdated cost report that cannot
be accurately used to report allowable costs for private non-medical
institutions. In Appendix C, the Department has added a fourth peer
group for very small freestanding facilities (15 or fewer beds), taking
into account higher costs for some of these facilities.
NOTE: Pursuant to its statutory authority in 22 MRSA §§ 42 and 3173
and complying with 5 MRSA § 8054, the Department intends to adopt emergency
rules for Ch. III Section 97, effective in January 2004.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: 22 MRSA § 42, § 3173
PUBLIC HEARING:
Date: January 13, 2004, 1:00 p.m.
Location: Department of Human Services, 442 Civic Center Drive, Augusta,
Maine 04333-0011. Any interested party requiring special arrangements
to attend the hearing must contact the agency person listed below before
January 10, 2004.
DEADLINE FOR COMMENTS: January 23, 2004
AGENCY CONTACT PERSON: Patricia Dushuttle, Comprehensive Health Planner
AGENCY NAME: Policy and Provider Services
ADDRESS: 442 Civic Center Drive, State House Station 11, Augusta, Maine
04333-0011
TELEPHONE: (207) 287-9362
FAX: (207) 287-9369
TTY: (800) 423-4331 or (207) 287-1828 (Deaf or Hard of Hearing)
AGENCY:
06-096 - Department of Environmental Protection
RULE TITLE: Ch. 124, Total Reduced Sulfur Control from Kraft
Pulp Mills
PROPOSED RULE NUMBER: 2003-P298
CONCISE SUMMARY: The Department is proposing to amend Ch. 124 to align
the brownstock washer control compliance date with the 2007 federal
pulp and paper MACT (Cluster Rule) deadline. Ch. 124 currently requires
brownstock washer controls to be in place by 2005; the proposed amendments
would provide affected facilities an additional two years in which to
install and optimize the effectiveness of TRS controls. The complexity
of controlling brownstock washers in conjunction with meeting federal
MACT Subpart S requirements has resulted in the need to provide extensions
to three affected facilities. The proposed amendments will allow for
more effective control of TRS by providing additional time in which
to "fine-tune" emission control equipment, ultimately allowing for more
effective pollution control and more stringent emission limits.
Copies of these rules are available upon request by contacting the Agency
contact person listed below. Pursuant to Maine law, interested parties
must be publicly notified of the proposed rulemaking, the public hearing
and be provided an opportunity for comment. Any party interested in
providing public comment can testify at the public hearing or provide
written comments before the end of the comment period. All comments
should be sent to the Agency Contact person.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: 38 MRSA § 585-B
PUBLIC HEARING: January 15, 2004, 2:00 p.m., Best Western Senator Inn
& Spa, Augusta, Maine
DEADLINE FOR COMMENTS: January 28, 2004
AGENCY CONTACT PERSON: Jeff Crawford
AGENCY NAME: Bureau of Air Quality, Department of Environmental Protection
ADDRESS: State House Station 17, Augusta, ME 04333
TELEPHONE: (207) 287-2437
AGENCY:
02-031 - Department of Professional & Financial Regulation, Bureau
of Financial Institutions
RULE TITLE OR SUBJECT: Ch. 142 (Regulation #42), Charges Permitted
for Prepayment of Certain Consumer Loans
PROPOSED RULE NUMBER: 2003-P299
CONCISE SUMMARY: Public Law 2003 c.263 §1 amended 9-A MRSA § 2-509 to
authorize supervised financial organizations to assess a consumer a
reasonable charge related to the prepayment of a consumer loan secured
by an interest in land. That charge must be reasonably calculated to
offset only the cost of origination of the loan. Title 9-A § 2-509 as
amended requires the Superintendent to adopt rules to implement its
provisions.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: Title 9-A MRSA § 2-509
PUBLIC HEARING: None scheduled; one may be requested.
DEADLINE FOR COMMENTS: January 23, 2004
AGENCY CONTACT PERSON: Colette L. Mooney, Deputy Superintendent
AGENCY NAME: Bureau of Financial Institutions
ADDRESS: 36 State House Station, Augusta, ME 04333-0036
TELEPHONE: (207) 624-8570
AGENCY:
29-250 - Secretary of State, Bureau of Motor Vehicles
RULE TITLE OR SUBJECT: Ch. 100, Establishment of Renewal Agent
Service Fees (for the renewal of digital operator's licenses)
PROPOSED RULE NUMBER: 2003-P300
CONCISE SUMMARY: The Secretary of State may, in accordance with Title
29-A, Ch. 3, Subchapter II, Section 202, appoint agents for the sole
purpose of issuing renewals of operator's licenses from various non-BMV
locations throughout the State and that these agents may charge an applicant
fee over the required operator's license fee for each renewal issued.
The Secretary of State proposes that any such additional fee be set
at five dollars ($5.00) for each renewal issued.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: Title 29-A, Ch. 3, Subchapter II, Section 202
PUBLIC HEARING: None scheduled; one may be requested.
DEADLINE FOR COMMENTS: January 23, 2004
AGENCY CONTACT PERSON: Paul Potvin
AGENCY NAME: Secretary of State/Bureau of Motor Vehicles
ADDRESS: 101 Hospital Street, Augusta, ME 04333
TELEPHONE: (207) 624-9005
AGENCY:
06-096 - Department of Environmental Protection
RULE TITLE: Ch. 127, New Motor Vehicle Emission Standards
PROPOSED RULE NUMBER: 2003-P301
CONCISE SUMMARY: On October 17, 2003, the Department received a petition
to require a rule making hearing pursuant to 5 MRSA Section 8055 and
to amend the Ch. 127 New Motor Vehicle Emission Standards regulation.
The petitioners are proposing to amend Ch. 127 to allow the sale of
Federally-compliant diesel-powered passenger cars, including the Volkswagen
TDI, in the State of Maine for a period of three years.
Copies of these rules are available upon request by contacting the Agency
contact person listed below. Pursuant to Maine law, interested parties
must be publicly notified of the proposed rulemaking, the public hearing
and be provided an opportunity for comment. Any party interested in
providing public comment can testify at the public hearing or provide
written comments before the end of the comment period. All comments
should be sent to the Agency Contact person.
THIS RULE WILL NOT HAVE A FISCAL IMPACT ON MUNICIPALITIES.
STATUTORY AUTHORITY: 38 MRSA §§ 585, 585-A, 585-D and 5 MRSA § 8055
PUBLIC HEARING: January 15, 2004
2:30 p.m., Best Western Senator Inn & Spa, Augusta, Maine
DEADLINE FOR COMMENTS: January 28, 2004
AGENCY CONTACT PERSON: Ron Severance
AGENCY NAME: Bureau of Air Quality, Department of Environmental Protection
ADDRESS: State House Station 17, Augusta, ME 04333
TELEPHONE: (207) 287-2437
CHAPTER
NUMBER AND TITLE: Ch. 8.05, Primary Buyer Permit
ADOPTED RULE NUMBER: 2003-468
CONCISE SUMMARY: Any Wholesale Seafood license holder or Retail Seafood
license holder who buys or obtains product direct from harvester(s)
is required to obtain a no-cost Primary Buyer Permit. The permit requires
reporting of landings in existing reporting requirements for only those
dealers who hold this permit. The Primary Buyer Permit requires the
reports to be sent to a single location where all landings will be aggregated
and categorized to support and maintain a complete dealer and retail
reporting compliance.
The proposed reporting requirement for dealers and retailers is part
of an initial step to provide accuracy and continuity for reporting
of catches and landings of regulated marine organisms and to bring the
Maine DMR landings program and data into conformity with the Atlantic
Coastal Cooperative Statistics Program (ACCSP), a multi-state federal
initiative.
CHAPTER
NUMBER AND TITLE: Ch. 8.10(8), Lobster Dealer Reporting
ADOPTED RULE NUMBER: 2003-469
CONCISE SUMMARY: In addition to the relocation of existing reporting
requirements to the new Ch. 8 Landings Program, regulations are also
adopted for wholesale dealers with a lobster permit and retail seafood
dealers to report landings data for lobster. This requirement has been
endorsed by the Maine Lobster Dealers Association with their support
of the rulemaking.
AGENCY:
17-229 - Department of Transportation
CHAPTER NUMBER AND TITLE: Ch. 300, Rules and Regulations for
the Use of the Interstate Highway System (Amendment)
ADOPTED RULE NUMBER: 2003-471
CONCISE SUMMARY: This amendment to the Maine Department of Transportation's
Rules restricts trucks, including truck tractors but excluding pickup
trucks, all as defined in 29-A MRSA § 101, to the two farthest right-hand
lanes on that portion of Interstate 95 from the Maine-New Hampshire
border to the southern terminus of the Maine Turnpike.
EFFECTIVE DATE: May 5, 2004
AGENCY CONTACT PERSON: Jim Smith
AGENCY NAME: Department of Transportation
ADDRESS: Office of Legal Services, 16 State House Station, Augusta ME
04333-0016
TELEPHONE: (207) 624-3020
AGENCY:
09-137 - Department of Inland Fisheries & Wildlife
CHAPTER NUMBER AND TITLE: Ch. 4.04, Moose Hunting
ADOPTED RULE NUMBER: 2003-472
CONCISE SUMMARY: The Department of Inland Fisheries & Wildlife has adopted
a rule establishing the number of moose permits to be issued for the
2004 moose hunting season. As in 2003, permits will be issued for bulls
and antlerless moose in each of the WMDs with permit allocations. The
original proposal increased permits in WMDs 3, 6, and 11. Based on input
from the public, permits in WMDs 6 and 11 have been reduced by 120 each,
bringing the total number of permits issued from the originally proposed
3,135 to 2,895. In WMDs 1, 2, 3, 5, 6, 11 and 19 the season will open
on Monday following the close of the bear baiting season and remain
open for 6 days. In WMDs 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 18, 19, 28 and 29 the season begins on the second Monday of October
and remains open for 6 days.
EFFECTIVE DATE: December 22, 2003
AGENCY CONTACT PERSON: Andrea L. Erskine
AGENCY NAME: Department of Inland Fisheries & Wildlife
ADDRESS: 284 State Street, Augusta, ME 04333-0041
TELEPHONE: (207) 287-5201
|
Category Archives: Python
The components were very simple, and provided the programmer with low-level access to the computer's innards. Now you want to take your knowledge and make something real. It recognises the print command as it highlights the word, and when I submit an answer it processes correctly. So you're kind of trying to take the best of both worlds and create this intermediate language, and this is what Python is. Java is a solid language, but I hate using it.
C++/MFC apps are more time consuming to write but they run faster than Java or C#. It's great, and what's even greater about it is that it has built-in support for Python scripting, to where you can automate a lot of virtual machine tasks-- say the batch creation, the batch management of virtual machine files-- by using a Python interface. If the variable does not have a dollar sign ($) or ampersand (@) as its first character then its scope is scope defining region which most immediately contains it.
For Those Who Insult Delphi, I Challenged You To Build An Enormous System In 2 Days Time, Which Have 40 Coffee Breaks, 5 Hours Sleep & 4 Hours Free Time, Think You Can Do It? I think Flex is a good tool to work with; it's an up-and-coming tool, and I'm sure it will head up. It's not that simple to learn how it works and how to work with it. If a variable with no previous reference is accessed, its value is NULL. This is an implementation of Python written entirely in Java.
The three largest, most popular online class providers -- Coursera, edX and Udacity -- also offer introductory programming courses in Python, Guo found. He wanted to use an interpreted language like ABC (ABC has simple easy-to-understand syntax) that could access the Amoeba system calls. Even machine code numbers are an abstraction. Python was developed by Guido Van Rossum in 1991. However, the year 2013 has been a clear year of HTML and JavaScript.
MATLAB is also very good for matrix processing and there are open source versions (e.g., Octave) that have exactly the same syntax with similar benchmarks. The downsides to that are, generally speaking, you expect a performance degradation, because both the compilation and the execution have to happen at run time, after that code has been sent down to the client. It’s one line of code and is pretty close to English. In this article, I will be discussing the COM extensions. Source code of a nice CD player, a complete and fast text editor in a dll file, a complete text editor ready to use, a file shredder, a MIDI player and many other applications are all included in the installer package.
Programming Language Pragmatics (3rd Edition) by Michael L. Object oriented version of TCL with mixins. Guo admitted that he is a Python enthusiast -- he has developed a popular tool, called Online Python Tutor, to teach programming. Since occam came later (1983), the feature can't have been copied from that language. Some of the widely used programming languages that offer object-oriented programming features are C++, C#, Java, Perl 5, PHP, Python, and Ruby.
In essence, these types of libraries are "data-driven" systems; there is a conceptual and category gap between the declarative language and what a Python application does to carry out or apply its declarations. Why not other languages? they miss features that allow reusable code: integrated threads, exception handling, type checking, garbage collection, security, and dynamic linking. But they’re pretty small costs, and for me personally, the scales have now clearly tipped in favor of using Python for almost everything.
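To make the "data-driven" idea above concrete, here is a minimal Python sketch; the rule format, the `OPS` table, and the `check` function are all invented for illustration, not taken from any particular library. The declarations live in plain data structures, and a small engine interprets them at run time.

```python
# Illustrative "data-driven" design: the rules are plain data, and a
# small engine interprets them when the program runs.

RULES = [
    {"field": "age", "op": "min", "value": 18},
    {"field": "country", "op": "equals", "value": "US"},
]

# Map each declarative operator name to the code that applies it.
OPS = {
    "min": lambda actual, expected: actual >= expected,
    "equals": lambda actual, expected: actual == expected,
}

def check(record, rules=RULES):
    """Return True only if the record satisfies every declared rule."""
    return all(OPS[r["op"]](record[r["field"]], r["value"]) for r in rules)

print(check({"age": 21, "country": "US"}))  # True
print(check({"age": 16, "country": "US"}))  # False
```

The conceptual gap the paragraph describes is exactly the one between the declarative `RULES` list and the imperative `check` engine that carries those declarations out.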
There is strong historical evidence that a language with simpler core, or even simplistic core Basic, Pascal) have better chances to acquire high level of popularity. Having programmed in Fortran and BASIC years ago, Paul wanted to program once more but in a newer language. Procedural languages, 3rd generation, also known as high-level languages (HLL), such as Pascal, FORTRAN, Algol, COBOL, PL/I, Basic, and C. The table had 197000 rows to return… C# took 19 seconds, Delphi took 4 seconds to return the same data.
The first person to write about these issues, as far as I know, was Fred Brooks in the Mythical Man Month. SQL is a vital part of software such as WordPress and MediaWiki. With little effort, a programmer may receive only weak (but, ideally, formal) guarantees; with more effort, a programmer should receive stronger guarantees. Neuroimaging data could be analyzed in SPM (MATLAB-based), FSL, or a variety of other packages, but there was no viable full-featured, free, open-source Python alternative.
A low-level language like assembly language actually does speak to a computer directly – performing a long series of processor operations, one byte at a time. SWIG is capable of wrapping all of ISO C99. Explore Python’s major built-in object types such as numbers, lists, and dictionaries; create and process objects with Python statements, and learn Python’s general syntax model; use functions to avoid code redundancy and package code for reuse; organize statements, functions, and other tools into larger components with modules; dive into classes, Python’s object-oriented programming tool for structuring code; write large programs with Python’s exception-handling model and development tools; and learn advanced Python tools, including decorators, descriptors, metaclasses, and Unicode processing.
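Several of the topics in that list (built-in types, functions, exception handling, and decorators) can be seen working together in one small sketch; all names here are hypothetical, not drawn from any particular book or library.

```python
# A tiny sketch combining a decorator, exception handling, and the
# built-in list and dict types (illustrative names only).

def logged(func):
    """Decorator: report each call before delegating to the wrapped function."""
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}{args}")
        return func(*args, **kwargs)
    return wrapper

@logged
def safe_divide(a, b):
    """Return a / b, turning the error case into None instead of crashing."""
    try:
        return a / b
    except ZeroDivisionError:
        return None

# Built-in types at work: a list of inputs folded into a dict of results.
results = {n: safe_divide(10, n) for n in [2, 5, 0]}
print(results)  # {2: 5.0, 5: 2.0, 0: None}
```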
|
Crafted from premium materials, inside and out, that radiate elegance with a dramatic presence. It is designed to give business professionals and power users unprecedented functionality and performance in an intuitive smartphone.
Sponsored
Top, new, and popular wallpapers for the CSL Black Hole Mi355. The collection of free wallpapers for the CSL Black Hole Mi355 is available for download right here. We update the collection daily.
|
What, just because someone isn't riding Ocarina of Time's dick like everyone else does you lost all respect for him? I'd say he made some well reasoned points for the most part. I don't agree with every one, but I wouldn't say it's a bad analysis.
I personally think OoT is pretty overrated, even though it's revolutionary.
Nevertheless, a lot of the points that Arin made were **** and just a look into the mind of an ADHD-riddled idiot wearing nostalgia glasses thicker than the average feminazi's.
he did make some good points, mainly the failure of the "waiting game" combat and the "shoot an eye with an arrow" puzzles, but all in all he just makes himself look stupid.
I'd rather not get into an "us vs. them" mindset. Being respectful to people of one group doesn't mean they're trying to destroy the other. Much of Arin's livelihood rests on the success of Game Grumps, and this recent Zoe Quinn thing is a mud-slinging ******** that he'd rather not get on the wrong side of. He's just covering his bases, really, and there's nothing wrong with that.
Meanwhile, based TotalBiscuit has enough level-headed supporters to genuinely not give a ****, and says what's on his mind.
I get that, that's what any sane person would do. And I'm def. not 'us' nor 'them'. IDGAF. I was simply explaining the context.
It mainly looks 'bad' because, as you said, 'trigger warnings' are a tumblrism. And, since it is known that Arin is on the 'defensive' side of the argument (for whatever reason), putting a tumblrism on Game Grumps just now makes it look like he's letting it (it being the alleged feminist agenda) influence his other work.
That's a fair concern, as long as you don't fall for the "slippery slope" logic. When he starts going on a soapbox rather than being funny, then he's pretty much gone.
It's possible, though, that he saw what happened to JonTron and became scared of getting the same backlash. Jon's actually been pretty stressed and upset about the constant barrage of attacks on him as of late, all because of a handful of comments on Twitter. It would make sense, then, that Arin realizes he really doesn't want to piss on the hornets' nest. The worst he'll get from old fans is passive-aggression, but SJWs will slander everything about you and send you death threats. It's practically terrorism.
Jesus Christ, people, you will jump at anything in any slight way related to feminism and condemn it, and it's just getting childish. I hate feminazis as much as anyone else, but you guys are so annoying about it. If I want to enjoy GG because I think it's funny, I god damn will as long as I find it funny. I'm really getting sick of all the feminist **** on this site now; I don't come here to be informed of the feminazis' stupidity, I want to come here and have a good laugh, and this certainly isn't funny in any way.
... seriously?
You get "bummed"? That's it?
This is just ridiculous. I feel sorry for your friend, but it's no one's business. If seeing something related to suicide makes you so sad, then I would suggest getting off the internet altogether.
Besides being related to SJWs, the term "trigger warning" just reminds me of how much we're raising a society of victims. Someone somewhere is sad so everyone should care. What happened to the world? Were our lives so sheltered that we actually look for potential diseases and illnesses to have and complain about? Is this all for attention? This is the sort of **** toddlers do, not fully developed and active members of society.
By the way, if you haven't gotten past the death of your friend, please seek professional help, but don't defend such faggotry and nonsense.
Then how can you misunderstand things so badly? It's not seeing something suicide-related that makes him sad, but the reminder of a lost friend. This weird thing he was talking about is called a feeling, and it's a pretty normal and common thing. Besides, as you highlighted, he is "bummed", not "unstoppably hysterical". Getting over something does not mean separating yourself from it and never feeling anything towards it. If someone has to cut all emotional bonds and feel nothing to say "I got over something", then that's an issue.
I fail to understand. Your lost friends/relatives are "not anyone's business but your own", yet you bash a guy who shared with us and said he's upset because of the suicide of someone important to him? Too bad you don't mind your own business as much as you claim you do. You protect your own personal things, but you don't show respect to people who have the courage to open up and speak out. It's a problem, man, and not one to be taken lightly. Being prone to judging others while hiding personal things spells "trust issues" in big, fat, shiny neon letters.
I dare to disagree on one more thing: Someone's suicide is a business of everyone who was touched by it. Unless you suggest people should go all like "My friend killed himself, but I guess it was his business so whatever".
You should kill yourself, you massive faggot. The world isn't going to make special rules for pieces of **** like you. Now if you saw him kill himself, and have PTSD, and are going to ******* rip people apart when that's triggered, then I can see the need for a ******* trigger warning, but otherwise you can go jam your keyboard up your asshole, you ******* faggot.
I can see that, man. But I think it's more when it's a "trigger warning" rather than just a warning in general. Suicide isn't a feminism-specific subject, even though the feminazis would try to make it seem like only bullied overweight girls commit suicide.
Well... "Trigger warning" isn't feminism-specific. It's pretty much a way of saying that whatever has the warning contains content that mentions a subject that can trigger bad emotions in the viewer. By putting a warning, the viewer knows what they're getting into and can either choose to not watch/read on the basis that it might upset them, or go ahead and be prepared for things that might make them uncomfortable.
But I'm sure you can see the benefit of a warning. Just wanted to let you know that adding the word trigger is not just a feminist thing, but maybe it's seen as such because a lot of feminists use it.
Well, think about it for a moment: Arin is an SJW, not of his own free will, no, but because his wife told him to ******* do so. That means she has to be a feminist of sorts, since SJWism directly involves feminazism.
And Dan is a Jew.
It was all plotted by a powerful organization with the power that could rival that of god.
And who is that organization? The Government of Israel of course.
Have you ever read the bible? If yes, have you read the Exodus? In the Exodus it is described that Jews are the Chosen people of God, and Jesus himself was a Jew, his power transcended that of mortality and nature.
Why do the Illuminati use a triangle-shaped symbol resembling a pyramid with an eye on top of it? Because it is the only image of God judaism allows.
In other words, the Illuminati are in fact Jews.
Jews control everything, they are everywhere, they are God.
Every war, every conspiracy, every rebellion, every political change, every movement, everyone who tries to change something, everyone trying to make a career, every controversy, every thought and every breath you take, is another victory for Israel.
Arin was confirmed as a feminist and SJW before, in the JonTron era,
and everyone knows Arin loves being the guy with the unpopular opinions.
Not to even mention his plastic wife, who is certainly a feminist too.
Yeah, I met them working MAGfest. JonTron was cool but was sick the first time I worked there. Arin was cool when he was still with Jon and was really good to his fans. His girlfriend was odd; something struck me as being off about her. The last year I saw them, Arin was different, I had never heard of Dan, and Arin's girlfriend was famous for some reason. I just stuck with Stamper, Oney, and Psychicpebbles. Those dudes are cool.
|
20 freelancers bid an average of $123 for this job
This is my job. I can do it. I'm ready to work now.
============================================================================================================================================================
Hello,
I will fix the issue and also update your Drupal core to latest stable release in 7.x branch.
Here are my details:
I have been working with Drupal for more than seven years.
I can build custom modules and…
Hi,
I have seen the problem. Problem is with the pager query.
I can upgrade to Drupal 7.22.
I have 8+ years of experience in Drupal 6 and 7: customization, theming, custom module development, Views, Webform.
Thanks…
This is my job. I'm ready to begin now. PM me. Looking forward to hearing from you. Feel free to ask any questions you have to learn more about us ;-) Thanks, Oleg!
☛ Hi! About your task:
I'm an expert in Drupal and I can help you.
I like your task. I can do it quickly.
I understand this error, but I must see the code.
Can we start now?
Please contact me to discuss at Freelance…
Hello
Hope you are doing well. I have read your post and am able to resolve your issue. This is happening because of Views; the Views and custom pager modules have code changes, so we need to change some function parameters so…
Hi Sir,
I have more than 4 years of experience in Drupal and fixing its errors, and I am confident I can complete your project.
Please award me the project, grant me access, and let me start working on your project.
Don't hesitate…
Hello
I'm a frontend and backend developer with over 8 years of experience.
I know various CMSs and frameworks well, including Drupal, WordPress, Magento, and Joomla.
I earn my living as a freelancer, but I like to program a…
I am Kumaran. I work at a software company and also do part-time freelance jobs. I have 8+ years of experience in web development. I am very good at PHP, Drupal, MySQL, AJAX, jQuery, and web design (HTML &…
Hello There,
New Zealand-based freelance developer here. With more than 12 years of experience, I have worked for various clients across Australia, New Zealand, and the UK, on projects ranging from simple to extremely comp…
Greetings!
We are a team of Drupal experts, each with 7+ years of experience, ready to accept any type of work in Drupal, whether it is related to theming, development, e-commerce, or custom work, anything…
Hello
I am Mariami, a web developer with more than 4 years of experience, working on Drupal and WordPress.
Most of my Drupal themes are based on Twitter Bootstrap. I am hardworking and professional in my job.
You can c…
|
/*
* Copyright (c) 2020, the SerenityOS developers.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <LibELF/Loader.h>
#include <stddef.h>
#include <stdint.h>
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
{
ELF::Loader::create(data, size, /*verbose_logging=*/false);
return 0;
}
|
Media
Big Thief
Listen: “Shark Smile”
"Shark Smile" is the next song from Capacity, the highly anticipated sophomore album from Big Thief, to be released June 9 on Saddle Creek. It follows the first single, "Mythological Beauty".
Explains vocalist/guitarist Adrianne Lenker: "'Shark Smile' is the story of a car accident in which one dies and one lives. She recalls her lover leading up to the moment of the wreck, wishing she'd been taken into the next realm, too." The new songs emanate a maturity and refinement that sound like a group of musicians truly hitting their stride. The rolling warmth of bass lines wrapped around muffled snares invites repeated listens, and you surrender to the sweet sadness of Lenker's voice. Check it out here.
Their press release via Pitch Perfect sets up this next chapter for Big Thief best: "Capacity comes just one year after the stunning debut of Masterpiece, which brought Big Thief ever-expanding audiences and critical acclaim...recorded in a snowy winter nest in upstate New York at Outlier Studio with producer Andrew Sarlo, who also produced Masterpiece. Sarlo worked intimately with the band to arrange most of the songs for the first time at the studio."
"In Capacity’s eleven songs, Lenker introduces us to many different women...Lenker’s characters are often spooked and then soothed in a remarkably short amount of time, regenerating back into a state where they can once again be vulnerable. The band breathes with this swell, gasping in synchrony, hyperventilating with volume, and then suddenly the crisis passes and the band exhales with mutual relief."
“There is a darker darkness and a lighter light on this album,” Lenker says. “The songs search for a deeper level of self-acceptance, to embrace the world within and without. I think Masterpiece began that process, as a first reaction from inside the pain, and ‘Capacity’ continues that examination with a wider perspective.”
|
Come Play At Georgia’s Newest Trampoline Park!
Jump N Joy is one of the newest indoor trampoline parks in the state measuring 26,000 square feet with 11,000 square feet of interconnected wall to wall trampolines and attractions. We give you one of the greatest workouts ever combined with awesome, healthy fun. The indoor park includes a variety of specific activities such as Dodgeball, Basketball Dunking Lanes, Kid Zone, and Foam Pits, as well as a large main court. We are dedicated to providing you with the pure joy that comes with flying. We are jump lovers, thrill seekers and people who believe that jumping is freedom. We fly high and keep it safe. Come Jump and Rock your hearts out!
90 min Jump
2 hour Jump
Book your Next Birthday
Does your child want a birthday party at the most fun and physically interactive place in town? Do you want your child to be the cool kid at school? Jump N Joy is the place to make that happen! Jump N Joy offers birthday party packages to meet your needs. Let us do the work for you! Kids can flip into our Foam Jump (filled with thousands of foam cubes), dunk like a pro on Jump Slam hoops, or even jump on private trampoline courts reserved for your guests only. Trust us, your child will thank you and go home happy (and maybe even a little healthier!). Reserve Jump N Joy today and experience the “Best Party Ever!”
Came in on a weekday and had a blast! My son told me that the staff was firm, but fair and polite. It's clean.
Beatrice Ayton
Customer
I brought my nine-year-old grandson here after my seventeen-year-old son had been here with friends and had enjoyed his visit. It's definitely a place for all ages. I really enjoyed watching the adults as well as the children, and my grandson loved it. Quality play areas. The employees are very friendly.
Get the latest info about upcoming specials, events, and new attractions.
Ask about Events
Schools, church groups, and social clubs are all welcome to come and party at Jump N Joy! Host your next event in style! Enjoy Jump N Joy’s 22,000 square feet of FUN! The healthy, exciting fun you find at Jump N Joy is available for private events such as church groups, youth organizations, sports teams, college groups and more; everyone loves to jump for joy at Jump N Joy.
Fundraising
Activities
Jump N Joy Trampoline Park has many activities to have fun and stay fit. Come join in an awesome game of Dodge Ball, fly through the air and slam the ball like you are an NBA athlete or race down the runway and jump into a pit full of foam. There is something for all ages. We also have open jump for you to enjoy. Once you sign your Jump N Joy waiver, the possibilities are endless.
Hours
Waivers
Waivers are required for all jumpers. Waivers for jumpers under 18 must be signed by a parent or legal guardian. You can fill out our waiver before arriving. Please purchase tickets online, as we have a capacity limit and cannot guarantee jump times for walk-ins. Hours are subject to change. Open daily for special events and group outings: call for reservations!
|
Background: Overexpression of the cytokine - transforming growth factor-beta 2 (TGF-β2) - has been implicated in the malignant progression of pancreatic cancer (PAC). OT-101 (trabedersen) is an antisense oligodeoxynucleotide designed to target the human TGF-β2 mRNA. In a Phase I/II study, OT-101 treatment with subsequent chemotherapy was characterized by outstanding overall survival (OS) in patients with PAC. Objective: This study sought to identify 1) co-regulated sets of cyto-/chemokines; 2) potential mechanisms that link TGF-β receptor type 2 receptor inhibition that may result in the induction of a cytokine storm; and 3) predictive biomarkers for OS outcome in OT-101-treated patients with PAC...
Cytokines of the common gamma-chain receptor family, comprising interleukin (IL)-2, IL-4, IL-7, IL-9, IL-15 and IL-21, are vital with respect to organizing and sustaining healthy immune cell functions. Supporting the anti-cancer immune response, these cytokines inspire great interest for their use as vaccine adjuvants and cancer immunotherapies. It is against this background that gamma delta (γδ) T cells, as special-force soldiers and natural contributors of the tumor immunosurveillance, also received a lot of attention the last decade...
Background: The risk of short-term death for treatment naive patients dually infected with Mycobacterium tuberculosis and HIV may be reduced by early anti-retroviral therapy. Of those dying, mechanisms responsible for fatal outcomes are unclear. We hypothesized that greater malnutrition and/or inflammation when initiating treatment are associated with an increased risk for death. Methods: We utilized a retrospective case-cohort design among participants of the ACTG A5221 study who had baseline CD4 < 50 cells/mm3 ...
Interleukin-15 (IL-15) can promote both innate and adaptive immune reactions by stimulating CD8+ /CD4+ T cells and natural killer cells (NK) while showing no effect in activating T-regulatory (Treg) cells or inducing activation-associated death among effector T cells and NK cells. Thus, IL-15 is considered as one of the most promising molecules for antitumor immune therapy. To improve the drug-like properties of natural IL-15, we create an IL-15-based molecule, named P22339, with the following characteristics: 1) building a complex of IL-15 and the Sushi domain of IL-15 receptor α chain to enhance the agonist activity of IL-15 via transpresentation; 2) through a rational structure-based design, creating a disulfide bond linking the IL-15/Sushi domain complex with an IgG1 Fc to augment its half-life...
Myeloid leukocytes are essentially involved in both tumor progression and control. We show that neo-adjuvant treatment of mice with an inhibitor of CSF1 receptor (CSF1R), a drug that is used to deplete tumor-associated macrophages, unexpectedly promoted metastasis. CSF1R blockade indirectly diminished the number of NK cells due to a paucity of myeloid cells that provide the survival factor IL-15 to NK cells. Reduction of the number of NK cells resulted in increased seeding of metastatic tumor cells to the lungs but did not impact on progression of established metastases...
The high electric field across the plasma membrane might influence the conformation and behavior of transmembrane proteins that have uneven charge distributions in or near their transmembrane regions. Membrane depolarization of T cells occurs in the tumor microenvironment and in inflamed tissues because of K+ release from necrotic cells and hypoxia affecting the expression of K+ channels. However, little attention has been given to the effect of membrane potential (MP) changes on membrane receptor function...
Cytokines and chemokines are potent modulators of brain development and as such, dysregulation of the maternal immune system can result in deviations in the fetal cytokine balance, altering the course of typical brain development, and putting the individual on a "pathway to pathology". In the current study, we used a multi-variate approach to evaluate networks of interacting cytokines and investigated whether alterations in the maternal immune milieu could be linked to alcohol-related and alcohol-independent child neurodevelopmental delay...
CMV infection is a potentially fatal complication in patients receiving HSCT, but recent evidence indicates that CMV has strong anti-leukemia effects due in part to shifts in the composition of NK-cell subsets. NK-cells are the primary mediators of the anti-leukemia effect of allogeneic HSCT and infusion of allogeneic NK-cells has shown promise as a means of inducing remission and preventing relapse of several different hematologic malignancies. The effectiveness of these treatments is limited, however, when tumors express HLA-E, a ligand for the inhibitory receptor NKG2A which is expressed by the vast majority of post-transplant reconstituted and ex vivo expanded NK-cells...
PURPOSE: Natural killer (NK) cells can kill transformed cells and represent anti-tumor activities for improving the immunotherapy of cancer. In previous works, we established human interleukin-15 (hIL-15) gene-modified NKL cells (NKL-IL15) and demonstrated their efficiency against human hepatocarcinoma cells (HCCs) in vitro and in vivo. To further assess the applicability of NKL-IL15 cells in adoptive cellular immunotherapy for human leukemia, here we report their natural cytotoxicity against leukemia in vitro and in vivo...
Neoepitope-specific T-cell responses have been shown to induce durable clinical responses in patients with advanced cancers. We explored the recognition patterns of tumor-infiltrating T lymphocytes (TILs) from patients with glioblastoma multiforme (GBM), the most fatal form of tumors of the central nervous system. Whole-genome sequencing was used for generating DNA sequences representing the entire spectrum of 'private' somatic mutations in GBM tumors from five patients, followed by 15-mer peptide prediction and subsequent peptide synthesis...
The cytokine IL-2 is critical for promoting the development, homeostasis, and function of regulatory T (Treg) cells. The cellular sources of IL-2 that promote these processes remain unclear. T cells, B cells, and dendritic cells (DCs) are known to make IL-2 in peripheral tissues. We found that T cells and DCs in the thymus also make IL-2. To identify cellular sources of IL-2 in Treg cell development and homeostasis, we used Il2FL/FL mice to selectively delete Il2 in T cells, B cells, and DCs. Because IL-15 can partially substitute for IL-2 in Treg cell development, we carried out the majority of these studies on an Il15-/- background...
Despite successful introduction of NK-based cellular therapy in the treatment of myeloid leukemia, the potential use of NK alloreactivity in solid malignancies is still elusive. We performed a phase I clinical trial to assess the safety and efficacy of in situ delivery of allogeneic NK cells combined with cetuximab in liver metastasis of gastrointestinal origin. The conditioning chemotherapy was administrated before the allogeneic NK cells injection via hepatic artery. Three escalating doses were tested (3...
A brief in vitro stimulation of natural killer (NK) cells with interleukin (IL)-12, IL-15, and IL-18 endow them a memory-like behavior, characterized by higher effector responses when they are restimulated after a resting period of time. These preactivated NK cells, also known as cytokine-induced memory-like (CIML) NK cells, have several properties that make them a promising tool in cancer immunotherapy. In the present study, we have described the effect that different combinations of IL-12, IL-15, and IL-18 have on the generation of human CIML NK cells...
AIM: Osteoarthritis (OA) is a whole joint pathology involving cartilage, synovial membrane, meniscus, subchondral bone and infrapatellar fat pad (IFP). Synovitis has been widely documented in OA suggesting its important role in pathogenesis. The aim of this study was to investigate the role of different joint tissues in promoting synovitis. MATERIALS AND METHODS: Conditioned media (CM) from cartilage, synovial membrane, meniscus and IFP, were generated from tissues of five patients undergoing total knee replacement and used to stimulate a human fibroblast-like synoviocytes cell line (K4IM)...
Dendritic cell (DC) vaccination can be an effective post-remission therapy for acute myeloid leukemia (AML). Yet, current DC vaccines do not encompass the ideal stimulatory triggers for innate gamma delta (γδ) T cell anti-tumor activity. Promoting type 1 cytotoxic γδ T cells in patients with AML is, however, most interesting, considering these unconventional T cells are primed for rapid function and exert meaningful control over AML. In this work, we demonstrate that interleukin (IL)-15 DCs have the capacity to enhance the anti-tumoral functions of γδ T cells...
|
Main navigation
Can I Use My HSA To Pay For Botox Treatments?
One of my friends asked me this question the other day. Here’s what she said:
Nancy, I am in sales, so keeping up my appearance is important to career success. Since I turned 50, I’ve started seeing an esthetician and am getting botox injections for deep wrinkles and specialized skin care procedures. My skin looks fantastic. People tell me I look 10 years younger.
Since these are medical procedures, can I use my HSA to pay for them?
The short answer is no. Health Savings Accounts have some pretty amazing tax benefits, so the I.R.S. has strict guidelines about HSA funds being used for “medically necessary,” rather than cosmetic, procedures.
Let’s talk about the tax benefits first. There are three possible tax breaks!
A tax break on the front end—your contribution to your HSA is pre-tax, since it comes out of your paycheck before your income is taxed.
Tax deferral as it grows—your balance grows tax deferred, so there is no 1099 issued for interest earned each year.
Tax free when you take it out—your distributions can be withdrawn tax free for qualified medical expenses.
So what exactly constitutes a qualified medical expense? Think “medically necessary” and “prescribed by your physician.”
The I.R.S. specifically mentions cosmetic surgery, including facelifts and liposuction, as “not includible.” While they don’t call out botox specifically, it’s implied that this procedure is lumped in. Of course, if cosmetic surgery is needed due to a traumatic accident or a disease, the HSA can be used (and thank goodness for that!).
The long answer is that there may be exceptions to this rule. For example, botox can be an effective treatment for migraine headaches. This could fall into the “prescribed by your physician” category. If that were the case, you could use HSA funds to cover it.
One of my friends (who I won’t name here) is trying to beat the system by getting botox treatments from her dentist and using her Health Savings Account to pay for them. She figures she won’t get caught by the I.R.S. for unauthorized charges because dental care is a qualified expense. She’s using the dentist’s office as a Trojan horse to slip in her botox treatment payments as qualified.
Although we can give her props for creativity, this move won’t stand up to an I.R.S. audit. She’d be subject to taxes on the distribution and a 20% penalty on top of that.
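For readers who like to see the numbers, the three tax breaks and the audit penalty described above can be sketched with arithmetic. The contribution amount and marginal tax rate below are hypothetical illustrations; only the 20% penalty rate comes from the text:

```python
# Hypothetical figures; only the 20% penalty rate is from the article.
contribution = 3_000       # pre-tax HSA payroll contribution (assumed)
marginal_rate = 0.24       # assumed marginal income tax rate

# Tax break on the front end: the contribution escapes income tax.
front_end_savings = contribution * marginal_rate
print(front_end_savings)   # -> 720.0

# A non-qualified distribution (e.g., cosmetic botox) loses the
# tax-free withdrawal: it is taxed as income AND penalized 20% on top.
non_qualified = 500
tax_owed = non_qualified * marginal_rate   # 120.0
penalty = non_qualified * 0.20             # 100.0
print(tax_owed + penalty)  # -> 220.0
```

At an assumed 24% bracket, a $500 misused distribution costs $220 in tax and penalty, which is why paying out of pocket for cosmetic procedures is usually the cheaper path.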
My advice is to skip the HSA for your cosmetic procedures and simply pay for them out of pocket. If these treatments make you look and feel fantastic, make them part of your personal care budget.
Set up a separate savings account and call it “My Fabulous Account.”
Use it to save up for special ways to take great care of yourself. Then you’ll have funds available for whatever your heart desires, which may include procedures like facials or massages as well as a nutritional coach, personal training or a yoga retreat.
Taking care of yourself inside and out can be expensive. If you have money set aside specifically for that purpose, you may be more inclined to do it.
If you have a money question, ask me by leaving a comment below, tweet it to @nancy_moneydiva, or email Nancy at nancy@NancyLAnderson.com
Share this:
Footer
The Blog
The Acres of Acorns blog is a resource for late starters (over 40) who are worried they will never have enough money to retire. At the same time, they want to enjoy life today! Traditional financial advice doesn’t apply but you’ll find the right tips right here!
|
GPS-friendly network seen years away
January 31, 2012 | By Dan Namowitz
AOPA is calling for the Federal Communications Commission to revoke LightSquared's conditional approval to develop a mobile-satellite network.
A technical committee that analyzed test data has concluded that the transmissions pose intractable interference problems for aviation navigation and other uses of GPS. Acting at the request of the FCC, the National Telecommunications and Information Administration, and nine government agencies of which it is composed, the National Space-Based Positioning, Navigation and Timing Executive Committee (Excomm) tested and analyzed the LightSquared project. The panel worked closely with the company, an effort that it said required a substantial federal expenditure, including resources “diverted from other programs.”
The unanimous opinion of the committee comes as a major blow to LightSquared in its bid to develop the network that has faced widespread opposition since tests last year showed it jamming aviation GPS navigation signals.
The technical committee co-chaired by deputy secretaries of the U.S. defense and transportation departments concluded that there are “no practical solutions” to the LightSquared broadband network's GPS interference.
That result should now move the FCC to revoke the waiver under which LightSquared has been proceeding since last January, said Melissa Rudinger, AOPA senior vice president of government affairs.
In a related development, the FCC has opened a public comment period on a LightSquared petition to have the FCC declare GPS devices not entitled to protection from LightSquared's operations, “so long as LightSquared operates within the technical parameters prescribed” under its conditional approval.
The committee's unanimous opinion that LightSquared's original proposal—and subsequent modifications—created harmful interference with GPS comes just at the anniversary of LightSquared's conditional approval to develop the network, contingent on the now-known test results.
In addition to finding that no practical solutions existed for the interference problem, the committee unanimously concluded that the mobile-satellite network was “not compatible with several GPS-dependent aircraft safety-of-flight systems.”
The committee saw no remedy on the horizon that would allow the network to operate “in the next few months or years without significantly interfering with GPS.”
The panel's Jan. 13 letter to the FCC marked the latest in a series of setbacks for the hedge-fund-financed startup that spent much of 2011 pitted against an industry GPS coalition, public-sector GPS users, and more recently, nervous Wall Street investors who have financed the startup to the tune of $2.9 billion. Last fall a securities-law inquiry was opened into some past dealings of LightSquared's financiers, Harbinger Capital Partners.
The FCC should now withdraw its order that allowed LightSquared to proceed, Rudinger wrote in a Jan. 27 letter to FCC Chairman Julius Genachowski. The waiver granted to LightSquared, allowing it to use high-powered land-based transmissions as an exception to the “integrated service” rule that usually applies, does not serve the public interest because of the aviation safety risks, she wrote.
LightSquared blasted the Excomm opinion, claiming in a Jan. 13 news release that it represented a “systematic pattern of bias and collusion.”
The multi-industry Coalition to Save Our GPS, of which AOPA is a member, expressed satisfaction with the report, noting in a statement that LightSquared had “pursued a concerted disinformation campaign to attack and impugn specific companies and individuals who have been part of the process of reviewing its proposals.”
Comment deadline set
As explained in this public notice, interested parties may file comments in response to LightSquared's petition for declaratory ruling in IB Docket No. 11-109 or ET Docket No. 10-142, as appropriate, no later than Feb. 27. Parties may file replies in response to those comments in IB Docket No. 11-109 or ET Docket No. 10-142, as appropriate, no later than March 13.
In announcing the public comment period, the FCC noted that it was now also under new obligations from the recently passed Financial Services and General Government Appropriations Act of 2012 not to remove any conditions that have been imposed on LightSquared until concerns about GPS interference are resolved.
Dan Namowitz
Associate Editor Web
Dan Namowitz has been writing for AOPA in a variety of capacities since 1991. He has been a flight instructor since 1990 and is a 30-year AOPA member.
|
Does it become easier or harder as he gets older and it's time to order more change-of-address cards?
"It can go both ways," said Harris, who attended William & Mary and lives in Arlington, Va. "The first time I was traded from the Cubs, the emotional aspect was like, 'Oh man, what?' It's like the rug is pulled out from under you. But I think as you get older, you get a little more jaded to the business side of the game. You know it's part of the deal. Very few guys are lucky enough to get drafted and play their whole career with one organization. In that aspect, it's easier. But at this point, being from the East Coast and I've got family from here, as far as trades, it's probably one of the easier ones."
Harris, who spoke to reporters via conference call, has appeared in 214 major league games at shortstop, 120 at third base and 118 at second, posting a .973 fielding percentage. He also can play first base and the outfield.
Throw a mask and chest protector at him, and he'd squat behind the plate without argument.
"I'll come in ready to do anything and come in in good shape and be ready for any opportunity I get to play," he said.
Harris said he wasn't surprised by the trade.
"I'm just ecstatic that it's the Orioles and the situation I'm coming into, the team and organization I'm coming into," he said.
"He's a real good player, real fundamentally sound," Harris said. "He's had a couple of injuries the last couple of years, but when he's healthy, he's one of the more productive power-hitting shortstops in baseball."
|
Cython==0.29.21
numpy==1.19.1
cachetools==4.1.1
ruamel.yaml==0.16.10
eth-account==0.5.2
aioconsole==0.2.1
aiokafka==0.6.0
SQLAlchemy==1.3.18
binance==0.3
ujson==3.1.0
websockets==8.1
signalr-client-aio==0.0.1.6.2
web3==5.12.0
prompt-toolkit==3.0.5
0x-order-utils==4.0.0
0x-contract-wrappers==2.0.0
eth-bloom==1.0.3
pyperclip==1.8.0
telegram==0.0.1
jwt==1.0.0
mypy-extensions==0.4.3
python-telegram-bot==12.8
python-binance==0.7.5
pandas==1.1.0
aiohttp==3.6.2
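Exact '==' pins like the ones above can be sanity-checked with a few lines of Python (a sketch; the function and regex are illustrative and deliberately accept only exact pins):

```python
import re

# Matches only exactly pinned requirements such as "numpy==1.19.1".
PIN_RE = re.compile(r"^\s*([A-Za-z0-9._-]+)==([A-Za-z0-9.]+)\s*$")

def parse_pin(line: str):
    """Return (name, version) for an exactly pinned requirement, else None."""
    m = PIN_RE.match(line)
    return (m.group(1), m.group(2)) if m else None

print(parse_pin("numpy==1.19.1"))   # ('numpy', '1.19.1')
print(parse_pin("aiohttp>=3.0"))    # None (not an exact pin)
```

Ranged specifiers such as >= are rejected on purpose, since a file like this one relies on exact versions for reproducible installs.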
|
Q:
How can I display a chosen number of posts on an archive page?
How can I display a set number of posts on an archive template (custom post type) page?
Say I want to display 20 posts on the archive page, with the pagination links below.
Please look here:
http://cinema.trancelevel.com/trailer/aventuri/page/2/
I use custom post type pages; all the archive pages currently display 12 posts.
I want to be able to set the number per archive:
say, 20 posts for the trailers category,
and 10 posts for the actori category.
Currently I use:
<?php is_tag(); ?>
<?php if (have_posts()) : ?>
<?php while (have_posts()) : the_post(); ?>
Thanks in advance
A:
Just use the current_post property of the global $wp_query object inside the loop:
global $wp_query;

if ( have_posts() )
{
    while ( have_posts() )
    {
        the_post();
        // Show the current (zero-based) post number:
        echo $wp_query->current_post;
    }
}
|
Unlockable Cheat Codes
How to unlock the cheats menu: In order for this to work you must first have unlocked debug mode by collecting at least 16 medals from the bonus stages. Then, you need to enable Debug Mode at the Secrets menu which is found by pressing X (on Switch) / Triangle (on PS4) / Y (on Xbox One) at the File Select screen when the “No Save” option is highlighted.
Once Debug Mode is turned on go back to the File Select screen, place the cursor on the “No Save” option and hold down the Y button (on Switch) / Square button (on PS4) / X button (on Xbox One) then press start. This will load the debug menu. To navigate the screen, use the Directional D-pad and press A (on Switch) / Circle (on PS4) / B (on Xbox One) to make a selection.
From there, you can play any level or bonus stage and preview all the game's songs via the Sound Test option, as well as activate newly discovered cheats by going to Sound Test and playing the following sounds to enable the desired cheat code.
Sonic Mania Plus Cheats
Unlockable Modes, Retro Game & Moves
How to unlock secret Modes, a Retro Game & Classic Moves: You get these so-called “Secrets” by beating all of Sonic Mania Plus’s bonus stages, which are the Blue Sphere bonus stages, not the Chaos Emerald stages. If you collect enough Medals by successfully completing the Blue Sphere bonus stages, you unlock the unlockables listed below…
Note: Completing the game for the first time will unlock the following…
• “Time Attack” Mode
• “Competition” Mode
• “Stage Select” can be done by pressing up or down on the save file you’ve already completed the game with. — Shortcut: On the title screen press Up, Up, Down, Down, Left, Right, Left, Right, Left, Right, and then hold A, B & Y at the same time.
Unlockable Super Sonic / Tails / Knuckles
How to unlock the Super State: You can get the “Super Saiyan-like” Super State for Sonic, Tails or Knuckles, once you collect all seven Chaos Emeralds while playing with any of those three characters in Sonic Mania Plus! Activate it afterwards by jumping twice in a row. It makes you invincible, jump higher, and move faster.
Here’s a video of the “Super Sonic” state at 1:42 minutes:
Here’s a video of the “Super Knuckles” state at 2:12 minutes:
[Work-In-Progress]
These are all Sonic Mania Plus cheats discovered so far on Switch, Xbox One, PS4 & PC. We also made the handy Sonic Mania Plus guides listed above to help you with tips and tricks for the game!
About the author
By Ferry Groenendijk: He is the founder and editor of Video Games Blogger. He loved gaming from the moment he got a Nintendo with Super Mario Bros. on his 8th birthday. Learn more about him here and connect with him on Twitter, Facebook and at Google+.
|
Directions to The Conference Center
The Conference Center at the Maritime Institute (CCMIT) is nestled within 80 acres of lush greenery and rolling grounds with ample room for jogging, walking, and other outside activities. This countryside setting is strategically located within the Baltimore - Washington corridor. In fact, the campus is less than five miles from the Baltimore - Washington International (BWI) Thurgood Marshall Airport, the BWI Amtrak Station, and Interstate-95. The campus is also near many major tourist destinations such as Baltimore, Annapolis, and Washington, DC. See The Conference Center map below.
Taxi Service
For students and guests who would prefer to use a taxi service, CCMIT's campus is less than five miles from the Baltimore - Washington International (BWI) Thurgood Marshall Airport, the BWI Amtrak Station, and the North Linthicum Light Rail Station. Please note that CCMIT's Hammonds Ferry Road entrance is typically closed from 1:00 AM to 5:00 AM. For safety reasons, this entrance is also closed during inclement weather conditions. During these times, please advise the taxi driver to use our alternate entrance, which can be accessed off of West Nursery Road.
|
Hi
I am searching for a job. Please send me some details about an ERP project, along with a real-time test plan, traceability report, and test cases, if any relate to an ERP project. Please help me out to get a job in testing. My email address is chiku_69@yahoo.in
Thank you
If we have 9 floors and 3 eggs, and we have to find the highest floor from which an egg won't break, how can we do that?
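This is the classic egg-drop puzzle. A small dynamic-programming sketch (the function name and formulation are illustrative, not part of the original question) computes the minimum number of drops needed in the worst case:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_drops(eggs: int, floors: int) -> int:
    """Worst-case number of drops needed to find the highest safe floor."""
    if floors == 0:
        return 0
    if eggs == 1:
        return floors  # only one egg: must test floor by floor from the bottom
    return 1 + min(
        max(min_drops(eggs - 1, k - 1),    # egg breaks: search the floors below k
            min_drops(eggs, floors - k))   # egg survives: search the floors above k
        for k in range(1, floors + 1)
    )

print(min_drops(3, 9))  # → 4
```

With 3 eggs and 9 floors the answer is 4 drops, since d drops and e eggs can distinguish at most C(d,1) + ... + C(d,e) floors, and 3 drops cover only 7 floors.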
Can anyone list out major scenarios for an application
managing drug composition?
You have a testing team of 10 members, and now you have to reduce it by 5 members. You don't want to increase risk in your product, and you are trying to cover all functionality in testing.
What test strategy do you follow?
Through which phases does a software tester need to pass, such as junior test engineer, team lead, project lead, etc.? Which is the final stage of promotion, and how will you achieve it?
Which test strategy are you following in your company? Which documents are you using in the software development life cycle?
What are the interview questions on the insurance domain in manual testing?
On a login window, what does it mean when the username and password fields are auto-populated?
I have to assign weights to 5 stones so that, using a simple balance, I should be able to weigh any number between 1 and 100.
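A classical answer uses powers of three (1, 3, 9, 27, 81), because each stone can go on the pan with the object, on the opposite pan, or stay off the balance. A short Python check of that claim (a sketch; the function name is illustrative):

```python
def weighable(target: int, weights=(1, 3, 9, 27, 81)) -> bool:
    """True if `target` can be balanced: each weight may be added to the
    opposite pan (+w), placed with the object (-w), or left off (0)."""
    sums = {0}
    for w in weights:
        sums = {s + d for s in sums for d in (-w, 0, w)}
    return target in sums

# Powers of three cover every integer from 1 up to 1+3+9+27+81 = 121.
print(all(weighable(n) for n in range(1, 101)))  # → True
```

This is balanced-ternary representation: three placements per stone give 3^5 = 243 signed combinations, enough to reach every value through 121.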
|
/*
* Copyright (c) Enalean, 2019-Present. All Rights Reserved.
*
* This file is a part of Tuleap.
*
* Tuleap is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* Tuleap is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Tuleap. If not, see <http://www.gnu.org/licenses/>.
*/
import Vue from "vue";
import { Component, Prop } from "vue-property-decorator";
import { ColumnDefinition } from "../../../../../type";
@Component
export default class ClassesForCollapsedColumnMixin extends Vue {
@Prop({ required: true })
readonly column!: ColumnDefinition;
get classes(): string[] {
if (!this.column.is_collapsed) {
return [];
}
const classes = ["taskboard-cell-collapsed"];
if (this.column.has_hover) {
classes.push("taskboard-cell-collapsed-hover");
}
return classes;
}
}
|
Maternal serum placental growth factor isoforms 1 and 2 at 11-13, 20-24 and 30-34 weeks' gestation in late-onset pre-eclampsia and small for gestational age neonates.
To compare the maternal serum concentrations of placental growth factor (PlGF)-1 and PlGF-2 in the first, second and third trimesters in normal pregnancies and in those complicated by pre-eclampsia (PE) or the delivery of small for gestational age (SGA) neonates after 37 weeks. Serum PlGF-1 and PlGF-2 were measured at 11-13, 20-24 and 30-34 weeks' gestation in 50 cases of PE, 99 cases of SGA and 298 controls. The values of PlGF-1 and PlGF-2 at 11-13 weeks were expressed as multiples of the median (MoM) after adjustment for maternal characteristics. The distributions of PlGF-1 and PlGF-2 in cases and controls at 20-24 and 30-34 weeks were converted to MoM of the values at 11-13 weeks and compared. Serum PlGF-1 and PlGF-2 levels were highly correlated and both increased with gestational age. At 30-34 weeks, the median MoM values for PlGF-1 and PlGF-2 in the late PE (4.2 and 4.3) and late SGA (7.2 and 6.0) groups were significantly lower than in the controls (12.8 and 9.9). Combining the two isoforms did not improve the prediction of late PE and late SGA provided by PlGF-1 alone. The performances of serum PlGF-1 and PlGF-2 in the prediction of late PE and late SGA are similar.