129 results
Search Results
2. A novel sustainable multi-objective optimization model for forward and reverse logistics system under demand uncertainty
- Author
-
Devika Kannan, Navid Zarbakhshnia, Reza Kiani Mavi, and Hamed Soleimani
- Subjects
Mathematical optimization, Linear programming, Computer science, Sustainable reverse logistics, Supply chain, Probabilistic logic, General Decision Sciences, Particle swarm optimization, Social responsibility, Reverse logistics, Management Science and Operations Research, Multi-objective optimization, Non-dominated sorting genetic algorithm (NSGA-II), Analysis of variance (ANOVA), Multi-objective probabilistic programming, Genetic algorithm, Multi-objective particle swarm optimization (MOPSO)
- Abstract
The paper presents a multi-product, multi-stage, multi-period, and multi-objective probabilistic mixed-integer linear programming model for a sustainable forward and reverse logistics network problem. It considers original and returned products to determine both flows in the supply chain, forward and reverse, simultaneously. In addition, to locate the centres of forward and reverse logistics activities and to choose a transportation strategy in a manner closer to reality, demand is treated as uncertain. The objective functions attempt to capture all major dimensions of sustainability: the first minimizes processing, transportation, and fixed establishment costs together with the cost of CO2 emissions as the environmental impact; the second minimizes the processing time of reverse logistics activities; and the third maximizes social responsibility. A complete sustainability approach is thus developed. The model's contributions are a novel environmental constraint, the inclusion of social concerns in the objective functions, and the use of probabilistic programming to manage uncertain parameters. Moreover, a non-dominated sorting genetic algorithm (NSGA-II) is configured to obtain Pareto-front solutions. The performance of NSGA-II is compared with that of a multi-objective particle swarm optimization (MOPSO) algorithm on 10 test problems using five comparison metrics and analysis of variance (ANOVA) to validate the modeling approach. Overall, according to the ANOVA results and the comparison metrics, NSGA-II performs more satisfactorily than MOPSO.
- Published
- 2020
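As a quick illustration of the Pareto machinery in the entry above, here is a minimal sketch (plain Python, invented objective vectors, not the authors' implementation) of the dominance test that both NSGA-II and MOPSO rely on when extracting a non-dominated front:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (cost, processing time, -social score): all three treated as minimized
solutions = [(10.0, 5.0, -3.0), (8.0, 7.0, -2.0), (9.0, 6.0, -3.0),
             (12.0, 4.0, -1.0), (11.0, 6.5, -2.5)]
print(pareto_front(solutions))  # the last point is dominated and drops out
```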
3. Optimal scheduling of airport ferry vehicles based on capacity network
- Author
-
Xue Han, Peixin Zhao, Di Wan, Shengnan Yin, and Qingchun Meng
- Subjects
Linear programming, Operations research, Computer science, Process (computing), Scheduling (production processes), General Decision Sciences, Path cover, Management Science and Operations Research, Directed acyclic graph, International airport, Scheduling (computing), Integer programming, Network model
- Abstract
For daily airport operations, an insufficient number and improper scheduling of ground support vehicles are the main causes of flight delays. In this paper, a novel network model is proposed to support the optimal scheduling of ferry vehicles for the flight ground support service. In constructing the model, we first build a ferry vehicle capacity network whose nodes are the introduced virtual flights and the ferry vehicle depot, and whose directed edges indicate that the two associated nodes may be served consecutively by the same ferry vehicle. Based on this capacity network, a mixed-integer programming model is constructed to minimize the number of ferry vehicles needed. In addition, the paper shows that the mixed-integer program is equivalent to a linear program when the service start time of each flight is fixed, which makes the solution process more efficient; the linear programming model can also be applied to find a minimum node-disjoint path cover of a directed acyclic graph. The efficiency and accuracy of the method are validated on actual flight data from Beijing Capital International Airport. This study provides a methodological reference for the optimal scheduling of airport ferry vehicles.
- Published
- 2020
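The minimum node-disjoint path cover mentioned in the entry above has a classical combinatorial counterpart: it equals n minus the size of a maximum bipartite matching between "out" and "in" copies of the nodes. A small sketch of that standard reduction (invented flight graph, not the paper's LP formulation) follows; each path is the sequence of flights one ferry vehicle could serve consecutively.

```python
def max_bipartite_matching(adj, n):
    """Augmenting-path matching: adj[u] lists right-nodes reachable from u."""
    match_right = [-1] * n  # match_right[v] = left node currently matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n))

# Edge (u, v): flight v can be served right after flight u by the same vehicle.
n = 5
edges = [(0, 2), (0, 3), (1, 3), (2, 4), (3, 4)]
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)

min_vehicles = n - max_bipartite_matching(adj, n)
print(min_vehicles)  # 2: e.g. paths 0 -> 2 -> 4 and 1 -> 3
```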
4. A stochastic disaster-resilient and sustainable reverse logistics model in big data environment
- Author
-
Shraddha Mishra and Surya Prakash Singh
- Subjects
Operations research, Linear programming, Supply disruption, Computer science, Big data, General Decision Sciences, Reverse logistics, Management Science and Operations Research, Facility location problem, Supply and demand, Sustainability, Emissions trading, Remanufacturing
- Abstract
In this paper, a mixed-integer linear programming model is presented that provides joint decision making for facility location and production-distribution across countries for both forward and reverse logistics. A hybrid facility network is considered for cost cutting and equipment sharing, in which the facilities of forward logistics are also equipped to provide reverse logistics services. The model considers the dynamic production and storage capacity of the facilities, which can be expanded if required. Furthermore, the effectiveness of the model is tested against disruptions due to man-made or natural disasters. Dynamic facility allocation enables the model to withstand demand and supply disruptions in a disaster-affected zone. The model also accounts for carbon emissions caused by manufacturing, remanufacturing, repair, storage, and transportation; these emissions are regulated using a cap-and-trade policy. Thus, the proposed model balances resilience and sustainability under uncertain market demand and product returns. A chance-constrained approach is used to obtain the deterministic equivalent of the stochastic demand and returns. The paper also investigates the changes in emission and production levels in each country under demand and supply disruptions. The parameters of the model are mapped to various dimensions of big data, such as volume, velocity, and variety. The proposed model is solved using randomly generated data sets with realistic parameters and the essential big data characteristics.
- Published
- 2020
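The chance-constrained step in the entry above has a textbook one-dimensional illustration: with normally distributed demand, P(production >= demand) >= alpha has the deterministic equivalent production >= mu + z_alpha * sigma. The demand parameters below are invented; this is a sketch of the conversion, not the paper's full model.

```python
from scipy.stats import norm

mu, sigma, alpha = 1000.0, 120.0, 0.95
z_alpha = norm.ppf(alpha)            # 95% standard-normal quantile, ~1.645
min_capacity = mu + z_alpha * sigma  # smallest capacity meeting the chance constraint
print(round(min_capacity, 1))        # ~1197.4
```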
5. Scheduling a forge with due dates and die deterioration
- Author
-
Olivér Ősz, Balázs Ferenczi, and Máté Hegyháti
- Subjects
Schedule, Mathematical optimization, Job shop scheduling, Linear programming, Computer science, Scheduling (production processes), General Decision Sciences, Management Science and Operations Research, Scheduling (computing), Discrete time and continuous time, Network model
- Abstract
In this paper a new scheduling problem is presented that originates from the steel processing industry. The optimal scheduling of a steel forge is investigated with the goal of minimizing setup and storage costs under strict deadlines and special resource constraints. The main distinctive feature of the problem is the deterioration of some equipment, in this case the so-called forging dies. While the aging effect, where production speed decreases over time, has been widely investigated in scheduling approaches, durability deterioration caused by equipment setup has not been addressed before. In this paper a mixed-integer linear programming model is proposed for solving the problem. The model uses a uniform discrete-time representation and resource-balance constraints based on the resource-task network formulation method. The proposed method was tested on three-week schedules based on real industrial scenarios. Computational results show that the approach can provide optimal short-term schedules in reasonable time.
- Published
- 2019
6. A linear programming primer: from Fourier to Karmarkar
- Author
-
Vijay Chandru, M. R. Rao, and Atlanta Chakraborty
- Subjects
Numerical linear algebra, Linear programming, Computer science, Numerical analysis, Ellipsoid method, General Decision Sciences, Duality (optimization), Management Science and Operations Research, Linear function, Linear inequality, Polyhedron, Monotone polygon, Simplex algorithm, Theory of computation, Calculus, Time complexity, Interior point method
- Abstract
The story of linear programming is one with all the elements of a grand historical drama. The original idea of testing whether a polyhedron is non-empty by using variable elimination to project down one dimension at a time until a tautology emerges dates back to a paper by Fourier in 1823. It was re-invented in the 1930s by Motzkin. Real interest in linear programming arose during World War II, when mathematicians pondered the best ways of utilising resources at a time when they were constrained. The problem of optimising a linear function over a set of linear inequalities became the focus of the effort. Dantzig's Simplex Method was announced, and the Rand Corporation became a hotbed of computational mathematics. The range of applications of this modelling approach grew, and the powerful machinery of numerical analysis and numerical linear algebra became a major driver for the advancement of computing machines. In the 1970s, constructs of theoretical computer science indicated that linear programming may in fact define the frontier of tractable problems that can be solved effectively on large instances. This raised a series of questions: Is the Simplex Method a polynomial-time method, and if not, can we construct novel polynomial-time methods? That is how the Ellipsoid Method from the Soviet Union and the Interior Point Method from Bell Labs make their way into this story, as the heroics of Khachiyan and Karmarkar. We have called this paper a primer on linear programming since it only gives the reader a quick narrative of this grand historical drama. Hopefully it motivates a young reader to delve deeper and add another chapter.
- Published
- 2019
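Fourier's elimination idea, which opens the primer above, is compact enough to sketch directly. To eliminate a variable from a system of inequalities sum_j a_ij x_j <= b_i, pair every row with a positive coefficient on that variable against every row with a negative one; rows without it pass through unchanged. The tiny system below is invented.

```python
def fourier_motzkin(rows, k):
    """rows: list of (coeffs, b) meaning coeffs . x <= b.
    Returns the projected system with variable k eliminated."""
    pos = [(c, b) for c, b in rows if c[k] > 0]
    neg = [(c, b) for c, b in rows if c[k] < 0]
    out = [(c, b) for c, b in rows if c[k] == 0]
    for cp, bp in pos:
        for cn, bn in neg:
            lam, mu = -cn[k], cp[k]  # positive multipliers cancelling x_k
            out.append(([lam * a + mu * c for a, c in zip(cp, cn)],
                        lam * bp + mu * bn))
    return out

# Project {x + y <= 4, -x + y <= 2, -y <= 0} onto y by eliminating x (k = 0):
system = [([1, 1], 4), ([-1, 1], 2), ([0, -1], 0)]
print(fourier_motzkin(system, 0))  # [([0, -1], 0), ([0, 2], 6)], i.e. 0 <= y <= 3
```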
7. Robust reliable humanitarian relief network design: an integration of shelter and supply facility location
- Author
-
Mohsen Yahyaei and Ali Bozorgi-Amiri
- Subjects
Distribution center, Linear programming, Total cost, Computer science, General Decision Sciences, Robust optimization, Management Science and Operations Research, Facility location problem, Network planning and design, Risk analysis (engineering), Backup, Preparedness, Hedge (finance), Natural disaster
- Abstract
Human societies are threatened by natural disasters, so preparedness and response planning are necessary to eliminate or mitigate their negative effects. Relief network design plays an important role in responding efficiently to the needs of affected people. This paper addresses the problem of relief logistics network design under interval uncertainty and the risk of facility disruption. A mixed-integer linear programming model is proposed (1) to consider distribution center (DC) disruption, (2) to support a disrupted DC with a backup plan, (3) to take into account both supply and evacuation issues, and (4) to mitigate disruption impact through investment. Moreover, a robust optimization methodology is applied to hedge against uncertain environments. We conduct computational experiments using generated instances and a real-world case to perform sensitivity analysis and provide managerial insights. The results show that the total cost of the relief network increases as the conservatism level increases, and that the total cost of the network can be decreased by reducing the intervals of the uncertain parameters. Consequently, providing more information and better estimates of uncertain parameters can reduce network costs. The effect of disruption probability is also investigated, and the results indicate that the network establishes more reliable facilities as the disruption probability increases. To demonstrate the superiority of the reliable network described in this paper over the classic network, a Monte Carlo procedure is used to compare the two networks; the results confirm the superiority of the reliable network.
- Published
- 2018
8. A generalization of Hunter’s bound to hypergraphs
- Author
-
Béla Vizvári and Gergely Kovács
- Subjects
Discrete mathematics, Linear programming, Generalization, General Decision Sciences, Management Science and Operations Research, Upper and lower bounds, Dual (category theory), Theory of computation, Mathematics
- Abstract
One of the primary objectives of this paper is to generalize Hunter's bound to m-regular hypergraphs. In particular, new upper and lower bounds for the probability of the union of events are provided. The lower bounds generalize Hunter's bound as well. All new bounds are derived from dual feasible solutions of Boole's linear programming problem.
- Published
- 2018
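For orientation, here is the classical graph case (m = 2) that the entry above generalizes: P(A_1 u ... u A_n) <= sum_i P(A_i) - sum_{(i,j) in T} P(A_i n A_j), where T is a maximum-weight spanning tree on the pairwise intersection probabilities. The probabilities below are invented but consistent; the spanning tree is found by a hand-rolled Kruskal pass.

```python
def hunter_bound(p, pairs):
    """p: single-event probabilities; pairs: {(i, j): P(A_i n A_j)}."""
    n = len(p)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    tree_weight = 0.0
    # Kruskal in decreasing weight order = maximum-weight spanning tree
    for (i, j), w in sorted(pairs.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree_weight += w
    return sum(p) - tree_weight

p = [0.30, 0.25, 0.20]
pairs = {(0, 1): 0.10, (0, 2): 0.05, (1, 2): 0.08}
print(hunter_bound(p, pairs))  # 0.75 - (0.10 + 0.08) = 0.57
```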
9. An MIP-based interval heuristic for the capacitated multi-level lot-sizing problem with setup times
- Author
-
Tao Wu, Leyuan Shi, and Jie Song
- Subjects
Linear programming relaxation ,Mathematical optimization ,Production planning ,Operations research ,Linear programming ,Heuristic ,Theory of computation ,General Decision Sciences ,Domain knowledge ,Time horizon ,Interval (mathematics) ,Management Science and Operations Research ,Mathematics - Abstract
This paper considers the capacitated multi-level lot-sizing problem with setup times, a class of difficult problems often faced in practical production planning settings. In the literature, relax-and-fix is a technique commonly applied to this problem because setup decisions in later periods of the planning horizon are sensitive to setup decisions in the early periods, but not vice versa. The weakness of this method is that setup decisions are optimized over only a small subset of periods in each iteration, and setup decisions fixed in early iterations might adversely affect setup decisions in later periods. To avoid these weaknesses, this paper proposes an extended relax-and-fix based heuristic that systematically uses domain knowledge derived from several relax-and-fix strategies and a linear programming relaxation technique. Computational results show that the proposed heuristic is superior to other well-known approaches in solution quality, in particular on hard test instances.
- Published
- 2011
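A minimal sketch of the plain relax-and-fix loop that the entry above criticizes and extends, shown on an invented single-level lot-sizing toy (the paper's heuristic is multi-level and adds LP-derived domain knowledge). Setups are binary only inside a rolling two-period window, LP-relaxed afterwards, and frozen once their window has been solved. Assumes PuLP with its bundled CBC solver.

```python
import pulp

T = 6
demand = [40, 60, 30, 70, 25, 55]
cap, setup, hold = 100, 120.0, 1.5

def solve_window(binary_periods, fixed_y):
    """Setups binary in the window, relaxed after it, frozen before it."""
    m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{t}", lowBound=0) for t in range(T)]
    inv = [pulp.LpVariable(f"I{t}", lowBound=0) for t in range(T)]
    y = [pulp.LpVariable(f"y{t}", lowBound=0, upBound=1,
                         cat="Binary" if t in binary_periods else "Continuous")
         for t in range(T)]
    m += pulp.lpSum(setup * y[t] + hold * inv[t] for t in range(T))
    for t in range(T):
        prev = inv[t - 1] if t > 0 else 0
        m += prev + x[t] == demand[t] + inv[t]   # inventory balance
        m += x[t] <= cap * y[t]                  # capacity + setup forcing
        if t in fixed_y:
            m += y[t] == fixed_y[t]              # freeze earlier windows
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return [round(v.value()) for v in y], pulp.value(m.objective)

fixed, window = {}, 2
for start in range(0, T, window):
    yvals, obj = solve_window(set(range(start, start + window)), fixed)
    for t in range(start, start + window):
        fixed[t] = yvals[t]                      # fix the window just solved

print("setup pattern:", [fixed[t] for t in range(T)], " cost:", obj)
```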
10. Interdicting nuclear material on cargo containers using knapsack problem models
- Author
-
Jamie D. Lloyd, Laura A. McLay, and Emily Niman
- Subjects
Screening test ,Linear programming ,Computer science ,Knapsack problem ,Reliability (computer networking) ,Key (cryptography) ,General Decision Sciences ,Nuclear material ,Management Science and Operations Research ,Simulation ,Reliability engineering ,Primary screening - Abstract
This paper introduces a framework for screening cargo containers for nuclear material at security stations throughout the United States using knapsack problem, reliability, and Bayesian probability models. The approach investigates how to define a system alarm given a set of screening devices and hence designs and analyzes next-generation security system architectures. Containers that yield a system alarm undergo secondary screening, where more effective and intrusive screening devices are used to further examine containers for nuclear and radiological material. It is assumed that there is a budget for performing secondary screening on containers that yield a system alarm. This paper explores the relationships and tradeoffs between prescreening, secondary screening costs, and the efficacy of radiation detectors. The key contribution of this analysis is a risk-based framework for determining how to define a system alarm for screening cargo containers given limited screening resources. The analysis suggests that highly accurate prescreening is the most important factor for effective screening, particularly when screening tests are highly dependent, and that moderately accurate prescreening may not be an improvement over treating all cargo containers the same. Moreover, it suggests that screening tests with high true-alarm rates may mitigate some of the risk associated with low prescreening intelligence.
- Published
- 2009
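The budgeted-selection core of the entry above is a 0-1 knapsack: choose which alarmed containers to inspect so as to maximize risk coverage under an inspection budget. A hedged toy version (invented risk scores and costs, textbook dynamic program, none of the paper's Bayesian or reliability modeling):

```python
def knapsack(costs, values, budget):
    """2-D DP table; dp[i][b] = best value using the first i items, budget b."""
    n = len(costs)
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        c, v = costs[i - 1], values[i - 1]
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if b >= c and dp[i - 1][b - c] + v > dp[i][b]:
                dp[i][b] = dp[i - 1][b - c] + v
    picked, b = [], budget
    for i in range(n, 0, -1):            # back-trace the selected containers
        if dp[i][b] != dp[i - 1][b]:
            picked.append(i - 1)
            b -= costs[i - 1]
    return dp[n][budget], sorted(picked)

costs = [4, 3, 2, 5, 3]                  # inspection hours per container
risks = [7.0, 4.5, 3.0, 8.0, 5.5]        # prescreening risk scores
print(knapsack(costs, risks, budget=9))  # (15.5, [0, 2, 4])
```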
11. Analysis of stochastic problem decomposition algorithms in computational grids
- Author
-
Andres Ramos, Jesus M. Latorre, Santiago Cerisola, and Rafael Palacios
- Subjects
Mathematical optimization ,Optimization problem ,Mathematical problem ,Linear programming ,Computer science ,General Decision Sciences ,Management Science and Operations Research ,computer.software_genre ,Grid ,Stochastic programming ,Grid computing ,Theory of computation ,Decomposition method (constraint satisfaction) ,computer ,Algorithm - Abstract
Stochastic programming usually represents uncertainty discretely by means of a scenario tree. This representation leads to exponential growth in the size of stochastic mathematical problems when better accuracy is needed. Trying to solve the problem as a whole, considering all scenarios together, leads to huge memory requirements that surpass the capabilities of current computers. Thus, decomposition algorithms are employed to divide the problem into several smaller subproblems and to coordinate their solution in order to obtain the global optimum. This paper analyzes several decomposition strategies based on the classical Benders decomposition algorithm and applies them in emerging computational grid environments. Most decomposition algorithms are not able to take full advantage of all the computing power available in a grid system because of unavoidable dependencies inherent to the algorithms. However, a special decomposition method presented in this paper aims at reducing the dependency among subproblems, to the point where all the subproblems can be sent simultaneously to the grid. All algorithms have been tested in a grid system, measuring the execution times required to solve standard optimization problems and a real-size hydrothermal coordination problem. Numerical results confirm that this new method outperforms the classical ones when used in grid computing environments.
- Published
- 2008
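A compact single-cut Benders (L-shaped) loop of the kind the entry above parallelizes, on an invented two-stage newsvendor: the first stage orders x at unit cost 1, then each scenario sells min(x, d_s) at price 2. The scenario subproblems are independent, which is what makes grid execution attractive. Assumes scipy >= 1.7 (HiGHS duals via res.ineqlin.marginals); this is a sketch, not the paper's hydrothermal model.

```python
import numpy as np
from scipy.optimize import linprog

demands, probs = np.array([20.0, 40.0, 60.0]), np.array([1 / 3] * 3)
cuts = []  # (gradient g, intercept a) encoding theta >= a + g * x

for _ in range(20):
    # Master: min x + theta subject to the accumulated cuts, 0 <= x <= 100.
    A_ub = [[g, -1.0] for g, a in cuts] or None
    b_ub = [-a for g, a in cuts] or None
    master = linprog(c=[1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, 100), (-1000, None)], method="highs")
    x_hat, theta_hat = master.x

    # Independent scenario subproblems: min -2*y s.t. y <= x_hat, y <= d.
    Q, grad = 0.0, 0.0
    for d, p in zip(demands, probs):
        sub = linprog(c=[-2.0], A_ub=[[1.0], [1.0]], b_ub=[x_hat, d],
                      bounds=[(0, None)], method="highs")
        Q += p * sub.fun
        grad += p * sub.ineqlin.marginals[0]  # dual price of y <= x_hat
    if Q <= theta_hat + 1e-6:                 # no violated cut: optimal
        break
    cuts.append((grad, Q - grad * x_hat))     # theta >= Q + grad*(x - x_hat)

# Negative cost means net profit; this instance converges to x = 40.
print(f"x = {x_hat:.1f}, expected total cost = {x_hat + Q:.2f}")
```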
12. Revival of the Gomory cuts in the 1990’s
- Author
-
Gérard Cornuéjols
- Subjects
Convex hull ,Mathematical optimization ,Linear programming ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,Heuristics ,Travelling salesman problem ,Branch and cut ,Cutting-plane method ,Integer (computer science) ,Mathematics - Abstract
In the early 90’s, the research community was unanimous: In order to solve integer programs of meaningful sizes, one had to exploit the structure of the underlying combinatorial problem; Gomory cuts (Gomory, 1960, 1963) made elegant theory (because they did not require knowledge of the underlying structure) but were utterly useless in practice (because they did not use the underlying structure!). I will come back shortly to these widely held beliefs. A practitioner confronted with a general mixed integer linear program (MILP) who did not have the time or skill to “exploit the underlying structure” had to resort to commercial codes that had been perfected in the early 70’s and had not been improved significantly in the following two decades. These branch-and-bound codes could solve small to medium size instances of MILP’s, but many applications were beyond their ability. Because MILP is NP-hard, it was widely accepted that nothing much could be done about this state of affairs. The sentiment was that there was little research left to do on branch-and-bound algorithms for general MILP’s. On the other hand, branch-and-cut algorithms for structured problems generated a tremendous amount of excitement. Padberg and Rinaldi obtained spectacular results for the traveling salesman problem. The same general philosophy was applied with a varying degree of success to numerous other classes of problems. Typically, for each class, a paper or series of papers would first identify facets of the integer polyhedron (i.e. the convex hull of the feasible solutions to the integer program), then present separation algorithms or heuristics for these facets and finally report some computational results. The success of such branch-and-cut algorithms was attributed to the use of facets of the integer polyhedron. By contrast, prominent researchers had a low opinion of the practical usefulness of Gomory cuts, including Gomory himself. Here are a few representative quotes reflecting the general sentiment in the early 90’s.
- Published
- 2006
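A worked micro-example of the cut the essay above is about. From a simplex tableau row x_B + sum_j a_j x_j = b with b fractional (all variables integer and non-negative), Gomory's fractional cut is sum_j frac(a_j) x_j >= frac(b). The sketch below shows the pure-integer fractional cut; the 1990s revival the essay describes also concerns its mixed-integer variants.

```python
import math

def gomory_cut(row_coeffs, rhs):
    """Return (cut_coeffs, cut_rhs) for the fractional Gomory cut."""
    frac = lambda v: v - math.floor(v)
    return [frac(a) for a in row_coeffs], frac(rhs)

# Tableau row: x3 + 1.5*x1 + 0.25*x2 = 2.75  ->  0.5*x1 + 0.25*x2 >= 0.75
print(gomory_cut([1.5, 0.25], 2.75))  # ([0.5, 0.25], 0.75)
```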
13. An intersecting tree model for odd-diameter-constrained minimum spanning and Steiner trees
- Author
-
Thomas L. Magnanti, Cristina Requejo, and Luis Borges Gouveia
- Subjects
Discrete mathematics ,K-ary tree ,Spanning tree ,Linear programming ,General Decision Sciences ,Management Science and Operations Research ,Minimum spanning tree ,k-minimum spanning tree ,Tree (graph theory) ,Steiner tree problem ,Combinatorics ,Linear programming relaxation ,symbols.namesake ,symbols ,Mathematics - Abstract
In a previous paper, Gouveia and Magnanti (2003) found diameter-constrained minimum spanning and Steiner tree problems to be more difficult to solve when the tree diameter D is odd. In this paper, we provide an alternative modeling approach that views problems with odd diameters as the superposition of two problems with even diameters. We show how to tighten the resulting formulation to develop a model with a stronger linear programming relaxation. The linear programming gaps of the tightened model are very small, typically less than 0.5%, and are usually one third to one tenth of the gaps of the best previous model described in Gouveia and Magnanti (2003). Moreover, the new model permits us to solve large Euclidean problem instances that are not solvable by prior approaches.
- Published
- 2006
14. A new approach based on the surrogating method in the project time compression problems
- Author
-
Hadi Mohammadi Bidhandi
- Subjects
Mathematical optimization ,Linear programming ,Computer science ,Time compression ,Decomposition (computer science) ,General Decision Sciences ,Management Science and Operations Research ,Benders' decomposition ,Type (model theory) ,Algorithm ,Integer (computer science) - Abstract
This paper develops a mathematical model for project time compression problems in CPM/PERT type networks. It is noted that this formulation is an adequate approximation for solving the time compression problem with any continuous and non-increasing time-cost curve. The model is a mixed-integer linear program (MILP) with zero-one variables, and a Benders decomposition procedure for analyzing it is developed. The paper then proposes a new approach based on the surrogating method for solving these problems. In addition, the computer programs required to execute the algorithm were prepared by the author. An illustrative example is solved by the new algorithm, and the two methods are compared on several numerical examples. Computational experience with these data shows the superiority of the new approach.
- Published
- 2006
15. Mixed Integer Linear Programming in Process Scheduling: Modeling, Algorithms, and Applications
- Author
-
Christodoulos A. Floudas and Xiaoxia Lin
- Subjects
Mathematical optimization ,Linear programming ,Branch and bound ,Mathematical model ,Computer science ,Branch and price ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,Work in process ,Integer programming ,Scheduling (computing) - Abstract
This paper reviews the advances of mixed-integer linear programming (MILP) based approaches for the scheduling of chemical processing systems. We focus on the short-term scheduling of general network-represented processes. First, the various mathematical models that have been proposed in the literature are classified mainly based on their time representation. Discrete-time and continuous-time models are presented along with their strengths and limitations. Several classes of approaches for improving the computational efficiency of solving MILP problems are discussed. Furthermore, a summary of computational experiences and applications is provided. The paper concludes with perspectives on future research directions for MILP-based process scheduling technologies.
- Published
- 2005
16. [Untitled]
- Author
-
James W. George, Charles ReVelle, and John R. Current
- Subjects
education.field_of_study ,Branch and bound ,Linear programming ,Population ,Complete graph ,General Decision Sciences ,Management Science and Operations Research ,Facility location problem ,Combinatorics ,Tree (data structure) ,Node (computer science) ,Tree network ,education ,Mathematics - Abstract
A number of network design problems can be built on the following premise: Given a tree network, T, containing node set, V, identify a single subtree, t, containing nodes, v, so that the subtree is located optimally with respect to the remaining, unconnected nodes {V−v}. Distances between unconnected nodes and nodes in the subtree t can be defined on travel paths that are restricted to lie in the larger tree T (the travel-restricted case), or can be defined on paths in an auxiliary complete graph G (the travel-unrestricted case). This paper presents the Maximum Utilization Subtree Problem (MUSP), a bicriterion problem that trades off the cost of a subtree, t, against the utilization of the subtree, measured as the sum of the populations at nodes connected to the subtree plus the distance-attenuated population that must travel to the subtree from unconnected nodes. The restricted and unrestricted cases are formulated as two-objective integer programs in which the objectives are to maximize the utilization of the subtree and to minimize its cost. The programs are tested using linear programming, with branch and bound to resolve fractions. The types of problems presented in this paper have been characterized in the existing literature as "structure location" or "extensive facility location" problems. This paper adds two significant contributions to the general body of location literature. First, it draws explicit attention to the travel-restricted and travel-unrestricted cases, which may also be called "limited-access" and "general-access" cases, respectively. Second, the distance-attenuated demands represent a new objective function concept that does not appear in the location literature.
- Published
- 2002
17. [Untitled]
- Author
-
Amal de Silva
- Subjects
Concurrent constraint logic programming ,Mathematical optimization ,Linear programming ,Computer science ,General Decision Sciences ,Management Science and Operations Research ,Constraint satisfaction ,Inductive programming ,System programming ,Procedural programming ,Theory of computation ,Constraint logic programming ,Constraint programming ,Reactive programming ,Column generation ,Fifth-generation programming language ,Functional reactive programming - Abstract
This paper provides details of a successful application in which the column generation algorithm was used to combine constraint programming and linear programming. In the past, constraint programming and linear programming were considered two competing technologies that solved similar types of problems, each with its own strengths and weaknesses. This paper shows that the two technologies can be combined to extract the strengths of both. Details of a real-world application to optimize bus driver duties are given. The system was developed by ILOG for a major software house in Japan using ILOG-Solver and ILOG-CPLEX, constraint programming and linear programming C/C++ libraries.
- Published
- 2001
18. New common set of weights method in black-box and two-stage data envelopment analysis
- Author
-
Reza Kazemi Matin and Hamid Kiaei
- Subjects
Set (abstract data type) ,Mathematical optimization ,Fractional programming ,Linear programming ,Computer science ,Black box ,Theory of computation ,Data envelopment analysis ,General Decision Sciences ,Production (economics) ,Management Science and Operations Research ,Inefficiency - Abstract
Data envelopment analysis (DEA) strives to evaluate production units under their best conditions. DEA's flexibility in selecting the appropriate input/output weights often results in unrealistic or zero weights, and treating decision-making units (DMUs) as black boxes, regardless of their internal structures, can mislead the DEA performance evaluation; considering units as network processes makes it possible to identify more sources of inefficiency. This paper suggests a new common set of weights (CSWs) approach to evaluate units with both black-box and two-stage structures based on a unified criterion. Our contributions are as follows: First, we improve the model proposed by Kao and Hung (J Oper Res Soc 56(10): 1196-1203, 2005) to calculate the CSWs in a linear optimization model. Second, a new CSWs method is suggested for two-stage network DEA (NDEA) as a multiple-objective fractional programming (MOFP) problem. Third, the MOFP problem is converted into a single-objective linear programming problem in the two-stage network case. Finally, an enlightening application is presented.
- Published
- 2021
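For context on the entry above, here is the classical black-box CCR efficiency LP (multiplier form) that common-set-of-weights methods start from: one LP per DMU, each choosing its own most favourable weights, which is exactly the flexibility a CSW approach removes. The data (2 inputs, 1 output, 4 DMUs) are invented; this is the standard model, not the paper's CSW formulation.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.5], [1.2], [2.0]])                      # outputs

def ccr_efficiency(o):
    """Variables [u..., v...]: maximize u.y_o subject to v.x_o = 1 and
    u.y_j - v.x_j <= 0 for every DMU j (all weights non-negative)."""
    s, m = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])      # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])
    A_eq = [np.concatenate([np.zeros(s), X[o]])]  # normalization v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(X)),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```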
19. The use of mathematical programming to verify rule-based knowledge
- Author
-
Joanne R. Schaffer and Daniel E. O'Leary
- Subjects
Symbolic programming ,Mathematical optimization ,Theoretical computer science ,Linear programming ,Knowledge representation and reasoning ,Computer science ,business.industry ,Semantics (computer science) ,General Decision Sciences ,Management Science and Operations Research ,Inductive programming ,Dynamic programming ,Software ,Procedural programming ,Goal programming ,Programming paradigm ,N-version programming ,Constraint programming ,Reactive programming ,Fifth-generation programming language ,Programming domain ,business ,Integer programming ,Functional reactive programming ,Declarative programming - Abstract
The purpose of this paper is to use mathematical programming, including linear programming, dynamic programming, integer programming and goal programming, to verify rule-based knowledge. We investigate both domain-independent verification, exploiting the general structure of rules, and domain-dependent verification, exploiting structure in the domain. Mathematical programming software is readily available and very efficient; as a result, verification using mathematical programming can be very efficient at finding errors. Mathematical programming can also do more than find errors in knowledge representation. Once an error has been found, mathematical programming can be used to "recommend" an alternative. The recommendation can take into account the previously verified knowledge, to mitigate the potential introduction of redundant knowledge and to help guide the choice process. Normally the development of recommendations to fix errors has been ignored in the verification literature and treated as a separate knowledge acquisition task. Accordingly, this paper also extends the verification effort by providing recommendations on how to fix errors.
- Published
- 1996
20. N-1-1 contingency-constrained unit commitment with renewable integration and corrective actions
- Author
-
Jose L. Ruiz Duarte, Daniel A. Zuniga Vazquez, Neng Fan, and Feng Qiu
- Subjects
Linear programming ,Computer science ,business.industry ,Economic dispatch ,General Decision Sciences ,Management Science and Operations Research ,Reliability engineering ,Power (physics) ,Renewable energy ,Electric power system ,Power system simulation ,Theory of computation ,Sensitivity (control systems) ,business - Abstract
Meeting customers' power demands is crucial for energy companies, even when unexpected and consecutive failures occur. This task has become considerably more complex due to the high integration of renewable energies and their intermittent behavior. It is therefore important to achieve reliable power supply based on a criterion closer to real-life system operations and capable of addressing consecutive failures. The N-1-1 contingency involves the loss of a single transmission line or generation unit, followed by system adjustments; afterward, the power system experiences the subsequent loss of an additional generation unit or transmission line. This paper presents a power system unit commitment problem considering the N-1-1 reliability criterion, with operational compliance checks on economic dispatch and power flows under contingency states and renewable energy integration. Corrective actions are also included to determine when the failed components are restored. To address the complexity caused by renewable energy integration, reliable unit commitment is achieved under the worst-case renewable output. The formulation results in an extremely large-scale adaptive robust mixed-integer linear programming model. For an efficient solution, a variation of the nested column-and-constraint generation algorithm is designed. Besides using susceptances and phase angles to model the power flow, linear sensitivity factors are applied to improve computational performance. The proposed models and algorithms are evaluated on modified IEEE 6-bus, 14-bus, and 118-bus test systems to confirm their effectiveness.
- Published
- 2021
21. Development of an energy-emission model using fuzzy sets
- Author
-
Otto Rentz and Carsten Oder
- Subjects
Mathematical optimization ,Linear programming ,Astrophysics::High Energy Astrophysical Phenomena ,Fuzzy set ,General Decision Sciences ,Management Science and Operations Research ,Fuzzy logic ,Energy accounting ,Reduction (complexity) ,Energy transformation ,Energy supply ,InformationSystems_MISCELLANEOUS ,Astrophysics::Galaxy Astrophysics ,Energy (signal processing) ,Mathematics - Abstract
In searching for cost-efficient strategies to reduce emissions from energy conversion, most western countries use energy-emission models. In these models, the whole energy conversion chain and possible future options for energy supply and emission reduction are mapped into a network of energy flows. The total discounted cost of energy supply and emission reduction is minimized under the restriction of maximum allowed emissions of SO2, NOx, or CO2. The present paper extends one of these models to allow for fuzzy parameters. Such an extension appears to be useful when the data situation is weak. In this paper, a fuzzy linear program is developed, which has been applied to an energy-emission model of Lithuania.
- Published
- 1994
22. Vague matrices in linear programming
- Author
-
Josef Nedoma
- Subjects
Algebra ,Matrix (mathematics) ,Mathematical optimization ,Linear programming ,Theory of computation ,MathematicsofComputing_NUMERICALANALYSIS ,Regular polygon ,General Decision Sciences ,Computer Science::Artificial Intelligence ,Management Science and Operations Research ,Simultaneous optimization ,Degeneracy (mathematics) ,Mathematics - Abstract
This paper deals with so-called vague matrices, the columns of which are convex sets. A special "square" problem of vague optimization is analysed. The results form the basis for a subsequent outline of an algorithm for solving the LP problem with a vague matrix. The paper concludes with a discussion of possible types of degeneracy.
- Published
- 1993
23. Algorithms for large scale set covering problems
- Author
-
Nicos Christofides and José M. P. Paixão
- Subjects
Mathematical optimization ,Linear programming ,Heuristic (computer science) ,General Decision Sciences ,Covering problems ,Set cover problem ,Management Science and Operations Research ,symbols.namesake ,Lagrangian relaxation ,symbols ,State space ,Relaxation (approximation) ,Row ,Algorithm ,Mathematics - Abstract
This paper is concerned with the set covering problem (SCP), and in particular with the development of a new algorithm capable of solving large-scale SCPs of the size found in real-life situations. The set covering problem has a wide variety of practical applications which, in general, yield large and sparse instances, normally with hundreds of rows and thousands of columns. In this paper, we present an algorithm capable of solving problems of this size; test problems with up to 400 rows and 4000 columns are solved. The method consists of a tree-search procedure based on a combination of decomposition and state space relaxation, a technique developed for obtaining lower bounds on the dynamic program associated with a combinatorial optimization problem. The large SCPs are decomposed into many smaller SCPs, which are then solved or bounded by state space relaxation (SSR). Before the decomposition and SSR are applied, reductions in both the number of columns and the number of rows of the problem are made by applying Lagrangian relaxation, linear programming and heuristic methods.
- Published
- 1993
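The Lagrangian bounding step mentioned at the end of the entry above has a compact form for set covering: relaxing the covering constraints with multipliers u >= 0 gives L(u) = sum_i u_i + sum_j min(0, c_j - sum_{i in col j} u_i), a valid lower bound for any u, improved here by plain subgradient ascent. The instance is invented.

```python
costs = [3.0, 2.0, 4.0, 1.0]               # column costs
covers = [{0, 1}, {1, 2}, {0, 2, 3}, {3}]  # rows covered by each column
n_rows = 4

u = [0.0] * n_rows
for it in range(50):
    reduced = [c - sum(u[i] for i in rows) for c, rows in zip(costs, covers)]
    L = sum(u) + sum(min(0.0, rc) for rc in reduced)  # lower bound at u
    chosen = [j for j, rc in enumerate(reduced) if rc < 0]
    # subgradient: 1 minus how often row i is covered by the priced-in columns
    g = [1 - sum(i in covers[j] for j in chosen) for i in range(n_rows)]
    step = 1.0 / (it + 1)
    u = [max(0.0, ui + step * gi) for ui, gi in zip(u, g)]

print(f"Lagrangian lower bound after 50 iterations: {L:.3f}")
```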
24. LP modeling languages for personal computers: A comparison
- Author
-
David M. Steiger and Ramesh Sharda
- Subjects
Software documentation ,Linear programming ,Computer science ,Modeling language ,business.industry ,Programming language ,General Decision Sciences ,AMPL ,Management Science and Operations Research ,Solver ,computer.software_genre ,Cursor (databases) ,Theory of computation ,Artificial intelligence ,business ,computer ,computer.programming_language - Abstract
The purpose of this paper is to compare and contrast the modeling capabilities of seven algebraic modeling languages (ML) available today, namely, AMPL, GAMS, LINGO, LPL, MPL, PC-PROG and XPRESS-LP. In general, these MLs do an excellent job of providing an interface with which the modeler can specify an algebraically formatted linear program (LP). That is, each ML provides a substantial improvement in time and convenience over the matrix generator/report writers of the last few decades. Further, each of the MLs provides: (1) significant flexibility in model specification, instantiation and modification, (2) effective and efficient conversion from algebraic to solver format, and (3) an understandable and, for the most part, self-documenting model representation. In addition, each of the MLs is constantly being updated and upgraded to provide additional capabilities sought by practitioners and users. However, as shown in the fifteen tables provided in the body of this paper, each ML has its own set of competitive advantages. For example, the most integrated environments (i.e. those integrating the modeling language with a full-screen editor, data import capabilities and a solver) are provided by LINGO and PC-PROG. The most user-friendly interfaces are provided by MPL and PC-PROG, both of which provide window-based interfaces to create models and pop-up windows to display error messages; MPL also uses pull-down menus to specify various operations, whereas PC-PROG uses function keys for operational control. Package costs are led by a current (March, 1991) introductory offer from LINGO. Modeling effectiveness, especially with respect to flexibility in specifying arithmetic statements, is led by GAMS and LPL. Model compactness, as measured by the number of lines required to specify a model, is led by AMPL, LPL, MPL and PC-PROG; LPL, MPL and PC-PROG also provide context sensitive editors which automatically position the cursor where the error was detected. And finally, the most comprehensive user documentation is provided by GAMS, whereas GAMS, LINGO and LPL provide extensive libraries of sample models for those users who “learn by example”.
- Published
- 1993
25. Pivot rules for linear programming: A survey on recent theoretical developments
- Author
-
Tamás Terlaky and Shuzhong Zhang
- Subjects
Algebra ,Parametric programming ,Class (set theory) ,Recursion ,Simplex ,Linear programming ,Simplex algorithm ,Theory of computation ,Calculus ,General Decision Sciences ,Management Science and Operations Research ,Interior point method ,Mathematics - Abstract
The purpose of this paper is to discuss the various pivot rules of the simplex method and its variants that have been developed in the last two decades, starting from the appearance of the minimal index rule of Bland. We are mainly concerned with the finiteness properties of simplex-type pivot rules. Well-known classical results concerning the simplex method are not considered in this survey, but the connection between the new pivot methods and the classical ones, if any, is discussed. We discuss three classes of recently developed pivot rules for linear programming. The first and largest class is that of essentially combinatorial pivot rules, including minimal-index-type rules and recursive rules; these rules use only the labeling and signs of the variables. The second class contains pivot rules which can be considered variants, generalizations or specializations of Lemke's method, and which are therefore closely related to parametric programming. The rules in the last class all have close connections to certain interior point methods. Finally, we mention some open problems for future research.
- Published
- 1993
26. Monte Carlo (importance) sampling within a benders decomposition algorithm for stochastic linear programs
- Author
-
Gerd Infanger
- Subjects
Mathematical optimization ,Linear programming ,Stochastic process ,Numerical analysis ,Theory of computation ,Monte Carlo method ,General Decision Sciences ,Sampling (statistics) ,Management Science and Operations Research ,Project portfolio management ,Algorithm ,Importance sampling ,Mathematics - Abstract
This paper focuses on Benders decomposition techniques and Monte Carlo sampling (importance sampling) for solving two-stage stochastic linear programs with recourse, a method first introduced by Dantzig and Glynn [7]. The algorithm is discussed and further developed. The paper gives a complete presentation of the method as it is currently implemented. Numerical results from test problems of different areas are presented. Using small test problems, we compare the solutions obtained by the algorithm with universe solutions. We present the solutions of large-scale problems with numerous stochastic parameters, which in the deterministic formulation would have billions of constraints. The problems concern expansion planning of electric utilities with uncertainty in the availabilities of generators and transmission lines and portfolio management with uncertainty in the future returns.
- Published
- 1992
27. A unified view of interior point methods for linear programming
- Author
-
David F. Shanno and Ansuman Bagghi
- Subjects
Mathematical optimization ,Logarithm ,Linear programming ,Theory of computation ,General Decision Sciences ,Applied mathematics ,Karmarkar's algorithm ,Affine transformation ,Management Science and Operations Research ,Interior point method ,Linear-fractional programming ,Mathematics ,Dual (category theory) - Abstract
The paper shows how various interior point methods for linear programming may all be derived from logarithmic barrier methods. These methods include primal and dual projective methods, affine methods, and methods based on the method of centers. In particular, the paper demonstrates that Karmarkar's algorithm is equivalent to a classical logarithmic barrier method applied to a problem in standard form.
- Published
- 1990
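A compact reference for the barrier viewpoint in the entry above (standard notation, not the paper's own derivation):

```latex
% The log-barrier view replaces x >= 0 in  min c^T x  s.t.  Ax = b, x >= 0
% by a penalty term,
%     min  c^T x - \mu \sum_{j=1}^{n} \ln x_j   s.t.  Ax = b,
% whose optimality (KKT) conditions define the central path:
\[
  Ax = b, \qquad A^{\mathsf T} y + s = c, \qquad x_j s_j = \mu \quad (j = 1,\dots,n),
\]
% recovering the usual primal-dual optimality conditions as \mu \to 0.
% The paper's point is that projective (Karmarkar), affine, and
% method-of-centers variants all arise from this single barrier framework.
```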
28. A Lagrangian relaxation algorithm for optimizing a bi-objective agro-supply chain model considering CO2 emissions
- Author
-
Seyed Hamid Reza Pasandideh and Fatemeh Keshavarz-Ghorbani
- Subjects
021103 operations research ,Linear programming ,Total cost ,Computer science ,0211 other engineering and technologies ,General Decision Sciences ,Robust optimization ,Context (language use) ,02 engineering and technology ,Management Science and Operations Research ,Purchasing ,symbols.namesake ,Lagrangian relaxation ,Perishability ,symbols ,Algorithm - Abstract
In this research, an agro-supply chain is investigated in the context of both economic and environmental issues. To this end, a bi-objective model is formulated as a mixed-integer linear program that aims to minimize total costs and CO2 emissions. It integrates purchasing, transportation, and storage decisions, considering specific characteristics of agro-products such as seasonality, perishability, and uncertainty. This study provides a different set of temperature conditions for preserving products from spoilage. In addition, a robust optimization approach is used to tackle the uncertainty in this paper. The ε-constraint method is then used to convert the bi-objective model into a single-objective one. To solve the problem, a Lagrangian relaxation algorithm is applied as an efficient approach that gives lower bounds for the original problem and is used to estimate upper bounds. Finally, a real case study is presented to give valuable insight by assessing the impacts of uncertainty on system costs.
- Published
- 2021
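A minimal sketch of the ε-constraint conversion described in the entry above, on an invented two-objective knapsack-style MILP: minimize cost while the second objective (emissions) is capped by a sweeping ε. Assumes PuLP with its bundled CBC solver; the paper's Lagrangian bounding step is not reproduced here.

```python
import pulp

items = range(5)
cost = [4.0, 6.0, 3.0, 7.0, 5.0]        # objective 1: total cost
emissions = [5.0, 2.0, 6.0, 1.0, 4.0]   # objective 2: CO2 emissions
need = 3                                # pick at least 3 items

frontier = []
for eps in [15.0, 12.0, 9.0, 6.0]:      # sweep the emission budget
    m = pulp.LpProblem("eps_constraint", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in items]
    m += pulp.lpSum(cost[i] * x[i] for i in items)              # minimize f1
    m += pulp.lpSum(x) >= need
    m += pulp.lpSum(emissions[i] * x[i] for i in items) <= eps  # f2 <= eps
    if m.solve(pulp.PULP_CBC_CMD(msg=False)) == pulp.LpStatusOptimal:
        f2 = sum(emissions[i] * x[i].value() for i in items)
        frontier.append((pulp.value(m.objective), f2))

print(frontier)  # candidate Pareto points; dominated ones can be filtered out
```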
29. DEA under big data: data enabled analytics and network data envelopment analysis
- Author
-
Joe Zhu
- Subjects
Optimization problem ,Linear programming ,business.industry ,Computer science ,Supply chain ,Big data ,General Decision Sciences ,Benchmarking ,Management Science and Operations Research ,computer.software_genre ,Analytics ,Data envelopment analysis ,Data mining ,Dimension (data warehouse) ,business ,computer - Abstract
This paper proposes that data envelopment analysis (DEA) be viewed as a method (or tool) for data-oriented analytics in performance evaluation and benchmarking. While computational algorithms have been developed to deal with large volumes of data (decision making units, inputs, and outputs) under conventional DEA, valuable information hidden in big data represented by network structures should also be extracted by DEA. These network structures, e.g., transportation and logistics systems, encompass a broader range of inter-linked metrics that cannot be modelled by conventional DEA. It is proposed that network DEA is related to the value dimension of big data. It is shown that network DEA is different from standard DEA, although it bears the name of DEA and some similarity with the conventional approach. Network DEA is big-data-enabled analytics ("big DEA") when multiple performance metrics or attributes are linked through network structures that are too large or complex to be dealt with by conventional DEA. Unlike conventional DEA models, which are solved via linear programming, general network DEA corresponds to nonconvex optimization problems. This represents an opportunity to develop techniques for solving non-linear network DEA models. Areas such as transportation and logistics systems, as well as supply chains, have great potential for using network DEA in big data modeling.
- Published
- 2020
30. Duality in balance optimization subset selection
- Author
-
Jason J. Sauppe, Sheldon H. Jacobson, and Hee Youn Kwon
- Subjects
Mathematical optimization ,021103 operations research ,Optimization problem ,Linear programming ,0211 other engineering and technologies ,General Decision Sciences ,Duality (optimization) ,02 engineering and technology ,Management Science and Operations Research ,Boss ,Causal inference ,Covariate ,Theory of computation ,Mathematics - Abstract
In this paper, we investigate a specific optimization problem that arises in the context of Balance Optimization Subset Selection (BOSS), which is an optimization framework for causal inference. Most BOSS problems can be formulated as mixed integer linear programs. By relaxing the integrality constraints so that fractional contributions of control units are permitted, a linear program (LP) is obtained. Properties of this LP and its dual are investigated and a sensitivity analysis is conducted to characterize how the objective value changes as the covariate values are perturbed.
- Published
- 2020
31. An exact scalarization method with multiple reference points for bi-objective integer linear optimization problems
- Author
-
Angelo Aliano Filho, Washington Alves de Oliveira, Margarida Vaz Pato, and Antonio Carlos Moretti
- Subjects
Mathematical optimization ,021103 operations research ,Linear programming ,Computer science ,Theory of computation ,0211 other engineering and technologies ,Bi objective ,Pareto principle ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Weighting - Abstract
This paper presents an exact scalarization method to solve bi-objective integer linear optimization problems. The method uses diverse reference points across iterations and is free of any a priori chosen weighting factors. In addition, two scalarization methods newly adapted from the literature and the modified Tchebycheff method are studied; each of them yields a different way to obtain the Pareto frontier. Computational experiments were performed with random real-size instances of two special problems related to the manufacturing industry, involving lot sizing and cutting stock problems. Extensive tests confirmed the very good performance of the new scalarization method with respect to computational effort, the number of solutions achieved, the ability to achieve different solutions, and the spread and spacing of solutions on the Pareto frontier.
- Published
- 2019
32. Duality theory in Atanassov’s intuitionistic fuzzy mathematical programming problems: Optimistic, pessimistic and mixed approaches
- Author
-
Vishnu Singh, Sujeet Kumar Singh, and Shiv Prasad Yadav
- Subjects
Mathematical optimization ,Duality gap ,Linear programming ,Computer science ,Bounded function ,Fuzzy set ,Duality (mathematics) ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,Fuzzy logic ,Dual (category theory) - Abstract
Linear programming problems in fuzzy environments have been investigated by many researchers in recent years. Some researchers have solved these problems by using a primal-dual method with linear and exponential membership functions, which are particular forms of the reference functions. In this paper, we introduce a pair of primal-dual programming problems in Atanassov's intuitionistic fuzzy environment (AIFE), in which the membership and non-membership functions are taken in the form of reference functions, and we prove duality results in AIFE using an aspiration-level approach with different viewpoints, viz., optimistic, pessimistic and mixed. Since fuzzy and AIF environments cause a duality gap, we propose to investigate the impact of membership functions governed by reference functions on the duality gap. This is especially meaningful for fuzzy and AIF programming problems when the primal and dual objective values may not be bounded. Finally, the duality gap obtained by the approach is compared with the duality gap obtained by existing approaches.
- Published
- 2019
33. Polyhedral results for position-based scheduling of chains on a single machine
- Author
-
Tamás Kis and Markó Horváth
- Subjects
Mathematical optimization ,021103 operations research ,Speedup ,Linear programming ,Job shop scheduling ,Computer science ,0211 other engineering and technologies ,Preemption ,General Decision Sciences ,QA75 Electronic computers. Computer science / számítástechnika, számítógéptudomány ,02 engineering and technology ,Management Science and Operations Research ,Scheduling (computing) ,Polyhedron ,Theory of computation - Abstract
We consider a scheduling problem in which a set of unit-time jobs has to be sequenced on a single machine without any idle time between jobs, and preemption is not allowed. The processing cost of a job is determined by its position in the sequence: for each job and each position there is an associated weight, and one has to determine a sequence of jobs that minimizes the total weight incurred by the positions of the jobs. In addition, the ordering of the jobs must satisfy given chain-precedence constraints. In this paper we show that this problem is NP-hard even in the special case where each chain consists of two jobs (2-chains). Further, we study the polyhedron associated with the problem and present a class of valid inequalities along with a polynomial-time separation procedure, showing that some of these inequalities are facet-defining in the special case of 2-chains. Finally, we present computational results confirming that separating these inequalities can significantly speed up a linear programming based branch-and-bound procedure for solving the problem with chains of two jobs.
- Published
- 2019
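A hedged companion to the entry above: with no precedence constraints, position-based scheduling of unit-time jobs is exactly a linear assignment problem (job to position), solvable in polynomial time; it is the chains that make the problem NP-hard. The weight matrix below is invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# weight[j][p] = cost of putting job j in sequence position p
weight = np.array([[4, 2, 8, 5],
                   [1, 6, 3, 7],
                   [5, 3, 2, 6],
                   [3, 5, 4, 1]])

jobs, positions = linear_sum_assignment(weight)  # Hungarian-type algorithm
print(list(zip(jobs, positions)), weight[jobs, positions].sum())  # total 6
```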
34. Stochastic facility location model for drones considering uncertain flight distance
- Author
-
Kyungsik Lee, Dong-Wook Kim, and Ilkyeong Moon
- Subjects
021103 operations research ,Humanitarian Logistics ,Operations research ,Linear programming ,Emergency management ,Computer science ,Heuristic (computer science) ,business.industry ,Stochastic modelling ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Drone ,Facility location problem ,Stochastic programming ,Theory of computation ,business - Abstract
This paper develops a stochastic modelling framework to determine the locations and transport capacities of drone facilities for coping effectively with a disaster. The developed model is applicable to emergency planning that incorporates drones into humanitarian logistics while taking into account the uncertain characteristics of drone operating conditions. Because of the importance of speedy decision making in disaster management, a heuristic algorithm was developed using Benders decomposition, which generates time-efficient, high-quality solutions; a linear programming rounding method was used to make the algorithm efficient. Computational experiments demonstrated the superiority of the developed algorithm, and a sensitivity analysis was carried out to gain additional insights.
- Published
- 2018
35. Approximation schemes for r-weighted Minimization Knapsack problems
- Author
-
Khaled Elbassioni, Trung Thanh Nguyen, and Areg Karapetyan
- Subjects
Mathematical optimization ,021103 operations research ,Linear programming ,Computer science ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Quadratic equation ,Knapsack problem ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Minification ,Computer Science::Data Structures and Algorithms ,Greedy algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS ,Integer (computer science) ,Curse of dimensionality - Abstract
Stimulated by salient applications arising in power systems, this paper studies a class of non-linear Knapsack problems with non-separable quadratic constraints, formulated in either binary or integer form. These problems resemble the duals of the corresponding variants of the 2-weighted Knapsack problem (a.k.a. the complex-demand Knapsack problem), which has been studied in the extant literature under the paradigm of smart grids. Nevertheless, the techniques that yield a polynomial-time approximation scheme (PTAS) for the 2-weighted Knapsack problem are not amenable to its minimization version. We instead propose a greedy geometry-based approach that arrives at a quasi-PTAS (QPTAS) for the minimization variant with boolean variables. For the integer formulation, a linear programming-based method is developed that obtains a PTAS. In view of the curse of dimensionality, fast greedy heuristic algorithms are also presented; their performance is corroborated extensively by empirical simulations under diverse settings and scenarios.
- Published
- 2018
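As a flavour of the fast greedy heuristics mentioned (not the paper's algorithm or its QPTAS), the sketch below treats a minimization knapsack with a non-separable quadratic covering constraint in the complex-demand style: items contribute a complex "supply" d_i at cost c_i, and we greedily add the cheapest items per unit of magnitude until |sum d_i| reaches a target D. All data and the selection rule are invented.

```python
# Greedy cover for a toy complex-demand minimization knapsack.
def greedy_min_knapsack(items, D):
    """items: list of (cost, complex_demand); D: required magnitude."""
    chosen, total = [], 0j
    remaining = sorted(items, key=lambda it: it[0] / max(abs(it[1]), 1e-9))
    for cost, d in remaining:                 # cheapest per unit magnitude first
        if abs(total) >= D:                   # quadratic constraint |total|^2 >= D^2 met
            break
        chosen.append((cost, d))
        total += d
    return chosen, abs(total) >= D

items = [(4, 3 + 1j), (2, 1 + 2j), (3, 2 + 2j), (1, 0.5 + 0.5j)]
sol, feasible = greedy_min_knapsack(items, D=4.0)
print("total cost:", sum(c for c, _ in sol), "feasible:", feasible)
```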
36. Measures of global sensitivity in linear programming: applications in banking sector
- Author
- Mike G. Tsionas and Dionisis Philippas
- Subjects
Robustification ,Mathematical optimization ,021103 operations research ,Linear programming ,Uncertain data ,Inequality ,Computer science ,media_common.quotation_subject ,Bayesian probability ,0211 other engineering and technologies ,General Decision Sciences ,Sample (statistics) ,02 engineering and technology ,Management Science and Operations Research ,Joint probability distribution ,Linear problem ,Statistical inference ,Random variable ,media_common - Abstract
The paper examines the sensitivity of the solution of linear programming problems using Bayesian techniques when the coefficients of the objective function are uncertain. When data is available, we estimate the solution of the linear program and provide statistical measures of uncertainty through the posterior distributions of the solution in the light of the data. When data is not available, these techniques examine the sensitivity of the solution to random variation in the coefficients of the linear problem. The new techniques are based on two posteriors emerging from the inequalities of the Karush–Kuhn–Tucker conditions. The first posterior is asymptotic and does not require data. The second posterior is finite-sample-based and is used whenever data is available or random samples can be drawn from the joint distribution of the coefficients. A by-product of our framework is a robust solution. We illustrate the new techniques in two empirical applications to uncertain Data Envelopment Analysis efficiency, involving a large sample of US commercial banks and a sample of European commercial banks regulated by the Single Supervisory Mechanism. We analyse whether pre-determined criteria, associated with size and the new supervisory framework, can affect the solution of the linear program. The results provide evidence of substantial improvements in statistical structure with respect to sensitivities and robustification. Our methodology can serve as a consistency check of the statistical inference for the solution of linear programming problems in efficiency analysis under data uncertainty.
- Published
- 2021
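The paper's posteriors are derived from the Karush–Kuhn–Tucker inequalities; the toy below shows only the simpler Monte Carlo intuition behind such sensitivity analysis: draw the uncertain objective coefficients from an assumed distribution, re-solve the LP each time with scipy, and summarize how the optimal solution moves. The constraint data and the normal sampling model are invented.

```python
# Monte Carlo sensitivity of an LP solution to random objective coefficients.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A_ub = [[1, 1], [2, 1]]
b_ub = [4, 6]
solutions = []
for _ in range(500):
    c = rng.normal([-1.0, -2.0], 0.2)    # uncertain objective (maximize via -c)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    solutions.append(res.x)

solutions = np.array(solutions)
print("mean solution:", solutions.mean(axis=0))
print("std of solution:", solutions.std(axis=0))
```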
37. Total dual integrality of the linear complementarity problem
- Author
- Naonori Kakimura, Kazuhisa Makino, and Hanna Sumita
- Subjects
Pure mathematics ,021103 operations research ,Linear programming ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Quantitative Biology::Genomics ,Linear complementarity problem ,Dual (category theory) ,Orientation (vector space) ,Total dual integrality ,Matrix (mathematics) ,Unimodular matrix ,Computer Science::Data Structures and Algorithms ,Coefficient matrix ,Mathematics - Abstract
In this paper, we introduce total dual integrality of the linear complementarity problem (LCP) by analogy with the linear programming problem. The main idea is to propose the LCP with orientation, a variant of the LCP whose feasible complementary cones are specified by an additional input vector. We then naturally define the dual problem of the LCP with orientation and total dual integrality of the LCP. We show that if the LCP is totally dual integral, then all basic solutions are integral. If the input matrix is sufficient or rank-symmetric and the LCP is totally dual integral, this result implies that the LCP has an integral solution whenever it has a solution. We also introduce a class of matrices such that any LCP instance having such a matrix as its coefficient matrix is totally dual integral. We investigate relationships with matrix classes from the LCP literature such as principally unimodular matrices, which are known to guarantee that all basic solutions to the LCP are integral for any integral input vector. In addition, we show that it is coNP-hard to decide whether a given LCP instance is totally dual integral.
- Published
- 2018
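For readers unfamiliar with basic solutions of an LCP(q, M) — find z >= 0 with w = Mz + q >= 0 and z_i w_i = 0 — the brute-force sketch below enumerates candidate supports of z on a tiny instance, so integrality of the resulting basic solutions can be inspected directly. It is purely illustrative and exponential in n; the matrix and vector are made up.

```python
# Enumerate basic solutions of a small LCP by trying every support of z.
from itertools import combinations
import numpy as np

def lcp_basic_solutions(M, q):
    n = len(q)
    sols = []
    for k in range(n + 1):
        for supp in combinations(range(n), k):   # indices where z may be nonzero
            S = list(supp)
            comp = [i for i in range(n) if i not in S]
            A = np.eye(n)
            A[:, S] = -M[:, S]                   # unknowns: z_i (i in S), w_i (i not in S)
            try:
                v = np.linalg.solve(A, q)        # solves w - M z = q on this support
            except np.linalg.LinAlgError:
                continue                         # degenerate support, skip
            z, w = np.zeros(n), np.zeros(n)
            z[S] = v[S]
            w[comp] = v[comp]
            if (z >= -1e-9).all() and (w >= -1e-9).all():
                sols.append((z, w))
    return sols

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -4.0])
for z, w in lcp_basic_solutions(M, q):
    print("z =", z, "w =", w)
```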
38. Mathematical modelling and heuristic approaches to the location-routing problem of a cost-effective integrated solid waste management
- Author
- Hossein Asefi, Shahrooz Shahparvari, Samsung Lim, and Mojtaba Maghrebi
- Subjects
Mathematical optimization ,021103 operations research ,Municipal solid waste ,Linear programming ,Total cost ,Heuristic (computer science) ,Computer science ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Variable (computer science) ,Simulated annealing ,Routing (electronic design automation) - Abstract
Integrated solid waste management (ISWM) comprises the activities and processes to collect, transport, treat, recycle and dispose of municipal solid waste. This paper addresses the ISWM location-routing problem, in which different types of municipal solid waste are factored concurrently into an integrated system with all interrelated facilities. To support a cost-effective ISWM system, the number and locations of the system’s components (i.e. transfer stations and recycling, treatment and disposal centres) and the truck routing between them need to be optimized. A mixed-integer linear programming (MILP) model is presented to minimise the total cost of the ISWM system, including transportation costs and facility establishment costs. To tackle the NP-hardness of the problem, a stepwise heuristic method is proposed within the frameworks of two meta-heuristic approaches: (i) variable neighbourhood search (VNS) and (ii) a hybrid VNS and simulated annealing algorithm (VNS + SA). A real-life case study from an existing ISWM system in Tehran, Iran is used to apply the proposed model and algorithms. The presented MILP model is then implemented in the CPLEX environment to evaluate the effectiveness of the proposed algorithms on multiple test problems of different scales. The results show that, while both proposed algorithms can effectively solve the problem within practical computing time, the proposed hybrid method has efficiently produced near-optimal solutions with gaps of
- Published
- 2018
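In the spirit of the VNS + SA hybrid described above (but with a generic swap neighbourhood and geometric cooling rather than the paper's tuned components), here is a bare-bones simulated annealing skeleton applied to a stand-in routing objective on an invented distance matrix.

```python
# Minimal simulated annealing over tours with a swap neighbourhood.
import math, random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, T=100.0, cooling=0.95, iters=2000, seed=1):
    random.seed(seed)
    cur = list(range(len(dist)))
    best = cur[:]
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]          # swap-neighbourhood move
        delta = tour_cost(cand, dist) - tour_cost(cur, dist)
        if delta < 0 or random.random() < math.exp(-delta / T):
            cur = cand                               # accept improving / uphill move
            if tour_cost(cur, dist) < tour_cost(best, dist):
                best = cur[:]
        T = max(T * cooling, 1e-3)                   # geometric cooling schedule
    return best, tour_cost(best, dist)

dist = [[0, 4, 9, 7], [4, 0, 6, 3], [9, 6, 0, 5], [7, 3, 5, 0]]
print(simulated_annealing(dist))
```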
39. Fix-and-optimize procedures for solving the long-term unit commitment problem with pumped storages
- Author
- Julia Rieck, Alexander Franz, and Jürgen Zimmermann
- Subjects
Mathematical optimization ,021103 operations research ,Linear programming ,business.industry ,Computer science ,0211 other engineering and technologies ,General Decision Sciences ,Time horizon ,02 engineering and technology ,Management Science and Operations Research ,Renewable energy ,Term (time) ,Power system simulation ,Decomposition method (constraint satisfaction) ,business - Abstract
In this paper, we consider a long-term unit commitment problem with thermal and renewable energy sources, where system operating costs have to be minimized. The problem is enhanced by adding pumped storages, where water is stored in reservoirs and is turbinated or pumped up whenever this reduces the operating costs. We present a tight mixed-integer linear programming model with a redefinition of decision variables and a reformulation of constraints, e.g., for the spinning reserve. The model serves as a basis for a new decomposition method in which fix-and-optimize schemes are used. In particular, a time-oriented, a unit-oriented, and a generic fix-and-optimize procedure are presented. A computational performance analysis shows that the mixed-integer linear model efficiently supports the solution process for small- and medium-scale instances. Furthermore, the fix-and-optimize procedures are able to tackle even large-scale instances. In particular, problem instances with real-world energy demands, power plant-specific characteristics, and a one-year planning horizon with hourly time steps are solved to near-optimality in reasonable time.
- Published
- 2018
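A runnable toy of the time-oriented fix-and-optimize idea follows: a tiny commitment-style MILP (units on/off per period, meet demand at minimum cost) is re-optimized over a sliding window of free periods while all binaries outside the window stay fixed at the incumbent. The model, data, window size, and all-on starting solution are invented; the paper's model is far richer (spinning reserve, pumped storages, etc.).

```python
# Time-oriented fix-and-optimize on a toy unit-commitment MILP (PuLP + CBC).
import pulp

T, U = 6, 2
cap = [5, 3]; run_cost = [2, 5]
demand = [3, 4, 7, 6, 2, 5]

def solve(fixed):
    m = pulp.LpProblem("uc_toy", pulp.LpMinimize)
    on = pulp.LpVariable.dicts("on", (range(U), range(T)), cat="Binary")
    m += pulp.lpSum(run_cost[u] * cap[u] * on[u][t]
                    for u in range(U) for t in range(T))
    for t in range(T):
        m += pulp.lpSum(cap[u] * on[u][t] for u in range(U)) >= demand[t]
    for (u, t), v in fixed.items():
        m += on[u][t] == v                       # freeze binaries outside the window
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    sol = {(u, t): int(on[u][t].value()) for u in range(U) for t in range(T)}
    return sol, pulp.value(m.objective)

incumbent = {(u, t): 1 for u in range(U) for t in range(T)}   # all-on: trivially feasible
window = 2
for start in range(0, T, window):                 # slide the free window over time
    free = {(u, t) for u in range(U) for t in range(start, min(start + window, T))}
    fixed = {k: v for k, v in incumbent.items() if k not in free}
    incumbent, cost = solve(fixed)
print(incumbent, cost)
```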
40. Mixed-model assembly lines balancing with given buffers and product sequence: model, formulation comparisons, and case study
- Author
- Adalberto Sato Michels, Thiago Cantos Lopes, Leandro Magatão, and Celso Gustavo Stall Sikora
- Subjects
Mixed model ,Schedule ,Mathematical optimization ,021103 operations research ,Linear programming ,Computer science ,Real-time computing ,0211 other engineering and technologies ,Scheduling (production processes) ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Solver ,computer.software_genre ,Simulation software ,Line (geometry) ,Upstream (networking) ,computer - Abstract
Asynchronous assembly lines are production layouts in which products move sequentially between stations once processing at the current station is complete and the following station is empty. When these conditions are not met, downstream starvation and upstream blockages can occur. Buffers are often employed to mitigate these problems, which are particularly relevant when the line is shared by a set of different product models (mixed-model lines). If the sequence of such models is cyclical, a steady-state production rate is eventually reached. However, determining (and, therefore, optimizing) this steady state is challenging, which has led many authors to develop indirect performance measures for mixed-model lines. In this paper, a direct performance measure is presented in a mixed-integer linear programming model and compared to previous formulations. The model is also applied to a practical case study and to a new dataset (with 1050 instances), allowing general assertions about the problem. All instances are solved with a general-purpose solver and solutions are validated with simulation software. Tests on the dataset instances confirmed the observations made on the case study: the proposed formulation produced solutions with a higher production rate in 82% of the instances and tied the remaining ones, not being outperformed a single time. A triple interdependency of task balancing, product sequencing, and buffer allocation is demonstrated. Cyclical schedules show how buffers are able to compensate for differences between models across stations and lead to the conclusion that the propagation of such differences between stations can generate scheduling bottlenecks (blockages and starvation).
- Published
- 2017
41. Weight reduction technology and supply chain network design under carbon emission restriction
- Author
- Zongwei Luo, Ling Zhao, Shuihua Han, Stephen C. H. Leung, and Yue Jiang
- Subjects
021103 operations research ,Linear programming ,Supply chain ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Environmental economics ,Facility location problem ,Purchasing ,Network planning and design ,Economic cost ,Vehicle routing problem ,Economics ,Operations management ,Supply chain network - Abstract
As policies and regulations related to environmental protection and resource constraints become increasingly tough, corporations may face the difficulty of determining the optimal trade-off between economic performance and environmental concerns when selecting product technology and designing supply chain networks. This paper considers the weight reduction technology selection and network design problem of a real-world corporation in China that produces, sells and recycles polyethylene terephthalate (PET) bottles used for soft drinks. The problem is addressed while taking into consideration future regulations restricting carbon emissions. First, a deterministic mixed-integer linear programming model is developed to analyze the influence on economic cost and carbon emissions of different choices regarding the weight of the PET bottle, raw material purchasing, vehicle routing, facility location, manufacturing and recycling plans, etc. Then, the robust counterpart of the proposed mixed-integer linear programming model is used to deal with the uncertainty in the supply chain network resulting from the weight reduction. Finally, results show that although weight reduction is both cost-effective and environmentally beneficial, the increased cost due to switching the filling procedure from hot-filling to aseptic cold-filling, together with the incumbent uncertainties, affects the location of the Pareto frontier. Moreover, we observe that the feasible range between economic cost and carbon emission shrinks with weight reduction, and that the threshold of the restricted volume of carbon emission decreases as uncertainty in the supply chain network increases.
- Published
- 2017
42. Integrated forward/reverse logistics network design under uncertainty with pricing for collection of used products
- Author
- Kannan Govindan and Mohammad Fattahi
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,021103 operations research ,Uniform distribution (continuous) ,Optimization problem ,Mixed-integer linear programming ,Linear programming ,Computer science ,0211 other engineering and technologies ,Stochastic programming ,General Decision Sciences ,Time horizon ,02 engineering and technology ,Reverse logistics ,Management Science and Operations Research ,Simulated annealing ,Network planning and design ,Reduction (complexity) ,Integrated forward/reverse logistics network ,020901 industrial engineering & automation ,Simulation - Abstract
This paper addresses the design and planning of an integrated forward/reverse logistics network over a planning horizon with multiple tactical periods. In the network, demand for new products and the potential return of used products are stochastic. Furthermore, the collection amounts of used products with different quality levels are assumed to depend on the acquisition prices offered to customer zones. A uniform distribution function defines the expected price of each customer zone for one unit of each used product. Using two-stage stochastic programming, a mixed-integer linear programming model is proposed. To cope with demand and return uncertainty, the Latin Hypercube Sampling method is applied to generate a fan of scenarios, and a backward scenario reduction technique is then used to reduce the number of scenarios. Due to the problem's complexity, a novel simulation-based simulated annealing algorithm is developed to address large-sized test problems. Numerical results indicate the applicability of the model as well as the efficiency of the solution approach. In addition, the performance of the scenario generation method and the importance of stochasticity are examined for the optimization problem. Finally, several numerical experiments, including a sensitivity analysis on the main parameters of the problem, are performed.
- Published
- 2016
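The scenario-generation step can be illustrated with a hand-rolled Latin Hypercube Sampling routine: one stratified draw per dimension, randomly permuted across scenarios, then mapped to demand and return values. The mapping and all numbers are invented; scipy.stats.qmc.LatinHypercube provides the same idea as a library routine, and the paper's backward scenario reduction is not shown.

```python
# Latin Hypercube Sampling for toy demand/return scenarios.
import numpy as np

def latin_hypercube(n_scenarios, n_dims, rng):
    # one sample per stratum [i/n, (i+1)/n) in each dimension
    u = (rng.random((n_scenarios, n_dims))
         + np.arange(n_scenarios)[:, None]) / n_scenarios
    for d in range(n_dims):          # decouple strata across dimensions
        rng.shuffle(u[:, d])
    return u                         # stratified uniform(0,1) samples

rng = np.random.default_rng(42)
u = latin_hypercube(8, 2, rng)
demand = 100 + 20 * u[:, 0]          # map to demand scenarios
returns = 30 + 10 * u[:, 1]          # map to used-product return scenarios
print(np.column_stack([demand, returns]))
```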
43. Representing knowledge about linear programming formulation
- Author
- Pai-Chun Ma, Frederic H. Murphy, and Edward A. Stohr
- Subjects
Statement (computer science) ,Theoretical computer science ,Linear programming ,Computer science ,Theory of computation ,Programming paradigm ,General Decision Sciences ,Management Science and Operations Research ,Algebraic number ,Graphics ,Solver - Abstract
This paper describes a system, LPFORM, that enables users to design linear programming models interactively, using graphics. The system outputs an algebraic statement of the model together with data references that are subsequently used to generate the input for a solver in the standard MPS format. The emphasis of this paper is on the types of knowledge one has about the submodels that make up larger models and how this knowledge can be organized.
- Published
- 1989
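For readers who have not seen the MPS format that LPFORM targets, the snippet below writes a plausible fixed-format MPS file for the toy LP min x + 2y s.t. x + y >= 3, x, y >= 0. The field spacing follows the classic fixed layout only loosely; real generators are stricter about column positions.

```python
# Emit a minimal MPS file for a toy LP (illustrative spacing only).
mps = """NAME          TOYLP
ROWS
 N  COST
 G  DEMAND
COLUMNS
    X         COST         1.0   DEMAND       1.0
    Y         COST         2.0   DEMAND       1.0
RHS
    RHS       DEMAND       3.0
BOUNDS
ENDATA
"""
with open("toylp.mps", "w") as f:
    f.write(mps)
```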
44. Preface to topics in data envelopment analysis
- Author
- Abraham Charnes and William W. Cooper
- Subjects
Fractional programming ,Returns to scale ,Linear programming ,Operations research ,Computer science ,Data envelopment analysis ,General Decision Sciences ,Management Science and Operations Research ,Inefficiency ,Sensitivity analyses - Abstract
This paper serves as an introduction to a series of three papers directed to different aspects of DEA (Data Envelopment Analysis), as follows: (1) uses and extensions of 'window analyses' to study DEA efficiency measures, with an illustrative application to maintenance activities for U.S. Air Force fighter wings; (2) a comparison of DEA and regression approaches to identifying and estimating sources of inefficiency by means of artificially generated data; and (3) an extension of ordinary (linear programming) sensitivity analyses to deal with special features that require attention in DEA. Background is supplied in this introductory paper, with accompanying proofs and explanations, to facilitate understanding of what DEA provides as underpinning for the papers that follow. An attempt is made to bring readers abreast of recent progress in DEA research and uses. A synoptic history is presented along with brief references to related work; problems requiring attention are indicated and possible research approaches suggested.
- Published
- 1984
45. A spatiotemporal Data Envelopment Analysis (S-T DEA) approach: the need to assess evolving units
- Author
- Emmanouil Stiakakis, Alexander Chatzigeorgiou, and Konstantinos Petridis
- Subjects
Mathematical optimization ,021103 operations research ,Linear programming ,business.industry ,0211 other engineering and technologies ,Software development ,General Decision Sciences ,Efficient frontier ,020207 software engineering ,02 engineering and technology ,Management Science and Operations Research ,Set (abstract data type) ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,Data envelopment analysis ,Dimension (data warehouse) ,business ,Mathematics - Abstract
One of the major challenges in measuring efficiency in terms of resources and outcomes is assessing the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time series datasets, DEA models, by construction, form the reference set for inefficient units (the lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among the time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach that captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, admitting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed into a multiobjective mixed-integer linear programming model, which filters the peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
- Published
- 2015
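As background for the lambda values discussed above, here is a plain single-period input-oriented CCR DEA sketch in scipy: for each DMU k, minimize theta subject to X·lambda <= theta·x_k and Y·lambda >= y_k with lambda >= 0. The paper's two-stage spatiotemporal procedure restricts and re-weights these reference sets; that part is not shown, and the input/output data are fabricated.

```python
# Input-oriented CCR DEA, one LP per DMU, solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0]])   # inputs: one row per input, column per DMU
Y = np.array([[1.0, 2.0, 2.5, 2.0]])   # outputs: one row per output, column per DMU
n = X.shape[1]

for k in range(n):
    c = np.zeros(n + 1); c[0] = 1.0                      # variables: [theta, lambda]
    A1 = np.hstack([-X[:, [k]], X])                      # X lam - theta x_k <= 0
    A2 = np.hstack([np.zeros((Y.shape[0], 1)), -Y])      # -Y lam <= -y_k
    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    print(f"DMU {k}: efficiency = {res.x[0]:.3f}")
```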
46. Integrated strategic and tactical supply chain planning with price-sensitive demands
- Author
- S.M. Moattar Husseini, Mohammad Fattahi, and Masoud Mahootchi
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,021103 operations research ,Linear programming ,Supply chain ,0211 other engineering and technologies ,General Decision Sciences ,Time horizon ,02 engineering and technology ,Management Science and Operations Research ,Solver ,Network planning and design ,020901 industrial engineering & automation ,Capacity planning ,Economics ,Manufacturing operations ,Budget constraint - Abstract
This paper deals with supply chain network design and planning for a multi-commodity, multi-layer network over a planning horizon with multiple periods, in which the demands of customer zones are price dependent. Prices determine demands through plausible price–demand relationships for the customer zones. The net income of the supply chain is maximized while satisfying budget constraints on investment in network design. In addition, a new approach to capacity planning is considered to make the problem more realistic: when production plants are opened or expanded, capacity options for manufacturing operations are taken into account. Furthermore, several aspects relevant to real-world applications are captured in the problem. Different interconnected time periods in the planning horizon are considered for strategic and tactical decisions, and a mixed-integer linear programming (MILP) model is then developed. The performance and applicability of the model are investigated through several test problems of reasonable size. The numerical results illustrate that the solutions obtained by solving the MILP model with the CPLEX solver are acceptable. Moreover, using an alternative pricing approach, a tight upper bound is provided for the problem. Finally, a deep sensitivity analysis is conducted to show the validity and performance of the proposed model.
- Published
- 2015
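The price-sensitive demand device can be sketched as a discrete price-level choice: exactly one (price, demand) pair is selected per product via binaries, and the objective weighs the implied revenue against a linear production cost. Real models embed this inside the full network MILP; the menu of levels and the cost below are invented.

```python
# Toy discrete price-level selection with PuLP.
import pulp

levels = [(8.0, 120), (10.0, 100), (12.0, 70)]    # (price, resulting demand) pairs
unit_cost = 6.0

m = pulp.LpProblem("pricing_toy", pulp.LpMaximize)
pick = pulp.LpVariable.dicts("pick", range(len(levels)), cat="Binary")
m += pulp.lpSum(pick[i] * (levels[i][0] - unit_cost) * levels[i][1]
                for i in range(len(levels)))      # profit of the chosen level
m += pulp.lpSum(pick[i] for i in range(len(levels))) == 1   # exactly one level

m.solve(pulp.PULP_CBC_CMD(msg=False))
best = max(range(len(levels)), key=lambda i: pick[i].value())
print("chosen price:", levels[best][0], "demand:", levels[best][1])
```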
47. Exterior point simplex-type algorithms for linear and network optimization problems
- Author
- Nikolaos Samaras, Konstantinos Paparrizos, and Angelo Sifaleras
- Subjects
Mathematical optimization ,Optimization problem ,Simplex ,Linear programming ,MathematicsofComputing_NUMERICALANALYSIS ,General Decision Sciences ,Monotonic function ,Management Science and Operations Research ,Flow network ,Dual (category theory) ,TheoryofComputation_ANALYSISOFALGORITHMSANDPROBLEMCOMPLEXITY ,Path (graph theory) ,Point (geometry) ,Algorithm ,Mathematics - Abstract
Two decades of research have led to the development of a number of efficient algorithms that can be classified as exterior point simplex-type. Algorithms of this type can cross the infeasible region of the primal (dual) problem and find an optimal solution while reducing the number of iterations needed. The main idea of exterior point simplex-type algorithms is to compute two paths/flows: primal (dual) exterior point simplex-type algorithms compute one path/flow that is basic but not always primal (dual) feasible, while the other is primal (dual) feasible but not always basic. The aim of this paper is to explain to the general OR audience, for the first time, the developments in exterior point simplex-type algorithms for linear and network optimization problems over recent years. We also present other approaches that, in a similar way, do not preserve primal or dual feasibility at each iteration, such as the monotonic build-up simplex algorithms and the criss-cross methods.
- Published
- 2015
48. Coupling input–output analysis with multiobjective linear programming models for the study of economy–energy–environment–social (E3S) trade-offs: a review
- Author
- Carlos Oliveira, Dulce Coelho, and Carlos Henggeler Antunes
- Subjects
Sustainable development ,Linear programming ,Input–output model ,020209 energy ,General Decision Sciences ,02 engineering and technology ,Scientific literature ,Management Science and Operations Research ,Goods and services ,Economy ,Coupling (computer programming) ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,Economics ,Energy (signal processing) - Abstract
The study of the interactions between the economy (at national, global and local levels), the energy sector and the corresponding impacts on the environment inherently involves multiple axes for evaluating distinct policies. Input–output (IO) analysis offers a consistent framework for developing multiobjective models to assess the trade-offs associated with those policies. The analytical framework of IO analysis makes it possible to model the interactions between the whole economy and the energy sector, thus identifying the energy required for the provision of goods and services in an economy and quantifying the corresponding pollutant emissions. This paper reviews the different modelling approaches in the scientific literature based on coupling IO analysis with multiobjective models, which can be particularly useful for policy makers assessing the trade-offs between the economy–energy–environment–social pillars of sustainable development, a task especially relevant in the current sluggish economic context.
- Published
- 2014
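The IO backbone that such multiobjective models are built on is compact enough to show numerically: total output solves x = (I - A)^(-1) d for final demand d, and energy use and emissions follow from per-unit-output coefficient rows. A multiobjective layer would then trade off economy-, energy- and emission-type objectives over feasible demand vectors; all coefficients below are fabricated.

```python
# Leontief input-output calculation with energy and emission intensities.
import numpy as np

A = np.array([[0.2, 0.3],        # inter-industry technical coefficients
              [0.1, 0.4]])
d = np.array([100.0, 50.0])      # final demand by sector
x = np.linalg.solve(np.eye(2) - A, d)     # total output: (I - A) x = d

energy_per_unit = np.array([0.5, 1.2])    # energy intensity of each sector
co2_per_unit = np.array([0.3, 0.9])       # emission intensity of each sector
print("output:", x)
print("energy:", energy_per_unit @ x, "CO2:", co2_per_unit @ x)
```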
49. Optimal egress time calculation and path generation for large evacuation networks
- Author
- Mukesh Rungta, Gino J. Lim, and MohammadReza Baharnemati
- Subjects
Mathematical optimization ,Schedule ,Dynamic network analysis ,Linear programming ,Computer science ,Path (graph theory) ,General Decision Sciences ,Graph (abstract data type) ,Management Science and Operations Research ,Flow network ,Simulation - Abstract
Finding the optimal clearance time and deciding the paths and schedule of evacuation for large networks have traditionally been computationally intensive. In this paper, we propose a new method for solving this dynamic network flow problem with considerably lower computation time. Using a three-phase solution method, we determine the clearance time required for complete evacuation, the minimum number of evacuation paths needed to evacuate in the least possible time, and the starting schedules on those paths. First, a lower bound on the clearance time is calculated using a minimum-cost dynamic network flow model on a modified graph representing the transportation network. Next, a solution pool of feasible paths between all O-D pairs is generated. Using the input from the first two models, a flow assignment model is developed to select the best paths from the pool, assign flow, and decide the evacuation schedule with the lowest possible clearance time. All the proposed models are mixed-integer linear programming models, and the formulation addresses the System Optimum (SO) scenario, where the emphasis is on complete network evacuation in the minimum possible clearance time without any preset priority. We demonstrate that the model can handle large networks with low computation time. A numerical example illustrates the applicability and effectiveness of the proposed approach for evacuation.
- Published
- 2012
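The first phase can be pictured on a time-expanded network: copy each location for every time step, add travel arcs (one step of travel time) and waiting arcs, and push all evacuees to a super-sink with a min-cost flow. The sketch below uses networkx's network_simplex on a two-location toy; the horizon, capacities, and topology are invented.

```python
# Min-cost flow on a tiny time-expanded evacuation network.
import networkx as nx

T = 4                   # time horizon
cap = 2                 # per-arc capacity per time step
G = nx.DiGraph()
G.add_node(("A", 0), demand=-3)              # 3 evacuees start at A at time 0
G.add_node("SINK", demand=3)                 # all must reach the super-sink
for t in range(T):
    G.add_edge(("A", t), ("A", t + 1), capacity=cap, weight=0)   # wait at A
    G.add_edge(("A", t), ("B", t + 1), capacity=cap, weight=1)   # travel A -> B
    G.add_edge(("B", t), ("B", t + 1), capacity=cap, weight=0)   # wait at B
for t in range(T + 1):
    G.add_edge(("B", t), "SINK", capacity=cap, weight=0)         # B is a safe zone

cost, flow = nx.network_simplex(G)
print("min-cost flow objective:", cost)
```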
50. Interactive classification using data envelopment analysis
- Author
- Marvin D. Troutt and Parag C. Pendharkar
- Subjects
Linear programming ,business.industry ,Computer science ,General Decision Sciences ,Function (mathematics) ,Management Science and Operations Research ,computer.software_genre ,Machine learning ,Class (biology) ,Discriminant function analysis ,Data envelopment analysis ,Data mining ,Artificial intelligence ,business ,computer ,Finite set ,Cutting-plane method - Abstract
In this paper, we illustrate how data envelopment analysis (DEA) can be used to aid interactive classification. We assume that the scoring function for the classification problem is known. We use DEA to identify difficult-to-classify cases from a database and present them to the decision-maker one at a time. The decision-maker assigns a class to the presented case and, based on this class assignment, a tradeoff cutting plane is drawn using the scoring function and the decision-maker's input. The procedure continues for a finite number of iterations and terminates with the final discriminant function. We also show how a hybrid DEA and mathematical programming approach can be used when user interaction is not desired. For the non-interactive case, we compare the hybrid DEA and mathematical programming approach with several statistical and machine learning approaches, and show that the hybrid approach provides competitive performance when compared to the other machine learning approaches.
- Published
- 2012