2,743 results
Search Results
2. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
- de Meijer, Frank and Sotirov, Renata
- Subjects
- *REINFORCEMENT learning, *COMBINATORIAL optimization, *TRAVELING salesman problem, *ALGORITHMS, *SEMIDEFINITE programming, *MACHINE learning, *DIRECTED graphs
- Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. The problem is also of high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is currently considered among the most difficult combinatorial optimization problems. After removing the subtour elimination constraints, the QTSP reduces to the QCCP. Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard for any SDP solver to solve because of the large number of cutting planes involved. For this reason, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
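The cutting-plane algorithm in the entry above hinges on Dykstra's projection algorithm. As a rough illustration of that building block (this is not the authors' implementation; the two toy convex sets below merely stand in for the SDP cone and the cut polyhedra):

```python
# Toy illustration of Dykstra's alternating-projection algorithm, the
# primitive the QCCP entry's augmented Lagrangian / cutting-plane
# framework relies on. The sets here are illustrative assumptions.

def project_box(p):
    """Euclidean projection onto the unit box [0, 1]^2."""
    return tuple(min(1.0, max(0.0, v)) for v in p)

def project_halfplane(p):
    """Euclidean projection onto the half-plane {(x, y) : x + y <= 1}."""
    x, y = p
    violation = x + y - 1.0
    if violation <= 0:
        return p
    return (x - violation / 2.0, y - violation / 2.0)

def dykstra(point, projections, iterations=100):
    """Dykstra's algorithm with one correction (increment) per set;
    the iterates converge to the projection of `point` onto the
    intersection of the convex sets."""
    x = point
    increments = [(0.0, 0.0)] * len(projections)
    for _ in range(iterations):
        for i, proj in enumerate(projections):
            y = (x[0] + increments[i][0], x[1] + increments[i][1])
            x_new = proj(y)
            increments[i] = (y[0] - x_new[0], y[1] - x_new[1])
            x = x_new
    return x

x = dykstra((2.0, 2.0), [project_box, project_halfplane])
# The nearest point of the triangle [0,1]^2 ∩ {x + y <= 1} to (2, 2)
# is (0.5, 0.5), and the iterates settle there.
```

Unlike plain alternating projection, which only finds *some* feasible point, Dykstra's correction terms recover the nearest feasible point, which is what a projection-based solver for SDP relaxations with many cuts needs.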
3. AILS-II: An Adaptive Iterated Local Search Heuristic for the Large-Scale Capacitated Vehicle Routing Problem.
- Author
- Máximo, Vinícius R., Cordeau, Jean-François, and Nascimento, Mariá C. V.
- Subjects
- *VEHICLE routing problem, *COMBINATORIAL optimization, *DATA libraries, *SOFTWARE development tools, *METAHEURISTIC algorithms
- Abstract
A recent study on the classical capacitated vehicle routing problem (CVRP) introduced an adaptive version of the widely used iterated local search paradigm, hybridized with a path-relinking (PR) strategy. The solution method, called adaptive iterated local search (AILS)-PR, outperformed existing meta-heuristics for the CVRP on benchmark instances. However, tests on large-scale instances suggest that PR is too slow, making AILS-PR less advantageous in this case. To overcome this challenge, this paper presents an AILS combined with mechanisms to handle large CVRP instances, called AILS-II. This implementation reduces the computational cost while also searching the solution space more efficiently. AILS-II is very competitive on smaller instances, outperforming the other methods from the literature with respect to the average gap to the best-known solutions. Moreover, AILS-II consistently outperforms the state of the art on larger instances with up to 30,000 vertices. History: Accepted by Ted Ralphs, Area Editor for Software Tools. This paper has been accepted for the INFORMS Journal on Computing Special Issue on Software Tools for Vehicle Routing. Funding: This work was supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo [Grants 2013/07375-0, 2019/22067-6, and 2022/05803-3] and the Conselho Nacional de Desenvolvimento Científico e Tecnológico [Grants 309385/2021-0 and 403735/2021-1]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.0106) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2023.0106). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
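For readers unfamiliar with the iterated local search paradigm that AILS adapts, here is a minimal sketch of the perturb / local-search / accept loop on a toy subset-sum objective. The objective, perturbation strength, and acceptance rule are illustrative assumptions; none of this reflects AILS-II's adaptive mechanisms or CVRP neighborhoods.

```python
import random

# Minimal iterated local search (ILS) skeleton on a toy objective:
# minimize the gap |sum of chosen weights - GOAL| over bit vectors.

WEIGHTS = [1, 2, 4, 8, 16]
GOAL = 21  # reachable exactly: 16 + 4 + 1

def cost(x):
    return abs(sum(w for w, bit in zip(WEIGHTS, x) if bit) - GOAL)

def local_search(x):
    """First-improvement hill climbing over single bit flips."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]
            y[i] ^= 1
            if cost(y) < cost(x):
                x, improved = y, True
    return x

def perturb(x, rng, strength=2):
    """Escape local optima by flipping a few random bits."""
    y = x[:]
    for i in rng.sample(range(len(y)), strength):
        y[i] ^= 1
    return y

def iterated_local_search(iterations=200, seed=0):
    rng = random.Random(seed)
    best = local_search([0] * len(WEIGHTS))
    for _ in range(iterations):
        candidate = local_search(perturb(best, rng))
        if cost(candidate) < cost(best):  # "better" acceptance criterion
            best = candidate
    return best

best = iterated_local_search()
```

Starting from all zeros, plain hill climbing gets stuck at cost 6; the perturbation step is what lets ILS keep improving past that local optimum.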
4. VRPSolverEasy: A Python Library for the Exact Solution of a Rich Vehicle Routing Problem.
- Author
- Errami, Najib, Queiroga, Eduardo, Sadykov, Ruslan, and Uchoa, Eduardo
- Subjects
- *VEHICLE routing problem, *DATA libraries, *SOFTWARE development tools, *INTEGER programming, *MATHEMATICAL models
- Abstract
The optimization community has made significant progress in solving vehicle routing problems (VRPs) to optimality using sophisticated branch-cut-and-price (BCP) algorithms. VRPSolver is a BCP algorithm with excellent performance in many VRP variants. However, its complex underlying mathematical model makes it hardly accessible to routing practitioners. To address this, VRPSolverEasy provides a Python interface to VRPSolver that does not require any knowledge of mixed integer programming modeling. Instead, routing problems are defined in terms of familiar elements, such as depots, customers, links, and vehicle types. VRPSolverEasy can handle several popular VRP variants and arbitrary combinations of them. History: Accepted by Ted Ralphs, Area Editor for Software Tools. This paper has been accepted for the INFORMS Journal on Computing Special Issue on Software Tools for Vehicle Routing. Funding: This work was supported by Faperj [Grant E-26/202.887/2017] and Conselho Nacional de Desenvolvimento Científico e Tecnológico [Grant 305684/2022-1]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.0103) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2023.0103). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. PyVRP: A High-Performance VRP Solver Package.
- Author
- Wouda, Niels A., Lan, Leon, and Kool, Wouter
- Subjects
- *VEHICLE routing problem, *SOFTWARE engineering, *CHOICE (Psychology), *DATA libraries, *SOFTWARE development tools
- Abstract
We introduce PyVRP, a Python package that implements hybrid genetic search in a state-of-the-art vehicle routing problem (VRP) solver. The package is designed for the VRP with time windows (VRPTW) but can be easily extended to support other VRP variants. PyVRP combines the flexibility of Python with the performance of C++ by implementing (only) performance-critical parts of the algorithm in C++ while being fully customizable at the Python level. PyVRP is a polished implementation of the algorithm that ranked first in the 2021 DIMACS VRPTW challenge and, after improvements, ranked first on the static variant of the EURO meets NeurIPS 2022 vehicle routing competition. The code follows good software engineering practices and is well documented and unit tested. PyVRP is freely available under the liberal MIT license. Through numerical experiments, we show that PyVRP achieves state-of-the-art results on the VRPTW and capacitated VRP. We hope that PyVRP enables researchers and practitioners to easily and quickly build on a state-of-the-art VRP solver. History: Accepted by Ted Ralphs, Area Editor for Software Tools. This paper has been accepted for the INFORMS Journal on Computing Special Issue on Software Tools for Vehicle Routing. Funding: Funding was provided by TKI Dinalog, Topsector Logistics, and the Dutch Ministry of Economic Affairs and Climate Policy. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.0055) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2023.0055). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. There is a video associated with this paper. Click here to view the Video Overview. To save the file, right click and choose "Save Link As" from the menu. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
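The hybrid genetic search that PyVRP implements alternates selection, crossover, and "education" of offspring by local search. The sketch below shows that loop on a toy six-city TSP in pure Python; it is an assumption-laden stand-in, not PyVRP's API or its C++ implementation.

```python
import math
import random

# Stripped-down hybrid-genetic-search loop: population, order crossover,
# then "education" of each individual by 2-opt local search.
# Six cities on a regular hexagon, so the optimal tour is the hull cycle.

CITIES = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]

def tour_length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    """'Education': 2-opt local search until no improving segment reversal."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand) < tour_length(tour) - 1e-12:
                    tour, improved = cand, True
    return tour

def order_crossover(p1, p2, rng):
    """OX: keep a slice of parent 1, fill the rest in parent 2's order."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    middle = p1[i:j + 1]
    rest = [c for c in p2 if c not in middle]
    return rest[:i] + middle + rest[i:]

def hybrid_genetic_search(pop_size=6, generations=20, seed=0):
    rng = random.Random(seed)
    pop = [two_opt(rng.sample(range(6), 6)) for _ in range(pop_size)]
    for _ in range(generations):
        p1, p2 = rng.sample(pop, 2)
        child = two_opt(order_crossover(p1, p2, rng))
        pop.sort(key=tour_length)
        pop[-1] = child  # replace the worst individual
    return min(pop, key=tour_length)

best = hybrid_genetic_search()
```

For points in convex position, 2-opt alone already recovers the hull tour, so every educated individual here is optimal; on real VRP instances the population management and crossover do the heavy lifting.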
6. Polyhedral Relaxations for Optimal Pump Scheduling of Potable Water Distribution Networks.
- Author
- Tasseff, Byron, Bent, Russell, Coffrin, Carleton, Barrows, Clayton, Sigler, Devon, Stickel, Jonathan, Zamzam, Ahmed S., Liu, Yang, and Van Hentenryck, Pascal
- Subjects
- *CLEAN energy, *DATA libraries, *WATER distribution, *DUALITY theory (Mathematics), *DRINKING water
- Abstract
The classic pump scheduling or optimal water flow (OWF) problem for water distribution networks (WDNs) minimizes the cost of power consumption for a given WDN over a fixed time horizon. In its exact form, the OWF is a computationally challenging mixed-integer nonlinear program (MINLP). It is complicated by nonlinear equality constraints that model network physics, discrete variables that model operational controls, and intertemporal constraints that model changes to storage devices. To address the computational challenges of the OWF, this paper develops tight polyhedral relaxations of the original MINLP, derives novel valid inequalities (or cuts) using duality theory, and implements novel optimization-based bound tightening and cut generation procedures. The efficacy of each new method is rigorously evaluated by measuring empirical improvements in OWF primal and dual bounds over 45 literature instances. The evaluation suggests that our relaxation improvements, model strengthening techniques, and a thoughtfully selected polyhedral relaxation partitioning scheme can substantially improve OWF primal and dual bounds, especially when compared with similar relaxation-based techniques that do not leverage these new methods. History: Accepted by David Alderson, Area Editor for Network Optimization: Algorithms & Applications. Funding: This work was supported by the U.S. Department of Energy (DOE) Advanced Grid Modeling project, Coordinated Planning and Operation of Water and Power Infrastructures for Increased Resilience and Reliability. Incorporation of the PolyhedralRelaxations Julia package was supported by Los Alamos National Laboratory's Directed Research and Development program under the project Fast, Linear Programming-Based Algorithms with Solution Quality Guarantees for Nonlinear Optimal Control Problems [Grant 20220006ER]. All work at Los Alamos National Laboratory was conducted under the auspices of the National Nuclear Security Administration of the U.S. DOE, Contract No. 89233218CNA000001. This work was also authored in part by the National Renewable Energy Laboratory, operated by the Alliance for Sustainable Energy, LLC, for the U.S. DOE, Contract No. DE-AC36-08GO28308. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0233) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0233). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
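A minimal example of the kind of polyhedral relaxation the entry above builds: outer-approximating a convex nonlinearity by linear cuts, with f(q) = q² as a stand-in for the pipe head-loss physics (the paper's actual relaxations, valid inequalities, and partitioning scheme are far more involved; this only shows the tangent/secant sandwich).

```python
# Polyhedral relaxation of a convex nonlinearity f(q) = q^2 on [0, 2]:
# tangent lines underestimate a convex function, while the secant through
# the interval endpoints overestimates it, so the band between them is a
# polyhedral (linear-inequality) relaxation of the graph of f.

def tangent_cut(q0):
    """Linear underestimator touching f(q) = q^2 at q0."""
    return lambda q: q0 * q0 + 2.0 * q0 * (q - q0)

def secant_cut(lo, hi):
    """Linear overestimator of q^2 on [lo, hi] through its endpoints."""
    slope = (hi * hi - lo * lo) / (hi - lo)  # equals lo + hi
    return lambda q: lo * lo + slope * (q - lo)

LO, HI = 0.0, 2.0
tangents = [tangent_cut(q0) for q0 in (0.0, 1.0, 2.0)]
secant = secant_cut(LO, HI)

# Sanity check: the cuts sandwich f everywhere on the domain.
for step in range(21):
    q = LO + (HI - LO) * step / 20
    under = max(t(q) for t in tangents)  # tightest tangent bound at q
    assert under <= q * q + 1e-12
    assert q * q <= secant(q) + 1e-12
```

Adding more tangent points (or partitioning the domain, as the paper's scheme does) tightens the relaxation at the cost of more linear constraints.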
7. A Shrinkage Approach to Improve Direct Bootstrap Resampling Under Input Uncertainty.
- Author
- Song, Eunhye, Lam, Henry, and Barton, Russell R.
- Subjects
- *DATA libraries, *CONFIDENCE intervals, *SIMULATION methods & models, *COMPUTER software, *INSTITUTIONAL repositories, *RESAMPLING (Statistics)
- Abstract
Discrete-event simulation models generate random variates from input distributions and compute outputs according to the simulation logic. The input distributions are typically fitted to finite real-world data and thus are subject to estimation errors that can propagate to the simulation outputs: an issue commonly known as input uncertainty (IU). This paper investigates quantifying IU using the output confidence intervals (CIs) computed from bootstrap quantile estimators. The standard direct bootstrap method has overcoverage due to convolution of the simulation error and IU; however, the brute-force way of washing away the former is computationally demanding. We present two new bootstrap methods that enhance the statistical and computational efficiency of direct resampling by using shrinkage strategies to down-scale the variabilities encapsulated in the CIs. Our asymptotic analysis shows how both approaches produce tight CIs accounting for IU under limited input data and simulation effort, along with the simulation sample-size requirements relative to the input data size. We demonstrate the performance of the shrinkage strategies with several numerical experiments and investigate the conditions under which each method performs well. We also show advantages of nonparametric approaches over parametric bootstrap when the distribution family is misspecified and over metamodel approaches when the dimension of the distribution parameters is high. History: Accepted by Bruno Tuffin, Area Editor for Simulation. Funding: This work was supported by the National Science Foundation [CAREER CMMI-1834710, CAREER CMMI-2045400, DMS-1854659, and IIS-1849280]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0044) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0044). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
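The overcoverage problem described above, and the shrinkage remedy, can be illustrated with a toy direct bootstrap. The fixed shrinkage factor below is a crude assumption for illustration only; it is not the paper's data-driven estimator.

```python
import random
import statistics

# Toy input-uncertainty setting: the "simulation" output is the sample
# mean plus simulation noise, so direct bootstrap percentile CIs convolve
# input uncertainty with that noise and come out too wide (overcoverage).
# Shrinking the bootstrap deviations around the point estimate narrows
# the interval; the factor 0.7 here is an arbitrary illustrative choice.

rng = random.Random(42)
data = [rng.gauss(10.0, 2.0) for _ in range(50)]  # "real-world" input data

def simulate(sample, rng, reps=20):
    """Noisy simulation whose expectation is the sample mean."""
    mu = statistics.fmean(sample)
    return statistics.fmean(mu + rng.gauss(0.0, 1.0) for _ in range(reps))

point = statistics.fmean(data)
boots = []
for _ in range(500):
    resample = [rng.choice(data) for _ in data]  # direct bootstrap
    boots.append(simulate(resample, rng))

def percentile_ci(values, center, shrink=1.0):
    """95% percentile CI after scaling deviations toward `center`."""
    shrunk = sorted(center + shrink * (v - center) for v in values)
    return shrunk[int(0.025 * len(shrunk))], shrunk[int(0.975 * len(shrunk))]

raw = percentile_ci(boots, point)
shrunk = percentile_ci(boots, point, shrink=0.7)
# The shrunk interval is contained in, and narrower than, the raw one.
```

The whole point of the paper's analysis is choosing the shrinkage so the resulting CI accounts for input uncertainty alone; here the factor is fixed only to show the mechanism.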
8. Largest Volume Inscribed Rectangles in Convex Sets Defined by Finite Number of Inequalities.
- Author
- Behroozi, Mehdi
- Subjects
- *CONVEX sets, *GEOMETRIC shapes, *GEOMETRIC approach, *DATA libraries, *RECTANGLES, *INTERIOR-point methods, *TRIANGLES
- Abstract
This paper considers the problem of finding maximum-volume (axis-aligned) inscribed boxes in a compact convex set, defined by a finite number of convex inequalities, and presents optimization and geometric approaches for solving it. Several optimization models are developed that can be easily generalized to find other inscribed geometric shapes such as triangles, rhombi, and squares. To find the largest-volume axis-aligned inscribed rectangles in higher dimensions, an interior-point method algorithm is presented and analyzed. For two-dimensional space, a parametrized optimization approach is developed to find the largest-area (axis-aligned) inscribed rectangles in convex sets. The optimization approach provides a uniform framework for solving a wide variety of relevant problems. Finally, two computational geometric (1 − ε)-approximation algorithms with sublinear running times are presented that improve on previous results. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms—Continuous. Funding: The author is grateful for the support of Northeastern University for this research. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0239) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0239). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
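For the two-dimensional case discussed above, the parametrized idea can be sanity-checked on the one convex set where the answer is known in closed form, the disk x² + y² ≤ r². This is only a sketch of the parametrization trick, not the paper's general framework for arbitrary inequality systems.

```python
import math

# Largest axis-aligned rectangle inscribed in the disk of radius r:
# placing a corner on the circle reduces the problem to one variable,
# area(x) = 4 x sqrt(r^2 - x^2) for x in [0, r], which is unimodal and
# can be maximized by ternary search.

def inscribed_rectangle_area(r, tol=1e-12):
    """Maximize area(x) = 4 x sqrt(r^2 - x^2) over x in [0, r]."""
    def area(x):
        return 4.0 * x * math.sqrt(r * r - x * x)
    lo, hi = 0.0, r
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if area(m1) < area(m2):
            lo = m1
        else:
            hi = m2
    x = (lo + hi) / 2
    return area(x)

area = inscribed_rectangle_area(1.0)
# The maximizer is x = r / sqrt(2): the inscribed square, with area 2 r^2.
```

The closed-form optimum (the inscribed square, area 2r²) gives an exact target to check any general-purpose method against.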
9. A Column Generation Scheme for Distributionally Robust Multi-Item Newsvendor Problems.
- Author
- Wang, Shanshan and Delage, Erick
- Subjects
- *STOCHASTIC learning models, *KNAPSACK problems, *DATA libraries, *DECOMPOSITION method, *OPTIMAL stopping (Mathematical statistics)
- Abstract
This paper studies a distributionally robust multi-item newsvendor problem, where the demand distribution is unknown but specified with a general event-wise ambiguity set. Using the event-wise affine decision rules, we can obtain a conservative approximation formulation of the problem, which can typically be further reformulated as a linear program. In order to efficiently solve the resulting large-scale linear program, we develop a column generation-based decomposition scheme and improve computational efficiency by exploiting a special column selection strategy and an early stopping criterion based on a Karush-Kuhn-Tucker condition test. Focusing on the Wasserstein ambiguity set and the event-wise mean absolute deviation set, a computational study demonstrates both the computational efficiency of the proposed algorithm, which significantly outperforms a commercial solver and a Benders decomposition method, and the out-of-sample superiority of distributionally robust solutions relative to their sample average approximation counterparts. History: Accepted by Nicola Secomandi, Area Editor for Stochastic Models & Reinforcement Learning. Funding: This work was supported by the Natural Sciences and Engineering Research Council of Canada [492997-2016, RGPIN-2016-05208], the National Natural Science Foundation of China [71972012], Alliance de recherche numérique du Canada, and Canada Research Chairs [CRC-2018-00105]. It was also supported by Groupe d'études et de recherche en analyse des décisions (GERAD). Finally, this research was enabled in part by support provided by the Digital Research Alliance of Canada (https://alliancecan.ca/en). Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0010) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0010). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An Efficient Global Optimal Method for Cardinality Constrained Portfolio Optimization.
- Author
- Xu, Wei, Tang, Jie, Yiu, Ka Fai Cedric, and Peng, Jian Wen
- Subjects
- *PORTFOLIO management (Investments), *DATA libraries, *PHILOSOPHY of science, *COVARIANCE matrices, *CONSTRAINED optimization, *INVESTMENT policy
- Abstract
This paper focuses on cardinality-constrained mean-variance portfolio optimization, in which only a small number of assets are selected for investment. We first treat the covariance matrix of asset returns as a diagonal matrix with a special matrix processing technique. Using duality theory, we formulate the lower bound problem of the original problem as a max-min optimization. For the inner minimization problem with the cardinality constraint, we obtain an analytical solution for the portfolio weights. The lower bound problem then reduces to a simple concave optimization with respect to the Lagrangian multipliers, and an interval split method and a supergradient method are developed to solve it. Based on the precise lower bound, a depth-first branch-and-bound method is designed to find the global optimal investment selection strategy. Compared with other lower bounds and the current popular mixed integer programming solvers, such as CPLEX and SCIP, the numerical experiments show that our method has a high searching efficiency. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work was supported by the National Natural Science Foundation of China [Grants 12101317, 12271071, and 11991024], the Natural Science Foundation of Jiangsu Province [Grant BK20200819], the Team Project of Innovation Leading Talent in Chongqing [Grant CQYC20210309536], the Contract System Project of Chongqing Talent Plan [Grant cstc2022ycjh-bgzxm0147], and the Philosophy and Social Science Fund of Education Department of Jiangsu Province [Grant 2020SJA0168]. K.F.C. Yiu is supported in part by the Research Grants Council of Hong Kong [Grant PolyU 15223419], the Hong Kong Polytechnic University [Grants 4-ZZPT and 1-WZ0E], and the Research Centre for Quantitative Finance [Grant 1-CE03]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0344) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0344). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
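The analytic inner solution mentioned above can be made concrete for the minimum-variance special case with a diagonal covariance matrix (ignoring expected returns is a simplifying assumption of this sketch, not of the paper): with support S and weights summing to one, the optimal weights are proportional to the precisions 1/σᵢ², the achieved variance is 1 / Σ_{i∈S} 1/σᵢ², and so the best k-asset support is simply the k most-precise assets.

```python
from itertools import combinations

# Analytic inner solution for cardinality-constrained minimum variance
# with a DIAGONAL covariance matrix: minimizing sum(w_i^2 * sigma_i^2)
# subject to sum(w_i) = 1 over a fixed support gives w_i proportional to
# the precision 1/sigma_i^2, with variance 1 / (sum of precisions).

def min_variance_support(variances, k):
    precisions = [1.0 / v for v in variances]
    support = sorted(range(len(variances)),
                     key=lambda i: precisions[i], reverse=True)[:k]
    total = sum(precisions[i] for i in support)
    weights = {i: precisions[i] / total for i in support}
    return sorted(support), 1.0 / total, weights

variances = [0.04, 0.09, 0.01, 0.25, 0.16]
support, var, weights = min_variance_support(variances, 2)

# Exhaustive check that the analytic support choice really is optimal.
best = min(1.0 / sum(1.0 / variances[i] for i in s)
           for s in combinations(range(len(variances)), 2))
```

In the paper this closed-form inner step is what makes the Lagrangian lower bound cheap enough to drive branch and bound; the exhaustive check above is only feasible because the toy instance is tiny.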
11. BilevelJuMP.jl: Modeling and Solving Bilevel Optimization Problems in Julia.
- Author
- Garcia, Joaquim Dias, Bodin, Guilherme, and Street, Alexandre
- Subjects
- *BILEVEL programming, *MATHEMATICAL reformulation, *DATA libraries, *LINEAR programming, *NONLINEAR programming, *SOFTWARE development tools
- Abstract
In this paper, we present BilevelJuMP.jl, a new Julia package to support bilevel optimization within the JuMP framework. The package is a Julia library that enables the user to describe both the upper- and lower-level optimization problems using the JuMP algebraic syntax. Because of the generality and flexibility that our library inherits from JuMP's syntax, our package allows users to model bilevel optimization problems with conic constraints in the lower level and all constraints supported by JuMP in the upper level, including conic, quadratic, and nonlinear constraints. Moreover, the models defined with the syntax from BilevelJuMP.jl can be solved by multiple techniques that are based on reformulations as mathematical programs with equilibrium constraints (MPEC). Manipulations on the original problem data are possible due to MathOptInterface.jl's structures and Dualization.jl features. Hence, the proposed package allows quick modeling, deployment, and experimentation with bilevel models based on off-the-shelf mixed-integer linear programming and nonlinear solvers. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: The authors were partially supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. The work of A. Street was also partially supported by Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). The work was partially funded by the project P&D ANEEL PD-00403-0050/2020 sponsored by ENGIE BRASIL ENERGIA S.A. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0135) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0135). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A Deep Learning and Image Processing Pipeline for Object Characterization in Firm Operations.
- Author
- Aghasi, Alireza, Rai, Arun, and Xia, Yusen
- Subjects
- *IMAGE processing, *SIGNAL processing, *DEEP learning, *DATA libraries, *MACHINE learning, *INVENTORY control
- Abstract
Given the abundance of operations-related images that are being captured and stored, it behooves firms to build systems that use image processing to improve operational performance, understood here as any activity that can save labor costs. In this paper, we use deep learning techniques, combined with classic image/signal processing methods, to propose a pipeline to solve certain types of object counting and layer characterization problems in firm operations. Using data obtained by us through a collaborative effort with real manufacturers, we demonstrate that the proposed pipeline method is able to achieve higher than 93% accuracy in layer and log counting. Theoretically, our study conceives, constructs, and evaluates proof of concept of a novel pipeline method in characterizing and quantifying the number of defined items with images, which overcomes the limitations of methods based only on deep learning or signal processing. Practically, our proposed method can help firms significantly reduce labor costs and/or improve quality and inventory control by recording the number of products in real time, more accurately and with minimal up-front technological investment. The codes and data are made publicly available online through the INFORMS Journal on Computing GitHub site. History: Accepted by Ram Ramesh, Area Editor for Data Science and Machine Learning. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0260) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0260). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Combination Chemotherapy Optimization with Discrete Dosing.
- Author
- Ajayi, Temitayo, Hosseinian, Seyedmohammadhossein, Schaefer, Andrew J., and Fuller, Clifton D.
- Subjects
- *COMBINATION drug therapy, *DRUG utilization, *LEUKOCYTE count, *DATA libraries, *DRUG administration
- Abstract
Chemotherapy drug administration is a complex problem that often requires expensive clinical trials to evaluate potential regimens; one way to alleviate this burden and better inform future trials is to build reliable models for drug administration. This paper presents a mixed-integer program for combination chemotherapy (utilization of multiple drugs) optimization that incorporates various important operational constraints and, besides dose and concentration limits, controls treatment toxicity based on its effect on the count of white blood cells. To address the uncertainty of tumor heterogeneity, we also propose chance constraints that guarantee reaching an operable tumor size with a high probability in a neoadjuvant setting. We present analytical results pertinent to the accuracy of the model in representing biological processes of chemotherapy and establish its potential for clinical applications through a numerical study of breast cancer. History: Accepted by Paul Brooks, Area Editor for Applications in Biology, Medicine, & Healthcare. Funding: This work was supported by the National Science Foundation [Grants CMMI-1933369 and CMMI-1933373]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0207) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0207). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Deterring the Gray Market: Product Diversion Detection via Learning Disentangled Representations of Multivariate Time Series.
- Author
- Lin, Hao, Liu, Guannan, Wu, Junjie, and Zhao, J. Leon
- Subjects
- *GRAY market, *TIME series analysis, *PROBABILISTIC generative models, *DATA libraries, *EMERGING markets, *DATA science
- Abstract
A gray market emerges when some distributors divert products to unauthorized distributors/retailers to make sneaky profits from the manufacturers' differential channel incentives, such as quantity discounts. Traditionally, manufacturers rely heavily on internal audits to periodically investigate the flows of products and funds so as to deter the gray market; however, this is too costly given the large number of distributors and their huge volumes of orders. Owing to the advances in data analytics techniques, the ordering quantities of a distributor over time, which form multivariate time series, can help reveal suspicious product diversion behaviors and narrow the audit scope drastically. To that end, in this paper, we build on the recent advancement of representation learning for time series and adopt a sequence autoencoder to automatically characterize the overall demand patterns. To cope with the underlying entangled factors and interfering information in the multivariate time series of ordering quantities, we develop a disentangled learning scheme to construct more effective sequence representations. An interdistributor correlation regularization is also proposed to ensure more reliable representations. Finally, given the highly scarce anomaly labels for the detection task, an unsupervised deep generative model based on the learned representations of the distributors is developed to estimate the densities of distributions, which enables anomaly scores to be generated through end-to-end learning. Extensive experiments on a real-world distribution channel data set and a larger simulated data set empirically validate our model's superior and robust performance compared with several state-of-the-art baselines. Additionally, our illustrative economic analysis demonstrates that the manufacturers can launch more targeted and cost-effective audits toward the suspected distributors recommended by our model so as to deter the gray market. History: Accepted by Ram Ramesh, Area Editor for Data Science & Machine Learning. Funding: This work was supported by the National Natural Science Foundation of China [Grants 72031001, 72301017, 72371011, and 72242101]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0155) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0155). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A Differentiable Path-Following Method with a Compact Formulation to Compute Proper Equilibria.
- Author
- Cao, Yiyin, Chen, Yin, and Dang, Chuangyin
- Subjects
- *DATA libraries, *EQUILIBRIUM, *GAME theory
- Abstract
The concept of proper equilibrium was established as a strict refinement of perfect equilibrium. This establishment has significantly advanced the development of game theory and its applications. Nonetheless, it remains a challenging problem to compute such an equilibrium. This paper develops a differentiable path-following method with a compact formulation to compute a proper equilibrium. The method incorporates square-root-barrier terms into payoff functions with an extra variable and constitutes a square-root-barrier game. As a result of this barrier game, we obtain a smooth path to a proper equilibrium. To further reduce the computational burden, we present a compact formulation of an ε-proper equilibrium with a polynomial number of variables and equations. Numerical results show that the differentiable path-following method is numerically stable and efficient. Moreover, by relaxing the requirements of proper equilibrium and imposing Selten's perfection, we introduce the notion of perfect d-proper equilibrium, which approximates a proper equilibrium and is less costly to compute. Numerical examples demonstrate that even when d is rather large, a perfect d-proper equilibrium remains a proper equilibrium. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms-Continuous. Funding: This work was partially supported by General Research Fund (GRF) CityU 11306821 of Hong Kong SAR Government. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0148) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0148). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. InfrastructureModels: Composable Multi-infrastructure Optimization in Julia.
- Author
-
Bent, Russell, Tasseff, Byron, and Coffrin, Carleton
- Subjects
- *
NATURAL gas pipelines, *INFRASTRUCTURE (Economics), *MATHEMATICAL programming, *DATA libraries, *ENVIRONMENTAL infrastructure, *ARTIFICIAL intelligence - Abstract
In recent years, there has been an increasing need to understand the complex interdependencies between critical infrastructure systems, for example, electric power, natural gas, and potable water. Whereas open-source and commercial tools for the independent simulation of these systems are well established, frameworks for cosimulation with other systems are nascent and tools for co-optimization are scarce—the major challenge being the hidden combinatorics that arise when connecting multiple-infrastructure system models. Building toward a comprehensive solution for modeling interdependent infrastructure systems, this work presents InfrastructureModels, an extensible, open-source mathematical programming framework for co-optimizing multiple interdependent infrastructures. This work provides new insights into methods and programming abstractions that make state-of-the-art independent infrastructure models composable with minimal additional effort. To that end, this paper presents the design of the InfrastructureModels framework, documents key components of the software's implementation, and demonstrates its effectiveness with three case studies on canonical co-optimization tasks arising in interdependent infrastructure systems. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: The work was funded by Los Alamos National Laboratory's Directed Research and Development project "The Optimization of Machine Learning: Imposing Requirements on Artificial Intelligence" and the U.S. Department of Energy's Office of Electricity Advanced Grid Modeling projects "Joint Power System and Natural Gas Pipeline Optimal Expansion Planning" and "Coordinated Planning and Operation of Water and Power Infrastructures for Increased Resilience and Reliability." This work was carried out under the U.S. DOE contract no. [DE-AC52-06NA25396]. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0118) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0118). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
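The composability idea in the InfrastructureModels abstract can be illustrated with a minimal sketch. InfrastructureModels is a Julia framework; the Python classes and names below (`PowerModel`, `GasModel`, `LinkingConstraint`, `compose`) are purely illustrative inventions, not its API:

```python
class InfrastructureModel:
    """Minimal composable-model interface (illustrative only; the real
    InfrastructureModels abstractions in Julia differ)."""
    def variables(self): return []
    def constraints(self): return []

class PowerModel(InfrastructureModel):
    def variables(self): return ["generation", "voltage"]
    def constraints(self): return ["power_balance"]

class GasModel(InfrastructureModel):
    def variables(self): return ["flow", "pressure"]
    def constraints(self): return ["mass_balance"]

class LinkingConstraint:
    """Couples two otherwise-independent models, e.g. gas-fired generators drawing on pipelines."""
    def __init__(self, m1, m2):
        self.models = (m1, m2)
    def constraints(self):
        return ["generator_gas_consumption"]

def compose(models, links):
    """Assemble one joint co-optimization problem from independent models plus linking constraints."""
    variables = [v for m in models for v in m.variables()]
    constraints = [c for m in models for c in m.constraints()]
    constraints += [c for link in links for c in link.constraints()]
    return variables, constraints

power, gas = PowerModel(), GasModel()
variables, constraints = compose([power, gas], [LinkingConstraint(power, gas)])
```

The point of the sketch: each infrastructure model exposes the same narrow interface, so a joint problem is assembled by concatenating variables and constraints and adding only the linking constraints.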
17. Heuristic Search for Rank Aggregation with Application to Label Ranking.
- Author
-
Zhou, Yangming, Hao, Jin-Kao, and Li, Zhen
- Subjects
- *
HEURISTIC, *DATA libraries, *APPROXIMATION algorithms, *SEARCH algorithms, *BASE pairs, *EVOLUTIONARY algorithms - Abstract
Rank aggregation combines the preference rankings of multiple alternatives from different voters into a single consensus ranking, providing a useful model for a variety of practical applications but posing a computationally challenging problem. In this paper, we provide an effective hybrid evolutionary ranking algorithm to solve the rank aggregation problem with both complete and partial rankings. The algorithm features a semantic crossover based on concordant pairs and an enhanced late acceptance local search method reinforced by a relaxed acceptance and replacement strategy and a fast incremental evaluation mechanism. Experiments are conducted to assess the algorithm, indicating a highly competitive performance on both synthetic and real-world benchmark instances compared with state-of-the-art algorithms. To demonstrate its practical usefulness, the algorithm is applied to label ranking, a well-established machine learning task. We additionally analyze several key algorithmic components to gain insight into their operation. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search & Approximation Algorithms. Funding: This work was supported by the National Natural Science Foundation of China [Grant 72371157] and Shanghai Pujiang Programme [Grant 23PJC069]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0019) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0019). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
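The rank aggregation objective this abstract targets (a consensus ranking minimizing total pairwise disagreement with the voters) can be stated concretely with brute-force Kemeny aggregation. This sketch only illustrates the objective, not the paper's hybrid evolutionary algorithm, and is feasible only for a handful of items:

```python
from itertools import permutations

def kendall_tau_distance(r1, r2):
    """Number of discordant item pairs between two rankings (lists of items, best first)."""
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:  # ordered differently
                d += 1
    return d

def kemeny_aggregate(votes):
    """Exact consensus ranking minimizing total Kendall tau distance (tiny instances only)."""
    items = votes[0]
    best = min(permutations(items),
               key=lambda cand: sum(kendall_tau_distance(list(cand), v) for v in votes))
    return list(best)

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(kemeny_aggregate(votes))  # -> ['a', 'b', 'c']
```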
18. Ensemble Variance Reduction Methods for Stochastic Mixed-Integer Programming and their Application to the Stochastic Facility Location Problem.
- Author
-
Xu, Jiajun and Sen, Suvrajeet
- Subjects
- *
STOCHASTIC programming, *LINEAR programming, *DATA libraries, *BUDGET, *AIR forces, *SAMPLE size (Statistics) - Abstract
Sample average approximation (SAA), the standard approach to stochastic mixed-integer programming, does not provide guidance for cases with limited computational budgets. In such settings, variance reduction is critical in identifying good decisions. This paper explores two closely related ensemble methods to determine effective decisions with a probabilistic guarantee. (a) The first approach recommends a decision by coordinating aggregation in the space of decisions as well as aggregation of objective values. This combination of aggregation methods generalizes the bagging method and the "compromise decision" of stochastic linear programming. Combining these concepts, we propose a stopping rule that provides an upper bound on the probability of early termination. (b) The second approach applies efficient computational budget allocation for objective function evaluation and contributes to identifying the best solution with a predicted lower bound on the probability of correct selection. It also reduces the variance of the upper bound estimate at optimality. Furthermore, it adaptively selects the evaluation sample size. Both approaches provide approximately optimal solutions even in cases with a huge number of scenarios, especially when scenarios are generated by using oracles/simulators. Finally, we demonstrate the effectiveness of these methods via extensive computational results for "megascale" (extremely large scale) stochastic facility location problems. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work was supported by The Office of Naval Research [Grant N00014-20-1-2077] and the Air Force Office of Scientific Research [Grant FA9550-20-1-0006]. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0324) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0324). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
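The decision-space aggregation idea behind the "compromise decision" can be sketched on a toy newsvendor problem. This is an illustration under invented assumptions (a quantile-solvable newsvendor, plain averaging of replication decisions); the paper's ensemble methods additionally aggregate objective values and provide probabilistic guarantees, which this sketch omits:

```python
import random

def saa_newsvendor(sample, critical_ratio):
    """One SAA replication of a newsvendor: order the critical-ratio quantile of the sample."""
    s = sorted(sample)
    return s[min(int(critical_ratio * len(s)), len(s) - 1)]

def compromise_decision(batches, critical_ratio):
    """Solve each replication, then aggregate the decisions themselves (here: by averaging),
    rather than only averaging objective values as plain SAA bounds do."""
    decisions = [saa_newsvendor(b, critical_ratio) for b in batches]
    return sum(decisions) / len(decisions), decisions

random.seed(0)
batches = [[random.gauss(100, 20) for _ in range(50)] for _ in range(10)]
compromise, decisions = compromise_decision(batches, 0.8)
```

With independent batches, the spread of `decisions` indicates the replication-to-replication variance that the ensemble aggregation is meant to reduce.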
19. Seamless Multimodal Transportation Scheduling.
- Author
-
Raghunathan, Arvind U., Bergman, David, Hooker, John N., Serra, Thiago, and Kobori, Shingo
- Subjects
- *
CONTAINERIZATION, *TRANSPORTATION schedules, *TRAVEL time (Traffic engineering), *RIDESHARING services, *DATA libraries, *PUBLIC transit - Abstract
Ride-hailing services have expanded the role of shared mobility in passenger transportation systems, creating new markets and creative planning solutions for major urban centers. In this paper, we consider their use for the first-mile or last-mile passenger transportation in coordination with a mass transit service to provide a seamless multimodal transportation experience for the user. A system that provides passengers with predictable information on travel and waiting times in their commutes is immensely valuable. We envision that the passengers will inform the system of their desired travel and arrival windows so that the system can jointly optimize the schedules of passengers. The problem we study balances minimizing travel time and the number of trips taken by the last-mile vehicles, so that long-term planning, maintenance, and environmental impact are all taken into account. We focus on the case where the last-mile service aggregates passengers by destination. We show that this problem is NP-hard, and we propose a decision diagram–based branch-and-price decomposition model that can solve instances of real-world size (10,000 passengers spread over an hour, 50 last-mile destinations, 600 last-mile vehicles) in computational time (∼1 minute) that is orders of magnitude faster than the solution times of other methods appearing in the literature. Our experiments also indicate that aggregating passengers by destination on the last-mile service provides high-quality solutions to more general settings. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods and Analysis. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2019.0163) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2019.0163). 
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
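The aggregation-by-destination policy highlighted in the abstract can be sketched as a simple batching step. The helper below is a hypothetical illustration of that policy only, not the paper's decision diagram-based branch-and-price model:

```python
from collections import defaultdict

def batch_by_destination(passengers, capacity):
    """Group last-mile passengers by destination, then split each group into
    vehicle-sized trips. passengers: list of (passenger_id, destination)."""
    groups = defaultdict(list)
    for pid, dest in passengers:
        groups[dest].append(pid)
    trips = []
    for dest, group in groups.items():
        for i in range(0, len(group), capacity):  # one trip per capacity-sized chunk
            trips.append((dest, group[i:i + capacity]))
    return trips

passengers = [(1, "A"), (2, "B"), (3, "A"), (4, "A"), (5, "B")]
trips = batch_by_destination(passengers, capacity=2)
```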
20. Bayesian Network Models for PTSD Screening in Veterans.
- Author
-
Tan, Yi, Shenoy, Prakash P., Sherwood, Ben, Shenoy, Catherine, Gaddy, Melinda, and Oehlert, Mary E.
- Subjects
- *
BAYESIAN analysis, *MILITARY sexual trauma, *HEALTH facilities, *POST-traumatic stress disorder, *MEDICAL personnel, *TRAUMA registries - Abstract
The prediction of posttraumatic stress disorder (PTSD) has gained a lot of interest in clinical studies. Identifying patients with a high risk of PTSD can guide mental healthcare workers when making treatment decisions. The main goal of this paper is to propose several Bayesian network (BN) models to assess the probability that a veteran has PTSD when first visiting a U.S. Department of Veteran Affairs (VA) facility seeking medical care. The current practice is to use a five-question test called PC-PTSD-5. We aim to use the PC-PTSD-5 test, which is currently administered to most incoming new patients, and demographic information, military service history, and medical history. We construct a Bayes information criterion score-based BN, a group L2-regularized BN (GL2-regularized BN), and a naïve Bayes BN to assess the probability that a patient has PTSD. The GL2-regularized BN is a new method for constructing a BN motivated by some of the challenges of analyzing this data set. A secondary goal is to identify which features are important in predicting PTSD. We discover that the following features help compute the probability of PTSD: PC-PTSD-5, service-connected flag, combat flag, agent orange flag, military sexual trauma flag, traumatic brain injury, and age. History: Accepted by Ram Ramesh, Area Editor for Data Science & Machine Learning. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0174) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0174). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
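Of the three models compared in this abstract, the naïve Bayes BN is the simplest to sketch. The feature names below are loosely inspired by the abstract, but the data are fabricated toy values and add-one smoothing is an assumption of this sketch:

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Fit naive Bayes over binary features (dicts of 0/1) with add-one smoothing."""
    n = len(labels)
    prior = Counter(labels)
    counts = {0: defaultdict(int), 1: defaultdict(int)}
    for row, y in zip(rows, labels):
        for f, v in row.items():
            if v:
                counts[y][f] += 1
    return prior, counts, n

def predict_proba(model, row):
    """Posterior probability of label 1 given the binary features in row."""
    prior, counts, n = model
    logp = {}
    for y in (0, 1):
        lp = math.log((prior[y] + 1) / (n + 2))          # smoothed class prior
        for f, v in row.items():
            p1 = (counts[y][f] + 1) / (prior[y] + 2)     # smoothed P(feature=1 | y)
            lp += math.log(p1 if v else 1 - p1)
        logp[y] = lp
    m = max(logp.values())
    e0, e1 = math.exp(logp[0] - m), math.exp(logp[1] - m)
    return e1 / (e0 + e1)

rows = [{"screen_pos": 1, "combat": 1}, {"screen_pos": 1, "combat": 0},
        {"screen_pos": 0, "combat": 0}, {"screen_pos": 0, "combat": 1}]
labels = [1, 1, 0, 0]
model = train_nb(rows, labels)
```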
21. Exact Matrix Factorization Updates for Nonlinear Programming.
- Author
-
Escobedo, Adolfo R.
- Subjects
- *
MATRIX decomposition, *LINEAR programming, *DATA libraries, *MATHEMATICAL programming, *LINEAR equations, *NONLINEAR programming, *INTEGER programming - Abstract
LU and Cholesky matrix factorization algorithms are core subroutines used to solve systems of linear equations (SLEs) encountered when solving an optimization problem. Standard floating-point algorithms are highly efficient but remain susceptible to the accumulation of round-off errors, which can lead solvers to return feasibility and optimality claims that are actually invalid. This paper introduces a novel direct solution approach for solving sequences of closely related SLEs encountered in nonlinear programming efficiently and without round-off errors. Specifically, it introduces rank-one update algorithms for the round-off error–free factorization framework, a tool set built on integer-preserving arithmetic that has led to the development and implementation of extremely reliable subroutines for solving SLEs occurring in linear programming. The formal guarantees of the presented algorithms are established through the derivation of theoretical insights. Their advantages are supported with computational experiments, which demonstrate upward of 75× improvements over exact factorization runtimes on fully dense matrices with more than one million entries. A significant advantage of the featured integer-preserving framework is that the length of any matrix coefficient produced by its algorithms is bounded polynomially in the size of the inputs without having to resort to greatest common divisor operations, which are required by and thereby hinder an efficient implementation of exact rational arithmetic approaches. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0331) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0331). 
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
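For intuition about exact rank-one updates, here is a rational-arithmetic Sherman-Morrison update, which avoids round-off entirely at the price of growing fraction sizes. Note this is not the paper's method: the round-off-error-free (REF) framework uses integer-preserving factorizations precisely to keep entry lengths polynomially bounded without resorting to rational arithmetic and its GCD overhead:

```python
from fractions import Fraction

def sherman_morrison_update(Ainv, u, v):
    """Exact inverse of A + u v^T given Ainv, via the Sherman-Morrison formula,
    computed in exact rational arithmetic (no round-off)."""
    n = len(Ainv)
    Au = [sum(Ainv[i][k] * u[k] for k in range(n)) for i in range(n)]   # Ainv @ u
    vA = [sum(v[k] * Ainv[k][j] for k in range(n)) for j in range(n)]   # v^T @ Ainv
    denom = 1 + sum(v[k] * Au[k] for k in range(n))                     # 1 + v^T Ainv u
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)] for i in range(n)]

# Inverse of the identity, updated to the inverse of I + e1 e2^T = [[1, 1], [0, 1]].
A_inv = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
u = [Fraction(1), Fraction(0)]
v = [Fraction(0), Fraction(1)]
B_inv = sherman_morrison_update(A_inv, u, v)  # exactly [[1, -1], [0, 1]]
```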
22. Note from the Editor.
- Author
-
Smith, Alice E.
- Subjects
- *
ARTIFICIAL intelligence, *MATHEMATICAL logic, *CONSTRAINT programming - Abstract
The INFORMS Journal on Computing has announced the winners of the Test of Time Paper Award, covering the years 1997-2001. The winning paper, "Algorithms for Hybrid MILP/CP Models for a Class of Optimization Problems" by Vipul Jain and Ignacio E. Grossmann, is recognized for its early and influential application of logical Benders decomposition. The paper's contributions have stimulated significant research in logical Benders decomposition, branch and check, and branch and price and check, with applications in various domains. The authors express their gratitude for the recognition and discuss the motivation and methodology behind their research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
23. Learning Symbolic Expressions: Mixed-Integer Formulations, Cuts, and Heuristics.
- Author
-
Kim, Jongeun, Leyffer, Sven, and Balaprakash, Prasanna
- Subjects
- *
NONLINEAR operators, *DATA libraries, *SCIENTIFIC computing, *APPLIED mathematics, *SQUARE root, *HEURISTIC - Abstract
In this paper, we consider the problem of learning a regression function without assuming its functional form. This problem is referred to as symbolic regression. An expression tree is typically used to represent a solution function, which is determined by assigning operators and operands to the nodes. Cozad and Sahinidis propose a nonconvex mixed-integer nonlinear program (MINLP), in which binary variables are used to assign operators and nonlinear expressions are used to propagate data values through nonlinear operators, such as square, square root, and exponential. We extend this formulation by adding new cuts that improve the solution of this challenging MINLP. We also propose a heuristic that iteratively builds an expression tree by solving a restricted MINLP. We perform computational experiments and compare our approach with a mixed-integer program–based method and a neural network–based method from the literature. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work was supported by the Applied Mathematics activity within the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research [Grant DE-AC02-06CH11357]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0050) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0050). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
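The expression-tree view of symbolic regression can be made concrete by brute-force enumeration of tiny trees, a toy stand-in for the MINLP's operator-assignment binaries; the grammar, leaves, and depth bound below are arbitrary choices for illustration:

```python
import itertools

LEAVES = [("x", lambda x: x), ("1", lambda x: 1.0), ("2", lambda x: 2.0)]
BINOPS = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b)]

def candidates():
    """All depth-2 trees of the form (leaf op leaf) op leaf: every way of
    assigning operators and operands to the nodes of a fixed tiny tree."""
    for (n1, f1), (n2, f2), (n3, f3) in itertools.product(LEAVES, repeat=3):
        for (o1, g1), (o2, g2) in itertools.product(BINOPS, repeat=2):
            name = f"(({n1}{o1}{n2}){o2}{n3})"
            fn = (lambda g1, f1, f2, g2, f3:
                  lambda x: g2(g1(f1(x), f2(x)), f3(x)))(g1, f1, f2, g2, f3)
            yield name, fn

def fit(xs, ys):
    """Pick the tree minimizing squared error on the data; exhaustive search
    stands in here for solving the MINLP."""
    return min(candidates(),
               key=lambda c: sum((c[1](x) - y) ** 2 for x, y in zip(xs, ys)))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]   # generated by y = x*x + 1
name, f = fit(xs, ys)
print(name)  # -> ((x*x)+1)
```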
24. The Hot Spot Coverage Patrol Problem: Formulations and Solution Approaches.
- Author
-
Luo, Yuchen, Golden, Bruce, and Zhang, Rui
- Subjects
- *
POLICE vehicles, *DATA libraries, *SEARCH algorithms, *INTEGER programming, *CRIME statistics - Abstract
When designing a patrol route, it is often necessary to pay more attention to locations with high crime rates. In this paper, we study a patrol routing problem for a fleet of patrol cars patrolling a region with a high-crime neighborhood (HCN) consisting of multiple hot spots. Considering the disorder and chaos in the HCN, at least one patrol car is required in the HCN at any given time during the patrol. We call this routing problem the hot spot coverage patrol problem (HSCPP). In the HSCPP, the importance of a patrol location is quantified by a prize, and the prize is collected if a patrol car visits the location. Our objective is to maximize the sum of prizes collected by the patrol cars, obeying all operational requirements. We propose mathematical formulations and develop several solution approaches for the HSCPP. The global approach consists of finding the routing solution for all patrol cars with a single integer programming (IP) formulation. The partition approach involves first partitioning the region geographically and solving the routing problem in each subregion with two IP formulations. Next, we strengthen the partition approach by developing a column generation (CG) approach in which the initial columns of the CG approach are the solutions generated from the partition approach. We conduct a detailed computational case study using instances based on real crime data from Montgomery County, Maryland. To further understand the computational tractability of our solution approaches, we also perform a sensitivity analysis using synthetic instances under various scenarios. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search & Approximation Algorithms. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0192) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0192). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
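The prize-collecting flavor of the HSCPP can be illustrated with a greedy route builder. This hypothetical heuristic ignores the HCN coverage requirement and the multi-vehicle aspect, both central to the paper's IP and column-generation approaches; it only shows the prize/budget trade-off:

```python
def greedy_patrol(prizes, dist, start, budget):
    """Greedy prize-collecting route: repeatedly move to the reachable location
    with the best prize-per-distance ratio until the patrol-time budget runs out."""
    route, cur, t, total = [start], start, 0.0, 0.0
    unvisited = set(prizes) - {start}
    while True:
        best, best_ratio = None, 0.0
        for loc in unvisited:
            d = dist[cur][loc]
            if t + d <= budget and prizes[loc] / d > best_ratio:
                best, best_ratio = loc, prizes[loc] / d
        if best is None:            # nothing reachable within the budget
            return route, total
        t += dist[cur][best]
        cur = best
        route.append(best)
        unvisited.remove(best)
        total += prizes[best]

prizes = {"s": 0, "a": 10, "b": 4}
dist = {"s": {"a": 5.0, "b": 1.0}, "b": {"a": 5.0}, "a": {}}
route, total = greedy_patrol(prizes, dist, start="s", budget=6.0)
```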
25. PaPILO: A Parallel Presolving Library for Integer and Linear Optimization with Multiprecision Support.
- Author
-
Gleixner, Ambros, Gottwald, Leona, and Hoen, Alexander
- Subjects
- *
DATA libraries, *LINEAR programming, *SOFTWARE development tools, *COLUMNS, *GOVERNMENT agencies - Abstract
Presolving has become an essential component of modern mixed integer program (MIP) solvers, both in terms of computational performance and numerical robustness. In this paper, we present PaPILO, a new C++ header-only library that provides a large set of presolving routines for MIP and linear programming problems from the literature. The creation of PaPILO was motivated by the current lack of (a) solver-independent implementations that (b) exploit parallel hardware and (c) support multiprecision arithmetic. Traditionally, presolving is designed to be fast. Whenever necessary, its low computational overhead is usually achieved by strict working limits. PaPILO's parallelization framework aims at reducing the computational overhead also when presolving is executed more aggressively or is applied to large-scale problems. To rule out conflicts between parallel presolve reductions, PaPILO uses a transaction-based design. This helps to avoid both the memory-intensive allocation of multiple copies of the problem and special synchronization between presolvers. Additionally, the use of Intel's Threading Building Blocks library aids PaPILO in efficiently exploiting recursive parallelism within expensive presolving routines, such as probing, dominated columns, or constraint sparsification. We provide an overview of PaPILO's capabilities and insights into important design choices. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: This work has been financially supported by Research Campus MODAL, funded by the German Federal Ministry of Education and Research [Grants 05M14ZAM, 05M20ZBM], and the European Union's Horizon 2020 research and innovation programme under grant agreement No 773897 (plan4res). The content of this paper only reflects the author's views. The European Commission / Innovation and Networks Executive Agency is not responsible for any use that may be made of the information it contains. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0171), as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0171). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
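A single presolve reduction of the kind PaPILO implements (bound tightening from one row's minimal activity) can be sketched in a few lines. PaPILO itself is a parallel, multiprecision C++ library with many such routines and a transaction-based design, none of which this toy captures:

```python
def tighten_bounds(a, rhs, lb, ub):
    """Single-row bound tightening: given sum_i a[i]*x[i] <= rhs and lb <= x <= ub,
    use the minimal activity of the other terms to shrink each variable's bound."""
    n = len(a)
    lb, ub = lb[:], ub[:]
    for j in range(n):
        # smallest possible contribution of all terms except j
        min_rest = sum(a[i] * (lb[i] if a[i] > 0 else ub[i])
                       for i in range(n) if i != j)
        slack = rhs - min_rest
        if a[j] > 0:
            ub[j] = min(ub[j], slack / a[j])
        elif a[j] < 0:
            lb[j] = max(lb[j], slack / a[j])
    return lb, ub

# 2*x0 + 3*x1 <= 6 with 0 <= x <= 10 tightens to x0 <= 3, x1 <= 2.
lb, ub = tighten_bounds([2.0, 3.0], 6.0, [0.0, 0.0], [10.0, 10.0])
```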
26. Distributionally Robust Chance-Constrained p-Hub Center Problem.
- Author
-
Zhao, Yue, Chen, Zhi, and Zhang, Zhenzhen
- Subjects
- *
DATA libraries, *GAUSSIAN distribution, *LOCATION problems (Programming), *UNIVERSITY research - Abstract
The p-hub center problem is a fundamental model for the strategic design of hub location. It aims at constructing p fully interconnected hubs and links from nodes to hubs so that the longest path between any two nodes is minimized. Existing literature on the p-hub center problem under uncertainty often assumes a joint distribution of travel times, which is difficult (if not impossible) to elicit precisely. In this paper, we bridge the gap by investigating two distributionally robust chance-constrained models that cover, respectively, an existing stochastic one under independent normal distribution and one that is based on the sample average approximation approach as a special case. We derive deterministic reformulations as a mixed-integer program wherein a large number of constraints can be dynamically added via a constraint-generation approach to accelerate computation. Counterparts of our models in the emerging robust satisficing framework are also discussed. Extensive numerical experiments demonstrate the encouraging out-of-sample performance of our proposed models as well as the effectiveness of the constraint-generation approach. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Funding: This work is partially supported by the National Natural Science Foundation of China [Grants 72101187 and 72021002] and Early Career Scheme from the Hong Kong Research Grants Council General Research Fund [Grant 9043424] and NSFC/RGC Joint Research Scheme N_CityU105/21. Y. Zhao is supported by the Ministry of Education, Singapore, under its 2019 Academic Research Fund Tier 3 grant call [Grant MOE-2019-T3-1-010]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0113) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0113). 
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
27. Platoon Optimization Based on Truck Pairs.
- Author
-
Bhoopalam, Anirudh Kishore, Agatz, Niels, and Zuidwijk, Rob
- Subjects
- *
BASE pairs, *DATA libraries, *MOTOR vehicle driving, *ORGANIZATIONAL research, *POLYNOMIAL time algorithms - Abstract
Truck platooning technology allows trucks to drive at short headways to save fuel and associated emissions. However, fuel savings from platooning are relatively small, so forming platoons should be convenient and associated with minimum detours and delays. In this paper, we focus on developing optimization technology to form truck platoons. We formulate a mathematical program for the platoon routing problem with time windows (PRP-TW) based on a time–space network. We provide polynomial-time algorithms to solve special cases of PRP-TW with two-truck platoons. Based on these special cases, we build several fast heuristics. An extensive set of numerical experiments shows that our heuristics perform well. Moreover, we show that simple two-truck platoons already capture most of the potential savings of platooning. History: Accepted by Pascal van Hentenryck, Area Editor for Computational Modeling: Methods and Analysis. Funding: This work was supported by the Netherlands Organization for Scientific Research (NWO) as part of the Spatial and Transport Impacts of Automated Driving [Grant 438-15-161] project. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2020.0302) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2020.0302). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
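The finding that two-truck platoons capture most of the savings suggests viewing the problem as pairing trucks by pairwise savings. The greedy matcher below, on invented data, is only a conceptual sketch; the paper gives polynomial-time algorithms and a time-space-network formulation:

```python
def greedy_pairing(savings):
    """Greedily pair trucks by descending platoon savings.
    savings: dict mapping truck pairs (i, j) with i < j to the fuel saved
    if the two drive as a platoon; each truck joins at most one platoon."""
    pairs, used = [], set()
    for (i, j), s in sorted(savings.items(), key=lambda kv: -kv[1]):
        if s > 0 and i not in used and j not in used:
            pairs.append((i, j))
            used |= {i, j}
    return pairs

savings = {(0, 1): 5.0, (0, 2): 3.0, (1, 2): 4.0, (2, 3): 2.0}
print(greedy_pairing(savings))  # -> [(0, 1), (2, 3)]
```

A maximum-weight matching would solve this pairing subproblem exactly; greedy is shown only for brevity.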
28. RoutingBlocks: An Open-Source Python Package for Vehicle Routing Problems with Intermediate Stops.
- Author
-
Klein, Patrick S. and Schiffer, Maximilian
- Subjects
- *
METAHEURISTIC algorithms, *VEHICLE routing problem, *DATA structures, *SOFTWARE development tools, *C++ - Abstract
We introduce RoutingBlocks, a versatile open-source Python package designed to simplify the development of algorithms for vehicle routing problems with intermediate stops (VRPIS). The package offers a variety of modular algorithmic components and optimized data structures crafted specifically to address key challenges of VRPIS, such as a lack of exact constant-time move evaluations and difficult station visit decisions. By using a unified solution and instance representation that abstracts problem-specific behavior (for example, constraint checking, move evaluation, and cost computation) into well-defined interfaces, RoutingBlocks maintains a clear separation between algorithmic components and specific problem configurations, thus allowing the application of the same algorithm to a variety of problem settings. Leveraging an efficient C++ implementation for performance-critical core elements, RoutingBlocks combines the high performance of C++ with the user-friendliness and adaptability of Python, thereby streamlining the development of effective metaheuristic algorithms. As a result, researchers using RoutingBlocks can focus on their algorithms' core features, allocating more resources to innovation and advancement in the VRPIS domain. History: Accepted by Ted Ralphs, Area Editor for Software Tools. This paper has been accepted for the INFORMS Journal on Computing Special Issue on Software Tools for Vehicle Routing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. A Dedicated Pricing Algorithm to Solve a Large Family of Nurse Scheduling Problems with Branch-and-Price.
- Author
-
Legrain, Antoine and Omer, Jérémy
- Subjects
- *
FAMILY nurses, *DYNAMIC programming, *FAMILY nursing, *PRICES, *NURSES - Abstract
In this paper, we describe a branch-and-price algorithm for the personalized nurse scheduling problem. The variants that appear in the literature involve a large number of constraints that can be hard or soft, meaning that they can be violated at the price of a penalty. We capture the diversity of the constraints on individual schedules by seven generic constraints characterized by lower and upper bounds on a given quantity. The core of the column generation procedure is in the identification of individual schedules with minimum reduced cost. For this, we solve a shortest path problem with resource constraints (SPPRC) where several generic constraints are modeled as resource constraints. We then describe dominance rules adapted to the presence of both upper and lower bounds on the resources and leverage soft constraints to improve the dominance. We also describe several acceleration techniques for the solution of the SPPRC, and branching rules that fit the specificities of the problem. Our numerical experiments are based on the instances of three benchmarks of the literature including those of the two international nurse rostering competitions (INRC-I and INRC-II). Their objective is threefold: assess the dominance rules and the acceleration techniques, investigate the capacity of the algorithm to find provable optimal solutions of instances that are still open, and conduct a comparison with best published results. The most noticeable conclusion is that the improved solution of the SPPRC allows us to solve to optimality all the INRC-II instances with a four-week planning horizon, as well as 40% of the eight-week instances. History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms–Discrete. Supplemental Material: The online appendix is available at https://doi.org/10.1287/ijoc.2023.0019. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
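The SPPRC at the core of the pricing problem can be sketched as a label-setting algorithm with a single resource and the classic dominance rule: keep a label only if no other label at the same node is at least as good in both cost and resource use. The paper's generic bounded constraints and soft-constraint-aware dominance are much richer; this toy, with invented data, shows only the mechanism:

```python
import heapq

def spprc(arcs, source, sink, cap):
    """Label-setting shortest path with one resource constraint.
    arcs: dict node -> list of (succ, cost, resource). Returns the best
    (cost, resource) label reaching sink within the resource cap, or None."""
    labels = {source: [(0.0, 0.0)]}
    heap = [(0.0, 0.0, source)]
    best = None
    while heap:
        cost, res, node = heapq.heappop(heap)
        if node == sink:
            best = (cost, res) if best is None else min(best, (cost, res))
            continue
        for succ, c, r in arcs.get(node, []):
            nc, nr = cost + c, res + r
            if nr > cap:                       # resource-infeasible extension
                continue
            bucket = labels.setdefault(succ, [])
            if any(lc <= nc and lr <= nr for lc, lr in bucket):
                continue                       # dominated by an existing label
            # discard labels the new one dominates, then keep it
            bucket[:] = [(lc, lr) for lc, lr in bucket if not (nc <= lc and nr <= lr)]
            bucket.append((nc, nr))
            heapq.heappush(heap, (nc, nr, succ))
    return best

arcs = {"s": [("a", 1.0, 3.0), ("b", 4.0, 1.0)],
        "a": [("t", 1.0, 3.0)],
        "b": [("t", 1.0, 1.0)]}
```

With a tight cap the cheap path s-a-t becomes resource-infeasible and the labeling falls back to s-b-t.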
30. An Approximate Dynamic Programming Approach to Dynamic Stochastic Matching.
- Author
-
You, Fan and Vossen, Thomas
- Subjects
- *
STOCHASTIC learning models, *KIDNEY exchange, *DATA libraries, *STREAMING video & television, *DYNAMIC programming, *STOCHASTIC programming - Abstract
Dynamic stochastic matching problems arise in a variety of recent applications, ranging from ridesharing and online video games to kidney exchange. Such problems are naturally formulated as Markov decision processes (MDPs) that are, however, intractable in general. To improve tractability, we investigate the linear programming-based approach to approximate dynamic programming. This approach can provide both feasible control policies and bounds on the MDPs' optimal policy value, which can be used to establish optimality gaps. However, the approximate linear programs (ALPs) resulting from this approach can often be difficult to solve. To address this computational challenge, we derive novel ALP reformulations that can be used for a broad class of dynamic stochastic matching problems that incorporate, among others, possible match failures and certain restrictions on feasible matchings. We show that these ALP reformulations can be solved efficiently and applied to a broad class of dynamic matching problems. In addition, our numerical results indicate that our ALP reformulations can produce tight bounds that allow us to establish near-optimal policy performance for a broad set of problem instances. Thus, ALP reformulations can present an attractive alternative for applications that involve dynamic stochastic matching. History: Accepted by Nicola Secomandi, Area Editor for Stochastic Models & Reinforcement Learning. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0203) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0203). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. An Exact Algorithm for the Quadratic Multiknapsack Problem with an Application to Event Seating.
- Author
-
Bergman, David
- Subjects
- *
KNAPSACK problems , *ALGORITHMS - Abstract
Knapsack problems play a pivotal role in the operations research literature, with various generalizations proposed and studied over the last century. Of recent interest is the quadratic multiknapsack problem (QMKP). Despite a plethora of heuristics, no exact methods for the QMKP have been published in the literature. This paper presents an exact branch-and-price algorithm for the QMKP. Experimental results indicate that the proposed algorithm is far superior, both in terms of solution times and objective function bounds, to state-of-the-art optimization technology solving a standard encoding of the problem. In addition to the algorithmic contribution, this paper studies the optimization problem of seating attendees at events, an operational challenge faced by event organizers. An optimization model for table event seating is shown to be closely related to the QMKP, and computational testing indicates that the proposed algorithm is particularly well suited for this application. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
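The QMKP objective described in the abstract above is easy to state in code. The following is our own toy evaluator for illustration only (function names are ours; this is not the paper's branch-and-price algorithm): it scores an assignment of items to knapsacks and checks capacity feasibility.

```python
def qmkp_value(assign, profit, pair_profit):
    """Objective of a quadratic multiknapsack solution: item profits plus
    pairwise profits for every pair of items placed in the same knapsack.
    assign[i] is item i's knapsack index, or -1 if the item is left out."""
    total = 0.0
    n = len(assign)
    for i in range(n):
        if assign[i] < 0:
            continue
        total += profit[i]
        for j in range(i + 1, n):
            if assign[j] == assign[i]:
                total += pair_profit[i][j]
    return total

def qmkp_feasible(assign, weight, capacity):
    """Check that every knapsack respects its capacity."""
    load = [0.0] * len(capacity)
    for i, k in enumerate(assign):
        if k >= 0:
            load[k] += weight[i]
    return all(load[k] <= capacity[k] for k in range(len(capacity)))
```

In the event-seating application, items correspond to attendees, knapsacks to tables, and the pairwise profits to the benefit of seating two attendees together.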
32. An Interior Point–Inspired Algorithm for Linear Programs Arising in Discrete Optimal Transport.
- Author
-
Zanetti, Filippo and Gondzio, Jacek
- Subjects
- *
INTERIOR-point methods , *SCHUR complement , *DATA libraries , *ALGORITHMS , *SPARSE approximations - Abstract
Discrete optimal transport problems give rise to very large linear programs (LPs) with a particular structure of the constraint matrix. In this paper, we present a hybrid algorithm that mixes an interior point method (IPM) and column generation, specialized for the LP originating from the Kantorovich optimal transport problem. Knowing that optimal solutions of such problems display a high degree of sparsity, we propose a column generation–like technique to force all intermediate iterates to be as sparse as possible. The algorithm is implemented nearly matrix-free. Indeed, most of the computations avoid forming the huge matrices involved and solve the Newton system using only a much smaller Schur complement of the normal equations. We prove theoretical results about the sparsity pattern of the optimal solution, exploiting the graph structure of the underlying problem. We use these results to mix iterative and direct linear solvers efficiently in a way that avoids producing preconditioners or factorizations with excessive fill-in and at the same time guaranteeing a low number of conjugate gradient iterations. We compare the proposed method with two state-of-the-art solvers and show that it can compete with the best network optimization tools in terms of computational time and memory use. We perform experiments with problems reaching more than four billion variables and demonstrate the robustness of the proposed method. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Funding: F. Zanetti received funding from the University of Edinburgh, in the form of a PhD scholarship. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0184) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0184). 
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
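The Kantorovich LP that the entry above solves at billion-variable scale can be written down directly for a tiny instance. The sketch below uses SciPy's generic LP solver purely to show the problem structure (marginal constraints on the rows and columns of the transport plan); it is nothing like the paper's specialized IPM/column-generation hybrid, and the data are our own toy numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny Kantorovich optimal transport LP: move mass a (sources) to b (sinks)
# at cost C[i, j] per unit. Variables are the n*m entries of the plan P.
a = np.array([0.4, 0.6])          # source marginal
b = np.array([0.5, 0.5])          # sink marginal
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # unit transport costs

n, m = C.shape
# Equality constraints: row sums of P equal a, column sums equal b.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0          # sum_j P[i, j] = a[i]
for j in range(m):
    A_eq[n + j, j::m] = 1.0                   # sum_i P[i, j] = b[j]
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
P = res.x.reshape(n, m)
```

The paper's observation that optimal plans are highly sparse is already visible here: the optimal P has only three nonzero entries out of four.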
33. A Cost-Effective Sequential Route Recommender System for Taxi Drivers.
- Author
-
Liu, Junming, Teng, Mingfei, Chen, Weiwei, and Xiong, Hui
- Subjects
- *
RECOMMENDER systems , *DEEP learning , *OPTIMIZATION algorithms , *DATA libraries , *SEARCH algorithms , *TAXICABS - Abstract
This paper develops a cost-effective sequential route recommender system to provide real-time routing recommendations for vacant taxis searching for the next passenger. We propose a prediction-and-optimization framework to recommend the searching route that maximizes the expected profit of the next successful passenger pickup based on the dynamic taxi demand-supply distribution. Specifically, this system features a deep learning-based predictor that dynamically predicts the passenger pickup probability on a road segment and a recursive searching algorithm that recommends the optimal searching route. The predictor integrates a graph convolution network (GCN) to capture the spatial distribution and a long short-term memory (LSTM) to capture the temporal dynamics of taxi demand and supply. The GCN-LSTM model can accurately predict the pickup probability on a road segment with the consideration of potential taxi oversupply. Then, the dynamic distribution of pickup probability is fed into the route optimization algorithm to recommend the optimal searching routes sequentially as route inquiries emerge in the system. The recursion tree-based route optimization algorithm can significantly reduce the computational time and provide the optimal routes within seconds for real-time implementation. Finally, extensive experiments using Beijing Taxi GPS data demonstrate the effectiveness and efficiency of the proposed recommender system. History: Accepted by Ram Ramesh, Area Editor for Data Science and Machine Learning. Funding: This work was partially supported by the Hong Kong Research Grants Council [Grants CityU 21500220, CityU 11504322] and the National Natural Science Foundation of China [Grant 72201222]. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0112) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0112). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF

34. Perturbation-Based Thresholding Search for Packing Equal Circles and Spheres.
- Author
-
Lai, Xiangjing, Hao, Jin-Kao, Xiao, Renbin, and Glover, Fred
- Subjects
- *
SPHERE packings , *APPROXIMATION algorithms , *SPHERES , *CONSTRAINED optimization , *THRESHOLDING algorithms , *CIRCLE , *SEARCH algorithms - Abstract
This paper presents an effective perturbation-based thresholding search for two popular and challenging packing problems with minimal containers: packing N identical circles in a square and packing N identical spheres in a cube. Following the penalty function approach, we handle these constrained optimization problems by solving a series of unconstrained optimization subproblems with fixed containers. The proposed algorithm relies on a two-phase search strategy that combines a thresholding search method, reinforced by two general-purpose perturbation operators, and a container adjustment method. The performance of the algorithm is assessed relative to a large number of benchmark instances widely studied in the literature. Computational results show a high performance of the algorithm on both problems compared with the state-of-the-art results. For circle packing, the algorithm improves 156 best-known results (new upper bounds) in the range of 2 ≤ N ≤ 400 and matches 242 other best-known results. For sphere packing, the algorithm improves 66 best-known results in the range of 2 ≤ N ≤ 200, while matching the best-known results for 124 other instances. Experimental analyses are conducted to shed light on the main search ingredients of the proposed algorithm, consisting of the two-phase search strategy, the mixed perturbation, and the parameters. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search & Approximation Algorithms. Funding: This work was supported by the National Natural Science Foundation of China [Grants 61703213 and 61933005]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1290) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0004) at (http://dx.doi.org/10.5281/zenodo.7579558). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
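The penalty-function idea in the packing entry above can be illustrated with a minimal sketch (our own simplified formulation, not the authors' exact penalty): for N equal circles of radius r in the unit square, penalize squared pairwise overlap depths and squared wall violations, so a packing is feasible exactly when the penalty is zero.

```python
import numpy as np

def packing_penalty(centers, r):
    """Penalty for equal circles of radius r in the unit square:
    squared circle-circle overlap depths plus squared boundary
    violations. Zero iff the configuration is a feasible packing."""
    c = np.asarray(centers, dtype=float)
    n = len(c)
    p = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(c[i] - c[j])
            p += max(0.0, 2 * r - d) ** 2          # circle-circle overlap
        for k in range(2):                          # wall violations per axis
            p += max(0.0, r - c[i][k]) ** 2
            p += max(0.0, c[i][k] + r - 1.0) ** 2
    return p
```

Minimizing such a penalty for a fixed container, then shrinking the container, mirrors the "series of unconstrained subproblems with fixed containers" strategy the abstract describes.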
35. An Ensemble Learning Approach with Gradient Resampling for Class-Imbalance Problems.
- Author
-
Zhao, Hongke, Zhao, Chuang, Zhang, Xi, Liu, Nanlin, Zhu, Hengshu, Liu, Qi, and Xiong, Hui
- Subjects
- *
MAJORITIES , *BOOSTING algorithms , *EVIDENCE gaps , *LEARNING strategies , *PLURALITY voting , *MACHINE learning - Abstract
Imbalanced classification arises in many real-world applications and has been extensively studied. Most existing algorithms alleviate the imbalance by sampling or by guiding ensemble learners with penalties. The combination of ensemble learning and class-level sampling strategies has achieved great progress. In practice, however, specific hard examples have little benefit for model learning and can even degrade performance. From the viewpoint of the classification difficulty of individual samples, one important motivation is to design algorithms that equip different samples with progressive learning. Unfortunately, how to configure the sampling and learning strategies under ensemble principles at the sample level remains a research gap. In this paper, we propose a new view from the sample level rather than the class level of existing studies. We design an ensemble approach coupled with sample-level gradient resampling, namely, balanced cascade with filters (BCWF). As a preliminary exploration, we first design a hard-example mining algorithm to explore the gradient distribution of the classification difficulty of samples and to identify the hard examples. Specifically, BCWF uses an under-sampling strategy in a boosting manner to train T predictive classifiers and to reidentify hard examples. In BCWF, moreover, we design two types of filters: the first variant is assembled with a hard filter (BCWF_h), whereas the second is assembled with a soft filter (BCWF_s). In each round of boosting, BCWF_h strictly removes a gradient/set of the hardest examples from both classes, whereas BCWF_s simultaneously removes a larger number of harder and easy examples for final balanced-class retention. Consequently, the well-trained T predictive classifiers can be used with two ensemble voting strategies: average probability and majority vote.
To evaluate the proposed approach, we conduct intensive experiments on 10 benchmark data sets and apply our algorithms to perform default user detection on a real-world peer to peer lending data set. The experimental results fully demonstrate the effectiveness and the managerial implications of our approach when compared with 11 competitive algorithms. History: Accepted by Ram Ramesh, Area Editor for Data Science & Machine Learning. Funding: This work was supported by the National Natural Science Foundation of China [Grants 72101176, 71722005, and 72241432], the National Key R&D program of China [Grant 2020YFA0908600] and the Natural Science Foundation of Tianjin City [Grant 18JCJQJC45900]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1274) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0104) at (http://dx.doi.org/10.5281/zenodo.6360996). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. A Prediction-Based Approach for Online Dynamic Appointment Scheduling: A Case Study in Radiotherapy Treatment.
- Author
-
Pham, Tu San, Legrain, Antoine, De Causmaecker, Patrick, and Rousseau, Louis-Martin
- Subjects
- *
MEDICAL care wait times , *TREATMENT delay (Medicine) , *RESOURCE allocation , *SEARCH algorithms , *SCHEDULING - Abstract
Patient scheduling is a difficult task involving stochastic factors, such as the unknown arrival times of patients. Similarly, the scheduling of radiotherapy for cancer treatments needs to handle patients with different urgency levels when allocating resources. High-priority patients may arrive at any time, and there must be resources available to accommodate them. A common solution is to reserve a flat percentage of treatment capacity for emergency patients. However, this solution can result in overdue treatments for urgent patients, a failure to fully exploit treatment capacity, and delayed treatments for low-priority patients. This problem is especially severe in large and crowded hospitals. In this paper, we propose a prediction-based approach for online dynamic radiotherapy scheduling that dynamically adapts the present scheduling decision based on each incoming patient and the current allocation of resources. Our approach is based on a regression model trained to recognize the links between patients' arrival patterns and their ideal waiting time in optimal off-line solutions when all future arrivals are known in advance. When our prediction-based approach is compared with flat-reservation policies, it does a better job of preventing overdue treatments for emergency patients and also maintains comparable waiting times for the other patients. We also demonstrate how our proposed approach supports explainability and interpretability in scheduling decisions using Shapley additive explanation values. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search & Approximation Algorithms. Funding: Mitacs Accélération IT26995 and Canada Research Chair in Analytics and Logistics in Healthcare (HANALOG). 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1289) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0342) at (http://dx.doi.org/10.5281/zenodo.7579533). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. Convex and Nonconvex Risk-Based Linear Regression at Scale.
- Author
-
Wu, Can, Cui, Ying, Li, Donghui, and Sun, Defeng
- Subjects
- *
VALUE at risk , *RESEARCH grants , *NONCONVEX programming , *SELF-tuning controllers - Abstract
The value at risk (VaR) and the conditional value at risk (CVaR) are two popular risk measures to hedge against the uncertainty of data. In this paper, we provide a computational toolbox for solving high-dimensional sparse linear regression problems under either VaR or CVaR measures, the former being nonconvex and the latter convex. Unlike the empirical risk (neutral) minimization models in which the overall losses are decomposable across data, the aforementioned risk-sensitive models have nonseparable objective functions, so that typical first-order algorithms are not easy to scale. We address this scaling issue by adopting a semismooth Newton-based proximal augmented Lagrangian method for the convex CVaR linear regression problem. The matrix structures of the Newton systems are carefully explored to reduce the computational cost per iteration. The method is further embedded in a majorization–minimization algorithm as a subroutine to tackle the nonconvex VaR-based regression problem. We also discuss an adaptive sieving strategy to iteratively guess and adjust the effective problem dimension, which is particularly useful when a solution path associated with a sequence of tuning parameters is needed. Extensive numerical experiments on both synthetic and real data demonstrate the effectiveness of our proposed methods. In particular, they are about 53 times faster than the commercial package Gurobi for the CVaR-based sparse linear regression with 4,265,669 features and 16,087 observations. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms–Continuous. Funding: This work was supported in part by the NSF, the Division of Computing and Communication Foundations [Grant 2153352], the National Natural Science Foundation of China [Grant 12271187], and the Hong Kong Research Grant Council [Grant 15304019].
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1282) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0012) at (http://dx.doi.org/10.5281/zenodo.7483279). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
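The CVaR measure in the entry above has a well-known variational form (Rockafellar and Uryasev) that makes it easy to compute empirically: CVaR at level alpha is the minimum over t of t + E[max(L - t, 0)]/(1 - alpha), attained at t equal to the alpha-quantile (VaR). A minimal sketch with toy data, not the paper's large-scale solver:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR via the Rockafellar-Uryasev formula:
    CVaR_a(L) = min_t  t + mean(max(L - t, 0)) / (1 - a),
    with the minimum attained at t = VaR_a (the alpha-quantile)."""
    t = np.quantile(losses, alpha)                 # VaR_alpha
    return t + np.maximum(losses - t, 0.0).mean() / (1.0 - alpha)

losses = np.array([1.0, 2.0, 3.0, 4.0])
# With alpha = 0.75 the tail is the worst 25% of outcomes, i.e. {4.0}.
cvar = empirical_cvar(losses, 0.75)
```

In the paper's regression setting, the losses are the absolute or squared residuals of the linear model, which is what makes the objective nonseparable across data points.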
38. A Sequential Follower Refinement Algorithm for Robust Surgery Scheduling.
- Author
-
Bansal, Ankit, Richard, Jean-Philippe, Berg, Bjorn P., and Huang, Yu-Li
- Subjects
- *
DATA libraries , *ACADEMIC medical centers , *ROBUST optimization , *ALGORITHMS , *SCHEDULING - Abstract
An algorithm for the two-stage robust optimization surgery-to-operating room allocation problem is presented. The second-stage problem is an integer linear program whose convex hull is approximated using three types of specialized valid inequalities and Chvátal-Gomory cuts. The resulting linear relaxation of the second-stage problem is then dualized and integrated into the first-stage problem. The resulting mixed integer linear program, which is an approximation of the original problem, is then solved using a commercial solver. If the solution of this model is not optimal for the second-stage problem, valid inequalities for the second-stage problem are generated, yielding a type of column-generation-based approach that we refer to as the sequential follower refinement (SFR) algorithm. Data from an academic medical center are used to compare the computational performance of SFR with the constraint and column generation (C&CG) algorithm, which is the only exact approach that has been specifically applied to this problem in the literature. An extensive numerical study of SFR and its computational characteristics is presented, showing that SFR yields better-quality solutions compared with C&CG, even as the termination criterion of SFR is met much sooner, especially for problems involving a higher number of surgeries. History: Accepted by Paul Brooks, Area Editor for Applications in Biology, Medicine, & Healthcare. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0191) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0191). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Solving Sparse Separable Bilinear Programs Using Lifted Bilinear Cover Inequalities.
- Author
-
Gu, Xiaoyi, Dey, Santanu S., and Richard, Jean-Philippe P.
- Subjects
- *
SEMIDEFINITE programming , *BILINEAR forms - Abstract
Recently, a class of second-order cone-representable convex inequalities, called lifted bilinear cover inequalities, was introduced; these inequalities are valid for a set described by a separable bilinear constraint together with bounds on the variables. In this paper, we study the computational potential of these inequalities for separable bilinear optimization problems. We first prove that the semidefinite programming relaxation provides no benefit over the McCormick relaxation for such problems. We then design a simple randomized separation heuristic for lifted bilinear cover inequalities. In our computational experiments, we separate many rounds of these inequalities starting from the McCormick relaxation of instances in which each constraint is a separable bilinear constraint set. We demonstrate that there is a significant improvement in the performance of a state-of-the-art global solver, in terms of gap closed, when these inequalities are added at the root node compared with when they are not. History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms – Discrete. Funding: S. S. Dey gratefully acknowledges the support by the Office of Naval Research [Grant N000141912323]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
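The McCormick relaxation referenced in the entry above has a compact closed form worth recalling. For a single bilinear term $w = xy$ with bounds $l_x \le x \le u_x$ and $l_y \le y \le u_y$, the four McCormick inequalities are:

```latex
\begin{aligned}
w &\ge l_y\, x + l_x\, y - l_x l_y, \\
w &\ge u_y\, x + u_x\, y - u_x u_y, \\
w &\le u_y\, x + l_x\, y - l_x u_y, \\
w &\le l_y\, x + u_x\, y - u_x l_y.
\end{aligned}
```

The lifted bilinear cover inequalities studied in the paper are additional valid inequalities layered on top of this basic relaxation.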
40. Product Redesign and Innovation Based on Online Reviews: A Multistage Combined Search Method.
- Author
-
Qin, Jindong, Zheng, Pan, and Wang, Xiaojun
- Subjects
- *
CONSUMERS' reviews , *USER-generated content , *FEATURE extraction , *DATA libraries , *GRAPH algorithms , *NEW product development - Abstract
Online reviews published on the e-commerce platform provide a new source of information for designers to develop new products. Past research on new product development (NPD) using user-generated textual data commonly focused solely on extracting and identifying product features to be improved. However, the competitive analysis of product features and more specific improvement strategies have not been explored deeply. This study fully uses the rich semantic attributes of online review texts and proposes a novel online review–driven modeling framework. This new approach can extract fine-grained product features; calculate their importance, performance, and competitiveness; and build a competitiveness network for each feature. As a result, decision making is assisted, and specific product improvement strategies are developed for NPD beyond existing modeling approaches in this domain. Specifically, online reviews are first classified into redesign- and innovation-related themes using a multiple embedding model, and the redesign and innovation product features can be extracted accordingly using a mutual information multilevel feature extraction method. Moreover, the importance and performance of features are calculated, and the competitiveness and competitiveness network of features are obtained through a personalized unidirectional bipartite graph algorithm. Finally, the importance performance competitiveness analysis plot is constructed, and the product improvement strategy is developed via a multistage combined search algorithm. Case studies and comparative experiments show the effectiveness of the proposed method and provide novel business insights for stakeholders, such as product providers, managers, and designers. History: Accepted by Ram Ramesh, Area Editor for Data Science and Machine Learning. 
Funding: This work was supported by the National Natural Science Foundation of China [Project 72071151] and the Natural Science Foundation of Hubei Province, China [Grant 2023CFB712]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0333) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0333). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Sparsity-Exploiting Distributed Projections onto a Simplex.
- Author
-
Dai, Yongzheng and Chen, Chen
- Subjects
- *
PARALLEL algorithms , *DATA libraries , *ALGORITHMS - Abstract
Projecting a vector onto a simplex is a well-studied problem that arises in a wide range of optimization problems. Numerous algorithms have been proposed for determining the projection; however, the primary focus of the literature is on serial algorithms. We present a parallel method that decomposes the input vector and distributes it across multiple processors for local projection. Our method is especially effective when the resulting projection is highly sparse, which is the case, for instance, in large-scale problems with independent and identically distributed (i.i.d.) entries. Moreover, the method can be adapted to parallelize a broad range of serial algorithms from the literature. We fill in theoretical gaps in serial algorithm analysis and develop similar results for our parallel analogues. Numerical experiments conducted on a wide range of large-scale instances, both real world and simulated, demonstrate the practical effectiveness of the method. History: Accepted by Antonio Frangioni, Area Editor for Design & Analysis of Algorithms—Continuous. Funding: This work was supported by the Office of Naval Research [Grant N00014-23-1-2632]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0328) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0328). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
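The serial baseline that the parallel method above builds on is the classic O(n log n) sort-and-threshold projection: the projection onto the probability simplex is max(v - tau, 0) for a threshold tau computed from the sorted vector. A minimal serial sketch (not the paper's distributed algorithm):

```python
import numpy as np

def project_to_simplex(v):
    """Project v onto {x : x >= 0, sum(x) = 1} via sort-and-threshold.
    The projection is max(v - tau, 0), with tau chosen so the result
    sums to one; components below tau are zeroed out, which is where
    the sparsity exploited by the paper comes from."""
    u = np.sort(v)[::-1]                           # sort descending
    css = np.cumsum(u) - 1.0
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * k > css)[0][-1]           # last index kept positive
    tau = css[rho] / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

x = project_to_simplex(np.array([0.9, 0.8, -1.0]))
```

Note how the small entry is zeroed: highly sparse projections are exactly the case in which the paper's decompose-and-distribute strategy is most effective.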
42. Minimizing the Weighted Number of Tardy Jobs via (max,+)-Convolutions.
- Author
-
Hermelin, Danny, Molter, Hendrik, and Shabtay, Dvir
- Subjects
- *
POLYNOMIAL time algorithms , *ONLINE algorithms , *ALGORITHMS - Abstract
In this paper, we consider the fundamental scheduling problem of minimizing the weighted number of tardy jobs on a single machine. We present a simple pseudo-polynomial-time algorithm for this problem that improves upon the classical Lawler and Moore algorithm from the late 1960s under certain scenarios and parameter settings. Our algorithm uses (max,+)-convolutions as its main tool, exploiting recent improved algorithms for computing such convolutions, and obtains several different running times depending on the specific improvement used. We also provide a related lower bound for the problem under a variant of the well-known Strong Exponential Time Hypothesis (SETH). History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms – Discrete. Funding: This work was supported by the Israel Science Foundation [Grant 1070/20]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
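The (max,+)-convolution primitive named in the entry above is easy to state: c[k] = max over i+j=k of a[i] + b[j]. The naive quadratic version below is the building block; the paper's running-time improvements come from substituting faster convolution subroutines.

```python
import numpy as np

def max_plus_conv(a, b):
    """Naive O(n*m) (max,+)-convolution:
    c[k] = max_{i + j = k} a[i] + b[j]."""
    n, m = len(a), len(b)
    c = np.full(n + m - 1, -np.inf)
    for i in range(n):
        for j in range(m):
            c[i + j] = max(c[i + j], a[i] + b[j])
    return c

c = max_plus_conv([0.0, 2.0], [1.0, 3.0])
```

Here c[1], for instance, is max(a[0] + b[1], a[1] + b[0]) = max(3, 3) = 3.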
43. Supervised ML for Solving the GI/GI/1 Queue.
- Author
-
Baron, Opher, Krass, Dmitry, Senderovich, Arik, and Sherzer, Eliran
- Subjects
- *
DATA libraries , *SUPERVISED learning , *QUEUING theory , *DATA science , *MACHINE learning , *ANALYTICAL solutions - Abstract
We apply supervised learning to a general problem in queueing theory: using a neural net, we develop a fast and accurate predictor of the stationary system-length distribution of a GI/GI/1 queue—a fundamental queueing model for which no analytical solutions are available. To this end, we must overcome three main challenges: (i) generating a large library of training instances that cover a wide range of arbitrary interarrival and service time distributions, (ii) labeling the training instances, and (iii) providing continuous arrival and service distributions as inputs to the neural net. To overcome (i), we develop an algorithm to sample phase-type interarrival and service time distributions with complex transition structures. We demonstrate that our distribution-generating algorithm indeed covers a wide range of possible positive-valued distributions. For (ii), we label our training instances by solving a quasi-birth-and-death (QBD) approximation of the PH/PH/1 queue (with phase-type arrival and service processes) and using its stationary distribution as the labels for the training data. For (iii), we find that using only the first five moments of both the interarrival and service time distributions as inputs is sufficient to train the neural net. Our empirical results show that our neural model can estimate the stationary behavior of the GI/GI/1 queue, far exceeding other available methods in terms of both accuracy and runtimes. History: Accepted by Ram Ramesh, Area Editor for Data Science and Machine Learning. Funding: O. Baron received financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) [Grant 458051]. D. Krass received financial support from NSERC [Grant 458395]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0263) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0263).
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
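As a rough external check on GI/GI/1 predictors like the one above, the waiting time can always be simulated with the Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}). The sketch below is our own Monte Carlo baseline, not the paper's method; it uses M/M/1 parameters (Poisson arrivals at rate 1, exponential service with mean 0.5) so the exact stationary mean wait, rho/(mu - lambda) = 0.5, is known in closed form.

```python
import numpy as np

# Lindley recursion for the GI/GI/1 waiting time:
# W_{n+1} = max(0, W_n + S_n - A_{n+1}), with S_n the n-th service time
# and A_{n+1} the next interarrival time.
rng = np.random.default_rng(0)
n = 100_000
arrivals = rng.exponential(1.0, n)      # interarrival times (lambda = 1)
services = rng.exponential(0.5, n)      # service times (mu = 2, rho = 0.5)

w = np.empty(n)
w[0] = 0.0
for i in range(1, n):
    w[i] = max(0.0, w[i - 1] + services[i - 1] - arrivals[i])

mean_wait = w[n // 2:].mean()           # discard the first half as warm-up
```

For general (non-exponential) interarrival and service distributions no such closed form exists, which is exactly the gap the paper's neural predictor targets.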
44. Solving a Class of Cut-Generating Linear Programs via Machine Learning.
- Author
-
Rajabalizadeh, Atefeh and Davarnia, Danial
- Subjects
- *
VECTOR valued functions , *LOGISTIC regression analysis - Abstract
Cut-generating linear programs (CGLPs) play a key role as a separation oracle to produce valid inequalities for the feasible region of mixed-integer programs. When incorporated inside branch-and-bound, the cutting planes obtained from CGLPs help to tighten relaxations and improve dual bounds. However, running the CGLPs at the nodes of the branch-and-bound tree is computationally cumbersome due to the large number of node candidates and the lack of a priori knowledge on which nodes admit useful cutting planes. As a result, CGLPs are often avoided at default settings of branch-and-cut algorithms despite their potential impact on improving dual bounds. In this paper, we propose a novel framework based on machine learning to approximate the optimal value of a CGLP class that determines whether a cutting plane can be generated at a node of the branch-and-bound tree. Translating the CGLP as an indicator function of the objective function vector, we show that it can be approximated through conventional data classification techniques. We provide a systematic procedure to efficiently generate training data sets for the corresponding classification problem based on the CGLP structure. We conduct computational experiments on benchmark instances using classification methods such as logistic regression. These results suggest that the approximate CGLP obtained from classification can improve the solution time compared with that of conventional cutting plane methods. Our proposed framework can be efficiently applied to a large number of nodes in the branch-and-bound tree to identify the best candidates for adding a cut. History: Accepted by Andrea Lodi, Area Editor for Design & Analysis of Algorithms—Discrete. Supplemental Material: The e-companion is available at https://doi.org/10.1287/ijoc.2022.0241. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
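The classification step in the entry above can be sketched in a few lines. The data, feature choice, and function names below are our own toy stand-ins (the paper classifies CGLP objective vectors with conventional methods such as logistic regression); this is a plain numpy gradient-descent logistic regression, not the authors' pipeline.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=2000):
    """Binary logistic regression fit by full-batch gradient descent
    on the average log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid probabilities
        g = p - y                                   # per-sample log-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy separable data standing in for CGLP features and cut/no-cut labels.
X = np.array([[0.0], [0.2], [0.8], [1.0]])
y = np.array([0, 0, 1, 1])
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

In the paper's setting, a trained classifier like this replaces solving the CGLP at each branch-and-bound node, flagging only the nodes likely to yield a violated cut.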
45. Learning Equilibria in Asymmetric Auction Games.
- Author
-
Bichler, Martin, Kohring, Nils, and Heidekrüger, Stefan
- Subjects
- *
NONLINEAR differential equations , *PARTIAL differential equations , *AUCTIONS , *NASH equilibrium , *EQUILIBRIUM - Abstract
Computing Bayesian Nash equilibrium strategies in auction games is a challenging problem that is not well-understood. Such equilibria can be modeled as systems of nonlinear partial differential equations. It was recently shown that neural pseudogradient ascent (NPGA), an implementation of simultaneous gradient ascent via neural networks, converges to a Bayesian Nash equilibrium for a wide variety of symmetric auction games. Whereas symmetric auction models are widespread in the theoretical literature, in most auction markets in the field, one can observe different classes of bidders having different valuation distributions and strategies. Asymmetry of this sort is almost always an issue in real-world multiobject auctions, in which different bidders are interested in different packages of items. Such environments require a different implementation of NPGA with multiple interacting neural networks having multiple outputs for the different allocations in which the bidders are interested. In this paper, we analyze a wide variety of asymmetric auction models. Interestingly, our results show that we closely approximate Bayesian Nash equilibria in all models in which the analytical Bayes–Nash equilibrium is known. Additionally, we analyze new and larger environments for which no analytical solution is known and verify that the solution found approximates equilibrium closely. The results provide a foundation for generic equilibrium solvers that can be used in a wide range of auction games. History: Accepted by Ram Ramesh, Area Editor for Data Science & Machine Learning. Funding: This work was supported by Deutsche Forschungsgemeinschaft [Grant BI-1056/I-9]. 
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1281) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0151) at (http://dx.doi.org/10.5281/zenodo.7407158). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
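As a drastically simplified stand-in for NPGA, the sketch below runs simultaneous gradient ascent on one-parameter linear bid strategies in a symmetric two-bidder first-price auction with uniform values, where the analytical Bayes–Nash equilibrium is b(v) = v/2. The closed-form expected utility and finite-difference gradients here replace the paper's neural networks and pseudogradients; they are assumptions for illustration only.

```python
def expected_utility(a, b):
    """Expected utility of a bidder bidding a*v against an opponent bidding
    b*w, with independent values v, w ~ Uniform[0, 1] (first-price rules)."""
    if a <= b:
        return a * (1 - a) / (3 * b)
    v_star = b / a                       # above v_star the bidder surely wins
    return (1 - a) * (a * v_star**3 / (3 * b) + (1 - v_star**2) / 2)

# Simultaneous (pseudo)gradient ascent with central finite differences -- a
# scalar stand-in for NPGA's neural-network strategies.
a = b = 0.8
eps, lr = 1e-5, 0.3
for _ in range(500):
    grad_a = (expected_utility(a + eps, b) - expected_utility(a - eps, b)) / (2 * eps)
    grad_b = (expected_utility(b + eps, a) - expected_utility(b - eps, a)) / (2 * eps)
    a, b = a + lr * grad_a, b + lr * grad_b
```

The iterates settle at a = b = 1/2, matching the known equilibrium b(v) = v/2, which is the kind of verification against analytical solutions that the paper performs for its richer asymmetric models.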
46. Decomposition Strategies for Vehicle Routing Heuristics.
- Author
-
Santini, Alberto, Schneider, Michael, Vidal, Thibaut, and Vigo, Daniele
- Subjects
- VEHICLE routing problem, ROUTING algorithms, HEURISTIC, DECOMPOSITION method, APPROXIMATION algorithms, SEARCH algorithms
- Abstract
Decomposition techniques are an important component of modern heuristics for large instances of vehicle routing problems. The current literature lacks a characterization of decomposition strategies and a systematic investigation of their impact when integrated into state-of-the-art heuristics. This paper fills this gap: We discuss the main characteristics of decomposition techniques in vehicle routing heuristics, highlight their strengths and weaknesses, and derive a set of desirable properties. Through an extensive numerical campaign, we investigate the impact of decompositions within two algorithms for the capacitated vehicle routing problem: the Adaptive Large Neighborhood Search of Pisinger and Ropke (2007) and the Hybrid Genetic Search of Vidal et al. (2012). We evaluate the quality of popular decomposition techniques from the literature and propose new strategies. We find that route-based decomposition methods, which define subproblems by means of the customers contained in selected subsets of the routes of a given solution, generally appear superior to path-based methods, which merge groups of customers to obtain smaller subproblems. The newly proposed barycenter clustering decomposition achieves the overall best performance and leads to significant gains compared with using the algorithms without decomposition. History: Accepted by Erwin Pesch, Area Editor for Heuristic Search and Approximation Algorithms. Funding: This work was supported by the U.S. Air Force [Grant FA9550-17-1-0234], the Ministerio de Ciencia e Innovación (Juan de la Cierva Formación), H2020 Marie Skłodowska-Curie Actions [Grant 945380], the Ministero dell'Università e della Ricerca [Grant 2015JJLC3E_002], the Conselho Nacional de Desenvolvimento Científico e Tecnológico [Grant 308528/2018-2], and the Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro [Grant E-26/202.790/2019].
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1288) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0048) at (http://dx.doi.org/10.5281/zenodo.7613129). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
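A minimal sketch of the route-based barycenter idea described above, on assumed toy data: routes are arrays of customer coordinates, each route is reduced to its barycenter, and a tiny 2-means loop groups the barycenters so that each group of routes defines one smaller subproblem. The instance and the clustering loop are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CVRP solution: four routes of five customers each, built so
# that two routes sit near x = 0 and two near x = 10.
routes = [rng.normal(loc=c, scale=0.3, size=(5, 2))
          for c in ([0, 0], [0, 1], [10, 0], [10, 1])]

# Route-based decomposition via barycenter clustering: each route collapses
# to the mean position of its customers.
bary = np.array([r.mean(axis=0) for r in routes])

# Minimal 2-means on the barycenters (farthest-apart initialization).
centers = bary[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((bary[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([bary[labels == k].mean(axis=0) for k in (0, 1)])

# Every cluster of routes becomes one smaller subproblem to re-optimize.
subproblems = [[routes[i] for i in np.flatnonzero(labels == k)] for k in (0, 1)]
```

Each subproblem can then be handed back to the base heuristic (ALNS or HGS in the paper) on far fewer customers, which is where the reported gains come from.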
47. Special Issue on Mining Web-Based Data for e-Business Applications.
- Author
-
Tuzhilin, Alexander and Raschid, Louiqa
- Subjects
- DATA mining, COMPUTER science
- Abstract
Introduces articles related to the mining of web-based data for electronic-business applications published in the May 2003 issue of the journal 'Informs Journal on Computing.' Topics include the advantages of web information, the problem of data preprocessing of weblog files, challenges for the data-mining community, and the concept of web business intelligence.
- Published
- 2003
48. SimOpt: A Testbed for Simulation-Optimization Experiments.
- Author
-
Eckman, David J., Henderson, Shane G., and Shashaani, Sara
- Subjects
- RANDOM number generators, PYTHON programming language, OBJECT-oriented methods (Computer science), GRAPHICAL user interfaces, RANDOM numbers, SOURCE code
- Abstract
This paper introduces a major redesign of SimOpt, a testbed of simulation-optimization (SO) problems and solvers. The testbed promotes the empirical evaluation and comparison of solvers and aims to accelerate their development. Relative to previous versions of SimOpt, the redesign ports the code to an object-oriented architecture in Python; uses an implementation of the MRG32k3a random number generator that supports streams, substreams, and subsubstreams; supports the automated use of common random numbers for ease and efficiency; includes a powerful suite of plotting tools for visualizing experiment results; uses bootstrapping to obtain error estimates; accommodates the use of data farming to explore simulation models and optimization solvers as their input parameters vary; and provides a graphical user interface. The SimOpt source code is available on a GitHub repository under a permissive open-source license and as a Python package. History: Accepted by Ted Ralphs, Area Editor for Software Tools. Funding: This work was supported by the National Science Foundation [Grant CMMI-2035086]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1273) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2022.0011) at (http://dx.doi.org/10.5281/zenodo.7468744). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
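One feature the SimOpt abstract highlights, common random numbers, is easy to demonstrate in isolation: when two systems are simulated from a shared input stream, the variance of the estimated performance difference drops. The two "systems" below are hypothetical monotone functions of a uniform input, chosen only to make the effect visible; SimOpt automates this sharing via MRG32k3a streams and substreams.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Two hypothetical stochastic systems; the goal is the performance gap f1 - f2.
f1 = lambda u: np.sqrt(u) + 0.1
f2 = lambda u: u ** 2

# Independent sampling: each system consumes its own random numbers.
d_indep = f1(rng.random(n)) - f2(rng.random(n))

# Common random numbers: both systems share one input stream, so their
# outputs are positively correlated and the difference is less noisy.
u = rng.random(n)
d_crn = f1(u) - f2(u)

var_indep, var_crn = d_indep.var(), d_crn.var()
```

Because both functions increase in `u`, the covariance term subtracts from the variance of the difference, so the CRN estimator needs far fewer replications for the same precision.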
49. An LSTM+ Model for Managing Epidemics: Using Population Mobility and Vulnerability for Forecasting COVID-19 Hospital Admissions.
- Author
-
Ray, Arindam, Jank, Wolfgang, Dutta, Kaushik, and Mullarkey, Matthew
- Subjects
- RECURRENT neural networks, HOSPITAL admission & discharge, EPIDEMICS, FORECASTING methodology, COVID-19, MILITARY hospitals
- Abstract
Worldwide epidemics, such as coronavirus disease 2019 (COVID-19), cause unprecedented challenges for society and its healthcare systems. Governments attempt to mitigate those challenges by either reducing healthcare demand ("flattening the curve" by imposing restrictions, e.g., on travel or social gatherings) or by increasing healthcare capacity, for example, by canceling elective procedures or setting up field hospitals. To implement these mitigation procedures efficiently, accurate and timely forecasts of the epidemic's progression are necessary. In this paper, we develop an innovative forecasting methodology based on the ideas of long short-term memory (LSTM) recurrent neural networks. LSTM models are shown to outperform traditional forecasting models, especially when the relationship between input and output is complex and not available in closed form. However, whereas LSTM models perform well for data that changes dynamically over time, one shortcoming is that they are not directly applicable when the data also includes static, nontemporal components. In this work, we propose an LSTM+ model that overcomes this limitation. Our model leverages a private partnership with a mobile data company in order to capture population mobility (using mobility indices derived from mobile device data), which allows us to anticipate an epidemic's spread early and accurately. In addition, we also leverage a public partnership with a consortium of hospitals. Using hospital admissions (rather than, say, positive caseload) results in an unbiased measure of the severity of an epidemic because patients seek and are admitted to hospital care only when symptoms worsen beyond a critical point. We illustrate the effectiveness of our method on forecasting COVID-19 for a major U.S. metropolitan area where it has aided decision makers of the emergency policy group.
Our model improves the predictive accuracy of hospital admission forecasts by a factor of 2.5 compared with competing models in the same analytical space. History: Accepted by J. Paul Brooks, Area Editor for Applications in Biology, Medicine, & Healthcare. Funding: This research was funded by a monetary gift from Hillsborough County to establish the Pandemic Response Research Fund at University of South Florida. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2023.1269) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0027) at (http://dx.doi.org/10.5281/zenodo.7112004). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
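The architectural point of the LSTM+ model above, combining a recurrent summary of temporal inputs with static, nontemporal covariates, can be sketched as a single forward pass. A plain tanh RNN cell stands in for the LSTM cell here, and all weights, shapes, and input meanings are hypothetical and untrained.

```python
import numpy as np

rng = np.random.default_rng(3)

T, d_t, d_s, h = 14, 3, 4, 8   # history length, temporal dims (e.g. mobility
                               # indices), static dims (e.g. vulnerability),
                               # hidden-state size -- all hypothetical

# Untrained random weights: the point is the architecture, not the fit.
Wx = 0.1 * rng.normal(size=(h, d_t))
Wh = 0.1 * rng.normal(size=(h, h))
W_out = 0.1 * rng.normal(size=(h + d_s,))

def forecast(x_seq, x_static):
    """Summarize the temporal inputs with a recurrent cell, concatenate the
    static covariates, and emit one number (a hospital-admissions forecast)."""
    state = np.zeros(h)
    for x_t in x_seq:                            # simple tanh RNN cell in
        state = np.tanh(Wx @ x_t + Wh @ state)   # place of a full LSTM cell
    return float(np.concatenate([state, x_static]) @ W_out)

y_hat = forecast(rng.normal(size=(T, d_t)), rng.normal(size=d_s))
```

The concatenation step is what the abstract identifies as the "+" over a plain LSTM: the static covariates bypass the recurrence and enter the output layer directly.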
50. Convergence Analysis of Stochastic Kriging-Assisted Simulation with Random Covariates.
- Author
-
Li, Cheng, Gao, Siyang, and Du, Jianzhong
- Subjects
- STOCHASTIC analysis, STOCHASTIC convergence, MODULES (Algebra), SAMPLING errors, DECISION making, SYSTEMS design
- Abstract
We consider performing simulation experiments in the presence of covariates. Here, covariates refer to some input information other than system designs to the simulation model that can also affect the system performance. To make decisions, decision makers need to know the covariate values of the problem. Traditionally in simulation-based decision making, simulation samples are collected after the covariate values are known; in contrast, as a new framework, simulation with covariates starts the simulation before the covariate values are revealed and collects samples on covariate values that might appear later. Then, when the covariate values are revealed, the collected simulation samples are directly used to predict the desired results. This framework significantly reduces the decision time compared with the traditional way of simulation. In this paper, we follow this framework and suppose there are a finite number of system designs. We adopt the metamodel of stochastic kriging (SK) and use it to predict the system performance of each design and the best design. The goal is to study how fast the prediction errors diminish with the number of covariate points sampled. This is a fundamental problem in simulation with covariates and helps quantify the relationship between the offline simulation efforts and the online prediction accuracy. Particularly, we adopt measures of the maximal integrated mean squared error (IMSE) and integrated probability of false selection (IPFS) for assessing errors of the system performance and the best design predictions. Then, we establish convergence rates for the two measures under mild conditions. Last, these convergence behaviors are illustrated numerically using test examples. History: Accepted by Bruno Tuffin, Area Editor for Simulation.
Funding: This work was supported in part by Singapore Ministry of Education Academic Research Funds [Tier 1 Grants R-155-000-201-114 and A-0004822-00-00], the City University of Hong Kong [Grants 7005269 and 7005568], and the National Natural Science Foundation of China [Grant 72091211]. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information (https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.1263) as well as from the IJOC GitHub software repository (https://github.com/INFORMSJoC/2021.0329) at (http://dx.doi.org/10.5281/zenodo.7344997). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
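The offline/online split described above can be illustrated with ordinary Gaussian-process (kriging) prediction in one covariate dimension: noisy samples are collected at covariate points before the true covariate value is revealed, and the prediction at the revealed value is then immediate. The kernel, noise level, and response function below are assumptions for illustration, not the paper's stochastic-kriging setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Offline stage: simulate a design's noisy performance at sampled covariate
# points; the true response function is hypothetical.
true_perf = lambda x: np.sin(3 * x)
X = np.linspace(0.0, 2.0, 30)
y = true_perf(X) + 0.05 * rng.normal(size=X.size)

# Kriging-style predictor: Gaussian-kernel GP regression with a nugget term
# accounting for the simulation noise.
def kern(a, b, ell=0.3):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

alpha = np.linalg.solve(kern(X, X) + 0.05**2 * np.eye(X.size), y)

def predict(x_new):
    """Online stage: instant prediction once the covariate value is revealed."""
    return kern(np.atleast_1d(x_new), X) @ alpha

err = float(abs(predict(1.234)[0] - true_perf(1.234)))
```

The paper's question is essentially how `err`-type quantities (aggregated into IMSE and IPFS) shrink as the number of offline covariate points grows.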