23 results for "Lei, Ting L."
Search Results
2. On the Theoretical Link between Optimized Geospatial Conflation Models for Linear Features.
- Author
- Lei, Zhen, Yuan, Zhangshun, and Lei, Ting L.
- Abstract
Geospatial data conflation involves matching and combining two maps to create a new map. It has received increased research attention in recent years due to its wide range of applications in GIS (Geographic Information System) data production and analysis. The map assignment problem (conceptualized in the 1980s) is one of the earliest conflation methods, in which GIS features from two maps are matched by minimizing their total discrepancy or distance. Recently, more flexible optimization models have been proposed. These include conflation models based on the network flow problem and new models based on Mixed Integer Linear Programming (MILP). A natural question is: how are these models related or different, and how do they compare? In this study, an analytic review of major optimized conflation models in the literature is conducted and the structural linkages between them are identified. Moreover, a MILP model (the base-matching problem) and its bi-matching version are presented as a common basis. Our analysis shows that the assignment problem and all other optimized conflation models in the literature can be viewed or reformulated as variants of the base models. For network-flow based models, proof is presented that the base-matching problem is equivalent to the network-flow based fixed-charge-matching model. The equivalence of the MILP reformulation is also verified experimentally. For the existing MILP-based models, common notation is established and used to demonstrate that they are extensions of the base models in straightforward ways. The contributions of this study are threefold. Firstly, it helps the analyst to understand the structural commonalities and differences of current conflation models and to choose among them. Secondly, by reformulating the network-flow models (and therefore, all current models) using MILP, the presented work eases the practical application of conflation by leveraging the many off-the-shelf MILP solvers.
Thirdly, the base models can serve as a common ground for studying and writing new conflation models by allowing a modular and incremental way of model development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Large scale geospatial data conflation: A feature matching framework based on optimization and divide-and-conquer
- Author
- Lei, Ting L.
- Published
- 2021
- Full Text
- View/download PDF
4. Towards Topological Geospatial Conflation: An Optimized Node-Arc Conflation Model for Road Networks.
- Author
- Lei, Zhen and Lei, Ting L.
- Subjects
- GEOSPATIAL data, GEOGRAPHIC information systems, HAMMING distance, ELECTRIC arc
- Abstract
Geospatial data conflation is the process of identifying and merging the corresponding features in two datasets that represent the same objects in reality. Conflation is needed in a wide range of geospatial analyses, yet it is a difficult task, often considered too unreliable and costly due to various discrepancies between GIS data sources. This study addresses the reliability issue of computerized conflation by developing stronger optimization-based conflation models for matching two network datasets with minimum discrepancy. Conventional models match roads on a feature-by-feature basis. By comparison, we propose a new node-arc conflation model that simultaneously matches road-center lines and junctions in a topologically consistent manner. Enforcing this topological consistency increases the reliability of conflation and reduces false matches. Similar to the well-known rubber-sheeting method, our model allows for the use of network junctions as "control" points for matching network edges. Unlike rubber sheeting, the new model is automatic and matches all junctions (and edges) in one pass. To the best of our knowledge, this is the first optimized conflation model that can match nodes and edges in one model. Computational experiments using six road networks in Santa Barbara, CA, showed that the new model is selective and reduces false matches more than existing optimized conflation models. On average, it achieves a precision of 94.7% with over 81% recall and achieves a 99.4% precision when enhanced with string distances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. On the unified dispersion problem: Efficient formulations and exact algorithms
- Author
- Lei, Ting L. and Church, Richard L.
- Published
- 2015
- Full Text
- View/download PDF
6. Designing Robust Coverage Systems: A Maximal Covering Model with Geographically Varying Failure Probabilities
- Author
- Lei, Ting L., Tong, Daoqin, and Church, Richard L.
- Published
- 2014
7. Linear feature conflation: An optimization‐based matching model with connectivity constraints.
- Author
- Lei, Ting L. and Lei, Zhen
- Subjects
- GEOSPATIAL data, LINEAR programming, INTEGER programming, STRUCTURAL models, MIXED integer linear programming, DATA modeling
- Abstract
Geospatial data conflation is the process of combining multiple datasets about a geographic phenomenon to produce a single, richer dataset. It has received increased research attention due to its many applications in map making, transportation, planning, and temporal geospatial analyses, among many others. One approach to conflation, attempted from the outset in the literature, is the use of optimization‐based conflation methods. Conflation is treated as a natural optimization problem of minimizing the total number of discrepancies while finding corresponding features from two datasets. Optimization‐based conflation has several advantages over traditional methods including conciseness, being able to find an optimal solution, and ease of implementation. However, current optimization‐based conflation methods are also limited. A main shortcoming with current optimized conflation models (and other traditional methods as well) is that they are often too weak and cannot utilize the spatial context in each dataset while matching corresponding features. In particular, current optimal conflation models match a feature to targets independently from other features and therefore treat each GIS dataset as a collection of unrelated elements, reminiscent of the spaghetti GIS data model. Important contextual information such as the connectivity between adjacent elements (such as roads) is neglected during the matching. Consequently, such models may produce topologically inconsistent results. In this article, we address this issue by introducing new optimization‐based conflation models with structural constraints to preserve the connectivity and contiguity relation among features. The model is implemented using integer linear programming and compared with traditional spaghetti‐style models on multiple test datasets. Experimental results show that the new element connectivity (ec‐bimatching) model reduces false matches and consistently outperforms traditional models. 
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Locating short-term empty-container storage facilities to support port operations: A user optimal approach
- Author
- Lei, Ting L. and Church, Richard L.
- Published
- 2011
- Full Text
- View/download PDF
9. Identifying functionally connected habitat compartments with a novel regionalization technique
- Author
- Gao, Peng, Kupfer, John A., Guo, Diansheng, and Lei, Ting L.
- Published
- 2013
- Full Text
- View/download PDF
10. Hedging against service disruptions: an expected median location problem with site-dependent failure probabilities
- Author
- Lei, Ting L. and Tong, Daoqin
- Published
- 2013
- Full Text
- View/download PDF
11. Harmonizing Full and Partial Matching in Geospatial Conflation: A Unified Optimization Model.
- Author
- Lei, Ting L. and Lei, Zhen
- Subjects
- ASSIGNMENT problems (Programming), GEOGRAPHIC information systems, MULTISENSOR data fusion
- Abstract
Spatial data conflation is aimed at matching and merging objects in two datasets into a more comprehensive one. Starting from the "map assignment problem" in the 1980s, optimized conflation models treat feature matching as a natural optimization problem of minimizing certain metrics, such as the total discrepancy. One complication in optimized conflation is that heterogeneous datasets can represent geographic features differently. Features can correspond to target features in the other dataset either on a one-to-one basis (forming full matches) or on a many-to-one basis (forming partial matches). Traditional models consider either full matches or partial matches exclusively. This dichotomy has several issues. Firstly, full matching models are limited and cannot capture any partial match. Secondly, partial matching models treat full matches just as partial matches, and they are more prone to admitting false matches. Thirdly, existing conflation models may introduce conflicting directional matches. This paper presents a new model that captures both full and partial matches simultaneously. This allows us to impose structural constraints differently on full and partial matches and to enforce consistency between directional matches. Experimental results show that the new model outperforms conventional optimized conflation models in terms of precision (89.2%), while achieving a similar recall (93.2%). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Conflating linear features using turning function distance: A new orientation‐sensitive similarity measure.
- Author
- Lei, Ting L. and Wang, Rongrong
- Subjects
- COMPUTER vision, DISTANCES
- Abstract
Measuring the similarity between counterpart geospatial features is crucial to the effective conflation of spatial datasets from different sources. This article proposes a new similarity metric called the "map turning function distance" (MTFD) for matching linear features such as roads, based on the well-known turning function (TF) distance in computer vision. The MTFD overcomes the limitations of the traditional TF distance, such as the inability to handle partial matches and insensitivity to differences in scale and rotation. In particular, the MTFD allows one to: (a) partially match a linear feature to a portion of a larger feature at a certain match position; and (b) consider both the shape and orientation differences of polylines by comparing their turning angles. In finding the best match position, we prove that the optimal position can be found among a finite set of positions on the target feature. We then combine the MTFD with widely used point-offset distances such as the Hausdorff distance to form a composite similarity metric. Our experiments with real road datasets demonstrate that the new metric has greater discriminative power than traditional point-offset-based similarity measures, and significantly improves the precision of two tested conflation models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Geospatial data conflation: a formal approach based on optimization and relational databases.
- Author
- Lei, Ting L.
- Subjects
- RELATIONAL databases, GEOSPATIAL data, ASSIGNMENT problems (Programming), OPERATIONS research, GEOGRAPHIC information systems
- Abstract
Geospatial data conflation is aimed at matching counterpart features from two or more data sources in order to combine and better utilize information in the data. Due to the importance of conflation in spatial analysis, different approaches to the conflation problem have been proposed ranging from simple buffer-based methods to probability and optimization based models. In this paper, I propose a formal framework for conflation that integrates two powerful tools of geospatial computation: optimization and relational databases. I discuss the connection between the relational database theory and conflation, and demonstrate how the conflation process can be formulated and carried out in standard relational databases. I also propose a set of new optimization models that can be used inside relational databases to solve the conflation problem. The optimization models are based on the minimum cost circulation problem in operations research (also known as the network flow problem), which generalizes existing optimal conflation models that are primarily based on the assignment problem. Using comparable datasets, computational experiments show that the proposed conflation method is effective and outperforms existing optimal conflation models by a large margin. Given its generality, the new method may be applicable to other data types and conflation problems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. Designing Reliable Center Systems: A Vector Assignment Center Location Problem.
- Author
- Lei, Ting L.
- Subjects
- LOCATION theory (Geography), VECTORS (Calculus), FAULT tolerance (Engineering), INTEGER programming, MATHEMATICAL analysis
- Abstract
The p-center problem is one of the most important models in location theory. Its objective is to place a fixed number of facilities so that the maximum service distance for all customers is as small as possible. This article develops a reliable p-center problem that can account for system vulnerability and facility failure. A basic assumption is that located centers can fail with a given probability and a customer will fall back to the closest nonfailing center for service. The proposed model seeks to minimize the expected value of the maximum service distance for a service system. In addition, the proposed model is general and can be used to solve other fault-tolerant center location problems such as the (p, q)-center problem using appropriate assignment vectors. I present an integer programming formulation of the model and computational experiments, and then conclude with a summary of findings and point out possible future work. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
15. A unified approach for location-allocation analysis: integrating GIS, distributed computing and spatial optimization.
- Author
- Lei, Ting L., Church, Richard L., and Lei, Zhen
- Subjects
- LOCATION analysis, ASSIGNMENT problems (Programming), SPATIAL arrangement, GEOGRAPHIC information systems, TABU search algorithm, MATHEMATICAL models, DECISION support systems
- Abstract
Location-allocation modeling is an important area of research in spatial optimization and GIScience. A large number of analytical models for location-allocation analysis have been developed in the past 50 years to meet the requirements of different planning and spatial-analytic applications, ranging from the location of emergency response units (EMS) to warehouses and transportation hubs. Despite their great number, many location-allocation models are intrinsically linked to one another. A well-known example is the theoretical link between the classic p-median problem and coverage location problems. Recently, Lei and Church showed that a large number of classic and new location models can be posed as special cases of a new modeling construct called the vector assignment ordered median problem (VAOMP). Lei and Church also reported extremely high computational complexity in optimally solving the best integer linear programming (ILP) formulation developed for the VAOMP, even for medium-sized problems in certain cases. In this article, we develop an efficient unified solver for location-allocation analysis based on the VAOMP model without using ILP solvers. Our aim is to develop a fast heuristic algorithm based on the Tabu Search (TS) meta-heuristic and the message passing interface (MPI), suitable for obtaining optimal or near-optimal solutions for the VAOMP in a real-time environment. The unified approach is particularly interesting from the perspective of GIScience and spatial decision support systems (DSS) as it makes it possible to solve a wide variety of location models in a unified manner in a GIS environment. Computational results show that the TS method can often obtain, in seconds, solutions that are better than those obtained using the ILP-based approach in hours or a day. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
16. On the Finite Optimality Set of the Vector Assignment p-Median Problem.
- Author
- Lei, Ting L. and Church, Richard L.
- Subjects
- SET theory, VECTOR analysis, PROBLEM solving, FRACTIONS, NONMONOTONIC logic
- Abstract
The vector assignment p-median problem (VAPMP) is one of the first discrete location problems to account for the service of a demand by multiple facilities, and has been used to model a variety of location problems in addressing issues such as system vulnerability and reliability. Specifically, it involves the location of a fixed number of facilities when the assumption is that each demand point is served a certain fraction of the time by its closest facility, a certain fraction of the time by its second closest facility, and so on. The assignment vector represents the fraction of the time a facility of a given closeness order serves a specific demand point. Weaver and Church showed that when the fractions of assignment to closer facilities are greater than those to more distant facilities, an optimal all-node solution always exists. However, the general form of the VAPMP does not have this property. Hooker and Garfinkel provided a counterexample to this property for the nonmonotonic VAPMP, but did not conjecture what such a finite set might be in general. The question of whether there exists a finite set of locations that contains an optimal solution has remained open. In this article, we prove that a finite optimality set for the VAPMP consisting of "equidistant points" does exist. We also show a stronger result when the underlying network is a tree graph. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
17. Stream Model-Based Orthorectification in a GPU Cluster Environment.
- Author
- Lei, Zhen, Wang, Mi, Li, Deren, and Lei, Ting L.
- Abstract
One of the most important tasks in remote sensing data processing is the production of orthorectified images. Such tasks are computationally intensive and can become a bottleneck for remote sensing image processing, particularly in high-throughput environments such as large satellite imagery processing centers. This letter explores the use of massively parallel graphics processing units (GPUs) in a clustered network environment to speed up image processing tasks such as orthorectification. Our parallelization method is based on the inverse sensor model and the stream model for image processing, which allow the flexibility of placing computational tasks on appropriate processing units, such as GPUs, CPU cores, or nodes in a cluster. In our experiments on images from two satellites, speedups of more than 198 times and 50.3 times were achieved over single-threaded and multi-threaded CPU versions, respectively. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
18. Vector Assignment Ordered Median Problem: A Unified Median Problem.
- Author
- Lei, Ting L. and Church, Richard L.
- Subjects
- PROBLEM solving, GENERALIZATION, LINEAR programming, MEDIAN (Mathematics), INTEGERS
- Abstract
The vector assignment p-median problem (VAPMP) and the ordered p-median problem (OMP) are important extensions of the classic p-median problem. The VAPMP extends the p-median problem by allowing assignment of a demand to multiple facilities, and a wide variety of multi-assignment and backup location problems are special cases of this problem. The OMP optimizes a weighted sum of service distances according to their relative ranks among all demands. The OMP is well known as it represents a generalization of both the p-median and the p-center problems. In this article, a new model is developed which extends both the VAPMP and the OMP. In addition, beyond median, center, and vector assignment, this new model can address problems where the system objective involves maximizing distance. The new model also gives rise to meaningful special-case problems, such as a "reliable p-center" problem. Different integer linear programming (ILP) formulations of the new problem are presented and tested. It is demonstrated that an efficient formulation for a special case of the VAOMP can solve medium-sized problems optimally in a reasonable amount of time. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
19. A Unified Model for Dispersing Facilities.
- Author
- Lei, Ting L. and Church, Richard L.
- Subjects
- RETAIL stores, ECONOMIC sanctions, HAZARDOUS substances, NUCLEAR power plants
- Abstract
One of the important classes of facility dispersion problems involves the location of a number of facilities where the intent is to place them as far apart from each other as possible. Four basic forms of the p-facility dispersion problem appear in the literature. Erkut and Neuman present a classification system for these four classic constructs. More recently, Curtin and Church expanded upon this framework by the introduction of 'multiple types' of facilities, where the dispersion distances between specific types are weighted differently. This article explores another basic assumption found in all four classic models (including the multitype facility constructs of Curtin and Church): that dispersion is accounted for in terms of either distance to the closest facility or distances to all facilities (from a given facility), whether applied to a single type of facility or across a set of facility types. In reality, however, measuring dispersion in terms of whether neighboring facilities to a given facility are dispersed rather than whether all facilities are dispersed away from the given facility often makes more sense. To account for this intermediate measure of dispersion, we propose a construct called partial-sum dispersion. We propose four 'partial-sum' dispersion problem forms and show that these are generalized forms of the classic set of four models codified by Erkut and Neuman. Further, we present a unifying model that is a generalized form of all four partial-sum models as well as a generalized form of the original four classic model constructs. Finally, we present computational experience with the general model and conclude with a few examples and suggestions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
20. Identifying Critical Facilities in Hub-and-Spoke Networks: A Hub Interdiction Median Problem.
- Author
- Lei, Ting L.
- Subjects
- NETWORK hubs, LINEAR programming, MEDIAN (Mathematics), TRANSSHIPMENT, TELECOMMUNICATION systems
- Abstract
The hub location problem has been widely used in analyzing hub-and-spoke systems. The basic assumption is that a large number of demands exist to travel from origins to destinations via a set of intermediate transshipment nodes. These intermediate nodes can be lost, due to reasons such as natural disasters, outbreaks of disease, labor strikes, and intentional attacks. This article presents a hub interdiction median (HIM) problem. It can be used to identify the set of critical facilities in a hub-and-spoke system that, if lost, leads to the maximal disruption of the system's service. The new model is formulated using integer linear programming. Special constraints are constructed to account for origin-to-destination demand following the least-cost route via the remaining hubs. Based on the HIM problem, two hub protection problems are defined that aim to minimize the system cost associated with the worst-case facility loss. Computational experiment results are presented along with a discussion of possible future work. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
21. Constructs for Multilevel Closest Assignment in Location Modeling.
- Author
- Lei, Ting L. and Church, Richard L.
- Subjects
- SPATIAL analysis (Statistics), ECONOMIC demand, CONSUMER preferences, MATHEMATICAL models of economics, MULTILEVEL models, STATISTICAL correlation, AREA studies, LOCATION marketing
- Abstract
In the classic p-median problem, it is assumed that each point of demand will be served by its closest located facility. The p-median problem can be thought of as a "single-level" allocation and location problem, as all demand at a specific location is assigned as a whole unit to the closest facility. In some service protocols, demand assignment has been defined as "multilevel", where each point of demand may be served a certain percentage of the time by the closest facility, a certain percentage of the time by the second closest facility, and so on. This article deals with the case in which there is a need for "explicit" closest assignment (ECA) constraints. The authors review past location modeling work that involves single-level ECA constraints as well as specific constraint constructs that have been proposed to ensure single-level closest assignment. They then show how each of the earlier proposed ECA constructs can be generalized for the "multilevel" case. Finally, the authors provide computational experience using these generalized ECA constructs for a novel multilevel facility interdiction problem introduced in this article. Altogether, this article proposes both a new set of constraint structures that can be used in location models involving multilevel assignment and a new facility interdiction model that can be used to optimize worst-case levels of facility disruption. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
22. Evaluating the Vulnerability of Time-Sensitive Transportation Networks: A Hub Center Interdiction Problem.
- Author
- Lei, Ting L.
- Abstract
Time-sensitive transportation systems have received increasing research attention recently. Examples of time-sensitive networks include those of perishable goods, high-value commodity, and express delivery. Much research has been devoted to optimally locating key facilities such as transportation hubs to minimize transit time. However, there is a lack of research attention to the reliability and vulnerability of time-sensitive transportation networks. Such issues cannot be ignored as facilities can be lost due to reasons such as extreme weather, equipment malfunction, and even intentional attacks. This paper proposes a hub interdiction center (HIC) model for evaluating the vulnerability of time-sensitive hub-and-spoke networks under disruptions. The model identifies the set of hub facilities whose loss will lead to the greatest increase in the worst-case transit time. From a planning perspective, such hubs are critical facilities that should be protected or enhanced by preventive measures. An efficient integer linear programming (ILP) formulation of the new model is developed. Computational experiments on a widely used US air passenger dataset show that losing a small number of hub facilities can double the maximum transit time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Robust projections of future fire probability for the conterminous United States.
- Author
- Gao, Peng, Terando, Adam J., Kupfer, John A., Morgan Varner, J., Stambaugh, Michael C., Lei, Ting L., and Kevin Hiers, J.
- Published
- 2021
- Full Text
- View/download PDF