40 results on '"Specht, Sebastian"'
Search Results
2. The impact of the Covid-19 pandemic and government intervention on active mobility
- Author
-
Möllers, Alessa, Specht, Sebastian, and Wessel, Jan
- Published
- 2022
- Full Text
- View/download PDF
3. REHEATFUNQ (REgional HEAT-Flow Uncertainty and aNomaly Quantification) 2.0.1: a model for regional aggregate heat flow distributions and anomaly quantification.
- Author
-
Ziebarth, Malte Jörn and von Specht, Sebastian
- Subjects
GEOTHERMAL ecology, GAMMA distributions, RANDOM variables, BAYESIAN field theory, GOODNESS-of-fit tests, DATABASES
- Abstract
Surface heat flow is a geophysical variable that is affected by a complex combination of various heat generation and transport processes. The processes act on different length scales, from tens of meters to hundreds of kilometers. In general, it is not possible to resolve all processes due to a lack of data or modeling resources, and hence the heat flow data within a region are subject to residual fluctuations. We introduce the REgional HEAT-Flow Uncertainty and aNomaly Quantification (REHEATFUNQ) model, version 2.0.1. At its core, REHEATFUNQ uses a stochastic model for heat flow within a region, considering the aggregate heat flow to be generated by a gamma-distributed random variable. Based on this assumption, REHEATFUNQ uses Bayesian inference to (i) quantify the regional aggregate heat flow distribution (RAHFD) and (ii) estimate the strength of a given heat flow anomaly, for instance as generated by a tectonically active fault. The inference uses a prior distribution conjugate to the gamma distribution for the RAHFDs, and we compute parameters for an uninformed prior distribution from the global heat flow database by Lucazeau (2019). Through the Bayesian inference, our model is the first of its kind to consistently account for the variability in regional heat flow in the inference of spatial signals in heat flow data. The interpretation of these spatial signals, in particular in terms of fault characteristics (particularly fault strength), is the subject of a long-standing debate within the geophysical community. We describe the components of REHEATFUNQ and perform a series of goodness-of-fit tests and synthetic resilience analyses of the model. While our analysis reveals to some degree a misfit of our idealized empirical model with real-world heat flow, it simultaneously confirms the robustness of REHEATFUNQ to these model simplifications. We conclude with an application of REHEATFUNQ to the San Andreas fault in California. 
Our analysis finds heat flow data in the Mojave section to be sufficient for an analysis and concludes that stochastic variability can allow for a surprisingly large fault-generated heat flow anomaly to be compatible with the data. This indicates that heat flow alone may not be a suitable quantity to address fault strength of the San Andreas fault. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
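The abstract above rests on a simple core idea: regional aggregate heat flow values are modeled as draws from a gamma distribution, and a conjugate prior is updated by Bayesian inference. The following is a minimal sketch of that idea, not the REHEATFUNQ implementation: REHEATFUNQ infers both gamma parameters with a full conjugate prior, whereas this toy fixes the shape parameter `alpha` and updates a conjugate gamma prior on the rate only.

```python
# Minimal conjugate-update sketch (assumption: known shape alpha).
# REHEATFUNQ itself infers both gamma parameters; this is only illustrative.
import numpy as np

def posterior_rate_params(q, alpha, a0=1.0, b0=1.0):
    """Gamma(a0, b0) prior on the rate beta of a gamma likelihood with
    known shape alpha. Returns the posterior Gamma(a_n, b_n) parameters."""
    q = np.asarray(q, dtype=float)
    a_n = a0 + alpha * q.size      # shape update: one alpha per observation
    b_n = b0 + q.sum()             # rate update: add the summed heat flows
    return a_n, b_n

# Synthetic regional heat flow sample (mW/m^2), roughly gamma-shaped
rng = np.random.default_rng(0)
q = rng.gamma(shape=2.5, scale=30.0, size=50)

a_n, b_n = posterior_rate_params(q, alpha=2.5)
post_mean_rate = a_n / b_n   # posterior mean of beta; near 1/scale for this sample size
```

The posterior concentrates around the true rate as the regional sample grows, which is the mechanism that lets an anomaly (an excess over the regional aggregate distribution) be quantified probabilistically.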
4. Comprehensive data set of in situ hydraulic stimulation experiments for geothermal purposes at the Äspö Hard Rock Laboratory (Sweden).
- Author
-
Zang, Arno, Niemz, Peter, von Specht, Sebastian, Zimmermann, Günter, Milkereit, Claus, Plenkers, Katrin, and Klee, Gerd
- Subjects
ROCK music, HORIZONTAL wells, ACOUSTIC emission, GRANITE, FLUID injection, SEISMIC arrays, CRYSTALLINE rocks, HYDRAULIC fracturing
- Abstract
In this article, a high-resolution acoustic emission sensor, accelerometer, and broadband seismometer array data set is made available and described in detail from in situ experiments performed at Äspö Hard Rock Laboratory in May and June 2015. The main goal of the hydraulic stimulation tests in a horizontal borehole at 410 m depth in naturally fractured granitic rock mass is to demonstrate the technical feasibility of generating multi-stage heat exchangers in a controlled way, superior to the former massive stimulations applied in enhanced geothermal projects. A set of six sub-parallel hydraulic fractures is propagated from an injection borehole drilled parallel to the minimum horizontal in situ stress and is monitored by an extensive complementary sensor array implemented in three inclined monitoring boreholes and the nearby tunnel system. Three different fluid injection protocols are tested: constant water injection, progressive cyclic injection, and cyclic injection with a hydraulic hammer operating at 5 Hz frequency to stimulate a crystalline rock volume of size 30 m × 30 m × 30 m at depth. We collected geological data from core and borehole logs, fracture inspection data from an impression packer, and acoustic emission hypocenter tracking and tilt data, and quantified the permeability enhancement process. The data and interpretation provided through this publication are important steps in both upscaling laboratory tests and downscaling field tests in granitic rock in the framework of enhanced geothermal system research. Data described in this paper can be accessed at GFZ Data Services under DOI 10.5880/GFZ.2.6.2023.004. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Strong-motion on DAS: Insights from the 2022 MW 6.9 Chishang earthquake (Southern Taiwan)
- Author
-
von Specht, Sebastian, Lin, Chen-Ray, Pilz, Marco, Ma, Kuo-Fong, and Cotton, Fabrice
- Abstract
Distributed acoustic sensing (DAS) has seen increased utilization in the seismological community in recent years, and various applications of DAS are being investigated in different branches of seismology. Strong-motion seismology uses records of earthquakes of engineering concern (MW > 4.5) with hypocentral distances within a few hundred kilometers. This demands dense networks over a wide area; installation of typical strong-motion instruments (accelerometers) can be achieved quickly and at a reasonable budget compared to other network types. For DAS, installation and operation are more involved, and deployment is still very limited. Consequently, DAS recordings of nearby large events remain rare compared to accelerometer recordings. On September 18, 2022, a shallow earthquake sequence with a MW 6.9 mainshock struck near Chishang (Taiwan) and was recorded by DAS in Hualien City, approximately 100 km to the north. Shaking from the mainshock and several aftershocks was noticeable in Hualien, though not damaging, with a PGA of 0.28 m/s^2 recorded near the DAS site. The DAS campaign was originally conceptualized as a test suite of different fiber installations: buried, within a gutter (as in commercial fiber installation), and loose within a basement. The test site is in an urban area affected by surface rupturing during the 2018 Hualien earthquake. The presented recordings provide unprecedented insight not only into how strong motion appears on DAS but also into how effective the different installation techniques are for this kind of event. The waveforms are also compared to records of a collocated broadband seismometer and an accelerometer 1 km away.
The 28th IUGG General Assembly (IUGG2023) (Berlin 2023)
- Published
- 2023
6. Unchanged frequency of moraine-dammed glacial lake outburst floods in the Himalaya
- Author
-
Veh, Georg, Korup, Oliver, von Specht, Sebastian, Roessner, Sigrid, and Walz, Ariane
- Published
- 2019
- Full Text
- View/download PDF
7. Revisiting the San Andreas Heat Flow Paradox From the Perspective of Seismic Efficiency and Elastic Power in Southern California.
- Author
-
Ziebarth, Malte J., Anderson, John G., von Specht, Sebastian, Heidbach, Oliver, and Cotton, Fabrice
- Subjects
SURFACE of the earth ,PARADOX ,CALORIMETRY ,FRICTION ,EARTHQUAKES - Abstract
We investigate the relation between frictional heating on a fault and the resulting conductive surface heat flow anomaly using the fault's long‐term energy budget. Analysis of the surface heat flow surrounding the fault trace leads to a constraint on the frictional power generated on the fault—the mechanism behind the San Andreas fault (SAF) heat flow paradox. We revisit this paradox from a new perspective using an estimate of the long‐term accumulating elastic power in the region surrounding the fault, and analyze the paradox using two parameters: the seismic efficiency and the elastic power. The results show that the constraint on frictional power from the classic interpretation is incompatible with the accumulating elastic power and the radiated power from earthquake catalogs. We then explore four mechanisms that can resolve this extended paradox. First, stochastic fluctuations of surface heat flow could mask the fault‐generated anomaly (we estimate 21% probability). Second, the elastic power accumulating in the region could be overestimated (≥550 MW required). Third, the seismic efficiency—ratio of radiated energy to elastic work—of the SAF could be higher than that of the remaining faults in the region (≥5.8% required). Fourth, the scaled energy—ratio of radiated energy to seismic moment—on the SAF could be lower than on the remaining faults in the region (a factor 5 difference required). In the last three hypotheses, we analyze the interplay of the energy budget on a single fault with the total energy budget of the region. Plain Language Summary: When earthquakes move rock against rock, friction heats the contact surface. If this frictional resistance were like laboratory measurements of typical crustal rock, the heat would cause a considerable heat flow signature ("anomaly") at Earth's surface. For the San Andreas fault (SAF) in Southern California, such a signature has not been observed. One solution to this paradox is that the fault is weak. 
We approach the paradox from a new angle by additionally using the rate at which elastic energy accumulates in California. This elastic power is incompatible with the radiated power from earthquake catalogs and the maximum rate of frictional heating from the paradox if only simple assumptions are made. We call this conflict the extended heat flow paradox. Four mechanisms could individually resolve the extended paradox: randomness in regional heat flow measurements could conceal the anomaly, the elastic power on the SAF could be overestimated, the seismic efficiency (ratio of radiated energy per input work) on the SAF could be comparatively high, or the scaled energy (ratio of radiated energy per seismic moment) on the SAF could be comparatively low. A combination of multiple effects is possible. Key Points: Heat flow around the San Andreas fault is incompatible with radiated power and elastic input power under simple assumptions. Regionally, a stochastic view on heat flow and/or an overestimated total elastic power can resolve the paradox. Locally, a high seismic efficiency or a low scaled energy (radiated energy/seismic moment) on the San Andreas fault can resolve the paradox. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Comprehensive data set of in-situ hydraulic stimulation experiments for geothermal purposes at the Äspö Hard Rock Laboratory (Sweden)
- Author
-
Zang, Arno, Niemz, Peter, Specht, Sebastian, Zimmermann, Günter, Milkereit, Claus, Plenkers, Katrin, and Klee, Gerd
- Abstract
In this article, a high-resolution acoustic emission sensor, accelerometer, and broadband seismometer array data set is made available and described in detail from in-situ experiments performed at Äspö Hard Rock Laboratory in May and June 2015. The main goal of the hydraulic stimulation tests in a horizontal borehole at 410 m depth in naturally fractured granitic rock mass is to demonstrate the technical feasibility of generating multi-stage heat exchangers in a controlled way, superior to the former massive stimulations applied in enhanced geothermal projects. A set of six sub-parallel hydraulic fractures is propagated from an injection borehole drilled parallel to the minimum horizontal in-situ stress and monitored by an extensive complementary sensor array implemented in three inclined monitoring boreholes and the nearby tunnel system. Three different fluid-injection protocols are tested: constant water injection, progressive cyclic injection, and cyclic injection with a hydraulic hammer operating at 5 Hz frequency to stimulate a crystalline rock volume of size 30 m × 30 m × 30 m at depth. We collected geological data from core and borehole logs, fracture inspection data from an impression packer, and acoustic emission hypocenter tracking and tilt data, and quantified the permeability enhancement process. The data and interpretation provided through this publication are an important step in both upscaling laboratory tests and downscaling field tests in granitic rock in the framework of enhanced geothermal system research.
- Published
- 2023
9. Sensing fault zone at depth through Optical Fiber: Taiwan Milun-fault Drilling and All-inclusive Sensing (Taiwan MiDAS) project
- Author
-
Ma, Kuo-Fong, von Specht, Sebastian, Kuo, Li-Wei, Huang, Hsin-Hua, Lin, Chin-Jen, Ku, Chin-Shang, Lin, Chen-Ray, and Jousset, Philippe
- Abstract
The Taiwan Milun fault zone is located at the boundary between the Eurasian and Philippine Sea plates. This fault slips frequently and has produced large earthquakes, for example the Mw 6.4 Hualien earthquake (6 February 2018). We map and observe the fault zone and its behavior at depth by high-spatial-resolution dynamic strain sensing with optical fiber. In 2021-2022, we drilled and cored the fault, and deployed a 3D multi-cross-fault fiber array comprising a borehole loop with a depth of 700 m (Hole-A, hanging-wall site, crossing the fault at depth), a surface array crossing the fault rupture zone using commercial fiber, and a second borehole loop of 500 m of fiber (Hole-B, footwall site). The high spatial resolution of distributed acoustic sensing (DAS) and the retrieved core, combined with geophysical logs, allow us to characterize the structure on the meter scale. Within the Milun fault zone, we identified a 20-m-wide fault core composed of gray and black gouge in the core sample. DAS strain-rate records associated with the same depth as the fault core show a distinct amplification. The amplification ratio of 2.5-3 is constant for all types of events (local, teleseismic) when compared to DAS channels at greater depth in consolidated rock material. Although the fault gouge is narrow, the strain amplification is due to the strong material contrast of the fault gouge. This result may shed light on the understanding of fault-zone dynamics in terms of remote earthquake triggering and near-fault ground motion.
The 28th IUGG General Assembly (IUGG2023) (Berlin 2023)
- Published
- 2023
- Full Text
- View/download PDF
10. Investigating dynamic range of Distributed Acoustic Sensing and introducing a neural-network-based detection for saturation effects
- Author
-
Lin, Chen-Ray, von Specht, Sebastian, Cotton, Fabrice, Ohrnberger, Matthias, and Ma, Kuo-Fong
- Abstract
Distributed Acoustic Sensing (DAS) is used to record high-spatial-resolution strain-rate data. For ground motion observation, DAS data can be converted from strain rate to acceleration or velocity by array-based measurements with coherent plane waves. DAS provides an opportunity to map high-resolution shaking patterns near faults. We installed collocated geophones and optical fiber in Hualien City (a very seismically active area in Taiwan) from the end of January to the end of February 2022. Earthquakes with magnitudes (Mw) between 3.2 and 5.4 were recorded. These records illustrate the typical magnitude-distance dependence of ground motion but also show saturation for higher magnitudes and/or shorter distances (e.g., a Mw 5.2 earthquake recorded at 100 km). For frequency-based analyses, clipped signals on DAS pose challenges not present in classical instruments (seismometers). The upper limit in the dynamic range of seismometers results in easily identifiable trapezoidal signals. The dynamic range of DAS interrogators is limited by gauge length, sampling frequency, and phase wrapping in the interferometric phase demodulation. We observe that clipped DAS signals not only affect the time series but also contaminate their spectra at all frequencies, due to the random nature of clipping in DAS, in contrast to the flat plateaus of clipped time series on seismometers. Therefore, identifying the start and end points of clipped DAS records poses a major challenge, which we aim to resolve with a neural network. This approach enhances the efficiency of quality control for massive DAS datasets.
The 28th IUGG General Assembly (IUGG2023) (Berlin 2023)
- Published
- 2023
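The clipping mechanism described in the abstract above, phase wrapping rather than rail saturation, can be illustrated with a small toy. This is a schematic sketch under stated assumptions, not the authors' processing chain or their detection network: the interferometric phase is only known modulo 2π, so a large strain-rate signal wraps into wrong but in-range values instead of saturating at a flat plateau.

```python
# Toy contrast between DAS-style phase wrapping and seismometer-style clipping.
import numpy as np

def wrap_phase(phi):
    """Wrap a phase time series into (-pi, pi], as a phase demodulator would."""
    return np.angle(np.exp(1j * phi))

t = np.linspace(0.0, 1.0, 1000)
true_phase = 4.0 * np.pi * np.sin(2.0 * np.pi * 3.0 * t)  # exceeds +-pi: clips

wrapped = wrap_phase(true_phase)                # DAS-style: wraps "randomly"
railed = np.clip(true_phase, -np.pi, np.pi)     # seismometer-style: flat rails
```

The wrapped trace stays inside (-π, π] but is wrong wherever the true phase exceeds π, and those jumps contaminate the spectrum broadly; the railed trace shows the easily identifiable trapezoidal plateaus the abstract mentions.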
11. REHEATFUNQ 1.4.0: A model for regional aggregate heat flow distributions and anomaly quantification.
- Author
-
Ziebarth, Malte Jörn and Specht, Sebastian von
- Subjects
GAMMA distributions, BAYESIAN field theory, GOODNESS-of-fit tests, RANDOM variables, COMMUNITIES
- Abstract
Surface heat flow is a geophysical variable that is affected by a complex combination of various heat generation and transport processes. The processes act on different length scales, from tens of meters to hundreds of kilometers. In general, it is not possible to resolve all processes due to a lack of data or modeling resources, and hence the heat flow data within a region are subject to residual fluctuations. We introduce the REgional HEAT-Flow Uncertainty and aNomaly Quantification (REHEATFUNQ) model, version 1.4.0. At its core, REHEATFUNQ uses a stochastic model for heat flow within a region, considering the aggregate heat flow to be generated by a gamma-distributed random variable. Based on this assumption, REHEATFUNQ uses Bayesian inference to (i) quantify the regional aggregate heat flow distribution (RAHFD) and (ii) estimate the strength of a given heat flow anomaly, for instance as generated by a tectonically active fault. The inference uses a prior conjugate to the gamma distribution for the RAHFDs, and we compute parameters for an uninformed prior from the global heat flow database by Lucazeau (2019). Through the Bayesian inference, our model is the first of its kind to consistently account for the variability of regional heat flow in the inference of spatial signals in heat flow data. The interpretation of these spatial signals, in particular in terms of fault characteristics (particularly fault strength), is the subject of a long-standing debate within the geophysical community. We describe the components of REHEATFUNQ and perform a series of goodness-of-fit tests and synthetic resilience analyses of the model. While our analysis reveals to some degree a misfit of our idealized empirical model with real-world heat flow, it simultaneously confirms the robustness of REHEATFUNQ to these model simplifications. We conclude with an application of REHEATFUNQ to the San Andreas fault in California. 
Our analysis finds heat flow data in the Mojave section to be sufficient for an analysis, and concludes that stochastic variability can allow for a surprisingly large fault-generated heat flow anomaly to be compatible with the data. This indicates that heat flow alone may not be a suitable quantity to address fault strength of the San Andreas fault. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Mixtures of ground-motion prediction equations as backbone models for a logic tree: an application to the subduction zone in Northern Chile
- Author
-
Haendel, Annabel, Specht, Sebastian, Kuehn, Nicolas M., and Scherbaum, Frank
- Published
- 2015
- Full Text
- View/download PDF
13. ICH: Strengths, Weaknesses, and Future Tasks
- Author
-
Specht, Sebastian and Klingmann, Ingrid
- Published
- 2014
- Full Text
- View/download PDF
14. Chronic Pancreatitis Is Associated With Disease-Specific Regulatory T-Cell Responses
- Author
-
Schmitz–Winnenthal, Hubertus, Pietsch, Dong–Ho Kim, Schimmack, Simon, Bonertz, Andreas, Udonta, Florian, Ge, Yingzi, Galindo, Luis, Specht, Sebastian, Volk, Christine, Zgraggen, Kaspar, Koch, Moritz, Büchler, Markus W., Weitz, Jürgen, and Beckhove, Philipp
- Published
- 2010
- Full Text
- View/download PDF
15. Ground-Motion Modeling as an Image Processing Task: Introducing a Neural Network Based, Fully Data-Driven, and Nonergodic Approach.
- Author
-
Lilienkamp, Henning, von Specht, Sebastian, Weatherill, Graeme, Caire, Giuseppe, and Cotton, Fabrice
- Abstract
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude-distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude-distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. ICBM Integrated Combined Baseline Modification
- Author
-
Specht, Sebastian von (Dr.)
- Subjects
ddc:550, Institut für Geowissenschaften
- Abstract
Accelerograms are the primary source for characterizing strong ground motion. It is therefore of paramount interest to have high-quality recordings free from any nonphysical contamination. Frequently, accelerograms are affected by baseline jumps and drifts, related to the instrument and/or a major earthquake. In this work, I propose a correction method for these undesired baseline drifts based on segmented linear least squares. The algorithm operates on the integrated waveforms and combines all three instrument components to estimate a model that continuously brings the baseline to zero. The procedure consists of two steps: first, a suite of models with variable numbers of discontinuities is derived for all three instrument components. During this process, the number of discontinuities is reduced in a parsimonious way; for example, two very close discontinuities are merged into a single one. In the second step, the optimal model is selected on the basis of the Bayesian information criterion. I exemplify the application on synthetic waveforms with known discontinuities and on observed waveforms from a unified strong-motion database of the Japan Meteorological Agency (JMA) and the National Research Institute for Earth Science and Disaster Prevention (NIED, Japan) networks for the major events of the 2016 Kumamoto earthquakes. After the baseline jump correction, the waveforms are furthermore corrected for displacement according to Wang et al. (2011). The resulting displacements are comparable to the Interferometric Synthetic Aperture Radar-derived displacement estimates for the Kumamoto earthquake sequence.
- Published
- 2019
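The two-step idea in the abstract above (segmented linear least squares on the integrated waveform, then model selection by the Bayesian information criterion) can be reduced to a single-component toy. This sketch is illustrative only: the actual ICBM algorithm searches discontinuity positions, merges close ones parsimoniously, and combines all three instrument components.

```python
# Toy sketch: does BIC prefer a baseline model with one discontinuity?
import numpy as np

def fit_baseline(v, t, jump_idx=None):
    """Least-squares baseline a + b*t, optionally plus a step at jump_idx.
    Returns (residual sum of squares, number of parameters)."""
    cols = [np.ones_like(t), t]
    if jump_idx is not None:
        step = np.zeros_like(t)
        step[jump_idx:] = 1.0   # models a baseline jump in the integrated trace
        cols.append(step)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    rss = float(np.sum((v - A @ coef) ** 2))
    return rss, A.shape[1]

def bic(rss, n, k):
    """Bayesian information criterion for a Gaussian least-squares fit."""
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 500
t = np.linspace(0.0, 10.0, n)
v = 0.02 * t + rng.normal(0.0, 0.01, n)   # integrated trace with linear drift
v[300:] += 0.5                             # synthetic baseline jump

rss0, k0 = fit_baseline(v, t)                  # model without a discontinuity
rss1, k1 = fit_baseline(v, t, jump_idx=300)    # model with the jump
bic0, bic1 = bic(rss0, n, k0), bic(rss1, n, k1)
# bic1 should be lower: BIC selects the model that includes the jump
```

The BIC penalty term k·log(n) is what keeps the model suite parsimonious: an extra discontinuity is accepted only if it reduces the residual sum of squares enough to pay for its parameter.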
17. Within- and Between-Event Variabilities of Strong-Velocity Pulses of Moderate Earthquakes within Dense Seismic Arrays.
- Author
-
Yen, Ming-Hsuan, von Specht, Sebastian, Lin, Yen-Yu, Cotton, Fabrice, and Ma, Kuo-Fong
- Abstract
Ground motion with strong-velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering. However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood. In this study, we investigate moderate but damaging earthquakes (Mw 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014) while considering the potential static-offset effects. We confirm that the within-event variability of the pulses is large. The identified pulses in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes. We test the effect of fault dips and the impact of asperity locations and sizes. The simulations reveal that the asperity properties have a high impact on the pulse periods and amplitudes at nearby stations. Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitudes, for the occurrence and properties of pulses produced by the forward directivity effect. We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Effects of finite source rupture on landslide triggering
- Author
-
von Specht, Sebastian, Öztürk, Ugur (Dr.-Ing.), Veh, Georg (Dr.), Cotton, Fabrice Pierre (Prof. Dr.), and Korup, Oliver (Prof. Dr.)
- Subjects
ddc:550, Institut für Geowissenschaften, Institut für Umweltwissenschaften und Geographie
- Abstract
The propagation of a seismic rupture on a fault introduces spatial variations in the seismic wave field surrounding the fault. This directivity effect results in larger shaking amplitudes in the rupture propagation direction. Its seismic radiation pattern also causes amplitude variations between the strike-normal and strike-parallel components of horizontal ground motion. We investigated the landslide response to these effects during the 2016 Kumamoto earthquake (Mw 7.1) in central Kyushu (Japan). Although the distribution of some 1500 earthquake-triggered landslides as a function of rupture distance is consistent with the observed Arias intensity, the landslides were more concentrated to the northeast of the southwest-northeast striking rupture. We examined several landslide susceptibility factors: hillslope inclination, the median amplification factor (MAF) of ground shaking, lithology, land cover, and topographic wetness. None of these factors sufficiently explains the landslide distribution or orientation (aspect), although the landslide head scarps have an elevated hillslope inclination and MAF. We propose a new physics-based ground-motion model (GMM) that accounts for the seismic rupture effects, and we demonstrate that the low-frequency seismic radiation pattern is consistent with the overall landslide distribution. Its spatial pattern is influenced by the rupture directivity effect, whereas landslide aspect is influenced by amplitude variations between the fault-normal and fault-parallel motion at frequencies < 2 Hz. This azimuth dependence implies that comparable landslide concentrations can occur at different distances from the rupture. This quantitative link between the prevalent landslide aspect and the low-frequency seismic radiation pattern can improve coseismic landslide hazard assessment.
- Published
- 2019
19. Likelihood - based optimization in strong-motion seismology
- Author
-
von Specht, Sebastian
- Subjects
ddc:500, Institut für Geowissenschaften
- Published
- 2019
20. Effects of finite source rupture on landslide triggering: The 2016 MW 7.1 Kumamoto earthquake
- Author
-
Specht, Sebastian, Ozturk, Ugur, Veh, Georg, Cotton, Fabrice, and Korup, Oliver
- Abstract
The propagation of a seismic rupture on a fault introduces spatial variations in the seismic wavefield surrounding the fault during an earthquake. This directivity effect results in larger shaking amplitudes in the rupture propagation direction. Its seismic radiation pattern also causes amplitude variations between the strike-normal and strike-parallel components of horizontal ground motion. We investigated the landslide response to these effects during the 2016 Kumamoto earthquake (MW 7.1) in central Kyūshū (Japan). Although the distribution of some 1,500 earthquake-triggered landslides as a function of rupture distance is consistent with the observed Arias intensity, the landslides are more concentrated to the northeast of the southwest-northeast striking rupture. We examined several landslide susceptibility factors: hillslope inclination, median amplification factor (MAF) of ground shaking, lithology, land cover, and topographic wetness. None of these factors can sufficiently explain the landslide distribution or orientation (aspect), although the landslide headscarps coincide with elevated hillslope inclination and MAF. We propose a new physics-based ground motion model that accounts for the seismic rupture effects, and demonstrate that the low-frequency seismic radiation pattern is consistent with the overall landslide distribution. The spatial landslide distribution is primarily influenced by the rupture directivity effect, whereas landslide aspect is influenced by amplitude variations between the fault-normal and fault-parallel motion at frequencies < 2 Hz.
- Published
- 2018
21. A Link between Machine Learning and Optimization in Ground-Motion Model Development: Weighted Mixed-Effects Regression with Data-Driven Probabilistic Earthquake Classification.
- Author
-
von Specht, Sebastian and Cotton, Fabrice
- Abstract
The steady increase of ground-motion data not only opens new possibilities but also brings new challenges in the development of ground-motion models (GMMs). Data classification techniques (e.g., cluster analysis) produce not only deterministic classifications but also probabilistic classifications (e.g., probabilities for each datum to belong to a given class or cluster). One challenge is the integration of such continuous classification into regressions for GMM development, such as the widely used mixed-effects model. We address this issue by introducing an extension of the mixed-effects model to incorporate data weighting. The parameter estimation of the mixed-effects model, that is, the fixed-effects coefficients of the GMM and the random-effects variances, is based on the weighted likelihood function, which also provides analytic uncertainty estimates. The data weighting permits earthquake classification beyond the classical, expert-driven, binary classification based, for example, on event depth, distance to trench, style of faulting, and fault dip angle. We apply Angular Classification with Expectation-Maximization (ACE), an algorithm to identify clusters of nodal planes from focal mechanisms, to differentiate between, for example, interface- and intraslab-type events. Classification is continuous, that is, no event belongs completely to one class, which is taken into account in the ground-motion modeling. The theoretical framework described in this article allows for a fully automatic calibration of ground-motion models using large databases with automated classification and processing of earthquake and ground-motion data. As an example, we developed a GMM on the basis of the GMM by Montalva et al. (2017) with data from the strong-motion flat file of Bastías and Montalva (2016) with ~2400 records from 319 events in the Chilean subduction zone. Our GMM with the data-driven classification is comparable to the expert-classification-based model. 
Furthermore, the model shows temporal variations of the between-event residuals before and after large earthquakes in the region. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
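The data-weighting idea in the abstract above can be illustrated with a minimal sketch: per-event weights (e.g. a probabilistic class membership) enter the estimation of the fixed-effects coefficients through a weighted least-squares fit. Everything here is hypothetical and synthetic; this is a simplification for illustration, not the paper's full weighted mixed-effects likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-motion data set: 50 events with 10 records each.
n_events, n_rec = 50, 10
event_id = np.repeat(np.arange(n_events), n_rec)
mag = np.repeat(rng.uniform(5.0, 8.0, n_events), n_rec)
dist = rng.uniform(10.0, 200.0, n_events * n_rec)   # hypothetical distances (km)

beta_true = np.array([1.5, 0.8, -1.2])              # intercept, magnitude scaling, spreading
tau, phi = 0.3, 0.5                                  # between-/within-event standard deviations
event_term = rng.normal(0.0, tau, n_events)
y = (beta_true[0] + beta_true[1] * mag + beta_true[2] * np.log10(dist)
     + event_term[event_id] + rng.normal(0.0, phi, n_events * n_rec))

# Continuous class membership per event (e.g. P(interface) from a
# probabilistic classifier); here drawn at random for illustration.
w = np.repeat(rng.uniform(0.1, 1.0, n_events), n_rec)

# Weighted least-squares estimate of the fixed effects:
#   beta_hat = argmin_beta  sum_i w_i * (y_i - x_i . beta)^2
X = np.column_stack([np.ones_like(dist), mag, np.log10(dist)])
XtW = X.T * w                       # broadcasting applies w to each row
beta_hat = np.linalg.solve(XtW @ X, XtW @ y)

# Weighted between-event residuals, a crude stand-in for the random effects.
resid = y - X @ beta_hat
event_resid = np.array([np.average(resid[event_id == k], weights=w[event_id == k])
                        for k in range(n_events)])
print(beta_hat)
```

With enough records the weighted estimates recover the true coefficients; down-weighted events simply contribute less to both the fixed effects and their event terms.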
22. An 8 month slow slip event triggers progressive nucleation of the 2014 Chile megathrust
- Author
-
Socquet, Anne, Piña Valdes, Jesús, Jara, Jorge, Cotton, Fabrice, Walpersdorf, Andrea, Cotte, Nathalie, Specht, Sebastian, Ortega‐Culaciati, Francisco, Carrizo, Daniel, and Norabuena Ortiz, Edmundo
- Subjects
Seismic station, Earthworks, Earthquakes, Subduction, Global positioning system, Landslides, Seismology - Abstract
The mechanisms leading to large earthquakes are poorly understood and documented. Here we characterize the long‐term precursory phase of the 1 April 2014 Mw 8.1 North Chile megathrust. We show that a group of coastal GPS stations accelerated westward 8 months before the main shock, corresponding to a Mw 6.5 slow slip event on the subduction interface, 80% of which was aseismic. Concurrent interface foreshocks underwent a diminution of their radiation at high frequency, as shown by the temporal evolution of Fourier spectra and residuals with respect to ground motions predicted by recent subduction models. Such a change in ground motions suggests that, in response to the slow sliding of the subduction interface, seismic ruptures progressively became smoother and/or slower. The gradual propagation of seismic ruptures beyond seismic asperities into surrounding metastable areas could explain these observations and might be the precursory mechanism eventually leading to the main shock.
- Published
- 2017
23. Data-driven earthquake focal mechanism cluster analysis
- Author
-
Specht, Sebastian, Heidbach, Oliver, Cotton, Fabrice, and Zang, Arno
- Abstract
Scientific Technical Report STR; 17/01, Earthquake focal mechanism solutions (FMS) form the basic data input for many applications, e.g. stress tensor inversion or ground-motion prediction equation estimation. In these applications the FMS data are usually binned spatially or in predetermined ranges of rake and dip based on expert elicitation. However, due to the significant increase of FMS data in the past decade, an objective data-driven cluster analysis is now possible. Here we present the method ACE (Angular Classification with Expectation-Maximization) that identifies clusters of FMS without a priori information. The identified clusters can be used for the classification of the Style-of-Faulting and as weights for FMS data binning in the aforementioned applications. As an application example we use ACE to identify FMS clusters according to their Style-of-Faulting that are related to certain earthquake types (e.g. subduction interface) in northern Chile, the Nazca Plate and in Kyushu (Japan). We use the resulting clusters and weights as a priori information for a stress tensor inversion for these regions and show that uncertainties of the stress tensor estimates are reduced significantly.
- Published
- 2017
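The clustering idea behind ACE can be sketched with a toy expectation-maximization algorithm for a two-component von Mises mixture on a single angle (e.g. rake). This is a stand-in for illustration only: the actual ACE method operates on nodal-plane orientations from focal mechanisms, and all data and parameter choices below are invented.

```python
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(1)

# Synthetic "rake angles" (radians) from two hypothetical fault populations.
x = np.concatenate([
    vonmises.rvs(kappa=8.0, loc=np.pi / 2, size=150, random_state=rng),  # thrust-like
    vonmises.rvs(kappa=8.0, loc=0.0, size=150, random_state=rng),        # strike-slip-like
])

def em_vonmises(x, n_iter=200):
    """Toy EM for a two-component von Mises mixture on the circle."""
    n = len(x)
    mu = np.array([x[0], x[-1]])            # initialize means from two samples
    kappa = np.full(2, 1.0)
    pi_ = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: soft (continuous) class memberships per datum.
        dens = np.stack([pi_[j] * vonmises.pdf(x, kappa[j], loc=mu[j]) for j in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: mixture weights, circular means, concentrations.
        nk = resp.sum(axis=1)
        pi_ = nk / n
        C, S = resp @ np.cos(x), resp @ np.sin(x)
        mu = np.arctan2(S, C)
        rbar = np.sqrt(C**2 + S**2) / nk    # mean resultant length per cluster
        kappa = rbar * (2.0 - rbar**2) / (1.0 - rbar**2)  # Banerjee et al. approximation
    return mu, kappa, pi_, resp

mu, kappa, pi_, resp = em_vonmises(x)
```

The responsibilities `resp` are exactly the kind of continuous (non-binary) classification described in the abstract: each event carries a membership probability for every cluster rather than a hard label.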
24. Applying Conservation of Energy to Estimate Earthquake Frequencies from Strain Rates and Stresses.
- Author
-
Ziebarth, Malte J., Specht, Sebastian, Heidbach, Oliver, Cotton, Fabrice, and Anderson, John G.
- Subjects
- *
EARTHQUAKES , *EARTHQUAKE hazard analysis , *SEISMIC waves , *SEISMOLOGY , *GEOPHYSICS - Abstract
Estimating earthquake occurrence rates from the accumulation rate of seismic moment is an established tool of seismic hazard analysis. We propose an alternative, fault‐agnostic approach based on the conservation of energy: the Energy‐Conserving Seismicity Framework (ENCOS). Working in energy space has the advantage that the radiated energy is a better predictor of the damage potential of earthquake waves than the seismic moment release. In a region, ENCOS balances the stationary power available to cause earthquakes with the long‐term seismic energy release, represented by the first moment of the energy‐frequency distribution. Accumulation and release are connected through the average seismic efficiency, by which we mean the fraction of released energy that is converted into seismic waves. Besides measuring earthquakes in energy, ENCOS differs from moment balance essentially in that the energy accumulation rate depends on the total stress in addition to the strain rate tensor. To validate ENCOS, we model the energy‐frequency distribution around Southern California as an example. We estimate the energy accumulation rate due to tectonic loading assuming poroelasticity and hydrostasis. Using data from the World Stress Map and assuming the frictional limit to estimate the stress tensor, we obtain a power of 0.8 GW. The uncertainty range, 0.3–2.0 GW, originates mainly from the thickness of the seismogenic crust, the friction coefficient on preexisting faults, and models of Global Positioning System (GPS) derived strain rates. Based on a Gutenberg‐Richter magnitude‐frequency distribution, this power can be distributed over a range of energies consistent with historical earthquake rates and reasonable bounds on the seismic efficiency. 
Key Points: Conservation of energy is used to estimate long‐term earthquake occurrence rates from geomechanical modeling of relevant processes. The elastic power is estimated using Global Positioning System and stress data, assuming linear poroelasticity and the frictional limit. In Southern California, the earthquake occurrence rates modeled from the elastic power are compatible with observed seismicity. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
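The final step of the abstract above, distributing a power budget over a Gutenberg-Richter distribution, can be sketched in closed form. Only the 0.8 GW figure comes from the abstract; the 10 % seismic efficiency, b-value, and magnitude bounds are assumptions for illustration, and the energy-magnitude relation log10 E = 1.5 M + 4.8 (E in joules) is the standard Gutenberg-Richter one, not necessarily the exact form used in ENCOS.

```python
import numpy as np

def gr_a_from_power(power_watt, efficiency, b=1.0, m_min=4.0, m_max=8.0):
    """a-value of a truncated Gutenberg-Richter law, N(>=M) = 10**(a - b*M)
    events per year, whose first energy moment matches the radiated-energy
    budget efficiency * power. Illustrative sketch, not the ENCOS code."""
    e_target = efficiency * power_watt * 365.25 * 86400.0   # J released per year
    # Radiated energy per event: log10 E[J] = 1.5*M + 4.8.
    # Closed form of  integral n(M) E(M) dM  with n(M) = b*ln(10)*10**(a - b*M):
    c = 1.5 - b
    factor = (b / c) * 10**4.8 * (10**(c * m_max) - 10**(c * m_min))
    return np.log10(e_target / factor)

# 0.8 GW elastic power (the paper's Southern California estimate) and an
# assumed seismic efficiency of 10 %:
a = gr_a_from_power(0.8e9, efficiency=0.1)
rate_m6 = 10**(a - 1.0 * 6.0)   # expected M >= 6 events per year
print(a, rate_m6)
```

Because the energy integral is dominated by the largest events (c = 1.5 - b > 0 for b < 1.5), the resulting rates are sensitive to the chosen maximum magnitude, which mirrors the uncertainty discussion in the abstract.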
25. Full-waveform-based characterization of acoustic emission activity in a mine-scale experiment: a comparison of conventional and advanced hydraulic fracturing schemes.
- Author
-
Niemz, Peter, Cesca, Simone, Heimann, Sebastian, Grigoli, Francesco, von Specht, Sebastian, Hammer, Conny, Zang, Arno, and Dahm, Torsten
- Subjects
INDUCED seismicity ,HYDRAULIC fracturing ,PIEZOELECTRIC detectors ,CRYSTALLINE rocks ,ACOUSTIC emission ,SENSOR networks ,FLUID injection - Abstract
Understanding fracturing processes and the hydromechanical relation to induced seismicity is a key question for enhanced geothermal systems (EGS). Commonly, massive fluid injection, predominantly causing hydroshearing, is used in large-scale EGS, but hydraulic fracturing approaches have also been discussed. To evaluate the applicability of hydraulic fracturing techniques in EGS, six in situ, multistage hydraulic fracturing experiments with three different injection schemes were performed under controlled conditions in crystalline rock at the Äspö Hard Rock Laboratory (Sweden). During the experiments the near-field ground motion was continuously recorded by 11 piezoelectric borehole sensors with a sampling rate of 1 MHz. The sensor network covered a volume of 30×30×30 m around a horizontal, 28-m-long injection borehole at a depth of 410 m. To extract and characterize the massive, induced, high-frequency acoustic emission (AE) activity from the continuous recordings, a semi-automated workflow was developed relying on full-waveform-based detection, classification and location procedures. The approach extended the AE catalogue from 196 triggered events in previous studies to more than 19 600 located AEs. The enhanced catalogue, for the first time, allows a detailed analysis of induced seismicity during single hydraulic fracturing experiments, including the individual fracturing stages and the comparison between injection schemes. Besides the detailed study of the spatio-temporal patterns, event clusters and the growth of seismic clouds, we estimate relative magnitudes and b-values of AEs for conventional, cyclic progressive and dynamic pulse injection schemes, the latter two being fatigue hydraulic fracturing techniques. 
While the conventional fracturing leads to AE patterns clustered in planar regions, indicating the generation of a single main fracture plane, the cyclic progressive injection scheme results in a more diffuse, cloud-like AE distribution, indicating the activation of a more complex fracture network. For a given amount of hydraulic energy (pressure multiplied by injected volume) pumped into the system, the cyclic progressive scheme is characterized by a lower rate of seismicity, lower maximum magnitudes and significantly larger b-values, implying an increased number of small events relative to the large ones. To our knowledge, this is the first direct comparison of high-resolution seismicity in a mine-scale experiment induced by different hydraulic fracturing schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
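The b-value comparison between injection schemes in the abstract above rests on a standard estimator. A minimal sketch of the maximum-likelihood b-value (Aki 1965) on synthetic AE magnitudes follows; the catalogue, completeness magnitude, and true b-value are invented, with the elevated b = 1.5 merely echoing the "larger b-values for cyclic injection" finding qualitatively.

```python
import numpy as np

def b_value_aki(mags, m_c, dm=0.0):
    """Maximum-likelihood b-value (Aki 1965), with Utsu's dm/2 correction
    for binned magnitudes; events below completeness m_c are discarded."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    # b = log10(e) / (mean(M) - (m_c - dm/2))
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic AE relative magnitudes from a GR law with b = 1.5.
rng = np.random.default_rng(2)
b_true = 1.5
mags = -4.0 + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)
b_est = b_value_aki(mags, m_c=-4.0)
print(round(b_est, 2))
```

The estimator's standard error scales roughly as b/sqrt(n), which is why the ~100-fold catalogue enhancement described in the abstract matters for resolving b-value differences between schemes.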
26. A Regionalized Seismicity Model for Subduction Zones Based on Geodetic Strain Rates, Geomechanical Parameters, and Earthquake-Catalog Data.
- Author
-
Viveros, José Antonio Bayona, von Specht, Sebastian, Strader, Anne, Hainzl, Sebastian, Cotton, Fabrice, and Schorlemmer, Danijel
- Abstract
The Seismic Hazard Inferred from Tectonics based on the Global Strain Rate Map (SHIFT_GSRM) earthquake forecast was designed to provide high-resolution estimates of global shallow seismicity to be used in seismic hazard assessment. This model combines geodetic strain rates with global earthquake parameters to characterize long-term rates of seismic moment and earthquake activity. Although SHIFT_GSRM properly computes seismicity rates in seismically active continental regions, it underestimates earthquake rates in subduction zones by an average factor of approximately 3. We present a complementary method to SHIFT_GSRM to more accurately forecast earthquake rates in 37 subduction segments, based on the conservation of moment principle and the use of regional interface seismicity parameters, such as subduction dip angles, corner magnitudes, and coupled seismogenic thicknesses. In seven progressive steps, we find that SHIFT_GSRM earthquake-rate underpredictions are mainly due to the utilization of a global probability function of seismic moment release that poorly captures the great variability among subduction megathrust interfaces. Retrospective test results show that the forecast is consistent with the observations during the 1 January 1977 to 31 December 2014 period. Moreover, successful pseudoprospective evaluations for the 1 January 2015 to 31 December 2018 period demonstrate the power of the regionalized earthquake model to properly estimate subduction-zone seismicity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
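The moment-conservation principle underlying the forecast described above can be sketched as a two-step calculation: a Kostrov-style conversion of strain rate to a scalar moment accumulation rate, followed by a truncated Gutenberg-Richter law that releases that moment. All numbers (segment geometry, strain rate, b-value, magnitude bounds) are hypothetical, and this is a simplification of, not a reimplementation of, the SHIFT_GSRM method.

```python
import numpy as np

MU = 3.0e10           # shear modulus (Pa), a typical crustal value
LOG10_M0 = 9.05       # log10 M0[N m] = 1.5*Mw + 9.05 (Hanks & Kanamori)

def kostrov_moment_rate(strain_rate_per_yr, area_m2, thickness_m, coupling=1.0):
    """Kostrov-style scalar moment accumulation rate (N m per year)."""
    return coupling * 2.0 * MU * area_m2 * thickness_m * strain_rate_per_yr

def gr_a_from_moment_rate(m0_rate, b=1.0, m_min=5.0, m_max=9.0):
    """a-value of a truncated GR law, N(>=M) = 10**(a - b*M) events/yr,
    whose long-term moment release balances m0_rate."""
    c = 1.5 - b
    factor = (b / c) * 10**LOG10_M0 * (10**(c * m_max) - 10**(c * m_min))
    return np.log10(m0_rate / factor)

# Hypothetical subduction segment: 500 km x 200 km map area, 40 km coupled
# seismogenic thickness, geodetic strain rate of 5e-8 per year.
m0_rate = kostrov_moment_rate(5e-8, 500e3 * 200e3, 40e3)
a = gr_a_from_moment_rate(m0_rate)
rate_m7 = 10**(a - 7.0)    # expected M >= 7 events per year
print(m0_rate, rate_m7)
```

The abstract's regionalization enters exactly through parameters like the coupled seismogenic thickness, dip-dependent geometry, and corner magnitude, which here are collapsed into the fixed `coupling`, `thickness_m`, and `m_max` arguments.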
27. Effects of finite source rupture on landslide triggering: the 2016 Mw 7.1 Kumamoto earthquake.
- Author
-
von Specht, Sebastian, Ozturk, Ugur, Veh, Georg, Cotton, Fabrice, and Korup, Oliver
- Subjects
- *
LANDSLIDES , *LANDSLIDE hazard analysis , *SEISMIC waves , *EARTHQUAKES , *LAND cover - Abstract
The propagation of a seismic rupture on a fault introduces spatial variations in the seismic wave field surrounding the fault. This directivity effect results in larger shaking amplitudes in the rupture propagation direction. Its seismic radiation pattern also causes amplitude variations between the strike-normal and strike-parallel components of horizontal ground motion. We investigated the landslide response to these effects during the 2016 Kumamoto earthquake (Mw 7.1) in central Kyushu (Japan). Although the distribution of some 1500 earthquake-triggered landslides as a function of rupture distance is consistent with the observed Arias intensity, the landslides were more concentrated to the northeast of the southwest–northeast striking rupture. We examined several landslide susceptibility factors: hillslope inclination, the median amplification factor (MAF) of ground shaking, lithology, land cover, and topographic wetness. None of these factors sufficiently explains the landslide distribution or orientation (aspect), although the landslide head scarps have an elevated hillslope inclination and MAF. We propose a new physics-based ground-motion model (GMM) that accounts for the seismic rupture effects, and we demonstrate that the low-frequency seismic radiation pattern is consistent with the overall landslide distribution. Its spatial pattern is influenced by the rupture directivity effect, whereas landslide aspect is influenced by amplitude variations between the fault-normal and fault-parallel motion at frequencies <2 Hz. This azimuth dependence implies that comparable landslide concentrations can occur at different distances from the rupture. This quantitative link between the prevalent landslide aspect and the low-frequency seismic radiation pattern can improve coseismic landslide hazard assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
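The Arias intensity referenced in the abstract above is a simple integral measure of shaking severity. The sketch below computes it for a synthetic accelerogram; the record is invented noise under an assumed amplitude envelope, purely to show the formula at work.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def arias_intensity(acc, dt):
    """Arias intensity Ia = pi/(2g) * integral a(t)^2 dt, in m/s."""
    return np.pi / (2.0 * G) * np.sum(np.asarray(acc)**2) * dt

# Synthetic accelerogram: amplitude-modulated noise standing in for a record.
rng = np.random.default_rng(3)
dt = 0.01                                     # 100 Hz sampling
t = np.arange(0.0, 20.0, dt)
envelope = (t / 4.0) * np.exp(1.0 - t / 4.0)  # builds up, peaks at 4 s, decays
acc = 0.2 * G * envelope * rng.standard_normal(t.size)

ia = arias_intensity(acc, dt)

# Quadratic in amplitude: doubling the shaking quadruples Arias intensity.
ia_double = arias_intensity(2.0 * acc, dt)
```

Because Ia is quadratic in amplitude, the directivity-driven amplitude variations discussed in the abstract translate into even stronger azimuthal variations of Arias intensity around the rupture.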
28. Effects of finite source rupture on landslide triggering: The 2016 MW 7.1 Kumamoto earthquake.
- Author
-
von Specht, Sebastian, Ozturk, Ugur, Veh, Georg, Cotton, Fabrice, and Korup, Oliver
- Subjects
- *
SPATIAL variation , *SURFACE fault ruptures , *LANDSLIDES - Abstract
The propagation of a seismic rupture on a fault introduces spatial variations in the seismic wavefield surrounding the fault during an earthquake. This directivity effect results in larger shaking amplitudes in the rupture propagation direction. Its seismic radiation pattern also causes amplitude variations between the strike-normal and strike-parallel components of horizontal ground motion. We investigated the landslide response to these effects during the 2016 Kumamoto earthquake (MW 7.1) in central Kyūshū (Japan). Although the distribution of some 1,500 earthquake-triggered landslides as a function of rupture distance is consistent with the observed Arias intensity, the landslides are more concentrated to the northeast of the southwest-northeast striking rupture. We examined several landslide susceptibility factors: hillslope inclination, median amplification factor (MAF) of ground shaking, lithology, land cover, and topographic wetness. None of these factors can sufficiently explain the landslide distribution or orientation (aspect), although the landslide headscarps coincide with elevated hillslope inclination and MAF. We propose a new physics-based ground motion model that accounts for the seismic rupture effects, and demonstrate that the low-frequency seismic radiation pattern is consistent with the overall landslide distribution. The spatial landslide distribution is primarily influenced by the rupture directivity effect, whereas landslide aspect is influenced by amplitude variations between the fault-normal and fault-parallel motion at frequencies <2 Hz. This azimuth dependence implies that comparable landslide concentrations can occur at different distances from the rupture. This quantitative link between the prevalent landslide aspect and the low-frequency seismic radiation pattern can improve coseismic landslide hazard assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
29. A Regionalized Seismicity Model for Subduction Zones Based on Geodetic Strain Rates, Geomechanical Parameters, and Earthquake-Catalog Data.
- Author
-
Viveros, José Antonio Bayona, von Specht, Sebastian, Strader, Anne, Hainzl, Sebastian, Cotton, Fabrice, and Schorlemmer, Danijel
- Abstract
The Seismic Hazard Inferred from Tectonics based on the Global Strain Rate Map (SHIFT_GSRM) earthquake forecast was designed to provide high-resolution estimates of global shallow seismicity to be used in seismic hazard assessment. This model combines geodetic strain rates with global earthquake parameters to characterize long-term rates of seismic moment and earthquake activity. Although SHIFT_GSRM properly computes seismicity rates in seismically active continental regions, it underestimates earthquake rates in subduction zones by an average factor of approximately 3. We present a complementary method to SHIFT_GSRM to more accurately forecast earthquake rates in 37 subduction segments, based on the conservation of moment principle and the use of regional interface seismicity parameters, such as subduction dip angles, corner magnitudes, and coupled seismogenic thicknesses. In seven progressive steps, we find that SHIFT_GSRM earthquake-rate underpredictions are mainly due to the utilization of a global probability function of seismic moment release that poorly captures the great variability among subduction megathrust interfaces. Retrospective test results show that the forecast is consistent with the observations during the 1 January 1977 to 31 December 2014 period. Moreover, successful pseudoprospective evaluations for the 1 January 2015 to 31 December 2018 period demonstrate the power of the regionalized earthquake model to properly estimate subduction-zone seismicity. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
30. Spatiotemporal Variations of Ground Motion in Northern Chile before and after the 2014 Mw 8.1 Iquique Megathrust Event.
- Author
-
Piña-Valdés, Jesús, Socquet, Anne, Cotton, Fabrice, and Specht, Sebastian
- Abstract
To evaluate the spatiotemporal variations of ground motions in northern Chile, we built a high-quality rock seismic acceleration database and an interface earthquake catalog. Two ground-motion prediction equation (GMPE) models for subduction zones have been tested and validated for the area. They were then used as backbone models to describe the time-space variations of earthquake frequency content (Fourier and response spectra). Consistent with previous studies of large subduction earthquakes, moderate interface earthquakes in northern Chile show an increase of the high-frequency energy released with depth. A regional variability of earthquake frequency content is also observed, which may be related to a lateral segmentation of the mechanical properties of the subduction interface. Finally, interface earthquakes show a temporal evolution of their frequency content in the earthquake sequence associated with the 2014 Iquique Mw 8.1 megathrust earthquake. Surprisingly, the change does not occur with the mainshock but is associated with an 8 month slow slip preceding the megathrust. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
31. The Small Heat Shock Protein Hsp42 Controls the Spatio-Temporal Organization of Aggregated Proteins in Saccharomyces Cerevisiae
- Author
-
Specht, Sebastian
- Subjects
570 Life sciences - Abstract
Stress-induced protein aggregation represents a major threat for cell survival and is also associated with various human disorders and cellular aging. The primary cellular response to aberrant protein conformations is the refolding of misfolded proteins by molecular chaperones or their elimination by AAA+ proteases. Once this first line of defense has been overrun, aggregated proteins are directed to specific compartments, thus protecting the cellular environment from potentially deleterious protein conformations. Organizing protein aggregates might also facilitate the recruitment of protein quality control components, thereby increasing the efficiency of aggregate removal in a subsequent phase. In Saccharomyces cerevisiae, application of mild stress (37°C) results, when proteasomal degradation is inhibited, in partitioning of misfolded proteins between two distinct compartments (Kaganovich, 2008). More mobile misfolded proteins, which are ubiquitylated and likely represent substrates for proteasomal degradation, are sequestered at the JUNQ (juxtanuclear quality control) compartment. Terminally aggregated, insoluble proteins are sorted to the peripheral IPOD (insoluble protein deposit) compartment that also harbors amyloidogenic proteins. To gain further insight into the spatio-temporal organization of misfolded proteins in Saccharomyces cerevisiae, I analyzed the localization of stress-induced protein aggregates by employing various fluorescent reporter proteins that either misfold upon stress application or bind to aggregated proteins. Since little is known about cellular factors involved in the sorting of misfolded proteins, I performed a candidate approach and focused on the Saccharomyces cerevisiae small heat shock proteins (sHsps), namely Hsp26 and Hsp42. I identified Hsp42 as an essential factor in the formation of IPOD-like inclusions. In hsp42Δ cells misfolded proteins do not accumulate in peripheral inclusions, but seem to be re-directed to the JUNQ. 
As Hsp42 localizes specifically to IPOD-like inclusions, but is absent from the JUNQ compartment, the lack of peripheral aggregation foci is a direct effect of missing Hsp42, thus illuminating a novel function of sHsps in controlling the cellular sorting of damaged proteins. In contrast, the second Saccharomyces cerevisiae sHsp, Hsp26, does not affect aggregate sorting and is present in both JUNQ and IPOD-like compartments. Transferring the elongated N-terminal domain (NTD) of Hsp42 to Hsp26 enables Hsp26 to partially replace Hsp42 function in aggregate sorting. In contrast, Hsp42 lacking its NTD is not able to restore the occurrence of peripheral inclusions in hsp42Δ cells. The NTD is thus a key determinant in contributing functional specificity to Hsp42. My data suggest that Hsp42 acts as an adaptor protein that co-aggregates efficiently with misfolded proteins. The sHsp might link such complexes via its NTD to further, so far unknown, sorting factors. Thereby, protein inclusions might be directed to the actin cytoskeleton, which I demonstrate to be crucial for aggregate sorting to JUNQ and IPOD-like compartments. Nonetheless, Hsp42 function is restricted to amorphous aggregates, because the localization of amyloidogenic proteins to IPOD-like inclusions does not depend on Hsp42. Comparing the mobility and stability of aggregated proteins deposited at the JUNQ in wild-type and hsp42Δ cells revealed the JUNQ compartment of hsp42Δ cells to show a moderate increase in substrate mobility and to be solubilized more rapidly by Hsp104. These findings suggest that the Hsp42-dependent sorting to IPOD-like compartments retards substrate resolubilization, thereby potentially reducing substrate load of the quality control system. I also analyzed the spatio-temporal organization of protein aggregates in cells with intact proteasomal degradation during sublethal heat-stress and a subsequent recovery phase allowing for aggregate solubilization. 
Heat shock generates multiple aggregation foci that are distributed throughout the cell. Sorting of aggregated proteins to JUNQ and IPOD-like deposition sites does not occur upon return to physiological growth conditions. Instead, protein disaggregation takes place in situ and does not require an intact actin cytoskeleton. My data thus demonstrate that the applied stress condition has a profound impact on the organization of misfolded proteins. Moreover, my findings disclose functional divergence of the Saccharomyces cerevisiae sHsps in the refolding and organization of heat shock-generated protein aggregates. Incorporation of Hsp26 facilitates the reactivation of aggregated proteins. In contrast, Hsp42 does not influence protein refolding but serves as a sorting factor essential for the persistence of protein inclusions in the cellular periphery.
- Published
- 2010
32. Hydraulic fracture monitoring in hard rock at 410 m depth with an advanced fluid-injection protocol and extensive sensor array.
- Author
-
Zang, Arno, Stephansson, Ove, Stenberg, Leif, Plenkers, Katrin, Specht, Sebastian, Milkereit, Claus, Schill, Eva, Kwiatek, Grzegorz, Dresen, Georg, Zimmermann, Günter, Dahm, Torsten, and Weber, Michael
- Subjects
HARD rock mining ,HYDRAULIC fracturing ,HEAT exchangers ,MICROSEISMS ,ACOUSTIC emission ,ELECTROMAGNETISM - Abstract
In this paper, an underground experiment at the Äspö Hard Rock Laboratory (HRL) is described. The main goal is to optimize geothermal heat exchange in crystalline rock mass at depth by multistage hydraulic fracturing with minimal impact on the environment, that is, minimal induced seismicity. For this, three arrays with acoustic emission, microseismicity and electromagnetic sensors are installed to map hydraulic fracture initiation and growth. Fractures are driven by three different water injection schemes (continuous, progressive and pulse pressurization). After a brief review of hydraulic fracture operations in crystalline rock mass at mine scale, the site geology and the stress conditions at Äspö HRL are described. Then, the continuous, single-flow-rate and alternative, multiple-flow-rate fracture breakdown tests in a horizontal borehole at depth level 410 m are described, together with the monitoring networks and their sensitivity. Monitoring results include the primary catalogue of acoustic emission hypocentres obtained from four hydraulic fractures with the in situ trigger and localizing network. The continuous and alternative water injection schemes are compared in terms of the fracture breakdown pressure, the fracture pattern from the impression packer results and the monitoring at the arrays. An example of multistage hydraulic fracturing with several phases of opening and closing of fracture walls is evaluated using data from acoustic emissions, seismic broad-band recordings and electromagnetic signal response. Based on our limited number of in situ tests (six) and the evaluation of three tests in Ävrö granodiorite, in the multiple-flow-rate test with progressively increasing target pressure the acoustic emission activity starts at a later stage in the fracturing process compared to the conventional fracturing case with continuous water injection. 
The total number and magnitudes of acoustic events also tend to be smaller in the progressive treatment with frequent phases of depressurization. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Designing a Cross-Border Health Atlas through Immersion in Health Services Research.
- Author
-
Specht, Sebastian
- Subjects
- *
MAP design , *MEDICAL geography , *MEDICAL care research - Abstract
This work describes an ongoing cartographic design process at the intersection of medical geography and health services research. The desired result is the prototypical implementation of cartographic artefacts united under the umbrella of a "Cross-border Health Data Compass" (CHDC). The resulting web-based visual analytics software aims to support multidisciplinary cross-border healthcare research in the northern Dutch-German border region. Visualizations of consolidated and harmonized data from publicly available information sources will interact with a model of the spatial accessibility of health care facilities of the region. Medical geography describes the spatial accessibility of health care facilities as one of several dimensions of access to health care services. Of all the dimensions of access, the aspects of "accessibility" and "availability" (i.e. the presence of facilities) are spatial and distance-based and therefore empirically ascertainable (Kisteman et al. 2019). In addition, health care itself is part of a "social space" occurring in space and time. This implies that the regional socio-economic situation influences health care. For health services research these spatial dimensions are of great importance as well and therefore relevant background information. For this reason, an accessibility model based on OpenStreetMap network data will calculate potential catchment areas of hospitals and care facilities. The resulting accessibility model data will be an integral part of the cartographic visualisation of demographic and socio-economic data. Common requirements engineering techniques, qualitative study methods, and the "design by immersion" approach described by Hall (2020) provide the methods for the design process of this work (Figure 1). The latter understands the design effort of a visualisation researcher as an "immersion experience" in a specific (scientific) domain: the elicitation of cartographic requirements results from the visualisation problems of that specific domain and from reflection on the iterative and cooperative search for solutions. The design process started with a qualitative study among Dutch and German health researchers from the CBI initiative (n=9, semi-structured interviews). In the interviews, more than half of the participants stated that spatial aspects play a role, or at least a subordinate role, in their research. However, even though many collect data on socio-economic status as part of their own studies, only one researcher had ever recorded spatial information about the study subjects. Generally, the study detected an interest in spatial aspects of their research questions, but at the same time reservations about the feasibility of spatial analyses, the possible significance of the findings, and uncertainty about the availability of secondary data across the border. A number of challenges result from these findings: the transnational perspective of the CBI researchers calls for homogeneous data models from national sources and requires solutions to normalize data and classifications that are hard to compare. The variety of disciplines in the consortium calls for a wide range of spatial scales, down to spatial models using small-scale cartographic grids. The use of these grids in terms of statistical disclosure control, the soundness of disaggregation and small-area estimations, and methodological transparency pose the research questions for the CHDC and the accompanying cartographic visualizations. The presentation will give insight into the intermediate results of this design process, report on methodological and algorithmic considerations, and present the state of the visual analytics artefacts. 
Abstracts of the International Cartographic Association, 5, 2022. European Cartographic Conference -- EuroCarto 2022, 19-21 September 2022, TU Wien, Vienna, Austria. https://doi.org/10.5194/ica-abs-5-14-2022 | © Author(s) 2022. CC BY 4.0 License.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Hsp42 is required for sequestration of protein aggregates into deposition sites in Saccharomyces cerevisiae.
- Author
-
Specht, Sebastian, Miller, Stephanie B. M., Mogk, Axel, and Bukau, Bernd
- Subjects
- *
PROTEIN research , *PROTEIN folding , *MOLECULAR structure of amino compounds , *HEAT shock proteins , *SACCHAROMYCES cerevisiae - Abstract
The aggregation of proteins inside cells is an organized process with cytoprotective function. In Saccharomyces cerevisiae, aggregating proteins are spatially sequestered to either juxtanuclear or peripheral sites, which target distinct quality control pathways for refolding and degradation. The cellular machinery driving the sequestration of misfolded proteins to these sites is unknown. In this paper, we show that one of the two small heat shock proteins of yeast, Hsp42, is essential for the formation of peripheral aggregates during physiological heat stress. Hsp42 preferentially localizes to peripheral aggregates but is largely absent from juxtanuclear aggregates, which still form in hsp42Δ cells. Transferring the amino-terminal domain of Hsp42 to Hsp26, which does not participate in aggregate sorting, enables Hsp26 to replace Hsp42 function. Our data suggest that Hsp42 acts via its amino-terminal domain to coaggregate with misfolded proteins and perhaps link such complexes to further sorting factors. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
35. Detail or Disclosure - Towards a Visualisation of Confidentiality Related Spatial Damage to Demographic Grids.
- Author
-
Specht, Sebastian and Kramer, Bernd
- Subjects
- *
MUNICIPAL services , *DEMOGRAPHY , *MEDICAL care , *CARTOGRAPHY , *MAPS - Abstract
Statistical data on demography is the basis for many population-related scientific questions, economic questions of health care and questions of planning public services. Population data in equal-area cartographic grid cells appears to be a good basis, especially for use cases in inter-municipal contexts of administration and planning (Specht et al. 2019). Census results have been used since the 2011 census made small-scale population data for the entire Federal Republic of Germany available on a 100 m grid for the first time. Unfortunately, this data is not updated by the statistical offices. This presentation describes a use case of demographic grids implemented in a context of inter-municipal cooperation in the region of Bremen. As the calculation of population forecasts was an objective, small-scale data on migration was required. Similar to the approach in the census, demographic data and data on migration are recorded in the residents' registration offices (EMA) of the cooperating municipalities. However, since legal frameworks other than those of the census apply, the process cannot be adopted as is. In the EMAs, individual-related micro-data are available, serving as a base file. Under the respective legal framework, the data is anonymised, geocoded and converted into an aggregated tabular form on site. Aggregated data may still contain individual cases worthy of protection. The higher the number of queried characteristics (region, gender, age, nationality, etc.) and their differentiation (100 m grid or 1 km grid, age years or age groups, etc.), the higher the probability of encountering such cases. A number of procedures for statistical disclosure control are available, of which the SAFE procedure (Höhne 2015) (used in the 2011 census) is currently implemented in the project. As other methods or strategies are up for consideration, how can they be evaluated in a specific regional context? 
From the perspective of confidentiality, space is at first just one feature dimension among others, although there are approaches that explicitly take spatial interrelations into account (Young, Martin, and Skinner 2009). From a geographical point of view, however, high-resolution data, especially in sparsely populated areas, can generally be expected to show large before-and-after deviations as a result of confidentiality procedures. Depending on the subject matter, these spatial errors vary in relevance and can thus ultimately determine the selection of the confidentiality strategy. To support an explicitly spatial comparison of the effects of different classification, aggregation, and confidentiality strategies, a set of indicators, together with an interactive visualization for the project area under consideration, is presented for discussion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
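The before-and-after deviation indicators discussed in the abstract above could be sketched along the following lines. This is a minimal illustration on synthetic 100 m grid counts; the `spatial_error_indicators` helper, the Poisson grid, and the ±2 perturbation standing in for a disclosure-control procedure such as SAFE are all illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

# Hypothetical 100 m population grids (counts per cell) before and after a
# disclosure-control procedure; values and perturbation are illustrative only.
rng = np.random.default_rng(0)
original = rng.poisson(lam=3.0, size=(50, 50)).astype(float)
protected = np.maximum(original + rng.integers(-2, 3, size=original.shape), 0)

def spatial_error_indicators(before, after):
    """Simple before-and-after deviation indicators for a gridded dataset."""
    diff = after - before
    return {
        # mean absolute deviation per grid cell
        "mad_per_cell": float(np.abs(diff).mean()),
        # share of the total population displaced by the procedure
        "relative_total_error": float(np.abs(diff).sum() / before.sum()),
        # deviations tend to matter most in sparsely populated cells
        "mad_sparse_cells": float(np.abs(diff[before < 5]).mean()),
    }

print(spatial_error_indicators(original, protected))
```

Comparing such indicators across candidate strategies (and grid resolutions) is one way to make the spatial error of a confidentiality procedure tangible for a specific regional context.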
36. Development of ground-motion prediction equations from a weighted mixture model with data-driven weights.
- Author
-
Specht, Sebastian von and Cotton, Fabrice
- Subjects
- *
FOCAL planes, *EQUATIONS, *SUBDUCTION, *MIXTURES, *SURFACE fault ruptures - Abstract
The steady increase of ground-motion data opens new possibilities but also brings new challenges in ground-motion modelling and the development of ground-motion prediction equations. One challenge is data selection, since not all data can be processed equally. We introduce an extension of the widely used mixed-effects model, i.e. a model of fixed and random effects. The extension incorporates data selection by assigning weights per event; model parameters are inferred by weighting the record data accordingly. This advancement allows for data classification beyond the more classical, expert-driven, binary classification based, e.g., on the depth of the event, the distance to the trench, the style of faulting, and the dip angle of the fault plane. We apply ACE, (A)ngular (C)lassification with (E)xpectation-maximization, a method to efficiently identify clusters of nodal planes from focal mechanisms, to differentiate between interface- and intraslab-type events. The classification is continuous, i.e. no event belongs completely to one class; the classification uncertainty is then evaluated and taken into account in the ground-motion modelling. As an example, we developed a ground-motion prediction equation from a database of approximately 2400 records from 319 events in the Chilean subduction zone. Our ground-motion model with the data-driven and reproducible classification is comparable to the model based on expert classification. Furthermore, the models show temporal variations of the between-event residuals before and after large earthquakes in the region. [ABSTRACT FROM AUTHOR]
- Published
- 2019
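The idea of a continuous (soft) classification feeding event weights into a regression, as described in the abstract above, can be illustrated with a minimal sketch. The two-component Gaussian mixture on a single depth feature and the toy distance regression below are illustrative assumptions, not the authors' ACE method or their actual ground-motion model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical event depths (km): a shallow "interface" and a deep "intraslab"
# population; all numbers here are illustrative only.
depths = np.concatenate([rng.normal(25, 8, 150), rng.normal(90, 15, 169)])

# 1-D two-component Gaussian mixture fitted with EM -> continuous (soft)
# class membership per event, mirroring a non-binary classification.
mu, sd, pi = np.array([20.0, 80.0]), np.array([10.0, 10.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities (posterior class probabilities per event)
    dens = pi * np.exp(-0.5 * ((depths[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters from the soft assignments
    n_k = resp.sum(axis=0)
    mu = (resp * depths[:, None]).sum(axis=0) / n_k
    sd = np.sqrt((resp * (depths[:, None] - mu) ** 2).sum(axis=0) / n_k)
    pi = n_k / len(depths)

w = resp[:, 0]  # each event's weight for the "interface" model

# Toy weighted least squares: ln(y) = a + b*ln(R), each record weighted by
# its event's continuous class membership.
dist = rng.uniform(30, 300, size=len(depths))
ln_y = 2.0 - 1.2 * np.log(dist) + rng.normal(0, 0.3, size=len(depths))
X = np.column_stack([np.ones_like(dist), np.log(dist)])
coef = np.linalg.lstsq(X * np.sqrt(w)[:, None], ln_y * np.sqrt(w), rcond=None)[0]
print(coef)  # coefficients should recover roughly the simulated [2.0, -1.2]
```

Because the memberships are probabilities rather than hard labels, classification uncertainty propagates into the fit: events near the class boundary simply contribute less to either model.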
37. Assimilating Stress and Strain in an Energy-Based PSHA Workflow.
- Author
-
Ziebarth, Malte J., Heidbach, Oliver, Cotton, Fabrice, Anderson, John G., Weatherill, Graeme, and Specht, Sebastian von
- Published
- 2019
38. Improving Strain-Rate based Forecasts.
- Author
-
Bayona, José Antonio, Specht, Sebastian, Cotton, Fabrice, Hainzl, Sebastian, and Schorlemmer, Danijel
- Subjects
- *
FORECASTING - Published
- 2018
39. Best Practices for Microbial Challenge In-Use Studies to Evaluate the Microbial Growth Potential of Parenteral Biological Products; Industry and Regulatory Considerations.
- Author
-
Zamiri C, Leiske DL, Hughes P, Kirwan JP, Der E, Cox E, Warburton R, Goss M, Weiser S, Perez-Brown J, Gopalrathnam G, Liu J, Mehta SB, Shereefa S, Specht S, Aedo SJ, Goldbach P, Jia F, Kuehnle B, Page S, Voeten L, Yi L, and Zhu C
- Subjects
- Humans, United States, Drug Industry standards, Drug Industry methods, Drug Industry legislation & jurisprudence, Drug Storage standards, Patient Safety, Biological Products standards, Biological Products administration & dosage, Drug Contamination prevention & control, United States Food and Drug Administration standards
- Abstract
Microbial challenge in-use studies are performed to evaluate the potential for microbial proliferation in preservative-free single-dose biological products after first puncture and potential accidental contamination during dose preparation (e.g., reconstitution or dilution) and storage. These studies, in addition to physicochemical in-use stability assessments, are used as part of product registration to define in-use hold times in Prescribing Information and, in the case of clinical products, in the pharmacy manual. There are no formal guidance documents describing regulator expectations on how to conduct microbial challenge in-use studies and interpret microbial data to assign in-use storage hold times. In lieu of guidance, US Food and Drug Administration (FDA) regulators have authored publications and presentations describing regulator expectations. Insufficient or unavailable microbial challenge data can result in shortened in-use hold times; adequate microbial challenge data thus enable flexibility for health care providers (HCPs) and patients while ensuring patient safety. A cross-industry/FDA in-use microbial working group was formed through the Innovation & Quality (IQ) Consortium to gain alignment between industry practice and regulator expectations. The working group assessed regulatory guidance, current industry practice via a blinded survey of IQ Consortium member companies, and scientific rationale to align on recommendations for the experimental design and execution of microbial challenge in-use studies and on a decision tree for microbial data interpretation to assign in-use hold times. Beyond study execution and data interpretation, additional considerations are discussed, including the use of platform data for clinical-stage products, closed system transfer devices (CSTDs), transport of dose solutions, long infusion times, and the use of USP <797> by HCPs when preparing sterile drugs for administration. 
The recommendations provided in this article will help streamline biological product development, ensure consistency on assignment of in-use hold times in biological product labels across industry, and provide maximum allowable flexibility to HCPs and patients while ensuring patient safety., (© PDA, Inc. 2024.)
- Published
- 2024
- Full Text
- View/download PDF
40. Tumor infiltrating T lymphocytes in colorectal cancer: Tumor-selective activation and cytotoxic activity in situ.
- Author
-
Koch M, Beckhove P, Op den Winkel J, Autenrieth D, Wagner P, Nummer D, Specht S, Antolovic D, Galindo L, Schmitz-Winnenthal FH, Schirrmacher V, Büchler MW, and Weitz J
- Subjects
- Adenocarcinoma metabolism, Adenocarcinoma pathology, Aged, Antigens, CD metabolism, Case-Control Studies, Colorectal Neoplasms metabolism, Colorectal Neoplasms pathology, Female, Humans, Male, Neoplasm Staging, Adenocarcinoma immunology, Colorectal Neoplasms immunology, Lymphocyte Activation physiology, Lymphocytes, Tumor-Infiltrating physiology
- Abstract
Objective: To examine whether tumor-selective infiltration, activation, and cytotoxic activity of tumor infiltrating T lymphocytes (TIL) can be demonstrated in situ in colorectal cancer samples., Summary Background Data: Recent studies indicated a correlation between the presence of TIL and an improved prognosis in colorectal cancer. However, tumor-selective activation and cytotoxic activity of CD8 TIL in situ in colorectal cancer patients have not yet been examined., Methods: Tumor samples from 49 patients and corresponding normal mucosa samples from 23 patients with colorectal cancer (UICC stages II-IV) were examined for TIL. Two-color fluorescence immunohistochemistry and multicolor flow cytometric (FACS) analysis were used for quantification of CD8 T cells and measurement of their activation status (CD69 expression) and cytotoxic activity (CD107a expression) in situ. Presence of tumor antigen-reactive T cells in tumor, blood, and bone marrow was evaluated by IFN-gamma Elispot analysis., Results: While absolute numbers of CD8 T cells were similar, CD4 T helper cells were significantly increased in tumor tissue compared with normal mucosa. There was a significantly higher proportion of activated and cytotoxically active CD8 TIL in colorectal cancer compared with normal mucosa. Increased activation, cytotoxic activity, and functional reactivity of TIL were correlated with the presence of functional tumor antigen-reactive T cells in the blood and bone marrow. The proportion of activated TIL decreased significantly with higher tumor stage., Conclusions: Tumor-selective activation and cytotoxic activity of CD8 TIL and tumor-selective migration of CD4 T helper cells were demonstrated in colorectal cancer for the first time. Our data support the immunogenicity of colorectal cancer and suggest clinical significance of tumor-specific immune responses.
- Published
- 2006
- Full Text
- View/download PDF