13 results on "Stein, Sebastian"
Search Results
2. Flexible service provisioning in multi-agent systems
- Author
-
Stein, Sebastian
- Subjects
005.3
- Abstract
Service-oriented computing is an increasingly popular approach for providing applications, computational resources and business services over highly distributed and open systems (such as the Web, computational Grids and peer-to-peer systems). In this approach, service providers advertise their offerings by means of standardised computer-readable descriptions, which can then be used by software applications to discover and consume appropriate services without human intervention. However, despite active research in service infrastructures, and in service discovery and composition mechanisms, little work has recognised that services are offered by inherently autonomous and self-interested entities. This autonomy implies that providers may choose not to honour every service request, demand remuneration for their efforts, and, in general, exhibit uncertain behaviour. This uncertainty is especially problematic for the service consumers when services are part of complex workflows, as is common in many application domains, such as bioinformatics, large-scale data analysis and processing, and commercial supply-chain management. In order to address this uncertainty, we propose a novel algorithm for provisioning services for complex workflows (i.e., for selecting suitable services for the constituent tasks of a workflow). This algorithm uses probabilistic performance information about providers to reason about service uncertainty and its impact on the overall workflow. Furthermore, our approach actively mitigates this uncertainty by employing two key techniques. First, it proactively provisions redundant services for particularly critical or failure-prone tasks (thus increasing the probability of success). Second, it recovers dynamically from service failures by re-provisioning services at run-time (without necessarily receiving explicit failure messages). 
Unlike existing work in this area, our algorithm employs principled decision-theoretic techniques to determine which services to provision, whether to introduce redundant services and when to re-provision failed services. In doing so, it explicitly balances the cost of provisioning with the expected value of the workflow. To show how our algorithm applies to a range of common service-oriented systems, we consider a variety of different scenarios in this thesis. More specifically, we first examine environments where the consumer lacks specific knowledge to differentiate between distinct service providers, as is common in highly dynamic and open systems. Despite this lack of detailed knowledge, we demonstrate how the consumer can use redundancy and dynamic re-provisioning to influence the outcome of a workflow and to deal with uncertainty. Then, we look into systems where the consumer has more specific knowledge about highly heterogeneous providers. While existing work has concentrated on selecting the single best provider for each workflow task, we show that a consumer can often improve its performance by provisioning multiple providers with different qualities for a single task. Finally, we discuss how our algorithm can be adapted for systems where consumers and providers reach explicit service contracts in advance. In this context, we are the first to propose a gradual provisioning approach, whereby the consumer negotiates contracts for some tasks in advance, but leaves the negotiation of others to a later time. This approach allows the consumer to better react to uncertain service outcomes and to avoid paying reservation fees that are later lost when services fail. Throughout this thesis, we compare our approach empirically to current provisioning algorithms. 
In doing so, we demonstrate that our approach typically achieves a significantly higher utility for the service consumer than approaches that do not reason about uncertainty, approaches that rely on fixed levels of redundancy or service time-outs, and approaches that select single services to achieve the optimal balance of various performance characteristics. Furthermore, we show that these results hold over a large range of environments and workflow types, and that our algorithm copes well even in highly uncertain environments where most services fail. As our approach relies on fast heuristics to solve a problem that is known to be intractable, it scales well to larger workflows with hundreds of tasks and thousands of providers. Finally, where it is tractable to compute an optimal solution, we show empirically that our algorithm achieves at least 87% of the optimal utility.
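The cost-versus-value trade-off at the heart of this abstract can be illustrated with a minimal sketch, assuming a single task, independent provider failures, and hypothetical numbers; this is not the thesis's algorithm, only the basic decision-theoretic idea of choosing a redundancy level:

```python
# Sketch: choosing how many redundant providers to provision for one task.
# The task succeeds if at least one provisioned service succeeds; each
# extra provider adds cost but raises the probability of success.

def expected_utility(n_providers, p_success, cost_per_provider, workflow_value):
    """Expected workflow value minus total provisioning cost."""
    p_task_ok = 1.0 - (1.0 - p_success) ** n_providers
    return p_task_ok * workflow_value - n_providers * cost_per_provider

# Pick the redundancy level that maximises expected utility.
best_n = max(range(1, 10), key=lambda n: expected_utility(n, 0.6, 5.0, 100.0))
```

With these illustrative parameters, some redundancy pays for itself, but beyond a point each additional provider costs more than the marginal gain in success probability.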
- Published
- 2008
3. Electric Vehicle Charging on Long Journeys: Current Challenges and Future Opportunities
- Author
-
Shafipour Yourdshahi, Elnaz and Stein, Sebastian
- Published
- 2022
4. The future of connected and automated mobility in the UK: call for evidence
- Author
-
Ramchurn, Sarvapali, Mousavi, Mohammad Reza, Toliyat, Seyed Mohammad Hossein, Kleinman, Mark, Lisinska, Justyna, Sempreboni, Diego, Stein, Sebastian, Gerding, Enrico, Gomer, Richard, D'Amore, Francesco, and Dbouk, Wassim
- Abstract
This report is a response to the call for evidence from the Department for Business, Energy & Industrial Strategy and the Centre for Connected and Autonomous Vehicles on the future of connected and automated mobility in the UK.
Executive Summary: Despite relative weaknesses in global collaboration and co-creation platforms, smart road and communication infrastructure, urban planning, and public awareness, the United Kingdom (UK) has a substantial strength in the area of Connected and Automated Mobility (CAM) through its investment in research and innovation platforms for developing the underlying technologies, creating impact, and co-creation leading to innovative solutions. Many UK legal and policymaking initiatives in this domain are world leading. To sustain the UK's leading position, we make the following recommendations:
- The development of financial and policy-related incentive schemes for research and innovation in the foundations and applications of autonomous systems, as well as schemes for proofs of concept and commercialisation.
- Supporting policy and standardisation initiatives, as well as engagement and community-building activities, to increase public awareness and trust.
- Giving greater attention to integrating CAM/Connected Autonomous Shared Electric vehicle (CASE) policy with related government priorities for mobility, including supporting active transport and public transport, and improving air quality.
- Further investment in updating liability and risk models and devising innovative liability schemes covering the Autonomous Vehicles (AVs) ecosystem.
- Investing in training and retraining of the workforce in the automotive, mobility, and transport sectors, particularly in skills concerning Artificial Intelligence (AI), software and computer systems, in order to ensure employability and an adequate response to the drastically changing industrial landscape.
- Published
- 2021
5. Coordinating measurements for environmental monitoring in uncertain participatory sensing settings
- Author
-
Zenonos, Alexandros, Stein, Sebastian, and Jennings, Nicholas R.
- Abstract
Environmental monitoring allows authorities to understand the impact of potentially harmful phenomena such as air pollution, excessive noise and radiation. Recently, there has been considerable interest in participatory sensing as a paradigm for such large-scale data collection, because it is cost-effective and able to capture more fine-grained data than traditional approaches that use stationary sensors scattered in cities. In this approach, ordinary citizens (non-expert contributors) collect environmental data using low-cost mobile devices. However, these participants are generally self-interested actors that have their own goals and make local decisions about when and where to take measurements. This can lead to highly inefficient outcomes, where observations are either taken redundantly or do not provide sufficient information about key areas of interest. To address these challenges, it is necessary to guide and coordinate participants so that they take measurements when and where it is most informative. To this end, we develop a computationally efficient coordination algorithm (adaptive Best-Match) that suggests to users when and where to take measurements. Our algorithm exploits probabilistic knowledge of human mobility patterns, but explicitly considers the uncertainty of these patterns and the potential unwillingness of people to take measurements when requested to do so. In particular, our algorithm uses a local search technique, clustering and random simulations to map participants to measurements that need to be taken in space and time. We empirically evaluate our algorithm on a real-world human mobility and air quality dataset and show that it outperforms the current state of the art by up to 24% in terms of utility gained.
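The coordination idea of weighting each assignment by the chance a participant actually complies can be sketched minimally as follows. This is an illustrative greedy matching with made-up inputs, far simpler than the adaptive Best-Match algorithm the abstract describes:

```python
# Sketch: greedily matching participants to measurement requests, with each
# candidate assignment weighted by the probability that the participant
# actually takes the requested measurement.

def greedy_match(values, comply_prob):
    """values[p][m]: information value if participant p measures location m.
    comply_prob[p]: probability that participant p complies with a request.
    Returns {participant: location}, built greedily by expected value."""
    pairs = [(values[p][m] * comply_prob[p], p, m)
             for p in range(len(values)) for m in range(len(values[0]))]
    assignment, used_p, used_m = {}, set(), set()
    for score, p, m in sorted(pairs, reverse=True):
        if p not in used_p and m not in used_m:
            assignment[p] = m
            used_p.add(p)
            used_m.add(m)
    return assignment
```

Note how a reliable participant can be preferred for a less valuable location over an unreliable one at a prime location, which is the intuition behind reasoning about unwillingness explicitly.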
- Published
- 2017
6. Competitive influence maximisation in social networks
- Author
-
Chakraborty, Sukankana, Stein, Sebastian, and Brede, Markus
- Abstract
Network-based interventions have shown immense potential in prompting behaviour changes in populations. Their implementation in the real world, however, is often difficult and prone to failure, as they are typically delivered on limited budgets and in many instances can be met with resistance in populations. Therefore, utilising available and limited resources optimally, through careful and efficient planning, is key to the successful implementation of any intervention. An important development in this respect is the influence maximisation framework, which lies at the interface of network science and computer science and is commonly used to study network-based interventions in a theoretical setup, with the aim of determining best practices that can optimise intervention outcomes in the real world. In this thesis, we explore the influence maximisation problem in a competitive setting (inspired by real-world conditions) where two contenders compete to maximise the spread of their intervention (or influence) in a social network. In its traditional form, the influence maximisation process identifies the k most influential nodes in a network, where k is given by a fixed budget. In this thesis, we propose an influence maximisation model with continuous distribution of influence, where individuals are targeted heterogeneously based on their role in the influence spread process. This approach allows policymakers to obtain a detailed plan of the optimal distribution of budgets, which is otherwise abstracted away in traditional methods. In the rest of the thesis we use this approach to study multiple real-world settings. We first propose the competitive influence maximisation model with continuous allocation of resources.
We then determine optimal intervention strategies against known competitor allocations in a network and show that continuous distribution of resources consistently outperforms traditional approaches where influence is concentrated on a few nodes in the network (i.e., the k optimal nodes). We further extend the model to a game-theoretic framework, which helps us examine settings with no prior information about competitor strategies. We find that the equilibrium solution in this setting is to target the network uniformly, implying that all nodes, irrespective of their topological positions, contribute equally to the influence maximisation process. We extend this model further in two directions. First, we introduce the notion of adoption barriers to the competitive influence maximisation model, where an additional cost is paid every time an individual is approached for intervention. We find that this cost-of-access parameter ties our model to traditional methods, where only k individuals are discretely targeted. We further generalise the model to study other real-world settings where the strength of influence changes nonlinearly with allocations. Here we identify two distinct regimes: one where optimal strategies offer significant gains, and another where they do not yield any gains. The two regimes also vary in their sensitivity to budget availability, and we find that in some cases even a tenfold increase in the budget only marginally improves the outcome of the intervention. Second, we extend the continuous allocation model to analyse network-based interventions in the presence of negative ties. Individuals sharing a negative tie typically influence each other to adopt opposing views, and hence such ties can be detrimental to the influence spread process if not considered in the dynamics.
We show that in general it is important to consider negative ties when planning an intervention, and at the same time we identify settings where the knowledge of negative ties yields no gains, or leads to less favourable outcomes.
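The contrast between continuous allocation and discrete top-k targeting can be illustrated with a toy response model. This sketch assumes a hypothetical diminishing-returns adoption probability and is not the thesis's model; it only shows why, under concave responses, spreading a budget can beat concentrating it:

```python
# Sketch: continuous budget allocation vs. discrete top-k style targeting.
# Each node that receives budget b adopts with probability b / (b + c),
# a hypothetical diminishing-returns response (c is a resistance constant).

def expected_adopters(budgets, c=1.0):
    """Expected number of adopters under independent node responses."""
    return sum(b / (b + c) for b in budgets)

n, total = 10, 10.0
uniform = [total / n] * n                  # spread across all nodes
concentrated = [total / 2] * 2 + [0] * 8   # budget on two "best" nodes only

# With a concave response, the spread allocation reaches more adopters.
assert expected_adopters(uniform) > expected_adopters(concentrated)
```

The uniform allocation here yields 5.0 expected adopters versus roughly 1.67 for the concentrated one, echoing the abstract's finding that uniform targeting can emerge as the equilibrium strategy.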
- Published
- 2023
7. Resource allocation methods for fog computing systems
- Author
-
Bi, Fan and Stein, Sebastian
- Abstract
Fog computing is gaining popularity as a suitable computing paradigm for the Internet of Things (IoT). It is a virtualised platform that sits between IoT devices and centralised cloud computing. Fog computing has several characteristics, including proximity to IoT devices, low latency, geo-distribution, a large number of fog nodes, and real-time interaction. A key challenge in fog computing is resource allocation, because existing resource allocation methods for cloud computing cannot be applied directly. Hence, many resource allocation methods for fog computing have been proposed since the birth of the paradigm. However, most of these methods are centralised and not truthful, which means that users are not always incentivised to report the true information about their tasks, and the methods' efficiency could decrease significantly if some users behave strategically. Hence, an efficient resource allocation mechanism for this computing paradigm that can be used in a strategic environment is needed. Furthermore, a decentralised resource allocation algorithm is needed when there is no central control in the fog computing system. To this end, we consider three challenges: (1) near-optimal resource allocation in a fog system; (2) incentivising self-interested IoT users to truthfully report their tasks; and (3) decentralised resource allocation in a fog system. In this thesis, we examine the relevant literature and describe its achievements and shortcomings. Many resource allocation mechanisms using various techniques have been proposed for cloud computing and fog computing; however, there is little work that studies truthful resource allocation mechanisms for fog computing. Reinforcement learning is also widely used in resource allocation for fog computing, but most of these studies focus on single-agent reinforcement learning and centralised resource allocation.
In summary, they address only a subset of the challenges in our fog computing resource allocation problem, and their application scenarios are highly limited. Therefore, we introduce our resource allocation models, i.e., Resource Allocation in Fog Computing (RAFC) and Distributed Resource Allocation in Fog Computing (DRAFC), in detail and choose benchmark mechanisms against which to evaluate our proposed resource allocation mechanisms. Then, we develop and test an efficient and truthful mechanism called Flexible Online Greedy (FlexOG) using simulations. The simulations demonstrate that our mechanism can reach a level of social welfare up to 10% higher than the truthful benchmark mechanisms and that it often achieves about 90% of the theoretical upper bound. To make FlexOG more scalable, we propose a modification called Semi-FlexOG, which is shown to use less processing time. Furthermore, to allocate resources in a decentralised fog system, we propose Decentralised Auction with PPO (DAPPO), which uses online reverse auctions and decentralised reinforcement learning for allocating tasks to resources in the fog. By enabling competition between resource providers, these auctions ensure that the most suitable provider is chosen for a given task, but without the computational and communication overheads of a centralised solution. In order to derive effective bidding strategies for nodes, we use a Proximal Policy Optimisation (PPO) reinforcement learning algorithm that takes into account the status of a node and task characteristics and that aims to maximise the node's long-term revenue. Hence, DAPPO deals naturally with highly dynamic systems, where the pattern of tasks could change dramatically. The results of our simulations show that DAPPO achieves good performance in terms of social welfare: its performance is close to the upper bound (around 90%) and exceeds that of the benchmarks by between 0% and 30%. Finally, we conclude and outline possible future work.
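The flavour of an online greedy allocator can be conveyed with a minimal sketch. This is a deliberately simplified illustration with a made-up selection rule, not the FlexOG mechanism itself (which also handles truthfulness and flexible scheduling):

```python
# Sketch: a simple online greedy task allocator. Tasks arrive one by one;
# each is placed on a feasible fog node, or rejected if none has enough
# free capacity. The node-selection rule here (most free capacity) is an
# illustrative design choice, not the rule used by FlexOG.

def online_greedy(capacities, tasks):
    """capacities: free resource units per fog node.
    tasks: list of (demand, value) in arrival order.
    Returns total social welfare (sum of accepted task values)."""
    free = list(capacities)
    welfare = 0.0
    for demand, value in tasks:
        candidates = [i for i, f in enumerate(free) if f >= demand]
        if candidates:
            i = max(candidates, key=lambda j: free[j])  # most free capacity
            free[i] -= demand
            welfare += value
    return welfare
```

For example, with nodes of capacity [3, 2] and tasks [(2, 5), (2, 4), (2, 3)], the first two tasks are accepted and the third is rejected, giving welfare 9; an offline optimum could of course differ, which is why online mechanisms are compared to a theoretical upper bound.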
- Published
- 2022
8. Incentive engineering in microtask crowdsourcing
- Author
-
Truong, Nhat and Stein, Sebastian
- Abstract
Crowdsourcing is emerging as an efficient approach to solving a wide variety of problems by engaging a large number of Internet users from many places in the world. However, the success of these systems relies critically on motivating the crowd to contribute, especially in microtask crowdsourcing contexts, where the tasks are repetitive and people easily get bored. Given this, finding ways to efficiently incentivise participants in crowdsourcing projects in general, and microtask crowdsourcing projects in particular, is a major open challenge. Also, although there are numerous ways to incentivise participants in microtask crowdsourcing projects, the effectiveness of the incentives is likely to differ between projects, based on their specific characteristics. Therefore, in a particular crowdsourcing project, a practical way to address the incentive problem is to choose a certain number of candidate incentives and then apply a good strategy for selecting the most effective incentive at run time, so as to maximise the cumulative utility of the requesters within a given budget and time limit. We refer to this as the incentive selection problem (ISP). We present algorithms (HAIS and BOIS) to deal with the ISP by considering all characteristics of the problem. Specifically, the algorithms make use of limited financial and time budgets to achieve a good exploration-exploitation balance. Also, they consider the group-based nature of the incentives (i.e., sampling two incentives with different group sizes yields different numbers of samples) so as to make a good decision on how many times each incentive should be sampled at each step. By conducting extensive simulations, we show that our algorithms outperform state-of-the-art approaches in most cases. We also discuss the practical usage of the two algorithms based on the simulation results.
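The exploration-exploitation aspect of the incentive selection problem can be framed as a bandit-style choice. The sketch below is a generic epsilon-greedy baseline under that framing, not the HAIS or BOIS algorithms, and it ignores the group-based sampling structure the abstract highlights:

```python
import random

# Sketch: choosing among candidate incentives with an epsilon-greedy rule.
# mean_utility[i] is the observed average requester utility of incentive i;
# counts[i] is how often incentive i has been tried so far.

def select_incentive(mean_utility, counts, epsilon=0.1):
    """Return the index of the incentive to deploy next."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(mean_utility))   # explore
    return max(range(len(mean_utility)), key=lambda i: mean_utility[i])  # exploit
```

In a real deployment the estimates would be updated after each batch of microtasks, and the remaining budget would shrink with every trial, which is precisely what makes the budgeted ISP harder than a standard bandit.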
- Published
- 2021
9. Low-cost, open source acoustic sensors for conservation
- Author
-
Hill, Andrew and Stein, Sebastian
- Abstract
Biodiversity data-gaps must be better understood to inform conservation policy. Scalable technology, such as camera traps and satellite imaging, has been shown to increase coverage. This research explores the field of acoustic monitoring, which, although long-established, has struggled to scale effectively due to the cost, usability and power inefficiency of existing equipment. It aims to investigate whether creating an advanced, power-efficient and low-cost acoustic hardware solution can expand coverage. User-centred design principles and aspects of the collaborative economy are adopted in order to design a fit-for-purpose solution to scalability. The hardware design takes inspiration from the utilitarian construction of single-board computers and exploits the recent availability of advanced smartphone and Internet of Things technology. The resulting open source device, AudioMoth, is described, whose baseline levels of performance improve on existing tools, with better power efficiency, miniature overall dimensions, reduced material cost, and the ability to capture audible and ultrasonic sound simultaneously. Open source hardware, however, imposes barriers to entry for non-technical users such as conservation practitioners. To improve access, it is necessary to remove the do-it-yourself nature of construction while remaining low-cost and flexible to adapt. Barriers can be overcome using new collaborative methods of consumption, where crowds can accumulate funds to bulk-manufacture and automate hardware assembly with an economy of scale. A collaborative management framework is proposed, in which guidelines enable users to acquire fully assembled open source hardware through crowdfunding opportunities. The framework is applied to AudioMoth, permitting individual devices to be acquired ready-to-use for $49.99, with approximately 8,000 delivered to date.
This general system has provided conservation practitioners with access to an adaptable hardware solution, thus expanding the coverage of monitored biodiversity. Conservation policy should consider user-centred design in all new technical innovations and further explore the work outlined in this thesis, thereby allowing those communities outside of the pockets of wealth and high opportunity to monitor biodiversity at low cost.
- Published
- 2020
10. Informing user understanding of smart systems through feedback
- Author
-
Kittley-Davies, Jacob, Costanza, Enrico, Stein, Sebastian, and Rogers, Alex
- Abstract
Recent advances in microprocessing and low-power radio technologies have catalyzed the transition of smart technologies from the domain of researchers and enthusiasts to everyday consumers. This new wave of smart devices, and the systems they form, marks a significant step towards Weiser's vision of ubiquitous computing and offers users a wealth of new and exciting opportunities. However, smart technologies are inherently complex and, without careful design, can prove complicated and confusing for users with no specific knowledge of the underpinning technologies. A poor understanding has the potential to inhibit user experience and may result in the abandonment of technologies which otherwise could bring real benefits to users. While a considerable body of work exists examining how confusion arising from complexity can be addressed, this work largely focuses on traditional heuristic systems. The non-deterministic nature of some smart technologies, and the capacity for the sophisticated interconnected processes they employ to mask the relationship between system inputs and outcomes, exacerbate the challenges examined in prior work. There is therefore a need to investigate how these challenges can be overcome for users of smart systems in particular. This thesis reports a series of five user studies, conducted under both controlled conditions and in the field. In particular, we examine how feedback can be used to inform user understanding of sensor-based smart systems. Through qualitative and quantitative analysis we observe and evaluate over 145 participants interacting with sensor-based smart systems. From our findings we identify a number of design implications and highlight the pitfalls of poor and uninformed design.
- Published
- 2020
11. Using software-based acoustic detection and supporting tools to enable large-scale environmental monitoring
- Author
-
Prince, Peter and Stein, Sebastian
- Subjects
620.2
- Abstract
Acoustic monitoring tools are often constrained to small-scale, short-term studies due to high energy consumption, limited storage, and high equipment costs. To broaden the scope of monitoring projects, the affordability, energy efficiency, and space efficiency of such tools must be improved. This thesis describes efforts to empower researchers charged with monitoring ecosystems, faced with the challenges of limited budgets and cryptic targeted events. To this end AudioMoth was developed: a low-cost, open-source acoustic monitoring device which has been widely adopted by the conservation community, with over 6,600 devices sold as of August 2019. This thesis covers the development and deployment of three acoustic detection algorithms that reduce the power and storage requirements of acoustic monitoring. The algorithms aim to detect bat echolocation, to search for evidence of an endangered cicada species, and to collect evidence of poaching in a protected nature reserve. Each algorithm addresses a detection task of increasing complexity: analysing samples multiple times to prevent missed events, implementing extra analytical steps to account for environmental conditions such as wind, and incorporating a hidden Markov model for sample classification in both the time and frequency domains. For each algorithm, this thesis reports on its detection accuracy as well as real-world deployments carried out with partner organisations. The deployments demonstrate how acoustic detection algorithms extend the use of low-cost, open-source hardware and facilitate a new avenue for conservation researchers to perform large-scale monitoring. The research also covers an analysis of the accessibility of acoustic monitoring technology, focusing on AudioMoth and its supporting software. This is done using a 75-respondent questionnaire and a thematic analysis of a series of interviews.
Both analyses identified a number of potential methods for improving acoustic monitoring technology in terms of various forms of accessibility (financial, usability, etc.). The community responses, along with the popularity of AudioMoth and the success of the deployed detection algorithms, demonstrate the benefits of providing accessible acoustic monitoring solutions to conservationists.
- Published
- 2019
12. Crowd robotics : real-time crowdsourcing for crowd controlled robotic agents
- Author
-
Salisbury, Elliot, Ramchurn, Sarvapali, and Stein, Sebastian
- Subjects
629.8
- Abstract
Major man-made and natural disasters have a significant and long-lasting economic and social impact on countries around the world. The response effort in the first few hours of the aftermath of a disaster is crucial to saving lives and minimising damage to infrastructure. In these conditions, emergency response organisations on the ground face a major challenge in trying to understand what is happening and where the casualties are. Crowdsourcing is often used in disasters to analyse the masses of data generated and to report areas of importance to the first responders, but the results are too slow to inform immediate decision making. This thesis describes techniques for utilising real-time crowdsourcing to analyse disaster data in real time. We utilise this real-time analysis to influence or control robotic search agents, unmanned aerial vehicles, which are increasingly being used in disaster scenarios. We investigate methods for reliably and promptly aggregating real-time crowd input for two different crowd robotics applications. First, direct control, used for directing a robotic search and rescue agent around a complicated and dynamic environment. Second, real-time locational sensing, used for rapidly mapping disasters and augmenting a pilot's video feed so that they can make more informed decisions on the fly; it could also be used to inform a higher-level artificial intelligence process that directs a robotic agent. We describe two systems, CrowdDrone and CrowdAR, that use state-of-the-art methods for human-intelligent control and sensing for crowd robotics.
- Published
- 2018
13. A comfort-based, energy-aware HVAC agent and its applications in the smart grid
- Author
-
Auffenberg, Frederik, Stein, Sebastian, and Rogers, Alexander
- Subjects
621.31
- Abstract
In this thesis, we introduce a novel heating, ventilation and air conditioning (HVAC) agent that maintains a comfortable thermal environment for its users while minimising the energy consumption of the HVAC system and incorporating demand-side management (DSM) signals to shift HVAC loads towards more desirable overall load profiles. To do so, the agent needs to be able to accurately predict user comfort, for example by using a thermal comfort model. Existing thermal comfort models are usually built using broad population statistics, meaning that they fail to represent individual users' preferences, resulting in poor estimates of the users' preferred temperatures. To address this issue, we propose the Bayesian comfort model (BCM). This personalised thermal comfort model, based on a Bayesian network, learns from a user's feedback, allowing it to adapt to the user's individual preferences over time. We further propose an alternative to the ASHRAE 7-point scale used to assess user comfort. Using this model, we create an optimal HVAC control algorithm that minimises energy consumption while preserving user comfort. We extend this algorithm to incorporate DSM signals into its scheduling, allowing it to shift HVAC loads towards more desirable load profiles, reduce peaks, or make better use of energy produced from renewable sources. Through an empirical evaluation based on the ASHRAE RP-884 data set and data collected in a separate deployment by us, we show that our comfort model is consistently 13.2% to 25.8% more accurate than current models and that the alternative comfort scale can increase our model's accuracy. Through simulations, we show that when using the comfort model instead of a fixed set point, our HVAC control algorithm can reduce the energy consumption of the HVAC system by 11% while decreasing user discomfort by 17.5%, achieve a load profile 39.9% closer to a specified target profile, and efficiently reduce peaks in the load profile.
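The idea of learning an individual's comfort preference from feedback can be sketched with a conjugate Gaussian update. This is an illustrative simplification with made-up numbers; the thesis's BCM is a richer Bayesian network over a comfort scale, not this single-variable model:

```python
# Sketch: refining a belief about a user's preferred temperature from
# feedback, starting from a broad population prior (here, 22 degrees C).

def update_preference(prior_mean, prior_var, observation, obs_var):
    """One conjugate Gaussian update of the preferred-temperature belief."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + observation / obs_var)
    return post_mean, post_var

mean, var = 22.0, 4.0  # population prior: 22 degrees C, high uncertainty
for temp_felt_comfortable in [20.5, 20.0, 20.5]:
    mean, var = update_preference(mean, var, temp_felt_comfortable, 1.0)
# The belief shifts towards the user's cooler preference and sharpens.
```

Each piece of feedback pulls the estimate away from the population default and reduces its variance, which is the mechanism that lets a personalised model outperform models built from broad population statistics.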
- Published
- 2017