3 results for "Dirk von Grünigen"
Search Results
2. A methodology for creating question answering corpora using inverse data annotation
- Authors
- Mark Cieliebak, Nicolas Kaiser, Álvaro Rodrigo, Dirk von Grünigen, Eneko Agirre, Kurt Stockinger, Philippe Schläpfer, Jan Milan Deriu, and Katsiaryna Mlynchyk
- Subjects
Computer and information sciences, Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Computation and Language (cs.CL), Natural language processing, Deep learning, Question answering, Semantic parsing, Parsing, Annotation, Context-free grammar, Natural language interfaces to databases, Special computer methods (DDC 006), Language and linguistics (DDC 400)
- Abstract
In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data. For this, we introduce an intermediate representation based on the logical query plan of a database, which we call Operation Trees (OTs). This representation allows us to invert the annotation process without losing flexibility in the types of queries that we generate. Furthermore, it allows for fine-grained alignment of query tokens to OT operations. In our method, we randomly generate OTs from a context-free grammar (an illustrative sketch of this sampling step follows this record). Afterwards, annotators write the appropriate natural language question that is represented by the OT. Finally, the annotators assign the question tokens to the OT operations. We apply the method to create a new corpus, OTTA (Operation Trees and Token Assignment), a large semantic parsing corpus for evaluating natural language interfaces to databases. We compare OTTA to Spider and LC-QuaD 2.0 and show that our methodology more than triples the annotation speed while maintaining the complexity of the queries. Finally, we train a state-of-the-art semantic parsing model on our data and show that our corpus is a challenging dataset and that the token alignment can be leveraged to significantly increase performance.
- Published
- 2020
3. Best practices in e-assessments with a special focus on cheating prevention
- Authors
- Dirk von Grünigen, Amani Magid, Beatrice Pradarelli, Fernando Benites de Azevedo e Souza, and Mark Cieliebak (Zürich University of Applied Sciences (ZHAW); Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), CNRS / Université de Montpellier; New York University Abu Dhabi)
- Subjects
E-assessments, Cheating, Cheating prevention, Best practices, BYOD, Education, Schools and school activities (DDC 371), Computers and education, Grading (education), Multiple choice, Short answer, Social media, The Internet, Tools, Psychology, Public relations
- Abstract
In this digital age of computers, the Internet, social media, and the Internet of Things, e-assessments have become an accepted method to determine whether students have learned the material presented in a course. With acceptance of this electronic means of assessing students, many questions arise about the method. What should be the format of the e-assessment? How much time should be allotted? What kinds of questions should be asked (multiple choice, short answer, etc.)? These are only a few of the many possible questions. In addition, educators have always had to contend with the possibility that some students might cheat on an examination. It is widely known that students are often more technologically savvy than their professors. So how does one prevent students from cheating on an e-assessment? Understandably, given the amount of information available on e-assessments and the variety of formats to choose from, choosing to administer e-assessments instead of paper-based assessments can lead to confusion on the part of the professor. This paper presents helpful guidance for lecturers who want to introduce e-assessments in their classes, and it provides recommendations about the technical infrastructure to implement in order to prevent students from cheating. It is based on a literature review, an international survey that gathers insights and experiences from lecturers who use e-assessments in their classes, and a technological evaluation of e-assessment infrastructure.
- Published
- 2018