382 results for "Code Generation"
Search Results
2. A Mesh-based Simulation Framework using Automatic Code Generation.
- Author
-
Herholz, Philipp, Stuyck, Tuur, and Kavan, Ladislav
- Subjects
AUTOMATIC differentiation, SPARSE matrices, RESEARCH personnel, ALGORITHMS, SYNCHRONIZATION - Abstract
Optimized parallel implementations on GPU or CPU have dramatically enhanced the fidelity, resolution and accuracy of physical simulations and mesh-based algorithms. However, attaining optimal performance requires expert knowledge and might demand complex code and memory layout optimizations. Compounding this, physical simulation algorithms require the implementation of derivatives, which can be a tedious and error-prone process. In recent years, researchers and practitioners have investigated the concept of designing systems that allow for a more expressive definition of mesh-based simulation code. These systems leverage domain-specific languages (DSL), automatic differentiation or symbolic computing to enhance readability of implementations without compromising performance. We follow this line of work and propose a symbolic code generation approach tailored to mesh-based computations on parallel devices. Our system extends related work by incorporating collision handling and a data access synchronization approach, enabling rapid sparse matrix assembly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
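The derivative burden this abstract motivates can be removed mechanically. As a minimal, hypothetical illustration (not the paper's system, which generates symbolic parallel code), forward-mode automatic differentiation via dual numbers evaluates a function and its derivative in one pass; all names below are our own:

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative only: this sketch shows why hand-written derivatives
# can be avoided; it is not the paper's symbolic code generator.

class Dual:
    """Number carrying a value and a derivative (seed)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """d f / d x at x, without writing the derivative by hand."""
    return f(Dual(x, 1.0)).dot

# Toy spring energy E(x) = 0.5 * k * x^2 with k = 2.
energy = lambda x: 0.5 * 2.0 * x * x
print(derivative(energy, 3.0))  # dE/dx = k*x = 6.0
```

Systems like the one described go further by emitting the derivative *code* itself (e.g., for sparse Hessian assembly), rather than evaluating derivatives at runtime.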
3. Large language model evaluation for high‐performance computing software development.
- Author
-
Godoy, William F., Valero‐Lara, Pedro, Teranishi, Keita, Balaprakash, Prasanna, and Vetter, Jeffrey S.
- Subjects
LANGUAGE models, PROGRAMMING languages, CHATGPT, FORTRAN, ARTIFICIAL intelligence - Abstract
We apply AI‐assisted large language model (LLM) capabilities of GPT‐3 targeting high‐performance computing (HPC) kernels for (i) code generation, and (ii) auto‐parallelization of serial code in C++, Fortran, Python and Julia. Our scope includes the following fundamental numerical kernels: AXPY, GEMV, GEMM, SpMV, Jacobi Stencil, and CG, and language/programming models: (1) C++ (e.g., OpenMP [including offload], OpenACC, Kokkos, SyCL, CUDA, and HIP), (2) Fortran (e.g., OpenMP [including offload] and OpenACC), (3) Python (e.g., numpy, Numba, cuPy, and pyCUDA), and (4) Julia (e.g., Threads, CUDA.jl, AMDGPU.jl, and KernelAbstractions.jl). Kernel implementations are generated using GitHub Copilot capabilities powered by the GPT‐based OpenAI Codex available in Visual Studio Code, given simple prompt variants. To quantify and compare the generated results, we propose a proficiency metric around the initial 10 suggestions given for each prompt. For auto‐parallelization, we use ChatGPT interactively, giving simple prompts as in a dialogue with another human, including simple "prompt engineering" follow-ups. Results suggest that correct outputs for C++ correlate with the adoption and maturity of programming models. For example, OpenMP and CUDA score very high, whereas HIP still lags behind. We found that prompts in either a targeted language such as Fortran or the more general‐purpose Python can benefit from adding language keywords, while Julia prompts perform acceptably well for its Threads and CUDA.jl programming models. We expect to provide an initial quantifiable point of reference for code generation in each programming model using a state‐of‐the‐art LLM. Overall, understanding the convergence of LLMs, AI, and HPC is crucial due to its rapidly evolving nature and how it is redefining human‐computer interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
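The paper's "proficiency metric around the initial 10 suggestions" is not fully specified in the abstract; one plausible reading is the fraction of the first suggestions that pass a correctness check. The sketch below is a hypothetical scoring function (the function names and the metric's exact form are our assumptions), using AXPY, one of the study's kernels, as the correctness oracle:

```python
# Hypothetical "proficiency over the first n suggestions" score; the
# paper's exact metric is not reproduced here. Each suggestion is
# judged by a caller-supplied predicate.

def proficiency(suggestions, is_correct, n=10):
    """Fraction of the first n suggestions judged correct."""
    head = suggestions[:n]
    if not head:
        return 0.0
    return sum(1 for s in head if is_correct(s)) / len(head)

# Reference AXPY (y <- a*x + y) used as the correctness oracle.
def axpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

def judge(candidate):
    return candidate(2.0, [1.0, 2.0], [3.0, 4.0]) == axpy(2.0, [1.0, 2.0], [3.0, 4.0])

good = lambda a, x, y: [a * xi + yi for xi, yi in zip(x, y)]
bad  = lambda a, x, y: [a * xi for xi in x]   # forgets the "+ y" term
print(proficiency([good, bad, good], judge))  # 2 of 3 correct
```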
4. Novel rapid control prototyping for permanent magnet synchronous motor via model-based design and STM32 chip.
- Author
-
Hu, Mingyuan, Ahn, Hyeongki, Park, Jihoon, and You, Kwanho
- Subjects
-
PERMANENT magnet motors, ALGORITHMS, COMPUTER software - Abstract
As control algorithms become increasingly sophisticated, delivering improved performance at the expense of greater complexity, practical experiments often become unfeasible. To address this challenge, this study introduces a novel rapid control prototyping (NRCP) approach based on model-based design (MBD) using MATLAB/Simulink, STM32CubeMX software, and field-oriented control strategies for permanent magnet synchronous motors. Compared with existing rapid control prototyping methods, our NRCP design offers several advantages: it simplifies model construction by utilizing only basic Simulink modules, minimizes dependency on MATLAB/Simulink toolboxes by requiring only Embedded Coder conversion to C language, and ensures strong compatibility as the experimental code involves only C language. To demonstrate the feasibility and efficiency of this approach, sensor-based and sensorless control models were developed using the MBD method. The practicality of the NRCP was successfully validated through sensor-based and sensorless experiments using an ARM Cortex-M4-based STM32 microcontroller. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. AceCoder: An Effective Prompting Technique Specialized in Code Generation.
- Author
-
Li, Jia, Zhao, Yunfei, Li, Yongmin, Li, Ge, and Jin, Zhi
- Abstract
Large language models (LLMs) have shown great success in code generation. LLMs take a prompt as input and output code. How to make prompts (i.e., Prompting Techniques) is a key question. Existing prompting techniques are designed for natural language generation and have low accuracy in code generation. In this article, we propose a new prompting technique named AceCoder. Our motivation is that code generation meets two unique challenges (i.e., requirement understanding and code implementation). AceCoder contains two novel mechanisms (i.e., guided code generation and example retrieval) to solve these challenges. ❶ Guided code generation asks LLMs first to analyze requirements and output an intermediate preliminary (e.g., test cases). The preliminary clarifies requirements and tells LLMs "what to write." ❷ Example retrieval selects similar programs as examples in prompts, which provide lots of relevant content (e.g., algorithms, APIs) and teach LLMs "how to write." We apply AceCoder to four LLMs (e.g., GPT-3.5, CodeGeeX) and evaluate it on three public benchmarks using the Pass@k metric. Results show that AceCoder can significantly improve the performance of LLMs on code generation. In terms of Pass@1, AceCoder outperforms the SOTA baseline by up to 56.4% in MBPP, 70.7% in MBJP, and 88.4% in MBJSP. AceCoder is effective in LLMs with different sizes (i.e., 6B–13B) and different languages (i.e., Python, Java, and JavaScript). Human evaluation shows human developers prefer programs from AceCoder. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
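The Pass@k metric named in this and several later abstracts is commonly computed with the unbiased estimator of Chen et al. (2021): given n sampled programs of which c pass the tests, estimate the probability that at least one of k random samples passes. A minimal sketch:

```python
import math

# Unbiased pass@k estimator (Chen et al., 2021):
# pass@k = 1 - C(n-c, k) / C(n, k), the chance that a random
# size-k subset of the n samples contains at least one passing program.

def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0  # every size-k subset must include a passing sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(10, 3, 1))  # ≈ 0.3, i.e. c/n for k = 1
```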
6. Optimization modeling and verification from problem specifications using a multi-agent multi-stage LLM framework.
- Author
-
Mostajabdaveh, Mahdi, Yu, Timothy T., Ramamonjison, Rindranirina, Carenini, Giuseppe, Zhou, Zirui, and Zhang, Yong
- Subjects
LANGUAGE models, NATURAL languages - Abstract
This paper explores the use of Large Language Models (LLMs) in modeling real-world optimization problems. We concretely define the task of translating natural language descriptions into optimization models (NL2OPT) and provide criteria for classifying optimization problems for the NL2OPT task. Our novel multi-agent modeling framework leverages relations identifier agents and a multi-agent verification mechanism, eliminating the need for solver execution. Additionally, we introduce a straightforward and practical evaluation framework, offering a more effective assessment method compared to traditional execution-based evaluations. We have created a unique dataset tailored for optimization modeling, featuring Problem Specifications as a structured representation of optimization problems. Through comprehensive experiments, our study compares our modeling framework with existing LLM reasoning strategies, highlighting their relative effectiveness in optimization modeling tasks. We also perform ablation studies to explore the effect of different components of our modeling framework. Experimental results demonstrate that our multi-agent framework outperforms many common LLM prompting strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. The underlying potential of NLP for microcontroller programming education.
- Author
-
Rocha, André, Sousa, Lino, Alves, Mário, and Sousa, Armando
- Subjects
LANGUAGE models, ARTIFICIAL intelligence, COMPUTER literacy, COMPUTER engineering, ENGINEERING education, NATURAL language processing - Abstract
The trend for an increasingly ubiquitous and cyber‐physical world has been leveraging the use and importance of microcontrollers (μC) to unprecedented levels. Therefore, microcontroller programming (μCP) becomes a paramount skill for electrical and computer engineering students. However, μCP poses significant challenges for undergraduate students, given the need to master low‐level programming languages and several algorithmic strategies that are not usual in "generic" programming. Moreover, μCP can be time‐consuming and complex even when using high‐level languages. This article samples the current state of μCP education in Portugal and unveils the potential support of natural language processing (NLP) tools (such as ChatGPT). Our analysis of μCP curricular units from seven representative Portuguese engineering schools highlights a predominant use of AVR 8‐bit μC and project‐based learning. While NLP tools emerge as strong candidates as students' μC companion, their application and impact on the learning process and outcomes deserve to be understood. This study compares the most prominent NLP tools, analyzing their benefits and drawbacks for μCP education, building on both hands‐on tests and literature reviews. By providing automatic code generation and explanation of concepts, NLP tools can assist students in their learning process, allowing them to focus on software design and real‐world tasks that the μC is designed to handle, rather than on low‐level coding. We also analyzed the specific impact of ChatGPT in the context of a μCP course at ISEP, confirming most of our expectations, but with a few curiosities. Overall, this work establishes the foundations for future research on the effective integration of NLP tools in μCP courses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Assessing the Use of GitHub Copilot on Students of Engineering of Information Systems.
- Author
-
Cirett-Galán, Federico, Torres-Peralta, Raquel, Navarro-Hernández, René, Ochoa-Hernández, José Luis, Contreras-Rivera, San, Estrada-Ríos, Luis Arturo, and Machado-Encinas, Germán
- Subjects
SOFTWARE development tools, SOFTWARE engineering, ENGINEERING students, CHATGPT, ARTIFICIAL intelligence - Abstract
This study examines the impact of AI programming assistants like GitHub Copilot and ChatGPT on software engineering efficiency, an area that has seen limited empirical research. We experimentally evaluated the performance of programmers (n = 16) in Python coding tasks with and without AI assistance, measuring time-to-completion and feature implementation. Results indicate that participants utilizing AI assistance completed tasks significantly faster (p = 0.033) and implemented more required features (p = 0.012) compared to those relying solely on unaided coding. These findings offer empirical insights into the integration of AI tools in software development workflows, highlighting their potential to enhance efficiency without compromising code quality or completeness, with implications for organizational pipelines and practitioner skills. Responses to exit surveys suggest that participants without AI tool assistance encountered frustrations related to code recall, time constraints, and problem-solving, while assisted participants reported no negative experiences, focusing instead on successful completion of tasks within the allotted time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Models2Code: Autonomous model‐based generation to expedite the engineering process.
- Author
-
Paniagua, Cristina and Caso, Fernando Labra
- Subjects
-
SYSTEMS engineering, MANUFACTURING processes, SOFTWARE engineering, INDUSTRIALIZATION, ARROWHEADS - Abstract
Insufficient resources and high costs are hindering industrial development, potentially impeding adaptation to market demands. Overcoming this challenge necessitates advancements in software engineering techniques to streamline processes and meet industrial requirements. Crucially, automating manual tasks and enhancing interoperability between engineering stages can yield efficiency gains. This paper presents a model‐based system engineering approach aimed at automating the transition from design to implementation, incorporating autonomous generation and validation features. Implemented as plugins and utilizing model transformation techniques, this solution targets reducing engineering time and facilitating the adoption of new technologies. Developed, implemented, and tested within the Arrowhead framework, the approach is followed by a discussion on its benefits and limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. ChatGeoAI: Enabling Geospatial Analysis for Public through Natural Language, with Large Language Models.
- Author
-
Mansourian, Ali and Oucheikh, Rachid
- Subjects
-
GENERATIVE artificial intelligence, LANGUAGE models, NATURAL language processing, TASK analysis, NATURAL languages - Abstract
Large Language Models (LLMs) such as GPT, BART, and Gemini stand at the forefront of Generative Artificial Intelligence, showcasing remarkable prowess in natural language comprehension and task execution. This paper proposes a novel framework developed on the foundation of Llama 2, aiming to bridge the gap between natural language queries and executable code for geospatial analyses within the PyQGIS environment. It empowers non-expert users to leverage GIS technology without requiring deep knowledge of geospatial programming or tools. Through cutting-edge Natural Language Processing (NLP) techniques, including tailored entity recognition and ontology mapping, the framework accurately interprets user intents and translates them into specific GIS operations. Integration of geospatial ontologies enriches semantic comprehension, ensuring precise alignment between user descriptions, geospatial datasets, and geospatial analysis tasks. A code generation module empowered by Llama 2 converts these interpretations into PyQGIS scripts, enabling the execution of geospatial analysis and results visualization. Rigorous testing across a spectrum of geospatial analysis tasks, with incremental complexity, evaluates the framework and the performance of such a system, with LLM at its core. The proposed system demonstrates proficiency in handling various geometries, spatial relationships, and attribute queries, enabling accurate and efficient analysis of spatial datasets. Moreover, it offers robust error-handling mechanisms and supports tasks related to map styling, visualization, and data manipulation. However, it has some limitations, such as occasional struggles with ambiguous attribute names and aliases, which leads to potential inaccuracies in the filtering and retrieval of features. Despite these limitations, the system presents a promising solution for applications integrating LLMs into GIS and offers a flexible and user-friendly approach to geospatial analysis. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. TSCompiler: efficient compilation framework for dynamic-shape models.
- Author
-
Luo, Xiang, Zhang, Chen, Geng, Chenbo, Yi, Yanzhi, Hu, Jiahui, Zhang, Renwei, Zhang, Zhen, Consolaro, Gianpietro, Yang, Fan, Lu, Tun, Gu, Ning, and Shang, Li
- Abstract
Today’s deep learning models face an increasing demand to handle dynamic shape tensors and computation whose shape information remains unknown at compile time and varies in a nearly infinite range at runtime. This shape dynamism brings tremendous challenges for existing compilation pipelines designed for static models which optimize tensor programs relying on exact shape values. This paper presents TSCompiler, an end-to-end compilation framework for dynamic shape models. TSCompiler first proposes a symbolic shape propagation algorithm to recover symbolic shape information at compile time to enable subsequent optimizations. TSCompiler then partitions the shape-annotated computation graph into multiple subgraphs and fine-tunes the backbone operators from the subgraph within a hardware-aligned search space to find a collection of high-performance schedules. TSCompiler can propagate the explored backbone schedule to other fusion groups within the same subgraph to generate a set of parameterized tensor programs for fused cases based on dependence analysis. At runtime, TSCompiler utilizes an occupancy-targeted cost model to select from pre-compiled tensor programs for varied tensor shapes. Extensive evaluations show that TSCompiler can achieve state-of-the-art speedups for dynamic shape models. For example, we can improve kernel efficiency by up to 3.97× on NVIDIA RTX3090 and 10.30× on NVIDIA A100, and achieve up to five orders of magnitude speedups on end-to-end latency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
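The compile-time symbolic shape propagation the abstract describes can be sketched in miniature: dynamic dimensions become symbols, static ones stay integers, and each operator maps input shapes to an output shape. The representation and names below are our own, not TSCompiler's:

```python
# Toy symbolic shape propagation: a dimension is either an int
# (static) or a string (a symbol unknown until runtime).

def matmul_shape(a, b):
    """(m, k) x (k2, n) -> (m, n); contraction dims must match."""
    (m, k), (k2, n) = a, b
    if k != k2:
        raise ValueError(f"contraction mismatch: {k} vs {k2}")
    return (m, n)

# Batch dim "B" is unknown at compile time but still propagates,
# so downstream passes can specialize on it at runtime.
h = matmul_shape(("B", 128), (128, 64))
out = matmul_shape(h, (64, 10))
print(out)  # ('B', 10)
```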
12. Trans-Compiler-Based Conversion from Cross-Platform Applications to Native Applications.
- Author
-
Mahmoud, Amira T., Radwan, Moataz-Bellah, Soliman, Abdelrahman Mohamed, Yousef, Ahmed H., Zayed, Hala H., and Medhat, Walaa
- Subjects
LANGUAGE models, MOBILE apps, NATIVE language, INNOVATION management, GENERALIZATION - Abstract
Cross-platform mobile application development is emerging widely in the mobile applications industry. Cross-platform Frameworks (CPFs) like React Native, Flutter, and Xamarin are used by many development companies. The technology these frameworks use faces performance and resource-efficiency limitations compared to native applications, which are written in the native languages of the platforms. Trans-compiler-based conversion between native languages of different mobile platforms has been addressed in recent research. However, prior solutions lacked a mathematical representation of the problem, depended on hard coding, and needed more generalization. In addition, they might not be practical for companies that have already built applications using CPFs. Therefore, in this paper, we present an enhanced trans-compiler-based converter that converts applications made with CPFs to native applications. We implemented the architecture to convert React Native and Xamarin applications. The React Native to Native tool converted thirteen applications to native Android and iOS applications, with accuracies ranging from 40% for large applications to 100% for simple applications. The maximum conversion time was seven minutes for converting 40% of an 8K LOC application. In addition, since Large Language Models (LLMs) are the trendiest technology of our era, we compared our proposed solution's output with LLMs and found it superior to the current state of LLMs. A performance evaluation also compared React Native applications against the native applications generated by the trans-compiler tool. The assessment showed that the native applications perform better than React Native regarding runtime memory consumption, storage, and speed. The Xamarin to Native tool was also tested to show the genericness of the architecture and how it can be extended to convert from any CPF to native applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. LLMs for science: Usage for code generation and data analysis.
- Author
-
Nejjar, Mohamed, Zacharias, Luca, Stiehle, Fabian, and Weber, Ingo
- Subjects
-
LANGUAGE models, DATA analytics, ARTIFICIAL intelligence, PRODUCTIVE life span, LANGUAGE research - Abstract
Large language models (LLMs) have been touted to enable increased productivity in many areas of today's work life. Scientific research as an area of work is no exception: The potential of LLM‐based tools to assist in the daily work of scientists has become a highly discussed topic across disciplines. However, we are only at the very onset of this subject of study. It is still unclear how the potential of LLMs will materialize in research practice. With this study, we give first empirical evidence on the use of LLMs in the research process. We have investigated a set of use cases for LLM‐based tools in scientific research and conducted a first study to assess to which degree current tools are helpful. In this position paper, we report specifically on use cases related to software engineering, specifically, on generating application code and developing scripts for data analytics and visualization. While we studied seemingly simple use cases, results across tools differ significantly. Our results highlight the promise of LLM‐based tools in general, yet we also observe various issues, particularly regarding the integrity of the output these tools provide. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. (De/Re)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms.
- Author
-
Rasch, Ari
- Subjects
-
LINEAR algebra, MODERN architecture, QUANTUM chemistry, DATA mining, QUANTUM computing, DEEP learning - Abstract
Data-parallel computations, such as linear algebra routines and stencil computations, constitute one of the most relevant classes in parallel computing, e.g., due to their importance for deep learning. Efficiently de-composing such computations for the memory and core hierarchies of modern architectures and re-composing the computed intermediate results back to the final result—we say (de/re)-composition for short—is key to achieve high performance for these computations on, e.g., GPU and CPU. Current high-level approaches to generating data-parallel code are often restricted to a particular subclass of data-parallel computations and architectures (e.g., only linear algebra routines on only GPU or only stencil computations), and/or the approaches rely on a user-guided optimization process for a well-performing (de/re)-composition of computations, which is complex and error prone for the user. We formally introduce a systematic (de/re)-composition approach, based on the algebraic formalism of Multi-Dimensional Homomorphisms (MDHs). Our approach is designed as general enough to be applicable to a wide range of data-parallel computations and for various kinds of target parallel architectures. To efficiently target the deep and complex memory and core hierarchies of contemporary architectures, we exploit our introduced (de/re)-composition approach for a correct-by-construction, parametrized cache blocking, and parallelization strategy. We show that our approach is powerful enough to express, in the same formalism, the (de/re)-composition strategies of different classes of state-of-the-art approaches (scheduling-based, polyhedral, etc.), and we demonstrate that the parameters of our strategies enable systematically generating code that can be fully automatically optimized (auto-tuned) for the particular target architecture and characteristics of the input and output data (e.g., their sizes and memory layouts). 
Particularly, our experiments confirm that via auto-tuning, we achieve higher performance than state-of-the-art approaches, including hand-optimized solutions provided by vendors (such as NVIDIA cuBLAS/cuDNN and Intel oneMKL/oneDNN), on real-world datasets and for a variety of data-parallel computations, including linear algebra routines, stencil and quantum chemistry computations, data mining algorithms, and computations that recently gained high attention due to their relevance for deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
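The (de/re)-composition idea can be illustrated far more simply than with MDHs: de-compose a matrix product into cache-sized tiles, compute per-tile partial results, and re-compose them into the final matrix. In this sketch the tile size T stands in for the tuned parameters the paper auto-tunes; none of the names come from the paper:

```python
# Cache-blocked matrix product: a minimal instance of de-composing
# an iteration space into tiles and re-composing partial results.

def matmul_tiled(A, B, T=2):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, T):            # de-compose the iteration space
        for j0 in range(0, m, T):
            for p0 in range(0, k, T):
                for i in range(i0, min(i0 + T, n)):
                    for j in range(j0, min(j0 + T, m)):
                        s = 0.0
                        for p in range(p0, min(p0 + T, k)):
                            s += A[i][p] * B[p][j]
                        C[i][j] += s     # re-compose partial tiles
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
print(matmul_tiled(A, I))  # equals A
```

Auto-tuning in the paper's sense would search over parameters like T (per memory level, per dimension) for the target hardware rather than fixing them.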
15. Deep learning for code generation: a survey.
- Author
-
Zhang, Huangzhao, Zhang, Kechi, Li, Zhuo, Li, Jia, Li, Yongmin, Zhao, Yunfei, Zhu, Yuqi, Liu, Fang, Li, Ge, and Jin, Zhi
- Abstract
In the past decade, thanks to the power of deep-learning techniques, we have witnessed a whole new era of automated code generation. To sort out developments, we have conducted a comprehensive review of solutions to deep learning-based code generation. In this survey, we generally formalize the pipeline and procedure of code generation and categorize existing solutions according to a taxonomy from the perspectives of architecture, model-agnostic enhancing strategy, metrics, and tasks. In addition, we outline the challenges faced by current dominant large models and list several plausible directions for future research. We hope that this survey may provide handy guidance to understanding, utilizing, and developing deep learning-based code-generation techniques for researchers and practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Self-Collaboration Code Generation via ChatGPT.
- Author
-
Dong, Yihong, Jiang, Xue, Jin, Zhi, and Li, Ge
- Subjects
LANGUAGE models, CHATGPT, COMPUTER software development, COMPUTER software quality control, RAPID response teams, VIRTUAL work teams - Abstract
Although large language models (LLMs) have demonstrated remarkable code-generation ability, they still struggle with complex tasks. In real-world software development, humans usually tackle complex tasks through collaborative teamwork, a strategy that significantly controls development complexity and enhances software quality. Inspired by this, we present a self-collaboration framework for code generation employing LLMs, exemplified by ChatGPT. Specifically, through role instructions, (1) multiple LLM agents act as distinct "experts," each responsible for a specific subtask within a complex task; (2) the way to collaborate and interact is specified, so that different roles form a virtual team that facilitates each other's work, and ultimately the virtual team addresses code-generation tasks collaboratively without the need for human intervention. To effectively organize and manage this virtual team, we incorporate software-development methodology into the framework. Thus, we assemble an elementary team consisting of three LLM roles (i.e., analyst, coder, and tester) responsible for software development's analysis, coding, and testing stages. We conduct comprehensive experiments on various code-generation benchmarks. Experimental results indicate that self-collaboration code generation yields a relative improvement of 29.9–47.1% in Pass@1 compared to the base LLM agent. Moreover, we showcase that self-collaboration could potentially enable LLMs to efficiently handle complex repository-level tasks that are not readily solved by the single LLM agent. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
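The role-instruction loop this abstract describes (analyst, coder, tester, each an LLM call with its own role prompt) can be sketched schematically. Here `ask_llm` is a stand-in stub for a real model API, and the role prompts are our own paraphrase, not the paper's instructions:

```python
# Schematic of a role-based virtual team: each role is an LLM call
# with its own instruction; the loop lets later roles react to
# earlier outputs. `ask_llm` is a stub, not a real model API.

ROLES = {
    "analyst": "Decompose the requirement into subtasks.",
    "coder": "Write code implementing the analyst's plan.",
    "tester": "Report test feedback on the coder's output.",
}

def ask_llm(role, instruction, task, context):
    # A real system would send `instruction` as the role prompt,
    # with `context` holding the other roles' previous outputs.
    return f"[{role}] output for: {task}"

def self_collaborate(task, rounds=2):
    context = {}
    for _ in range(rounds):
        for role, instruction in ROLES.items():
            context[role] = ask_llm(role, instruction, task, context)
    return context["coder"]  # the final artifact is the coder's program

print(self_collaborate("parse a CSV file"))
```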
17. Self-Planning Code Generation with Large Language Models.
- Author
-
Jiang, Xue, Dong, Yihong, Wang, Lecheng, Fang, Zheng, Shang, Qiwei, Li, Ge, Jin, Zhi, and Jiao, Wenpin
- Subjects
LANGUAGE models, PROGRAMMING languages, LANGUAGE planning, PROBLEM solving - Abstract
Although large language models (LLMs) have demonstrated impressive ability in code generation, they are still struggling to address the complicated intent provided by humans. It is widely acknowledged that humans typically employ planning to decompose complex problems and schedule solution steps prior to implementation. To this end, we introduce planning into code generation to help the model understand complex intent and reduce the difficulty of problem-solving. This paper proposes a self-planning code generation approach with large language models, which consists of two phases, namely planning phase and implementation phase. Specifically, in the planning phase, LLM plans out concise solution steps from the intent combined with few-shot prompting. Subsequently, in the implementation phase, the model generates code step by step, guided by the preceding solution steps. We conduct extensive experiments on various code-generation benchmarks across multiple programming languages. Experimental results show that self-planning code generation achieves a relative improvement of up to 25.4% in Pass@1 compared to direct code generation, and up to 11.9% compared to Chain-of-Thought of code generation. Moreover, our self-planning approach also enhances the quality of the generated code with respect to correctness, readability, and robustness, as assessed by humans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. The performance of the LSTM-based code generated by Large Language Models (LLMs) in forecasting time series data
- Author
-
Saroj Gopali, Sima Siami-Namini, Faranak Abri, and Akbar Siami Namin
- Subjects
Large language models (LLMs), Code generation, Forecasting time series data, Deep learning models, Long short-term memory (LSTM), Prompt engineering, Computational linguistics. Natural language processing, P98-98.5 - Abstract
Generative AI, and in particular Large Language Models (LLMs), have gained substantial momentum due to their wide applications in various disciplines. While the use of these game-changing technologies in generating textual information has already been demonstrated in several application domains, their ability to generate complex models and executable code still needs to be explored. An intriguing case is the quality of the machine and deep learning models generated by these LLMs for automated scientific data analysis, where a data analyst may not have enough expertise to manually code and optimize complex deep learning models and may thus opt to leverage LLMs to generate the required models. This paper investigates and compares the performance of mainstream LLMs, such as ChatGPT, PaLM, LLaMA, and Falcon, in generating deep learning models for analyzing time series data, an important and popular data type with prevalent applications in many domains, including finance and the stock market. This research conducts a set of controlled experiments where the prompts for generating deep learning-based models are controlled with respect to sensitivity levels of four criteria: (1) Clarity and Specificity, (2) Objective and Intent, (3) Contextual Information, and (4) Format and Style. While the results are relatively mixed, we observe some distinct patterns. We notice that, using LLMs, we are able to generate deep learning-based models with executable code for each dataset separately, whose performance is comparable with the manually crafted and optimized LSTM models for predicting the whole time series dataset. We also notice that ChatGPT outperforms the other LLMs in generating more accurate models. Furthermore, we observe that the quality of the generated models varies with respect to the "temperature" parameter used in configuring the LLMs.
The results can be beneficial for data analysts and practitioners who would like to leverage generative AI to produce good prediction models of acceptable quality.
- Published
- 2024
- Full Text
- View/download PDF
19. Large language model-based code generation for the control of construction assembly robots: A hierarchical generation approach
- Author
-
Hanbin Luo, Jianxin Wu, Jiajing Liu, and Maxwell Fordjour Antwi-Afari
- Subjects
Construction assembly robot, Large language model, Code generation, ChatGPT, Human–robot collaboration, Engineering (General). Civil engineering (General), TA1-2040, Building construction, TH1-9745 - Abstract
Offline programming (OLP) is a mainstream approach for controlling assembly robots at construction sites. However, existing methods are tailored to specific assembly tasks and workflows, and thus lack flexibility. Additionally, the emerging large language model (LLM)-based OLP cannot effectively handle the code logic of robot programming. Thus, this paper addresses the question: How can robot control programs be generated effectively and accurately for diverse construction assembly tasks using LLM techniques? This paper describes a closed user-on-the-loop control framework for construction assembly robots based on LLM techniques. A hierarchical strategy to generate robot control programs is proposed to logically integrate code generation at high and low levels. Additionally, customized application programming interfaces and a chain of action are combined to enhance the LLM's understanding of assembly action logic. An assembly task set was designed to evaluate the feasibility and reliability of the proposed approach. The results show that the proposed approach (1) is widely applicable to diverse assembly tasks, and (2) can improve the quality of the generated code by decreasing the number of errors. Our approach facilitates the automation of construction assembly tasks by simplifying the robot control process.
- Published
- 2024
- Full Text
- View/download PDF
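The hierarchical strategy described above — a high-level stage that plans an assembly action chain and a low-level stage that renders each action through customized application programming interfaces — can be sketched roughly as below. The planner stub, action names, and `ROBOT_API` templates are all hypothetical stand-ins for the paper's LLM calls and interfaces:

```python
# Hypothetical two-level generation sketch. plan_task stands in for the
# high-level LLM call; ROBOT_API stands in for the customized robot APIs.
ROBOT_API = {
    "pick": "robot.pick({part})",
    "move": "robot.move_to({pose})",
    "place": "robot.place({part})",
}

def plan_task(task):
    """High-level stage: return a chain of (action, arguments) steps.
    Fixed here; in the framework this chain of action comes from the LLM."""
    return [
        ("pick", {"part": "beam_1"}),
        ("move", {"pose": "slot_A"}),
        ("place", {"part": "beam_1"}),
    ]

def generate_program(task):
    """Low-level stage: render each planned action through the robot API,
    so the LLM never has to emit raw controller code directly."""
    return "\n".join(ROBOT_API[a].format(**args) for a, args in plan_task(task))

program = generate_program("install beam_1 into slot_A")
```

Separating planning from rendering is what lets the approach reuse one action vocabulary across diverse assembly tasks.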
20. Using State Transition Diagrams for Automated Knowledge Base Construction
- Author
-
Dorodnykh, Nikita O., Yurin, Aleksandr Yu., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Kovalev, Sergey, editor, Kotenko, Igor, editor, Sukhanov, Andrey, editor, Li, Yin, editor, and Li, Yao, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Dual Learning Model of Code Summary and Generation Based on Transformer
- Author
-
Wang, Jiaying, Cao, Lijun, Shan, Jing, Song, Xiaoxu, Jiang, Junyi, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Jin, Cheqing, editor, Yang, Shiyu, editor, Shang, Xuequn, editor, Wang, Haofen, editor, and Zhang, Yong, editor
- Published
- 2024
- Full Text
- View/download PDF
22. Prompt2DeModel: Declarative Neuro-Symbolic Modeling with Natural Language
- Author
-
Faghihi, Hossein Rajaby, Nafar, Aliakbar, Uszok, Andrzej, Karimian, Hamid, Kordjamshidi, Parisa, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Besold, Tarek R., editor, d’Avila Garcez, Artur, editor, Jimenez-Ruiz, Ernesto, editor, Confalonieri, Roberto, editor, Madhyastha, Pranava, editor, and Wagner, Benedikt, editor
- Published
- 2024
- Full Text
- View/download PDF
23. Case Study: Modeling, Simulation, Verification, and Code Generation of an Automatic Cruise Control System
- Author
-
Xu, Xiong, Wang, Shuling, Ji, Zekun, Gao, Qiang, Jin, Xiangyu, Zhan, Bohua, Zhan, Naijun, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Cavalcanti, Ana, editor, and Baxter, James, editor
- Published
- 2024
- Full Text
- View/download PDF
24. Evaluation Metrics in LLM Code Generation
- Author
-
Hartung, Kai, Mallick, Sambit, Gröttrup, Sören, Georges, Munir, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Nöth, Elmar, editor, Horák, Aleš, editor, and Sojka, Petr, editor
- Published
- 2024
- Full Text
- View/download PDF
25. NL2ProcessOps: Towards LLM-Guided Code Generation for Process Execution
- Author
-
Monti, Flavia, Leotta, Francesco, Mangler, Juergen, Mecella, Massimo, Rinderle-Ma, Stefanie, van der Aalst, Wil, Series Editor, Ram, Sudha, Series Editor, Rosemann, Michael, Series Editor, Szyperski, Clemens, Series Editor, Guizzardi, Giancarlo, Series Editor, Marrella, Andrea, editor, Resinas, Manuel, editor, and Jans, Mieke, editor
- Published
- 2024
- Full Text
- View/download PDF
26. Code Generation for Octree-Based Multigrid Solvers with Fused Higher-Order Interpolation and Communication
- Author
-
Angersbach, Richard, Kuckuk, Sebastian, Köstler, Harald, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Carretero, Jesus, editor, Shende, Sameer, editor, Garcia-Blas, Javier, editor, Brandic, Ivona, editor, Olcoz, Katzalin, editor, and Schreiber, Martin, editor
- Published
- 2024
- Full Text
- View/download PDF
27. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback
- Author
-
Liu, Zhaofeng, Su, Jing, Cai, Jia, Yang, Jingzhi, Wu, Chenfan, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Si, Zhanjun, editor, and Zhang, Qinhu, editor
- Published
- 2024
- Full Text
- View/download PDF
28. Generating Situated Reflection Triggers About Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning
- Author
-
Naik, Atharva, Yin, Jessica Ruhan, Kamath, Anusha, Ma, Qianou, Wu, Sherry Tongshuang, Murray, Charles, Bogart, Christopher, Sakr, Majd, Rose, Carolyn P., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Olney, Andrew M., editor, Chounta, Irene-Angelica, editor, Liu, Zitao, editor, Santos, Olga C., editor, and Bittencourt, Ig Ibert, editor
- Published
- 2024
- Full Text
- View/download PDF
29. SeamlessMDD: Framework for Seamless Integration of Generated and Hand-Written Code
- Author
-
Dragaš, Bojana, Todorović, Nenad, Rajačić, Tijana, Milosavljević, Gordana, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Stefanidis, Kostas, editor, Systä, Kari, editor, Matera, Maristella, editor, Heil, Sebastian, editor, Kondylakis, Haridimos, editor, and Quintarelli, Elisa, editor
- Published
- 2024
- Full Text
- View/download PDF
30. Remote Debugger: A Tool to Remotely Monitor and Operate IOPT-Nets Controllers
- Author
-
Pereira, Fernando, Barros, João-Paulo, Moutinho, Filipe, Costa, Anikó, Campos-Rebelo, Rogério, Gomes, Luis, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Kristensen, Lars Michael, editor, and van der Werf, Jan Martijn, editor
- Published
- 2024
- Full Text
- View/download PDF
31. NL2Code: Harnessing Transformers for Automatic Code Generation from Natural Language Descriptions
- Author
-
Pavitha, N., Patrawala, Alimurtuza, Kulkarni, Tejas, Talati, Vidit, Dahiya, Shubham, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Senjyu, Tomonobu, editor, So–In, Chakchai, editor, and Joshi, Amit, editor
- Published
- 2024
- Full Text
- View/download PDF
32. Generative AI for Software Development: A Family of Studies on Code Generation
- Author
-
Dakhel, Arghavan Moradi, Nikanjam, Amin, Khomh, Foutse, Desmarais, Michel C., Washizaki, Hironori, Nguyen-Duc, Anh, editor, Abrahamsson, Pekka, editor, and Khomh, Foutse, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Building BESSER: An Open-Source Low-Code Platform
- Author
-
Alfonso, Iván, Conrardy, Aaron, Sulejmani, Armen, Nirumand, Atefeh, Ul Haq, Fitash, Gomez-Vazquez, Marcos, Sottet, Jean-Sébastien, Cabot, Jordi, van der Aalst, Wil, Series Editor, Ram, Sudha, Series Editor, Rosemann, Michael, Series Editor, Szyperski, Clemens, Series Editor, Guizzardi, Giancarlo, Series Editor, van der Aa, Han, editor, Bork, Dominik, editor, Schmidt, Rainer, editor, and Sturm, Arnon, editor
- Published
- 2024
- Full Text
- View/download PDF
34. MetaOCaml: Ten Years Later : System Description
- Author
-
Kiselyov, Oleg, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Gibbons, Jeremy, editor, and Miller, Dale, editor
- Published
- 2024
- Full Text
- View/download PDF
35. An ML-Style Module System for Cross-Stage Type Abstraction in Multi-stage Programming
- Author
-
Suwa, Takashi, Igarashi, Atsushi, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Gibbons, Jeremy, editor, and Miller, Dale, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Enhancing Large Language Models-Based Code Generation by Leveraging Genetic Improvement
- Author
-
Pinna, Giovanni, Ravalico, Damiano, Rovito, Luigi, Manzoni, Luca, De Lorenzo, Andrea, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Giacobini, Mario, editor, Xue, Bing, editor, and Manzoni, Luca, editor
- Published
- 2024
- Full Text
- View/download PDF
37. A Pilot Study on AI-Assisted Code Generation with Large Language Models for Software Engineering
- Author
-
Liu, Hsiao-Chuan, Tsai, Chia-Tung, Day, Min-Yuh, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Lee, Chao-Yang, editor, Lin, Chun-Li, editor, and Chang, Hsuan-Ting, editor
- Published
- 2024
- Full Text
- View/download PDF
38. Automated Verification of the Correctness of Transitions Between Elements of a Mobile Application Using Source Code Generation Tools
- Author
-
Naumova, Nadezhda, Barsukov, Ilya, Mashina, Ekaterina, Bostrikova, Darya, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Kumar, Sandeep, editor, K., Balachandran, editor, Kim, Joong Hoon, editor, and Bansal, Jagdish Chand, editor
- Published
- 2024
- Full Text
- View/download PDF
39. Critical Overview of Model Driven Engineering
- Author
-
El Gaoual, Yahya, Hanine, Mohamed, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Ben Ahmed, Mohamed, editor, Boudhir, Anouar Abdelhakim, editor, El Meouche, Rani, editor, and Karaș, İsmail Rakıp, editor
- Published
- 2024
- Full Text
- View/download PDF
40. Uncovering LLMs for Service-Composition: Challenges and Opportunities
- Author
-
Pesl, Robin D., Stötzner, Miles, Georgievski, Ilche, Aiello, Marco, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Monti, Flavia, editor, Plebani, Pierluigi, editor, Moha, Naouel, editor, Paik, Hye-young, editor, Barzen, Johanna, editor, Ramachandran, Gowri, editor, Bianchini, Devis, editor, Tamburri, Damian A., editor, and Mecella, Massimo, editor
- Published
- 2024
- Full Text
- View/download PDF
41. Developers’ Perspective on Trustworthiness of Code Generated by ChatGPT: Insights from Interviews
- Author
-
Rabani, Zeinab Sadat, Khorashadizadeh, Hanieh, Abdollahzade, Shirin, Groppe, Sven, Ghofrani, Javad, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Jabbar, M. A., editor, Tiwari, Sanju, editor, Ortiz-Rodríguez, Fernando, editor, Groppe, Sven, editor, and Bano Rehman, Tasneem, editor
- Published
- 2024
- Full Text
- View/download PDF
42. Auto-generation of Blockchain-Based Distributed Applications Using Ontologies
- Author
-
Qureshi, Muhammad Uzair, Graux, Damien, Orlandi, Fabrizio, O’Sullivan, Declan, El Madhoun, Nour, editor, Dionysiou, Ioanna, editor, and Bertin, Emmanuel, editor
- Published
- 2024
- Full Text
- View/download PDF
43. The Recent Trends of Research on GitHub Copilot: A Systematic Review
- Author
-
Ani, Zhamri Che, Hamid, Zauridah Abdul, Zhamri, Nur Nazifa, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Zakaria, Nur Haryani, editor, Mansor, Nur Suhaili, editor, Husni, Husniza, editor, and Mohammed, Fathey, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Automated Code Generation for DES Controllers Modeled as Finite State Machines
- Author
-
Possato, Tiago, Valentini, João H., Southier, Luiz F. P., Teixeira, Marcelo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Barbosa, Haniel, editor, and Zohar, Yoni, editor
- Published
- 2024
- Full Text
- View/download PDF
45. A journey with ASMETA from requirements to code: application to an automotive system with adaptive features.
- Author
-
Arcaini, Paolo, Bonfanti, Silvia, Gargantini, Angelo, Riccobene, Elvinia, and Scandurra, Patrizia
- Subjects
*ADAPTIVE control systems, *MACHINE theory, *SYSTEMS engineering, *METHODS engineering, *REQUIREMENTS engineering - Abstract
Modern automotive systems with adaptive control features require rigorous analysis to guarantee correct operation. We report our experience in modeling the automotive case study from the ABZ2020 conference using the ASMETA toolset, based on the Abstract State Machine formal method. We adopted a seamless system engineering method: from an incremental formal specification of high-level requirements to increasingly refined ASMETA models, to the C++ code generation from the model. Along this process, different validation and verification activities were performed. We explored modeling styles and idioms to face the modeling complexity and ensure that the ASMETA models can best capture and reflect specific behavioral patterns. Through this realistic automotive case study, we evaluated the applicability and usability of our formal modeling approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Leveraging pre-trained language models for code generation.
- Author
-
Soliman, Ahmed, Shaheen, Samir, and Hadhoud, Mayada
- Subjects
LANGUAGE models, NATURAL language processing, CAUSAL models, COMPUTER software development, COMPUTER software developers - Abstract
Code assistance refers to the use of various tools, techniques, and models to help developers in the process of software development. As coding tasks become increasingly complex, code assistants play a pivotal role in enhancing developer productivity, reducing errors, and facilitating a more efficient coding workflow. This assistance can take many forms, including code autocompletion, error detection and correction, code generation, documentation support, and context-aware suggestions. Language models have emerged as integral components of code assistance, offering developers intelligent suggestions, generated code snippets, and enhanced overall coding proficiency. In this paper, we propose new hybrid models for code generation that combine the pre-trained language models BERT, RoBERTa, ELECTRA, and LUKE with the Marian causal language model; we selected these models based on their strong performance across a range of natural language processing tasks. We evaluate these models on two datasets, CoNaLa and DJANGO, and compare them to existing state-of-the-art models. We aim to investigate the potential of pre-trained transformer language models to revolutionize code generation, offering improved precision and efficiency in navigating complex coding scenarios; we also conduct error analysis and refine the generated code. Our results show that these models, when combined with the Marian decoder, significantly improve code generation accuracy and efficiency. Notably, the RoBERTa-Marian model achieved a maximum BLEU score of 35.74 and an exact-match accuracy of 13.8% on CoNaLa, while LUKE-Marian attained a BLEU score of 89.34 and an exact-match accuracy of 78.50% on DJANGO. An implementation of this work is available at https://github.com/AhmedSSoliman/Leveraging-Pretrained-Language-Models-for-Code-Generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
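The exact-match accuracy reported above is a simple metric to reproduce. A minimal sketch — the whitespace-insensitive comparison is an assumption; the paper may normalize code differently before comparing:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of generated snippets identical to the reference solution.
    Comparison ignores leading/trailing whitespace (a simplifying assumption)."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(predictions)

# One of two generated snippets matches its reference exactly.
acc = exact_match_accuracy(["x = 1", "y=2"], ["x = 1", "y = 2"])  # 0.5
```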
47. Fused GEMMs towards an efficient GPU implementation of the ADER‐DG method in SeisSol.
- Author
-
Dorozhinskii, Ravil, Gadeschi, Gonzalo Brito, and Bader, Michael
- Subjects
CODE generators, MATRIX multiplications, EARTHQUAKES, GRAPHICS processing units, SOURCE code, EARTHQUAKE resistant design, GALERKIN methods - Abstract
Summary: This study shows how GPU performance of the ADER discontinuous Galerkin method in SeisSol (an earthquake simulation software) can be further improved while preserving its original design that ensures high CPU performance. We introduce a new code generator ("ChainForge") that fuses subsequent batched matrix multiplications ("GEMMs") into a single GPU kernel, holding intermediate results in shared memory as long as necessary. The generator operates as an external module linked against SeisSol's domain-specific language YATeTo and, as a result, the original SeisSol source code remains mainly unchanged. In this paper, we discuss several challenges related to automatic fusion of GPU kernels and provide solutions to them. By and large, we gain ≈60% in performance of SeisSol's wave propagation solver using Fused-GEMMs compared to the original GPU implementation. We demonstrated this on benchmarks as well as on a real production scenario simulating the Northridge 1994 earthquake. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
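The fusion idea — expressing a chain of batched GEMMs as one operation rather than materializing each intermediate — has a rough language-level analogy in NumPy. This is an illustration only; it does not model CUDA shared memory or the ChainForge generator itself:

```python
import numpy as np

# Staged vs. "fused" evaluation of a chain of batched GEMMs D = A @ B @ C.
# The staged version materializes the intermediate batch T explicitly; the
# einsum form states the whole contraction at once and leaves intermediate
# handling to the backend -- loosely analogous to how fused GPU kernels keep
# intermediates in shared memory instead of global memory.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4, 5))   # batch of 8 left operands
B = rng.standard_normal((8, 5, 6))
C = rng.standard_normal((8, 6, 3))

# Staged: two separate batched GEMMs with an explicit intermediate.
T = A @ B
D_staged = T @ C

# Fused: one contraction over the whole chain.
D_fused = np.einsum("bij,bjk,bkl->bil", A, B, C)

assert np.allclose(D_staged, D_fused)
```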
48. "Private Tutor" or "Ghostwriter": Exploring Large Language Model-Based Hands-On Computer Science Teaching.
- Author
-
李清勇, 耿阳李敖, 彭文娟, 王繁, and 竺超今
- Abstract
Copyright of Experimental Technology & Management is the property of Experimental Technology & Management Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
49. GvdsSQL: Heterogeneous Database Unified Access Technology for Wide-Area Environments.
- Author
-
Shang, Jing, Xiao, Limin, Wu, Zhihui, Yang, Jinqian, Xiao, Zhiwen, Wang, Jinquan, Zhang, Yifei, Chen, Xuguang, Wang, Jibin, and Li, Huiyang
- Subjects
DATABASES, DATABASE management, DATA extraction, SQL, METADATA, CONTENT analysis - Abstract
In a wide area environment, leveraging a unified interface for the management of diverse databases is appealing. Nonetheless, variations in access and operation across heterogeneous databases pose challenges in abstracting a unified access model while preserving specific database operations. Simultaneously, intricate deployment and network conditions in wide-area environments create obstacles for forwarding database requests and achieving high-performance access. To address these challenges, this paper introduces a technology for unified access to heterogeneous databases in wide-area environments, termed Global Virtual Data Space SQL (GvdsSQL). Initially, this paper implements a unified data access mechanism for heterogeneous databases through metadata extraction, abstracts the unified access model, and accomplishes identification and forwarding of fundamental database operations. Secondly, the paper introduces a mechanism for expanding database operations through code generation. This mechanism achieves compatibility for special database operations by injecting rules to generate code. Lastly, this paper implements a multilevel caching mechanism for query results in wide-area databases utilizing semantic analysis. Through intelligent analysis of operation statements, it achieves precise management of cache items, enhancing wide-area access performance. The performance is improved by approximately 35% and 240% compared to similar methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
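The multilevel result cache keyed on analyzed statements can be approximated by a toy single-level version. The normalization here (whitespace collapsing plus lowercasing, which also lowercases string literals) is a deliberate simplification, not GvdsSQL's semantic analysis:

```python
import re

class QueryCache:
    """Toy single-level query-result cache keyed on a normalized SQL string,
    so trivially different spellings of one query share a cache entry."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def normalize(sql):
        # Collapse runs of whitespace and lowercase the statement. Note this
        # also lowercases string literals -- acceptable only for a sketch.
        return re.sub(r"\s+", " ", sql.strip()).lower()

    def get(self, sql):
        """Return cached rows for an equivalent statement, or None."""
        return self._store.get(self.normalize(sql))

    def put(self, sql, rows):
        self._store[self.normalize(sql)] = rows
```

A real wide-area deployment would layer such caches (client, gateway, data source) and invalidate entries when writes touch the underlying tables.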
50. Generating interactive documents for domain-specific validation of formal models.
- Author
-
Vu, Fabian, Happe, Christopher, and Leuschel, Michael
- Subjects
*CODE generators, *MODEL validation, *JAVASCRIPT programming language, *DATA visualization, *C++, *INDUSTRIAL applications - Abstract
Especially in industrial applications of formal modeling, validation is as important as verification. Thus, it is important to integrate the stakeholders' and the domain experts' feedback as early as possible. In this work, we propose two approaches to enable this: (1) a static export of an animation trace into a single HTML file, and (2) a dynamic export of a classical B model as an interactive HTML document, both based on domain-specific visualizations. For the second approach, we extend the high-level code generator B2Program by JavaScript and integrate VisB visualizations alongside SimB simulations with timing, probabilistic and interactive elements. An important aspect of this work is to ease communication between modelers and domain experts. This is achieved by implementing features to run simulations, sharing animated traces with descriptions and giving feedback to each other. This work also evaluates the performance of the generated JavaScript code compared with existing approaches with Java and C++ code generation as well as the animator, constraint solver, and model checker ProB. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
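The first approach above — a static export of an animation trace into a single self-contained HTML file — can be sketched as below. The table layout and function name are illustrative assumptions, not the tool's actual output format:

```python
import html
import json

def export_trace_html(trace, title="Animation trace"):
    """Render a list of model states as one self-contained HTML document
    (no external assets), so domain experts can open it in any browser."""
    rows = "\n".join(
        f"<tr><td>{i}</td><td>{html.escape(json.dumps(state))}</td></tr>"
        for i, state in enumerate(trace)
    )
    return (
        "<!DOCTYPE html><html><head><meta charset='utf-8'>"
        f"<title>{html.escape(title)}</title></head><body>"
        f"<h1>{html.escape(title)}</h1>"
        f"<table><tr><th>Step</th><th>State</th></tr>{rows}</table>"
        "</body></html>"
    )

# Two states of a toy counter model, exported as one HTML string.
doc = export_trace_html([{"x": 0}, {"x": 1}])
```

Shipping everything in one file is what makes the trace easy to mail to a domain expert for feedback, with no toolset installation required.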