46,097 results for "Ionescu, A."
Search Results
2. Masked Image Modeling: A Survey
- Author
Hondru, Vlad, Croitoru, Florinel Alin, Minaee, Shervin, Ionescu, Radu Tudor, and Sebe, Nicu
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
In this work, we survey recent studies on masked image modeling (MIM), an approach that emerged as a powerful self-supervised learning technique in computer vision. The MIM task involves masking some information, e.g. pixels, patches, or even latent representations, and training a model, usually an autoencoder, to predict the missing information by using the context available in the visible part of the input. We identify and formalize two categories of approaches for implementing MIM as a pretext task, one based on reconstruction and one based on contrastive learning. Then, we construct a taxonomy and review the most prominent papers of recent years. We complement the manually constructed taxonomy with a dendrogram obtained by applying a hierarchical clustering algorithm, and we further identify relevant clusters by manually inspecting the resulting dendrogram. Our review also includes the datasets that are commonly used in MIM research. We aggregate the performance results of various masked image modeling methods on the most popular datasets to facilitate the comparison of competing methods. Finally, we identify research gaps and propose several interesting directions of future work.
- Published
- 2024
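The masking step at the heart of the reconstruction-based MIM pretext task described in this abstract can be sketched in a few lines. This is a toy illustration (the `mask_patches` helper and patch size are hypothetical, with NumPy arrays standing in for images); it does not reproduce any surveyed method:

```python
import numpy as np

def mask_patches(image, patch=4, ratio=0.5, seed=0):
    """Hide a random subset of non-overlapping patches (the MIM corruption).

    Returns the masked image and a boolean grid (True = hidden patch).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    gh, gw = h // patch, w // patch
    hidden = rng.random((gh, gw)) < ratio
    masked = image.copy()
    for i in range(gh):
        for j in range(gw):
            if hidden[i, j]:
                masked[i * patch:(i + 1) * patch,
                       j * patch:(j + 1) * patch] = 0.0
    return masked, hidden

# Reconstruction-based MIM would train an autoencoder to predict the
# hidden patches from `masked`, scoring the loss only on hidden pixels;
# contrastive variants instead compare representations of masked views.
image = np.arange(64.0).reshape(8, 8)
masked, hidden = mask_patches(image)
```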
3. Imagen 3
- Author
Imagen-Team-Google, Baldridge, Jason, Bauer, Jakob, Bhutani, Mukul, Brichtova, Nicole, Bunner, Andrew, Chan, Kelvin, Chen, Yichang, Dieleman, Sander, Du, Yuqing, Eaton-Rosen, Zach, Fei, Hongliang, de Freitas, Nando, Gao, Yilin, Gladchenko, Evgeny, Colmenarejo, Sergio Gómez, Guo, Mandy, Haig, Alex, Hawkins, Will, Hu, Hexiang, Huang, Huilian, Igwe, Tobenna Peter, Kaplanis, Christos, Khodadadeh, Siavash, Kim, Yelin, Konyushkova, Ksenia, Langner, Karol, Lau, Eric, Luo, Shixin, Mokrá, Soňa, Nandwani, Henna, Onoe, Yasumasa, Oord, Aäron van den, Parekh, Zarana, Pont-Tuset, Jordi, Qi, Hang, Qian, Rui, Ramachandran, Deepak, Rane, Poorva, Rashwan, Abdullah, Razavi, Ali, Riachi, Robert, Srinivasan, Hansa, Srinivasan, Srivatsan, Strudel, Robin, Uria, Benigno, Wang, Oliver, Wang, Su, Waters, Austin, Wolff, Chris, Wright, Auriel, Xiao, Zhisheng, Xiong, Hao, Xu, Keyang, van Zee, Marc, Zhang, Junlin, Zhang, Katie, Zhou, Wenlei, Zolna, Konrad, Aboubakar, Ola, Akbulut, Canfer, Akerlund, Oscar, Albuquerque, Isabela, Anderson, Nina, Andreetto, Marco, Aroyo, Lora, Bariach, Ben, Barker, David, Ben, Sherry, Berman, Dana, Biles, Courtney, Blok, Irina, Botadra, Pankil, Brennan, Jenny, Brown, Karla, Buckley, John, Bunel, Rudy, Bursztein, Elie, Butterfield, Christina, Caine, Ben, Carpenter, Viral, Casagrande, Norman, Chang, Ming-Wei, Chang, Solomon, Chaudhuri, Shamik, Chen, Tony, Choi, John, Churbanau, Dmitry, Clement, Nathan, Cohen, Matan, Cole, Forrester, Dektiarev, Mikhail, Du, Vincent, Dutta, Praneet, Eccles, Tom, Elue, Ndidi, Feden, Ashley, Fruchter, Shlomi, Garcia, Frankie, Garg, Roopal, Ge, Weina, Ghazy, Ahmed, Gipson, Bryant, Goodman, Andrew, Górny, Dawid, Gowal, Sven, Gupta, Khyatti, Halpern, Yoni, Han, Yena, Hao, Susan, Hayes, Jamie, Hertz, Amir, Hirst, Ed, Hou, Tingbo, Howard, Heidi, Ibrahim, Mohamed, Ike-Njoku, Dirichi, Iljazi, Joana, Ionescu, Vlad, Isaac, William, Jana, Reena, Jennings, Gemma, Jenson, Donovon, Jia, Xuhui, Jones, Kerry, Ju, Xiaoen, Kajic, Ivana, Ayan, Burcu 
Karagol, Kelly, Jacob, Kothawade, Suraj, Kouridi, Christina, Ktena, Ira, Kumakaw, Jolanda, Kurniawan, Dana, Lagun, Dmitry, Lavitas, Lily, Lee, Jason, Li, Tao, Liang, Marco, Li-Calis, Maggie, Liu, Yuchi, Alberca, Javier Lopez, Lu, Peggy, Lum, Kristian, Ma, Yukun, Malik, Chase, Mellor, John, Mosseri, Inbar, Murray, Tom, Nematzadeh, Aida, Nicholas, Paul, Oliveira, João Gabriel, Ortiz-Jimenez, Guillermo, Paganini, Michela, Paine, Tom Le, Paiss, Roni, Parrish, Alicia, Peckham, Anne, Peswani, Vikas, Petrovski, Igor, Pfaff, Tobias, Pirozhenko, Alex, Poplin, Ryan, Prabhu, Utsav, Qi, Yuan, Rahtz, Matthew, Rashtchian, Cyrus, Rastogi, Charvi, Raul, Amit, Rebuffi, Sylvestre-Alvise, Ricco, Susanna, Riedel, Felix, Robinson, Dirk, Rohatgi, Pankaj, Rosgen, Bill, Rumbley, Sarah, Ryu, Moonkyung, Salgado, Anthony, Singla, Sahil, Schroff, Florian, Schumann, Candice, Shah, Tanmay, Shillingford, Brendan, Shivakumar, Kaushik, Shtatnov, Dennis, Singer, Zach, Sluzhaev, Evgeny, Sokolov, Valerii, Sottiaux, Thibault, Stimberg, Florian, Stone, Brad, Stutz, David, Su, Yu-Chuan, Tabellion, Eric, Tang, Shuai, Tao, David, Thomas, Kurt, Thornton, Gregory, Toor, Andeep, Udrescu, Cristian, Upadhyay, Aayush, Vasconcelos, Cristina, Vasiloff, Alex, Voynov, Andrey, Walker, Amanda, Wang, Luyu, Wang, Miaosen, Wang, Simon, Wang, Stanley, Wang, Qifei, Wang, Yuxiao, Weisz, Ágoston, Wiles, Olivia, Wu, Chenxia, Xu, Xingyu Federico, Xue, Andrew, Yang, Jianbo, Yu, Luo, Yurtoglu, Mete, Zand, Ali, Zhang, Han, Zhang, Jiageng, Zhao, Catherine, Zhaxybay, Adilet, Zhou, Miao, Zhu, Shengqi, Zhu, Zhenkai, Bloxwich, Dawn, Bordbar, Mahyar, Cobo, Luis C., Collins, Eli, Dai, Shengyang, Doshi, Tulsee, Dragan, Anca, Eck, Douglas, Hassabis, Demis, Hsiao, Sissie, Hume, Tom, Kavukcuoglu, Koray, King, Helen, Krawczyk, Jack, Li, Yeqing, Meier-Hellstern, Kathy, Orban, Andras, Pinsky, Yury, Subramanya, Amar, Vinyals, Oriol, Yu, Ting, and Zwols, Yori
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts. We describe our quality and responsibility evaluations. Imagen 3 is preferred over other state-of-the-art (SOTA) models at the time of evaluation. In addition, we discuss issues around safety and representation, as well as methods we used to minimize the potential harm of our models.
- Published
- 2024
4. CYBERSECEVAL 3: Advancing the Evaluation of Cybersecurity Risks and Capabilities in Large Language Models
- Author
Wan, Shengye, Nikolaidis, Cyrus, Song, Daniel, Molnar, David, Crnkovich, James, Grace, Jayson, Bhatt, Manish, Chennabasappa, Sahana, Whitman, Spencer, Ding, Stephanie, Ionescu, Vlad, Li, Yue, and Saxe, Joshua
- Subjects
Computer Science - Cryptography and Security, Computer Science - Machine Learning
- Abstract
We are releasing a new suite of security benchmarks for LLMs, CYBERSECEVAL 3, to continue the conversation on empirically measuring LLM cybersecurity risks and capabilities. CYBERSECEVAL 3 assesses 8 different risks across two broad categories: risk to third parties, and risk to application developers and end users. Compared to previous work, we add new areas focused on offensive security capabilities: automated social engineering, scaling manual offensive cyber operations, and autonomous offensive cyber operations. In this paper we discuss applying these benchmarks to the Llama 3 models and a suite of contemporaneous state-of-the-art LLMs, enabling us to contextualize risks both with and without mitigations in place.
- Published
- 2024
5. The Llama 3 Herd of Models
- Author
Dubey, Abhimanyu, Jauhri, Abhinav, Pandey, Abhinav, Kadian, Abhishek, Al-Dahle, Ahmad, Letman, Aiesha, Mathur, Akhil, Schelten, Alan, Yang, Amy, Fan, Angela, Goyal, Anirudh, Hartshorn, Anthony, Yang, Aobo, Mitra, Archi, Sravankumar, Archie, Korenev, Artem, Hinsvark, Arthur, Rao, Arun, Zhang, Aston, Rodriguez, Aurelien, Gregerson, Austen, Spataru, Ava, Roziere, Baptiste, Biron, Bethany, Tang, Binh, Chern, Bobbie, Caucheteux, Charlotte, Nayak, Chaya, Bi, Chloe, Marra, Chris, McConnell, Chris, Keller, Christian, Touret, Christophe, Wu, Chunyang, Wong, Corinne, Ferrer, Cristian Canton, Nikolaidis, Cyrus, Allonsius, Damien, Song, Daniel, Pintz, Danielle, Livshits, Danny, Esiobu, David, Choudhary, Dhruv, Mahajan, Dhruv, Garcia-Olano, Diego, Perino, Diego, Hupkes, Dieuwke, Lakomkin, Egor, AlBadawy, Ehab, Lobanova, Elina, Dinan, Emily, Smith, Eric Michael, Radenovic, Filip, Zhang, Frank, Synnaeve, Gabriel, Lee, Gabrielle, Anderson, Georgia Lewis, Nail, Graeme, Mialon, Gregoire, Pang, Guan, Cucurell, Guillem, Nguyen, Hailey, Korevaar, Hannah, Xu, Hu, Touvron, Hugo, Zarov, Iliyan, Ibarra, Imanol Arrieta, Kloumann, Isabel, Misra, Ishan, Evtimov, Ivan, Copet, Jade, Lee, Jaewon, Geffert, Jan, Vranes, Jana, Park, Jason, Mahadeokar, Jay, Shah, Jeet, van der Linde, Jelmer, Billock, Jennifer, Hong, Jenny, Lee, Jenya, Fu, Jeremy, Chi, Jianfeng, Huang, Jianyu, Liu, Jiawen, Wang, Jie, Yu, Jiecao, Bitton, Joanna, Spisak, Joe, Park, Jongsoo, Rocca, Joseph, Johnstun, Joshua, Saxe, Joshua, Jia, Junteng, Alwala, Kalyan Vasuden, Upasani, Kartikeya, Plawiak, Kate, Li, Ke, Heafield, Kenneth, Stone, Kevin, El-Arini, Khalid, Iyer, Krithika, Malik, Kshitiz, Chiu, Kuenley, Bhalla, Kunal, Rantala-Yeary, Lauren, van der Maaten, Laurens, Chen, Lawrence, Tan, Liang, Jenkins, Liz, Martin, Louis, Madaan, Lovish, Malo, Lubo, Blecher, Lukas, Landzaat, Lukas, de Oliveira, Luke, Muzzi, Madeline, Pasupuleti, Mahesh, Singh, Mannat, Paluri, Manohar, Kardas, Marcin, Oldham, Mathew, Rita, Mathieu, Pavlova, 
Maya, Kambadur, Melanie, Lewis, Mike, Si, Min, Singh, Mitesh Kumar, Hassan, Mona, Goyal, Naman, Torabi, Narjes, Bashlykov, Nikolay, Bogoychev, Nikolay, Chatterji, Niladri, Duchenne, Olivier, Çelebi, Onur, Alrassy, Patrick, Zhang, Pengchuan, Li, Pengwei, Vasic, Petar, Weng, Peter, Bhargava, Prajjwal, Dubal, Pratik, Krishnan, Praveen, Koura, Punit Singh, Xu, Puxin, He, Qing, Dong, Qingxiao, Srinivasan, Ragavan, Ganapathy, Raj, Calderer, Ramon, Cabral, Ricardo Silveira, Stojnic, Robert, Raileanu, Roberta, Girdhar, Rohit, Patel, Rohit, Sauvestre, Romain, Polidoro, Ronnie, Sumbaly, Roshan, Taylor, Ross, Silva, Ruan, Hou, Rui, Wang, Rui, Hosseini, Saghar, Chennabasappa, Sahana, Singh, Sanjay, Bell, Sean, Kim, Seohyun Sonia, Edunov, Sergey, Nie, Shaoliang, Narang, Sharan, Raparthy, Sharath, Shen, Sheng, Wan, Shengye, Bhosale, Shruti, Zhang, Shun, Vandenhende, Simon, Batra, Soumya, Whitman, Spencer, Sootla, Sten, Collot, Stephane, Gururangan, Suchin, Borodinsky, Sydney, Herman, Tamar, Fowler, Tara, Sheasha, Tarek, Georgiou, Thomas, Scialom, Thomas, Speckbacher, Tobias, Mihaylov, Todor, Xiao, Tong, Karn, Ujjwal, Goswami, Vedanuj, Gupta, Vibhor, Ramanathan, Vignesh, Kerkez, Viktor, Gonguet, Vincent, Do, Virginie, Vogeti, Vish, Petrovic, Vladan, Chu, Weiwei, Xiong, Wenhan, Fu, Wenyin, Meers, Whitney, Martinet, Xavier, Wang, Xiaodong, Tan, Xiaoqing Ellen, Xie, Xinfeng, Jia, Xuchao, Wang, Xuewei, Goldschlag, Yaelle, Gaur, Yashesh, Babaei, Yasmine, Wen, Yi, Song, Yiwen, Zhang, Yuchen, Li, Yue, Mao, Yuning, Coudert, Zacharie Delpierre, Yan, Zheng, Chen, Zhengxing, Papakipos, Zoe, Singh, Aaditya, Grattafiori, Aaron, Jain, Abha, Kelsey, Adam, Shajnfeld, Adam, Gangidi, Adithya, Victoria, Adolfo, Goldstand, Ahuva, Menon, Ajay, Sharma, Ajay, Boesenberg, Alex, Vaughan, Alex, Baevski, Alexei, Feinstein, Allie, Kallet, Amanda, Sangani, Amit, Yunus, Anam, Lupu, Andrei, Alvarado, Andres, Caples, Andrew, Gu, Andrew, Ho, Andrew, Poulton, Andrew, Ryan, Andrew, Ramchandani, Ankit, Franco, 
Annie, Saraf, Aparajita, Chowdhury, Arkabandhu, Gabriel, Ashley, Bharambe, Ashwin, Eisenman, Assaf, Yazdan, Azadeh, James, Beau, Maurer, Ben, Leonhardi, Benjamin, Huang, Bernie, Loyd, Beth, De Paola, Beto, Paranjape, Bhargavi, Liu, Bing, Wu, Bo, Ni, Boyu, Hancock, Braden, Wasti, Bram, Spence, Brandon, Stojkovic, Brani, Gamido, Brian, Montalvo, Britt, Parker, Carl, Burton, Carly, Mejia, Catalina, Wang, Changhan, Kim, Changkyu, Zhou, Chao, Hu, Chester, Chu, Ching-Hsiang, Cai, Chris, Tindal, Chris, Feichtenhofer, Christoph, Civin, Damon, Beaty, Dana, Kreymer, Daniel, Li, Daniel, Wyatt, Danny, Adkins, David, Xu, David, Testuggine, Davide, David, Delia, Parikh, Devi, Liskovich, Diana, Foss, Didem, Wang, Dingkang, Le, Duc, Holland, Dustin, Dowling, Edward, Jamil, Eissa, Montgomery, Elaine, Presani, Eleonora, Hahn, Emily, Wood, Emily, Brinkman, Erik, Arcaute, Esteban, Dunbar, Evan, Smothers, Evan, Sun, Fei, Kreuk, Felix, Tian, Feng, Ozgenel, Firat, Caggioni, Francesco, Guzmán, Francisco, Kanayet, Frank, Seide, Frank, Florez, Gabriela Medina, Schwarz, Gabriella, Badeer, Gada, Swee, Georgia, Halpern, Gil, Thattai, Govind, Herman, Grant, Sizov, Grigory, Guangyi, Zhang, Lakshminarayanan, Guna, Shojanazeri, Hamid, Zou, Han, Wang, Hannah, Zha, Hanwen, Habeeb, Haroun, Rudolph, Harrison, Suk, Helen, Aspegren, Henry, Goldman, Hunter, Damlaj, Ibrahim, Molybog, Igor, Tufanov, Igor, Veliche, Irina-Elena, Gat, Itai, Weissman, Jake, Geboski, James, Kohli, James, Asher, Japhet, Gaya, Jean-Baptiste, Marcus, Jeff, Tang, Jeff, Chan, Jennifer, Zhen, Jenny, Reizenstein, Jeremy, Teboul, Jeremy, Zhong, Jessica, Jin, Jian, Yang, Jingyi, Cummings, Joe, Carvill, Jon, Shepard, Jon, McPhie, Jonathan, Torres, Jonathan, Ginsburg, Josh, Wang, Junjie, Wu, Kai, U, Kam Hou, Saxena, Karan, Prasad, Karthik, Khandelwal, Kartikay, Zand, Katayoun, Matosich, Kathy, Veeraraghavan, Kaushik, Michelena, Kelly, Li, Keqian, Huang, Kun, Chawla, Kunal, Lakhotia, Kushal, Huang, Kyle, Chen, Lailin, Garg, Lakshya, A, 
Lavender, Silva, Leandro, Bell, Lee, Zhang, Lei, Guo, Liangpeng, Yu, Licheng, Moshkovich, Liron, Wehrstedt, Luca, Khabsa, Madian, Avalani, Manav, Bhatt, Manish, Tsimpoukelli, Maria, Mankus, Martynas, Hasson, Matan, Lennie, Matthew, Reso, Matthias, Groshev, Maxim, Naumov, Maxim, Lathi, Maya, Keneally, Meghan, Seltzer, Michael L., Valko, Michal, Restrepo, Michelle, Patel, Mihir, Vyatskov, Mik, Samvelyan, Mikayel, Clark, Mike, Macey, Mike, Wang, Mike, Hermoso, Miquel Jubert, Metanat, Mo, Rastegari, Mohammad, Bansal, Munish, Santhanam, Nandhini, Parks, Natascha, White, Natasha, Bawa, Navyata, Singhal, Nayan, Egebo, Nick, Usunier, Nicolas, Laptev, Nikolay Pavlovich, Dong, Ning, Zhang, Ning, Cheng, Norman, Chernoguz, Oleg, Hart, Olivia, Salpekar, Omkar, Kalinli, Ozlem, Kent, Parkin, Parekh, Parth, Saab, Paul, Balaji, Pavan, Rittner, Pedro, Bontrager, Philip, Roux, Pierre, Dollar, Piotr, Zvyagina, Polina, Ratanchandani, Prashant, Yuvraj, Pritish, Liang, Qian, Alao, Rachad, Rodriguez, Rachel, Ayub, Rafi, Murthy, Raghotham, Nayani, Raghu, Mitra, Rahul, Li, Raymond, Hogan, Rebekkah, Battey, Robin, Wang, Rocky, Maheswari, Rohan, Howes, Russ, Rinott, Ruty, Bondu, Sai Jayesh, Datta, Samyak, Chugh, Sara, Hunt, Sara, Dhillon, Sargun, Sidorov, Sasha, Pan, Satadru, Verma, Saurabh, Yamamoto, Seiji, Ramaswamy, Sharadh, Lindsay, Shaun, Feng, Sheng, Lin, Shenghao, Zha, Shengxin Cindy, Shankar, Shiva, Zhang, Shuqiang, Wang, Sinong, Agarwal, Sneha, Sajuyigbe, Soji, Chintala, Soumith, Max, Stephanie, Chen, Stephen, Kehoe, Steve, Satterfield, Steve, Govindaprasad, Sudarshan, Gupta, Sumit, Cho, Sungmin, Virk, Sunny, Subramanian, Suraj, Choudhury, Sy, Goldman, Sydney, Remez, Tal, Glaser, Tamar, Best, Tamara, Kohler, Thilo, Robinson, Thomas, Li, Tianhe, Zhang, Tianjun, Matthews, Tim, Chou, Timothy, Shaked, Tzook, Vontimitta, Varun, Ajayi, Victoria, Montanez, Victoria, Mohan, Vijai, Kumar, Vinay Satish, Mangla, Vishal, Albiero, Vítor, Ionescu, Vlad, Poenaru, Vlad, Mihailescu, Vlad Tiberiu, 
Ivanov, Vladimir, Li, Wei, Wang, Wenchen, Jiang, Wenwen, Bouaziz, Wes, Constable, Will, Tang, Xiaocheng, Wang, Xiaofang, Wu, Xiaojian, Wang, Xiaolan, Xia, Xide, Wu, Xilun, Gao, Xinbo, Chen, Yanjun, Hu, Ye, Jia, Ye, Qi, Ye, Li, Yenda, Zhang, Yilin, Zhang, Ying, Adi, Yossi, Nam, Youngjin, Yu, Wang, Hao, Yuchen, Qian, Yundi, He, Yuzi, Rait, Zach, DeVito, Zachary, Rosnbrick, Zef, Wen, Zhaoduo, Yang, Zhenyu, and Zhao, Zhiwei
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition
- Abstract
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
- Published
- 2024
6. Real-time detection of C-reactive protein in interstitial fluid using electrochemical impedance spectroscopy, towards wearable health monitoring
- Author
Grammoustianou, Aristea, Saeidi, Ali, Longo, Johan, Risch, Felix, and Ionescu, Adrian M.
- Subjects
Physics - Medical Physics, Physics - Applied Physics
- Abstract
Traditional methods for detecting the C-reactive protein (CRP) inflammation biomarker in blood are expensive, time-consuming and labor-intensive. Existing point-of-care CRP detection devices remain invasive, since they require blood sampling (finger-pricking or venous puncture). Here, we propose an electrochemical impedance spectroscopy (EIS)-based sensor for the real-time, fast, specific, sensitive, and label-free detection of C-reactive protein in the interstitial fluid (ISF), which can be accessed with minimally invasive microneedle arrays. The sensor has the potential to be integrated in a wearable device, similar to continuous glucose monitors, that will detect CRP in interstitial fluid in a non-invasive, inexpensive and straightforward manner. The affinity-based assay was tested in both buffer and an ISF-like solution. The limit of detection achieved was 0.7 µg/mL of CRP in buffer and 0.8 µg/mL in the ISF-like solution, and the sensor shows excellent linearity up to 10 µg/mL. Notably, the proposed sensor operates with a low sample volume (down to 5 µL) and has a response time of 100 seconds.
- Published
- 2024
7. CBM: Curriculum by Masking
- Author
Jarca, Andrei, Croitoru, Florinel-Alin, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
We propose Curriculum by Masking (CBM), a novel state-of-the-art curriculum learning strategy that effectively creates an easy-to-hard training schedule via patch (token) masking, offering significant accuracy improvements over the conventional training regime and previous curriculum learning (CL) methods. CBM leverages gradient magnitudes to prioritize the masking of salient image regions via a novel masking algorithm and a novel masking block. Our approach enables controlling sample difficulty via the patch masking ratio, generating an effective easy-to-hard curriculum by gradually introducing harder samples as training progresses. CBM operates with two easily configurable parameters, i.e. the number of patches and the curriculum schedule, making it a versatile curriculum learning approach for object recognition and detection. We conduct experiments with various neural architectures, ranging from convolutional networks to vision transformers, on five benchmark data sets (CIFAR-10, CIFAR-100, ImageNet, Food-101 and PASCAL VOC), to compare CBM with conventional as well as curriculum-based training regimes. Our results reveal the superiority of our strategy compared with the state-of-the-art curriculum learning regimes. We also observe improvements in transfer learning contexts, where CBM surpasses previous work by considerable margins in terms of accuracy. We release our code for free non-commercial use at https://github.com/CroitoruAlin/CBM., Comment: Accepted at ECAI 2024
- Published
- 2024
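The easy-to-hard schedule this abstract describes boils down to two knobs: a masking ratio that grows during training, and a saliency-based choice of which patches to hide. A minimal sketch under assumed names (`masking_ratio` and `saliency_order` are illustrative stand-ins, not the paper's API):

```python
def masking_ratio(step, total_steps, final_ratio=0.5):
    """Linear easy-to-hard curriculum: no masking at the start of training,
    `final_ratio` of the patches hidden by the end."""
    t = min(step / max(total_steps, 1), 1.0)
    return t * final_ratio

def saliency_order(grad_magnitudes):
    """Order patch indices by gradient magnitude, most salient first, so
    the most informative image regions are the first candidates to mask."""
    return sorted(range(len(grad_magnitudes)),
                  key=lambda i: grad_magnitudes[i], reverse=True)

# Halfway through training, a quarter of the patches are hidden,
# starting with the patch that has the largest gradient magnitude.
ratio = masking_ratio(step=50, total_steps=100)
order = saliency_order([0.1, 0.9, 0.5])
```

The two easily configurable parameters mentioned in the abstract (number of patches and curriculum schedule) map onto the grid size fed to `saliency_order` and the shape of `masking_ratio`.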
8. PoPreRo: A New Dataset for Popularity Prediction of Romanian Reddit Posts
- Author
Rogoz, Ana-Cristina, Nechita, Maria Ilinca, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
We introduce PoPreRo, the first dataset for Popularity Prediction of Romanian posts collected from Reddit. The PoPreRo dataset includes a varied compilation of post samples from five distinct subreddits of Romania, totaling 28,107 data samples. Along with our novel dataset, we introduce a set of competitive models to be used as baselines for future research. Interestingly, the top-scoring model achieves an accuracy of 61.35% and a macro F1 score of 60.60% on the test set, indicating that the popularity prediction task on PoPreRo is very challenging. Further investigations based on few-shot prompting the Falcon-7B Large Language Model also point in the same direction. We thus believe that PoPreRo is a valuable resource that can be used to evaluate models on predicting the popularity of social media posts in Romanian. We release our dataset at https://github.com/ana-rogoz/PoPreRo., Comment: Accepted at ICPR 2024
- Published
- 2024
9. PQPP: A Joint Benchmark for Text-to-Image Prompt and Query Performance Prediction
- Author
Poesina, Eduard, Costache, Adriana Valentina, Chifu, Adrian-Gabriel, Mothe, Josiane, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
Text-to-image generation has recently emerged as a viable alternative to text-to-image retrieval, due to the visually impressive results of generative diffusion models. Although query performance prediction is an active research topic in information retrieval, to the best of our knowledge, there is no prior study that analyzes the difficulty of queries (prompts) in text-to-image generation, based on human judgments. To this end, we introduce the first dataset of prompts which are manually annotated in terms of image generation performance. In order to determine the difficulty of the same prompts in image retrieval, we also collect manual annotations that represent retrieval performance. We thus propose the first benchmark for joint text-to-image prompt and query performance prediction, comprising 10K queries. Our benchmark enables: (i) the comparative assessment of the difficulty of prompts/queries in image generation and image retrieval, and (ii) the evaluation of prompt/query performance predictors addressing both generation and retrieval. We present results with several pre-generation/retrieval and post-generation/retrieval performance predictors, thus providing competitive baselines for future research. Our benchmark and code are publicly available under the CC BY 4.0 license at https://github.com/Eduard6421/PQPP.
- Published
- 2024
10. Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers
- Author
Grigore, Diana-Nicoleta, Georgescu, Mariana-Iuliana, Justo, Jon Alvarez, Johansen, Tor, Ionescu, Andreea Iuliana, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Few-shot knowledge distillation recently emerged as a viable approach to harness the knowledge of large-scale pre-trained models, using limited data and computational resources. In this paper, we propose a novel few-shot feature distillation approach for vision transformers. Our approach is based on two key steps. Leveraging the fact that vision transformers have a consistent depth-wise structure, we first copy the weights from intermittent layers of existing pre-trained vision transformers (teachers) into shallower architectures (students), where the intermittence factor controls the complexity of the student transformer with respect to its teacher. Next, we employ an enhanced version of Low-Rank Adaptation (LoRA) to distill knowledge into the student in a few-shot scenario, aiming to recover the information processing carried out by the skipped teacher layers. We present comprehensive experiments with supervised and self-supervised transformers as teachers, on five data sets from various domains, including natural, medical and satellite images. The empirical results confirm the superiority of our approach over competitive baselines. Moreover, the ablation results demonstrate the usefulness of each component of the proposed pipeline.
- Published
- 2024
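The first step this abstract describes, copying weights from intermittent teacher layers into a shallower student, can be sketched as follows (layer lists and the helper name are hypothetical; the few-shot LoRA distillation stage is omitted):

```python
def copy_intermittent_layers(teacher_layers, intermittence):
    """Initialize a student from every `intermittence`-th teacher layer.

    A larger intermittence factor skips more teacher layers, yielding a
    shallower, lower-capacity student relative to its teacher.
    """
    kept = list(range(0, len(teacher_layers), intermittence))
    student_layers = [teacher_layers[k] for k in kept]
    return student_layers, kept

# A 12-layer teacher with intermittence 2 yields a 6-layer student
# initialized from teacher layers 0, 2, 4, 6, 8 and 10; the LoRA stage
# would then be trained, on limited data, to recover the information
# processing carried out by the skipped layers.
student, sources = copy_intermittent_layers([f"w{i}" for i in range(12)], 2)
```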
11. Curriculum Direct Preference Optimization for Diffusion and Consistency Models
- Author
Croitoru, Florinel-Alin, Hondru, Vlad, Ionescu, Radu Tudor, Sebe, Nicu, and Shah, Mubarak
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Direct Preference Optimization (DPO) has been proposed as an effective and efficient alternative to reinforcement learning from human feedback (RLHF). In this paper, we propose a novel and enhanced version of DPO based on curriculum learning for text-to-image generation. Our method is divided into two training stages. First, a ranking of the examples generated for each prompt is obtained by employing a reward model. Then, increasingly difficult pairs of examples are sampled and provided to a text-to-image generative (diffusion or consistency) model. Generated samples that are far apart in the ranking are considered to form easy pairs, while those that are close in the ranking form hard pairs. In other words, we use the rank difference between samples as a measure of difficulty. The sampled pairs are split into batches according to their difficulty levels, which are gradually used to train the generative model. Our approach, Curriculum DPO, is compared against state-of-the-art fine-tuning approaches on three benchmarks, outperforming the competing methods in terms of text alignment, aesthetics and human preference. Our code is available at https://anonymous.4open.science/r/Curriculum-DPO-EE14.
- Published
- 2024
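The rank-difference-as-difficulty idea in this abstract can be sketched directly: pairs far apart in the reward-model ranking are easy, adjacent pairs are hard. An illustrative sketch (not the paper's exact batching scheme):

```python
from itertools import combinations

def curriculum_pairs(ranked):
    """Form all preference pairs from a reward-model ranking (best first)
    and order them easy-to-hard: a large rank difference means the
    preferred sample is clearly better (easy pair); adjacent ranks give
    hard pairs that are presented late in training.
    """
    pairs = [(ranked[i], ranked[j], j - i)
             for i, j in combinations(range(len(ranked)), 2)]
    pairs.sort(key=lambda p: -p[2])   # easy pairs (far apart) first
    return [(win, lose) for win, lose, _ in pairs]

# With four samples ranked s0 > s1 > s2 > s3, the easiest pair is
# (s0, s3), with rank difference 3; (s0, s1), (s1, s2) and (s2, s3)
# are the hard pairs used at the end of the curriculum.
schedule = curriculum_pairs(["s0", "s1", "s2", "s3"])
```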
12. A Novel Cartography-Based Curriculum Learning Method Applied on RoNLI: The First Romanian Natural Language Inference Corpus
- Author
Poesina, Eduard, Caragea, Cornelia, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Natural language inference (NLI), the task of recognizing the entailment relationship in sentence pairs, is an actively studied topic serving as a proxy for natural language understanding. Despite the relevance of the task in building conversational agents and improving text classification, machine translation and other NLP tasks, to the best of our knowledge, there is no publicly available NLI corpus for the Romanian language. To this end, we introduce the first Romanian NLI corpus (RoNLI) comprising 58K training sentence pairs, which are obtained via distant supervision, and 6K validation and test sentence pairs, which are manually annotated with the correct labels. We conduct experiments with multiple machine learning methods based on distant learning, ranging from shallow models based on word embeddings to transformer-based neural networks, to establish a set of competitive baselines. Furthermore, we improve on the best model by employing a new curriculum learning strategy based on data cartography. Our dataset and code to reproduce the baselines are available at https://github.com/Eduard6421/RONLI., Comment: Accepted at ACL 2024 (Main)
- Published
- 2024
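Data cartography, which the abstract's curriculum builds on, characterizes each training example by the model's confidence in the gold label across epochs and the variability of that confidence; a curriculum can then present high-confidence (easy) examples first. A toy sketch under assumed names (the functions are illustrative, not the paper's implementation):

```python
import statistics

def cartography_stats(gold_prob_per_epoch):
    """Confidence = mean probability assigned to the gold label across
    epochs; variability = its population standard deviation."""
    conf = statistics.mean(gold_prob_per_epoch)
    var = statistics.pstdev(gold_prob_per_epoch)
    return conf, var

def easy_to_hard(prob_histories):
    """Order example indices from easy (high confidence) to hard."""
    confs = [cartography_stats(h)[0] for h in prob_histories]
    return sorted(range(len(confs)), key=lambda i: -confs[i])

# Example 0 is consistently easy, example 1 consistently hard, so a
# cartography-based curriculum would schedule them as 0, then 2, then 1.
order = easy_to_hard([[0.9, 0.95, 0.92], [0.2, 0.25, 0.3], [0.5, 0.6, 0.7]])
```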
13. Nonlinear Landau damping and wave operators in sharp Gevrey spaces
- Author
Ionescu, A. D., Pausader, B., Wang, X., and Widmayer, K.
- Subjects
Mathematics - Analysis of PDEs, Mathematical Physics
- Abstract
We prove nonlinear Landau damping in optimal weighted Gevrey-3 spaces for solutions of the confined Vlasov-Poisson system on $\mathbb{T}^d\times\mathbb{R}^d$ which are small perturbations of homogeneous Penrose-stable equilibria. We also prove the existence of nonlinear scattering operators associated to the confined Vlasov-Poisson evolution, as well as suitable injectivity properties and Lipschitz estimates (also in weighted Gevrey-3 spaces) on these operators. Our results give definitive answers to two well-known open problems in the field, both of them stated in the recent review of Bedrossian [4, Section 6]., Comment: 38 pages
- Published
- 2024
14. Moment matching based reduced closed-loop design to achieve asymptotic performance
- Author
Ionescu, Tudor C.
- Subjects
Mathematics - Optimization and Control, Mathematics - Dynamical Systems
- Abstract
In this paper, moment matching techniques are adopted to obtain reduced-order closed-loop systems, with reduced-order controllers that maintain closed-loop stability and guarantee the desired asymptotic performance, after revealing the relationship between the Internal Model Principle used in control design and the time-domain moment matching problem. As a result, the design of a low-order controller can start from casting the achievement of asymptotic performance as a moment matching problem, resulting in a reduced-order closed-loop system., Comment: 7 pages. Preliminary results have been presented at CDC 2013
- Published
- 2024
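For background on the moment matching problem mentioned above, in the standard linear setting (this is textbook material, not a claim about the paper's specific construction): for a system $\dot{x} = Ax + Bu$, $y = Cx$, the 0-th moment at an interpolation point $s^\ast$ that is not an eigenvalue of $A$ is the transfer function evaluated there,

```latex
W(s) = C\,(sI - A)^{-1} B, \qquad
\eta_0(s^\ast) = W(s^\ast) = C\,(s^\ast I - A)^{-1} B,
```

and a reduced-order model $\hat{W}$ matches the moments of $W$ at the points $\{s_1, \dots, s_\nu\}$ when $\hat{W}(s_k) = W(s_k)$ for $k = 1, \dots, \nu$. The paper's contribution lies in connecting this interpolation problem to the Internal Model Principle so that the reduced controller preserves the desired asymptotics.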
15. UnibucLLM: Harnessing LLMs for Automated Prediction of Item Difficulty and Response Time for Multiple-Choice Questions
- Author
Rogoz, Ana-Cristina and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
This work explores a novel data augmentation method based on Large Language Models (LLMs) for predicting item difficulty and response time of retired USMLE Multiple-Choice Questions (MCQs) in the BEA 2024 Shared Task. Our approach is based on augmenting the dataset with answers from zero-shot LLMs (Falcon, Meditron, Mistral) and employing transformer-based models based on six alternative feature combinations. The results suggest that predicting the difficulty of questions is more challenging. Notably, our top-performing methods consistently include the question text and benefit from the variability of LLM answers, highlighting the potential of LLMs for improving automated assessment in medical licensing exams. We make our code available at https://github.com/ana-rogoz/BEA-2024., Comment: Accepted at BEA 2024 (NAACL Workshop)
- Published
- 2024
16. Deepfake Sentry: Harnessing Ensemble Intelligence for Resilient Detection and Generalisation
- Author
Ştefan, Liviu-Daniel, Stanciu, Dan-Cristian, Dogariu, Mihai, Constantin, Mihai Gabriel, Jitaru, Andrei Cosmin, and Ionescu, Bogdan
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Recent advancements in Generative Adversarial Networks (GANs) have enabled photorealistic image generation with high quality. However, the malicious use of such generated media has raised concerns regarding visual misinformation. Although deepfake detection research has demonstrated high accuracy, it is vulnerable to advances in generation techniques and adversarial iterations on detection countermeasures. To address this, we propose a proactive and sustainable deepfake training augmentation solution that introduces artificial fingerprints into models. We achieve this by employing an ensemble learning approach that incorporates a pool of autoencoders that mimic the effect of the artefacts introduced by the deepfake generator models. Experiments on three datasets reveal that our proposed ensemble autoencoder-based data augmentation learning approach offers improvements in terms of generalisation, resistance against basic data perturbations such as noise, blurring, sharpness enhancement, and affine transforms, resilience to commonly used lossy compression algorithms such as JPEG, and enhanced resistance against adversarial attacks., Comment: 16 pages, 1 figure, U.P.B. Sci. Bull., Series C, Vol. 85, Iss. 4, 2023
- Published
- 2024
17. About the Cohen-Macaulay defect and almost Cohen-Macaulay rings
- Author
-
Ionescu, Cristodor
- Subjects
Mathematics - Commutative Algebra - Abstract
We notice the connection between almost Cohen-Macaulay rings and the Cohen-Macaulay defect. We introduce a Serre-type condition for modules, that is connected to the Cohen-Macaulay defect in the same way that the condition $(S_n)$ is connected to Cohen-Macaulay modules.
- Published
- 2024
18. TEE4EHR: Transformer Event Encoder for Better Representation Learning in Electronic Health Records
- Author
-
Karami, Hojjat, Atienza, David, and Ionescu, Anisoara
- Subjects
Computer Science - Machine Learning - Abstract
Irregular sampling of time series in electronic health records (EHRs) is one of the main challenges for developing machine learning models. Additionally, the pattern of missing data in certain clinical variables is not random but depends on the decisions of clinicians and the state of the patient. Point processes provide a mathematical framework for analyzing event sequence data that is consistent with irregular sampling patterns. Our model, TEE4EHR, is a transformer event encoder (TEE) with a point process loss that encodes the pattern of laboratory tests in EHRs. The utility of our TEE has been investigated on a variety of benchmark event sequence datasets. Additionally, we conduct experiments on two real-world EHR databases to provide a more comprehensive evaluation of our model. Firstly, in a self-supervised learning approach, the TEE is jointly learned with an existing attention-based deep neural network, which gives superior performance in negative log-likelihood and future event prediction. In addition, we propose an algorithm for aggregating attention weights that can reveal the interactions between events. Secondly, we transfer and freeze the learned TEE to the downstream task of outcome prediction, where it outperforms state-of-the-art models for handling irregularly sampled time series. Furthermore, our results demonstrate that our approach can improve representation learning in EHRs and can be useful for clinical prediction tasks.
- Published
- 2024
19. TimEHR: Image-based Time Series Generation for Electronic Health Records
- Author
-
Karami, Hojjat, Hartley, Mary-Anne, Atienza, David, and Ionescu, Anisoara
- Subjects
Computer Science - Machine Learning - Abstract
Time series in Electronic Health Records (EHRs) present unique challenges for generative models, such as irregular sampling, missing values, and high dimensionality. In this paper, we propose a novel generative adversarial network (GAN) model, TimEHR, to generate time series data from EHRs. In particular, TimEHR treats time series as images and is based on two conditional GANs. The first GAN generates missingness patterns, and the second GAN generates time series values based on the missingness pattern. Experimental results on three real-world EHR datasets show that TimEHR outperforms state-of-the-art methods in terms of fidelity, utility, and privacy metrics.
- Published
- 2024
20. Cascaded Cross-Modal Transformer for Audio-Textual Classification
- Author
-
Ristea, Nicolae-Catalin, Anghel, Andrei, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language ,Computer Science - Machine Learning ,Computer Science - Sound ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
Speech classification tasks often require powerful language understanding models to grasp useful features, which becomes problematic when limited training data is available. To attain superior classification performance, we propose to harness the inherent value of multimodal representations by transcribing speech using automatic speech recognition (ASR) models and translating the transcripts into different languages via pretrained translation models. We thus obtain an audio-textual (multimodal) representation for each data sample. Subsequently, we combine language-specific Bidirectional Encoder Representations from Transformers (BERT) with Wav2Vec2.0 audio features via a novel cascaded cross-modal transformer (CCMT). Our model is based on two cascaded transformer blocks. The first one combines text-specific features from distinct languages, while the second one combines acoustic features with multilingual features previously learned by the first transformer block. We employed our system in the Requests Sub-Challenge of the ACM Multimedia 2023 Computational Paralinguistics Challenge. CCMT was declared the winning solution, obtaining an unweighted average recall (UAR) of 65.41% and 85.87% for complaint and request detection, respectively. Moreover, we applied our framework on the Speech Commands v2 and HarperValleyBank dialog data sets, surpassing previous studies reporting results on these benchmarks. Our code is freely available for download at: https://github.com/ristea/ccmt., Comment: Accepted for publication in Artificial Intelligence Review
- Published
- 2024
21. Learning from One Continuous Video Stream
- Author
-
Carreira, João, King, Michael, Pătrăucean, Viorica, Gokay, Dilara, Ionescu, Cătălin, Yang, Yi, Zoran, Daniel, Heyward, Joseph, Doersch, Carl, Aytar, Yusuf, Damen, Dima, and Zisserman, Andrew
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
We introduce a framework for online learning from a single continuous video stream -- the way people and animals learn, without mini-batches, data augmentation or shuffling. This poses great challenges given the high correlation between consecutive video frames, and there is very little prior work on the topic. Our framework allows us to do a first deep dive into the topic and includes a collection of streams and tasks composed from two existing video datasets, plus methodology for performance evaluation that considers both adaptation and generalization. We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation as well as between arbitrary tasks, without ever requiring changes to models and always using the same pixel loss. Equipped with this framework we obtained large single-stream learning gains from pre-training with a novel family of future prediction tasks, found that momentum hurts, and that the pace of weight updates matters. The combination of these insights leads to matching the performance of IID learning with batch size 1, when using the same architecture and without costly replay buffers., Comment: CVPR camera ready version
- Published
- 2023
22. Accounting for Localized Deformation: A Simple Computation of True Stress in Micropillar Compression Experiments
- Author
-
Smiri, J., Salman, O. U., Ghidelli, M., and Ionescu, I. R.
- Published
- 2024
23. Learning Rate Curriculum
- Author
-
Croitoru, Florinel-Alin, Ristea, Nicolae-Cătălin, Ionescu, Radu Tudor, and Sebe, Nicu
- Published
- 2024
24. Fats Extracted from Oil Press Cakes, Fish Meat, and Chicken Hearts as Potential CoQ10 Supplements
- Author
-
Semeniuc, Cristina Anamaria, Mandrioli, Mara, Podar, Andersina Simina, Ranga, Floricuța, Socaciu, Maria-Ioana, Ionescu, Simona Raluca, Fogarasi, Melinda, Fărcaș, Anca Corina, Toschi, Tullia Gallina, Vodnar, Dan Cristian, and Socaci, Sonia Ancuța
- Published
- 2024
25. Acoustic Monitoring of Inelastic Compaction in Porous Granular Materials
- Author
-
Canel, Vincent, Jia, Xiaoping, Campillo, Michel, and Ionescu, Ioan
- Subjects
Condensed Matter - Soft Condensed Matter - Abstract
We study the transition from cohesive to non-cohesive granular states of synthetic rocks under oedometric loading, combining simultaneous measurements of ultrasound velocity and acoustic emissions. Our samples are agglomerates made of glass beads bonded with a few percent of cement, either ductile or brittle. These cemented granular samples exhibit an inelastic compaction beyond certain axial stresses likely due to the formation of compaction bands, which is accompanied by a significant decrease of compressional wave velocity. Upon subsequent cyclic unloading and reloading with constant consolidation stress, we found the mechanical and acoustic responses similar to those in non-cohesive granular materials, which can be interpreted within the effective medium theory based on the Digby bonding model. Moreover, this model allows P-wave velocity measured at vanishing pressure to be interpreted as an indicator of the debonding on the scale of grain contact. During the inelastic compaction, stick-slip-like stress drops were observed in brittle cement-bonded granular samples accompanied by the instantaneous decrease of the P-wave velocity and acoustic emissions which display an Omori-like law for foreshocks, i.e., precursors. By contrast, mechanical responses of ductile cement-bonded granular samples are smooth (without visible stick-slip-like stress drops) and mostly aseismic. By applying a cyclic loading and unloading with increasing consolidation stress, we observed a Kaiser-like memory effect in the brittle cement-bonded sample in the weakly damaged state which tends to disappear when the bonds are mostly broken in the non-cohesive granular state after large-amplitude loading. Our study shows that the macroscopic ductile and brittle behavior of cemented granular media is controlled by the local processes on the scale of the bonds between grains., Comment: 22 pages, 15 figures
- Published
- 2023
26. Sea-Land-Cloud Segmentation in Satellite Hyperspectral Imagery by Deep Learning
- Author
-
Justo, Jon Alvarez, Garrett, Joseph L., Georgescu, Mariana-Iuliana, Gonzalez-Llorente, Jesus, Ionescu, Radu Tudor, and Johansen, Tor Arne
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
Satellites are increasingly adopting on-board AI for enhanced autonomy through in-orbit inference. In this context, the use of deep learning (DL) techniques for segmentation in hyperspectral (HS) satellite imagery offers advantages for remote sensing applications, and therefore, we train 16 different models, whose code is made available through our study and which we consider relevant for on-board multi-class segmentation of HS imagery, focusing on classifying oceanic (sea), terrestrial (land), and cloud formations. We employ the HYPSO-1 mission as an illustrative case for sea-land-cloud segmentation, and to demonstrate the utility of the segments, we introduce a novel sea-land-cloud ranking application scenario. We consider how to prioritize HS image downlink based on sea, land, and cloud coverage levels from the segmented images. We comparatively evaluate the models for future in-orbit deployment, considering performance, parameter count, and inference time. The models include both shallow and deep models, and after proposing four new DL models, we demonstrate that segmenting single spectral signatures (1D) outperforms 3D data processing comprising both spectral (1D) and spatial (2D) contexts. We conclude that our lightweight DL model, called 1D-Justo-LiuNet, consistently surpasses state-of-the-art models for sea-land-cloud segmentation, such as U-Net and its variations, in terms of performance (0.93 accuracy) and parameter count (4,563). However, the 1D models present longer inference time (15s) in the tested processing architecture, which seems to be a suboptimal architecture for this purpose.
Finally, after demonstrating that in-orbit segmentation should occur post L1b radiance calibration rather than on raw data, we also show that reducing spectral channels down to 3 lowers models' parameter counts and inference time, at the cost of weaker segmentation performance., Comment: Remote Sensing, Satellite Imagery, Hyperspectral Imaging, Deep Learning, Segmentation
- Published
- 2023
27. A Novel Contrastive Learning Method for Clickbait Detection on RoCliCo: A Romanian Clickbait Corpus of News Articles
- Author
-
Broscoteanu, Daria-Mihaela and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
To increase revenue, news websites often resort to using deceptive news titles, luring users into clicking on the title and reading the full news. Clickbait detection is the task that aims to automatically detect this form of false advertisement and avoid wasting the precious time of online users. Despite the importance of the task, to the best of our knowledge, there is no publicly available clickbait corpus for the Romanian language. To this end, we introduce a novel Romanian Clickbait Corpus (RoCliCo) comprising 8,313 news samples which are manually annotated with clickbait and non-clickbait labels. Furthermore, we conduct experiments with four machine learning methods, ranging from handcrafted models to recurrent and transformer-based neural networks, to establish a line-up of competitive baselines. We also carry out experiments with a weighted voting ensemble. Among the considered baselines, we propose a novel BERT-based contrastive learning model that learns to encode news titles and contents into a deep metric space such that titles and contents of non-clickbait news have high cosine similarity, while titles and contents of clickbait news have low cosine similarity. Our data set and code to reproduce the baselines are publicly available for download at https://github.com/dariabroscoteanu/RoCliCo., Comment: Accepted at EMNLP 2023
- Published
- 2023
28. Accounting for localized deformation: a simple computation of true stress in micropillar compression experiments
- Author
-
Smiri, Jalal, Salman, Oguz Umut, Ghidelli, Matteo, and Ionescu, Ioan R.
- Subjects
Condensed Matter - Materials Science ,Condensed Matter - Mesoscale and Nanoscale Physics - Abstract
Compression experiments are widely used to study the mechanical properties of materials at micro- and nanoscale. However, the conventional engineering stress measurement method used in these experiments neglects to account for the alterations in the material's shape during loading. This can lead to inaccurate stress values and potentially misleading conclusions about the material's mechanical behavior especially in the case of localized deformation. To address this issue, we present a method for calculating true stress in cases of localized plastic deformation commonly encountered in experimental settings: (i) a single band and (ii) two bands oriented in arbitrary directions with respect to the vertical axis of the pillar (either in the same or opposite directions). Our simple analytic formulas can be applied to homogeneous and isotropic materials and crystals, requiring only standard data (displacement-force curve, aspect ratio, shear band angle and elastic strain limit) obtained from experimental results and eliminating the need for finite element computations. Our approach provides a more precise interpretation of experimental results and can serve as a valuable and simple tool in material design and characterization., Comment: arXiv admin note: text overlap with arXiv:2012.12780
- Published
- 2023
29. Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
- Author
-
Hondru, Vlad and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Diffusion models showcased strong capabilities in image synthesis, being used in many computer vision tasks with great success. To this end, we propose to explore a new use case, namely to copy black-box classification models without having access to the original training data, the architecture, and the weights of the model, i.e., the model is only exposed through an inference API. More specifically, we can only observe the (soft or hard) labels for some image samples passed as input to the model. Furthermore, we consider an additional constraint limiting the number of model calls, mostly focusing our research on few-call model stealing. In order to solve the model extraction task given the applied restrictions, we propose the following framework. As training data, we create a synthetic data set (called proxy data set) by leveraging the ability of diffusion models to generate realistic and diverse images. Given a maximum number of allowed API calls, we pass the respective number of samples through the black-box model to collect labels. Finally, we distill the knowledge of the black-box teacher (attacked model) into a student model (copy of the attacked model), harnessing both labeled and unlabeled data generated by the diffusion model. We employ a novel active self-paced learning framework to make the most of the proxy data during distillation. Our empirical results on two data sets confirm the superiority of our framework over two state-of-the-art methods in the few-call model extraction scenario.
- Published
- 2023
30. Learning Using Generated Privileged Information by Text-to-Image Diffusion Models
- Author
-
Menadil, Rafael-Edy, Georgescu, Mariana-Iuliana, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Learning Using Privileged Information is a particular type of knowledge distillation where the teacher model benefits from an additional data representation during training, called privileged information, improving the student model, which does not see the extra representation. However, privileged information is rarely available in practice. To this end, we propose a text classification framework that harnesses text-to-image diffusion models to generate artificial privileged information. The generated images and the original text samples are further used to train multimodal teacher models based on state-of-the-art transformer-based architectures. Finally, the knowledge from multimodal teachers is distilled into a text-based (unimodal) student. Hence, by employing a generative model to produce synthetic data as privileged information, we guide the training of the student model. Our framework, called Learning Using Generated Privileged Information (LUGPI), yields noticeable performance gains on four text classification data sets, demonstrating its potential in text classification without any additional cost during inference., Comment: Accepted at ICPR 2024
- Published
- 2023
31. Gamma Hedging and Rough Paths
- Author
-
Armstrong, John and Ionescu, Andrei
- Subjects
Quantitative Finance - Mathematical Finance - Abstract
We apply rough-path theory to study the discrete-time gamma-hedging strategy. We show that if a trader knows that the market price of a set of European options will be given by a diffusive pricing model, then the discrete-time gamma-hedging strategy will enable them to replicate other European options so long as the underlying pricing signal is sufficiently regular. This is a sure result and does not require that the underlying pricing signal has a quadratic variation corresponding to a probabilistic pricing model. We show how to generalise this result to exotic derivatives when the gamma is defined to be the Gubinelli derivative of the delta by deriving rough-path versions of the Clark-Ocone formula which hold surely. We illustrate our theory by proving that if the stock price process is sufficiently regular, as is the implied volatility process of a European derivative with maturity $T$ and smooth payoff $f(S_T)$ satisfying $f^{\prime \prime}>0$, one can replicate with certainty any European derivative with smooth payoff and maturity $T$.
- Published
- 2023
32. RoDia: A New Dataset for Romanian Dialect Identification from Speech
- Author
-
Rotaru, Codrut, Ristea, Nicolae-Catalin, and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language ,Computer Science - Sound ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
We introduce RoDia, the first dataset for Romanian dialect identification from speech. The RoDia dataset includes a varied compilation of speech samples from five distinct regions of Romania, covering both urban and rural environments, totaling 2 hours of manually annotated speech data. Along with our dataset, we introduce a set of competitive models to be used as baselines for future research. The top scoring model achieves a macro F1 score of 59.83% and a micro F1 score of 62.08%, indicating that the task is challenging. We thus believe that RoDia is a valuable resource that will stimulate research aiming to address the challenges of Romanian dialect identification. We release our dataset at https://github.com/codrut2/RoDia., Comment: Accepted at NAACL 2024
- Published
- 2023
33. Oxytocin Exhibits Neuroprotective Effects on Hippocampal Cultures under Severe Oxygen–Glucose Deprivation Conditions
- Author
-
Ionescu, Mara Ioana, Grigoras, Ioana-Florentina, Ionescu, Rosana-Bristena, Chitimus, Diana Maria, Haret, Robert Mihai, Ianosi, Bogdan, Ceanga, Mihai, and Zagrean, Ana-Maria
- Subjects
perinatal asphyxia ,hypoxic-ischemic encephalopathy ,oxytocin ,oxygen–glucose deprivation ,hippocampal cell cultures ,GABA ,Biology (General) ,QH301-705.5 - Abstract
Perinatal asphyxia (PA) and hypoxic-ischemic encephalopathy can result in severe, long-lasting neurological deficits. In vitro models, such as oxygen–glucose deprivation (OGD), are used experimentally to investigate neuronal response to metabolic stress. However, multiple variables can affect the severity level of OGD/PA and may confound any measured treatment effect. Oxytocin (OXT) has emerged as a potential neuroprotective agent against the deleterious effects of PA. Previous studies have demonstrated OXT’s potential to enhance neuronal survival in immature hippocampal cultures exposed to OGD, possibly by modulating gamma-aminobutyric acid-A receptor activity. Moreover, OXT’s precise impact on developing hippocampal neurons under different severities of OGD/PA remains uncertain. In this study, we investigated the effects of OXT (0.1 µM and 1 µM) on 7-day-old primary rat hippocampal cultures subjected to 2 h OGD/sham normoxic conditions. Cell culture viability was determined using the resazurin assay. Our results indicate that the efficacy of 1 µM OXT treatment varied according to the severity of the OGD-induced lesion, exhibiting a protective effect (p = 0.022) only when cellular viability dropped below 49.41% in non-treated OGD cultures compared to normoxic ones. Furthermore, administration of 0.1 µM OXT did not yield significant effects, irrespective of lesion severity (p > 0.05). These findings suggest that 1 µM OXT treatment during OGD confers neuroprotection exclusively in severe lesions in hippocampal neurons after 7 days in vitro. Further research is warranted to elucidate the mechanisms involved in OXT-mediated neuroprotection.
- Published
- 2024
34. Strategies to accelerate the elimination of cervical cancer in British Columbia, Canada: a modelling study
- Author
-
Pataky, Reka E., Izadi-Najafabadi, Sara, Smith, Laurie W., Gottschlich, Anna, Ionescu, Diana, Proctor, Lily, Ogilvie, Gina S., and Peacock, Stuart
- Subjects
British Columbia -- Health aspects ,Cancer -- Diagnosis ,Cervical cancer -- Prevention -- Diagnosis ,Health - Abstract
Background: To eliminate cervical cancer in Canada by 2040, defined as an annual age-standardized incidence rate (ASIR) lower than 4.0 per 100 000 women, the Canadian Partnership Against Cancer (CPAC) identified 3 priorities for action: increasing human papillomavirus (HPV) vaccine coverage, implementing HPV-based screening and increasing screening participation, and improving follow-up after abnormal screen results. Our objective was to explore the impact of these priorities on the projected time to elimination of cervical cancer in British Columbia. Methods: We used OncoSim-Cervical, a microsimulation model led and supported by CPAC and developed by Statistics Canada that simulates HPV transmission and the natural history of cervical cancer for the Canadian population. We updated model parameters to reflect BC's historical participation rates and program design. We simulated the transition to HPV-based screening and developed scenarios to explore the additional impact of achieving 90% vaccination coverage, 95% screening recruitment, 90% on-time screening, and 95% follow-up compliance. We projected cervical cancer incidence, ASIR, and year of elimination for the population of BC for 2023-2050. Results: HPV-based screening at current vaccination, participation, and follow-up rates can eliminate cervical cancer by 2034. Increasing on-time screening and follow-up compliance could achieve this target by 2031. Increasing vaccination coverage has a small impact over this time horizon. Interpretation: With the implementation of HPV-based screening, cervical cancer can be eliminated in BC before 2040. 
Efforts to increase screening participation and follow-up through this transition could potentially accelerate this timeline, but the transition from cytology- to HPV-based screening is fundamental to achieving this goal., A long-term, persistent infection with an oncogenic genotype of the human papillomavirus (HPV) is a necessary condition for the development of cervical cancer. Nine types of high-risk HPV are responsible [...]
- Published
- 2024
35. On the Stability of Shear Flows in Bounded Channels, II: Non-monotonic Shear Flows
- Author
-
Ionescu, Alexandru D., Iyer, Sameer, and Jia, Hao
- Published
- 2024
36. On the Stability of Shear Flows in Bounded Channels, I: Monotonic Shear Flows
- Author
-
Ionescu, Alexandru D. and Jia, Hao
- Published
- 2024
37. Identification of Preoperative Risk Factors for the Development of Cardiac Allograft Vasculopathy: A Systematic Review
- Author
-
Roberts, Will S., Pirovic, Annalena, Ionescu, Adrian, Ryan, Michael, Schaffer, Sarah, and Nguyen, Hoang
- Published
- 2024
38. Mitochondrial complex I activity in microglia sustains neuroinflammation
- Author
-
Peruzzotti-Jametti, L., Willis, C. M., Krzak, G., Hamel, R., Pirvan, L., Ionescu, R.-B., Reisz, J. A., Prag, H. A., Garcia-Segura, M. E., Wu, V., Xiang, Y., Barlas, B., Casey, A. M., van den Bosch, A. M. R., Nicaise, A. M., Roth, L., Bates, G. R., Huang, H., Prasad, P., Vincent, A. E., Frezza, C., Viscomi, C., Balmus, G., Takats, Z., Marioni, J. C., D’Alessandro, A., Murphy, M. P., Mohorianu, I., and Pluchino, S.
- Published
- 2024
39. Structure Elucidation of Castor Oil Based Self-Condensed Polyols and Applications in Flexible Foams and Elastomers
- Author
-
Shrestha, Maha L., Noble, Isaac, Ionescu, Mihail, Dai, Jack, Hong, Jian, and Petrović, Zoran S.
- Published
- 2024
40. CL-MAE: Curriculum-Learned Masked Autoencoders
- Author
-
Madan, Neelu, Ristea, Nicolae-Catalin, Nasrollahi, Kamal, Moeslund, Thomas B., and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Masked image modeling has been demonstrated as a powerful pretext task for generating robust representations that can be effectively generalized across multiple downstream tasks. Typically, this approach involves randomly masking patches (tokens) in input images, with the masking strategy remaining unchanged during training. In this paper, we propose a curriculum learning approach that updates the masking strategy to continually increase the complexity of the self-supervised reconstruction task. We conjecture that, by gradually increasing the task complexity, the model can learn more sophisticated and transferable representations. To facilitate this, we introduce a novel learnable masking module that possesses the capability to generate masks of different complexities, and integrate the proposed module into masked autoencoders (MAE). Our module is jointly trained with the MAE, while adjusting its behavior during training, transitioning from a partner to the MAE (optimizing the same reconstruction loss) to an adversary (optimizing the opposite loss), while passing through a neutral state. The transition between these behaviors is smooth, being regulated by a factor that is multiplied with the reconstruction loss of the masking module. The resulting training procedure generates an easy-to-hard curriculum. We train our Curriculum-Learned Masked Autoencoder (CL-MAE) on ImageNet and show that it exhibits superior representation learning capabilities compared to MAE. The empirical results on five downstream tasks confirm our conjecture, demonstrating that curriculum learning can be successfully used to self-supervise masked autoencoders. We release our code at https://github.com/ristea/cl-mae., Comment: Accepted at WACV 2024
- Published
- 2023
41. An Open Hyperspectral Dataset with Sea-Land-Cloud Ground-Truth from the HYPSO-1 Satellite
- Author
-
Justo, Jon A., Garrett, Joseph, Langer, Dennis D., Henriksen, Marie B., Ionescu, Radu T., and Johansen, Tor A.
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
Hyperspectral Imaging, employed in satellites for space remote sensing, like HYPSO-1, faces constraints due to few labeled data sets, affecting the training of AI models demanding these ground-truth annotations. In this work, we introduce The HYPSO-1 Sea-Land-Cloud-Labeled Dataset, an open dataset with 200 diverse hyperspectral images from the HYPSO-1 mission, available in both raw and calibrated forms for scientific research in Earth observation. Moreover, 38 of these images from different countries include ground-truth labels at pixel-level totaling about 25 million spectral signatures labeled for sea/land/cloud categories. To demonstrate the potential of the dataset and its labeled subset, we have additionally optimized a deep learning model (1D Fully Convolutional Network), achieving superior performance to the current state of the art. The complete dataset, ground-truth labels, deep learning model, and software code are openly accessible for download at the website https://ntnu-smallsat-lab.github.io/hypso1_sea_land_clouds_dataset/ ., Comment: Computer Vision, Artificial Intelligence, Remote Sensing, Earth Observation, Hyperspectral Imaging, Classification, Labeled Data
- Published
- 2023
42. JEDI: Joint Expert Distillation in a Semi-Supervised Multi-Dataset Student-Teacher Scenario for Video Action Recognition
- Author
-
Bicsi, Lucian, Alexe, Bogdan, Ionescu, Radu Tudor, and Leordeanu, Marius
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
We propose JEDI, a multi-dataset semi-supervised learning method, which efficiently combines knowledge from multiple experts, learned on different datasets, to train and improve the performance of individual, per dataset, student models. Our approach achieves this by addressing two important problems in current machine learning research: generalization across datasets and limitations of supervised training due to scarcity of labeled data. We start with an arbitrary number of experts, pretrained on their own specific dataset, which form the initial set of student models. The teachers are immediately derived by concatenating the feature representations from the penultimate layers of the students. We then train all models in a student-teacher semi-supervised learning scenario until convergence. In our efficient approach, student-teacher training is carried out jointly and end-to-end, showing that both students and teachers improve their generalization capacity during training. We validate our approach on four video action recognition datasets. By simultaneously considering all datasets within a unified semi-supervised setting, we demonstrate significant improvements over the initial experts., Comment: Accepted in ICCV 2023 Workshops
- Published
- 2023
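The teacher construction described in the abstract above can be sketched in a few lines: the teacher representation is simply the concatenation of the students' penultimate-layer features. The feature extractors here are hypothetical stand-ins, not the paper's networks.

```python
# Hedged sketch: a "teacher" built by concatenating the feature vectors
# produced by several per-dataset "student" extractors.

def make_teacher(student_feature_fns):
    """Each student maps an input to a feature vector; the teacher
    concatenates all of them into one joint representation."""
    def teacher(x):
        features = []
        for extract in student_feature_fns:
            features.extend(extract(x))
        return features
    return teacher

# Two toy "students" pretrained on different datasets
student_a = lambda x: [x * 1.0, x * 2.0]   # 2-dim feature head
student_b = lambda x: [x + 1.0]            # 1-dim feature head

teacher = make_teacher([student_a, student_b])
print(teacher(3.0))  # [3.0, 6.0, 4.0]
```

In the actual method, this concatenated representation supervises the students on unlabeled data while the whole ensemble is trained end-to-end.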
43. Reverse Stable Diffusion: What prompt was used to generate this image?
- Author
-
Croitoru, Florinel-Alin, Hondru, Vlad, Ionescu, Radu Tudor, and Shah, Mubarak
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Computation and Language ,Computer Science - Machine Learning - Abstract
Text-to-image diffusion models such as Stable Diffusion have recently attracted the interest of many researchers, and inverting the diffusion process can play an important role in better understanding the generative process and in engineering prompts that obtain the desired images. To this end, we introduce the new task of predicting the text prompt given an image generated by a generative diffusion model. We combine a series of white-box and black-box models (with and without access to the weights of the diffusion network) to address the proposed task. We propose a novel learning framework comprising a joint prompt regression and multi-label vocabulary classification objective that generates improved prompts. To further improve our method, we employ a curriculum learning procedure that promotes the learning of image-prompt pairs with lower labeling noise (i.e., pairs that are better aligned), and an unsupervised domain-adaptive kernel learning method that uses the similarities between samples in the source and target domains as extra features. We conduct experiments on the DiffusionDB data set, predicting text prompts from images generated by Stable Diffusion. Our novel learning framework produces excellent results on the aforementioned task, yielding the highest gains when applied to the white-box model. In addition, we make an interesting discovery: training a diffusion model on the prompt generation task can make the model generate images that are much better aligned with the input prompts when the model is directly reused for text-to-image generation.
- Published
- 2023
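A joint objective of the kind the abstract above describes can be sketched as a weighted sum of a prompt-embedding regression term (here MSE) and a multi-label vocabulary term (here binary cross-entropy over word presence). The weighting `alpha` and all names are illustrative, not the paper's exact formulation.

```python
import math

# Hedged sketch of a joint prompt-regression + multi-label vocabulary loss.

def mse(pred, target):
    """Mean squared error between predicted and target prompt embeddings."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def bce(probs, labels, eps=1e-7):
    """Binary cross-entropy over per-word presence probabilities."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)   # clamp for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

def joint_loss(emb_pred, emb_true, vocab_probs, vocab_labels, alpha=0.5):
    return mse(emb_pred, emb_true) + alpha * bce(vocab_probs, vocab_labels)

loss = joint_loss([0.1, 0.9], [0.0, 1.0], [0.8, 0.2], [1, 0])
print(round(loss, 4))
```

Minimizing both terms jointly ties the regressed embedding to an explicit vocabulary prediction, which is the mechanism the framework uses to generate improved prompts.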
44. Cascaded Cross-Modal Transformer for Request and Complaint Detection
- Author
-
Ristea, Nicolae-Catalin and Ionescu, Radu Tudor
- Subjects
Computer Science - Computation and Language ,Computer Science - Machine Learning ,Computer Science - Multimedia ,Electrical Engineering and Systems Science - Audio and Speech Processing - Abstract
We propose a novel cascaded cross-modal transformer (CCMT) that combines speech and text transcripts to detect customer requests and complaints in phone conversations. Our approach leverages a multimodal paradigm by transcribing the speech using automatic speech recognition (ASR) models and translating the transcripts into different languages. Subsequently, we combine language-specific BERT-based models with Wav2Vec2.0 audio features in a novel cascaded cross-attention transformer model. We apply our system to the Requests Sub-Challenge of the ACM Multimedia 2023 Computational Paralinguistics Challenge, reaching unweighted average recalls (UAR) of 65.41% and 85.87% for the complaint and request classes, respectively., Comment: Accepted at ACMMM 2023
- Published
- 2023
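The core operation a cascaded cross-modal transformer stacks is cross-attention between modalities. As an illustrative sketch (not the paper's exact architecture), one step in which text-token queries attend over audio feature keys/values:

```python
import math

# Hedged sketch: scaled dot-product cross-attention, with text features as
# queries and audio features as keys/values.

def softmax(xs):
    m = max(xs)                          # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """queries: text-token vectors; keys/values: audio-frame vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

text = [[1.0, 0.0]]                      # one text-token query
audio_k = [[1.0, 0.0], [0.0, 1.0]]       # two audio frames (keys)
audio_v = [[10.0, 0.0], [0.0, 10.0]]     # corresponding values
fused = cross_attention(text, audio_k, audio_v)
print(fused)
```

The fused vector leans toward the audio frame most similar to the text query; cascading such blocks across language-specific text streams and Wav2Vec2.0 audio features yields the multimodal representation described above.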
45. The Strong Field QED approach of the vacuum interaction processes at ELI-NP
- Author
-
Pentia, M., Badita, C. R., Dumitriu, D., Ionescu, A. R., and Petrascu, H.
- Subjects
High Energy Physics - Phenomenology ,Quantum Physics ,81V10, 81T18 - Abstract
The commissioning of the high-power laser facility Extreme Light Infrastructure - Nuclear Physics (ELI-NP) at Bucharest-Magurele (Romania) allows the in-depth study of nonlinear interactions in Strong Field Quantum Electrodynamics (SF-QED). The present paper analyzes the SF-QED processes that can be studied at ELI-NP. Carrying out such experiments will allow finding answers to many fundamental QED questions. After a brief review of the first experiment (E-144 at SLAC), which confirmed the existence of nonlinear QED interactions of high-energy electrons with photons of a laser beam, we present the fundamental QED processes that can be studied at ELI-NP in the multi-photon regime, along with the characteristic parameters of the laser beam used in the QED interaction with electrons. To prepare an experiment at ELI-NP, it is necessary to analyze both the kinematics and the dynamics of the interactions. Therefore, we first review the kinematics of linear QED processes and then the corresponding Feynman diagrams. For nonlinear, non-perturbative multi-photon QED interactions, the Feynman diagram technique must be adapted from linear to nonlinear processes. This is done by switching to quantum fields described by Dirac-Volkov dressed states of particles in an intense electromagnetic (EM) field. This allows the evaluation of the amplitudes of the physical processes and, finally, the determination of their cross-sections. SF-QED processes of multi-photon interactions with strong laser fields can be investigated, taking into account the characteristics of the ELI-NP facility, in the context of QED vacuum production of electron-positron pairs and energetic gamma rays. Finally, we present some similar experimental projects from other research centers, in different stages of implementation., Comment: 14 pages, 20 figures
- Published
- 2023
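Laser field strengths in such experiments are conventionally characterized by two dimensionless-or-critical quantities of standard SF-QED notation (assumed here; the abstract does not state them explicitly): the classical nonlinearity parameter $a_0$, which compares the laser field to the electron rest scale, and the Schwinger critical field $E_{cr}$ for vacuum pair creation.

```latex
% Standard SF-QED field-strength parameters (assumed notation):
% E_L is the laser field amplitude and \omega its angular frequency.
a_0 = \frac{e E_L}{m_e \omega c}, \qquad
E_{cr} = \frac{m_e^2 c^3}{e \hbar} \approx 1.3 \times 10^{18}\ \mathrm{V/m}
```

The multi-photon regime discussed above corresponds to $a_0 \gtrsim 1$, where perturbative single-photon QED breaks down and the Dirac-Volkov dressed-state treatment becomes necessary.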
46. Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors
- Author
-
Ristea, Nicolae-Catalin, Croitoru, Florinel-Alin, Ionescu, Radu Tudor, Popescu, Marius, Khan, Fahad Shahbaz, and Shah, Mubarak
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
We propose an efficient abnormal event detection model based on a lightweight masked auto-encoder (AE) applied at the video frame level. The novelty of the proposed model is threefold. First, we introduce an approach to weight tokens based on motion gradients, thus shifting the focus from the static background scene to the foreground objects. Second, we integrate a teacher decoder and a student decoder into our architecture, leveraging the discrepancy between the outputs given by the two decoders to improve anomaly detection. Third, we generate synthetic abnormal events to augment the training videos, and task the masked AE model to jointly reconstruct the original frames (without anomalies) and the corresponding pixel-level anomaly maps. Our design leads to an efficient and effective model, as demonstrated by the extensive experiments carried out on four benchmarks: Avenue, ShanghaiTech, UBnormal and UCSD Ped2. The empirical results show that our model achieves an excellent trade-off between speed and accuracy, obtaining competitive AUC scores, while processing 1655 FPS. Hence, our model is between 8 and 70 times faster than competing methods. We also conduct an ablation study to justify our design. Our code is freely available at: https://github.com/ristea/aed-mae., Comment: Accepted at CVPR 2024
- Published
- 2023
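The motion-gradient token weighting described in the abstract above can be sketched as follows: tokens receive weights proportional to the absolute temporal gradient between consecutive frames, so static background gets low weight and moving foreground gets high weight. Patching and normalization details here are illustrative, not the authors' implementation.

```python
# Hedged sketch: weight tokens (scalar patches here) by the normalized
# absolute difference between consecutive frames.

def motion_weights(prev_frame, curr_frame):
    grads = [abs(c - p) for p, c in zip(prev_frame, curr_frame)]
    total = sum(grads)
    if total == 0:                       # fully static scene: uniform weights
        return [1.0 / len(grads)] * len(grads)
    return [g / total for g in grads]

prev = [0.0, 0.5, 0.5, 0.2]              # four tokens from frame t-1
curr = [0.0, 0.5, 0.9, 0.4]              # frame t: motion in tokens 2 and 3
w = motion_weights(prev, curr)
print([round(x, 2) for x in w])          # static tokens get zero weight
```

In the full model, these weights steer the masked auto-encoder's reconstruction objective toward foreground objects, which is what makes such a lightweight AE effective for anomaly detection.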
47. Class Anchor Margin Loss for Content-Based Image Retrieval
- Author
-
Ghita, Alexandru and Ionescu, Radu Tudor
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Information Retrieval ,Computer Science - Machine Learning - Abstract
The performance of neural networks in content-based image retrieval (CBIR) is highly influenced by the chosen loss (objective) function. The majority of objective functions for neural models can be divided into metric learning and statistical learning approaches. Metric learning approaches require a pair mining strategy that often lacks efficiency, while statistical learning approaches do not generate highly compact features due to their indirect feature optimization. To this end, we propose a novel repeller-attractor loss that falls within the metric learning paradigm, yet directly optimizes for the L2 metric without the need to generate pairs. Our loss comprises three components. One leading objective ensures that the learned features are attracted to each designated learnable class anchor. The second component regulates the anchors and forces them to be separable by a margin, while the third objective ensures that the anchors do not collapse to zero. Furthermore, we develop a more efficient two-stage retrieval system by harnessing the learned class anchors during the first stage of the retrieval process, eliminating the need to compare the query with every image in the database. We establish a set of four datasets (CIFAR-100, Food-101, SVHN, and Tiny ImageNet) and evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, using both convolutional and transformer architectures. Compared to existing objective functions, our empirical evidence shows that the proposed objective generates superior and more consistent results.
- Published
- 2023
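The three loss components described in the abstract above can be sketched in one dimension for readability: an attractor pulling features toward their class anchor (squared L2), a repeller enforcing a margin between anchors, and a term discouraging anchors from collapsing to zero. Names, weights, and the anti-collapse radius are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a repeller-attractor loss with learnable class anchors.

def attractor(features, labels, anchors):
    """Pull each feature toward its class anchor (squared L2 distance)."""
    return sum((f - anchors[y]) ** 2
               for f, y in zip(features, labels)) / len(features)

def repeller(anchors, margin):
    """Hinge penalty when any two anchors are closer than the margin."""
    loss = 0.0
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            loss += max(0.0, margin - abs(anchors[i] - anchors[j]))
    return loss

def anti_collapse(anchors, radius=1.0):
    """Hinge penalty when an anchor drifts too close to zero."""
    return sum(max(0.0, radius - abs(a)) for a in anchors)

anchors = [-2.0, 2.0]                   # one learnable anchor per class
feats, labels = [-1.8, 2.1], [0, 1]
total = (attractor(feats, labels, anchors)
         + repeller(anchors, margin=1.0)
         + anti_collapse(anchors))
print(round(total, 3))
```

Because every feature is compared only to class anchors rather than to other samples, no pair mining is needed; the same anchors can then serve as first-stage candidates in the two-stage retrieval system described above.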
48. Advancing oxygen separation: insights from experimental and computational analysis of La0.7Ca0.3Co0.3Fe0.6M0.1O3−δ (M = Cu, Zn) oxygen transport membranes
- Author
-
Chen, Guoxing, Liu, Wenmei, Widenmeyer, Marc, Yu, Xiao, Zhao, Zhijun, Yoon, Songhak, Yan, Ruijuan, Xie, Wenjie, Feldhoff, Armin, Homm, Gert, Ionescu, Emanuel, Fyta, Maria, and Weidenkaff, Anke
- Published
- 2024
- Full Text
- View/download PDF
49. Nonlinear Landau Damping for the Vlasov–Poisson System in R3: The Poisson Equilibrium
- Author
-
Ionescu, Alexandru D., Pausader, Benoit, Wang, Xuecheng, and Widmayer, Klaus
- Published
- 2024
- Full Text
- View/download PDF
50. Genomic Mysteries of Giant Bacteria: Insights and Implications
- Author
-
Ionescu, Danny, Volland, Jean-Marie, Contarini, Paul-Emile, and Gros, Olivier
- Subjects
Microbiology ,Biological Sciences ,Genetics ,Biotechnology ,Human Genome ,Genomics ,Bacteria ,Archaea ,Biological Evolution ,Pentaerythritol Tetranitrate ,bacterial heterozygosity ,genomics ,giant bacteria ,polyploidy ,size limitations ,Biochemistry and Cell Biology ,Evolutionary Biology ,Developmental Biology ,Biochemistry and cell biology ,Evolutionary biology - Abstract
Bacteria and Archaea are traditionally regarded as organisms with a simple morphology, constrained to a size of 2-3 µm. Nevertheless, the history of microbial research is rich in descriptions of giant bacteria exceeding tens and even hundreds of micrometers in length or diameter, from its early days (for example, Beggiatoa spp.) to the present (for example, Candidatus Thiomargarita magnifica). While some of these giants are still being studied, others were lost to science, with merely drawings and photomicrographs as evidence for their existence. The physiology and biogeochemical role of giant bacteria have been studied, with a large focus on those involved in the sulfur cycle. With the onset of the genomic era, however, no special emphasis has been placed on this group with the aim of gaining a novel evolutionary and molecular understanding of the phenomenon of bacterial gigantism. The few existing genomic studies reveal a mysterious world of hyperpolyploid bacteria with hundreds to hundreds of thousands of chromosomes that are, in some cases, identical and, in others, extremely different. These studies on giant bacteria reveal novel organelles, cellular compartmentalization, and novel mechanisms to combat the accumulation of deleterious mutations in polyploid bacteria. In this perspective paper, we provide a brief overview of what is known about the genomics of giant bacteria and build on that to highlight a few burning questions that remain to be addressed.
- Published
- 2023