1,466,605 results for "Tan AT"
Search Results
2. Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor
- Author
Gao, Dongxin, Fan, Daojin, Zha, Chen, Bei, Jiahao, Cai, Guoqing, Cai, Jianbin, Cao, Sirui, Zeng, Xiangdong, Chen, Fusheng, Chen, Jiang, Chen, Kefu, Chen, Xiawei, Chen, Xiqing, Chen, Zhe, Chen, Zhiyuan, Chen, Zihua, Chu, Wenhao, Deng, Hui, Deng, Zhibin, Ding, Pei, Ding, Xun, Ding, Zhuzhengqi, Dong, Shuai, Dong, Yupeng, Fan, Bo, Fu, Yuanhao, Gao, Song, Ge, Lei, Gong, Ming, Gui, Jiacheng, Guo, Cheng, Guo, Shaojun, Guo, Xiaoyang, He, Tan, Hong, Linyin, Hu, Yisen, Huang, He-Liang, Huo, Yong-Heng, Jiang, Tao, Jiang, Zuokai, Jin, Honghong, Leng, Yunxiang, Li, Dayu, Li, Dongdong, Li, Fangyu, Li, Jiaqi, Li, Jinjin, Li, Junyan, Li, Junyun, Li, Na, Li, Shaowei, Li, Wei, Li, Yuhuai, Li, Yuan, Liang, Futian, Liang, Xuelian, Liao, Nanxing, Lin, Jin, Lin, Weiping, Liu, Dailin, Liu, Hongxiu, Liu, Maliang, Liu, Xinyu, Liu, Xuemeng, Liu, Yancheng, Lou, Haoxin, Ma, Yuwei, Meng, Lingxin, Mou, Hao, Nan, Kailiang, Nie, Binghan, Nie, Meijuan, Ning, Jie, Niu, Le, Peng, Wenyi, Qian, Haoran, Rong, Hao, Rong, Tao, Shen, Huiyan, Shen, Qiong, Su, Hong, Su, Feifan, Sun, Chenyin, Sun, Liangchao, Sun, Tianzuo, Sun, Yingxiu, Tan, Yimeng, Tan, Jun, Tang, Longyue, Tu, Wenbing, Wan, Cai, Wang, Jiafei, Wang, Biao, Wang, Chang, Wang, Chen, Wang, Chu, Wang, Jian, Wang, Liangyuan, Wang, Rui, Wang, Shengtao, Wang, Xinzhe, Wei, Zuolin, Wei, Jiazhou, Wu, Dachao, Wu, Gang, Wu, Jin, Wu, Shengjie, Wu, Yulin, Xie, Shiyong, Xin, Lianjie, Xu, Yu, Xue, Chun, Yan, Kai, Yang, Weifeng, Yang, Xinpeng, Yang, Yang, Ye, Yangsen, Ye, Zhenping, Ying, Chong, Yu, Jiale, Yu, Qinjing, Yu, Wenhu, Zhan, Shaoyu, Zhang, Feifei, Zhang, Haibin, Zhang, Kaili, Zhang, Pan, Zhang, Wen, Zhang, Yiming, Zhang, Yongzhuo, Zhang, Lixiang, Zhao, Guming, Zhao, Peng, Zhao, Xianhe, Zhao, Xintao, Zhao, Youwei, Zhao, Zhong, Zheng, Luyuan, Zhou, Fei, Zhou, Liang, Zhou, Na, Zhou, Naibin, Zhou, Shifeng, Zhou, Shuang, Zhou, Zhengxiao, Zhu, Chengjun, Zhu, Qingling, Zou, Guihong, Zou, Haonan, Zhang, Qiang, Lu, Chao-Yang, Peng, Cheng-Zhi, Zhu, XiaoBo, and Pan, Jian-Wei
- Subjects
Quantum Physics
- Abstract
In the relentless pursuit of quantum computational advantage, we present a significant advancement with the development of Zuchongzhi 3.0. This superconducting quantum computer prototype, comprising 105 qubits, achieves high operational fidelities, with single-qubit gate, two-qubit gate, and readout fidelities of 99.90%, 99.62%, and 99.18%, respectively. Our experiments with an 83-qubit, 32-cycle random circuit sampling task on Zuchongzhi 3.0 highlight its superior performance, achieving one million samples in just a few hundred seconds. The task is estimated to be infeasible on the most powerful classical supercomputer, Frontier, which would require approximately $6.4\times 10^9$ years to replicate it. This leap in processing power places the classical simulation cost six orders of magnitude beyond Google's SYC-67 and SYC-70 experiments [Nature 634, 328 (2024)], firmly establishing a new benchmark in quantum computational advantage. Our work not only advances the frontiers of quantum computing but also lays the groundwork for a new era in which quantum processors play an essential role in tackling sophisticated real-world challenges.
- Published
- 2024
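As a rough consistency check on the figures quoted in the abstract above (taking "a few hundred seconds" as roughly $3\times10^2$ s and one year as about $3.15\times10^7$ s), the implied gap between the classical runtime estimate and the quantum runtime is about fifteen orders of magnitude:

```latex
\frac{6.4\times10^{9}\,\mathrm{yr}\times 3.15\times10^{7}\,\mathrm{s/yr}}{3\times10^{2}\,\mathrm{s}}
\approx 7\times10^{14}
```

Note that the six-orders-of-magnitude figure in the abstract refers to the classical simulation cost relative to the SYC-67/SYC-70 experiments, not to this runtime ratio.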
3. BlueLM-V-3B: Algorithm and System Co-Design for Multimodal Large Language Models on Mobile Devices
- Author
Lu, Xudong, Chen, Yinghao, Chen, Cheng, Tan, Hui, Chen, Boheng, Xie, Yina, Hu, Rui, Tan, Guanxin, Wu, Renshou, Hu, Yan, Zeng, Yi, Wu, Lei, Bian, Liuyang, Wang, Zhaoxiong, Liu, Long, Yang, Yanzhou, Xiao, Han, Zhou, Aojun, Wen, Yafei, Chen, Xiaoxin, Ren, Shuai, and Li, Hongsheng
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Computation and Language
- Abstract
Multimodal large language models (MLLMs) have significant potential to enhance various aspects of daily life, from improving communication to facilitating learning and problem-solving. Mobile phones, as essential daily companions, represent the most effective and accessible deployment platform for MLLMs, enabling seamless integration into everyday tasks. However, deploying MLLMs on mobile phones presents challenges due to limitations in memory size and computational capability, making it difficult to achieve smooth and real-time processing without extensive optimization. In this paper, we present BlueLM-V-3B, an algorithm and system co-design approach specifically tailored for the efficient deployment of MLLMs on mobile platforms. Specifically, we redesign the dynamic resolution scheme adopted by mainstream MLLMs and implement system optimization for hardware-aware deployment to optimize model inference on mobile phones. BlueLM-V-3B boasts the following key highlights: (1) Small Size: BlueLM-V-3B features a language model with 2.7B parameters and a vision encoder with 400M parameters. (2) Fast Speed: BlueLM-V-3B achieves a generation speed of 24.4 token/s on the MediaTek Dimensity 9300 processor with 4-bit LLM weight quantization. (3) Strong Performance: BlueLM-V-3B has attained the highest average score of 66.1 on the OpenCompass benchmark among models with $\leq$ 4B parameters and surpassed a series of models with much larger parameter sizes (e.g., MiniCPM-V-2.6, InternVL2-8B).
- Comment: 21 pages
- Published
- 2024
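The abstract above attributes part of BlueLM-V-3B's speed to 4-bit LLM weight quantization. The details of its scheme are not given; the sketch below shows a generic group-wise symmetric int4 quantizer for illustration only (the group size, symmetric scaling, and function names are assumptions, not BlueLM-V-3B's method):

```python
import numpy as np

def quantize_int4(w, group=128):
    g = w.reshape(-1, group)                        # quantize in groups of 128 weights
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)  # signed 4-bit codes
    return q, scale

def dequantize(q, scale, shape):
    return (q * scale).reshape(shape)               # approximate float weights back

w = np.random.randn(4096, 128).astype(np.float32)
q, s = quantize_int4(w.ravel())
err = np.abs(dequantize(q, s, w.shape) - w).max()
print(f"max abs reconstruction error: {err:.4f}")   # small relative to weight scale
```

Storing 4-bit codes plus one scale per group cuts weight memory roughly 4x versus fp16, which is the main lever for fitting a 3B-parameter model into phone memory bandwidth.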
4. Evaluating the Generation of Spatial Relations in Text and Image Generative Models
- Author
Sim, Shang Hong, Lee, Clarence, Tan, Alvin, and Tan, Cheston
- Subjects
Computer Science - Computer Vision and Pattern Recognition
- Abstract
Understanding spatial relations is a crucial cognitive ability for both humans and AI. While current research has predominantly focused on benchmarking text-to-image (T2I) models, we propose a more comprehensive evaluation that includes both T2I models and Large Language Models (LLMs). As spatial relations are naturally understood in a visuo-spatial manner, we develop an approach to convert LLM outputs into an image, thereby allowing us to evaluate both T2I models and LLMs visually. We examined the spatial relation understanding of 8 prominent generative models (3 T2I models and 5 LLMs) on a set of 10 common prepositions, and assessed the feasibility of automatic evaluation methods. Surprisingly, we found that T2I models achieve only subpar performance despite their impressive general image-generation abilities. Even more surprisingly, our results show that LLMs are significantly more accurate than T2I models in generating spatial relations, despite being primarily trained on textual data. We examined reasons for model failures and highlight gaps that can be filled to enable more spatially faithful generations.
- Published
- 2024
5. Personalize to generalize: Towards a universal medical multi-modality generalization through personalization
- Author
Tan, Zhaorui, Yang, Xi, Pan, Tan, Liu, Tianyi, Jiang, Chen, Guo, Xin, Wang, Qiufeng, Nguyen, Anh, Qi, Yuan, Huang, Kaizhu, and Cheng, Yuan
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
- Abstract
The differences among medical imaging modalities, driven by distinct underlying principles, pose significant challenges for generalization in multi-modal medical tasks. Beyond modality gaps, individual variations, such as differences in organ size and metabolic rate, further impede a model's ability to generalize effectively across both modalities and diverse populations. Despite the importance of personalization, existing approaches to multi-modal generalization often neglect individual differences, focusing solely on common anatomical features. This limitation may result in weakened generalization across various medical tasks. In this paper, we show that personalization is critical for multi-modal generalization. Specifically, we propose an approach to achieve personalized generalization by approximating the underlying personalized invariant representation ${X}_h$ across various modalities, leveraging individual-level constraints and a learnable biological prior. We validate the feasibility and benefits of learning a personalized ${X}_h$, showing that this representation is highly generalizable and transferable across various multi-modal medical tasks. Extensive experimental results consistently show that the additionally incorporated personalization significantly improves performance and generalization across diverse scenarios, confirming its effectiveness.
- Published
- 2024
6. PipeLLM: Fast and Confidential Large Language Model Services with Speculative Pipelined Encryption
- Author
Tan, Yifan, Tan, Cheng, Mi, Zeyu, and Chen, Haibo
- Subjects
Computer Science - Cryptography and Security, Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
Confidential computing on GPUs, such as the NVIDIA H100, mitigates the security risks of outsourced Large Language Models (LLMs) by implementing strong isolation and data encryption. Nonetheless, this encryption incurs a significant performance overhead, with throughput drops of up to 52.8% and 88.2% when serving OPT-30B and OPT-66B, respectively. To address this challenge, we introduce PipeLLM, a user-transparent runtime system. PipeLLM removes the overhead by overlapping encryption and GPU computation through pipelining - an idea inspired by CPU instruction pipelining - thereby effectively concealing the latency increase caused by encryption. The primary technical challenge is that, unlike CPUs, the encryption module lacks prior knowledge of the specific data needing encryption until it is requested by the GPUs. To this end, we propose speculative pipelined encryption, which predicts the data requiring encryption by analyzing the serving patterns of LLMs. Further, we develop an efficient, low-cost pipeline relinquishing approach for instances of incorrect prediction. Our experiments on an NVIDIA H100 GPU show that, compared with vanilla systems without confidential computing (e.g., vLLM, PEFT, and FlexGen), PipeLLM incurs modest overhead (less than 19.6% in throughput) across various LLM sizes, from 13B to 175B.
- Comment: To appear in ASPLOS 2025
- Published
- 2024
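A toy sketch of the pipelining idea from the entry above: one thread stands in for the encryption module and runs ahead of a consumer that stands in for GPU inference, so the two costs overlap. The speculative prediction and pipeline relinquishing of PipeLLM itself are not modeled; all names and timings are illustrative.

```python
import queue, threading, time

def encryptor(batches, q):
    # Stand-in for the encryption module: it runs ahead of the consumer so
    # that encryption overlaps with "GPU" work instead of serializing with it.
    for b in batches:
        time.sleep(0.01)        # pretend per-batch encryption cost
        q.put(b)

def serve(batches):
    q = queue.Queue(maxsize=2)  # small pipeline buffer between the two stages
    t = threading.Thread(target=encryptor, args=(batches, q))
    t.start()
    for _ in batches:
        q.get()                 # encrypted batch is (ideally) already waiting
        time.sleep(0.01)        # pretend GPU inference on the batch
    t.join()

start = time.time()
serve(list(range(20)))
print(f"pipelined: {time.time() - start:.2f}s")  # ~0.2s, vs ~0.4s fully serialized
```

With equal stage costs the pipeline roughly halves the total time; the real system's difficulty is knowing *what* to encrypt ahead of time, hence the speculation.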
7. Can Personalized Medicine Coexist with Health Equity? Examining the Cost Barrier and Ethical Implications
- Author
Francisco, Kishi Kobe Yee, Apuhin, Andrane Estelle Carnicer, Tan, Myles Joshua Toledo, Byers, Mickael Cavanaugh, Maravilla, Nicholle Mae Amor Tan, Karim, Hezerul Abdul, and AlDahoul, Nouar
- Subjects
Computer Science - Computers and Society
- Abstract
Personalized medicine (PM) promises to transform healthcare by providing treatments tailored to individual genetic, environmental, and lifestyle factors. However, its high costs and infrastructure demands raise concerns about exacerbating health disparities, especially between high-income countries (HICs) and low- and middle-income countries (LMICs). While HICs benefit from advanced PM applications through AI and genomics, LMICs often lack the resources necessary to adopt these innovations, leading to a widening healthcare divide. This paper explores the financial and ethical challenges of PM implementation, with a focus on ensuring equitable access. It proposes strategies for global collaboration, infrastructure development, and ethical frameworks to support LMICs in adopting PM, aiming to prevent further disparities in healthcare accessibility and outcomes.
- Comment: 30 pages, 1 figure
- Published
- 2024
8. Adaptive Few-shot Prompting for Machine Translation with Pre-trained Language Models
- Author
Tang, Lei, Qin, Jinghui, Ye, Wenxuan, Tan, Hao, and Yang, Zhijing
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Recently, large language models (LLMs) with in-context learning have demonstrated remarkable potential for neural machine translation. However, existing evidence shows that LLMs are prompt-sensitive, and applying a fixed prompt to every input is sub-optimal for downstream machine translation tasks. To address this issue, we propose an adaptive few-shot prompting (AFSP) framework that automatically selects suitable translation demonstrations for each source input sentence, further eliciting the translation capability of an LLM for better machine translation. First, we build a hybrid demonstration retrieval module based on the embedding layer of the deployed LLM, rather than a separate embedding model, to build better input representations for retrieving the top-k semantically similar translation demonstrations from an aligned parallel translation corpus. Then, to ensure better semantic consistency between source inputs and target outputs, we have the deployed LLM itself generate multiple output candidates in the target language with the help of the translation demonstrations and rerank these candidates. Besides, to better evaluate the effectiveness of our AFSP framework on the latest language data and extend the research boundary of neural machine translation, we construct a high-quality diplomatic Chinese-English parallel dataset consisting of 5,528 parallel Chinese-English sentences. Finally, extensive experiments on the proposed diplomatic Chinese-English parallel dataset and the United Nations Parallel Corpus (Chinese-English part) show the effectiveness and superiority of our proposed AFSP.
- Comment: published at AAAI 2025
- Published
- 2025
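A minimal sketch of the embedding-based demonstration retrieval described in the entry above: pool embedding-layer vectors of each sentence, score demonstrations by cosine similarity to the source, and keep the top-k for the few-shot prompt. The mean-pooling, toy embedding table, and all names are assumptions, not AFSP's actual implementation.

```python
import numpy as np

def embed(token_id_seqs, embedding_table):
    # Assumption: mean-pool the deployed LLM's embedding-layer vectors per sentence
    return np.stack([embedding_table[t].mean(axis=0) for t in token_id_seqs])

def top_k_demos(src_ids, demo_ids, embedding_table, k=4):
    # Rank demonstrations by cosine similarity to the source sentence embedding
    E = embed(demo_ids + [src_ids], embedding_table)
    demos, src = E[:-1], E[-1]
    sims = demos @ src / (np.linalg.norm(demos, axis=1) * np.linalg.norm(src) + 1e-9)
    return np.argsort(-sims)[:k]          # indices of the k best demonstrations

rng = np.random.default_rng(0)
vocab_emb = rng.normal(size=(1000, 64))   # stand-in for the LLM's embedding table
demos = [rng.integers(0, 1000, size=12) for _ in range(100)]
print(top_k_demos(rng.integers(0, 1000, size=12), demos, vocab_emb))
```

Using the deployed model's own embedding layer (rather than a separate encoder) keeps the retrieval representation aligned with how the LLM itself reads the prompt.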
9. Restoring Heisenberg-Limited Precision in Non-Markovian Open Quantum Systems via Dynamical Decoupling
- Author
Lahcen, Bakmou, Zeng, Ke, Jiang, Yu, and Tan, Kok Chuan
- Subjects
Quantum Physics
- Abstract
Non-classical resources enable measurements to achieve a precision that exceeds the limits predicted by the central limit theorem. However, environmental noise arising from system-environment interactions severely limits the performance of such resources through decoherence. While significant progress has been made in mitigating Markovian noise, the extent to which non-Markovian noise can be mitigated remains poorly understood. We demonstrate that Heisenberg scaling (HS), the ultimate quantum limit on measurement precision, can be recovered in quantum metrology under non-Markovian noise by leveraging carefully designed dynamical decoupling techniques. Importantly, our approach does not rely on assumptions of Markovian dynamics. By imposing appropriate conditions on the control Hamiltonian, we show that HS can be achieved irrespective of whether the noise is Markovian or non-Markovian. We also prove necessary and sufficient conditions for the existence of such control Hamiltonians. As an illustrative example, we apply our framework to the damped Jaynes-Cummings model, successfully mitigating memory effects and maintaining measurement precision in complex, non-Markovian environments. These findings highlight the power of quantum control to overcome decoherence challenges and enhance metrological performance in realistic, noisy quantum systems.
- Published
- 2025
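For orientation on the entry above: with $N$ probes (or channel uses), the standard quantum limit (SQL) set by the central limit theorem and the Heisenberg scaling (HS) reachable with non-classical resources are

```latex
\Delta\theta_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}},
\qquad
\Delta\theta_{\mathrm{HS}} \sim \frac{1}{N},
```

so recovering HS under noise means restoring the $1/N$ scaling of the estimation error rather than the classical $1/\sqrt{N}$.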
10. Entanglement transfer between giant atoms in waveguide-QED systems
- Author
Liu, Jie, Liu, Zhi-Qiang, Sang, Yu, and Tan, Lei
- Subjects
Quantum Physics
- Abstract
We investigate entanglement transfer between giant atoms in waveguide-QED systems. The system consists of two pairs of two-level giant atoms, $ab$ and $cd$, each independently coupled to its own one-dimensional waveguide. Initially, entangled states are stored in atom pair $ac$. We consider three giant-atom coupling configurations: separated, braided, and nested. For comparison, entanglement transfer in the small-atom configuration is also studied. We focus on the entanglement transfer from atom pair $ac$ to atom pair $bd$ and to atom pair $ab$ in these four coupling configurations. It is shown that the entanglement transfer in each coupling configuration depends strongly on the phase shift. In particular, the braided configuration demonstrates superior performance in entanglement transfer. For the transfer from atom pair $ac$ to atom pair $bd$, complete entanglement transfer occurs in the braided configuration, a behavior not found in the small-atom or other giant-atom configurations. For the transfer from atom pair $ac$ to atom pair $ab$, although the maximum entanglement transferred to atom pair $ab$ in the braided configuration is half of that transferred to atom pair $bd$, it is still higher than in the small-atom, separated, and nested configurations. This study lays the foundation for entanglement transfer between giant atoms in waveguide-QED platforms.
- Comment: 10 pages, 6 figures
- Published
- 2025
11. Search for continuous gravitational waves from known pulsars in the first part of the fourth LIGO-Virgo-KAGRA observing run
- Author
The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, Abac, A. G., Abbott, R., Abouelfettouh, I., Acernese, F., Ackley, K., Adhicary, S., Adhikari, N., Adhikari, R. X., Adkins, V. K., Agarwal, D., Agathos, M., Abchouyeh, M. Aghaei, Aguiar, O. D., Aguilar, I., Aiello, L., Ain, A., Ajith, P., Akutsu, T., Albanesi, S., Alfaidi, R. A., Al-Jodah, A., Alléné, C., Allocca, A., Al-Shammari, S., Altin, P. A., Alvarez-Lopez, S., Amato, A., Amez-Droz, L., Amorosi, A., Amra, C., Ananyeva, A., Anderson, S. B., Anderson, W. G., Andia, M., Ando, M., Andrade, T., Andres, N., Andrés-Carcasona, M., Andrić, T., Anglin, J., Ansoldi, S., Antelis, J. M., Antier, S., Aoumi, M., Appavuravther, E. Z., Appert, S., Apple, S. K., Arai, K., Araya, A., Araya, M. C., Areeda, J. S., Argianas, L., Aritomi, N., Armato, F., Arnaud, N., Arogeti, M., Aronson, S. M., Ashton, G., Aso, Y., Assiduo, M., Melo, S. Assis de Souza, Aston, S. M., Astone, P., Attadio, F., Aubin, F., AultONeal, K., Avallone, G., Babak, S., Badaracco, F., Badger, C., Bae, S., Bagnasco, S., Bagui, E., Baier, J. G., Baiotti, L., Bajpai, R., Baka, T., Ball, M., Ballardin, G., Ballmer, S. W., Banagiri, S., Banerjee, B., Bankar, D., Baral, P., Barayoga, J. C., Barish, B. C., Barker, D., Barneo, P., Barone, F., Barr, B., Barsotti, L., Barsuglia, M., Barta, D., Bartoletti, A. M., Barton, M. A., Bartos, I., Basak, S., Basalaev, A., Bassiri, R., Basti, A., Bates, D. E., Bawaj, M., Baxi, P., Bayley, J. C., Baylor, A. C., Baynard II, P. A., Bazzan, M., Bedakihale, V. M., Beirnaert, F., Bejger, M., Belardinelli, D., Bell, A. S., Benedetto, V., Benoit, W., Bentley, J. D., Yaala, M. Ben, Bera, S., Berbel, M., Bergamin, F., Berger, B. K., Bernuzzi, S., Beroiz, M., Bersanetti, D., Bertolini, A., Betzwieser, J., Beveridge, D., Bevins, N., Bhandare, R., Bhardwaj, U., Bhatt, R., Bhattacharjee, D., Bhaumik, S., Bhowmick, S., Bianchi, A., Bilenko, I. A., Billingsley, G., Binetti, A., Bini, S., Birnholtz, O., Biscoveanu, S., Bisht, A., Bitossi, M., Bizouard, M. -A., Blackburn, J. K., Blagg, L. A., Blair, C. D., Blair, D. G., Bobba, F., Bode, N., Boileau, G., Boldrini, M., Bolingbroke, G. N., Bolliand, A., Bonavena, L. D., Bondarescu, R., Bondu, F., Bonilla, E., Bonilla, M. S., Bonino, A., Bonnand, R., Booker, P., Borchers, A., Boschi, V., Bose, S., Bossilkov, V., Boudart, V., Boudon, A., Bozzi, A., Bradaschia, C., Brady, P. R., Braglia, M., Branch, A., Branchesi, M., Brandt, J., Braun, I., Breschi, M., Briant, T., Brillet, A., Brinkmann, M., Brockill, P., Brockmueller, E., Brooks, A. F., Brown, B. C., Brown, D. D., Brozzetti, M. L., Brunett, S., Bruno, G., Bruntz, R., Bryant, J., Bucci, F., Buchanan, J., Bulashenko, O., Bulik, T., Bulten, H. J., Buonanno, A., Burtnyk, K., Buscicchio, R., Buskulic, D., Buy, C., Byer, R. L., Davies, G. S. Cabourn, Cabras, G., Cabrita, R., Cáceres-Barbosa, V., Cadonati, L., Cagnoli, G., Cahillane, C., Bustillo, J. Calderón, Callister, T. A., Calloni, E., Camp, J. B., Canepa, M., Santoro, G. Caneva, Cannon, K. C., Cao, H., Capistran, L. A., Capocasa, E., Capote, E., Carapella, G., Carbognani, F., Carlassara, M., Carlin, J. B., Carpinelli, M., Carrillo, G., Carter, J. J., Carullo, G., Diaz, J. Casanueva, Casentini, C., Castro-Lucas, S. Y., Caudill, S., Cavaglià, M., Cavalieri, R., Cella, G., Cerdá-Durán, P., Cesarini, E., Chaibi, W., Chakraborty, P., Subrahmanya, S. Chalathadka, Chan, J. C. L., Chan, M., Chandra, K., Chang, R. -J., Chao, S., Charlton, E. 
L., Charlton, P., Chassande-Mottin, E., Chatterjee, C., Chatterjee, Debarati, Chatterjee, Deep, Chaturvedi, M., Chaty, S., Chen, A., Chen, A. H. -Y., Chen, D., Chen, H., Chen, H. Y., Chen, J., Chen, K. H., Chen, Y., Chen, Yanbei, Chen, Yitian, Cheng, H. P., Chessa, P., Cheung, H. T., Cheung, S. Y., Chiadini, F., Chiarini, G., Chierici, R., Chincarini, A., Chiofalo, M. L., Chiummo, A., Chou, C., Choudhary, S., Christensen, N., Chua, S. S. Y., Chugh, P., Ciani, G., Ciecielag, P., Cieślar, M., Cifaldi, M., Ciolfi, R., Clara, F., Clark, J. A., Clarke, J., Clarke, T. A., Clearwater, P., Clesse, S., Coccia, E., Codazzo, E., Cohadon, P. -F., Colace, S., Colleoni, M., Collette, C. G., Collins, J., Colloms, S., Colombo, A., Colpi, M., Compton, C. M., Connolly, G., Conti, L., Corbitt, T. R., Cordero-Carrión, I., Corezzi, S., Cornish, N. J., Corsi, A., Cortese, S., Costa, C. A., Cottingham, R., Coughlin, M. W., Couineaux, A., Coulon, J. -P., Countryman, S. T., Coupechoux, J. -F., Couvares, P., Coward, D. M., Cowart, M. J., Coyne, R., Craig, K., Creed, R., Creighton, J. D. E., Creighton, T. D., Cremonese, P., Criswell, A. W., Crockett-Gray, J. C. G., Crook, S., Crouch, R., Csizmazia, J., Cudell, J. R., Cullen, T. J., Cumming, A., Cuoco, E., Cusinato, M., Dabadie, P., Canton, T. Dal, Dall'Osso, S., Pra, S. Dal, Dálya, G., D'Angelo, B., Danilishin, S., D'Antonio, S., Danzmann, K., Darroch, K. E., Dartez, L. P., Dasgupta, A., Datta, S., Dattilo, V., Daumas, A., Davari, N., Dave, I., Davenport, A., Davier, M., Davies, T. F., Davis, D., Davis, L., Davis, M. C., Davis, P. J., Dax, M., De Bolle, J., Deenadayalan, M., Degallaix, J., De Laurentis, M., Deléglise, S., De Lillo, F., Dell'Aquila, D., Del Pozzo, W., De Marco, F., De Matteis, F., D'Emilio, V., Demos, N., Dent, T., Depasse, A., DePergola, N., De Pietri, R., De Rosa, R., De Rossi, C., DeSalvo, R., De Simone, R., Dhani, A., Diab, R., Díaz, M. C., Di Cesare, M., Dideron, G., Didio, N. A., Dietrich, T., Di Fiore, L., Di Fronzo, C., Di Giovanni, M., Di Girolamo, T., Diksha, D., Di Michele, A., Ding, J., Di Pace, S., Di Palma, I., Di Renzo, F., Divyajyoti, Dmitriev, A., Doctor, Z., Dohmen, E., Doleva, P. P., Dominguez, D., D'Onofrio, L., Donovan, F., Dooley, K. L., Dooney, T., Doravari, S., Dorosh, O., Drago, M., Driggers, J. C., Ducoin, J. -G., Dunn, L., Dupletsa, U., D'Urso, D., Duval, H., Duverne, P. -A., Dwyer, S. E., Eassa, C., Ebersold, M., Eckhardt, T., Eddolls, G., Edelman, B., Edo, T. B., Edy, O., Effler, A., Eichholz, J., Einsle, H., Eisenmann, M., Eisenstein, R. A., Ejlli, A., Eleveld, R. M., Emma, M., Endo, K., Engl, A. J., Enloe, E., Errico, L., Essick, R. C., Estellés, H., Estevez, D., Etzel, T., Evans, M., Evstafyeva, T., Ewing, B. E., Ezquiaga, J. M., Fabrizi, F., Faedi, F., Fafone, V., Fairhurst, S., Farah, A. M., Farr, B., Farr, W. M., Favaro, G., Favata, M., Fays, M., Fazio, M., Feicht, J., Fejer, M. M., Felicetti, R., Fenyvesi, E., Ferguson, D. L., Ferraiuolo, S., Ferrante, I., Ferreira, T. A., Fidecaro, F., Figura, P., Fiori, A., Fiori, I., Fishbach, M., Fisher, R. P., Fittipaldi, R., Fiumara, V., Flaminio, R., Fleischer, S. M., Fleming, L. S., Floden, E., Foley, E. M., Fong, H., Font, J. A., Fornal, B., Forsyth, P. W. F., Franceschetti, K., Franchini, N., Frasca, S., Frasconi, F., Mascioli, A. Frattale, Frei, Z., Freise, A., Freitas, O., Frey, R., Frischhertz, W., Fritschel, P., Frolov, V. V., Fronzé, G. G., Fuentes-Garcia, M., Fujii, S., Fujimori, T., Fulda, P., Fyffe, M., Gadre, B., Gair, J. 
R., Galaudage, S., Galdi, V., Gallagher, H., Gallardo, S., Gallego, B., Gamba, R., Gamboa, A., Ganapathy, D., Ganguly, A., Garaventa, B., García-Bellido, J., Núñez, C. García, García-Quirós, C., Gardner, J. W., Gardner, K. A., Gargiulo, J., Garron, A., Garufi, F., Gasbarra, C., Gateley, B., Gayathri, V., Gemme, G., Gennai, A., Gennari, V., George, J., George, R., Gerberding, O., Gergely, L., Ghosh, Archisman, Ghosh, Sayantan, Ghosh, Shaon, Ghosh, Shrobana, Ghosh, Suprovo, Ghosh, Tathagata, Giacoppo, L., Giaime, J. A., Giardina, K. D., Gibson, D. R., Gibson, D. T., Gier, C., Giri, P., Gissi, F., Gkaitatzis, S., Glanzer, J., Glotin, F., Godfrey, J., Godwin, P., Goebbels, N. L., Goetz, E., Golomb, J., Lopez, S. Gomez, Goncharov, B., Gong, Y., González, G., Goodarzi, P., Goode, S., Goodwin-Jones, A. W., Gosselin, M., Göttel, A. S., Gouaty, R., Gould, D. W., Govorkova, K., Goyal, S., Grace, B., Grado, A., Graham, V., Granados, A. E., Granata, M., Granata, V., Gras, S., Grassia, P., Gray, A., Gray, C., Gray, R., Greco, G., Green, A. C., Green, S. M., Green, S. R., Gretarsson, A. M., Gretarsson, E. M., Griffith, D., Griffiths, W. L., Griggs, H. L., Grignani, G., Grimaldi, A., Grimaud, C., Grote, H., Guerra, D., Guetta, D., Guidi, G. M., Guimaraes, A. R., Gulati, H. K., Gulminelli, F., Gunny, A. M., Guo, H., Guo, W., Guo, Y., Gupta, Anchal, Gupta, Anuradha, Gupta, Ish, Gupta, N. C., Gupta, P., Gupta, S. K., Gupta, T., Gupte, N., Gurs, J., Gutierrez, N., Guzman, F., H, H. -Y., Haba, D., Haberland, M., Haino, S., Hall, E. D., Hamilton, E. Z., Hammond, G., Han, W. -B., Haney, M., Hanks, J., Hanna, C., Hannam, M. D., Hannuksela, O. A., Hanselman, A. G., Hansen, H., Hanson, J., Harada, R., Hardison, A. R., Haris, K., Harmark, T., Harms, J., Harry, G. M., Harry, I. W., Hart, J., Haskell, B., Haster, C. -J., Hathaway, J. S., Haughian, K., Hayakawa, H., Hayama, K., Hayes, R., Heffernan, A., Heidmann, A., Heintze, M. C., Heinze, J., Heinzel, J., Heitmann, H., Hellman, F., Hello, P., Helmling-Cornell, A. F., Hemming, G., Henderson-Sapir, O., Hendry, M., Heng, I. S., Hennes, E., Henshaw, C., Hertog, T., Heurs, M., Hewitt, A. L., Heyns, J., Higginbotham, S., Hild, S., Hill, S., Himemoto, Y., Hirata, N., Hirose, C., Ho, W. C. G., Hoang, S., Hochheim, S., Hofman, D., Holland, N. A., Holley-Bockelmann, K., Holmes, Z. J., Holz, D. E., Honet, L., Hong, C., Hornung, J., Hoshino, S., Hough, J., Hourihane, S., Howell, E. J., Hoy, C. G., Hrishikesh, C. A., Hsieh, H. -F., Hsiung, C., Hsu, H. C., Hsu, W. -F., Hu, P., Hu, Q., Huang, H. Y., Huang, Y. -J., Huddart, A. D., Hughey, B., Hui, D. C. Y., Hui, V., Husa, S., Huxford, R., Huynh-Dinh, T., Iampieri, L., Iandolo, G. A., Ianni, M., Iess, A., Imafuku, H., Inayoshi, K., Inoue, Y., Iorio, G., Iqbal, M. H., Irwin, J., Ishikawa, R., Isi, M., Ismail, M. A., Itoh, Y., Iwanaga, H., Iwaya, M., Iyer, B. R., JaberianHamedan, V., Jacquet, C., Jacquet, P. -E., Jadhav, S. J., Jadhav, S. P., Jain, T., James, A. L., James, P. A., Jamshidi, R., Janquart, J., Janssens, K., Janthalur, N. N., Jaraba, S., Jaranowski, P., Jaume, R., Javed, W., Jennings, A., Jia, W., Jiang, J., Jin, H., Kubisz, J., Johanson, C., Johns, G. R., Johnson, N. A., Johnston, M. C., Johnston, R., Johny, N., Jones, D. H., Jones, D. I., Jones, R., Jose, S., Joshi, P., Ju, L., Jung, K., Junker, J., Juste, V., Kajita, T., Kaku, I., Kalaghatgi, C., Kalogera, V., Kamiizumi, M., Kanda, N., Kandhasamy, S., Kang, G., Kanner, J. B., Kapadia, S. J., Kapasi, D. 
P., Karat, S., Karathanasis, C., Kashyap, R., Kasprzack, M., Kastaun, W., Kato, T., Katsavounidis, E., Katzman, W., Kaushik, R., Kawabe, K., Kawamoto, R., Kazemi, A., Keitel, D., Kelley-Derzon, J., Kennington, J., Kesharwani, R., Key, J. S., Khadela, R., Khadka, S., Khalili, F. Y., Khan, F., Khan, I., Khanam, T., Khursheed, M., Khusid, N. M., Kiendrebeogo, W., Kijbunchoo, N., Kim, C., Kim, J. C., Kim, K., Kim, M. H., Kim, S., Kim, Y. -M., Kimball, C., Kinley-Hanlon, M., Kinnear, M., Kissel, J. S., Klimenko, S., Knee, A. M., Knust, N., Kobayashi, K., Koch, P., Koehlenbeck, S. M., Koekoek, G., Kohri, K., Kokeyama, K., Koley, S., Kolitsidou, P., Kolstein, M., Komori, K., Kong, A. K. H., Kontos, A., Korobko, M., Kossak, R. V., Kou, X., Koushik, A., Kouvatsos, N., Kovalam, M., Kozak, D. B., Kranzhoff, S. L., Kringel, V., Krishnendu, N. V., Królak, A., Kruska, K., Kuehn, G., Kuijer, P., Kulkarni, S., Ramamohan, A. Kulur, Kumar, A., Kumar, Praveen, Kumar, Prayush, Kumar, Rahul, Kumar, Rakesh, Kume, J., Kuns, K., Kuntimaddi, N., Kuroyanagi, S., Kurth, N. J., Kuwahara, S., Kwak, K., Kwan, K., Kwok, J., Lacaille, G., Lagabbe, P., Laghi, D., Lai, S., Laity, A. H., Lakkis, M. H., Lalande, E., Lalleman, M., Lalremruati, P. C., Landry, M., Lane, B. B., Lang, R. N., Lange, J., Lantz, B., La Rana, A., La Rosa, I., Lartaux-Vollard, A., Lasky, P. D., Lawrence, J., Lawrence, M. N., Laxen, M., Lazzarini, A., Lazzaro, C., Leaci, P., Lecoeuche, Y. K., Lee, H. M., Lee, H. W., Lee, K., Lee, R. -K., Lee, R., Lee, S., Lee, Y., Legred, I. N., Lehmann, J., Lehner, L., Jean, M. Le, Lemaître, A., Lenti, M., Leonardi, M., Lequime, M., Leroy, N., Lesovsky, M., Letendre, N., Lethuillier, M., Levin, S. E., Levin, Y., Leyde, K., Li, A. K. Y., Li, K. L., Li, T. G. F., Li, X., Li, Z., Lihos, A., Lin, C-Y., Lin, C. -Y., Lin, E. T., Lin, F., Lin, H., Lin, L. C. -C., Lin, Y. -C., Linde, F., Linker, S. D., Littenberg, T. B., Liu, A., Liu, G. C., Liu, Jian, Villarreal, F. Llamas, Llobera-Querol, J., Lo, R. K. L., Locquet, J. -P., London, L. T., Longo, A., Lopez, D., Portilla, M. Lopez, Lorenzini, M., Lorenzo-Medina, A., Loriette, V., Lormand, M., Losurdo, G., Lott IV, T. P., Lough, J. D., Loughlin, H. A., Lousto, C. O., Lowry, M. J., Lu, N., Lück, H., Lumaca, D., Lundgren, A. P., Lussier, A. W., Ma, L. -T., Ma, S., Ma'arif, M., Macas, R., Macedo, A., MacInnis, M., Maciy, R. R., Macleod, D. M., MacMillan, I. A. O., Macquet, A., Macri, D., Maeda, K., Maenaut, S., Hernandez, I. Magaña, Magare, S. S., Magazzù, C., Magee, R. M., Maggio, E., Maggiore, R., Magnozzi, M., Mahesh, M., Mahesh, S., Maini, M., Majhi, S., Majorana, E., Makarem, C. N., Makelele, E., Malaquias-Reis, J. A., Mali, U., Maliakal, S., Malik, A., Man, N., Mandic, V., Mangano, V., Mannix, B., Mansell, G. L., Mansingh, G., Manske, M., Mantovani, M., Mapelli, M., Marchesoni, F., Pina, D. Marín, Marion, F., Márka, S., Márka, Z., Markosyan, A. S., Markowitz, A., Maros, E., Marsat, S., Martelli, F., Martin, I. W., Martin, R. M., Martinez, B. B., Martinez, M., Martinez, V., Martini, A., Martinovic, K., Martins, J. C., Martynov, D. V., Marx, E. J., Massaro, L., Masserot, A., Masso-Reid, M., Mastrodicasa, M., Mastrogiovanni, S., Matcovich, T., Matiushechkina, M., Matsuyama, M., Mavalvala, N., Maxwell, N., McCarrol, G., McCarthy, R., McClelland, D. E., McCormick, S., McCuller, L., McEachin, S., McElhenny, C., McGhee, G. I., McGinn, J., McGowan, K. B. M., McIver, J., McLeod, A., McRae, T., Meacher, D., Meijer, Q., Melatos, A., Mellaerts, S., Menendez-Vazquez, A., Menoni, C. 
S., Mera, F., Mercer, R. A., Mereni, L., Merfeld, K., Merilh, E. L., Mérou, J. R., Merritt, J. D., Merzougui, M., Messenger, C., Messick, C., Metzler, Z., Meyer-Conde, M., Meylahn, F., Mhaske, A., Miani, A., Miao, H., Michaloliakos, I., Michel, C., Michimura, Y., Middleton, H., Miller, A. L., Miller, S., Millhouse, M., Milotti, E., Milotti, V., Minenkov, Y., Mio, N., Mir, Ll. M., Mirasola, L., Miravet-Tenés, M., Miritescu, C. -A., Mishra, A. K., Mishra, A., Mishra, C., Mishra, T., Mitchell, A. L., Mitchell, J. G., Mitra, S., Mitrofanov, V. P., Mittleman, R., Miyakawa, O., Miyamoto, S., Miyoki, S., Mo, G., Mobilia, L., Mohapatra, S. R. P., Mohite, S. R., Molina-Ruiz, M., Mondal, C., Mondin, M., Montani, M., Moore, C. J., Moraru, D., More, A., More, S., Moreno, G., Morgan, C., Morisaki, S., Moriwaki, Y., Morras, G., Moscatello, A., Mourier, P., Mours, B., Mow-Lowry, C. M., Muciaccia, F., Mukherjee, Arunava, Mukherjee, D., Mukherjee, Samanwaya, Mukherjee, Soma, Mukherjee, Subroto, Mukherjee, Suvodip, Mukund, N., Mullavey, A., Munch, J., Mundi, J., Mungioli, C. L., Oberg, W. R. Munn, Murakami, Y., Murakoshi, M., Murray, P. G., Muusse, S., Nabari, D., Nadji, S. L., Nagar, A., Nagarajan, N., Nagler, K. N., Nakagaki, K., Nakamura, K., Nakano, H., Nakano, M., Nandi, D., Napolano, V., Narayan, P., Nardecchia, I., Narikawa, T., Narola, H., Naticchioni, L., Nayak, R. K., Neilson, J., Nelson, A., Nelson, T. J. N., Nery, M., Neunzert, A., Ng, S., Quynh, L. Nguyen, Nichols, S. A., Nielsen, A. B., Nieradka, G., Niko, A., Nishino, Y., Nishizawa, A., Nissanke, S., Nitoglia, E., Niu, W., Nocera, F., Norman, M., North, C., Novak, J., Siles, J. F. Nuño, Nuttall, L. K., Obayashi, K., Oberling, J., O'Dell, J., Oertel, M., Offermans, A., Oganesyan, G., Oh, J. J., Oh, K., O'Hanlon, T., Ohashi, M., Ohkawa, M., Ohme, F., Oliveira, A. S., Oliveri, R., O'Neal, B., Oohara, K., O'Reilly, B., Ormsby, N. D., Orselli, M., O'Shaughnessy, R., O'Shea, S., Oshima, Y., Oshino, S., Ossokine, S., Osthelder, C., Ota, I., Ottaway, D. J., Ouzriat, A., Overmier, H., Owen, B. J., Pace, A. E., Pagano, R., Page, M. A., Pai, A., Pal, A., Pal, S., Palaia, M. A., Pálfi, M., Palma, P. P., Palomba, C., Palud, P., Pan, H., Pan, J., Pan, K. C., Panai, R., Panda, P. K., Pandey, S., Panebianco, L., Pang, P. T. H., Pannarale, F., Pannone, K. A., Pant, B. C., Panther, F. H., Paoletti, F., Paolone, A., Papalexakis, E. E., Papalini, L., Papigkiotis, G., Paquis, A., Parisi, A., Park, B. -J., Park, J., Parker, W., Pascale, G., Pascucci, D., Pasqualetti, A., Passaquieti, R., Passenger, L., Passuello, D., Patane, O., Pathak, D., Pathak, M., Patra, A., Patricelli, B., Patron, A. S., Paul, K., Paul, S., Payne, E., Pearce, T., Pedraza, M., Pegna, R., Pele, A., Arellano, F. E. Peña, Penn, S., Penuliar, M. D., Perego, A., Pereira, Z., Perez, J. J., Périgois, C., Perna, G., Perreca, A., Perret, J., Perriès, S., Perry, J. W., Pesios, D., Petracca, S., Petrillo, C., Pfeiffer, H. P., Pham, H., Pham, K. A., Phukon, K. S., Phurailatpam, H., Piarulli, M., Piccari, L., Piccinni, O. J., Pichot, M., Piendibene, M., Piergiovanni, F., Pierini, L., Pierra, G., Pierro, V., Pietrzak, M., Pillas, M., Pilo, F., Pinard, L., Pinto, I. M., Pinto, M., Piotrzkowski, B. J., Pirello, M., Pitkin, M. D., Placidi, A., Placidi, E., Planas, M. L., Plastino, W., Poggiani, R., Polini, E., Pompili, L., Poon, J., Porcelli, E., Porter, E. K., Posnansky, C., Poulton, R., Powell, J., Pracchia, M., Pradhan, B. K., Pradier, T., Prajapati, A. 
K., Prasai, K., Prasanna, R., Prasia, P., Pratten, G., Principe, G., Principe, M., Prodi, G. A., Prokhorov, L., Prosposito, P., Puecher, A., Pullin, J., Punturo, M., Puppo, P., Pürrer, M., Qi, H., Qin, J., Quéméner, G., Quetschke, V., Quigley, C., Quinonez, P. J., Raab, F. J., Raabith, S. S., Raaijmakers, G., Raja, S., Rajan, C., Rajbhandari, B., Ramirez, K. E., Vidal, F. A. Ramis, Ramos-Buades, A., Rana, D., Ranjan, S., Ransom, K., Rapagnani, P., Ratto, B., Rawat, S., Ray, A., Raymond, V., Razzano, M., Read, J., Payo, M. Recaman, Regimbau, T., Rei, L., Reid, S., Reitze, D. H., Relton, P., Renzini, A. I., Rettegno, P., Revenu, B., Reyes, R., Rezaei, A. S., Ricci, F., Ricci, M., Ricciardone, A., Richardson, J. W., Richardson, M., Rijal, A., Riles, K., Riley, H. K., Rinaldi, S., Rittmeyer, J., Robertson, C., Robinet, F., Robinson, M., Rocchi, A., Rolland, L., Rollins, J. G., Romano, A. E., Romano, R., Romero, A., Romero-Shaw, I. M., Romie, J. H., Ronchini, S., Roocke, T. J., Rosa, L., Rosauer, T. J., Rose, C. A., Rosińska, D., Ross, M. P., Rossello, M., Rowan, S., Roy, S. K., Roy, S., Rozza, D., Ruggi, P., Ruhama, N., Morales, E. Ruiz, Ruiz-Rocha, K., Sachdev, S., Sadecki, T., Sadiq, J., Saffarieh, P., Sah, M. R., Saha, S. S., Saha, S., Sainrat, T., Menon, S. Sajith, Sakai, K., Sakellariadou, M., Sakon, S., Salafia, O. S., Salces-Carcoba, F., Salconi, L., Saleem, M., Salemi, F., Sallé, M., Salvador, S., Sanchez, A., Sanchez, E. J., Sanchez, J. H., Sanchez, L. E., Sanchis-Gual, N., Sanders, J. R., Sänger, E. M., Santoliquido, F., Saravanan, T. R., Sarin, N., Sasaoka, S., Sasli, A., Sassi, P., Sassolas, B., Satari, H., Sato, R., Sato, Y., Sauter, O., Savage, R. L., Sawada, T., Sawant, H. L., Sayah, S., Scacco, V., Schaetzl, D., Scheel, M., Schiebelbein, A., Schiworski, M. G., Schmidt, P., Schmidt, S., Schnabel, R., Schneewind, M., Schofield, R. M. S., Schouteden, K., Schulte, B. W., Schutz, B. F., Schwartz, E., Scialpi, M., Scott, J., Scott, S. M., Seetharamu, T. C., Seglar-Arroyo, M., Sekiguchi, Y., Sellers, D., Sengupta, A. S., Sentenac, D., Seo, E. G., Seo, J. W., Sequino, V., Serra, M., Servignat, G., Sevrin, A., Shaffer, T., Shah, U. S., Shaikh, M. A., Shao, L., Sharma, A. K., Sharma, P., Sharma-Chaudhary, S., Shaw, M. R., Shawhan, P., Shcheblanov, N. S., Sheridan, E., Shikano, Y., Shikauchi, M., Shimode, K., Shinkai, H., Shiota, J., Shoemaker, D. H., Shoemaker, D. M., Short, R. W., ShyamSundar, S., Sider, A., Siegel, H., Sieniawska, M., Sigg, D., Silenzi, L., Simmonds, M., Singer, L. P., Singh, A., Singh, D., Singh, M. K., Singh, S., Singha, A., Sintes, A. M., Sipala, V., Skliris, V., Slagmolen, B. J. J., Slaven-Blair, T. J., Smetana, J., Smith, J. R., Smith, L., Smith, R. J. E., Smith, W. J., Soldateschi, J., Somiya, K., Song, I., Soni, K., Soni, S., Sordini, V., Sorrentino, F., Sorrentino, N., Sotani, H., Soulard, R., Southgate, A., Spagnuolo, V., Spencer, A. P., Spera, M., Spinicelli, P., Spoon, J. B., Sprague, C. A., Srivastava, A. K., Stachurski, F., Steer, D. A., Steinlechner, J., Steinlechner, S., Stergioulas, N., Stevens, P., StPierre, M., Stratta, G., Strong, M. D., Strunk, A., Sturani, R., Stuver, A. L., Suchenek, M., Sudhagar, S., Sueltmann, N., Suleiman, L., Sullivan, K. D., Sun, L., Sunil, S., Suresh, J., Sutton, P. J., Suzuki, T., Suzuki, Y., Swinkels, B. L., Syx, A., Szczepańczyk, M. J., Szewczyk, P., Tacca, M., Tagoshi, H., Tait, S. 
C., Takahashi, H., Takahashi, R., Takamori, A., Takase, T., Takatani, K., Takeda, H., Takeshita, K., Talbot, C., Tamaki, M., Tamanini, N., Tanabe, D., Tanaka, K., Tanaka, S. J., Tanaka, T., Tang, D., Tanioka, S., Tanner, D. B., Tao, L., Tapia, R. D., Martín, E. N. Tapia San, Tarafder, R., Taranto, C., Taruya, A., Tasson, J. D., Teloi, M., Tenorio, R., Themann, H., Theodoropoulos, A., Thirugnanasambandam, M. P., Thomas, L. M., Thomas, M., Thomas, P., Thompson, J. E., Thondapu, S. R., Thorne, K. A., Thrane, E., Tissino, J., Tiwari, A., Tiwari, P., Tiwari, S., Tiwari, V., Todd, M. R., Toivonen, A. M., Toland, K., Tolley, A. E., Tomaru, T., Tomita, K., Tomura, T., Tong-Yu, C., Toriyama, A., Toropov, N., Torres-Forné, A., Torrie, C. I., Toscani, M., Melo, I. Tosta e, Tournefier, E., Trapananti, A., Travasso, F., Traylor, G., Trevor, M., Tringali, M. C., Tripathee, A., Troian, G., Troiano, L., Trovato, A., Trozzo, L., Trudeau, R. J., Tsang, T. T. L., Tso, R., Tsuchida, S., Tsukada, L., Tsutsui, T., Turbang, K., Turconi, M., Turski, C., Ubach, H., Uchiyama, T., Udall, R. P., Uehara, T., Uematsu, M., Ueno, K., Ueno, S., Undheim, V., Ushiba, T., Vacatello, M., Vahlbruch, H., Vaidya, N., Vajente, G., Vajpeyi, A., Valdes, G., Valencia, J., Valentini, M., Vallejo-Peña, S. A., Vallero, S., Valsan, V., van Bakel, N., van Beuzekom, M., van Dael, M., Brand, J. F. J. van den, Broeck, C. Van Den, Vander-Hyde, D. C., van der Sluys, M., Van de Walle, A., van Dongen, J., Vandra, K., van Haevermaet, H., van Heijningen, J. V., Van Hove, P., VanKeuren, M., Vanosky, J., van Putten, M. H. P. M., van Ranst, Z., van Remortel, N., Vardaro, M., Vargas, A. F., Varghese, J. J., Varma, V., Vasúth, M., Vecchio, A., Vedovato, G., Veitch, J., Veitch, P. J., Venikoudis, S., Venneberg, J., Verdier, P., Verkindt, D., Verma, B., Verma, P., Verma, Y., Vermeulen, S. M., Vetrano, F., Veutro, A., Vibhute, A. M., Viceré, A., Vidyant, S., Viets, A. D., Vijaykumar, A., Vilkha, A., Villa-Ortega, V., Vincent, E. T., Vinet, J. -Y., Viret, S., Virtuoso, A., Vitale, S., Vives, A., Vocca, H., Voigt, D., von Reis, E. R. G., von Wrangel, J. S. A., Vyatchanin, S. P., Wade, L. E., Wade, M., Wagner, K. J., Wajid, A., Walker, M., Wallace, G. S., Wallace, L., Wang, H., Wang, J. Z., Wang, W. H., Wang, Z., Waratkar, G., Warner, J., Was, M., Washimi, T., Washington, N. Y., Watarai, D., Wayt, K. E., Weaver, B. R., Weaver, B., Weaving, C. R., Webster, S. A., Weinert, M., Weinstein, A. J., Weiss, R., Wellmann, F., Wen, L., Weßels, P., Wette, K., Whelan, J. T., Whiting, B. F., Whittle, C., Wildberger, J. B., Wilk, O. S., Wilken, D., Wilkin, A. T., Willadsen, D. J., Willetts, K., Williams, D., Williams, M. J., Williams, N. S., Willis, J. L., Willke, B., Wils, M., Winterflood, J., Wipf, C. C., Woan, G., Woehler, J., Wofford, J. K., Wolfe, N. E., Wong, H. T., Wong, H. W. Y., Wong, I. C. F., Wright, J. L., Wright, M., Wu, C., Wu, D. S., Wu, H., Wuchner, E., Wysocki, D. M., Xu, V. A., Xu, Y., Yadav, N., Yamamoto, H., Yamamoto, K., Yamamoto, T. S., Yamamoto, T., Yamamura, S., Yamazaki, R., Yan, S., Yan, T., Yang, F. W., Yang, F., Yang, K. Z., Yang, Y., Yarbrough, Z., Yasui, H., Yeh, S. -W., Yelikar, A. B., Yin, X., Yokoyama, J., Yokozawa, T., Yoo, J., Yu, H., Yuan, S., Yuzurihara, H., Zadrożny, A., Zanolin, M., Zeeshan, M., Zelenova, T., Zendri, J. -P., Zeoli, M., Zerrad, M., Zevin, M., Zhang, A. C., Zhang, L., Zhang, R., Zhang, T., Zhang, Y., Zhao, C., Zhao, Yue, Zhao, Yuhang, Zheng, Y., Zhong, H., Zhou, R., Zhu, X. -J., Zhu, Z. -H., Zimmerman, A. 
B., Zucker, M. E., Zweizig, J., Furlan, S. B. Araujo, Arzoumanian, Z., Basu, A., Cassity, A., Cognard, I., Crowter, K., del Palacio, S., Espinoza, C. M., Fonseca, E., Flynn, C. M. L., Gancio, G., Garcia, F., Gendreau, K. C., Good, D. C., Guillemot, L., Guillot, S., Keith, M. J., Kuiper, L., Lower, M. E., Lyne, A. G., McKee, J. W., Meyers, B. W., Palfreyman, J. L., Pearlman, A. B., Romero, G. E., Shannon, R. M., Shaw, B., Stairs, I. H., Stappers, B. W., Tan, C. M., Theureau, G., Thompson, M., Weltevrede, P., and Zubieta, E.
- Subjects
Astrophysics - High Energy Astrophysical Phenomena
- Abstract
Continuous gravitational wave (CW) emission from neutron stars carries information about their internal structure and equation of state, and it can provide tests of General Relativity. We present a search for CWs from a set of 45 known pulsars in the first part of the fourth LIGO-Virgo-KAGRA observing run, known as O4a. We conducted a targeted search for each pulsar using three independent analysis methods, considering both the single-harmonic and the dual-harmonic emission models. We find no evidence of a CW signal in the O4a data for either model and set upper limits on the signal amplitude and on the ellipticity, which quantifies the asymmetry in the neutron star mass distribution. For the single-harmonic emission model, 29 targets have an upper limit on the amplitude below the theoretical spin-down limit. The lowest upper limit on the amplitude is $6.4\!\times\!10^{-27}$ for the young energetic pulsar J0537-6910, while the lowest constraint on the ellipticity is $8.8\!\times\!10^{-9}$ for the bright nearby millisecond pulsar J0437-4715. Additionally, for a subset of 16 targets we performed a narrowband search, which is more robust with respect to the assumed emission model, and again found no evidence of a signal. We also found no evidence of the non-standard polarizations predicted by the Brans-Dicke theory.
- Comment: main paper: 12 pages, 6 figures, 4 tables
- Published
- 2025
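For context on the quantities constrained in the entry above, the standard relations used in targeted searches (single-harmonic emission at $f_{\mathrm{gw}} = 2 f_{\mathrm{rot}}$, fiducial principal moment of inertia $I_{zz}$, distance $d$) tie the amplitude $h_0$ to the ellipticity $\varepsilon$, while the spin-down limit assumes all rotational energy loss goes into gravitational waves:

```latex
h_0 = \frac{4\pi^2 G}{c^4}\,\frac{I_{zz}\,\varepsilon\,f_{\mathrm{gw}}^2}{d},
\qquad
h_0^{\mathrm{sd}} = \frac{1}{d}\sqrt{\frac{5\,G\,I_{zz}}{2\,c^3}\,\frac{|\dot f_{\mathrm{rot}}|}{f_{\mathrm{rot}}}}.
```

An amplitude upper limit below $h_0^{\mathrm{sd}}$, as for 29 of the targets here, therefore constrains the fraction of a pulsar's spin-down energy emitted in gravitational waves.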
12. AdaptVC: High Quality Voice Conversion with Adaptive Learning
- Author
Kim, Jaehun, Kim, Ji-Hoon, Choi, Yeunju, Nguyen, Tan Dat, Mun, Seongkyu, and Chung, Joon Son
- Subjects
Computer Science - Sound, Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
The goal of voice conversion is to transform the speech of a source speaker to sound like that of a reference speaker while preserving the original content. A key challenge is to extract disentangled linguistic content from the source and voice style from the reference. While existing approaches leverage various methods to isolate the two, generalization still requires further attention, especially for robustness in zero-shot scenarios. In this paper, we achieve successful disentanglement of content and speaker features by tuning self-supervised speech features with adapters. The adapters are trained to dynamically encode nuanced features from rich self-supervised features, and the decoder fuses them to produce speech that accurately resembles the reference with minimal loss of content. Moreover, we leverage a conditional flow matching decoder with cross-attention speaker conditioning to further boost synthesis quality and efficiency. Subjective and objective evaluations in a zero-shot scenario demonstrate that the proposed method outperforms existing models in speech quality and similarity to the reference speech.
- Comment: insufficient/incorrect information is contained in the paper
- Published
- 2025
13. MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization
- Author
Zhu, Haina, Zhou, Yizhi, Chen, Hangting, Yu, Jianwei, Ma, Ziyang, Gu, Rongzhi, Luo, Yi, Tan, Wei, and Chen, Xie
- Subjects
Computer Science - Sound, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
Recent years have witnessed the success of foundation models pre-trained with self-supervised learning (SSL) in various music informatics understanding tasks, including music tagging, instrument classification, key detection, and more. In this paper, we propose a self-supervised music representation learning model for music understanding. Distinguished from previous studies adopting random projection or existing neural codecs, the proposed model, named MuQ, is trained to predict tokens generated by Mel Residual Vector Quantization (Mel-RVQ). Our Mel-RVQ utilizes a residual linear projection structure for Mel spectrum quantization to enhance the stability and efficiency of target extraction, leading to better performance. Experiments on a large variety of downstream tasks demonstrate that MuQ outperforms previous self-supervised music representation models with only 0.9K hours of open-source pre-training data. Scaling up the data to over 160K hours and adopting iterative training consistently improve the model performance. To further validate the strength of our model, we present MuQ-MuLan, a joint music-text embedding model based on contrastive learning, which achieves state-of-the-art performance in the zero-shot music tagging task on the MagnaTagATune dataset. Code and checkpoints are open-sourced at https://github.com/tencent-ailab/MuQ.
- Published
- 2025
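A minimal sketch of plain residual vector quantization, the general technique underlying the Mel-RVQ tokenizer in the entry above (MuQ's Mel-RVQ additionally uses a residual linear projection structure over Mel spectra, which is not modeled here; codebook counts and shapes are made up):

```python
import numpy as np

def rvq_encode(x, codebooks):
    # Residual VQ: a cascade of codebooks, each quantizing the residual
    # left by the previous stage, producing one discrete token per stage.
    residual = x.copy()
    ids = []
    for cb in codebooks:                          # cb: (K, D) array of code vectors
        d = ((residual[None, :] - cb) ** 2).sum(axis=1)
        k = int(np.argmin(d))                     # nearest code for current residual
        ids.append(k)
        residual = residual - cb[k]               # pass what's left to the next stage
    return ids

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 64)) for _ in range(4)]  # 4 stages, 256 codes each
print(rvq_encode(rng.normal(size=64), codebooks))           # e.g. [17, 203, 88, 5]
```

The stage tokens then serve as prediction targets for the SSL model, playing the role that masked-token targets play in BERT-style pre-training.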
14. Kagome Metal GdNb$_6$Sn$_6$: A 4d Playground for Topological Magnetism and Electron Correlations
- Author
Xiao, Yusen, Duan, Qingchen, Li, Zhaoyi, Guo, Shu, Tan, Hengxin, and Zhong, Ruidan
- Subjects
Condensed Matter - Materials Science, Condensed Matter - Strongly Correlated Electrons
- Abstract
Magnetic kagome metals have garnered considerable attention as an ideal platform for investigating intrinsic topological structures, frustrated magnetism, and electron correlation effects. In this work, we present the synthesis and detailed characterization of GdNb$_6$Sn$_6$, a metal that features a niobium-based kagome lattice and a frustrated triangular gadolinium network. The compound adopts the HfFe$_6$Ge$_6$-type crystal structure, with lattice parameters a = b = 5.765(4) Å and c = 9.536(8) Å. Magnetic susceptibility and specific heat measurements reveal a magnetic transition near 2.3 K. Electrical transport data confirm metallic behavior, unsaturated positive magnetoresistance, and a hole-dominated multiband Hall effect. Furthermore, first-principles calculations indicate that Nb-4d orbitals predominantly contribute to the electronic states near the Fermi energy, with the band structure showing multiple topologically nontrivial crossings around the Fermi surface. This study also compares GdNb$_6$Sn$_6$ with GdV$_6$Sn$_6$, highlighting their similarities and differences. Our findings pave the way for exploring RNb$_6$Sn$_6$ (R = rare earth) with customized substitutions on the R site to fine-tune their properties.
- Comment: 7 pages, 7 figures
- Published
- 2025
15. AutoPresent: Designing Structured Visuals from Scratch
- Author
Ge, Jiaxin, Wang, Zora Zhiruo, Zhou, Xuhui, Peng, Yi-Hao, Subramanian, Sanjay, Tan, Qinyue, Sap, Maarten, Suhr, Alane, Fried, Daniel, Neubig, Graham, and Darrell, Trevor
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Computation and Language
- Abstract
Designing structured visuals such as presentation slides is essential for communicative needs, necessitating both content creation and visual planning skills. In this work, we tackle the challenge of automated slide generation, where models produce slide presentations from natural language (NL) instructions. We first introduce SlidesBench, the first benchmark for slide generation, with 7k training and 585 testing examples derived from 310 slide decks across 10 domains. SlidesBench supports evaluations that are (i) reference-based, to measure similarity to a target slide, and (ii) reference-free, to measure the design quality of generated slides alone. We benchmark end-to-end image generation and program generation methods with a variety of models, and find that programmatic methods produce higher-quality slides in user-interactable formats. Building on the success of program generation, we create AutoPresent, an 8B Llama-based model trained on 7k instructions paired with code for slide generation, and achieve results comparable to the closed-source model GPT-4o. We further explore iterative design refinement, where the model is tasked to self-refine its own output, and find that this process improves slide quality. We hope that our work will provide a basis for future work on generating structured visuals.
- Published
- 2025
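To illustrate what "program generation" for slides can look like in practice, here is a hypothetical example that builds one slide with the python-pptx library; the library choice, layout index, and content are illustrative assumptions, not SlidesBench data or AutoPresent output:

```python
from pptx import Presentation
from pptx.util import Inches, Pt

# Build one slide programmatically, in the spirit of code-based slide generation.
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])   # layout 5: "Title Only"
slide.shapes.title.text = "Quarterly Results"

# Add a body text box with a single styled paragraph.
box = slide.shapes.add_textbox(Inches(1), Inches(2), Inches(6), Inches(2))
para = box.text_frame.paragraphs[0]
para.text = "Revenue grew 12% quarter over quarter."
para.font.size = Pt(20)

prs.save("generated_slide.pptx")
```

Emitting code like this, rather than rendering pixels directly, is what keeps the output editable ("user-interactable") and is the property the paper's programmatic methods exploit.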
16. Quasinormal Ringing of de Sitter Braneworlds
- Author
Jia, Hai-Long, Guo, Wen-Di, Liu, Yu-Xiao, and Tan, Qin
- Subjects
General Relativity and Quantum Cosmology
- Abstract
Compared with the Poincaré braneworld, the de Sitter (dS) braneworld aligns more closely with the present universe, characterized by a small but finite cosmological constant. To explore the quasinormal ringing properties within the dS brane scenario, we investigate gravitational perturbations in both thin and thick dS brane configurations. Analysis of the perturbation equations reveals that the effective potential along the extra dimension takes the shape of a Pöschl-Teller potential, asymptotically approaching a constant value (mass gap) at infinity. Analytical calculations further indicate that the gravitational perturbations, apart from the zero mode, possess a series of discrete, purely imaginary quasinormal modes at late stages. This result implies that these perturbations decay without oscillation over time. The analytical findings also demonstrate that the brane structure primarily determines the distribution of the quasinormal spectrum while preserving the purely imaginary nature of the quasinormal frequencies. Subsequently, we simulate the gravitational wave signal by numerically evolving the perturbation equations, which yields late-stage results consistent with the analytical predictions. Interestingly, these quasinormal modes carry information about the cosmological constant on the brane, which provides a potential new pathway for the study of cosmology in the dS brane scenario.
- Published
- 2024
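For reference, the textbook Pöschl-Teller potential and its quasinormal spectrum (the braneworld potential above differs in approaching a finite mass gap at infinity, so this is orientation only, not the paper's result):

```latex
V(x) = \frac{V_0}{\cosh^2(\alpha x)},
\qquad
\omega_n = \pm\sqrt{V_0 - \frac{\alpha^2}{4}} \;-\; i\,\alpha\left(n + \frac{1}{2}\right),
\quad n = 0, 1, 2, \dots
```

When $V_0 \le \alpha^2/4$ the square root becomes imaginary, the real part of $\omega_n$ vanishes, and the modes decay without oscillation, which is the same qualitative behavior as the purely imaginary late-time ringing described in the abstract.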
17. Towards Compatible Fine-tuning for Vision-Language Model Updates
- Author
Wang, Zhengbo, Liang, Jian, Sheng, Lijun, He, Ran, Wang, Zilei, and Tan, Tieniu
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
Efficient fine-tuning has become a popular strategy for enhancing the capabilities of foundation models on downstream tasks by learning plug-and-play modules. However, existing methods overlook a crucial issue: if the underlying foundation model is updated, are these plug-and-play modules still effective? In this paper, we first conduct a detailed analysis of various fine-tuning methods on CLIP in terms of their compatibility with model updates. The study reveals that many high-performing fine-tuning methods fail to remain compatible with upgraded models. To address this, we propose a novel approach, Class-conditioned Context Optimization (ContCoOp), which integrates learnable prompts with class embeddings using an attention layer before inputting them into the text encoder. Consequently, the prompts can dynamically adapt to changes in the embedding space (due to model updates), ensuring continued effectiveness. Extensive experiments over 15 datasets show that our ContCoOp achieves the highest compatibility compared with the baseline methods and exhibits robust out-of-distribution generalization.
- Comment: preprint
- Published
- 2024
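A rough sketch of the mechanism described in the entry above (learnable prompt tokens attending to a class embedding before the text encoder); the dimensions, residual connection, and module layout are guesses for illustration, not the authors' code:

```python
import torch
import torch.nn as nn

class ClassConditionedPrompt(nn.Module):
    """Learnable context tokens attend to a class embedding, so the prompt can
    adapt when the foundation model's embedding space changes after an update."""
    def __init__(self, n_ctx=4, dim=512, n_heads=8):
        super().__init__()
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, dim))  # learnable prompts
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, class_emb):                      # class_emb: (B, dim)
        B = class_emb.shape[0]
        q = self.ctx.unsqueeze(0).expand(B, -1, -1)    # (B, n_ctx, dim) queries
        kv = class_emb.unsqueeze(1)                    # (B, 1, dim) key/value
        out, _ = self.attn(q, kv, kv)                  # condition prompts on the class
        return q + out   # class-conditioned context, fed on to the text encoder

prompts = ClassConditionedPrompt()(torch.randn(3, 512))
print(prompts.shape)  # torch.Size([3, 4, 512])
```

Because the context is computed *from* the class embedding rather than stored as a fixed learned vector, a model update that moves the embedding space moves the prompts with it.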
18. TimeRAF: Retrieval-Augmented Foundation model for Zero-shot Time Series Forecasting
- Author
Zhang, Huanyu, Xu, Chang, Zhang, Yi-Fan, Zhang, Zhang, Wang, Liang, Bian, Jiang, and Tan, Tieniu
- Subjects
Computer Science - Machine Learning
- Abstract
Time series forecasting plays a crucial role in data mining, driving rapid advancements across numerous industries. With the emergence of large models, time series foundation models (TSFMs) have exhibited remarkable generalization capabilities, such as zero-shot learning, through large-scale pre-training. Meanwhile, Retrieval-Augmented Generation (RAG) methods have been widely employed to enhance the performance of foundation models on unseen data, allowing models to access external knowledge. In this paper, we introduce TimeRAF, a Retrieval-Augmented Forecasting model that enhances zero-shot time series forecasting through retrieval-augmented techniques. We develop customized time series knowledge bases tailored to specific forecasting tasks. TimeRAF employs an end-to-end learnable retriever to extract valuable information from the knowledge base. Additionally, we propose Channel Prompting for knowledge integration, which effectively extracts relevant information from the retrieved knowledge along the channel dimension. Extensive experiments demonstrate the effectiveness of our model, showing significant improvements across various domains and datasets.
- Published
- 2024
19. DELA: A Novel Approach for Detecting Errors Induced by Large Atomic Condition Numbers
- Author
Tan, Youshuai, Zhang, Zhanwei, Chen, Jinfu, Ding, Zishuo, Xuan, Jifeng, and Shang, Weiyi
- Subjects
Computer Science - Software Engineering
- Abstract
Numerical programs form the foundation of modern science and engineering, providing essential solutions to complex mathematical problems. Errors in numerical results can therefore lead to harmful consequences, especially in safety-critical applications. Since only a few inputs may lead to substantial errors in numerical programs, it is essential to determine whether a given input could result in a significant error. Existing approaches tend to use the results of high-precision programs to assess whether there is a substantial error, which introduces three main challenges: difficulty of implementation, potential faults in the detection of numerical errors, and long execution times. To address these limitations, we propose a novel approach named DELA. Our approach is based on the observation that most numerical errors stem from large condition numbers in atomic operations (such as subtraction), which then propagate and accumulate. DELA injects small perturbations into the results of individual atomic operations within the program and compares the outcomes of the original program with the perturbed version to detect errors. We evaluate DELA on datasets from ATOMU and HSED, as well as data from a complex linear system-solving program. Experimental results demonstrate that we can detect all the significant errors reported by prior research. DELA shows strong alignment with the high-precision programs of ATOMU and HSED, with average Pearson and Spearman correlations of 0.86 and 0.61. Additionally, DELA effectively detects significant errors in complex programs, achieving correlation scores of 0.9763 and 0.8993. More importantly, in experiments with ATOMU and HSED, DELA's perturbed programs run within only 0.13% of the time needed by the high-precision versions; for the linear system-solving programs, DELA is 73.46 times faster than the high-precision programs.
- Published
- 2024
20. FastCHGNet: Training one Universal Interatomic Potential to 1.5 Hours with 32 GPUs
- Author
-
Zhou, Yuanchang, Hu, Siyu, Wang, Chen, Wang, Lin-Wang, Tan, Guangming, and Jia, Weile
- Subjects
Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Machine Learning - Abstract
Graph neural network universal interatomic potentials (GNN-UIPs) have demonstrated remarkable generalization and transfer capabilities in material discovery and property prediction. These models can accelerate molecular dynamics (MD) simulation by several orders of magnitude while maintaining \textit{ab initio} accuracy, making them a promising new paradigm in material simulations. One notable example is the Crystal Hamiltonian Graph Neural Network (CHGNet), pretrained on the energies, forces, stresses, and magnetic moments from the MPtrj dataset, representing a state-of-the-art GNN-UIP model for charge-informed MD simulations. However, training the CHGNet model is time-consuming (8.3 days on one A100 GPU) for three reasons: (i) it requires multi-layer propagation to reach information from more distant atoms, (ii) it requires second-order derivative calculations to update the weights, and (iii) the reference CHGNet implementation does not fully leverage the available computational capabilities. This paper introduces FastCHGNet, an optimized CHGNet, with three contributions: Firstly, we design innovative Force/Stress Readout modules to decompose Force/Stress prediction. Secondly, we adopt massive optimizations such as kernel fusion and redundancy bypass to exploit GPU computation power sufficiently. Finally, we extend CHGNet to support multiple GPUs and propose a load-balancing technique to enhance GPU utilization. Numerical results show that FastCHGNet reduces the memory footprint by a factor of 3.59. The final training time of FastCHGNet can be decreased to \textbf{1.53 hours} on 32 GPUs without sacrificing model accuracy.
- Published
- 2024
21. An Experimental Study of Passive UAV Tracking with Digital Arrays and Cellular Downlink Signals
- Author
-
Sun, Yifei, Yu, Chao, Luo, Yan, Han, Tony Xiao, Tan, Haisheng, Wang, Rui, and Lau, Francis C. M.
- Subjects
Electrical Engineering and Systems Science - Signal Processing ,Electrical Engineering and Systems Science - Systems and Control - Abstract
Given the prospects of the low-altitude economy (LAE) and the popularity of unmanned aerial vehicles (UAVs), there are increasing demands for monitoring flying objects at low altitude in wide urban areas. In this work, the widely deployed long-term evolution (LTE) base station (BS) is exploited to illuminate UAVs in bistatic trajectory tracking. Specifically, a passive sensing receiver with two digital antenna arrays is proposed and developed to capture both the line-of-sight (LoS) signal and the scattered signal off a target UAV. From their cross ambiguity function, the bistatic range, Doppler shift and angle-of-arrival (AoA) of the target UAV can be detected in a sequence of time slots. In order to address missed detections and false alarms in passive sensing, a multi-target tracking framework is adopted to track the trajectory of the target UAV. It is demonstrated by experiments that the proposed UAV tracking system can achieve meter-level accuracy., Comment: 13 pages, 10 figures, submitted to IEEE Journal for possible publication
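For intuition, a cross ambiguity function can be computed directly as a delay-Doppler correlation between the reference (LoS) and surveillance channels. The signal parameters below are illustrative, not the paper's LTE configuration:

```python
import numpy as np

fs = 1.0e6                       # sample rate (Hz), assumed
N = 4096
t = np.arange(N) / fs
rng = np.random.default_rng(1)
ref = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # LoS reference

true_delay, true_doppler = 25, 1200.0                         # samples, Hz
surv = np.roll(ref, true_delay) * np.exp(2j * np.pi * true_doppler * t)
surv += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # noise

delays = np.arange(64)
dopplers = np.linspace(-2000.0, 2000.0, 81)                   # 50 Hz grid
caf = np.empty((delays.size, dopplers.size))
for i, d in enumerate(delays):
    lagged = np.roll(ref, d)
    for j, fd in enumerate(dopplers):
        # CAF(tau, f) = | sum_t ref*(t - tau) e^{-j 2 pi f t} surv(t) |
        caf[i, j] = np.abs(np.vdot(lagged * np.exp(2j * np.pi * fd * t), surv))

i, j = np.unravel_index(np.argmax(caf), caf.shape)
print("bistatic delay (samples):", delays[i], " Doppler (Hz):", dopplers[j])
```

The CAF peak recovers the injected delay and Doppler; a real system would then feed such detections to the multi-target tracker.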
- Published
- 2024
22. Similar but Patched Code Considered Harmful -- The Impact of Similar but Patched Code on Recurring Vulnerability Detection and How to Remove Them
- Author
-
Tan, Zixuan, Zhou, Jiayuan, Hu, Xing, Pan, Shengyi, Liu, Kui, and Xia, Xin
- Subjects
Computer Science - Software Engineering ,Computer Science - Cryptography and Security - Abstract
Identifying recurring vulnerabilities is crucial for ensuring software security. Clone-based techniques, while widely used, often generate many false alarms due to the existence of similar but patched (SBP) code, which is similar to vulnerable code but is not vulnerable due to having been patched. Although the SBP code poses a great challenge to the effectiveness of existing approaches, it has not yet been well explored. In this paper, we propose a programming language agnostic framework, Fixed Vulnerability Filter (FVF), to identify and filter such SBP instances in vulnerability detection. Different from existing studies that leverage function signatures, our approach analyzes code change histories to precisely pinpoint SBPs and consequently reduce false alarms. Evaluation under practical scenarios confirms the effectiveness and precision of our approach. Remarkably, FVF identifies and filters 65.1% of false alarms from four vulnerability detection tools (i.e., ReDeBug, VUDDY, MVP, and an elementary hash-based approach) without yielding false positives. We further apply FVF to 1,081 real-world software projects and construct a real-world SBP dataset containing 6,827 SBP functions. Due to the SBP nature, the dataset can act as a strict benchmark to test the sensitivity of the vulnerability detection approach in distinguishing real vulnerabilities and SBPs. Using this dataset, we demonstrate the ineffectiveness of four state-of-the-art deep learning-based vulnerability detection approaches. Our dataset can help developers make a more realistic evaluation of vulnerability detection approaches and also paves the way for further exploration of real-world SBP scenarios., Comment: Accepted by 47th IEEE/ACM International Conference on Software Engineering (ICSE 2025)
- Published
- 2024
23. Rapid, High-resolution and Distortion-free $R_{2}^{*}$ Mapping of Fetal Brain using Multi-echo Radial FLASH and Model-based Reconstruction
- Author
-
Wang, Xiaoqing, Fan, Fongli, Tan, Zhengguo, Vasylechko, Serge, Yang, Edward, Didier, Ryne, Afacan, Onur, Uecker, Martin, Warfield, Simon K., and Gholipour, Ali
- Subjects
Physics - Medical Physics - Abstract
Purpose: To develop a rapid, high-resolution and distortion-free quantitative $R_{2}^{*}$ mapping technique for fetal brain at 3 T. Methods: A 2D multi-echo radial FLASH sequence with blip gradients is adapted for fetal brain data acquisition during maternal free breathing at 3 T. A calibrationless model-based reconstruction with sparsity constraints is developed to jointly estimate water, fat, $R_{2}^{*}$ and $B_{0}$ field maps directly from the acquired k-space data. Validations have been performed on numerical and NIST phantoms and five fetal subjects ranging from 27 weeks to 36 weeks gestational age. Results: Both numerical and experimental phantom studies confirm good accuracy and precision of the proposed method. In fetal studies, both the parallel imaging compressed sensing (PICS) technique with a Graph Cut algorithm and the model-based approach proved effective for parameter quantification, with the latter providing enhanced image details. Compared to commonly used multi-echo EPI approaches, the proposed radial technique shows improved spatial resolution (1.1 $\times$ 1.1 $\times$ 3 mm$^{3}$ vs. 2-3 $\times$ 2-3 $\times$ 3 mm$^{3}$) and reduced distortion. Quantitative $R_{2}^{*}$ results confirm good agreement between the two acquisition strategies. Additionally, high-resolution, distortion-free $R_{2}^{*}$-weighted images can be synthesized, offering complementary information to HASTE. Conclusion: This work demonstrates the feasibility of radial acquisition for motion-robust quantitative $R_{2}^{*}$ mapping of the fetal brain. This proposed multi-echo radial FLASH, combined with calibrationless model-based reconstruction, achieves accurate, distortion-free fetal brain $R_{2}^{*}$ mapping at a nominal resolution of $1.1 \times 1.1 \times 3$ mm$^{3}$ within 2 seconds., Comment: Part of this work has been presented at the ISMRM, Singapore, 2024. Submitted to Magnetic Resonance in Medicine
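As a simplified worked example of the quantity being mapped: under a water-only, mono-exponential model $S(TE) = S_0 e^{-R_{2}^{*} TE}$, $R_{2}^{*}$ follows from a log-linear least-squares fit over echo times. The paper's model-based method additionally estimates water, fat and $B_0$ directly from radial k-space; the sketch below, with assumed echo times, covers only the basic relaxometry:

```python
import numpy as np

TE = np.array([2.0, 5.0, 8.0, 11.0, 14.0]) * 1e-3   # echo times (s), assumed
R2s_true, S0 = 40.0, 100.0                          # 1/s, a.u.
signal = S0 * np.exp(-R2s_true * TE)
signal *= 1 + 0.01 * np.random.default_rng(2).standard_normal(TE.size)  # noise

# ln S = ln S0 - R2* * TE  ->  ordinary linear regression on (TE, ln S)
slope, intercept = np.polyfit(TE, np.log(signal), 1)
print(f"estimated R2* = {-slope:.1f} 1/s, S0 = {np.exp(intercept):.1f}")
```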
- Published
- 2024
24. LicenseGPT: A Fine-tuned Foundation Model for Publicly Available Dataset License Compliance
- Author
-
Tan, Jingwen, Rajbahadur, Gopi Krishnan, Li, Zi, Song, Xiangfu, Lin, Jianshan, Li, Dan, Zheng, Zibin, and Hassan, Ahmed E.
- Subjects
Computer Science - Software Engineering ,Computer Science - Artificial Intelligence - Abstract
Dataset license compliance is a critical yet complex aspect of developing commercial AI products, particularly with the increasing use of publicly available datasets. Ambiguities in dataset licenses pose significant legal risks, making it challenging even for software IP lawyers to accurately interpret rights and obligations. In this paper, we introduce LicenseGPT, a fine-tuned foundation model (FM) specifically designed for dataset license compliance analysis. We first evaluate existing legal FMs (i.e., FMs specialized in understanding and processing legal texts) and find that the best-performing model achieves a Prediction Agreement (PA) of only 43.75%. LicenseGPT, fine-tuned on a curated dataset of 500 licenses annotated by legal experts, significantly improves PA to 64.30%, outperforming both legal and general-purpose FMs. Through an A/B test and user study with software IP lawyers, we demonstrate that LicenseGPT reduces analysis time by 94.44%, from 108 seconds to 6 seconds per license, without compromising accuracy. Software IP lawyers perceive LicenseGPT as a valuable supplementary tool that enhances efficiency while acknowledging the need for human oversight in complex cases. Our work underscores the potential of specialized AI tools in legal practice and offers a publicly available resource for practitioners and researchers.
- Published
- 2024
25. CaseSumm: A Large-Scale Dataset for Long-Context Summarization from U.S. Supreme Court Opinions
- Author
-
Heddaya, Mourad, MacMillan, Kyle, Malani, Anup, Mei, Hongyuan, and Tan, Chenhao
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Computers and Society ,Computer Science - Machine Learning - Abstract
This paper introduces CaseSumm, a novel dataset for long-context summarization in the legal domain that addresses the need for longer and more complex datasets for summarization evaluation. We collect 25.6K U.S. Supreme Court (SCOTUS) opinions and their official summaries, known as "syllabuses." Our dataset is the largest open legal case summarization dataset, and is the first to include summaries of SCOTUS decisions dating back to 1815. We also present a comprehensive evaluation of LLM-generated summaries using both automatic metrics and expert human evaluation, revealing discrepancies between these assessment methods. Our evaluation shows Mistral 7b, a smaller open-source model, outperforms larger models on most automatic metrics and successfully generates syllabus-like summaries. In contrast, human expert annotators indicate that Mistral summaries contain hallucinations. The annotators consistently rank GPT-4 summaries as clearer and exhibiting greater sensitivity and specificity. Further, we find that LLM-based evaluations are not more correlated with human evaluations than traditional automatic metrics. In addition, our analysis identifies specific hallucinations in generated summaries, including precedent citation errors and misrepresentations of case facts. These findings demonstrate the limitations of current automatic evaluation methods for legal summarization and highlight the critical role of human evaluation in assessing summary quality, particularly in complex, high-stakes domains. CaseSumm is available at https://huggingface.co/datasets/ChicagoHAI/CaseSumm
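The dataset id in the URL above can be loaded directly with the `datasets` library; the field names of each record are not stated in the abstract, so the inspection below simply prints whatever schema the dataset card defines:

```python
from datasets import load_dataset

ds = load_dataset("ChicagoHAI/CaseSumm")   # id taken from the link above
print(ds)                                  # available splits and their sizes
first_split = next(iter(ds.values()))
print(first_split[0].keys())               # record schema (opinion/syllabus fields)
```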
- Published
- 2024
26. E2EDiff: Direct Mapping from Noise to Data for Enhanced Diffusion Models
- Author
-
Tan, Zhiyu, Qian, WenXu, Chen, Hesen, Yang, Mengping, Chen, Lei, and Li, Hao
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Diffusion models have emerged as a powerful framework for generative modeling, achieving state-of-the-art performance across various tasks. However, they face several inherent limitations, including a training-sampling gap, information leakage in the progressive noising process, and the inability to incorporate advanced loss functions like perceptual and adversarial losses during training. To address these challenges, we propose an innovative end-to-end training framework that aligns the training and sampling processes by directly optimizing the final reconstruction output. Our method eliminates the training-sampling gap, mitigates information leakage by treating the training process as a direct mapping from pure noise to the target data distribution, and enables the integration of perceptual and adversarial losses into the objective. Extensive experiments on benchmarks such as COCO30K and HW30K demonstrate that our approach consistently outperforms traditional diffusion models, achieving superior results in terms of FID and CLIP score, even with reduced sampling steps. These findings highlight the potential of end-to-end training to advance diffusion-based generative models toward more robust and efficient solutions., Comment: technical report, to be further updated
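The training-equals-sampling idea can be illustrated with a toy unrolled loop: sampling starts from pure noise, every step stays differentiable, and the loss (plain MSE here, standing in for the perceptual and adversarial terms) is applied only to the final output. This is a sketch of the general idea, not the paper's architecture:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
steps = 4                                   # unrolled sampling steps (toy)

target = torch.randn(8, 16)                 # stand-in data batch
x = torch.randn(8, 16)                      # sampling starts from pure noise
for _ in range(steps):                      # differentiable sampling loop
    x = x + net(x) / steps                  # toy residual "denoising" update
loss = nn.functional.mse_loss(x, target)    # loss on the final output only
opt.zero_grad()
loss.backward()                             # gradients flow through all steps
opt.step()
print(float(loss))
```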
- Published
- 2024
27. Diffractive Magic Cube Network with Super-high Capacity Enabled by Mechanical Reconfiguration
- Author
-
Feng, Peijie, Liu, Fubei, Liu, Yuanfeng, Chong, Mingzhe, Zhang, Zongkun, Zhao, Qian, Sun, Jingbo, Zhou, Ji, and Tan, Yunhua
- Subjects
Physics - Optics ,Physics - Applied Physics ,Physics - Data Analysis, Statistics and Probability - Abstract
Multiplexing and dynamic reconfigurable metasurfaces have been extensively studied to enhance system capacity in response to the challenges posed by the exponential growth of optical information. Among them, the mechanically reconfigurable strategy offers a cost-effective and low-complexity approach for capacity enhancement. However, the channel numbers achieved in current studies are insufficient for practical applications because of inadequate mechanical transformations and suboptimal optimization methods. In this article, a diffractive magic cube network (DMCN) is proposed to advance the multiplexing capacity of mechanically reconfigurable metasurfaces. We utilized the deep diffractive neural network (D2NN) model to jointly optimize the subset of channels generated by the combination of three mechanical operations, permutation, translation, and rotation. The 144-channel holograms, 108-channel single-focus/multi-focus, and 60-channel orbital angular momentum (OAM) beam/comb generation were numerically achieved and experimentally validated using a spatial light modulator (SLM) and a reflective mirror. Our strategy not only provides a novel paradigm for improving metasurface capacity to a super-high level with low crosstalk, but also paves the way for new advancements in optical storage, computing, communication, and photolithography., Comment: 17 pages, 6 figures
- Published
- 2024
28. Unlocking adaptive digital pathology through dynamic feature learning
- Author
-
Li, Jiawen, Guan, Tian, Xia, Qingxin, Wang, Yizhi, Ling, Xitong, Li, Jing, Huang, Qiang, Wang, Zihan, Shen, Zhiyuan, Ma, Yifei, Zhao, Zimo, Lei, Zhe, Chen, Tiandong, Tan, Junbo, Wang, Xueqian, Bian, Xiu-Wu, Wang, Zhe, Guo, Lingchuan, He, Chao, and He, Yonghong
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Foundation models have revolutionized the paradigm of digital pathology, as they leverage general-purpose features to emulate real-world pathological practices, enabling the quantitative analysis of critical histological patterns and the dissection of cancer-specific signals. However, these static general features constrain the flexibility and pathological relevance in the ever-evolving needs of clinical applications, hindering the broad use of the current models. Here we introduce PathFiT, a dynamic feature learning method that can be effortlessly plugged into various pathology foundation models to unlock their adaptability. Meanwhile, PathFiT performs seamless implementation across diverse pathology applications regardless of downstream specificity. To validate PathFiT, we construct a digital pathology benchmark with over 20 terabytes of Internet and real-world data comprising 28 H\&E-stained tasks and 7 specialized imaging tasks including Masson's Trichrome staining and immunofluorescence images. By applying PathFiT to the representative pathology foundation models, we demonstrate state-of-the-art performance on 34 out of 35 tasks, with significant improvements on 23 tasks and a 10.20% improvement on specialized imaging tasks. The superior performance and versatility of PathFiT open up new avenues in computational pathology., Comment: 49 pages, 14 figures
- Published
- 2024
29. Subconscious Robotic Imitation Learning
- Author
-
Xie, Jun, Wang, Zhicheng, Tan, Jianwei, Lin, Huanxu, and Ma, Xiaoguang
- Subjects
Computer Science - Robotics - Abstract
Although robotic imitation learning (RIL) is promising for embodied intelligent robots, existing RIL approaches rely on computationally intensive multi-model trajectory predictions, resulting in slow execution and limited real-time responsiveness. In contrast, the human subconscious constantly processes and stores vast amounts of information from experience, perception, and learning, allowing people to perform complex actions such as riding a bike without consciously thinking about each step. Inspired by this phenomenon in action neurology, we introduce subconscious robotic imitation learning (SRIL), wherein cognitive offloading is combined with historical action chunking to reduce delays caused by model inference, thereby accelerating task execution. This process is further enhanced by a subconscious downsampling and pattern-augmented learning policy, wherein intent-rich information is addressed with quantized sampling techniques to improve manipulation efficiency. Experimental results demonstrate that execution speeds of SRIL were 100\% to 200\% faster than SOTA policies for comprehensive dual-arm tasks, with consistently higher success rates.
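The action-chunking ingredient mentioned above is easy to sketch: the (expensive) policy is queried once every k steps and the predicted chunk is replayed in between, cutting inference calls by a factor of k. The `policy` below is a placeholder and the chunk size is an assumption:

```python
from collections import deque

CHUNK = 8  # actions predicted per inference call (assumed)

def policy(observation) -> list:
    """Stand-in for a learned policy that predicts a chunk of future actions."""
    return [f"action_{observation}_{i}" for i in range(CHUNK)]

buffer = deque()
for step in range(24):
    if not buffer:                    # infer only when the last chunk is spent
        buffer.extend(policy(observation=step))
    action = buffer.popleft()
    # execute(action)                 # robot I/O elided
    print(step, action)
```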
- Published
- 2024
30. Mining Platoon Patterns from Traffic Videos
- Author
-
Bei, Yijun, Ma, Teng, Zhang, Dongxiang, Wu, Sai, Tan, Kian-Lee, and Chen, Gang
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Databases - Abstract
Discovering co-movement patterns from urban-scale video data sources has emerged as an attractive topic. This task aims to identify groups of objects that travel together along a common route, which offers effective support for government agencies in enhancing smart city management. However, previous work makes a strong assumption about the accuracy of trajectories recovered from videos, and its co-movement pattern definition requires the group of objects to appear across consecutive cameras along the common route. In practice, this often leads to missing patterns if a vehicle is not correctly identified from a certain camera due to object occlusion or vehicle mis-matching. To address this challenge, we propose a relaxed definition of co-movement patterns from video data, which removes the consecutiveness requirement in the common route and accommodates a certain number of missing captured cameras for objects within the group. Moreover, a novel enumeration framework called MaxGrowth is developed to efficiently retrieve the relaxed patterns. Unlike previous filter-and-refine frameworks comprising both candidate enumeration and subsequent candidate verification procedures, MaxGrowth incurs no verification cost for the candidate patterns. It treats the co-movement pattern as an equivalent sequence of clusters, enumerating candidates with increasing sequence length while avoiding the generation of any false positives. Additionally, we also propose two effective pruning rules to efficiently filter the non-maximal patterns. Extensive experiments are conducted to validate the efficiency of MaxGrowth and the quality of its generated co-movement patterns. MaxGrowth runs up to two orders of magnitude faster than the baseline algorithm. It also demonstrates high accuracy on a real video dataset even when the trajectory recovery algorithm is imperfect.
- Published
- 2024
31. Topological Gauge Theories with Sixteen Supercharges: Higher $A_\infty$-categorification of Floer Homologies
- Author
-
Er, Arif and Tan, Meng-Chwan
- Subjects
High Energy Physics - Theory ,Mathematics - Algebraic Geometry ,Mathematics - Differential Geometry ,Mathematics - Geometric Topology ,Mathematics - Symplectic Geometry - Abstract
This work is a sequel to [arXiv:2410.18575], and a third and final installment of the program initiated in [arXiv:2311.18302]. We will show how, via a 3d gauged Landau-Ginzburg model interpretation of certain topologically-twisted 5d $\mathcal{N} = 2$ and 8d $\mathcal{N} = 1$ gauge theories, one can derive novel Fueter type $A_{\infty}$-2-categories that 2-categorify the 3d-Haydys-Witten, Haydys-Witten, and holomorphic Donaldson-Thomas Floer homology of two, four, and five-manifolds, respectively. Via a 2d gauged Landau-Ginzburg model interpretation of the aforementioned twisted gauge theories, these Fueter type $A_{\infty}$-2-categories can be shown to be equivalent to corresponding Fukaya-Seidel type $A_{\infty}$-categories. Together with previous results from [arXiv:2410.18575] and [arXiv:2311.18302], we will furnish purely physical proofs and generalizations of the mathematical conjectures by Bousseau [3] and Doan-Rezchikov [4]., Comment: 70 pp. This work is a sequel to arXiv:2410.18575, and a third and final installment of the program initiated in arXiv:2311.18302
- Published
- 2024
32. SegKAN: High-Resolution Medical Image Segmentation with Long-Distance Dependencies
- Author
-
Tan, Shengbo, Xue, Rundong, Luo, Shipeng, Zhang, Zeyu, Wang, Xinran, Zhang, Lei, Ergu, Daji, Yi, Zhang, Zhao, Yang, and Cai, Ying
- Subjects
Electrical Engineering and Systems Science - Image and Video Processing ,Computer Science - Computer Vision and Pattern Recognition - Abstract
Hepatic vessels in computed tomography scans often suffer from image fragmentation and noise interference, making it difficult to maintain vessel integrity and posing significant challenges for vessel segmentation. To address this issue, we propose an innovative model: SegKAN. First, we improve the conventional embedding module by adopting a novel convolutional network structure for image embedding, which smooths out image noise and prevents issues such as gradient explosion in subsequent stages. Next, we transform the spatial relationships between patch blocks into temporal relationships to solve the problem of capturing positional relationships between patch blocks in traditional Vision Transformer models. We conducted experiments on a hepatic vessel dataset, and compared to the existing state-of-the-art model, the Dice score improved by 1.78%. These results demonstrate that the proposed new structure effectively enhances the segmentation performance of high-resolution extended objects. Code will be available at https://github.com/goblin327/SegKAN
- Published
- 2024
33. DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT
- Author
-
Hu, Xiaotao, Yin, Wei, Jia, Mingkai, Deng, Junyuan, Guo, Xiaoyang, Zhang, Qian, Long, Xiaoxiao, and Tan, Ping
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent successes in autoregressive (AR) generation models, such as the GPT series in natural language processing, have motivated efforts to replicate this success in visual tasks. Some works attempt to extend this approach to autonomous driving by building video-based world models capable of generating realistic future video sequences and predicting ego states. However, prior works tend to produce unsatisfactory results, as the classic GPT framework is designed to handle 1D contextual information, such as text, and lacks the inherent ability to model the spatial and temporal dynamics essential for video generation. In this paper, we present DrivingWorld, a GPT-style world model for autonomous driving, featuring several spatial-temporal fusion mechanisms. This design enables effective modeling of both spatial and temporal dynamics, facilitating high-fidelity, long-duration video generation. Specifically, we propose a next-state prediction strategy to model temporal coherence between consecutive frames and apply a next-token prediction strategy to capture spatial information within each frame. To further enhance generalization ability, we propose a novel masking strategy and reweighting strategy for token prediction to mitigate long-term drifting issues and enable precise control. Our work demonstrates the ability to produce high-fidelity and consistent video clips of over 40 seconds in duration, which is over 2 times longer than state-of-the-art driving world models. Experiments show that, in contrast to prior works, our method achieves superior visual quality and significantly more accurate controllable future video generation. Our code is available at https://github.com/YvanYin/DrivingWorld.
- Published
- 2024
34. DeepSeek-V3 Technical Report
- Author
-
DeepSeek-AI, Liu, Aixin, Feng, Bei, Xue, Bing, Wang, Bingxuan, Wu, Bochao, Lu, Chengda, Zhao, Chenggang, Deng, Chengqi, Zhang, Chenyu, Ruan, Chong, Dai, Damai, Guo, Daya, Yang, Dejian, Chen, Deli, Ji, Dongjie, Li, Erhang, Lin, Fangyun, Dai, Fucong, Luo, Fuli, Hao, Guangbo, Chen, Guanting, Li, Guowei, Zhang, H., Bao, Han, Xu, Hanwei, Wang, Haocheng, Zhang, Haowei, Ding, Honghui, Xin, Huajian, Gao, Huazuo, Li, Hui, Qu, Hui, Cai, J. L., Liang, Jian, Guo, Jianzhong, Ni, Jiaqi, Li, Jiashi, Wang, Jiawei, Chen, Jin, Chen, Jingchang, Yuan, Jingyang, Qiu, Junjie, Li, Junlong, Song, Junxiao, Dong, Kai, Hu, Kai, Gao, Kaige, Guan, Kang, Huang, Kexin, Yu, Kuai, Wang, Lean, Zhang, Lecong, Xu, Lei, Xia, Leyi, Zhao, Liang, Wang, Litong, Zhang, Liyue, Li, Meng, Wang, Miaojun, Zhang, Mingchuan, Zhang, Minghua, Tang, Minghui, Li, Mingming, Tian, Ning, Huang, Panpan, Wang, Peiyi, Zhang, Peng, Wang, Qiancheng, Zhu, Qihao, Chen, Qinyu, Du, Qiushi, Chen, R. J., Jin, R. L., Ge, Ruiqi, Zhang, Ruisong, Pan, Ruizhe, Wang, Runji, Xu, Runxin, Zhang, Ruoyu, Chen, Ruyi, Li, S. S., Lu, Shanghao, Zhou, Shangyan, Chen, Shanhuang, Wu, Shaoqing, Ye, Shengfeng, Ma, Shirong, Wang, Shiyu, Zhou, Shuang, Yu, Shuiping, Zhou, Shunfeng, Pan, Shuting, Wang, T., Yun, Tao, Pei, Tian, Sun, Tianyu, Xiao, W. L., Zeng, Wangding, Zhao, Wanjia, An, Wei, Liu, Wen, Liang, Wenfeng, Gao, Wenjun, Yu, Wenqin, Zhang, Wentao, Li, X. Q., Jin, Xiangyue, Wang, Xianzu, Bi, Xiao, Liu, Xiaodong, Wang, Xiaohan, Shen, Xiaojin, Chen, Xiaokang, Zhang, Xiaokang, Chen, Xiaosha, Nie, Xiaotao, Sun, Xiaowen, Wang, Xiaoxiang, Cheng, Xin, Liu, Xin, Xie, Xin, Liu, Xingchao, Yu, Xingkai, Song, Xinnan, Shan, Xinxia, Zhou, Xinyi, Yang, Xinyu, Li, Xinyuan, Su, Xuecheng, Lin, Xuheng, Li, Y. K., Wang, Y. Q., Wei, Y. X., Zhu, Y. X., Zhang, Yang, Xu, Yanhong, Huang, Yanping, Li, Yao, Zhao, Yao, Sun, Yaofeng, Li, Yaohui, Wang, Yaohui, Yu, Yi, Zheng, Yi, Zhang, Yichao, Shi, Yifan, Xiong, Yiliang, He, Ying, Tang, Ying, Piao, Yishi, Wang, Yisong, Tan, Yixuan, Ma, Yiyang, Liu, Yiyuan, Guo, Yongqiang, Wu, Yu, Ou, Yuan, Zhu, Yuchen, Wang, Yuduan, Gong, Yue, Zou, Yuheng, He, Yujia, Zha, Yukun, Xiong, Yunfan, Ma, Yunxian, Yan, Yuting, Luo, Yuxiang, You, Yuxiang, Liu, Yuxuan, Zhou, Yuyang, Wu, Z. F., Ren, Z. Z., Ren, Zehui, Sha, Zhangli, Fu, Zhe, Xu, Zhean, Huang, Zhen, Zhang, Zhen, Xie, Zhenda, Zhang, Zhengyan, Hao, Zhewen, Gou, Zhibin, Ma, Zhicheng, Yan, Zhigang, Shao, Zhihong, Xu, Zhipeng, Wu, Zhiyu, Zhang, Zhongyu, Li, Zhuoshu, Gu, Zihui, Zhu, Zijia, Liu, Zijun, Li, Zilin, Xie, Ziwei, Song, Ziyang, Gao, Ziyi, and Pan, Zizheng
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence - Abstract
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. The model checkpoints are available at https://github.com/deepseek-ai/DeepSeek-V3.
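The auxiliary-loss-free load-balancing strategy can be sketched as follows, based on the report's description: a per-expert bias is added to the router scores for top-k selection only, and nudged after each batch according to whether each expert was over- or under-loaded. Shapes and the update speed `gamma` are illustrative:

```python
import torch

n_experts, top_k, gamma = 8, 2, 0.001
bias = torch.zeros(n_experts)

def route(scores: torch.Tensor) -> torch.Tensor:
    """scores: (tokens, n_experts) affinities. Returns chosen expert indices."""
    global bias
    _, idx = (scores + bias).topk(top_k, dim=-1)     # bias affects selection...
    # ...while gating weights would still come from the *unbiased* scores.
    load = torch.bincount(idx.flatten(), minlength=n_experts).float()
    target = idx.numel() / n_experts
    bias = bias - gamma * torch.sign(load - target)  # penalize overloaded experts
    return idx

tokens = torch.randn(64, n_experts)
print(route(tokens).shape, bias)
```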
- Published
- 2024
35. KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing
- Author
-
Zhao, Shu, Yu, Tan, Hao, Xiaoshuai, Ma, Wenchao, and Narayanan, Vijaykrishnan
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Deep hashing has been widely used for large-scale approximate nearest neighbor search due to its storage and search efficiency. However, existing deep hashing methods predominantly rely on abundant training data, leaving the more challenging scenario of low-resource adaptation for deep hashing relatively underexplored. This setting involves adapting pre-trained models to downstream tasks with only an extremely small number of training samples available. Our preliminary benchmarks reveal that current methods suffer significant performance degradation due to the distribution shift caused by limited training samples. To address these challenges, we introduce Class-Calibration LoRA (CLoRA), a novel plug-and-play approach that dynamically constructs low-rank adaptation matrices by leveraging class-level textual knowledge embeddings. CLoRA effectively incorporates prior class knowledge as anchors, enabling parameter-efficient fine-tuning while maintaining the original data distribution. Furthermore, we propose Knowledge-Guided Discrete Optimization (KIDDO), a framework to utilize class knowledge to compensate for the scarcity of visual information and enhance the discriminability of hash codes. Extensive experiments demonstrate that our proposed method, Knowledge-Anchored Low-Resource Adaptation Hashing (KALAHash), significantly boosts retrieval performance and achieves 4x data efficiency in low-resource scenarios., Comment: Accepted at AAAI 2025
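A sketch of the knowledge-anchored low-rank idea, with dimensions and the exact construction as assumptions rather than the paper's recipe: the down-projection is derived from class-level text embeddings while the up-projection is learned, so the adaptation stays anchored to prior class knowledge:

```python
import torch

d_model, rank, n_classes = 512, 8, 10
class_text_emb = torch.randn(n_classes, d_model)   # from a text encoder (stand-in)

# Down-projection A derived from class knowledge; up-projection B learned from
# scratch (zero-initialized, as in standard LoRA, so training starts neutral).
mixer = torch.randn(rank, n_classes) / n_classes   # illustrative combination
A = mixer @ class_text_emb                         # (rank, d_model), knowledge-anchored
B = torch.zeros(d_model, rank, requires_grad=True)

def adapted(x: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Frozen weight W plus the knowledge-anchored low-rank update B @ A."""
    return x @ W.T + (x @ A.T) @ B.T

x = torch.randn(4, d_model)
W = torch.randn(d_model, d_model)                  # frozen pre-trained weight
print(adapted(x, W).shape)                         # torch.Size([4, 512])
```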
- Published
- 2024
36. Manga Generation via Layout-controllable Diffusion
- Author
-
Chen, Siyu, Li, Dengjie, Bao, Zenghao, Zhou, Yao, Tan, Lingfeng, Zhong, Yujie, and Zhao, Zheng
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Generating comics through text is widely studied. However, there are few studies on generating multi-panel Manga (Japanese comics) solely based on plain text. Japanese manga contains multiple panels on a single page, with characteristics such as coherence in storytelling, reasonable and diverse page layouts, consistency in characters, and semantic correspondence between panel drawings and panel scripts. Therefore, generating manga poses a significant challenge. This paper presents the manga generation task and constructs the Manga109Story dataset for studying manga generation solely from plain text. Additionally, we propose MangaDiffusion to facilitate the intra-panel and inter-panel information interaction during the manga generation process. The results show that our method in particular ensures the correct number of panels and reasonable, diverse page layouts. Based on our approach, there is potential to convert a large number of textual stories into more engaging manga readings, suggesting significant application prospects.
- Published
- 2024
37. Gravitational Waves from Post-Collision of Fuzzy Dark Matter Solitons
- Author
-
Tan, Chen, Bin, Jing-Kang, and Wang, Ke
- Subjects
Astrophysics - Cosmology and Nongalactic Astrophysics ,Astrophysics - Astrophysics of Galaxies ,High Energy Physics - Phenomenology - Abstract
According to the Schr\"odinger-Poisson (SP) equations, fuzzy dark matter (FDM) can form a stable equilibrium configuration, the so-called FDM soliton. The SP system also determines the evolution of FDM solitons, such as head-on collisions. In this paper, we first propose a new adimensional unit of length, time and mass. We then simulate the adimensional SP system with $\mathtt{PyUltraLight}$ to study the GWs from the post-collision of FDM solitons, assuming the linearized theory is valid and ignoring the GW back-reaction on the evolution of the solitons. Finally, we find that the GWs from post-collisions have a frequency of (a few decades)$^{-1}$ or (a few years)$^{-1}$ when the FDM mass is $m=10^{-18}\rm{eV}/c^2$ or $m=10^{-17}\rm{eV}/c^2$, respectively. Therefore, future detection of such GWs will constrain the properties of FDM particles and solitons., Comment: 9 pages, 7 figures
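For reference, the Schr\"odinger-Poisson system the abstract builds on reads (in one standard convention, with $|\psi|^2$ the number density so that the mass density is $m|\psi|^2$; signs and normalizations vary between references):

$$
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + m\Phi\,\psi,
\qquad
\nabla^{2}\Phi = 4\pi G\,m\,|\psi|^{2},
$$

where $\psi$ is the FDM wave function, $\Phi$ the Newtonian gravitational potential, and $m$ the FDM particle mass.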
- Published
- 2024
38. VoiceDiT: Dual-Condition Diffusion Transformer for Environment-Aware Speech Synthesis
- Author
-
Jung, Jaemin, Ahn, Junseok, Jung, Chaeyoung, Nguyen, Tan Dat, Jang, Youngjoon, and Chung, Joon Son
- Subjects
Electrical Engineering and Systems Science - Audio and Speech Processing ,Computer Science - Sound - Abstract
We present VoiceDiT, a multi-modal generative model for producing environment-aware speech and audio from text and visual prompts. While aligning speech with text is crucial for intelligible speech, achieving this alignment in noisy conditions remains a significant and underexplored challenge in the field. To address this, we present a novel audio generation pipeline named VoiceDiT. This pipeline includes three key components: (1) the creation of a large-scale synthetic speech dataset for pre-training and a refined real-world speech dataset for fine-tuning, (2) the Dual-DiT, a model designed to efficiently preserve aligned speech information while accurately reflecting environmental conditions, and (3) a diffusion-based Image-to-Audio Translator that allows the model to bridge the gap between audio and image, facilitating the generation of environmental sound that aligns with the multi-modal prompts. Extensive experimental results demonstrate that VoiceDiT outperforms previous models on real-world datasets, showcasing significant improvements in both audio quality and modality integration., Comment: Accepted to ICASSP 2025
- Published
- 2024
39. $P$-wave bottom baryons of the $SU(3)$ flavor $\mathbf{\bar3}_F$
- Author
-
Wang, Yi-Jie, Luo, Xuan, Chen, Hua-Xing, Cui, Er-Liang, Tan, Wei-Han, and Zhou, Zhi-Yong
- Subjects
High Energy Physics - Phenomenology - Abstract
We study the $P$-wave bottom baryons of the $SU(3)$ flavor $\mathbf{\bar3}_F$ and systematically calculate their strong decay properties, including their $D$-wave decays into ground-state bottom baryons with light pseudoscalar mesons and $S$-wave decays into ground-state bottom baryons with light vector mesons. Together with Refs.~\cite{Tan:2023opd,Yang:2019cvw,Yang:2020zrh,Luo:2024jov}, a rather complete investigation has been performed to study their mass spectra and strong/radiative decay properties, through the methods of QCD sum rules and light-cone sum rules within the framework of heavy quark effective theory. Among various possibilities, we identify four $\Lambda_b$ and four $\Xi_b$ baryons, with limited decay widths and so capable of being observed in experiments. Their masses, mass splittings within the same multiplets, and strong/radiative decay widths are summarized in Table~\ref{tab:decayb3f} for future experimental searching., Comment: arXiv admin note: text overlap with arXiv:2407.04433
- Published
- 2024
40. RapGuard: Safeguarding Multimodal Large Language Models via Rationale-aware Defensive Prompting
- Author
-
Jiang, Yilei, Tan, Yingshui, and Yue, Xiangyu
- Subjects
Computer Science - Computation and Language - Abstract
While Multimodal Large Language Models (MLLMs) have made remarkable progress in vision-language reasoning, they are also more susceptible to producing harmful content compared to models that focus solely on text. Existing defensive prompting techniques rely on a static, unified safety guideline that fails to account for the specific risks inherent in different multimodal contexts. To address these limitations, we propose RapGuard, a novel framework that uses multimodal chain-of-thought reasoning to dynamically generate scenario-specific safety prompts. RapGuard enhances safety by adapting its prompts to the unique risks of each input, effectively mitigating harmful outputs while maintaining high performance on benign tasks. Our experimental results across multiple MLLM benchmarks demonstrate that RapGuard achieves state-of-the-art safety performance, significantly reducing harmful content without degrading the quality of responses.
- Published
- 2024
41. A Novel Algorithm for Periodic Conformal Flattening of Genus-one and Multiply Connected Genus-zero Surfaces
- Author
-
Tan, Zhong-Heng, Li, Tiexiang, Lin, Wen-Wei, and Yau, Shing-Tung
- Subjects
Mathematics - Numerical Analysis ,49Q10, 52C26, 65D18, 65F05, 68U05 - Abstract
In this paper, we propose a novel method for genus-one and multiply connected genus-zero surfaces, namely periodic conformal flattening. The primary advantage of this method is its independence from the cut paths and its preservation of consistency across the cut seams, which introduces no additional conformal distortion near the seams. We utilize the conformal energy minimization technique to compute the desired conformal map, which is characterized as an easily solved quadratic functional minimization problem. The numerical experiments illustrate that our proposed algorithms, DPCF and SPCF, are highly accurate and achieve a 4-5 times improvement in efficiency compared with state-of-the-art algorithms.
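The quadratic functional minimization mentioned above reduces to a single sparse linear solve at the stationary point: minimizing $E(x) = \tfrac{1}{2}x^{T}Ax - b^{T}x$ gives $Ax = b$. The toy sketch below illustrates that reduction with a 1-D Laplacian standing in for the cotangent-Laplacian-type matrix a mesh would produce:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")  # sparse SPD matrix
b = np.zeros(n)
b[-1] = 1.0                     # boundary data folded into the right-hand side

x = spla.spsolve(A, b)          # stationarity condition: A x = b
print(x[:5])                    # smooth ramp between the boundary values
```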
- Published
- 2024
42. An Attentive Dual-Encoder Framework Leveraging Multimodal Visual and Semantic Information for Automatic OSAHS Diagnosis
- Author
-
Wei, Yingchen, Qiu, Xihe, Tan, Xiaoyu, Huang, Jingjing, Chu, Wei, Xu, Yinghui, and Qi, Yuan
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Machine Learning - Abstract
Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a common sleep disorder caused by upper airway blockage, leading to oxygen deprivation and disrupted sleep. Traditional diagnosis using polysomnography (PSG) is expensive, time-consuming, and uncomfortable. Existing deep learning methods using facial image analysis lack accuracy due to poor facial feature capture and limited sample sizes. To address this, we propose a multimodal dual encoder model that integrates visual and language inputs for automated OSAHS diagnosis. The model balances data using RandomOverSampler, extracts key facial features with attention grids, and converts physiological data into meaningful text. Cross-attention combines image and text data for better feature extraction, and an ordered regression loss ensures stable learning. Our approach improves diagnostic efficiency and accuracy, achieving 91.3% top-1 accuracy in a four-class severity classification task, demonstrating state-of-the-art performance. Code will be released upon acceptance., Comment: 5 pages, 2 figures, Published as a conference paper at ICASSP 2025
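The ordered regression loss can be illustrated with the common binary-decomposition formulation: a four-class severity label $y$ becomes three cumulative binary targets $[y>0, y>1, y>2]$, each trained with binary cross-entropy. The paper's exact loss may differ; this is a sketch of the idea:

```python
import torch
import torch.nn.functional as F

def ordinal_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """logits: (batch, K-1) threshold logits; y: (batch,) labels in {0..K-1}."""
    K = logits.shape[1] + 1
    thresholds = torch.arange(K - 1, device=y.device)      # 0, 1, 2
    targets = (y.unsqueeze(1) > thresholds).float()        # cumulative labels
    return F.binary_cross_entropy_with_logits(logits, targets)

logits = torch.randn(8, 3)            # four severity classes -> three thresholds
y = torch.randint(0, 4, (8,))
print(ordinal_loss(logits, y))
```

Unlike plain cross-entropy, this loss respects the ordering of the severity classes, so a prediction one class off is penalized less than one three classes off.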
- Published
- 2024
43. Unified Local and Global Attention Interaction Modeling for Vision Transformers
- Author
-
Nguyen, Tan, Heldermon, Coy D., and Toler-Franklin, Corey
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,I.5.0, I.5.4, I.4.0 - Abstract
We present a novel method that extends the self-attention mechanism of a vision transformer (ViT) for more accurate object detection across diverse datasets. ViTs show strong capability for image understanding tasks such as object detection, segmentation, and classification. This is due in part to their ability to leverage global information from interactions among visual tokens. However, the self-attention mechanism in ViTs is limited because it does not allow visual tokens to exchange local or global information with neighboring features before computing global attention. This is problematic because tokens are treated in isolation when attending (matching) to other tokens, and valuable spatial relationships are overlooked. This isolation is further compounded by dot-product similarity operations that make tokens from different semantic classes appear visually similar. To address these limitations, we introduce two modifications to the traditional self-attention framework: a novel aggressive convolution pooling strategy for local feature mixing, and a new conceptual attention transformation to facilitate interaction and feature exchange between semantic concepts. Experimental results demonstrate that local and global information exchange among visual features before self-attention significantly improves performance on challenging object detection tasks and generalizes across multiple benchmark datasets and challenging medical datasets. We publish source code and a novel dataset of cancerous tumors (chimeric cell clusters)., Comment: 20 Pages, 24 figures
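A minimal sketch of local mixing before global attention, with layer sizes as assumptions rather than the paper's design: tokens are first mixed by a depthwise convolution over their 2-D grid and then passed to standard multi-head self-attention:

```python
import torch
import torch.nn as nn

class LocalMixThenAttend(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, grid: int = 14):
        super().__init__()
        self.grid = grid
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depthwise
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, N, C), N = grid^2
        B, N, C = tokens.shape
        x = tokens.transpose(1, 2).reshape(B, C, self.grid, self.grid)
        x = self.local(x).flatten(2).transpose(1, 2)          # local exchange first
        out, _ = self.attn(x, x, x)                           # then global attention
        return out

print(LocalMixThenAttend()(torch.randn(2, 196, 64)).shape)
```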
- Published
- 2024
44. Dora: Sampling and Benchmarking for 3D Shape Variational Auto-Encoders
- Author
-
Chen, Rui, Zhang, Jianfeng, Liang, Yixun, Luo, Guan, Li, Weiyu, Liu, Jiarui, Li, Xiu, Long, Xiaoxiao, Feng, Jiashi, and Tan, Ping
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Recent 3D content generation pipelines commonly employ Variational Autoencoders (VAEs) to encode shapes into compact latent representations for diffusion-based generation. However, the widely adopted uniform point sampling strategy in Shape VAE training often leads to a significant loss of geometric details, limiting the quality of shape reconstruction and downstream generation tasks. We present Dora-VAE, a novel approach that enhances VAE reconstruction through our proposed sharp edge sampling strategy and a dual cross-attention mechanism. By identifying and prioritizing regions with high geometric complexity during training, our method significantly improves the preservation of fine-grained shape features. Such sampling strategy and the dual attention mechanism enable the VAE to focus on crucial geometric details that are typically missed by uniform sampling approaches. To systematically evaluate VAE reconstruction quality, we additionally propose Dora-bench, a benchmark that quantifies shape complexity through the density of sharp edges, introducing a new metric focused on reconstruction accuracy at these salient geometric features. Extensive experiments on the Dora-bench demonstrate that Dora-VAE achieves comparable reconstruction quality to the state-of-the-art dense XCube-VAE while requiring a latent space at least 8$\times$ smaller (1,280 vs. > 10,000 codes). We will release our code and benchmark dataset to facilitate future research in 3D shape modeling., Comment: Project page: https://aruichen.github.io/Dora/
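A sketch of edge-sharpness-biased sampling in the spirit of the description above (the paper's exact scheme may differ): edges whose adjacent faces meet at a large dihedral angle receive proportionally more sample points than flat regions:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy mesh summary: one dihedral angle (radians) per edge, plus edge midpoints.
dihedral = rng.uniform(0, np.pi, size=200)          # 0 = flat, pi = knife edge
midpoints = rng.standard_normal((200, 3))

probs = dihedral / dihedral.sum()                   # sharper edges sampled more
idx = rng.choice(len(midpoints), size=64, replace=True, p=probs)
samples = midpoints[idx]                            # biased toward sharp edges
print(samples.shape)
```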
- Published
- 2024
45. Colouring t-perfect graphs
- Author
-
Chudnovsky, Maria, Cook, Linda, Davies, James, Oum, Sang-il, and Tan, Jane
- Subjects
Mathematics - Combinatorics ,Computer Science - Discrete Mathematics - Abstract
Perfect graphs can be described as the graphs whose stable set polytopes are defined by their non-negativity and clique inequalities (including edge inequalities). In 1975, Chv\'{a}tal defined an analogous class of t-perfect graphs, which are the graphs whose stable set polytopes are defined by their non-negativity, edge inequalities, and odd circuit inequalities. We show that t-perfect graphs are $199053$-colourable. This is the first finite bound on the chromatic number of t-perfect graphs and answers a question of Shepherd from 1995. Our proof also shows that every h-perfect graph with clique number $\omega$ is $(\omega + 199050)$-colourable., Comment: 23 pages, 4 figures
- Published
- 2024
46. Trypanosoma brucei moving in microchannels and through constrictions
- Author
-
Tan, Zihan, Peters, Julian I. U., and Stark, Holger
- Subjects
Condensed Matter - Soft Condensed Matter - Abstract
Trypanosoma brucei (T. brucei), a single-celled parasite and natural microswimmer, is responsible for fatal sleeping sickness in infected mammals, including humans. Understanding how T. brucei interacts with fluid environments and navigates through confining spaces is crucial not only for medical and clinical applications but also for a fundamental understanding of how life organizes in a confined microscopic world. Using a hybrid multi-particle collision dynamics (MPCD)--molecular dynamics (MD) approach, we present our investigations on the locomotion of an in silico T. brucei in three types of fluid environments: bulk fluid, straight cylindrical microchannels, and microchannels with constrictions. We observe that the helical swimming trajectory of the in silico T. brucei becomes rectified in straight cylindrical channels compared to bulk fluid. The swimming speed for different channel widths is governed by the diameter of the helical trajectory. The speed first slightly increases as the channel narrows and then decreases when the helix diameter is compressed. An optimal swimming speed is achieved, when the channel width is approximately twice the bulk helix diameter. It results from an interplay of the trypanosome's hydrodynamic interactions with the cylindrical channel walls and the high deformability of the parasite. In microchannels with constrictions, the motions of the anterior and posterior ends, the end-to-end distance, and the log-rolling motion of the cell body are characterized and show salient differences compared to the straight-channel case. Depending on the constriction length and width, we observe characteristic slip, stuck, and stuck-slip motions of the model T. brucei within the constriction. Our findings may provide some mechanical insights into how T. brucei moves through blood vessels and tissues, and across the blood-brain barrier., Comment: 28 pages, 13 figures
- Published
- 2024
47. BrainMAP: Learning Multiple Activation Pathways in Brain Networks
- Author
-
Wang, Song, Lei, Zhenyu, Tan, Zhen, Ding, Jiaqi, Zhao, Xinyu, Dong, Yushun, Wu, Guorong, Chen, Tianlong, Chen, Chen, Zhang, Aiying, and Li, Jundong
- Subjects
Computer Science - Artificial Intelligence - Abstract
Functional Magnetic Resonance Image (fMRI) is commonly employed to study human brain activity, since it offers insight into the relationship between functional fluctuations and human behavior. To enhance analysis and comprehension of brain activity, Graph Neural Networks (GNNs) have been widely applied to the analysis of functional connectivities (FC) derived from fMRI data, due to their ability to capture the synergistic interactions among brain regions. However, in the human brain, performing complex tasks typically involves the activation of certain pathways, which could be represented as paths across graphs. As such, conventional GNNs struggle to learn from these pathways due to the long-range dependencies of multiple pathways. To address these challenges, we introduce a novel framework BrainMAP to learn Multiple Activation Pathways in Brain networks. BrainMAP leverages sequential models to identify long-range correlations among sequentialized brain regions and incorporates an aggregation module based on Mixture of Experts (MoE) to learn from multiple pathways. Our comprehensive experiments highlight BrainMAP's superior performance. Furthermore, our framework enables explanatory analyses of crucial brain regions involved in tasks. Our code is provided at https://github.com/LzyFischer/Graph-Mamba., Comment: AAAI 2025
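The MoE aggregation step can be sketched as a gated mixture of small pathway experts applied to the same sequence of region features; the shapes and the mean-pooled gating rule below are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MoEAggregator(nn.Module):
    def __init__(self, dim: int = 32, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (batch, regions, dim)
        pooled = x.mean(dim=1)                                # summary for gating
        w = torch.softmax(self.gate(pooled), dim=-1)          # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (b, r, d, E)
        return (outs * w[:, None, None, :]).sum(-1)           # gated mixture

x = torch.randn(2, 90, 32)   # e.g., features for 90 brain regions
print(MoEAggregator()(x).shape)
```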
- Published
- 2024
48. Complete Implementation of WXF Chinese Chess Rules
- Author
-
Tan, Daniel and Medina, Neftali Watkinson
- Subjects
Computer Science - Artificial Intelligence ,I.2.0 - Abstract
Unlike repetitions in Western Chess, where all repetitions are draws, repetitions in Chinese Chess can result in a win, draw, or loss depending on the kind of repetition being made by both players. One of the biggest hurdles facing Chinese Chess application development is a proper system for judging games correctly. This paper introduces a complete algorithm that adjudicates the WXF rules correctly in all 110 example cases found in the WXF manual. We introduce several novel optimizations that speed up the repetition handling without compromising correctness. The algorithm is usable in engines: integrating this approach into our prototype engine yielded a +10 rating-point increase in playing strength, or a 5% higher winrate., Comment: 19 pages, 8 figures
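To make the bookkeeping concrete, the toy sketch below (not the paper's algorithm) counts position recurrences with a hash map and, on a threefold repetition, hands the intervening move cycle to a classifier that would decide win/draw/loss under the WXF perpetual-check and perpetual-chase rules:

```python
from collections import defaultdict

seen = defaultdict(int)
history = []

def classify_cycle(moves) -> str:
    """Placeholder for the WXF judgment (perpetual check/chase -> win/loss/draw)."""
    return f"classify {moves} under WXF rules"

def on_position(position_key: str, move: str) -> None:
    history.append(move)
    seen[position_key] += 1
    if seen[position_key] >= 3:                # threefold repetition reached
        print(classify_cycle(history[-4:]))    # toy: the last 2-ply cycle

# Toy driver: two positions alternate until one repeats three times.
for pos, mv in [("A", "c3c4"), ("B", "c4c3")] * 3:
    on_position(pos, mv)
```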
- Published
- 2024
49. DepthLab: From Partial to Complete
- Author
-
Liu, Zhiheng, Cheng, Ka Leong, Wang, Qiuyu, Wang, Shuzhe, Ouyang, Hao, Tan, Bin, Zhu, Kai, Shen, Yujun, Chen, Qifeng, and Luo, Ping
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Missing values remain a common challenge for depth data across its wide range of applications, stemming from various causes like incomplete data acquisition and perspective alteration. This work bridges this gap with DepthLab, a foundation depth inpainting model powered by image diffusion priors. Our model features two notable strengths: (1) it demonstrates resilience to depth-deficient regions, providing reliable completion for both continuous areas and isolated points, and (2) it faithfully preserves scale consistency with the conditioned known depth when filling in missing values. Drawing on these advantages, our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion, exceeding current solutions in both numerical performance and visual quality. Our project page with source code is available at https://johanan528.github.io/depthlab_web/., Comment: Project page and code: https://johanan528.github.io/depthlab_web/
- Published
- 2024
50. Study of the Proper NNUE Dataset
- Author
-
Tan, Daniel and Medina, Neftali Watkinson
- Subjects
Computer Science - Artificial Intelligence ,Computer Science - Machine Learning ,I.2.0 - Abstract
NNUE (Efficiently Updatable Neural Networks) has revolutionized chess engine development, with nearly all top engines adopting NNUE models to maintain competitive performance. A key challenge in NNUE training is the creation of high-quality datasets, particularly in complex domains like chess, where tactical and strategic evaluations are essential. However, methods for constructing effective datasets remain poorly understood and under-documented. In this paper, we propose an algorithm for generating and filtering datasets composed of "quiet" positions that are stable and free from tactical volatility. Our approach provides a clear methodology for dataset creation, which can be replicated and generalized across various evaluation functions. Testing demonstrates significant improvements in engine performance, confirming the effectiveness of our method., Comment: 10 pages, 4 figures
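A quiet-position filter of the kind described can be sketched as: keep a position only if a shallow search agrees with the static evaluation (no tactics in flight) and the best line does not begin with a capture. The engine calls below are stubs and the margin is an assumed threshold; a real filter would wire in an actual engine:

```python
from dataclasses import dataclass

QUIET_MARGIN = 50  # centipawns; assumed threshold

@dataclass
class SearchResult:
    score: int            # centipawns
    best_is_capture: bool

def static_eval(position) -> int:              # stub for the engine's static eval
    return position["static"]

def shallow_search(position) -> SearchResult:  # stub for a depth-limited search
    return SearchResult(position["search"], position["capture"])

def is_quiet(position) -> bool:
    result = shallow_search(position)
    if abs(result.score - static_eval(position)) > QUIET_MARGIN:
        return False                           # tactics in flight: eval is volatile
    return not result.best_is_capture

positions = [
    {"static": 20, "search": 25, "capture": False},   # stable: kept
    {"static": 20, "search": 310, "capture": True},   # hanging piece: filtered
]
print([is_quiet(p) for p in positions])
```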
- Published
- 2024