885 results for "Litmaath, M"
Search Results
2. The upgraded DØ detector
- Author
-
Abazov, V.M., Abbott, B., Abolins, M., Acharya, B.S., Adams, D.L., Adams, M., Adams, T., Agelou, M., Agram, J.-L., Ahmed, S.N., Ahn, S.H., Ahsan, M., Alexeev, G.D., Alkhazov, G., Alton, A., Alverson, G., Alves, G.A., Anastasoaie, M., Andeen, T., Anderson, J.T., Anderson, S., Andrieu, B., Angstadt, R., Anosov, V., Arnoud, Y., Arov, M., Askew, A., Åsman, B., Assis Jesus, A.C.S., Atramentov, O., Autermann, C., Avila, C., Babukhadia, L., Bacon, T.C., Badaud, F., Baden, A., Baffioni, S., Bagby, L., Baldin, B., Balm, P.W., Banerjee, P., Banerjee, S., Barberis, E., Bardon, O., Barg, W., Bargassa, P., Baringer, P., Barnes, C., Barreto, J., Bartlett, J.F., Bassler, U., Bhattacharjee, M., Baturitsky, M.A., Bauer, D., Bean, A., Baumbaugh, B., Beauceron, S., Begalli, M., Beaudette, F., Begel, M., Bellavance, A., Beri, S.B., Bernardi, G., Bernhard, R., Bertram, I., Besançon, M., Besson, A., Beuselinck, R., Beutel, D., Bezzubov, V.A., Bhat, P.C., Bhatnagar, V., Binder, M., Biscarat, C., Bishoff, A., Black, K.M., Blackler, I., Blazey, G., Blekman, F., Blessing, S., Bloch, D., Blumenschein, U., Bockenthien, E., Bodyagin, V., Boehnlein, A., Boeriu, O., Bolton, T.A., Bonamy, P., Bonifas, D., Borcherding, F., Borissov, G., Bos, K., Bose, T., Boswell, C., Bowden, M., Brandt, A., Briskin, G., Brock, R., Brooijmans, G., Bross, A., Buchanan, N.J., Buchholz, D., Buehler, M., Buescher, V., Burdin, S., Burke, S., Burnett, T.H., Busato, E., Buszello, C.P., Butler, D., Butler, J.M., Cammin, J., Caron, S., Bystricky, J., Canal, L., Canelli, F., Carvalho, W., Casey, B.C.K., Casey, D., Cason, N.M., Castilla-Valdez, H., Chakrabarti, S., Chakraborty, D., Chan, K.M., Chandra, A., Chapin, D., Charles, F., Cheu, E., Chevalier, L., Chi, E., Chiche, R., Cho, D.K., Choate, R., Choi, S., Choudhary, B., Chopra, S., Christenson, J.H., Christiansen, T., Christofek, L., Churin, I., Cisko, G., Claes, D., Clark, A.R., Clément, B., Clément, C., Coadou, Y., Colling, D.J., Coney, L., Connolly, B., Cooke, M., Cooper, W.E., Coppage, D., Corcoran, M., Coss, J., Cothenet, A., Cousinou, M.-C., Cox, B., Crépé-Renaudin, S., Cristetiu, M., Cummings, M.A.C., Cutts, D., da Motta, H., Das, M., Davies, B., Davies, G., Davis, G.A., Davis, W., De, K., de Jong, P., de Jong, S.J., De La Cruz-Burelo, E., De La Taille, C., De Oliveira Martins, C., Dean, S., Degenhardt, J.D., Déliot, F., Delsart, P.A., Del Signore, K., DeMaat, R., Demarteau, M., Demina, R., Demine, P., Denisov, D., Denisov, S.P., Desai, S., Diehl, H.T., Diesburg, M., Doets, M., Doidge, M., Dong, H., Doulas, S., Dudko, L.V., Duflot, L., Dugad, S.R., Duperrin, A., Dvornikov, O., Dyer, J., Dyshkant, A., Eads, M., Edmunds, D., Edwards, T., Ellison, J., Elmsheuser, J., Eltzroth, J.T., Elvira, V.D., Eno, S., Ermolov, P., Eroshin, O.V., Estrada, J., Evans, D., Evans, H., Evdokimov, A., Evdokimov, V.N., Fagan, J., Fast, J., Fatakia, S.N., Fein, D., Feligioni, L., Ferapontov, A.V., Ferbel, T., Ferreira, M.J., Fiedler, F., Filthaut, F., Fisher, W., Fisk, H.E., Fleck, I., Fitzpatrick, T., Flattum, E., Fleuret, F., Flores, R., Foglesong, J., Fortner, M., Fox, H., Franklin, C., Freeman, W., Fu, S., Fuess, S., Gadfort, T., Galea, C.F., Gallas, E., Galyaev, E., Gao, M., Garcia, C., Garcia-Bellido, A., Gardner, J., Gavrilov, V., Gay, A., Gay, P., Gelé, D., Gelhaus, R., Genser, K., Gerber, C.E., Gershtein, Y., Gillberg, D., Geurkov, G., Ginther, G., Gobbi, B., Goldmann, K., Golling, T., Gollub, N., Golovtsov, V., Gómez, B., Gomez, G., Gomez, R., Goodwin, R., Gornushkin, Y., Gounder, K., Goussiou, A., 
Graham, D., Graham, G., Grannis, P.D., Gray, K., Greder, S., Green, D.R., Green, J., Green, J.A., Greenlee, H., Greenwood, Z.D., Gregores, E.M., Grinstein, S., Gris, Ph., Grivaz, J.-F., Groer, L., Grünendahl, S., Grünewald, M.W., Gu, W., Guglielmo, J., Gupta, A., Gurzhiev, S.N., Gutierrez, G., Gutierrez, P., Haas, A., Hadley, N.J., Haggard, E., Haggerty, H., Hagopian, S., Hall, I., Hall, R.E., Han, C., Han, L., Hance, R., Hanagaki, K., Hanlet, P., Hansen, S., Harder, K., Harel, A., Harrington, R., Hauptman, J.M., Hauser, R., Hays, C., Hays, J., Hazen, E., Hebbeker, T., Hebert, C., Hedin, D., Heinmiller, J.M., Heinson, A.P., Heintz, U., Hensel, C., Hesketh, G., Hildreth, M.D., Hirosky, R., Hobbs, J.D., Hoeneisen, B., Hohlfeld, M., Hong, S.J., Hooper, R., Hou, S., Houben, P., Hu, Y., Huang, J., Huang, Y., Hynek, V., Huffman, D., Iashvili, I., Illingworth, R., Ito, A.S., Jabeen, S., Jacquier, Y., Jaffré, M., Jain, S., Jain, V., Jakobs, K., Jayanti, R., Jenkins, A., Jesik, R., Jiang, Y., Johns, K., Johnson, M., Johnson, P., Jonckheere, A., Jonsson, P., Jöstlein, H., Jouravlev, N., Juarez, M., Juste, A., Kaan, A.P., Kado, M.M., Käfer, D., Kahl, W., Kahn, S., Kajfasz, E., Kalinin, A.M., Kalk, J., Kalmani, S.D., Karmanov, D., Kasper, J., Katsanos, I., Kau, D., Kaur, R., Ke, Z., Kehoe, R., Kermiche, S., Kesisoglou, S., Khanov, A., Kharchilava, A., Kharzheev, Y.M., Kim, H., Kim, K.H., Kim, T.J., Kirsch, N., Klima, B., Klute, M., Kohli, J.M., Konrath, J.-P., Komissarov, E.V., Kopal, M., Korablev, V.M., Kostritski, A., Kotcher, J., Kothari, B., Kotwal, A.V., Koubarovsky, A., Kozelov, A.V., Kozminski, J., Kryemadhi, A., Kouznetsov, O., Krane, J., Kravchuk, N., Krempetz, K., Krider, J., Krishnaswamy, M.R., Krzywdzinski, S., Kubantsev, M., Kubinski, R., Kuchinsky, N., Kuleshov, S., Kulik, Y., Kumar, A., Kunori, S., Kupco, A., Kurča, T., Kvita, J., Kuznetsov, V.E., Kwarciany, R., Lager, S., Lahrichi, N., Landsberg, G., Larwill, M., Laurens, P., Lavigne, B., Lazoflores, J., Le Bihan, A.-C., Le Meur, G., Lebrun, P., Lee, S.W., Lee, W.M., Leflat, A., Leggett, C., Lehner, F., Leitner, R., Leonidopoulos, C., Leveque, J., Lewis, P., Li, J., Li, Q.Z., Li, X., Lima, J.G.R., Lincoln, D., Lindenmeyer, C., Linn, S.L., Linnemann, J., Lipaev, V.V., Lipton, R., Litmaath, M., Lizarazo, J., Lobo, L., Lobodenko, A., Lokajicek, M., Lounis, A., Love, P., Lu, J., Lubatti, H.J., Lucotte, A., Lueking, L., Luo, C., Lynker, M., Lyon, A.L., Machado, E., Maciel, A.K.A., Madaras, R.J., Mättig, P., Magass, C., Magerkurth, A., Magnan, A.-M., Maity, M., Makovec, N., Mal, P.K., Malbouisson, H.B., Malik, S., Malyshev, V.L., Manakov, V., Mao, H.S., Maravin, Y., Markley, D., Markus, M., Marshall, T., Martens, M., Martin, M., Martin-Chassard, G., Mattingly, S.E.K., Matulik, M., Mayorov, A.A., McCarthy, R., McCroskey, R., McKenna, M., McMahon, T., Meder, D., Melanson, H.L., Melnitchouk, A., Mendes, A., Mendoza, D., Mendoza, L., Meng, X., Merekov, Y.P., Merkin, M., Merritt, K.W., Meyer, A., Meyer, J., Michaut, M., Miao, C., Miettinen, H., Mihalcea, D., Mikhailov, V., Miller, D., Mitrevski, J., Mokhov, N., Molina, J., Mondal, N.K., Montgomery, H.E., Moore, R.W., Moulik, T., Muanza, G.S., Mostafa, M., Moua, S., Mulders, M., Mundim, L., Mutaf, Y.D., Nagaraj, P., Nagy, E., Naimuddin, M., Nang, F., Narain, M., Narasimhan, V.S., Narayanan, A., Naumann, N.A., Neal, H.A., Negret, J.P., Nelson, S., Neuenschwander, R.T., Neustroev, P., Noeding, C., Nomerotski, A., Novaes, S.F., Nozdrin, A., Nunnemann, T., Nurczyk, A., Nurse, E., O’Dell, V., O’Neil, 
D.C., Oguri, V., Olis, D., Oliveira, N., Olivier, B., Olsen, J., Oshima, N., Oshinowo, B.O., Otero y Garzón, G.J., Padley, P., Papageorgiou, K., Parashar, N., Park, J., Park, S.K., Parsons, J., Partridge, R., Parua, N., Patwa, A., Pawloski, G., Perea, P.M., Perez, E., Peters, O., Pétroff, P., Petteni, M., Phaf, L., Piegaia, R., Pleier, M.-A., Podesta-Lerma, P.L.M., Podstavkov, V.M., Pogorelov, Y., Pol, M.-E., Pompoš, A., Polosov, P., Pope, B.G., Popkov, E., Porokhovoy, S., Prado da Silva, W.L., Pritchard, W., Prokhorov, I., Prosper, H.B., Protopopescu, S., Przybycien, M.B., Qian, J., Quadt, A., Quinn, B., Ramberg, E., Ramirez-Gomez, R., Rani, K.J., Ranjan, K., Rao, M.V.S., Rapidis, P.A., Rapisarda, S., Raskowski, J., Ratoff, P.N., Ray, R.E., Reay, N.W., Rechenmacher, R., Reddy, L.V., Regan, T., Renardy, J.-F., Reucroft, S., Rha, J., Ridel, M., Rijssenbeek, M., Ripp-Baudot, I., Rizatdinova, F., Robinson, S., Rodrigues, R.F., Roco, M., Rotolo, C., Royon, C., Rubinov, P., Ruchti, R., Rucinski, R., Rud, V.I., Russakovich, N., Russo, P., Sabirov, B., Sajot, G., Sánchez-Hernández, A., Sanders, M.P., Santoro, A., Satyanarayana, B., Savage, G., Sawyer, L., Scanlon, T., Schaile, D., Schamberger, R.D., Scheglov, Y., Schellman, H., Schieferdecker, P., Schmitt, C., Schwanenberger, C., Schukin, A.A., Schwartzman, A., Schwienhorst, R., Sengupta, S., Severini, H., Shabalina, E., Shamim, M., Shankar, H.C., Shary, V., Shchukin, A.A., Sheahan, P., Shephard, W.D., Shivpuri, R.K., Shishkin, A.A., Shpakov, D., Shupe, M., Sidwell, R.A., Simak, V., Sirotenko, V., Skow, D., Skubic, P., Slattery, P., Smith, D.E., Smith, R.P., Smolek, K., Snow, G.R., Snow, J., Snyder, S., Söldner-Rembold, S., Song, X., Song, Y., Sonnenschein, L., Sopczak, A., Sorín, V., Sosebee, M., Soustruznik, K., Souza, M., Spartana, N., Spurlock, B., Stanton, N.R., Stark, J., Steele, J., Stefanik, A., Steinberg, J., Steinbrück, G., Stevenson, K., Stolin, V., Stone, A., Stoyanova, D.A., Strandberg, J., Strang, M.A., Strauss, M., Ströhmer, R., Strom, D., Strovink, M., Stutte, L., Sumowidagdo, S., Sznajder, A., Talby, M., Tentindo-Repond, S., Tamburello, P., Taylor, W., Telford, P., Temple, J., Terentyev, N., Teterin, V., Thomas, E., Thompson, J., Thooris, B., Titov, M., Toback, D., Tokmenin, V.V., Tolian, C., Tomoto, M., Tompkins, D., Toole, T., Torborg, J., Touze, F., Towers, S., Trefzger, T., Trincaz-Duvoid, S., Trippe, T.G., Tsybychev, D., Tuchming, B., Tully, C., Turcot, A.S., Tuts, P.M., Utes, M., Uvarov, L., Uvarov, S., Uzunyan, S., Vachon, B., van den Berg, P.J., van Gemmeren, P., Van Kooten, R., van Leeuwen, W.M., Varelas, N., Varnes, E.W., Vartapetian, A., Vasilyev, I.A., Vaupel, M., Vaz, M., Verdier, P., Vertogradov, L.S., Verzocchi, M., Vigneault, M., Villeneuve-Seguier, F., Vishwanath, P.R., Vlimant, J.-R., Von Toerne, E., Vorobyov, A., Vreeswijk, M., Vu Anh, T., Vysotsky, V., Wahl, H.D., Walker, R., Wallace, N., Wang, L., Wang, Z.-M., Warchol, J., Warsinsky, M., Watts, G., Wayne, M., Weber, M., Weerts, H., Wegner, M., Wermes, N., Wetstein, M., White, A., White, V., Whiteson, D., Wicke, D., Wijnen, T., Wijngaarden, D.A., Wilcer, N., Willutzki, H., Wilson, G.W., Wimpenny, S.J., Wittlin, J., Wlodek, T., Wobisch, M., Womersley, J., Wood, D.R., Wyatt, T.R., Wu, Z., Xie, Y., Xu, Q., Xuan, N., Yacoob, S., Yamada, R., Yan, M., Yarema, R., Yasuda, T., Yatsunenko, Y.A., Yen, Y., Yip, K., Yoo, H.D., Yoffe, F., Youn, S.W., Yu, J., Yurkewicz, A., Zabi, A., Zanabria, M., Zatserklyaniy, A., Zdrazil, M., Zeitnitz, C., Zhang, B., Zhang, D., Zhang, 
X., Zhao, T., Zhao, Z., Zheng, H., Zhou, B., Zhu, J., Zielinski, M., Zieminska, D., Zieminski, A., Zitoun, R., Zmuda, T., Zutshi, V., Zviagintsev, S., Zverev, E.G., and Zylberstejn, A.
- Published
- 2006
- Full Text
- View/download PDF
3. Federated Identity Management for Research
- Author
-
WISE SCI Working Group, Barton, Thomas, Gietz, Peter, Kelsey, David, Koranda, Scott, Short, Hannah, Stevanovic, Uros, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
Focus (computing) ,AAI ,Physics ,QC1-999 ,DATA processing & computer science ,Authorization ,Data science ,Computing and Computers ,Identification (information) ,White paper ,FIM4R ,Federated identity management ,ddc:004 ,Set (psychology) ,Worldwide LHC Computing Grid - Abstract
Federated identity management (FIM) is an arrangement that can be made among multiple organisations that lets subscribers use the same identification data to obtain access to the secured resources of all organisations in the group. In many research communities there is an increasing interest in a common approach to FIM as there is obviously a large potential for synergies. FIM4R [1] provides a forum for communities to share challenges and ideas, and to shape the future of FIM for our researchers. Current participation covers high energy physics, life sciences and humanities, to mention but a few. In 2012 FIM4R converged on a common vision for FIM, enumerated a set of requirements and proposed a number of recommendations for ensuring a roadmap for the uptake of FIM [2]. In summer 2018, FIM4R published an updated version of this paper [3]. The High Energy Physics (HEP) Community has been heavily involved in creating both the original white paper and the new version, which documented the progress made in FIM for Research, in addition to the current challenges. This paper presents the conclusions of this second FIM4R white paper and a summary of the identified requirements and recommendations. We focus particularly on the direction being taken by the Worldwide LHC Computing Grid (WLCG), through the WLCG Authorisation Working Group, and the requirements gathered from the HEP Community.
- Published
- 2019
4. Federated Identity Management for Research
- Author
-
Barton, Thomas, Gietz, Peter, Kelsey, David, Koranda, Scott, Short, Hannah, Stevanovic, Uros, Forti, A. [Ed.], Betev, L. [Ed.], Litmaath, M. [Ed.], Smirnova, O. [Ed.], and Hristov, P. [Ed.]
- Subjects
AAI ,FIM4R - Abstract
Federated identity management (FIM) is an arrangement that can be made among multiple organisations that lets subscribers use the same identification data to obtain access to the secured resources of all organisations in the group. In many research communities there is an increasing interest in a common approach to FIM as there is obviously a large potential for synergies. FIM4R [1] provides a forum for communities to share challenges and ideas, and to shape the future of FIM for our researchers. Current participation covers high energy physics, life sciences and humanities, to mention but a few. In 2012 FIM4R converged on a common vision for FIM, enumerated a set of requirements and proposed a number of recommendations for ensuring a roadmap for the uptake of FIM [2]. In summer 2018, FIM4R published an updated version of this paper [3]. The High Energy Physics (HEP) Community has been heavily involved in creating both the original white paper and the new version, which documented the progress made in FIM for Research, in addition to the current challenges. This paper presents the conclusions of this second FIM4R white paper and a summary of the identified requirements and recommendations. We focus particularly on the direction being taken by the Worldwide LHC Computing Grid (WLCG), through the WLCG Authorisation Working Group, and the requirements gathered from the HEP Community.
- Published
- 2019
- Full Text
- View/download PDF
5. Quantum Associative Memory in HEP Track Pattern Recognition
- Author
-
Shapoval, Illya, Calafiura, Paolo, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Abstract
We have entered the Noisy Intermediate-Scale Quantum Era. A plethora of quantum processor prototypes allow evaluation of the potential of the Quantum Computing paradigm in applications to pressing computational problems of the future. Growing data input rates and detector resolution foreseen in High-Energy LHC (2030s) experiments expose the often high time and/or space complexity of classical algorithms. Quantum algorithms can potentially become the lower-complexity alternatives in such cases. In this work we discuss the potential of Quantum Associative Memory (QuAM) in the context of LHC data triggering. We examine the practical limits of storage capacity, as well as the efficiency of errorless store and recall operations, from the viewpoints of the state-of-the-art IBM quantum processors and LHC real-time charged track pattern recognition requirements. We present a software prototype implementation of the QuAM protocols and analyze the topological limitations for porting the simplest QuAM instances to the public IBM 5Q and 14Q cloud-based superconducting chips.
- Published
- 2019
6. Measurement of the Spin-dependent Structure Function g1(x) of the Deuteron and the Proton
- Author
-
Litmaath, M. F.
- Published
- 1995
- Full Text
- View/download PDF
7. The CHARON detector—an emulsion/counter hybrid set-up to measure the mean free path of near-elastic pion scattering in nuclear emulsion (white kink) at 2, 3 and 5 GeV/c
- Author
-
Bülte, A, Winter, K, Litmaath, M, Gernitzky, Y, Goldberg, J, Grégoire, G, Niwa, K, Nakano, T, Komatsu, M, Itoh, K, Frekers, D, Bruski, N, and Kückmann, J
- Published
- 2002
- Full Text
- View/download PDF
8. The data acquisition system of the CHORUS experiment
- Author
-
Artamonov, A., Bonekämper, D., Brunner, J., Bülte, A., Carnevale, G., Catanesi, M.G., Cocco, A., Cussans, D., Ferreira, R., Friend, B., Gorbunov, P., Guerriero, A., Gurin, R., de Jong, M., Litmaath, M., Macina, D., Maslennikov, A., Mazzoni, M.A., Meijer Drees, R., Meinhard, H., Mommaert, C., Oldeman, R.G.C., Øverås, H., Panman, J., van der Poel, C.A.F.J., Riccardi, F., Rondeshagen, D., Rozanov, A., Saltzberg, D., Uiterwijk, J.W.E., Vander Donckt, M., Wolff, T., Wong, H., and Zucchelli, P.
- Published
- 2002
- Full Text
- View/download PDF
9. The compact emulsion spectrometer
- Author
-
Buontempo, S, Camilleri, L, Catanesi, M.G, Chizov, M, De Santo, A, Do Couto e Silva, E, Doucet, M, Goldberg, J, Grégoire, G, Grella, G, Linssen, L, Lisowski, B, Litmaath, M, Kokkonen, J, Melzer, O, Mexner, V, Migliozzi, P, Muciaccia, M.T, Niu, E, Panman, J, Papadopoulos, I.M, Pesen, E, Radicioni, E, Ricciardi, S, Runolfsson, O, Simone, S, Soler, F.J.P, Stiegler, U, Uiterwijk, J.W.E, and Van de Vyver, B
- Published
- 2001
- Full Text
- View/download PDF
10. WIRED — World Wide Web interactive remote event display
- Author
-
Ballaminut, A., Colonello, C., Dönszelmann, M., van Herwijnen, E., Köper, D., Korhonen, J., Litmaath, M., Perl, J., Theodorou, A., Whiteson, D., and Wolff, E.
- Published
- 2001
- Full Text
- View/download PDF
11. Measurement of the SMC muon beam polarisation using the asymmetry in the elastic scattering off polarised electrons
- Author
-
Adams, D., Adeva, B., Akdogan, T., Arik, E., Arvidson, A., Badelek, B, Bardin, G., Baum, G., Berglund, P., Betev, L., Birsa, R., Björkholm, P., Bonner, B.E., de Botton, N., Boutemeur, M., Bradamante, F., Bravar, A., Bressan, A., Bültmann, S., Burtin, E., Cavata, C., Clocchiatti, M., Crabb, D., Cranshaw, J., Çuhadar, T., Dalla Torre, S., van Dantzig, R., Derro, B., Deshpande, A., Dhawan, S., Dulya, C., Dyring, A., Eichblatt, S., Faivre, J.C., Fasching, D., Feinstein, F., Fernandez, C., Forthmann, S., Frois, B., Gallas, A., Garzon, J.A., Gatignon, L., Gaussiran, T., Gilly, H., Giorgi, M., von Goeler, E., Goertz, S., Golutvin, I.A., Gracia, G., de Groot, N., Grosse Perdekamp, M., Haft, K., von Harrach, D., Hasegawa, T., Hautle, P., Hayashi, N., Heusch, C.A., Horikawa, N., Hughes, V.W., Igo, G., Ishimoto, S., Iwata, T., Kabuß, E.M., Kageya, T., Karev, A., Kessler, H.J., Ketel, T.J., Kiryluk, J., Kiryushin, I., Kishi, A., Kisselev, Yu., Klostermann, L., Krämer, D., Krivokhijine, V., Kröger, W., Kukhtin, V., Kurek, K., Kyynäräinen, J., Lamanna, M., Landgraf, U., Le Goff, J.M., Lehar, F., de Lesquen, A., Lichtenstadt, J., Lindqvist, T., Litmaath, M., Lowe, M., Magnon, A., Mallot, G.K., Marie, F., Martin, A., Martino, J., Matsuda, T., Mayes, B., McCarthy, J.S., Medved, K., Meyer, W., van Middelkoop, G., Miller, D., Miyachi, Y., Mori, K., Moromisato, J., Nagaitsev, A., Nassalski, J., Naumann, L., Niinikoski, T.O., Oberski, J.E.J., Ogawa, A., Ozben, C., Pereira, H., Perrot-Kunne, F., Peshekhonov, D., Piegaia, R., Pinsky, L., Platchkov, S., Plo, M., Pose, D., Postma, H., Pretz, J., Pussieux, T., Rädel, G., Rijllart, A., Reicherz, G., Roberts, J.B., Rock, S., Rodriguez, M., Rondio, E., Ropelewski, L., Sabo, I., Saborido, J., Sandacz, A, Savin, I., Schiavon, P., Schiller, A., Schüler, K.P., Seitz, R., Semertzidis, Y., Sergeev, S., Shanahan, P., Sichtermann, E.P., Simeoni, F., Smirnov, G.I., Staude, A., Steinmetz, A., Stiegler, U., Stuhrmann, H., Szleper, M., Tessarotto, F., Thers, D., Tlaczala, W., Tripet, A., Unel, G., Velasco, M., Vogt, J., Voss, R., Whitten, C., Windmolders, R., Willumeit, R., Wislicki, W., Witzmann, A., Ylöstalo, J., Zanetti, A.M., Zaremba, K., Zamiatin, N.I., and Zhao, J.
- Published
- 2000
- Full Text
- View/download PDF
12. Integration and Evaluation of QUIC and TCP-BBR in longhaul Science Data Transfers.
- Author
-
Lopes, Raul H. C., Franqueira, Virginia N. L., Rand, Duncan, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA integration ,BACK up systems ,CLIENT/SERVER computing ,DATA security ,DATA transmission systems - Abstract
Two recent and promising additions to the internet protocols are TCP-BBR and QUIC. BBR defines a congestion policy that promises better control of TCP bottlenecks on long-haul transfers and can also be used in the QUIC protocol. TCP-BBR is implemented in Linux kernels above 4.9. It has been shown, however, to demand careful fine-tuning in the interaction, for example, with the Linux Fair Queue. QUIC, on the other hand, replaces HTTP and TLS with a protocol on top of UDP and a thin layer to serve HTTP. It has been reported to account today for 7% of Google's traffic. It has not been used in server-to-server transfers even if its creators see that as a real possibility. Our work evaluates the applicability and tuning of TCP-BBR and QUIC for science data transfers. We describe the deployment and performance evaluation of TCP-BBR and a comparison with CUBIC and H-TCP in transfers through the TEIN link to Singaren (Singapore). Also described is the deployment and initial evaluation of a QUIC server. We argue that QUIC might be a perfect match in security and connectivity to base services that are today performed by the Xroot redirectors. [ABSTRACT FROM AUTHOR]
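As an aside to this abstract, the following is a minimal sketch of how a sender can request BBR congestion control per socket on Linux, assuming a kernel with the tcp_bbr module available and Python 3.6 or newer; the endpoint is a hypothetical placeholder, not one used in the paper.

```python
# Minimal sketch (assumptions: Linux kernel >= 4.9 with the tcp_bbr module loaded,
# Python >= 3.6 exposing socket.TCP_CONGESTION). Host and port are placeholders.
import socket

def open_bbr_connection(host: str, port: int) -> socket.socket:
    """Open a TCP connection that asks the kernel to use BBR congestion control."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request the 'bbr' algorithm for this socket only; raises OSError if unavailable.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    s = open_bbr_connection("transfer.example.org", 1094)  # hypothetical endpoint
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    s.close()
```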
- Published
- 2019
- Full Text
- View/download PDF
13. Sharing server nodes for storage and compute.
- Author
-
Smith, David, Di Girolamo, Alessandro, Glushkov, Ivan, Jones, Ben, Kiryanov, Andrey, Lamanna, Massimo, Mascetti, Luca, McCance, Gavin, Rousseau, Herve, Schovancová, Jaroslava, Schulz, Markus, Tollefsen, Havard, Valassi, Andrea, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA warehousing ,CENTRAL processing units ,BATCH processing ,HARDWARE ,CLIENT/SERVER computing - Abstract
Based on the observation of low average CPU utilisation of several hundred file storage servers in the EOS storage system at CERN, the Batch on EOS Extra Resources (BEER) project developed an approach to also utilise these resources for batch processing. Initial proof of concept tests showed little interference between batch and storage services on a node. Subsequently a model for production was developed and implemented. This has been deployed on part of the CERN EOS production service. The implementation and test results will be presented. The potential for additional resources at the CERN Tier-0 centre is of the order of ten thousand hardware threads in the near term, as well as being a step towards a hyper-converged infrastructure. [ABSTRACT FROM AUTHOR]
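The following is an illustrative sketch of the underlying idea only, not the BEER implementation: estimate how many batch slots a storage node could offer from its spare CPU and memory. It assumes the third-party psutil package; the reserved-core and per-slot memory figures are made-up examples.

```python
# Illustrative sketch (not BEER code): derive spare batch slots from node headroom.
import psutil

def spare_batch_slots(cores_reserved_for_storage: int = 4,
                      mem_per_slot_gb: float = 2.0) -> int:
    total_cores = psutil.cpu_count(logical=True)
    busy_fraction = psutil.cpu_percent(interval=1.0) / 100.0
    free_cores = int(total_cores * (1.0 - busy_fraction)) - cores_reserved_for_storage
    free_mem_gb = psutil.virtual_memory().available / 2**30
    mem_slots = int(free_mem_gb // mem_per_slot_gb)
    return max(0, min(free_cores, mem_slots))

if __name__ == "__main__":
    print("slots this node could advertise to the batch system:", spare_batch_slots())
```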
- Published
- 2019
- Full Text
- View/download PDF
14. Power Usage Effectiveness analysis and optimization in the INFN CNAF Tier-1 data center infrastructure.
- Author
-
Ricci, Pier Paolo, Mazza, Andrea, De Zan, Andrea, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA libraries ,CENTRIFUGAL compressors ,INVESTMENTS ,WATER pumps ,TECHNOLOGICAL innovations - Abstract
The accurate calculation of the power usage effectiveness (PUE) is the most important factor when trying to analyse the overall efficiency of the power consumption in a big data center. In the INFN CNAF Tier-1 a new monitoring infrastructure, also known as Building Management System (BMS), has been recently implemented using the Schneider StruxureWare™ Building Operation (SBO) software. During the design phase of this new BMS, great attention was given to the possibility of collecting detailed information about the electric absorption of specific devices and parts of the facility. Considering the annual trends and the demands for reducing operating costs, it became clear that some improvements were needed in the very short term. For this reason, a hardware upgrade of the cooling chillers and the related chilled-water pump distribution system was seriously considered, using innovative cooling technology. We focused on chillers using the Danfoss Turbocor centrifugal compressor technology, which uses magnetic levitation and an oil-free approach to obtain the best efficiency. Subsequently, we studied a solution that could easily compensate for the initial investment during the first years of usage (considering the Total Cost of Ownership of the project) and that will improve the overall PUE of our data center. [ABSTRACT FROM AUTHOR]
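For reference, the PUE metric the abstract relies on is simply the ratio of total facility power to IT equipment power; the numbers below are illustrative only.

```python
# PUE = total facility power / IT equipment power (illustrative numbers only).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1400 kW drawn by the whole facility for 1000 kW of IT load -> PUE 1.4;
# more efficient chillers lower the numerator and hence the PUE.
print(pue(1400.0, 1000.0))
```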
- Published
- 2019
- Full Text
- View/download PDF
15. Integrated automation for configuration management and operations in the ATLAS online computing farm.
- Author
-
Amirkhanov, Artem, Ballestrero, Sergio, Brasolin, Franco, du Plessis, Haydn, Lee, Christopher Jon, Mitrogeorgos, Konstantinos, Pernigotti, Marco, Sanchez Pineda, Arturo, Scannicchio, Diana Alessandra, Twomey, Matthew Shaun, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
CONFIGURATION management ,DETECTORS ,AUTOMATION ,FARM management ,INFORMATION technology - Abstract
The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection, and conveyance of event data from the front-end electronics to mass storage. Different aspects of the farm management are already accessible via several tools. The status and health of each node are monitored by a system based on Icinga 2 and Ganglia. PuppetDB gathers centrally all the status information from Puppet, the configuration management tool used to ensure configuration consistency of every node. The in-house Configuration Database (ConfDB) controls DHCP and PXE, while also integrating external information sources. In these proceedings we present our roadmap for integrating these and other data sources and systems, and building a higher level of abstraction on top of this foundation. An automation and orchestration tool will be able to use these systems and replace lengthy manual procedures, some of which also require interactions with other systems and teams, e.g. for the repair of a faulty node. Finally, an inventory and tracking system will complement the available data sources, keep track of node history, and improve the evaluation of long-term lifecycle management and purchase strategies. [ABSTRACT FROM AUTHOR]
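One building block of such an integration could be a query against the PuppetDB v4 API for nodes whose Puppet reports are missing; the sketch below is a hypothetical example (placeholder URL, no authentication or TLS handling), not ATLAS code.

```python
# Sketch, not ATLAS tooling: list nodes known to PuppetDB via its v4 query API.
# The endpoint URL is a placeholder; authentication/TLS handling is omitted.
import requests

PUPPETDB = "https://puppetdb.example.org:8081"  # hypothetical host

def unreported_nodes():
    """Return certnames of nodes that have never sent a Puppet report."""
    resp = requests.get(f"{PUPPETDB}/pdb/query/v4/nodes", timeout=10)
    resp.raise_for_status()
    return [n["certname"] for n in resp.json() if n.get("report_timestamp") is None]

if __name__ == "__main__":
    for name in unreported_nodes():
        print("needs attention:", name)
```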
- Published
- 2019
- Full Text
- View/download PDF
16. Data Allocation Service ADAS for the Data Rebalancing of ATLAS.
- Author
-
Vamosi, Ralf, Lassnig, Mario, Schikuta, Erich, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INFORMATION retrieval ,DISTRIBUTED computing ,AUTOMATION ,SCIENCE databases ,MACHINE learning - Abstract
The distributed data management system Rucio manages all data of the ATLAS collaboration across the grid. Automation, such as data replication and data rebalancing, is important to ensure proper operation and execution of the scientific workflow. In these proceedings, a new data allocation grid service based on machine learning is proposed. This learning agent takes subsets of the global datasets and proposes a better allocation based on the imposed cost metric, such as waiting time in the workflow. As a service, it can be modularized and can run independently of the existing rebalancing and replication mechanisms. Furthermore, it collects data from other services and learns better allocations while running in the background. Apart from the user selecting datasets, other data services may consult this meta-heuristic service for improved data placement. Network and storage utilization is also taken into account. [ABSTRACT FROM AUTHOR]
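To illustrate the kind of decision such a service automates, here is a toy greedy placement that minimizes an imposed cost metric (here, an assumed per-site waiting time) while respecting free space; it is not the ADAS learning agent, and all names and numbers are invented.

```python
# Toy example, not the ADAS agent: greedily place datasets on the site with the
# lowest estimated cost (e.g. expected waiting time), respecting free space.
def place(datasets, sites):
    """datasets: list of (name, size_tb); sites: dict name -> {'free_tb': x, 'wait_h': y}."""
    plan = {}
    for name, size in sorted(datasets, key=lambda d: -d[1]):  # biggest first
        candidates = {s: v for s, v in sites.items() if v["free_tb"] >= size}
        if not candidates:
            continue  # no site can host this dataset right now
        best = min(candidates, key=lambda s: candidates[s]["wait_h"])
        plan[name] = best
        sites[best]["free_tb"] -= size
    return plan

sites = {"SITE_A": {"free_tb": 50, "wait_h": 4.0}, "SITE_B": {"free_tb": 20, "wait_h": 1.5}}
print(place([("data18_13TeV.AOD", 12), ("mc16.AOD", 30)], sites))
```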
- Published
- 2019
- Full Text
- View/download PDF
17. Quantum Computing.
- Author
-
Amundson, James, Sexton-Kennedy, Elizabeth, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
QUANTUM computing ,COMPUTER input-output equipment ,QUANTUM theory ,COMPUTER software ,PARTICLE physics - Abstract
In recent years Quantum Computing has attracted a great deal of attention in the scientific and technical communities. Interest in the field has expanded to include the popular press and various funding agencies. We discuss the origins of the idea of using quantum systems for computing. We then give an overview of recent developments in quantum hardware and software, as well as some potential applications for high energy physics. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. LHC Computing: past, present and future.
- Author
-
Charpentier, Philippe, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
LARGE Hadron Collider ,DISTRIBUTED computing ,INFORMATION technology ,MIDDLEWARE ,COMPUTER software development - Abstract
Although the LHC experiments have been designed and prepared since 1984, the challenge of LHC computing was only tackled seriously much later, at the end of the '90s. This was the time at which the Grid paradigm was emerging, and LHC computing had great hopes that most of its challenges would be solved by this new paradigm. The path to having functional and efficient distributed computing systems was in the end much more complex than anticipated. However, most obstacles were overcome, thanks to the introduction of new paradigms and a lot of manpower investment from the experiments and from the supporting IT units (for middleware development and infrastructure setup). This contribution briefly outlines some of the biggest hopes and disillusions of these past 20 years, and gives a brief outlook on the coming trends. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. Disaster recovery of the INFN Tier-1 data center: lesson learned.
- Author
-
dell'Agnello, Luca, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DISASTER resilience ,DATA libraries ,INFORMATION technology ,COMPUTER storage devices ,DATA recovery - Abstract
The year 2017 was most likely a turning point for the INFN Tier-1. In fact, on November 9th 2017, early in the morning, a large pipe of the city aqueduct, located under the road next to CNAF, broke. As a consequence, a river of water and mud flowed towards the Tier-1 data center. The level of the water did not exceed the safety threshold of the waterproof doors but, due to the porosity of the external walls and the floor, it found a way into the data center. The flooding compromised almost all the activities and represented a serious threat to the future of the Tier-1 itself. The most affected part of the data center was the electrical room, with all the switchboards for both the power lines and the continuity systems, but the damage also extended to all the IT systems, including all the storage devices and the tape library. After a careful assessment of the damage, an intense recovery activity was launched, aimed not only at restoring the services but also at securing the data stored on disks and tapes. After nearly two months, in January, we were able to start gradually reopening all the services, including part of the farm and the storage systems. The long tail of the recovery (tape recovery, second power line) lasted until the end of May. As a short-term consequence we have started a deep consolidation of the data center infrastructure to be able to cope with this type of incident; for the medium and long term we are working to move to a new, larger location, able also to accommodate the foreseen increase of resources for HL-LHC. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. Towards a serverless CernVM-FS.
- Author
-
Blomer, Jakob, Ganis, Gerardo, Mosciatti, Simone, Popescu, Radu, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
APPLICATION software ,COMPUTER operating systems ,GRAVITATIONAL wave detectors ,CLOUD computing ,ARCHIVES - Abstract
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution and—to some extent—a data distribution service. It gives POSIX access to more than a billion binary files of experiment application software stacks and operating system containers to end user devices, grids, clouds, and supercomputers. Increasingly, CernVM-FS also provides access to certain classes of data, such as detector conditions data, genomics reference sets, or gravitational wave detector experiment data. For most of the high-energy physics experiments, an underlying HTTP content distribution infrastructure is jointly provided by universities and research institutes around the world. In this contribution, we will present recent developments and future plans. For future developments, we put a focus on evolving the content distribution infrastructure and on lowering the barrier for publishing into CernVM-FS. Through so-called serverless computing, we envision cloud-hosted CernVM-FS repositories without the need to operate dedicated servers or virtual machines. An S3-compatible service in conjunction with a content delivery network takes on data provisioning, replication, and caching. A chain of time-limited and resource-limited functions (so-called "lambda functions" or "functions-as-a-service") operates on the repository and stages the updates. As a result, any CernVM-FS client should be able to turn into a writer, provided it possesses suitable keys. For repository owners, we aim at providing cost transparency and seamless scalability from very small to very large CernVM-FS installations. [ABSTRACT FROM AUTHOR]
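As a purely illustrative sketch of the envisioned flow (not CernVM-FS code), a short-lived function could stage an updated object into S3-compatible storage; it assumes the boto3 package, and the bucket, key and endpoint are hypothetical placeholders.

```python
# Sketch of a "serverless publish" step, not CernVM-FS code: a short-lived function
# stages an updated object into S3-compatible storage. Requires boto3; bucket, key
# and endpoint are hypothetical placeholders.
import boto3

def stage_object(payload: bytes, key: str,
                 bucket: str = "cvmfs-repo-example",
                 endpoint: str = "https://s3.example.org") -> None:
    s3 = boto3.client("s3", endpoint_url=endpoint)
    # A content delivery network in front of the bucket would then handle
    # replication and caching towards the clients.
    s3.put_object(Bucket=bucket, Key=key, Body=payload)

if __name__ == "__main__":
    stage_object(b"new catalog contents", "repo/.cvmfs/example-catalog")
```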
- Published
- 2019
- Full Text
- View/download PDF
21. HNSciCloud, a Hybrid Cloud for Science.
- Author
-
Fernandes, João, Jones, Bob, Yakubov, Sergey, Chierici, Andrea, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INFORMATION technology ,ARTIFICIAL neural networks ,OPEN source software ,BANDWIDTHS ,HYBRID cloud computing - Abstract
Helix Nebula Science Cloud (HNSciCloud) has developed a hybrid cloud platform that links together commercial cloud service providers and research organizations' in-house IT resources via the GEANT network. The platform offers data management capabilities with transparent data access where applications can be deployed with no modifications on both sides of the hybrid cloud and with compute services accessible via eduGAIN [1] and ELIXIR [2] federated identity and access management systems. In addition, it provides support services, account management facilities, full documentation and training. The cloud services are being tested by a group of 10 research organisations from across Europe [3], against the needs of use-cases from seven ESFRI infrastructures [4]. The capacity procured by ten research organisations from the commercial cloud service providers to support these use-cases during 2018 exceeds twenty thousand cores and two petabytes of storage with a network bandwidth of 40Gbps. All the services are based on open source implementations that do not require licenses in order to be deployed on the in-house IT resources of research organisations connected to the hybrid platform. An early adopter scheme has been put in place so that more research organisations can connect to the platform and procure additional capacity to support their research programmes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Next Generation Generative Neural Networks for HEP.
- Author
-
Farrell, Steven, Bhimji, Wahid, Kurth, Thorsten, Mustafa, Mustafa, Bard, Deborah, Lukic, Zarija, Nachman, Benjamin, Patton, Harley, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
ARTIFICIAL neural networks ,SUPERCOMPUTERS ,INTERPOLATION ,DEEP learning ,BIG data - Abstract
Initial studies have suggested that generative adversarial networks (GANs) have promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and also, like GANs in general, suffer from stability issues. We apply GANs to generate full particle physics events (not individual physics objects), explore conditioning of generated events based on physics theory parameters and evaluate the precision and generalization of the produced datasets. We apply this to SUSY mass parameter interpolation and pileup generation. We also discuss recent developments in convergence and representations that match the structure of the detector better than images. In addition we describe on-going work making use of large-scale distributed resources on the Cori supercomputer at NERSC, and developments to control distributed training via interactive Jupyter notebook sessions. This will allow tackling high-resolution detector data as well as model selection and hyper-parameter tuning in a productive yet scalable deep learning environment. [ABSTRACT FROM AUTHOR]
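For readers unfamiliar with the adversarial training loop the abstract assumes, here is a toy GAN on 1D Gaussian data (PyTorch assumed); it is a minimal didactic sketch, not the authors' full-event model.

```python
# Toy GAN training loop (didactic sketch, not the paper's model). Requires PyTorch.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))          # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # toy "data": samples from N(2, 0.5)
    fake = G(torch.randn(64, 8))               # generated samples from latent noise
    # Discriminator update: real -> 1, fake -> 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()
    # Generator update: try to make the discriminator label fakes as real
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```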
- Published
- 2019
- Full Text
- View/download PDF
23. Belle II at the Start of Data Taking.
- Author
-
Kuhr, Thomas, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
PARTICLE physics ,COLLISIONS (Physics) ,PHYSICS software ,DETECTORS ,LUMINOSITY - Abstract
The Belle II experiment is expected to collect e+e- collision data at a 40 times higher instantaneous luminosity than achieved so far. The high collision rate requires not only an upgrade of the detector, but also of the computing system and the software to handle the data. The first collision data taken during the commissioning run in Spring 2018 provides an excellent opportunity to review the status of the Belle II computing and software and assess its readiness for the physics data taking starting in 2019. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. The obsolescence of Information and Information Systems: CERN Digital Memory project.
- Author
-
Le Meur, Jean-Yves, Tarocco, Nicola, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INFORMATION storage & retrieval systems ,COMPUTER storage devices ,DIGITAL libraries ,DIGITIZATION of archival materials ,ARCHIVES - Abstract
The CERN Digital Memory project was started in 2016 with the main goal of preventing the loss of historical content produced by the organisation. The first step of the project targeted the risk of deterioration of the most vulnerable materials, mostly the multimedia assets created in analogue formats from 1954 to the late 1990's, like still and moving images kept on magnetic carriers. In parallel, today's best practices for guaranteeing a long life to digital content, either born digital or resulting from a digitization process, were studied. While traditional archives and libraries have grown up over centuries establishing recognized standards for the preservation of printed content, the field of digital archiving is in its infancy. This paper briefly presents the challenges of migrating hundreds of thousands of audio recordings, slides, negatives, videotapes or films from the analogue to the digital era. It then describes how a Digital Memory platform is being built, conforming to the principles of the ISO 16363 digital object management norm that defines trustworthy digital repositories. Finally, as all information repository managers are faced with the necessary migration of underlying systems and the obsolescence of the information itself, the talk explains how a digital archiving platform focusing only on content preservation can be of direct interest for most live systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. Beyond X.509: token-based authentication and authorization for HEP.
- Author
-
Ceccanti, Andrea, Vianello, Enrico, Caberletti, Marco, Giacomini, Francesco, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COMPUTER access control ,DIGITAL libraries ,MIDDLEWARE ,CLOUD computing ,COMPUTER storage devices - Abstract
X.509 certificates and VOMS have proved to be a secure and reliable solution for authentication and authorization on the Grid, but they have also shown usability issues and required the development of ad-hoc services and libraries to support VO-based authorization schemes in Grid middleware and experiment computing frameworks. The need to move beyond X.509 certificates is recognized as an important objective in the HEP R&D roadmap for software and computing, in order to overcome the usability issues of the current AAI and embrace recent advancements in web technologies widely adopted in industry, but also to enable the secure composition of computing and storage resources provisioned across heterogeneous providers in order to meet the computing needs of HL-LHC. A flexible and usable AAI based on modern web technologies is a key enabler of such secure composition and has been a major topic of research of the recently concluded INDIGO-DataCloud project. In this contribution, we present an integrated solution, based on the INDIGO-DataCloud Identity and Access Management service, that demonstrates how a next-generation, token-based, VO-aware AAI can be built in support of HEP computing use cases, while maintaining compatibility with the existing, VOMS-based AAI used by the Grid. [ABSTRACT FROM AUTHOR]
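In practical terms, a token-based AAI means services validate a bearer JWT before mapping its claims to authorization decisions; the sketch below uses the third-party PyJWT package, and the issuer, audience and key are placeholders, not the IAM service's actual values.

```python
# Sketch of service-side bearer-token validation (not the IAM implementation).
# Uses the third-party PyJWT package; issuer, audience and key are placeholders.
import jwt  # pip install PyJWT

def validate(token: str, public_key_pem: str) -> dict:
    """Verify signature, issuer and audience, then return the claims."""
    claims = jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],
        audience="https://wlcg.example.org",  # hypothetical audience
        issuer="https://iam.example.org/",    # hypothetical issuer
    )
    # Group/scope claims would then drive the VO-style authorization decision.
    return claims
```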
- Published
- 2019
- Full Text
- View/download PDF
26. The DAQ systems of the DUNE Prototypes at CERN.
- Author
-
Hennessy, Karol, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA acquisition systems ,NEUTRINOS ,PROTOTYPES ,PHOTON detectors ,ARGON - Abstract
DUNE is a long baseline neutrino experiment due to take data in 2025. Two prototypes of the DUNE far detector were built to assess candidate technologies and methods in advance of the DUNE detector build. Described here are the data acquisition (DAQ) systems for both of its prototypes, ProtoDUNE single-phase (SP) and ProtoDUNE dual-phase (DP). The ProtoDUNEs also break records as the largest beam test experiments yet constructed, and are the fundamental elements of CERN's Neutrino Platform. This renders each ProtoDUNE an experiment in its own right, and the design and construction have been chosen to meet this scale. Due to the aggressive timescale, off-the-shelf electronics have been chosen to meet the demands of the experiments where possible. The ProtoDUNE-SP cryostat comprises two primary sub-detectors: a single-phase liquid argon TPC and a companion photon detector. The TPC has two candidate readout solutions under test in ProtoDUNE-SP: RCE (ATCA-based) and FELIX (PCIe-based). Fermilab's artDAQ is used as the dataflow software for the single-phase experiment. ProtoDUNE-DP will read out the dual-phase liquid argon detector using a microTCA solution. The timing, triggering, and compression schemes are described for both experiments, along with mechanisms for sending data offline to permanent data storage in CERN's EOS infrastructure. This paper describes the design and implementation of the TDAQ systems as well as first measurements of their performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. Evolution of monitoring, accounting and alerting services at INFN-CNAF Tier-1.
- Author
-
Dal Pra, Stefano, Falabella, Antonio, Fattibene, Enrico, Cincinelli, Gianluca, Magnani, Matteo, De Cristofaro, Tiziano, Ruini, Martin, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
ACCOUNTING ,SCIENTIFIC community ,DATA libraries ,SCIENCE databases ,COMPUTER software - Abstract
CNAF is the national center of INFN (Italian National Institute for Nuclear Physics) for IT technology services. The Tier-1 data center operated at CNAF offers computing and storage resources to scientific communities such as those working on the four LHC (Large Hadron Collider) experiments at CERN and 30 other experiments in which INFN is involved. In past years, monitoring and alerting services for Tier-1 resources were provided by several software tools, such as LEMON (developed at CERN and customized to the characteristics of data centers managing scientific data), Nagios (especially used for alerting purposes) and a system based on the Graphite database and other ad-hoc developed services and web pages. By 2015, a task force had been organized with the purpose of defining and deploying a common infrastructure (based on Sensu, InfluxDB and Grafana) to be exploited by the different CNAF departments. Once the new infrastructure was deployed, a major task was then to adapt the whole set of monitoring and alerting services. We present the steps that the Tier-1 group followed in order to accomplish a full migration, which is now completed with all the new services in production. In particular we show the redesign of the monitoring sensors and alerting checks to adapt them to the infrastructure based on the Sensu software, the creation of web dashboards for data presentation, and the porting of historical data from LEMON/Graphite to InfluxDB. [ABSTRACT FROM AUTHOR]
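To make the data flow concrete, the sketch below pushes one metric point to an InfluxDB 1.x instance over its HTTP /write endpoint using the line protocol; the host, database name and metric are placeholders, not the CNAF setup.

```python
# Minimal sketch of writing one point to InfluxDB 1.x via the HTTP /write endpoint
# using the line protocol; host, database and metric names are placeholders.
import time
import requests

def push_metric(measurement: str, host_tag: str, value: float,
                url: str = "http://influxdb.example.org:8086", db: str = "tier1"):
    ts_ns = int(time.time() * 1e9)
    line = f"{measurement},host={host_tag} value={value} {ts_ns}"
    r = requests.post(f"{url}/write", params={"db": db}, data=line, timeout=5)
    r.raise_for_status()

push_metric("cpu_load", "wn-001.example.org", 3.14)
```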
- Published
- 2019
- Full Text
- View/download PDF
28. Securing and sharing Elasticsearch resources with ReadonlyREST.
- Author
-
Schwickerath, Ulrich, Saiz, Pablo, Toteva, Zhechka, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA security ,INFORMATION technology ,COMPUTER network resources ,COST effectiveness ,INTERNET - Abstract
In early 2016 CERN IT created a new project to consolidate and centralise Elasticsearch instances across the site, with the aim of offering a new production-quality IT service to experiments and departments. We present the solutions we adapted for securing the system using only open source tools, which allows us to consolidate up to 20 different use cases on a single Elasticsearch cluster. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
29. MONIT: Monitoring the CERN Data Centres and the WLCG Infrastructure.
- Author
-
Aimar, Alberto, Aguado Corman, Asier, Andrade, Pedro, Delgado Fernandez, Javier, Garrido Bear, Borja, Karavakis, Edward, Marek Kulikowski, Dominik, Magnoni, Luca, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
SERVER farms (Computer network management) ,COMPUTER software ,SOFTWARE measurement ,OPEN source software ,CLOUD computing - Abstract
The new unified monitoring architecture (MONIT) for the CERN Data Centres and for the WLCG Infrastructure is based on established open source technologies to collect, stream, store and access monitoring data. The previous solutions, based on in-house development and commercial software, have been replaced with widely recognized technologies such as Collectd, Kafka, Spark, Elasticsearch, InfluxDB, Grafana and others. The monitoring infrastructure, fully based on CERN cloud resources, covers the whole workflow of the monitoring data: from collecting and validating metrics and logs to making them available for dashboards, reports and alarms. The deployment in production of this new DC and WLCG monitoring is well under way and this contribution provides a summary of the progress, hurdles met and lessons learned in using these open source technologies. It also focuses on the choices made to achieve the required levels of stability, scalability and performance of the MONIT monitoring service. [ABSTRACT FROM AUTHOR]
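As an illustration of the consuming end of such a Collectd-to-Kafka pipeline, the sketch below reads JSON monitoring records from a Kafka topic; it assumes the third-party kafka-python package, and the topic name, brokers and field names are placeholders, not MONIT's.

```python
# Sketch of the consuming side of a metrics pipeline (not MONIT code).
# Uses the third-party kafka-python package; topic and brokers are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "monitoring.metrics",                            # hypothetical topic
    bootstrap_servers=["broker.example.org:9092"],
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="latest",
)

for record in consumer:
    metric = record.value
    # Downstream, records like this would be validated and routed to
    # Elasticsearch/InfluxDB for dashboards and alarms.
    print(metric.get("host"), metric.get("plugin"), metric.get("value"))
```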
- Published
- 2019
- Full Text
- View/download PDF
30. Detection of Erratic Behavior in Load Balanced Clusters of Servers Using a Machine Learning Based Method.
- Author
-
Adam, Martin, Magnoni, Luca, Pilát, Martin, Adamová, Dagmar, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
MACHINE learning ,CLIENT/SERVER computing ,DETECTORS ,SOFTWARE measurement ,INFORMATION retrieval - Abstract
With the explosion of the number of distributed applications, a new dynamic server environment emerged grouping servers into clusters, whose utilization depends on the current demand for the application. To provide reliable and smooth services it is crucial to detect and fix possible erratic behavior of individual servers in these clusters. Use of standard techniques for this purpose delivers suboptimal results. We have developed a method based on machine learning techniques which allows detecting outliers indicating a possible problematic situation. The method inspects the performance of the rest of the cluster and provides system operators with additional information which allows them to identify quickly the failing nodes. We applied this method to develop a Spark application using the CERN MONIT architecture and with this application, we analyzed monitoring data from multiple clusters of dedicated servers in the CERN data center. In this contribution, we present our results achieved with this new method and with the Spark application for analytics of CERN monitoring data. [ABSTRACT FROM AUTHOR]
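The following toy check conveys the core idea of flagging a node whose metric deviates strongly from the rest of its cluster, using a robust (median/MAD) z-score; it assumes numpy and is not the authors' Spark application.

```python
# Toy per-cluster outlier check (not the authors' Spark application): flag nodes
# whose metric deviates strongly from the rest of the cluster via a modified z-score.
import numpy as np

def erratic_nodes(samples: dict, threshold: float = 3.5):
    """samples: {hostname: metric_value}; returns hostnames that look erratic."""
    values = np.array(list(samples.values()), dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-9  # avoid division by zero
    flagged = []
    for host, v in samples.items():
        z = 0.6745 * (v - median) / mad  # modified z-score
        if abs(z) > threshold:
            flagged.append(host)
    return flagged

print(erratic_nodes({"node01": 210, "node02": 205, "node03": 198, "node04": 900}))
```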
- Published
- 2019
- Full Text
- View/download PDF
31. Notifications workflows using the CERN IT central messaging infrastructure.
- Author
-
Toteva, Zhechka, Lukic, Darko, Cons, Lionel, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
WORKFLOW ,CONFIGURATION management ,VIRTUAL machine systems ,EMAIL ,PYTHON programming language ,INFORMATION technology - Abstract
In the CERN IT Agile Infrastructure (AI), Puppet, the CERN IT central messaging infrastructure (MI) and the Roger application are the key constituents handling the configuration of the machines of the computer centre. The configuration of a machine at any given moment depends on its declared state in Roger, and Puppet ensures the actual implementation of the desired configuration by running the Puppet agent on the machine at regular intervals, typically every 90 minutes. Sometimes it is preferable that a configuration change is propagated immediately to the targeted machine, ahead of the next scheduled Puppet agent run on that machine. The particular need for handling notifications in a highly scalable manner for a large-scale infrastructure has been satisfied with the implementation of the CERN-Megabus architecture, based on the ActiveMQ messaging system. The design and implementation of the CERN-Megabus architecture are introduced, followed by the implementation of the Roger notification workflow. The choice of ActiveMQ is analysed, and the message flow between the Roger notification producer and the CASTOR, EOS, BATCH and Load Balancing consumers is presented. The employment of predefined consumer modules in order to speed up the on-boarding of new CERN-Megabus use cases is also described. [ABSTRACT FROM AUTHOR]
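The sketch below shows the general shape of publishing a notification to an ActiveMQ broker over STOMP, the kind of message a producer in such a workflow might emit; it assumes the third-party stomp.py package, and the broker, credentials, destination and payload fields are placeholders, not the CERN-Megabus ones.

```python
# Sketch of publishing a notification to ActiveMQ over STOMP (not CERN-Megabus code).
# Uses the third-party stomp.py package; broker, credentials, destination and payload
# fields are placeholders.
import json
import stomp

conn = stomp.Connection([("activemq.example.org", 61613)])
conn.connect("producer", "secret", wait=True)
conn.send(
    destination="/topic/roger.state",  # hypothetical destination
    body=json.dumps({"hostname": "wn-001", "appstate": "draining"}),
    headers={"persistent": "true"},
)
conn.disconnect()
```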
- Published
- 2019
- Full Text
- View/download PDF
32. Concurrent Adaptive Load Balancing at CERN.
- Author
-
Canilho, Paulo, Reguero, Ignacio, Saiz, Pablo, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
LOAD balancing (Computer networks) ,VIRTUAL machine systems ,SOFTWARE measurement ,DATABASES ,CLIENT/SERVER computing - Abstract
CERN is using an increasing number of DNS-based load-balanced aliases (currently over 700). This article explains the Go-based concurrent implementation of the Load Balancing Service, both the client (lbclient) and the server (lbd). The article describes how it is being progressively deployed using Puppet and how concurrency greatly improves scalability, ultimately allowing a single master-slave pair of OpenStack virtual machines to serve all the aliases. It explains the new implementation of the lbclient, which, among other things, makes it possible to incorporate Collectd metrics to determine the status of the node and takes advantage of the Go language concurrency features to reduce the real time needed for checking the status of the node. The article explains that the LBD server acts as an arbiter, getting feedback on load and health from the backend nodes using SNMP (Simple Network Management Protocol) to decide which IP addresses the LB alias will present. While this architecture has long been used at CERN for DNS-based aliases, the LBD code is generic enough to drive other load balancers. A proof of concept using HAProxy to provide adaptive responses to load and health monitoring has been implemented. [ABSTRACT FROM AUTHOR]
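For illustration only, the arbiter decision described above can be reduced to ranking healthy backend nodes by a load score and returning the addresses the alias should present; the real lbd is written in Go, and the sketch below (with documentation-range IPs) is a simplification, not its implementation.

```python
# Simplified sketch of the arbiter decision (the real lbd is written in Go):
# rank healthy backend nodes by load and return the IPs the alias should present.
def choose_alias_members(nodes, n_best=2):
    """nodes: list of dicts like {'ip': ..., 'healthy': bool, 'load': float}."""
    healthy = [n for n in nodes if n["healthy"]]
    healthy.sort(key=lambda n: n["load"])  # lowest load first
    return [n["ip"] for n in healthy[:n_best]]

nodes = [
    {"ip": "192.0.2.10", "healthy": True,  "load": 0.7},
    {"ip": "192.0.2.11", "healthy": False, "load": 0.1},  # failed health check
    {"ip": "192.0.2.12", "healthy": True,  "load": 0.3},
]
print(choose_alias_members(nodes))  # ['192.0.2.12', '192.0.2.10']
```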
- Published
- 2019
- Full Text
- View/download PDF
33. The ALICE Analysis Facility Prototype at GSI.
- Author
-
Schwarz, Kilian, Fleischer, Soeren, Grosso, Raffaele, Knedlik, Jan, Kollegger, Thorsten, Kramp, Paul, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
LARGE Hadron Collider ,DETECTORS ,BIT rate ,MONTE Carlo method ,BACK up systems - Abstract
In LHC Run 3 the ALICE Computing Model will change. The Grid Tiers will to a large extent be specialised for a given role. 2/3 of the reconstruction and calibration will be done at the combined online and offline O2 compute facility, 1/3 will be done by the Tier1 centres in the Grid. Additionally all Tier1 centres, as already now, will take care of archiving one copy of the raw data on tape. The Tier2 centres will only do simulation. The AODs will be collected on specialised Analysis Facilities which shall be capable of processing 10 PB of data within 24 hours. A prototype of such an Analysis Facility has been set up at GSI based on the experience with the local ALICE Tier2 centre, which has been in production since 2002. The main components are a general purpose HPC cluster with a mounted cluster file system, enhanced by Grid components like an XRootD-based Storage Element and an interface for receiving and running dedicated Grid jobs on the Analysis Facility prototype. The necessary I/O speed as well as easy local data access is facilitated by self-developed XRootD plugins. Performance tests with real-life ALICE analysis trains suggest that the target throughput rate can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Trident : An Automated System Tool for Collecting and Analyzing Performance Counters.
- Author
-
Muralidharan, Servesh, Smith, David, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
QUALITATIVE research ,HARDWARE ,BANDWIDTH allocation ,SOFTWARE measurement ,ACQUISITION of data - Abstract
Trident is a qualitative analysis tool that looks at various low-level metrics related to the core, memory and I/O to highlight performance bottlenecks during the execution of an application. Trident uses a three-pronged approach to analysing a node's utilisation of hardware resources and to help a non-expert user understand the stress that a given job puts on different parts of the system. Currently metrics such as memory bandwidth, core utilization, active processor cycles, etc., are being collected. Interpretation of the data in raw form is often non-intuitive. Therefore, the tool converts these data into derived metrics that are then represented as a system-wide extended Top-Down analysis that helps developers and site managers alike understand the application behavior without the need for in-depth expertise in architecture details. [ABSTRACT FROM AUTHOR]
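A small worked example of the derived-metric idea: turning raw counters into instructions-per-cycle and memory bandwidth figures. The counter values are illustrative, not Trident output.

```python
# Worked example of turning raw counters into derived metrics (illustrative values,
# not Trident output): instructions-per-cycle and DRAM bandwidth over an interval.
def derived_metrics(instructions, cycles, dram_bytes, interval_s):
    ipc = instructions / cycles if cycles else 0.0
    bandwidth_gb_s = dram_bytes / interval_s / 1e9
    return {"IPC": round(ipc, 2), "DRAM_GB_per_s": round(bandwidth_gb_s, 2)}

# e.g. 8e10 instructions retired in 1e11 cycles, 12 GB of DRAM traffic over 10 s
print(derived_metrics(8e10, 1e11, 12e9, 10.0))  # {'IPC': 0.8, 'DRAM_GB_per_s': 1.2}
```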
- Published
- 2019
- Full Text
- View/download PDF
35. IT Service Management at CERN: Data Centre and Service monitoring and status.
- Author
-
Martín Clavo, David, Cremel, Nicole, Delamare, Catherine, Garcia Cuervo, Jorge, Moller, Mats, Salter, Wayne, Toteva, Zhechka, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INFORMATION technology ,HARDWARE ,COMPUTER network resources ,COMPUTER operating systems ,CLIENT/SERVER computing - Abstract
The Information Technology department at CERN has been using ITIL Service Management methodologies [1] and ServiceNow since early 2011. In recent years, several developments have been accomplished regarding the data centre and service monitoring, as well as service status reporting. The CERN Service Portal, built on top of ServiceNow, hosts the CERN Service Status Board, which informs end users and supporters of ongoing service incidents, planned interventions and service changes. The Service Portal also includes the Service Availability Dashboard, which displays the technical status of CERN computing services. Finally, ServiceNow has been integrated with the data centre monitoring infrastructure, via GNI (General Notification Infrastructure) in order to implement event management and generate incidents from hardware, network, operating system and application alarms. We detail how these developments have been implemented, and how they help supporters monitor and solve issues and keep users informed of service status. Also, we highlight which lessons have been learnt after the implementation. Finally, possible future improvements are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
36. Increasing Windows security by hardening PC configurations.
- Author
-
Martín Zamora, Pablo, Kwiatek, Michal, Nicolas Bippus, Vincent, Cruz Elejalde, Eneko, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INTERNET security ,PERSONAL computers ,CYBERTERRORISM ,FINANCE ,COMPUTER software - Abstract
Over 8000 Windows PCs are actively used on the CERN site for tasks ranging from controlling the accelerator facilities to processing invoices. PCs are managed through CERN's Computer Management Framework and Group Policies, with configurations deployed per machine set and considerable autonomy left to the end-users. While the generic central configuration works well for the majority of users, a specific hardened PC configuration is now provided for users who require stronger resilience against external attacks. This paper describes the technical choices and configurations involved and discusses the effectiveness of the hardened PC approach. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
37. Establishment of new WLCG Tier Center using HTCondor-CE on UMD middleware.
- Author
-
Ryu, Geonmo, Noh, Seo-Young, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
MIDDLEWARE ,COMPUTING platforms ,INFORMATION technology ,COMPUTER access control ,DATA libraries - Abstract
CREAM is a compute element (CE) used by many sites based on UMD middleware to handle grid computing jobs. CREAM can be combined with various local batch systems, but it did not work with the HTCondor batch system. Since KISTI has been using HTCondor for many years, we searched for another CE that can work with HTCondor to process grid computing tasks. We proposed HTCondor-CE as the CE, but it was not smoothly compatible with the UMD middleware environment, because the HTCondor-CE software was developed by the OSG. In 2017, we manually configured HTCondor-CE and deployed a new CMS Tier-2 Center using HTCondor-CE and a local HTCondor cluster in the UMD middleware environment. The center has passed the SAM test suite, which confirms its CMS computing performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. SGSI project at CNAF.
- Author
-
Chierici, Andrea, de Girolamo, Donato, Guizzunti, Guido, Longo, Stefano, Maron, Gaetano, Martelli, Barbara, Vistoli, Cristina, Zani, Stefano, Castellani, Gastone, Giampieri, Enrico, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
DATA libraries ,DATA warehousing ,INTERNET security ,COMPUTER software ,BIG data - Abstract
The Italian Tier1 center is mainly focused on LHC and physics experiments in general. Recently we widened our area of activity and established a collaboration with the University of Bologna to set up an area inside our computing center for hosting experiments with strict security and privacy requirements on stored data. The first experiment we are going to host is Harmony, a project within the Big Data for Better Outcomes programme of the Innovative Medicines Initiative (IMI). In order to be able to accept this kind of data, we had to make a subset of our computing center compliant with the ISO 27001 standard. In this article we describe the SGSI project (Sistema Gestione Sicurezza Informazioni, Information Security Management System), detailing all the processes we went through in order to become ISO 27001 compliant, with a particular focus on the separation of the project's dedicated resources from all the others hosted in the center. We also describe the software solutions adopted to allow this project to accept, in the future, any experiment or collaboration in need of this kind of security procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
39. Simulation approach for improving the computing network topology and performance of the China IHEP Data Center.
- Author
-
Nechaevskiy, Andrey, Ososkov, Gennady, Pryahina, Darya, Trofimov, Vladimir, Li, Weidong, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COMPUTER simulation ,COMPUTER network resources ,INFORMATION storage & retrieval systems ,WORKFLOW ,ELECTRIC network topology - Abstract
The paper describes a project intended to improve the computing network topology and performance of the China IHEP Data Center, taking into account the growing numbers of hosts, experiments and computing resources. Analysing the computing performance of the IHEP Data Center in order to optimize its distributed data processing system is a hard problem, due to the scale and complexity of the computing and storage resources shared between various HEP experiments. In order to fulfil the requirements, we adopt the simulation program SyMSim, developed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. This simulation system focuses on improving the efficiency of developing grid-cloud structures by using the work quality indicators of a real system. SyMSim facilitates decisions regarding required equipment and resources. The simulation takes input parameters from the database of the IHEP computing infrastructure; in addition, we use data from the BESIII experiment to set the workflow and data flow parameters for simulating three different ways of organizing the IHEP computing infrastructure. The first simulation results show that the proposed approach allows us to make an optimal choice of network topology, improving performance and saving resources. [ABSTRACT FROM AUTHOR]
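The role of such a simulation can be illustrated with a deliberately minimal model: estimate how long a given job mix would take to stage its data under two candidate topologies, here reduced to nothing more than per-link bandwidths. All numbers and names are invented, and this is not SyMSim.
```python
# Extremely simplified illustration of the simulation idea: compare candidate
# topologies by the time a workflow needs to stage its input data.
def transfer_time(file_gb: float, path_bandwidths_gbps: list) -> float:
    """Transfer time in seconds, limited by the slowest link on the path."""
    bottleneck = min(path_bandwidths_gbps)
    return file_gb * 8 / bottleneck

workflow = [200, 500, 120]              # file sizes in GB for one job mix
topologies = {"flat":       [10, 10],   # storage -> core -> worker, Gbit/s
              "leaf-spine": [40, 25]}

for name, links in topologies.items():
    total = sum(transfer_time(f, links) for f in workflow)
    print(f"{name:10s}: {total / 60:.1f} min to stage the workflow")
```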
- Published
- 2019
- Full Text
- View/download PDF
40. Service monitoring system for JINR Tier-1.
- Author
-
Kadochnikov, Ivan, Korenkov, Vladimir, Mitsyn, Valery, Pelevanyuk, Igor, Strizh, Tatiana, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INFRASTRUCTURE policy ,COMPUTER network resources ,HARDWARE ,AGGREGATION (Statistics) ,INFORMATION technology - Abstract
The JINR Tier-1 for CMS was created in 2015. It is important to keep an eye on the Tier-1 center at all times in order to maintain its performance. One monitoring system is based on Nagios; it monitors the center on several levels: engineering infrastructure, network and hardware. It collects many metrics, creates plots and determines the states of hardware components, such as HDD health, temperatures and loads. However, this information is not always enough to tell whether the Tier-1 services are working properly. For that purpose, a service monitoring system was developed to collect data from different resources, including the WLCG monitoring services. The purpose of this system is to aggregate data from different sources, determine the states of the services based on new and historical data, and react according to predefined instructions. The system's general idea and architecture are described and analyzed in this work. [ABSTRACT FROM AUTHOR]
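A minimal sketch of the aggregation-and-reaction idea is given below; the states, sources, thresholds and actions are invented placeholders, not the actual JINR implementation.
```python
# Sketch: combine states reported by different sources into one state per
# service and react according to predefined instructions.
from enum import IntEnum

class State(IntEnum):
    OK = 0
    WARNING = 1
    CRITICAL = 2

# Hypothetical per-source reports for one service.
reports = {"nagios": State.OK, "wlcg_sam": State.WARNING, "history": State.OK}

def aggregate(reports: dict) -> State:
    # Pessimistic aggregation: the worst reported state wins.
    return max(reports.values())

ACTIONS = {State.OK:       lambda svc: None,
           State.WARNING:  lambda svc: print(f"notify on-call about {svc}"),
           State.CRITICAL: lambda svc: print(f"open ticket for {svc}")}

service_state = aggregate(reports)
ACTIONS[service_state]("CMS Tier-1 storage")
```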
- Published
- 2019
- Full Text
- View/download PDF
41. Advanced features of the CERN OpenStack Cloud.
- Author
-
Castro León, José, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
CLOUD computing ,VIRTUAL machine systems ,DATA warehousing ,COMPUTER networks ,PRIVATE networks - Abstract
The CERN OpenStack cloud has been delivering a wide variety of services to the whole laboratory since it entered production in 2013. Initially, standard resources such as Virtual Machines and Block Storage were offered. Today, the cloud offering includes advanced features such as Container Orchestration (for Kubernetes, Docker Swarm mode, Mesos/DCOS clusters), File Shares and Bare Metal, and the Cloud team is preparing the addition of Networking and Workflow-as-a-Service components. In this paper, we will describe these advanced features, the OpenStack projects that provide them, as well as some of the main use cases that benefit from them. We will show the ongoing work on those services that will increase functionality, such as container orchestration upgrades and networking features such as private networks and floating IPs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
42. ATLAS Technical Coordination Expert System.
- Author
-
Asensi Tortajada, Ignacio, Rummler, André, Salukvadze, George, Solans Sánchez, Carlos, Reeves, Kendall, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
SYSTEMS design ,INFORMATION retrieval ,KNOWLEDGE transfer ,INSPECTION & review ,DATA analysis - Abstract
When planning an intervention on a complex experiment like ATLAS, detailed knowledge of the system under intervention and of its interconnections with all the other systems is mandatory. In order to improve the understanding of the parties involved in an intervention, a rule-based expert system has been developed. On the one hand this helps to recognise dependencies that are not always evident, and on the other hand it facilitates communication between experts with different backgrounds by translating between the vocabularies of specific domains. To simulate an event, this tool combines information from different areas such as the detector control (DCS) and safety (DSS) systems, gas, cooling, ventilation, and electricity distribution. The inference engine provides a list of the systems impacted by an intervention, even if they are connected at a very low level and belong to different domains. It also predicts the probability of failure for each of the components affected by an intervention. The risk assessment models considered are fault tree analysis and principal component analysis. The user interface is a web-based application that uses graphics and text to provide different views of the detector system, adapted to the different user needs, and to interpret the data. [ABSTRACT FROM AUTHOR]
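The core inference step, listing everything transitively impacted by an intervention, can be sketched as a graph traversal; the dependency graph below is invented for illustration and does not reflect the real ATLAS rule base.
```python
# Toy sketch of the inference step: given declared dependencies between
# components of different domains, list everything (transitively) impacted
# by an intervention on one component.
from collections import deque

depends_on = {                      # "X: [Y, ...]" means Y needs X to work
    "cooling_plant_A": ["rack_power_7"],
    "rack_power_7":    ["DCS_node_3", "gas_controller_2"],
    "DCS_node_3":      ["pixel_readout"],
}

def impacted(component: str) -> set:
    """Breadth-first closure over the dependency graph."""
    seen, queue = set(), deque([component])
    while queue:
        for child in depends_on.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(impacted("cooling_plant_A"))
# expected members: rack_power_7, DCS_node_3, gas_controller_2, pixel_readout
```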
- Published
- 2019
- Full Text
- View/download PDF
43. HEPCon – A Cross-Platform Mobile Application for HEP Events.
- Author
-
Vassilev, Martin, Vassilev, Vassil, Penev, Alexander, Vassileva, Petya, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
PARTICLE physics ,CROSS-platform software development ,MOBILE apps ,INFORMATION retrieval - Abstract
Collaboration in research is essential for saving time and money. The field of high-energy physics (HEP) is no different. The higher the level of collaboration, the stronger the community. The HEP field encourages organizing events of various formats and sizes, such as meetings, workshops and conferences. Making it easier to attend a HEP event fosters cooperation and dialogue, and this is what makes the Indico service a de facto community standard. The paper describes HEPCon, a cross-platform mobile application which collects all the information available on Indico and makes it available on a portable device. It keeps most of the data locally, which speeds up the interaction. HEPCon uses a shared code base which allows easy multi-platform development and support. iOS and Android implementations are available for free download. The project is based on C# and uses the Xamarin mobile app technology for building native iOS and Android apps. An SQLite database is responsible for retrieving and storing conference data. The app can be used to preview data from past CHEP conferences, but the tool is implemented generically enough to support other Indico events. [ABSTRACT FROM AUTHOR]
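The local-caching idea can be sketched with a small standalone example using SQLite; the schema and record below are invented, and HEPCon itself is a C#/Xamarin application, so this Python sketch only illustrates the concept.
```python
# Minimal sketch of the local-caching idea: store fetched contribution records
# in an SQLite database so later lookups do not need the network.
import sqlite3

conn = sqlite3.connect("hepcon_cache.db")
conn.execute("""CREATE TABLE IF NOT EXISTS contributions (
                    id INTEGER PRIMARY KEY, title TEXT, speaker TEXT, starts TEXT)""")
conn.execute("INSERT OR REPLACE INTO contributions VALUES (?, ?, ?, ?)",
             (42, "Trident: collecting performance counters",
              "S. Muralidharan", "2018-07-09T11:00"))
conn.commit()

for row in conn.execute("SELECT title, starts FROM contributions ORDER BY starts"):
    print(row)   # served from the local cache, no network round trip
conn.close()
```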
- Published
- 2019
- Full Text
- View/download PDF
44. Conditions Data Handling in the Multithreaded ATLAS Framework.
- Author
-
Leggett, Charles, Shapoval, Illya, Snyder, Scott, Tsulaia, Vakho, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COMPUTER software ,CALIBRATION ,DATA quality ,DATA analysis ,GEOMETRY - Abstract
In preparation for Run 3 of the LHC, the ATLAS experiment is migrating its offline software to use a multithreaded framework, which will allow multiple events to be processed simultaneously. This implies that the handling of non-event, time-dependent (conditions) data, such as calibrations and geometry, must also be extended to allow for multiple versions of such data to exist simultaneously. This has now been implemented as part of the new ATLAS framework. The detector geometry is included in this scheme by having sets of time-dependent displacements on top of a static base geometry. [ABSTRACT FROM AUTHOR]
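A minimal sketch of such a container is shown below: several payload versions coexist, each tagged with the start of its interval of validity (IOV), and each event looks up the version valid at its own timestamp. This is an illustration of the concept, not the ATLAS implementation.
```python
# Sketch of a conditions container that holds several versions of a
# calibration at once, so events processed concurrently can each pick up
# the payload matching their own timestamp.
from bisect import bisect_right
from threading import Lock

class ConditionsContainer:
    def __init__(self):
        self._starts, self._payloads = [], []   # parallel lists, sorted by IOV start
        self._lock = Lock()

    def add(self, iov_start: int, payload):
        with self._lock:                        # writers are serialised
            i = bisect_right(self._starts, iov_start)
            self._starts.insert(i, iov_start)
            self._payloads.insert(i, payload)

    def get(self, timestamp: int):
        i = bisect_right(self._starts, timestamp) - 1
        if i < 0:
            raise KeyError("no conditions valid at this time")
        return self._payloads[i]

calib = ConditionsContainer()
calib.add(0,    {"pedestal": 1.02})
calib.add(1000, {"pedestal": 1.05})
print(calib.get(42), calib.get(1500))   # events in different IOVs coexist
```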
- Published
- 2019
- Full Text
- View/download PDF
45. Evolving CERN's Network Configuration Management System.
- Author
-
Stancu, Stefan Nicolae, Shevrikuko, Arkadiy, Gutierrez Rueda, David, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
COMPUTER network resources ,INFORMATION technology ,CONFIGURATION management ,OPEN source software ,DATABASES - Abstract
CERN's networks comprise several thousand network devices from multiple vendors and different generations, fulfilling various purposes (campus network, data centre network, and dedicated networks for the control of the LHC accelerator and experiments). To ensure the reliability of the networks, the IT Communication Systems group has developed an in-house, Perl-based software called "cfmgr", capable of deriving and enforcing the appropriate configuration on all these network devices, based on information from a central network database. Due to the decrease in popularity of the technologies it relies upon, maintaining and expanding the current network configuration management system has become increasingly challenging. Hence, we have evaluated the functionality of various open-source network configuration tools, with a view to leveraging them for evolving the cfmgr platform. We will present the results of this evaluation, as well as the plan for evolving CERN's network configuration management system by decoupling the configuration generation (CERN-specific) from the configuration enforcement (a generic problem, partially addressed by vendor or community Python-based libraries). [ABSTRACT FROM AUTHOR]
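The proposed decoupling can be sketched as two independent functions: a site-specific generator that renders configuration text from database records, and a generic enforcement step. The template, record fields and device name below are invented for illustration.
```python
# Sketch of the decoupling idea: configuration *generation* stays site-specific
# (driven by records from the central network database), while *enforcement*
# is left to a generic, device-facing layer.
INTERFACE_TEMPLATE = """interface {name}
 description {description}
 switchport access vlan {vlan}
"""

def generate_config(db_records: list) -> str:
    """Site-specific part: turn database rows into the desired config text."""
    return "".join(INTERFACE_TEMPLATE.format(**rec) for rec in db_records)

def enforce(device: str, desired: str) -> None:
    """Generic part: push the desired state; a real tool would diff first."""
    print(f"--- would apply to {device} ---\n{desired}")

records = [{"name": "Gi1/0/1", "description": "office-513", "vlan": 72}]
enforce("switch-513-r-ip33", generate_config(records))
```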
- Published
- 2019
- Full Text
- View/download PDF
46. Design and development of vulnerability management portal for DMZ admins powered by DBPowder.
- Author
-
Murakami, Tadashi, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
INTERNET security ,COMPUTER network resources ,CLIENT/SERVER computing ,DATA security ,WEB portals - Abstract
It is difficult to promote cyber security measures in research institutes, especially in DMZ networks that allow connections from outside the network. This difficulty mainly arises from two kinds of diversity: the varied requirements of the servers operated by each research group, and the divergent skill levels among server administrators. Uniform management practices rarely fit such servers. One solution to these difficulties is vulnerability management, and there are two possible approaches to achieving it. One is to offer a simple and powerful vulnerability management service to the administrators of the DMZ hosts (DMZ admins). The other is to provide flexibility and efficiency in the development process of that service. To meet these requirements, we designed and developed a vulnerability management portal site for DMZ admins, named DMZ User's Portal. This paper describes the design of the DMZ User's Portal and its development process using a development framework named DBPowder. Using the DMZ User's Portal, each DMZ admin can easily perform a vulnerability scan on his/her own servers. In other words, the portal delegates the discovery of security vulnerabilities, and the responsibility for them, to individual DMZ admins, which improves their security awareness. Each DMZ admin can then grasp and manage the security situation by himself/herself. The results of 13 years of vulnerability scans show that the security status of the KEK-DMZ has been kept in good condition. We are also developing the DBPowder object-relational mapping (ORM) framework to improve the flexibility and efficiency of the development process of the DMZ User's Portal. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. Equal-cost multi-pathing in high power systems with TRILL.
- Author
-
Baginyan, Andrey, Korenkov, Vladimir, Dolbilov, Andrey, Kashunin, Ivan, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
NETWORK failures (Telecommunication) ,DOWNLOADING ,DATA transmission systems ,TELECOMMUNICATION traffic ,COMPUTER interfaces - Abstract
The article presents a hierarchical diagram of the network farm and a model of the network architecture levels. Protocols suitable for full-mesh network topologies are considered. The modern data transfer protocol TRILL is presented, and its advantages are analysed in comparison with other protocols that may be used in a full-mesh topology. Empirical calculations of data routing based on Dijkstra's algorithm and the patent formula of the TRILL protocol are given. Two systems for monitoring the load of the data channels are described. The data obtained from the 40G interfaces through each monitoring system are presented and their behaviour is analysed. The main result is the discrepancy between the experimental data and the theoretical prediction of equal weight balancing of the traffic when packet data are transmitted over equivalent edges of the graph. It is shown that the distribution of the traffic over such routes is arbitrary and inconsistent with the patent formula. The conclusion analyses the behaviour of the traffic under extreme conditions. [ABSTRACT FROM AUTHOR]
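The routing calculation under test can be sketched as follows: enumerate the equal-cost shortest paths with a Dijkstra-style search and hash each flow onto one of them, which is the balancing behaviour the measurements are compared against. The leaf-spine toy topology and flow tuple are invented, and this is a generic ECMP illustration rather than the TRILL patent formula.
```python
# Sketch of equal-cost multi-path selection over a toy topology.
import heapq
from zlib import crc32

# Leaf-spine toy topology: two equal-cost paths between the leaves.
graph = {"leaf1":  {"spine1": 1, "spine2": 1},
         "spine1": {"leaf1": 1, "leaf2": 1},
         "spine2": {"leaf1": 1, "leaf2": 1},
         "leaf2":  {"spine1": 1, "spine2": 1}}

def equal_cost_paths(src, dst):
    """Collect all shortest paths from src to dst (Dijkstra-style search)."""
    heap, best, paths = [(0, [src])], None, []
    while heap:
        cost, path = heapq.heappop(heap)
        if best is not None and cost > best:
            break                       # only equal-cost shortest paths wanted
        if path[-1] == dst:
            best = cost
            paths.append(path)
            continue
        for nxt, weight in graph[path[-1]].items():
            if nxt not in path:         # avoid loops
                heapq.heappush(heap, (cost + weight, path + [nxt]))
    return paths

paths = equal_cost_paths("leaf1", "leaf2")
flow = ("10.0.0.1", "10.0.0.2", 443)    # a flow is pinned to one path by hashing
print(paths[crc32(repr(flow).encode()) % len(paths)])
```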
- Published
- 2019
- Full Text
- View/download PDF
48. Developing a monitoring system for Cloud-based distributed data-centers.
- Author
-
Elia, Domenico, Vino, Gioacchino, Donvito, Giacinto, Antonacci, Marica, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
CLOUD computing ,DATA libraries ,PATTERN perception ,DATABASES ,MACHINE learning - Abstract
Nowadays more and more datacenters cooperate with each other to achieve common and more complex goals. New advanced functionalities are required to support experts during recovery and management activities, such as anomaly detection and fault pattern recognition. The proposed solution provides active support for problem solving to datacenter management teams by automatically providing the root cause of detected anomalies. The project has been developed in Bari using the ReCaS datacenter as a testbed. Big Data solutions have been selected to properly handle the complexity and size of the data. Features such as open-source availability, a large community, horizontal scalability and high availability have been considered, and tools belonging to the Hadoop ecosystem have been selected. The collected information is sent to a combination of Apache Flume and Apache Kafka, used as the transport layer, which in turn delivers the data to databases and processing components. Apache Spark has been selected as the analysis component. Different kinds of databases have been considered in order to satisfy multiple requirements: Hadoop Distributed File System, Neo4j, InfluxDB and Elasticsearch. Grafana and Kibana are used to show the data in dedicated dashboards. The root-cause analysis engine has been implemented using custom machine learning algorithms. Finally, results are forwarded to experts by email or Slack, using Riemann. [ABSTRACT FROM AUTHOR]
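A toy version of the anomaly-detection step might look like the sketch below, which flags outliers in a metric series with a simple z-score rule; the real system uses custom machine learning algorithms on data streamed through Flume/Kafka and Spark, which this does not attempt to reproduce.
```python
# Toy anomaly detection: flag samples far from the mean of the series.
from statistics import mean, stdev

def anomalies(series, threshold=2.0):
    mu, sigma = mean(series), stdev(series)
    return [(i, x) for i, x in enumerate(series)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

cpu_load = [0.31, 0.29, 0.33, 0.30, 0.32, 0.95, 0.31, 0.30]   # invented samples
print(anomalies(cpu_load))   # flags the spike at index 5
```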
- Published
- 2019
- Full Text
- View/download PDF
49. Next Generation of HEP CPU Benchmarks.
- Author
-
Giordano, Domenico, Alef, Manfred, Michelotto, Michele, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
CENTRAL processing units ,HARDWARE ,COMPUTER programming ,MOTHERBOARDS ,CLOUD computing - Abstract
As of 2009, HEP-SPEC06 (HS06) is the benchmark adopted by the WLCG community to describe the computing requirements of the LHC experiments, to assess the computing capacity of the WLCG data centres and to procure new hardware. In recent years, following the evolution of CPU architectures and the adoption of new programming paradigms, such as multi-threading and vectorization, it has turned out that HS06 is less representative of the relevant applications running on the WLCG infrastructure. Meanwhile, in 2017 a new SPEC generation of benchmarks for CPU-intensive workloads was released: SPEC CPU 2017. This report summarises the findings of the HEPiX Benchmarking Working Group in comparing SPEC CPU 2017 and other HEP benchmarks with typical WLCG workload mixes. [ABSTRACT FROM AUTHOR]
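SPEC CPU benchmarks summarise per-benchmark runtime ratios with a geometric mean; the sketch below shows that aggregation step on invented numbers (they are not real HS06 or SPEC CPU 2017 results).
```python
# Sketch of SPEC-style score aggregation: geometric mean of per-benchmark
# reference_time / measured_time ratios.
from math import prod

def spec_style_score(ratios):
    return prod(ratios) ** (1.0 / len(ratios))

ratios = [14.2, 15.8, 13.1, 16.4, 12.9]   # hypothetical per-benchmark ratios
print(f"aggregate score: {spec_style_score(ratios):.1f}")
```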
- Published
- 2019
- Full Text
- View/download PDF
50. Dynamic Integration and Management of Opportunistic Resources for HEP.
- Author
-
Schnepf, Matthias J., von Cube, R. Florian, Fischer, Max, Giffels, Manuel, Heidecker, Christoph, Heiss, Andreas, Kuehn, Eileen, Petzold, Andreas, Quast, Guenter, Sauter, Martin, Forti, A., Betev, L., Litmaath, M., Smirnova, O., and Hristov, P.
- Subjects
PARTICLE physics ,HIGH performance computing ,CLOUD computing ,VIRTUAL machine systems ,BANDWIDTHS - Abstract
Demand for computing resources in high energy physics (HEP) shows a highly dynamic behavior, while the resources provided by the Worldwide LHC Computing Grid (WLCG) remain static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources introduces new levels of complexity, e.g. resources need to be managed highly dynamically, and HEP applications require a very specific software environment usually not provided at opportunistic resources. Furthermore, limitations in network bandwidth can cause I/O-intensive workflows to run inefficiently. The key to dynamically running HEP applications on opportunistic resources is the use of modern container and virtualization technologies. Based on these technologies, the Karlsruhe Institute of Technology (KIT) has developed ROCED, a resource manager to dynamically integrate and manage a variety of opportunistic resources. In combination with ROCED, the HTCondor batch system acts as a powerful single entry point to all available computing resources, leading to a seamless and transparent integration of opportunistic resources into HEP computing. KIT is currently improving the resource management and job scheduling by focusing on the I/O requirements of individual workflows, the available network bandwidth, as well as scalability. For these reasons, we are currently developing a new resource manager, called TARDIS. In this paper, we give an overview of the utilized technologies, the dynamic management and integration of resources, as well as the status of the I/O-based resource and job scheduling. [ABSTRACT FROM AUTHOR]
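The scaling decision such a resource manager has to take can be reduced to a very small sketch: compare the demand visible in the batch system with the opportunistic slots already provisioned and decide how many machines to request or drain. The numbers, slot size and naive one-slot-per-job policy are illustrative assumptions, not ROCED or TARDIS logic.
```python
# Sketch of a dynamic-provisioning decision for opportunistic resources.
def scaling_decision(idle_jobs: int, running_slots: int,
                     slots_per_machine: int = 8, max_machines: int = 100) -> int:
    """Return the change in machine count (positive = boot, negative = drain)."""
    wanted_slots = idle_jobs                                  # naive: one slot per idle job
    wanted_machines = min(max_machines, -(-wanted_slots // slots_per_machine))
    current_machines = running_slots // slots_per_machine
    return wanted_machines - current_machines

print(scaling_decision(idle_jobs=250, running_slots=80))      # -> boot 22 machines
```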
- Published
- 2019
- Full Text
- View/download PDF