In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve when moving from the LHC to the HL-LHC ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing and volunteer computing is a big challenge for the future, one which we are successfully mastering with considerable contributions from many supercomputing centres around the world as well as from academic and commercial cloud providers. We also discuss R&D computing projects recently started at the National Research Center 'Kurchatov Institute'.

Original language: English
Article number: C06044
Journal: Journal of Instrumentation
Volume: 12
Issue number: 6
DOIs
State: Published - 29 Jun 2017

    Research areas

  • Computing (architecture, farms, GRID for recording, storage, archiving, and distribution of data), Data processing methods, Software architectures (event data models, frameworks and databases)

    Scopus subject areas

  • Mathematical Physics
  • Instrumentation
