Speeding up fusion computing? It’s just a matter of time | 13/10/2016
Computing experts are opening up a new dimension – time itself – as they help scientists predict the performance of future fusion reactors.
Physics calculations have come a long way from the trusty paper and pencil of the early days of fusion research. The age of the supercomputer means that highly sophisticated modelling of reactor performance can be carried out at the touch of a button. It has led to faster and more accurate predictions, and a much better understanding of the turbulent, hot fusion plasma inside tokamaks and how it interacts with the machine.
But there is still a lot of room for improvement – especially in speeding up the simulation codes for the giant international ITER experiment, as physicists prepare for the machine's first operations in the mid-2020s.
By using a technique called 'time parallelisation' they hope to slash the time it takes to run complex codes for ITER, which sometimes take six months to complete.
Fusion computing hardware is already gearing up for the 'Exascale' era – in which supercomputers will be able to run a billion billion calculations per second. The race is now on to develop codes that can exploit this capacity. Running calculations in parallel to speed them up is already a commonly used technique in the high-performance computing world. Time parallelisation, however – solving slices of data from the past, present and future all at once – is an emerging field, as Debasmita Samaddar of CCFE's Theory & Modelling team explains:
“Time really is the final dimension in high-performance computing. Traditional techniques won't work for Exascale machines, so we need new ideas like time parallelisation. It's a promising area of research and shows how we can apply computing science to a physics problem like fusion plasmas.”
But how do we use information from the future – is there a secret time machine at Culham? Sadly not.
“What we actually do is predict future data and then make corrections as the real data become available,” Debasmita continues. “There are various algorithms for this, based on what we call the 'predictor-corrector' approach.”
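The article does not name a specific algorithm, but this predict-then-correct description matches time-parallel schemes such as Parareal, in which a cheap "coarse" solver sweeps through all time slices to guess the future, expensive "fine" solves then run on every slice at once, and a correction sweep folds the accurate results back in. The Python sketch below is a minimal illustration on a toy decay equation du/dt = -u; the function names and numerical choices are illustrative assumptions, not CCFE's production codes, and a plain loop stands in for the step that would run in parallel on a supercomputer.

```python
# Minimal Parareal-style predictor-corrector sketch on a toy problem
# (exponential decay du/dt = -u). Illustrative only: in a real fusion code
# the "fine" solves for each time slice would run simultaneously on many
# processors; here an ordinary loop stands in for that parallel step.

import numpy as np

def coarse(u, t0, t1):
    """Cheap predictor: a single forward-Euler step across the whole slice."""
    return u + (t1 - t0) * (-u)

def fine(u, t0, t1, substeps=100):
    """Expensive, accurate solver: many small Euler steps across the slice."""
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        u = u + dt * (-u)
    return u

def parareal(u0, t_end, n_slices=10, n_iters=5):
    t = np.linspace(0.0, t_end, n_slices + 1)

    # Initial prediction: sweep the coarse solver serially through every slice.
    U = np.empty(n_slices + 1)
    U[0] = u0
    for n in range(n_slices):
        U[n + 1] = coarse(U[n], t[n], t[n + 1])

    for _ in range(n_iters):
        # "Parallel" step: fine solves on all slices, using last iteration's data.
        F = np.array([fine(U[n], t[n], t[n + 1]) for n in range(n_slices)])
        G_old = np.array([coarse(U[n], t[n], t[n + 1]) for n in range(n_slices)])

        # Correction sweep: update each slice boundary as the accurate data arrive.
        U_new = np.empty_like(U)
        U_new[0] = u0
        for n in range(n_slices):
            U_new[n + 1] = coarse(U_new[n], t[n], t[n + 1]) + F[n] - G_old[n]
        U = U_new

    return t, U

if __name__ == "__main__":
    t, U = parareal(u0=1.0, t_end=5.0)
    print("Parareal result at t=5:", U[-1], " exact:", np.exp(-5.0))
```

After a few iterations the corrected solution converges towards what the fine solver alone would give, but the expensive fine solves are spread across the time slices rather than run one after another.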
The techniques have clear benefits for fusion research, she says:
“Computer simulations are a great way to predict plasma behaviour – especially as we can do what we want without melting the tokamak! But at the moment we have to simplify things by ignoring much of the physics, as the data are too much for present-day computers to cope with. Pushing computing performance will give us a fuller picture and really improve our understanding of fusion plasma.”
One problem is that our existing codes for calculating plasma performance don't scale well to the improved hardware. Some are faster than others, leading to computing bottlenecks – and frustrated fusion physicists. Debasmita and her fellow computing experts are solving this by writing algorithms to speed up the codes.
“It's crucial to get everyone using fast, consistent codes so we have a clear idea of how ITER's plasma will interact with the magnetic field and the wall.
“Marrying the right algorithm with the right code can make calculations 20 times quicker in some cases. There is no one magic recipe, so we find the best tool for the particular job. When they work, the physicists love me, but when they don't I'm not so popular…”
Other fields such as climate science and medical research face similar issues, and Debasmita is keen to build bridges with these communities.
“Computing conferences are great for networking with people who are working on the same problems. You also meet mathematicians who have new theoretical ideas and want to try them out on one of the greatest of all scientific endeavours – fusion.”