HPC activities at FFT

Eloi Gaudry, HPC & IT Manager at FFT, interviewed by PlanetHPC
 

About PlanetHPC

PlanetHPC is a Support Action funded by the EU's 7th Framework Programme. Launched in November 2009, this two-year initiative provides a unique forum for European researchers and industrialists to identify the research challenges facing High Performance Computing (HPC). PlanetHPC brings together the major players in European HPC from the scientific and business sectors - users, service providers, hardware providers and software providers. By bringing these communities together and facilitating their interaction, our aim is to coordinate activities, strategies and roadmaps for HPC in Europe. PlanetHPC is led by EPCC, one of Europe's largest supercomputing centres, based at the University of Edinburgh.

 

Interview:

Eloi Gaudry is a software developer at FFT, based in Belgium. FFT produces software to simulate acoustic fields. Its business is in the transport sector, primarily the aerospace and automotive industries, but it also addresses the design of loudspeakers and wind turbines. PlanetHPC asked him about HPC activities at FFT.

In which application areas do you use HPC?

The principal business of FFT is to simulate noise radiated from vibrating structures or turbomachinery, such as the wing-body-pylon-engine configuration of the Airbus A380. FFT is an ISV whose applications simulate acoustic fields using finite-element-like discretisation techniques, both in the time domain and the frequency domain.

Why do you use HPC?

HPC is essential in order to offer customers acceptable design times for their products. In FFT’s market sectors, numerical simulation is cheaper and more feasible than prototyping and makes the design of new systems much easier.

What are the cost benefits to your business of using HPC?

FFT is led by the demands of its customers. They see the benefits of HPC through shorter design cycles and more accurate simulations. This then places a requirement on FFT to make its applications available on a range of HPC platforms.

Which HPC systems do you use in your business? What is their capital value? How scalable are your applications?

These systems are dictated by customer demand. A typical target system is a cluster with around 100 cores. FFT has such systems in house because it needs them for software development and testing. The FFT time-domain code is highly scalable, up to 1000 cores, because it is based on loosely coupled domain-decomposition techniques. The frequency-domain technique is much more tightly coupled; however, through the use of algebraic decomposition, domain decomposition and multi-threading, it scales to over 200 cores. FFT continually evaluates alternative solution techniques to increase the scalability of its codes.
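To make the scalability argument concrete, the toy sketch below (our illustration, not FFT's actual code) shows why an explicit time-domain solve parallelises well under domain decomposition: each MPI rank advances its own subdomain and exchanges only a thin layer of interface values with its neighbours at every time step, so communication stays local as the core count grows, whereas a frequency-domain solve couples all subdomains through a global linear system. It assumes Python with the mpi4py and numpy packages, a 1-D wave equation, and illustrative parameter values; it can be run with, for example, mpiexec -n 4 python <script>.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                       # grid points owned by this rank (toy 1-D mesh)
u = np.zeros(n_local + 2)            # +2 ghost cells holding neighbour interface values
u_prev = np.zeros_like(u)
c2 = 0.25                            # (c*dt/dx)^2, kept below 1 for stability

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

if rank == 0:
    u[1] = 1.0                       # crude initial pressure pulse

for step in range(100):
    # Exchange only the interface (ghost) values with neighbouring subdomains.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    # Explicit leapfrog update of the 1-D wave equation on the interior points.
    u_new = 2.0 * u[1:-1] - u_prev[1:-1] + c2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_prev[1:-1] = u[1:-1]
    u[1:-1] = u_new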

For your business does the Cloud offer a viable alternative to owning and managing your own systems?

The Cloud is not a viable alternative to in-house clusters because of security issues related to the highly competitive nature of the sectors in which FFT’s software is used. FFT’s customers require codes to run in-house because of the sensitive nature of their data. Furthermore, there are performance issues in using the Cloud related to network latency and bandwidth, as well as the unpredictable performance of applications running on hardware that is not clearly specified, as is the case with the Cloud. The Cloud may become a viable option in 10 years' time, but currently it is not appropriate for FFT’s customers.

What are the challenges you see in the development of your HPC capability (e.g. scalability of applications, power consumption, cost of systems)?

FFT is addressing two main challenges. The first is to increase the frequency limit of its customers’ models: higher frequencies mean shorter acoustic and vibration wavelengths, which require finer meshes and thus larger models. The second is to develop software which can run efficiently across a range of machines with varying parameters such as the number of processors, network bandwidth and latency, network topology and file-system performance.
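As a rough illustration of why raising the frequency limit inflates model size, the back-of-the-envelope Python sketch below (our example, not an FFT tool; all values are assumptions) estimates how a 3-D element count grows with frequency, using the common rule of thumb of roughly 6-10 elements per acoustic wavelength, where the wavelength is c / f.

def element_count(f_hz, volume_m3, c=343.0, elems_per_wavelength=8):
    """Rough 3-D element count needed to resolve frequency f_hz in a given volume."""
    wavelength = c / f_hz                    # acoustic wavelength in air [m]
    h = wavelength / elems_per_wavelength    # target element edge length [m]
    return volume_m3 / h**3                  # elements ~ volume / element volume

cabin_volume = 100.0  # m^3, illustrative interior volume
for f in (500, 1000, 2000, 4000):
    print(f"{f:>5} Hz -> ~{element_count(f, cabin_volume):,.0f} elements")

With this cubic scaling, doubling the frequency limit multiplies the element count by roughly eight, which is what drives the need for more scalable solvers and larger clusters.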

Are new languages and programming paradigms needed particularly as we move toward exascale systems?

FFT does not need exascale systems to run its codes. However, it does need to find and understand the performance bottlenecks of its codes when running on target clusters. FFT uses conventional, current languages such as C, C++, Fortran and Python, together with MPI. In the future it might look at OpenCL.

What are your views on GPGPU computing?

GPGPUs are certainly appearing on the horizon for FFT, and it will definitely look at these devices. However, in doing so, software portability and data transfer rates will be very important issues.

What are your views on reconfigurable (i.e. FPGA-based) computing, particularly in light of developments at Convey?

FFT has looked at FPGAs in the past, but has not based any products on them. Since FFT’s software uses BLAS and matrix solvers extensively, FPGAs may be of greater interest in the future, but so might other devices such as GPGPUs.

Describe the HPC systems you would like to have available in 3, 5 and 10 years' time.

The principal requirements from FFT in the future are robustness, reliability and fault-tolerance. FFT expects increasing performance every year from both processors and interconnection networks. It wants machines to be as reliable as they are now, but with many more cores. Such systems will be scalable and able to detect issues, reschedule tasks and handle failure intelligently. Memory is also a bottleneck, so improvements in both size and speed are very important, for both RAM and storage.

In which HPC research areas would EU-funded programmes benefit your business?

FFT’s priority would be to investigate the potential of GPGPUs for running its applications. A further priority would be the development of new solvers and techniques which could address new architectures including GPGPUs and multi-core devices.

Our thanks to Mr Gaudry for his interesting and market-relevant observations.

PlanetHPC