The benefits are often perceived to be hard to access because of the domain-specific knowledge required to employ new methods. In this course we will provide guidance on, and hands-on training in, the use of openly available tools. Note that attendees should bring a laptop for the tutorial sessions.
Background
Prior to 2000, most conventional computers used single-core processors (CPUs). Year-on-year improvements in turnaround time for computational fluid dynamics (CFD) were achieved through increasing clock speeds, until the chips simply could not be made to run any faster. Within the IT community there is an expectation that computing performance will follow "Moore's Law", an observation on the trend of increasing transistor density in early computer chips. This "Law" has seen computing power increase exponentially, roughly doubling every two years. With clock speeds at their limit, the industry began increasing the number of CPU "cores" in each chip in order to keep delivering more power. This age of "multi-core" chip architecture offers both opportunities and challenges for authors of fluid dynamics solvers.

An extreme example of multi-core hardware is the modern graphics processing unit, or GPU. GPUs were initially developed for the gaming industry and rapidly found an application in high performance computing (HPC). They have thousands of cores and run at a lower clock speed than CPUs, thereby delivering enormous power at very high efficiency. A single modern GPU can deliver more than a Teraflop of double precision floating point performance, equivalent to approximately 20 CPUs, and the price/performance advantage can be very significant. The largest GPU-based computers worldwide, for example TITAN at Oak Ridge National Laboratory with more than 18,000 GPUs, can deliver of the order of 10 Petaflops in double precision. However, accessing this power requires software to be written in a specific way. It is generally not a trivial task to port existing code to run on GPUs, although some benefit can usually be obtained by off-loading linear algebra from the CPU to the GPU. For developers willing to restructure their code to take full advantage of GPUs, the performance on CPUs often increases as a by-product. In the course we will provide an introduction to GPU technology and a hands-on tutorial for participants to start developing their own GPU-based software.
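To give a flavour of the off-loading mentioned above, the sketch below moves a dense matrix-vector product from the CPU to the GPU. It is a minimal illustration only, assuming the CuPy library and an arbitrary problem size; it is not the software used in the course.

```python
# Minimal sketch of off-loading dense linear algebra from the CPU to a GPU.
# The library choice (CuPy) and the problem size are assumptions, not the
# tools prescribed by the course.
import numpy as np

try:
    import cupy as cp       # NumPy-like arrays that live in GPU memory
    xp, on_gpu = cp, True
except ImportError:
    xp, on_gpu = np, False  # fall back to the CPU so the sketch still runs

n = 4096
A = xp.random.rand(n, n)    # dense matrix, allocated on the GPU when available
x = xp.random.rand(n)

y = A @ x                   # matrix-vector product executed on the device

if on_gpu:
    cp.cuda.Stream.null.synchronize()  # wait for the GPU kernel to complete
    y = cp.asnumpy(y)                  # copy the result back to host memory

print(y[:5])
```

Restructuring a full solver goes well beyond swapping one library call, but the same principle applies: keep the data on the device and express the work as operations over many elements at once.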
Cloud computing is now everywhere: we use it to watch streaming movies, exchange files and post pictures on social media sites. The hardware required to deliver this new digital world is massive and has created a new market in on-demand computing resources. Users of HPC no longer need to purchase and manage large computers; instead they can access the necessary hardware over an Internet connection. This can be done in a secure and private manner, and at a fraction of the cost. The opportunity to transfer capital expenditure (CAPEX) to operational expenditure (OPEX), as and when needed, is highly attractive to company finance directors, especially in times of economic austerity. Market leader Amazon made an early start by moving its own operations to a service-oriented architecture, and others have followed suit. In the course we will show how a range of cloud-based solutions can deliver computing on demand. Course participants will be able to explore a range of cloud computing capabilities in a hands-on session, and are invited to suggest software that we can pre-install.
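As one concrete illustration of requesting hardware over an Internet connection, the sketch below asks Amazon's cloud (AWS EC2, via the boto3 library) for a single on-demand compute node. The machine image, instance type and key name are placeholders for values from your own account, and other providers expose equivalent APIs.

```python
# Minimal sketch: requesting an on-demand compute node from a public cloud
# (AWS EC2 via boto3). The AMI ID, instance type and key name below are
# placeholders, not values used in the course.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder machine image, e.g. with CFD tools pre-installed
    InstanceType="c4.8xlarge",  # a compute-optimised instance type
    KeyName="my-keypair",       # placeholder SSH key pair for logging in
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched on-demand instance:", instance_id)

# When the simulation is finished, release the hardware (and stop paying for it):
# ec2.terminate_instances(InstanceIds=[instance_id])
```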
The opportunity to use on-demand hardware brings with it the question of data transfer. The input files needed to set up and run a fluid dynamics simulation are usually manageable in size, but the output files can be enormous. An alternative to downloading the data is remote visualisation and post-processing. This allows the user to keep the data in the cloud and interact with it there to produce the graphs, plots and outputs (even videos) that they need. In the course we will show alternatives using the open-source ParaView toolkit from Kitware.
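As a minimal illustration of keeping the data in the cloud, the sketch below uses ParaView's Python scripting interface (run with pvpython or pvbatch on the remote machine) so that only a rendered image is transferred back. The file name and the "Pressure" field are placeholders; the course tutorial may use a different workflow, such as a client-server ParaView session.

```python
# Minimal sketch: post-process a CFD result where it resides and save only an
# image, using ParaView's Python scripting interface (run with pvpython/pvbatch
# on the remote machine). The file name and the "Pressure" field are placeholders.
from paraview.simple import (OpenDataFile, GetActiveViewOrCreate, Show,
                             ColorBy, ResetCamera, Render, SaveScreenshot)

data = OpenDataFile("results/flow_solution.vtu")   # placeholder result file
view = GetActiveViewOrCreate("RenderView")

display = Show(data, view)
ColorBy(display, ("POINTS", "Pressure"))           # colour by an assumed field
ResetCamera(view)
Render(view)

# Only this image needs to travel over the Internet connection,
# not the full solution file.
SaveScreenshot("pressure_field.png", view, ImageResolution=[1920, 1080])
```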
The new technology options and hardware suppliers have created a new industry in customisation and solution delivery. Smaller companies can now be entirely cloud-based and offer integrated fluid dynamics capability by combining on-demand hardware with specific software products. In the course we will explain how Zenotech and NICE are exploiting the current technology trends, and participants will be able to talk directly with the "middleware" developers to understand the details of on-demand computing. In particular we will address concerns regarding data security and strategies for data management, which are often perceived as barriers to using these low-cost, highly powerful systems.
This course is intended to be a gentle introduction to these new technology trends, with hands-on participation. It is designed for non-IT specialists (academic and industrial) with an interest in more powerful, flexible and cost-effective fluid dynamics simulation options.
Outcomes of the course are:
- Understanding and using GPU-based computing for powerful fluid dynamics simulation.
- Hands-on use of cloud-computing technology for on-demand resource.
- Understanding remote data access and visualisation.
- Networking with market leaders and gaining insights into future trends.
Coordinator: Dr. David Standingford, Zenotech, UK
Open for registration: richard.seoud-ieo@ercoftac.org