
Future hardware challenges to scientific computing

Chair: Prof. Erik De Schutter, University of Antwerp, Belgium

The recent move of the computer industry towards multi-core technology will cause parallel programming to spread through all levels of society.

Although many neuroscientific computing applications are already parallel, they are often not optimized for multi-core configurations. We will consider these challenges from computer science, scientific computing, and neuroinformatics perspectives. In addition, we will consider some of the opportunities that massive parallelism offers for how we model neuronal networks.

Invited speakers:

             

Gabriel Wittum

Ruprecht-Karls-University of Heidelberg, Germany


Title: Detailed Modeling of Signal Processing in Neurons

Abstract: The crucial feature of neuronal ensembles is their high complexity and variability. This makes modeling and computation very difficult, in particular for detailed models based on first principles. The problem starts with modeling the geometry, which has to extract the essential features from highly complex and variable phenotypes while at the same time taking the stochastic variability into account. Moreover, models of the highly complex processes that live on these geometries are far from well established, since these processes are themselves highly complex and couple across a hierarchy of scales in space and time. Simulating such systems always puts the whole approach to the test, including the modeling, the numerical methods, and the software implementation. In combination with validation against experimental data, all components have to be enhanced to reach a reliable solving strategy.

To handle problems of this complexity, new mathematical methods and software tools are required. In recent years, new approaches such as parallel adaptive multigrid methods, together with corresponding software tools, have been developed that make it possible to treat problems of enormous complexity.
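For readers unfamiliar with multigrid, the sketch below shows the two core ideas of the method, smoothing and recursive coarse-grid correction, for the one-dimensional Poisson equation. It is a deliberately minimal, serial, non-adaptive Python illustration and is not the software referred to in the abstract.

    # Minimal, purely illustrative geometric multigrid V-cycle for the 1D Poisson
    # problem -u'' = f on [0, 1] with zero boundary values (serial, non-adaptive).
    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        # Damped Jacobi sweeps; numpy evaluates the right-hand side before
        # assigning, so this is a true (simultaneous) Jacobi update.
        for _ in range(sweeps):
            u[1:-1] += omega * 0.5 * (h * h * f[1:-1] + u[:-2] + u[2:] - 2.0 * u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def restrict(r):
        # Full-weighting restriction onto a grid with half as many intervals.
        inner = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        return np.concatenate(([0.0], inner, [0.0]))

    def prolong(ec, n_fine):
        # Linear interpolation of the coarse-grid correction back to the fine grid.
        e = np.zeros(n_fine + 1)
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return e

    def v_cycle(u, f, h):
        n = len(u) - 1
        if n <= 2:                                       # coarsest grid: solve exactly
            u[1:-1] = 0.5 * h * h * f[1:-1]
            return u
        u = smooth(u, f, h)                              # pre-smoothing
        rc = restrict(residual(u, f, h))                 # restrict the residual
        ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h)  # coarse-grid correction
        u += prolong(ec, n)
        return smooth(u, f, h)                           # post-smoothing

    # Usage: -u'' = pi^2 sin(pi x) has the exact solution sin(pi x).
    n = 64
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)
    u = np.zeros(n + 1)
    for _ in range(10):
        u = v_cycle(u, f, 1.0 / n)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())

The point of the structure is that each V-cycle reduces the error by a grid-independent factor, which is what makes the approach attractive for very large, hierarchically refined problems.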

In the lecture we present a three-dimensional model of signaling in neurons. First we show a method for reconstructing the geometry of cells and subcellular structures as three-dimensional objects. With this tool, NeuRA, complex geometries of neuron nuclei have been reconstructed. We present the results and discuss reasons for the complicated shapes. To that end, we present a model of calcium signaling to the nucleus and show simulation results on reconstructed nuclear geometries. We discuss the implications of these simulations.

We further show reconstructed cell geometries and simulations with a three-dimensional active model of signal transduction in the cell, which is derived from Maxwell's equations and uses generalized Hodgkin-Huxley fluxes to describe the ion channels.
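As a reminder for readers less familiar with this class of models, the classical Hodgkin-Huxley membrane equations are recalled below (in LaTeX notation); the generalized fluxes used in the talk can be read as extensions of this basic template, whose exact form is not specified here.

    C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^{3} h\, (V - E_{\mathrm{Na}})
                        - \bar{g}_{\mathrm{K}}\, n^{4} (V - E_{\mathrm{K}})
                        - \bar{g}_{\mathrm{L}}\, (V - E_{\mathrm{L}}) + I_{\mathrm{ext}},
    \qquad
    \frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \quad x \in \{m, h, n\}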

Gabriel Wittum - Bio-sketch


 

Marc-Oliver Gewaltig

Honda Research Institute, Germany 

Title: What are the real challenges for Computational Neuroscience?

Abstract: The human brain contains some 10^11 neurons with some 10^15 synaptic connections between them. It is tempting to believe that we can model the brain once computers are fast enough and computer memory is big enough to store and solve the models for all neurons and their connections. The largest models today have 10^5 neurons with up to 10^9 synapses, corresponding to one cubic millimeter of human cortex. These models already require computer clusters, but use simple neuron and synapse models. Larger models merely show what is technically feasible. To model a large neuronal system, we cannot simply take the best neuron model. Instead, we must use the smallest or fastest model to squeeze our network into the available computer memory.
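To make these numbers concrete, the back-of-the-envelope estimate below shows why memory, rather than raw speed, is the first obstacle. The bytes-per-neuron and bytes-per-synapse figures are illustrative assumptions, not values from the talk.

    # Back-of-the-envelope memory estimate using the numbers from the abstract
    # (10^11 neurons, 10^15 synapses).  The bytes-per-element figures below are
    # illustrative assumptions, not values from the talk.
    neurons  = 1e11
    synapses = 1e15
    bytes_per_neuron  = 1000   # assumed: a few state variables plus bookkeeping
    bytes_per_synapse = 8      # assumed: little more than a weight and a delay

    total_bytes = neurons * bytes_per_neuron + synapses * bytes_per_synapse
    print(f"memory needed: about {total_bytes / 1e15:.1f} petabytes")
    # Under these assumptions the synapses alone need roughly 8 petabytes, far
    # beyond the main memory of the clusters mentioned above; hence the pressure
    # to use the smallest possible neuron and synapse models.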

Recent trends in computing technology indicate that faster and more powerful computers will soon be available to a wide group of researchers. The most interesting for scientific computing are multi-core processors for parallel computing and so-called graphics processing units (GPUs), the high-performance engines of 3D graphics cards. These technologies may speed up simulations by orders of magnitude, and we may soon be able to simulate large parts of the brain in a short time.

However, the biggest challenge for computational neuroscience is the complexity of the brain, not its size. Each nerve cell is already so complex that researchers disagree about the appropriate level of description. Even less is known about neural circuits or systems. Conceptual progress in systems neuroscience depends on a concise and powerful notation to develop, analyze, and convey ideas and models of neurons, networks, and systems. Today, simulation code is the only reliable source of information about a model. But simulation code cannot replace a formal notation, because it is incomplete and platform-dependent. Only with an appropriate formal notation can we cope with the increasing complexity of neural models, because it allows us to formally manipulate, analyze, and enhance our models.
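As a purely hypothetical illustration (not an existing standard, and not the notation proposed in the talk), a declarative model description treats the model as data that any simulator or analysis tool can consume, rather than as code tied to one platform. All names and fields below are invented for the example.

    # Hypothetical sketch of a declarative model description: the model is plain
    # data, not simulation code.  Names and fields are invented for illustration.
    model = {
        "populations": {
            "excitatory": {"size": 8000, "neuron": "iaf", "params": {"tau_m": 20.0}},
            "inhibitory": {"size": 2000, "neuron": "iaf", "params": {"tau_m": 20.0}},
        },
        "connections": [
            {"from": "excitatory", "to": "inhibitory", "rule": "random", "p": 0.1,
             "weight": 0.5, "delay": 1.5},
            {"from": "inhibitory", "to": "excitatory", "rule": "random", "p": 0.1,
             "weight": -2.5, "delay": 1.5},
        ],
    }
    # Because the description is data, it can be checked, transformed, and compared
    # formally, which hand-written, platform-specific simulation scripts rarely allow.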

Complex models require complex simulation software. Today's simulation code is mostly hand-written for a particular model. But with increasing complexity, it becomes more difficult to validate published results. There are no accepted quality standards to ensure that simulation results indeed describe neuroscientific phenomena rather than errors in the implementation. Researchers must adopt standards and practices long common in software engineering to keep up with the ever-increasing complexity of models, simulation software, and computing hardware. Reviewers must begin to critically review not only the model, but also the simulation methods. Journals must ensure that simulation methods are not banished to the supplementary material section, where few readers are likely to see them.
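One example of such a practice is automated regression testing, in which simulation output is checked against an analytically known result. The sketch below uses a deliberately simple, hypothetical leaky-decay model; a real test suite would cover the actual neuron, synapse, and network components of a simulator.

    # Minimal sketch of a regression/validation test of the kind common in
    # software engineering: a simulated quantity is compared with an analytic
    # reference.  The model and tolerance here are illustrative only.
    import math
    import unittest

    def simulate_decay(v0, tau_m, dt, steps):
        """Forward-Euler integration of dV/dt = -V / tau_m."""
        v = v0
        for _ in range(steps):
            v += dt * (-v / tau_m)
        return v

    class TestLeakyDecay(unittest.TestCase):
        def test_matches_analytic_solution(self):
            v0, tau_m, dt, steps = 1.0, 20.0, 0.01, 2000   # 20 ms of simulated time
            simulated = simulate_decay(v0, tau_m, dt, steps)
            analytic = v0 * math.exp(-steps * dt / tau_m)
            # Forward Euler with dt much smaller than tau_m should agree closely.
            self.assertAlmostEqual(simulated, analytic, places=3)

    if __name__ == "__main__":
        unittest.main()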

In this talk, I will review recent results and trends in simulation technology for large neural systems and will discuss possible solutions to the challenges posed by the ever increasing complexity of models, simulation software and computing architectures.

Marc-Oliver Gewaltig - Bio-sketch


John Shalf

Lawrence Berkeley National Laboratory, USA


Title: The New Landscape of Parallel Computer Architecture

Abstract: The past few years have seen a sea change in computer architecture that will impact every facet of our society, as every electronic device from cell phones to supercomputers will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing.

Recent trends in the microprocessor industry have important ramifications for the design of the next generation of High Performance Computing (HPC) systems as we look beyond the petaflop scale. The need to switch to a geometric growth path in system concurrency is leading to a reconsideration of interconnect design, memory balance, and I/O system design that will have dramatic consequences for the design of future HPC applications and algorithms. The required reengineering of existing application codes will likely be as dramatic as the migration from vector HPC systems to Massively Parallel Processors (MPPs) that occurred in the early 1990s. Such comprehensive code reengineering took nearly a decade, so there are serious concerns about undertaking yet another major transition in our software infrastructure.

This presentation explores the fundamental device constraints that have led to the recent stall in CPU clock frequencies.  It examines whether multicore (or manycore) is in fact a reasonable response to the underlying constraints to future IC designs. Then it explores the ramifications of these changes in the context of computer architecture, system architecture, and programming models for future HPC systems.  Finally, the talk examines the power-efficiency benefits of tailoring computer designs to the problem requirements.  We show a design study of a purpose-built system for climate modelling that could achieve power efficiency and performance improvements hundreds of times better than following conventional industry trends.  This same design approach could be applied to simulations of complex neuronal systems.

John Shalf - Bio-sketch
