UNC GAMMA Research Group

UNC Graphics & Imaging Analysis Research Cluster

Manycore and Multicore Computing:
Architectures, Applications and Directions

Sponsored by Microsoft Corporation

Workshop dates: Sunday, November 11 - Monday, November 12, 2007
Location: SC07 Conference, Reno, Nevada
Conference dates: November 10 - 16, 2007
More information can be found on the SC07 conference website.

Supercomputing, traditionally associated with heavyweight, numerically intensive scientific computation, is rapidly converging with real-time and consumer applications such as computer gaming and multimedia. These new application genres are reaching a level of sophistication that makes them almost indistinguishable from classic supercomputing workloads, and they are consequently driving recent developments in processor architectures. Most of the emphasis over the last few years has been on multicore architectures, which appear as CPUs (e.g., quad-core processors) in current desktop and laptop systems and are also used to build supercomputing clusters.

The current trend in performance improvement relies on parallelism, motivating the development of heterogeneous architectures that combine fine-grained and coarse-grained parallelism in systems with tens or hundreds of processors. These "manycore" processor systems aim for higher parallel code performance, in contrast with multicore processors, which consist of several replicated serial cores or of combinations of specialized and general-purpose cores. One of the best examples of a manycore processing system is general-purpose GPU (GPGPU) computing, which uses programmable graphics processing units (GPUs) in conjunction with the system's CPU. For example, the NVIDIA GeForce 8800 GPU consists of 128 fragment processors, which provide high throughput for graphics rasterization as well as for some GPGPU applications.
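The fine-grained, data-parallel style of GPGPU computing described above can be sketched with a minimal CUDA kernel (CUDA 1.0 targets the GeForce 8800). The kernel name `saxpy` and the launch sizes here are illustrative choices, not material from the workshop:

```cuda
#include <cuda_runtime.h>

// Each of the GPU's stream (fragment) processors runs many lightweight
// threads; here every thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                     // guard: the launch grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;         // one million elements
    float *x, *y;
    cudaMalloc((void **)&x, n * sizeof(float));
    cudaMalloc((void **)&y, n * sizeof(float));
    // ... populate x and y on the device via cudaMemcpy ...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 4096 blocks of 256 threads
    cudaThreadSynchronize();       // wait for the kernel to finish
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The CPU (host) orchestrates memory transfers and kernel launches, while the GPU executes thousands of fine-grained threads concurrently; this division of labor is the essence of the heterogeneous CPU+GPU model discussed above.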

Manycore processor systems have tremendous potential for high-performance computing and scientific applications, as these processors can serve as accelerators in the design of teraflop- or petaflop-scale computers. The significant increase in parallelism within a processor can also yield other benefits, including higher power efficiency and better tolerance of memory latency. Building on the success of last year's "General-Purpose GPU Computing: Practice and Experience" workshop, as well as that of other GPGPU and Edge workshops held over the last few years, this workshop will examine recent trends in these areas along with the following topics:

  • Existing and emerging digital media, streaming and virtual-world applications that leverage the compute power of commodity supercomputing systems, such as dynamic, data-driven applications and systems, and interactive, real-time collaborative sensor-simulation and exploration applications;
  • Design and experience with commodity supercomputing systems;
  • Manycore and multicore computing software environments and toolkits that enable convergence between supercomputing, richly interactive, real-time and existing computationally intensive application genres;
  • Languages, compilers and parallel programming tools that ease the burden of developing for many/multicore architectures ("programming many/multicore for mere mortals");
  • Benchmarks for assessing the quality of manycore and multicore implementations for particular problem domains;
  • Novel applications in databases, computer gaming and multimedia.

The workshop will span one and a half days and will consist of invited talks and a panel session discussing many-/multicore programmability and overall community research themes.