The Berkeley View: A New Framework and a New Platform for Parallel Research

Details

Date: Thursday, November 9, 2006
Time: 4:00-5:00 pm
Venue: George E. Pake Auditorium

PARC Forum

The recent switch to parallel microprocessors is a milestone in the history of computing. Industry has laid out a roadmap for multicore designs that preserve the programming paradigm of the past via binary compatibility and cache coherence. Conventional wisdom now says to double the number of cores on a chip with each silicon generation.

A multidisciplinary group of Berkeley researchers met for 18 months to discuss this change. Our investigation into future opportunities led to the following recommendations, which are more revolutionary than what industry plans to do:

  • The target should be 1000s of cores per chip, as this hardware is the most efficient in MIPS per watt, MIPS per area of silicon, and MIPS per development dollar.
  • To maximize application efficiency, programming models should support a wide range of data types and successful models of parallelism: data-level parallelism, independent task parallelism, and instruction-level parallelism.
  • “Autotuners” should play a larger role than conventional compilers in translating parallel programs (see the sketch after this list).
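To make the autotuning idea concrete, here is a minimal sketch (not from the talk) of the basic loop behind autotuners such as ATLAS and FFTW: generate several variants of a kernel, time each one on the target machine, and keep the fastest. The kernel, the candidate tile sizes, and the problem size below are illustrative assumptions.

```python
import random
import time

def blocked_matmul(A, B, n, tile):
    """Multiply two n x n matrices using a given tile (block) size."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    row_c, row_a = C[i], A[i]
                    for k in range(kk, min(kk + tile, n)):
                        a, row_b = row_a[k], B[k]
                        for j in range(jj, min(jj + tile, n)):
                            row_c[j] += a * row_b[j]
    return C

def autotune(n=128, candidate_tiles=(8, 16, 32, 64)):
    """Empirically pick the tile size that runs fastest on this machine."""
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    B = [[random.random() for _ in range(n)] for _ in range(n)]
    best_tile, best_time = None, float("inf")
    for tile in candidate_tiles:
        start = time.perf_counter()
        blocked_matmul(A, B, n, tile)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile, best_time

if __name__ == "__main__":
    tile, secs = autotune()
    print(f"fastest tile size: {tile} ({secs:.3f} s)")
```

Unlike a conventional compiler, which must pick a code shape from a static cost model, the autotuner actually runs the candidates and lets measurement decide.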

The conventional path to architecture innovation is to use a benchmark suite like SPEC or SPLASH to guide and evaluate innovation. The problem for innovation in parallelism is that it’s unclear today how best to express it. Hence, it seems unwise to let a set of programs from the past drive an investigation into the parallel computing of the future.

Phil Colella identified 7 numerical methods that he believed will be important for science and engineering for at least the next decade. The idea is that the programs implementing these numerical methods may change, but the methods themselves will remain important. After examining how well these “7 dwarfs” of high-performance computing capture the computation and communication of a much broader range of computing, including embedded computing, computer graphics and games, databases, and machine learning, we doubled them, yielding “14 dwarfs.” Those interested in our perspective on parallelism should take a look at the wiki: http://view.eecs.berkeley.edu.
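As a hedged illustration of what a dwarf names, here is a small Python sketch of sparse matrix-vector multiply (one of Colella’s original seven) in compressed sparse row (CSR) form; the example matrix is invented. The point is that any program implementing this method, in any language or library, shares the same irregular, memory-bound pattern of computation and communication.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR form.

    values  - the nonzero entries, stored row by row
    col_idx - the column index of each nonzero
    row_ptr - row_ptr[i]:row_ptr[i+1] delimits row i's nonzeros
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]  # indirect, irregular access
        y[i] = acc
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```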

To rapidly evaluate all the possible alternatives in parallel architectures and programming systems, we need a flexible, scalable, and economical platform that is fast enough to run full applications and operating systems.

Today, one to two dozen processor cores can be programmed into a single FPGA. With multiple FPGAs on a board and multiple boards in a system, architectures with 1000 processors can be explored. Such a system would not just invigorate multiprocessor research in the architecture community; because the processor cores can run at 100 to 200 MHz, a large-scale multiprocessor would also be fast enough to run operating systems and large programs at speeds sufficient to support software research. Hence, we believe such a system will accelerate research across all the fields that touch multiple processors: operating systems, compilers, debuggers, programming languages, scientific libraries, and so on. Thus the acronym RAMP, for Research Accelerator for Multiple Processors.
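As a back-of-envelope check on those numbers, the sketch below multiplies out the figures quoted above; the per-board and per-system counts (fpgas_per_board, boards) are hypothetical, chosen only to land near 1000 cores.

```python
# Illustrative RAMP sizing arithmetic; only cores_per_fpga and clock_mhz
# come from the figures quoted in the text, the rest are assumptions.
cores_per_fpga = 16    # "one to two dozen" cores fit in one FPGA today
fpgas_per_board = 4    # hypothetical board configuration
boards = 16            # hypothetical system size

total_cores = cores_per_fpga * fpgas_per_board * boards
print(f"emulated cores: {total_cores}")      # 1024, on the order of 1000

clock_mhz = 150        # cores run at roughly 100 to 200 MHz in the FPGA
# Aggregate throughput, assuming about one instruction per cycle per core:
aggregate_mips = total_cores * clock_mhz
print(f"aggregate throughput: ~{aggregate_mips:,} MIPS")
```

At those rates the emulated machine is slow per core but fast in aggregate, which is what makes it usable for operating-system and software research.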

A group of 10 investigators from 6 universities (Berkeley, CMU, MIT, Stanford, Texas, and Washington) has volunteered to create the RAMP “gateware” (the logic that goes into the FPGAs) and to have the boards fabricated and made available at cost. It will run industry-standard instruction sets (Power, SPARC, …) and operating systems (Linux, Solaris, …). We hope to have a system that can scale to 1000 processors in late 2007 and that costs universities about $100 per processor. I’ll report on our results for the initial RAMP implementations at the meeting. Those interested in learning more should take a look at: http://ramp.eecs.berkeley.edu.

Presenter(s)

David A. Patterson has been Professor of Computer Science at the University of California, Berkeley since 1977, after receiving all of his degrees from UCLA. He is one of the pioneers of both RISC and RAID. He has co-authored five books, including two on computer architecture with John Hennessy; the fourth edition of their graduate text was released in September. Past chair of the Computer Science Department at U.C. Berkeley and of the Computing Research Association (CRA), he was elected President of the Association for Computing Machinery (ACM) for 2004 to 2006 and served on the U.S. President’s Information Technology Advisory Committee (PITAC) from 2003 to 2005.

His work has been recognized by education and research awards from the ACM (Karlstrom Educator Award, Fellow) and the IEEE (von Neumann Medal, Mulligan Education Medal, Johnson Information Storage Award, Fellow) and by election to the National Academy of Engineering. In 2005 he shared Japan’s Computer & Communication Award with Hennessy and was named to the Silicon Valley Engineering Hall of Fame. This year he received the Distinguished Service Award from the CRA and was elected to both the American Academy of Arts and Sciences and the National Academy of Sciences.
