The HASE Project 1990 - 2024: A Computer Architecture Simulation & Visualisation Environment


Introduction

HASE is a Hierarchical computer Architecture design and Simulation Environment developed at the University of Edinburgh to support, through simulation, the visualisation of activities taking place inside computers as they execute programs. HASE allows for the rapid development and exploration of computer architectures at multiple levels of abstraction, encompassing both hardware and software. Many complex systems of interacting components can be more easily understood as pictures than as words, and in a computer architecture the dynamic behaviour of systems is frequently of interest. The HASE graphical design window allows users to view the results of simulation runs through animation of the design image.

A HASE model consists of a number of interconnected entities, each with its own simulation code written in Hase++. Hase++ is a discrete event simulation language with a programming interface similar to that of Sim++, but implemented using C++ and threads. It includes a set of library routines providing process-oriented discrete event simulation and a run-time system for multi-threading many objects in parallel and keeping track of simulation time. When a simulation is run, HASE produces a trace file which can be used to animate the on-screen display of the model so as to show data movements, parameter value updates, state changes, etc.
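The event-scheduling core on which such a system rests can be sketched as follows. This is a hypothetical, single-threaded illustration of discrete event simulation in C++, not the actual Hase++ API; the class and function names are invented. Hase++ itself additionally runs each entity as its own thread, which this sketch omits for clarity.

```cpp
// Minimal sketch of a discrete event simulation kernel (illustrative
// only; not the Hase++ API). Timestamped events are held in a priority
// queue ordered by simulation time; the scheduler repeatedly advances
// the clock to the earliest pending event and executes it.
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;                   // simulation time at which the event fires
    std::function<void()> action;  // what the receiving entity does
    bool operator>(const Event& other) const { return time > other.time; }
};

class Simulator {
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue_;
    double now_ = 0.0;  // current simulation time
public:
    double now() const { return now_; }

    // Schedule an action 'delay' time units into the future.
    void schedule(double delay, std::function<void()> action) {
        queue_.push(Event{now_ + delay, std::move(action)});
    }

    // Run until no events remain, jumping the clock from event to event.
    void run() {
        while (!queue_.empty()) {
            Event ev = queue_.top();
            queue_.pop();
            now_ = ev.time;
            ev.action();
        }
    }
};

// Toy use: schedule n packet arrivals 10 time units apart and count them,
// as a stand-in for entities exchanging timestamped messages.
int simulate_packets(int n_packets) {
    Simulator sim;
    int received = 0;
    for (int i = 1; i <= n_packets; ++i) {
        sim.schedule(10.0 * i, [&received] { ++received; });
    }
    sim.run();
    return received;
}
```

In a full simulator each event would also be written to a trace file, which is what HASE replays to animate the on-screen model.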

This website presents HASE as it was at the end of 2024. Up-to-date information about HASE can be found in the GitHub HASE repository.


History

The ideas for HASE grew from a simulator built for an MC88000 system in 1989, written in occam and run on a Meiko Computing Surface at the Edinburgh Parallel Computing Centre. HASE itself was developed using object oriented simulation languages, the first prototype using DEMOS, the second Sim++ and the current version Hase++. The first production version of HASE was developed as part of the ALAMO project supported by the UK EPSRC under grant GR/J43295 but was later further developed and used for the EMIN Project (EPSRC Grant GR/K19716), the QCDOC Computer Simulation project (EPSRC grant GR/R27129) and the Storlite Project (EPSRC grant GR/S28153). In addition to supporting research projects, HASE was used to support numerous undergraduate and taught MSc student projects and several models were used for virtual laboratory practical exercises.


Computer Architecture Simulation Models

Simulation models of a variety of computer architectures and architectural components were created using HASE. These models were intended for use as teaching and learning resources: in lectures, for student self-learning or for virtual laboratory experiments. The source files for the following computer architecture models are available at

github.com/HASE-Group/hase_iii_models,

together with accompanying documentation that describes both the system being modelled and the model itself.

HASE Research Projects

ALAMO: ALgorithms, Architectures and MOdels of computation

The ALAMO project set out to address two of the four "Grand Challenge Problems in Computer Architecture" identified by the Purdue Workshop on Grand Challenges in Computer Architecture for the Support of High Performance Computing. These are: "to identify a small number of fundamental models of parallel computation that serve as a natural basis for programming languages and that facilitate high performance hardware implementations" and "to develop sufficient infrastructure to allow rapid prototyping of hardware ideas and the associated software in a way that permits realistic evaluation."

The aim was to combine work on these two Grand Challenges, using Heywood's H-PRAM as the bridging model of parallel computation and HASE as the 'prototyping infrastructure'. Strategies for implementing the H-PRAM on a physical mesh architecture were devised and investigated both theoretically and practically, through simulation. The H-PRAM successfully outperformed the PRAM by a factor which was small but significant (2 to 3 for the mesh sizes, typically 1024, that we were able to simulate) and which, importantly, clearly grew with the number of processors involved, thereby demonstrating improved scalability.

Contributors to the ALAMO project included George Chochia, Paul Coe, Murray Cole, Pat Heywood, Todd Heywood, Roland Ibbett, Rob Pooley, Peter Thanisch, and Nigel Topham. The project was funded by EPSRC under Grant GR/J43295 and ran from August 1994 to February 1997. Further information about the project can be found in
Algorithms, Architectures and Models of Computation, CSG report ECS-CSG-22-96, 1996.

EMIN: Evaluation of Multiprocessor Interconnection Networks

Designing multiprocessor systems is complicated because of the varied interactions between parallel software and hardware, and evaluating the impact of design decisions on overall performance is difficult. The EMIN project sought to address these issues by developing a software testbed for designing and analysing multiprocessor interconnection network performance. Rather than apply a single technique to the problem, a suite of design techniques was used. The simplest (and often overlooked) technique is spreadsheet analysis, which enables quick, broad-brush comparisons of networks. Microbenchmarks are useful both for characterising network performance and for providing data which is relevant to software.

A testbed was developed into which various workloads and system models could be plugged, with facilities for measuring detailed performance as well as large scale experimentation. The testbed was used to measure, for example, the total time for variable numbers of processors and the same workload. The results showed that for a bus the time grows linearly with the number of processors, for a crossbar the time is constant, and, because of contention, a multistage network is slower for 4 processors than for 8, 12 or 16. Measurements of average memory utilisation showed that as the number of processors is increased, contention on a bus means that the memory units are not kept busy, whereas memory utilisation remains constant for the crossbar and multistage networks.

The main contributor to the project was Fred Howell. The EMIN Project was funded by EPSRC under Grant GR/K19716 and ran from December 1994 to November 1997. Further information about the project can be found in Evaluation of Multiprocessor Interconnection Networks, CSG report ECS-CSG-38-98, 1998.

The QCD Computer Simulation Project

Quantum Chromodynamics (QCD) describes theoretically the strong interactions between quarks and gluons. One of the essential features of QCD is that these elementary particles are always bound together, confined inside mesons and baryons, collectively called hadrons. This provides a challenge in relating theoretical and practical results, since the Standard Model of particle physics describes the interactions of the quarks and gluons, not of the experimentally observed hadrons. Relating the experimental observations to the predictions of the Standard Model thus requires detailed evaluation of the hadronic structure, connecting the quark constituents to the observed hadronic properties in a precise way. The only theoretical method to achieve this, with full control of all sources of error, is via large-scale numerical simulation: lattice QCD. Members of the Edinburgh University Department of Physics and Astronomy were leading contributors to the UKQCD collaboration, which had funding to construct a QCDOC (QCD On a Chip) computer in which a number of PowerPC-based application specific integrated circuit (ASIC) nodes would be interconnected as a 4-dimensional torus.

The aims of the HASE QCD Computer Simulation project were to build HASE simulation models of the QCDOC computer system, to investigate the factors which influence the performance of QCD computers, and to explore the design parameter space of the models, investigating how performance varies with a range of architectural parameters, in order to inform the design of subsequent generations of such computers. An extension to the project introduced a metamodelling scheme to allow for the efficient generation of simulation models with alternate system configurations. This allowed the IBM BlueGene/L architecture to be modelled and evaluated.

Contributors to the project included Sadaf Alam, Marcelo Cintra, Roland Ibbett, Anthony Kennedy, Richard Kenway and Frederic Mallet. The QCD Computer Simulation project was supported by EPSRC (Grant GR/R/27129) from May 2001 to April 2004. Further information about this project can be found in Sadaf Alam's PhD thesis: "Simulation of the UKQCD computer".

Simulation Modelling of Distributed Shared Memory Clusters

Advances in scalable, distributed shared memory (DSM) systems continue to create an increased need for bandwidth of shared memory accesses. A number of research projects on large-scale DSM implementations have shown that bandwidth loss due to poor data locality is significant. As DSM systems provide a shared address space on top of distributed memory, memory management activities at different system layers affect data locality. Techniques such as hidden pages, manager migration, prefetching, and double faulting have been shown to improve overall performance by exploiting locality at the memory page level. However, research on large-scale DSM systems has also shown that the portability of optimisation schemes is limited, and that no efficient technique existed for analysing the impact of overheads caused by these layered activities. Developing cost-effective optimisations therefore requires a precise performance analysis and prediction technique. This project developed such a technique by simulating the behaviour of DSM systems and studying the factors which affect performance, and their interactions. A synchronised, discrete event simulation model of DSM nodes was developed using the construction environment provided by HASE.

Further information about this project can be found in Worawan Marurngsith's PhD thesis: "Simulation Modelling of Distributed-Shared Memory Multiprocessors".

Storlite: Storage Systems Optical Networking

The Storlite project was a collaborative industry-academia research project funded under the DTI/EPSRC LINK Information Storage and Displays (LINK ISD) programme and led by Xyratex, a disk storage company based in Havant. It involved an investigation of the application of affordable short-reach optical backplane technologies to future architectures of storage systems. Xyratex worked with Exxelis (a company based in Glasgow) and University College London on optical components, and with ICSA at the University of Edinburgh (EPSRC grant GR/S28143) on the simulation of RAID systems.

As the intra-enclosure transmission rate and the number of disks required in each storage sub-system increase, implementing electrical backplanes in storage systems is becoming increasingly difficult. Hence, the optical part of the Storlite project investigated how to implement optical backplanes for storage systems to make them more scalable.

The simulation part of the project involved using HASE to create simulation models of RAID storage systems and industry standard benchmark traffic generators (SPC and IOmeter) and using these models to identify performance bottlenecks, to evaluate communication protocol options and to explore the design space for new hardware acceleration architecture options for next generation storage sub-systems based on optical backplanes.

Contributors to the project included Franck Chevalier, Tim Courtney (Xyratex), Juan Carlos Diaz y Carballo, David Dolman, Roland Ibbett and Yan Li. Further information about this project can be found in Yan Li's PhD thesis: "Scalability of RAID Systems".

Contributors to HASE

Many people contributed to the development of HASE, both directly to the HASE application and through the creation and use of HASE models. The HASE application includes the work of Paul Coe, Pat Heywood, Fred Howell, Frederic Mallet, Sandy Robertson, Christos Sotiriou and Lawrence Williams. The Java version of HASE, HASE-III, was translated from the original C code by Juan Carlos Diaz y Carballo, with the GUI being written by David Dolman. The latest version of HASE contains numerous revisions and improvements thanks to the work of David Dolman. The HASE Project was led by Roland Ibbett.

Research Project Models were built by Sadaf Alam, Paul Coe, Franck Chevalier, George Chochia, Tim Courtney, Todd Heywood, Fred Howell, Yan Li and Worawan Marurngsith. The models of historic computers and most of the teaching models were built by Roland Ibbett, some ab initio, some based on models built by the numerous undergraduate and MSc students who undertook projects using HASE.


HASE Project
Institute for Computing Systems Architecture, School of Informatics, University of Edinburgh
Last change 18/03/2025