The HASE Project 1990 - 2024:
a Computer Architecture Simulation & Visualisation Environment
Introduction
HASE is a Hierarchical computer Architecture design and
Simulation Environment developed at the University of Edinburgh to
support, through simulation, the visualisation of activities taking
place inside computers as they execute programs. HASE allows for the
rapid development and exploration of computer architectures at
multiple levels of abstraction, encompassing both hardware and
software.
Many complex systems of interacting components are more easily
understood through pictures than through words. In computer
architecture, the dynamic behaviour of systems is frequently of
interest, and HASE allows a user to observe this behaviour.
While running a simulation, HASE produces a trace file that can be
played back to animate the on-screen display of the model. The
animation can show data movements, parameter value updates and state
changes, as illustrated in the screenshot taken during a playback
sequence in a HASE model of the Manchester Atlas computer.
HASE was used to support a number of research projects as well as
numerous undergraduate and taught MSc student projects. Several models
were used for virtual laboratory exercises.
This website presents HASE (version III) as it was at the end of
2024. Up-to-date information about HASE can be found in the GitHub
HASE repository.
Screenshot from a HASE Atlas model playback sequence
Computer Architecture Simulation Models
A HASE model consists of a number of interconnected entities, each
with its own simulation code written in Hase++. Hase++ is a discrete
event simulation language with a programming interface similar to that
of Sim++, but implemented using C++ and threads. It includes a set of
library routines to provide for process oriented discrete event
simulation and a runtime system for multi-threading many objects in
parallel and keeping track of simulation time.
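To illustrate the idea (though not the actual Hase++ programming
interface), the following minimal C++ sketch shows the core mechanism
such a library is built on: entities exchange time-stamped events
through a calendar queue while the runtime advances simulation time.
All class and function names here are hypothetical illustration only.

    // Minimal, illustrative discrete event simulation sketch in plain C++.
    // Hase++ itself provides a process-oriented, multi-threaded interface;
    // this sketch only shows the underlying idea of entities exchanging
    // time-stamped events. All names are hypothetical.
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Event {
        double time;                    // simulation time at which the event fires
        std::function<void()> action;   // what the receiving entity does
        bool operator>(const Event& other) const { return time > other.time; }
    };

    class Simulator {
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> calendar;
        double now = 0.0;
    public:
        void schedule(double delay, std::function<void()> action) {
            calendar.push(Event{now + delay, std::move(action)});
        }
        double time() const { return now; }
        void run() {
            while (!calendar.empty()) {
                Event e = calendar.top();
                calendar.pop();
                now = e.time;           // advance simulation time to the event
                e.action();
            }
        }
    };

    int main() {
        Simulator sim;
        Simulator* s = &sim;            // captured by value in the lambdas below
        // A "processor" entity sends a request; a "memory" entity replies
        // after a fixed latency of 5 time units.
        s->schedule(0.0, [s] {
            std::printf("t=%.1f processor sends read request\n", s->time());
            s->schedule(5.0, [s] {
                std::printf("t=%.1f memory returns data\n", s->time());
            });
        });
        sim.run();
        return 0;
    }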
Simulation models of a variety of computer architectures and
architectural components were created using HASE. These models were
intended for use as teaching and learning resources: in lectures, for
student self-learning or for virtual laboratory experiments.
The source files for the models listed below, together with
documentation that describes each model and the system it represents,
are available at
github.com/HASE-Group/hase_iii_models.
List of Models
- Cache Models
- Two-level Cache Model (based on Stanford DASH Node)
- Cache Coherence (based on Stanford DASH Cluster)
- Snoopy Cache Coherence (WTWI-N, WTWI-A, WTWU, CBWI)
- Directory-based Cache Coherence (CD, SDD, SCI)
- CDC 6600
- Cray-1
- DLX with Parallel Function Units
- DLX with Predication
- EMMA - Edinburgh Microcoded Microprocessor Architecture
- Manchester Atlas
- Manchester MU5
- Manchester Small Scale Experimental Machine
- MIPS Processors
- Simple Pipelined MIPS Processor
- MIPS with Parallel Function Units
- SIMD Array Processors (SIMD-1, SIMD-2)
- Tomasulo’s Algorithm
HASE Research Projects
ALAMO: ALgorithms, Architectures and MOdels of computation
The ALAMO project set out to address two of the four "Grand Challenge
Problems in Computer Architecture" identified by the Purdue Workshop
on Grand Challenges in Computer Architecture for the Support of High
Performance Computing. These are: "to identify a small number of
fundamental models of parallel computation that serve as a natural
basis for programming languages and that facilitate high performance
hardware implementations" and "to develop sufficient infrastructure to
allow rapid prototyping of hardware ideas and the associated software
in a way that permits realistic evaluation."
The aim was to combine work on these two Grand Challenges, using
Heywood's H-PRAM as the bridging model of parallel computation
and HASE as the 'prototyping infrastructure'. Strategies for
implementing the H-PRAM on a physical mesh architecture were devised
and investigated both theoretically and practically, through
simulation.
The H-PRAM outperformed the standard PRAM model by a factor which was
small but significant (2 to 3 for the mesh sizes, typically 1024, that
we were able to simulate) and which, importantly, clearly grew with
the number of processors involved, thereby demonstrating improved
scalability.
The ALAMO project was funded by EPSRC under Grant GR/J43295 and ran
from August 1994 to February 1997. Contributors to the project
included George Chochia, Paul Coe, Murray Cole, Pat Heywood,
Todd Heywood, Roland Ibbett, Rob Pooley, Peter Thanisch and Nigel
Topham.
More information about the project can be found
in CSG report
ECS-CSG-22-96:
Algorithms, Architectures and Models of Computation.
EMIN: Evaluation of Multiprocessor Interconnection Networks
Designing multiprocessor systems is complicated because of the varied
interactions between parallel software and hardware. Evaluating the
impact of design decisions on overall performance is difficult. The
EMIN project sought to address these issues by developing a software
testbed for designing and analysing multiprocessor interconnection
network performance. Rather than apply a single technique to the
problem, a suite of design techniques was used. The simplest (and
often overlooked) technique is spreadsheet analysis. This enables
quick broad brush comparisons of networks. Microbenchmarks are useful
both for characterising network performance and for providing data
which is relevant to software.
A testbed was developed into which various workloads and system models
could be plugged, with facilities for measuring detailed performance
as well as large-scale experimentation. The testbed was used, for
example, to measure the total execution time for varying numbers of
processors running the same workload. The results showed that for a
bus the time grows linearly with the number of processors, for a
crossbar the time is constant, and, because of contention, a
multistage network is slower for 4 processors than for 8, 12 or
16. Measurements of average
memory utilisation showed that as the number of processors is
increased, contention on a bus means that the memory units are not
kept busy, whereas memory utilisation remains constant for the
crossbar and multistage networks.
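As an illustration of the kind of broad-brush comparison involved (not
code from the EMIN testbed itself), the short C++ sketch below
reproduces the qualitative bus-versus-crossbar result under
deliberately simple assumptions: each processor issues the same number
of unit-cost memory accesses, a shared bus serialises all of them, and
an ideal crossbar serves all processors in parallel. The numbers are
arbitrary illustration values.

    // Back-of-envelope comparison: P processors each issue N unit-cost
    // memory accesses. A shared bus serialises every access, so total
    // time grows linearly with P; an ideal crossbar (no two processors
    // hitting the same bank) lets all processors proceed in parallel.
    #include <cstdio>

    int main() {
        const int accesses_per_processor = 1000;   // N, same workload per processor
        for (int p = 4; p <= 16; p += 4) {
            double bus_time = static_cast<double>(p) * accesses_per_processor;
            double crossbar_time = accesses_per_processor;   // constant in P
            std::printf("P=%2d  bus=%8.0f  crossbar=%8.0f\n",
                        p, bus_time, crossbar_time);
        }
        return 0;
    }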
The EMIN Project was funded by EPSRC under Grant GR/K19716 and ran
from December 1994 to November 1997. The main contributor to the
project was Fred Howell.
More information about the project can be found in CSG report
ECS-CSG-38-98:
Evaluation of Multiprocessor Interconnection Networks.
The QCD Computer Simulation Project
Quantum Chromodynamics (QCD) describes theoretically the strong
interactions between quarks and gluons. One of the essential features
of QCD is that these elementary particles are always bound together,
confined inside mesons and baryons, collectively called hadrons. This
provides a challenge in relating theoretical and practical results,
since the Standard Model of particle physics describes the
interactions of the quarks and gluons, not of the experimentally
observed hadrons.
To relate the experimental observations to the predictions from the
Standard Model thus needs detailed evaluation of the hadronic
structure, relating the quark constituents to the observed hadronic
properties in a precise way. The only theoretical method to achieve
this, with full control of all sources of error, is via large-scale
numerical simulation: lattice QCD.
Members of the Edinburgh University Department of Physics and
Astronomy were leading contributors to the UKQCD collaboration which
had funding to construct a QCDOC (QCD On a Chip) computer in which a
number of PowerPC-based application-specific integrated circuit
(ASIC) nodes would be interconnected as a 4-dimensional torus.
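As background to the topology being modelled (this is an illustrative
sketch, not QCDOC or HASE code), the C++ fragment below computes the
eight nearest neighbours of a node in a 4-dimensional torus, i.e. the
nodes reached by moving +1 or -1 in each dimension with wrap-around.
The torus dimensions used are arbitrary, not the actual QCDOC
configuration.

    // Illustrative sketch: nearest neighbours of a node in a 4-dimensional
    // torus, the interconnect topology used by QCDOC. Each node has eight
    // neighbours (+1 and -1 in each dimension, with wrap-around).
    #include <array>
    #include <cstdio>

    using Coord = std::array<int, 4>;

    // Wrap-around (modular) neighbour in one dimension.
    Coord neighbour(const Coord& node, const Coord& dims, int dim, int step) {
        Coord n = node;
        n[dim] = (node[dim] + step + dims[dim]) % dims[dim];
        return n;
    }

    int main() {
        Coord dims = {4, 4, 4, 8};     // hypothetical 4x4x4x8 torus (512 nodes)
        Coord node = {0, 3, 2, 7};     // an example node
        for (int d = 0; d < 4; ++d) {
            for (int step : {+1, -1}) {
                Coord n = neighbour(node, dims, d, step);
                std::printf("dim %d, step %+d -> (%d,%d,%d,%d)\n",
                            d, step, n[0], n[1], n[2], n[3]);
            }
        }
        return 0;
    }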
The aims of the HASE QCD Computer Simulation project were to build
HASE simulation models of the QCDOC computer system, to investigate
the factors which influence the performance of QCD computers and to
explore the design parameter space of the models to investigate
variations in performance against a range of architectural parameters
in order to inform the design of subsequent generations of such
computers. An extension to the project was to introduce a
metamodelling scheme to allow for efficient generation of simulation
models with alternative system configurations. This allowed the IBM
Blue Gene/L architecture to be modelled and evaluated.
The QCD Computer Simulation project was supported by EPSRC (Grant
GR/R27129) from May 2001 to April 2004. Contributors to the project
included Sadaf Alam, Marcelo Cintra, Roland Ibbett, Anthony Kennedy,
Richard Kenway and Frederic Mallet.
More information about this project can be found in Sadaf Alam's PhD
thesis: Simulation
of the UKQCD computer.
Simulation Modelling of Distributed Shared Memory Clusters
Advances in scalable, distributed shared memory (DSM) systems continue
to increase the demand for shared-memory access bandwidth. A
number of research projects on large-scale DSM implementations have
shown that bandwidth loss due to poor data locality is significant. As DSM
systems provide a shared address space on top of distributed memory,
memory management activities at different system layers impact data
locality. Techniques such as hidden pages, manager migration,
prefetching, and double faulting have been shown to improve overall
performance by exploiting locality at memory page level. However,
research on large-scale DSM systems has also shown that the
portability of optimisation schemes is limited, and an efficient
technique to analyse the impact of overheads caused by layered
activities is unavailable. Identifying cost-effective directions for
development therefore requires a precise performance analysis and
prediction technique. This project developed an appropriate analysis
technique by simulating the behaviour of DSM systems and studied the
factors which affect performance and their interactions. A
synchronised, discrete event simulation model of DSM nodes was
developed using the construction environment provided by HASE.
More information about this project can be found in Worawan
Marurngsith's PhD thesis:
Simulation Modelling
of Distributed-Shared Memory Multiprocessors.
Storlite: Storage Systems Optical Networking
The Storlite project was a collaborative industry-academia research project funded under the DTI/EPSRC LINK Information Storage and Displays (LINK ISD) programme and led by Xyratex, a disk storage company based in Havant. It involved an investigation of the application of affordable short-reach optical backplane technologies to future architectures of storage systems. Xyratex worked with Exxelis (a company based in Glasgow) and University College London on optical components, and with ICSA at the University of Edinburgh (EPSRC grant GR/S28143) on the simulation of RAID systems.
As the intra-enclosure transmission rate and the number of disks required in each storage sub-system increase, implementing electrical backplanes in storage systems is becoming increasingly difficult. Hence, the optical part of the Storlite project investigated how to implement optical backplanes for storage systems to make them more scalable.
The simulation part of the project involved using HASE to create simulation models of RAID storage systems and industry standard benchmark traffic generators (SPC and IOmeter) and using these models to identify performance bottlenecks, to evaluate communication protocol options and to explore the design space for new hardware acceleration architecture options for next generation storage sub-systems based on optical backplanes.
Contributors to the project included Franck Chevalier, Tim Courtney
(Xyratex), Juan Carlos Diaz y Carballo, David Dolman, Roland Ibbett
and Yan Li.
More information about this project can be found in Yan Li's PhD
thesis: Scalability
of RAID Systems.
HASE Project History
The ideas for HASE grew from a simulator built for an MC88000 system
in 1989, written in occam and run on a Meiko Computing Surface at the
Edinburgh Parallel Computing Centre. HASE itself was developed using
object oriented simulation languages, the first prototype using DEMOS,
the second Sim++ and the current version Hase++.
The first production version of HASE was developed as part of the
ALAMO project supported by the UK EPSRC under grant GR/J43295 but was
later further developed and used for the EMIN Project (EPSRC Grant
GR/K19716), the QCDOC Computer Simulation project (EPSRC grant
GR/R27129) and the Storlite Project (EPSRC grant GR/S28153).
In addition to supporting research projects, HASE was used to
support numerous undergraduate and taught MSc student projects and
several models were used for virtual laboratory practical
exercises.
Contributors to HASE
Many people contributed to the development of HASE, both directly
to the HASE application and through the creation and use of HASE
models. The HASE application includes the work of Paul Coe, Pat Heywood, Fred
Howell, Frederic Mallet, Sandy Robertson, Christos Sotiriou and
Lawrence Williams. The Java version of HASE, HASE-III, was translated
from the original C code by Juan Carlos Diaz y Carballo with the GUI
being written by David Dolman. The latest version of HASE contains
numerous revisions and improvements thanks to the work of David Dolman.
The HASE Project was led by Roland Ibbett.
Research Project Models were built by Sadaf Alam, Paul Coe, Franck
Chevalier, George Chochia, Tim Courtney, Todd Heywood, Fred Howell,
Yan Li and Worawan Marurngsith. The models of historic computers and
most of the teaching models were built by Roland Ibbett, some ab
initio, some based on models built by the numerous undergraduate
and MSc students who undertook projects using HASE.
HASE Project
Institute for Computing Systems Architecture, School of Informatics,
University of Edinburgh
Last change 21/04/2025