CogSci 2006
July 26-29
Tutorials/workshops: July 26
Vancouver, BC, Canada
Sheraton Vancouver Wall Centre

The 28th Annual Conference of the Cognitive Science Society
Tutorial Program

The tutorial program of Cognitive Science 2006 will take place on Wednesday July 26 in Vancouver, Canada, at the Sheraton Vancouver Wall Centre Hotel.



Tutorial Schedule

Overview

Free lunchtime presentation (12:45-1:30):
McGinnis: The NSF TeraGrid

Morning (08:30-10:00 and 10:30-12:00):
    Griffiths, Kemp, and Tenenbaum: Bayesian models (Part 1)
    Ohlsson and Mitrovic: Constraint-Based Modeling
    Netto: Artificial Life
    Strauss, Mirman, and Magnuson: Speech Perception

Afternoon (1:30-3:00 and 3:30-5:00):
    Griffiths, Kemp, and Tenenbaum: Bayesian models (Part 2)
    Taatgen and van Rijn: ACT-R
    Hudlicka: Affective Computing


Free lunchtime presentation: Laura F. McGinnis: The NSF TeraGrid (12:45-1:30)

TeraGrid is a national facility that provides computational, data management, and analysis services to support scientific discovery. TeraGrid integrates computing, visualization, storage, and data collection systems at nine institutions nationwide, supporting users through a unified set of consulting, training, and education resources. In 2005 the TeraGrid team initiated a "Science Gateways" program designed to adapt TeraGrid services so that they can be used through existing scientific tools such as community-driven web portals and desktop applications. This presentation will introduce the audience to the TeraGrid and show some of the scientific activity this program is facilitating.


Tutorial T1: Tom Griffiths, Charles Kemp, and Josh Tenenbaum: Bayesian models of inductive learning (full day)

Many of the central problems of cognitive science are problems of induction, calling for uncertain inferences from limited data. This tutorial will introduce an approach to explaining everyday inductive leaps in terms of Bayesian statistical inference, drawing upon tools from probability theory, statistics, and artificial intelligence. We will demonstrate how this approach can be used to model natural tasks such as learning the meanings of words, inferring hidden properties of natural kinds, or discovering causal laws, where people draw on considerable prior knowledge in the form of abstract domain theories and structured systems of relations.
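To give a flavor of the kind of computation the tutorial covers, here is a minimal sketch (ours, not taken from the tutorial materials) of Bayes' rule applied to a toy word-learning-style induction problem: a learner weighs a narrow hypothesis against a broad one after seeing a few examples. The hypotheses, priors, and data below are invented for illustration.

    # Toy Bayesian induction: which hypothesis explains the observed examples?
    # Hypotheses, priors, and data are invented for illustration only.

    def likelihood(data, hypothesis):
        """P(data | h) under 'strong sampling': each example is drawn uniformly
        from the hypothesis' extension, so smaller hypotheses are favored
        (the size principle)."""
        if not all(x in hypothesis["extension"] for x in data):
            return 0.0
        return (1.0 / len(hypothesis["extension"])) ** len(data)

    hypotheses = [
        {"name": "dalmatians", "prior": 0.3,
         "extension": {"dalmatian1", "dalmatian2", "dalmatian3"}},
        {"name": "dogs", "prior": 0.7,
         "extension": {"dalmatian1", "dalmatian2", "dalmatian3",
                       "poodle1", "terrier1", "terrier2"}},
    ]

    data = ["dalmatian1", "dalmatian2", "dalmatian3"]  # three observed examples

    # Bayes' rule: P(h | data) is proportional to P(data | h) * P(h)
    unnormalized = [h["prior"] * likelihood(data, h) for h in hypotheses]
    total = sum(unnormalized)
    for h, u in zip(hypotheses, unnormalized):
        print(f"P({h['name']} | data) = {u / total:.3f}")

With three examples that all happen to fall within the narrower hypothesis, its posterior overtakes its lower prior (roughly 0.77 versus 0.23 here), illustrating the "suspicious coincidence" reasoning behind many Bayesian accounts of inductive leaps.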

Bayesian models have become increasingly popular in the recent cognitive literature. This tutorial aims to prepare students to use these modeling methods intelligently: to understand how they work, what advantages they offer over alternative approaches, and what their limitations are. We also discuss how to relate the abstract computations of Bayesian models to more traditional models framed in terms of cognitive processing or neurocomputational mechanisms. The tutorial will focus on case studies of several cognitive tasks, and for each task contrast multiple models both within the Bayesian approach and across different modeling approaches.


Tutorial T2: Eva Hudlicka: Introduction to Affective Computing and Affective Modeling (half day)

Affective computing represents a broad, interdisciplinary research and practice area focusing on a range of topics, including: computational models of emotion generation and cognitive-affective interactions; development of cognitive-affective architectures; affective user modeling; sensing and recognition of emotions; and emotion expression techniques. Theoretically oriented research aims to better understand the mechanisms of cognitive appraisal and emotion-cognition interactions, architectural requirements for emotion, roles of emotion in adaptive behavior, and the theoretical feasibility of emotion recognition by machines. Applied research aims to develop methods to improve human-computer interaction by introducing affective factors. This includes the development of affect-adaptive interfaces and affective user models to improve computerized education systems and training; the use of affect-adaptive interfaces and decision-aiding systems to improve human performance, especially in high-stress environments; and augmenting virtual agents with emotions and personality traits to increase their realism and believability.

This tutorial will provide an introduction to the broad area of affective computing, focusing on established empirical findings from psychology, and methods and techniques developed in cognitive science, AI and HCI. Specific topics will include the following: (1) overview of the broad area of affective computing; (2) historical overview of emotion theories; (3) review of the relevant emotion research in psychology and neuroscience; (4) descriptions of specific techniques and approaches to modeling emotion, focusing on cognitive-affective architectures, models of cognitive appraisal, and models of affective-cognitive interactions; (5) overview of the roles of emotion in human-computer interaction; (6) techniques and tools for emotion sensing, recognition, and expression; (7) affective user modeling methods and applications; and (8) a selection of specific topics, such as development and applications of virtual affective agents, and relevance of affect in team research.
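As a purely illustrative sketch of topic (4), a model of cognitive appraisal can be rendered, in its simplest rule-based form, as a mapping from appraisal variables of an event to an emotion label and intensity. The variables, thresholds, and rules below are our own invented assumptions, not a published appraisal model or anything from the tutorial.

    # Minimal rule-based cognitive appraisal sketch; variables and rules are
    # illustrative assumptions, not a published appraisal model.

    def appraise(event):
        """Map appraisal variables (goal congruence, certainty, agency)
        to a coarse emotion label and an intensity in [0, 1]."""
        congruence = event["goal_congruence"]   # -1 (blocks goals) .. +1 (furthers goals)
        certainty = event["certainty"]          # 0 .. 1
        other_caused = event["other_caused"]    # True if another agent caused the event

        if congruence > 0:
            label = "joy" if certainty > 0.5 else "hope"
        else:
            if other_caused:
                label = "anger"
            else:
                label = "fear" if certainty < 0.5 else "sadness"
        intensity = abs(congruence) * max(certainty, 0.3)
        return label, round(intensity, 2)

    print(appraise({"goal_congruence": -0.8, "certainty": 0.9, "other_caused": True}))
    # -> ('anger', 0.72)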


Tutorial T3: Marcio Lobo Netto: Artificial Life as a Virtual Lab for Cognitive Science Experiments (half day)

This tutorial presents the main concepts and principles of Artificial Life (AL). It also shows how AL applications can be developed to assist researchers in some of their activities, focusing mainly on cognitive science, including virtual experiments for studying social behavior, the evolution of living beings, adaptive living systems, decision making, and learning and communication abilities.

Artificial Life is a research area related to cognitive science in many respects. Both fields study living beings and propose models that simulate some of their characteristics and evaluate their behavior. Although they differ in many other respects, their strong relationship allows them to be linked, for instance in the conduct of virtual experiments. Some artificial life models and their corresponding simulation platforms can be used to analyze topics related to living and cognitive beings, such as those listed above.

The tutorial begins with a short historical overview of the area, covering the eminent scientists who started the discussion of "What is Life", the fundamental concepts of life, and consequently the possibility of simulating it on computers. We then discuss life from different approaches (metabolic, genetic, entropic), levels of detail (from micro- to macro-organisms, chosen according to the objectives of the experiment to be conducted), substrates (including virtual ones), and forms of organization (considering different aspects, such as organs and their purposes). Based on these concepts, we propose a model of artificial living beings structured according to their phylogeny (species evolution), ontogeny (individual development and learning), and epistemology (knowledge and reasoning). We then show how different mathematical methods and models can be used to implement simulation platforms capable of handling the aspects of interest in artificial life experiments (computational simulations), and we propose an architecture that combines these different aspects of a virtual living being. Finally, we present case studies illustrating how to make effective use of these tools to study interesting, albeit simple, natural mechanisms and phenomena in living beings, considered both as individuals and as social groups.
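To make the three levels concrete, the fragment below sketches, under our own illustrative assumptions (none of the class names or parameters come from the tutorial), how phylogeny, ontogeny, and epistemology might map onto a simulated agent: an inherited genome, within-lifetime skill learning, and a small knowledge store.

    import random

    # Illustrative sketch of a virtual living being organized along the three
    # levels named above; structure and parameters are assumptions for exposition.

    class Creature:
        def __init__(self, genome):
            self.genome = genome     # phylogeny: inherited, mutated across generations
            self.skill = 0.0         # ontogeny: improves through individual learning
            self.knowledge = {}      # epistemology: facts the agent has acquired

        def learn(self, experience, value):
            """Within-lifetime adaptation: store what was observed and improve skill."""
            self.knowledge[experience] = value
            self.skill += 0.1 * (1.0 - self.skill)

        def reproduce(self):
            """Across-generation adaptation: copy the genome with small mutations."""
            child_genome = [g + random.gauss(0, 0.05) for g in self.genome]
            return Creature(child_genome)

    parent = Creature(genome=[0.5, 0.2, 0.9])
    parent.learn("food_at_river", True)
    child = parent.reproduce()       # inherits a mutated genome, starts learning anew
    print(parent.skill, len(parent.knowledge), child.genome)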


Tutorial T4: Stellan Ohlsson and Antonija Mitrovic: Constraint-Based Modeling: An Introduction (half day)

Cognitive models typically cast declarative knowledge as consisting of propositions: knowledge units that encode assertions, can be true or false, and support description, deduction, and prediction. We have developed an alternative model of declarative knowledge as consisting of constraints, units of knowledge that are more prescriptive than descriptive and that primarily support evaluation and judgment. In this 3-hour tutorial, we first present a formal representation of constraints and explain its conceptual rationale.
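For readers new to the format, a constraint of this kind is often described as a pair of conditions: a relevance condition that says when the constraint applies, and a satisfaction condition that must then hold. The code below is our own minimal rendering of that idea, with an invented fraction-addition constraint and solution format used purely for illustration.

    # Minimal sketch of a constraint as a <relevance, satisfaction> pair.
    # The example constraint and solution format are invented for illustration.

    class Constraint:
        def __init__(self, name, relevance, satisfaction, feedback):
            self.name = name
            self.relevance = relevance        # when does this constraint apply?
            self.satisfaction = satisfaction  # what must then be true?
            self.feedback = feedback          # message produced if violated

        def check(self, solution):
            """Return None if irrelevant or satisfied, otherwise the feedback."""
            if self.relevance(solution) and not self.satisfaction(solution):
                return self.feedback
            return None

    # Example: adding two fractions (a/b) + (c/d), written as a dict.
    same_denominator = Constraint(
        name="equal-denominators-before-adding",
        relevance=lambda s: s["operation"] == "add_fractions",
        satisfaction=lambda s: s["b"] == s["d"],
        feedback="To add fractions, first convert them to a common denominator.",
    )

    student_solution = {"operation": "add_fractions", "a": 1, "b": 2, "c": 1, "d": 3}
    print(same_denominator.check(student_solution))

Because a constraint only ever fires when it is relevant and unsatisfied, checking a set of constraints against a solution directly yields the errors in that solution, which is the property exploited in the tutoring application described below.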

We then develop two applications of constraint-based modeling. The first is the use of constraints as the basis for a machine learning algorithm that allows a heuristic search system to detect and correct its own errors. From this point of view, constraint-based learning is a form of adaptive search. The algorithm, which we present in some detail, was originally developed as a hypothesis about how people learn from errors, and we briefly summarize its applications to various problems in the psychology of cognitive skill acquisition.

We develop in detail the application of constraint-based modeling to the design and implementation of Intelligent Tutoring Systems (ITS). The constraint-based knowledge representation provides a novel way to represent the target subject matter knowledge, which has the advantage of directly supporting one of the main functions of expert knowledge in an ITS: to detect student errors. More importantly, the constraint-based representation provides a theoretically sound and practical solution to the intractable problem of student modeling. Finally, the constraint-based representation and the associated learning algorithm provide detailed implications for how to formulate individual tutoring messages. We present multiple systems that follow this blueprint, together with empirical evaluation data.

In the last part of the tutorial, we point out other areas of cognitive science where the constraint-based format has a potential to provide significant advantages. We point the participants to sources for further study of the constraint-based approach to cognition.


Tutorial T6: Ted J. Strauss, Daniel Mirman, and James S. Magnuson: Speech Perception: Linking Computational Models and Human Data (half day)

Computational models provide a concrete instantiation of a set of hypotheses that can be tested and can generate novel predictions. In complex domains such as speech perception, which have a rich base of behavioral and computational work, it is useful to test novel hypotheses in the context of established models that are known to be consistent with a broad range of data. The TRACE model of speech perception (McClelland & Elman, 1986) accounts for a broad range of data in speech perception and spoken word recognition. TRACE is representative of a family of related models characterized by activation-competition dynamics, including Shortlist, Merge, NAM, and Parsyn.
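As a rough illustration of what activation-competition dynamics means, the sketch below is a simplified interactive-activation-style update, not the actual TRACE or jTRACE implementation: word candidates accumulate activation from bottom-up input while laterally inhibiting one another. The words and parameter values are invented.

    # Simplified activation-competition sketch: word units receive bottom-up
    # support and laterally inhibit each other. Parameters are illustrative,
    # not the TRACE model's actual equations or values.

    words = ["beaker", "beetle", "speaker"]
    bottom_up = {"beaker": 0.30, "beetle": 0.20, "speaker": 0.10}  # assumed input match
    activation = {w: 0.0 for w in words}

    DECAY, INHIBITION, STEPS = 0.1, 0.2, 20

    for t in range(STEPS):
        new_act = {}
        for w in words:
            competition = sum(max(activation[v], 0.0) for v in words if v != w)
            net = bottom_up[w] - INHIBITION * competition
            a = activation[w] * (1 - DECAY) + net
            new_act[w] = min(max(a, 0.0), 1.0)   # clip to [0, 1]
        activation = new_act

    print({w: round(a, 2) for w, a in activation.items()})
    # The best-matching word ends up most active while suppressing its competitors.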

This tutorial will give participants the skills needed to carry out computational modeling of speech perception and spoken word recognition (SWR) using the TRACE model. Experienced modelers will be offered new tools to add to their repertoires. Participants will learn to use the recently developed jTRACE tool (Strauss, Harris, & Magnuson, in press) to test their hypotheses. Particular emphasis will be placed on interpreting model behavior and making the link from model to human behavior.


Tutorial T8: Niels Taatgen and Hedderik van Rijn: ACT-R Tutorial (half day)

This tutorial serves as a general introduction to the ACT-R theory. It therefore assumes no knowledge beyond what is already present in the typical cognitive science audience: some basic experimental psychology and some understanding of what a formal theory entails. The tutorial will not attempt to teach ACT-R modeling, because the time available is too limited for that. Instead, we hope to whet the appetite of the participants for the seven-day tutorial that is available online, which can be followed by attending the yearly ACT-R summer school.

The general strategy in the tutorial is to introduce the various elements of the theory on the basis of example models and research paradigms, largely following Taatgen, Lebiere and Anderson (2006). The abstract provides some more details on the topics. Apart from a general introduction of approximately half an hour, we will devote about half an hour to each of the five research paradigms. We will not focus too much on the more syntactic aspects of the theory, but will provide enough details to give participants a feeling for what is going on in the models.