Bender, E. and A. Lascarides (2019) Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics, Morgan and Claypool Publishers.
Meaning is a fundamental concept in Natural Language Processing (NLP),
in the tasks of both Natural Language Understanding (NLU) and Natural
Language Generation (NLG). This is because the aims of these fields
are to build systems that understand what people mean when they speak
or write, and that can produce linguistic strings that successfully
express to people the intended content. In order for NLP to scale
beyond partial, task-specific solutions, researchers in these fields
must be informed by what is known about how humans use language to
express and understand communicative intents. The purpose of this
book is to present a selection of useful information about semantics
and pragmatics, as understood in linguistics, in a way that's
accessible to and useful for NLP practitioners with minimal (or even
no) prior training in linguistics.
@book{bender:lascarides:2019,
author = {Emily Bender and Alex Lascarides},
year = {2019},
title = {Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics},
publisher = {Morgan and Claypool Publishers}
}
Stenning, K., A. Lascarides and J. Calder (2006) Introduction to
Cognition and Communication, MIT Press.
This introduction to the interdisciplinary study of cognition takes
the novel approach of bringing several disciplines to bear on the
subject of communication. Using the perspectives of linguistics,
logic, AI, philosophy, and psychology---the component fields of
cognitive science---to explore topics in human communication in depth,
the book shows readers and students from any background how these
disciplines developed their distinctive views, and how those views
interact. The book introduces some sample phenomena of human
communication that illustrate the approach of cognitive science in
understanding the mind, and then considers theoretical issues,
including the relation of logic and computation and the concept of
representation. It describes the development of a model of natural
language and explores the link between an utterance and its meaning
and how this can be described in a formal way on the basis of recent
advances in AI research. It looks at communication employing
graphical messages and the similarities and differences between
language and diagrams. Finally, the book considers some general
philosophical critiques of computational models of mind. The book can
be used at a number of different levels. A glossary, suggestions for
further reading, and a Web site with multiple-choice questions are
provided for nonspecialist students; advanced students can supplement
the material with readings that take the topics into greater depth.
@book{stenning:etal:2006,
author = {Keith Stenning and Alex Lascarides and Jo Calder},
year = {2006},
title = {Introduction to Cognition and Communication},
publisher = {MIT Press}
}
Asher, N. and A. Lascarides (2003) Logics of Conversation,
Cambridge University Press.
People often mean more than they say. Grammar on its own is typically
insufficient for determining the full meaning of an utterance; the
assumption that the discourse is coherent or ‘makes
sense’ has an important role to play in determining meaning
as well. Logics of Conversation presents a dynamic semantic framework
called Segmented Discourse Representation Theory, or SDRT, where this
interaction between discourse coherence and discourse interpretation
is explored in a logically precise manner. Combining ideas from
dynamic semantics, commonsense reasoning and speech act theory, SDRT
uses its analysis of rhetorical relations to capture intuitively
compelling implicatures. It provides a computable method for
constructing these logical forms and is one of the most formally
precise and linguistically grounded accounts of discourse
interpretation currently available. The book will be of interest to
researchers and students in linguistics and in philosophy of language.
@book{asher:lascarides:2003,
author = {N. Asher and A. Lascarides},
year = {2003},
title = {Logics of Conversation},
publisher = {Cambridge University Press}
}
Journal Articles
Appelgren, M. and A. Lascarides (2020) Interactive Task Learning via Embodied Corrective Feedback, Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS), doi:10.1007/s10458-020-09481-8.
This paper addresses a task in Interactive Task Learning (Laird et al. IEEE Intell Syst 32:6–21, 2017). The agent must learn to build towers which are constrained by rules, and whenever the agent performs an action which violates a rule the teacher provides verbal corrective feedback: e.g. “No, red blocks should be on blue blocks”. The agent must learn to build rule compliant towers from these corrections and the context in which they were given. The agent is not only ignorant of the rules at the start of the learning process, but it also has a deficient domain model, which lacks the concepts in which the rules are expressed. Therefore an agent that takes advantage of the linguistic evidence must learn the denotations of neologisms and adapt its conceptualisation of the planning domain to incorporate those denotations. We show that by incorporating constraints on interpretation that are imposed by discourse coherence into the models for learning (Hobbs in On the coherence and structure of discourse, Stanford University, Stanford, 1985; Asher et al. in Logics of conversation, Cambridge University Press, Cambridge, 2003), an agent which utilizes linguistic evidence outperforms a strong baseline which does not.
@article{appelgren:lascarides:2020,
author = {M.\ Appelgren and A.\ Lascarides},
year = {2020},
title = {Interactive Task Learning via Embodied Corrective Feedback},
journal = {Journal of Autonomous Agents and Multi-Agent Systems},
volume = {34},
number = {54},
doi = {10.1007/s10458-020-09481-8}
}
Schloeder, J. and A. Lascarides (2020) Understanding Focus: Pitch, Placement and Coherence, Semantics and Pragmatics, 13(1), pp1--53, doi:10.3765/sp.13.1.
This paper presents a novel account of focal stress and pitch
contour in English dialogue. We argue that one should analyse and
treat focus and pitch contour jointly, since (i) some pragmatic
interpretations vary with contour (e.g., whether an utterance
accepts or rejects; or whether it implicates a positive or negative
answer); and (ii) there are utterances with identical prosodic
focus that in the same context are infelicitous with one contour,
but felicitous with another. We offer an account of two distinct
pitch contours that predicts the correct felicity judgements and
implicatures, outclassing other models in empirical coverage or
formality. Prosodic focus triggers a presupposition, where what is
presupposed and how the presupposition is resolved depends on
prosodic contour. If resolving the presupposition entails the
proffered content, then the proffered content is
uninteresting and hence the utterance is
infelicitous. Otherwise, resolving the presupposition may lead to
an implicature. We regiment this account in SDRT.
@article{schloeder:lascarides:2020,
author = {J.\ Schl\"oder and A.\ Lascarides},
year = {2020},
title = {Understanding Focus: Pitch, Placement and Coherence},
journal = {Semantics and Pragmatics},
volume = {13},
number = {1},
pages = {1--53},
doi = {10.3765/sp.13.1}
}
Lascarides, A. and M. Guhe (2019) Persuasion with Limited Sight, Review of Philosophy and Psychology, 10(1), pp1--33, Springer.
Humans face many game problems that are too large for the whole game
tree to be used in their deliberations about action, and very little
is understood about how they cope in such scenarios. However, when
a human player's chosen strategy is conditioned on her limited
perspective of how the game might progress
(Degremont et al, 2016), then it should be possible to
manipulate her into changing her planned move by mentioning a
possible outcome of an alternative move. This
paper demonstrates
that human players can be manipulated this way: in the game The
Settlers of Catan, where
negotiation is only a small part of what one must do to win the
game, thereby generating uncertainty about which outcomes of the
negotiation are good and which are bad, the likelihood that a player
accepts a trade offer that deviates from their declared preferred
strategy is higher if it is accompanied by a description of what that
trade offer can lead to.
@article{lascarides:guhe:2019,
author = {A.\ Lascarides and M.\ Guhe},
year = {2019},
title = {Persuasion with Limited Sight},
journal = {Review of Philosophy and Psychology},
volume = {10},
number = {1},
pages = {1--33},
doi = {10.1007/s13164-018-0398-z},
publisher = {Springer}
}
Hunter, J., N. Asher and A. Lascarides (2018) A Formal Semantics for Situated Conversation, Semantics and Pragmatics, 11(10), pp1--59, doi:10.3765/sp.11.10.
While linguists and philosophers have sought to model the
various ways in which the meaning of what we say can depend on the
nonlinguistic context, this work has by and large focused on how the
nonlinguistic context can be exploited to ground or anchor referential
or otherwise context-sensitive expressions. In this paper, we focus on
examples in which nonlinguistic events contribute entire discourse
units that serve as arguments to coherence relations, without the
mediation of context-sensitive expressions. We use both naturally
occurring and constructed examples to highlight these interactions and
to argue that extant coherence-based accounts of discourse should be
extended to model them. We also argue that extending coherence-based
accounts in this way is a nontrivial task. It forces us to reassess
basic notions of the nonlinguistic context and rhetorical relations as
well as models of discourse structure, evolution, and
interpretation. Our paper addresses the conceptual and technical
revisions that these types of interaction demand.
@article{hunter:etal:2018,
author = {Julie Hunter and Nicholas Asher and Alex Lascarides},
year = {2018},
title = {A Formal Semantics for Situated Conversation},
journal = {Semantics and Pragmatics},
volume = {11},
number = {10},
pages = {1--59},
doi = {10.3765/sp.11.10}
}
Alahverdzhieva, K., A. Lascarides and D. Flickinger (2017) Aligning Speech and Co-speech Gesture in a Constraint-based Grammar, Journal of Language Modelling, 5(3), pp421--464.
This paper concerns the form-meaning mapping of communicative actions
consisting of speech and improvised co-speech gestures. Based on the
findings of previous cognitive and computational approaches, we
advance a new theory in which this form-meaning mapping is analysed in
a constraint-based grammar. Motivated by observations in naturally
occurring examples, we propose several construction rules, which use
linguistic form, gesture form and their relative timing to constrain
the derivation of a single speech-gesture syntax tree, from which a
meaning representation can be composed via standard methods for
semantic composition. The paper further reports on implementing these
speech-gesture construction rules within the English Resource Grammar
(Flickinger, 2000). Since gestural form often underspecifies its
meaning, the logical formulae that are composed via syntax are
underspecified so that current models of the semantics/pragmatics
interface support the range of possible interpretations of the
speech-gesture act in its context of use.
@article{alahverdzhieva:etal:2017,
author = {K.\ Alahverdzhieva and A.\ Lascarides and D.\ Flickinger},
year = {2017},
title = {Aligning Speech and Co-Speech Gesture in a Constraint-Based Grammar},
journal = {Journal of Language Modelling},
volume = {5},
number = {3},
pages = {421--464},
publisher = {Institute of Computer Science of the Polish Academy of Sciences}
}
Cadilhac, A., N. Asher, A. Lascarides and F. Benamara (2015) Preference Change, Journal of Logic, Language and Information, 24(3), pp267--288.
Most models of rational action assume that all possible states and
actions are pre-defined and that preferences change only when
beliefs do. But several decision and game problems lack these
features, calling for a dynamic model of preferences:
preferences can change when unforeseen possibilities come to light
or when there is no specifiable or measurable change in belief. We
propose a formally precise dynamic model of preferences that extends
an existing static model (Boutilier et al, 2004). Our axioms for
updating preferences preserve consistency while minimising change,
like Hansson's (1995). But unlike prior models of preference
change, ours supports default reasoning with partial preference
information, which is essential to handle decision problems where
the decision tree isn't surveyable. We also show that our model
avoids problems for other models of preference change discussed in
Spohn (2009).
@article{cadilhac:etal:2015,
author = {A.\ Cadilhac and N.\ Asher and A.\ Lascarides and F.\ Benamara},
year = {2015},
title = {Preference Change},
journal = {Journal of Logic, Language and Information},
volume = {24},
number = {3},
pages = {267--288},
doi = {10.1007/s10849-015-9221-8}
}
Asher, N. and A. Lascarides (2013)
Strategic Conversation,
Semantics and Pragmatics, 6, 2:1--62.
Models of conversation that rely on a strong notion of cooperation
don't handle dialogues where the agents' goals conflict; for
instance, courtroom cross examination and political debate. We
provide a game-theoretic framework in which both cooperative and
non-cooperative conversation can be analysed, as well as a proof
theoretic analysis, in which we prove a correspondence between a
situation where the agents' preferences normally align and Gricean
principles of cooperative conversation (e.g., Sincerity). We
introduce the notion of safety in discourse interpretation:
our logic provides the means to test whether an implicature in
non-cooperative conversation can be treated as a matter of public
record.
@article{asher:lascarides:2013,
author = {N.\ Asher and A.\ Lascarides},
year = {2013},
title = {Strategic Conversation},
journal = {Semantics and Pragmatics},
volume = {6},
number = {2},
pages = {2:1--62},
doi = {10.3765/sp.6.2}
}
Asher, N. and A. Lascarides (2011)
Reasoning Dynamically about What One Says,
Synthese, 183(1), pp5--31,
Kluwer Academic Publishers.
In this paper we make SDRT's glue logic for computing logical
form dynamic. This allows us to model a dialogue agent's
understanding of what the update of the semantic representation of
the dialogue would be after his next contribution, including the
effects of the rhetorical moves that he is contemplating performing
next. This is a pre-requisite for developing a model of how agents
reason about what to say next. We make the glue logic dynamic by
using a dynamic public announcement logic (PAL). We extend
PAL with a particular variety of default reasoning suited to
reasoning about discourse---this default reasoning being an
essential component of inferring the pragmatic effects of one's
dialogue moves. We add to the PAL language a new type of
announcement, known as ceteris paribus announcement, and this
is used to model how an agent anticipates the (default) pragmatic effects
of his next dialogue move. Our extended PAL validates certain
intuitive patterns of default inference that existing PALs for
practical reasoning do not. We prove that the dynamic glue logic has a
PSPACE validity problem, and as such is no more complex than PAL
with multiple modal operators.
@article{asher:lascarides:2011,
author = {Nicholas Asher and Alex Lascarides},
year = {2011},
title = {Reasoning Dynamically about What One Says},
journal = {Synthese},
volume = {183},
number = {1},
pages = {5--31},
publisher = {Kluwer Academic Publishers}
}
Lascarides, A. and M. Stone (2009) Discourse Coherence and Gesture
Interpretation, Gesture, 9(2), pp147--180, John Benjamins Publishing Company.
In face-to-face conversation, communicators orchestrate
multimodal contributions that meaningfully combine the linguistic
resources of spoken language and the visuo-spatial affordances of
gesture. In this paper, we characterise this meaningful
combination in terms of the coherence of gesture and speech.
Descriptive analyses illustrate the diverse ways gesture
interpretation can supplement and extend the interpretation of
prior gestures and accompanying speech. We draw certain parallels
with the inventory of coherence relations found in discourse
between successive sentences. In both domains, we suggest,
interlocutors make sense of multiple communicative actions in
combination by using these coherence relations to link the actions'
interpretations into an intelligible whole. Descriptive analyses
also emphasise the improvisation of gesture; the abstraction and
generality of meaning in gesture allows communicators to interpret
gestures in open-ended ways in new utterances and contexts. We
draw certain parallels with interlocutors' reasoning about
underspecified linguistic meanings in discourse. In both domains,
we suggest, coherence relations facilitate meaning-making by
resolving the meaning of each communicative act through
constrained inference over information made salient in the prior
discourse. Our approach to gesture interpretation lays the
groundwork for formal and computational models that go beyond
previous approaches based on compositional syntax and semantics, in
better accounting for the flexibility and the constraints found in
the interpretation of speech and gesture in conversation. At the
same time, it shows that gesture provides an important source of
evidence to sharpen the general theory of coherence in
communication.
@article{lascarides:stone:2009,
author = {Alex Lascarides and Matthew Stone},
title = {Discourse Coherence and Gesture Interpretation},
year = {2009},
journal = {Gesture},
volume = {9},
number = {2},
pages = {147--180},
publisher = {John Benjamins Publishing Company}
}
Lascarides, A. and M. Stone (2009)
A Formal Semantic Analysis of
Gesture, Journal of Semantics, 26(4), pp393--449, Oxford
University Press.
The gestures that speakers use in tandem with speech include not
only conventionalised actions with identifiable meanings (so called
narrow gloss gestures or emblems) but also productive
iconic and deictic gestures whose form and meanings seem largely
improvised in context. In this paper, we bridge the descriptive
tradition with formal models of reference and discourse structure so
as to articulate an approach to the interpretation of these
productive gestures. Our model captures gestures' partial and
incomplete meanings as derived from form, and accounts for the more
specific interpretations they derive in context. Our work
emphasises the commonality of the pragmatic mechanisms for
interpreting both language and gesture, and the place of formal
methods in discovering the principles and knowledge that those
mechanisms rely on.
@article{lascarides:stone:2009b,
author = {Alex Lascarides and Matthew Stone},
year = {2009},
title = {A Formal Semantic Analysis of Gesture},
journal = {Journal of Semantics},
volume = {26},
number = {4},
pages = {393--449},
publisher = {Oxford University Press}
}
Lascarides, A. and N. Asher (2009) Agreement, Disputes and Commitments in
Dialogue, Journal of Semantics, 26(2), pp109--158.
This paper provides a logically precise analysis of agreement and
disputes in dialogue. The semantics distinguishes among the public
commitments of each dialogue agent, including commitments to
relational speech acts or rhetorical relations (e.g., Narration, Explanation, Correction). Agreement
is defined to be the shared entailments of the agents'
public commitments. We show that this makes precise predictions
about implicit agreement. The theory also provides a consistent
interpretation of disputes and models what content is
agreed upon when a dispute has taken place.
@article{lascarides:asher:2009,
author = {Alex Lascarides and Nicholas Asher},
year = {2009},
title = {Agreement, Disputes and Commitments in Dialogue},
journal = {Journal of Semantics},
volume = {26},
number = {2},
pages = {109--158},
publisher = {Oxford University Press}
}
Sporleder, C. and A. Lascarides (2008) Using Automatically
Labelled Examples to Classify Rhetorical Relations: An Assessment,
Natural Language Engineering, 14(3) pp369--416.
Being able to identify the rhetorical relations, such as
contrast or explanation, that hold between spans of
text is important for many natural language processing (NLP)
applications. Using machine learning to obtain a classifier which can
distinguish between different relations typically depends on the
availability of manually labelled training data, which is very
time-consuming to create. However, rhetorical relations are sometimes
lexically marked, i.e., signalled by discourse markers (e.g.,
because, but, consequently etc.), and it
has been suggested (Marcu and Echihabi, 2002) that the presence of
these cues in some examples can be exploited to label them
automatically with the corresponding relation. The discourse markers
are then removed and the automatically labelled data are used to train
a classifier to determine relations even when no discourse marker is
present (based on other linguistic cues such as word co-occurrences).
In this paper, we investigate empirically how feasible this approach
is. In particular, we test whether automatically labelled, lexically
marked examples are really suitable training material for classifiers
that are then applied to unmarked examples (i.e., examples which
naturally occur without a discourse marker). We also explore how
training on automatically labelled examples compares to training on
manually labelled, unmarked examples. Our results suggest that
automatically labelled data are of very limited use for classifying
rhetorical relations in unmarked examples.
@article{sporleder:lascarides:2008,
author = {Caroline Sporleder and Alex Lascarides},
year = {2008},
title = {Using Automatically Labelled Examples to Classify Rhetorical
Relations: An Assessment},
journal = {Natural Language Engineering},
volume = {14},
number = {3},
pages = {369--416},
publisher = {Cambridge University Press}
}
Lapata, M. and A. Lascarides (2006) Learning
Sentence-internal
Temporal Relations, Journal of Artificial
Intelligence Research, 27, pp85--117.
In this paper we propose a data intensive approach for inferring
sentence-internal temporal relations. Temporal inference is relevant
for practical NLP applications which either extract or synthesize
temporal information (e.g., summarisation, question answering). Our
method bypasses the need for manual coding by exploiting the presence
of markers like "after", which overtly signal a temporal
relation. We first show that models trained on main and subordinate
clauses connected with a temporal marker achieve good performance on a
pseudo-disambiguation task simulating temporal inference (during
testing the temporal marker is treated as unseen and the models must
select the right marker from a set of possible candidates). Secondly,
we assess whether the proposed approach holds promise for the
semi-automatic creation of temporal information. Specifically, we use
a model trained on noisy and approximate data (i.e., main and
subordinate clauses) to predict intra-sentential relations present in
TimeBank, a corpus containing rich temporal annotations. Our
experiments compare and contrast several probabilistic models
differing in their feature space, linguistic assumptions and data
requirements. We evaluate performance against gold standard corpora
and also against human subjects.
@article{lapata:lascarides:2006,
author = {Mirella Lapata and Alex Lascarides},
year = {2006},
title = {Learning Sentence-internal Temporal Relations},
journal = {Journal of Artificial Intelligence Research},
volume = {27},
pages = {85--117}
}
Grover, C., M. Lapata and A. Lascarides (2005) A
Comparison of Parsing
Technologies for the Biomedical Domain, Natural
Language Engineering, 11(1), pp25--65.
This paper reports on a number of experiments which are designed to
investigate the extent to which current NLP resources are able
to syntactically and semantically analyse biomedical text. We address
two tasks: parsing a real corpus with a hand-built wide-coverage
grammar, producing both syntactic analyses and logical forms and
automatically computing the interpretation of compound nouns where the
head is a nominalisation (e.g., hospital arrival means an
arrival at hospital, while patient arrival means an arrival of a
patient). For the former task we demonstrate that flexible and yet
constrained `pre-processing' techniques are crucial to success: these
enable us to use part-of-speech tags to overcome inadequate lexical
coverage, and to `package up' complex technical expressions prior to
parsing so that they are blocked from creating misleading amounts of
syntactic complexity. We argue that the XML-processing paradigm
is ideally suited for automatically preparing the corpus for parsing.
For the latter task, we compute interpretations of the compounds by
exploiting surface cues and meaning paraphrases, which in turn are
extracted from the parsed corpus. This provides an empirical setting
in which we can compare the utility of a deep parser vs. a shallow
one, exploring the trade-off between resolving attachment ambiguities
on the one hand and generating errors in the parses on the other. We
demonstrate that a robust and reliable model of the meaning of
compound nominalisations is achievable with the aid of current
broad-coverage parsers.
@article{grover:etal:2005,
author = {Claire Grover and Mirella Lapata and Alex Lascarides},
year = {2005},
title = {A Comparison of Parsing Technologies for the Biomedical Domain},
journal = {Natural Language Engineering},
volume = {11},
number = {1},
pages = {25--65}
}
Lapata, M. and A. Lascarides (2003) A
Probabilistic Account of Logical
Metonymy, Computational Linguistics, 29(2),
pp263--317.
In this paper we investigate logical metonymy, i.e., constructions
involving a form of semantic type coercion, in that the semantic type
of the argument of a word in syntax appears to be different from the
semantic type of that argument in logical form (e.g., enjoy the
book means enjoy reading the book, and easy problem
means a problem that is easy to solve). The systematic variation in
the interpretation of such constructions suggests a rich and complex
theory of composition on the syntax/semantics interface (e.g.,
Pustejovsky, 1995). But the generative devices which are used to
model logical metonymy typically fail to exhaustively describe all the
possible interpretations, or they don't rank those interpretations in
terms of their likelihood. In view of this, we acquire the meanings of
metonymic verbs and adjectives from a large corpus and propose a
probabilistic model which provides a ranking on the set of possible
interpretations. We identify lexical semantic information
automatically by exploiting the consistent correspondences between
surface syntactic cues and lexical meaning. We evaluate our results
against paraphrase judgements elicited experimentally from humans, and
show that the model's ranking of meanings correlates reliably with
human intuitions: meanings that are found highly probable by the model
are also rated as plausible by the human subjects.
@article{lapata:lascarides:2003,
author = {Mirella Lapata and Alex Lascarides},
year = {2003},
title = {A Probabilistic Account of Logical Metonymy},
journal = {Computational Linguistics},
volume = {29},
number = {2},
pages = {263--317},
publisher = {MIT Press}
}
Asher, N. and A. Lascarides (2001) Indirect Speech Acts,
Synthese, 128(1--2), pp183--228,
Kluwer Academic Publishers.
In this paper, we address several puzzles concerning speech acts,
particularly indirect speech acts. We show how a formal semantic
theory of discourse interpretation can be used to define speech acts
and to avoid murky issues concerning the metaphysics of action. We
provide a formally precise definition of indirect speech acts,
including the subclass of so-called conventionalized indirect speech
acts. This analysis draws heavily on parallels between phenomena at
the speech act level and the lexical level. First, we argue that,
just as co-predication shows that some words can behave linguistically
as if they're `simultaneously' of incompatible semantic types, certain
speech acts behave this way too. Secondly, as Horn and Bayer (1984)
and others have suggested, both the lexicon and speech acts are
subject to a principle of blocking or ``preemption by synonymy'':
Conventionalised indirect speech acts can block their `paraphrases'
from being interpreted as indirect speech acts, even if this
interpretation is calculable from Gricean-style principles. We
provide a formal model of this blocking, and compare it with existing
accounts of lexical blocking.
@article{asher:lascarides:2001,
author = {Nicholas Asher and Alex Lascarides},
year = {2001},
title = {Indirect Speech Acts},
journal = {Synthese},
volume = {128},
number = {1--2},
pages = {183--228}
}
Lascarides, A. and A. Copestake (1999) Default Representation
in Constraint-based Frameworks,
Computational Linguistics, 25(1), pp55--105, MIT Press.
Default unification has been used in several linguistic
applications. Most of them have utilised defaults at a meta-level,
as part of an extended description language. We propose that
allowing default unification to be a fully integrated part of a
typed feature structure system requires default unification to be a
binary, order independent function, so that it acquires the
perspicuity and declarativity familiar from normal unification-based
frameworks. Furthermore, in order to respect the behaviour of
defaults, default unification should allow default reentrancies and
values on more general types to be overridden by conflicting default
information on more specific types. We define what we believe is
the first definition of default unification to satisfy these
criteria, and argue that it can improve the declarativity of
existing uses of default inheritance within the lexicon (because it
doesn't require one to pre-specify the order in which information is
accumulated) without loss of expressivity (because it validates the
overriding of general defaults by conflicting more specific ones).
We also argue that some linguistic phenomena suggest that there are
conventional default generalisations that persist as default beyond
the lexicon, and are potentially overridden by more open-ended
pragmatic reasoning. We demonstrate that our version of default
unification can be used to model this. Finally, we discuss the
complexity of the operation, and argue that the overhead of using it
in practical systems need not be large.
@article{lascarides:copestake:1999,
author = {Alex Lascarides and Ann Copestake},
year = {1999},
title = {Default Representation in Constraint-based Frameworks},
journal = {Computational Linguistics},
volume = {25},
number = {1},
pages = {55--105}
}
Asher, N. and A. Lascarides (1998) The Semantics and Pragmatics of
Presupposition, Journal of Semantics, 15(2),
pp239--299,
Oxford University Press.
In this paper, we offer a novel analysis of presuppositions, paying
particular attention to the interaction between the knowledge
resources that are required to interpret them. The analysis has two
main features. First, we capture an analogy between presuppositions,
anaphora and scope ambiguity (cf. van der Sandt, 1992), by utilising
semantic underspecification (cf. Reyle, 1993). Second, resolving this
underspecification requires reasoning about how the presupposition is
rhetorically connected to the discourse context.
This has several consequences. First, since pragmatic information
plays a role in computing the rhetorical relation, it also constrains
the interpretation of presuppositions. Our account therefore goes
beyond existing ones, and provides a forum for analysing problematic
data that require pragmatic reasoning. Second, binding
presuppositions to the context via rhetorical links replaces
accommodating them, in the sense of adding them to the context
(cf. Lewis, 1979). Thus, unlike previous theories, we don't resort to
interpretation mechanisms that are peculiar to presuppositions.
Rather, they are handled entirely in terms of the discourse update
procedure.
We formalise this approach in SDRT (Asher 1993, Lascarides and Asher
1993), and demonstrate that it provides a rich framework for
interpreting presuppositions, where semantic and pragmatic constraints
are integrated.
@article{asher:lascarides:1998a,
author = {Nicholas Asher and Alex Lascarides},
year = {1998},
title = {The Semantics and Pragmatics of Presupposition},
journal = {Journal of Semantics},
volume = {15},
number = {2},
pages = {239--299},
publisher = {Oxford University Press}
}
Asher, N. and A. Lascarides (1998) Bridging,
Journal of Semantics, 15(1), pp83-113, Oxford
University Press.
In this paper, we offer a novel analysis of bridging, paying
particular attention to definite descriptions. We argue that extant
theories don't do justice to the way different knowledge resources
interact. In line with Hobbs (1979), we claim that the rhetorical
connections between the propositions introduced in the text play an
important part. But our work is distinct from his in that we model
how this source of information interacts with compositional and
lexical semantics. We formalise bridging in a framework known as SDRT
(Asher, 1993). We demonstrate that this provides a richer, more
accurate interpretation of definite descriptions than has been offered
so far.
@article{asher:lascarides:1998b,
author = {Nicholas Asher and Alex Lascarides},
year = {1998},
title = {Bridging},
journal = {Journal of Semantics},
volume = {15},
number = {1},
pages = {83--113},
publisher = {Oxford University Press}
}
Asher, N. and A. Lascarides (1998) Questions in Dialogue,
Linguistics and Philosophy, 23(3), pp237-309,
Kluwer Academic Publishers.
In this paper we explore how compositional semantics, discourse
structure, and the cognitive states of participants all contribute to
pragmatic constraints on answers to questions in dialogue. We
synthesise formal semantic theories on questions and answers with
techniques for discourse interpretation familiar from computational
linguistics, and show how this provides richer constraints on
responses in dialogue than either component can achieve alone.
@article{asher:lascarides:1998c,
author = {Nicholas Asher and Alex Lascarides},
year = {1998},
title = {Questions in Dialogue},
journal = {Linguistics and Philosophy},
volume = {23},
number = {3},
pages = {237--309}
}
Lascarides, A. and A. Copestake (1998) Pragmatics and Word
Meaning, Journal of
Linguistics, 34(2), pp387-414, Cambridge University Press.
In this paper, we explore the interaction between lexical semantics
and pragmatics. We argue that linguistic processing is
informationally encapsulated and utilises relatively simple
`taxonomic' lexical semantic knowledge. On this basis, defeasible
lexical generalisations deliver defeasible parts of logical form. In
contrast, pragmatic inference is open-ended and involves arbitrary
real-world knowledge. Two axioms specify when pragmatic defaults
override lexical ones. We demonstrate that modeling this interaction
allows us to achieve a more refined interpretation of words in a
discourse context than either the lexicon or pragmatics could do on
their own.
@article{lascarides:copestake:1998,
author = {Alex Lascarides and Ann Copestake},
title = {Pragmatics and Word Meaning},
journal = {Journal of Linguistics},
year = {1998},
volume = {34},
number = {2},
pages = {387--414},
topic = {pragmatics;lexical-semantics;}
}
Lascarides, A., E. J. Briscoe, N. Asher, and A. Copestake, (1996)
Order Independent and
Persistent Typed Default
Unification, Linguistics and Philosophy, 19(1), pp1-89,
Kluwer Academic Publishers.
We define an order independent version of default unification on typed
feature structures. The operation is one where default information in
a feature structure typed with a more specific type will override
default information in a feature structure typed with a more general
type, where specificity is defined by the subtyping relation in the
type hierarchy. The operation is also able to handle feature
structures where reentrancies are default. We provide a formal
semantics, prove order independence and demonstrate the utility of
this version of default unification in several linguistic
applications. First, we show how it can be used to define multiple
orthogonal default inheritance in the lexicon in a fully declarative
fashion. Secondly, we show how default lexical specifications
(introduced via default lexical inheritance) can be made to usefully
`persist beyond the lexicon' and interact with syntagmatic rules.
Finally, we outline how persistent default unification might underpin
default feature propagation principles and a more restrictive and
constraint-based approach to lexical rules.
@article{lascarides:etal:1996,
author = {Alex Lascarides and Ted Briscoe and Nicholas Asher and
Ann Copestake},
title = {Order Independent and Persistent Typed Default Unification},
journal = {Linguistics and Philosophy},
year = {1996},
volume = {19},
number = {1},
pages = {1--89},
topic = {nm-ling;unification;default-unification;}
}
Lascarides, A., A. Copestake, and E. J. Briscoe, (1996)
Ambiguity and Coherence, Journal of Semantics, 13(1),
pp41-65, Oxford University Press.
Several recent theories of linguistic representation treat the lexicon
as a highly structured object, incorporating fairly detailed semantic
information, and allowing multiple aspects of meaning to be
represented in a single entry (e.g. Pustejovsky, 1991; Copestake,
1992; Copestake and Briscoe, 1995). One consequence of these
approaches is that word senses cannot be thought of as discrete units
which are in one-to-one correspondence with lexical entries. This has
many advantages in allowing an account of systematic polysemy, but
leaves the problem of accounting for effects such as zeugma and the
absence of crossed readings, which have traditionally been explained
in terms of multiple lexical entries, but which can also arise in
examples where other criteria demand that a single entry be involved.
Copestake and Briscoe (1995) claimed that these cases could be
explained by discourse coherence, but did not describe how this might
work. We remedy this here, by formalising a general pragmatic
principle which encapsulates discourse effects on word meaning. We
demonstrate how it contributes to the creation of zeugma and the
non-availability of crossed readings.
@article{lascarides:etal:1996b,
author = {Alex Lascarides and Ann Copestake and Ted Briscoe},
year = {1996},
title = {Ambiguity and Coherence},
journal = {Journal of Semantics},
volume = {13},
number = {1},
pages = {41--65}
}
Asher, N. and A. Lascarides, (1995) Lexical
Disambiguation in a
Discourse Context, Journal of
Semantics, 12(1), pp69-108, Oxford University Press.
In this paper we investigate how discourse structure affects the
meanings of words, and how the meanings of words affect discourse
structure. We integrate three ingredients: a theory of discourse
structure called SDRT, which represents discourse in terms of
rhetorical relations that glue together the propositions introduced by
the text segments; an accompanying theory of discourse attachment
called DICE, which computes which rhetorical relations hold between
the constituents, on the basis of the reader's background information;
and a formal language for specifying the lexical knowledge---both
syntactic and semantic---called the LKB. Through this integration, we
can model the information flow from words to discourse, and discourse
to words. From words to discourse, we show how the LKB permits the
rules for computing rhetorical relations in DICE to be generalised and
simplified, so that a single law applies to several semantically
related lexical items. From discourse to words, we encode two novel
heuristics for lexical disambiguation: disambiguate words so that
discourse incoherence is avoided; and disambiguate words so that
rhetorical connections are reinforced. These heuristics enable us to
tackle several cases of lexical disambiguation that have until now
been outside the scope of theories of lexical processing.
@article{asher:lascarides:1995,
author = {Nicholas Asher and Alex Lascarides},
year = {1995},
title = {Lexical Disambiguation in a Discourse Context},
journal = {Journal of Semantics},
volume = {12},
number = {1},
pages = {69--108},
publisher = {Oxford University Press}
}
Lascarides, A. and N. Asher (1993) Temporal
Interpretation,
Discourse Relations and Commonsense Entailment, Linguistics and
Philosophy, 16(5),
pp437-493, Kluwer Academic Publishers, Dordrecht,
Holland.
This paper presents a formal account of how to determine the discourse
relations between propositions introduced in a text, and the relations
between the events they describe. The distinct natural
interpretations of texts with similar syntax are explained in terms of
defeasible rules. These characterise the effects of causal knowledge
and knowledge of language use on interpretation. Patterns of
defeasible entailment that are supported by the logic in which the
theory is expressed are shown to underlie temporal interpretation.
@article{lascarides:asher:1993a,
author = {Alex Lascarides and Nicholas Asher},
year = {1993},
title = {Temporal Interpretation, Discourse Relations and Commonsense
Entailment},
journal = {Linguistics and Philosophy},
volume = {16},
number = {5},
pages = {437--493},
publisher = {Kluwer Academic Publishers}
}
Lascarides, A. and J. Oberlander, (1993)
Temporal Coherence
and Defeasible Knowledge, Theoretical Linguistics,
19(1), pp1--35, Walter de Gruyter, Berlin, New York.
We discuss data involving the temporal structure of connected
discourse. Questions are raised about the relation between clause
order in discourse and causal order in the world, and about the
coherence of certain discourses. We maintain that interpretation is
contextually influenced by knowledge of the world and of pragmatics,
and that the role of this knowledge should be formalised via a
defeasible logic. It transpires that a constrained set of reasoning
patterns underlies the retrieval of certain temporal structures. Not
all defeasible logics capture the set; the data help choose between
candidate logics. We demonstrate that an adequate logic characterises
when a text is temporally coherent, reliable and unambiguous relative
to the context. We also discuss defeasible reasoning in language
generation, and some consequences for the semantics-pragmatics
interface.
@article{lascarides:oberlander:1993,
author = {Alex Lascarides and Jon Oberlander},
year = {1993},
title = {Temporal Coherence and Defeasible Knowledge},
journal = {Theoretical Linguistics},
volume = {19},
number = {1},
pages = {1--35},
publisher = {Walter de Gruyter}
}
Lascarides, A. (1992) Knowledge,
Causality and Temporal
Representation, Linguistics, 30(5),
pp941-973, Walter de Gruyter, Berlin, New York.
In this paper, a formal semantic account of the simple past tense in
text is offered. The contributions to the interpretation of text made
by the text's syntactic structure, semantic content, aspectual
classification, world knowledge of the causal relations between
events, and Gricean pragmatic maxims are all represented within a
single logical framework. This feature of the theory gives rise to
solutions to several puzzles concerning the relation between the
descriptive order of events in text and their temporal relations in
interpretation.
@article{lascarides:1992,
author = {Alex Lascarides},
year = {1992},
title = {Knowledge, Causality and Temporal Representation},
journal = {Linguistics},
volume = {30},
number = {5},
pages = {941--973},
publisher = {Walter de Gruyter}
}
Lascarides, A. (1991)
The Progressive and the Imperfective
Paradox, Synthese,
87(6), pp401-447, Kluwer Academic Publishers, Dordrecht, Holland.
Formal semantics constitutes the framework of the research presented
here, and the aim is to provide a solution to the imperfective
paradox; i.e. explain why “Max was running” entails “Max ran”, but
“Max was running home” does not entail “Max ran home”. This paper
is divided into two parts. In Part I we evaluate what I will call the
Eventual Outcome Strategy for solving the imperfective paradox.
This strategy is commonly used (Dowty 1979, Hinrichs 1983, Cooper
1985), and is highly intuitively motivated. I will show, however,
that the formulations of the intuitions give rise to conflicts and
tensions when it comes to explaining the natural language data. In
Part II we offer a new approach to tackle the imperfective paradox
that overcomes the problems with the Eventual Outcome Strategy.
@article{lascarides:1991,
author = {Alex Lascarides},
year = {1991},
title = {The Progressive and the Imperfective Paradox},
journal = {Synthese},
volume = {87},
number = {6},
pages = {401--447},
publisher = {Kluwer Academic Publishers}
}
Book Chapters
Bender, E. and A. Lascarides (2013) On Modeling Scope of
Inflectional Negation, in P. Hofmeister and E. Norcliffe (eds.), The Core and the Periphery: Data Driven Perspectives on Syntax inspired by Ivan A. Sag, pp101--124, CSLI Publications.
In this paper, we investigate the representation of negated sentences in
Minimal Recursion Semantics. We begin with
its treatment in the English Resource Grammar,
a broad-coverage implemented HPSG, and argue that it is largely a
suitable representation for English, despite possible objections. We
then explore whether it is suitable for typologically different
languages: namely, those that express sentential negation via
inflection on the verb, particularly Turkish and Inuktitut. We find
that the interaction between negation and intersective modifiers
requires a change to the way in which (at least) one of them contributes to
semantic composition, and we argue for adapting the analysis of
intersective modifiers.
More generally, this work can be seen as a case study of universality
in semantic representation in surface-oriented compositional
semantics. Such representations are necessarily somewhat
language-specific.
Nonetheless, we still expect to see many common structures in areas of semantics
such as negation. As we strive to work with a surface-oriented,
compositional framework, however, we must negotiate the
surface-structural differences between languages so as to achieve
those common semantic structures. Thus a
cross-linguistically appropriate semantic representation must not
only capture meanings as they are used in different languages but also
be buildable on the basis of the diverse morphosyntactic scaffolding
provided by the different languages.
@incollection{bender:lascarides:2013,
author = {Emily Bender and Alex Lascarides},
year = {2013},
title = {On Modeling Scope of Inflectional Negation},
editor = {P.\ Hofmeister and E.\ Norcliffe},
booktitle = {The Core and the Periphery: Data Driven Perspectives on Syntax inspired by Ivan A.\ Sag},
pages = {101--124},
publisher = {CSLI Publications}
}
Lascarides, A. and N. Asher (2007) Segmented Discourse
Representation Theory: Dynamic Semantics with Discourse Structure,
in
H. Bunt and R. Muskens (eds.) Computing Meaning: Volume 3,
pp87--124, Springer.
This paper motivates and describes a dynamic semantic theory of
discourse interpretation called SDRT, which uses rhetorical
relations to model the semantics/pragmatics interface. We describe
the syntax and dynamic semantics of the language in which logical
forms are represented, a separate but related language in which
semantic underspecification is expressed as partial descriptions of
logical forms, and a glue logic which uses commonsense reasoning to
construct logical forms, relating the semantically underspecified
forms that are generated by the grammar to their pragmatically
preferred interpretations. We apply the framework to some examples
involving anaphora and other kinds of semantic ambiguities.
@incollection{lascarides:asher:2007,
author = {Alex Lascarides and Nicholas Asher},
year = {2007},
title = {Segmented Discourse
Representation Theory: Dynamic Semantics with Discourse Structure},
editor = {H.\ Bunt and R.\ Muskens},
booktitle = {Computing Meaning: Volume 3},
publisher = {Springer},
pages = {87--124}
}
Lascarides, A. and N. Asher (2004) Imperatives in Dialogue,
in P. Kuehnlein, H. Rieser and H. Zeevat (eds.) The
Semantics and Pragmatics of Dialogue for the New Millennium,
Benjamins.
In this paper, we offer a semantic analysis of imperatives. We
explore the effects of context on their interpretation, particularly
on the content of the action to be performed, and whether or not the
imperative is commanded. We demonstrate that by utilising a dynamic,
discourse semantics which features rhetorical relations such as
Narration, Elaboration and Correction, we can
capture the discourse effects as a byproduct of discourse update
(i.e., the dynamic construction of logical forms). We argue that this
has a number of advantages over static approaches and over
plan-recognition techniques for interpreting imperatives.
@incollection{lascarides:asher:2004,
author = {Alex Lascarides and Nicholas Asher},
title = {Imperatives in Dialogue},
year = {2004},
editor = {P.\ Kuehnlein and H.\ Rieser and H.\ Zeevat},
booktitle = {The Semantics and Pragmatics of Dialogue for the New Millennium},
publisher = {Benjamins}
}
Schlangen, D. and A. Lascarides (2003) A Compositional and
Constraint-Based Approach to Non-Sentential Utterances, in Muller,
S. (ed.) The Proceedings of the 10th International Conference on
Head-Driven Phrase Structure Grammar, pp. 380--390, CSLI
Publications.
We present an approach to non-sentential utterances like B's utterance
in the following dialogue:
A: Who came to the party?
B: Peter
Such utterances pose several puzzles: they convey `sentence-type'
messages (propositions, questions, requests) while being of
non-sentential form; and they are constrained both syntactically and
semantically by the context. We address these puzzles in our approach
which is compositional, since we provide a formal semantics of these
utterances which is independent of context, and constraint-based
because resolution is based on collecting contextual constraints.
@inproceedings{schlangen:lascarides:2003,
author = {D.~Schlangen and A.~Lascarides},
year = {2003},
title = {A Compositional and Constraint-Based Approach to Non-Sentential Utterances},
booktitle = {Proceedings of the 10th International Conference on Head-Driven Phrase Structure Grammar},
pages = {380--390},
publisher = {CSLI Publications}
}
Asher, N. and A. Lascarides (2001) The Semantics and
Pragmatics of Metaphor, in Bouillon, P. and F. Busa (eds.)
The Language of Word Meaning, Cambridge
University Press, pp262--289.
This paper focuses on metaphor and the interpretation of metaphor in a
discourse setting. There have been several accounts put forward by
eminent philosophers of language---Black (1962), Hesse (1966) and
Searle (1979), among others---but none of them are satisfactory. They
offer a few rules for metaphoric interpretation, but many of them are
redundant, and they form a list without much coherence.
Many have thought that the principles of metaphorical interpretation
cannot be formally specified (e.g., Davidson, 1984). We'll attack
this position with two claims. The first is that we support the view
taken by Lakoff and Johnson (1980), that some aspects of metaphor are
productive. We enrich this position, by demonstrating that this
productivity can be captured effectively by encoding generalisations
that limit metaphorical interpretation in a constraint-based framework
for defining lexical semantics. Indeed from a methodological
perspective, we would claim that the productive aspects of metaphor
can give the linguist clues about how to represent semantic
information in lexical entries.
@incollection{asher:lascarides:2001a,
author = {Nicholas Asher and Alex Lascarides},
year = {2001},
title = {The Semantics and Pragmatics of Metaphor},
booktitle = {The Language of Word Meaning},
editor = {P.\ Bouillon and F.\ Busa},
pages = {262--289},
publisher = {Cambridge University Press}
}
Oberlander, J. and A. Lascarides, (2000) Laconic Discourses and
Total Eclipses: Abduction in DICE, in Harry Bunt and William Black
(eds.),
Abduction, Beliefs and Context: Studies in Computational Pragmatics,
pp391--412, John Benjamins, London.
The purpose of this chapter is to demonstrate one particular use of
abduction in the processing of natural language discourse. DICE
(Discourse In Commonsense Entailment) can be used for both
interpretation and generation. For interpretation, it uses defeasible
deduction to compute the discourse structures and the event structures
of multi-sentential text. For generation, it uses abduction to build
up specifications of text from the underlying event structures. Here
we demonstrate how the information flow between implicatures on the one
hand and interpretation and generation on the other can be modelled,
thereby showing that DICE provides a `reversible' model of the
semantics/pragmatics interface.
@incollection{oberlander:lascarides:2000,
author = {Jon Oberlander and Alex Lascarides},
year = {2000},
title = {Laconic Discourses and Total Eclipses: Abduction in DICE},
editor = {H.\ Bunt and W.\ Black},
booktitle = {Abduction, Beliefs and Context: Studies in Computational Pragmatics},
pages = {391--412},
publisher = {John Benjamins}
}
Briscoe, E. J., A. Copestake, and A. Lascarides, (1995) Blocking, in
St. Dizier, P. and Viegas, E. (eds.) Computational Lexical Semantics,
pp273--302,
Cambridge University Press.
A major motivation for the introduction of default inheritance
mechanisms into theories of lexical organisation has been to account
for the prevalence of the family of phenomena variously described as
blocking (Aronoff, 1976:43), the elsewhere condition (Kiparsky, 1973),
or preemption by synonymy (Clark and Clark, 1979:798). In Copestake
and Briscoe (1991) we argued that productive processes of sense
extension also undergo the same process, suggesting that an integrated
account of lexical semantic and morphological processes must allow for
blocking. In this paper, we review extant accounts which follow from
theories of lexical organisation based on default inheritance, such as
Paradigmatic Morphology (Calder, 1989), {\sc datr} (Evans \& Gazdar,
1989), ELU (Russell et al., 1991, in press), Word Grammar (Hudson,
1990; Fraser and Hudson, 1992), or the LKB (Copestake 1992; this
volume; Copestake et al., in press). We argue that these theories
fail to capture the full complexity of even the simplest cases of
blocking and sketch a more adequate framework, based on a nonmonotonic
logic that incorporates more powerful mechanisms for resolving
conflict among defeasible knowledge resources (Commonsense Entailment,
Asher and Morreau, 1991). Finally, we explore the similarities and
differences between various phenomena which have been intuitively felt
to be cases of blocking within this formal framework, and discuss the
manner in which such processes might interact with more general
interpretative strategies during language comprehension. Our
presentation is necessarily brief and rather informal; we are
primarily concerned to point out the potential advantages of using a more
expressive default logic for remedying some of the inadequacies of
current theories of lexical description.
@incollection{briscoe:etal:1995,
author = {Ted Briscoe and Ann Copestake and Alex Lascarides},
year = {1995},
title = {Blocking},
editor = {P.\ St Dizier and E.\ Viegas},
booktitle = {Computational Lexical Semantics},
pages = {273--302},
publisher = {Cambridge University Press}
}
Lascarides, A. and J. Oberlander, (1992) Abducing Temporal
Discourse, in Dale, R., Hovy, E., Rosner, D. and Stock, O. (eds.)
Aspects of Automated Natural Language Generation,
pp167--182, Springer Verlag.
We focus on the following question: given the causal and temporal
relations between events in a knowledge base, what are the ways they
can be described in extended text? We argue that we want to be able
to generate laconic text, where certain temporal information
remains implicit but pragmatically inferrable. An algorithm for
generating laconic text is proposed, interleaving abduction and
nonmonotonic deduction over a formal model of pragmatic implicature.
We demonstrate that the nonmonotonicity ensures that the generation of
laconic text is influenced by the preceding linguistic and
extra-linguistic context.
@incollection{lascarides:oberlander:1992,
author = {Alex Lascarides and Jon Oberlander},
year = {1992},
title = {Abducing Temporal Discourse},
booktitle = {Aspects of Automated Natural Language Generation},
editor = {R.\ Dale and E.\ Hovy and D.\ Rosner and O.\ Stock},
pages = {167--182},
publisher = {Springer Verlag}
}
Conference Proceedings
Miceli-Barone, A., A. Lascarides and C. Innes (2023),
Dialogue-based Generation of Self-Driving Simulation
Scenarios using Large Language Models,
Proceedings of the Third International Combined Workshop on
Spatial Language Understanding and Grounded Communication for Robotics
(SPLU-RoboNLP), Singapore.
Simulation is an invaluable tool for developing and evaluating controllers for self-driving cars. Current simulation frameworks are driven by highly specialist domain-specific languages, and so a natural language interface would greatly enhance usability. But there is often a gap, consisting of tacit assumptions the user is making, between a concise English utterance and the executable code that captures the user's intent. In this paper we describe a system that addresses this issue by supporting an extended multimodal interaction: the user can follow up prior instructions with refinements or revisions, in reaction to the simulations that have been generated from their utterances so far. We use Large Language Models (LLMs) to map the user's English utterances in this interaction into domain-specific code, and so we explore the extent to which LLMs capture the context sensitivity that's necessary for computing the speaker's intended message in discourse.
@inproceedings{miceli-barone_etal_2023,
author = {Antonio Miceli-Barone and Alex Lascarides and Craig Innes},
year = {2023},
title = {Dialogue-based Generation of Self-Driving Simulation
Scenarios using Large Language Models},
booktitle = {Proceedings of the Third International Combined Workshop on
Spatial Language Understanding and Grounded Communication for Robotics
(SPLU-RoboNLP)},
address = {Singapore}
}
Park, J., A. Lascarides and R. Ramamoorthy (2023), Interactive Acquisition of Fine-grained Visual Concepts by Exploiting
Semantics of Generic Characterizations in Discourse,
Proceedings of the 15th International Conference on
Computational Semantics (IWCS), Nancy, July 2023. Best paper award
Interactive Task Learning (ITL) concerns learning about unforeseen domain concepts via natural interactions with human users.
The learner faces a number of significant constraints: learning should be online, incremental and few-shot, as it is expected to perform tangible belief updates right after novel words denoting unforeseen concepts are introduced.
In this work, we explore a challenging symbol grounding task---discriminating among object classes that look very similar---within the constraints imposed by ITL.
We demonstrate empirically that more data-efficient grounding results from exploiting the truth-conditions of the teacher's generic statements (e.g., ``Xs have attribute Z.'') and their implicatures in context (e.g., as an answer to ``How are Xs and Ys different?'', one infers Y lacks attribute Z).
@inproceedings{park:etal:2023,
author = {Jay Park and Alex Lascarides and Ram Ramamoorthy},
year = {2023},
title = {Interactive Acquisition of Fine-grained Visual Concepts by Exploiting
Semantics of Generic Characterizations in Discourse},
booktitle = {Proceedings of the 15th International Conference on
Computational Semantics (IWCS)},
address = {Nancy, July 2023}
}
Appelgren, M. and A. Lascarides (2023), Learning Manner of Execution from Partial Corrections,
Proceedings of the International Conference on Autonomous Agents and
Multi-Agent Systems: Extended Abstract (AAMAS),
London, June 2023.
Some actions must be executed in different ways depending on the context. Wiping away marker requires vigorous force, while wiping away almonds requires gentle force. We provide a model where an agent learns which manner to execute in which context, drawing on evidence from trial and error and verbal corrections when it makes a mistake (e.g., ``no, do it gently''). The learner's initial domain model lacks the concepts denoted by the words in the teacher's feedback: both those describing the context (e.g., almonds) and those describing manner (e.g., gently). We show that discourse coherence helps the agent refine its domain model and perform the symbol grounding that's necessary for using the guidance to solve its planning problem: to perform its actions in the current context in the correct way.
@inproceedings{appelgren:lascarides:2023,
author = {Mattias Appelgren and Alex Lascarides},
title = {Learning Manner of Execution from Partial Corrections},
year = {2023},
booktitle = {Proceedings of the International Conference on Autonomous Agents and
Multi-Agent Systems (AAMAS)},
address = {London, June 2023}
}
Dagan, G., F. Keller and A. Lascarides (2023) Learning the Effects of Physical Actions in a Multi-modal Environment, Findings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Croatia, May 2023.
Large Language Models (LLMs) handle physical commonsense information inadequately. As a result of being trained in a disembodied setting, LLMs often fail to predict an action's outcome in a given environment. However, predicting the effects of an action before it is executed is crucial in planning, where coherent sequences of actions are often needed to achieve a goal. Therefore, we introduce the multi-modal task of predicting the outcomes of actions solely from realistic sensory inputs (images and text). Next, we extend an LLM to model latent representations of objects to better predict action outcomes in an environment. We show that multi-modal models can capture physical commonsense when augmented with visual information. Finally, we evaluate our model's performance on novel actions and objects and find that combining modalities helps models to generalize and learn physical commonsense reasoning better.
@inproceedings{dagan:etal:2023,
author = {Gautier Dagan and Frank Keller and Alex Lascarides},
title = {Learning the Effects of Physical Actions in a Multi-modal Environment},
year = {2023},
booktitle = {Findings of the 17th Conference of the European Chapter
of the Association for Computational Linguistics (EACL)},
address = {Croatia, May 2023}
}
Rubavicius, R. and A. Lascarides (2022) Interactive Symbol Grounding with Complex Referential Expressions, Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Seattle, July 2022.
We present a procedure for learning to ground
symbols from a sequence of stimuli consisting
of an arbitrarily complex noun phrase (e.g. ``all
but one green square above both red circles.”)
and its designation in the visual scene. Our
distinctive approach combines: a) lazy fewshot
learning to relate open-class words like
green and above to their visual percepts;
and b) symbolic reasoning with closed-class
word categories like quantifiers and negation.
We use this combination to estimate new training
examples for grounding symbols that occur
within a noun phrase but aren’t designated
by that noun phrase (e.g., red in the above example),
thereby potentially gaining data efficiency.
We evaluate the approach in a visual
reference resolution task, in which the learner
starts out unaware of concepts that are part of
the domain model and how they relate to visual
percepts.
@inproceedings{rubavicius:lascarides:2022,
author = {Rimvydas Rubavicius and Alex Lascarides},
title = {Interactive Symbol Grounding with Complex Referential Expressions},
year = {2022},
booktitle = {Proceedings of the Annual Conference of the
North American Chapter of the Association for Computational Linguistics (NAACL)},
address = {Seattle, July 2022}
}
Appelgren, M. and A. Lascarides (2021) Symbol Grounding and Task Learning from Imperfect Corrections, Proceedings of the Second International Combined Workshop on Spatial
Language Understanding and Grounded Communication for Robotics (SPLU-RoboNLP), ACL-IJCNLP 2021.
This paper describes a method for learning from a teacher's potentially unreliable corrective feedback in an interactive task learning setting. The graphical model uses discourse coherence to jointly learn symbol grounding, domain concepts and valid plans. Our experiments show that the agent learns its domain-level task in spite of the teacher's mistakes.
@inproceedings{appelgren:lascarides:2021,
author = {Mattias Appelgren and Alex Lascarides},
title = {Symbol Grounding and Task Learning from Imperfect Corrections},
year = {2021},
booktitle = {Proceedings of the Second International Combined Workshop on
Spatial Language Understanding and Grounded Communication for Robotics (SPLU-RoboNLP)},
address = {ACL-IJCNLP 2021}
}
Hristov, Y., D. Angelov, M. Burke, A. Lascarides and S. Ramamoorthy (2019) Disentangled Relational Representations for Explaining and Learning from Demonstration, Proceedings of the 3rd Conference on Robot Learning (CoRL), Osaka, Japan.
Best paper runner up award.
Learning from demonstration is an effective method for human users to instruct desired robot behaviour. However, for most non-trivial tasks of practical interest, efficient learning from demonstration depends crucially on inductive bias in the chosen structure for rewards/costs and policies. We address the case where this inductive bias comes from an exchange with a human user. We propose a method in which a learning agent utilizes the information bottleneck layer of a high-parameter variational neural model, with auxiliary loss terms, in order to ground abstract concepts such as spatial relations. The concepts are referred to in natural language instructions and are manifested in the high-dimensional sensory input stream the agent receives from the world.
We evaluate the properties of the latent space of the learned model in a photorealistic synthetic environment and particularly focus on examining its usability for downstream tasks. Additionally, through a series of controlled table-top manipulation experiments, we demonstrate that the learned manifold can be used to ground demonstrations as symbolic plans, which can then be executed on a PR2 robot.
@inproceedings{hristov:etal:2019,
author = {Yordan Hristov and Daniel Angelov and Michael Burke and Alex Lascarides and
Subramanian Ramamoorthy},
year = {2019},
title = {Disentangled Relational Representations for Explaining and Learning from Demonstration},
booktitle = {Proceedings of the 3rd Conference on Robot Learning (CoRL)},
address = {Osaka, Japan}
}
Appelgren, M. and A. Lascarides (2019) Coherence, Symbol Grounding and Interactive Task Learning, Proceedings of the 23rd Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), Queen Mary University, London.
To teach agents through natural language interaction, we need methods
for updating the agent's knowledge, given a teacher's feedback. But
natural language is ambiguous at many levels and so a major challenge
is for the agent to disambiguate the intended message, given the
signal and the context in which it's uttered. In this paper we look at
how coherence relations can be used to help disambiguate the teachers'
feedback and so contribute to the agent's reasoning about how to solve
their domain-level task. We conduct experiments where the agent must
learn to build towers that comply with a set of rules, which the agent
starts out ignorant of. It is also unaware of the concepts used to
express the rules. We extend a model for learning these tasks which is
based on coherence and show experimentally that our extensions can
improve how fast the agent learns.
@inproceedings{appelgren:lascarides:2019a,
author = {Mattias Appelgren and Alex Lascarides},
title = {Coherence, Symbol Grounding and Interactive Task Learning},
year = {2019},
booktitle = {Proceedings of the 23rd Workshop on the Semantics and
Pragmatics of Dialogue (SEMDIAL)},
address = {Queen Mary University, London}
}
Innes, C. and A. Lascarides (2019) Learning Factored Markov Decision Processes with Unawareness, Proceedings of the Conference on
Uncertainty in Artificial Intelligence (UAI), Tel-Aviv, Israel.
Methods for learning and planning in sequential
decision problems often assume the learner
is aware of all possible states and actions in
advance. This assumption is sometimes untenable.
In this paper, we give a method to
learn factored Markov decision problems from
both domain exploration and expert assistance,
which guarantees convergence to near-optimal
behaviour, even when the agent begins unaware
of factors critical to success. Our experiments
show our agent learns optimal behaviour
on small and large problems, and that conserving
information on discovering new possibilities
results in faster convergence.
@inproceedings{innes:lascarides:2019a,
author = {C.\ Innes and A.\ Lascarides},
year = {2019},
title = {Learning Factored Markov Decision Processes with Unawareness},
booktitle = {Proceedings of the Conference on
Uncertainty in Artificial Intelligence (UAI)},
address = {Tel-Aviv, Israel}
}
Innes, C. and A. Lascarides (2019) Learning Structured Decision Problems with Unawareness, Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, USA.
Structured models of decision making often assume
an agent is aware of all possible states and
actions in advance. This assumption is sometimes
untenable. In this paper, we learn influence diagrams
from both domain exploration and expert
assertions in a way which guarantees convergence
to optimal behaviour, even when the agent starts
unaware of actions or belief variables that are critical
to success. Our experiments show that our
agent learns optimal behaviour on small and large
decision problems, and that allowing an agent to
conserve information upon discovering new possibilities
results in faster convergence.
@inproceedings{innes:lascarides:2019b,
author = {C.\ Innes and A.\ Lascarides},
year = {2019},
title = {Learning Structured Decision Problems with Unawareness},
booktitle = {Proceedings of the 36th International Conference on Machine
Learning (ICML)},
address = {Long Beach, USA}
}
Innes, C. and A. Lascarides (2019) Learning Factored Markov Decision Processes with Unawareness, Proceedings of the International Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS), Montreal, Canada.
Methods for learning and planning in sequential decision problems
often assume the learner is fully aware of all possible states and actions
in advance. This assumption is sometimes untenable: evidence
gathered via domain exploration or external advice may reveal not
just information about which of the currently known states are
probable, but that entirely new states or actions are possible. This
paper provides a model-based method for learning factored Markov
decision problems from both domain exploration and contextually
relevant expert corrections in a way which guarantees convergence
to near-optimal behaviour, even when the agent is initially unaware
of actions or belief variables that are critical to achieving success.
Our experiments show that our agent converges quickly on the
optimal policy for both large and small decision problems. We also
explore how an expert’s tolerance towards the agent’s mistakes
affects the agent’s ability to achieve optimal behaviour.
@inproceedings{innes:lascarides:2019c,
author = {C.\ Innes and A.\ Lascarides},
year = {2019},
title = {Learning Factored Markov Decision Processes with Unawareness},
booktitle = {Proceedings of the International Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS)},
address = {Montreal, Canada}
}
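The abstract above mentions that the agent "conserves information" when it discovers new possibilities. As a purely illustrative sketch (not the authors' algorithm), one way to conserve learned statistics over a factored state space is to re-index existing counts under a default value of the newly discovered variable, rather than restarting learning. The class and variable names here are hypothetical.

```python
# Minimal sketch: conserving learned transition/visit statistics when a
# previously unknown state variable is discovered. Past experience is
# attributed to a hypothetical default value of the new variable,
# rather than being discarded. This is an illustration of the general
# idea only, not the method from the paper.

class FactoredCounts:
    def __init__(self, variables):
        self.variables = list(variables)   # currently known state variables
        self.counts = {}                   # state tuple -> visit count

    def observe(self, state):
        """Record a visit; `state` maps variable name -> value."""
        key = tuple(state[v] for v in self.variables)
        self.counts[key] = self.counts.get(key, 0) + 1

    def add_variable(self, name, default):
        """Discover a new variable; conserve old counts under `default`."""
        self.variables.append(name)
        self.counts = {key + (default,): n for key, n in self.counts.items()}
```

On discovery, every previously seen state is simply extended with the default value, so estimates learned before the discovery remain available instead of being reset, which is one way faster convergence could arise.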
Appelgren, M. and A. Lascarides (2019) Learning Plans by Acquiring Grounded Linguistic Meanings from Corrections, Proceedings of the International Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS), Montreal, Canada.
We motivate and describe a novel task which is modelled on interactions
between apprentices and expert teachers. In the task the
agent must learn to build towers which are constrained by rules.
Whenever the agent performs an action which violates a rule the
teacher provides verbal corrective feedback (e.g. “No, put red blocks
on blue blocks”) and answers the learner’s clarification questions.
The agent must learn to build rule compliant towers from these corrections
and the context in which they were given. The agent starts
out unaware of the constraints as well as the domain concepts in
which the constraints are expressed. Therefore an agent that takes
advantage of the linguistic evidence must learn the denotations of
neologisms and adapt its conceptualisation of the planning domain
to incorporate those denotations. We show that an agent which
does utilise linguistic evidence outperforms a strong baseline which
does not.
@inproceedings{appelgren:lascarides:2019b,
author = {M.\ Appelgren and A.\ Lascarides},
year = {2019},
title = {Learning Plans by Acquiring Grounded Linguistic Meanings from Corrections},
booktitle = {Proceedings of the International Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS)},
address = {Montreal, Canada}
}
Hristov, Y., A. Lascarides and S. Ramamoorthy (2018) Interpretable Latent Spaces for Learning from Demonstration, Proceedings of the 2nd Conference on Robot Learning (CoRL), Zurich, Switzerland.
Effective human-robot interaction, such as in robot learning from
human demonstration, requires the learning agent to be able to ground
abstract concepts (such as those contained within instructions) in a
corresponding high-dimensional sensory input stream from the
world. Models such as deep neural networks, with high capacity through
their large parameter spaces, can be used to compress the
high-dimensional sensory data to lower dimensional representations.
These low-dimensional representations facilitate symbol grounding, but
may not guarantee that the representation would be
human-interpretable. We propose a method which utilises the grouping
of user-defined symbols and their corresponding sensory observations
in order to align the learnt compressed latent representation with the
semantic notions contained in the abstract labels. We demonstrate this
through experiments with both simulated and real-world object data,
showing that such alignment can be achieved in a process of physical
symbol grounding.
@inproceedings{hristov:etal:2018,
author = {Yordan Hristov and Alex Lascarides and
Subramanian Ramamoorthy},
year = {2018},
title = {Interpretable Latent Spaces for Learning from Demonstration},
booktitle = {Proceedings of the 2nd Conference on Robot Learning (CoRL)},
address = {Z\"urich, Switzerland}
}
Dobre, M. and A. Lascarides (2018) POMCP with Human Preferences in Settlers of Catan, Proceedings of the Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Edmonton, Canada.
We present a suite of techniques for extending the Partially
Observable Monte Carlo Planning algorithm to handle complex
multi-agent games. We design the planning algorithm to exploit the
inherent structure of the game. When game rules naturally cluster the
actions into sets called types, these can be leveraged to extract
characteristics and high-level strategies from a sparse corpus of
human play. Another key insight is to account for action legality both
when extracting policies from game play and when these are used to
inform the forward sampling method. We evaluate our algorithm against
other baselines and versus ablated versions of itself in the
well-known board game Settlers of Catan.
@inproceedings{dobre:lascarides:2018,
author = {Mihai Dobre and Alex Lascarides},
year = {2018},
title = {POMCP with Human Preferences in Settlers of Catan},
booktitle = {Proceedings of the Conference on Artificial Intelligence
and Interactive Digital Entertainment (AIIDE)},
address = {Edmonton, Canada}
}
Hristov, Y., S. Penkov, A. Lascarides and S. Ramamoorthy (2017) Grounding Symbols in Multimodal Instructions, Proceedings of the ACL Workshop on Language Grounding for Robotics, Vancouver, Canada.
As robots begin to cohabit with humans in semi-structured
environments, the need arises to understand instructions involving
rich variability---for instance, learning to ground symbols in the
physical world. Realistically, this task must cope with small datasets
consisting of a particular user's contextual assignment of meaning to
terms. We present a method for processing a raw stream of cross-modal
input---i.e., linguistic instructions, visual perception of a scene
and a concurrent trace of 3D eye tracking fixations---to produce the
segmentation of objects with a correspondent association to high-level
concepts. To test our framework we present experiments in a table-top
object manipulation scenario. Our results show our model learns the
user's notion of colour and shape from a small number of physical
demonstrations, generalising to identifying physical referents for
novel combinations of the words.
@inproceedings{hristov:etal:2017,
author = {Yordan Hristov and Svetlin Penkov and Alex Lascarides and
Subramanian Ramamoorthy},
year = {2017},
title = {Grounding Symbols in Multi-Modal Instructions},
booktitle = {Proceedings of the ACL Workshop on Language Grounding for
Robotics},
address = {Vancouver, Canada}
}
Dobre, M. and A. Lascarides (2017),
Exploiting Action Categories in Learning Complex Games, Proceedings of the IEEE Conference on Intelligent Systems, London. Best paper award.
This paper presents a model for planning in a highly
complex game, where certain action types are more common
than others and cyclic behaviour can also easily arise. These
issues are addressed by exploiting the inherent structure among
the possible options to enhance the online learning algorithm:
sampling during Monte Carlo Tree Search becomes a two step
process, by first sampling from a distribution over the types of
legal actions followed by sampling from individual actions of
the chosen type. This policy drastically reduces the breadth of
the rollout as well as its depth by avoiding redundant sampling
behaviour. The result is a large increase in both the performance
and efficiency of the model. Another contribution of this paper is
assessing the benefits of a parallel implementation and afterstates
in complex games. Evaluation is done via agent simulations in
the board game Settlers of Catan. The resulting agent is the first
based on purely online learning strategies that can handle the full
set of legal actions of the game. The evaluation shows that our
model outperforms previous state-of-the-art agents while taking
decisions in a time threshold tolerated by human opponents.
@inproceedings{dobre:lascarides:2017a,
author={M.\ Dobre and A.\ Lascarides},
year = {2017},
title = {Exploiting Action Categories in Learning Complex Games},
booktitle = {Proceedings of the IEEE Conference on Intelligent Systems},
address = {London}
}
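The abstract above describes rollout sampling as a two-step process: first sample a *type* of legal action, then sample an individual action of that type. As an illustrative sketch under stated assumptions (not the authors' implementation), `type_weights` stands in for a hypothetical distribution over action types, e.g. one estimated from a corpus of human play; the function names are invented for this example.

```python
import random

# Minimal sketch of a two-step rollout policy for MCTS in a game with
# typed actions: sample an action type first (biased by a learned
# distribution over types), then sample uniformly among the legal
# actions of that type. Action legality is respected by restricting
# the type distribution to types with at least one legal action.

def two_step_sample(legal_actions, type_weights, rng=random):
    """legal_actions: dict mapping action type -> list of legal actions.
    type_weights: dict mapping action type -> nonnegative weight."""
    available = {t: acts for t, acts in legal_actions.items() if acts}
    types = list(available)
    weights = [type_weights.get(t, 0.0) for t in types]
    if sum(weights) == 0:
        weights = [1.0] * len(types)   # fall back to uniform over types
    chosen_type = rng.choices(types, weights=weights, k=1)[0]
    return chosen_type, rng.choice(available[chosen_type])
```

Because whole clusters of actions are eliminated or selected at the type level, a rollout avoids enumerating every individual legal action, which is one way the breadth reduction described above can be realized.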
Dobre, M. and A. Lascarides (2017),
Combining a Mixture of Experts with Transfer Learning in
Complex Games, Proceedings of the AAAI Spring Symposium: Learning from
Observation of Humans, Stanford
We present a supervised approach for learning policies
in a highly complex game from small amounts of human
data consisting of state–action pairs. Our Neural
Network architecture can adapt to the varying size of the
set of legal actions, thus bypassing the need to hardcode
the actions in the output layer or iterate over them. This
makes the training more data efficient. We use synthetic
data created via game simulations among AI agents to
show that a mixture of experts, where each expert predicts
actions in different portions of the game, improves
performance. We then show that this approach applied
to human data also improves performance: in particular,
using transfer learning to enable one expert to learn
from another enhances performance on those portions
of the game for which there is relatively little training
data compared to other portions. The domain chosen for
evaluation is the board game Settlers of Catan.
@inproceedings{dobre:lascarides:2017b,
author = {Mihai Dobre and Alex Lascarides},
year = {2017},
title = {Combining a Mixture of Experts with Transfer Learning in
Complex Games},
booktitle = {Proceedings of the AAAI Spring Symposium: Learning from
Observation of Humans},
address = {Stanford}
}
Keizer, S., M. Guhe, H. Cuayahuitl,
I. Efstathiou, K. Engelbrecht, M. Dobre,
A. Lascarides and O. Lemon (2017),
Evaluating Persuasion Strategies and Deep Reinforcement
Learning methods for Negotiation Dialogue Agents,
Proceedings of the European Chapter of the Association
for Computational Linguistics (EACL),
pages 480--484, Valencia, Spain.
In this paper we present a comparative
evaluation of various negotiation strategies
within an online version of the
game ``Settlers of Catan''. The comparison
is based on human subjects playing
games against artificial game-playing
agents (`bots') which implement different
negotiation dialogue strategies, using a
chat dialogue interface to negotiate trades.
Our results suggest that a negotiation strategy
that uses persuasion, as well as a strategy
that is trained from data using Deep
Reinforcement Learning, both lead to an
improved win rate against humans, compared
to previous rule-based and supervised
learning baseline dialogue negotiators.
@inproceedings{keizer:etal:2017,
author = {Simon Keizer and Markus Guhe and Heriberto Cuayahuitl and
Ioannis Efstathiou and Klaus-Peter Engelbrecht and Mihai Dobre and
Alex Lascarides and Oliver Lemon},
year = {2017},
title = {Evaluating Persuasion Strategies and Deep Reinforcement
Learning methods for Negotiation Dialogue Agents},
booktitle = {Proceedings of the European Chapter of the Association
for Computational Linguistics (EACL)},
pages = {480--484},
address = {Valencia}
}
Schloeder, J. and A. Lascarides (2015) Interpreting English Pitch Contours in Context, Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), pages 131--139, Gothenburg, Sweden.
This paper presents a model of how pitch contours influence the
illocutionary and perlocutionary effects of utterances in
conversation. Our account is grounded in several insights from the
prior literature. Our distinctive contribution is to replace earlier
informal claims about the implicatures arising from intonation with
logical derivations: we validate inferences in the SDRT framework
that resolve the partial meaning we associate with a pitch contour
to different specific interpretations in different contexts.
@inproceedings{schloeder:lascarides:2015,
author = {Julian Schl\"oder and Alex Lascarides},
year = {2015},
title = {Interpreting English Pitch Contours in Context},
booktitle = {Proceedings of the 19th Workshop on the Semantics and
Pragmatics of Dialogue},
pages = {131--139},
address = {Gothenburg, Sweden}
}
Dobre, M. and A. Lascarides (2015) Online Learning and Mining Human Play in Complex Games, Proceedings of the IEEE Conference on Computational Intelligence in Games (CIG), Tainan, Taiwan.
We propose a hybrid model for automatically acquiring
a policy for a complex game, which combines online
learning with mining knowledge from a corpus of human game
play. Our hypothesis is that a player that learns its policies
by combining (online) exploration with biases towards human
behaviour that’s attested in a corpus of humans playing the
game will outperform any agent that uses only one of the
knowledge sources. During game play, the agent extracts similar
moves made by players in the corpus in similar situations, and
approximates their utility alongside other possible options by
performing simulations from its current state. We implement and
assess our model in an agent playing the complex win-lose board
game Settlers of Catan, which lacks an implementation that would
challenge a human expert. The results from the preliminary set
of experiments illustrate the potential of such a joint model.
@inproceedings{dobre:lascarides:2015,
author = {M.\ Dobre and A.\ Lascarides},
year = {2015},
title = {Online Learning and Mining Human Play in Complex Games},
booktitle = {Proceedings of the IEEE Conference on
Computational Intelligence in Games (CIG)},
address = {Tainan, Taiwan}
}
Hunter, J., N. Asher and A. Lascarides (2015) Integrating Non-Linguistic Events into Discourse Structure, Proceedings of the 11th International Conference on Computational Semantics (IWCS), pages 184--194, London.
Interpreting an utterance sometimes depends on the
presence and nature of non-linguistic actions. In this paper, we
motivate and develop a semantic model of embodied interaction in
which the contribution that non-linguistic events make to the
content of the interaction is dependent on their rhetorical
connections to other actions, both linguistic and non-linguistic.
We support our claims with concrete examples from a corpus of online
chats, comparing annotations of the linguistic-only content
against annotations in which non-linguistic events in
the context are taken into account.
@inproceedings{hunter:etal:2015,
author = {J.\ Hunter and N.\ Asher and A.\ Lascarides},
year = {2015},
title = {Integrating Non-Linguistic Events into Discourse Structure},
booktitle = {Proceedings of the 11th International Conference on Computational Semantics (IWCS)},
pages = {184--194},
address = {London}
}
Perret, J., S. Afantenos, N. Asher and A. Lascarides (2014) Revealing Resources in Strategic Contexts, Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), pages 135--144, Edinburgh.
Identifying an optimal game strategy often
involves estimating the strategies of other
agents, which in turn depends on hidden
parts of the game state. In this paper we
focus on the win-lose game The Settlers of
Catan (or Settlers), in which players negotiate
over limited resources. More precisely,
our goal is to map each player’s utterances
in such negotiations to a model
of which resources they currently possess,
or don’t possess. Our approach comprises
three subtasks: (a) identify whether
a given utterance (dialogue turn) reveals
possession of a resource, or not; (b) determine
the type of resource; and (c) determine
the exact interval representing the
quantity involved. This information can
be exploited by a Settlers playing agent to
identify his optimal strategy for winning.
@inproceedings{perret:etal:2014,
author = {Jeremy Perret and Stergos Afantenos and Nicholas Asher and Alex Lascarides},
year = {2014},
title = {Revealing Resources in Strategic Contexts},
booktitle = {Proceedings of the 18th Workshop on
the Semantics and Pragmatics of Dialogue (SEMDIAL)},
address = {Edinburgh},
pages = {135--144}
}
Guhe, M. and A. Lascarides (2014) Persuasion in Complex Games, Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), pages 62--70, Edinburgh
We study the power of persuasion in a game where each player's own
preferences over the negotiation's outcomes are dynamic and
uncertain. Our empirical set up supports evaluating individual
aspects of the persuasion and reaction strategies in controlled
ways. We show how this general method gives rise to domain-specific
conclusions, in our case for The Settlers of Catan: e.g.,
the less scope there is for persuading during the game, the more one
must ensure one gains an immediate benefit from it beyond the
desired trade.
@inproceedings{guhe:lascarides:2014a,
author = {Markus Guhe and Alex Lascarides},
year = {2014},
title = {Persuasion in Complex Games},
booktitle = {Proceedings of the 18th Workshop on
the Semantics and Pragmatics of Dialogue (SEMDIAL)},
address = {Edinburgh},
pages = {62--70}
}
Guhe, M. and A. Lascarides (2014) The Effectiveness of Persuasion in The Settlers of Catan, Proceedings of the IEEE Conference on Computational Intelligence in Games (CIG), pages 25--32, Dortmund.
We present a method for obtaining a useful symbolic model of
persuasion in a complex game, where players' preferences over the
outcomes of negotiations over resources are incomplete, uncertain,
and dynamic. We focus on the problem of identifying the stage in the
game where successfully persuading an agent to perform a particular
action will have the most impact on one's chances to win. Our
approach exploits empirical data from game simulations, where the
set up allows us to investigate individual aspects of the policy in
suitably controlled ways. We demonstrate the effectiveness of the
approach within the domain of The Settlers of Catan and
present some specific lessons that can be learned for this
particular game, e.g. that a tipping point in the game occurs for
persuasion moves that are made when the leading player reaches 7
Victory Points.
@inproceedings{guhe:lascarides:2014b,
author = {Markus Guhe and Alex Lascarides},
year = {2014},
title = {The Effectiveness of Persuasion in {\em The Settlers of Catan}},
booktitle = {Proceedings of the IEEE Conference on Computational Intelligence in Games},
pages = {25--32},
address = {Dortmund}
}
Guhe, M. and A. Lascarides (2014) Game Strategies in The Settlers of Catan, Proceedings of the IEEE Conference on Computational Intelligence in Games (CIG), pages 201--208, Dortmund.
We present an empirical framework for testing game strategies in
The Settlers of Catan, a complex win--lose game that lacks
any analytic solution. This framework provides the means to change
different components of an autonomous agent's strategy, and to test
them in suitably controlled ways via performance metrics in game
simulations and via comparisons of the agent's behaviours with those
exhibited in a corpus of humans playing the game. We provide changes
to the game strategy that not only improve the agent's strength, but
corpus analysis shows that they also bring the agent closer to a model
of human players.
@inproceedings{guhe:lascarides:2014c,
author = {Markus Guhe and Alex Lascarides},
year = {2014},
title = {Game Strategies in {\em The Settlers of Catan}},
booktitle = {Proceedings of the IEEE Conference on Computational Intelligence in Games},
pages = {201--208},
address = {Dortmund}
}
Guhe, M., A. Lascarides, K. O'Connor and V. Rieser (2013) Effects of
Belief and Memory on Strategic Negotiation, Proceedings of the 17th
Workshop on the Semantics and Pragmatics of Dialogue (DialDam), Amsterdam.
We present an empirical framework for testing negotiation strategies
in a complex win--lose game that lacks any analytic solution. We
explore how different belief and memory models
affect trading and win rates. We show that cognitive limitations can
be compensated for by being an `optimistic' negotiator: make your
desired trade offer, regardless of your beliefs about how opponents
will react. In contrast, agents with good cognitive abilities can
win with fewer but more effective offers. Corpus analysis shows
human negotiators are somewhere in between, suggesting that they
compensate for deficient memory and belief when necessary.
@inproceedings{guhe:etal:2013,
author = {Markus Guhe and Alex Lascarides and Kevin O'Connor and Verena Rieser},
year = {2013},
title = {Effects of Belief and Memory on Strategic Negotiation},
booktitle = {Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue (DialDam)},
address = {Amsterdam}
}
Cadilhac, A., N. Asher, F. Benamara and A. Lascarides (2013)
Grounding Strategic Conversation: Using negotiation dialogues to predict trades in a win-lose game, Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 357--368, Seattle.
This paper describes a method that predicts which trades players execute
during a win-lose game. Our method uses data collected from chat negotiations of the game The Settlers of Catan and exploits the conversation to
construct dynamically a partial model of each player's preferences. This in
turn yields equilibrium trading moves via principles from game
theory. We compare our method against four
baselines and show that tracking how
preferences evolve through the dialogue and
reasoning about equilibrium moves are both crucial to success.
@inproceedings{cadilhac:etal:2013,
author = {Ana\"is Cadilhac and Nicholas Asher and Farah Benemara and Alex Lascarides},
year = {2013},
title = {Grounding Strategic Conversation: Using negotiation dialogues to predict trades in a win-lose game},
booktitle = {Proceedings of Empirical Methods in Natural Language Processing (EMNLP)},
address = {Seattle},
pages = {357--368}
}
Afantenos, S., N. Asher, F. Benamara, A. Cadilhac, C. Dégremont,
P. Denis, M. Guhe, S. Keizer, A. Lascarides, O. Lemon, P. Muller,
S. Paul, V. Rieser and L. Vieu, (2012) Developing a corpus of
strategic conversation in The Settlers of Catan, in Proceedings of
the 1st Workshop on Games and NLP
We describe a dialogue model and an implemented annotation scheme for
a pilot corpus of annotated online chats concerning bargaining
negotiations in the game The Settlers of Catan. We will use
this model and data to analyze how conversations proceed in the
absence of strong forms of cooperativity, where agents have diverging
motives. Here we concentrate on the description of our annotation
scheme for negotiation dialogues, illustrated with our pilot data, and
some perspectives for future research on the issue.
@inproceedings{afantenos:etal:2012a,
author = {Stefanos Afantenos and Nicholas Asher and Farah Benamara and Ana\"is Cadilhac and C\'{e}dric D\'{e}gremont and Pascal Denis and Markus Guhe and Simon Keizer and Alex Lascarides and Oliver
Lemon and Philippe Muller and Soumya Paul and Verena Rieser and Laure Vieu},
year = {2012},
title = {Developing a corpus of strategic conversation in The Settlers of
Catan},
booktitle = {Proceedings of the 1st Workshop on Games and NLP},
address = {Kanazawa, Japan}
}
Afantenos, S., N. Asher, F. Benamara, A. Cadilhac, C. Degremont, P. Denis, M. Guhe, S. Keizer, A. Lascarides, O. Lemon, P. Muller, S. Paul, V. Popescu, V. Rieser and L. Vieu (2012) Modelling Strategic Conversation: model, annotation design and corpus, Proceedings
of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial), Paris.
@inproceedings{afantenos:etal:2012b,
author = {Stefanos Afantenos and Nicholas Asher and Farah Benamara and Ana\"is Cadilhac and C\'{e}dric D\'{e}gremont and Pascal Denis and Markus Guhe and Simon Keizer and Alex Lascarides and Oliver Lemon and Philippe Muller and Soumya Paul and Vladimir Popescu and Verena Rieser and Laure Vieu},
year = {2012},
title = {Modelling Strategic Conversation: model, annotation design and corpus},
booktitle = {Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial)},
address = {Paris}
}
Asher, N., A. Lascarides, O. Lemon, M. Guhe, V. Rieser, P. Muller, S. Afantenos, F. Benamara, L. Vieu, P. Denis, S. Paul, S. Keizer, and C. Degremont (2012) Modelling Strategic Conversation: the STAC Project, Proceedings
of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial), Paris.
The STAC project will develop new, formal and robust models of
non-cooperative conversation, drawing from ideas in linguistics,
philosophy, computer science, and economics. The project brings a
state-of-the-art linguistic theory of discourse interpretation
together with new data-driven models of agent interaction and
strategic decision making. Here we discuss the project's linguistic
underpinnings, and the conceptual and empirical challenges the project
faces. We also describe the project's current data collection
activities.
@inproceedings{asher:etal:2012}
author = {Nicholas Asher and Alex Lascarides and Oliver Lemon and Markus Guhe and Verena Rieser and Philippe Muller and Stefanos Afantenos and Farah Benamara and Laure Vieu and Pascal Denis and Soumya Paul and Simon Keizer and Cedric D\'{e}gremont},
year = {2012},
title = {Modelling Strategic Conversation: the STAC Project},
booktitle = {Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial)},
address = {Paris}
}
Asher, N. and A. Lascarides (2012)
A Cognitive Model of Conversation, Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial), Paris.
This paper describes a
symbolic model of rational action and decision making to support
analysing dialogue. The model approximates principles of behaviour
from game theory, and its proof theory makes Gricean principles of
cooperativity derivable when the agents' preferences align.
@inproceedings{asher:lascarides:2012,
author = {Nicholas Asher and Alex Lascarides},
year = {2012},
title = {A Cognitive Model of Conversation},
booktitle = {Proceedings of the 16th Workshop on the Semantics and Pragmatics of Dialogue (Seinedial)},
address = {Paris}
}
Guhe, M. and A. Lascarides (2012)
Trading in a Multiplayer Board Game: Towards an Analysis of Non-Cooperative Dialogue, Proceedings of Cognitive Science, Tokyo.
We collected and analysed a small dialogue corpus of people playing
The Settlers of Catan. Dialogues are trading negotiations where
Gricean maxims of cooperativity often break down as players adopt
conflicting intentions in their attempt to win the game. This has
consequences for what information players are sharing and for the
sincerity of their contributions.
In this paper, we motivate and describe a two-level scheme for
analysing non-cooperative dialogues, where both levels are
interdependent. Each dialogue move is a move in the game (e.g., an
offer to trade), and a coherent contribution to the dialogue so far,
connected to a prior segment with a coherence relation, such as
indirect answerhood or rejection. Parsing and generating coherence
relations is computationally feasible (e.g.,
(Baldridge and Lascarides, 2005)), and here we'll argue that their
semantics help to identify the game move, even when it is implicated
rather than linguistically explicit.
@inproceedings{guhe:lascarides:2012,
author = {Markus Guhe and Alex Lascarides},
title = {Trading in a Multiplayer Board Game: Towards an Analysis of Non-Cooperative Dialogue},
year = {2012},
booktitle = {Proceedings of Cognitive Science},
address = {Tokyo}
}
Alahverdzhieva, K., D. Flickinger and A. Lascarides (2012) Multimodal Grammar Implementation, Proceedings of NAACL-HLT, Montreal.
This paper reports on an implementation of a multimodal grammar of
speech and co-speech gesture within the LKB/PET grammar
engineering environment. The implementation extends the English
Resource Grammar (ERG, Flickinger (2000)) with HPSG types and
rules that capture the form of the linguistic signal, the form of
the gestural signal and their relative timing to constrain the
meaning of the multimodal action. The grammar yields a single
parse tree that integrates the spoken and gestural modality
thereby drawing on standard semantic composition techniques to
derive the multimodal meaning representation. Using the current
machinery, the main challenge for the grammar engineer is the
non-linear input: the modalities can overlap temporally. We
capture this by identical speech and gesture token edges. Further,
the semantic contribution of gestures is encoded by lexical rules
transforming a speech phrase into a multimodal entity of conjoined
spoken and gestural semantics.
@inproceedings{alahverdzhieva:etal:2012,
author = {Katya Alahverdzhieva and Dan Flickinger and Alex Lascarides},
title = {Multimodal Grammar Implementation},
year = {2012},
booktitle = {Proceedings of Annual Meeting of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT 2012)},
address = {Montreal}
}
Alahverdzhieva, K. and A. Lascarides (2011) An HPSG Approach to Synchronous Speech and Deixis, in Stefan Müller (ed.), Proceedings of the 18th International Conference on Head-Driven Phrase Structure Grammar
(HPSG), pages 6--24, Seattle.
In this paper, we present a constraint-based analysis of multimodal
communicative signals consisting of deictic gestures and speech. Our
approach re-uses standard devices from linguistics to map multimodal
form to meaning, thereby accounting for the gestural ambiguity by
means of established underspecification mechanisms. To specify this
mapping, we use empirically extracted grammar construction rules
which capture the conditions under which the speech-deixis signal is
grammatical and semantically intended. We demonstrate that the
constraint-based grammar framework of HPSG is expressive enough to
produce multimodal logical forms from syntax.
@inproceedings{alahverdzhieva:lascarides:2011a,
author = {Katya Alahverdzhieva and Alex Lascarides},
year = {2011},
title = {An HPSG Approach to Synchronous Speech and Deixis},
booktitle = {Proceedings of the
18th International Conference on Head-Driven Phrase Structure Grammar
(HPSG)},
editor = {S.\ M\"uller},
pages = {6--24},
publisher = {CSLI Publications},
address = {Seattle}
}
Alahverdzhieva, K. and A. Lascarides (2011) Semantic Composition of Multimodal Actions in Constraint-based Grammars, Proceedings of CID 2011, Agay-Roches Rouges, Var, France.
The use of speech-accompanying hand gestures to depict
objects, to structure the discourse or to give directions is
ubiquitous in face-to-face interaction. In this paper, we analyse
multimodal signals consisting of speech and gesture from the
perspective of semantic composition in constraint-based grammars: we
elevate standard methods from linguistics to the description of multimodal
input so as to connect the semantics of the gestural signal to the
semantics of the speech signal and to produce an integrated logical form.
@inproceedings{alahverdzhieva:lascarides:2011b,
author = {Katya Alahverdzhieva and Alex Lascarides},
title = {Semantic Composition of Multimodal Actions in Constraint-based Grammars},
year = {2011},
booktitle = {Proceedings of Constraints in Discourse (CID) 2011},
address = {Agay-Roches Rouges, Var, France}
}
Cadilhac, A., N. Asher, F. Benamara and A. Lascarides (2011)
Commitments to Preferences in Dialogue, Proceedings of SIGDIAL 2011, Portland OR.
We propose a method for modelling how dialogue moves influence and
are influenced by the agents' preferences. We extract constraints
on preferences and dependencies among them, even when they are
expressed indirectly, by exploiting discourse structure. Our method
relies on a study of 20 dialogues chosen at random from the
Verbmobil corpus. We then test the algorithm's predictions against
the judgements of naive annotators on 3 random unseen dialogues. The
average annotator-system agreement and the average inter-annotator
agreement show that our method is reliable.
@inproceedings{cadilhac:etal:2011,
author = {Ana\"is Cadilhac and Nicholas Asher and Farah Benamara and Alex Lascarides},
year = {2011},
title = {Commitments to Preferences in Dialogue},
booktitle = {Proceedings of the Annual SIGDIAL Meeting on Discourse and Dialogue},
address = {Portland, OR}
}
Alahverdzhieva, K. and A. Lascarides (2011) Integration of Speech and Gesture in
a Multimodal Grammar, Proceedings of TALN 2011, Montpellier.
In this paper we present a constraint-based analysis of the
form-meaning mapping of deictic gesture and its synchronous speech
signal. Based on an empirical study of multimodal corpora, we capture
generalisations about well-formed multimodal utterances that support
the preferred interpretations in the final context-of-use. More
precisely, we articulate a multimodal grammar whose construction rules
use the prosody, syntax and semantics of speech, the form and meaning
of the deictic signal, as well as the temporal performance of speech
relative to the temporal performance of deixis to constrain the
derivation of a single multimodal tree and to map it to a meaning
representation. The contribution of our project is two-fold: it
augments the existing NLP resources with annotated speech and gesture
corpora, and it also provides the theoretical grammar framework where
the semantic composition of an utterance results from its
speech-and-deixis synchrony.
@inproceedings{alahverdzhieva:lascarides:2011c,
author = {Katya Alahverdzhieva and Alex Lascarides},
title = {Integration of Speech and Deictic Gesture in a Multimodal Grammar},
year = {2011},
booktitle = {Proceedings of Traitement Automatique de Langues Naturelles (TALN 2011)},
address = {Montpellier}
}
Asher, N. and A. Lascarides (2010) Reasoning Dynamically about What One Says,
Proceedings of the
Workshop on Theories of Information Dynamics and Interaction and their
Application to Dialogue, ESSLLI 2010, Copenhagen.
In this paper we make SDRT's glue logic for computing logical form
dynamic. This allows a dialogue agent to anticipate what the update
of the semantic representation of the dialogue would be after his next
contribution, including the effects of the rhetorical moves that he is
contemplating performing next. This is a pre-requisite for planning
what to say. We make the glue logic dynamic by extending a
dynamic public announcement logic (PAL) with the capacity to perform
default reasoning---an essential component of inferring the pragmatic
effects of one's dialogue moves. We add to the PAL language a
new type of announcement, known as ceteris paribus
announcement, and this is used to model how an agent anticipates the
default consequences of his next dialogue move. Our extended PAL
validates more intuitively compelling patterns of default inference
than existing PALs for practical reasoning, and we demonstrate via the
proof of reduction axioms that the dynamic glue logic, like its static
version, remains decidable.
@inproceedings{asher:lascarides:2010,
author = {Nicholas Asher and Alex Lascarides},
year = {2010},
title = {Reasoning Dynamically about What One Says},
booktitle = {Proceedings of
the Workshop on Theories of Information
Dynamics and Interaction and their Application to Dialogue, at
ESSLLI 2010},
address = {Copenhagen}
}
Stone, M. and A. Lascarides (2010) Coherence and Rationality in Grounding,
Proceedings of the 14th SEMDIAL Workshop on the Semantics and
Pragmatics of Dialogue, Poznan, pp51--58.
This paper analyses dialogues where understanding and agreement
are problematic. We argue that pragmatic theories can account for
such dialogues only by models that combine linguistic
principles of discourse coherence and cognitive models of practical
rationality.
@inproceedings{stone:lascarides:2010,
author = {Matthew Stone and Alex Lascarides},
year = {2010},
title = {Coherence and Rationality in Grounding},
booktitle = {Proceedings of the 14th SEMDIAL Workshop on the
Semantics and Pragmatics of Dialogue},
address = {Poznan},
pages = {51--58}
}
Alahverdzhieva, K. and A. Lascarides (2010) A Grammar for
Language and Co-verbal Gesture, Proceedings of the 4th
Conference of the International Society for Gesture Studies
(ISGS), Frankfurt.
Meaning in everyday communication is conveyed by various signals
including spoken utterances and spontaneous hand gestures. The
literature has attested that gestures function in synchrony with
speech to deliver an integrated message, or a `single thought',
exhibit language-specific properties and are subject to formal
semantic modeling. One of the challenges in modeling synchrony is to
use the form of the verbal signal, the form of the gesture and their
relative timing to produce an integrated meaning representation. We
meet this challenge by exploiting well-established semantic
composition rules for deriving meaning from the form of the multimodal
action. So, while the existing grammars (HPSG, LFG, CCG) produce
semantic representations for unimodal input, we argue that any
formalisation of language should fit into the multimodal perspective
of synchronising language and co-verbal gesture.
We will further show that any formalism that interfaces
syntax/semantics and prosody is well-suited for regimenting synchrony
and its effects on multimodal meaning, regardless of whether the
surface syntactic structure is isomorphic to prosodic structure (e.g.,
CCG) or not (e.g., HPSG, LFG).
@inproceedings{alahverdzhieva:lascarides:2010a,
author = {Katya Alahverdzhieva and Alex Lascarides},
title = {A Grammar for Language and Co-Verbal Gesture},
year = {2010},
booktitle = {Proceedings of the Fourth Conference of the International
Society for Gesture Studies (ISGS)},
address = {Frankfurt}
}
Alahverdzhieva, K. and A. Lascarides (2010) Analysing Language and Co-verbal Gesture
in Constraint-based Grammars, in Stefan Müller (ed.), Proceedings of the 17th International Conference on Head-Driven Phrase Structure Grammar
(HPSG), pp5--25, Paris.
We demonstrate that current
methods for semantic composition can be extended
to multimodal language so as to produce an integrated meaning
representation based on the form of
the verbal signal, the form of the co-verbal gesture
and their relative timing. The ambiguous gestural form provides
one-to-many form-meaning mappings without violating coherence in
the final interpretation.
@inproceedings{alahverdzhieva:lascarides:2010b,
author = {Katya Alahverdzhieva and Alex Lascarides},
year = {2010},
title = {Analysing Language and Co-verbal Gesture
in Constraint-based Grammars},
booktitle = {Proceedings of the
17th International Conference on Head-Driven Phrase Structure Grammar
(HPSG)},
editor = {S.\ M\"uller},
pages = {5--25},
address = {Paris}
}
Asher, N., E. Bonzon and A. Lascarides (2010)
Extracting and Modelling Preferences from
Dialogue, Proceedings of the International Conference on
Information Processing and Management of Uncertainty in
Knowledge-Based Systems, Dortmund.
Dialogue moves influence and are influenced by the agents'
preferences. We propose a method for modelling this interaction.
We motivate and describe a recursive method for calculating the
preferences that are expressed, sometimes indirectly, through the
speech acts performed. These yield partial CP-nets, which
provide a compact and efficient method for computing how preferences
influence each other. Our study of 100 dialogues in the Verbmobil
corpus can be seen as a partial vindication of using CP-nets to
represent preferences.
@inproceedings{asher:etal:2010,
author = {Nicholas Asher and Elise Bonzon and Alex Lascarides},
title = {Extracting and Modelling Preferences from Dialogue},
year = {2010},
booktitle = {Proceedings of the International Conference on
Information Processing and Management of Uncertainty in
Knowledge-based Systems (IPMU)},
address = {Dortmund}
}
Lascarides, A. and N. Asher (2009) The Interpretation of Questions in Dialogue,
in A. Riester and T. Solstad (eds.), Proceedings of Sinn
und Bedeutung 13, volume 5, SinSpeC. Working Papers of the SFB
732, University of Stuttgart, pp17--30.
We propose a dynamic semantics of questions in
dialogue that tracks the public commitments of each dialogue agent,
including commitments to issues raised by questions.
@inproceedings{lascarides:asher:2009,
author = {Alex Lascarides and Nicholas Asher},
title = {The Interpretation of Questions in Dialogue},
booktitle = {Proceedings of Sinn und Bedeutung 13},
year = {2009},
editor = {Riester, Arndt and Solstad, Torgrim},
volume = {5},
series = {SinSpeC. Working Papers of the SFB 732},
organization = {University of Stuttgart},
pages = {17--30}
}
Koller, A. and A. Lascarides (2009)
A Logic of Semantic Representations for
Shallow Parsing, in Proceedings of the 12th Conference of the
European Chapter of the Association for Computational Linguistics
(EACL), pp451--459, Athens.
One way to construct semantic representations in a robust manner is
to enhance shallow language processors with semantic components.
Here, we provide a model theory for a semantic formalism that is
designed for this, namely Robust Minimal Recursion Semantics (RMRS).
We show that RMRS supports a notion of entailment that allows it to
form the basis for comparing the semantic output of different parses
of varying depth.
@inproceedings{koller:lascarides:2009,
author = {Alexander Koller and Alex Lascarides},
year = {2009},
title = {A Logic of Semantic Representations for Shallow Parsing},
booktitle = {Proceedings of the 12th Meeting of the European Chapter of the Association for Computational Linguistics},
pages = {451--459},
address = {Athens}
}
Lascarides, A. and N. Asher (2008) Agreement and Disputes in
Dialogue, in Proceedings of the 9th SIGDIAL Workshop on
Discourse and Dialogue (SIGDIAL), Columbus, pp29--36.
In this paper we define agreement in terms of shared public
commitments, and implicit agreement is conditioned on the semantics of
the relational speech acts (e.g., Narration,
Explanation) that each agent performs. We provide a
consistent interpretation of disputes, and updating a logical form
with the current utterance always involves extending it and not
revising it, even if the current utterance denies earlier content.
@inproceedings{lascarides:asher2008,
author = {Alex Lascarides and Nicholas Asher},
year = {2008},
title = {Agreement and Disputes in Dialogue},
booktitle = {Proceedings of the 9th SIGDIAL Workshop on Discourse and
Dialogue (SIGDIAL)},
address = {Columbus, Ohio},
pages = {29--36}
}
Asher, N. and A. Lascarides (2008) Commitments, Beliefs and Intentions in
Dialogue, in Proceedings of the 12th Workshop on the Semantics
and
Pragmatics of Dialogue (Londial), London, pp35--42.
We define grounding in terms of shared public
commitments, and link public commitments to other, private,
attitudes within a decidable dynamic logic for computing
implicatures and predicting an agent's next dialogue move.
@inproceedings{asher:lascarides:2008,
author = {Nicholas Asher and Alex Lascarides},
year = {2008},
title = {Commitments, Beliefs and Intentions in Dialogue},
booktitle = {Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (Londial)},
address = {London},
pages = {35--42}
}
Lascarides, A. and M. Stone (2006) Formal Semantics For Iconic
Gesture, in Proceedings of the 10th Workshop on the Semantics and
Pragmatics of Dialogue (Brandial), Potsdam.
We present a formal analysis of iconic coverbal gesture. Our model
describes the incomplete meaning of gesture that's derivable from its
form, and the pragmatic reasoning that yields a more specific
interpretation. Our formalism builds on established models of
discourse interpretation to capture key insights from the descriptive
literature on gesture: synchronous speech and gesture express a single
thought, but while the form of iconic gesture is an important clue to
its interpretation, the content of gesture can be resolved only by
linking it to its context.
@inproceedings{lascarides:stone:2006,
author = {Alex Lascarides and Matthew Stone},
year = {2006},
title = {Formal Semantics for Iconic Gesture},
booktitle = {Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (Brandial)},
address = {Potsdam}
}
Sporleder, C. and A. Lascarides (2005) Exploiting Linguistic Cues to
Classify Rhetorical Relations, in Proceedings of Recent Advances in
Natural Language Processing (RANLP), Bulgaria.
We propose a method for automatically identifying rhetorical
relations. We use supervised machine learning but exploit cue phrases
to automatically extract and label training data. Our models draw on
a variety of linguistic cues to distinguish between the relations. We
show that these feature-rich models outperform the previously
suggested bigram models by more than 20%, at least for small training
sets. Our approach is therefore better suited to deal with relations
for which it is difficult to automatically label a lot of training
data because they are rarely signalled by unambiguous cue phrases
(e.g., Continuation).
@inproceedings{sporleder:lascarides:2005,
author = {Caroline Sporleder and Alex Lascarides},
year = {2005},
title = {Exploiting Linguistic Cues to Classify Rhetorical Relations},
booktitle = {Proceedings of Recent Advances in Natural Language Processing (RANLP)},
address = {Bulgaria}
}
Baldridge, J. and A. Lascarides (2005) Probabilistic Head-Driven
Parsing for Discourse Structure, in Proceedings of the Ninth
Conference on Computational Natural Language Learning (CoNLL), Ann
Arbor.
We describe a data-driven approach to building interpretable discourse
structures for appointment scheduling dialogues. We represent
discourse structures as headed trees and model them with probabilistic
head-driven parsing techniques. We show that dialogue-based features
regarding turn-taking and domain specific goals have a large positive
impact on performance. Our best model achieves an f-score of 43.2%
for labelled discourse relations and 67.9% for unlabelled ones,
significantly beating a right-branching baseline that uses the most
frequent relations.
@inproceedings{baldridge:lascarides:2005a,
author = {Jason Baldridge and Alex Lascarides},
year = {2005},
title = {Probabilistic Head-Driven Parsing for Discourse Structure},
booktitle = {Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL)},
address = {Ann Arbor}
}
Baldridge, J. and A. Lascarides (2005) Annotating Discourse Structure
for Robust Semantic Interpretation, Proceedings of the Sixth
International Workshop on Computational Semantics (IWCS), Tilburg, The
Netherlands.
To date, no one has demonstrated the feasibility of automatically
learning a comprehensive model of discourse interpretation
that integrates seamlessly with representations of compositional
semantics for individual sentences. Current approaches are either too
imprecise in terms of the semantic import they assign to
discourse-level information, or too complex to represent and compute
efficiently.
Our approach to practical discourse interpretation begins with a
reduced form of SDRT. Because SDRT analyses are created by
referencing multiple, possibly redundant information sources, a large
part of an analysis is encoded via discourse segmentation and
rhetorical relations alone. SDRT's semantic interpretation of this
yields information about the contextually-determined content of
utterances. Resolving discourse structure is thus an important step in
producing the full contextually-determined information states for
dialogues. However, producing full-fledged SDRT discourse structures
is complex. To simplify this task, we define a tree-form of discourse
structures which can be created more efficiently, both by human
annotators and discourse parsers.
@inproceedings{baldridge:lascarides:2005b,
author = {Jason Baldridge and Alex Lascarides},
year = {2005},
title = {Annotating Discourse Structures for Robust Semantic Interpretation},
booktitle = {Proceedings of the Sixth International Workshop on Computational Semantics (IWCS)},
pages = {17--29},
address = {Tilburg, The Netherlands}
}
Sporleder, C. and A. Lascarides (2004) Combining Hierarchical
Clustering and Machine Learning to Predict High-Level Discourse
Structure, Proceedings of Coling 2004, pp.43--49.
We propose a novel method to predict the inter-paragraph discourse
structure of text, i.e., to infer which paragraphs are related to each
other and form larger segments on a higher level. Our method
combines a clustering algorithm with a model of segment ``relatedness''
acquired in a machine learning step. The model integrates information
from a variety of sources, such as word co-occurrence, lexical chains,
cue phrases, punctuation and tense. Our method outperforms an
approach that relies on word co-occurrence alone.
@inproceedings{sporleder:lascarides:2004,
author = {Caroline Sporleder and Alex Lascarides},
year = {2004},
title = {Combining Hierarchical Clustering and Machine Learning to Predict High-Level Discourse Structure},
booktitle = {Proceedings of the International Conference in Computational Linguistics (COLING)},
pages = {43--49}
}
Lapata, M. and A. Lascarides (2004) Inferring Sentence-internal
Temporal Relations, in Proceedings of the North American Chapter
of the Association for Computational Linguistics (NAACL),
pp153--160.
In this paper we propose a data intensive approach for inferring
sentence-internal temporal relations, which relies on a simple
probabilistic model and assumes no manual coding. We explore various
combinations of features, and evaluate performance against a
gold-standard corpus and human subjects performing the same task. The
best model achieves 70.7% accuracy in inferring the temporal relation
between two clauses and 97.4% accuracy in ordering them, assuming
that the temporal relation is known.
@inproceedings{lapata:lascarides:2004,
author = {Mirella Lapata and Alex Lascarides},
year = {2004},
title = {Inferring Sentence-Internal Temporal Relations},
booktitle = {Proceedings of the North American Chapter of the Association for Computational Linguistics},
pages = {153--160}
}
Lapata, M. and A. Lascarides (2003) Detecting Novel Compounds: The
Role of Distributional Evidence, in Proceedings of the 11th
Conference of the European Chapter of the Association for
Computational Linguistics (EACL 2003), pp235--242.
Research on the discovery of terms from corpora has focused on word
sequences whose recurrent occurrence in a corpus is indicative of
their terminological status, and has not addressed the issue of
discovering terms when data is sparse. This becomes apparent in the
case of noun compounding, which is extremely productive: more than
half of the candidate compounds extracted from a corpus are attested
only once. We show how evidence about established (i.e., frequent)
compounds can be used to estimate features that can discriminate rare
valid compounds from rare nonce terms in addition to a variety of
linguistic features that can be easily gleaned from corpora without
relying on parsed text.
@inproceedings{lapata:lascarides:2003,
author = {Mirella Lapata and Alex Lascarides},
year = {2003},
title = {Detecting Novel Compounds: The Role of Distributional Evidence},
booktitle = {Proceedings of the 11th Meeting of the European Chapter of the Association for Computational Linguistics},
pages = {235--242}
}
Schlangen, D. and A. Lascarides (2002) Resolving Fragments Using
Discourse Information, Proceedings of the
Sixth International Workshop on the Semantics and Pragmatics of
Dialogue (EDILOG), pp. 161--168, Edinburgh, September 2002.
This paper describes an extension of RUDI, a dialogue system component
for `Resolving Underspecification with Discourse Information'
(Schlangen et al., 2001). The extension handles the resolution of the
intended meaning of non-sentential utterances that denote propositions
or questions. Some researchers have observed that there are complex
syntactic, semantic and pragmatic constraints on the acceptability of
such fragments, and have used this to motivate an unmodular
architecture for their analysis. In contrast, our implementation is
based on a clear separation of the processes of constructing
compositional semantics of fragments from those for resolving their
meaning in context. This is shown to have certain theoretical and
practical advantages.
@inproceedings{schlangen:lascarides:2002,
author = {David Schlangen and Alex Lascarides},
year = {2002},
title = {Resolving Fragments using Discourse Information},
booktitle = {Proceedings of the 6th International Workshop on the Semantics and Pragmatics of Dialogue (Edilog)},
address = {Edinburgh}
}
Grover, C., Klein, E., Lascarides, A. and Lapata, M. (2002) XML-Based
NLP for Analysing and Annotating Medical Language, Proceedings of
the Second Workshop on NLP and XML (NLPXML-02), Coling 2002,
Taipei.
We describe the use of a suite of highly flexible XML-based NLP tools
in a project for processing and interpreting text in the medical
domain. The main aim of the paper is to demonstrate the central role
that XML mark-up and XML NLP tools have played in the analysis process
and to describe the resultant annotated corpus of MedLine
abstracts. In addition to the XML tools, we have succeeded in
integrating a variety of non-XML `off the shelf' NLP tools into our
pipelines, so that their output is added into the mark-up. We
demonstrate the utility of the annotations that result in two ways.
First, we investigate how they can be used to improve parse coverage
of a hand-crafted grammar that generates logical forms. And second,
we investigate how they contribute to automatic lexical semantic
acquisition processes.
@inproceedings{grover:etal:2002,
author = {Claire Grover and Ewan Klein and Alex Lascarides and Mirella Lapata},
year = {2002},
title = {{\sc xml}-based {\sc nlp} Tools for Analysing and Annotating Medical Language},
booktitle = {Proceedings of the Second International Workshop on {\sc nlp} and {\sc xml}},
address = {Taipei, Taiwan}
}
Lascarides, A. (2001) Imperatives in Dialogue, Proceedings of
the 5th International Workshop on Formal Semantics and Pragmatics of
Dialogue (BI-DIALOG 2001), pp1--16, Bielefeld, Germany, June 2001.
In this paper, we offer a semantic analysis of imperatives. We
explore the effects of context on their interpretation, particularly
on the content of the action to be performed, and whether or not the
imperative is commanded. We demonstrate that by utilising a dynamic,
discourse semantics which features rhetorical relations such
as Narration, Elaboration and Correction,
we can capture the discourse effects as a byproduct of discourse
update (i.e., the dynamic construction of logical forms). We argue
that this has a number of advantages over static approaches and over
plan-recognition techniques for interpreting imperatives.
@inproceedings{lascarides:2001,
author = {Alex Lascarides},
year = {2001},
title = {Imperatives in Dialogue},
booktitle = {Proceedings of the 5th International Workshop on the
Formal Semantics and Pragmatics of Dialogue
(Bi-Dialog)},
note = {also to appear in Peter K\"uhnlein, Hannes Rieser and Henk
Zeevat (eds.) {\em The Semantics and Pragmatics of Dialogue}, John
Benjamins},
pages = {1--16},
address = {Bielefeld}
}
Schlangen, D., A. Lascarides and A. Copestake (2001) Resolving
Underspecification Using Discourse Information,
Proceedings of
the 5th International Workshop on Formal Semantics and Pragmatics of
Dialogue (BI-DIALOG 2001), pp79--93, Bielefeld, Germany, June 2001.
This paper describes RUDI (Resolving Underspecification with Discourse
Information), a dialogue system component which computes automatically
some aspects of the content of scheduling dialogues, particularly the
intended denotation of the temporal expressions, the speech acts
performed and the underlying goals. RUDI has a number of nice
features: it is a principled approximation of a logically precise and
linguistically motivated framework for representing semantics and
implicatures; it has a particularly simple architecture; and it
records how reasoning with a combination of goals, semantics and
speech acts serves to resolve underspecification that's generated by
the grammar.
@inproceedings{schlangen:etal:2001,
author = {David Schlangen and Alex Lascarides and Ann Copestake},
year = {2001},
title = {Resolving Underspecification Using Discourse Information},
booktitle = {Proceedings of the 5th International Workshop on Formal
Semantics and Pragmatics of Dialogue (Bi-Dialog)},
pages = {79--93},
note = {Also to appear in Peter K\"uhnlein, Hannes Rieser and Henk
Zeevat (eds.) {\em The Semantics and Pragmatics of Dialogue}, John
Benjamins},
address = {Bielefeld}
}
Copestake, A., A. Lascarides and D. Flickinger (2001) An Algebra for
Semantic Construction in Constraint-based Grammars,
Proceedings of the 39th Annual Meeting
of the Association for Computational Linguistics (ACL/EACL
2001), pp132--139, Toulouse, France.
We develop a framework for formalizing semantic construction
within grammars expressed in typed feature structure logics,
including HPSG.
The approach provides an alternative to
the lambda calculus; it maintains much of the desirable
flexibility of unification-based approaches to composition,
while constraining the allowable operations in order to
capture basic generalizations and improve maintainability.
@inproceedings{copestake:etal:2001,
author = {Ann Copestake and Alex Lascarides and Dan Flickinger},
year = {2001},
title = {An Algebra for Semantic Construction in Constraint-based
Grammars},
booktitle = {Proceedings of the 39th
Annual Meeting of the Association for Computational Linguistics
(ACL/EACL 2001)},
pages = {132--139},
address = {Toulouse}
}
Grover, C. and A. Lascarides (2001) XML-based Data Preparation for
Robust Deep Parsing,
Proceedings of the 39th Annual Meeting
of the Association for Computational Linguistics (ACL/EACL
2001), pp252--259, Toulouse, France.
We describe the use of XML tokenisation, tagging and mark-up tools
to prepare a corpus for parsing. Our techniques are generally
applicable but here we focus on parsing Medline abstracts with the
ANLT wide-coverage grammar. Hand-crafted grammars inevitably lack
coverage but many coverage failures are due to inadequacies of their
lexicons. We describe a method of gaining a degree of robustness by
interfacing POS tag information with the existing lexicon. We also
show that XML tools provide a sophisticated approach to
pre-processing, helping to ameliorate the `messiness' in real language
data and improve parse performance.
@inproceedings{grover:lascarides:2001,
author = {Claire Grover and Alex Lascarides},
title = {{\sc xml}-based Data Preparation for Robust Deep Parsing},
year = {2001},
booktitle = {Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL/EACL 2001)},
pages = {252--259},
address = {Toulouse}
}
Lascarides, A. and Asher, N. (1999) Cognitive States, Discourse
Structure and the Content of Dialogue, in Proceedings of
Amstelogue 1999, Amsterdam, March 1999.
In this paper, we focus on two puzzles concerning anaphora in
dialogue. On the basis of these puzzles we argue for three criteria
for solving them: the representation of dialogue content must include
rhetorical relations since they impose different constraints on the
antecedents to anaphora; for discourse interpretation to be
computable, it must be highly modular, for example the logic for
discourse update and the logic for cognitive modelling must be
separate; but these different modules must be allowed to interact in
complex ways. We specify an account which meets these three criteria
within Segmented Discourse Representation Theory (SDRT: Asher,
1993). We illustrate our approach by first using principles of
rationality and cooperativity to derive principles for computing
discourse content---in particular, the rhetorical connections between
the propositions. We then exploit these cognitively-motivated but
discourse-level axioms to account for the puzzles concerning anaphora.
@inproceedings{lascarides:asher:1999,
author = {Alex Lascarides and Nicholas Asher},
year = {1999},
title = {Cognitive States, Discourse Structure and the Content of
Dialogue},
booktitle = {Proceedings of the 3rd International Workshop on the
Formal Semantics and Pragmatics of Dialogue
(Amstelogue 1999)},
pages = {1--12},
address = {Amsterdam}
}
Copestake, A. and A. Lascarides (1997) Integrating
Symbolic and Statistical Representations: The Lexicon Pragmatics
Interface, Proceedings of the 35th Annual Meeting
of the Association for Computational Linguistics (ACL97), Madrid,
July 7th--12th 1997, pp136--143.
We describe a formal framework for interpretation of words and
compounds in a discourse context which integrates a symbolic
lexicon/grammar, word-sense probabilities, and a pragmatic component.
The approach is motivated by the need to handle productive word
use. In this paper, we concentrate on compound nominals. We discuss
the inadequacies of approaches which consider compound interpretation
as either wholly lexico-grammatical or wholly pragmatic, and provide
an alternative integrated account.
@inproceedings{copestake:lascarides:1997,
author = {Ann Copestake and Alex Lascarides},
year = {1997},
title = {Integrating Symbolic and Statistical Representations: The
Lexicon Pragmatics Interface},
booktitle = {Proceedings of the 35th Annual Meeting of the Association
for Computational Linguistics and the 8th Meeting of
the European Chapter of the Association for
Computational Linguistics (ACL97/EACL97)},
pages = {136--143},
address = {Madrid}
}
Asher, N. and A. Lascarides (1996) Bridging, in R. van der Sandt,
R. Blutner and M. Bierwisch (eds.), From Underspecification to
Interpretation, Working Papers of the Institute for Logic and
Linguistics. IBM Deutschland, Heidelberg.
In this paper, we offer a novel method for processing given
information in discourse, paying particular attention to definite
descriptions. We argue that extant theories don't do justice to the
complexity of interaction between the knowledge resources that are
used. In line with Hobbs (1979), we claim that discourse
structure---as defined by the rhetorical connections between the
propositions introduced in the text---is an important source of
knowledge for processing given information, because rhetorical
relations can change its semantic content. We model the
processing of given information as a byproduct of computing rhetorical
structure in a framework known as SDRT (Asher, 1993), which formalises
the interaction between discourse structure and compositional and
lexical semantics in determining semantic content. We demonstrate
that it provides a richer, more accurate interpretation of definite
descriptions than has been offered so far.
@incollection{asher:lascarides:1996,
author = {Nicholas Asher and Alex Lascarides},
year = {1996},
title = {Bridging},
booktitle = {From Underspecification to Interpretation},
publisher = {Working Papers of the Institute for Logic and Linguistics},
address = {IBM Deutschland, Heidelberg},
editor = {R.\ van der Sandt and R.\ Blutner and M.\ Bierwisch}
}
Lascarides, A. (1995) The
Pragmatics of Word Meaning, in
Proceedings of the AAAI Spring Symposium
Series: Representation and Acquisition of Lexical Knowledge:
Polysemy, Ambiguity and Generativity, pp75--80, Stanford CA,
March 1995.
A parsimonious lexicon must encode generalisations (e.g., Daelemans et
al 1992). One then needs to reason about when these apply. A general
consensus is that an operation known as default inheritance
is useful for this (Boguraev and Pustejovsky 1990, Briscoe et al 1990,
Daelemans 1987, Evans and Gazdar 1989a, Flickinger 1987, and others).
A frequent motivation for using it is to capture the overriding of
regularities by subregularities in a computationally efficient manner.
Information need only be stated once, instead of many times in each
separate word, and default inheritance ensures that words inherit the
right information.
But there's a problem with this. Many lexical generalisations are of
the sort where there are exceptions to the rules, which are triggered
by information which resides outside the lexicon. In particular,
pragmatic knowledge can trigger exceptions and default inheritance
doesn't communicate properly with pragmatics to encode this.
In this paper, we'll consider three examples where this occurs:
logical metonymy (e.g., enjoy the book means enjoy
reading the book), adjectives (e.g., the interpretation of
fast in fast car, fast motorway, fast
typist etc.), and noun-verb agreement. We'll argue for a new
version of default inheritance, which allows default results of
lexical generalisations to persist as default beyond the
lexicon. We'll show that this persistence can be exploited by the
pragmatic component, to reason about when generalisations encoded in
the lexicon survive in a discourse context. We'll represent the link
between the lexicon and pragmatics via two axioms. These will predict
the pragmatic exceptions to lexical generalisations that arise in a
discourse context. We thereby explain how words are interpreted in
discourse, in a way that neither the lexicon nor pragmatics could
achieve on their own.
@inproceedings{lascarides:1995,
author = {Alex Lascarides},
year = {1995},
title = {The Pragmatics of Word Meaning},
booktitle = {Proceedings of the AAAI Spring Symposium Series: Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity},
pages = {75--80}
}
Asher, N. and A. Lascarides, (1995) Metaphor in Discourse,
Proceedings of the AAAI Spring Symposium
Series: Representation and Acquisition of Lexical Knowledge:
Polysemy, Ambiguity and Generativity, pp3--7, Stanford CA,
March 1995.
This paper focuses on metaphor and the interpretation of metaphor in a
discourse setting. There have been several accounts put forward by
eminent philosophers of language---Max Black, John Searle and Donald
Davidson, among others---but none of them are satisfactory. They
offer a few rules for metaphoric interpretation, but many of them are
redundant, and they form a list without much coherence.
Many have thought that the principles of metaphorical interpretation
cannot be formally specified. We'll attack this position with two
claims. Our first claim is that some aspects of metaphor are
productive, and this productivity can be captured by perspicuous links
between generalisations that are specified in the lexicon, and general
purpose circumscriptive reasoning in the pragmatic component. Indeed
from a methodological perspective, we would claim the productive
aspects of metaphor can give the lexicographer clues about how to
represent semantic information in lexical entries.
Moreover, it is well known that domain knowledge influences
metaphorical interpretation. Our second claim takes this further, and
we argue that
rhetorical relations---such as Elaboration,
Contrast, and Parallel, among others---that connect
the meanings of segments of text together, also influence this.
Through studying these cases, we learn how to link lexical processing
to discourse processing in a formal framework, and we give some
preliminary accounts of how the link between words and discourse
determines metaphor.
@inproceedings{asher:lascarides:1995b,
author = {Nicholas Asher and Alex Lascarides},
year = {1995},
title = {Metaphor in Discourse},
booktitle = {Proceedings of the AAAI Spring Symposium Series: Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity},
pages = {3--7}
}
Lascarides, A., and A. Copestake, (1995) Pragmatics of Word
Meaning, Proceedings of Semantics and Linguistic Theory
(SALT5), April 1995, Austin Texas.
In this paper, we explore the interaction between lexical semantics
and pragmatics. Linguistic processing is informationally encapsulated
and utilises relatively simple `taxonomic' lexical semantic knowledge.
On this basis, defeasible lexical generalisations deliver defeasible
parts of logical form. In contrast, pragmatics is open-ended and
involves arbitrary knowledge. Two axioms specify when pragmatic
defaults override lexical ones. We demonstrate that modelling this
interaction allows us to achieve a more refined interpretation of
words in a discourse context than either the lexicon or pragmatics
could do on their own.
@inproceedings{lascarides:copestake:1995,
author = {Alex Lascarides and Ann Copestake},
title = {Pragmatics of Word Meaning},
booktitle = {Proceedings of Semantics and Linguistic Theory (SALT-5)},
year = {1995},
address = {Austin, Texas}
}
Asher, N. and A. Lascarides, (1994) Intentions and Information in
Discourse, in Proceedings of the 32nd Annual Meeting of the
Association for Computational Linguistics (ACL94), Las Cruces, USA, June 1994,
pp34--41.
This paper is about the flow of inference between communicative
intentions, discourse structure and the domain during discourse
processing. We augment a theory of discourse interpretation with a
theory of distinct mental attitudes and reasoning about them, in order
to provide an account of how the attitudes interact with reasoning
about discourse structure.
@inproceedings{asher:lascarides:1994,
author = {Nicholas Asher and Alex Lascarides},
year = {1994},
title = {Intentions and Information in Discourse},
booktitle = {Proceedings of the 32nd Annual Meeting of the Association
for Computational Linguistics (ACL94)},
pages = {34--41},
address = {Las Cruces, New Mexico}
}
Lascarides, A. and Asher, N. (1993) A Semantics and Pragmatics for the
Pluperfect, in Proceedings of the
European Chapter of the Association for
Computational Linguistics (EACL93), pp250--259, Utrecht, The
Netherlands.
We offer a semantics and pragmatics of the pluperfect in narrative
discourse. We examine, in a formal model of implicature, how the
reader's knowledge about the discourse, Gricean maxims and causation
contribute to the meaning of the pluperfect. By placing the analysis
in a theory where the interactions among these knowledge resources can
be precisely computed, we overcome some problems with previous
Reichenbachian approaches.
@inproceedings{lascarides:asher:1993,
author = {Alex Lascarides and Nicholas Asher},
year = {1993},
title = {A Semantics and Pragmatics for the Pluperfect},
booktitle = {Proceedings of the European Chapter of the Association
for Computational Linguistics (EACL93)},
pages = {250--259},
address = {Utrecht, the Netherlands}
}
Lascarides, A. and J. Oberlander, (1993) Temporal Connectives in a
Discourse Context, in Proceedings of the
European Chapter of the Association for
Computational Linguistics (EACL93), pp260--268, Utrecht, The
Netherlands. Nominated for a best paper award.
We examine the role of temporal connectives in multi-sentence
discourse. In certain contexts, sentences containing temporal
connectives that are equivalent in temporal structure can fail to be
equivalent in terms of discourse coherence. We account for this by
offering a novel, formal mechanism for accommodating the
presuppositions in temporal subordinate clauses. This mechanism
encompasses both accommodation by discourse attachment and
accommodation by temporal addition. As such, it offers a
precise and systematic model of interactions between presupposed
material, discourse context, and the reader's background knowledge.
We show how the results of accommodation help to determine a
discourse's coherence.
@inproceedings{lascarides:oberlander:1993,
author = {Alex Lascarides and Jon Oberlander},
year = {1993},
title = {Temporal Connectives in a Discourse Context},
booktitle = {Proceedings of the
European Chapter of the Association for
Computational Linguistics (EACL93)},
pages = {260--268},
address = {Utrecht}
}
Blackburn, P. and A. Lascarides, (1992) Sorts and Operators
for Temporal Semantics, Proceedings of the Fourth Symposium on
Logic and Language, Budapest, Hungary, 1992.
An essential part of natural language understanding, and hence of
formal semantics, is the interpretation of temporal expressions. But
the very variety of temporal phenomena---such as tense, aspect,
aktionsart, temporal adverbials, and the temporal structure of
extended text---has tended to result in formal semantic analyses using
a wide variety of formal tools, often of a complex nature. It seems
important to try and find unifying perspectives on this work, and
above all, to try and gain some insight into the logical resources
needed to deal with natural language temporal phenomena.
In this paper, we show how a wide variety of temporal expressions can
be analysed using a simple modal language. The underlying language is
well known in the AI literature: it's Halpern and Shoham's (1986)
monotonic interval-based logic. However, as it stands, this language
is insufficient for natural language analysis on at least two counts.
The first problem is the Reference Problem: it lacks any mechanisms
for temporal reference, which are essential for an adequate treatment
of tense, adverbials and indexicals. The second problem is the
Ontology Problem: the language doesn't reflect the wide variety of
temporal ontologies, stemming from events, states and processes, which
are essential for an adequate treatment of natural language aktionsart. We
will show how these two defects can be removed in a simple and uniform
way: sorting. Systematic use of sorting will result in
simple frameworks suitable for modelling the semantics of a wide
variety of natural language temporal expressions; moreover, as we
shall see, the framework is strong enough to model them in a variety
of ways.
This article falls into three main parts. In the first we will show
that many different kinds of referential information can easily be
marked in a propositional modal language by means of sorting. There
are many kinds of temporal referential information; we will examine
Reichenbachian reference times, indexicals and durations. In the
second part of the article, we turn our attention to temporal
ontology. It's a commonly held view in temporal semantics that in
order to treat many temporal phenomena, one must do justice to the
rich structures underlying temporal ontology. The classical work in
this area is Vendler's (1967), though the ideas stretch back much
earlier (Aristotle), and have been developed in many directions since
(e.g., Moens and Steedman 1988, Nakhimovsky 1988, Lascarides 1991).
We will see how sorting enables some of these ideas to be modeled. In
the third part we combine our sorted modal languages with ideas from
the default logic literature.
@inproceedings{blackburn:lascarides:1992,
author = {Patrick Blackburn and Alex Lascarides},
year = {1992},
title = {Sorts and Operators for Temporal Semantics},
booktitle = {Proceedings of the Fourth Symposium on Logic and Language},
address = {Budapest}
}
Lascarides, A., N. Asher, and J. Oberlander, (1992) Inferring
Discourse Relations in Context, in Proceedings of the 30th Annual
Meeting of the Association for Computational Linguistics (ACL92), pp1--8, Delaware,
USA, June 1992.
We investigate various contextual effects on text
interpretation, and account for them by providing contextual
constraints in a logical theory of text interpretation. On the basis
of the way these constraints interact with the other knowledge
sources, we draw some general conclusions about the role of
domain-specific information, top-down and bottom-up discourse
information flow, and the usefulness of formalisation in discourse theory.
@inproceedings{lascarides:etal:1992,
author = {Alex Lascarides and Nicholas Asher and Jon Oberlander},
year = {1992},
title = {Inferring Discourse Relations in Context},
booktitle = {Proceedings of the 30th Annual Meeting of the Association
for Computational Linguistics (ACL92)},
pages = {1--8},
address = {Delaware}
}
Oberlander, J. and A. Lascarides, (1992) Preventing False Temporal
Implicatures: Interactive Defaults for Text Generation, in
Proceedings of COLING92, pp721--727, Nantes France, July 1992.
Given the causal and temporal relations between events in a knowledge
base, what are the ways they can be described in text?
Elsewhere, we have argued that during interpretation, the
reader-hearer H must infer certain temporal information from
knowledge about the world, language use and pragmatics. It is
generally agreed that processes of Gricean implicature help determine
the interpretation of text in context. But without a notion of
logical consequence to underwrite them, the inferences---often
defeasible in nature---will appear arbitrary, and unprincipled.
Hence, we have explored the requirements on a formal model of temporal
implicature, and outlined one possible nonmonotonic framework for
discourse interpretation (Lascarides
and Asher (1991), Lascarides and Oberlander (1992a)).
Here, we argue that if the writer-speaker S is to tailor text to
H, then discourse generation can be informed by a similar formal
model of implicature. We suggest two ways to do it: a version of
Hobbs et al's (1988, 1990) Generation as Abduction; and the
Interactive Defaults strategy introduced by Joshi et al (1984a, 1984b,
1986). In investigating the latter strategy, the basic goal is to
determine how notions of temporal reliability, precision and coherence
can be used by a nonmonotonic logic to constrain the space of possible
utterances. We explore a defeasible reasoning framework in which the
interactions between the relative knowledge bases of S and
H help do this. Finally, we briefly discuss limitations of
the strategy: in particular, its apparent marginalisation of discourse
structure.
The paper focuses very specifically on implicatures of a temporal
nature. To examine the relevant examples in sufficient detail, we
have had to exclude discussion of many closely related issues in the
theory of discourse structure. To motivate this restriction, let us
therefore consider first why we might want to generate discourses with
structures which lead to temporal complexities.
@inproceedings{oberlander:lascarides:1992,
author = {Jon Oberlander and Alex Lascarides},
year = {1992},
title = {Preventing False Temporal
Implicatures: Interactive Defaults for Text Generation},
booktitle = {Proceedings of the International Conference in Computational Linguistics (COLING)},
pages = {721--727},
address = {Nantes, France}
}
Lascarides, A. and N. Asher, (1991) Discourse Relations and Defeasible
Knowledge, in Proceedings of the 29th Annual Meeting of
the Association for Computational Linguistics (ACL91), pp55--63,
Berkeley USA, June 1991.
This paper presents a formal account of the temporal interpretation of
text. The distinct natural interpretations of texts with similar
syntax are explained in terms of defeasible rules characterising
causal laws and Gricean-style pragmatic maxims. Intuitively
compelling patterns of defeasible entailment that are supported by the
logic in which the theory is expressed are shown to underlie temporal
interpretation.
@inproceedings{lascarides:asher:1991,
author = {Alex Lascarides and Nicholas Asher},
year = {1991},
title = {Discourse Relations and Defeasible Knowledge},
booktitle = {Proceedings of the 29th Annual Meeting of the Association
for Computational Linguistics (ACL91)},
pages = {55--63},
address = {Berkeley}
}
Oberlander, J. and A. Lascarides, (1991) Discourse Generation,
Temporal Constraints and Defeasible Reasoning, in Proceedings of
the AAAI Fall Symposium on Discourse Structure in Interpretation
and Generation, Asilomar, California, November 1991.
Given the causal and temporal relations between events in a knowledge
base, what are the ways they can be described in text?
Elsewhere, we have argued that defeasible reasoning underlies the
hearer's temporal interpretation of text. Here, we argue, on the
basis of the kind of temporal information that remains implicit in
candidate utterances, that if the speaker is to tailor text to the
hearer, then defeasible reasoning must be integrated into the
generation process. We suggest two ways in which this can be done: a
version of Hobbs et al's (1988, 1990) Generation as Abduction, and the
Interactive Defaults Strategy introduced by Joshi et al (1984, 1986).
Assuming the Interactive Defaults strategy, the basic goal is to
determine how notions of temporal reliability, precision and coherence
can be used by a nonmonotonic logic to constrain the space of possible
utterances. We explore a defeasible reasoning framework in which the
interactions between the relative knowledge bases of speakers and
hearers help do this. Finally, we briefly discuss an objection to
the programme as outlined, considering whether discourse structure has
been marginalised.
@inproceedings{oberlander:lascarides:1991,
author = {Jon Oberlander and Alex Lascarides},
year = {1991},
title = {Discourse Generation, Temporal Constraints and Defeasible Reasoning},
booktitle = {Proceedings of the AAAI Fall Symposium on Discourse Structure in Interpretation and Generation},
address = {Asilomar, California}
}