LINGUIST List 19.1435

Tue Apr 29 2008

Confs: Applied Ling, Cognitive Science, Comp Ling, Pragmatics/Morocco

Editor for this issue: Stephanie Morse <morse@linguistlist.org>

To post to LINGUIST, use our convenient web form at http://linguistlist.org/LL/posttolinguist.html.
        1.    Patrizia Paggio, Multimodal Corpora

Message 1: Multimodal Corpora
Date: 28-Apr-2008
From: Patrizia Paggio <patrizia@cst.dk>
Subject: Multimodal Corpora

Multimodal Corpora

Date: 27-May-2008 - 27-May-2008
Location: Marrakech, Morocco
Contact: Jean-Claude Martin
Contact Email: martin@limsi.fr
Meeting URL: http://www.lrec-conf.org/lrec2008/

Linguistic Field(s): Applied Linguistics; Cognitive Science; Computational
Linguistics; Pragmatics

Meeting Description:

The focus of this LREC 2008 workshop on multimodal corpora will be on models of
natural interaction and their contribution to the design of multimodal systems
and applications.

Call For Participation

International Workshop on

Multimodal Corpora:
From Models of Natural Interaction to Systems and Applications

Tuesday, 27 May 2008
Full day workshop

Marrakech (Morocco)

In Association with LREC 2008
(the 6th International Conference on Language Resources and Evaluation)
Main conference: 28-29-30 May 2008
Palais des Congrès Mansour Eddahbi
Marrakech (Morocco)

A 'Multimodal Corpus' comprises recordings and annotations of several
communication modalities such as speech, hand gesture, facial
expression, body posture, etc. Theoretical issues are also addressed,
given their importance to the design of multimodal corpora.

This workshop continues the successful series of similar workshops at
LREC 00, 02, 04 and 06 also documented in a special issue of the
Journal of Language Resources and Evaluation due to come out in spring
2008. There is increasing interest in multimodal communication and
multimodal corpora, as evidenced by European Networks of Excellence and
integrated projects such as HUMAINE, SIMILAR, CHIL, AMI and
CALLAS. Furthermore, the success of recent conferences and workshops
dedicated to multimodal communication (ICMI, IVA, Gesture, PIT, Nordic
Symposia on Multimodal Communication, Embodied Language Processing)
and the creation of the Journal of Multimodal User Interfaces also
testify to the growing interest in this area, and to the general need
for data on multimodal behaviours.

The focus of this LREC 2008 workshop on multimodal corpora will be on
models of natural interaction and their contribution to the design of
multimodal systems and applications.

Topics to be addressed include, but are not limited to:

Multimodal corpus collection activities (e.g. direction-giving
dialogues, emotional behaviour, human-avatar interaction, human-robot
interaction, etc.)

Relations between modalities in natural (human) interaction and in
human-computer interaction

Application of multimodal corpora to the design of multimodal and
multimedia systems

Fully or semi-automatic multimodal annotation, using e.g. motion
capture and image processing, and its integration with manual annotation

Corpus-based design of systems that involve human-like modalities
either in input (Virtual Reality, motion capture, etc.) or in output
(virtual characters)

Multimodal interaction in specific scenarios, e.g. group interaction
in meetings

Coding schemes for the annotation of multimodal corpora

Evaluation and validation of multimodal annotations

Methods, tools, and best practices for the acquisition, creation,
management, access, distribution, and use of multimedia and multimodal
corpora

Interoperability between multimodal annotation tools (exchange
formats, conversion tools, standardization)

Metadata descriptions of multimodal corpora

Automated multimodal fusion and/or generation (e.g., coordinated
speech, gaze, gesture, facial expressions)

Analysis methods tailored to multimodal corpora using
e.g. statistical measures or data mining.

We expect the output of this workshop to be:

1) deeper understanding of theoretical issues and research questions
related to verbal and non-verbal communication that multimodal corpora
should address,

2) larger consensus on how such corpora should be built in order to
provide useful and usable answers to research questions,

3) shared knowledge of how the corpora are contributing to multimodal
and multimedia system design, and

4) an updated view of state-of-the-art research on multimodal corpora.

Schedule and Registration Information

The workshop will consist of a morning session and an afternoon
session. There will be time for collective discussions.

For this full-day workshop, registration is possible on site or via the LREC website.

Organising Committee

MARTIN Jean-Claude, LIMSI-CNRS, France
PAGGIO Patrizia, Univ. of Copenhagen, Denmark
KIPP Michael, DFKI, Saarbrücken, Germany
HEYLEN Dirk, Univ. Twente, The Netherlands

Programme Committee

Jan Alexandersson, D
Jens Allwood, SE
Elisabeth Ahlsén, SE
Elisabeth André, D
Gerard Bailly, F
Stéphanie Buisine, F
Susanne Burger, USA
Loredana Cerrato, SE
Piero Cosi, I
Morena Danieli, I
Nicolas Ech Chafai, F
John Glauert, UK
Kostas Karpouzis, G
Alfred Kranstedt, D
Peter Kuehnlein, NL
Daniel Loehr, USA
Maurizio Mancini, F
Costanza Navarretta, DK
Catherine Pelachaud, F
Fabio Pianesi, I
Isabella Poggi, I
Laurent Romary, D
Ielka van der Sluis, UK
Rainer Stiefelhagen, D
Peter Wittenburg, NL
Massimo Zancanaro, I

Workshop Programme


Session ''Multimodal Expression of Emotion''

Annotation of Cooperation and Emotions in Map Task Dialogues
(Federica Cavicchio and Massimo Poesio)

Double Level Analysis of the Multimodal Expressions of Emotions in
Human-machine Interaction
(Jean-Marc Colletta, Ramona Kunene, Aurélie Venouil and Anna Tcherkassof)

10.15 - 10.45
Coffee Break

Session ''Multimodality and Conversation''

Multimodality in Conversation Analysis: a Case of Greek TV Interviews
(Maria Koutsombogera, Lida Touribaba and Harris Papageorgiou)

The MUSCLE Movie Database: A Multimodal Corpus with Rich Annotation
for Dialogue and Saliency Detection
(Dimitrios Spachos, Athanasia Zlantintsi, Vassiliki Moschou, Panagiotis
Antonopoulos, Emmanouil Benetos, Margarita Kotti, Katerina Tzimouli,
Constantine Kotropoulos, Nikos Nikolaidis, Petros Maragos and Ioannis

Session ''Multimodal Analysis of Activities''

A Multimodal Data Collection of Daily Activities in a Real
Instrumented Apartment
(Alessandro Cappelletti, Bruno Lepri, Nadia Mana, Fabio Pianesi and
Massimo Zancanaro)

Unsupervised Clustering in Multimodal Multiparty Meeting Analysis
(Yosuke Matsusaka, Yasuhiro Katagiri, Masato Ishizaki and Mika Enomoto)


13.00 - 14.30
Lunch Break

Session ''Individual Differences in Multimodal Behaviors''

Multimodal Intercultural Information and Communication Technology:
A Conceptual Framework for Designing and Evaluating Multimodal
Intercultural ICT
(Jens Allwood and Elisabeth Ahlsén)

Multitrack Annotation of Child Language and Gestures
(Jean-Marc Colletta, Aurélie Venouil, Ramona Kunene, Virginie Kaufmann
and Jean-Pascal Simon)

The Persuasive Impact of Gesture and Gaze
(Isabella Poggi and Laura Vincze)

Coffee Break

On the Contextual Analysis of Agreement Scores
(Dennis Reidsma, Dirk Heylen and Rieks Op den Akker)

Session ''Processing and Indexing of Multimodal Corpora''

Dutch Multimodal Corpus for Speech Recognition
(Alin G. Chitu and Leon J.M. Rothkrantz)

Multimodal Data Collection in The AMASS++ project
(Scott Martens, Jan Hendrik Becker, Tinne Tuytelaars and Marie-Francine Moens)

The Nottingham Multi-modal Corpus: A Demonstration
(Dawn Knight, Svenja Adolphs, Paul Tennent and Ronald Carter)

Analysing Interaction: A Comparison of 2D and 3D techniques
(Stuart A. Battersby, Mary Lavelle, Patrick G.T. Healey and Rosemarie McCabe)


End of Workshop

(Followed by an informal dinner)