
Abstracts


The Evolution of Shared-Task Evaluation
Douglas W. Oard, University of Maryland, College Park (USA)

In this talk I will principally focus on describing the initiation, evolution, and completion of a “shared task” evaluation campaign such as the tasks run at FIRE. I will draw on examples from Cross-Language Information Retrieval, Topic Detection and Tracking, Speech Retrieval, and E-Discovery to illustrate these processes. With that as background, I will then step back to examine how evaluation campaigns such as FIRE fit into a larger information retrieval innovation ecosystem that also includes universities, publication in conferences and journals, government funding, and commercial applications of the resulting technology. By looking in this way not just at how but also at why we do what we do, I hope to shed some light on how we can best help to shape our own future.


About the Speaker: Douglas Oard is a Professor at the University of Maryland, College Park, with joint appointments in the College of Information Studies and the Institute for Advanced Computer Studies. He is a General Co-Chair for NTCIR-11 and has served as a track or task coordinator at TREC, CLEF, and FIRE. Additional information is available at http://terpconnect.umd.edu/~oard/




Authorship Attribution, Profile, and Style
Jacques Savoy, Université de Neuchâtel, Switzerland

Huge amounts of textual information are available nowadays. What can we extract from such a source? Written text reveals information about its author. In authorship attribution studies, we have defined various strategies to determine the most probable author of a disputed text, and we will show how word-based features can be used to identify that author. This question also opens new directions, such as inferring information about the author's profile (gender, age, psychological state, etc.). Using a distance-based method, we can also apply the suggested classification scheme to political speeches to detect textual patterns more closely associated with each candidate (e.g., between B. Obama and J. McCain in the speeches delivered during the 2008 US electoral campaign). Moreover, similar approaches can be used in conjunction with a clustering algorithm to group US presidents (from H. Truman to B. Obama) according to their State of the Union addresses. In this case, can we detect similarities between presidents from the same party, or clusters driven by other factors?
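The abstract does not name a specific distance measure; as a hedged illustration of the distance-based family it refers to, the sketch below implements Burrows' Delta over the relative frequencies of the most frequent words. All function names are illustrative and this is not the speaker's own implementation.

```python
# Minimal, illustrative sketch of a distance-based attribution scheme
# (Burrows' Delta over most-frequent-word frequencies).
from collections import Counter

def mfw_profile(tokens, vocab):
    """Relative frequency of each chosen most-frequent word."""
    counts, total = Counter(tokens), max(len(tokens), 1)
    return {w: counts[w] / total for w in vocab}

def corpus_stats(profiles, vocab):
    """Per-word mean and standard deviation across candidate-author profiles."""
    means, stds = {}, {}
    for w in vocab:
        vals = [p[w] for p in profiles]
        means[w] = sum(vals) / len(vals)
        stds[w] = (sum((v - means[w]) ** 2 for v in vals) / len(vals)) ** 0.5 or 1e-9
    return means, stds

def delta(profile_a, profile_b, means, stds):
    """Burrows' Delta: mean absolute difference of z-scored word frequencies."""
    return sum(abs((profile_a[w] - means[w]) / stds[w] -
                   (profile_b[w] - means[w]) / stds[w]) for w in means) / len(means)

def attribute(disputed, author_profiles, means, stds):
    """Assign the disputed text to the candidate with the smallest Delta."""
    return min(author_profiles,
               key=lambda a: delta(disputed, author_profiles[a], means, stds))
```

The same pairwise Delta values can also feed a standard clustering algorithm, which is the kind of grouping of State of the Union addresses the abstract mentions.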


About the Speaker: Jacques Savoy is a Professor at the University of Neuchâtel (Switzerland). His research interests mainly cover natural language processing, and in particular information retrieval for languages other than English (European, Asian, and Indian languages), as well as multilingual and cross-lingual information retrieval. His current research focuses on text clustering and categorization as well as authorship attribution.




Risk-Sensitive Information Retrieval
Kevyn Collins-Thompson, University of Michigan

Search engines have explored many enhancements to basic retrieval methods to help improve retrieval effectiveness, from automatic query rewriting to personalized re-ranking. In general, these can lead to gains in average effectiveness across a set of queries, and that is a key reason why they are deployed in systems. However, even in state-of-the-art systems, these algorithms can exhibit poor robustness: they have high *variance* in gains and losses across queries and are therefore risky to apply, working well on some queries but actually hurting results on others compared to not being used at all. Surprisingly, this robustness aspect of IR algorithms is still often ignored in both optimization and evaluation, despite the real-world consequences of retrieval failures.


In this talk I'll describe how the reliability of current search systems can be improved by introducing models, algorithms, and evaluation methods for risk-sensitive retrieval: a research direction I've developed over the past several years with collaborators, which uses ideas from computational finance to jointly optimize both the effectiveness and the robustness of information retrieval algorithms. I'll show how, in many cases, the robustness of some widely used IR algorithms, such as query expansion and adaptive ranking, can be significantly increased, reducing serious worst-case failures while maintaining state-of-the-art average effectiveness. Finally, I'll summarize results from the risk-sensitive task of the TREC 2013 Web track, illustrating the growing participation of the IR community in this line of research.
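The abstract does not spell out the risk-sensitive objective; as a hedged sketch of the general idea, the function below computes a utility that weights per-query losses against a baseline more heavily than wins, a formulation similar in spirit to the measure used in the TREC 2013 Web track risk-sensitive task. The function name, the example scores, and the exact weighting are illustrative.

```python
def risk_sensitive_utility(system_scores, baseline_scores, alpha=1.0):
    """Average per-query gain over a baseline, with losses weighted
    (1 + alpha) times more than wins; larger alpha penalizes hurting
    queries the baseline already handled well. Illustrative only; see
    the TREC 2013 Web track overview for the measure used there.
    """
    assert len(system_scores) == len(baseline_scores)
    total = 0.0
    for sys_q, base_q in zip(system_scores, baseline_scores):
        delta = sys_q - base_q                 # per-query win (> 0) or loss (< 0)
        total += delta if delta > 0 else (1 + alpha) * delta
    return total / len(system_scores)

# Two hypothetical runs with the same average gain but very different risk profiles:
safe  = risk_sensitive_utility([0.32, 0.41, 0.55], [0.30, 0.40, 0.50], alpha=5)
risky = risk_sensitive_utility([0.60, 0.10, 0.58], [0.30, 0.40, 0.50], alpha=5)
# safe  ~  0.027 : small, consistent gains on every query
# risky ~ -0.473 : one large per-query loss dominates despite the same average gain
```

This is the sense in which average effectiveness alone can hide the variance in gains and losses that the talk addresses.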


About the Speaker: Dr. Kevyn Collins-Thompson is an Associate Professor at the University of Michigan, with appointments in the School of Information and the Department of Electrical Engineering and Computer Science. His research explores advances in information systems that can reliably and automatically adapt to users and their information needs in different contexts, especially to help human learning. His research on personalization has been applied to real-world systems ranging from intelligent tutoring systems to Web search engines. Kevyn has also pioneered techniques for modeling reading difficulty and understanding how people learn new words. He received his Ph.D. in Computer Science from Carnegie Mellon University, where he was a member of the Language Technologies Institute. Before joining the University of Michigan he spent five years as a Researcher at Microsoft Research. His research has been recognized by paper awards that include an ACM SIGIR'13 Best Student-led Paper award (for his work on intrinsic diversity with Karthik Raman and Paul Bennett) and an ACM SIGIR'12 Best Paper Honorable Mention (for his work with Lidan Wang and Paul Bennett on risk-sensitive ranking).




Argumentation mining
Marie-Francine Moens, Katholieke Universiteit Leuven, Belgium

Argumentation mining involves automatically identifying argumentative information and its argumentative structure in text: the supporting premises and conclusion of a claim, the argumentation scheme of each argument, and the argument-subargument and argument-counterargument relationships between pairs of arguments. Argumentation mining improves information retrieval and also provides the end user with instructive visualizations and summaries of the arguments. In this talk we focus on methods to extract argumentative information, which pose interesting research questions with regard to structured machine learning. We illustrate the talk with applications that mine argumentation in legal texts, court decisions, scientific texts, debates, and reviews.
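As a concrete picture of the structure being extracted, the sketch below shows one plausible in-memory representation of an argument graph. The class and field names are illustrative assumptions, not the speaker's actual data model.

```python
# One plausible representation of the structures described above.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Argument:
    claim: str                       # the conclusion being argued for
    premises: List[str]              # premises offered in support of the claim
    scheme: str = "unspecified"      # e.g. "argument from expert opinion"

@dataclass
class ArgumentGraph:
    arguments: List[Argument] = field(default_factory=list)
    # (child, parent) index pairs: argument-subargument and
    # argument-counterargument relations between pairs of arguments
    supports: List[Tuple[int, int]] = field(default_factory=list)
    attacks: List[Tuple[int, int]] = field(default_factory=list)
```

Predicting the `supports` and `attacks` relations jointly with the component boundaries is what makes the extraction a structured machine learning problem.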




PAN@FIRE&CLEF: similarity search, plagiarism detection, author identification or profiling
Paolo Rosso, Universidad Politécnica de Valencia, Spain

Since 2007 PAN has organised ten activities on uncovering plagiarism, authorship, and social software misuse: the first two years as workshops and, since 2009, as benchmark activities. At FIRE, PAN has been organising tasks on cross-language similarity search since 2011, with special emphasis on Indian languages: CL!TR (2011) on Cross-Language !ndian Text Reuse, and CL!NSS (2012 and 2013) on Cross-Language !ndian News Story Search. At CLEF, tasks have been organised since 2010; in 2013 PAN proposed three tasks: plagiarism detection, author identification, and author profiling. In this talk, after a brief report on PAN@CLEF in the name of cross-fertilization among evaluation fora, special emphasis will be given to the new task on author profiling, in which 21 teams from around the world participated. The focus of author profiling at PAN has been on identifying age and gender in social media, since we are mainly interested in everyday language and how it reflects basic social and personality traits.




From legal texts to legal ontologies and question-answering systems
Paulo Quaresma, Universidade de Évora, Portugal

In this talk the speaker will discuss problems and possible solutions in the development of a system able:
a) to analyse legal documents;
b) to extract relevant information from these documents;
c) to represent the information in adequate ontologies;
d) to create question-answering systems supporting interaction in natural language.


In the first part of the talk, a brief review of the challenges of these tasks and the main existing research approaches will be presented; in the second part, the approach followed at the University of Évora, Portugal, will be described in detail.


Our methodology aims to integrate symbolic, linguistic-based approaches with statistical machine learning ones. In our proposal, legal documents are processed by lexical, syntactic, and (partial) semantic analysers in order to identify and represent the information conveyed by the texts: agents, actions, and events. This information is then interpreted in the context of existing ontologies, which are automatically populated. Finally, a question-answering system, able to process natural language questions, is used to access the populated ontologies and to answer users' questions. Each module of the proposed architecture will be presented, and examples will be shown and discussed.
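The following is a toy, runnable sketch of the pipeline's overall shape only: shallow fact extraction, population of a simple fact store standing in for the ontology, and question answering over it. All names and the extraction rule are illustrative assumptions, far simpler than the lexical, syntactic, and semantic analysers described above.

```python
# Toy sketch of the extract -> populate -> answer pipeline shape.
import re
from collections import defaultdict

FACT_PATTERN = re.compile(
    r"(?P<agent>[A-Z][\w ]*?) (?P<action>signed|breached|filed) (?P<object>[\w ]+)")

def extract_facts(text):
    """Very shallow stand-in for the document analysers: (agent, action, object)."""
    return [m.groupdict() for m in FACT_PATTERN.finditer(text)]

class FactStore:
    """Stand-in for a populated ontology: facts indexed by action."""
    def __init__(self):
        self.by_action = defaultdict(list)

    def populate(self, facts):
        for f in facts:
            self.by_action[f["action"]].append(f)

    def who(self, action, obj):
        """Answer a 'who <action> <object>?' question from the stored facts."""
        return [f["agent"] for f in self.by_action[action] if obj in f["object"]]

store = FactStore()
store.populate(extract_facts(
    "Acme Corp signed the lease agreement. John Doe breached the contract."))
print(store.who("breached", "contract"))   # ['John Doe']
```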




Finite population retrieval and evaluation in e-discovery
William Webber, College of Information Studies, University of Maryland, USA

The goal in e-discovery (the production of relevant material from corporate document repositories under civil litigation) is the exhaustive retrieval of relevant material from a finite document population. This task is often conceptualized as one of text classification, and the tools and mindsets of statistical machine learning are applied to solve it. The standard model in text classification, however, is one in which the target population is implicitly indefinite, or at least not all available at training time. In this talk, we consider what changes in our approach to learning for text classification when we recognize the finite, fully available nature of the document population in e-discovery, and how this affects our evaluation of retrieval completeness.
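One standard way to assess completeness over a finite population is to sample the un-produced documents and estimate how many relevant documents were missed. The sketch below is a hedged illustration under that assumption; it uses a simple normal-approximation interval and omits the finite-population correction and other refinements the talk may cover.

```python
# Illustrative recall estimation for a production over a finite collection.
import math
import random

def estimate_recall(found_relevant, unretrieved_docs, is_relevant, sample_size, z=1.96):
    """Estimate recall of a production by sampling the un-produced documents.

    found_relevant   -- number of relevant documents in the production
    unretrieved_docs -- list of documents NOT produced (the finite remainder)
    is_relevant      -- relevance oracle for sampled documents (e.g. manual review)
    """
    sample = random.sample(unretrieved_docs, sample_size)
    p_hat = sum(is_relevant(d) for d in sample) / sample_size
    missed_est = p_hat * len(unretrieved_docs)     # estimated relevant docs missed
    if found_relevant + missed_est == 0:
        return float("nan"), (0.0, 1.0)            # no relevant documents observed
    recall_est = found_relevant / (found_relevant + missed_est)
    # normal-approximation margin on the number of missed relevant documents
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size) * len(unretrieved_docs)
    low = found_relevant / (found_relevant + missed_est + margin)
    high = found_relevant / (found_relevant + max(missed_est - margin, 0.0))
    return recall_est, (low, high)
```

Because the collection is finite and fully available, the interval narrows as more of the un-produced remainder is reviewed, which is precisely the property the standard indefinite-population model of text classification does not exploit.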
