Multilingual and multimodal information access evaluation : International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010. proceedings / Maristella Agosti [and others] (eds.).

By: Cross-Language Evaluation Forum. Workshop (2010 : Padua, Italy)
Contributor(s): Agosti, Maristella
Material type: Text
Series: Lecture notes in computer science ; 6360. LNCS sublibrary.
Publisher: Berlin : Springer, 2010
Description: 1 online resource (xiv, 144 pages) : illustrations
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783642159985; 3642159982; 9783642159978; 3642159974
Other title: CLEF 2010
Subject(s): Information retrieval -- Congresses | Computer science | Information retrieval
Genre/Form: Electronic books. | Conference papers and proceedings.
Additional physical formats: Print version: Multilingual and multimodal information access evaluation.
DDC classification: 025.04
LOC classification: Z667 | .C76 2010
Online resources: Available online
Contents:
Keynote Addresses -- IR between Science and Engineering, and the Role of Experimentation -- Retrieval Evaluation in Practice -- Resources, Tools, and Methods -- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages -- A New Approach for Cross-Language Plagiarism Analysis -- Creating a Persian-English Comparable Corpus -- Experimental Collections and Datasets (1) -- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases -- Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation -- Experimental Collections and Datasets (2) -- MapReduce for Information Retrieval Evaluation: "Let's Quickly Test This on 12 TB of Data" -- Which Log for Which Information? Gathering Multilingual Data from Different Log File Types -- Evaluation Methodologies and Metrics (1) -- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements -- On the Evaluation of Entity Profiles -- Evaluation Methodologies and Metrics (2) -- Evaluating Information Extraction -- Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation -- Automated Component-Level Evaluation: Present and Future -- Panels -- The Four Ladies of Experimental Evaluation -- A PROMISE for Experimental Evaluation.
Summary: This book constitutes the refereed proceedings of the 11th symposium of the Cross-Language Evaluation Forum, CLEF 2010, held in Padua, Italy, in September 2010 as the First International Conference on Multilingual and Multimodal Information Access Evaluation, in continuation of the popular CLEF campaigns and workshops that have run for the last decade. The 12 revised full papers presented together with 2 keynote talks and 2 panel presentations were carefully reviewed and selected from numerous submissions. The papers present advanced research into the evaluation of complex multimodal and multilingual information systems in order to support the individuals, organizations, and communities who design, develop, employ, and improve such systems. The papers are organized in topical sections on resources, tools, and methods; experimental collections and datasets; and evaluation methodologies and metrics.
Item type: eBook | Current location: e-Library (Electronic Book@IST) | Collection: EBook | Status: Available

Includes bibliographical references and author index.

Print version record.
