Using Continuous Integration to organize and monitor the annotation process of domain specific corpora
Format: Conference Proceeding
Language: English
Summary: Applications in the World Wide Web aggregate vast amounts of information from different data sources. The aggregation process is often implemented with Extract, Transform and Load (ETL) processes. ETL processes usually require the information to be available in structured formats, e.g. XML or JSON. In many cases, however, the information is provided as natural language text, which makes the application of ETL processes impractical. Because information is provided in natural language, Information Extraction (IE) systems have evolved. They use Natural Language Processing (NLP) tools to derive meaning from natural language text. State-of-the-art NLP tools apply Machine Learning methods. These NLP tools perform well on newspaper text, but their accuracy drops in other domains. To improve the quality of IE systems in specific domains, NLP tools are often trained on domain-specific text, which is a time-consuming process. This paper introduces an approach that uses a Continuous Integration pipeline for organizing and monitoring the annotation process on domain-specific corpora.
DOI: 10.1109/IACS.2014.6841958
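
The record only summarizes the approach at a high level, so the following is a minimal sketch of the kind of check a Continuous Integration pipeline could run on each commit to monitor annotation progress on a domain-specific corpus. The directory layout (`corpus/`, `annotations/`), the JSON annotation format, and the coverage threshold are assumptions made for illustration; they are not taken from the paper.

```python
"""Hypothetical CI check for an annotation project (illustrative sketch only)."""
import json
import sys
from pathlib import Path

# Assumed layout: raw corpus documents in corpus/, one JSON annotation
# file per document in annotations/ with the same file stem.
CORPUS_DIR = Path("corpus")
ANNOTATIONS_DIR = Path("annotations")
MIN_COVERAGE = 0.8  # assumed threshold: fail the build below 80% annotated


def check_annotations() -> int:
    documents = sorted(CORPUS_DIR.glob("*.txt"))
    if not documents:
        print("No corpus documents found.")
        return 1

    annotated = 0
    for doc in documents:
        ann_path = ANNOTATIONS_DIR / (doc.stem + ".json")
        if not ann_path.exists():
            print(f"MISSING  {ann_path}")
            continue
        try:
            # Minimal well-formedness check: the annotation file must parse
            # and contain a list under the assumed 'entities' key.
            data = json.loads(ann_path.read_text(encoding="utf-8"))
            if not isinstance(data.get("entities"), list):
                print(f"INVALID  {ann_path}: missing 'entities' list")
                continue
        except json.JSONDecodeError as err:
            print(f"INVALID  {ann_path}: {err}")
            continue
        annotated += 1

    coverage = annotated / len(documents)
    print(f"Annotated {annotated}/{len(documents)} documents ({coverage:.0%}).")
    # A non-zero exit code makes the CI job fail, which is one way a pipeline
    # could surface incomplete or malformed annotations to the team.
    return 0 if coverage >= MIN_COVERAGE else 1


if __name__ == "__main__":
    sys.exit(check_annotations())
```

Run as a build step (for example `python check_annotations.py` in the CI configuration), such a script would report annotation coverage on every commit and fail the build when annotations are missing or malformed, which matches the monitoring role the abstract ascribes to the CI pipeline.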