The "ICDAR2015 Competition HTRtS: Handwritten Text Recognition on the tranScriptorium Dataset" is organised in the framework of the ICDAR 2015 competitions by the Pattern Recognition and Human Language Technologies research centre, with the collaboration of the tranScriptorium partners. This contest aims to bring together researchers working on off-line Handwritten Text Recognition (HTR) and to provide them with a suitable benchmark for comparing their techniques on the task of transcribing typical historical handwritten documents. The first edition of this contest, HTRtS2014, was organised at ICFHR 2014 (Sánchez, 2014).

The proposed dataset consists of a series of documents from the Bentham collection, which has been prepared in the tranScriptorium project. It includes manuscripts written by Jeremy Bentham (1748-1832) himself over a period of sixty years, as well as fair copies written by Bentham's secretarial staff. The handwriting in this collection is complex enough to challenge HTR software: the manuscripts written by secretarial staff provide variety, while Bentham's own manuscripts are often complicated by deletions, marginalia, interlinear additions and other features (Gatos, 2014). The data used in this contest is closely related to the data used in the ICDAR2015 Competition on Keyword Spotting for Handwritten Documents.

The dataset for this competition comprises 796 pages; most pages consist of a single text block and present many difficulties for line detection and extraction (see the page samples below). For the competition the dataset is divided into 3 batches: 2 for training and 1 for test. The number of writers is unknown.

The first batch is composed of 433 pages. This set was used in the HTRtS2014 contest. The ground truth for this set is in PAGE format (Pletschacher, 2010) and is annotated at the line level in the PAGE files. To ease participation, the data will be provided in several formats, as described below.

The second batch is composed of 313 pages. The ground truth for this batch is also in PAGE format, but it is annotated at the text-block level. The line transcripts for each block will be provided in a separate file, with a newline character at the end of each line. The idea behind this second batch is that entrants exploit this training set with their own methods. It simulates a common real situation in which transcripts exist for a collection, but the text lines in the images are not annotated in correspondence with those transcripts.

Training data will be provided as soon as the competition becomes open.

The third batch is a test set of 50 pages that will be kept hidden and released in due time, solely to obtain the results to be evaluated and compared.

Description and goals

The systems entering this contest should try to obtain the most accurate recognition results on the test partition.

The available data for the first batch will consist of:

  1. The original images of all the training pages
  2. The PAGE file corresponding to each page image. For each text line in the image, the PAGE file contains a bounding polygon and the corresponding correct transcript.
  3. The preprocessed and extracted line images for all the lines of the training and validation sets in grayscale (see examples below)
  4. A sequence of feature vectors for each line processed according to (Kozielski, 2013)
  5. The corresponding transcripts of each of these lines

Items 1 and 2 are redundant with items 3 and 5; they are provided for those who wish to try to improve results by using their own image preprocessing and line extraction tools. Item 4 is provided for those who do not wish to work at the preprocessing and feature extraction level.
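For entrants who start from items 1 and 2, the line-level ground truth has to be read out of the PAGE files. The following is a minimal sketch of how that might be done in Python with the standard library; the sample document is hypothetical, but the element names used (TextLine, Coords, TextEquiv/Unicode) are those of the PAGE schema cited above. Real competition files may differ in schema version and detail.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal PAGE document for illustration; real files use the
# full PAGE 2010 schema, but TextLine/Coords/TextEquiv carry the
# line-level ground truth described above.
SAMPLE_PAGE = """<?xml version="1.0" encoding="UTF-8"?>
<PcGts xmlns="http://schema.primaresearch.org/PAGE/gts/pagecontent/2010-03-19">
  <Page imageFilename="example_page.jpg">
    <TextRegion id="r1">
      <TextLine id="r1l1">
        <Coords points="10,20 300,20 300,60 10,60"/>
        <TextEquiv><Unicode>of the said Bill</Unicode></TextEquiv>
      </TextLine>
    </TextRegion>
  </Page>
</PcGts>"""

def extract_lines(page_xml: str):
    """Return (line_id, polygon, transcript) triples from a PAGE document."""
    root = ET.fromstring(page_xml)

    # PAGE files are namespaced; matching on the local tag name keeps the
    # sketch independent of the exact schema version.
    def local(tag):
        return tag.rsplit('}', 1)[-1]

    lines = []
    for elem in root.iter():
        if local(elem.tag) != 'TextLine':
            continue
        polygon, transcript = None, None
        for child in elem:
            if local(child.tag) == 'Coords':
                polygon = [tuple(map(int, p.split(',')))
                           for p in child.get('points', '').split()]
            elif local(child.tag) == 'TextEquiv':
                for uni in child:
                    if local(uni.tag) == 'Unicode':
                        transcript = uni.text or ''
        lines.append((elem.get('id'), polygon, transcript))
    return lines

for line_id, poly, text in extract_lines(SAMPLE_PAGE):
    print(line_id, poly, repr(text))
```

The polygons can then be used to crop line images from the page scans, and the transcripts paired with them as training targets.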

The available data for the second batch will consist of:

  1. The original images of all the training pages
  2. The PAGE file corresponding to each page image. The PAGE file contains the bounding polygons of the text regions, not of the text lines
  3. A separate file with the corresponding correct transcripts for each text region

The test images, with the transcript fields empty, will eventually be provided in the same (redundant) formats as the first batch for evaluation purposes (see the schedule below).

A baseline system based on HTK hidden Markov models and SRILM language modelling will be provided, including a set of scripts to perform a basic training and test experiment (using the first batch). Participants can use this baseline system as a starting point for their own systems and are allowed to improve it by changing one or several of its processing steps.

Several submissions per participant will be allowed, and all results will be considered when presenting the competition results. With each submission, the participant must provide a brief description of the submitted system, emphasising its main characteristics. The final goal is to analyse the different proposals of the participants.

Evaluation modalities

The evaluation will be performed on the transcription results provided by each recognition system. The evaluation metric is the Word Error Rate (WER) between the reference transcript and the transcript produced by the system for each line. The winner will be the system that obtains the lowest WER on the test set. A web-based platform will be available for participants to check their test results.
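The exact scoring tool is not specified here, but WER (and the CER reported alongside it in the results table) is conventionally computed as the word-level (resp. character-level) Levenshtein edit distance normalised by the length of the reference. A minimal sketch, using an illustrative made-up example pair:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(refs, hyps):
    """Word Error Rate (%): word-level edit operations over reference words."""
    errors = sum(edit_distance(r.split(), h.split()) for r, h in zip(refs, hyps))
    words = sum(len(r.split()) for r in refs)
    return 100.0 * errors / words

def cer(refs, hyps):
    """Character Error Rate (%): computed analogously over characters."""
    errors = sum(edit_distance(list(r), list(h)) for r, h in zip(refs, hyps))
    chars = sum(len(r) for r in refs)
    return 100.0 * errors / chars

# Illustrative pair: one substitution and one deletion over 6 reference words.
refs = ["the panopticon or the inspection house"]
hyps = ["the panoptiton or inspection house"]
print(round(wer(refs, hyps), 1))  # → 33.3
```

In practice the error counts are accumulated over all test lines before normalising, so longer lines weigh more, exactly as the sums above do.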

Two tracks are planned in this competition: a Restricted track and an Unrestricted track.

The baseline system will be prepared only for the Restricted track. Entrants participating in the Unrestricted track must also participate in the Restricted track; this requirement makes it possible to compare the systems under analogous training conditions.

Registration and access to data

To register for this contest, send an e-mail to jandreu_AT_prhlt_DOT_upv_DOT_es with the subject "ICDAR 2015 HTRtS competition registration" and provide the required registration data in the message. A username and password will be given to each registered participant, granting access to the data and the evaluation page.

Registered participants

  1. CITlab
  2. DHLAB
  3. DSHW
  4. A2IA
  5. RWTH Aachen University
  6. Indian Statistical Institute
  7. Qatar Computing Research Institute
  8. Multimedia Analysis and Data Mining
  9. WL Tecnologia


Best WER / CER of the submitted systems on the test set

System   Restricted track   Unrestricted track
CITlab   30.2 / 15.5        -
A2iA     31.6 / 14.7        27.9 / 13.6
QCRI     44.0 / 28.8        -