Executive Summary
  • Abstract: This project aims to create a module that can be integrated into the current framework, where forms can be generated, the user can upload a scanned image, and then get an interface where the scanned information is displayed on screen alongside the corresponding image regions for manual verification. Since the training data set is the most critical factor affecting accuracy, an automated character-training sub-module shall be incorporated into this module.
  • Student: Suryajith Chillara
  • Mentor(s): Michael Howden and Praneeth Bodduluri
Functional Specifications


The module has a single role, the end user; no superuser is required.

Data Model

The data on the form has to be organized so as to minimize the amount of handwritten character recognition. Common data can therefore be captured with check boxes, while specific details such as names have to be captured as text input. After a discussion with Fran, some free-text input boxes (like the comment boxes on the web page for additional info) also have to be present. These are the inputs we might deal with:

  1. Strings for names.
  2. Integers for various data.
  3. Check boxes.

The check boxes shall first be read using an 'X' mark, and later be tested with a completely filled circular dot.
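Whichever mark is used, a marked box differs from an empty one in its ink density. A minimal sketch of this idea, assuming the scan has already been binarized (0 = white, 1 = ink); the function name and threshold are illustrative, not the module's actual routine:

```python
def is_checked(binary, x, y, w, h, threshold=0.15):
    """Return True if the w*h box at (x, y) holds enough dark pixels.

    `binary` is a 2D list of 0/1 pixels (1 = ink); `threshold` is the
    fraction of dark pixels above which the box counts as marked.
    An 'X' and a filled dot both push the density well past an empty box.
    """
    dark = sum(binary[row][col]
               for row in range(y, y + h)
               for col in range(x, x + w))
    return dark / float(w * h) >= threshold


# A 4x4 box with an 'X' drawn in it (both diagonals inked).
marked = [[1, 0, 0, 1],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [1, 0, 0, 1]]
empty = [[0] * 4 for _ in range(4)]

print(is_checked(marked, 0, 0, 4, 4))  # True
print(is_checked(empty, 0, 0, 4, 4))   # False
```

Because the same density test fires for either mark style, switching from 'X' to a filled dot needs no code change, only possibly a new threshold.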


Sahana-Eden → xforms → [ Parse xforms + form a layout ] → pdf

Scanned Images → [OCR blackbox] → User interface to correct/verify → xml → Sahana-Eden
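The two pipelines above can be sketched as a small driver that invokes the project's scripts in sequence (all file names here are placeholders for illustration):

```python
def build_pipeline(xform, scan, layout):
    """Return the command lines for the print and read pipelines.

    Stage 1 renders the xform into a printable PDF; stages 2-3 align the
    scanned image and parse it into an XML dump for the correction UI.
    The scripts are the project's own; the file names are illustrative.
    """
    return [
        ["python", "xforms2pdf.py", xform],                       # xform -> PDF
        ["python", "analyse.py", scan],                           # align + validate scan
        ["python", "parseform.py", layout, scan, "parsed.xml"],   # OCR -> XML dump
    ]


for cmd in build_pipeline("form.xhtml", "scan.png", "layout.xml"):
    print(" ".join(cmd))
```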


These workflows are screen-specific and are explained under the corresponding screens below.

Screens

Correction UI


An interface shall be provided where the end user uploads the scanned image and then gets a UI in which the recognized text can be compared against the image and corrected.


Technologies to be used:

  1. Tesseract
  2. Apache-FOP / rst2pdf / ReportLab

Open Issues

<Shall be updated>


Meeting Schedule
Project timeline
SMART goal | Measure | Due date | Comments
Xforms to pdf | OCR forms generated | 15th June | DONE
Correction UI | Web UI with a text box and a corresponding image for every element | 28th June | Postponed towards the end
Tesseract integration | Tests with printed data | 5th July | In progress (Tesseract is integrated; testing the training data set integration too)
Automated training | A training form and automation scripts (a web UI for the same wasn't necessary) | 20th July | DONE (pending verification)
Web UI | Making some necessary modifications to the UI | 2nd August | TO BE DONE
Testing | Accuracy | 9th August | IN PROGRESS
  • I worked on the form generation. One way to print a form is the W3C-standard approach of feeding an XML document plus an XSL stylesheet into Apache-FOP to generate PDFs. I tried this system out but could not achieve everything I wanted in an OCR-compatible sheet, especially the form decorations for alignment. Instead, I used XML parsers and wrote a module to convert xforms into PDFs. The location of the data is written into an XML document for easy parsing later.
  • I have hooked up tesseract with Python, enabling the parsing of the forms via the os.system function. The parsed data now has to be dumped selectively into the related files instead of as a complete dump of the form.
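A sketch of such a hook-up, using subprocess rather than os.system for cleaner error handling; the function names are illustrative, and only the standard tesseract CLI convention (output written to `<output_base>.txt`) is assumed:

```python
import subprocess


def tesseract_cmd(image_path, output_base, lang="eng"):
    """Command line for one OCR pass; -l picks the trained language pack."""
    return ["tesseract", image_path, output_base, "-l", lang]


def ocr_field(image_path, output_base, lang="eng"):
    """Run tesseract on one tesselated field image and return its text.

    Tesseract writes the recognized text to `<output_base>.txt`;
    reading that file back gives the value to dump for this field.
    """
    subprocess.check_call(tesseract_cmd(image_path, output_base, lang))
    with open(output_base + ".txt") as fh:
        return fh.read().strip()
```

Calling `ocr_field` once per tesselated region, instead of once per page, is what makes the selective per-field dump possible.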
  • The other thing I have been looking into is the training of the OCR engine. It has to be provided with training data for proper evaluation. The literature on training the engine that I have read is listed at the bottom of the page. The Tesseract tools page has various shell scripts which help generate the training data and thus train Tesseract, whereas Training Tesseract and the page Tesseract - Summary explain how to generate the data and train Tesseract manually.
  • A training form has been generated, which the user fills in in his own handwriting; ten samples of each particular letter have to be filled in. Because the generated box file is standardized, the segments inside the provided boxes are the letters to consider. The subsequent steps of generating a unicharset, clustering, and putting it all together shall be automated. No dictionary data is needed here.
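The automated steps can be sketched as a command sequence mirroring the manual procedure in the Training Tesseract documentation (box training, charset extraction, clustering). Tool flags vary between Tesseract versions, so treat this as an assumption-laden outline, not the project's actual script:

```python
def training_cmds(base, lang="hand"):
    """Command sequence for training Tesseract on one form image.

    `base` is the training image name without extension; `lang` names
    the new language pack. Steps: box-train on the image, extract the
    unicharset from the box file, then run the two clustering tools on
    the .tr output. Exact flags follow the Tesseract 2.x training docs
    and may differ on other versions.
    """
    return [
        ["tesseract", base + ".tif", base, "nobatch", "box.train"],
        ["unicharset_extractor", base + ".box"],
        ["mftraining", "-U", "unicharset", "-O", lang + ".unicharset", base + ".tr"],
        ["cntraining", base + ".tr"],
    ]


for cmd in training_cmds("handwriting"):
    print(" ".join(cmd))
```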
  • Currently reading the following articles: Overview of Tesseract and Recognizing roman numerals via Tesseract, and working with Tesseract's base APIs to enable isolation of the characters.
  • I have integrated the image-processing routines (conversion into a binary image to improve the contrast; alignment of the images using connected-component analysis to find the boxes, locate their centroids, and check the angle between them; etc.). Sample collection is under way (I have distributed the forms in my father's office for data collection ;) ). I shall be scanning them and reading the data.
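The angle check reduces to simple trigonometry once the marker centroids are known. A minimal sketch, assuming two markers that sit on the same horizontal line of a correctly printed form; the function name is illustrative:

```python
import math


def skew_angle(left_marker, right_marker):
    """Angle (degrees) of the line through two alignment-marker centroids.

    The markers are assumed to lie on the same horizontal line of the
    form, so any non-zero angle is the rotation to undo when deskewing.
    Centroids are (x, y) pixel coordinates from connected-component
    analysis.
    """
    dx = right_marker[0] - left_marker[0]
    dy = right_marker[1] - left_marker[1]
    return math.degrees(math.atan2(dy, dx))


print(skew_angle((10, 100), (510, 100)))  # 0.0 for a level scan
print(round(skew_angle((10, 100), (510, 120)), 2))  # small tilt, ~2.29
```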
  • Testing with printed data in Times New Roman, the font for which Tesseract is trained by default, has been giving decent results; I am now selectively dumping the data into XML.
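The selective dump can be sketched with the standard library's ElementTree; the `<form>`/`<field>` element names here are illustrative, not the module's actual schema:

```python
import xml.etree.ElementTree as ET


def dump_fields(fields):
    """Serialize recognized field values into an XML byte string.

    `fields` maps field names (taken from the layout file) to the text
    Tesseract read for each tesselated region, so only the wanted
    fields end up in the dump rather than the whole page.
    """
    root = ET.Element("form")
    for name, value in fields.items():
        el = ET.SubElement(root, "field", name=name)
        el.text = value
    return ET.tostring(root)


print(dump_fields({"first_name": "John", "age": "34"}))
```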
Technical Layout

This is how the scripts function. I am working to tune these to work efficiently.

(Figure: the technical layout of the project)



  • analyse.py
  • formHandler.py
  • parseform.py
  • printForm.py
  • script.py
  • tess.py
  • trainingform.py
  • xforms2pdf.py

The scripts' usage and functionality:

analyse.py Checks for the markers to align the form and then verifies that the form is valid.

Usage: python analyse.py <image>

formHandler.py A class which handles the parsing of the xforms and which in turn uses the object from the printform class.

parseform.py Takes as input the XML file which holds the logical data placement and outputs an XML dump of the parsed data. This module tesselates the required images and writes the data to an XML file.

Usage: python parseform.py <xmlinput> <imageinput> <xmloutput>
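The `<xmlinput>` layout file records where each field sits on the page so that parseform.py can tesselate the right regions. The actual schema is internal to the module; a file of roughly this shape is assumed for illustration:

```xml
<layout>
  <field name="first_name" type="string"   x="40" y="120" w="300" h="24"/>
  <field name="age"        type="integer"  x="40" y="160" w="80"  h="24"/>
  <field name="gender_m"   type="checkbox" x="40" y="200" w="16"  h="16"/>
</layout>
```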


printForm.py A class to handle the forms so as to generate the ReportLab PDFs.

script.py Automates the training of Tesseract.

Usage: python script.py <trainingimage>


trainingform.py Generates the training forms. Uses the data input in the misc folder.

Usage: python trainingform.py ../misc/<datainput>


xforms2pdf.py Converts the input xform to a PDF to be printed.

Usage: python xforms2pdf.py <xforminput> <OPTIONAL → pdfoutput>
