How can one abbreviate visual items? For example, if your goal is to make tactile versions of maps or diagrams in a book, much less resolution is available in tactile form. Or, if your goal is to detect similarities of overall shape, you do not want to be distracted by image detail. We experimented with sketching captured in a way that records the stroke sequence and timing, and found that the first strokes made often abbreviate the image.
We used both sketches made locally on a Wacom tablet, which give us stroke sequence and timing data, and sketches from the SIGGRAPH dataset, which provide sequence but not timing. Our sketches are necessarily simple, but we hope that what we have learned from them can be used to build machine-learning databases for extracting important features from more complex imagery. This is a revised version of a talk given at ICDAR in 2013.
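The idea of abbreviating a drawing by its earliest strokes can be sketched in a few lines. This is an illustrative toy, not the authors' code: the stroke representation and the function name are assumptions. A sketch is modeled as a list of strokes, each a start time paired with its points; when timing data is available (as with the tablet sketches), sorting by start time recovers drawing order, and truncating that order yields the abbreviation.

```python
def abbreviate(strokes, keep=3):
    """Return the first `keep` strokes in drawing order.

    `strokes` is a list of (start_time, points) tuples.
    Sorting by start_time recovers drawing order when
    timing data is available; with sequence-only data the
    list is assumed to already be in drawing order.
    """
    ordered = sorted(strokes, key=lambda s: s[0])
    return ordered[:keep]


# A toy "house" drawn as four timed strokes; the early strokes
# (outline, roof) carry the overall shape, later ones add detail.
house = [
    (0.0, [(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)]),  # outline
    (1.2, [(0, 3), (2, 5), (4, 3)]),                  # roof
    (2.5, [(1, 0), (1, 2), (2, 2), (2, 0)]),          # door
    (3.8, [(3, 1), (3, 2)]),                          # window detail
]

abbreviated = abbreviate(house, keep=2)  # keeps outline and roof only
```

Keeping a fixed number of strokes is only one possible cutoff; a time budget (all strokes begun within the first few seconds) would be a natural alternative when timing data is present.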