This isn’t my first app that targets reading, but it is the first time I’ve worked hard to develop a unique methodology. A while back I published a mobile app called Learn to Read Kids!, built with Adobe AIR, to several app stores – and my daughter actually still uses it on my phone. However, that app is very traditional and uses the phonics approach.
But in the end, that app falls well short of providing the kind of visual method needed by people with hearing loss or auditory processing issues. Phonics-based programs assume the child will learn letters by associating them with sounds; there is no real way around that. If a child struggles to make that association, phonics programs are doomed to fail them, or at least cause them a lot of grief.
But what is the alternative?
This points to a question that sounds simpler than it is: how do we actually learn to read?
I was a little surprised to find that there is no absolute consensus on this, and as such I think there is still plenty of room for innovation in the field. The key seems to be mapping letters to something our brain can hold onto, and if the audio channel is not working correctly, that leaves the visual channel as the primary means of encoding this information. At some point I’ll get more organized and add links to the academic PDFs and programs I waded through as I began building the groundwork for my program and its methodology, but the entire thing rests on one idea: you can bypass audio encoding and instead encode letters visually as a foundation for reading.
I spent the better part of the summer working on Smarty Decks, a kanji learning application I developed with my now 8-year-old daughter Julia, so my first thought was to throw the English alphabet into that app and experiment with mapping a unique color to each letter. The app isn’t really built to teach the English alphabet, but it let me at least lay out the letters and compare the colors side by side in tiles. Here is what this first pass looked like:
What I quickly realized was that my 4-year-old would never learn these colors through flipcards and matching in an app built for data rather than as an immersive learning game! So I began developing a game-style app, and I also realized that as important as color could be, identifying letters by their shapes could be even more important. Looking through the letters carefully, I found four categories of shapes you can divide them into:
I’m trying to use terms that are a little more fun, ones that can also be used in contexts of play and animation. Some letters could arguably be placed in more than one category, but if you look carefully at the letters, these shapes begin to jump out at you!
Balls (9): a, b, c, d, e, g, o, p, q
Sticks (6): l, i, j, r, f, t
Thumbs (5): u, n, m, s, h
Triangles (6): v, w, k, z, x, y
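For the curious, the four groups above can be expressed as a small lookup table. This is just a hypothetical Python sketch to show the idea (the group names and function are my own, not code from the app), including a quick check that the groups cover all 26 letters exactly once:

```python
import string

# The four shape categories described above, as sets of lowercase letters.
SHAPE_GROUPS = {
    "balls":     set("abcdegopq"),
    "sticks":    set("lijrft"),
    "thumbs":    set("unmsh"),
    "triangles": set("vwkzxy"),
}

def shape_of(letter):
    """Return the shape-group name for a single lowercase letter."""
    for name, letters in SHAPE_GROUPS.items():
        if letter in letters:
            return name
    raise ValueError(f"unknown letter: {letter!r}")

# Sanity check: the groups should partition a-z (cover everything, no overlaps).
covered = set().union(*SHAPE_GROUPS.values())
assert covered == set(string.ascii_lowercase)
assert sum(len(s) for s in SHAPE_GROUPS.values()) == 26
```

Because the counts (9 + 6 + 5 + 6) sum to 26 and no letter repeats, every lowercase letter lands in exactly one group.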
So the rather boring presentation in the data-centric app became something much more interesting in my gamified app:
I’ll detail the steps my app uses to try and teach these letters in a later post!