2D Character Animation Generator

The future of my vector character generator may very well be as an artwork generator for developers looking to quickly create beautiful 2D art, with 1-click export to their favorite animation software. Here is an example I did for Spine.


Kwippe can generate all sorts of random combinations from any of the 4 character sets (man, woman, monster, and alien) – and even interchange parts by importing a character into another dataset. I do that a lot with monster eyes 🙂 From there, switching out eyes, mouths, bodies, and other features takes just 1 click – with easy controls to move, recolor, scale, and stretch the different pieces individually. It’s nice not to have to worry about locking all of the layers you DON’T want to edit – which I’m always doing in Illustrator! Kwippe is a bit like an Illustrator for people who want fun presets and quick editing tools rather than the ability to hand-craft each path. Then, with just 1 click, all of the artwork and necessary JSON is created for Esoteric Software’s Spine animation software.


I preserve the head and body groups separately, as well as any text, props, shapes, or background layer. Everything is ready to begin animating – nothing else to do in Spine other than have some imagination, which this scene below definitely lacks! 🙂
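For context, a Spine export boils down to a JSON skeleton describing bones, slots, and skins. Here is a minimal sketch of the kind of structure a 1-click exporter might emit for a two-part character – the part names, sizes, and offsets are made up for illustration, and the real output has far more detail:

```javascript
// Build a minimal Spine-style skeleton JSON from a list of trimmed parts.
// Part names, positions, and sizes here are hypothetical examples.
function buildSkeletonJson(parts) {
  const json = {
    skeleton: { images: "./images/" },
    bones: [{ name: "root" }],
    slots: [],
    skins: { default: {} },
    animations: {}
  };
  for (const part of parts) {
    // One bone per part, parented to root, positioned at the part's offset.
    json.bones.push({ name: part.name, parent: "root", x: part.x, y: part.y });
    // One slot per part, with the trimmed image as its default attachment.
    json.slots.push({ name: part.name, bone: part.name, attachment: part.name });
    json.skins.default[part.name] = {
      [part.name]: { width: part.width, height: part.height }
    };
  }
  return json;
}

const json = buildSkeletonJson([
  { name: "head", x: 0, y: 120, width: 90, height: 80 },
  { name: "body", x: 0, y: 0, width: 100, height: 140 }
]);
console.log(JSON.stringify(json, null, 2));
```

Because every part already carries its own bone, slot, and attachment, the scene opens in Spine ready to animate.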


Since Kwippe allows you to save any scene to the Gallery, you can use it to make another animation later. And if you just want to add alternate parts like different eyes, hair, or a background, you can import those parts into your existing project, and everything else will work right out of the box. Since the images are trimmed, they don’t need any manual placement to serve as skins for existing slots/bones.

I added a few other cool features: the ability to set any export scale you want, and the ability to output JSON without images (faster if you are using the same parts and just want to update positions). The text and banner creation deserves its own blog post, as there are way too many features to go over quickly.

I’d love to share this project with other developers – so my husband (and dev partner – he’s the “idea” guy 🙂) and I are looking at possibly doing a crowdfunding campaign to help grow this app a bit and get a little outside help. If you are interested in helping or hearing more, give us a shout!

From Art to Spine Animation in 1 Click

I’ve been working on a vector character generator/editor on and off for a few years, and I often keep fiddling with it as I need it for other projects. Last week I decided to create unique characters for each letter using that letter’s color and shape, as well as word cards where colors and shapes are used to help the child remember those letters. The idea of trying to create all of this in Illustrator seemed like a real nightmare, and even with a good exporter, recreating all of it in my animation software (Spine) is another nightmare, as the Illustrator script is pretty bare bones and forces you to add bones yourself and associate the images with slots too. All of this is time consuming – I estimate at least an hour for each piece of finished artwork, probably closer to 90 minutes if I’m being honest.

So I decided to try to build these features into Kwippe, my character editor. This took some serious additions:

  • the ability to import external SVGs (added after my browser wiped out the entire IndexedDB where I had stored all 26 letter character definitions…)
  • a bunch of new text functions drawing on the letter colors and shapes arrays
  • a new prop import function so I can add external SVGs from places like openclipart.org and thenounproject.com
  • and finally, a complete Spine export solution that would size, trim, and export each individual part of an image, and save the positions, groups, and names to a JSON file
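The trimming step is what lets the exported images drop cleanly onto slots later. The core idea can be sketched without any framework: scan the RGBA pixels for the tight bounding box of non-transparent content and record its offset. In the browser the buffer would come from a canvas via getImageData(); this simplified version assumes it is already in hand:

```javascript
// Find the tight bounding box of non-transparent pixels in a flat RGBA
// buffer, so each exported part image can be trimmed and its offset saved.
// Simplified sketch; a real pipeline reads the buffer via getImageData().
function trimBounds(rgba, width, height) {
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const alpha = rgba[(y * width + x) * 4 + 3];
      if (alpha > 0) {
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  if (maxX < 0) return null; // fully transparent image
  return { x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1 };
}

// 4x4 image with a single opaque pixel at (2, 1).
const buf = new Uint8Array(4 * 4 * 4);
buf[(1 * 4 + 2) * 4 + 3] = 255;
const bounds = trimBounds(buf, 4, 4);
console.log(bounds); // → { x: 2, y: 1, width: 1, height: 1 }
```

The recorded x/y offset is exactly what the JSON needs so the trimmed image lands back in the right spot.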

Whew! Normally I’d look at a list like that and figure 2 weeks minimum. But luckily, the SVG import ended up being mostly pre-wired – I already import the raw SVG from IndexedDB (it could come from anywhere, really) – so adding a text import for the entire project file, which saves as a single SVG, turned out to be super easy. The prop import was also simple, although it’s not as robust. Some SVGs are gnarly, with all sorts of weird transforms, gradients, and textures that don’t play well in new programs. But for basic stuff like just about anything from the Noun Project, the artwork imports and can be colored, filtered, and repositioned easily.
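One cheap way to cope with gnarly SVGs is to warn before import when a file uses features that rarely translate cleanly. This is a hypothetical heuristic, not Kwippe’s actual importer – a real one would parse the document rather than scan tags:

```javascript
// Pre-import check for SVG features that rarely survive translation into
// a new editor (gradients, patterns, filters, masks). Hypothetical
// heuristic: a simple tag scan is enough to warn the user before import.
function svgWarnings(svgText) {
  const risky = ["linearGradient", "radialGradient", "pattern", "filter", "mask"];
  return risky.filter(tag => svgText.includes("<" + tag));
}

const simple = '<svg><path d="M0 0 L10 10"/></svg>';
const fancy = '<svg><linearGradient id="g"/><filter id="f"/><path/></svg>';
console.log(svgWarnings(simple)); // → []
console.log(svgWarnings(fancy));  // → [ 'linearGradient', 'filter' ]
```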

I’m especially excited about the text functions, as creating nice text has never been easy in Illustrator or Photoshop. I used to use a program called eDrawMax which was much more user friendly and had lots of cool presets for nice block text. For creating images and banners using text, or comic-style story cards like I’m doing, Kwippe is downright awesome. And to have all of it export to Spine in 1 click is almost mind-boggling. Here are a few screenshots.

Each piece, including individual shapes & words, is animatable and ready to go in Spine – complete with bones.

Wow! I am so excited about this, and it has given me a new enthusiasm to keep building Kwippe while I continue with Learn to Read Visually. More thoughts later!

Sorting to Learn


I watched in amazement as my 4 year old daughter whizzed through level after level of her Kindle app called Kids Learn to Sort by Intellijoy (Android version here). Within a short time, she completed 35 levels, doing most of them multiple times – gleefully. One of the more interesting activities has the child sort Vikings into ships of different sizes and colors. It was fun to watch her learn as she tried the wrong combination at first, then figured out the right one.

It was surprising to me just how fast she was completing the activities, and how easy it seemed for her to remember activities she had already done and figure out how to do new ones. As her auditory processing skills are definitely impaired, we don’t always know what she has understood – so this was a little like watching her mind at work, in a way she can’t normally express.

So in my quest to develop a visual-based reading system, one of my first goals was to harness the power of sorting as a learning tool. I looked through as many iOS apps as I could, as well as through the Kindle and Play stores – and I couldn’t find anything advertising that it teaches letters through sorting of any kind.

I think it may be in part because the idea that phonics is necessary – and/or the only valid approach for teaching letters – is so widespread that few app developers have attempted to create anything different. I found a few autism-related apps that said they taught through a more visual method, but that seemed limited to pictures (which are phonetically linked to the letter, lol), tracing of the letter, and the presentation of whole words.

I’ll be detailing each step of my app here, but for now I wanted to add the graphic from the first shape-sorting activity. As the sub animates in and shoots out the letters, my daughter responds with laughter and smiles whenever the activity comes on – and easily sorts the letters into their shaped portholes… so far so good 🙂


Welcome! About Learn to Read Visually

Hello and welcome! I decided to start a blog so that I could chart my journey with this app, which began when I noticed that my 4 year old daughter, who has auditory processing issues, was nonetheless extremely interested in letters. While language acquisition has been tough for her, her memory and sorting skills are quite fantastic, especially when using a tablet. Shapes, colors, and sizes are very easy for her to remember and sort quickly.

So I began researching non-phonics approaches to teaching reading, and found that good visual reading methods are few and far between. Worse, in my opinion, they mostly rely on whole-word methods, which would be inherently difficult for a 4 year old who struggles with auditory processing.

I read a great PDF by a hearing-impaired man with a PhD, in which he implores educators to focus on visual reading strategies for hearing-impaired students, who often learn to read only AFTER they’ve learned ASL – essentially learning to read “through” ASL rather than visually.

I found another blog where an autistic man described his use of color coding to remember letters – a mechanism he developed as a child. He was over 30 years old before he realized that other people didn’t actually associate colors with letters!

Then I researched grapheme-color synesthesia – people who see letters in colors – and studies on the colors that people naturally associate with certain letters. I found a research piece that included toddlers, and some 40 percent of them chose black for X and white for O. I figured that this research could serve as a starting point for developing an app that uses colors and shapes to map letters in a non-phonics, non-audio way, and for developing a series of animated learning ebooks as well.

So that is how the Learn to Read Visually App was born.

As a developer, one of the most exciting things is finding the perfect combination of tools to build your app, especially when they are ones you haven’t discovered before. After much research and testing, I came up with the combination of Pixi.js and Spine animation as the basis of my app development. In my next post, I’ll talk a little about how I chose these two, and what cool things I’ve been able to do with them!


Thinking of moving to New Zealand!

From Alaska to New Zealand. Huh? Seriously? Well, yes! My husband and I have actually been looking at the possibility of moving to New Zealand for some time now. It started with his interest in the outdoor activities available, as he is a pretty avid outdoorsman who would rather be outdoors most of the time. But then it morphed into a genuine interest in the way of life offered over there. The more we read about the schools, the attitude toward work/life balance, and the opportunity for our kids to meet new people and see a different part of the world, the more we liked the idea.

We read through a lot of blogs and forums, many of them from disgruntled expats who detailed every aspect of their dissatisfaction with the Kiwis and their country. But in the end, many of these complaints could be made about Alaska. We do have a slower pace of life here – and some people can’t take it. You don’t move here to “get ahead” unless you are in the oil business, and the harsh weather and cost of living drive a lot of people out after a short while. But those of us who consider ourselves real Alaskans love it here for the people, the landscape, and the very fact that this isn’t your average city where keeping up with your neighbor is even on your radar.

But as much as we love Alaska, we yearn to explore the world a bit, and to show our kids that there are other ways of life that are equally valid and can be just as fulfilling. In addition, on a personal note, I’d really like to be involved with other people who are doing cool things and making new digital products happen.

The tech scene in Wellington seems to be quite vibrant; whereas the tech scene here in Anchorage barely has a pulse.

So I’ve applied for something called LookSee Wellington, along with some 10 or 20 thousand other developers, lol. I heard about it through the Immigration New Zealand mailing list, which I’ve subscribed to for a while now, but it was also publicized in places like Forbes as “winning a free trip to New Zealand” – which makes it a great opportunity for somebody genuinely interested in living there, or a bit of a scam for somebody just after the free trip. Hopefully they’ll figure out how to separate the wheat from the chaff and weed out the people who aren’t actually interested in building a life in New Zealand. I worry that because I’ve been self-employed, employers won’t believe I actually built all this stuff myself! Just look at my GitHub profile, peppered with all the tech help I’ve sought when learning new libraries 🙂

But as the chance of being selected for one of 100 seats amidst fierce competition doesn’t seem great, we’re plowing ahead and looking at any jobs we can find for either of us, and we hope to post some of the developments, if any, here on this blog.

Developing a Visual Reading Approach

This isn’t my first app that targets reading, but it is the first time I’ve worked hard to develop a unique methodology. I published a mobile app a while back using Adobe AIR called Learn to Read Kids! to several app stores – and my daughter actually still uses the app on my phone. However, this app is very traditional and uses the phonics approach.

As I switched from Flash to JavaScript shortly after publishing this app, I just wasn’t able to keep up with updating it, which was a shame because it provided a nice array of activities (alphabet learning, word scramble for sight words, audio-match word creation, multiple-choice cards, a word-builder slot machine, and a sentence-builder-with-images game).

But in the end, this app falls way short of providing the type of visual method needed for people with hearing loss or auditory processing issues. Phonics-based programs assume that the child should learn letters by associating them with a sound – there is no real way around that. If a child struggles with making that association, phonics programs are doomed to fail them, or at least cause them a lot of grief.

But what is the alternative?

This points to a question that sounds simpler than it is: how do we actually learn to read?

I was a little surprised to see that there isn’t any absolute consensus on this, and as such I think there is still plenty of room for innovation in the field. The mapping of letters to something our brain can hold onto seems to be key, and if the audio channel is not working correctly, that leaves the visual channel as the primary means of mapping this information. At some point I’ll get more organized and add links to the academic PDFs and programs I waded through as I began building the groundwork for my program and its methodology, but the entire thing rests on the idea that you can bypass audio encoding and encode letters visually as the foundation for reading.

As I spent the better part of the summer working on Smarty Decks, a kanji learning application I developed with my now 8-year-old daughter Julia, my first thought was to throw the English alphabet into that app and try to map a unique color to each letter. The app isn’t really built to teach the English alphabet, but it gave me a chance to at least lay out the letters and compare the colors side by side in tiles. Here is what this first pass looked like:


What I quickly realized was that my 4 year old would never be able to learn these colors through flip cards and matching in an app built for data rather than as an immersive learning game! So I began developing a game-style app, and also realized that as important as color could be, identifying letters by their shapes could be even more important. I looked through the letters carefully and realized that there are 4 categories of shapes you can divide them into:

  • balls
  • sticks
  • thumbs
  • triangles

I’m trying to use terms that are a little more fun, ones that can also be used in contexts of play and animation. Some letters could actually be placed in more than one category, but if you look carefully at the letters, these shapes begin to jump out at you!

Balls (9): a, b, c, d, e, g, o, p, q
Sticks (6): l, i, j, r, f, t
Thumbs (5): u, n, m, s, h
Triangles (6): v, w, k, z, x, y
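In code, this grouping is just a small lookup table. Here is a sketch of how the four categories above might be represented and inverted into a per-letter lookup for a sorting game – illustrative only, not the app’s actual data structure:

```javascript
// The four letter-shape categories, exactly as listed above.
const SHAPES = {
  balls: ["a", "b", "c", "d", "e", "g", "o", "p", "q"],
  sticks: ["l", "i", "j", "r", "f", "t"],
  thumbs: ["u", "n", "m", "s", "h"],
  triangles: ["v", "w", "k", "z", "x", "y"]
};

// Invert to a letter → shape lookup for checking drops in the sorting game.
const shapeOf = {};
for (const [shape, letters] of Object.entries(SHAPES)) {
  for (const letter of letters) shapeOf[letter] = shape;
}

console.log(shapeOf.o); // → balls
console.log(shapeOf.t); // → sticks
console.log(Object.keys(shapeOf).length); // → 26, every letter is covered
```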

So the rather boring presentation above in the data-centric app became something much more interesting in my gamified app:


I’ll detail the steps my app uses to try and teach these letters in a later post!

Why Build Your App with Pixi.js?

For a long time, Macromedia (and later Adobe) Flash was really the only kid on the block. If you wanted to build a rich, interactive website or a software-like experience in the browser, Flash was it. Later Microsoft’s Silverlight came along, but not many developers I know ever actually used it. I suppose Java was there too, but it couldn’t compare to Flash in terms of ease of creating an engaging UI and content delivery system for the web.

But then the iPad was born, and HTML5 emerged. Steve Jobs made it his personal mission to kill Flash – for reasons that were actually pretty good, looking back, although it sure didn’t seem so to this Flash dev at the time! So most of us made the switch to JavaScript 4 or 5 years ago.

There was actually much to love about JavaScript. It had enough similarities to AS3, both being based on ECMAScript – and there was a real beauty in having all of your code in one spot, with design rules in another (CSS). Who misses having to dig through code on all of your MovieClips to figure out where you left an unwanted piece!

But the combination of HTML/JavaScript/CSS has never come close to Flash in terms of providing a unified development and compilation environment with animation, transitions, and audio all built in. With different browsers interpreting things differently, and CSS being something that can become a downright mangled beast, it’s easy for a JavaScript app to become a bit of a design nightmare – and that’s before you even get into memory management.

For data-centric apps, I use a combination of Backbone and Dust.js, with Bourbon/Sass for the CSS. But for a graphical app, these tools would be sorely lacking. So I began researching HTML5 game development and what people use to build immersive games published via the app stores as well as the browser. This led me to game engines such as Phaser, melonJS, and Kiwi.js – but after experimenting with them, I decided they didn’t allow me enough control, and they added the bloat of things I didn’t need. I decided that a game-based learning app really comes down to 3 visual elements:

  • graphic rendering of scenes & elements
  • particle generation for effects
  • animation

The centerpiece of many kids’ learning apps is buttery-smooth drag and drop. You see it all over many of the gorgeous iOS apps for kids. I was able to achieve this using a combination of GreenSock’s GSAP and Pixi, thanks to a sample generously built for me by Blake over at GreenSock.
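Pixi does the rendering and GSAP the tweening, but the decision of where a released piece lands can be sketched without either library: snap to the nearest drop target within a radius, otherwise signal a return-home tween. This is a hypothetical version of that logic, not the code from Blake’s sample:

```javascript
// On drag release, find the nearest drop target within snapRadius.
// Returns the matched target, or null (meaning: tween the item back home).
// Framework-free sketch; in the app, positions would come from Pixi
// display objects and the snap/return moves would be GSAP tweens.
function resolveDrop(item, targets, snapRadius) {
  let best = null;
  let bestDist = Infinity;
  for (const target of targets) {
    const dx = target.x - item.x;
    const dy = target.y - item.y;
    const dist = Math.sqrt(dx * dx + dy * dy);
    if (dist < bestDist) {
      bestDist = dist;
      best = target;
    }
  }
  return bestDist <= snapRadius ? best : null;
}

const portholes = [
  { name: "balls", x: 100, y: 100 },
  { name: "sticks", x: 300, y: 100 }
];
console.log(resolveDrop({ x: 110, y: 95 }, portholes, 40)); // the "balls" porthole
console.log(resolveDrop({ x: 200, y: 300 }, portholes, 40)); // → null, too far away
```

On a match, a short GSAP tween snapping the sprite to the target’s position is what gives that buttery feel; on null, the same tween sends it back to its start point.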

Pixi offers a particle plugin and designer, extremely fast JS graphics rendering, and a well-documented Spine runtime that makes animating easier than it was in Flash. In addition, I feel like I have graphical control over my app for the first time since switching to JavaScript. For a Flash developer, working in Pixi is a bit like coming home, which is just what I needed to help me dive into these new libraries.

I’ll continue this post with specific examples of what I’ve done with these libraries, complete with code samples.