SCIFI User Interface Designer Lukasz Wieczorek
Posted on April 21, 2013 by lukasz in design, HMI, Interaction Design, NUI

The hybrid. Why code + design?

So I’m sure you, like a lot of designers, are saying “why would I learn to code, I’m a designer…?” First off, I should say it isn’t for everyone, nor should every designer be expected to know how to code. All I am saying is that there are definite advantages to being a designer who knows code. There are also deeper-rooted problems, like the lack of collaboration in some studios. I prefer collaboration when creating an app, and it’s definitely a case-by-case situation. Sometimes I like to prototype something out with hacky, fast code and hand it over to a coder so they can make it more refined. That way my intention is conveyed.

“The best way to predict the future is to invent it.” – Alan Kay, 1971


I’m sure you’ve heard this quote before, but I think there is a lot of merit to it. It could apply to most design and art, but here I will be speaking to code + design in user interface design.

There are two big reasons I support designers using code.

  1. Speeds up your process (namely with ExtendScript)
  2. Helps you to understand the industry and where it is going. This is the one we will focus on.

It would be silly to think that our jobs as UI designers will stay the way they are forever. In fact, they are already changing. Currently we are undergoing a transition from screen-based UI to UI + NUI. Though it’s apparent that things like the keyboard will take a while before they are phased out, projects like DIY EEGs might make that happen sooner than you think.

I’ve even been messing with a few of my own hacks for it, and it is definitely teaching me new ways to think about UI.


These have quite a way to go before they can be used as actual input devices, but they are definitely getting close. Knowing that I have access to the raw data, including a signal for when someone is concentrating, I can utilize it directly in an application. Had I not done some intimate coding with this, I would have never known I could access that data. Sure, you can read about these things, but I would say it is like drawing: sometimes you have to do it to really understand its potential.
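To make that a bit more concrete, here is a minimal sketch of what “concentration as an input” could look like. It assumes a DIY headset that streams a single attention byte (0–100) over a serial connection; that one-byte protocol, the pin, and the threshold are all assumptions for illustration, not how any particular headset actually works.

```cpp
// Hypothetical Arduino sketch: treat a DIY EEG headset as a serial device
// that streams one "attention" byte (0-100), and use it to drive a UI cue.
// The one-byte protocol below is an assumption for illustration only.

const int kLedPin = 13;              // on-board LED standing in for a UI element
const int kAttentionThreshold = 60;  // arbitrary "concentrating" cutoff

void setup() {
  pinMode(kLedPin, OUTPUT);
  Serial1.begin(9600);               // headset on the hardware UART (Leonardo/Mega)
}

void loop() {
  if (Serial1.available() > 0) {
    int attention = Serial1.read();  // 0-100 under the assumed protocol
    digitalWrite(kLedPin, attention >= kAttentionThreshold ? HIGH : LOW);
  }
}
```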

Don’t ask me how the mouse is still here, but the reason the keyboard hasn’t been phased out is that it’s a private form of communication, among other things. There haven’t been many alternatives that are very promising. Things like voice-to-text are cool, but not always a reasonable way to type. The mouse seems flat out archaic at this point, though. You see buttons being packed onto the interface when it’s really an altogether bigger problem that needs to be solved. It reminds me of the sticky-note hack, which is a simple way to tell if there are problems with an interface: if people are adding sticky notes all over a machine, there is something going very wrong with that UI.


A great example of adding an interface where it shouldn’t be is the scroll wheel, or the back-and-forward buttons they mash into a mouse. I wrote a tutorial a while back on how to make a gesture-based scroll wheel, because I felt the scroll wheel on my mouse was just ridiculous and not very ergonomic. I expect this method will soon be replaced by my Leap Motion, but a proximity sensor is definitely the cheaper route, so I feel it is still a good solution depending on the circumstance.
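The tutorial has the full details, but as a rough sketch of the idea (not the original code): on an Arduino Leonardo or Micro, which can act as a USB mouse, an analog proximity sensor can be mapped straight onto scroll events. The pin and thresholds here are placeholders you would tune by hand.

```cpp
// Sketch of a gesture scroll wheel: an analog IR proximity sensor on A0
// drives scroll events on a Leonardo/Micro acting as a USB mouse.
#include <Mouse.h>

const int kSensorPin = A0;
const int kNearThreshold = 500;  // hand close to the sensor -> scroll up
const int kFarThreshold  = 300;  // hand a bit farther away  -> scroll down

void setup() {
  Mouse.begin();
}

void loop() {
  int reading = analogRead(kSensorPin);  // 0-1023; higher usually means closer
  if (reading > kNearThreshold) {
    Mouse.move(0, 0, 1);                 // third argument is the scroll wheel
  } else if (reading > kFarThreshold) {
    Mouse.move(0, 0, -1);
  }
  delay(50);                             // crude rate limit
}
```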

I get very excited when I see things like the Leap Motion. I got my developer kit about 2-3 months ago, and it is quite an amazing little device. Check it out if you haven’t already.

 

 


Instead of us learning how to interact with machines, the machine should learn to interact with us, explains Olof Schybergson.

“If you’re not at least thinking about designing for these types of services, you’re already late,” Epps said in 2012. “Think about the media industry and how difficult the mobile and tablet revolution has been for them. What are they going to do when the devices we interact with don’t even have screens?”

Though I do not agree that all interfaces should be NUI or invisible, I do think there is a time and place for them, and we have to be able to design for them. We work well with images and metaphors, so it would be strange to take those away from UI.

So you might be asking, what does all this have to do with design + code? Well, as a designer I fight with this a lot, for fear of becoming a jack of all trades. But I have learned recently that working with NUI devices gives me a different perspective even on tablet-based UI. It helps me think of more natural ways to interact with these devices. An example would be using the camera for gestures on an iPad, or the microphone on a phone, along with the speaker, to tell distance. What if your app needs a gesture for navigating forward and backward instead of left and right? Using a sensor of this kind would be perfect in that application. The last example would work similarly to the sonar sensors one might see on an Arduino:
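As a minimal sketch of that last idea, assuming an HC-SR04-style sonar sensor on pins 9 and 10 (my pin choices and thresholds, purely for illustration), moving a hand toward or away from the sensor could be read as a forward/back gesture:

```cpp
// Sonar "forward/back" gesture sketch: measure hand distance with an
// HC-SR04-style sensor and report whether the hand moved closer or farther.
const int kTrigPin = 9;
const int kEchoPin = 10;

long readDistanceCm() {
  digitalWrite(kTrigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(kTrigPin, HIGH);            // 10 microsecond ping
  delayMicroseconds(10);
  digitalWrite(kTrigPin, LOW);
  long duration = pulseIn(kEchoPin, HIGH); // round-trip echo time in microseconds
  return duration / 58;                    // roughly 58 us per cm of distance
}

long lastDistance = 0;

void setup() {
  pinMode(kTrigPin, OUTPUT);
  pinMode(kEchoPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  long distance = readDistanceCm();
  if (lastDistance > 0) {
    if (distance < lastDistance - 5)      Serial.println("forward"); // hand pushed in
    else if (distance > lastDistance + 5) Serial.println("back");    // hand pulled away
  }
  lastDistance = distance;
  delay(100);
}
```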

How come we don’t see more of these kinds of interfaces? Probably because the designer + coder paradigm is often shunned. Being able to understand the core of how these things work really opens your imagination for how you can use UI in new and exciting ways. That is why I hold that code and design really go hand in hand, even though it is not for everyone.

We need to think forward as UI designers and not be stuck in the now, because there will be a day when we look up and realize we’ve already been left behind.
