Scenarios: “User First, Designer Second”

Wednesday's class was another very informative lecture. I personally enjoyed it because I had just applied the knowledge from the readings to another team-based project earlier in the week. The two concepts I'm referring to are a persona's mental model and the implementation model of a design.

Implementation model vs. mental model

The topic of class was scenarios, and in our reading Cooper stresses the integration of personas and goal-based design into scenarios. Scenarios are essentially hypothetical situations used by designers to explore how users will react to their product. They're an integral part of user research design, and one major contribution scenarios make is showing whether or not there is a disconnect between Cooper's two models: the implementation model and the mental model.

The mental model is the user's mental representation of how a product should work, with all the desirable, goal-meeting features integrated. The implementation model is the actual design of the product that the designer focuses on, and it has many features the user will never even notice (if the designer does their job).

In the design process there is a good chance the designer doesn't know exactly what the user wants or how to implement it. This can push the designer to create their own version of the product, one that doesn't appeal to the user or meet their needs! Hence the use of scenarios is important in bridging this gap.

As my colleague Spencer Edwards said, user design should be focused on one single sentence: “User first, designer second.”

What a strong statement, and so exceptionally true. This is why personas and scenarios are so important. It's not enough for a designer to do guesswork; real research should go into these simulations to deliver what a user actually wants. Otherwise the product may end up designer-centric, and like I've said before, it doesn't matter how awesome your product is if no one will buy it.

Check out more of Spencer Edwards on his webpage: http://www.skedwards.com/ or his blog http://en.wordpress.com/read/blog/id/56880024/

Thanks for reading! Seeya next time!

 

Dat’s Bad Design: Backwards Compatibility

A new generation of consoles is on the horizon, which means gamers are asking only two questions: “What are the new games?” and “Is it backwards compatible?” Well, maybe that's a bit of an exaggeration. But whenever new hardware or software comes out, users and developers alike have to consider what came before.

Comic: Bonkers World, 2011

When I say backwards compatibility, I'm talking about the ability of new hardware or software to support applications that ran on older hardware or software; specifically, older video games being supported on new consoles.

The current generation of consoles has gone back and forth on backwards compatibility. The Wii always supported GameCube games, and its built-in Virtual Console let gamers purchase old NES and SNES games and other Nintendo-licensed titles.

The PS3 started out supporting all PS1 and PS2 games. But later models of the PS3, lovingly called the Slim, did not. The old PS3 “Fats” had extra hardware that enabled backwards compatibility with older games; the new Slim models cut out that hardware (and cut down the price), and Sony opted to sell PS1 games through the online PlayStation Store as digital downloads.

The Xbox 360 didn't even go that far. Only certain original Xbox games were compatible with the Xbox 360, and the supported library was a mixed bag. Xbox Live Arcade, though, had a collection of classic Xbox games that users could buy and download to play again digitally.

The new generation of consoles has now fully embraced this idea of “digital backwards compatibility.” Instead of building hardware that can read old discs, the newest consoles support old titles by selling them through their digital stores. Sony has promised to put PS1, PS2, and PS3 classics on the PlayStation Store while dropping the ability to read PS3 discs, and the Wii U has taken the same route, though at least it still reads Wii discs. The Xbox One will eventually have a cloud service that may or may not support old titles.

While it's nice of the game industry to continue supporting my favorite classic titles with digital ports and remasters, I can't help but feel saddened when I look at my expansive collection of now-obsolete games. Some of my best memories were with PS2 games, and I used to treasure my old PS3 Fat, which let me play my entire PS2 collection. But when its Blu-ray drive finally burnt out and I had to buy a PS3 Slim, my PS2 collection basically became unplayable.

This isn't entirely the fault of console makers. They need to keep optimizing their hardware to compete with each other, and the same goes for software developers, who need to push the limits of their programming however they can.

But I feel this is a huge problem in the game industry, and in many cases it really stops me from buying new consoles as soon as they come out! I just don't want so many consoles sitting around on my TV stand, having to unhook one to plug in another...it's inconvenient. I'd rather wait until the newest consoles have a truly decent lineup before switching over.

The reason I even thought of this recently was the release of iOS 7. I'm a pretty avid gamer, and I play a lot of mobile games as well. I recently upgraded to iOS 7 and, well, what do you know, a lot of my old applications were broken or needed updating!

Games on iOS 7 suffer from a shrinking problem

One common problem is that iOS 7 renders screen sizes differently, which has caused some games to display at a much, much smaller resolution while their GUI control elements continue to respond at the original size. That's a huge disconnect, and a real problem with the visual interface.
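To make the mismatch concrete, here's a minimal sketch (plain Python, not actual iOS code, and the design resolution is a made-up example): the idea is that the game view and its GUI controls should be scaled by the same factor derived from the real screen size, so one can never shrink while the other stays full size.

```python
# Minimal sketch, not real iOS code: scale the game layer and the HUD with the
# SAME factor computed from the actual screen, so they can't drift apart.

DESIGN_WIDTH, DESIGN_HEIGHT = 480.0, 320.0  # hypothetical resolution the game was authored for

def uniform_scale(screen_width: float, screen_height: float) -> float:
    """Pick one scale factor that fits the design resolution into the screen."""
    return min(screen_width / DESIGN_WIDTH, screen_height / DESIGN_HEIGHT)

def layout(screen_width: float, screen_height: float):
    s = uniform_scale(screen_width, screen_height)
    game_size = (DESIGN_WIDTH * s, DESIGN_HEIGHT * s)
    # The buttons reuse the same factor instead of assuming the old screen size.
    button_size = (80 * s, 30 * s)
    return game_size, button_size

# e.g. layout(568, 320) scales the game and its controls together on a taller screen.
print(layout(568, 320))
```

The broken games I saw feel like they scaled one layer and not the other, which is exactly what this kind of shared scale factor is supposed to prevent.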

Luckily, iOS developers are an active community, and updates are pretty frequent. Still, it's a little annoying to have to sit around and wait for a patch to make a game compatible with a new operating system.

And that inability to have foresight, in my opinion, is bad design.

Thanks for reading! Seeya next time!

 

 

Personas: Who Is the User and What Do They Do?

Class on Wednesday focused on personas. I think personas are one of the most important, and fun, parts of user research design. Researchers create personas to typify their audience and define user sets. Personas aren't just stereotypes of user groups; they include goals, needs, and interests.

Creating a persona comes late in the user research process and usually happens only after data has been collected on possible users, in the form of observations and interviews. From the gathered data, an affinity diagram is created, organized, and melded into one or more personas.

On Wednesday the class was split into groups, and we were tasked with analyzing survey responses to create an affinity diagram. The survey had asked what the respondent's BEST learning experience entailed. There were over ten responses to the survey, yet interestingly enough a lot of common themes popped up. Below is an image of the affinity diagram my group came up with.

It’s crowded!

The affinity diagram consisted of Post-it notes with ideas we pulled out of the survey responses. We organized these ideas into common categories, such as a safe learning atmosphere or the professor's qualities.
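If you wanted to do the same grouping in code, it's basically a bucket-by-theme exercise. Here's a rough Python sketch of what we did by hand; the notes and theme labels below are made up for illustration, not our actual data.

```python
# Rough sketch of affinity diagramming: pull short ideas ("notes") out of
# survey responses, then group them under shared themes.
# The notes and themes here are invented examples.
from collections import defaultdict

notes = [
    ("felt safe asking dumb questions", "safe atmosphere"),
    ("small group discussions", "safe atmosphere"),
    ("professor was enthusiastic", "professor qualities"),
    ("professor gave real-world examples", "professor qualities"),
    ("hands-on projects", "active learning"),
]

affinity = defaultdict(list)
for idea, theme in notes:
    affinity[theme].append(idea)

for theme, ideas in affinity.items():
    print(f"{theme}: {ideas}")
```

The hard part, of course, is deciding the themes in the first place, which is why we did it with sticky notes and arguing instead of a script.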

From the affinity diagram we were able to create a small persona: Ken! Your average 21-year-old male college student.


Ken, our persona, had a lot to say about his learning experience. The data we gathered from the responses helped form this little guy. The ultimate task of the assignment was to design learning software that provided the best possible experience. The little bubble holds Ken's desires for his learning experience as formulated from his persona, and I feel confident that we could make appealing software using this persona.

Though maybe we should do a little more research and build multiple personas with overlapping wants. Still, it was a good start, I thought, and definitely very informative. Personas are important because of the information they give you at face value about the wants and needs of your potential audiences.

And as I’ve said before, you don’t have much of a product if no one’s going to buy it.

Thanks for reading! Seeya next time!

That’s Good Design: Console Wars, Listening to the Crowd

This year is a big year for gaming because, after one of the longest console generations in history, the two great superpowers of the console wars have revealed their big guns: Sony's PS4 and Microsoft's Xbox One. Sony and Microsoft have been dropping teasers about their newest hardware since early 2012, and gamers worldwide have been patiently waiting through the release of Nintendo's Wii U to find out what the next generation of gaming will look like.

After months of teasing and initial fighting, the true battle between Sony and Microsoft began at E3 2013 in June. In the midst of whispers from haters and lovers alike, Microsoft held their press conference and opened fire with the Xbox One.

The big announcements at the Xbox One conference were that the console would always have to connect to the internet to “check in” periodically, which also limited its initial release to a handful of countries, and that it would carry a higher price because it was bundled with the new Kinect 2. The Xbox One also didn't support used games and had an install-based scheme: a game would be installed from its disc onto the Xbox One, and the disc would be one-use only. Images of the Xbox One had leaked to the internet months before, so many people knew what they were getting into long before Microsoft's press conference. Many gamers were worried about the future of gaming under the Xbox One's arguably invasive new features, and there was protest from the community. So when E3 rolled around, it was up to Sony to decide: follow in Microsoft's footsteps and seal the fate of the newest console generation, or take a risk and fight their rival.

And boy, did Sony fight back. The PS4 opened with support for used games, pushing the idea that when gamers buy a game it is their right to do as they wish with the disc. Sony also crushed the idea of always-online authentication and even announced a price lower than the Xbox One's.

Sony listened to the complaints of their users and adapted their system and policies. Over the course of almost two hours, Sony went from dark horse to white knight of the gaming community! The PS4 reveal was a slap in the face to the Xbox One, and gamers jumped ship. Illustrating Sony's tongue-in-cheek battle with Microsoft is the following video, an ad put together and released immediately after the conclusion of Sony's press conference.

Soon after the end of E3, Microsoft announced that they were changing the Xbox One to support used games and were dropping the always-online requirement. Still, it was clear that Sony had won the battle of E3 and had set itself up well for the coming reckoning this holiday season.

It just goes to show that no matter how big you get, you always need to listen to your customers or else your competitor will. Only time will tell if Microsoft’s backpedaling will be enough to get them back on top of the console wars.

All I know is, I’m clearing space on my TV stand for my PS4 this November.

Thanks for reading! Seeya next time!

Oculus Rift: Immersion at its Finest

With the PS4 just released and the Xbox One right around the corner, it's hard to look away from the bloodshed of a long-overdue console war. But let's take a step back from the consoles and instead examine a really cool new technology that may make every gamer's day. It's not a console, a game, or a controller. It's a new display method! Let's take a look at the Oculus Rift!

The Oculus Rift is designed to act as a low-latency VR headset. Users who wear it can fully immerse themselves in first-person games and applications, taking themselves out of the living room and into the virtual world. The possibilities with the Oculus Rift are close to limitless, and it's a really exciting device not just for gamers but also for developers.

Ok, yeah, this thing looks cool, sounds cool, and, well, is cool. But is it practical? As someone who tried out the Oculus Rift recently and discussed it with colleagues, I think I'm ready to pass judgment. Let's keep in mind that the Rift is still in development and testing: the only Rifts available right now come in the Oculus Development Kit, so it's really still in beta.

The Oculus Rift is truly immersive. With its built-in speakers and virtual projection system, I felt completely immersed in the application I was playing (a simulation of a roller coaster). It was awesome! I could look in all directions, and the Rift tracked the movement of my head, allowing me to fully view my surroundings.

But I could only handle the Oculus Rift for a few minutes, because the Rift is unfortunately highly nauseating. With my field of view constantly moving and my body remaining still in my seat, I got dizzy very quickly and soon after felt the urge to collapse in a heap. The Rift is jarring because the motion of the system doesn't match what your body is doing. While I think this is the intended effect for the roller coaster simulation I was playing, I don't think it would be very conducive to a normal first-person shooter.

http://willhubbell.blogspot.com/2013/03/the-oculus-rift-and-receiver.html

The first half of the blog post above covers the problem quite well. Characters in videogames generally move too fast! To keep the action intense, most games run at speeds far higher than normal. A classic example is Unreal Tournament, a fast-paced first-person shooter where the player's character runs at about 30 mph and can turn on a dime in the other direction. That kind of speedy movement isn't something the human body is built to feel.

This isn't an insurmountable problem, though. It just means that game developers will have to take the Oculus Rift into consideration, specifically by creating movement modes that cater to it. Beyond that, games that don't require movement can be ideal for this system. Imagine playing a super-HD version of Galaga where the screen remains relatively motionless, or any other top-down game for that matter. The Oculus Rift doesn't need to be relegated to a certain game genre; it should instead be treated as just another display type, like a TV or monitor. I can think of some games that would be awesome to play up close like that.
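What might a Rift-friendly movement mode look like? Here's a hypothetical sketch in Python, just to illustrate the idea of clamping the same player input to gentler limits when a headset is active. The speed and turn-rate numbers are invented for illustration, not tuned values from any real game.

```python
# Hypothetical "comfort" movement mode for a VR headset: the same player input
# is clamped to slower speeds and gentler turn rates than the default mode, so
# the camera never moves faster than the body can tolerate.
from dataclasses import dataclass

@dataclass
class MovementMode:
    max_speed: float      # meters per second
    max_turn_rate: float  # degrees per second

DEFAULT = MovementMode(max_speed=13.0, max_turn_rate=720.0)   # arcade-shooter fast (made-up numbers)
VR_COMFORT = MovementMode(max_speed=3.0, max_turn_rate=90.0)  # closer to a walking pace (made-up numbers)

def apply_input(mode: MovementMode, speed_input: float, turn_input: float, dt: float):
    """Clamp raw input to the active mode's limits for one frame of length dt."""
    speed = max(-mode.max_speed, min(mode.max_speed, speed_input))
    turn = max(-mode.max_turn_rate, min(mode.max_turn_rate, turn_input))
    return speed * dt, turn * dt  # distance moved and degrees turned this frame

# The same wild input produces a much tamer camera move in comfort mode.
print(apply_input(DEFAULT, 13.0, 400.0, dt=1 / 60))
print(apply_input(VR_COMFORT, 13.0, 400.0, dt=1 / 60))
```

The point isn't the exact numbers; it's that the headset becomes a reason to design a separate movement profile instead of reusing the monitor-speed one.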

I just have to be careful not to rot my eyes, though! Luckily this isn't a Virtual Boy!

The Nintendo Virtual Boy. Oh lord no.

Anyway, I can’t wait to get my own Oculus Rift once the developers get the kinks out of it. Thanks for reading! Seeya next time!

 

Data Collection: Putting the User in U

In class today we discussed a very important part of the design process: the process of designing...itself. Well, to be exact, we discussed the pipeline from ideation to design to data collection to testing to production. There were a lot of steps and a lot of questions that came up during our discussion, so here are my thoughts on today's class.

My question about the design process was how to decide who your audience is. Sure, it's easy enough to conduct an ethnography of your user group (well, maybe a bit more complicated than I make it sound), but how do you even know who your user group is?

Well, to answer this question, our professor gave us an interesting exercise. She asked us all for an anecdote about how we could improve our lives with some sort of technology or study. Our group chose how GPS units have a strange ability to lead their users astray during routing, taking them down dark alleys or running them into dead ends. We had to figure out our user group as well as how we would collect data and study them.

Our strategy was influenced by the readings. We decided our users would be frequent GPS users, and our study would begin with surveys that categorized participants by how they use GPS as well as their initial feelings about it. We would then have our users keep a diary in their car and, whenever they had a session with the GPS, record their feelings, whether frustration or relief. Finally, after a set amount of time, we would conduct interviews with the participants.
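To give a sense of what the diary data might look like once collected, here's a small Python sketch that structures entries and tallies frustration versus relief per participant. The entries and participant IDs are invented for illustration; a real study would obviously have far more fields and far more entries.

```python
# Sketch of structuring diary-study data: each entry records who, which trip,
# and whether the GPS session felt like frustration or relief.
# All entries below are invented examples.
from collections import Counter, defaultdict

diary_entries = [
    {"participant": "P1", "trip": "commute", "feeling": "relief"},
    {"participant": "P1", "trip": "downtown", "feeling": "frustration"},
    {"participant": "P2", "trip": "road trip", "feeling": "frustration"},
]

summary = defaultdict(Counter)
for entry in diary_entries:
    summary[entry["participant"]][entry["feeling"]] += 1

for participant, feelings in summary.items():
    print(participant, dict(feelings))  # e.g. P1 {'relief': 1, 'frustration': 1}
```

A tally like this would then feed into choosing which participants to interview and what to ask them about.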

The design methods we were exposed to were very focused on getting user input. I found ethnographic study particularly interesting; after all, most companies conduct ethnographic studies to improve their products for their users. Or at least the good ones should. I think I will end up using an ethnographic study in my thesis eventually, and that should be interesting.

Thanks for reading! Seeya next time!

3D Printing and 3D Scanning: NOW IT GOES BOTH WAYS

3D printing has been a pretty hot topic in the technology sector since the introduction of domestically priced products. 3D printers, once expensive and locked away in industrial settings, are finding their way into people's homes and hands. This is thanks to initiatives taken by companies like MakerBot and the makers of the Form 1, which was successfully Kickstarted a year ago. Both the MakerBot and the Form 1 have started landing in people's workshops, and the possibilities are very close to endless. Below is a picture of a custom chess set I 3D printed on the MakerBot over the summer.


I'm not much of a modeler, though. The MakerBot 3D printer works by reading STL files or Maya models into its Replicator software and then sending the model's 3D information to the printer. For the chess set above, I was able to find the STLs for free online and print them. Normally, though, you would need some modeling experience to really get your money's worth from a 3D printer.
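If you're curious what's actually inside those STL files, an ASCII STL is just a long list of triangular facets, each with a normal and three vertices. Here's a tiny Python sketch that counts the facets in one file; the filename is just an example, and binary STLs use a different fixed-size layout that this doesn't handle.

```python
# Tiny sketch: count the triangular facets in an ASCII STL file.
# (Binary STLs have a different, fixed-size layout and would need separate handling.)
def count_ascii_stl_facets(path: str) -> int:
    facets = 0
    with open(path) as f:
        for line in f:
            # Each triangle starts with a line like: "facet normal nx ny nz"
            if line.strip().startswith("facet normal"):
                facets += 1
    return facets

# e.g. count_ascii_stl_facets("rook.stl") might report thousands of triangles
# for a single chess piece (the filename is hypothetical).
```

That triangle soup is all the printer software needs to slice the model into layers.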

Not any more.

MakerBot revealed their first desktop 3D scanner, the Digitizer, at the end of August, and it is...well...pure magic. Now you can truly replicate anything with a Replicator/Digitizer combination! At least, anything that fits in the printer and scanner. And there's no need to be able to model the object you want to replicate.

The possibilities now feel truly limitless with this innovation. One application I can think of, just looking around my room, is replicating dice. As a gamer, I can always use more dice, but dice are expensive! Well, no more with the MakerBot Digitizer!

For more info about MakerBot, their site has its own blog that is regularly updated with cool projects involving 3D printing!

Here's one of their latest posts, where a sculptor creates a head in clay, scans it, and prints it for comparison!

http://www.makerbot.com/blog/2013/09/04/digitizing-clay/

The MakerBot Replicator and Digitizer are at the top of my list of awesome gadgets, and they both have hefty price tags! But here's to a future where they fall into my hands like so many other technologies!

Thanks for reading! Seeya next time!

Let’s Look At: The Google Glass, Explorer Edition

Augmented reality isn't the future! It's now! Or at least that's what Google wants you to think with its shiny new work in progress: Google Glass! I was recently loaned a unit of this innovative new technology and got to test-run it to determine whether it was worth developing for.

Well, where do I start? Let’s take a look at the Google Glass, Explorer Edition.

The Bad

The Google Glass

To begin with, Google Glass is not what I expected from a future with augmented reality. Its appearance is just, well...not ugly...but not final. I like the titanium core design and the sweet, almost Dragon Ball Z scouter-esque box that serves as the main visual interface. But the plastic bridge clasps are a bit uncomfortable, and I almost feel silly with only half of the real estate of the glasses being used. For balance reasons I can see why the left side exists, but it's just so barren. Maybe having two screens would be weird, but I would almost prefer that to the minimalist view one screen gives me.

And minimalist it is. I slipped Google Glass over my regular eyeglasses, and it was pleasing to find that Glass is glasses-friendly. The glass cube fit snugly over my regular glasses, and the titanium frame is shaped to accommodate them. Unfortunately, the Glass unit itself isn't very receptive to adjustment. Many times I found the display to be out of focus or outside my comfortable viewing position, and my attempts to adjust it just resulted in it returning to the position it was in when I first put it on.

The actual glass itself wasn't very vision-friendly. I had to really focus on the image displayed in front of my eye, or close my unused left eye, to see it properly. Not to mention it was always at the edge of my vision. It didn't help that my eyes could tell there was a physical layer of corrective lens in front of the image, and they were hurting after a few minutes of use.

The above is a mildly informative tutorial for Google Glass...and that's all you get with this thing out of the bag. To be fair, maybe Glass came with an in-package instruction manual that I was just never handed, but right away Google Glass is not the most intuitive thing I've ever used. The system of swiping left and right to shuffle through screens is very simple to use, but many times I found myself bumping up against the end of the screens or forgetting how to get back to the home screen from a selection I had made.
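To show why that's disorienting, here's a toy model of the interaction in Python. This is emphatically not the real Glass software, just a sketch of a bounded row of cards you swipe through, with an explicit "go home" escape; the card names are made up.

```python
# Toy model (not the real Glass software) of swipe navigation: a row of cards
# paged left/right, with hard bounds and a "go home" escape hatch. Silently
# stopping at the ends is exactly where I kept getting confused.
class CardNavigator:
    def __init__(self, cards):
        self.cards = cards  # e.g. ["home", "weather", "photos", "settings"] (hypothetical)
        self.index = 0      # start on the home card

    def swipe_forward(self):
        if self.index < len(self.cards) - 1:
            self.index += 1
        return self.cards[self.index]  # stays put at the last card

    def swipe_back(self):
        if self.index > 0:
            self.index -= 1
        return self.cards[self.index]  # stays put at the first card

    def go_home(self):
        self.index = 0
        return self.cards[self.index]

nav = CardNavigator(["home", "weather", "photos", "settings"])
print(nav.swipe_forward(), nav.swipe_forward(), nav.swipe_forward(), nav.swipe_forward())
```

When hitting the end of the row gives no feedback, and "home" is several swipes away, you end up doing exactly what I did: swiping in circles.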

The Good

That being said, I felt very comfortable just having Google Glass over my eye. Tapping the side of the Glass to select items felt satisfying, and the frame has an integrated sound system that projects sound directly to you and only you. The voice recognition system is also pretty strong. From the home screen, you're an “OK, Glass” away from googling good Chinese restaurants around your location or finding out the height of Robert Downey Jr. Taking pictures was also very easy; the Glass's camera is high quality and can be used hands-free. Glass can also record video, and watching videos on the Glass while real life happens around you is a surreal but cool feeling.

The Verdict

But Glass is very, very restricted, and it doesn't do what people want it to. The Explorer Edition doesn't have an app store implemented yet, and most of the apps that have been made are hacks of the core system to allow for things like wink-triggered pictures. There's nothing like an AR-enhanced game or system yet that's native to Google Glass. Of course, Glass is a nascent technology. It's practically an alpha build. It needs work and more support, and maybe then it'll become a viable piece of hardware in today's market.

Until then, I'm preordering an Oculus Rift! Thanks for reading! Seeya next time!

Three Designs and One GDD

In class today we discussed a number of design methods and the processes that go into them. The three major design processes are User-Centered Design (UCD), Activity-Centered Design (ACD), and Goal-Directed Design (GDD). Of course there are plenty of other design processes, such as co-design, but these are three major ones and I feel they're pretty important. Why are there so many design processes in the first place? Designers and users are different, and many of the readings we did for class presented research on different design methods because of this disparity between designers and users. What works for one user group may not work for another.

Of the design processes, the one I was most interested in was Goal-Directed Design. Coined and pioneered by Alan Cooper, Goal-Directed Design focuses on the goals of the user above everything else. Below is a chart of the GDD process, from research through final presentation.


As a software developer, I find it more important to consider a product's goals than just its usability for the end user. While this isn't necessarily the framing of GDD, Alan Cooper's focus is not just on identifying the end product of the software but also on building software that adapts to respond to users and their needs.

Functionality and usability both play into GDD software. A good analogy is a ship's captain steering toward a point on the horizon: the ship veers off target because of wind and tide, but the captain keeps making corrections to make sure it reaches its destination.
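You can even write the captain analogy down as a tiny feedback loop. This Python sketch is purely illustrative (the headings, drift range, and correction factor are all invented): every iteration, outside forces push the heading off course and the captain corrects a fraction of the error, so the ship still converges on its goal.

```python
# The captain analogy as a feedback loop: drift knocks the heading off course
# each step, and correcting part of the error each step still reaches the goal.
# All numbers are made up for illustration.
import random

def steer(target_heading=90.0, steps=20, correction=0.5, seed=42):
    random.seed(seed)
    heading = 60.0                        # start off course
    for _ in range(steps):
        heading += random.uniform(-5, 5)  # drift from wind and tide
        error = target_heading - heading
        heading += correction * error     # correct a fraction of the error
    return heading

print(round(steer(), 1))  # ends up close to 90 degrees despite the drift
```

That's roughly how I read GDD: the product keeps getting nudged back toward the user's goal as research and feedback come in, rather than being aimed once and left alone.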

Users aren't always the target audience. In a business or professional work environment, software sometimes needs to meet the needs of a development team and has to follow company standards. Still, there are places where user-centered design comes into play even in these cases, for instance in the reading on life scientists and Omera.

Design processes are as varied as the scenarios they are used in. It takes a well-rounded designer with knowledge of a variety of methods to design the right product for a given problem.

Here’s a link to a blog post summarizing Alan Cooper and his views on GDD. I found it a nice read and quite useful!

http://www.dubberly.com/articles/alan-cooper-and-the-goal-directed-design-process.html

Thanks for reading! I’ll seeya next time!