Archive for the 'Requirements' Category

Putting the Requirements Doc in its Place

I sat in on a webinar today entitled “A Roadmap for Building the Right Solution” (slides are now posted). It was sponsored by iRise, a company that I greatly admire (even though I didn’t choose their prototyping tool). At iRise, as at Axure (the tool I DO use–almost every day), they have some great thinkers who know that people are visual in nature and, as Kurt Bittner, the speaker in today’s webinar, said, requirements documents are rarely read. It was a refreshing reiteration of the value of visualizations in software development. To build a software solution that truly satisfies the business need, you need a communication vehicle that’s visual–and a requirements document just isn’t visual enough to show people what you mean. That’s where rapid prototyping with a tool like Axure or iRise comes in.

Requirements documents serve an important purpose, but it’s not served by the way they’re usually employed. Typically, businesses write up requirements documents and hand them over for someone else to “review and comment” on. For someone to really review and comment on a requirements document, they have to not only read it, but also form a mental picture of what the writer had in mind, and then structure some sort of response to that. All of this takes a lot of time. And time is money. If you write up your requirements and then spend a lot of time (a.k.a. money) reviewing them, you may still be no closer to solving the business problem than you were when you started. You may know how fast the solution has to go and how reliable it has to be, but does it really answer the right questions? Do the right things? Is it usable? You can’t really answer that until you have something visual in front of you.

We should use requirements documents–absolutely we should! But they should be the formalized notes stating the decisions resulting from our discussions. Use other methods (with a visual, interactive prototype early in the process) to get to the decisions. If we do it this way, we get a lot closer to delivering what the client really wants/needs, rather than what the client SAID they wanted.

I am involved in quite a few projects, all with the same client company, and it’s amazing how different the requirements process has been for each one. There was always a requirements document, but we always introduced a visualization early in the process. We took what our clients asked us for, interpreted it by creating a visualization of the solution in Axure, then used that as the focus for our discussions. Usually that resulted in some changes to the requirements document, and we moved on from there. Once we agreed that it was what we wanted, the engineers built to the prototype, with modifications as noted in the requirements document. The most successful (and enjoyable) project we’ve had started with a very short description of requirements, from which we built a visual prototype that showed the solution in action. That prototype was the center of our discussions as we molded the solution into a viable result, simplifying further as we came to understand the problem more fully and cutting out unnecessary elements to make the interface even easier to use.

They have posted the presentation from the webinar at irise.com. This prototype-first approach is a drastic change from waterfall software development, where the requirements are defined in great detail before anything is ever prototyped, and some people might find it a dramatic shift in thinking. But it’s much more successful. Because of our visual nature, humans need a visual communications medium to SHOW us the idea, not just tell us about it. Words are great, but pictures, especially interactive pictures that show the behavior of the solution, go a lot further toward an effective response to the business need.

Alan Cooper’s talk on how User Experience Design fits in Agile Development

Agile programming is big now, and I know several programmers who subscribe to it. But how do usability and user experience design fit in an agile world? Many usability experts couldn’t tell you because they’re still trying to figure it out themselves. I’ve long been arguing that we need to stop and figure out what the user really wants and design the user experience first–then hand it over to the programmers to build. Does that work with Agile programming? Why wouldn’t it? I’ve won part of that argument in our company. They let me design the user experience, but there’s no time for user research, user personas, or usability testing of the interface before I have to hand it over. They’re in such a rush to meet the deadlines set by the “board”.

But, thanks to a thread I was following on CUACentral (a social networking site hosted by Human Factors International for their Certified Usability Analysts), I ran across this gem from Alan Cooper, one of the leaders in Interaction Design and the author of The Inmates are Running the Asylum. It’s his keynote speech, complete with all his notes, that he gave at Agile 2008.

Enjoy!

The value of good requirements

Over on the iRise blog, there’s a great article about measuring the hidden return on investment (ROI) of good requirements definition. David Walker describes how companies are focusing on “reduction in rework” and points out that the after-effect of reducing rework is accelerated time to market, which has several significant (monetary) benefits. David adds:

Rework isn’t the only place to look for time-to-market advantages. Focusing on requirements definition not only reduces re-work – it also reduces WORK work. At an agile development conference in 2006, Mary Poppendieck (author of Lean Software Development: An Agile Toolkit for Software Development Managers) shared that 45% of code that’s written is NEVER EXECUTED! Another 20% is only rarely executed. It is however coded, documented, tested, trained, supported, maintained, etc. All of which absorbs resources – and adds time & cost – to projects. Getting the requirements right interactively and up-front with end users helps make sure that unnecessary functionality never gets built. What if your development teams could get 45% additional bandwidth simply by not working on stuff that will never be used? Less stuff to build & test means, you guessed it, shortened time to market.
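
To put rough numbers on that claim, here’s a quick back-of-the-envelope sketch. The iteration capacity below is a made-up figure for illustration; only the 45%/20% proportions come from the quote.

```python
# Back-of-the-envelope reading of the figures quoted above.
# The capacity number is invented for illustration; only the 45%/20%
# proportions come from the quote.

capacity = 100                       # person-days of effort per release (hypothetical)
never_executed = 0.45 * capacity     # effort spent on code that is never executed
useful = capacity - never_executed   # effort spent on code that actually gets used

freed = never_executed               # bandwidth freed by not building never-used features
print(f"Effort freed per release: {freed:.0f} person-days ({freed / capacity:.0%} of capacity)")
print(f"Relative boost to useful output: {freed / useful:.0%}")
# => 45 person-days freed (45% of capacity), which works out to roughly an
#    82% increase in the effort available for functionality people actually use.
```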

Even though I currently work in the government sector, where time to market isn’t relevant (or at least doesn’t get much attention), I can sure relate to the frustration of rework. We seem to reinvent the wheel almost every day, as one group goes out to accomplish some great goal that another group has been working on for months.

How do you develop good requirements?

Rapid prototyping and user testing are key. Understanding your customers and their goals and motivations is critical. Sound business analysis focused on the right metrics, direct user interaction, and attention to usability from the start will produce successful results. Using rapid prototyping/requirements definition tools should help as well.

Of course, good vision, communication, and organizational structure are going to be critical. Make sure all your people are marching toward the same goal and that everyone knows what the other groups are doing, so you don’t have a whole lot of re-solving of the same old problems going on. That’s the kind of rework we don’t much talk about or measure, but in any fairly large organization, it happens.

More Tools for Building the User Experience

Prompted by a comment from Amit on my Faster Prototypes, Better Specs entry where I discussed Lucid Spec, I went and looked for some other UI-prototyping tools. I looked at GUI Design Studio, as Amit suggested, and also iRise. At first glance, they all look very similar. Initial thoughts:

  • I like the “whiteboard” in iRise Studio where you can model page interactions. They also have Masters and Templates that allow you to standardize your page designs and quick-start any new page design.
  • GUI Design Studio lets you put in anything you can draw as a control–you’re not limited to standard controls.

Has anyone done a comparison study of these types of UI prototyping/requirements tools? Are there others I’m missing?

… On my quick journey investigating these prototyping tools, I stopped off at the iRise blog, where I found a really interesting article on how a product’s adoptability depends on where it sits on the continuum from functional to usable to desirable, where:

  • Functional = A user can finish what someone could describe as a functional task, but that doesn’t necessarily meet their needs or goals as a user.
  • Usable = A user can meet their needs and goals without frustration.
  • Desirable = The satisfaction of needs and goals is done in such a way that the user builds a positive emotional association with the product (i.e., positive product equity).

Dave showed this chart in the article:
[Chart from the article: product adoption across the functional, usable, and desirable stages]

and explains the chart like so: 

What this shows on the lower right side is that yes, there are software products that can be adopted that are almost purely functional if they provide a huge amount of relative value.  However, even if they are adopted in a temporary fashion, the negative product equity associated with them means that they are easily and enthusiastically displaced by competitors.  The dot shows a product that moves from no adoption, to adoption with negative product equity, to adoption with positive product equity.

What a great chart! And a great way to explain the human factors of design. Dave also talks about “desirability testing” and says this isn’t currently done. Shouldn’t that be part of any good usability test? I typically try to gauge the user’s emotional response to the interface in all my usability tests, in addition to asking several satisfaction questions like “Would you use this again?” and “Would you recommend it to your friends?” That’s desirability, is it not?

But I really like the name. Perhaps we shouldn’t call it a usability test–perhaps we should call it a desirability test? Would that make more sense to the world? Every time I mention “usability” to someone outside the field, they get this deer-in-the-headlights look like they can’t begin to imagine what I’m talking about.

But then “desirability testing” might be even worse–I can just imagine the snide comments I’d get from guys outside the UI design field … (or even worse, those within!)

Return on Investment for Web Sites

Sandra Niehaus recently e-mailed me asking if I would read and review the book she co-authored, Web Design for ROI. I am a strong advocate of using metrics and business analysis along with usability testing to drive decisions on software/web design, so I found this book a breath of fresh air.

In this book, Sandra and Lance Loveday make several powerful points:

  1. There is a science to shopping. Provide a pleasant shopping experience for customers and they buy more. Retailers learned this long ago, but many organizations haven’t begun to understand this simple principle when it comes to their web site. This applies even if you’re not really “selling” anything–even government and nonprofit sites have a purpose, and they would be much more effective at achieving it if they’d pay attention to the user experience.
  2. Your web site is an investment. Treat it like one. Evaluate it objectively with appropriate metrics, just as you do every other business investment. Know the business case and rationale for your decisions. Don’t rely on personal design preferences–chances are, the designers and managers making the decisions aren’t anything like the users who will use the site.
  3. Focus on conversion and you will have a competitive advantage. Many organizations focus on buying more traffic for their site, rather than converting their existing visitors to customers (conversion). Optimizing the user experience will increase conversion, which will pay off more over time than buying traffic. Sure, you can still buy traffic–just do it AFTER you improve usability.
  4. Follow these key principles for successfully managing web sites for ROI:
  • Know what you want. What are you trying to accomplish? How can you use your web site to accomplish organizational objectives?
  • Know your audience. Who’s it for? As Sandra and Lance say in their book, “It isn’t about you. It’s about the audience.” User testing is hands-down the most effective tool for understanding your audience.
  • Treat your web site like a business. That means you need a site strategy. “If you don’t know where you’re going, it doesn’t really matter which direction you choose.”
  • Create a web site strategy. List objectives, target audiences (with profiles), assess the competition and traffic sources, then define your strategy for accomplishing your objectives. Your objectives are the what, your strategy is the how.
  • Measure the RIGHT metrics. Use business metrics (the same ones you use to track your business’ success–revenue, transactions, profit, etc.), site metrics (web site analytics–these tell you what users are doing on your site and where there may be weak points), and user metrics (user testing, user surveys, customer service inquiries). 
  • Prioritize design efforts intelligently. Use analytics to find problem areas (places with high dropoff rates; see the funnel sketch after this list) and focus on getting the customer through the entire sales process. Spend less effort on things that won’t make a big difference in meeting your organizational objectives. Estimate your ROI for a specific change, then track it.
  • Test, learn, repeat. You’re never done. Keep refining your site as technology, competition, and user expectations change.
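
To make the “high dropoff rates” point concrete, here’s a minimal sketch of the kind of funnel analysis any web analytics package can feed. The page names and visit counts are invented for illustration; substitute your own site metrics.

```python
# Minimal funnel analysis: where do visitors drop off in the sales process?
# Page names and visit counts are hypothetical; pull the real numbers from
# your analytics package.

funnel = [
    ("Landing page", 10_000),
    ("Product detail", 4_200),
    ("Cart", 1_100),
    ("Checkout form", 640),
    ("Order confirmation", 410),
]

for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    dropoff = 1 - next_visits / visits
    print(f"{step} -> {next_step}: {dropoff:.0%} drop off")

print(f"Overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
# The step with the largest drop-off is usually the cheapest place to recover
# revenue: estimate the ROI of fixing it, make the change, measure again
# (test, learn, repeat).
```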

I really like this analogy presented in the book: Treat your web site like a science experiment–set a hypothesis, try it out, measure results, repeat.

In the book, Sandra and Lance also explore the design issues and metrics related to several different types of pages: Landing pages, Home pages, Category pages, Detail pages, Forms, and Checkout. Each of these sections of the book is a very helpful guide for designing and measuring the effectiveness of that type of page.

I loved the book, but found myself wanting more information in some places, so I was pleased to find that Sandra and Lance included a resource section at the end of the book. It’s set up with a thumbnail image of each resource and a short but very helpful description. Very readable, scannable, and a place to turn whenever you are looking for a little assistance in your design efforts.

Why not just ask customer service?

When developing a new software application, product, or web site, sometimes people think they can check for usability by bringing in the customer service folks and asking them what they think. After all, they talk to the users all the time–shouldn’t they understand what the users need?

Customer service does understand what the users need. They talk to real users every day, and they know what questions get asked. Their input is invaluable in determining areas of focus. They know what’s selling, what’s not, and often they know just how many downloads of a certain product there have been and who downloaded it. They also know what’s NOT working, because they get loads of questions about things that don’t work–and that is a great help for identifying usability issues. Ask a customer service person what questions they’re getting and how frequently (if they track them), and you’ll get some truly valuable feedback (see the tally sketch after this list) that will tell you:

  1. what functionality (and products) are needed
  2. hints at hidden user needs
  3. what people are having the most trouble with
  4. where there are bugs that need to be fixed
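
As a hypothetical illustration of the “how frequently (if they track them)” part, here’s a tiny sketch that tallies logged support questions so the most common pain points float to the top; the categories and log entries are invented.

```python
# Tally logged customer-service questions so the most frequent pain points
# surface first. The log entries below are hypothetical; use whatever
# categories or tags your support team already records.
from collections import Counter

support_log = [
    "password reset", "where is the download link", "password reset",
    "export to PDF fails", "where is the download link", "password reset",
    "how do I cancel my order",
]

for question, count in Counter(support_log).most_common():
    print(f"{count:>3}  {question}")
# The ranked output is a ready-made list of candidates for the four kinds of
# feedback above: needed functionality, hidden needs, trouble spots, and bugs.
```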

Absolutely, you should talk to your customer service folks and take everything they say very seriously. They are your most direct and frequent method of contact with your users.

So why do you need a usability person if you’ve got “user experts” already? 

The problem is customer service knows too much. They are experts on all the systems and the products, so it’s second nature to them how all your web sites and systems work. They know exactly which product is good for every type of situation. They know where all the functions and buttons are in every application and web site you have. They are the very best at explaining it all and helping users get their tasks done with the existing systems. They know everything inside and out. That’s their job.

The user doesn’t have that expertise. 

That’s not the situation the user is in–the user probably doesn’t know (or care about) the system; they’re just trying to get some task done. The user doesn’t know the organization’s terminology, doesn’t know where all the buttons are on the web site, and may not even know which function they need to complete their task. An actual user can uncover problems that are completely invisible to customer service. That’s why customer service reps are often NOT effective usability reviewers. It’s very hard for humans to unlearn something we know very well, and the customer service person is the person who knows it best. (That is not to say that customer service people can’t be usability analysts. As long as they have the ability to be objective, there’s no reason they can’t perform effective and useful usability tests. I wouldn’t recommend relying solely on customer service for a usability review, however.)

Customer service has a conflict of interest. 

The other problem is that customer service may have a bit of a conflict of interest when it comes to usability. Their job is to help the user, so (in some cases) the customer service folks might actually not want the system fixed–after all, if the system is broken, it makes the customer service person more useful. And we all want to be useful (and employed). There may be an underlying (even unconscious) fear that they won’t be as useful or even a fear of losing their jobs if the system works so well the users can do it themselves.

So bring in the editors.

Expecting a customer service review to suffice for usability is akin to publishing a magazine article or a book without having anyone review, proofread, or edit it first. The author can’t see their own mistakes–they need an outsider to find them. The same is true of usability. You need an outside view–real users, not inside experts–to find the usability problems.

And do the testing.

This is why usability testing is so critical, and why many folks recommend having an outside usability consultant conduct the testing. If you have someone who can step back from the design and look at the product, site, or system objectively, by all means have them review it. But then also bring in someone to test it with real users, someone who can formulate meaningful, realistic user tasks for a usability test, keep from prompting the user during the test, and analyze the results to identify places where users struggle. Designers and developers with a thick skin can do usability testing themselves. Customer service folks? Only if they can restrain their “helpful” side and let the user struggle through on their own, and only if they realize that by making the products and systems more usable they create more business and opportunity for everyone (including themselves).