Wednesday, February 22, 2012

Conceptual Integrity and Design

Conceptual integrity is really a component of good design. I have examined works dealing with issues of software design and human-computer interfaces published over the past thirty years. The common theme in these publications is that good software comes from good, solid design. Here, I will address design as it applies to software, who we are designing for, and some general principles to keep in mind when designing interactive systems.

Some Terms

‘Design’ is an ambiguous term that seems to fill a spectrum with artistic endeavor at one end and planning at the other. So why would a person who deals in absolutes and empiricism want to consider such a nebulous concept? Because without it, a product has to be significantly better than any similar product just to overcome the handicap of being poorly planned in both implementation and presentation. It is important to define ‘design’ in the context of software so that it has a real, tangible meaning that people can grasp and use to make better software. So here, design is going to have two meanings: the plan of the implementation and the arrangement of the interface.

Having jumped straight into the design quagmire, it would be prudent to step back for a moment and define the idea of conceptual integrity. Fred Brooks, Jr. discusses many facets of maintaining conceptual integrity in The Mythical Man-Month (chapters 4-6), and defines it as “unity of design.”[1] From Brooks and others who discuss similar ideas, this seems reasonable and will be used as the meaning here.


Design as a Process

Brooks notes that “the difference between poor conceptual designs and good ones may lie in the soundness of design method, the difference between good designs and great ones surely does not.”[2] Design methodologies can be taught and successfully applied to projects, but the best designs rely on more than that. Experience, talent, and inventiveness are required to reach that level. It also requires an understanding that there is a fuzziness to design. Realizing that design is a process, that the process is not hierarchical, that it is dynamic, and that it leads to the discovery of new goals[3] is a lot to swallow. However, it is a necessary step in considering a project as a whole, rather than a sum of parts: “The Composition Fallacy assumes that the whole is exactly equal to the sum of its parts.”[4] Division of labor into groups working on distinct parts will result in a collection of parts unless the interactions between the parts are decided upon and well understood in advance. Cohesiveness can only be achieved if the design is established before work begins. This applies to the architecture as well as the interface.

Shneiderman quotes an author as saying “our instincts and training as engineers encourage us to think logically instead of visually, and this is counterproductive to friendly design.”[5] The problem of “friendly design” will be discussed later; for now, the important part is that thinking logically is not enough. Norman proposes that “human behavior is the key”[6] to designing software that works for people and that studying people is superior to reasoning out a solution. He suggests that observing how people perform activities will lead to more natural usage. Where most things are currently structured in a “hardware store approach”, they should be structured in a way that supports how people use objects: hammers next to nails rather than hammers with hammers, nails with nails.[7] This suggests that the design philosophy is due for a paradigm shift, to include behavioral analysis in addition to logical structure.

Another area in which the philosophy may be due for a change is in the interface directly. As in art, where there may exist an implicit communication between the artist and the viewer, the design of an interface is not limited to composition and color. These aspects may be used to establish a dialog between the designer and the person using the software. Norman, after reading up on semiotics, says that once this shift has occurred, the design philosophy will change for the better.[8] He goes on to say that each decision made by the designer is done for “both utility and for communication…in the hands of good designers, the communication is intentional.”[9] Shneiderman echoes this in arguing that the smallest interactions between the system and the person are important considerations, because they happen all the time.[10]

So far, this sounds a lot like things to consider when designing the interface; however, some in the field of Human-Computer Interaction, such as Alan Cooper, suggest that the way to get the most cohesive design is to study the people who are to use the software and then make the interface first. The reasoning is that making an interface that works for people at the outset gives the programmers a final goal to work towards that encompasses all of the requirements. In this sense, designing the outward appearance of the software is just as important as the part the person doesn’t see. This isn’t the only approach, of course, but it does make sense to have a distinct plan for how a person is to use the software early on. Otherwise, what would be the point in aiming for a unity of design?


Conceptual Integrity

A carpenter does not care how friendly his hammer is; he wants the handle to fit his hand, to be long enough to provide good leverage without being too long, to have a head of sufficient weight to drive nails with ease, and so on. He doesn’t want it to be friendly, he wants it to be designed for the task. Nelson declares that the “problem is not software ‘friendliness.’ It is conceptual clarity.”[11] Going back to the definition of conceptual integrity as unity of design, it becomes apparent that it comes from a solid plan that takes into account the goals, constraints, and audience for a particular project. Norman proposes that the “appropriate way to design a complex system is to develop a clear, coherent conceptual model…and to design the system so that the user’s mental model would coincide.”[12]

A good design is not just one in which all of the parts of the system are planned for, but one in which the designer has a clear understanding of the user’s conceptual model of how a task is performed. We have to work with what we have, not what we wish to be. Or, as Nelson put it, “If the button is not shaped like the thought, the thought will end up shaped like the button.”[13] People use software all the time, reshaping their tasks to fit how the software designer envisioned them. This is a result of the designer(s) missing the mark: not understanding the task domain completely, reasoning out a solution rather than observing one, or not spending enough time designing the product before embarking on implementation (which probably encompasses the other two reasons as well).

Brooks states that it is better to leave things out in order to maintain “one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas.”[14] On the face of it, this sounds reasonable; however, it raises the question: who gets to decide?


The Benevolent Dictator

If unity of design is a major goal in a project, then design must be the path to achieving it. Someone has to decide what the purpose of the project is, how it is to be implemented (at least at a high level), what it will look like, who the intended audience is, and so on. This requires a solid understanding of the requirements, constraints, and vision of the final product. Whether the lead realizes it or not, these are all principles of design, whatever the end product happens to be. Norman and others have advocated that the lead be a benevolent dictator in order to enforce the design plan and settle disputes once and for all.

In two of Brooks’ essays in The Mythical Man-Month, he states that conceptual integrity can only be achieved if the design comes from one person, or a small group of people in agreement.[15][16] Nelson flatly states that there must be “a chief designer with dictatorial powers….”[17] This avoids the concessions and compromises that come about in design-by-committee approaches while maintaining the focused vision of the product.


Know the User

“‘Know the user’ was the first principle in Hansen’s (1971) list of user engineering principles.”[18] There is no way that a product can be a success without knowing who the person on the far end of the development cycle is. In order to communicate effectively with the person on the other end, it is critical to know who that person is and what their goals are. In a general sense, this person is you and me and the lady down the street: people have certain characteristics that are fairly consistent across a broad spectrum, such as physical abilities and limitations, emotional reactions, modes of thinking, mental constructs, and the like. Jef Raskin draws on a great deal of research along these lines in The Humane Interface, and the HCI community is fond of quoting things such as Fitts’ Law to explain why something should be placed in a particular location or given a particular size. The designer has to be aware of these issues and familiar enough with them to know whether the interface designers are fulfilling the design goals or not.
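
Since Fitts’ Law comes up so often in these discussions, a small illustration may help. Below is a minimal sketch in Python of the Shannon formulation of the law; the constants and the sample targets are purely illustrative, since the real values depend on the device and the person.

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Predicted time in seconds to acquire a target, using the Shannon
        formulation of Fitts' Law: MT = a + b * log2(D / W + 1).
        The constants a and b depend on the device and the person; the
        defaults here are purely illustrative."""
        return a + b * math.log2(distance / width + 1)

    # A small, distant target takes longer to acquire than a large, nearby one.
    print(fitts_movement_time(distance=800, width=20))    # harder: small and far
    print(fitts_movement_time(distance=100, width=200))   # easier: big and close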

People find it difficult to retain more than a few things in short-term memory at the same time. “Seven plus or minus two chunks”[19] seems to be a good rule of thumb. If people have to be given information, it is best not to overload them with lots of dialogs, warnings, notices, and so on. Taking advantage of semantic knowledge of computer concepts, which are linked to already familiar ideas, helps people by allowing them to use long-term memory constructs.[20] “Semantic knowledge is conveyed by showing examples of use, offering a general theory or pattern, relating the concepts to previous knowledge by analogy, describing a concrete or abstract model, and by indicating examples of incorrect use.”[21] If some tasks in the program can be related to tasks the person performs in other domains, then much of the work has been done for you.

Falling into the ‘friendly’ software mode can be more harmful than helpful. “Attributions of intelligence, independent activity, free will, or knowledge to computers can deceive, confuse, and mislead users.”[22] People may use the same semantic knowledge mentioned before to misconstrue the software (or the hardware) as having some level of intelligence if its responses are similar to what a person might say. Helpful messages are useful; friendly ones are counterproductive.

People will “respond to design, both good and bad, in appropriate manners.”[23] They will have a response, either way, so pretending to be the user and running through tasks may be an easy way to flush out flaws or remind the designer what people are trying to accomplish.

Errors are common, probably one of the most common things people do, which makes it extremely important that errors are handled gracefully. Norman says errors can be avoided by organizing according to function, making choices distinctive, and making it hard to do something irreversible.[24] Gilb and Weinberg suggest a three-part approach to guard against errors (a sketch of the redundancy idea in part c follows the list):

a. Select natural sequences.
b. Specify error recognition, handling, and recording.
c. If b is too hard, redesign the codes, adding redundancy to make them distinguishable from other codes in the sequence.[25]
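
Point (c) is about adding enough redundancy that a mistyped code becomes an invalid code rather than a different valid one. The following is a minimal sketch of that idea using a simple weighted check digit; the scheme and the sample codes are my own illustration, not Gilb and Weinberg’s.

    def add_check_digit(code: str) -> str:
        """Append a check digit so that a single mistyped digit produces an
        invalid code instead of silently matching another valid one.
        Weighting by position also catches many adjacent transpositions."""
        total = sum((i + 1) * int(d) for i, d in enumerate(code))
        return code + str(total % 10)

    def is_valid(code_with_check: str) -> bool:
        """A code is valid only if its last digit matches the recomputed check."""
        return add_check_digit(code_with_check[:-1]) == code_with_check

    order = add_check_digit("41370")    # -> "413703"
    print(is_valid(order))              # True
    print(is_valid("413713"))           # False: the mistyped digit is caught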

Increasing speed and reducing errors may be accomplished with the use of defaults. Using defaults can be expected to reduce errors based on the “idea that what you don’t do you can’t do wrong.”[26] Manual entry should be a last resort, with a default or a provided list of possibilities preferred.[27]
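
As a small, hypothetical sketch of that advice: offer a sensible default and a short list of valid choices, and fall back to free-form entry only as a last resort. The paper-size example below is invented for illustration.

    PAPER_SIZES = ["A4", "Letter", "Legal"]
    DEFAULT_SIZE = "A4"

    def choose_paper_size(answer: str = "") -> str:
        """Return a paper size, preferring the default over manual entry:
        what you don't type, you can't mistype."""
        answer = answer.strip()
        if not answer:                 # no typing at all accepts the default
            return DEFAULT_SIZE
        if answer in PAPER_SIZES:      # only listed choices are accepted
            return answer
        raise ValueError(f"Choose one of {PAPER_SIZES} (default: {DEFAULT_SIZE})")

    print(choose_paper_size())             # -> "A4"
    print(choose_paper_size("Letter"))     # -> "Letter"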

Shortcuts may also be used to give adept users a way of working faster while providing beginning users a way to advance gracefully. Typeahead and listing shortcut keys next to menu items are two methods of achieving this.[28] Making the system adaptive to frequently selected items is another means of increasing speed.
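
One way to read ‘adaptive’ here is simply to let frequently chosen items float toward the top while the full menu, with its shortcut keys listed alongside, stays available for beginners. A minimal sketch, with hypothetical menu items and shortcuts:

    from collections import Counter

    MENU = {"Open": "Ctrl+O", "Save": "Ctrl+S", "Export": "Ctrl+E", "Print": "Ctrl+P"}
    usage = Counter()                  # how often each item has been chosen

    def record_choice(item: str) -> None:
        usage[item] += 1

    def adaptive_order():
        """Most frequently used items first; ties keep their original menu order."""
        return sorted(MENU, key=lambda item: -usage[item])

    for choice in ["Save", "Export", "Save", "Save", "Print"]:
        record_choice(choice)

    for item in adaptive_order():
        print(f"{item:<8}{MENU[item]}")    # Save, then Export, Print, Open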

Using a subset of a natural language is another method of designing with the person’s semantic knowledge in mind. One project “tried to copy English grammar closely…did not allow the meaningful reordering of phrases permitted in English, such as ‘Into A, copy B’.”[29] It would be nearly impossible to account for every variation permitted in a natural language, but something as simple as ‘copy A into B’ is easy for the designer to account for and for the user to grasp.
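
A tiny sketch of what such a restricted phrase can look like in practice, assuming a hypothetical ‘copy A into B’ command: only the one fixed word order is recognized, and anything else gets a message stating the expected form.

    import re

    # Only the fixed phrase order "copy <source> into <destination>" is
    # accepted; the free reordering English allows ("Into B, copy A") is
    # deliberately left out of the grammar.
    COPY_COMMAND = re.compile(r"^copy\s+(\S+)\s+into\s+(\S+)$", re.IGNORECASE)

    def parse_copy(command: str):
        match = COPY_COMMAND.match(command.strip())
        if not match:
            raise ValueError("Expected: copy <source> into <destination>")
        return match.group(1), match.group(2)

    print(parse_copy("copy draft.txt into backup.txt"))   # ('draft.txt', 'backup.txt')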

Data displays should be consistent, allow efficient information assimilation, require a minimal memory load, and be flexible in data display.[30] They should take advantage of the person’s semantic knowledge in order to put elements of the software into long-term memory quickly. And “the importance of long range user feedback in maintaining a system cannot be underestimated.”[31]


Methods of Approach

As knowledge in a field grows, the natural progression is for assumptions to be questioned and revised, methods changed to reflect new discoveries, and incorrect ideas thrown out. “There is an implicit assumption in performing human factors work that the systems we have already designed are somewhat flawed.”[32] This assumption is not a negative, but a realistic approach that accepts that better products are the result of examining that which has come before for what works and what does not.

Norman says that the process of evaluating human needs, field studies, and observations should be done outside of the product process.[33] His reasoning is that, from a business standpoint, it is too late to spend time on these activities once a project has already started; it costs money and delays the rest of the development team. Rather than stand in the way, the HCI research should be done while researching which products to pursue. In this workflow, the development team can get straight to work on a product, with HCI people working alongside. This runs counter to Alan Cooper’s process of performing the analysis at the outset of a project (see About Face 2.0).

A four-level approach to systems design has been proposed. From highest level to lowest, these levels are:

  • Conceptual model
  • Semantic model
  • Syntax level
  • Lexical level [34]

These correspond to the big idea behind the project, the meanings of input and output, specific commands, and finally, hardware or device dependencies. The approach breaks the project into levels of detail, tackling each as necessary. It follows a process that uses as high a level of notation as possible at each step, “exposing the concepts and concealing the details until further refinement becomes necessary.”[35]

Writing a formal specification can be an extremely powerful exercise. Putting design decisions to paper, or a text file for that matter, forces the project lead to examine decisions as they are written and to make sure that the various elements of the design are consistent.

Once the design is formalized, it is time to focus on the programming. Nelson simplified good programming practices into three things:

  • Don’t be afraid to start over.
  • Design long, program short.
  • When you are sure the design is right, code it.[36]

The widespread use of object-oriented programming has developers working in terms of, well, objects. This process involves mapping new data types with their operations to things in the real world. Naturally, a good understanding of the task domain results in better objects. Brooks notes, however, that instead of teaching OOP as a type of design, it has been taught as a tool.[37] Some developers have gone against this trend and built programs where the users act directly on objects, but this seems to be a small minority. With a proper use of OOP and a solid design, the task domain objects are better realized in the software. This would create products with a much stronger unity of design.
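
As a rough sketch of what ‘objects from the task domain’ can mean, assume a document-editing domain (the class and its operations below are invented for illustration): the operations on the object are the things a person actually does to a document, not the machinery of how the text happens to be stored.

    class Document:
        """A task-domain object: its operations mirror what a person does
        to a document rather than exposing storage details."""

        def __init__(self, title: str, body: str = "") -> None:
            self.title = title
            self.body = body

        def append(self, text: str) -> None:
            self.body += text

        def word_count(self) -> int:
            return len(self.body.split())

        def duplicate(self, new_title: str) -> "Document":
            return Document(new_title, self.body)

    report = Document("Quarterly Report")
    report.append("Sales rose in the third quarter.")
    print(report.word_count())                          # 6
    archive = report.duplicate("Quarterly Report (archive)")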

With a strong design that utilizes object flow analysis, redesigns can be performed before or after the system is put into use.[38] The iterative process tests design assertions, takes in user feedback, and reveals structural flaws. These things cannot be avoided, but they can be minimized by spending a lot of time on the design at the start and by using an approach that accommodates change easily.

Ultimately, the goal is to move complexity away from the user. Reducing what the person has to do makes the product more complex,[39] but that complexity has to be dealt with far fewer times and by far fewer people. Failing to simplify the interface does not save work; it merely shifts the difficulty elsewhere, namely onto the user.[40]

In Directions in Human Factors for Interactive Systems, the authors proposed ten hypotheses:

  1. The inclusion of features not needed for a task interferes with task performance.
  2. The implementation of features unknown to the user interferes with task performance.
  3. Command systems should not be layered or hierarchical.
  4. Error messages should have a positive emotional tone.
  5. The user should be alerted to any potentially damaging action.
  6. Error correction should be easy and immediate.
  7. Abbreviation rules should be consistent and simple.
  8. First-letter abbreviation of command words is a superior abbreviation scheme.
  9. Command languages should be based on legitimate English phrases composed of familiar, descriptive words.
  10. Commands should be described with examples rather than in generalized form.[41]

It should be pointed out that command languages are not dead by any means. In fact, Norman argues that search engine queries are really command languages that tolerate variation and allow for some natural language variations.[42] When they fail, they tend to fail gracefully, asking for confirmation or suggesting alternatives when the command/query is not valid.

In general terms, it is important to use familiar terms and be consistent, elements need to be distinct enough from one another to avoid confusion, phrasing should be succinct, and action words prominent.[43] This is true of menu items, error messages, screen layout, and every other piece that a person encounters when using the product.

When evaluating the success of a product, “user-friendliness” is not a helpful measure. Some measurable, meaningful criteria are time to learn, speed of performance, rate of errors, subjective satisfaction, and retention over time.[44] User satisfaction is a valid performance criterion despite its subjectivity: a "like it" or "hate it" response forecasts how likely users are to invest in a new product and make it a success, or to scorn it altogether.

Conclusion

Avoid missing ball for high score.[45]

Those six words were the manual for Pong: an extremely simple set of instructions for an extremely simple game. The idea of conceptual integrity has been around for some time: Ted Nelson declared conceptual simplicity a new frontier in 1974,[46] Brooks wrote in 1975 that it was the “most important factor in ease of use,”[47] and, well, Pong has been around forever.

The body of work in this field is huge. Here, I have only scratched the surface in discussing design in the software field, putting people’s needs at the fore in interactive systems, and some principles for accomplishing that goal. To summarize, a successful project is going to depend on a lot of time spent planning out the system, having someone lead with absolute authority, and, most importantly, a focus on people.

So, the idea is not new. There must be some distractions that have kept people focused on the wrong issues. That really goes beyond the scope of this paper and would be an interesting study on its own: what methods do projects actually use when designing software?

End Notes
[1] Brooks, F., The Mythical Man-Month, p 44
[2] Brooks, F., The Mythical Man-Month, p 202
[3] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 391
[4] Gilb, T. and Weinberg, G., Humanized Input: Techniques for Reliable Keyed Input, p 32-33
[5] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 198
[6] Norman, D., Simplicity Is Highly Overrated, http://www.jnd.org/dn.mss/simplicity_is_highly.html
[7] Norman, D., Logic versus Usage: The Case for Activity-Centered Design, http://www.jnd.org/dn.mss/logic_versus_usage_t.html
[8] Norman, D., Design as Communication, http://www.jnd.org/dn.mss/design_as_comun.html
[9] Norman, D., Design as Communication, http://www.jnd.org/dn.mss/design_as_comun.html
[10] Ledgard, H., Singer, A., and Whiteside, J., Directions in Human Factors for Interactive Systems, p 19
[11] Nelson, T., Computer Lib/Dream Machines, p 25
[12] Norman, D., Design as Communication, http://www.jnd.org/dn.mss/design_as_comun.html
[13] Nelson, T., Computer Lib/Dream Machines, p 12
[14] Brooks, F., The Mythical Man-Month, p 42
[15] Brooks, F., The Mythical Man-Month, p 44
[16] Brooks, F., The Mythical Man-Month, p 35
[17] Nelson, T., Computer Lib/Dream Machines, p 72
[18] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 53
[19] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 275
[20] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 50
[21] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 49
[22] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 322
[23] Gilb, T. and Weinberg, G., Humanized Input: Techniques for Reliable Keyed Input, p 5
[24] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 63
[25] Gilb, T. and Weinberg, G., Humanized Input: Techniques for Reliable Keyed Input, p 77
[26] Gilb, T. and Weinberg, G., Humanized Input: Techniques for Reliable Keyed Input, p 26
[27] Mehlmann, M., When People Use Computers: An Approach to Developing an Interface, p 35
[28] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 109
[29] Ledgard, H., Singer, A., and Whiteside, J., Directions in Human Factors for Interactive Systems, p 41
[30] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 69
[31] Ledgard, H., Singer, A., and Whiteside, J., Directions in Human Factors for Interactive Systems, p 47
[32] Ledgard, H., Singer, A., and Whiteside, J., Directions in Human Factors for Interactive Systems, p 141
[33] Norman, D., Why doing user observations first is wrong, http://www.jnd.org/dn.mss/why_doing_user_obser.html
[34] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 46
[35] Brooks, F., The Mythical Man-Month, p 143
[36] Nelson, T., Computer Lib/Dream Machines, p 41
[37] Brooks, F., The Mythical Man-Month, p 221
[38] Mehlmann, M., When People Use Computers: An Approach to Developing an Interface, p 29
[39] Mehlmann, M., When People Use Computers: An Approach to Developing an Interface, p 45
[40] Gilb, T. and Weinberg, G., Humanized Input: Techniques for Reliable Keyed Input, p 180
[41] Ledgard, H., Singer, A., and Whiteside, J., Directions in Human Factors for Interactive Systems, p 148
[42] Norman, D., UI Breakthrough-Command Line Interfaces, http://www.jnd.org/dn.mss/ui_breakthroughcomma.html
[43] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 113
[44] Shneiderman, B., Designing the User Interface: Strategies for Effective Human-Computer Interaction, p 73
[45] Nelson, T., Computer Lib/Dream Machines, p 36
[46] Nelson, T., Computer Lib/Dream Machines, p 12
[47] Brooks, F., The Mythical Man-Month, p 255
