Usability Testing and Heuristic Evaluation: A Comparison

Introduction

In computing terms, usability describes the quality of a system that makes it easy to use and acceptable to a specific group of users in a specific environment. These two components are always taken into consideration during system development: an easy-to-use system improves user performance and increases satisfaction, while the level of acceptability of a system is crucial in determining whether the user would prefer to use the system or not (Holzinger, 2005). Holzinger (2005) identified five specific attributes which serve as the foundation of system usability: (1) learnability; (2) efficiency; (3) memorability; (4) low error rate; and (5) satisfaction. Learnability affects how quickly a user can learn and use the system after a certain amount and duration of training. Efficiency affects the number of tasks a user can perform in a set amount of time. Memorability enables a user to use the system after a period of inactivity without having to relearn its operation. Low error rate refers to the number of problems encountered by the user and the ease of correcting such errors. Lastly, satisfaction generally refers to the overall user perception of the system serving its intended purpose.

Usability Testing

The term usability testing has generally been used to refer to any method of evaluating a system or product. For the purpose of clarification, the term is used in this paper as a distinct empirical method of system evaluation whose goal is to identify usability issues and develop recommendations on how to address them (Manzari & Trinidad-Christensen, 2006). Rubin and Chisnell (2008) described usability testing as a process of recruiting people as test participants to evaluate the system against a series of usability guidelines. Test participants are normally individuals whose profiles represent the target user audience; this inclusion of test participants based on real-world parameters is what makes usability testing unique among usability assessment methods. In this method, test participants are selected from the target user audience and asked to perform specific tasks using a prototype of the system. For the duration of the test, user performance and reactions to the product are observed and recorded by a facilitator (Fiset, 2009).

In essence, usability testing is a research tool which originated from conventional experimental methodology. The range of usability tests that can be performed is significantly broad, allowing the developer to tailor approaches to the test objectives, time constraints, and resources available (Rubin & Chisnell, 2008). Since it originated from conventional approaches for controlled experiments, usability testing follows formal methods which include: (1) hypothesis formulation; (2) random sampling of participants; (3) utilization of experimental controls; (4) utilization of control groups; and (5) determination of a sample size large enough to detect statistical differences between groups (Rubin & Chisnell, 2008).
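To make the last point concrete, the following sketch shows the kind of between-groups comparison such a formal method leads to. It is an illustration rather than a procedure from the cited sources: the task-completion times are invented, and Python with SciPy is assumed to be available.

```python
# A minimal sketch of the between-groups comparison this methodology
# implies: do task-completion times differ between a control group
# (old interface) and a test group (new interface)? All data invented.
from scipy.stats import ttest_ind

# Task-completion times in seconds for two randomly assigned groups.
control_group = [148, 163, 171, 139, 158, 166, 152, 170]  # old interface
test_group = [121, 135, 118, 142, 127, 130, 125, 138]     # new interface

# Welch's t-test: does not assume equal variances between the groups.
result = ttest_ind(control_group, test_group, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("The difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; a larger sample may be needed.")
```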
Fiset (2009) outlined the following basic steps in conducting usability assessments: (1) defining test objectives; (2) enumerating tasks; (3) developing a prototype or mock-up of the system; (4) performing a preliminary validation test on the prototype; (5) recruiting test participants; (6) preparing forms, venue, and equipment; (7) determining the level of confidentiality of acquired data; (8) conducting the test proper; (9) filling out the evaluation questionnaire; (10) analyzing and consolidating results; and (11) writing down recommendations. The objectives of a usability test are normally determined by the phase of system development in which the test will be carried out. Initial usability tests usually involve performing specific tasks based on the system design. As development progresses, additional objectives may be added, such as identifying the number of errors, gauging user satisfaction, measuring the time spent completing a task, and determining the learning curve. The tasks are then evaluated according to frequency, criticality, and complexity, and developers are expected to write short narratives of each task for clarity (Fiset, 2009).

A prototype or mock setup of the system is built for the test. A paper-based mock-up of the system works best during initial testing; it also conserves time and resources, allowing developers to allocate their resources better. However, if navigation issues are identified early on, a computer-based mock-up of the system with limited functionality is recommended. A validation test on the mock-up and test scenarios is recommended to eliminate any obvious or ambiguous errors in the test setup; this is usually performed by having a member of the development team run through the scenarios using the prototype (Fiset, 2009).

Recruiting the individuals who will take part in the test involves selecting representative users from the target audience. Normally, system operators are selected as primary users, who are then complemented with maintenance and technical users if needed. Fiset (2009) provided some guidelines for selecting test participants: (1) two or three test participants are enough for most initial tests; and (2) do not use the same set of test participants for subsequent test rounds, to maintain the integrity of the results. Depending on the test parameters, a simple test venue is sufficient: a table or desk, two chairs, and a computer with an internet connection. If a paper-based test is planned, paper forms of the prototype replace the computer (Weber, 2004). Paper forms used in the test are printed out, and the handling of confidential material is agreed upon (Fiset, 2009).

During the actual test, participants are asked to enter the test venue individually rather than in a group. After the initial introductions, the test objectives are disclosed to the participants with a reminder that the subject of the test is the system, not the participant. Confidentiality of information is discussed with the participants, who are also encouraged to "think aloud", and how the information gathered from the test will be used is explained (Fiset, 2009). Scenarios are then completed one by one, with the facilitator observing and interrupting only as needed. Moreover, facilitators are required to show a positive attitude regardless of the results.
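As one way of capturing what the facilitator records during these sessions, the sketch below logs per-task completion times and error counts, two of the measures mentioned above, and consolidates them per task for the later analysis step. The data structure and field names are hypothetical, not taken from Fiset (2009).

```python
# A hypothetical observation log for a usability test session: one row per
# participant per task, capturing time on task, errors, and notable
# "think aloud" remarks. All names and values are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    participant: str
    task: str
    seconds: float         # time spent completing the task
    errors: int            # problems encountered during the task
    think_aloud: str = ""  # notable "think aloud" comments

log = [
    Observation("P1", "Create report", 95.0, 1, "Could not find Export button"),
    Observation("P1", "Search record", 40.0, 0),
    Observation("P2", "Create report", 132.0, 3, "Expected the menu under File"),
    Observation("P2", "Search record", 55.0, 1),
]

# Consolidate results per task, as the analysis step requires.
for task in sorted({o.task for o in log}):
    rows = [o for o in log if o.task == task]
    print(f"{task}: mean time {mean(o.seconds for o in rows):.0f}s, "
          f"total errors {sum(o.errors for o in rows)}")
```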
Upon completion of the test, participants undergo debriefing in order to gather additional information on their perception of the strong and weak points of the system. In addition to participant debriefing, a post-test questionnaire may be distributed for the participants to fill out (Fiset, 2009). Results of the test are then collected and analyzed. Based on the results, Silver (2005) suggested the following guidelines for developing recommendations for usability improvements: (1) recommendations should be kept as short and simple as possible; (2) the strong and weak points of the system should be enumerated; and (3) test user observations and comments must be collected and categorized. These elements are then included in an executive summary.

On the downside, usability testing presents some disadvantages. First, the members of the testing team must be qualified to conduct the test. Second, usability tests may have numerous variations, making them hard to standardize. Finally, usability tests consume organizational resources such as money, time, and personnel, making them dependent on available resources. However, these costs are outweighed by the benefits in terms of productivity, safety, user satisfaction, company reputation, and development costs (Fiset, 2009).

Heuristic Evaluation

Heuristic evaluation is a usability assessment method in which a small group of expert evaluators assesses the usability of a system interface against a set of usability heuristics (Manzari & Trinidad-Christensen, 2006). A heuristic is defined as a principle or guideline used to gauge or critique a decision which has already been made (Zaphiris & Kurniawan, 2007). The usability heuristics are: (1) visibility of system status; (2) match between the system and the real world; (3) user control and freedom; (4) consistency and standards; (5) error prevention; (6) recognition rather than recall; (7) flexibility and efficiency of use; (8) aesthetic and minimalist design; (9) help users recognize, diagnose, and recover from errors; and (10) help and documentation (Manzari & Trinidad-Christensen, 2006).

The first heuristic involves providing the user with updated information regarding the operation of the system; this is usually done by ensuring that feedback from the system is produced at regular intervals or in real time (Manzari & Trinidad-Christensen, 2006). The second heuristic emphasizes using words, phrases, and concepts which are familiar to the user: technical jargon should be avoided as much as possible, and the layout of the system interface should present information in a natural and logical manner (Manzari & Trinidad-Christensen, 2006). It is inevitable in any system environment for users to select an incorrect function; the third heuristic therefore points out the need for a cancel or exit function for users who selected a system function by mistake, and likewise for functions which allow the user to undo or redo an action (Manzari & Trinidad-Christensen, 2006). The fourth heuristic involves following standard formats or screens throughout the system interface, so that users can differentiate and identify specific functions without taking too much time to figure them out. Meanwhile, care should be taken even during the early development of the system interface: the fifth heuristic calls for a thorough review of the system design to prevent errors from occurring in the first place (Manzari & Trinidad-Christensen, 2006).
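As a small illustration of the fifth heuristic (my own example, not drawn from the cited sources), the sketch below validates a date field at the point of entry and explains the expected format, so the error is prevented rather than reported after the fact.

```python
# Error prevention: reject malformed input immediately and tell the user
# what a valid value looks like, instead of letting the bad value flow
# into the system and fail later. The function name is illustrative.
from datetime import date

def read_iso_date(text: str) -> date | None:
    """Return a date if the input is valid, otherwise None plus guidance."""
    try:
        return date.fromisoformat(text.strip())
    except ValueError:
        # Guide the user toward the expected format at the point of entry.
        print(f"'{text}' is not a valid date. Please use YYYY-MM-DD, e.g. 2024-03-01.")
        return None

# The form would simply refuse to proceed until the value is well formed.
assert read_iso_date("2024-03-01") == date(2024, 3, 1)
assert read_iso_date("03/01/2024") is None
```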
The sixth heuristic emphasizes the importance of making it easy for users to identify specific objects, actions, and options in the system interface. The interface should be designed so that users do not need to memorize information as they move through different parts of the system, and a help option should always be readily available and accessible (Manzari & Trinidad-Christensen, 2006). To speed up access to frequently used functions, the seventh heuristic suggests the use of accelerators: functions, often unseen by novice users, which let experienced users reach frequently used operations more quickly in order to save time. To make system operations faster still, the eighth heuristic reminds system developers to include only relevant and necessary information, so that users are not distracted by system components unrelated to the task they need to accomplish (Manzari & Trinidad-Christensen, 2006). It is inevitable that a system will encounter a previously undiscovered error or problem. In this case, the ninth heuristic reminds developers to present error messages in a format that is easiest for users to understand: error messages should be clearly written, give users an idea of what went wrong, and suggest how to fix the error or return the system to its state before the error occurred (Manzari & Trinidad-Christensen, 2006). The tenth heuristic focuses on providing sufficient and relevant help and documentation functions. In cases where users need assistance with a specific function or error, help information is crucial in keeping the resulting loss of user productivity to a minimum. Information included in user help and documentation should be easy to locate and relevant to the user's needs, and the steps included in tutorials and error resolutions should be concise and clear (Manzari & Trinidad-Christensen, 2006).
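The ten heuristics above are, in effect, a fixed checklist that each evaluator scores the interface against. A minimal sketch of such a scorecard follows; the 0-4 rating scale is my own illustrative choice, not part of the cited method.

```python
# Nielsen's ten usability heuristics, as enumerated above, encoded as a
# checklist that an evaluator fills in one rating at a time.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between the system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def blank_scorecard() -> dict[str, int | None]:
    """One entry per heuristic; None means 'not yet assessed'."""
    return {h: None for h in NIELSEN_HEURISTICS}

card = blank_scorecard()
card["Error prevention"] = 3  # e.g. 0 = no problem ... 4 = usability catastrophe
```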
Among the several system usability assessment methods, heuristic evaluation boasts a number of advantages. First, heuristic evaluations can be performed with just a handful of resources and in a limited amount of time: instead of investing money, time, and other resources in preparing test materials, recruiting user participants, and developing test scenarios, developers can have a panel of expert evaluators assess system usability against the ten aforementioned heuristics (Fiset, 2009). Second, expert evaluators can identify major usability problems early on, saving resources which would otherwise be spent on more expensive assessment methods. Third, heuristic evaluation does not require the participation of system users; though user input is accepted, the evaluation data collected from expert evaluators are focused on checking for discrepancies between good design practice and the system being assessed (Fiset, 2009).

On the other hand, heuristic evaluation has its shortcomings. First, the panel of evaluators should have substantial credentials and be well-versed in system usability concepts and testing methods. Second, heuristic evaluation typically identifies only around three to five out of every ten usability issues (Fiset, 2009). This is attributed to the fact that expert evaluators may not be the best representatives of real-world users: the likelihood that expert evaluators perceive the system in the same way as representatives of the target user audience is low (Rosenbaum, 2008).

Heuristic evaluation is composed of several stages. First, the experts who will comprise the evaluation panel are recruited based on familiarity with the heuristic evaluation process; an expert may also possess a certain level of familiarity with related systems. For the purpose of clarification, the main objective of a heuristic evaluation is to determine the compliance of the system design with a given set of heuristics, not necessarily how adequate the system is in performing its intended functions (Fiset, 2009). On average, up to five inspectors are recruited to form the evaluation panel (Rosenbaum, 2008). Second, the members of the evaluation panel familiarize themselves with the system interface: using a system prototype or paper mock-up, the evaluators explore the different functions of the system and how each is linked with the others. Each evaluator then rates the system against a predefined set of heuristics, after which the individual ratings are collected for analysis (Fiset, 2009). Third, evaluator responses are analysed and ranked according to significance; classifying problems by severity level (i.e., major, minor, or superficial) may be used to determine priority. Based on these results, the expert evaluators can develop recommendations to be forwarded to the development team for refinement of the system interface (Fiset, 2009).
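The third stage, consolidating and ranking panel responses, might be sketched as follows. The findings and counts are invented; the only idea taken from the text is that problems are ordered by severity (major, minor, superficial) before recommendations are written.

```python
# A hypothetical consolidation of panel findings: pool the problems the
# evaluators reported, then rank by severity first and by how many
# evaluators reported each problem second.
from collections import Counter

SEVERITY_RANK = {"major": 0, "minor": 1, "superficial": 2}  # lower = higher priority

findings = [
    ("No undo after record deletion", "major"),
    ("Inconsistent button labels across screens", "minor"),
    ("Splash-screen logo slightly off-centre", "superficial"),
    ("No undo after record deletion", "major"),  # flagged by a second evaluator
    ("Error codes shown without any explanation", "major"),
]

counts = Counter(findings)
ranked = sorted(counts.items(), key=lambda kv: (SEVERITY_RANK[kv[0][1]], -kv[1]))

for (problem, severity), n in ranked:
    print(f"[{severity:>11}] reported by {n} evaluator(s): {problem}")
```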
Usability Testing versus Heuristic Evaluation

Based on the foregoing descriptions, usability testing and heuristic evaluation each have their own strengths and weaknesses, and the developer decides which assessment method to use depending on the situation. In terms of objective, usability testing is performed to determine the level of usability of the system based on feedback from a representative group of target users, whereas heuristic evaluation aims to gauge system usability based on information gathered by an expert panel. In terms of credibility, the use of a representative group of test participants drawn from the target user audience ensures that the feedback and ratings generated best represent the perception of the target audience; in contrast, with an expert panel it is less likely that the usability problems the panel identifies will be the same as those the target audience would encounter (Preston, 2004). In terms of resources and preparation time, usability testing consumes organizational resources such as manpower, time, and money, with the amount varying according to the test model, while heuristic evaluation takes less time and uses fewer resources, making it the better fit where time and resources are limited. Overall, if the organization has the resources available and is not pressed for time, usability testing is the best option. If, on the other hand, the organization is working to a set deadline with limited resources, a heuristic evaluation is the most viable path to take; the information gathered may then be supplemented by subsequent usability testing sessions as more time and resources become available.

References

Fiset, JY (2009). Human-machine interface design for process control applications. Research Triangle Park, NC: Instrumentation, Systems, and Automation Society.

Holzinger, A (2005). Usability engineering methods for software developers. Communications of the ACM, 48(1), 71-74.

Manzari, L & Trinidad-Christensen, J (2006). User-centred design of a website for library and information science students: Heuristic evaluation and usability testing. Information Technology and Libraries, 25(3), 163-169.

Preston, A (2004). Types of usability methods. Usability Interface, 10(3), 15.

Rosenbaum, S (2008). The future of usability evaluation: Increasing impact on value. In Law, E, Hvannberg, E & Cockton, G (Eds.), Maturing usability: Quality in software, interaction, and value (pp. 344-380). New York: Springer.

Rubin, J & Chisnell, D (2008). Handbook of usability testing: How to plan, design, and conduct effective tests. Indianapolis, IN: Wiley Publishing.

Silver, M (2005). Exploring interface design. Clifton Park, NY: Thomson-Delmar Learning.

Weber, JH (2004). Is the help useful? How to create online help that meets your users' needs. Whitefish Bay, WI: Hentzenwerke Publishing.

Zaphiris, P & Kurniawan, S (2007). Human computer interaction research in web design and evaluation. London: Idea Group Publishing.