Common Industry Format for Usability Test Report
Comments and questions about this format: iusr@nist.gov

Assignment Part 3: Usability Engineering
Website (www…)

// Edit the Table of Contents
Table of Contents
Executive Summary
Introduction
Description/Overview of Site
Scope
Test Objectives
Method
  Participants
  Context of Site/Application Use in Test
  Tasks
  Test Facility
  Participants' Computing Environment
  Experimental Design
  Procedure
  Participant Task Instructions
  Usability Metrics
Results
  Data Analysis
  Data Scoring
  Data Reduction
  Presentation of Results
  Example Effectiveness Results
Recommendations
References
Appendix A – Data Collection Forms
  Screening Questionnaire
  Orientation Script
  Pre-Test Questionnaire
  Video Consent Release Form
  Task Scenarios
  Post-Test Questionnaire
  Data Logger
  Facilitator's Checklist
Appendix B – Screen Shots

Executive Summary

The website under scrutiny is called Dream In Code (DIC) and can be accessed at http://www.dreamincode.net.
It hosts a leading online community for programmers and web developers and currently has 634,778 registered members. Each day, about 400 new members register on the site (Dream in Code: Online). Registered members enjoy full access to thousands of programming tutorials, code snippets, and dedicated forum topics where they can engage in constructive discussion on matters of concern to them. Over the last eight years, DIC has earned a strong reputation for providing expertise to many students and professionals in a friendly and timely manner.
Members of this site have profiles and, depending on their online activity, are awarded badges such as contributor, author, expert, mentor, alumni, administrator, moderator and webmaster. Badges appear below members' usernames. Contributors are members who submit tutorials and code snippets to the site (Dream in Code: Online). Authors, on the other hand, are members who have taken the time to write a number of tutorials or some unique code snippets. Experts are members who answer questions accurately and are regarded as authorities in a particular forum. Members who are categorised as mentors are given the privilege of moderating forums. Alumni are former members of the DIC staff. Moderators help the community understand the rules of engagement and dedicate a number of hours daily to moderating the site (HCI in the Software Process: Online). Administrators are awarded a green badge and are very active on the site; they help with its daily operations and suggest improvements. The webmaster oversees the day-to-day running of the site.
Introduction

This project addresses a number of issues related to usability engineering, from the selection of an appropriate evaluation method to a detailed description of its application. It is motivated by the desire to analyse the usability of DIC and determine how the target community can be served more effectively (HCI in the Software Process: Online). The work therefore consists of two main interrelated parts, both crucial to the project. The first introduces the site under scrutiny and its importance to the usability engineering process. The subsequent sections present the results of the research and the recommendations that can be made to improve the community.

Description/Overview of Site

DIC hosts a leading online community for programmers and web developers. It currently has 634,778 registered members, and every day about 400 new members join. Registered members enjoy access to thousands of programming tutorials, code snippets, and dedicated forum topics where they can engage in constructive discussion on matters of concern to them.
Over the last eight years, DIC has earned a strong reputation for providing expertise to many students and professionals in a friendly and timely manner.

Scope

The project will explore a number of areas relating to the site, ranging from the membership of the community, the activities and interests of its members, and the kinds of activity carried out on the site, to the number of members and the bulk of the work they do.

Test Objectives

- To determine the computing environment of the DIC community.
- To explore the activities, membership and participation of members of the computing community.
- To determine the performance of the computing environment in terms of member satisfaction.

Method

Participants

a) The total number of participants tested.
b) Segmentation of user groups tested, if more than one.
c) Key characteristics and capabilities of the user group.
d) How participants were selected; whether they had the essential characteristics.
e) Differences between the participant sample and the user population.
EXAMPLE: Actual users might attend a training course, whereas test subjects were untrained.
f) Table of participants by characteristics, including demographics, professional experience, computing experience and special needs. The characteristics shall be complete enough that an essentially similar group of participants can be recruited. Characteristics should be chosen to be relevant to the product's usability; they should allow a customer to determine how similar the participants were to the customer's user population.
Context of Site/Application Use in Test

Tasks

a) The task scenarios for testing.
b) Why these tasks were selected. EXAMPLES: the most frequent tasks, the most troublesome tasks.
c) Any task data given to the participants.
d) Completion or performance criteria established for each task.

Test Facility

a) The setting and type of space in which the evaluation was conducted. EXAMPLES: usability lab, cubicle office, meeting room, home office, home family room, manufacturing floor.
b) Any relevant features or circumstances that could affect the results. EXAMPLES: video and audio recording equipment, one-way mirrors, or automatic data collection equipment.
c) Computer configuration, including model, OS version, required libraries or settings.
d) Browser name and version; relevant plug-in names and versions.
e) If screen-based, screen size and resolution.
f) Any hardware or software used to record data.

Participants' Computing Environment

The computing configuration used for the test:
Computer model and specification:
Operating System:
Internet connection:
Mouse:
Browser:
Plug-ins:
Other software:

Display Devices

The monitor configuration used for the test is as follows:
Screen size:
Resolution:
Frequency:
Colour:

Experimental Design

Include the number of participants, the number of tasks they performed, and the test environment for each of them. Discuss the control variables in this test: browser, screen size, resolution, the same internet connection for each participant, etc.
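Where parts of the computing-environment record can be captured automatically, a short script may help. The sketch below is a minimal example using Python's standard platform module; the field names mirror the form above, and anything it cannot detect (browser, plug-ins, display settings) must still be recorded by hand.

import platform
import sys

# Sketch: auto-filling parts of the "Participants' Computing Environment"
# form. Only OS and machine details are detectable this way; the remaining
# fields are filled in manually by the facilitator.
environment = {
    "Computer model and specification": platform.machine(),
    "Operating System": platform.platform(),
    "Python version used by this script": sys.version.split()[0],
}
for field, value in environment.items():
    print(f"{field}: {value}")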
The measures recorded during the test, per user:
- Goal achievement
- Number of assists per task
- Number of errors
- Time-on-task
- Satisfaction: ease of task achievement, usefulness of the search functions, ease of finding one's way around the site, the amount of information on the web pages, the adequacy of the instructions on the website, and overall satisfaction with the website, measured using a Likert scale.

Procedure

- Discuss the entire procedure, from welcoming the users to thanking them after the test.
- Time limits on tasks.
- Verification that the participants knew and understood their rights as human subjects.
- Number and roles of people who interacted with the participants during the test session.
- Whether participants were paid or otherwise compensated.
- Measures recorded during the test per user: discuss how you defined the measures. EXAMPLE: number of assists per task: count = zero at the beginning of a task, incremented each time the facilitator helps the user.
- When and how the facilitator can assist and give hints must be discussed; see the logging sketch and the table below.
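As an illustration of these definitions, the sketch below shows how the per-user measures might be logged during a session. It is a minimal Python sketch; the class and method names (TaskLog, assist, finish) are hypothetical and not part of the report format.

import time

class TaskLog:
    """Per-task record of the measures defined above (hypothetical names)."""
    def __init__(self, task_number):
        self.task_number = task_number
        self.assists = 0                # count = 0 at the start of a task
        self.errors = 0
        self.goal_achievement = None    # 0, 50 or 100 (%), set on finish
        self.time_on_task = None        # seconds, set on finish
        self._start = time.monotonic()

    def assist(self):
        # Incremented each time the facilitator helps the user.
        self.assists += 1

    def error(self):
        self.errors += 1

    def finish(self, goal_achievement):
        self.time_on_task = time.monotonic() - self._start
        self.goal_achievement = goal_achievement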
You may use a table such as:

Task number | Time limit (minutes) | Assists allowed? | Type of assistance

Participant Task Instructions

General instructions given to the participants (here or in an Appendix). Instructions on how participants interact with any other person present, including how to ask for assistance and how to interact with other participants, if applicable. Task instruction summary.

Usability Metrics

Effectiveness can be measured using:
- Goal achievement
- Completion rate
- Assisted completion rate
- Error rating
- and any other relevant measure.

Efficiency:
- Time on task
- Success in benchmark time rate
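Continuing the hypothetical TaskLog sketch above, the effectiveness and efficiency measures listed here could be derived from the logged records roughly as follows; the function name and the benchmark parameter are assumptions for illustration only.

def task_metrics(logs, benchmark_seconds):
    """Summarise one task across participants from finished TaskLog records."""
    n = len(logs)
    completed = [log for log in logs if log.goal_achievement == 100]
    unassisted = [log for log in completed if log.assists == 0]
    in_time = [log for log in completed if log.time_on_task <= benchmark_seconds]
    return {
        "Completion rate": f"{len(completed)}/{n}",
        "Unassisted completion rate": f"{len(unassisted)}/{n}",
        "Assisted completion rate": f"{len(completed) - len(unassisted)}/{n}",
        "Error rating (mean errors)": sum(log.errors for log in logs) / n,
        "Mean time on task (s)": sum(log.time_on_task for log in logs) / n,
        "Success in benchmark time rate": f"{len(in_time)}/{n}",
    }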
Satisfaction:
Satisfaction is measured using a Likert scale. Sample questions:
1. How do you rate your overall satisfaction with the site?
2. How useful do you think the website is?
3. How easy do you think the site was to use to achieve the tasks you were given?
…………………………….
……………………………

Results

Data Analysis

Data Scoring

Use the various metrics to discuss. EXAMPLE, goal achievement:

Task number | Completion/Performance criteria
1 | 100% if the task is …; 50% complete if …; 0% complete if …

Satisfaction:

Metric | Measure | Scale | Measure
Overall Satisfaction | Not Satisfied | 3 2 1 0 1 2 3 | Very Satisfied

Data Reduction

Group tasks into types to discuss.

Presentation of Results

Discuss success rates:
a) The number of participants who completely and correctly achieved each task goal.
b) Errors: instances where test participants did not complete the task successfully, or had to attempt portions of the task more than once.
c) The unassisted completion rate (i.e. the rate achieved without intervention) as well as the assisted rate (i.e. the rate achieved with intervention), where these two metrics differ.
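Before turning to the satisfaction ratings below, one way to reduce the Likert responses to a single figure is sketched here. It assumes each response on the 7-point scale above is coded from -3 (Not Satisfied) to +3 (Very Satisfied) and rescales the mean to 0-100; this is only in the spirit of instruments such as the SUS, not the official SUS formula.

def satisfaction_score(responses):
    # responses: one coded value (-3 .. +3) per questionnaire item.
    mean = sum(responses) / len(responses)
    return (mean + 3) / 6 * 100      # rescale [-3, +3] to [0, 100]

print(satisfaction_score([2, 3, 1]))  # e.g. three items -> 83.3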
Satisfaction Ratings

a) One or more measures of user satisfaction. Questionnaires to measure satisfaction and associated attitudes are commonly built using Likert and semantic differential scales. A variety of instruments are available for measuring user satisfaction (such as the SUS).

Example Effectiveness Results

Goal Achievement (%)

Task type     | Task 1  | Task 2 | Task 3
Participant 1 | 100     | 100    | 100
Participant 2 | 100     | 100    | 100
Participant 3 | 100     | 20     | 100
Participant 4 | 100     | 100    | 100
Participant 5 | 100     | 100    | 100
Mean          | 100     | 84     | 100
Median        | 100     | 100    | 100
Range         | 100-100 | 20-100 | 100-100
SD            | 0.00    | 35.78  | 0.00
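The summary rows of the table above can be reproduced with Python's statistics module, as the short check below shows for the Task 2 column (sample standard deviation, matching the table).

import statistics

task2 = [100, 100, 20, 100, 100]          # Goal achievement (%), Task 2
print(statistics.mean(task2))             # -> 84.0
print(statistics.median(task2))           # -> 100
print(f"{min(task2)}-{max(task2)}")       # -> 20-100
print(round(statistics.stdev(task2), 2))  # -> 35.78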
Error Rating

Task type                  | Task 1 | Task 2 | Task 3
Participant 1              | 0      | 0      | 1
Participant 2              | 0      | 0      | 1
Participant 3              | 2      | 3      | 0
Participant 4              | 1      | 0      | 0
Participant 5              | 1      | 1      | 0
Mean                       | 0.8    | 0.8    | 0.4
Median                     | 1      | 0      | 0
Standard Deviation         | 0.84   | 1.30   | 0.55
Range                      | 0-2    | 0-3    | 0-1
Assisted Completion Rate   | 0/5    | 0/5    | 0/5
Unassisted Completion Rate | 5/5    | 4/5    | 5/5
Completion Rate            | 5/5    | 4/5    | 5/5

Recommendations

Findings
// List your findings – positive/negative – for every task and give an explanation.

Observations
// List your observations.

Recommendations
// List each recommendation and include the rationale behind it.

Overall Conclusion
// Conclude with an overall view of the website, its issues and the recommendations.

References

Dream in Code. Available at: http://www.dreamincode.net. Accessed on 28 April 2015.
HCI in the Software Process. Available at: http://www.hcibook.com/e3-docs/slides/notes-pdf/e3-chap-06-6up.pdf. Accessed on 28 April 2015.

Appendix A – Data Collection Forms

Screening Questionnaire
Orientation Script
Pre-Test Questionnaire
Video Consent Release Form
Task Scenarios
Post-Test Questionnaire
Data Logger
Facilitator's Checklist

Appendix B – Screen Shots

Embed screenshots of the website and highlight the usability issues/observations.