Challenge Summary


Welcome to the NASA HATTB Researcher and Participant Wireframe Challenge!

The scope of this challenge is to create interactive wireframes for a software application that runs tasks evaluating operator performance and workload.

HATTB (Human-Autonomy Teaming Task Battery) is intended to provide a simplified version of the real-world tasks performed by remote operators monitoring and controlling vehicles with varying levels of automation (e.g., pilots controlling drones). The primary use case for HATTB is a psychology researcher studying the performance of participants as they complete specific defined Scenarios. In the future, the application could be extended to many use cases outside of drone operation.

For this challenge we are looking for a true interactive wireframe solution that describes content and functionality. Your solution must allow interactive click-through of functional elements and screens. The branding and visual design of the UI for this application will be addressed in future challenges.

We have already completed Part 1 of this challenge here. Your wireframe must be built on the source files of the winning submission from the first challenge.

Note: the winning solution, which is in Figma, must be used as the base; however, you can use any of the tools listed under the Deliverables section, as long as the look and feel of the output screens is the same.

Please read the challenge specifications carefully.

Round 1

Submit your initial wireframes for checkpoint review. Feel free to add any screens which are necessary to explain your concept.
1. Experiment View (Researcher)
2. Session Manager View (Researcher)
3. Familiarization View (Participant)

Round 2

Submit your final wireframes plus checkpoint feedback implemented. Feel free to add any screens which are necessary to explain your concept.
1. Experiment View (Researcher)
2. Session Manager View (Researcher)
3. Familiarization View (Participant)
4. Scenario View (Participant)
5. Questionnaire View (Participant)
 
CHALLENGE OBJECTIVES
  • Interactive wireframes for a responsive desktop application.
  • Design for use only on Apple Mac computers (iMacs and MacBooks). The UI design must follow Apple’s Human Interface Guidelines.
  • Provide an end-to-end flow for the researcher’s and participant’s workflows as described in the specification.
  • Important: Screenshots will be provided to aid understanding of the application; we are looking for your own concepts and effective solutions to the functionality described (do not copy the reference screen designs).

APPLICATION OVERVIEW
With the HATTB application, researchers are able to conduct experiments that evaluate the performance of research participants as they monitor simulated autonomous objects. A Researcher is responsible for setting up experiments and configuring all the parameters for the tasks they wish to have included. The overall interface should be as flexible as possible and have a modular approach to be easy to update in the future. The desired outcome is a software application that provides a simple, intuitive interface while offering an extensive set of highly configurable research capabilities.
  • The majority of the application features are intended for the researcher. With the application they will create experiments, manage multiple components and configure them for participants to complete.
  • The researcher will have the ability to create new components or select from existing ones (Familiarizations, Questionnaires, Scenarios).
  • To create a Scenario, the researcher will use the application to plan and set up multiple tasks for the participant to complete. Each task will require varying levels of automation and decision making. The application will allow the user to create extensive Scenarios with highly configurable variations.
  • Functional Note: The application is NOT browser-based, but operates locally on the target device. The software does not require an internet connection for any functionality, as it does not connect to any servers. The only necessary connections are device-to-device and the various connection options associated with sharing files.

AUDIENCE
The full application will address two (2) user roles: researcher and participant.
This challenge will focus on a portion of their workflows.

User Role 1: Researcher
  • Background: Experimental Psychologist
    • Academic environment (professor or student)
    • Government or Industry (researcher)
  • Age: varies
  • Programming experience: minimal to extensive
  • Requires interfaces for planning Scenarios and implementing experiments.

User Role 2: Participant
  • Background: Undergraduate Psychology Student
  • Age: 20 years old on average
  • Aviation / Drone experience: none
  • Video Game Use: avg. 9 hrs/week
  • Computer Use: avg. 17-30 hrs/week
  • Requires interfaces to complete assigned experiment tasks.


PERSONA

Researcher:
  • Name: Mica Garcia
  • Occupation: Professor
  • Goals:
    • Compile data on their students’ (participants’) performance on experiment tasks
    • Ability to easily customize the functionality included in the experiment Scenarios
    • Ability to customize the interface for their participants (move and resize windows, style elements, etc.)
  • Frustrations:
    • No existing tool includes all of the complex features Mica requires in one application
  • Wants:
    • A single, easy-to-use application for macOS to conduct research with their students, with the ability to customize each experiment as needed.

Participant:
  • Name: Jessie Smith
  • Occupation: Undergraduate Student
  • Goals:
    • Participate in complex research experiments set up by their professor.
  • Frustrations:
    • The tasks are often very difficult to use (e.g., they can't easily navigate around and use the controls). It is frustrating for Jessie, who has never needed to do the task before and is now being tested on how well they can do it.
  • Wants:
    • A single, easy-to-use application to perform the tasks assigned to them.

USER WORKFLOW
Mica is using the NASA HATTB application to conduct research with their students. They are able to define an experiment with multiple sessions in a specific order.

Each session consists of:
  1. A Familiarization view - This view gives the participant information about the goal of the session, how their points are calculated, and what they need to do.
  2. A Scenario view - This is where the intense action happens. A scenario consists of several tasks the participant will complete. There are a multitude of tasks and settings to choose from and customize. Tasks may happen one after the other or run in parallel, testing the participant’s attention.
  3. A Questionnaire view - This is a set of questions presented to the participant to collect feedback. It may be presented before, during and/or after a scenario. Note: there will be multiple questionnaires per session (possibly in a combination of before, during, and after).

High-level workflow:
  1. Author a new component (Authoring views of Familiarization, Scenario, Questionnaire)
  2. Access existing components (Library views of Familiarization, Scenario, Questionnaire)
  3. Arrange desired components into Sessions and Sessions into Experiments (Experiment Authoring view)
  4. Select Experiment to configure and run with participants (Experiment Library view)

Mica has created Familiarizations, Questionnaires, and detailed Scenarios. Mica is able to define a New Experiment by arranging multiple sessions, including these elements, in the sequence they prefer. Mica includes the new elements and some existing elements in a few sessions. They have browsed and selected from the libraries (the Familiarization Library, Scenario Library, and Questionnaire Library).

When Mica chooses to define a scenario, they will interact with the Scenario Authoring View. Here they can configure the tasks the participants will complete in the experiment, for example the MAP Task, Search Task, and System Monitoring Task. The design of these features is being done in a separate challenge.

Once Mica sees that the Experiment includes the components desired, they have the ability to save and lock this experiment to prevent changes.

To run an experiment with a participant, Mica views the Experiment and selects session(s) to set up and activate. After Mica does this, the participant is able to interact with that session - the specified Familiarizations, Scenarios, and Questionnaires - and complete the tasks that are included.

During experiments, the application will record and save participant data locally on the device. The recording can later be played back to review the participant’s interactions.
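
Implementation is out of scope for a wireframe challenge, but a concrete mental model of this record-and-playback requirement may help when wireframing the playback controls. The sketch below shows one plausible way such locally stored interaction data could be structured; every type and property name is an illustrative assumption, not part of the specification.

```swift
import Foundation

// Hypothetical sketch of locally recorded session data (all names assumed).
struct InteractionEvent: Codable {
    let timestamp: TimeInterval   // seconds since the session started
    let taskID: String            // e.g. "map-task", "search-task"
    let action: String            // e.g. "waypoint-added", "button-press"
}

struct SessionRecording: Codable {
    let participantID: String
    let sessionID: String
    var events: [InteractionEvent] = []

    // Playback simply replays the events in chronological order,
    // handing each one to the UI layer to re-render.
    func playback(_ render: (InteractionEvent) -> Void) {
        for event in events.sorted(by: { $0.timestamp < $1.timestamp }) {
            render(event)
        }
    }
}
```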

GLOSSARY
Experimental psychologist = A psychologist who uses scientific methods to collect data and perform research (read more here).

Automation = A device or system that accomplishes (partially or fully) a function that was carried out (partially or fully) by a human operator

Task = A single display window (or element) that has a specific objective (or objectives) requiring the participant’s attention and interaction.

Scenario = A predefined series of one or more tasks, which can be performed simultaneously, consecutively, or as a combination of both.

Familiarization = A view (or multiple views) shown to a participant immediately preceding a scenario to provide specific information, examples, or training for the participant.

Questionnaire = A set of questions presented to the participant (before, during, or after a scenario) to collect feedback from the participant.

Session = A predefined series of Scenarios, Familiarizations, and Questionnaires, which must be completed by the participant sequentially (in a specific order). A participant will complete a session from start to finish.

Experiment = A set of sessions and associated participant data collected from completed sessions. Multiple participants may complete the same session within an experiment.

MAP Task = Multi-Agent Planning Task; the main task of the experiment, in which the participant must monitor several flying entities/vehicles and prevent them from colliding or entering restricted areas.

HAT = Human-Autonomy Team, a distinguishable set of two or more agents (humans or autonomous systems) who interact toward a common goal.

HAT Goal = Capabilities and principles that facilitate humans and autonomous systems working and thinking better together.

LOA = Levels of Automation, a range of levels from level 1, where the system provides no assistance and the human makes all decisions, to level 10, where everything is automated by the system and the human is not involved. More details can be seen here.

Ownship = In an experiment, the representation of the drone which the participant is operating.

Waypoints = Points added for an entity/drone to create a path from its start location to its target location.
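
To make the nesting in the definitions above concrete (Experiments contain Sessions; Sessions contain Familiarizations, Scenarios, and Questionnaires in order; Scenarios contain Tasks), here is a minimal data-model sketch. It is a reading aid only; every name in it is an illustrative assumption.

```swift
// Minimal sketch of the glossary hierarchy (all names assumed).
struct Task {
    let name: String            // e.g. "MAP Task", "Search Task"
    let levelOfAutomation: Int  // LOA: 1 (fully manual) ... 10 (fully automated)
}

struct Scenario { var tasks: [Task] }            // run consecutively, simultaneously, or both
struct Familiarization { var screens: [String] } // views shown before a scenario
struct Questionnaire { var questions: [String] } // asked before, during, or after a scenario

// A session is an ordered series of components, completed start to finish.
enum SessionComponent {
    case familiarization(Familiarization)
    case scenario(Scenario)
    case questionnaire(Questionnaire)
}

struct Session { var components: [SessionComponent] }

// An experiment groups sessions; multiple participants may complete the same session.
struct Experiment {
    var sessions: [Session]
    var isLocked: Bool          // saved and locked to prevent further changes
}
```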

SCREENS / FEATURES REQUIRED
The screens/features described below are required for the researcher’s and participant’s experiences. They are based on the workflow described above. This challenge is a follow-up to the previous challenge, where we defined the Library Views and Authoring Views for Experiments, Familiarizations, and Questionnaires.

Reference guide for app functionality: NASA HATTB Functional Overview Short.pdf.
Relevant pages are noted with the requirements described below.


The wireframe solution must be based on the winning design from the Researcher Part 1 challenge (download here). The source file for that design is in Figma; however, you can use any of the tools listed under the Deliverables section, as long as the look and feel of the output screens is the same.

1. Experiment View (Researcher)
Experiment example: link.
The researcher will come to this screen from the Experiment Library View, after selecting a specific Experiment.

Note: This screen will fill the “Details” tab when viewing a single experiment within the previous challenge solution.

In this view, the researcher will see the structure of sessions with their included elements. The elements included in this view are based on what the researcher has defined in the Experiment Authoring View (from the Researcher Part 1 challenge).

The researcher will have the ability to select and start sessions for participants. Sessions can optionally be selected at random (for blind studies). From this view, researchers may also export the experiment data in different formats and play back scenarios from participant sessions.

2. Session Manager View (Researcher)
This view includes all of the functionality that a researcher will need to access during an active session with a participant. This includes:
  • Set participant ID
  • Move between session elements (i.e., familiarizations, scenarios, and questionnaires)
  • Restart a session
  • See data collection status for the current session (see the sketch after this list).
    • Not Started
    • In Progress
    • Complete
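
As a reading aid for the Session Manager behavior above, the sketch below models the three data collection statuses and the researcher actions as a tiny state machine; the type names and transition logic are assumptions, not part of the specification.

```swift
// Hypothetical sketch of Session Manager state (all names assumed).
enum DataCollectionStatus: String {
    case notStarted = "Not Started"
    case inProgress = "In Progress"
    case complete   = "Complete"
}

struct ActiveSession {
    var participantID: String
    var elementCount: Int          // familiarizations + scenarios + questionnaires
    var currentElementIndex = 0
    var status: DataCollectionStatus = .notStarted

    mutating func start() { status = .inProgress }

    // Move between session elements; completing the last one ends the session.
    mutating func advance() {
        currentElementIndex += 1
        if currentElementIndex >= elementCount { status = .complete }
    }

    mutating func restart() {
        currentElementIndex = 0
        status = .notStarted
    }
}
```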

3. Familiarization View (Participant)
Example of Familiarization: link.

This view is one of the few views that is presented to the participant during a session. The Familiarization View provides the participant with information, examples, and/or training (typically before the beginning of a scenario). Familiarization views may contain multiple screens and may be presented more than once during a session.

The elements included in this feature are based on what the researcher has defined in the Familiarization Authoring View (from Researcher Part 1 challenge).


4. Scenario View (Participant) 
Scenario Example: link.
The participant will execute the tasks of the experiment in this view.

The Scenario View includes all of the tasks as defined by the researcher in the Scenario Authoring View. This authoring functionality is being solved in a separate challenge. The functionality details are described in the NASA HATTB Functional Overview PDF.

The scenario view could include a combination of various task components from this set:
  1. MAP Task
  2. HAT Message Board
  3. Predictive Timeline
  4. Search Task
  5. System Monitoring Task
  6. Compensatory Tracking Task
  7. Distractor Data Window
  8. Performance Status Window

See an example of a completed scenario. In most experiments, not all of the components would be included on one screen at the same time. Suggest display options that could accommodate different numbers of task components.

The reference PDF describes how each of those components is created by the researcher. However, in this view, the participant would only see how these components are displayed, not how they are authored. Consider something similar to the above screenshot example.

Note: The visual appearance of the task components will be highly customizable by the researcher. For this view, a simple default representation of each component is sufficient.
  1. MAP Task: At a high level, this task allows the participant to supervise their ownship to prevent it from colliding with any of the other objects or entering restricted areas. (PDF p.1-10)
  2. HAT Message Board: This window is a support tool for the Multi-Agent Planning (MAP) Task that displays outputs (e.g., vehicle state data) from the MAP Task and will occasionally prompt the participant for input (e.g., button press) to respond to decisions about a vehicle. Depending on the MAP Task configuration, this window will not always be necessary. (PDF p.1-10)
  3. Predictive Timeline: This window is an optional support tool for the Multi-Agent Planning (MAP) Task that displays vehicle details (e.g., departure time, arrival time) on a timeline view for additional situation awareness.
  4. Search Task: In this task, the participant must identify whether a target letter is present or absent among a field of distractor letters. In this example, find the letter ‘O’ among a field of ‘Q’ letters. (PDF p.74-93)
  5. System Monitoring Task: This window displays several components that either move or change color to indicate that input is required from the participant. (PDF p.97)
  6. Compensatory Tracking Task: This window shows a circle within crosshairs. The participant must provide directional input to keep the circle in the center. The circle will drift away from the center if there is no input from the participant (a minimal behavioral sketch follows this list). (PDF p.96)
  7. Distractor Data Window: This window shows a text feed of dynamic information that has no correlation with any of the tasks. Some of the data labels can optionally correspond to MAP Task elements. The purpose of  this window is to distract the participant without providing any useful information.
  8. Performance Status Window: Based on their performance, the participants will accumulate points associated with specific tasks completed. These are tracked in the “Point Bank” throughout the session. (PDF p.94).
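
Of the components above, the Compensatory Tracking Task (item 6) is the most algorithmic, so a minimal behavioral sketch of its drift-and-correct loop may help when wireframing its default representation. The update rate, drift magnitude, and all names are assumptions, not values from the reference PDF.

```swift
import CoreGraphics

// Hypothetical sketch of the Compensatory Tracking Task loop (all values assumed).
struct TrackingTaskState {
    var offset = CGPoint.zero          // circle position relative to the crosshair center
    let driftPerTick: CGFloat = 2.0    // assumed drift magnitude per update

    // Called once per tick: random drift pushes the circle away from center,
    // and the participant's directional input pulls it back.
    mutating func update(input: CGVector) {
        offset.x += CGFloat.random(in: -driftPerTick...driftPerTick) + input.dx
        offset.y += CGFloat.random(in: -driftPerTick...driftPerTick) + input.dy
    }

    // Tracking error: distance from the crosshair center (lower is better).
    var error: CGFloat {
        (offset.x * offset.x + offset.y * offset.y).squareRoot()
    }
}
```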

5. Questionnaire View (Participant)  
Example of questionnaire: link 1 and link 2.

The Questionnaire View presents the participant with questions during a session (before, during, and/or after a scenario). These questions and response options are defined by the researcher in the Questionnaire Authoring View (from the Researcher Part 1 challenge). Participants will typically be presented with multiple questionnaires consecutively.


JUDGEMENT CRITERIA
  1. Presentation: Thorough: provide a thorough wireframe solution. It should simulate the workflow described, including linked screens, variations and behaviors of elements.
  2. Creativity: Impactful: the solution is different or unique from what is already out there and can be implemented.
  3. Exploration: Out of the box: consider the screen requirements and guidelines as a draft or starting point. Provide alternate experiences or workflows to achieve what we are proposing in the requirements and satisfy the user goals.
  4. Aesthetics: Low-fidelity design: provide plain simple aesthetics within a detailed wireframe solution.
  5. Branding: Open: Your solution MUST be pure wireframes without the use of branding or styling of UI elements. Focus is on content and functionality. Design must follow Apple’s Human Interface Guidelines.

Device Specifications
  • Desktop: Apple MacBook (macOS)
  • Size: 1366 x 768 (width x height)

Stock Photos and Icons
  • Photos: Use only free photos from the sites allowed by Topcoder
  • Icons: Use only icons that are free (based on Topcoder icons policy) without giving attribution, or create the icons yourself. If the icons are not free, we will require them to be replaced in the final fixes.
  • Fonts: Use fonts as allowed by Topcoder policy

Marvel Prototype
  • Upload your screens to Marvel App.
  • Ask in the forums for a Marvel project
  • Include your Marvel app URL as a text file in your final submission. Label it “MarvelApp URL”.

Final Deliverables
  • An archive called Source.zip, to include:
    • All original source files created with a wireframe tool such as Axure, Adobe XD, Figma, or Sketch.
  • An archive called Submission.zip, to include:
    • All of your wireframes in HTML format or PNG/JPG format
  • Your declaration file, to include any notes about font usage, photo and icon declarations, and a link to your Marvel project
  • Create a JPG preview file of 1024 x 1024 px

Please read the challenge specification carefully and watch the forums for any questions or feedback concerning this challenge. It is important that you monitor any updates provided by the client or Studio Admins in the forums. Please post any questions you might have for the client in the forums.

How To Submit

  • New to Studio? Learn how to compete here
  • Upload your submission in three parts (Learn more here). Your design should be finalized and should contain only a single design concept (do not include multiple designs in a single submission).
  • If your submission wins, your source files must be correct and “Final Fixes” (if applicable) must be completed before payment can be released.
  • You may submit as many times as you'd like during the submission phase, but only the number of files listed above in the Submission Limit that you rank the highest will be considered. You can change the order of your submissions at any time during the submission phase. If you make revisions to your design, please delete submissions you are replacing.

Winner Selection

Submissions are viewable to the client as they are entered into the challenge. Winners are selected by the client and are chosen solely at the client's discretion.

Challenge links

Screening Scorecard

Submission format

Your Design Files:

  1. Look for instructions in this challenge regarding what files to provide.
  2. Place your submission files into a "Submission.zip" file.
  3. Place all of your source files into a "Source.zip" file.
  4. Declare your fonts, stock photos, and icons in a "Declaration.txt" file.
  5. Create a JPG preview file.
  6. Place the 4 files you just created into a single zip file. This will be what you upload.

Trouble formatting your submission or want to learn more? Read the FAQ.

Fonts, Stock Photos, and Icons:

All fonts, stock photos, and icons within your design must be declared when you submit. DO NOT include any 3rd party files in your submission or source files. Read about the policy.

Screening:

All submissions are screened for eligibility before the challenge holder picks winners. Don't let your hard work go to waste. Learn more about how to pass screening.

Questions? Ask in the Challenge Discussion Forums.

Source files

  • Sketch
  • Adobe XD
  • Figma
  • Axure

You must include all source files with your submission.

Submission limit

Unlimited

ID: 30191412