Challenge Summary



Welcome to the NASA HATTB Researcher Part 2 Wireframe Challenge! The scope of this challenge is to create interactive wireframes for a software application to run tasks evaluating operator performance and workload.

HATTB (Human-Autonomy Teaming Task Battery) is intended to provide a simplified version of the real-world tasks performed by remote operators monitoring and controlling vehicles with varying levels of automation (i.e., pilots controlling drones). The primary use case for HATTB is a psychology researcher studying the performance of participants as they complete specific defined Scenarios. In the future, the application could be extended to many use cases outside of drone operation.

For this challenge we are looking for a true interactive wireframe solution to describe content and functionality. Your solution must allow interactive click-thru of functional elements and screens. The branding and visual design of the UI for this application will be addressed in future challenges.

We have already completed Part 1 of this challenge here. Your wireframe must be built on the source files of the winning solution from the first challenge.

Note: the winning solution needs to be used as the base, and it is in Figma; however, you can use any of the tools listed under the Deliverables section, as long as the look and feel of the output screens is the same.

Please read the challenge specifications carefully

Round 1

Submit your initial wireframes for checkpoint review. Feel free to add any screens which are necessary to explain your concept.
1. Scenario Authoring View
2. MAP Task >> 2a - 2b

Round 2

Submit your final wireframes plus checkpoint feedback implemented. Feel free to add any screens which are necessary to explain your concept.
1. Scenario Authoring View
2. MAP Task >> 2a - 2d
 
CHALLENGE OBJECTIVES
  • Interactive wireframes for a responsive desktop application.
  • Design for use only on Apple Mac computers (iMac and MacBook). The UI design must follow Apple’s Human Interface Guidelines.
  • Provide an end-to-end flow for the researcher’s workflow as described in the specification.
  • Important: Screenshots will be provided to aid understanding of the application; we are looking for your own concepts and effective solutions to the functionality described (do not copy the reference screen designs).

APPLICATION OVERVIEW
With the HATTB application, researchers are able to conduct experiments that evaluate the performance of research participants as they monitor simulated autonomous objects. A researcher is responsible for setting up experiments and configuring all the parameters for the tasks they wish to have included. The overall interface should be as flexible as possible and have a modular approach to be easy to update in the future. The desired outcome is a software application which provides a simple, intuitive interface while offering an extensive set of highly configurable research capabilities.
  • The majority of the application features are intended for the researcher. With the application they will create experiments, manage multiple components and configure them for participants to complete.
  • The researcher will have the ability to create new components or select from existing (Familiarizations, Questionnaires, Scenarios)
  • To create a Scenario, the researcher will use the application to plan and set-up multiple tasks for the participant to complete. Each task will require various levels of automation and decision making. The application will allow the user to create extensive Scenarios with highly configurable variations.
  • Functional Note: The application is NOT browser-based; it operates locally on the target device. The software does not require an internet connection for any functionality, as it does not connect to any servers. The only necessary connections are device-to-device and the various connection options associated with sharing files.
  • Note on sharing: The shared format should be human-readable (e.g. JSON, CSV) and editable outside of the software. Additionally, users should be able to quickly determine that they have the same version of an element on separate devices (e.g. a unique short code or word).
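As a purely illustrative sketch (the spec names no concrete format): a shared element could be stored as plain JSON, and the "unique short code" derived from a hash of the file's contents, so that two devices holding identical versions display the same code. All field names below are hypothetical.

```python
import hashlib
import json

def version_code(payload: dict, length: int = 6) -> str:
    """Derive a short, human-comparable code from an element's contents.

    Serializing with sorted keys makes the code deterministic: two devices
    holding identical content always display the same code.
    """
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:length].upper()

# Hypothetical shared Scenario element (field names are illustrative only).
scenario = {
    "type": "scenario",
    "name": "Two-ownship MAP baseline",
    "tasks": [{"task": "MAP", "start_at_s": 0, "duration_s": 600}],
}

print(version_code(scenario))  # same content -> same code on every device
```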

AUDIENCE
The full application will address two (2) user roles: researcher and participant.
This challenge will only focus on the workflow for the researcher. The participant role is described to give more context to the problem.

User Role 1: Researcher
  • Background: Experimental Psychologist
    • Academic environment (professor or student)
    • Government or Industry (researcher)
  • Age: varies
  • Programming experience: minimal to extensive
  • Requires interfaces for planning Scenarios and implementing experiments.

User Role 2: Participant (in parallel challenge)
  • Background: Undergraduate Psychology Student
  • Age: 20 years old on average
  • Aviation / Drone experience: none
  • Video Game Use: avg. 9 hrs/week
  • Computer Use: avg. 17-30 hrs/week
  • Requires interfaces to complete assigned experiment tasks.


PERSONA
  • Name: Mica Garcia
  • Occupation: Professor
  • Goals:
    • Compile data based on their students’ (participants) performance with experiment tasks
    • Ability to easily customize the functionality included in the experiment Scenarios
    • Ability to customize the interface for their participants (move and resize windows, style elements, etc.)
  • Frustrations:
    • There is not an existing tool which includes all the complex features Mica requires in one application
  • Wants:
    • A single, easy-to-use macOS application to conduct research with their students, with the ability to customize each experiment as needed.


USER WORKFLOW
Mica is using the NASA HATTB application to conduct research with their students. They are able to define an experiment with multiple sessions in specific order.

Each session consists of:
  1. A Familiarization view - This view gives the participant information about the goal of the sessions, how their points are calculated and what they need to do.
  2. A Scenario view - This is where the intense action happens. A scenario consists of several tasks the participant will complete. There are a multitude of tasks and settings to choose from and customize. Tasks may happen one after the other or run in parallel, testing the participant’s attention.
  3. A Questionnaire view - This is a set of questions presented to the participant to collect feedback. It may be presented before, during and/or after a scenario. Note: there will be multiple questionnaires per session (possibly in a combination of before, during, and after).

High-level workflow:
  1. Author a new component (Authoring views of Familiarization, Scenario, Questionnaire)
  2. Access existing components (Library views of Familiarization, Scenario, Questionnaire)
  3. Arrange desired components into Sessions and Sessions into Experiments (Experiment Authoring view)
  4. Select Experiment to configure and run with participants (Experiment Library view)

Mica has full control to define the tasks the participant will complete in the experiment. For example: MAP Task, Search Task, System Monitoring Task and many others. As multiple tasks may happen simultaneously, a multi-track timeline approach is used to schedule the events. A score is tracked based on how well the participants perform.
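One illustrative (not prescribed) way to model the multi-track timeline behind this workflow is parallel tracks of timed events, any of which may overlap:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A task or trigger placed on the timeline (names are illustrative)."""
    name: str
    start_s: float      # seconds into the scenario
    duration_s: float

    @property
    def end_s(self) -> float:
        return self.start_s + self.duration_s

@dataclass
class Track:
    label: str
    events: list[Event] = field(default_factory=list)

# A scenario is simply a list of tracks; events on different tracks may run
# in parallel, testing the participant's divided attention.
scenario = [
    Track("MAP Task", [Event("MAP Task", 0, 600)]),
    Track("Questionnaire", [Event("Mid-scenario questionnaire", 300, 60)]),
]

def concurrent_at(tracks: list[Track], t: float) -> list[str]:
    """Names of all events active at time t, across every track."""
    return [e.name for tr in tracks for e in tr.events
            if e.start_s <= t < e.end_s]

print(concurrent_at(scenario, 330.0))
# -> ['MAP Task', 'Mid-scenario questionnaire']
```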

GLOSSARY
Experimental psychologist = a psychologist who uses scientific methods to collect data and perform research (read more here).

Automation = a device or system that accomplishes (partially or fully) a function that was carried out (partially or fully) by a human operator

Task = a single display window (or element) that has a specific objective(s), which requires the participant’s attention and interaction.

Scenario = a predefined series of one or more tasks, which can be performed simultaneously, consecutively, or as a combination of both.

Familiarization = a view (or multiple views) shown to a participant immediately preceding a scenario to provide specific information, examples, or training for the participant.

Questionnaire = A set of questions presented to the participant (before, during, or after a scenario) to collect feedback from the participant.

Session = A predefined series of Scenarios, Familiarizations, and Questionnaires, which must be completed by the participant sequentially (with a specific order). A participant will complete a session from start to finish.

Experiment = A set of sessions and associated participant data collected from completed sessions. Multiple participants may complete the same session within an experiment.

MAP Task = Multi-Agent Planning Task, the main task of the experiment, in which the user must monitor several entities/vehicles in flight and prevent them from colliding or entering restricted areas

HAT = Human-Autonomy Team, a distinguishable set of two or more agents (humans or autonomous systems) who interact toward a common goal

HAT Goal = Capabilities and principles that facilitate humans and autonomous systems working and thinking better together

LOA = Levels of Automation, a range from level 1, where the system provides no assistance and the human makes all decisions, to level 10, where everything is automated by the system and the human is not involved. More details can be seen here.

Ownship = In an experiment, the representation of the drone which the participant is operating

Waypoints = Points added for an entity/drone to create a path from its start location to its target location

SCREENS / FEATURES REQUIRED
The screens/features described below are required for the researcher’s experience, based on the workflow described above. This Researcher Part 2 challenge will focus only on defining the Scenario Authoring views. This challenge is a follow-up to the previous challenge, in which we defined the Library views and Authoring views for Experiments, Familiarizations and Questionnaires.
Scope Note: This challenge does NOT include the participants’ user flow.

Reference guide for app functionality: NASA HATTB Functional Overview Short.pdf.
Pages will be noted with requirements described below.


The wireframe solution will be based on the winner’s design from challenge Researcher Part 1 (download here). That solution was created in Figma; however, you can use any of the tools listed under the Deliverables section, as long as the look and feel of the output screens is the same.

1. Scenario Authoring View
This screen is the interface to build a scenario from the available components.

Note: This is the primary authoring view. From here the user will have the ability to select which components to include, customize each of them, and define the sequence of activity via a timeline interface.
  • Each scenario is defined on a timeline with events (or tasks) being triggered at specific times.
  • Because of the number of available tasks that can be included in a scenario and the number of configurable events and parameters within each task, this view will require a creative approach.
  • The multi-track timeline views from video editing or audio editing software may provide a good framework for this scenario authoring tool, as there will need to be multiple “tracks” on the timeline for the various tasks and events that occur simultaneously.
  • The researcher will also need to configure the layout of these tasks on the screen.

Additionally, some questionnaires may be presented during scenarios, which should be configured within this view. The scenarios are saved locally to the device.

The scenario view could include a combination of various task components (4.3) from this set:
  1. MAP Task (required for this challenge)
  2. HAT Message Board (out of scope)
  3. Predictive Timeline (out of scope)
  4. Search Task (out of scope)
  5. System Monitoring Task (out of scope)
  6. Compensatory Tracking Task (out of scope)
  7. Distractor Data Window (out of scope)
  8. Performance Status Window (out of scope)
See an example of a completed scenario.

Each task will be represented as a separate configurable window.
  • Examples of window layouts are here
  • The user will be able to:
    • Move windows around
    • Resize windows
    • Allow for windows without information

2. MAP Task
The primary component for most experiments will be the MAP Task. This window will show objects moving around the screen and will occasionally prompt the user for input. This is the most complex task to configure in the Scenario Authoring View because there are many configurable parameters and time-based event triggers associated with this task.

At a high level, this task has the user supervise their ownship to prevent it from colliding with any of the other objects or from entering restricted areas. There should be a default style for elements within the task screens (for example, blue arrows for ownships). However, the author will have the ability to customize appearance attributes for their experiment (color, border, etc.).

The task can be configured to include multiple origins/destinations, multiple vehicles, restricted areas, and various automation tools. Additionally, the user can modify colors, icons, and the background (with the option to use locally saved images). The MAP Task will generate multiple performance metrics, which will be recorded by the software.

2a. MAP Task Details:
The participant’s goal is to navigate their ownship from a Start Location A to a Target Location B. In addition, there are other elements that affect the navigation. Please address all the functionality described in the document (PDF p.1-8). Include:
  • Dynamic Entities
  • Start Locations
  • Target Locations
  • Restricted Areas

Dynamic Entities (PDF p.9):
  • These include:
    • Ownships
    • Cooperative Entities: on the participant’s team, but the participant is not directly responsible for managing them (represented with outline triangles)
    • Uncooperative Entities: not on the participant’s team. Their Start and Target locations may be the same as or different from those of the ownship and Cooperative Entities
  • For each type:
    • Add as many entities as they wish
    • Customize the start location and target location
    • Customize the label, color, pattern and/or size.
    • Select speed
    • Select time into MAP Task to introduce a Dynamic Entity
    • Enable/disable automation:
      • Collision avoidance
      • Restricted area avoidance
      • Separation indicator aid
      • Current route path
      • Automated plan bank
    • Information Display Option: a window showing data tags for all Dynamic Entities (PDF p.68-70).
      • Configured by the researcher
      • Option to display the data tags on right-click on an entity, in a separate window, or in both places

Start Locations (p.10); Target Locations (p.11); Restricted Areas (p.12):
  • Customizations similar to Dynamic Entities described above

General Settings (PDF p.13):
  • Select a blank background or change as desired
  • Set Dynamic Entities to disappear if they collide with another Dynamic Entity
  • Set point system (more about this is shown on point 9 of the specifications):
    • Starting point bank value
    • Deduction value for ownship collision
    • Deduction value for ownship Restricted Area breach
    • Addition value for reaching Target Location.
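The point system reduces to simple bank arithmetic. A hypothetical sketch, with parameter names and default values chosen for illustration only:

```python
from dataclasses import dataclass

@dataclass
class PointSystem:
    """Researcher-configured scoring parameters (names/values illustrative)."""
    starting_bank: int = 100
    collision_deduction: int = 20
    restricted_area_deduction: int = 10
    target_reached_addition: int = 25

def score(ps: PointSystem, collisions: int, breaches: int, targets: int) -> int:
    """Point bank after the scenario's scored events."""
    return (ps.starting_bank
            - collisions * ps.collision_deduction
            - breaches * ps.restricted_area_deduction
            + targets * ps.target_reached_addition)

ps = PointSystem()
print(score(ps, collisions=1, breaches=2, targets=1))  # 100 - 20 - 20 + 25 = 85
```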

Event Triggers (PDF p.14):
  • Temporal: the event triggers at a pre-specified amount of time into the experimental scenario
  • Spatial: the event triggers when ownship(s) breach edge of a Target Area, Restricted Area, or Dynamic Entity
  • Event-based: the event is triggered by another event or another task.
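A minimal sketch of how the three trigger types could be represented and evaluated (field names are hypothetical, not from the spec):

```python
import enum

class TriggerType(enum.Enum):
    TEMPORAL = "temporal"   # fires at a set time into the scenario
    SPATIAL = "spatial"     # fires when an ownship breaches a boundary
    EVENT = "event"         # fires when another event or task has fired

# Illustrative trigger records:
triggers = [
    {"type": TriggerType.TEMPORAL, "at_s": 120},
    {"type": TriggerType.SPATIAL, "boundary": "RestrictedArea-2"},
    {"type": TriggerType.EVENT, "after": "spawn-uncooperative-1"},
]

def is_due(trigger, *, t, breached, fired):
    """Check whether a trigger's condition is currently met."""
    if trigger["type"] is TriggerType.TEMPORAL:
        return t >= trigger["at_s"]
    if trigger["type"] is TriggerType.SPATIAL:
        return trigger["boundary"] in breached
    return trigger["after"] in fired

print(is_due(triggers[0], t=130, breached=set(), fired=set()))  # True
```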

2b. Automated Task Manager
Within the MAP task is a suite of automated tools to help the participant complete the task. These tools consist of:
A. Waypoint following
B. Collision avoidance
C. Restricted area avoidance
D. Indicator circle
E. Current route path
F. Battery indicator/alert

A. Waypoint following (PDF p.17-35). The researcher will be able to define the following options for the task:
  • Manual Waypoint Following - Option to allow the participant to manually place waypoints for the ownship to follow automatically. The researcher can set this on/off for the participant.
  • Pre-defined Waypoint Following - The researcher can use Waypoint Following to pre-define the paths for a session. When the experiment starts, the ownship will follow that path.
  • Dynamic Waypoint Following - Option to allow the participant to add waypoints as the ownship moves towards the target. This allows the ship to avoid a collision with an Uncooperative Entity or to avoid a Restricted Area.
  • Different levels of automation can be set, from 1 to 10, as shown on pages 37-45 of the same PDF. You should show all those levels and what the researcher can set for each of them in the interface.
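A bare-bones sketch of the waypoint-following behavior itself (coordinates and speeds are illustrative; the automation levels 1-10 layer researcher-configured policies on top of logic like this):

```python
import math

def step_toward(pos, target, distance):
    """Move `pos` up to `distance` units toward `target`."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    gap = math.hypot(dx, dy)
    if gap <= distance:
        return target, True                       # waypoint reached
    return (pos[0] + dx / gap * distance,
            pos[1] + dy / gap * distance), False

def follow(pos, waypoints, speed, dt):
    """One simulation tick of automatic waypoint following."""
    if waypoints:
        pos, reached = step_toward(pos, waypoints[0], speed * dt)
        if reached:
            waypoints.pop(0)                      # advance to next waypoint
    return pos, waypoints

pos, wps = (0.0, 0.0), [(3.0, 4.0), (10.0, 4.0)]
pos, wps = follow(pos, wps, speed=5.0, dt=1.0)
print(pos, wps)  # -> (3.0, 4.0) [(10.0, 4.0)]
```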
    
B. Restricted Area Avoidance Aid (PDF p.48-56)
  • The intention here is to prevent Dynamic Entities from breaching restricted areas.
  • See details in PDF regarding the set-up of various levels of automation for this feature. Please use levels 1, 3, and 7 as examples to show in your submissions.
    
C-D. Indicator Circles: Collision and Restricted Areas (PDF p.58-59)
  • These are 2 colored circles around the Dynamic Entities that indicate predefined thresholds for a minimum separation from another entity or a restricted area
  • Collision Indicator Circle:
    • Caution Collision Indicator Circle: This circle will illuminate if the outermost circle is breached by another Dynamic Entity. Default color is yellow, but it will be configurable by the researcher.
    • Warning Collision Indicator Circle: This circle will illuminate if the innermost circle is breached by another Dynamic Entity. Default color is red, but it will be configurable by the researcher.
    • These rings can be enabled/disabled for ownships, Cooperative Entities, and/or Uncooperative Entities
    • False Alarm Rate allows the researcher to define the number of false alarms generated by the Collision Indicator Circle during a scenario or specified timespan.
    • Fault Association allows the researcher to pre-assign specific Dynamic Entities to NOT activate the Collision Indicator Circles of other Dynamic Entities.
  • Restricted Area Indicator Circle:
    • Caution Indicator Circle: This circle will illuminate if the outermost circle is breached by a Restricted Area boundary. Default color is yellow, but it will be configurable by the researcher.
    • Warning Indicator Circle: This circle will illuminate if the innermost circle is breached by a Restricted Area boundary. Default color is red, but it is configurable by the researcher.
    • These rings can be enabled/disabled for ownships, Cooperative Entities, and/or Uncooperative Entities
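The two-threshold logic can be illustrated as follows (radii are hypothetical defaults; the researcher configures the actual thresholds and colors):

```python
import math

def indicator_state(own_xy, other_xy, caution_radius=50.0, warning_radius=20.0):
    """Return which collision indicator circle, if any, is breached.

    The inner (warning) circle takes precedence over the outer (caution) one.
    Radii here are illustrative defaults, not values from the spec.
    """
    dist = math.dist(own_xy, other_xy)
    if dist <= warning_radius:
        return "warning"   # default red
    if dist <= caution_radius:
        return "caution"   # default yellow
    return "clear"

print(indicator_state((0, 0), (30, 0)))   # -> caution
print(indicator_state((0, 0), (10, 0)))   # -> warning
print(indicator_state((0, 0), (100, 0)))  # -> clear
```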

E. Current route path:
  • Dynamic Entity trails can be configured to be set on/off
  • They will have customizable options for: line patterns, thickness and color

F. Battery Indicator/Alert:
  • An optional battery status will appear in the ownship data window or in a separate one. It should indicate when the battery level has breached a predefined lower threshold and trigger an alert. (More details: PDF p.63-64)

2c. Dependent Variables:
These data points are outputs from the scenario. They will be used by the researcher to analyze the performance of the participant. (Details about the data PDF p. 72-73). The researcher may have options to adjust how or what is collected.
  • Route Perfection Delta
  • Restricted Area
  • Collision
  • Vehicle Battery Depletion
  • Automation Dependence Rate

2d. Additional Independent Variables (PDF p.65)
Additional features associated with the MAP Task:
1. Loading Icon
2. User Input Lag


JUDGEMENT CRITERIA
  1. Presentation: Thorough: provide a thorough wireframe solution. It should simulate the workflow described, including linked screens, variations and behaviors of elements.
  2. Creativity: Impactful: the solution is different or unique from what is already out there and can be implemented.
  3. Exploration: Out of the box: consider the screen requirements and guidelines as a draft or start point. Provide alternate experiences or workflows to achieve what we are proposing in the requirements and satisfy the user goals.
  4. Aesthetics: Low-fidelity design: provide plain simple aesthetics within a detailed wireframe solution.
  5. Branding: Open: Your solution MUST be pure wireframes without the use of branding or styling of UI elements. Focus is on content and functionality. Design must follow Apple’s Human Interface Guidelines.

Device Specifications
  • Desktop: macOS (MacBook)
  • Size: 1366 x 768 (width x height)

Stock Photos and Icons
  • Photos: Use only free photos from the sites allowed by Topcoder
  • Icons: Use only icons that are free (based on Topcoder icons policy) without giving attribution, or create the icons yourself. If the icons are not free, we will require them to be replaced in the final fixes.
  • Fonts: Use fonts as allowed by Topcoder policy

Marvel Prototype
  • Upload your screens to Marvel App.
  • Ask in the forums for a Marvel project
  • Include your Marvel app URL as a text file in your final submission. Label it “MarvelApp URL”.

Final Deliverables
  • An archive called Source.zip, to include:
    • All original source files created with a wireframe tool such as Axure, Adobe XD, Figma, or Sketch.
  • An archive called Submission.zip, to include:
    • All your wireframes in HTML format or PNG/JPG format
  • Your declaration file, to include any notes about font usage, photo and icon declarations, and a link to your Marvel project
  • A JPG preview file of 1024 x 1024 px

Please read the challenge specification carefully and watch the forums for any questions or feedback concerning this challenge. It is important that you monitor any updates provided by the client or Studio Admins in the forums. Please post any questions you might have for the client in the forums.

Stock Photography

Stock photography is not allowed in this challenge. All submitted elements must be designed solely by you. See this page for more details.

How To Submit

  • New to Studio? Learn how to compete here
  • Upload your submission in three parts (Learn more here). Your design should be finalized and should contain only a single design concept (do not include multiple designs in a single submission).
  • If your submission wins, your source files must be correct and “Final Fixes” (if applicable) must be completed before payment can be released.
  • You may submit as many times as you'd like during the submission phase, but only the number of files listed above in the Submission Limit that you rank the highest will be considered. You can change the order of your submissions at any time during the submission phase. If you make revisions to your design, please delete submissions you are replacing.

Winner Selection

Submissions are viewable to the client as they are entered into the challenge. Winners are selected by the client and are chosen solely at the client's discretion.

Challenge links

Screening Scorecard

Submission format

Your Design Files:

  1. Look for instructions in this challenge regarding what files to provide.
  2. Place your submission files into a "Submission.zip" file.
  3. Place all of your source files into a "Source.zip" file.
  4. Declare your fonts, stock photos, and icons in a "Declaration.txt" file.
  5. Create a JPG preview file.
  6. Place the 4 files you just created into a single zip file. This will be what you upload.

Trouble formatting your submission or want to learn more? Read the FAQ.

Fonts, Stock Photos, and Icons:

All fonts, stock photos, and icons within your design must be declared when you submit. DO NOT include any 3rd party files in your submission or source files. Read about the policy.

Screening:

All submissions are screened for eligibility before the challenge holder picks winners. Don't let your hard work go to waste. Learn more about how to pass screening.

Questions? Ask in the Challenge Discussion Forums.

Source files

  • RP file created with Axure
  • Sketch
  • Adobe XD
  • Figma

You must include all source files with your submission.

Submission limit

Unlimited

ID: 30191418