Background
The client for the Sharkzor project needed a user interface for sorting and summarizing large numbers of images. Analysts need to quickly get a sense of both what exists, and what they might be interested in, within datasets on the order of millions of images.

Once the analyst has a general idea of the dataset and their goal, the deep learning features 'learn' the analyst's organization method and automate it for fast and efficient task completion.
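One way to picture the "learn the analyst's organization" idea is propagating a handful of analyst-made groups to the rest of the dataset using image embeddings. The sketch below is only an illustration under that assumption; the function and variable names (propagate_groups, embed dimensions, analyst_groups) are hypothetical and do not reflect Sharkzor's actual models or API.

```python
# Minimal sketch: extend an analyst's manual grouping to the whole dataset
# with a nearest-centroid rule over deep-learning image embeddings.
# All names here are illustrative, not Sharkzor's implementation.
import numpy as np

def propagate_groups(embeddings, analyst_groups):
    """Assign every image to the closest analyst-defined group.

    embeddings     : (n_images, d) array of image feature vectors
    analyst_groups : dict mapping group name -> list of image indices the
                     analyst has already placed in that group
    """
    names = list(analyst_groups)
    # One centroid per group, computed from the analyst's example images.
    centroids = np.stack([embeddings[analyst_groups[name]].mean(axis=0)
                          for name in names])
    # Distance from every image to every centroid; take the nearest group.
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]
```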
The Sharkzor software has been demoed at venues such as NeurIPS 2017 and IUI 2018. The Sharkzor UX and the Interactive Machine Learning Heuristics developed through the project have been published at IUI 2018, at the IEEE VIS 2018 Workshop on Machine Learning from User Interaction for Visualization and Analytics, and at the Black in AI Workshop at NeurIPS 2018.
process / overview
The UX process for the Sharkzor project started with a review of the literature and of prior art from similar projects and applications. We observed users interacting with a Sharkzor beta, held brainstorming meetings with the team, and met with the clients regularly. Certain requirements were provided by the client, including dataset size requirements and surrogate use cases.
Sharkzor's development is ongoing, and as such the UX is still evolving with emerging user needs and client requirements.
NOTE: The client, users, and use cases are confidential and cannot be discussed here.
process / prior art
The UX process for the Sharkzor project started with a review of the research literature and of similar applications: existing research on image organization as well as applications for interactive deep learning. The initial Sharkzor research idea was an application called Active Canvas, developed internally at PNNL. We built upon that prototype, pulling in relevant features and interaction paradigms to support the intended use cases.
Attention to, and contribution to, current research in interactive machine learning and explainable AI remains an ongoing focus of the Sharkzor project.
process / heuristic eval
We were fortunate to work with Dr. Margaret Burnett, who developed the GenderMag heuristic evaluation technique for identifying gender bias in software tools. The entire Sharkzor team (this means developers and the project manager!) and members of PNNL's UX team participated in the evaluation. We identified gender-inclusiveness issues and proposed a solution for each. The GenderMag personas and foundational ideas are now an integral part of our design process.
process / version(s) 0-5
Sharkzor was initially designed (with browser limitations in mind) for use with only 500 images. Supporting the client's need for one million images meant heavy lifts for all of our teams: design, development, engineering, and deep learning.
Selected versions and intermediate sketches are shown here.
v1. everything is square
v2. welcome to the matrix
v3/4. streamlining interactions
v5. images forever
working solution
The current version of Sharkzor is in a standard agile design and development cycle. Quarterly stakeholder presentations and monthly progress readouts help ensure that the product meets client needs.
homescreen
metadata view
map view
model history exploration
feature highlight / visual sort
To add user-understandable context to large datasets during the triage phase, we allow the user to sort the dataset relative to one or two user-chosen example images. The target images are not intended to be images the user wants to group, but rather a way to superficially organize the large number of images in a way that makes sense to the user during the triage phase.
In the sketches and mockups below, we use target images of different colors. This particular example would sort the images between these two colors (in the mockups, light yellow to dark brown), adding a color-spectrum context to the large dataset. Target images could be anything the user chooses: content, shape, size, even background color.
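To make the visual sort concrete, here is a minimal sketch of one way such an ordering could work: ranking images along the axis defined by one or two target images in embedding space. This is an assumption-laden illustration, not Sharkzor's implementation; the names (visual_sort, embeddings, target indices) are hypothetical.

```python
# Sketch of a "visual sort": order every image along the axis between one or
# two user-chosen target images in embedding space.
# Names and details are illustrative, not the project's actual code.
import numpy as np

def visual_sort(embeddings, target_a, target_b=None):
    """Return image indices ordered from most like target_a toward target_b.

    embeddings : (n_images, d) array of image feature vectors
    target_a   : index of the first target image (e.g. the light-yellow one)
    target_b   : optional index of the second target image (e.g. dark brown);
                 if omitted, images are simply ranked by similarity to target_a
    """
    a = embeddings[target_a]
    if target_b is None:
        # Single target: rank by distance to it.
        scores = np.linalg.norm(embeddings - a, axis=1)
    else:
        # Two targets: project each image onto the a -> b axis, so images
        # most like target_a come first and images most like target_b last.
        axis = embeddings[target_b] - a
        scores = (embeddings - a) @ axis / (np.dot(axis, axis) + 1e-12)
    return np.argsort(scores)
```

With color-dominant embeddings, this ordering would approximate the light-yellow-to-dark-brown spectrum described above; with other targets it would order by whatever visual property the embedding captures.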
future directions
Future directions include user studies (with surrogate users and tasks), usability assessments, and experiments using the interface to test the Interactive Machine Learning Heuristics. The IML heuristics were developed on this project and presented at the IEEE VIS 2018 Workshop on Machine Learning from User Interaction for Visualization and Analytics and at the Black in AI Workshop at NeurIPS 2018.
This work was funded by the US Government.