I was on the team for almost three years as the sole/lead UX/UI designer. I worked with the product manager and engineering partners to create a human-in-the-loop design system for machine learning in the robotics field, automating repetitive tasks and increasing work efficiency.
Lead designer for a "human in the loop" product that lets users perform QA inspections and create customized inspection routines.
Research, prototyping, user testing, wireframing, information architecture, UI design
Shipped the web app with 2 main functionalities:
1. QA inspection
2. Create customizable routines with robot control.
Increased customers' work efficiency.
Reduced labor cost.
What Is It
Live Inspection works in quality control manufacturing environments.
It allows QA inspectors to semi-automate their work by controlling robots with high-res cameras that scan items following preset routines, using pre-trained machine learning models to give Pass/Fail judgments. Inspectors supervise the robot's judgments and provide feedback based on their experience.
Inspectors are the main type of user who interacts with the live inspection UI.
Managers will be trained to create customized inspection routines with the robot.
The Inspector experience is geared around using live inspection UI to work with the robot to reach their inspection goals, giving live feedback to the robot's judgments.
The manager experience is geared around using routine creation UI to create customized inspection routines with the robot.
Managers also supervise multiple inspection stations from a distance to make sure inspections run smoothly.
The inspector's standard workflow in the industry before our product:
Open a box of parts to be inspected and unwrap each part from its plastic bag.
Use an inspection magnifier to find all defects on each part.
Mark defects with a red sticker using a pair of tweezers.
Pack the defective part back into its plastic bag and mark the defect names on the bag.
Place all defective parts into a separate box and mark the box.
Place all good parts back in the original box, and pass it down.
1. Live Inspection
The first big feature we launched lets our customers use the solution to inspect their products. Internally, the ML engineering side takes on a good amount of work to set up routines, collect data, and train the models for each customer.
Lift users from repetitive work and tiring postures.
Inspectors unwrap the product and place it on the robot's plate, then use the UI mounted above the robot to run the inspection. While the robot inspects the product, inspectors are freed from repetitive work and tiring postures. They just need to make sure the inspection runs smoothly and items are placed properly in the fixture.
Create trust between humans and robots.
Users wanted to understand what the robot was doing during the inspection; it was easy to lose track of the robot's progress.
By offering a routine overview panel, we give users a live update on the robot's progress through to the end of each inspection. Bounding boxes with Pass/Fail results appear as each position is scanned.
Most importantly, this creates a sense of trust between humans and machines.
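The overview panel's live progress can be sketched as a small state model; the names here (`Position`, `recordScan`) are illustrative, not the shipped code:

```typescript
// Minimal sketch of the routine-overview state, assuming hypothetical
// names -- not the actual app code.
type Verdict = "pending" | "pass" | "fail" | "unknown";

interface Position {
  id: number;
  verdict: Verdict;
}

// Record the robot's judgment for one scanned position and
// return the fraction of positions completed so far.
function recordScan(positions: Position[], id: number, verdict: Verdict): number {
  const pos = positions.find((p) => p.id === id);
  if (pos) pos.verdict = verdict;
  const done = positions.filter((p) => p.verdict !== "pending").length;
  return done / positions.length;
}
```

The UI can re-render bounding boxes and the progress bar from this state each time a scan result arrives.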
One pain point we discovered during customer onsite interviews was that the new system wasn't efficient enough to match their current workload. We designed the UI to allow users to review images while the robot is still inspecting. Inspectors simply tap the "Review" page to start reviewing inspection results, and can toggle back to the "Inspection" page anytime to keep track of the inspection.
With this design update, we measured a 40% reduction in time spent on each inspection compared to the previous prototype.
Two elements need navigation on this review page: areas of interest (AOIs) and positions.
One position might contain multiple areas of interest that need to be reviewed.
Since navigating to the next defect is a repetitive action in this step, we placed the AOI navigation in the bottom-right corner of the screen. It also automatically advances to the next position once the previous position has been fully reviewed. This allows faster hand movement for each inspection and is more comfortable ergonomically for inspectors.
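The auto-advance behavior can be sketched roughly like this; `nextReviewTarget` and its shape are assumptions for illustration:

```typescript
// Sketch of auto-advancing AOI navigation: stepping past the last AOI
// of a position moves to the first AOI of the next position.
interface ReviewCursor {
  position: number; // index into the routine's positions
  aoi: number;      // index into that position's AOIs
}

// aoiCounts[i] = number of AOIs at position i.
function nextReviewTarget(cursor: ReviewCursor, aoiCounts: number[]): ReviewCursor | null {
  if (cursor.aoi + 1 < aoiCounts[cursor.position]) {
    return { position: cursor.position, aoi: cursor.aoi + 1 };
  }
  if (cursor.position + 1 < aoiCounts.length) {
    return { position: cursor.position + 1, aoi: 0 };
  }
  return null; // review finished
}
```

A single "next" button wired to this logic is what keeps the inspector's hand in one spot.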
Close the supervised learning loop for machine learning model training.
With machine learning embedded in the system, the robot makes the initial Pass/Fail judgment for each position. To increase ML accuracy for better future performance, we designed a step for the user's supervising input: inspectors can correct the robot's judgment based on their inspection experience.
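One way to capture that supervising input is as a labeled record pairing the model's judgment with the inspector's; the field names here are hypothetical, not our actual schema:

```typescript
// Sketch of capturing an inspector correction as a labeled training
// example; field names are assumptions, not the shipped data model.
type Label = "pass" | "fail";

interface FeedbackRecord {
  imageId: string;
  modelJudgment: Label;
  inspectorJudgment: Label;
  corrected: boolean; // true when the human overruled the model
}

function recordFeedback(imageId: string, model: Label, human: Label): FeedbackRecord {
  return {
    imageId,
    modelJudgment: model,
    inspectorJudgment: human,
    corrected: model !== human,
  };
}
```

Records where `corrected` is true are exactly the hard examples that make the next training round worthwhile.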
This UX step also increased the business value of the entire product. The human-in-the-loop selling point is more appealing to our customers because we do not eliminate human labor; we increase efficiency and provide a better working experience, which aligns with our customers' goals.
Customers have different preferences on a balance of accuracy and efficiency.
Some of them would only want to review failed images, some will review all images.
We provided a filter to choose between "All" and "Need Review" (failed results) on the Areas of Interest and Positions panels.
When "All" is selected, we place "Failed" and "Unknown" positions at the beginning of the queue to match the user's priorities.
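The filter and priority ordering could look roughly like this sketch (names are illustrative):

```typescript
// Sketch of the "All" vs "Need Review" filter plus priority ordering:
// failed and unknown results surface at the front of the review queue.
type Result = "pass" | "fail" | "unknown";

interface Item {
  id: number;
  result: Result;
}

function reviewQueue(items: Item[], mode: "all" | "needReview"): Item[] {
  const filtered = mode === "needReview"
    ? items.filter((i) => i.result === "fail")
    : items.slice();
  // Rank failed first, unknown second, passed last.
  const rank = (r: Result) => (r === "fail" ? 0 : r === "unknown" ? 1 : 2);
  return filtered.sort((a, b) => rank(a.result) - rank(b.result));
}
```

Because the sort is stable, items with the same result keep their original scan order.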
2. Routine Creation
As customer needs and the company scaled, we started to offer users the ability to create their own inspection routines with our interface.
Easily toggle between light and dark modes in the routine creation UI.
Users are afraid of breaking the machine; making robot control more user-friendly is the key.
This hardware model has six degrees of freedom, and describing robot movement in mechanical-design terms is a fairly complex concept.
We worked closely with the hardware team to translate that complexity into a camera coordinate system, where all robot movement is relative to the camera's perspective on the end effector.
The user only needs to care about which way they want the camera to look.
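Conceptually, a direction the user picks in the camera frame is rotated by the end effector's orientation to get a motion in the robot's base frame. This is a minimal math sketch with assumed types, not our control code:

```typescript
// Sketch: a camera-frame direction v becomes R * v in the robot base
// frame, where R is the end effector's (camera's) orientation matrix.
type Vec3 = [number, number, number];
type Mat3 = [Vec3, Vec3, Vec3]; // row-major 3x3 rotation matrix

function cameraToBase(R: Mat3, v: Vec3): Vec3 {
  return [
    R[0][0] * v[0] + R[0][1] * v[1] + R[0][2] * v[2],
    R[1][0] * v[0] + R[1][1] * v[1] + R[1][2] * v[2],
    R[2][0] * v[0] + R[2][1] * v[1] + R[2][2] * v[2],
  ];
}
```

The UI only ever asks for `v` ("move the view this way"); the rotation bookkeeping stays hidden from the user.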
We also added an info tooltip explaining the pitch and yaw concepts.
This is not perfect UX; I also pitched the idea of dragging the live camera feed to drive some robot movements. But considering the size of the team and the project timeline, we went with the faster solution for the MVP version of the feature.
Routine creation UI
Live Inspection UI
By using similar panels and the same placement of the main action button across the app's two main features, we reduced the learning curve for inspectors. It also helps them get over their fear of breaking the robot faster.
This design decision also helped our development team build the routine creation page faster.
It was the beginning of our own design system and guidelines.
And we launched! Check out Elementary Robotics for more info about the product.