API, Web, E-commerce
Simflection Virtual Try-on

YEAR
ROLE
Company
**Due to NDA constraints, certain details in this case study have been generalized or omitted.**
Every identified security risk is assigned a Service Level Agreement (SLA) — a formal contract that defines the timeframe for remediation based on the severity of the issue.
Despite these agreements, many lower-severity risks lingered unresolved.
This raised a critical question: if clients want security breaches resolved quickly, why were non-urgent risks stagnating?
I needed to rethink how SLA status was presented, moving beyond risk criticality to account for business context and prevent low-severity risks from slipping through the cracks.
I needed to design a system that empowered teams to act before issues escalated, not just when the SLA status changed.
I led this redesign over a two-week sprint, collaborating with IT specialists, customer service reps, developers, and my supervisor.
I was responsible for the end-to-end design process, from research through to wireframes and interaction flows.
To understand the disconnect, I conducted interviews with internal stakeholders across support, IT, and development.
A recurring theme emerged: SLA status was often interpreted as a countdown clock, not a call to action.
Teams admitted that unless a risk was marked as "nearing SLA," it rarely got prioritized, especially for medium and low-risk findings.
“We know it's within SLA, so we don't worry... until it's almost too late.” — IT team member
Research made it clear that high-criticality issues were being remediated on time, but low- and medium-criticality risks accumulated while still in the "green" state (i.e., within SLA).
This surface-level signal encouraged procrastination: teams assumed these risks were under control even when no progress was being made, allowing low-criticality risks to pile up.
This led me to rethink how SLA status is tracked and also how we visualize risks.
SLA shouldn’t just signal “on time” vs “late,” but should reflect the actual activity and movement behind a risk.
How might we help teams recognize and act on stagnant risks before they breach SLA without disrupting existing priorities?
SLA status should not be a passive label but an active signal for prioritization and accountability. I reimagined it with progress-velocity indicators and "no movement" warnings that break the illusion of momentum, showing how actively a risk is being worked on, not just how much time is left.
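The "no movement" warning can be sketched as a simple staleness check. This is a minimal illustration, not the platform's actual logic: the field names and thresholds below are hypothetical, since the real data model is generalized under the NDA.

```python
from datetime import datetime, timedelta

# Hypothetical staleness thresholds per severity; real values would come
# from the platform's SLA policy.
STALE_AFTER = {
    "low": timedelta(days=14),
    "medium": timedelta(days=7),
    "high": timedelta(days=2),
}

def movement_signal(severity: str, last_activity: datetime, now: datetime) -> str:
    """Return 'no_movement' when a risk has had no remediation activity
    for longer than its severity's staleness threshold, else 'active'."""
    if now - last_activity > STALE_AFTER[severity]:
        return "no_movement"
    return "active"
```

The key design point: the signal is driven by remediation activity, not by time remaining, so a risk can be flagged while still comfortably "within SLA."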
I created a concept map linking the platform’s tagging logic with the attributes assigned to each risk.
This helped clarify which attributes influence SLA classification and laid the foundation for designing an algorithm to assign SLA statuses more intelligently.
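To make the idea of attribute-driven SLA assignment concrete, here is a hedged sketch. The attributes (`customer_facing`, `affects_access`) and timeframes are hypothetical stand-ins, since the actual classification attributes are generalized under the NDA.

```python
# Hypothetical baseline remediation windows (in days) per severity.
BASE_SLA_DAYS = {"critical": 7, "high": 14, "medium": 30, "low": 90}

def sla_days(severity: str, customer_facing: bool, affects_access: bool) -> int:
    """Shorten the remediation window when business-context attributes
    raise a risk's urgency beyond its raw severity."""
    days = BASE_SLA_DAYS[severity]
    if customer_facing:
        days = max(1, days // 2)  # customer-facing bugs escalate faster
    if affects_access:
        days = max(1, days // 2)  # employee access issues compound over time
    return days
```

This mirrors the concept map's intent: SLA classification flows from risk attributes plus business context, not severity alone.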
However, what stood out was the absence of visibility into remediation activity. A risk could be “within SLA” for weeks yet completely untouched.
That’s when I began exploring the idea of latency urgency: risks that aren't immediately severe, but if ignored, could have compounding business impact like access issues for employees or customer-facing bugs.
This led me to consider not just when a risk needs to be fixed, but whether it’s moving toward resolution at all.
The original table design relied heavily on colors and visual indicators (icons). This created alert desensitization, where the signal got lost in the noise.
Initially, I experimented with a progress bar representing how far along the remediation was. While visually expressive, it didn’t communicate a clear user action. Feedback confirmed that it showed motion, but not meaning.
I also considered displaying the number of days since a risk was last updated. But interviews showed that users didn’t find this meaningful. What they really wanted was binary: "Is this on track or does it need my attention?"
The turning point was realizing that users only needed two key signals: whether a risk is on track, and whether it needs attention right now.
This led to a simpler model: reserve color and labels only for attention-worthy states.
In follow-up sessions, users could more reliably identify breached or action-needed risks without instruction.
After several design reviews with my supervisor and the technical team, the final designs were handed off to developers. I conducted a design walkthrough before leaving to ensure alignment across product and engineering.
While I wasn’t able to see the implementation go live due to the timing of my coop term, I ensured a smooth transition by documenting edge cases, responsive behaviors, and interaction details in Figma.
Although I left before the design was fully implemented, early feedback during usability testing pointed to strong improvements in clarity and prioritization.
Participants were 2.3x more likely to correctly identify which risks required attention in under 10 seconds.
By reducing visual noise, introducing latency-aware states, and aligning color with actionability, the design helped teams respond more proactively before risks became critical.