Background

This project was developed as part of my graduate work in the MDes Experience Design program at San José State University. It explores how AI tools can be designed to support us without encouraging over‑reliance. The project evolved through multiple stages of research, ideation, prototyping, and validation, informed by real conversations with students, professionals, and educators who regularly use AI tools in their workflows.

  1. Project Setup and Exploration
  2. Discovery and Early Insights
  3. Design and Testing
  4. Final Synthesis and Refinement

Detailed Report

My Role

Experience Designer & Researcher

I led the project end‑to‑end, including:

  • Research planning and execution
  • Synthesis and insight generation
  • Concept development and interaction design
  • Prototyping and user testing
  • Final storytelling and presentation

Timeline

3 months

Project Overview

AI tools are increasingly embedded in academic, professional, and creative workflows. While users value their speed and convenience, there is growing discomfort around dependency, loss of ownership, and reduced cognitive engagement.

This project investigates how AI can act as a thinking partner—guiding, nudging, and supporting users while preserving agency, confidence, and independent thought.

Problem Space

Early conversations revealed a tension: users rely on AI daily, yet feel uneasy about how much thinking they are outsourcing. Many expressed guilt, mistrust, or fear of becoming overly dependent. To understand this relationship, I explored:

  • How people currently use AI in real tasks
  • Emotional responses to different AI behaviors
  • When AI feels helpful versus intrusive
  • How context (time pressure, learning vs. output) changes expectations
Secondary research echoed this tension. One study comparing levels of external support reported:

“Across groups, brain connectivity systematically scaled down with the amount of external support: Brain-only participants exhibited the strongest, most distributed networks… and LLM users displayed the weakest connectivity.”

Research Approach

Methods

  • Think-aloud task observations
  • Semi-structured interviews
  • Emotion wheel check-ins
  • Card sorting of AI response styles
  • Scenario-based questioning

Participants

  • 4 primary users + 1 stakeholder (Phase 1)
  • 5 primary users + 1 SME + 1 stakeholder (Testing phase)
  • Ages 18–55, moderate to high AI usage

Evaluating AI Response Styles

Participants were shown three types of AI responses:

  • Socratic (questions only)
  • Guided (scaffolded suggestions)
  • Direct (complete answers)

Findings

  • Socratic responses felt frustrating and irrelevant unless explicitly opted into
  • Guided responses were consistently rated most helpful
  • Direct answers were useful short-term but often triggered guilt or passivity

This became a major pivot point in the project.

Socratic AI responses often felt irrelevant, frustrating, or unhelpful. Guided responses were seen as supportive and informative.

Key Research Insights

1. Users want AI help—especially to save time

AI is widely used for research, exam prep, debugging, brainstorming, and writing. Users see it as a productivity booster and starting point.

2. But not at the cost of ownership

Participants repeatedly emphasised that they want ideas to feel like their own. Fully generated answers triggered guilt, mistrust, or disengagement.

3. Context changes expectations

Under time pressure, users accept direct help. In creative or learning contexts, they prefer guidance and control. One-size-fits-all AI responses fail.

4. The struggle is part of learning

Without AI, users still turn to friends, communities, forums, and libraries. Many acknowledged that effort leads to better understanding and retention.

Problem Statement

Users want AI to support their learning and creativity, but misaligned or overly directive help can reduce engagement, confidence, and independent thinking.

Design Criteria

Design

To keep users at the center of the solution, I conducted multiple rounds of testing with participants across different ages, AI familiarity levels, and use cases, including academic, professional, and creative tasks.

Design Concept: Nudge

Nudge is a context-aware AI assistant designed to support thinking without replacing it. It lives alongside the user’s work and adapts its behavior based on the user’s intent.

Learnings

This project reshaped how I think about AI design.

I learned that:

  • More intelligence is not always better

  • Ambiguity without consent creates frustration—guidance that withholds answers only works when users opt into it

  • Good AI design is about knowing when to step back

Designing Nudge reinforced the importance of clarity, emotional safety, and human agency—especially as AI becomes embedded in everyday thinking.
