Designing AI-Assisted Feedback for Prompt Learning: A Framework for Human-AI Collaboration in Classification Tasks

Researcher(s)

  • Duy Duc Tran, Computer Science, University of Delaware
  • Khang Nguyen, Computer Science, University of Delaware

Faculty Mentor(s)

  • Matthew Mauriello, Computer Science, University of Delaware

Abstract

This project proposes a framework for designing and evaluating AI-assisted feedback to support prompt learning in classification tasks. The study consists of two components: (1) an engineering effort to build a modular, real-time feedback system using large language models (LLMs), integrated with survey tools like Qualtrics to deliver immediate feedback after each classification attempt; and (2) a human study investigating how users respond to different types of feedback during prompt writing.
The human study uses a 2×3 experimental design crossing classification task (binary vs. multi-class) with feedback condition (no feedback, fixed human-written feedback, or LLM-assisted adaptive feedback). Participants iteratively revise their prompts, and the study measures perceived helpfulness, trust in the feedback source, and improvement in prompt quality. By analyzing how users interact with and rely on LLM-generated guidance, this research aims to inform the development of more effective and trustworthy human-AI feedback systems for data annotation and model-training workflows.
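The adaptive-feedback condition described above could be sketched as a small request-building step: after each classification attempt, the participant's current prompt draft is packaged into an LLM chat request whose reply is surfaced back through the survey tool. This is a minimal illustration only; the function, field, and model names below are hypothetical and not the authors' implementation.

```python
def build_feedback_request(participant_prompt: str, task: str) -> dict:
    """Assemble an LLM chat request asking for feedback on a prompt draft.

    Hypothetical sketch: assumes a standard chat-completion payload shape
    (model name and message roles are placeholders).
    """
    system = (
        "You are a feedback assistant for prompt writing. "
        f"The participant is drafting a prompt for a {task} classification task. "
        "Point out ambiguities in the draft and suggest one concrete revision."
    )
    return {
        "model": "example-llm",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": participant_prompt},
        ],
    }

# Example: a participant's draft prompt for the binary task
request = build_feedback_request(
    "Label each movie review as positive or negative.", "binary"
)
```

In a deployed version, the returned payload would be sent to an LLM API and the reply embedded into the Qualtrics page so feedback appears immediately after each attempt.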