AUTHOR=Denham Danette T., Wang Colin Y., Maric Emil, Hinton Lucy R., Heniford B. Todd
TITLE=Large Language Models in Surgery: Promise, Pitfalls, and Practical Use
JOURNAL=Journal of Abdominal Wall Surgery
VOLUME=Volume 5 - 2026
YEAR=2026
URL=https://www.frontierspartnerships.org/journals/journal-of-abdominal-wall-surgery/articles/10.3389/jaws.2026.16349
DOI=10.3389/jaws.2026.16349
ISSN=2813-2092
ABSTRACT=
Background: Large Language Models (LLMs) represent a transformative advancement in artificial intelligence (AI) with rapidly expanding applications in medicine. While AI-related medical publications increased 36-fold between 2000 and 2022, practical guidance for surgeons remains limited. This mini-review delineates pragmatic applications of LLMs in surgical practice while addressing key limitations, implementation considerations, and ethical concerns.

Methods: We reviewed contemporary LLM platforms and their integration into clinical workflows, patient communication, surgical research, and academic writing, evaluating the benefits, constraints, and risk-mitigation strategies relevant to practicing surgeons.

Findings: LLMs demonstrate significant utility across multiple domains. In clinical workflows, ambient documentation and chart summarization may reduce documentation burden and support rapid synthesis of complex patient data. For patient communication, these tools can simplify complex medical information, tailor or translate patient instructions to appropriate reading levels or languages, and generate empathetic responses to patient messages with improved efficiency. In research, LLMs assist with literature summarization, study-design optimization, and risk-of-bias assessment in randomized controlled trials (RCTs), allowing surgeons to focus on higher-level scientific reasoning. Despite these promising applications, several constraints demand attention. Effective prompting requires specific techniques, including clear clinical objectives, explicit instructions, and iterative refinement. LLM outputs require verification to prevent “hallucinations” (fabricated or inaccurate information). Protected health information (PHI) must never be entered into public LLM platforms, in order to maintain HIPAA compliance. Liability frameworks for AI-generated errors remain ambiguous, with responsibility unclear among providers, institutions, and developers.

Conclusion: LLMs offer surgeons valuable tools for enhancing workflow efficiency and patient communication when deployed with appropriate oversight. Success requires understanding prompt-engineering principles, maintaining rigorous fact-checking protocols, protecting patient privacy, and recognizing that human judgment remains irreplaceable in clinical decision-making.