<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="review-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">J. Abdom. Wall Surg.</journal-id>
<journal-title>Journal of Abdominal Wall Surgery</journal-title>
<abbrev-journal-title abbrev-type="pubmed">J. Abdom. Wall Surg.</abbrev-journal-title>
<issn pub-type="epub">2813-2092</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">13059</article-id>
<article-id pub-id-type="doi">10.3389/jaws.2024.13059</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Health Archive</subject>
<subj-group>
<subject>Mini Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial Intelligence&#x2014;What to Expect From Machine Learning and Deep Learning in Hernia Surgery</article-title>
<alt-title alt-title-type="left-running-head">Vogel and M&#xfc;ck</alt-title>
<alt-title alt-title-type="right-running-head">Artificial Intelligence in Hernia Surgery</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Vogel</surname>
<given-names>Robert</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2701463/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>M&#xfc;ck</surname>
<given-names>Bj&#xf6;rn</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
</contrib>
</contrib-group>
<aff>
<institution>Klinikum Kempten - Klinikverbund Allg&#xe4;u</institution>, <addr-line>Kempten</addr-line>, <country>Germany</country>
</aff>
<author-notes>
<corresp id="c001">&#x2a;Correspondence: Robert Vogel, <email>robert.vogel@klinikverbund-allgaeu.de</email>; Bj&#xf6;rn M&#xfc;ck, <email>bjoern.mueck@klinikverbund-allgaeu.de</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>09</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>3</volume>
<elocation-id>13059</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>07</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2024 Vogel and M&#xfc;ck.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Vogel and M&#xfc;ck</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>This mini-review explores the integration of Artificial Intelligence (AI) within hernia surgery, highlighting the role of Machine Learning (ML) and Deep Learning (DL). The term AI encompasses various technologies, including ML, Neural Networks (NN), and DL. Classical ML algorithms depend on structured, labeled data for predictions, requiring significant human oversight. In contrast, DL, a subset of ML, generally leverages unlabeled, raw data such as images and videos to autonomously identify patterns and make intricate deductions. This process is enabled by the neural networks used in DL, where hidden layers between the input and output capture complex data patterns. These layers&#x2019; configuration and weighting are pivotal in developing effective models for applications such as image and speech recognition, natural language processing, and, more specifically, surgical procedures and outcomes in hernia surgery. Significant advancements have been achieved with DL models in surgical settings, particularly in predicting the complexity of abdominal wall reconstruction (AWR) and other postoperative outcomes, which are discussed in detail in this mini-review. The review method involved analyzing relevant literature from databases such as PubMed and Google Scholar, focusing on studies related to preoperative planning, intraoperative techniques, and postoperative management within hernia surgery. Only recent, peer-reviewed publications in English that directly relate to the topic were included. The aim is to highlight the latest advancements in the field, depict the potential benefits and current limitations of AI technologies in hernia surgery, and advocate for further research and application in this evolving field.</p>
</abstract>
<kwd-group>
<kwd>AI</kwd>
<kwd>machine learning</kwd>
<kwd>deep learning</kwd>
<kwd>hernia surgery</kwd>
<kwd>neural network</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>&#x201c;A computer would deserve to be called intelligent if it could deceive a human into believing that it was human&#x201d; [<xref ref-type="bibr" rid="B1">1</xref>], a phrase once coined by Alan Mathison Turing, who is widely regarded as a pioneer of computer science and artificial intelligence. Turing&#x2019;s substantial early contributions to AI in the mid-20th century, notably his role in deciphering the Enigma code during World War II, laid the foundation for the field [<xref ref-type="bibr" rid="B2">2</xref>].</p>
<p>Applying Turing&#x2019;s quote to contemporary times, one could identify any device capable of producing realistic videos, photos, and conversations as intelligent, since artificial intelligence (AI) has reached a level of sophistication at which it can convincingly mimic human behavior. As a result, the term AI is now used ubiquitously, as it spans a vast area of modern computer science. Consequently, related terms are often incorrectly used interchangeably [<xref ref-type="bibr" rid="B3">3</xref>]. For simplicity, AI can be considered an umbrella term that includes concepts such as machine learning (ML), neural networks (NN), and deep learning (DL) (<xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Schematic depicting the relationship between artificial intelligence (AI), machine learning (ML), deep learning (DL), and neural networks (NN).</p>
</caption>
<graphic xlink:href="jaws-03-13059-g001.tif"/>
</fig>
<p>Many of these technologies have naturally also found applications in medicine, including hernia surgery. Machine learning algorithms (MLAs), for example, can, in simple terms, learn from existing data, generalize from it, and extrapolate to future, as-yet-unknown data, allowing them to perform tasks without explicit instructions [<xref ref-type="bibr" rid="B4">4</xref>]. For instance, Hassan et al. [<xref ref-type="bibr" rid="B5">5</xref>] trained an MLA on preoperative clinical data to accurately predict complications in abdominal wall reconstruction (AWR) surgery.</p>
<p>Classical machine learning algorithms typically rely on structured, labeled data (i.e., data organized according to a set hierarchy) to make predictions, making them generally more dependent on human intervention for learning. In contrast, deep learning (DL), a subset of machine learning, does not necessarily require labeled datasets. DL algorithms can be regarded as both a refined and a mathematically more complex evolution of machine learning algorithms. To put it illustratively, just as a human brain uses its neural pathways to process information and reach decisions, deep learning employs artificial neural networks that simulate this process. These networks are capable of learning from data incrementally, which enables them to make complex deductions as more information becomes available. They can utilize raw data, such as text, images, and videos, to identify features and distinguish explicit patterns, thus discovering data groupings without human interference [<xref ref-type="bibr" rid="B6">6</xref>&#x2013;<xref ref-type="bibr" rid="B8">8</xref>].</p>
<p>The distinction between ML and DL is defined by the number of layers in a neural network, also known as hidden layers (<xref ref-type="fig" rid="F2">Figure 2</xref>) [<xref ref-type="bibr" rid="B9">9</xref>]. If more than three of these layers are present, the algorithm is considered a DL model. In summary, hidden layers act as intermediary stages between the input and output of a neural network. They play a crucial role in capturing complex patterns in data, which makes neural networks highly effective for diverse applications such as image and speech recognition and natural language processing. The configuration and weighting of these hidden layers are essential for creating effective neural network models. In particular, DLMs (deep learning models) involved in computer-aided diagnosis have been successfully applied in cranial [<xref ref-type="bibr" rid="B10">10</xref>, <xref ref-type="bibr" rid="B11">11</xref>], trauma [<xref ref-type="bibr" rid="B12">12</xref>, <xref ref-type="bibr" rid="B13">13</xref>], and oncologic [<xref ref-type="bibr" rid="B14">14</xref>&#x2013;<xref ref-type="bibr" rid="B16">16</xref>] computed tomography (CT) analysis. In this context, Elhage et al. [<xref ref-type="bibr" rid="B17">17</xref>] have demonstrated that an 8-layered convolutional neural network (CNN), a type of DL architecture, can effectively predict surgical complexity in AWR procedures, competing with a panel of expert surgeons. Furthermore, DLMs have proven effective in predicting surgical outcomes and postoperative complications [<xref ref-type="bibr" rid="B18">18</xref>&#x2013;<xref ref-type="bibr" rid="B20">20</xref>].</p>
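<p>To make the layer-counting definition concrete, the forward pass of such a network can be sketched in a few lines of code. This is a purely illustrative toy model with made-up layer sizes and random weights, not any of the clinical models discussed in this review:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate an input vector through successive hidden layers."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ w + b)               # hidden layers capture intermediate patterns
    return a @ weights[-1] + biases[-1]   # output layer (e.g., a complexity score)

# Toy model: 5 input features -> 4 hidden layers of width 8 -> 1 output.
# With more than three hidden layers, it would count as "deep" under the
# definition cited in the text.
sizes = [5, 8, 8, 8, 8, 1]
weights = [rng.normal(size=(m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

score = forward(rng.normal(size=5), weights, biases)
print(score.shape)  # (1,)
```

<p>A real CNN for CT analysis would add convolutional layers and learn its weights from labeled imaging data rather than drawing them at random; only the layered structure carries over.</p>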
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Schematic visualization of a deep learning model with input layer, n &#x2b; 1 hidden layers, and output layer.</p>
</caption>
<graphic xlink:href="jaws-03-13059-g002.tif"/>
</fig>
<p>The aim of this mini-review is to explore the advancements and applications of AI, particularly ML and DL, as well as other notable developments in computing science, in the field of hernia surgery, highlighting their potential benefits and current limitations.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methodology</title>
<p>In conducting this mini-review, we utilized literature from several databases, including PubMed and Google Scholar. The key terms used in our search were: &#x201c;hernia,&#x201d; &#x201c;artificial intelligence,&#x201d; &#x201c;deep learning,&#x201d; &#x201c;machine learning,&#x201d; &#x201c;abdominal wall reconstruction,&#x201d; &#x201c;hernia surgery,&#x201d; and &#x201c;augmented reality.&#x201d;</p>
<p>Inclusion criteria for this review involved publications directly relevant to the study topic. Specifically, we included studies that focused on preoperative planning, intraoperative imaging and techniques, as well as postoperative management and follow-up in the context of abdominal wall reconstruction. We considered articles published in peer-reviewed journals, studies published in English, and publications from the last 10&#xa0;years to ensure the inclusion of the most recent advancements.</p>
<p>Publications were excluded if they were not related to hernia surgery, studies not involving AI, ML, DL, or augmented reality in a surgical context, and non-peer-reviewed articles such as opinion pieces, editorials, or non-scientific reports. We also excluded articles focusing on animals or <italic>in vitro</italic> models rather than human subjects, and duplicate studies or publications with overlapping data sets.</p>
</sec>
<sec sec-type="discussion" id="s3">
<title>Discussion</title>
<p>The potential impact of computing sciences on medicine was already being anticipated in the 1970s. W. B. Schwartz [<xref ref-type="bibr" rid="B21">21</xref>] predicted that &#x201c;computing science would likely have significant effects by enhancing and possibly even replacing certain intellectual functions traditionally performed by physicians.&#x201d; The author noted that the integration of computers into medical practice could profoundly influence physician manpower and the quality of healthcare. Schwartz&#x2019;s foresight highlighted the transformative potential of computing science in medicine. Fifty-two years later, in 2022, Saeidi et al. [<xref ref-type="bibr" rid="B22">22</xref>] developed a protocol that enables a robot to autonomously perform a small bowel anastomosis with minimal human interference.</p>
<p>The rapid advancements in AI hence raise the question of not whether AI will shape our surgical future, but rather how it will do so.</p>
<p>In a quality improvement study conducted by Elhage et al. [<xref ref-type="bibr" rid="B17">17</xref>], the researchers aimed to assess the potential of image-based deep learning in predicting the complexity of AWR surgeries, specifically the need for component separation, as well as predicting pulmonary and wound complications. To achieve this, they developed an 8-layer CNN capable of analysing image characteristics.</p>
<p>The study design involved evaluating the CNN-based surgical complexity model against a validation set of CT images. The validation set was also assessed by a panel of six expert AWR surgeons, who were blinded to the surgical complexity DLM. The dataset utilised for analysis comprised 369 patients and 9,303 CT images.</p>
<p>In summary, Elhage et al. found that the CNN-based DLM was more accurate than expert surgeon judgment in predicting the surgical complexity of AWR procedures [81.3% compared to the surgeons&#x2019; accuracy of 65.0% (p &#x3c; 0.001)]. Furthermore, they observed that an additional DLM accurately predicted the occurrence of surgical site infections [AUC 0.898 (p &#x3c; 0.001)]. These findings highlight the potential of image-based DLMs as valuable tools in forecasting surgical outcomes and improving decision-making in AWR surgery.</p>
<p>In another article, published by Hassan et al. [<xref ref-type="bibr" rid="B5">5</xref>], the authors demonstrated the effectiveness of machine learning models (MLMs) in predicting hernia recurrence (HR), surgical site occurrences (SSOs), and 30-day readmission. Their study reported that MLMs achieved mean accuracy rates of 85% (95% CI 80%&#x2013;90%) for HR prediction, 72% (95% CI 64%&#x2013;80%) for SSOs, and 84% (95% CI 77%&#x2013;90%) for 30-day readmission. These ML algorithms, trained on readily available preoperative clinical data, proved to be highly accurate in forecasting complications associated with AWR surgery. The authors concluded that these findings support the integration of MLMs into the preoperative evaluation process for patients undergoing AWR.</p>
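<p>The classical, supervised approach described above, in which a model learns from structured, labeled preoperative data, can be sketched as a simple logistic regression fitted by gradient descent. The features and labels below are synthetic stand-ins for illustration only, not the data or model of Hassan et al.:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical structured preoperative features (e.g., BMI, age, defect width)
# and binary labels (1 = complication occurred). Purely synthetic data.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fitted by plain gradient descent: the kind of classical,
# human-supervised model that requires labeled training examples.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)                 # predicted complication probability
    w -= 0.5 * (X.T @ (p - y) / len(y))    # gradient step on the weights
    b -= 0.5 * np.mean(p - y)              # gradient step on the intercept

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

<p>The learned weights make the model's reasoning inspectable, which is one reason classical MLMs trained on tabular clinical data remain attractive for preoperative risk stratification.</p>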
<p>To facilitate the advancements of AI in hernia surgery, it is in our view crucial to expand patient databases on an international scale. Increasing the diversity and size of these databases will provide a broader range of data for training AI models and allow for more comprehensive and accurate predictions.</p>
<p>Since the effectiveness of AI predictions relies heavily on both the accuracy and the thoroughness of the input data, access to diverse patient populations will help address potential biases and ensure the reliability of AI algorithms across different demographics [<xref ref-type="bibr" rid="B23">23</xref>]. Expanding patient databases internationally can also help identify patterns and trends that may not be apparent in smaller or more localised datasets.</p>
<p>Moreover, the collection and analysis of video data in surgical procedures hold great potential for advancing the field of hernia surgery. By capturing surgical procedures through video recordings, it becomes possible to create detailed anatomical maps and explore the integration of augmented reality (AR) with robotics in surgical interventions [<xref ref-type="bibr" rid="B24">24</xref>]. By applying DLMs to video data, it appears possible to extract relevant information, such as anatomical landmarks, tissue characteristics, and procedural phase recognition, which can contribute to the development of more precise surgical interventions [<xref ref-type="bibr" rid="B25">25</xref>]. Additionally, the combination of video data with AR has the potential to enhance surgical visualisation and navigation as well as robotic surgery training simulators. By overlaying real-time anatomical information and guidance onto the surgeon&#x2019;s view, AR could provide valuable assistance during complex hernia surgeries. Surgeons can benefit from visual cues, real-time feedback, and enhanced precision, ultimately leading to improved surgical outcomes. For example, Cui et al. [<xref ref-type="bibr" rid="B26">26</xref>] trained a CNN model on surgical videos from 35 patients undergoing laparoscopic hernia repair to detect the vas deferens; their results suggested that the CNN promptly identifies and visualises the vas deferens.</p>
<p>Another promising approach combines wide-field, planar, near-infrared fluorescence imaging with AI for automated real-time guidance during surgery as highlighted by Gioux et al. [<xref ref-type="bibr" rid="B27">27</xref>]. This technology could help identify and avoid hidden tissues, such as nerves or blood vessels covered by fatty or connective tissue, by highlighting their location within the surgical field. Like following a breadcrumb trail, the AI system could guide surgeons in the direction of dissection, enhancing precision and hence reducing the risk of complications.</p>
<p>An essential pillar of AI in surgery is the concept of phase recognition. This involves the use of MLAs and computer vision techniques to automatically discern and categorize distinct phases or stages within a process.</p>
<p>Phase recognition AI utilizes algorithms, frequently rooted in DL, to scrutinize patterns, features, and temporal sequences within video data. This capability allows the system to differentiate between diverse stages of the surgical process [<xref ref-type="bibr" rid="B28">28</xref>, <xref ref-type="bibr" rid="B29">29</xref>].</p>
<p>In a recently published study by Takeuchi et al. [<xref ref-type="bibr" rid="B30">30</xref>], the primary objective was to develop a DL-based automated phase-recognition system for identifying surgical phases in transabdominal preperitoneal (TAPP) procedures (i.e., preparation, peritoneal flap incision, peritoneal flap dissection, hernia dissection, mesh deployment, mesh fixation, peritoneal flap closure, and additional closure). A secondary aim was to explore the correlation between surgical skills and the duration of each phase. An AI model (AIM) was trained to automatically recognize surgical phases from videos, and the study assessed the relationship between phase duration and surgical skills. Fourfold cross-validation was used to evaluate the AIM&#x2019;s performance, achieving accuracies of 88.81% and 85.82% for unilateral and bilateral cases, respectively.</p>
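<p>Fourfold cross-validation follows a generic recipe: partition the cases into four folds, train on three, evaluate on the held-out fold, and rotate through all four to obtain four independent performance estimates. A minimal sketch with a trivial stand-in model (the actual study trained a deep phase-recognition network on video) could look like this:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def k_fold_scores(X, y, k, fit, score):
    """Split the data into k folds; train on k-1 folds, score on the held-out one."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        scores.append(score(model, X[test_idx], y[test_idx]))
    return scores

# Trivial stand-in "model": always predict the majority class of the training set.
def fit_majority(X, y):
    return round(float(np.mean(y)))

def score_accuracy(model, X, y):
    return float(np.mean(y == model))

# Synthetic, imbalanced labels standing in for per-case outcomes.
X = rng.normal(size=(100, 4))
y = (rng.random(100) < 0.7).astype(int)
scores = k_fold_scores(X, y, k=4, fit=fit_majority, score=score_accuracy)
print(len(scores))  # 4 held-out accuracy estimates
```

<p>Because every case serves exactly once as held-out data, the averaged fold scores give a less optimistic performance estimate than accuracy measured on the training data itself.</p>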
<p>Ortenzi et al. [<xref ref-type="bibr" rid="B31">31</xref>] reported on an AI-based computer vision algorithm designed to automatically recognize surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair. The algorithm achieved an overall accuracy of 88.8% in recognizing the complete procedure. The per-step accuracy was highest for the hernia sac reduction step at 94.3% and lowest for the preperitoneal dissection step at 72.2%. The authors concluded that this novel AIM could provide fully automated video analysis with a high level of accuracy. High-accuracy AIMs that enable automation of surgical video analysis allow for the identification and evaluation of surgical performance.</p>
<p>However, the integration of video data and DLMs in hernia surgery research and practice requires the accumulation of diverse, high-quality video datasets. It seems obvious that these datasets should encompass various surgical techniques, patient characteristics, and procedural variations to ensure the robustness and generalisability of AIMs [<xref ref-type="bibr" rid="B32">32</xref>]. To optimize this process, Hashimoto et al. [<xref ref-type="bibr" rid="B33">33</xref>] propose extensive collaborations between surgeons and data scientists. These efforts are crucial to facilitate the sharing and pooling of video data, which will accelerate the development and refinement of AI-driven approaches in hernia surgery.</p>
<p>In a scoping review published by Taha et al. [<xref ref-type="bibr" rid="B34">34</xref>], the authors provided a comprehensive summary of the current objectives related to the integration of AI in the field of hernia surgery. They highlighted the potential applications and benefits of AI in areas such as medical imaging and surgical training. However, the authors also acknowledged the limited number of publications available on this specific topic, indicating a gap in the existing literature.</p>
<p>Based on this observation, Taha et al. emphasised the need for further research and the publication of original articles to explore and investigate the ways in which AI can effectively assist in medical imaging and support the training of surgeons in the context of hernia surgery.</p>
<p>Furthermore, ever since AI has emerged, ethical considerations have played a significant role in discussions surrounding the technology. While the potential benefits are undeniable, questions arise regarding accountability if complications occur. O&#x2019;Sullivan et al. [<xref ref-type="bibr" rid="B35">35</xref>] categorise responsibility into accountability, liability, and culpability. Drawing a parallel to self-driving cars, they compare the surgeon overseeing the hypothetical autonomous robot to the driver of a car, making the surgeon ultimately responsible for the robot&#x2019;s actions. Undeniably, a robust legal framework is paramount prior to implementing semi-autonomous programs or machines into any medical field.</p>
<p>In conclusion, further research on the applications of AI is undoubtedly of crucial importance. Moreover, we believe that promoting international collaboration to expand patient and video databases is indispensable to facilitating this process.</p>
</sec>
</body>
<back>
<sec id="s4">
<title>Author Contributions</title>
<p>RV and BM contributed equally to this work; RV and BM drafted the manuscript; RV and BM are responsible for the interpretation of results and the critical revision of the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="funding-information" id="s5">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="s6">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s7">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="book">
<collab>School of Mathematics and Statistics</collab>. <source>Quotations by Alan Turing</source>. <publisher-loc>United Kingdom</publisher-loc>: <publisher-name>University of St Andrews</publisher-name> (<year>2024</year>). <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://mathshistory.st-andrews.ac.uk/Biographies/Turing/quotations/">https://mathshistory.st-andrews.ac.uk/Biographies/Turing/quotations/</ext-link>
</comment> <comment>(Accessed July 02, 2024)</comment>.</citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="book">
<collab>Encyclop&#xe6;dia Britannica, Inc</collab>. <source>Artificial Intelligence</source>. <publisher-loc>Chicago, IL, United States</publisher-loc>: <publisher-name>Encyclop&#xe6;dia Britannica</publisher-name> (<year>2024</year>). <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI">https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI</ext-link>
</comment> <comment>(Accessed July 02, 2024)</comment>.</citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graziani</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Dutkiewicz</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Calvaresi</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Amorim</surname>
<given-names>JP</given-names>
</name>
<name>
<surname>Yordanova</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Vered</surname>
<given-names>M</given-names>
</name>
<etal/>
</person-group> <article-title>A Global Taxonomy of Interpretable AI: Unifying the Terminology for the Technical and Social Sciences</article-title>. <source>Artif Intell Rev</source> (<year>2023</year>) <volume>56</volume>(<issue>4</issue>):<fpage>3473</fpage>&#x2013;<lpage>504</lpage>. <pub-id pub-id-type="doi">10.1007/s10462-022-10256-8</pub-id>
</citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bell</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>What Is Machine Learning?</article-title> In: <source>Machine Learning and the City: Applications in Architecture and Urban Design</source> (<year>2022</year>). p. <fpage>207</fpage>&#x2013;<lpage>16</lpage>.</citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hassan</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>SC</given-names>
</name>
<name>
<surname>Asaad</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Offodile</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Sidey-Gibbons</surname>
<given-names>C</given-names>
</name>
<etal/>
</person-group> <article-title>Novel Machine Learning Approach for the Prediction of Hernia Recurrence, Surgical Complication, and 30-Day Readmission After Abdominal Wall Reconstruction</article-title>. <source>J Am Coll Surg</source> (<year>2022</year>) <volume>234</volume>(<issue>5</issue>):<fpage>918</fpage>&#x2013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1097/XCS.0000000000000141</pub-id>
</citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Choi</surname>
<given-names>RY</given-names>
</name>
<name>
<surname>Coyner</surname>
<given-names>AS</given-names>
</name>
<name>
<surname>Kalpathy-Cramer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Chiang</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>JP</given-names>
</name>
</person-group>. <article-title>Introduction to Machine Learning, Neural Networks, and Deep Learning</article-title>. <source>Translational Vis Sci Technol</source> (<year>2020</year>) <volume>9</volume>(<issue>2</issue>):<fpage>14</fpage>. <pub-id pub-id-type="doi">10.1167/tvst.9.2.14</pub-id>
</citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>LeCun</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Hinton</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>Deep Learning</article-title>. <source>Nature</source> (<year>2015</year>) <volume>521</volume>(<issue>7553</issue>):<fpage>436</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1038/nature14539</pub-id>
</citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ghahramani</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Unsupervised Learning</article-title>. In: <source>Advanced Lectures on Machine Learning</source>. <publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2004</year>). p. <fpage>72</fpage>&#x2013;<lpage>112</lpage>.</citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="book">
<collab>IBM</collab>. <source>AI vs. Machine Learning vs. Deep Learning vs. Neural Networks</source>. <publisher-loc>Armonk, NY, United States</publisher-loc>: <publisher-name>IBM</publisher-name> (<year>2023</year>). <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks">https://www.ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks</ext-link>
</comment> <comment>(Accessed July 02, 2024)</comment>.</citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>XW</given-names>
</name>
<name>
<surname>Hui</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Tian</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Classification of CT Brain Images Based on Deep Learning Networks</article-title>. <source>Comput Methods Programs Biomed</source> (<year>2017</year>) <volume>138</volume>:<fpage>49</fpage>&#x2013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2016.10.007</pub-id>
</citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chilamkurthy</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Tanamala</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Biviji</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Campeau</surname>
<given-names>NG</given-names>
</name>
<name>
<surname>Venugopal</surname>
<given-names>VK</given-names>
</name>
<etal/>
</person-group> <article-title>Deep Learning Algorithms for Detection of Critical Findings in Head CT Scans: A Retrospective Study</article-title>. <source>Lancet</source> (<year>2018</year>) <volume>392</volume>(<issue>10162</issue>):<fpage>2388</fpage>&#x2013;<lpage>96</lpage>. <pub-id pub-id-type="doi">10.1016/S0140-6736(18)31645-3</pub-id>
</citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>CT</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Hsu</surname>
<given-names>CP</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>HW</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>CH</given-names>
</name>
<etal/>
</person-group> <article-title>Deep Learning for Automated Detection and Localization of Traumatic Abdominal Solid Organ Injuries on CT Scans</article-title>. <source>J Imaging Inform Med</source> (<year>2024</year>) <volume>37</volume>:<fpage>1113</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1007/s10278-024-01038-5</pub-id>
</citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Wood</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Gao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Najarian</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Gryak</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>Automated Spleen Injury Detection Using 3D Active Contours and Machine Learning</article-title>. <source>Entropy</source> (<year>2021</year>) <volume>23</volume>(<issue>4</issue>):<fpage>382</fpage>. <pub-id pub-id-type="doi">10.3390/e23040382</pub-id>
</citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cheng</surname>
<given-names>JZ</given-names>
</name>
<name>
<surname>Ni</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Chou</surname>
<given-names>YH</given-names>
</name>
<name>
<surname>Qin</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Tiu</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>YC</given-names>
</name>
<etal/>
</person-group> <article-title>Computer-Aided Diagnosis With Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans</article-title>. <source>Sci Rep</source> (<year>2016</year>) <volume>6</volume>:<fpage>24454</fpage>. <pub-id pub-id-type="doi">10.1038/srep24454</pub-id>
</citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Geras</surname>
<given-names>KJ</given-names>
</name>
<name>
<surname>Lewin</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Moy</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>New Frontiers: An Update on Computer-Aided Diagnosis for Breast Imaging in the Age of Artificial Intelligence</article-title>. <source>Am J Roentgenol</source> (<year>2019</year>) <volume>212</volume>(<issue>2</issue>):<fpage>300</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.2214/AJR.18.20392</pub-id>
</citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hawkins</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Garcia</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Stringfield</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Krewer</surname>
<given-names>H</given-names>
</name>
<etal/>
</person-group> <article-title>Predicting Malignant Nodules From Screening CT Scans</article-title>. <source>J Thorac Oncol</source> (<year>2016</year>) <volume>11</volume>(<issue>12</issue>):<fpage>2120</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1016/j.jtho.2016.07.002</pub-id>
</citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elhage</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Deerenberg</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Ayuso</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Murphy</surname>
<given-names>KJ</given-names>
</name>
<name>
<surname>Shao</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Kercher</surname>
<given-names>KW</given-names>
</name>
<etal/>
</person-group> <article-title>Development and Validation of Image-Based Deep Learning Models to Predict Surgical Complexity and Complications in Abdominal Wall Reconstruction</article-title>. <source>JAMA Surg</source> (<year>2021</year>) <volume>156</volume>(<issue>10</issue>):<fpage>933</fpage>&#x2013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1001/jamasurg.2021.3012</pub-id>
</citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xue</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>C</given-names>
</name>
<name>
<surname>King</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Wildes</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Avidan</surname>
<given-names>MS</given-names>
</name>
<etal/>
</person-group> <article-title>Use of Machine Learning to Develop and Evaluate Models Using Preoperative and Intraoperative Data to Identify Risks of Postoperative Complications</article-title>. <source>JAMA Netw Open</source> (<year>2021</year>) <volume>4</volume>(<issue>3</issue>):<fpage>e212240</fpage>. <pub-id pub-id-type="doi">10.1001/jamanetworkopen.2021.2240</pub-id>
</citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bonde</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Varadarajan</surname>
<given-names>KM</given-names>
</name>
<name>
<surname>Bonde</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Troelsen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Muratoglu</surname>
<given-names>OK</given-names>
</name>
<name>
<surname>Malchau</surname>
<given-names>H</given-names>
</name>
<etal/>
</person-group> <article-title>Assessing the Utility of Deep Neural Networks in Predicting Postoperative Surgical Complications: A Retrospective Study</article-title>. <source>Lancet Digit Health</source> (<year>2021</year>) <volume>3</volume>(<issue>8</issue>):<fpage>e471</fpage>&#x2013;<lpage>e485</lpage>. <pub-id pub-id-type="doi">10.1016/S2589-7500(21)00084-4</pub-id>
</citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ayuso</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Elhage</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Aladegbami</surname>
<given-names>BG</given-names>
</name>
<name>
<surname>Gersin</surname>
<given-names>KS</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>JP</given-names>
</name>
<etal/>
</person-group> <article-title>Predicting Rare Outcomes in Abdominal Wall Reconstruction Using Image-Based Deep Learning Models</article-title>. <source>Surgery</source> (<year>2023</year>) <volume>173</volume>(<issue>3</issue>):<fpage>748</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1016/j.surg.2022.06.048</pub-id>
</citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schwartz</surname>
<given-names>WB</given-names>
</name>
</person-group>. <article-title>Medicine and the Computer. The Promise and Problems of Change</article-title>. <source>N Engl J Med</source> (<year>1970</year>) <volume>283</volume>(<issue>23</issue>):<fpage>1257</fpage>&#x2013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1056/NEJM197012032832305</pub-id>
</citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saeidi</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Opfermann</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>Kam</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wei</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Leonard</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Hsieh</surname>
<given-names>MH</given-names>
</name>
<etal/>
</person-group> <article-title>Autonomous Robotic Laparoscopic Surgery for Intestinal Anastomosis</article-title>. <source>Sci Robot</source> (<year>2022</year>) <volume>7</volume>(<issue>62</issue>):<fpage>eabj2908</fpage>. <pub-id pub-id-type="doi">10.1126/scirobotics.abj2908</pub-id>
</citation>
</ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Murthy</surname>
<given-names>VH</given-names>
</name>
<name>
<surname>Krumholz</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Gross</surname>
<given-names>CP</given-names>
</name>
</person-group>. <article-title>Participation in Cancer Clinical Trials: Race-, Sex-, and Age-Based Disparities</article-title>. <source>JAMA</source> (<year>2004</year>) <volume>291</volume>(<issue>22</issue>):<fpage>2720</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1001/jama.291.22.2720</pub-id>
</citation>
</ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seetohul</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shafiee</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sirlantzis</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions</article-title>. <source>Sensors (Basel)</source> (<year>2023</year>) <volume>23</volume>(<issue>13</issue>):<fpage>6202</fpage>. <pub-id pub-id-type="doi">10.3390/s23136202</pub-id>
</citation>
</ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bar</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Neimark</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Zohar</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Hager</surname>
<given-names>GD</given-names>
</name>
<name>
<surname>Girshick</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Fried</surname>
<given-names>GM</given-names>
</name>
<etal/>
</person-group> <article-title>Impact of Data on Generalization of AI for Surgical Intelligence Applications</article-title>. <source>Sci Rep</source> (<year>2020</year>) <volume>10</volume>:<fpage>22208</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-79173-6</pub-id>
</citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cui</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Zhao</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Identification of the Vas Deferens in Laparoscopic Inguinal Hernia Repair Surgery Using the Convolutional Neural Network</article-title>. <source>J Healthc Eng</source> (<year>2021</year>) <volume>2021</volume>:<fpage>5578089</fpage>. <pub-id pub-id-type="doi">10.1155/2021/5578089</pub-id>
</citation>
</ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gioux</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Choi</surname>
<given-names>HS</given-names>
</name>
<name>
<surname>Frangioni</surname>
<given-names>JV</given-names>
</name>
</person-group>. <article-title>Image-Guided Surgery Using Invisible Near-Infrared Light: Fundamentals of Clinical Translation</article-title>. <source>Mol Imaging</source> (<year>2010</year>) <volume>9</volume>(<issue>5</issue>):<fpage>237</fpage>&#x2013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.2310/7290.2010.00034</pub-id>
</citation>
</ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meeuwsen</surname>
<given-names>FC</given-names>
</name>
<name>
<surname>van Luyn</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Blikkendaal</surname>
<given-names>MD</given-names>
</name>
<name>
<surname>Jansen</surname>
<given-names>FW</given-names>
</name>
<name>
<surname>van den Dobbelsteen</surname>
<given-names>JJ</given-names>
</name>
</person-group>. <article-title>Surgical Phase Modelling in Minimal Invasive Surgery</article-title>. <source>Surg Endosc</source> (<year>2019</year>) <volume>33</volume>(<issue>5</issue>):<fpage>1426</fpage>&#x2013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1007/s00464-018-6417-4</pub-id>
</citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ramesh</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dall&#x2019;Alba</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Gonzalez</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Yu</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Mascagni</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Mutter</surname>
<given-names>D</given-names>
</name>
<etal/>
</person-group> <article-title>Multi-Task Temporal Convolutional Networks for Joint Recognition of Surgical Phases and Steps in Gastric Bypass Procedures</article-title>. <source>Int J Comput Assist Radiol Surg</source> (<year>2021</year>) <volume>16</volume>:<fpage>1111</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1007/s11548-021-02388-z</pub-id>
</citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Takeuchi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Collins</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Ndagijimana</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kawakubo</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Kitagawa</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Marescaux</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>Automatic Surgical Phase Recognition in Laparoscopic Inguinal Hernia Repair With Artificial Intelligence</article-title>. <source>Hernia</source> (<year>2022</year>) <volume>26</volume>(<issue>6</issue>):<fpage>1669</fpage>&#x2013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1007/s10029-022-02621-x</pub-id>
</citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ortenzi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rapoport Ferman</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Antolin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Bar</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Zohar</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Perry</surname>
<given-names>O</given-names>
</name>
<etal/>
</person-group> <article-title>A Novel High Accuracy Model for Automatic Surgical Workflow Recognition Using Artificial Intelligence in Laparoscopic Totally Extraperitoneal Inguinal Hernia Repair (TEP)</article-title>. <source>Surg Endosc</source> (<year>2023</year>) <volume>37</volume>(<issue>11</issue>):<fpage>8818</fpage>&#x2013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1007/s00464-023-10375-5</pub-id>
</citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gianfrancesco</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Tamang</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Yazdany</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schmajuk</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data</article-title>. <source>JAMA Intern Med</source> (<year>2018</year>) <volume>178</volume>(<issue>11</issue>):<fpage>1544</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1001/jamainternmed.2018.3763</pub-id>
</citation>
</ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hashimoto</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Rosman</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Rus</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Meireles</surname>
<given-names>OR</given-names>
</name>
</person-group>. <article-title>Artificial Intelligence in Surgery: Promises and Perils</article-title>. <source>Ann Surg</source> (<year>2018</year>) <volume>268</volume>(<issue>1</issue>):<fpage>70</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1097/SLA.0000000000002693</pub-id>
</citation>
</ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Taha</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Enodien</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Frey</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Taha-Mehlitz</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>The Development of Artificial Intelligence in Hernia Surgery: A Scoping Review</article-title>. <source>Front Surg</source> (<year>2022</year>) <volume>9</volume>:<fpage>908014</fpage>. <pub-id pub-id-type="doi">10.3389/fsurg.2022.908014</pub-id>
</citation>
</ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>O&#x27;Sullivan</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Nevejans</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Allen</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Blyth</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Leonard</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Pagallo</surname>
<given-names>U</given-names>
</name>
<etal/>
</person-group> <article-title>Legal, Regulatory, and Ethical Frameworks for Development of Standards in Artificial Intelligence (AI) and Autonomous Robotic Surgery</article-title>. <source>Int J Med Robot</source> (<year>2019</year>) <volume>15</volume>(<issue>1</issue>):<fpage>e1968</fpage>. <pub-id pub-id-type="doi">10.1002/rcs.1968</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>