ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. AI for Human Learning and Behavior Change

Volume 8 - 2025 | doi: 10.3389/frai.2025.1558938

This article is part of the Research Topic Prompts: The Double-Edged Sword Using AI View all articles

Enhancing Structured Data Generation with GPT-4o: Evaluating Prompt Efficiency Across Prompt Styles

Provisionally accepted
  • Vanderbilt University, Nashville, United States

The final, formatted version of the article will be published soon.

    Large language models (LLMs), such as GPT-4o, provide versatile techniques for generating and formatting structured data. However, prompt style plays a critical role in determining the accuracy, efficiency, and token cost of the generated outputs. This paper explores the effectiveness of three specific prompt styles (JSON, YAML, and Hybrid CSV/Prefix) for structured data generation across diverse applications. We focus on scenarios such as personal stories, receipts, and medical records, using randomized datasets to evaluate each prompt style's impact.

    Our analysis examines these prompt styles across three key metrics: accuracy in preserving data attributes, token cost associated with output generation, and processing time required for completion. By incorporating structured validation and comparative analysis, we ensure precise evaluation of each prompt style's performance. Results are visualized through metrics-based comparisons, such as Prompt Style vs. Accuracy, Prompt Style vs. Token Cost, and Prompt Style vs. Processing Time. Our findings reveal trade-offs between prompt style complexity and performance, with JSON providing high accuracy for complex data, YAML offering a balance between readability and efficiency, and Hybrid CSV/Prefix excelling in token and time efficiency for flat data structures.

    This paper explores the pros and cons of applying the GPT-4o LLM to generate structured data. It also provides practical recommendations for selecting prompt styles tailored to specific requirements, such as data integrity, cost-effectiveness, and real-time processing needs. Our findings contribute to research on how prompt engineering can optimize structured data generation for AI-driven applications, as well as documenting limitations that motivate future work needed to improve LLMs for complex tasks.
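    To make the token-cost trade-off concrete, the sketch below serializes the same flat record in the three styles the abstract names and compares their sizes. The record and field names are hypothetical, character count stands in for token cost (a real comparison would use the model's tokenizer, e.g. OpenAI's tiktoken), and the YAML rendering is hand-rolled so the example needs only the standard library.

```python
import json
import csv
import io

# Hypothetical flat record, loosely in the spirit of the paper's
# receipt scenario (not taken from the paper's datasets).
record = {"name": "Jane Doe", "total": 42.5, "items": "milk;bread"}

# JSON style: explicit delimiters and quoting, verbose but unambiguous.
as_json = json.dumps(record, indent=2)

# YAML-like style: "key: value" lines (hand-rolled here; PyYAML is
# not assumed to be installed).
as_yaml = "\n".join(f"{k}: {v}" for k, v in record.items())

# Hybrid CSV/Prefix style: one prefix row naming the columns,
# then a single row of comma-separated values.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(record.keys())
writer.writerow(record.values())
as_csv = buf.getvalue().strip()

# Character count as a crude proxy for token cost; for flat data the
# CSV/Prefix form carries the least formatting overhead.
for label, text in [("JSON", as_json), ("YAML", as_yaml), ("CSV", as_csv)]:
    print(f"{label}: {len(text)} chars")
```

    For this flat record the CSV/Prefix form is the most compact and indented JSON the least, mirroring the trade-off the abstract reports; for nested data the CSV/Prefix style stops being applicable and JSON's explicit structure earns its overhead.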

    Keywords: Structured Data Generation, Prompt Engineering, GPT-4o, JSON, YAML, Hybrid CSV/Prefix, Token Efficiency, Cost-Effective AI

    Received: 11 Jan 2025; Accepted: 07 Mar 2025.

    Copyright: © 2025 Elnashar, White and Schmidt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Ashraf Elnashar, Vanderbilt University, Nashville, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
