Optimizing Prompt Engineering for AI-Based Logo Generation Using Response Surface Methodology
Abstract
This research developed an optimized prompt engineering framework for AI-based logo generation using Response Surface Methodology (RSM) with Central Composite Design (CCD). Despite rapid AI adoption, users struggle to communicate design intent effectively, leading to inconsistent outputs. This study systematically tested 47 prompt combinations across five variables: prompt clarity, detail level, thematic description, visual elements, and color specification. The optimization identified eight critical components forming a structured template: Main Design Focus, Detail Elements, Thematic Style, Primary Colors, Complementary Colors, Rewording, Layout Size, and Element Limit. Experimental validation with 30 graphic designers demonstrated substantial improvements over unstructured prompts: visual consistency increased from 65% to 87%, iteration efficiency improved by 48.5% (from 6.6 to 3.4 attempts), and user satisfaction rose from 58% to 82%. Both manual designers and AI-experienced users applied the framework with comparable effectiveness. This research contributes a systematic, optimization-based approach to prompt engineering in creative AI applications and provides a practical framework that enhances accessibility for non-technical users while maintaining professional quality standards in logo design.
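The eight-component template described above can be sketched as a small assembly routine. This is a minimal illustration only: the component names come from the abstract, but the assembly format (comma-joined "key: value" pairs), the function name, and the example values are assumptions, not the authors' exact syntax.

```python
# Eight-component prompt template from the abstract, in the order reported.
TEMPLATE_COMPONENTS = [
    "Main Design Focus",
    "Detail Elements",
    "Thematic Style",
    "Primary Colors",
    "Complementary Colors",
    "Rewording",
    "Layout Size",
    "Element Limit",
]

def build_logo_prompt(values: dict) -> str:
    """Assemble a structured logo prompt, keeping components in template order."""
    missing = [c for c in TEMPLATE_COMPONENTS if c not in values]
    if missing:
        raise ValueError(f"Missing template components: {missing}")
    return ", ".join(f"{c}: {values[c]}" for c in TEMPLATE_COMPONENTS)

# Hypothetical example inputs for a cafe logo brief.
prompt = build_logo_prompt({
    "Main Design Focus": "minimalist coffee cup",
    "Detail Elements": "steam swirl forming a leaf",
    "Thematic Style": "flat vector, modern cafe branding",
    "Primary Colors": "espresso brown",
    "Complementary Colors": "cream white",
    "Rewording": "simple, memorable, scalable mark",
    "Layout Size": "square, centered",
    "Element Limit": "3 visual elements maximum",
})
print(prompt)
```

Enforcing a fixed component order and rejecting incomplete briefs mirrors the paper's finding that structured, complete prompts reduce iteration counts relative to unstructured ones.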
Copyright (c) 2025 Shermay, Syaeful Anas Aklani, Muhamad Dody Firmansyah

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
This is an open-access article distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium. Users may read, download, copy, distribute, search, or link to the full text of articles in this journal without prior permission, provided they give appropriate credit, provide a link to the license, and indicate if changes were made. Any remix, transformation, or building upon the material must be distributed under the same license as the original.
