Fine-Tuning GPT-4o-Mini for Programming Questions Generation

Document Type

Article

Publication Date

2024

Abstract

Creating programming questions that are both meaningful and educationally relevant is a critical task in computer science education. This paper introduces a fine-tuned GPT-4o-mini model (C2Q) designed to generate meaningful questions by leveraging semantic feature extraction and well-crafted prompts. The approach addresses the limitations of traditional generative models, offering a deeper understanding of programming code and producing questions that are precise, diverse, and relevant to a given code snippet. The proposed framework incorporates essential code elements, such as control structures and method attributes, to generate questions that align with programming concepts. BLEU, ROUGE-1, and ROUGE-L were used to evaluate the model's performance. The findings reveal that the model achieves better structural coherence and conceptual relevance while prioritizing contextual understanding over exact term matching. This work highlights the potential of the proposed approach to advance teaching and assessment methods in computer science.
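As an illustration of the overlap-based metrics named in the abstract, the sketch below computes a simple ROUGE-1 F1 score (unigram overlap) between a generated question and a reference question. This is a minimal from-scratch version for intuition only; the paper's reported scores would normally come from a standard implementation such as the `rouge-score` package, and the example question strings here are hypothetical, not taken from the paper.

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate and a reference.

    Illustrative sketch only (whitespace tokenization, lowercasing);
    published evaluations typically use a standard ROUGE implementation.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated vs. reference question for a code snippet
generated = "what does the for loop in this method return"
reference = "what value does the for loop return in this method"
print(round(rouge_1(generated, reference), 3))  # → 0.947
```

Because ROUGE-1 ignores word order, the two questions above score highly despite differing phrasing, which matches the abstract's emphasis on contextual relevance over exact term matching.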
