# Magicoder: Empowering Code Generation with OSS-INSTRUCT

Yuxiang Wei¹ Zhe Wang² Jiawei Liu¹ Yifeng Ding¹ Lingming Zhang¹

The work was done during a remote summer internship at the University of Illinois. ¹University of Illinois at Urbana-Champaign, USA. ²Tsinghua University, China. Correspondence to: Yuxiang Wei. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

Abstract

We introduce Magicoder, a series of fully open-source (code, weights, and data) Large Language Models (LLMs) for code that significantly closes the gap with top code models while having no more than 7B parameters. Magicoder models are trained on 75K synthetic instruction data using OSS-INSTRUCT, a novel approach to enlightening LLMs with open-source code snippets to generate diverse instruction data for code. Our main motivation is to mitigate the inherent bias of the synthetic data generated by LLMs through the wealth of open-source references for the production of more realistic and controllable data. The orthogonality of OSS-INSTRUCT and other data generation methods like Evol-Instruct further enables us to build an enhanced MagicoderS. Both Magicoder and MagicoderS substantially outperform state-of-the-art code models with similar or even larger sizes on a wide range of coding benchmarks. Notably, MagicoderS-CL-7B based on CODELLAMA even surpasses the prominent ChatGPT on HumanEval+ (66.5 vs. 65.9 in pass@1). Overall, OSS-INSTRUCT opens a new direction for crafting diverse synthetic instruction data for code using abundant open-source references.

1. Introduction

Code generation, also known as program synthesis (Gulwani et al., 2017), is a long-standing challenge in computer science. In the past few decades, a large body of research has studied symbolic approaches, such as abstraction-based synthesis (Wang et al., 2017; Feng et al., 2018) for general-purpose synthesis problems and programming by examples (Cambronero et al., 2023; Liu et al., 2023a) for domain-specific tasks. Until recently, Large Language Models (LLMs) trained on code (Austin et al., 2021; Chen et al., 2021) have shown outstanding breakthroughs in generating code that accurately satisfies user intents, and they are widely deployed to assist real-world software development (Microsoft, 2023b; Services, 2023). Initially, closed-source models such as GPT-3.5 Turbo (OpenAI, 2022) (i.e., ChatGPT) and GPT-4 (OpenAI, 2023) dominated various coding benchmarks and leaderboards (Chen et al., 2021; Austin et al., 2021; Liu et al., 2023b; Lai et al., 2022; Xia & Zhang, 2023). To further push the boundaries of code generation with open-source LLMs, SELF-INSTRUCT (Wang et al., 2023a) is adopted to bootstrap the instruction-following ability of LLMs. In the realm of code, practitioners commonly devise synthetic coding instructions using a stronger teacher model (e.g., ChatGPT and GPT-4) and then finetune a weaker student model (e.g., CODELLAMA (Rozière et al., 2023)) with the generated data to distill the knowledge from the teacher (Taori et al., 2023; Chaudhary, 2023). For example, Code Alpaca (Chaudhary, 2023) consists of 20K code instructions automatically generated by applying SELF-INSTRUCT on ChatGPT with 21 seed tasks.
To further enhance the coding abilities of LLMs, Luo et al. (2023b) propose Code Evol-Instruct, which employs various heuristics to increase the complexity of seed code instructions (Code Alpaca in this case), achieving state-of-the-art (SOTA) results among open-source models. While these data generation methods can effectively improve the instruction-following capability of an LLM, they rely on a narrow range of predefined tasks or heuristics under the hood. For example, on the one hand, Code Alpaca, which adopts SELF-INSTRUCT, relies on only 21 seed tasks to generate new code instructions using an identical prompt template. On the other hand, Code Evol-Instruct takes Code Alpaca as seeds and merely depends on 5 heuristics to evolve the dataset. As partly suggested by Yu et al. (2023) and Wang et al. (2023a), such approaches may significantly inherit the system bias inherent in the LLMs as well as the predefined tasks. Therefore, in this paper, we propose OSS-INSTRUCT to mitigate the inherent bias of LLMs and to unleash their potential to craft diverse and creative code instructions via direct learning from the open source.

[Figure 1: Overview of OSS-INSTRUCT and the pass@1 results of different LLMs on HumanEval (+).]

As shown in Figure 1, OSS-INSTRUCT leverages a powerful LLM to automatically generate new coding problems by drawing inspiration from any random code snippets collected from the open source. In this example, the LLM gets inspired by two incomplete code fragments from different functions and manages to relate them and craft a realistic machine learning problem. Thanks to the infinite real-world open-source code, OSS-INSTRUCT can directly produce diverse, realistic, and controllable code instructions by providing distinct seed code snippets. In the end, we generate 75K synthetic data to finetune CODELLAMA-PYTHON-7B, resulting in Magicoder-CL. While simple and effective, OSS-INSTRUCT is orthogonal to existing data generation methods, and they can be combined to further boost the model's coding capabilities. Therefore, we continually finetune Magicoder-CL on an open-source Evol-Instruct dataset with 110K entries, producing MagicoderS-CL. We evaluate Magicoder and MagicoderS on a wide range of coding tasks, including HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) for Python text-to-code generation, MultiPL-E (Cassano et al., 2022) for multilingual code completion, and DS-1000 (Lai et al., 2022) for solving data science problems.
We further adopt EvalPlus (Liu et al., 2023b), which includes the augmented HumanEval+ and MBPP+ datasets, for more rigorous model evaluation. Both Magicoder-CL and MagicoderS-CL substantially boost the base CODELLAMA-PYTHON-7B. Additionally, Magicoder-CL even outperforms WizardCoder-CL-7B, WizardCoder-SC-15B, and all studied SOTA LLMs with less than or equal to 16B parameters on all the benchmarks we tested. Also, the pass@1 result of the enhanced MagicoderS-CL is on par with ChatGPT on HumanEval (70.7 vs. 72.6) and surpasses it on the more rigorous HumanEval+ (66.5 vs. 65.9), indicating that MagicoderS-CL can generate more robust code. It also achieves SOTA results among all code models at the same scale. Additionally, we notice a very recent advancement in the development of the DeepSeek-Coder series (Guo et al., 2024), which has shown exceptional coding performance. However, due to the limited technical details disclosed, we only briefly discuss it in Section 3.4. Despite this, we applied OSS-INSTRUCT on DeepSeek-Coder-Base-6.7B, resulting in the creation of Magicoder-DS and MagicoderS-DS. In addition to findings consistent with the previous results using CODELLAMA-PYTHON-7B as the base model, Magicoder-DS and MagicoderS-DS benefit from the more powerful DeepSeek-Coder-Base-6.7B. This advantage is demonstrated by MagicoderS-DS, which achieves a remarkable 76.8 pass@1 on HumanEval. MagicoderS-DS also outperforms DeepSeek-Coder-Instruct-6.7B on HumanEval (+) and MBPP (+) with 8× fewer finetuning tokens.

To justify the design of OSS-INSTRUCT, i.e., generating instruction-tuning data from open-source references rather than using the references directly, we demonstrate that finetuning the base models with semantically relevant comment-function pairs extracted from open-source projects even negatively impacts the model performance (Section 4.2).

In general, we make the following contributions:

- We introduce OSS-INSTRUCT, a pioneering approach to enlightening LLMs with open-source code snippets to generate more diverse, realistic, and controllable coding instruction data, which can be leveraged to substantially boost the performance of various LLMs via instruction tuning. It opens a new dimension for creating low-bias and diverse instruction-tuning data from the abundance of open-source references.
- We build the Magicoder series trained with OSS-INSTRUCT and the MagicoderS series trained on a combination of OSS-INSTRUCT and Evol-Instruct. Our evaluation across 6 benchmarks shows that all Magicoders significantly improve the base LLMs. Notably, both MagicoderS-CL and MagicoderS-DS outperform ChatGPT on HumanEval+ with only 7B parameters.
- We fully open source the model weights, training data, and source code at https://github.com/ise-uiuc/magicoder to facilitate future research.

2. OSS-INSTRUCT: Instruction Tuning from Open Source

In this section, we elaborate on our OSS-INSTRUCT approach. From a high level, as shown in Figure 1, OSS-INSTRUCT works by prompting an LLM (e.g., ChatGPT) to generate a coding problem and its solution according to some seed code snippet collected from the wild (e.g., from GitHub). The seed snippet offers controllability of the generation and encourages the LLM to create diverse coding problems that can reflect real-world programming scenarios.

2.1. Generating Coding Problems

OSS-INSTRUCT is powered by seed code snippets that can be easily collected from open source.
In this work, we directly adopt starcoderdata as our seed corpus, a filtered version of The Stack (Kocetkov et al., 2022) dataset that StarCoder is trained on, containing permissively licensed source code documents in various programming languages. We chose starcoderdata because it is widely adopted, includes massive high-quality code snippets, and is even post-processed for data decontamination (Li et al., 2023; Allal et al., 2023). For each code document from the corpus, we randomly extract 1–15 consecutive lines as the seed snippet from which the model gains inspiration to produce coding problems. In total, we collected 80K initial seed snippets from 80K code documents: 40K from Python, and 5K each from C++, Java, TypeScript, Shell, C#, Rust, PHP, and Swift. Then, each collected seed code snippet is applied to the prompt template shown in Appendix A.1, which a teacher model takes as input to output both a coding problem and its solution.

2.2. Data Cleaning and Decontamination

We perform data cleaning by excluding samples that are identical or share the same seed code snippet. While other kinds of noise exist in the generated data (e.g., incomplete solutions), inspired by Honovich et al. (2023), we do not remove them, as we believe they still contain valuable information for LLMs to learn from. More experimental details can be found in Appendix C.3. Finally, we apply the same logic as StarCoder (Li et al., 2023) to decontaminate our training data by removing coding problems that contain docstrings or solutions from HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021), docstrings from APPS (Hendrycks et al., 2021), prompts from DS-1000 (Lai et al., 2022), or questions from GSM8K (Cobbe et al., 2021). As part of our analysis, the decontamination procedure only filters out 9 additional samples. Since the seed corpus starcoderdata has already gone through rigorous data decontamination, this observation suggests that OSS-INSTRUCT is unlikely to introduce additional data leakage beyond the seeds. The eventual OSS-INSTRUCT dataset contains about 75K entries. An overview of the dataset statistics can be found in Appendix A.3.

2.3. Qualitative Examples of OSS-INSTRUCT

Figure 2 shows some qualitative examples of how OSS-INSTRUCT can help an LLM get inspiration from a seed code snippet to create new coding problems and solutions. For example, the shell script example shows how an LLM crafts a Python coding problem with just one line of shell script. The library imports example demonstrates how an LLM can create a realistic machine learning problem using just a few import statements. Meanwhile, the class signature instance illustrates the ability of an LLM to draw inspiration from an incomplete class definition featuring annotations like SpringBootApplication and keywords such as bank. From this, the LLM generates a problem that requires implementing a complete banking system based on Spring Boot. Overall, OSS-INSTRUCT can inspire an LLM with distinct code structures and semantics to create diverse coding tasks, including algorithmic challenges, realistic issues, single-function code generation, library-based program completion, whole-program development, and even whole-application construction.
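To make the seed sampling, prompting, and cleaning steps above concrete, here is a minimal sketch of the per-document processing. It is an illustration under stated assumptions, not the released pipeline: the helper names are hypothetical, and PROMPT_TEMPLATE stands in for the full template reproduced in Appendix A.1.

```python
import random

# Hypothetical stand-in for the full Appendix A.1 template, which asks the
# teacher model for a [Problem Description] and a [Solution] section.
PROMPT_TEMPLATE = (
    "Please gain inspiration from the following random code snippet to create "
    "a high-quality programming problem.\n\nCode snippet for inspiration:\n{code}\n"
)

def extract_seed(document: str, min_lines: int = 1, max_lines: int = 15) -> str:
    """Randomly take 1-15 consecutive lines from one code document as the seed."""
    lines = document.splitlines()
    if not lines:  # assumes documents are non-empty; guard for the sketch
        return ""
    length = random.randint(min_lines, min(max_lines, len(lines)))
    start = random.randint(0, len(lines) - length)
    return "\n".join(lines[start:start + length])

def build_prompt(document: str) -> str:
    """Wrap a freshly sampled seed snippet in the OSS-INSTRUCT prompt template."""
    return PROMPT_TEMPLATE.format(code=extract_seed(document))

def deduplicate(samples: list[dict]) -> list[dict]:
    """Drop generations that are identical or that share the same seed snippet."""
    seen_seeds, seen_responses, kept = set(), set(), []
    for sample in samples:
        if sample["seed"] in seen_seeds or sample["response"] in seen_responses:
            continue
        seen_seeds.add(sample["seed"])
        seen_responses.add(sample["response"])
        kept.append(sample)
    return kept
```

The deduplicate helper mirrors the cleaning rule of Section 2.2: a generated sample is discarded if its response or its seed snippet has already been seen.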
[Figure 2: Examples showing how OSS-INSTRUCT generates problems and solutions from seed code snippets (a shell script, library imports, and a class signature). Detailed problem requirements, implementations, and explanations are omitted for brevity. More examples can be found in Appendix A.2.]

Similarity with HumanEval. To study whether our data generation process produces more HumanEval-like problems or solutions that contribute to high performance, we pair each sample from our 75K dataset with each of the 164 HumanEval (Chen et al., 2021) samples and compute their cosine similarity using TF-IDF (Sparck Jones, 1972) embeddings. We then associate each OSS-INSTRUCT sample with the HumanEval sample that has the highest similarity score. We also compare our dataset against Code Alpaca, a 20K dataset built by applying SELF-INSTRUCT to code, and evol-codealpaca-v1 (theblackcat102, 2023), an open-source reproduction of Evol-Instruct containing 110K coding instructions. We resort to the open-source implementation because the official Code Evol-Instruct (Luo et al., 2023b) dataset is not released. We decontaminate all the datasets beforehand using the same procedure discussed in Section 2.2. Figure 3 shows that OSS-INSTRUCT exhibits the lowest average similarity among all the studied data generation techniques, while SELF-INSTRUCT shows the highest average similarity. This result indicates that the improvements from OSS-INSTRUCT are not merely due to including data from the same distribution.

[Figure 3: Cosine similarities between HumanEval and synthetic data generated by different methods. Average scores: Self-Instruct 0.169, Evol-Instruct 0.131, OSS-Instruct 0.105.]
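The similarity analysis above can be approximated with standard TF-IDF tooling; the following scikit-learn sketch is an assumption-laden illustration (the paper does not release this exact script, and the choice of text fields fed to the vectorizer is left open):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_similarity_to_humaneval(synthetic_texts: list[str],
                                humaneval_prompts: list[str]) -> list[float]:
    """For each synthetic sample, return its highest TF-IDF cosine similarity
    to any of the 164 HumanEval problems."""
    vectorizer = TfidfVectorizer()
    # Fit one vocabulary over both corpora so the vectors are comparable.
    matrix = vectorizer.fit_transform(synthetic_texts + humaneval_prompts)
    synthetic_vecs = matrix[: len(synthetic_texts)]
    humaneval_vecs = matrix[len(synthetic_texts):]
    scores = cosine_similarity(synthetic_vecs, humaneval_vecs)  # (N_syn, 164)
    return scores.max(axis=1).tolist()

# The averages reported in Figure 3 (e.g., 0.105 for OSS-INSTRUCT vs. 0.169 for
# Self-Instruct) correspond to the mean of these per-sample maxima.
```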
3. Evaluation

We choose CODELLAMA-PYTHON-7B and DeepSeek-Coder-Base-6.7B as the base LLMs. To derive the Magicoder series, we first finetune them on the 75K synthetic data generated through OSS-INSTRUCT. We then obtain MagicoderS by continuing to finetune Magicoder with the evol-codealpaca-v1 dataset, an open-source Evol-Instruct implementation containing about 110K samples. More implementation details and additional evaluation results are listed in Appendices B and C. We also present interesting use cases that reflect the effectiveness of instruction tuning in Appendix D and demonstrate Magicoder's capability to generate complex programs in Appendix E.

3.1. Python Text-to-Code Generation

HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) are two of the most widely used benchmarks for code generation. Each task in these benchmarks includes a task description (e.g., a docstring) as the prompt, from which LLMs generate corresponding code whose correctness is checked by a handful of test cases. Because tests in these benchmarks can be insufficient, for more rigorous evaluation we use HumanEval+ and MBPP+, both powered by the EvalPlus framework (Liu et al., 2023b), to obtain 80×/35× more tests. Following prior work (Liu et al., 2023b; Chen et al., 2023), for each task and LLM we use greedy decoding to generate one sample and focus on comparing the pass@1 metric. We consider a wide range of baseline models, including CODELLAMA-PYTHON (Rozière et al., 2023), WizardCoder (Luo et al., 2023b), GPT-3.5 Turbo (OpenAI, 2022), GPT-4 Turbo (OpenAI, 2023), StarCoder (Li et al., 2023), CodeT5+ (Wang et al., 2023b), CodeGen-Mono (Nijkamp et al., 2023), and Mistral (Jiang et al., 2023a). All the results are consistently reported from the EvalPlus (Liu et al., 2023b) leaderboard (EvalPlus hash: 1895d2f).

Table 1: Pass@1 (%) results of different LLMs on HumanEval (+) and MBPP (+) computed with greedy decoding. The abbreviations CL and SC refer to the base models CODELLAMA-PYTHON and StarCoder, respectively. We report the results consistently from the EvalPlus (Liu et al., 2023b) leaderboard.

| Model | Release Date | Size | HumanEval (+) | MBPP (+) | Open Weight | Open Data |
|---|---|---|---|---|---|---|
| GPT-3.5 Turbo | Nov 2023 | - | 72.6 (65.9) | 81.7 (69.4) | ✗ | ✗ |
| GPT-4 Turbo | Nov 2023 | - | 85.4 (81.7) | 83.0 (70.7) | ✗ | ✗ |
| CODELLAMA-PYTHON | Aug 2023 | 34B | 51.8 (42.7) | 67.2 (52.9) | ✓ | ✗ |
| WizardCoder-CL | Sep 2023 | 34B | 73.2 (64.6) | 73.2 (59.9) | ✓ | ✗ |
| CodeT5+ | May 2023 | 16B | 31.7 (26.2) | 54.6 (44.4) | ✓ | ✓ |
| CodeGen-Mono | Mar 2022 | 16B | 32.9 (27.4) | 52.6 (43.6) | ✓ | ✓ |
| StarCoder | May 2023 | 15B | 34.1 (29.3) | 55.1 (46.1) | ✓ | ✓ |
| CODELLAMA-PYTHON | Aug 2023 | 13B | 42.7 (36.6) | 61.2 (50.9) | ✓ | ✗ |
| WizardCoder-SC | Sep 2023 | 15B | 51.9 (45.1) | 61.9 (50.6) | ✓ | ✗ |
| StarCoder | May 2023 | 7B | 24.4 (20.7) | 33.1 (28.8) | ✓ | ✓ |
| Mistral | Oct 2023 | 7B | 28.7 (23.2) | 50.1 (40.9) | ✓ | ✗ |
| CodeT5+ | May 2023 | 6B | 29.3 (23.8) | 51.9 (40.9) | ✓ | ✓ |
| CodeGen-Mono | Mar 2022 | 6B | 29.3 (25.6) | 49.9 (42.1) | ✓ | ✓ |
| CODELLAMA-PYTHON | Aug 2023 | 7B | 37.8 (34.1) | 57.6 (45.4) | ✓ | ✗ |
| WizardCoder-CL | Sep 2023 | 7B | 48.2 (40.9) | 56.6 (47.1) | ✓ | ✗ |
| Magicoder-CL | Dec 2023 | 7B | 60.4 (55.5) | 64.2 (52.6) | ✓ | ✓ |
| MagicoderS-CL | Dec 2023 | 7B | 70.7 (66.5) | 68.4 (56.6) | ✓ | ✓ |

Table 1 shows the pass@1 results of different LLMs on these benchmarks. From the results, we can first observe that Magicoder-CL has a clear improvement over the base CODELLAMA-PYTHON-7B and outperforms all studied open-source models except CODELLAMA-PYTHON-34B and WizardCoder-CL-34B.
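For reference, the greedy-decoding pass@1 protocol of Section 3.1 reduces to generating one deterministic completion per task and executing the benchmark's tests; the sketch below illustrates this with Hugging Face transformers. The task/check interface is hypothetical, and the reported numbers come from the EvalPlus toolchain rather than this code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def greedy_pass_at_1(model_name: str, tasks: list[dict]) -> float:
    """tasks: each dict holds a 'prompt' string and a 'check' callable that runs
    the benchmark's test cases against a generated completion."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16, device_map="auto")
    passed = 0
    for task in tasks:
        inputs = tokenizer(task["prompt"], return_tensors="pt").to(model.device)
        # Greedy decoding: exactly one deterministic sample per task.
        output = model.generate(**inputs, do_sample=False, max_new_tokens=512)
        completion = tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
        passed += int(task["check"](completion))
    return passed / len(tasks)
```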
Notably, Magicoder-CL surpasses WizardCoder-SC-15B and has a substantial improvement on HumanEval and HumanEval+ over CODELLAMA-PYTHON-34B. MagicoderS-CL demonstrates further improvements from being trained with the orthogonal Evol-Instruct method. MagicoderS-CL outperforms ChatGPT and all other open-source models on HumanEval+. Moreover, although it scores slightly lower than WizardCoder-CL-34B and ChatGPT on HumanEval, it surpasses both of them on the more rigorous HumanEval+ dataset, indicating that MagicoderS-CL may produce more robust code.

3.2. Multilingual Code Generation

In addition to Python, as shown in Table 2, we perform an extensive evaluation on 6 widely used programming languages, i.e., Java, JavaScript, C++, PHP, Swift, and Rust, using the MultiPL-E benchmark (Cassano et al., 2022). We report available results from the WizardCoder paper (Luo et al., 2023b) and evaluate our models consistently through bigcode-evaluation-harness (Ben Allal et al., 2022). We skip proprietary models such as ChatGPT and GPT-4 as they are not supported by the framework. Due to significant inference latency when running WizardCoder-CL-7B with the harness in our environment, we choose not to include it in our analysis.

The results indicate that Magicoder-CL improves the base CODELLAMA-PYTHON-7B by a large margin across all the studied programming languages. Moreover, Magicoder-CL also achieves better results than the SOTA 15B WizardCoder-SC on half of the programming languages. Additionally, MagicoderS-CL demonstrates further improvement over Magicoder-CL on all programming languages, achieving performance comparable to WizardCoder-CL-34B with only 7B parameters. It is worth noting that Magicoder-CL is trained with only very limited multilingual data but still outperforms other LLMs with similar or even larger sizes. Also, although the harness evaluates models in the completion format designed for base models, Magicoders still show significant improvements despite being only instruction-tuned. This implies that LLMs can learn knowledge from the data beyond its format.

3.3. Code Generation for Data Science

The DS-1000 dataset (Lai et al., 2022) contains 1K distinct data science coding issues spanning 7 popular data science libraries in Python. It evaluates the realistic and practical use cases of an LLM and offers unit tests for validating each problem. DS-1000 has both completion and insertion modes, but here we only evaluate completion because the base CODELLAMA-PYTHON does not support infilling. Table 3 shows the evaluation results, where we include the recent INCODER (Fried et al., 2023), CodeGen (Nijkamp et al., 2023), Code-Cushman-001 (Microsoft, 2023a), StarCoder (Li et al., 2023), CODELLAMA-PYTHON (Rozière et al., 2023), and WizardCoder (Luo et al., 2023b). We can see from the table that Magicoder-CL-7B already outperforms all the baselines we evaluate, including the state-of-the-art WizardCoder-CL-7B and WizardCoder-SC-15B. MagicoderS-CL-7B further breaks the limit by introducing an 8.3 percentage point absolute improvement over WizardCoder-SC-15B.

3.4. Comparison with DeepSeek-Coder

DeepSeek-Coder (Guo et al., 2024) is a series of models released concurrently with our work that demonstrates superior coding performance. We only briefly discuss it in this section because its data and instruction tuning details were not publicly available at the time of writing.
We apply the same finetuning strategy to DeepSeek-Coder-Base-6.7B as we did to CODELLAMA-PYTHON-7B, leading to Magicoder-DS and MagicoderS-DS. Table 4 shows a trend similar to Table 1: the base model can be significantly improved after applying OSS-INSTRUCT. Remarkably, the MagicoderS-DS variant surpasses DeepSeek-Coder-Instruct-6.7B on all the benchmarks with 8× fewer training tokens, and it also closely matches DeepSeek-Coder-Instruct-33B on these datasets.

4. Ablations of Data Source

4.1. Impact of the Language Distribution

To understand the correlation between the programming languages appearing in the training data and the downstream performance on different languages, we conduct an additional ablation study of the training data. We classify the 75K training samples into approximately 43K Python-only and 32K non-Python samples according to whether "python" is a substring of the generated data. We do not classify the data based on the seed code snippet because LLMs performing OSS-INSTRUCT may produce code in a different programming language than the seed.

Table 5 shows the evaluation results, where we consistently finetune the base CODELLAMA-PYTHON-7B for 2 epochs on different data partitions using the same training hyperparameters explained in Appendix B. From the table, we can see that, as expected, training on Python or non-Python data substantially boosts the performance of the base model on Python or non-Python tasks, respectively. Interestingly, instruction tuning on different programming languages can still boost the overall coding performance, including on out-of-distribution languages. For example, when trained on only non-Python data, Magicoder-CL still achieves a 10.4 percentage point improvement over the base model in the Python-only evaluation. This implies LLMs can establish correlations between different programming languages and perform transfer learning of deeper code semantics. Finally, we observe a more significant boost in the Python evaluation when combining data from both sources, with a slight decrease in multilingual performance compared with finetuning on multilingual data only. We attribute this decrease to the dominant amount of Python data (around 57%) during instruction tuning.

4.2. OSS-INSTRUCT vs. Direct Finetuning

The fact that OSS-INSTRUCT gets an LLM inspired by open-source code snippets may lead to a natural question: why not directly finetune on this open-source code? To answer this question, we follow CodeSearchNet (Husain et al., 2020) to mine semantically relevant comment-function pairs from the same seed document corpus we use to construct the 75K OSS-INSTRUCT dataset. We then train the model to predict the function bodies from the function signatures and comments. We prioritize comment-function pairs that overlap with our 75K seed snippets, resulting in about 11K data points. To align with our 75K samples, we collect the remaining 64K samples from the whole corpus of 75K seed documents. Eventually, we have the same number of comment-function pairs as OSS-INSTRUCT data. We finetune the base CODELLAMA-PYTHON-7B for 2 epochs using the paired data, following the same training setup discussed in Appendix B.

From Table 6, we observe that finetuning on the 75K paired comment-function data even worsens the base model, while OSS-INSTRUCT introduces a substantial boost. We conjecture that the degradation is owing to the substantial noise and inconsistency that exist intrinsically in the data pairs, even though these paired data exhibit a format very similar to HumanEval or MultiPL-E problems. This further shows that data factuality, rather than format, is essential to code instruction tuning. It also indicates the superiority of OSS-INSTRUCT, which can translate these loosely related code fragments into semantically consistent instruction-tuning data.
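As a hedged illustration of the comment-function baseline above, pairs of this kind can be mined from the Python portion of the seed corpus with the standard ast module; this sketch is not the paper's exact mining pipeline (which follows CodeSearchNet across multiple languages):

```python
import ast

def mine_comment_function_pairs(source: str) -> list[dict]:
    """Collect (signature + docstring, full function) pairs from one Python file."""
    pairs = []
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return pairs  # skip documents that do not parse
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            docstring = ast.get_docstring(node)
            if not docstring:
                continue  # keep only functions carrying a natural-language comment
            args = ", ".join(a.arg for a in node.args.args)
            pairs.append({
                # The prompt exposes the signature plus the comment...
                "prompt": f'def {node.name}({args}):\n    """{docstring}"""',
                # ...and the target is the full definition, so the model learns to
                # produce the body (get_source_segment requires Python 3.8+).
                "completion": ast.get_source_segment(source, node),
            })
    return pairs
```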
Table 2: Pass@1 results of different LLMs on MultiPL-E (Cassano et al., 2022) following the same hyperparameter settings as the WizardCoder paper (Luo et al., 2023b): temperature = 0.2, top_p = 0.95, max_length = 512, and num_samples = 50. We evaluate all 7B models using bigcode-evaluation-harness (Ben Allal et al., 2022) and report other results from WizardCoder.

| Model | Size | Java | JavaScript | C++ | PHP | Swift | Rust |
|---|---|---|---|---|---|---|---|
| CODELLAMA | 34B | 40.2 | 41.7 | 41.4 | 40.4 | 35.3 | 38.7 |
| CODELLAMA-PYTHON | 34B | 39.5 | 44.7 | 39.1 | 39.8 | 34.3 | 39.7 |
| CODELLAMA-INSTRUCT | 34B | 41.5 | 45.9 | 41.5 | 37.0 | 37.6 | 39.3 |
| WizardCoder-CL | 34B | 44.9 | 55.3 | 47.2 | 47.2 | 44.3 | 46.2 |
| StarCoderBase | 15B | 28.5 | 31.7 | 30.6 | 26.8 | 16.7 | 24.5 |
| StarCoder | 15B | 30.2 | 30.8 | 31.6 | 26.1 | 22.7 | 21.8 |
| WizardCoder-SC | 15B | 35.8 | 41.9 | 39.0 | 39.3 | 33.7 | 27.1 |
| CODELLAMA | 7B | 29.3 | 31.7 | 27.0 | 25.1 | 25.6 | 25.5 |
| CODELLAMA-PYTHON | 7B | 29.1 | 35.7 | 30.2 | 29.0 | 27.1 | 27.0 |
| Magicoder-CL | 7B | 36.4 | 45.9 | 36.5 | 39.5 | 33.4 | 30.6 |
| MagicoderS-CL | 7B | 42.9 | 57.5 | 44.4 | 47.6 | 44.1 | 40.3 |

Table 3: Pass@1 results on DS-1000 (completion format) with temperature = 0.2, top_p = 0.5, max_length = 1024, and num_samples = 40, following the same hyperparameter setting used in WizardCoder (Luo et al., 2023b). We evaluate all the 7B models with their preferred prompt formats and report other results from WizardCoder. Column headers give the number of problems per library (1000 overall).

| Model | Size | Matplotlib (155) | NumPy (220) | Pandas (291) | PyTorch (68) | SciPy (106) | Sklearn (115) | TensorFlow (45) | Overall |
|---|---|---|---|---|---|---|---|---|---|
| INCODER | 6.7B | 28.3 | 4.4 | 3.1 | 4.4 | 2.8 | 2.8 | 3.8 | 7.4 |
| CodeGen-Mono | 16B | 31.7 | 10.9 | 3.4 | 7.0 | 9.0 | 10.8 | 15.2 | 11.7 |
| Code-Cushman-001 | - | 40.7 | 21.8 | 7.9 | 12.4 | 11.3 | 18.0 | 12.2 | 18.1 |
| StarCoder | 15B | 51.7 | 29.7 | 11.4 | 21.4 | 20.2 | 29.5 | 24.5 | 26.0 |
| WizardCoder-SC | 15B | 55.2 | 33.6 | 16.7 | 26.2 | 24.2 | 24.9 | 26.7 | 29.2 |
| CODELLAMA-PYTHON | 7B | 55.3 | 34.5 | 16.4 | 19.9 | 22.3 | 17.6 | 28.5 | 28.0 |
| WizardCoder-CL | 7B | 53.5 | 34.4 | 15.2 | 25.7 | 21.0 | 24.5 | 28.9 | 28.4 |
| Magicoder-CL | 7B | 54.6 | 34.8 | 19.0 | 24.7 | 25.0 | 22.6 | 28.9 | 29.9 |
| MagicoderS-CL | 7B | 55.9 | 40.6 | 28.4 | 40.4 | 28.8 | 35.8 | 37.6 | 37.5 |

Table 4: Pass@1 (greedy decoding) comparison between Magicoder and DeepSeek-Coder (Guo et al., 2024) on HumanEval (+) and MBPP (+). DeepSeek-Coder results are reported from the EvalPlus (Liu et al., 2023b) leaderboard.

| Model | Size | Training Tokens | HumanEval (+) | MBPP (+) | Open Weight | Open Data |
|---|---|---|---|---|---|---|
| DeepSeek-Coder-Base | 1.3B | 2T | - | 55.4 (46.9) | ✓ | ✗ |
| DeepSeek-Coder-Base | 6.7B | 2T | 47.6 (39.6) | 70.2 (56.6) | ✓ | ✗ |
| DeepSeek-Coder-Base | 33B | 2T | 51.2 (43.3) | - | ✓ | ✗ |
| DeepSeek-Coder-Instruct | 1.3B | +2B | 64.6 (58.5) | 63.7 (53.1) | ✓ | ✗ |
| DeepSeek-Coder-Instruct | 6.7B | +2B | 73.8 (70.1) | 72.7 (63.4) | ✓ | ✗ |
| DeepSeek-Coder-Instruct | 33B | +2B | 78.7 (72.6) | 78.7 (66.7) | ✓ | ✗ |
| Magicoder-DS | 6.7B | +90M | 66.5 (60.4) | 75.4 (61.9) | ✓ | ✓ |
| MagicoderS-DS | 6.7B | +240M | 76.8 (70.7) | 75.7 (64.4) | ✓ | ✓ |
Table 5: Ablation study of using different programming languages as training data. We show the pass@1 results on HumanEval+ (Liu et al., 2023b) for Python and the average pass@1 results on MultiPL-E (Cassano et al., 2022) for the same set of programming languages used in Table 2 (i.e., Java, JavaScript, C++, PHP, Swift, and Rust). All the variants are finetuned for 2 epochs and evaluated with greedy decoding.

| Model (7B) | Finetuning Data | Python (HumanEval+) | Others (MultiPL-E) |
|---|---|---|---|
| CODELLAMA-PYTHON | - | 34.1 | 29.6 |
| Magicoder-CL | Python (43K) | 47.6 | 32.7 |
| Magicoder-CL | Others (32K) | 44.5 | 38.3 |
| Magicoder-CL | Both (75K) | 55.5 | 37.8 |

Table 6: Comparison between OSS-INSTRUCT and directly finetuning on comment-function pairs with CODELLAMA-PYTHON-7B as the base model.

| Finetuning Data | HumanEval+ | MultiPL-E |
|---|---|---|
| Base model w/o finetuning | 34.1 | 29.6 |
| Comment-function pairs (75K) | 34.1 | 24.1 |
| OSS-INSTRUCT (75K) | 55.5 | 37.8 |

4.3. OSS-INSTRUCT with a Less Powerful Teacher

In this section, we explore the factors contributing to the effectiveness of OSS-INSTRUCT beyond just the distillation of the teacher model. We propose two potential key reasons. First, since the base model is pretrained on comprehensive code data, the distillation process likely activates the model's internal capabilities, leading to improved performance in coding tasks. Second, OSS-INSTRUCT uses seed code snippets to generate problem-solution pairs in one shot. These seed snippets provide valuable context, enabling the model to create better solutions than a plain teacher model lacking such seed information. These enhanced solutions can then be used to train more effective student models. To verify these points, we conduct an additional experiment by generating a subset of 20K OSS-INSTRUCT data using Mixtral-8x7B-Instruct-v0.1 (Jiang et al., 2024), a state-of-the-art, general-purpose, open-source LLM.

Table 7: Pass@1 on HumanEval+ and MBPP+ when finetuning CODELLAMA-PYTHON-7B for 2 epochs on 20K OSS-INSTRUCT data generated by Mixtral-8x7B-Instruct-v0.1 (Jiang et al., 2024).

| Model | HumanEval+ | MBPP+ |
|---|---|---|
| Mixtral-8x7B-Instruct-v0.1 | 39.6 | 47.4 |
| CODELLAMA-PYTHON-7B | 34.1 | 45.4 |
| Magicoder-CL-Mixtral-7B | 55.5 | 50.4 |

Table 7 indicates that Magicoder-CL-Mixtral-7B not only significantly improves over the base CODELLAMA-PYTHON, but is also better than Mixtral-8x7B-Instruct-v0.1 (i.e., the teacher model) across HumanEval+ and MBPP+. These results suggest that OSS-INSTRUCT is not simply distilling a teacher model, but also triggering the base model's own capability and effectively leveraging the information encapsulated in seed code snippets.

5. Related Work

Foundation models for code. Trained on billions of lines of code, LLMs have demonstrated outstanding performance in a wide range of software engineering tasks, including code generation (Chen et al., 2021; Austin et al., 2021), program repair (Xia & Zhang, 2022; Wei et al., 2023; Xia et al., 2023b; Jiang et al., 2023b; Bouzenia et al., 2024), and software testing (Xia et al., 2023a; Deng et al., 2023; Yuan et al., 2023; Schäfer et al., 2023; Lemieux et al., 2023). In particular, prominent base models, such as CodeGen (Nijkamp et al., 2023), CodeT5 (Wang et al., 2021), StarCoder (Li et al., 2023), and CODELLAMA (Rozière et al., 2023), are pretrained from scratch on huge codebases, establishing the fundamental ability of general code generation and understanding. More recent code LLMs, such as DeepSeek-Coder (Guo et al., 2024) and StarCoder2 (Lozhkov et al., 2024), additionally organize the pretraining data at the repository level to enhance the model's contextual understanding capabilities. Furthermore, these base models are also finetuned (Luo et al., 2023b) or prompted (Chen et al., 2023) to unlock their true potential to specialize in solving domain-specific coding tasks.
Instruction tuning with synthetic data. Instruction tuning aims to improve pretrained LLMs by finetuning them with a mixture of instructions and corresponding responses (Wei et al., 2022). However, obtaining high-quality instructional data is oftentimes laborious. Hence, researchers are increasingly focusing on developing methods to generate synthetic instruction data. Wang et al. (2023a) introduce SELF-INSTRUCT, where a foundation LLM (GPT-3 (Brown et al., 2020)) is used to generate synthetic instruction-response pairs with carefully crafted prompts. The same LLM is then instruction-tuned on the synthetic data to distill such self-generated knowledge. This technique has been further extended to create synthetic data with different LLMs. For example, Alpaca (Taori et al., 2023) and Code Alpaca (Chaudhary, 2023) apply SELF-INSTRUCT to finetune LLAMA with ChatGPT-generated instructions. To improve SELF-INSTRUCT, WizardLM (Xu et al., 2023) and WizardCoder (Luo et al., 2023a) propose Evol-Instruct and Code Evol-Instruct, which guide ChatGPT with heuristic prompts to make the synthetic data more complex and diverse. More recently, Gunasekar et al. (2023) show that textbook-quality synthetic data alone can help a model achieve remarkable coding and reasoning capabilities. Orthogonal to all existing methods, our proposed OSS-INSTRUCT allows LLMs to get inspired by real-world code snippets for better controllability, quality, and creativity in coding tasks.

Evaluating LLMs for code. Most code benchmarks evaluate LLMs on generating single-function programs from natural language descriptions. Such benchmarks include HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), APPS (Hendrycks et al., 2021), and CodeContests (Li et al., 2022). A handful of manual tests are used to assess the functional correctness of LLM-generated solutions. However, insufficient tests can lead to false negatives. Consequently, the EvalPlus framework (Liu et al., 2023b) produces HumanEval+ and MBPP+ by adding 80×/35× more tests. To address dataset contamination issues, researchers propose LiveCodeBench (Jain et al., 2024), which compiles fresh coding problems not included in model training, and EvoEval (Xia et al., 2024), which strategically leverages LLMs to evolve existing benchmarks into new coding tasks. Meanwhile, there are comprehensive benchmarks evaluating code generation for data science (DS-1000 (Lai et al., 2022)), addressing open-source issues (SWE-bench (Jimenez et al., 2023)), and repository-level code generation (CROSSCODEEVAL (Ding et al., 2023) and RepoEval (Zhang et al., 2023)).

6. Conclusion and Future Work

We propose OSS-INSTRUCT, a novel data generation method that uses Large Language Models to generate diverse coding challenges from open-source code snippets. This approach enables Magicoder, which significantly improves the base LLM. Despite having no more than 7B parameters, it outperforms all evaluated LLMs with less than or equal to 16B parameters, including the 15B WizardCoder. Combining OSS-INSTRUCT with Evol-Instruct allows us to build the enhanced MagicoderS models. They achieve remarkable results, rivaling leading models like ChatGPT on the HumanEval benchmarks. We fully open source the model weights, training data, and source code to enable future research in LLMs for code. In the near future, we will apply OSS-INSTRUCT to larger base models.
We will also continue advancing OSS-INSTRUCT by generating higher-quality data with a strategically designed distribution of the seed code snippets and with more advanced teacher LLMs such as GPT-4.

Acknowledgement

We thank all the reviewers for their insightful comments and suggestions for our paper. This work was partially supported by NSF grant CCF-2131943, as well as Kwai Inc.

Impact Statement

This work is motivated by the goal of boosting large language models' code generation and understanding capabilities through instruction tuning. The proposed OSS-INSTRUCT method leverages the abundance of open source to generate diverse and controllable instruction data. We expect this idea to also foster innovative software solutions tailored to domain-specific needs, particularly in areas where real data is private and scarce, by generating extensive synthetic data. Additionally, our method reinforces the value of community-driven content and knowledge sharing by incorporating open-source code as references. However, it is essential to recognize the potential for misuse, such as the deliberate generation of vulnerable code that can be exploited for malicious purposes. Ultimately, adhering to ethical guidelines is crucial to ensure the responsible use of this technique.

References

Allal, L. B., Li, R., Kocetkov, D., Mou, C., Akiki, C., Ferrandis, C. M., Muennighoff, N., Mishra, M., Gu, A., Dey, M., Umapathi, L. K., Anderson, C. J., Zi, Y., Poirier, J. L., Schoelkopf, H., Troshin, S., Abulkhanov, D., Romero, M., Lappert, M., Toni, F. D., del Río, B. G., Liu, Q., Bose, S., Bhattacharyya, U., Zhuo, T. Y., Yu, I., Villegas, P., Zocca, M., Mangrulkar, S., Lansky, D., Nguyen, H., Contractor, D., Villa, L., Li, J., Bahdanau, D., Jernite, Y., Hughes, S., Fried, D., Guha, A., de Vries, H., and von Werra, L. SantaCoder: don't reach for the stars!, 2023.

Austin, J., Odena, A., Nye, M. I., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C. J., Terry, M., Le, Q. V., and Sutton, C. Program synthesis with large language models. CoRR, abs/2108.07732, 2021. URL https://arxiv.org/abs/2108.07732.

Ben Allal, L., Muennighoff, N., Kumar Umapathi, L., Lipkin, B., and von Werra, L. A framework for the evaluation of code generation models. https://github.com/bigcode-project/bigcode-evaluation-harness, 2022.

Bouzenia, I., Devanbu, P., and Pradel, M. RepairAgent: An autonomous, LLM-based agent for program repair. arXiv preprint arXiv:2403.17134, 2024.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

Cambronero, J., Gulwani, S., Le, V., Perelman, D., Radhakrishna, A., Simon, C., and Tiwari, A. FlashFill++: Scaling programming by example by cutting to the chase. Proc. ACM Program. Lang., 7(POPL), January 2023. doi: 10.1145/3571226. URL https://doi.org/10.1145/3571226.
Cassano, F., Gouwar, J., Nguyen, D., Nguyen, S., Phipps Costin, L., Pinckney, D., Yee, M.-H., Zi, Y., Anderson, C. J., Feldman, M. Q., Guha, A., Greenberg, M., and Jangda, A. Multipl-e: A scalable and extensible approach to benchmarking neural code generation, 2022. Chaudhary, S. Code alpaca: An instruction-following llama model for code generation. https://github.com/ sahil280114/codealpaca, 2023. Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto, H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F. P., Cummings, D., Plappert, M., Chantzis, F., Barnes, E., Herbert-Voss, A., Guss, W. H., Nichol, A., Paino, A., Tezak, N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A. N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A., Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., Mc Grew, B., Amodei, D., Mc Candlish, S., Sutskever, I., and Zaremba, W. Evaluating large language models trained on code, 2021. Chen, X., Lin, M., Sch arli, N., and Zhou, D. Teaching large language models to self-debug, 2023. Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., and Schulman, J. Training verifiers to solve math word problems, 2021. Deng, Y., Xia, C. S., Peng, H., Yang, C., and Zhang, L. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models, 2023. Ding, Y., Wang, Z., Ahmad, W. U., Ding, H., Tan, M., Jain, N., Ramanathan, M. K., Nallapati, R., Bhatia, P., Roth, D., and Xiang, B. Crosscodeeval: A diverse and multilingual benchmark for cross-file code completion. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. URL https://openreview.net/forum? id=wg Dcb BMSfh. Feng, Y., Martins, R., Bastani, O., and Dillig, I. Program synthesis using conflict-driven learning. SIGPLAN Not., 53(4):420 435, jun 2018. ISSN 0362-1340. doi: 10. 1145/3296979.3192382. URL https://doi.org/ 10.1145/3296979.3192382. Fried, D., Aghajanyan, A., Lin, J., Wang, S., Wallace, E., Shi, F., Zhong, R., Yih, S., Zettlemoyer, L., and Lewis, M. Incoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations, 2023. URL https: //openreview.net/forum?id=h Qwb-lb M6EL. Gulwani, S., Polozov, O., and Singh, R. Program synthesis. Foundations and Trends in Programming Languages, 4(1-2):1 119, 2017. ISSN 2325-1107. doi: 10.1561/2500000010. URL http://dx.doi.org/ 10.1561/2500000010. Gunasekar, S., Zhang, Y., Aneja, J., Mendes, C. C. T., Giorno, A. D., Gopi, S., Javaheripi, M., Kauffmann, P., de Rosa, G., Saarikivi, O., Salim, A., Shah, S., Behl, H. S., Wang, X., Bubeck, S., Eldan, R., Kalai, A. T., Lee, Y. T., and Li, Y. Textbooks are all you need, 2023. Guo, D., Zhu, Q., Yang, D., Xie, Z., Dong, K., Zhang, W., Chen, G., Bi, X., Wu, Y., Li, Y. K., Luo, F., Xiong, Y., and Liang, W. Deepseek-coder: When the large language model meets programming the rise of code intelligence, 2024. Hendrycks, D., Basart, S., Kadavath, S., Mazeika, M., Arora, A., Guo, E., Burns, C., Puranik, S., He, H., Song, D., and Steinhardt, J. Measuring coding challenge competence with apps, 2021. 
Magicoder: Empowering Code Generation with OSS-INSTRUCT Honovich, O., Scialom, T., Levy, O., and Schick, T. Unnatural instructions: Tuning language models with (almost) no human labor. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 14409 14428, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.806. URL https: //aclanthology.org/2023.acl-long.806. Hugging Face. Hugging face: The ai community building the future. https://huggingface.co/, 2023. Accessed: 2023-12-01. Husain, H., Wu, H.-H., Gazit, T., Allamanis, M., and Brockschmidt, M. Codesearchnet challenge: Evaluating the state of semantic code search, 2020. Jain, N., Han, K., Gu, A., Li, W.-D., Yan, F., Zhang, T., Wang, S., Solar-Lezama, A., Sen, K., and Stoica, I. Livecodebench: Holistic and contamination free evaluation of large language models for code, 2024. Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.- A., Stock, P., Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mistral 7b, 2023a. Jiang, A. Q., Sablayrolles, A., Roux, A., Mensch, A., Savary, B., Bamford, C., Chaplot, D. S., de las Casas, D., Hanna, E. B., Bressand, F., Lengyel, G., Bour, G., Lample, G., Lavaud, L. R., Saulnier, L., Lachaux, M.-A., Stock, P., Subramanian, S., Yang, S., Antoniak, S., Scao, T. L., Gervet, T., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. Mixtral of experts, 2024. Jiang, N., Liu, K., Lutellier, T., and Tan, L. Impact of code language models on automated program repair, 2023b. Jimenez, C. E., Yang, J., Wettig, A., Yao, S., Pei, K., Press, O., and Narasimhan, K. Swe-bench: Can language models resolve real-world github issues?, 2023. Kocetkov, D., Li, R., Allal, L. B., Li, J., Mou, C., Ferrandis, C. M., Jernite, Y., Mitchell, M., Hughes, S., Wolf, T., Bahdanau, D., von Werra, L., and de Vries, H. The stack: 3 tb of permissively licensed source code, 2022. Lai, Y., Li, C., Wang, Y., Zhang, T., Zhong, R., Zettlemoyer, L., tau Yih, S. W., Fried, D., Wang, S., and Yu, T. Ds1000: A natural and reliable benchmark for data science code generation, 2022. Lemieux, C., Inala, J. P., Lahiri, S. K., and Sen, S. Codamosa: Escaping coverage plateaus in test generation with pre-trained large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 919 931. IEEE, 2023. Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., Gontier, N., Meade, N., Zebaze, A., Yee, M.-H., Umapathi, L. K., Zhu, J., Lipkin, B., Oblokulov, M., Wang, Z., Murthy, R., Stillerman, J., Patel, S. S., Abulkhanov, D., Zocca, M., Dey, M., Zhang, Z., Fahmy, N., Bhattacharyya, U., Yu, W., Singh, S., Luccioni, S., Villegas, P., Kunakov, M., Zhdanov, F., Romero, M., Lee, T., Timor, N., Ding, J., Schlesinger, C., Schoelkopf, H., Ebert, J., Dao, T., Mishra, M., Gu, A., Robinson, J., Anderson, C. J., Dolan-Gavitt, B., Contractor, D., Reddy, S., Fried, D., Bahdanau, D., Jernite, Y., Ferrandis, C. M., Hughes, S., Wolf, T., Guha, A., von Werra, L., and de Vries, H. Starcoder: may the source be with you!, 2023. 
Li, Y., Choi, D., Chung, J., Kushman, N., Schrittwieser, J., Leblond, R., Eccles, T., Keeling, J., Gimeno, F., Dal Lago, A., Hubert, T., Choy, P., de Masson d Autume, C., Babuschkin, I., Chen, X., Huang, P.-S., Welbl, J., Gowal, S., Cherepanov, A., Molloy, J., Mankowitz, D. J., Sutherland Robson, E., Kohli, P., de Freitas, N., Kavukcuoglu, K., and Vinyals, O. Competitionlevel code generation with alphacode. Science, 378 (6624):1092 1097, December 2022. ISSN 1095-9203. doi: 10.1126/science.abq1158. URL http://dx.doi. org/10.1126/science.abq1158. Liu, J., Peng, J., Wang, Y., and Zhang, L. Neuri: Diversifying dnn generation via inductive rule inference. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, pp. 657 669, New York, NY, USA, 2023a. Association for Computing Machinery. ISBN 9798400703270. doi: 10.1145/3611643.3616337. URL https://doi. org/10.1145/3611643.3616337. Liu, J., Xia, C. S., Wang, Y., and Zhang, L. Is your code generated by chat GPT really correct? rigorous evaluation of large language models for code generation. In Thirtyseventh Conference on Neural Information Processing Systems, 2023b. URL https://openreview.net/ forum?id=1qvx610Cu7. Lozhkov, A., Li, R., Allal, L. B., Cassano, F., Lamy-Poirier, J., Tazi, N., Tang, A., Pykhtar, D., Liu, J., Wei, Y., Liu, T., Tian, M., Kocetkov, D., Zucker, A., Belkada, Y., Wang, Z., Liu, Q., Abulkhanov, D., Paul, I., Li, Z., Li, W.-D., Risdal, M., Li, J., Zhu, J., Zhuo, T. Y., Zheltonozhskii, E., Dade, N. O. O., Yu, W., Krauß, L., Jain, N., Su, Y., He, X., Dey, M., Abati, E., Chai, Y., Muennighoff, N., Tang, X., Oblokulov, M., Akiki, C., Marone, M., Mou, C., Mishra, M., Gu, A., Hui, B., Dao, T., Zebaze, A., Dehaene, O., Patry, N., Xu, C., Mc Auley, J., Hu, H., Magicoder: Empowering Code Generation with OSS-INSTRUCT Scholak, T., Paquet, S., Robinson, J., Anderson, C. J., Chapados, N., Patwary, M., Tajbakhsh, N., Jernite, Y., Ferrandis, C. M., Zhang, L., Hughes, S., Wolf, T., Guha, A., von Werra, L., and de Vries, H. Starcoder 2 and the stack v2: The next generation, 2024. Luo, Z., Xu, C., Zhao, P., Sun, Q., Geng, X., Hu, W., Tao, C., Ma, J., Lin, Q., and Jiang, D. Wizardcoder: Empowering code large language models with evol-instruct. ar Xiv preprint ar Xiv:2306.08568, 2023a. Luo, Z., Xu, C., Zhao, P., Sun, Q., Geng, X., Hu, W., Tao, C., Ma, J., Lin, Q., and Jiang, D. Wizardcoder: Empowering code large language models with evol-instruct, 2023b. Microsoft. Azure openai service models. https: //learn.microsoft.com/en-us/azure/ cognitive-services/openai/concepts/ models, 2023a. Microsoft. Git Hub Copilot Your AI pair programmer. https://github.com/features/ copilot, 2023b. Muennighoff, N., Liu, Q., Zebaze, A., Zheng, Q., Hui, B., Zhuo, T. Y., Singh, S., Tang, X., von Werra, L., and Longpre, S. Octopack: Instruction tuning code large language models, 2023. Nijkamp, E., Pang, B., Hayashi, H., Tu, L., Wang, H., Zhou, Y., Savarese, S., and Xiong, C. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2023. URL https: //openreview.net/forum?id=ia Yc JKp Y2B_. Olausson, T. X., Inala, J. P., Wang, C., Gao, J., and Solar-Lezama, A. Is self-repair a silver bullet for code generation? In The Twelfth International Conference on Learning Representations, 2024. URL https:// openreview.net/forum?id=y0GJXRung R. Open AI. 
Chatgpt: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/, 2022. Open AI. Gpt-4 technical report, 2023. Rozi ere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Remez, T., Rapin, J., Kozhevnikov, A., Evtimov, I., Bitton, J., Bhatt, M., Ferrer, C. C., Grattafiori, A., Xiong, W., D efossez, A., Copet, J., Azhar, F., Touvron, H., Martin, L., Usunier, N., Scialom, T., and Synnaeve, G. Code llama: Open foundation models for code, 2023. Sch afer, M., Nadi, S., Eghbali, A., and Tip, F. An empirical evaluation of using large language models for automated unit test generation. IEEE Transactions on Software Engineering, 2023. Services, A. W. AI Code Generator - Amazon Code Whisperer - AWS. https://aws.amazon.com/ codewhisperer/, 2023. Shazeer, N. and Stern, M. Adafactor: Adaptive learning rates with sublinear memory cost, 2018. SPARCK JONES, K. A statistical interpretation of term specificity and its application in retrieval. 28(1):11 21, 2023/11/30 1972. doi: 10.1108/eb026526. URL https: //doi.org/10.1108/eb026526. Su, H., Shi, W., Kasai, J., Wang, Y., Hu, Y., Ostendorf, M., Yih, W.-t., Smith, N. A., Zettlemoyer, L., and Yu, T. One embedder, any task: Instruction-finetuned text embeddings. 2022. URL https://arxiv.org/abs/ 2212.09741. Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P., and Hashimoto, T. B. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/ stanford_alpaca, 2023. theblackcat102. The evolved code alpaca dataset. https://huggingface.co/datasets/ theblackcat102/evol-codealpaca-v1, 2023. Wang, X., Dillig, I., and Singh, R. Program synthesis using abstraction refinement. Proc. ACM Program. Lang., 2 (POPL), dec 2017. doi: 10.1145/3158151. URL https: //doi.org/10.1145/3158151. Wang, Y., Wang, W., Joty, S., and Hoi, S. C. Code T5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Moens, M.-F., Huang, X., Specia, L., and Yih, S. W.-t. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 8696 8708, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.685. URL https:// aclanthology.org/2021.emnlp-main.685. Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., and Hajishirzi, H. Self-instruct: Aligning language models with self-generated instructions. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 13484 13508, Toronto, Canada, July 2023a. Association for Computational Linguistics. doi: 10.18653/v1/ 2023.acl-long.754. URL https://aclanthology. org/2023.acl-long.754. Magicoder: Empowering Code Generation with OSS-INSTRUCT Wang, Y., Le, H., Gotmare, A. D., Bui, N. D. Q., Li, J., and Hoi, S. C. H. Codet5+: Open code large language models for code understanding and generation, 2023b. Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. Finetuned language models are zero-shot learners, 2022. Wei, Y., Xia, C. S., and Zhang, L. Copiloting the copilots: Fusing large language models with completion engines for automated program repair, 2023. Xia, C. S. and Zhang, L. Less training, more repairing please: Revisiting automated program repair via zeroshot learning, 2022. Xia, C. S. and Zhang, L. 
Keep the conversation going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT. arXiv preprint arXiv:2304.00385, 2023.

Xia, C. S., Paltenghi, M., Tian, J. L., Pradel, M., and Zhang, L. Universal fuzzing via large language models, 2023a.

Xia, C. S., Wei, Y., and Zhang, L. Automated program repair in the era of large pre-trained language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1482–1494, 2023b. doi: 10.1109/ICSE48619.2023.00129.

Xia, C. S., Deng, Y., and Zhang, L. Top leaderboard ranking = top coding proficiency, always? EvoEval: Evolving coding benchmarks via LLM, 2024.

Xu, C., Sun, Q., Zheng, K., Geng, X., Zhao, P., Feng, J., Tao, C., and Jiang, D. WizardLM: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.

Yu, Y., Zhuang, Y., Zhang, J., Meng, Y., Ratner, A., Krishna, R., Shen, J., and Zhang, C. Large language model as attributed training data generator: A tale of diversity and bias, 2023.

Yuan, Z., Lou, Y., Liu, M., Ding, S., Wang, K., Chen, Y., and Peng, X. No more manual tests? Evaluating and improving ChatGPT for unit test generation. arXiv preprint arXiv:2305.04207, 2023.

Zhang, F., Chen, B., Zhang, Y., Keung, J., Liu, J., Zan, D., Mao, Y., Lou, J.-G., and Chen, W. RepoCoder: Repository-level code completion through iterative retrieval and generation, 2023.

A. More Details of OSS-INSTRUCT

A.1. Prompt Design

Figure 4 illustrates the prompt template of OSS-INSTRUCT, where the first section presents a high-level description of the task, the second section incorporates the code snippet, and the third section offers guidelines on the response.

You are exceptionally skilled at crafting high-quality programming problems and offering precise solutions.
Please gain inspiration from the following random code snippet to create a high-quality programming problem. Present your output in two distinct sections: [Problem Description] and [Solution].
Code snippet for inspiration:
{code}
Guidelines for each section:
1. [Problem Description]: This should be **completely self-contained**, providing all the contextual information one needs to understand and solve the problem. Assume common programming knowledge, but ensure that any specific context, variables, or code snippets pertinent to this problem are explicitly included.
2. [Solution]: Offer a comprehensive, **correct** solution that accurately addresses the [Problem Description] you provided.

Figure 4: The detailed prompt design for OSS-INSTRUCT.

A.2. Qualitative Examples

Figure 5 extends Figure 2 and shows more qualitative examples of OSS-INSTRUCT generation, including outputs that are inspired by method definitions, shell scripts, library imports, class signatures, code statements, and code comments.

A.3. Breakdown of OSS-INSTRUCT Dataset Categories

To study the categories of OSS-INSTRUCT-generated data, we use INSTRUCTOR (Su et al., 2022), which is one of the SOTA embedding models and can generate different text embeddings according to a task instruction. Inspired by OctoPack (Muennighoff et al., 2023) and the topic tags on GitHub, we manually designed 10 categories specific to coding. As shown in Figure 6, we calculate the cosine similarity between the embeddings of each sample in OSS-INSTRUCT and the 10 categories to obtain the category breakdown. Overall, OSS-INSTRUCT exhibits diversity and balance across different categories.
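A sketch of this category assignment is shown below. It assumes the InstructorEmbedding package for the INSTRUCTOR model, and the category strings are illustrative placeholders rather than the paper's exact 10 categories:

```python
import numpy as np
from InstructorEmbedding import INSTRUCTOR  # assumed packaging of the INSTRUCTOR model

# Illustrative labels only; the paper manually designs 10 coding-specific categories.
CATEGORIES = [
    "Algorithmic and data structure problems",
    "Database and SQL problems",
    "Machine learning and data science problems",
]

model = INSTRUCTOR("hkunlp/instructor-large")
instruction = "Represent the coding problem for classification:"
category_embs = model.encode([[instruction, c] for c in CATEGORIES])

def categorize(problem: str) -> str:
    """Assign a sample to the category whose embedding is most cosine-similar."""
    emb = model.encode([[instruction, problem]])[0]
    sims = category_embs @ emb / (
        np.linalg.norm(category_embs, axis=1) * np.linalg.norm(emb))
    return CATEGORIES[int(np.argmax(sims))]
```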
Length distribution. We depict the length distribution of both generated problems and solutions in Figure 7. The x-axis represents the number of tokens in each problem/solution, while the y-axis shows the corresponding number of samples.

B. Implementation Details

B.1. Data Generation

We use gpt-3.5-turbo-1106 as the foundation model to perform OSS-INSTRUCT due to its high cost-effectiveness. We randomly extract 1-15 lines from each selected code document in starcoderdata and let gpt-3.5-turbo-1106 imagine a self-contained coding problem and a correct solution. Given the numerous seed code snippets, we perform greedy decoding to maximize the consistency between the generated problems and solutions.
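For concreteness, the sketch below shows one way this generation step could be implemented with the OpenAI Python client (v1 interface) and the Figure 4 prompt; the consecutive-line snippet sampling and the prompt_template argument are simplifying assumptions rather than the exact pipeline.

# Simplified sketch of OSS-INSTRUCT data generation: sample a short seed snippet
# from a code document and ask gpt-3.5-turbo-1106 to turn it into a self-contained
# problem plus solution. prompt_template is assumed to hold the Figure 4 prompt
# with a {code} placeholder; sampling consecutive lines is a simplification.
import random
from openai import OpenAI

client = OpenAI()

def sample_seed_snippet(document: str, min_lines: int = 1, max_lines: int = 15) -> str:
    lines = document.splitlines()
    n = random.randint(min_lines, min(max_lines, len(lines)))
    start = random.randint(0, len(lines) - n)
    return "\n".join(lines[start:start + n])

def generate_problem_and_solution(document: str, prompt_template: str) -> str:
    prompt = prompt_template.format(code=sample_seed_snippet(document))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # temperature 0 approximates greedy decoding
    )
    return response.choices[0].message.content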
Figure 5: More examples showing how OSS-INSTRUCT generates problems and solutions from seed code snippets. Detailed problem requirements, implementations, and explanations are omitted for brevity. The depicted seed/problem pairs are:
- Method definition (a JavaScript render() method): complete the render method of a ShapeRenderer class so that it returns the rendered shape as a string.
- Library imports (numpy, gym_electric_motor, matplotlib): create a reinforcement learning agent that controls an electric motor through the OpenAI Gym environment.
- Class signature (a Spring Boot application importing AxonConfig): create a simple Java Spring Boot banking application with commands and events for account creation, deposits, and withdrawals.
- Code statements (computing cutoff colors with np.ptp and plt.get_cmap): implement a function that calculates color values for a given set of cutoff values based on a specified color map.
- Shell script (invoking a Python script on a dataset file): create a Python program that generates an error file based on a given dataset.
- Code comment (# Set degrees): implement a Python class that represents a temperature in degrees and converts between Celsius, Fahrenheit, and Kelvin.

Figure 6: The category constitution of OSS-INSTRUCT

Figure 7: Token count distribution of OSS-INSTRUCT-generated problems and solutions

B.2. Data Decontamination

We apply data decontamination before training our Magicoder and Magicoder S models. Following Li et al. (2023), we decontaminate both our 75K OSS-INSTRUCT dataset and the evol-codealpaca-v1 (theblackcat102, 2023) dataset, an open-source reproduction of Evol-Instruct generated by GPT-4 (OpenAI, 2023), by removing exact matches from HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), DS-1000 (Lai et al., 2022), and GSM8K (Cobbe et al., 2021). Eventually, we filtered out 9 problems from the OSS-INSTRUCT dataset and 89 from evol-codealpaca-v1.

B.3. Training

We employ CODELLAMA-PYTHON-7B and DeepSeek-Coder-Base 6.7B as the base LLMs. To obtain the Magicoder series, we first finetune the base models on the roughly 75K synthetic samples generated through OSS-INSTRUCT using the transformers library from Hugging Face (Hugging Face, 2023). We finetune the base models for 2 epochs on two NVIDIA A100-80GB GPUs through the Distributed Data Parallel (DDP) module from PyTorch. We set the initial learning rate to 5e-5 with 15 warmup steps and a linear scheduler. We use Adafactor (Shazeer & Stern, 2018) as our optimizer and choose a batch size of 512 with a sequence truncation length of 1216. To obtain Magicoder S, we continue to finetune the Magicoder models on the evol-codealpaca-v1 dataset, an open-source Evol-Instruct implementation containing about 110K samples. We use the same hyperparameters, except that the maximum sequence length is reduced to 1024 (the warmup remains 15 steps).
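As a minimal illustration of this configuration (assuming the Hugging Face Trainer API; the per-device batch size and gradient-accumulation split across the two GPUs is our assumption), the first-stage hyperparameters could be expressed as follows.

# Illustrative TrainingArguments mirroring the first-stage hyperparameters above:
# 2 epochs, initial learning rate 5e-5, 15 warmup steps, a linear schedule, the
# Adafactor optimizer, and a global batch size of 512. The per-device batch size
# and gradient accumulation split over 2 GPUs is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="magicoder-oss-instruct",
    num_train_epochs=2,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_steps=15,
    optim="adafactor",
    per_device_train_batch_size=2,      # 2 GPUs x 2 samples x 128 accumulation steps = 512
    gradient_accumulation_steps=128,
)
# These arguments would then be passed to transformers.Trainer together with the
# tokenized OSS-INSTRUCT dataset (sequences truncated to 1216 tokens).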
C. More Evaluation Results

C.1. Evaluation on APPS for Competitive Programming

We additionally evaluate Magicoder on APPS (Hendrycks et al., 2021), a benchmark suite of competitive programming problems. Following Olausson et al. (2024), we select a subset of 300 problems from the APPS test set. From Table 8, we can observe that the CODELLAMA-PYTHON-based Magicoder-CL significantly outperforms both the base model and WizardCoder-CL. Magicoder S-CL-7B is even better than WizardCoder-SC-15B despite having fewer than half as many parameters. Meanwhile, the DeepSeek-Coder-based Magicoder S-DS achieves the best result among all the evaluated baselines, substantially outperforming the instruction-tuned DeepSeek-Coder-6.7B-Instruct.

Table 8: Pass@1 results on APPS evaluated using greedy decoding in a zero-shot setting.

Model                           Introductory (60)   Interview (180)   Competition (60)   Overall (300)
WizardCoder-SC-15B                           21.7               6.1                1.7             8.3
CODELLAMA-PYTHON-7B                           3.3               2.8                0.0             2.3
WizardCoder-CL-7B                            10.0               3.9                1.7             4.7
Magicoder-CL-7B                              18.3               5.6                1.7             7.3
Magicoder S-CL-7B                            23.3               6.1                1.7             8.7
DeepSeek-Coder-6.7B-Base                     16.7               7.2                0.0             7.7
DeepSeek-Coder-6.7B-Instruct                 23.3               9.4                0.0            10.3
Magicoder-DS-6.7B                            20.0               8.9                1.7             9.7
Magicoder S-DS-6.7B                          28.3              11.7                3.3            13.3

C.2. Fill-in-the-Middle Evaluation on DS-1000

Table 9 shows the evaluation results of Magicoder-DS and Magicoder S-DS on DS-1000 (Lai et al., 2022) (Insertion format), assessing a model's fill-in-the-middle capability. In this experiment, we use DeepSeek-Coder as the base model and exclude CODELLAMA-PYTHON-based results, as CODELLAMA-PYTHON does not support the fill-in-the-middle format. The results highlight Magicoder's superior performance in fill-in-the-middle tasks compared to all other evaluated baselines. This outstanding capability suggests that Magicoder can serve as a valuable copilot for developers.

Table 9: Pass@1 results on DS-1000 (Insertion format) with temperature = 0.2, top-p = 0.5, max length = 1024, and num_samples = 40.

Model                           NumPy   Pandas   PyTorch   SciPy   Sklearn   TensorFlow   Overall
WizardCoder-SC-15B               35.1     20.4      30.4    28.9      32.3         37.8      28.6
DeepSeek-Coder-6.7B-Base         36.3     28.6      15.8    19.3      32.8         35.1      29.3
DeepSeek-Coder-6.7B-Instruct     44.1     27.3      38.2    30.8      38.4         29.6      34.6
Magicoder-DS-6.7B                39.7     31.2      27.4    23.7      44.6         30.2      33.9
Magicoder S-DS-6.7B              43.3     29.5      39.2    26.2      44.5         36.2      35.9

C.3. Impact of Removing Noisy Data

In Section 2.2, we highlight the benefits of preserving certain types of noise in instruction-tuning data. To support this argument, we removed data samples with partially implemented code, such as those containing pass or NotImplemented tokens in Python problems. Table 10 shows the pass@1 performance of CODELLAMA-PYTHON-7B on HumanEval+ when finetuned for 2 epochs on the raw OSS-INSTRUCT dataset compared to the same dataset with noisy samples removed. We can observe that retaining the noisy samples results in slightly better performance.

Table 10: Pass@1 on HumanEval+ when finetuning CODELLAMA-PYTHON-7B with and without noisy samples.

Setting               Dataset Size   HumanEval+
With noisy data                75K         55.5
Without noisy data             68K         54.9

D. Cases where OSS-INSTRUCT Improves Code Understanding

In this section, we compare the base model, DeepSeek-Coder-6.7B-Base, and its OSS-INSTRUCT-tuned version, Magicoder-DS-6.7B, in both in-domain and out-of-domain scenarios, and demonstrate how OSS-INSTRUCT helps improve the base model's instruction-following capability. We provide the base model with an appropriate prompt prefix to make sure it can produce a plausible output format.

D.1. More Precise Understanding of Requirements

The following example shows that, after instruction tuning, the model understands the requirement "check whether the input is valid" more precisely: DeepSeek-Coder-6.7B-Base only considers the integer case, while Magicoder-DS-6.7B effectively checks whether the parameter is of the correct type.

Prompt: Write a function that takes an integer a and returns True if this integer is a cube of some integer number. Note: you should check whether the input is valid.
def is_cube(a):
    if a < 0:
        return False
    x = int(a ** (1/3))
    return x**3 == a

DeepSeek-Coder-6.7B-Base

def is_cube(a):
    if not isinstance(a, int) or a < 0:
        return False  # Invalid input
    # Check if the number is a perfect cube
    x = int(a ** (1/3))
    return x**3 == a

Magicoder-DS-6.7B

D.2. Handling Corner Cases

The example below indicates that Magicoder-DS-6.7B correctly handles the corner case where the input string has two or fewer characters, even though this is not explicitly stated in the prompt. This is also an out-of-domain example, since the Scala programming language is not included in the OSS-INSTRUCT dataset.

Prompt: Implement a Scala function that takes a string as input and returns a new string where all the characters are reversed except the first and last characters. Reply only with the function.

def reverseExceptFirstLast(str: String): String = {
  val chars = str.toCharArray
  val first = chars(0)
  val last = chars(chars.length - 1)
  val middle = chars.slice(1, chars.length - 1).reverse
  first + middle + last
}

DeepSeek-Coder-6.7B-Base

def reverseExceptFirstLast(input: String): String = {
  if (input.length <= 2) {
    input
  } else {
    val reversedMiddle = input.substring(1, input.length - 1).reverse
    input.charAt(0) + reversedMiddle + input.charAt(input.length - 1)
  }
}

Magicoder-DS-6.7B

D.3. Generalizing to Unseen Tasks

We also show that, after instruction tuning, Magicoder-DS-6.7B can respond plausibly to requests unseen during training. This example asks the model to create an easy Python coding exercise, which the base model cannot interpret. Such tasks are not incorporated in the OSS-INSTRUCT dataset either.

Prompt: Create an easy Python coding exercise

def add_numbers(a, b):
    return a + b

DeepSeek-Coder-6.7B-Base

# Given a list of integers, return the sum of all the positive integers in the list.
def sum_positive_integers(nums):
    return sum(num for num in nums if num > 0)

# Test the function
numbers = [1, -2, 3, -4, 5, -6, 7, -8, 9, -10]
print(sum_positive_integers(numbers))  # 25

Magicoder-DS-6.7B

E. Generating Complex Programs

In this section, we present cases where the best-performing Magicoder S-DS-6.7B is capable of generating real-world, complex programs.

E.1. Implementing Snake Game in Python

Prompt: Write a snake game in Python using the turtle library (the game is created by Magicoder).

Response: Here is a simple snake game in Python using the turtle library. This game is a basic version and does not include many features that a full-fledged snake game would have.
import turtle
import time
import random

delay = 0.1

# Score
score = 0
high_score = 0

# Set up the screen
wn = turtle.Screen()
wn.title("Snake Game by Magicoder")
wn.bgcolor("green")
wn.setup(width=600, height=600)
wn.tracer(0)  # Turns off the screen updates

# Snake head
head = turtle.Turtle()
head.speed(0)
head.shape("square")
head.color("black")
head.penup()
head.goto(0, 0)
head.direction = "stop"

# Snake food
food = turtle.Turtle()
food.speed(0)
food.shape("circle")
food.color("red")
food.penup()
food.goto(0, 100)

segments = []

# Pen
pen = turtle.Turtle()
pen.speed(0)
pen.shape("square")
pen.color("white")
pen.penup()
pen.hideturtle()
pen.goto(0, 260)
pen.write("Score: 0 High Score: 0", align="center", font=("Courier", 24, "normal"))

# Functions
def go_up():
    if head.direction != "down":
        head.direction = "up"

def go_down():
    if head.direction != "up":
        head.direction = "down"

def go_left():
    if head.direction != "right":
        head.direction = "left"

def go_right():
    if head.direction != "left":
        head.direction = "right"

def move():
    if head.direction == "up":
        y = head.ycor()
        head.sety(y + 20)
    if head.direction == "down":
        y = head.ycor()
        head.sety(y - 20)
    if head.direction == "left":
        x = head.xcor()
        head.setx(x - 20)
    if head.direction == "right":
        x = head.xcor()
        head.setx(x + 20)

# Keyboard bindings
wn.listen()
wn.onkeypress(go_up, "w")
wn.onkeypress(go_down, "s")
wn.onkeypress(go_left, "a")
wn.onkeypress(go_right, "d")

# Main game loop
while True:
    wn.update()

    # Check for a collision with the border
    if head.xcor() > 290 or head.xcor() < -290 or head.ycor() > 290 or head.ycor() < -290:
        time.sleep(1)
        head.goto(0, 0)
        head.direction = "stop"

        # Hide the segments
        for segment in segments:
            segment.goto(1000, 1000)

        # Clear the segments list
        segments.clear()

        # Reset the score
        score = 0

        # Reset the delay
        delay = 0.1

        # Update the score display
        pen.clear()
        pen.write("Score: {} High Score: {}".format(score, high_score), align="center", font=("Courier", 24, "normal"))

    # Check for a collision with the food
    if head.distance(food) < 20:
        # Move the food to a random spot
        x = random.randint(-280, 280)
        y = random.randint(-280, 280)
        food.goto(x, y)

        # Add a segment
        new_segment = turtle.Turtle()
        new_segment.speed(0)
        new_segment.shape("square")
        new_segment.color("grey")
        new_segment.penup()
        segments.append(new_segment)

        # Shorten the delay
        delay -= 0.001

        # Increase the score
        score += 10
        if score > high_score:
            high_score = score
        pen.clear()
        pen.write("Score: {} High Score: {}".format(score, high_score), align="center", font=("Courier", 24, "normal"))

    # Move the end segments first in reverse order
    for index in range(len(segments) - 1, 0, -1):
        x = segments[index - 1].xcor()
        y = segments[index - 1].ycor()
        segments[index].goto(x, y)

    # Move segment 0 to where the head is
    if len(segments) > 0:
        x = head.xcor()
        y = head.ycor()
        segments[0].goto(x, y)

    move()

    # Check for head collision with the body segments
    for segment in segments:
        if segment.distance(head) < 20:
            time.sleep(1)
            head.goto(0, 0)
            head.direction = "stop"

            # Hide the segments
            for segment in segments:
                segment.goto(1000, 1000)

            # Clear the segments list
            segments.clear()

            # Reset the score
            score = 0

            # Reset the delay
            delay = 0.1

            # Update the score display
            pen.clear()
            pen.write("Score: {} High Score: {}".format(score, high_score), align="center", font=("Courier", 24, "normal"))

    time.sleep(delay)

wn.mainloop()

This game is a simple version of the classic snake game.
The snake starts with a single segment and grows in size when it eats food. The game ends when the snake hits the border or its own body. The score is increased by 10 points for each piece of food eaten. The game speed increases as the snake grows.

E.2. Building Othello Game in Java

Prompt: Build a console-based Othello game in Java with row and column numbers shown on the board. The game should end when there are no more valid moves for either player.

Response: Here is a simple console-based Othello game in Java. It does not include a graphical interface, but it does include the row and column numbers on the board.

import java.util.Scanner;

public class Othello {
    private static final int BOARD_SIZE = 8;
    private static final char EMPTY = '-';
    private static final char BLACK = 'B';
    private static final char WHITE = 'W';
    private static final char[][] board = new char[BOARD_SIZE][BOARD_SIZE];
    private static char currentPlayer = BLACK;

    public static void main(String[] args) {
        initializeBoard();
        printBoard();
        while (true) {
            System.out.println("Player " + currentPlayer + ", enter your move (row column):");
            Scanner scanner = new Scanner(System.in);
            int row = scanner.nextInt();
            int col = scanner.nextInt();
            if (isValidMove(row, col)) {
                makeMove(row, col);
                printBoard();
                if (!hasValidMove()) {
                    System.out.println("Player " + currentPlayer + " has no valid moves. Game over!");
                    break;
                }
                currentPlayer = (currentPlayer == BLACK) ? WHITE : BLACK;
            } else {
                System.out.println("Invalid move. Try again.");
            }
        }
    }

    private static void initializeBoard() {
        for (int i = 0; i < BOARD_SIZE; i++) {
            for (int j = 0; j < BOARD_SIZE; j++) {
                board[i][j] = EMPTY;
            }
        }
        board[3][3] = board[4][4] = BLACK;
        board[3][4] = board[4][3] = WHITE;
    }

    private static void printBoard() {
        System.out.print("  ");
        for (int i = 0; i < BOARD_SIZE; i++) {
            System.out.print(i + " ");
        }
        System.out.println();
        for (int i = 0; i < BOARD_SIZE; i++) {
            System.out.print(i + " ");
            for (int j = 0; j < BOARD_SIZE; j++) {
                System.out.print(board[i][j] + " ");
            }
            System.out.println();
        }
    }

    private static boolean isValidMove(int row, int col) {
        if (board[row][col] != EMPTY) {
            return false;
        }
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                if (i != 0 || j != 0) {
                    if (isValidDirection(row, col, i, j)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    private static boolean isValidDirection(int row, int col, int rowDir, int colDir) {
        int newRow = row + rowDir;
        int newCol = col + colDir;
        while (newRow >= 0 && newRow < BOARD_SIZE && newCol >= 0 && newCol < BOARD_SIZE) {
            if (board[newRow][newCol] == EMPTY) {
                return false;
            }
            if (board[newRow][newCol] == currentPlayer) {
                return true;
            }
            newRow += rowDir;
            newCol += colDir;
        }
        return false;
    }

    private static void makeMove(int row, int col) {
        board[row][col] = currentPlayer;
        for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
                if (i != 0 || j != 0) {
                    flipDirection(row, col, i, j);
                }
            }
        }
    }

    private static void flipDirection(int row, int col, int rowDir, int colDir) {
        int newRow = row + rowDir;
        int newCol = col + colDir;
        while (newRow >= 0 && newRow < BOARD_SIZE && newCol >= 0 && newCol < BOARD_SIZE) {
            if (board[newRow][newCol] == EMPTY) {
                return;
            }
            if (board[newRow][newCol] == currentPlayer) {
                while (true) {
                    newRow -= rowDir;
                    newCol -= colDir;
                    if (newRow == row && newCol == col) {
                        break;
                    }
                    board[newRow][newCol] = currentPlayer;
                }
                return;
            }
            newRow += rowDir;
            newCol += colDir;
        }
    }

    private static boolean hasValidMove() {
        for (int i = 0; i < BOARD_SIZE; i++) {
            for (int j = 0; j < BOARD_SIZE; j++) {
                if (isValidMove(i, j)) {
                    return true;
                }
            }
        }
        return false;
    }
}

This program starts with an 8x8 board with two pieces in the center of the board, one black and one white. The player who goes first is black. The game ends when there are no more valid moves for either player. The player enters their move as two numbers, the row and the column. If the move is valid, the game makes the move and switches the player. If the move is not valid, the player is asked to enter a valid move.

E.3. Writing Specific Gradio Application

Prompt: Write a gradio (3.48.0) application for the following use case: Take an input image and return a 45 degree clockwise rotated image. You should also add text description under the output showing the rotation degree.

Response:

import gradio as gr
import numpy as np
from PIL import Image

def rotate_image(img):
    img = Image.fromarray(img)
    rotated_img = img.rotate(-45)
    return np.array(rotated_img), "45 degree clockwise rotation"

demo = gr.Interface(
    fn=rotate_image,
    inputs=gr.inputs.Image(type="numpy"),
    outputs=[gr.outputs.Image(type="numpy"), "text"],
    title="Image Rotation",
    description="Upload an image and get a 45 degree clockwise rotated image",
)

demo.launch()

This script defines a function rotate_image that takes an image as input, rotates it 45 degrees clockwise using the PIL library, and returns the rotated image and a text description. The gr.Interface function is then used to create a Gradio interface for this function. The gr.inputs.Image(type="numpy") line specifies that the input to the function should be an image, and the gr.outputs.Image(type="numpy") line specifies that the output should be an image. The text output type is used to display the text description. Finally, demo.launch() is used to launch the Gradio interface.

E.4. Developing Machine Learning Pipeline with PyTorch

Prompt: Build a simple neural network in Python using PyTorch to classify handwritten digits from the MNIST dataset. You should use CNN as the model structure, train the model for 5 epochs, draw a chart of the training loss, and show the final result.

Response: Here is a simple implementation of a Convolutional Neural Network (CNN) using PyTorch to classify handwritten digits from the MNIST dataset.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Convolutional neural network (two convolutional layers)
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out

model = ConvNet().to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
loss_list = []
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))
            loss_list.append(loss.item())

# Test the model
model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))

# Draw a chart of the training loss
plt.plot(loss_list)
plt.title('Training Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()

This script first loads the MNIST dataset and then defines a Convolutional Neural Network (CNN) with two convolutional layers. The CNN is trained for 5 epochs using the Adam optimizer and the CrossEntropyLoss function. After training, the model is tested on the test dataset and the accuracy is printed. Finally, a chart of the training loss is drawn.

F. Limitations

While the Magicoder models prove to be effective when instruction-tuned on OSS-INSTRUCT- and Evol-Instruct-generated data, we only apply our techniques to models with no more than 7B parameters due to resource constraints. This limitation hinders our ability to observe the method's impact on larger models.
Moreover, the OSS-INSTRUCT prompt is sophisticated and may require a relatively strong model to comprehend the intended task; weaker models may fail to produce plausible instruction data. Future research may explore applying OSS-INSTRUCT to models of different capacities in the context of self-training.