alreadyme-ai-research
1.0.0
Generating README.md with GPT-3-style few-shot learning

alreadyme-ai-research is the core project for generating a README.md from the source code of any repository. An AI model reads parts of the source code and writes a corresponding README.md document. The alreadyme team is currently building a service around this feature, and you can find our results on this page.

This repository contains several subprojects. You can find a detailed description in each directory.
As large models like GPT-3 have shown, few-shot learning is one of the most important keys to building generalized language models. They can infer what should be written from the preceding prompt and just a few examples. With this capability, they can do almost anything without fine-tuning: summarize news, answer questions, and even hold conversations!
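To illustrate the idea, here is a minimal sketch of what a few-shot prompt looks like. The task, examples, and function name are hypothetical and only demonstrate the format: a handful of labeled examples followed by an unlabeled query for the model to complete.

```python
# A few-shot prompt: the model infers the task from a handful of
# input/output examples and then completes the final, unlabeled one.
EXAMPLES = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples, then append the unlabeled query."""
    parts = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(EXAMPLES, "I loved every minute of it.")
print(prompt)
```

Feeding such a prompt to a large language model makes it continue the pattern, answering the final query without any gradient updates.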
OpenAI Codex introduced a new large language model for programming languages by fine-tuning GPT-3. We can now expect similarly generalized few-shot performance on programming languages: for example, creating a docstring from source code, writing new code from a description (this is how Copilot works), or translating Python into Java.
We use BLOOM, an open-science, open-access large language model. BLOOM is multilingual, supporting not only natural languages but also programming languages. We designed prompt templates and searched for the best-performing version, which looks like this:
&&&&&&
$ head -n 30 model-finetuning/src/data.py
from __future__ import annotations
from dataclasses import dataclass
import torch
[...]
&&&&&&
$ head -n 37 model-finetuning/src/train.py
from __future__ import annotations
import argparse
import os
[...]
&&&&&&
$ git config --get remote.origin.url
https://github.com/readme-generator/alreadyme-ai-research.git
&&&&&&
$ cat README.md
[...]
All examples are separated by &&&&&&. We aim to make BLOOM execute (or rather simulate) Linux bash commands: BLOOM reads parts of the source code from the given prompt and generates an appropriate README.md file.

For more details, check out our model-finetuning subproject.
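The prompt format above can be assembled programmatically. The following is a minimal sketch, not the released implementation; the function names are our own, and only the &&&&&& separator and the simulated bash commands come from the template shown above:

```python
SEPARATOR = "&&&&&&"

def render_command(command, output):
    """Render one simulated bash command and its captured output."""
    return f"$ {command}\n{output}"

def build_prompt(source_files, remote_url):
    """Assemble the README-generation prompt: show parts of the source
    files, the repository URL, and end with `cat README.md` so the model
    continues by writing the README contents."""
    blocks = [
        render_command(f"head -n 30 {path}", content)
        for path, content in source_files
    ]
    blocks.append(render_command("git config --get remote.origin.url", remote_url))
    # The model's generation starts right after this final command.
    blocks.append("$ cat README.md\n")
    return f"\n{SEPARATOR}\n".join(blocks)

prompt = build_prompt(
    [("model-finetuning/src/data.py", "import torch\n[...]")],
    "https://github.com/readme-generator/alreadyme-ai-research.git",
)
print(prompt)
```

Because the prompt ends mid-command, a causal language model naturally completes it by emitting the README text, which is then captured as the generated README.md.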
alreadyme-ai-research is released under the Apache License 2.0. The license can be found here.
@misc{https://doi.org/10.48550/arxiv.2005.14165,
  title = {Language Models are Few-Shot Learners},
  author = {Brown, Tom B. and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel M. and Wu, Jeffrey and Winter, Clemens and Hesse, Christopher and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario},
  year = {2020},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2005.14165},
  url = {https://arxiv.org/abs/2005.14165},
  copyright = {arXiv.org perpetual, non-exclusive license},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences}
}

@misc{https://doi.org/10.48550/arxiv.2107.03374,
  title = {Evaluating Large Language Models Trained on Code},
  author = {Chen, Mark and Tworek, Jerry and Jun, Heewoo and Yuan, Qiming and Pinto, Henrique Ponde de Oliveira and Kaplan, Jared and Edwards, Harri and Burda, Yuri and Joseph, Nicholas and Brockman, Greg and Ray, Alex and Puri, Raul and Krueger, Gretchen and Petrov, Michael and Khlaaf, Heidy and Sastry, Girish and Mishkin, Pamela and Chan, Brooke and Gray, Scott and Ryder, Nick and Pavlov, Mikhail and Power, Alethea and Kaiser, Lukasz and Bavarian, Mohammad and Winter, Clemens and Tillet, Philippe and Such, Felipe Petroski and Cummings, Dave and Plappert, Matthias and Chantzis, Fotios and Barnes, Elizabeth and Herbert-Voss, Ariel and Guss, William Hebgen and Nichol, Alex and Paino, Alex and Tezak, Nikolas and Tang, Jie and Babuschkin, Igor and Balaji, Suchir and Jain, Shantanu and Saunders, William and Hesse, Christopher and Carr, Andrew N. and Leike, Jan and Achiam, Josh and Misra, Vedant and Morikawa, Evan and Radford, Alec and Knight, Matthew and Brundage, Miles and Murati, Mira and Mayer, Katie and Welinder, Peter and McGrew, Bob and Amodei, Dario and McCandlish, Sam and Sutskever, Ilya and Zaremba, Wojciech},
  year = {2021},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2107.03374},
  url = {https://arxiv.org/abs/2107.03374},
  copyright = {arXiv.org perpetual, non-exclusive license},
  keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences}
}

@misc{https://doi.org/10.48550/arxiv.2106.09685,
  title = {LoRA: Low-Rank Adaptation of Large Language Models},
  author = {Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Wang, Lu and Chen, Weizhu},
  year = {2021},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2106.09685},
  url = {https://arxiv.org/abs/2106.09685},
  copyright = {arXiv.org perpetual, non-exclusive license},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences}
}

@misc{bigscience_2022,
  title = {BigScience Large Open-science Open-access Multilingual Language Model},
  author = {BigScience},
  year = {2022},
  journal = {bigscience/bloom · Hugging Face},
  url = {https://huggingface.co/bigscience/bloom}
}