Microsoft reached out to me about transitioning this library to the new official C# OpenAI library, and it is now ready! As of v2.0.0-beta.3, the official library now has full coverage and will be kept up to date. More details in the blog post here: https://devblogs.microsoft.com/dotnet/openai-dotnet-library
This GitHub repo will remain here to document my original version through v1.11, which remains available on NuGet.
A simple C# .NET wrapper library to use with OpenAI's API. More context on my blog. This is my original unofficial wrapper library around the OpenAI API.
var api = new OpenAI_API.OpenAIAPI("YOUR_API_KEY");
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
Console.WriteLine(result);
// should print something like "Hi! How can I help you?"
This library is based on .NET Standard 2.0, so it should work across all versions of .NET, from the legacy .NET Framework >= 4.7.2 to .NET (Core) >= 3.0. It should work across console apps, WinForms, WPF, ASP.NET, Unity, Xamarin, and more. It should work on Windows, Linux, and Mac, and even on mobile devices. It has minimal dependencies and is licensed into the public domain.
Install the OpenAI v1.11 package from NuGet. Here's how via the command line:

Install-Package OpenAI -Version 1.11.0

There are 3 ways to provide your API key, in order of precedence:
1. Pass the key directly to the APIAuthentication(string key) constructor
2. Set an environment variable named OPENAI_API_KEY
3. Include a config file in your user directory named .openai containing the line: OPENAI_API_KEY=sk-aaaabbbbbccccddddd

You use the APIAuthentication when you initialize the API, as shown:
// for example
OpenAIAPI api = new OpenAIAPI("YOUR_API_KEY"); // shorthand
// or
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY")); // create object manually
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromEnv()); // use env vars
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromPath()); // use config file (can optionally specify where to look)
// or
OpenAIAPI api = new OpenAIAPI(); // uses default, env, or config file

You may optionally include an OpenAIOrganization (OPENAI_ORGANIZATION in env or config file) specifying which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota. Organization IDs can be found on your Organization settings page.
// for example
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY", "org-yourOrgHere"));

The Chat API is accessed via OpenAIAPI.Chat. There are two ways to use the Chat endpoint: either via simplified conversations or with the full request/response methods.
The Conversation class allows you to easily interact with ChatGPT by adding messages to a chat and asking ChatGPT to reply.
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Turbo;
chat.RequestParameters.Temperature = 0;

/// give instruction as System
chat.AppendSystemMessage("You are a teacher who helps children understand if things are animals or not. If the user tells you an animal, you say \"yes\". If the user tells you something that is not an animal, you say \"no\". You only ever respond with \"yes\" or \"no\". You do not say anything else.");

// give a few examples as user and assistant
chat.AppendUserInput("Is this an animal? Cat");
chat.AppendExampleChatbotOutput("Yes");
chat.AppendUserInput("Is this an animal? House");
chat.AppendExampleChatbotOutput("No");

// now let's ask it a question
chat.AppendUserInput("Is this an animal? Dog");
// and get the response
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Yes"

// and continue the conversation by asking another
chat.AppendUserInput("Is this an animal? Chair");
// and get another response
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "No"

// the entire chat history is available in chat.Messages
foreach (ChatMessage msg in chat.Messages)
{
    Console.WriteLine($"{msg.Role}: {msg.Content}");
}

Streaming allows you to get results as they are generated, which can help your application feel more responsive.
Using the new C# 8.0 async iterators:
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await foreach (var res in chat.StreamResponseEnumerableFromChatbotAsync())
{
    Console.Write(res);
}

Or if using classic .NET Framework or C# <8.0:
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await chat.StreamResponseFromChatbotAsync(res =>
{
    Console.Write(res);
});

You can send images to the chat to use the new GPT-4 Vision model. This only works with the Model.GPT4_Vision model. See https://platform.openai.com/docs/guides/vision for more information and limitations.
// the simplest form
var result = await api.Chat.CreateChatCompletionAsync("What is the primary non-white color in this logo?", ImageInput.FromFile("path/to/logo.png"));

// or in a conversation
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Vision;
chat.AppendSystemMessage("You are a graphic design assistant who helps identify colors.");
chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromFile("path/to/logo.png"));
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue and purple"
chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue, red, and yellow"

// or when manually creating the ChatMessage
var messageWithImage = new ChatMessage(ChatMessageRole.User, "What colors do these logos have in common?");
messageWithImage.images.Add(ImageInput.FromFile("path/to/logo.png"));
messageWithImage.images.Add(ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));

// you can specify multiple images at once
chat.AppendUserInput("What colors do these logos have in common?", ImageInput.FromFile("path/to/logo.png"), ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));

If the chat conversation history gets too long, it may not fit into the context length of the model. By default, the oldest non-system messages will be removed from the chat history and the API call will be retried. You can disable this by setting chat.AutoTruncateOnContextLengthExceeded = false, or you can override the truncation algorithm like this:
chat.OnTruncationNeeded += (sender, args) =>
{
    // args is a List<ChatMessage> with the current chat history. Remove or edit as necessary.
    // replace this with more sophisticated logic for your use-case, such as summarizing the chat history
    for (int i = 0; i < args.Count; i++)
    {
        if (args[i].Role != ChatMessageRole.System)
        {
            args.RemoveAt(i);
            return;
        }
    }
};

You may also wish to use a newer model with a larger context length. You can do this by setting chat.Model = Model.GPT4_Turbo or chat.Model = Model.ChatGPTTurbo_16k, etc.
You can see token usage via chat.MostRecentApiResult.Usage.PromptTokens and related properties.
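For example, a small sketch of reading usage after a reply. PromptTokens comes straight from the text above; CompletionTokens and TotalTokens are assumed companion properties mirroring the API's usage object:

```csharp
// ask a question, then inspect the token accounting of the most recent API call
string answer = await chat.GetResponseFromChatbotAsync();
var usage = chat.MostRecentApiResult.Usage;
Console.WriteLine($"Prompt tokens: {usage.PromptTokens}");
Console.WriteLine($"Completion tokens: {usage.CompletionTokens}"); // assumed property name
Console.WriteLine($"Total tokens: {usage.TotalTokens}");           // assumed property name
```

This can be handy for tracking costs or deciding when to truncate the conversation yourself.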
You get full control of the Chat API by using OpenAIAPI.Chat.CreateChatCompletionAsync() and related methods.
async Task<ChatResult> CreateChatCompletionAsync(ChatRequest request);

// for example
var result = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
    Model = Model.ChatGPTTurbo,
    Temperature = 0.1,
    MaxTokens = 50,
    Messages = new ChatMessage[] {
        new ChatMessage(ChatMessageRole.User, "Hello!")
    }
});
// or
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
var reply = result.Choices[0].Message;
Console.WriteLine($"{reply.Role}: {reply.Content.Trim()}");
// or
Console.WriteLine(result);

It returns a ChatResult which is mostly metadata, so if all you want is the assistant's reply text, use its .ToString() method to get the text.
There is also an async streaming API which works similarly to the streaming results on the completions endpoint.
With the new Model.GPT4_Turbo or gpt-3.5-turbo-1106 models, you can set ChatRequest.ResponseFormat to ChatRequest.ResponseFormats.JsonObject to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON objects. See https://platform.openai.com/docs/guides/text-generation/json-mode for more details.
ChatRequest chatRequest = new ChatRequest()
{
    Model = model,
    Temperature = 0.0,
    MaxTokens = 500,
    ResponseFormat = ChatRequest.ResponseFormats.JsonObject,
    Messages = new ChatMessage[] {
        new ChatMessage(ChatMessageRole.System, "You are a helpful assistant designed to output JSON."),
        new ChatMessage(ChatMessageRole.User, "Who won the world series in 2020? Return JSON of a 'wins' dictionary with the year as the numeric key and the winning team as the string value.")
    }
};
var results = await api.Chat.CreateChatCompletionAsync(chatRequest);
Console.WriteLine(results);
/* prints:
{
  "wins": {
    "2020": "Los Angeles Dodgers"
  }
}
*/

Completions are considered legacy by OpenAI. The Completions API is accessed via OpenAIAPI.Completions:
async Task<CompletionResult> CreateCompletionAsync(CompletionRequest request);

// for example
var result = await api.Completions.CreateCompletionAsync(new CompletionRequest("One Two Three One Two", model: Model.CurieText, temperature: 0.1));
// or
var result = await api.Completions.CreateCompletionAsync("One Two Three One Two", temperature: 0.1);
// or other convenience overloads

You can create the CompletionRequest ahead of time, or use one of the helper overloads for convenience. It returns a CompletionResult which is mostly metadata, so if all you want is the completion text, use its .ToString() method.
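As a sketch of the "build the request ahead of time" pattern, using only the constructor parameters shown in the examples above:

```csharp
// construct the request object up front (e.g. from user settings), then send it
var request = new CompletionRequest("One Two Three One Two", model: Model.CurieText, temperature: 0.1);
var result = await api.Completions.CreateCompletionAsync(request);

// CompletionResult is mostly metadata; ToString() yields just the completion text
Console.WriteLine(result);
```

Separating request construction from the call makes it easy to reuse or tweak one request across multiple completions.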
Streaming allows you to get results as they are generated, which can help your application feel more responsive, especially on slow models like Davinci.
Using the new C# 8.0 async iterators:
IAsyncEnumerable<CompletionResult> StreamCompletionEnumerableAsync(CompletionRequest request);

// for example
await foreach (var token in api.Completions.StreamCompletionEnumerableAsync(new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1)))
{
    Console.Write(token);
}

Or if using classic .NET Framework or C# <8.0:
async Task StreamCompletionAsync(CompletionRequest request, Action<CompletionResult> resultHandler);

// for example
await api.Completions.StreamCompletionAsync(
    new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1),
    res => ResumeTextbox.Text += res.ToString());

The Audio API provides text to speech, transcription (speech to text), and translation (non-English speech to English text).
The text to speech (TTS) API is accessed via OpenAIAPI.TextToSpeech:
await api.TextToSpeech.SaveSpeechToFileAsync("Hello, brave new world! This is a test.", outputPath);
// You can open it in the default audio player like this:
Process.Start(outputPath);

You can also specify all of the request parameters with a TextToSpeechRequest object:
var request = new TextToSpeechRequest()
{
    Input = "Hello, brave new world! This is a test.",
    ResponseFormat = ResponseFormats.AAC,
    Model = Model.TTS_HD,
    Voice = Voices.Nova,
    Speed = 0.9
};
await api.TextToSpeech.SaveSpeechToFileAsync(request, "test.aac");

Instead of saving to a file, you can get the audio as a byte stream with api.TextToSpeech.GetSpeechAsStreamAsync(request):
using (Stream result = await api.TextToSpeech.GetSpeechAsStreamAsync("Hello, brave new world!", Voices.Fable))
using (StreamReader reader = new StreamReader(result))
{
    // do something with the audio stream here
}

The audio transcription API allows you to generate text from audio, in any of the supported languages. It is accessed via OpenAIAPI.Transcriptions:
string resultText = await api.Transcriptions.GetTextAsync("path/to/file.mp3");

You can ask for verbose results, which gives you segment and token-level information, as well as standard OpenAI metadata such as processing time:
AudioResultVerbose result = await api.Transcriptions.GetWithDetailsAsync("path/to/file.m4a");
Console.WriteLine(result.ProcessingTime.TotalMilliseconds); // 496ms
Console.WriteLine(result.text); // "Hello, this is a test of the transcription function."
Console.WriteLine(result.language); // "english"
Console.WriteLine(result.segments[0].no_speech_prob); // 0.03712
// etc

You can also ask for results in SRT or VTT format, which is useful for generating subtitles for videos:
string result = await api.Transcriptions.GetAsFormatAsync("path/to/file.m4a", AudioRequest.ResponseFormats.SRT);

Additional parameters such as temperature, prompt, language, etc. may be specified per-request or as defaults:
// inline
result = await api.Transcriptions.GetTextAsync("conversation.mp3", "en", "This is a transcript of a conversation between a medical doctor and her patient: ", 0.3);
// set defaults
api.Transcriptions.DefaultTranscriptionRequestArgs.Language = "en";

Instead of providing a local file on disk, you may provide a stream of audio bytes. This can be useful for streaming audio from a microphone or other source without first writing it to disk. Please note that you must specify a filename, which does not have to exist, but which must have an accurate extension for the type of audio you are sending. OpenAI uses the filename extension to determine the format of your audio stream.
using (var audioStream = File.OpenRead("path-here.mp3"))
{
    return await api.Transcriptions.GetTextAsync(audioStream, "file.mp3");
}

Translations allow you to transcribe audio from any of the supported languages into English. OpenAI does not support translating into any other language, only English. It is accessed via OpenAIAPI.Translations. It supports all of the same features as transcriptions.
string result = await api.Translations.GetTextAsync("chinese-example.m4a");

The Embeddings API is accessed via OpenAIAPI.Embeddings:
async Task<EmbeddingResult> CreateEmbeddingAsync(EmbeddingRequest request);

// for example
var result = await api.Embeddings.CreateEmbeddingAsync(new EmbeddingRequest("A test text for embedding", model: Model.AdaTextEmbedding));
// or
var result = await api.Embeddings.CreateEmbeddingAsync("A test text for embedding");

The embedding result contains a lot of metadata; the actual vector of floats is in result.Data[0].Embedding.
For simplicity, you can directly ask for the vector of floats and disregard the extra metadata with api.Embeddings.GetEmbeddingsAsync("test text here").
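As a sketch of what you might do with the raw vectors, here is a cosine-similarity comparison between two embeddings. The CosineSimilarity helper is purely illustrative and not part of the library:

```csharp
// fetch the raw float vectors for two pieces of text
float[] a = await api.Embeddings.GetEmbeddingsAsync("The cat sat on the mat");
float[] b = await api.Embeddings.GetEmbeddingsAsync("A feline rested on the rug");

// illustrative helper: cosine similarity between two equal-length vectors
static double CosineSimilarity(float[] x, float[] y)
{
    double dot = 0, magX = 0, magY = 0;
    for (int i = 0; i < x.Length; i++)
    {
        dot += x[i] * y[i];
        magX += x[i] * x[i];
        magY += y[i] * y[i];
    }
    return dot / (Math.Sqrt(magX) * Math.Sqrt(magY));
}

// values closer to 1.0 indicate more semantically similar text
Console.WriteLine(CosineSimilarity(a, b));
```

This is the typical building block for semantic search: embed your documents once, then rank them by similarity to an embedded query.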
The Moderation API is accessed via OpenAIAPI.Moderation:

async Task<ModerationResult> CallModerationAsync(ModerationRequest request);
// for example
var result = await api.Moderation.CallModerationAsync(new ModerationRequest("A test text for moderating", Model.TextModerationLatest));
// or
var result = await api.Moderation.CallModerationAsync("A test text for moderating");
Console.WriteLine(result.results[0].MainContentFlag);

The results are in .results[0] and have nice helper methods like FlaggedCategories and MainContentFlag.
The Files API endpoint is accessed via OpenAIAPI.Files:
// uploading
async Task<File> UploadFileAsync(string filePath, string purpose = "fine-tune");

// for example
var response = await api.Files.UploadFileAsync("fine-tuning-data.jsonl");
Console.Write(response.Id); // the id of the uploaded file

// listing
async Task<List<File>> GetFilesAsync();

// for example
var response = await api.Files.GetFilesAsync();
foreach (var file in response)
{
    Console.WriteLine(file.Name);
}

There are also methods to get file contents, delete a file, and more.
The fine-tuning endpoint itself is not yet implemented, but will be added soon.
The DALL-E image generation API is accessed via OpenAIAPI.ImageGenerations:
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", 1, ImageSize._512));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test");
Console.WriteLine(result.Data[0].Url);

The image result contains a URL to an online image or a base64-encoded image, depending on ImageGenerationRequest.ResponseFormat (url is the default).
Use DALL-E 3 like this:
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3, ImageSize._1024x1792, "hd"));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3);
Console.WriteLine(result.Data[0].Url);

To use the Azure OpenAI Service, you need to specify the name of your Azure OpenAI resource as well as your model deployment id.
I do not have access to the Microsoft Azure OpenAI service, so I am unable to test this functionality. If you have access and can test, please submit an issue describing your results. A PR with integration tests would also be greatly appreciated. Specifically, it is unclear to me whether specifying models works the same way with Azure.
Refer to the Azure OpenAI documentation and the detailed screenshots in #64 for more information.
Configuration should look something like this for the Azure service:
OpenAIAPI api = OpenAIAPI.ForAzure("YourResourceName", "deploymentId", "api-key");
api.ApiVersion = "2023-03-15-preview"; // needed to access chat endpoint on Azure

You may then use the api object like normal. You may also specify the APIAuthentication in any of the other ways listed in the Authentication section above. Currently this library only supports the api-key flow, not the AD-Flow.
As of April 2, 2023, you need to manually select api version 2023-03-15-preview as shown above to access the chat endpoint on Azure. Once this is out of preview, I will update the default.
You may specify an IHttpClientFactory to be used for HTTP requests, which allows for tweaking HTTP request properties, connection pooling, and mocking. Details in #103.
OpenAIAPI api = new OpenAIAPI();
api.HttpClientFactory = myIHttpClientFactoryObject;

Every class, method, and property has extensive XML documentation, so it should show up automatically in IntelliSense. That combined with the official OpenAI documentation should be enough to get started. Feel free to open an issue here if you have any questions. Better documentation may come later.
CC-0 Public Domain
This library is licensed CC-0, in the public domain. You can use it for whatever you want, publicly or privately, without worrying about permission or licensing or whatever. It's just a wrapper around the OpenAI API, so you still need to get access to OpenAI from them directly. I am not affiliated with OpenAI and this library is not endorsed by them; I just have beta access and wanted to make a C# library to access it more easily. Hopefully others find this useful as well. Feel free to open a PR if there's anything you want to contribute.