Microsoft reached out to me to transition this library to a new official C# OpenAI library, and it is now ready! Starting with v2.0.0-beta.3, the official library has full coverage and will be kept up to date. More details in the blog post here: https://devblogs.microsoft.com/dotnet/openai-dotnet-library
This GitHub repo will stay here to document my original version, through version 1.11, which also remains available on NuGet.
A simple C# .NET wrapper library to use with OpenAI's API. More context on my blog. This is my original, unofficial wrapper library around the OpenAI API.
```csharp
var api = new OpenAI_API.OpenAIAPI("YOUR_API_KEY");
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
Console.WriteLine(result);
// should print something like "Hi! How can I help you?"
```
This library is based on .NET Standard 2.0, so it should work across practically all versions of .NET, from the legacy .NET Framework >= 4.7.2 to .NET (Core) >= 3.0. It should work across console apps, WinForms, WPF, ASP.NET, Unity, Xamarin, etc. It should work across Windows, Linux, and Mac, and even on mobile. It has minimal dependencies and is licensed into the public domain.
Install OpenAI v1.11 from NuGet. Here's how via the command line:
```
Install-Package OpenAI -Version 1.11.0
```

There are 3 ways to provide your API key, in order of precedence:

1. Pass the key directly to the APIAuthentication(string key) constructor
2. Set an environment variable named OPENAI_API_KEY
3. Include a config file in the local directory or in your user directory named .openai containing the line: OPENAI_API_KEY=sk-aaaabbbbbccccddddd

You use the APIAuthentication when you initialize the API, as shown:
```csharp
// for example
OpenAIAPI api = new OpenAIAPI("YOUR_API_KEY"); // shorthand
// or
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY")); // create object manually
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromEnv()); // use env vars
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromPath()); // use config file (can optionally specify where to look)
// or
OpenAIAPI api = new OpenAIAPI(); // uses default, env, or config file
```

You can optionally include an organization (OPENAI_ORGANIZATION in the env or config file) to specify which organization is used for API requests. Usage from these API requests will count against the specified organization's subscription quota. Organization IDs can be found on your Organization settings page.
```csharp
// for example
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY", "org-yourOrgHere"));
```

The Chat API is accessed via OpenAIAPI.Chat. There are two ways to use the Chat endpoint: via simplified conversations, or with full request/response methods.
The Conversation class allows you to easily interact with ChatGPT by adding messages to a chat and asking ChatGPT to reply.
```csharp
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Turbo;
chat.RequestParameters.Temperature = 0;

// give instruction as System
chat.AppendSystemMessage("You are a teacher who helps children understand if things are animals or not. If the user tells you an animal, you say \"yes\". If the user tells you something that is not an animal, you say \"no\". You only ever respond with \"yes\" or \"no\". You do not say anything else.");

// give a few examples as user and assistant
chat.AppendUserInput("Is this an animal? Cat");
chat.AppendExampleChatbotOutput("Yes");
chat.AppendUserInput("Is this an animal? House");
chat.AppendExampleChatbotOutput("No");

// now let's ask it a question
chat.AppendUserInput("Is this an animal? Dog");
// and get the response
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Yes"

// and continue the conversation by asking another
chat.AppendUserInput("Is this an animal? Chair");
// and get another response
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "No"

// the entire chat history is available in chat.Messages
foreach (ChatMessage msg in chat.Messages)
{
    Console.WriteLine($"{msg.Role}: {msg.Content}");
}
```

Streaming allows you to get results as they are generated, which can help your application feel more responsive.
Using the new C# 8.0 async iterators:
```csharp
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await foreach (var res in chat.StreamResponseEnumerableFromChatbotAsync())
{
    Console.Write(res);
}
```

Or if using the classic .NET Framework or C# < 8.0:
```csharp
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await chat.StreamResponseFromChatbotAsync(res =>
{
    Console.Write(res);
});
```

You can send images to the chat to use the new GPT-4 vision model. This only works with the Model.GPT4_Vision model. See https://platform.openai.com/docs/guides/vision for more information and limitations.
```csharp
// the simplest form
var result = await api.Chat.CreateChatCompletionAsync("What is the primary non-white color in this logo?", ImageInput.FromFile("path/to/logo.png"));

// or in a conversation
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Vision;
chat.AppendSystemMessage("You are a graphic design assistant who helps identify colors.");
chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromFile("path/to/logo.png"));
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue and purple"
chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue, red, and yellow"

// or when manually creating the ChatMessage
var messageWithImage = new ChatMessage(ChatMessageRole.User, "What colors do these logos have in common?");
messageWithImage.Images.Add(ImageInput.FromFile("path/to/logo.png"));
messageWithImage.Images.Add(ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));

// you can specify multiple images at once
chat.AppendUserInput("What colors do these logos have in common?", ImageInput.FromFile("path/to/logo.png"), ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));
```

If the chat conversation history gets too long, it may not fit into the context length of the model. By default, the oldest non-system messages will be removed from the chat history and the API call will be retried. You can disable this by setting chat.AutoTruncateOnContextLengthExceeded = false, or you can override the truncation algorithm:
```csharp
chat.OnTruncationNeeded += (sender, args) =>
{
    // args is a List<ChatMessage> with the current chat history. Remove or edit as necessary.
    // replace this with more sophisticated logic for your use-case, such as summarizing the chat history
    for (int i = 0; i < args.Count; i++)
    {
        if (args[i].Role != ChatMessageRole.System)
        {
            args.RemoveAt(i);
            return;
        }
    }
};
```

You may also wish to use a newer model with a larger context length. You can do so by setting chat.Model = Model.GPT4_Turbo, chat.Model = Model.ChatGPTTurbo_16k, etc.
You can see token usage via chat.MostRecentApiResult.Usage.PromptTokens and related properties.
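For example, a minimal sketch of logging usage after a reply (PromptTokens comes from the text above; CompletionTokens and TotalTokens are assumed to exist as sibling properties mirroring the API's usage fields, so verify them in IntelliSense):

```csharp
string response = await chat.GetResponseFromChatbotAsync();
var usage = chat.MostRecentApiResult.Usage;
Console.WriteLine($"Prompt tokens: {usage.PromptTokens}");
Console.WriteLine($"Completion tokens: {usage.CompletionTokens}"); // assumed property name
Console.WriteLine($"Total tokens: {usage.TotalTokens}"); // assumed property name
```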
You can access full control of the Chat API using OpenAIAPI.Chat.CreateChatCompletionAsync() and related methods.
```csharp
async Task<ChatResult> CreateChatCompletionAsync(ChatRequest request);

// for example
var result = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
    Model = Model.ChatGPTTurbo,
    Temperature = 0.1,
    MaxTokens = 50,
    Messages = new ChatMessage[] {
        new ChatMessage(ChatMessageRole.User, "Hello!")
    }
});
// or
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
var reply = result.Choices[0].Message;
Console.WriteLine($"{reply.Role}: {reply.Content.Trim()}");
// or
Console.WriteLine(result);
```

It returns a ChatResult, which is mostly metadata, so if all you want is the assistant's reply text, use its .ToString() method to get the text.
There is also an async streaming API which works similarly to the Completions endpoint's streaming results.
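For example, a sketch assuming the chat streaming method mirrors the completions naming shown later in this document (StreamChatEnumerableAsync is an assumption; check IntelliSense for the exact name):

```csharp
await foreach (var res in api.Chat.StreamChatEnumerableAsync(new ChatRequest()
{
    Model = Model.ChatGPTTurbo,
    Messages = new ChatMessage[] { new ChatMessage(ChatMessageRole.User, "Hello!") }
}))
{
    // each res is a partial result; print it as it arrives, as in the completions streaming examples below
    Console.Write(res);
}
```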
With the new Model.GPT4_Turbo or gpt-3.5-turbo-1106 models, you can set ChatRequest.ResponseFormat to ChatRequest.ResponseFormats.JsonObject to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON objects. See https://platform.openai.com/docs/guides/text-generation/json-mode for more details.
```csharp
ChatRequest chatRequest = new ChatRequest()
{
    Model = model,
    Temperature = 0.0,
    MaxTokens = 500,
    ResponseFormat = ChatRequest.ResponseFormats.JsonObject,
    Messages = new ChatMessage[] {
        new ChatMessage(ChatMessageRole.System, "You are a helpful assistant designed to output JSON."),
        new ChatMessage(ChatMessageRole.User, "Who won the world series in 2020? Return JSON of a 'wins' dictionary with the year as the numeric key and the winning team as the string value.")
    }
};
var results = await api.Chat.CreateChatCompletionAsync(chatRequest);
Console.WriteLine(results);
/* prints:
{
  "wins": {
    "2020": "Los Angeles Dodgers"
  }
}
*/
```
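Because JSON mode guarantees that the reply parses as JSON, you can feed it straight into a parser. A minimal sketch using System.Text.Json (the property path matches the example prompt above and would differ for your own prompts):

```csharp
using System.Text.Json;

using (JsonDocument doc = JsonDocument.Parse(results.ToString()))
{
    string winner = doc.RootElement.GetProperty("wins").GetProperty("2020").GetString();
    Console.WriteLine(winner); // "Los Angeles Dodgers"
}
```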
Completions are considered legacy by OpenAI. The Completions API is accessed via OpenAIAPI.Completions:

```csharp
async Task<CompletionResult> CreateCompletionAsync(CompletionRequest request);

// for example
var result = await api.Completions.CreateCompletionAsync(new CompletionRequest("One Two Three One Two", model: Model.CurieText, temperature: 0.1));
// or
var result = await api.Completions.CreateCompletionAsync("One Two Three One Two", temperature: 0.1);
// or other convenience overloads
```

You can create your CompletionRequest ahead of time, or use one of the helper overloads for convenience. It returns a CompletionResult, which is mostly metadata, so if all you want is the completion text, use its .ToString() method.
Streaming allows you to get results as they are generated, which can help your application feel more responsive, especially on slow models like Davinci.
Using the new C# 8.0 async iterators:
```csharp
IAsyncEnumerable<CompletionResult> StreamCompletionEnumerableAsync(CompletionRequest request);

// for example
await foreach (var token in api.Completions.StreamCompletionEnumerableAsync(new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1)))
{
    Console.Write(token);
}
```

Or if using the classic .NET Framework or C# < 8.0:
```csharp
async Task StreamCompletionAsync(CompletionRequest request, Action<CompletionResult> resultHandler);

// for example
await api.Completions.StreamCompletionAsync(
    new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1),
    res => ResumeTextbox.Text += res.ToString());
```

The Audio API provides text to speech, transcription (speech to text), and translation (non-English speech into English text).
The TTS API is accessed via OpenAIAPI.TextToSpeech:
```csharp
await api.TextToSpeech.SaveSpeechToFileAsync("Hello, brave new world! This is a test.", outputPath);
// You can open it in the default audio player like this:
Process.Start(outputPath);
```

You can also specify all of the request parameters with a TextToSpeechRequest object:
```csharp
var request = new TextToSpeechRequest()
{
    Input = "Hello, brave new world! This is a test.",
    ResponseFormat = ResponseFormats.AAC,
    Model = Model.TTS_HD,
    Voice = Voices.Nova,
    Speed = 0.9
};
await api.TextToSpeech.SaveSpeechToFileAsync(request, "test.aac");
```

Instead of saving to a file, you can get the audio as a byte stream with api.TextToSpeech.GetSpeechAsStreamAsync(request):
```csharp
using (Stream result = await api.TextToSpeech.GetSpeechAsStreamAsync("Hello, brave new world!", Voices.Fable))
using (StreamReader reader = new StreamReader(result))
{
    // do something with the audio stream here
}
```

The Audio Transcription API allows you to generate text from audio, in any of the supported languages. It is accessed via OpenAIAPI.Transcriptions:
```csharp
string resultText = await api.Transcriptions.GetTextAsync("path/to/file.mp3");
```

You can ask for verbose results, which will give you segment and token-level information, as well as standard OpenAI metadata such as processing time:
```csharp
AudioResultVerbose result = await api.Transcriptions.GetWithDetailsAsync("path/to/file.m4a");
Console.WriteLine(result.ProcessingTime.TotalMilliseconds); // 496ms
Console.WriteLine(result.text); // "Hello, this is a test of the transcription function."
Console.WriteLine(result.language); // "english"
Console.WriteLine(result.segments[0].no_speech_prob); // 0.03712
// etc
```

You can also ask for results in SRT or VTT format, which is useful for generating subtitles for videos:
```csharp
string result = await api.Transcriptions.GetAsFormatAsync("path/to/file.m4a", AudioRequest.ResponseFormats.SRT);
```

Additional parameters such as temperature, prompt, language, etc. can be specified per-request or as defaults:
```csharp
// inline
result = await api.Transcriptions.GetTextAsync("conversation.mp3", "en", "This is a transcript of a conversation between a medical doctor and her patient: ", 0.3);
// set defaults
api.Transcriptions.DefaultTranscriptionRequestArgs.Language = "en";
```

Instead of providing a local file on disk, you may provide an audio byte stream. This can be useful for streaming audio from a microphone or another source without having to first write it to disk. You must specify a filename, which does not have to exist, but which must have an accurate extension for the type of audio you are sending; OpenAI uses the filename extension to determine the format of your audio stream.
```csharp
using (var audioStream = File.OpenRead("path-here.mp3"))
{
    return await api.Transcriptions.GetTextAsync(audioStream, "file.mp3");
}
```

Translations allow you to transcribe audio from any of the supported languages into English. OpenAI does not support translating into any other language, only English. Translations are accessed via OpenAIAPI.Translations and support all of the same features as transcriptions.
```csharp
string result = await api.Translations.GetTextAsync("chinese-example.m4a");
```

The Embeddings API is accessed via OpenAIAPI.Embeddings:
```csharp
async Task<EmbeddingResult> CreateEmbeddingAsync(EmbeddingRequest request);

// for example
var result = await api.Embeddings.CreateEmbeddingAsync(new EmbeddingRequest("A test text for embedding", model: Model.AdaTextEmbedding));
// or
var result = await api.Embeddings.CreateEmbeddingAsync("A test text for embedding");
```

The embedding result contains a lot of metadata; the actual vector of floats is in result.Data[].Embedding.
For simplicity, you can directly ask for the vector of floats and skip the extra metadata with api.Embeddings.GetEmbeddingsAsync("test text here").
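A common use for the raw vectors is measuring semantic similarity. A minimal sketch (the CosineSimilarity helper is written here for illustration; it is not part of the library):

```csharp
float[] a = await api.Embeddings.GetEmbeddingsAsync("The quick brown fox");
float[] b = await api.Embeddings.GetEmbeddingsAsync("A fast auburn fox");
Console.WriteLine(CosineSimilarity(a, b)); // closer to 1.0 means more similar

static double CosineSimilarity(float[] x, float[] y)
{
    double dot = 0, magX = 0, magY = 0;
    for (int i = 0; i < x.Length; i++)
    {
        dot += x[i] * y[i];
        magX += x[i] * x[i];
        magY += y[i] * y[i];
    }
    return dot / (Math.Sqrt(magX) * Math.Sqrt(magY));
}
```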
The Moderation API is accessed via OpenAIAPI.Moderation:
```csharp
async Task<ModerationResult> CallModerationAsync(ModerationRequest request);

// for example
var result = await api.Moderation.CallModerationAsync(new ModerationRequest("A test text for moderating", Model.TextModerationLatest));
// or
var result = await api.Moderation.CallModerationAsync("A test text for moderating");
Console.WriteLine(result.results[0].MainContentFlag);
```

Results are in .results[0] and have handy helper properties such as FlaggedCategories and MainContentFlag.
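For example, a sketch of acting on those helpers (this assumes FlaggedCategories enumerates the names of the flagged categories; verify the exact member types in IntelliSense):

```csharp
var check = await api.Moderation.CallModerationAsync("some user input to screen");
foreach (var category in check.results[0].FlaggedCategories) // assumed to enumerate category names
{
    Console.WriteLine($"Flagged category: {category}");
}
```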
The Files API endpoint is accessed via OpenAIAPI.Files:
```csharp
// uploading
async Task<File> UploadFileAsync(string filePath, string purpose = "fine-tune");

// for example
var response = await api.Files.UploadFileAsync("fine-tuning-data.jsonl");
Console.Write(response.Id); // the id of the uploaded file

// listing
async Task<List<File>> GetFilesAsync();

// for example
var response = await api.Files.GetFilesAsync();
foreach (var file in response)
{
    Console.WriteLine(file.Name);
}
```

There are also methods to get file contents, delete a file, and more.
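As a hedged sketch of those other operations (GetFileContentAsStringAsync and DeleteFileAsync are assumed method names following the endpoint's conventions; check IntelliSense for the exact signatures):

```csharp
// method names below are assumptions, not confirmed API
string contents = await api.Files.GetFileContentAsStringAsync(response.Id);
await api.Files.DeleteFileAsync(response.Id);
```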
The fine-tuning endpoint itself has not yet been implemented, but will be added soon.
The DALL-E Image Generation API is accessed via OpenAIAPI.ImageGenerations:
```csharp
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", 1, ImageSize._512));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test");
Console.WriteLine(result.Data[0].Url);
```

The image result contains a URL for an online image or a base64-encoded image, depending on ImageGenerationRequest.ResponseFormat (URL is the default).
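Since the default response format is a URL, fetching the generated image is a plain HTTP download. A minimal sketch using the standard HttpClient:

```csharp
using System.Net.Http;

using (var http = new HttpClient())
{
    byte[] imageBytes = await http.GetByteArrayAsync(result.Data[0].Url);
    System.IO.File.WriteAllBytes("generated.png", imageBytes);
}
```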
Use DALL-E 3 like so:
```csharp
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3, ImageSize._1024x1792, "hd"));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3);
Console.WriteLine(result.Data[0].Url);
```

For using the Azure OpenAI Service, you need to specify the name of your Azure OpenAI resource as well as your model deployment id.
I do not have access to the Microsoft Azure OpenAI Service, so I am unable to test this functionality. If you have access and can test, please submit an issue describing your results. A PR with integration tests would also be greatly appreciated. Specifically, it is unclear to me exactly how models are specified with Azure.
Refer to the Azure OpenAI documentation and the detailed screenshots in #64 for more information.
Configuration should look something like this for the Azure service:
```csharp
OpenAIAPI api = OpenAIAPI.ForAzure("YourResourceName", "deploymentId", "api-key");
api.ApiVersion = "2023-03-15-preview"; // needed to access chat endpoint on Azure
```

You may then use the api object like normal. You may also specify the APIAuthentication using any of the other methods listed in the Authentication section above. Currently this library only supports the api-key flow, not the AD-Flow.
As of April 2, 2023, you need to manually select api version 2023-03-15-preview as shown above to access the chat endpoint on Azure. Once this comes out of preview, I will update the default.
You can specify an IHttpClientFactory to be used for HTTP requests, which allows for tweaking HTTP request properties, connection pooling, and mocking. Details in #103.
```csharp
OpenAIAPI api = new OpenAIAPI();
api.HttpClientFactory = myIHttpClientFactoryObject;
```

Every class, method, and property has extensive XML documentation, so it should show up automatically in IntelliSense. That, combined with the official OpenAI documentation, should be enough to get started. Feel free to open an issue here if you have any questions. Better documentation may come later.
CC-0 Public Domain
This library is licensed CC-0, in the public domain. You can use it for whatever you want, publicly or privately, without worrying about permission, licensing, or anything else. It's just a wrapper around the OpenAI API, so you still need to get access to OpenAI from them directly. I am not affiliated with OpenAI, and this library is not endorsed by them; I just had beta access and wanted to make a C# library to access it more easily. Hopefully others find this useful as well. Feel free to open a PR if there's anything you'd like to contribute.