A simple C# .NET client library for OpenAI to use through their RESTful API. Independently developed, this is not an official library and I am not affiliated with OpenAI. An OpenAI API account is required.
Originally forked from OpenAI-API-dotnet. More context on Roger Pincombe's blog.
Install OpenAI-DotNet from NuGet. Here's how via the command line:
PowerShell:
Install-Package OpenAI-DotNet
dotnet:
dotnet add package OpenAI-DotNet
Looking to use OpenAI-DotNet in the Unity game engine? Check out our Unity package on OpenUPM:
Check out our new API documentation!
https://rageagainstthepixel.github.io/openai-dotnet
There are 3 ways to provide your API keys, in order of precedence:
1. Pass keys directly with constructor
2. Load key from configuration file
3. Use System Environment Variables
Warning
We recommend using environment variables to load the API key rather than hard coding it in your source. Hard coding the key is not recommended for production; use it only for accepting user credentials, local testing, and quick start scenarios.
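For example, a minimal sketch of that recommendation: read the key from an environment variable at runtime instead of hard coding it (this mirrors the constructor example just below; OPENAI_API_KEY is also the variable the built-in OpenAIAuthentication.LoadFromEnv() shown later looks for):
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
using var api = new OpenAIClient(apiKey);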
Pass keys directly with constructor:
using var api = new OpenAIClient ( " sk-apiKey " ) ;
Or manually create an OpenAIAuthentication object:
using var api = new OpenAIClient ( new OpenAIAuthentication ( " sk-apiKey " , " org-yourOrganizationId " , " proj_yourProjectId " ) ) ;
Load key from configuration file: attempts to load the API key from a configuration file in the current directory, named .openai by default.
To create a configuration file, create a new text file named .openai containing the following:
Note
The organization and project id entries are optional.
{
"apiKey" : " sk-aaaabbbbbccccddddd " ,
"organizationId" : " org-yourOrganizationId " ,
"projectId" : " proj_yourProjectId "
}
Or, in key=value format:
OPENAI_API_KEY=sk-aaaabbbbbccccddddd
OPENAI_ORGANIZATION_ID=org-yourOrganizationId
OPENAI_PROJECT_ID=proj_yourProjectId
You can also load the configuration file directly by calling a static method on OpenAIAuthentication:
Load the configuration from a specific directory (looks for a .openai file):
using var api = new OpenAIClient ( OpenAIAuthentication . LoadFromDirectory ( " path/to/your/directory " ) ) ;
Or load a configuration file from a specific path (the file does not need to be named .openai):
using var api = new OpenAIClient ( OpenAIAuthentication . LoadFromPath ( " path/to/your/file.json " ) ) ;
Use System Environment Variables: use your system's environment variables to specify the API key and organization to use.
Use OPENAI_API_KEY for your API key.
Use OPENAI_ORGANIZATION_ID to specify an organization.
Use OPENAI_PROJECT_ID to specify a project.
using var api = new OpenAIClient ( OpenAIAuthentication . LoadFromEnv ( ) ) ;
OpenAIClient implements IDisposable and manages the lifecycle of the resources it uses, including HttpClient. When you initialize OpenAIClient, it creates an internal HttpClient instance if one is not provided. That internal HttpClient is disposed of when the OpenAIClient is disposed of. If you provide an external HttpClient instance to OpenAIClient, you are responsible for managing its disposal.
If OpenAIClient creates its own HttpClient, that HttpClient is also disposed of when you dispose the OpenAIClient. If you pass an external HttpClient to OpenAIClient, it will not be disposed of by OpenAIClient; you must manage its disposal yourself. Be sure to dispose of the OpenAIClient appropriately to release resources in a timely manner and prevent potential memory or resource leaks in your application.
Typical usage with the internal HttpClient:
using var api = new OpenAIClient ( ) ;
Using a custom HttpClient (which you must dispose of yourself):
using var customHttpClient = new HttpClient ( ) ;
// set custom http client properties here
var api = new OpenAIClient ( client : customHttpClient ) ;
You can also choose to use Microsoft's Azure OpenAI deployments.
You can find the required information in the Azure Playground by clicking the View Code button and inspecting the URL, which looks like this:
https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
your-resource-name: the name of your Azure OpenAI resource.
deployment-id: the deployment name you chose when you deployed the model.
api-version: the API version to use for this operation, in yyyy-MM-dd format.
To set the client up to use your deployment, you need to pass an OpenAIClientSettings object into the client constructor.
var auth = new OpenAIAuthentication ( " sk-apiKey " ) ;
var settings = new OpenAIClientSettings ( resourceName : " your-resource-name " , deploymentId : " deployment-id " , apiVersion : " api-version " ) ;
using var api = new OpenAIClient ( auth , settings ) ;
Authenticate with MSAL as usual and acquire an access token, then use that access token when creating your OpenAIAuthentication. Also be sure to enable Azure Active Directory authentication (useActiveDirectoryAuthentication: true) when creating your OpenAIClientSettings.
Tutorial: Desktop app that calls web APIs: Acquire a token
// get your access token using any of the MSAL methods
var accessToken = result . AccessToken ;
var auth = new OpenAIAuthentication ( accessToken ) ;
var settings = new OpenAIClientSettings ( resourceName : " your-resource " , deploymentId : " deployment-id " , apiVersion : " api-version " , useActiveDirectoryAuthentication : true ) ;
using var api = new OpenAIClient ( auth , settings ) ;
Using OpenAI-DotNet or com.openai.unity directly in your front-end application may expose your API keys and other sensitive information. To mitigate this risk, it is recommended to set up an intermediate API that makes requests to OpenAI on behalf of your front-end application. This library can be used for both front-end and intermediary host configurations to ensure secure communication with the OpenAI API.
In the front-end example, you will need to securely authenticate your users with your preferred OAuth provider. Once the user is authenticated, exchange your custom auth token for your API key on the backend.
Follow these steps:
1. Create a new OpenAIAuthentication object and pass in your custom token with the prefix sess-.
2. Create a new OpenAIClientSettings object and specify the domain where your intermediate API is hosted.
3. Pass the auth and settings objects into the OpenAIClient constructor.
Here is an example of how to set up the front end:
var authToken = await LoginAsync ( ) ;
var auth = new OpenAIAuthentication ( $" sess- { authToken } " ) ;
var settings = new OpenAIClientSettings ( domain : " api.your-custom-domain.com " ) ;
using var api = new OpenAIClient ( auth , settings ) ;
This setup allows your front-end application to communicate securely with your backend, which runs OpenAI-DotNet-Proxy and forwards requests to the OpenAI API. This ensures that your OpenAI API key and other sensitive information remain secure throughout the process.
In this example, we demonstrate how to set up and use OpenAIProxy in a new ASP.NET Core web application. The proxy server handles authentication and forwards requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.
Install-Package OpenAI-DotNet-Proxy
dotnet add package OpenAI-DotNet-Proxy
<PackageReference Include="OpenAI-DotNet-Proxy" />
1. Create a new class that inherits from AbstractAuthenticationFilter and override the ValidateAuthenticationAsync method. This implements the IAuthenticationFilter used to check your custom issued user tokens against your internal server.
2. In Program.cs, create a new proxy web application by calling the OpenAIProxy.CreateWebApplication method, passing your custom AuthenticationFilter as a type argument.
3. Create your OpenAIAuthentication and OpenAIClientSettings as you normally would.
public partial class Program
{
private class AuthenticationFilter : AbstractAuthenticationFilter
{
public override async Task ValidateAuthenticationAsync ( IHeaderDictionary request )
{
await Task . CompletedTask ; // remote resource call to verify token
// You will need to implement your own class to properly test
// custom issued tokens you've setup for your end users.
if ( ! request . Authorization . ToString ( ) . Contains ( TestUserToken ) )
{
throw new AuthenticationException ( " User is not authorized " ) ;
}
}
}
public static void Main ( string [ ] args )
{
var auth = OpenAIAuthentication . LoadFromEnv ( ) ;
var settings = new OpenAIClientSettings ( /* your custom settings if using Azure OpenAI */ ) ;
using var openAIClient = new OpenAIClient ( auth , settings ) ;
OpenAIProxy . CreateWebApplication < AuthenticationFilter > ( args , openAIClient ) . Run ( ) ;
}
}
Once the proxy server is set up, your end users can make authenticated requests to your proxy API instead of directly to the OpenAI API. The proxy server handles authentication and forwards requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.
List and describe the various models available in the API. You can refer to the Models documentation to understand which models are available and the differences between them.
Also check out model endpoint compatibility to understand which models work with which endpoints.
To specify a custom model not pre-defined in this library:
var model = new Model ( " model-id " ) ;
The Models API is accessed via OpenAIClient.ModelsEndpoint
Lists the currently available models and provides basic information about each one, such as the owner and availability.
using var api = new OpenAIClient ( ) ;
var models = await api . ModelsEndpoint . GetModelsAsync ( ) ;
foreach ( var model in models )
{
Console . WriteLine ( model . ToString ( ) ) ;
}
Retrieves a model instance, providing basic information about the model such as the owner and permissions.
using var api = new OpenAIClient ( ) ;
var model = await api . ModelsEndpoint . GetModelDetailsAsync ( " gpt-4o " ) ;
Console . WriteLine ( model . ToString ( ) ) ;
Delete a fine-tuned model. You must have the Owner role in your organization.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . ModelsEndpoint . DeleteFineTuneModelAsync ( " your-fine-tuned-model " ) ;
Assert . IsTrue ( isDeleted ) ;
Warning
Beta feature. The API is subject to breaking changes.
The Realtime API enables you to build low-latency, multimodal conversational experiences. It currently supports text and audio as both input and output, as well as function calling.
The Realtime API is accessed via OpenAIClient.RealtimeEndpoint
Here is a simple example of how to create a realtime session and send and receive messages from the model.
using var api = new OpenAIClient ( ) ;
var cancellationTokenSource = new CancellationTokenSource ( ) ;
var tools = new List < Tool >
{
Tool . FromFunc ( " goodbye " , ( ) =>
{
cancellationTokenSource . Cancel ( ) ;
return " Goodbye! " ;
} )
} ;
var options = new Options ( Model . GPT4oRealtime , tools : tools ) ;
using var session = await api . RealtimeEndpoint . CreateSessionAsync ( options ) ;
var responseTask = session . ReceiveUpdatesAsync < IServerEvent > ( ServerEvents , cancellationTokenSource . Token ) ;
await session . SendAsync ( new ConversationItemCreateRequest ( " Hello! " ) ) ;
await session . SendAsync ( new CreateResponseRequest ( ) ) ;
await session . SendAsync ( new InputAudioBufferAppendRequest ( new ReadOnlyMemory < byte > ( new byte [ 1024 * 4 ] ) ) , cancellationTokenSource . Token ) ;
await session . SendAsync ( new ConversationItemCreateRequest ( " GoodBye! " ) ) ;
await session . SendAsync ( new CreateResponseRequest ( ) ) ;
await responseTask ;
void ServerEvents ( IServerEvent @event )
{
switch ( @event )
{
case ResponseAudioTranscriptResponse transcriptResponse :
Console . WriteLine ( transcriptResponse . ToString ( ) ) ;
break ;
case ResponseFunctionCallArgumentsResponse functionCallResponse :
if ( functionCallResponse . IsDone )
{
ToolCall toolCall = functionCallResponse ;
toolCall . InvokeFunction ( ) ;
}
break ;
}
}
The library implements the IClientEvent interface for outgoing client sent events.
UpdateSessionRequest: update the session with new session options.
InputAudioBufferAppendRequest: append audio to the input audio buffer. (Unlike other client events, the server does not send a confirmation response to this event.)
InputAudioBufferCommitRequest: commit the input audio buffer. (In server VAD mode, the client does not need to send this event.)
InputAudioBufferClearRequest: clear the input audio buffer.
ConversationItemCreateRequest: create a new conversation item. This is the main way to send user content to the model.
ConversationItemTruncateRequest: send this event to truncate a previous assistant message's audio.
ConversationItemDeleteRequest: delete a conversation item. This is useful when you want to remove a message from the conversation history.
CreateResponseRequest: create a response from the model. Send this event after creating new conversation items or invoking tool calls; it triggers the model to generate a response.
ResponseCancelRequest: send this event to cancel an in-progress response.
You can call the RealtimeSession.SendAsync method on the session object at any time to send a client event to the server. The send call returns an IServerEvent handle that best represents the server's response to that event. This is useful if you want to handle server responses in a more granular way.
Ideally though, you will probably want to handle all server responses with RealtimeSession.ReceiveUpdatesAsync.
Note
The server does not send a confirmation response to the InputAudioBufferAppendRequest event.
Important
You also need to send a CreateResponseRequest to trigger the model to generate a response.
var serverEvent = await session . SendAsync ( new ConversationItemCreateRequest ( " Hello! " ) ) ;
Console . WriteLine ( serverEvent . ToJsonString ( ) ) ;
serverEvent = await session . SendAsync ( new CreateResponseRequest ( ) ) ;
Console . WriteLine ( serverEvent . ToJsonString ( ) ) ;
The library implements the IServerEvent interface for incoming server sent events.
RealtimeEventError: returned when an error occurs, which could be a client problem or a server problem.
SessionResponse: returned for session.created and session.updated events.
RealtimeConversationResponse: returned when a new conversation is created.
ConversationItemCreatedResponse: returned when a new conversation item is created.
ConversationItemInputAudioTranscriptionResponse: returned when input audio transcription completes or fails.
ConversationItemTruncatedResponse: returned when a conversation item is truncated.
ConversationItemDeletedResponse: returned when a conversation item is deleted.
InputAudioBufferCommittedResponse: returned when the input audio buffer is committed, either by the client or automatically in server VAD mode.
InputAudioBufferClearedResponse: returned when the input audio buffer is cleared.
InputAudioBufferStartedResponse: sent by the server in server_vad mode to indicate that speech has been detected in the audio buffer. This can happen any time audio is added to the buffer (unless speech has already been detected). The client may want to use this event to interrupt audio playback or provide visual feedback to the user.
InputAudioBufferStoppedResponse: returned in server_vad mode when the server detects the end of speech in the audio buffer.
RealtimeResponse: returned when a response is created or done.
ResponseOutputItemResponse: returned when a response output item is added or done.
ResponseContentPartResponse: returned when a response content part is added or done.
ResponseTextResponse: returned when response text is updated or done.
ResponseAudioTranscriptResponse: returned when a response audio transcript is updated or done.
ResponseAudioResponse: returned when response audio is updated or done.
ResponseFunctionCallArgumentsResponse: returned when response function call arguments are updated or done.
RateLimitsResponse: returned when rate limits are updated.
To receive server events, call the RealtimeSession.ReceiveUpdatesAsync method on the session object. This method returns a Task or IAsyncEnumerable<T> that completes when the session closes or the cancellation token is triggered. Ideally, it should be called once and left running for the duration of the session.
Note
You can also receive IClientEvent callbacks by using the IRealtimeEvent interface instead of IServerEvent.
await foreach ( var @event in session . ReceiveUpdatesAsync < IServerEvent > ( cts . Token ) )
{
switch ( @event )
{
case RealtimeEventError error :
// raised anytime an error occurs
break ;
case SessionResponse sessionResponse :
// raised when a session is created or updated
break ;
case RealtimeConversationResponse conversationResponse :
// raised when a new conversation is created
break ;
case ConversationItemCreatedResponse conversationItemCreated :
// raised when a new conversation item is created
break ;
case ConversationItemInputAudioTranscriptionResponse conversationItemTranscription :
// raised when the input audio transcription is completed or failed
break ;
case ConversationItemTruncatedResponse conversationItemTruncated :
// raised when a conversation item is truncated
break ;
case ConversationItemDeletedResponse conversationItemDeleted :
// raised when a conversation item is deleted
break ;
case InputAudioBufferCommittedResponse committedResponse :
// raised when an input audio buffer is committed
break ;
case InputAudioBufferClearedResponse clearedResponse :
// raised when an input audio buffer is cleared
break ;
case InputAudioBufferStartedResponse startedResponse :
// raised when speech is detected in the audio buffer
break ;
case InputAudioBufferStoppedResponse stoppedResponse :
// raised when speech stops in the audio buffer
break ;
case RealtimeResponse realtimeResponse :
// raised when a response is created or done
break ;
case ResponseOutputItemResponse outputItemResponse :
// raised when a response output item is added or done
break ;
case ResponseContentPartResponse contentPartResponse :
// raised when a response content part is added or done
break ;
case ResponseTextResponse textResponse :
// raised when a response text is updated or done
break ;
case ResponseAudioTranscriptResponse transcriptResponse :
// raised when a response audio transcript is updated or done
break ;
case ResponseFunctionCallArgumentsResponse functionCallResponse :
// raised when a response function call arguments are updated or done
break ;
case RateLimitsResponse rateLimitsResponse :
// raised when rate limits are updated
break ;
}
}
Warning
Beta feature. The API is subject to breaking changes.
Build assistants that can call models and use tools to perform tasks.
The Assistants API is accessed via OpenAIClient.AssistantsEndpoint.
Returns a list of assistants.
using var api = new OpenAIClient ( ) ;
var assistantsList = await api . AssistantsEndpoint . ListAssistantsAsync ( ) ;
foreach ( var assistant in assistantsList . Items )
{
Console . WriteLine ( $" { assistant } -> { assistant . CreatedAt } " ) ;
}
Create an assistant with a model and instructions.
using var api = new OpenAIClient ( ) ;
var request = new CreateAssistantRequest ( Model . GPT4o ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync ( request ) ;
Retrieves an assistant.
using var api = new OpenAIClient ( ) ;
var assistant = await api . AssistantsEndpoint . RetrieveAssistantAsync ( " assistant-id " ) ;
Console . WriteLine ( $" { assistant } -> { assistant . CreatedAt } " ) ;
Modifies an assistant.
using var api = new OpenAIClient ( ) ;
var createRequest = new CreateAssistantRequest ( Model . GPT4_Turbo ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync ( createRequest ) ;
var modifyRequest = new CreateAssistantRequest ( Model . GPT4o ) ;
var modifiedAssistant = await api . AssistantsEndpoint . ModifyAssistantAsync ( assistant . Id , modifyRequest ) ;
// OR AssistantExtension for easier use!
var modifiedAssistantEx = await assistant . ModifyAsync ( modifyRequest ) ;
Delete an assistant.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . AssistantsEndpoint . DeleteAssistantAsync ( " assistant-id " ) ;
// OR AssistantExtension for easier use!
var isDeleted = await assistant . DeleteAsync ( ) ;
Assert . IsTrue ( isDeleted ) ;
Note
Assistant stream events can easily be added to existing thread calls by passing a Func<IServerSentEvent, Task> streamEventHandler callback, for example as sketched below.
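A minimal stream event handler (the same shape used by the streaming examples below) could simply log each event as it arrives:
async Task StreamEventHandler(IServerSentEvent streamEvent)
{
    // Log the raw server sent event; replace with your own handling logic.
    Console.WriteLine(streamEvent.ToJsonString());
    await Task.CompletedTask;
}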
Create threads that assistants can interact with.
The Threads API is accessed via OpenAIClient.ThreadsEndpoint
Create a thread.
using var api = new OpenAIClient ( ) ;
var thread = await api . ThreadsEndpoint . CreateThreadAsync ( ) ;
Console . WriteLine ( $" Create thread { thread . Id } -> { thread . CreatedAt } " ) ;
Create a thread and run it in one request.
See also: Thread Runs
using var api = new OpenAIClient ( ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync (
new CreateAssistantRequest (
name : " Math Tutor " ,
instructions : " You are a personal math tutor. Answer questions briefly, in a sentence or less. " ,
model : Model . GPT4o ) ) ;
var messages = new List < Message > { " I need to solve the equation `3x + 11 = 14`. Can you help me? " } ;
var threadRequest = new CreateThreadRequest ( messages ) ;
var run = await assistant . CreateThreadAndRunAsync ( threadRequest ) ;
Console . WriteLine ( $" Created thread and run: { run . ThreadId } -> { run . Id } -> { run . CreatedAt } " ) ;
Create a thread and run it in one request while streaming events.
using var api = new OpenAIClient ( ) ;
var tools = new List < Tool >
{
Tool . GetOrCreateTool ( typeof ( WeatherService ) , nameof ( WeatherService . GetCurrentWeatherAsync ) )
} ;
var assistantRequest = new CreateAssistantRequest ( tools : tools , instructions : " You are a helpful weather assistant. Use the appropriate unit based on geographical location. " ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync ( assistantRequest ) ;
ThreadResponse thread = null ;
async Task StreamEventHandler ( IServerSentEvent streamEvent )
{
switch ( streamEvent )
{
case ThreadResponse threadResponse :
thread = threadResponse ;
break ;
case RunResponse runResponse :
if ( runResponse . Status == RunStatus . RequiresAction )
{
var toolOutputs = await assistant . GetToolOutputsAsync ( runResponse ) ;
foreach ( var toolOutput in toolOutputs )
{
Console . WriteLine ( $" Tool Output: { toolOutput } " ) ;
}
await runResponse . SubmitToolOutputsAsync ( toolOutputs , StreamEventHandler ) ;
}
break ;
default :
Console . WriteLine ( streamEvent . ToJsonString ( ) ) ;
break ;
}
}
var run = await assistant . CreateThreadAndRunAsync ( " I'm in Kuala-Lumpur, please tell me what's the temperature now? " , StreamEventHandler ) ;
run = await run . WaitForStatusChangeAsync ( ) ;
var messages = await thread . ListMessagesAsync ( ) ;
foreach ( var response in messages . Items . Reverse ( ) )
{
Console . WriteLine ( $" { response . Role } : { response . PrintContent ( ) } " ) ;
}
Retrieves a thread.
using var api = new OpenAIClient ( ) ;
var thread = await api . ThreadsEndpoint . RetrieveThreadAsync ( " thread-id " ) ;
// OR if you simply wish to get the latest state of a thread
thread = await thread . UpdateAsync ( ) ;
Console . WriteLine ( $" Retrieve thread { thread . Id } -> { thread . CreatedAt } " ) ;
Modifies a thread.
Note: only the metadata can be modified.
using var api = new OpenAIClient ( ) ;
var thread = await api . ThreadsEndpoint . CreateThreadAsync ( ) ;
var metadata = new Dictionary < string , string >
{
{ " key " , " custom thread metadata " }
} ;
thread = await api . ThreadsEndpoint . ModifyThreadAsync ( thread . Id , metadata ) ;
// OR use extension method for convenience!
thread = await thread . ModifyAsync ( metadata ) ;
Console . WriteLine ( $" Modify thread { thread . Id } -> { thread . Metadata [ " key " ] } " ) ;
Delete a thread.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . ThreadsEndpoint . DeleteThreadAsync ( " thread-id " ) ;
// OR use extension method for convenience!
var isDeleted = await thread . DeleteAsync ( ) ;
Assert . IsTrue ( isDeleted ) ;
Create messages within threads.
Returns a list of messages for a given thread.
using var api = new OpenAIClient ( ) ;
var messageList = await api . ThreadsEndpoint . ListMessagesAsync ( " thread-id " ) ;
// OR use extension method for convenience!
var messageList = await thread . ListMessagesAsync ( ) ;
foreach ( var message in messageList . Items )
{
Console . WriteLine ( $" { message . Id } : { message . Role } : { message . PrintContent ( ) } " ) ;
}
Create a message.
using var api = new OpenAIClient ( ) ;
var thread = await api . ThreadsEndpoint . CreateThreadAsync ( ) ;
var request = new CreateMessageRequest ( " Hello world! " ) ;
var message = await api . ThreadsEndpoint . CreateMessageAsync ( thread . Id , request ) ;
// OR use extension method for convenience!
var message = await thread . CreateMessageAsync ( " Hello World! " ) ;
Console . WriteLine ( $" { message . Id } : { message . Role } : { message . PrintContent ( ) } " ) ;
Retrieve a message.
using var api = new OpenAIClient ( ) ;
var message = await api . ThreadsEndpoint . RetrieveMessageAsync ( " thread-id " , " message-id " ) ;
// OR use extension methods for convenience!
var message = await thread . RetrieveMessageAsync ( " message-id " ) ;
var message = await message . UpdateAsync ( ) ;
Console . WriteLine ( $" { message . Id } : { message . Role } : { message . PrintContent ( ) } " ) ;
Modify a message.
Note: only the message metadata can be modified.
using var api = new OpenAIClient ( ) ;
var metadata = new Dictionary < string , string >
{
{ " key " , " custom message metadata " }
} ;
var message = await api . ThreadsEndpoint . ModifyMessageAsync ( " thread-id " , " message-id " , metadata ) ;
// OR use extension method for convenience!
var message = await message . ModifyAsync ( metadata ) ;
Console . WriteLine ( $" Modify message metadata: { message . Id } -> { message . Metadata [ " key " ] } " ) ;
Represents an execution run on a thread.
Returns a list of runs belonging to a thread.
using var api = new OpenAIClient ( ) ;
var runList = await api . ThreadsEndpoint . ListRunsAsync ( " thread-id " ) ;
// OR use extension method for convenience!
var runList = await thread . ListRunsAsync ( ) ;
foreach ( var run in runList . Items )
{
Console . WriteLine ( $" [ { run . Id } ] { run . Status } | { run . CreatedAt } " ) ;
}
Create a run.
using var api = new OpenAIClient ( ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync (
new CreateAssistantRequest (
name : " Math Tutor " ,
instructions : " You are a personal math tutor. Answer questions briefly, in a sentence or less. " ,
model : Model . GPT4o ) ) ;
var thread = await api . ThreadsEndpoint . CreateThreadAsync ( ) ;
var message = await thread . CreateMessageAsync ( " I need to solve the equation `3x + 11 = 14`. Can you help me? " ) ;
var run = await thread . CreateRunAsync ( assistant ) ;
Console . WriteLine ( $" [ { run . Id } ] { run . Status } | { run . CreatedAt } " ) ;
Create a run and stream the events.
using var api = new OpenAIClient ( ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync (
new CreateAssistantRequest (
name : " Math Tutor " ,
instructions : " You are a personal math tutor. Answer questions briefly, in a sentence or less. Your responses should be formatted in JSON. " ,
model : Model . GPT4o ,
responseFormat : ChatResponseFormat . Json ) ) ;
var thread = await api . ThreadsEndpoint . CreateThreadAsync ( ) ;
var message = await thread . CreateMessageAsync ( " I need to solve the equation `3x + 11 = 14`. Can you help me? " ) ;
var run = await thread . CreateRunAsync ( assistant , async streamEvent =>
{
Console . WriteLine ( streamEvent . ToJsonString ( ) ) ;
await Task . CompletedTask ;
} ) ;
var messages = await thread . ListMessagesAsync ( ) ;
foreach ( var response in messages . Items . Reverse ( ) )
{
Console . WriteLine ( $" { response . Role } : { response . PrintContent ( ) } " ) ;
}
Retrieves a run.
using var api = new OpenAIClient ( ) ;
var run = await api . ThreadsEndpoint . RetrieveRunAsync ( " thread-id " , " run-id " ) ;
// OR use extension method for convenience!
var run = await thread . RetrieveRunAsync ( " run-id " ) ;
var run = await run . UpdateAsync ( ) ;
Console . WriteLine ( $" [ { run . Id } ] { run . Status } | { run . CreatedAt } " ) ;
Modifies a run.
Note: only the metadata can be modified.
using var api = new OpenAIClient ( ) ;
var metadata = new Dictionary < string , string >
{
{ " key " , " custom run metadata " }
} ;
var run = await api . ThreadsEndpoint . ModifyRunAsync ( " thread-id " , " run-id " , metadata ) ;
// OR use extension method for convenience!
var run = await run . ModifyAsync ( metadata ) ;
Console . WriteLine ( $" Modify run { run . Id } -> { run . Metadata [ " key " ] } " ) ;
When a run has the status requires_action and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they are all completed. All outputs must be submitted in a single request.
Note
See the Create Thread and Run streaming example above for how to stream tool output events.
using var api = new OpenAIClient ( ) ;
var tools = new List < Tool >
{
// Use a predefined tool
Tool . Retrieval , Tool . CodeInterpreter ,
// Or create a tool from a type and the name of the method you want to use for function calling
Tool . GetOrCreateTool ( typeof ( WeatherService ) , nameof ( WeatherService . GetCurrentWeatherAsync ) ) ,
// Pass in an instance of an object to call a method on it
Tool . GetOrCreateTool ( api . ImagesEndPoint , nameof ( ImagesEndpoint . GenerateImageAsync ) ) ,
// Define func<,> callbacks
Tool . FromFunc ( " name_of_func " , ( ) => { /* callback function */ } ) ,
Tool . FromFunc < T1 , T2 , TResult > ( " func_with_multiple_params " , ( t1 , t2 ) => { /* logic that calculates return value */ return tResult ; } )
} ;
var assistantRequest = new CreateAssistantRequest ( tools : tools , instructions : " You are a helpful weather assistant. Use the appropriate unit based on geographical location. " ) ;
var testAssistant = await api . AssistantsEndpoint . CreateAssistantAsync ( assistantRequest ) ;
var run = await testAssistant . CreateThreadAndRunAsync ( " I'm in Kuala-Lumpur, please tell me what's the temperature now? " ) ;
// waiting while run is Queued and InProgress
run = await run . WaitForStatusChangeAsync ( ) ;
// Invoke all of the tool call functions and return the tool outputs.
var toolOutputs = await testAssistant . GetToolOutputsAsync ( run . RequiredAction . SubmitToolOutputs . ToolCalls ) ;
foreach ( var toolOutput in toolOutputs )
{
Console . WriteLine ( $" tool call output: { toolOutput . Output } " ) ;
}
// submit the tool outputs
run = await run . SubmitToolOutputsAsync ( toolOutputs ) ;
// waiting while run in Queued and InProgress
run = await run . WaitForStatusChangeAsync ( ) ;
var messages = await run . ListMessagesAsync ( ) ;
foreach ( var message in messages . Items . OrderBy ( response => response . CreatedAt ) )
{
Console . WriteLine ( $" { message . Role } : { message . PrintContent ( ) } " ) ;
}
Structured Outputs is the evolution of JSON mode. While both ensure valid JSON is produced, only Structured Outputs ensures schema adherence.
Important
If finish_reason is length, the model's reply may be partial (i.e. cut off), which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
First, define the structure of your response. This will be used as your schema. These are the objects you will deserialize to, so be sure to use standard JSON object models.
public class MathResponse
{
[ JsonInclude ]
[ JsonPropertyName ( " steps " ) ]
public IReadOnlyList < MathStep > Steps { get ; private set ; }
[ JsonInclude ]
[ JsonPropertyName ( " final_answer " ) ]
public string FinalAnswer { get ; private set ; }
}
public class MathStep
{
[ JsonInclude ]
[ JsonPropertyName ( " explanation " ) ]
public string Explanation { get ; private set ; }
[ JsonInclude ]
[ JsonPropertyName ( " output " ) ]
public string Output { get ; private set ; }
}
To use it, simply specify the MathResponse type as the generic constraint on CreateAssistantAsync, CreateRunAsync, or CreateThreadAndRunAsync.
using var api = new OpenAIClient ( ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync < MathResponse > (
new CreateAssistantRequest (
name : " Math Tutor " ,
instructions : " You are a helpful math tutor. Guide the user through the solution step by step. " ,
model : " gpt-4o-2024-08-06 " ) ) ;
ThreadResponse thread = null ;
try
{
async Task StreamEventHandler ( IServerSentEvent @event )
{
try
{
switch ( @event )
{
case MessageResponse message :
if ( message . Status != MessageStatus . Completed )
{
Console . WriteLine ( @event . ToJsonString ( ) ) ;
break ;
}
var mathResponse = message . FromSchema < MathResponse > ( ) ;
for ( var i = 0 ; i < mathResponse . Steps . Count ; i ++ )
{
var step = mathResponse . Steps [ i ] ;
Console . WriteLine ( $" Step { i } : { step . Explanation } " ) ;
Console . WriteLine ( $" Result: { step . Output } " ) ;
}
Console . WriteLine ( $" Final Answer: { mathResponse . FinalAnswer } " ) ;
break ;
default :
Console . WriteLine ( @event . ToJsonString ( ) ) ;
break ;
}
}
catch ( Exception e )
{
Console . WriteLine ( e ) ;
throw ;
}
await Task . CompletedTask ;
}
var run = await assistant . CreateThreadAndRunAsync ( " how can I solve 8x + 7 = -23 " , StreamEventHandler ) ;
thread = await run . GetThreadAsync ( ) ;
run = await run . WaitForStatusChangeAsync ( ) ;
Console . WriteLine ( $" Created thread and run: { run . ThreadId } -> { run . Id } -> { run . CreatedAt } " ) ;
var messages = await thread . ListMessagesAsync ( ) ;
foreach ( var response in messages . Items . OrderBy ( response => response . CreatedAt ) )
{
Console . WriteLine ( $" { response . Role } : { response . PrintContent ( ) } " ) ;
}
}
finally
{
await assistant . DeleteAsync ( deleteToolResources : thread == null ) ;
if ( thread != null )
{
var isDeleted = await thread . DeleteAsync ( deleteToolResources : true ) ;
}
}
You can also manually create a JSON schema string yourself, but you will then be responsible for deserializing your response data:
using var api = new OpenAIClient ( ) ;
var mathSchema = new JsonSchema ( " math_response " , @"
{
""type"": ""object"",
""properties"": {
""steps"": {
""type"": ""array"",
""items"": {
""type"": ""object"",
""properties"": {
""explanation"": {
""type"": ""string""
},
""output"": {
""type"": ""string""
}
},
""required"": [
""explanation"",
""output""
],
""additionalProperties"": false
}
},
""final_answer"": {
""type"": ""string""
}
},
""required"": [
""steps"",
""final_answer""
],
""additionalProperties"": false
}" ) ;
var assistant = await api . AssistantsEndpoint . CreateAssistantAsync (
new CreateAssistantRequest (
name : " Math Tutor " ,
instructions : " You are a helpful math tutor. Guide the user through the solution step by step. " ,
model : " gpt-4o-2024-08-06 " ,
jsonSchema : mathSchema ) ) ;
ThreadResponse thread = null ;
try
{
var run = await assistant . CreateThreadAndRunAsync ( " how can I solve 8x + 7 = -23 " ,
async @event =>
{
Console . WriteLine ( @event . ToJsonString ( ) ) ;
await Task . CompletedTask ;
} ) ;
thread = await run . GetThreadAsync ( ) ;
run = await run . WaitForStatusChangeAsync ( ) ;
Console . WriteLine ( $" Created thread and run: { run . ThreadId } -> { run . Id } -> { run . CreatedAt } " ) ;
var messages = await thread . ListMessagesAsync ( ) ;
foreach ( var response in messages . Items )
{
Console . WriteLine ( $" { response . Role } : { response . PrintContent ( ) } " ) ;
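// (Sketch) Because a manually supplied json schema is used here, you are responsible for
// deserializing the response yourself, e.g. with System.Text.Json. This assumes that
// PrintContent() returns the raw JSON text of the assistant message:
// var mathResponse = JsonSerializer.Deserialize<MathResponse>(response.PrintContent());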
}
}
finally
{
await assistant . DeleteAsync ( deleteToolResources : thread == null ) ;
if ( thread != null )
{
var isDeleted = await thread . DeleteAsync ( deleteToolResources : true ) ;
Assert . IsTrue ( isDeleted ) ;
}
}
Returns a list of run steps belonging to a run.
using var api = new OpenAIClient ( ) ;
var runStepList = await api . ThreadsEndpoint . ListRunStepsAsync ( " thread-id " , " run-id " ) ;
// OR use extension method for convenience!
var runStepList = await run . ListRunStepsAsync ( ) ;
foreach ( var runStep in runStepList . Items )
{
Console . WriteLine ( $" [ { runStep . Id } ] { runStep . Status } { runStep . CreatedAt } -> { runStep . ExpiresAt } " ) ;
} 檢索運行步驟。
using var api = new OpenAIClient ( ) ;
var runStep = await api . ThreadsEndpoint . RetrieveRunStepAsync ( " thread-id " , " run-id " , " step-id " ) ;
// OR use extension method for convenience!
var runStep = await run . RetrieveRunStepAsync ( " step-id " ) ;
var runStep = await runStep . UpdateAsync ( ) ;
Console . WriteLine ( $" [ { runStep . Id } ] { runStep . Status } { runStep . CreatedAt } -> { runStep . ExpiresAt } " ) ;
Cancels a run that is in_progress.
using var api = new OpenAIClient ( ) ;
var isCancelled = await api . ThreadsEndpoint . CancelRunAsync ( " thread-id " , " run-id " ) ;
// OR use extension method for convenience!
var isCancelled = await run . CancelAsync ( ) ;
Assert . IsTrue ( isCancelled ) ;
Vector stores are used to store files for use by the file_search tool.
The Vector Stores API is accessed via OpenAIClient.VectorStoresEndpoint
Returns a list of vector stores.
using var api = new OpenAIClient ( ) ;
var vectorStores = await api . VectorStoresEndpoint . ListVectorStoresAsync ( ) ;
foreach ( var vectorStore in vectorStores . Items )
{
Console . WriteLine ( vectorStore ) ;
}
Create a vector store.
using var api = new OpenAIClient ( ) ;
var createVectorStoreRequest = new CreateVectorStoreRequest ( " test-vector-store " ) ;
var vectorStore = await api . VectorStoresEndpoint . CreateVectorStoreAsync ( createVectorStoreRequest ) ;
Console . WriteLine ( vectorStore ) ;
Retrieves a vector store.
using var api = new OpenAIClient ( ) ;
var vectorStore = await api . VectorStoresEndpoint . GetVectorStoreAsync ( " vector-store-id " ) ;
Console . WriteLine ( vectorStore ) ;
Modifies a vector store.
using var api = new OpenAIClient ( ) ;
var metadata = new Dictionary < string , object > { { " Test " , DateTime . UtcNow } } ;
var vectorStore = await api . VectorStoresEndpoint . ModifyVectorStoreAsync ( " vector-store-id " , metadata : metadata ) ;
Console . WriteLine ( vectorStore ) ;
Delete a vector store.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . VectorStoresEndpoint . DeleteVectorStoreAsync ( " vector-store-id " ) ;
Assert . IsTrue ( isDeleted ) ;
Vector store files represent files inside a vector store.
Returns a list of vector store files.
using var api = new OpenAIClient ( ) ;
var files = await api . VectorStoresEndpoint . ListVectorStoreFilesAsync ( " vector-store-id " ) ;
foreach ( var file in files . Items )
{
Console . WriteLine ( file ) ;
}
Create a vector store file by attaching a file to a vector store.
using var api = new OpenAIClient ( ) ;
var file = await api . VectorStoresEndpoint . CreateVectorStoreFileAsync ( " vector-store-id " , " file-id " , new ChunkingStrategy ( ChunkingStrategyType . Static ) ) ;
Console . WriteLine ( file ) ;
Retrieves a vector store file.
using var api = new OpenAIClient ( ) ;
var file = await api . VectorStoresEndpoint . GetVectorStoreFileAsync ( " vector-store-id " , " vector-store-file-id " ) ;
Console . WriteLine ( file ) ;
Delete a vector store file. This removes the file from the vector store but does not delete the file itself. To delete the file, use the delete file endpoint.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . VectorStoresEndpoint . DeleteVectorStoreFileAsync ( " vector-store-id " , vectorStoreFile ) ;
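// (Sketch) Deleting a vector store file only detaches it from the vector store; the file
// itself still exists. To remove the underlying file as well, also call the Files endpoint,
// e.g. ("file-id" is a placeholder):
// await api.FilesEndpoint.DeleteFileAsync("file-id");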
Assert . IsTrue ( isDeleted ) ;
Vector store file batches represent operations to add multiple files to a vector store.
Create a vector store file batch.
using var api = new OpenAIClient ( ) ;
var files = new List < string > { " file_id_1 " , " file_id_2 " } ;
var vectorStoreFileBatch = await api . VectorStoresEndpoint . CreateVectorStoreFileBatchAsync ( " vector-store-id " , files ) ;
Console . WriteLine ( vectorStoreFileBatch ) ;
Retrieves a vector store file batch.
using var api = new OpenAIClient ( ) ;
var vectorStoreFileBatch = await api . VectorStoresEndpoint . GetVectorStoreFileBatchAsync ( " vector-store-id " , " vector-store-file-batch-id " ) ;
// you can also use convenience methods!
vectorStoreFileBatch = await vectorStoreFileBatch . UpdateAsync ( ) ;
vectorStoreFileBatch = await vectorStoreFileBatch . WaitForStatusChangeAsync ( ) ;
Returns a list of vector store files in a batch.
using var api = new OpenAIClient ( ) ;
var files = await api . VectorStoresEndpoint . ListVectorStoreBatchFilesAsync ( " vector-store-id " , " vector-store-file-batch-id " ) ;
foreach ( var file in files . Items )
{
Console . WriteLine ( file ) ;
}
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.
using var api = new OpenAIClient ( ) ;
var isCancelled = await api . VectorStoresEndpoint . CancelVectorStoreFileBatchAsync ( " vector-store-id " , " vector-store-file-batch-id " ) ;
Given a chat conversation, the model will return a chat completion response.
The Chat API is accessed via OpenAIClient.ChatEndpoint
Creates a completion for the chat message.
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant. " ) ,
new Message ( Role . User , " Who won the world series in 2020? " ) ,
new Message ( Role . Assistant , " The Los Angeles Dodgers won the World Series in 2020. " ) ,
new Message ( Role . User , " Where was it played? " ) ,
} ;
var chatRequest = new ChatRequest ( messages , Model . GPT4o ) ;
var response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
var choice = response . FirstChoice ;
Console . WriteLine ( $" [ { choice . Index } ] { choice . Message . Role } : { choice . Message } | Finish Reason: { choice . FinishReason } " ) ;
You can also stream the completion as it is generated:
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant. " ) ,
new Message ( Role . User , " Who won the world series in 2020? " ) ,
new Message ( Role . Assistant , " The Los Angeles Dodgers won the World Series in 2020. " ) ,
new Message ( Role . User , " Where was it played? " ) ,
} ;
var chatRequest = new ChatRequest ( messages ) ;
var response = await api . ChatEndpoint . StreamCompletionAsync ( chatRequest , async partialResponse =>
{
Console . Write ( partialResponse . FirstChoice . Delta . ToString ( ) ) ;
await Task . CompletedTask ;
} ) ;
var choice = response . FirstChoice ;
Console . WriteLine ( $" [ { choice . Index } ] { choice . Message . Role } : { choice . Message } | Finish Reason: { choice . FinishReason } " ) ;
Or, if using IAsyncEnumerable<T> (C# 8.0+):
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant. " ) ,
new Message ( Role . User , " Who won the world series in 2020? " ) ,
new Message ( Role . Assistant , " The Los Angeles Dodgers won the World Series in 2020. " ) ,
new Message ( Role . User , " Where was it played? " ) ,
} ;
var cumulativeDelta = string . Empty ;
var chatRequest = new ChatRequest ( messages ) ;
await foreach ( var partialResponse in api . ChatEndpoint . StreamCompletionEnumerableAsync ( chatRequest ) )
{
foreach ( var choice in partialResponse . Choices . Where ( choice => choice . Delta ? . Content != null ) )
{
cumulativeDelta += choice . Delta . Content ;
}
}
Console . WriteLine ( cumulativeDelta ) ;
The chat endpoint also supports tools (function calling):
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new ( Role . System , " You are a helpful weather assistant. Always prompt the user for their location. " ) ,
new Message ( Role . User , " What's the weather like today? " ) ,
} ;
foreach ( var message in messages )
{
Console . WriteLine ( $" { message . Role } : { message } " ) ;
}
// Define the tools that the assistant is able to use:
// 1. Get a list of all the static methods decorated with FunctionAttribute
var tools = Tool . GetAllAvailableTools ( includeDefaults : false , forceUpdate : true , clearCache : true ) ;
// 2. Define a custom list of tools:
var tools = new List < Tool >
{
Tool . GetOrCreateTool ( objectInstance , " TheNameOfTheMethodToCall " ) ,
Tool . FromFunc ( " a_custom_name_for_your_function " , ( ) => { /* Some logic to run */ } )
} ;
var chatRequest = new ChatRequest ( messages , tools : tools , toolChoice : " auto " ) ;
var response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
messages . Add ( response . FirstChoice . Message ) ;
Console . WriteLine ( $" { response . FirstChoice . Message . Role } : { response . FirstChoice } | Finish Reason: { response . FirstChoice . FinishReason } " ) ;
var locationMessage = new Message ( Role . User , " I'm in Glasgow, Scotland " ) ;
messages . Add ( locationMessage ) ;
Console . WriteLine ( $" { locationMessage . Role } : { locationMessage . Content } " ) ;
chatRequest = new ChatRequest ( messages , tools : tools , toolChoice : " auto " ) ;
response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
messages . Add ( response . FirstChoice . Message ) ;
if ( response . FirstChoice . FinishReason == " stop " )
{
Console . WriteLine ( $" { response . FirstChoice . Message . Role } : { response . FirstChoice } | Finish Reason: { response . FirstChoice . FinishReason } " ) ;
var unitMessage = new Message ( Role . User , " Fahrenheit " ) ;
messages . Add ( unitMessage ) ;
Console . WriteLine ( $" { unitMessage . Role } : { unitMessage . Content } " ) ;
chatRequest = new ChatRequest ( messages , tools : tools , toolChoice : " auto " ) ;
response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
}
// iterate over all tool calls and invoke them
foreach ( var toolCall in response . FirstChoice . Message . ToolCalls )
{
Console . WriteLine ( $" { response . FirstChoice . Message . Role } : { toolCall . Function . Name } | Finish Reason: { response . FirstChoice . FinishReason } " ) ;
Console . WriteLine ( $" { toolCall . Function . Arguments } " ) ;
// Invokes function to get a generic json result to return for tool call.
var functionResult = await toolCall . InvokeFunctionAsync ( ) ;
// If you know the return type and do additional processing you can use generic overload
var functionResult = await toolCall . InvokeFunctionAsync < string > ( ) ;
messages . Add ( new Message ( toolCall , functionResult ) ) ;
Console . WriteLine ( $" { Role . Tool } : { functionResult } " ) ;
}
// System: You are a helpful weather assistant.
// User: What's the weather like today?
// Assistant: Sure, may I know your current location? | Finish Reason: stop
// User: I'm in Glasgow, Scotland
// Assistant: GetCurrentWeather | Finish Reason: tool_calls
// {
// "location": "Glasgow, Scotland",
// "unit": "celsius"
// }
// Tool: The current weather in Glasgow, Scotland is 39°C.
Warning
Beta feature. The API is subject to breaking changes.
Chat with image (vision) inputs:
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant. " ) ,
new Message ( Role . User , new List < Content >
{
" What's in this image? " ,
new ImageUrl ( " https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg " , ImageDetail . Low )
} )
} ;
var chatRequest = new ChatRequest ( messages , model : Model . GPT4o ) ;
var response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
Console . WriteLine ( $" { response . FirstChoice . Message . Role } : { response . FirstChoice . Message . Content } | Finish Reason: { response . FirstChoice . FinishDetails } " ) ;
Chat with audio output:
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant. " ) ,
new Message ( Role . User , " Is a golden retriever a good family dog? " )
} ;
var chatRequest = new ChatRequest ( messages , Model . GPT4oAudio , audioConfig : Voice . Alloy ) ;
var response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
Console . WriteLine ( $" { response . FirstChoice . Message . Role } : { response . FirstChoice } | Finish Reason: { response . FirstChoice . FinishDetails } " ) ;
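// (Sketch) Until playback is implemented (see the todo below), one option is to persist the
// audio to disk. This assumes AudioOutput.Data exposes the raw audio bytes as ReadOnlyMemory<byte>
// and that the file extension matches the requested audio format:
// await File.WriteAllBytesAsync("response-audio.mp3", response.FirstChoice.Message.AudioOutput.Data.ToArray());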
// todo play response . FirstChoice . Message . AudioOutput . Data
Structured Outputs is the evolution of JSON mode. While both ensure valid JSON is produced, only Structured Outputs ensures schema adherence.
Important
If finish_reason is length, the model's reply may be partial (i.e. cut off), which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
First, define the structure of your response. This will be used as your schema. These are the objects you will deserialize to, so be sure to use standard JSON object models.
public class MathResponse
{
[ JsonInclude ]
[ JsonPropertyName ( " steps " ) ]
public IReadOnlyList < MathStep > Steps { get ; private set ; }
[ JsonInclude ]
[ JsonPropertyName ( " final_answer " ) ]
public string FinalAnswer { get ; private set ; }
}
public class MathStep
{
[ JsonInclude ]
[ JsonPropertyName ( " explanation " ) ]
public string Explanation { get ; private set ; }
[ JsonInclude ]
[ JsonPropertyName ( " output " ) ]
public string Output { get ; private set ; }
}
To use it, simply specify the MathResponse type as the generic constraint when requesting a completion.
using var api = new OpenAIClient ( ) ;
var messages = new List < Message >
{
new ( Role . System , " You are a helpful math tutor. Guide the user through the solution step by step. " ) ,
new ( Role . User , " how can I solve 8x + 7 = -23 " )
} ;
var chatRequest = new ChatRequest ( messages , model : " gpt-4o-2024-08-06 " ) ;
var ( mathResponse , chatResponse ) = await api . ChatEndpoint . GetCompletionAsync < MathResponse > ( chatRequest ) ;
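// (Sketch, per the Important note above) If the finish reason is "length", the generation was
// cut off by max_tokens or the context limit and the structured output may be incomplete,
// so check it before relying on mathResponse.
if (chatResponse.FirstChoice.FinishReason == "length")
{
    throw new InvalidOperationException("The response was truncated before the schema could be completed.");
}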
for ( var i = 0 ; i < mathResponse . Steps . Count ; i ++ )
{
var step = mathResponse . Steps [ i ] ;
Console . WriteLine ( $" Step { i } : { step . Explanation } " ) ;
Console . WriteLine ( $" Result: { step . Output } " ) ;
}
Console . WriteLine ( $" Final Answer: { mathResponse . FinalAnswer } " ) ;
chatResponse . GetUsage ( ) ;
The chat endpoint also supports JSON mode.
Important
If finish_reason is length, the model's reply may be partial (i.e. cut off), which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
var messages = new List < Message >
{
new Message ( Role . System , " You are a helpful assistant designed to output JSON. " ) ,
new Message ( Role . User , " Who won the world series in 2020? " ) ,
} ;
var chatRequest = new ChatRequest ( messages , Model . GPT4o , responseFormat : ChatResponseFormat . Json ) ;
var response = await api . ChatEndpoint . GetCompletionAsync ( chatRequest ) ;
foreach ( var choice in response . Choices )
{
Console . WriteLine ( $" [ { choice . Index } ] { choice . Message . Role } : { choice } | Finish Reason: { choice . FinishReason } " ) ;
}
response . GetUsage ( ) ;
Converts audio into text.
The Audio API is accessed via OpenAIClient.AudioEndpoint
Generates audio from the input text.
using var api = new OpenAIClient ( ) ;
var request = new SpeechRequest ( " Hello World! " ) ;
async Task ChunkCallback ( ReadOnlyMemory < byte > chunkCallback )
{
// TODO Implement audio playback as chunks arrive
await Task . CompletedTask ;
}
var response = await api . AudioEndpoint . CreateSpeechAsync ( request , ChunkCallback ) ;
await File . WriteAllBytesAsync ( " ../../../Assets/HelloWorld.mp3 " , response . ToArray ( ) ) ;
Transcribes audio into the input language.
using var api = new OpenAIClient ( ) ;
using var request = new AudioTranscriptionRequest ( Path . GetFullPath ( audioAssetPath ) , language : " en " ) ;
var response = await api . AudioEndpoint . CreateTranscriptionTextAsync ( request ) ;
Console . WriteLine ( response ) ;
You can also get more detailed information by using verbose_json with timestamp granularities:
using var api = new OpenAIClient ( ) ;
using var request = new AudioTranscriptionRequest ( transcriptionAudio , responseFormat : AudioResponseFormat . Verbose_Json , timestampGranularity : TimestampGranularity . Word , temperature : 0.1f , language : " en " ) ;
var response = await api . AudioEndpoint . CreateTranscriptionTextAsync ( request ) ;
foreach ( var word in response . Words )
{
Console . WriteLine ( $" [ { word . Start } - { word . End } ] \" { word . Word } \" " ) ;
}
Translates audio into English.
using var api = new OpenAIClient ( ) ;
using var request = new AudioTranslationRequest ( Path . GetFullPath ( audioAssetPath ) ) ;
var response = await api . AudioEndpoint . CreateTranslationTextAsync ( request ) ;
Console . WriteLine ( response ) ;
Given a prompt and/or an input image, the model will generate a new image.
The Images API is accessed via OpenAIClient.ImagesEndpoint
Creates an image given a prompt.
using var api = new OpenAIClient ( ) ;
var request = new ImageGenerationRequest ( " A house riding a velociraptor " , Models . Model . DallE_3 ) ;
var imageResults = await api . ImagesEndPoint . GenerateImageAsync ( request ) ;
foreach ( var image in imageResults )
{
Console . WriteLine ( image ) ;
// image == url or b64_string
}
Creates an edited or extended image given an original image and a prompt.
using var api = new OpenAIClient ( ) ;
var request = new ImageEditRequest ( imageAssetPath , maskAssetPath , " A sunlit indoor lounge area with a pool containing a flamingo " , size : ImageSize . Small ) ;
var imageResults = await api . ImagesEndPoint . CreateImageEditAsync ( request ) ;
foreach ( var image in imageResults )
{
Console . WriteLine ( image ) ;
// image == url or b64_string
}
Creates a variation of a given image.
using var api = new OpenAIClient ( ) ;
var request = new ImageVariationRequest ( imageAssetPath , size : ImageSize . Small ) ;
var imageResults = await api . ImagesEndPoint . CreateImageVariationAsync ( request ) ;
foreach ( var image in imageResults )
{
Console . WriteLine ( image ) ;
// image == url or b64_string
}
Files are used to upload documents that can be used with features like fine-tuning.
The Files API is accessed via OpenAIClient.FilesEndpoint
Returns a list of files that belong to the user's organization.
using var api = new OpenAIClient ( ) ;
var fileList = await api . FilesEndpoint . ListFilesAsync ( ) ;
foreach ( var file in fileList )
{
Console . WriteLine ( $" { file . Id } -> { file . Object } : { file . FileName } | { file . Size } bytes " ) ;
}
Upload a file that can be used across various endpoints. The size of all files uploaded by one organization can be up to 100 GB.
The size of an individual file can be a maximum of 512 MB. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.
using var api = new OpenAIClient ( ) ;
var file = await api . FilesEndpoint . UploadFileAsync ( " path/to/your/file.jsonl " , FilePurpose . FineTune ) ;
Console . WriteLine ( file . Id ) ;
Delete a file.
using var api = new OpenAIClient ( ) ;
var isDeleted = await api . FilesEndpoint . DeleteFileAsync ( fileId ) ;
Assert . IsTrue ( isDeleted ) ;
Returns information about a specific file.
using var api = new OpenAIClient ( ) ;
var file = await api . FilesEndpoint . GetFileInfoAsync ( fileId ) ;
Console . WriteLine ( $" { file . Id } -> { file . Object } : { file . FileName } | { file . Size } bytes " ) ;
Downloads the file content to the specified directory.
using var api = new OpenAIClient ( ) ;
var downloadedFilePath = await api . FilesEndpoint . DownloadFileAsync ( fileId , " path/to/your/save/directory " ) ;
Console . WriteLine ( downloadedFilePath ) ;
Assert . IsTrue ( File . Exists ( downloadedFilePath ) ) ;
Manage fine-tuning jobs to tailor a model to your specific training data.
Related guide: Fine-tune models
The Fine-tuning API is accessed via OpenAIClient.FineTuningEndpoint
Creates a job that fine-tunes a specified model from a given dataset.
The response includes details of the enqueued job, including job status and the name of the fine-tuned model once it is complete.
using var api = new OpenAIClient ( ) ;
var fileId = " file-abc123 " ;
var request = new CreateFineTuneRequest ( fileId ) ;
var job = await api . FineTuningEndpoint . CreateJobAsync ( Model . GPT3_5_Turbo , request ) ;
Console . WriteLine ( $" Started { job . Id } | Status: { job . Status } " ) ;
List your organization's fine-tuning jobs.
using var api = new OpenAIClient ( ) ;
var jobList = await api . FineTuningEndpoint . ListJobsAsync ( ) ;
foreach ( var job in jobList . Items . OrderByDescending ( job => job . CreatedAt ) )
{
Console . WriteLine ( $" { job . Id } -> { job . CreatedAt } | { job . Status } " ) ;
}
Gets info about a fine-tuning job.
using var api = new OpenAIClient ( ) ;
var job = await api . FineTuningEndpoint . GetJobInfoAsync ( fineTuneJob ) ;
Console . WriteLine ( $" { job . Id } -> { job . CreatedAt } | { job . Status } " ) ;
Immediately cancel a fine-tuning job.
using var api = new OpenAIClient ( ) ;
var isCancelled = await api . FineTuningEndpoint . CancelFineTuneJobAsync ( fineTuneJob ) ;
Assert . IsTrue ( isCancelled ) ;
Get status updates for a fine-tuning job.
using var api = new OpenAIClient ( ) ;
var eventList = await api . FineTuningEndpoint . ListJobEventsAsync ( fineTuneJob ) ;
Console . WriteLine ( $" { fineTuneJob . Id } -> status: { fineTuneJob . Status } | event count: { eventList . Events . Count } " ) ;
foreach ( var @event in eventList . Items . OrderByDescending ( @event => @event . CreatedAt ) )
{
Console . WriteLine ( $" { @event . CreatedAt } [ { @event . Level } ] { @event . Message } " ) ;
}
Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours at a 50% discount.
The Batches API is accessed via OpenAIClient.BatchesEndpoint
List your organization's batches.
using var api = new OpenAIClient ( ) ;
var batches = await api . BatchEndpoint . ListBatchesAsync ( ) ;
foreach ( var batch in batches . Items )
{
Console . WriteLine ( batch ) ;
}
Creates and executes a batch from an uploaded file of requests.
using var api = new OpenAIClient ( ) ;
var batchRequest = new CreateBatchRequest ( " file-id " , Endpoint . ChatCompletions ) ;
var batch = await api . BatchEndpoint . CreateBatchAsync ( batchRequest ) ;
Retrieves a batch.
using var api = new OpenAIClient ( ) ;
var batch = await api . BatchEndpoint . RetrieveBatchAsync ( " batch-id " ) ;
// you can also use convenience methods!
batch = await batch . UpdateAsync ( ) ;
batch = await batch . WaitForStatusChangeAsync ( ) ;
Cancels an in-progress batch. The batch will be in the status cancelling for up to 10 minutes before changing to cancelled, at which point it will have partial results (if any) available in the output file.
using var api = new OpenAIClient ( ) ;
var isCancelled = await api . BatchEndpoint . CancelBatchAsync ( batch ) ;
Assert . IsTrue ( isCancelled ) ;
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
Related guide: Embeddings
The Embeddings API is accessed via OpenAIClient.EmbeddingsEndpoint
Creates an embedding vector representing the input text.
using var api = new OpenAIClient ( ) ;
var response = await api . EmbeddingsEndpoint . CreateEmbeddingAsync ( " The food was delicious and the waiter... " , Models . Embedding_Ada_002 ) ;
Console . WriteLine ( response ) ;
Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.
Related guide: Moderations
The Moderations API is accessed via OpenAIClient.ModerationsEndpoint
Classifies whether text violates OpenAI's content policy.
using var api = new OpenAIClient ( ) ;
var isViolation = await api . ModerationsEndpoint . GetModerationAsync ( " I want to kill them. " ) ;
Assert . IsTrue ( isViolation ) ;
Additionally, you can also get the scores for a given input.
using var api = new OpenAIClient ( ) ;
var response = await api . ModerationsEndpoint . CreateModerationAsync ( new ModerationsRequest ( " I love you " ) ) ;
Assert . IsNotNull ( response ) ;
Console . WriteLine ( response . Results ? [ 0 ] ? . Scores ? . ToString ( ) ) ;