Class CreateMessageRequestParams
- Namespace: ModelContextProtocol.Protocol
- Assembly: ModelContextProtocol.Core.dll
Represents the parameters used with a sampling/createMessage request from a server to sample an LLM via the client.
public class CreateMessageRequestParams : RequestParams
- Inheritance: object → RequestParams → CreateMessageRequestParams
Remarks
See the schema for details.
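Examples
The following is a minimal sketch of a server-side sampling request. It assumes an IMcpServer whose connected client has declared sampling support, and that the SampleAsync extension method from McpServerExtensions is available; the TextContentBlock content shape is likewise an assumption based on this namespace's content types.
using System.Threading;
using System.Threading.Tasks;
using ModelContextProtocol.Protocol;
using ModelContextProtocol.Server;

static async Task<CreateMessageResult> SummarizeAsync(
    IMcpServer server, string text, CancellationToken cancellationToken)
{
    CreateMessageRequestParams request = new()
    {
        // The conversation to sample from; Messages is the required member.
        Messages =
        [
            new SamplingMessage
            {
                Role = Role.User,
                Content = new TextContentBlock { Text = $"Summarize: {text}" },
            },
        ],
        SystemPrompt = "You are a concise assistant.",
        IncludeContext = ContextInclusion.ThisServer,
        MaxTokens = 500,
        Temperature = 0.2f,
    };

    // The client performs the actual model call and returns the sampled message.
    return await server.SampleAsync(request, cancellationToken);
}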
Properties
IncludeContext
Gets or sets an indication as to which server contexts should be included in the prompt.
[JsonPropertyName("includeContext")]
public ContextInclusion? IncludeContext { get; init; }
Property Value
- ContextInclusion?
Remarks
The client may ignore this request.
MaxTokens
Gets or sets the maximum number of tokens to generate in the LLM response, as requested by the server.
[JsonPropertyName("maxTokens")]
public int? MaxTokens { get; init; }
Property Value
- int?
Remarks
A token is generally a word or part of a word in the text. Setting this value helps control response length and computation time. The client may choose to sample fewer tokens than requested.
Messages
Gets or sets the messages requested by the server to be included in the prompt.
[JsonPropertyName("messages")]
public required IReadOnlyList<SamplingMessage> Messages { get; init; }
Property Value
- IReadOnlyList<SamplingMessage>
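Examples
A sketch of a multi-turn prompt; the TextContentBlock content shape is an assumption based on this namespace's content types.
using System.Collections.Generic;
using ModelContextProtocol.Protocol;

IReadOnlyList<SamplingMessage> messages =
[
    new() { Role = Role.User, Content = new TextContentBlock { Text = "What does this server do?" } },
    new() { Role = Role.Assistant, Content = new TextContentBlock { Text = "It indexes your project files." } },
    new() { Role = Role.User, Content = new TextContentBlock { Text = "Summarize that in five words." } },
];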
Metadata
Gets or sets optional metadata to pass through to the LLM provider.
[JsonPropertyName("metadata")]
public JsonElement? Metadata { get; init; }
Property Value
- JsonElement?
Remarks
The format of this metadata is provider-specific and can include model-specific settings or configuration that isn't covered by standard parameters. This allows for passing custom parameters that are specific to certain AI models or providers.
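Examples
For instance, a server can build the element with System.Text.Json; the "top_k" setting below is an illustrative provider knob, not part of the MCP schema.
using System.Text.Json;

// Serialize an anonymous object into the JsonElement the property expects.
JsonElement metadata = JsonSerializer.SerializeToElement(new { top_k = 40 });
// Assign to CreateMessageRequestParams.Metadata; the client forwards it to the
// underlying provider unchanged.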
ModelPreferences
Gets or sets the server's preferences for which model to select.
[JsonPropertyName("modelPreferences")]
public ModelPreferences? ModelPreferences { get; init; }
Property Value
- ModelPreferences?
Remarks
The client may ignore these preferences.
These preferences help the client make an appropriate model selection based on the server's priorities for cost, speed, intelligence, and specific model hints.
When multiple dimensions are specified (cost, speed, intelligence), the client should balance these based on their relative values. If specific model hints are provided, the client should evaluate them in order and prioritize them over numeric priorities.
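Examples
An illustrative sketch; the hint name and weights are made up.
using ModelContextProtocol.Protocol;

var preferences = new ModelPreferences
{
    // Hints are evaluated in order and take precedence over the numeric priorities.
    Hints = [new ModelHint { Name = "claude-3-5-sonnet" }],
    // Relative weights, each in the 0-1 range.
    CostPriority = 0.3f,
    SpeedPriority = 0.8f,
    IntelligencePriority = 0.5f,
};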
StopSequences
Gets or sets optional sequences of characters that signal the LLM to stop generating text when encountered.
[JsonPropertyName("stopSequences")]
public IReadOnlyList<string>? StopSequences { get; init; }
Property Value
- IReadOnlyList<string>?
Remarks
When the model generates any of these sequences during sampling, text generation stops immediately, even if the maximum token limit hasn't been reached. This is useful for controlling generation endings or preventing the model from continuing beyond certain points.
Stop sequences are typically case-sensitive, and most LLMs stop generation only when a produced sequence exactly matches one of the provided strings. Common uses include ending markers like "END", punctuation like ".", or special delimiter sequences like "###".
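Examples
A brief sketch; the marker strings are illustrative, and the content shape follows the assumptions noted in the class-level example.
using ModelContextProtocol.Protocol;

var request = new CreateMessageRequestParams
{
    Messages =
    [
        new SamplingMessage
        {
            Role = Role.User,
            Content = new TextContentBlock { Text = "List three options, then write END." },
        },
    ],
    MaxTokens = 200,
    // Generation halts as soon as the model emits either exact string,
    // even if fewer than 200 tokens have been produced.
    StopSequences = ["END", "###"],
};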
SystemPrompt
Gets or sets an optional system prompt the server wants to use for sampling.
[JsonPropertyName("systemPrompt")]
public string? SystemPrompt { get; init; }
Property Value
- string?
Remarks
The client may modify or omit this prompt.
Temperature
Gets or sets the temperature to use for sampling, as requested by the server.
[JsonPropertyName("temperature")]
public float? Temperature { get; init; }
Property Value
- float?