CreateMessageRequest

A request from the server to sample an LLM via the client.

The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it.

Important: This is a request from server to client, not client to server. The server is asking the client to use its LLM to generate a completion.

Human-in-the-loop: Clients should (a client-side sketch follows this list):

  1. Show the sampling request to the user before executing it

  2. Allow the user to approve or reject the request

  3. Show the generated response to the user before sending it back to the server
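A minimal sketch of these three steps, assuming a setRequestHandler-style hook on the client; the promptUserToApprove, runLocalModel, and promptUserToRelease helpers are hypothetical placeholders for the host application's own UI and model integration:

client.setRequestHandler<CreateMessageRequest>(Method.Defined.SamplingCreateMessage) { request, _ ->
    // 1. Show the sampling request and let the user approve or reject it.
    if (!promptUserToApprove(request)) {
        error("Sampling request rejected by the user")
    }

    // 2. Run the completion against whichever model the client selects.
    val result: CreateMessageResult = runLocalModel(request)

    // 3. Show the generated response before returning it to the server.
    if (!promptUserToRelease(result)) {
        error("Sampling response withheld by the user")
    }

    result
}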

Constructors

constructor(params: CreateMessageRequestParams)
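For illustration, a server-side construction sketch, assuming CreateMessageRequestParams mirrors the fields of the MCP sampling schema (exact parameter names and types may differ in this SDK):

val request = CreateMessageRequest(
    CreateMessageRequestParams(
        messages = listOf(
            SamplingMessage(
                role = Role.user,
                content = TextContent(text = "Summarize the build log."),
            ),
        ),
        systemPrompt = "You are a concise assistant.",
        maxTokens = 256,
        temperature = 0.7,
        stopSequences = listOf("\n\n"),
        modelPreferences = ModelPreferences(
            hints = listOf(ModelHint(name = "claude-3")),
            intelligencePriority = 0.8,
        ),
    )
)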

Properties

includeContext

A request to include context from one or more MCP servers (including the caller), to be attached to the prompt.

maxTokens

The requested maximum number of tokens to sample (to prevent runaway completions).

messages

The messages to use as context for sampling.

_meta

Metadata for this request. May include a progressToken for out-of-band progress notifications.

metadata

Metadata to pass through to the LLM provider. The format of this metadata is provider-specific.

method

open override val method: Method

modelPreferences

The server's preferences for which model to select.

params

The parameters for the sampling request, including messages and model preferences.

stopSequences

List of sequences that will stop generation if encountered.

systemPrompt

An optional system prompt the server wants to use for sampling.

temperature

Temperature parameter for sampling (typically 0.0-2.0).

Functions


Converts the request to a JSON-RPC request.
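On the wire, the converted request is a standard JSON-RPC 2.0 call whose method is sampling/createMessage (per the MCP specification); schematically, with illustrative values:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      { "role": "user", "content": { "type": "text", "text": "Summarize the build log." } }
    ],
    "systemPrompt": "You are a concise assistant.",
    "maxTokens": 256
  }
}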