Example

import { ChatIflytekXinghuo } from "langchain/chat_models/iflytek_xinghuo"; // import path may vary by version
import { HumanMessage } from "langchain/schema";

const model = new ChatIflytekXinghuo();
const response = await model.call([new HumanMessage("Nice to meet you!")]);
console.log(response);

Hierarchy

  • BaseChatIflytekXinghuo
    • ChatIflytekXinghuo

Constructors

  • Parameters

    Returns ChatIflytekXinghuo

Properties

ParsedCallOptions: Omit<BaseLanguageModelCallOptions, never>
apiUrl: string
caller: AsyncCaller

Subclasses should use this async caller for any asynchronous calls they make, so those calls benefit from the built-in concurrency and retry logic.

domain: string
iflytekApiKey: string
iflytekApiSecret: string
iflytekAppid: string
max_tokens: number = 2048
streaming: boolean = false
temperature: number = 0.5
top_k: number = 4
verbose: boolean

Whether to print out response text.

version: string = "v2.1"
callbacks?: Callbacks
metadata?: Record<string, unknown>
tags?: string[]
userId?: string
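
The credential and sampling properties above can also be supplied as constructor fields. A minimal sketch (the field names are assumed to mirror the property names listed here; credentials can typically also be provided via environment variables):

const model = new ChatIflytekXinghuo({
  iflytekAppid: "YOUR_APP_ID",
  iflytekApiKey: "YOUR_API_KEY",
  iflytekApiSecret: "YOUR_API_SECRET",
  temperature: 0.5,
  max_tokens: 2048,
  version: "v2.1",
});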

Accessors

  • get callKeys(): string[]

    Returns string[]

Methods

  • Makes a single call to the chat model.

    Parameters

    Returns Promise<BaseMessage>

    A Promise that resolves to a BaseMessage.
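
    Example (a minimal sketch, assuming a model constructed as in the Example section above):

    const reply = await model.call([new HumanMessage("Tell me a joke.")]);
    console.log(reply.content);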

  • Makes a single call to the chat model with a prompt value.

    Parameters

    Returns Promise<BaseMessage>

    A Promise that resolves to a BaseMessage.

  • Calls the Xinghuo API completion.

    Parameters

    • request: ChatCompletionRequest

      The request to send to the Xinghuo API.

    • stream: true
    • Optional signal: AbortSignal

      The signal for the API call.

    Returns Promise<IterableReadableStream<string>>

    The response from the Xinghuo API.

  • Parameters

    • request: ChatCompletionRequest
    • stream: false
    • Optional signal: AbortSignal

    Returns Promise<ChatCompletionResponse>
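
    These overloads are used internally when the model talks to the Xinghuo API and are not usually called directly. A sketch of observing streamed output at the user level instead (assumes the standard handleLLMNewToken callback handler):

    const streamingModel = new ChatIflytekXinghuo({ streaming: true });
    await streamingModel.call(
      [new HumanMessage("Tell me a story.")],
      undefined,
      [{ handleLLMNewToken: (token: string) => { process.stdout.write(token); } }]
    );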

  • Generates chat based on the input messages.

    Parameters

    Returns Promise<LLMResult>

    A Promise that resolves to an LLMResult.
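
    Example (a sketch; generate takes an array of message lists and returns an LLMResult whose generations are grouped per input):

    const result = await model.generate([
      [new HumanMessage("What is 2 + 2?")],
      [new HumanMessage("Name a color.")],
    ]);
    console.log(result.generations[0][0].text);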

  • Generates a prompt based on the input prompt values.

    Parameters

    Returns Promise<LLMResult>

    A Promise that resolves to an LLMResult.

  • Parameters

    Returns Promise<number>

  • Get the identifying parameters for the model.

    Returns {
        streaming: boolean;
        version: string;
        chat_id?: string;
        max_tokens?: number;
        temperature?: number;
        top_k?: number;
    }

    • streaming: boolean
    • version: string
    • Optional chat_id?: string
    • Optional max_tokens?: number
    • Optional temperature?: number
    • Optional top_k?: number
  • Get the parameters used to invoke the model.

    Returns Omit<ChatCompletionRequest, "messages"> & {
        streaming: boolean;
    }

  • Invokes the chat model with a single input.

    Parameters

    Returns Promise<BaseMessageChunk>

    A Promise that resolves to a BaseMessageChunk.
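
    Example (a minimal sketch; invoke accepts a string, a prompt value, or a list of messages):

    const message = await model.invoke("Nice to meet you!");
    console.log(message.content);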

  • Type Parameters

    • WebSocketStream

    Parameters

    • options: WebSocketStreamOptions

    Returns Promise<WebSocketStream>

  • Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.

    Type Parameters

    • NewRunOutput

    Parameters

    Returns RunnableSequence<BaseLanguageModelInput, Exclude<NewRunOutput, Error>>

    A new runnable sequence.
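
    Example (a sketch; assumes StringOutputParser is available, and its import path may vary by version):

    import { StringOutputParser } from "langchain/schema/output_parser";

    const chain = model.pipe(new StringOutputParser());
    const answer = await chain.invoke([new HumanMessage("Say hi.")]);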

  • Predicts the next message based on a text input.

    Parameters

    • text: string

      The text input.

    • Optional options: string[] | BaseLanguageModelCallOptions

      The call options or an array of stop sequences.

    • Optional callbacks: Callbacks

      The callbacks for the language model.

    Returns Promise<string>

    A Promise that resolves to a string.
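
    Example (a minimal sketch):

    const completion = await model.predict("Translate 'hello' into French.");
    console.log(completion);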

  • Predicts the next message based on the input messages.

    Parameters

    Returns Promise<BaseMessage>

    A Promise that resolves to a BaseMessage.
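
    Example (a minimal sketch):

    const next = await model.predictMessages([new HumanMessage("What comes after Monday?")]);
    console.log(next.content);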

  • Returns SerializedLLM

    Deprecated

    Return a json-like object representing this LLM.

  • Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

    Parameters

    Returns AsyncGenerator<RunLogPatch, any, unknown>
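
    Example (a sketch; each yielded patch carries the jsonpatch ops described above):

    for await (const patch of model.streamLog("Nice to meet you!")) {
      console.log(JSON.stringify(patch.ops));
    }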

  • Returns Serialized

  • Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

    Parameters

    Returns AsyncGenerator<BaseMessageChunk, any, unknown>

  • Bind lifecycle listeners to a Runnable, returning a new Runnable. The Run object contains information about the run, including its id, type, input, output, error, startTime, endTime, and any tags or metadata added to the run.

    Parameters

    • params: {
          onEnd?: ((run, config?) => void | Promise<void>);
          onError?: ((run, config?) => void | Promise<void>);
          onStart?: ((run, config?) => void | Promise<void>);
      }

      The object containing the callback functions.

      • Optional onEnd?: ((run, config?) => void | Promise<void>)
          • (run, config?): void | Promise<void>
          • Called after the runnable finishes running, with the Run object.

            Parameters

            Returns void | Promise<void>

      • Optional onError?: ((run, config?) => void | Promise<void>)
          • (run, config?): void | Promise<void>
          • Called if the runnable throws an error, with the Run object.

            Parameters

            Returns void | Promise<void>

      • Optional onStart?: ((run, config?) => void | Promise<void>)
          • (run, config?): void | Promise<void>
          • Called before the runnable starts running, with the Run object.

            Parameters

            Returns void | Promise<void>

    Returns Runnable<BaseLanguageModelInput, BaseMessageChunk, BaseLanguageModelCallOptions>
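
    Example (a sketch; the listeners receive the Run object described above):

    const observed = model.withListeners({
      onStart: (run) => console.log("run started:", run.id),
      onEnd: (run) => console.log("run finished:", run.id),
    });
    await observed.invoke("Nice to meet you!");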

Generated using TypeDoc