import { ChatIflytekXinghuo } from "@langchain/community/chat_models/iflytek_xinghuo";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatIflytekXinghuo();
const response = await model.invoke([new HumanMessage("Nice to meet you!")]);
console.log(response);

Hierarchy

  • BaseChatIflytekXinghuo
    • ChatIflytekXinghuo

Constructors

Properties

apiUrl: string
domain: string
iflytekApiKey: string

API key to use when making requests. Defaults to the value of the IFLYTEK_API_KEY environment variable.

iflytekApiSecret: string

API secret to use when making requests. Defaults to the value of the IFLYTEK_API_SECRET environment variable.

iflytekAppid: string

APPID to use when making requests. Defaults to the value of the IFLYTEK_APPID environment variable.

max_tokens: number = 2048
streaming: boolean = false
temperature: number = 0.5

Amount of randomness injected into the response. Ranges from 0 (exclusive) to 1. Use a temperature closer to 0 for analytical or multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.5.

top_k: number = 4
version: string = "v2.1"

Model version to use. Available options are v1.1, v2.1, and v3.1.

userId?: string

ID of the end-user who made the request.
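The documented defaults above can be summarized in a small, self-contained sketch. The `XinghuoOptions` interface and `withDefaults` helper below are illustrative names, not part of the library (the real class additionally reads `iflytekAppid`, `iflytekApiKey`, and `iflytekApiSecret` from environment variables):

```typescript
// Illustrative sketch: the configurable properties and their documented defaults.
interface XinghuoOptions {
  max_tokens: number;
  streaming: boolean;
  temperature: number;
  top_k: number;
  version: string;
  userId?: string;
}

// Merge caller overrides on top of the documented default values.
function withDefaults(overrides: Partial<XinghuoOptions> = {}): XinghuoOptions {
  return {
    max_tokens: 2048,
    streaming: false,
    temperature: 0.5,
    top_k: 4,
    version: "v2.1",
    ...overrides,
  };
}

const analytical = withDefaults({ temperature: 0.1 }); // low randomness
const creative = withDefaults({ temperature: 0.9 });   // high randomness
```

Passing any subset of options leaves the remaining fields at their defaults, mirroring how constructor fields fall back to the values listed above.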

Accessors

Methods

  • Calls the Xinghuo API for a chat completion.

    Parameters

    • request: ChatCompletionRequest

      The request to send to the Xinghuo API.

    • stream: true
    • Optional signal: AbortSignal

      The abort signal for the API call.

    Returns Promise<IterableReadableStream<string>>

    A stream of response text from the Xinghuo API.

  • Parameters

    • request: ChatCompletionRequest
    • stream: false
    • Optional signal: AbortSignal

    Returns Promise<ChatCompletionResponse>

  • Get the identifying parameters for the model

    Returns {
        streaming: boolean;
        version: string;
        chat_id?: string;
        max_tokens?: number;
        temperature?: number;
        top_k?: number;
    }

    • streaming: boolean
    • version: string
    • Optional chat_id?: string
    • Optional max_tokens?: number
    • Optional temperature?: number
    • Optional top_k?: number
  • Get the parameters used to invoke the model

    Returns Omit<ChatCompletionRequest, "messages"> & {
        streaming: boolean;
    }
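The two completion overloads above use TypeScript's literal boolean types: passing `stream: true` yields a stream of text chunks, while `stream: false` yields a single response object. The sketch below is a simplified, self-contained illustration of that overload pattern, not the library's implementation (the real streaming overload returns an `IterableReadableStream<string>` and performs a network call):

```typescript
// Illustrative sketch: literal `stream: true` / `stream: false` parameter
// types select different return types for the same function.
interface ChatCompletionRequest {
  messages: Array<{ role: string; content: string }>;
}

interface ChatCompletionResponse {
  text: string;
}

// Overload signatures: the caller's literal boolean picks the return type.
function complete(request: ChatCompletionRequest, stream: true): string[];
function complete(request: ChatCompletionRequest, stream: false): ChatCompletionResponse;
function complete(
  request: ChatCompletionRequest,
  stream: boolean
): string[] | ChatCompletionResponse {
  // A canned reply stands in for the real API round-trip.
  const text = "Hello from Xinghuo";
  return stream ? text.split(" ") : { text };
}

const chunks = complete({ messages: [] }, true);  // typed as string[]
const reply = complete({ messages: [] }, false);  // typed as ChatCompletionResponse
```

Because the overloads are resolved at compile time, callers get the correct return type without casting, which is why the reference above lists two signatures for one method.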