Wrapper around Ali Tongyi large language models that use the Chat endpoint.

To use, you should have the ALIBABA_API_KEY environment variable set.

import { ChatAlibabaTongyi } from "@langchain/community/chat_models/alibaba_tongyi";
import { HumanMessage } from "@langchain/core/messages";

// Minimal setup; the key can also be read from ALIBABA_API_KEY.
const qwen = new ChatAlibabaTongyi({
  alibabaApiKey: "YOUR-API-KEY",
});

// Setup with an explicit model and temperature.
const qwenTurbo = new ChatAlibabaTongyi({
  model: "qwen-turbo",
  temperature: 1,
  alibabaApiKey: "YOUR-API-KEY",
});

const messages = [new HumanMessage("Hello")];

await qwenTurbo.invoke(messages);

Implements

  • AlibabaTongyiChatInput

Properties

apiUrl: string
model:
    | string & {}
    | "qwen-turbo"
    | "qwen-plus"
    | "qwen-max"
    | "qwen-max-1201"
    | "qwen-max-longcontext"
    | "qwen-7b-chat"
    | "qwen-14b-chat"
    | "qwen-72b-chat"
    | "llama2-7b-chat-v2"
    | "llama2-13b-chat-v2"
    | "baichuan-7b-v1"
    | "baichuan2-13b-chat-v1"
    | "baichuan2-7b-chat-v1"
    | "chatglm3-6b"
    | "chatglm-6b-v2"

Model name to use. Available options include qwen-turbo, qwen-plus, qwen-max, and other compatible models.

"qwen-turbo"
modelName:
    | string & {}
    | "qwen-turbo"
    | "qwen-plus"
    | "qwen-max"
    | "qwen-max-1201"
    | "qwen-max-longcontext"
    | "qwen-7b-chat"
    | "qwen-14b-chat"
    | "qwen-72b-chat"
    | "llama2-7b-chat-v2"
    | "llama2-13b-chat-v2"
    | "baichuan-7b-v1"
    | "baichuan2-13b-chat-v1"
    | "baichuan2-7b-chat-v1"
    | "chatglm3-6b"
    | "chatglm-6b-v2"

Model name to use. Available options include qwen-turbo, qwen-plus, qwen-max, and other compatible models. Alias for model.

"qwen-turbo"
streaming: boolean

Whether to stream the results or not. Defaults to false.
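When streaming is enabled, tokens can be consumed incrementally as they arrive. A minimal sketch using the standard .stream() runnable method, reusing the imports from the usage example above:

const qwen = new ChatAlibabaTongyi({
  model: "qwen-turbo",
  streaming: true,
  alibabaApiKey: "YOUR-API-KEY",
});

// Each chunk carries the newly generated tokens.
const stream = await qwen.stream([new HumanMessage("Tell me a joke")]);
for await (const chunk of stream) {
  console.log(chunk.content);
}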

alibabaApiKey?: string

API key to use when making requests. Defaults to the value of ALIBABA_API_KEY environment variable.

enableSearch?: boolean
maxTokens?: number
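enableSearch and maxTokens map to the enable_search and max_tokens request parameters listed under Methods below. A minimal sketch with illustrative values:

const qwen = new ChatAlibabaTongyi({
  model: "qwen-plus",
  alibabaApiKey: "YOUR-API-KEY",
  enableSearch: true, // sent as enable_search in the request
  maxTokens: 512,     // sent as max_tokens in the request
});
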
prefixMessages?: TongyiMessage[]

Messages to pass as a prefix to the prompt.
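A minimal sketch of supplying prefix messages, assuming TongyiMessage is a plain { role, content } record (the exact shape is defined by the library):

const qwen = new ChatAlibabaTongyi({
  model: "qwen-turbo",
  alibabaApiKey: "YOUR-API-KEY",
  // Assumption: TongyiMessage is a { role, content } record.
  prefixMessages: [
    { role: "system", content: "You are a concise assistant." },
  ],
});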

repetitionPenalty?: number

Penalizes repeated tokens according to frequency. Ranges from 1.0 to 2.0. Defaults to 1.0.

seed?: number
temperature?: number

Amount of randomness injected into the response. Ranges from 0 to 1, exclusive of 0. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. Defaults to 0.95.

topK?: number
topP?: number

Total probability mass of tokens to consider at each step. Ranges from 0 to 1.0. Defaults to 0.8.
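The sampling fields above (temperature, topK, topP, repetitionPenalty, seed) can be combined within their documented ranges. A minimal sketch tuned toward deterministic, analytical output; the values are illustrative, not recommendations:

const qwen = new ChatAlibabaTongyi({
  model: "qwen-max",
  alibabaApiKey: "YOUR-API-KEY",
  temperature: 0.1,       // near 0: analytical / multiple choice
  topP: 0.8,              // documented default probability mass
  topK: 40,               // illustrative value
  repetitionPenalty: 1.1, // mild penalty; documented range 1.0 to 2.0
  seed: 42,               // fixed seed for more repeatable sampling
});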

Accessors

  • identifyingParams: Get the identifying parameters for the model

    Returns {
        enable_search?: null | boolean;
        incremental_output?: null | boolean;
        max_tokens?: null | number;
        repetition_penalty?: null | number;
        result_format?: "message" | "text";
        seed?: null | number;
        stream?: boolean;
        temperature?: null | number;
        top_k?: null | number;
        top_p?: null | number;
    } & Pick<ChatCompletionRequest, "model">

Methods

  • invocationParams: Get the parameters used to invoke the model

    Returns {
        enable_search?: null | boolean;
        incremental_output?: null | boolean;
        max_tokens?: null | number;
        repetition_penalty?: null | number;
        result_format?: "message" | "text";
        seed?: null | number;
        stream?: boolean;
        temperature?: null | number;
        top_k?: null | number;
        top_p?: null | number;
    }

    • Optional enable_search?: null | boolean
    • Optional incremental_output?: null | boolean
    • Optional max_tokens?: null | number
    • Optional repetition_penalty?: null | number
    • Optional result_format?: "message" | "text"
    • Optional seed?: null | number
    • Optional stream?: boolean
    • Optional temperature?: null | number
    • Optional top_k?: null | number
    • Optional top_p?: null | number
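
A minimal sketch of inspecting these parameters at runtime, assuming the method is exposed as invocationParams() as named above:

const qwen = new ChatAlibabaTongyi({
  model: "qwen-turbo",
  alibabaApiKey: "YOUR-API-KEY",
});

// Logs the snake_case request parameters, e.g.
// { temperature: ..., top_p: 0.8, stream: false, ... }
console.log(qwen.invocationParams());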