# OpenAI Chat Completion

Semantic Kernel JS provides a powerful interface for working with chat completions through the OpenAI API, letting developers build applications that generate human-like text responses from user input.
## Install the package

```shell
npm install --save semantic-kernel @semantic-kernel/openai
```
## Initialize the Kernel

```typescript
import { OpenAIChatClient } from '@semantic-kernel/openai';
import {
  FunctionChoiceBehavior,
  functionInvocation,
  Kernel,
  KernelArguments,
  kernelFunction,
} from 'semantic-kernel';

const openAIChatClient = new OpenAIChatClient({
  apiKey: 'YOUR_OPENAI_API_KEY',
  modelId: 'gpt-3.5-turbo',
})
  .asBuilder()
  // Enable automatic function invocation
  .use(functionInvocation)
  .build();

// Add the OpenAIChatClient to the Kernel
const kernel = new Kernel().addService(openAIChatClient);
```
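The snippet above hardcodes the API key for brevity. In practice you would read it from the environment instead of committing it to source; the `OPENAI_API_KEY` variable name below is a common convention, not something the library requires:

```typescript
// Read the API key from the environment rather than hardcoding it.
// OPENAI_API_KEY is an assumed variable name; use whatever your deployment provides.
const apiKey = process.env.OPENAI_API_KEY ?? '';
if (apiKey === '') {
  console.warn('OPENAI_API_KEY is not set; requests will fail to authenticate.');
}
```

You can then pass `apiKey` to the `OpenAIChatClient` constructor in place of the literal string.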
## Add your plugins

```typescript
const temperature = kernelFunction(({ loc }) => (loc === 'Dublin' ? '10' : '24'), {
  name: 'temperature',
  description: 'Returns the temperature in a given city',
  schema: {
    type: 'object',
    properties: {
      loc: { type: 'string', description: 'The location to return the temperature for' },
    },
  },
});

kernel.addPlugin({
  name: 'weather',
  description: 'Weather plugin',
  functions: [temperature],
});

const currentLocation = kernelFunction(() => (Math.random() > 0.5 ? 'Austin' : 'Dublin'), {
  name: 'currentLocation',
  description: 'Returns the current location',
});

kernel.addPlugin({
  name: 'location',
  description: 'Location plugin',
  functions: [currentLocation],
});
```
## Invoke a prompt

```typescript
const res = await kernel.invokePrompt(
  'Return the current temperature in my exact location. Do not ask me for my location.',
  {
    executionSettings: {
      functionChoiceBehavior: FunctionChoiceBehavior.Auto(),
    },
  }
);

console.log(`Result: ${res}`);
```
Or stream the response:

```typescript
const res = kernel.invokeStreamingPrompt(
  'Return the current temperature in my exact location. Do not ask me for my location.',
  {
    executionSettings: {
      functionChoiceBehavior: FunctionChoiceBehavior.Auto(),
    },
  }
);

for await (const message of res) {
  console.log(message);
}
```
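Each `message` in the loop above is a partial chunk of the reply. If you need the full text, you can accumulate the chunks yourself; here is a minimal sketch using a stand-in async generator (the real chunk type depends on the library):

```typescript
// Stand-in for a streaming response: an async iterable of text chunks.
async function* fakeStream(): AsyncGenerator<string> {
  yield 'The current ';
  yield 'temperature is 10.';
}

// Concatenate the chunks to reconstruct the complete message.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk;
  }
  return full;
}
```

The same `for await … of` pattern works on any async iterable, so `collect` could be pointed at the result of `invokeStreamingPrompt` as well.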