Build an AI App

Generate Text

Make your first LLM call.

import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  // generateText makes a single, non-streaming call and resolves with the full response
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'Hello, world!',
  })
  console.log(result.text)
}
 
main()
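Before running this, make sure your provider API key is available. The dotenv import loads it from a local .env file; for the OpenAI provider that means an OPENAI_API_KEY entry. With that in place you can execute the file with any TypeScript runner, for example tsx.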

Let's try asking about something recent.

import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text)
}
 
main()

No luck... The model's knowledge stops at its training cutoff, so it can't answer questions about recent events on its own.

Why don't we try Perplexity? Its sonar models search the web as part of generating a response.

import { perplexity } from '@ai-sdk/perplexity'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    model: perplexity('sonar'),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text)
}
 
main()

Awesome, that worked! While we're at it, let's try Google, whose Gemini models can ground their answers in Google Search results.

import { google } from '@ai-sdk/google'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    // useSearchGrounding lets Gemini ground its answer in Google Search results
    model: google('gemini-2.0-flash-001', { useSearchGrounding: true }),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text)
}
 
main()

As you can see, changing providers with the AI SDK is as simple as changing two lines of code: the provider import and the model you pass to generateText.
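As a sketch (not part of the walkthrough above), here is the same two-line swap for yet another provider. This assumes @ai-sdk/anthropic is installed, an ANTHROPIC_API_KEY is in your .env, and the claude-3-5-sonnet-latest alias is available on your account; note that, like the plain OpenAI call, this one has no web search, so it only illustrates the swap itself.

import { anthropic } from '@ai-sdk/anthropic'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    // only the import above and this model line differ from the earlier examples
    // (model ID is an assumption; substitute whichever Claude model you have access to)
    model: anthropic('claude-3-5-sonnet-latest'),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text)
}
 
main()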

Oh, and we can even see which sources were used to generate the text:

import { google } from '@ai-sdk/google'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    model: google('gemini-2.0-flash-001', { useSearchGrounding: true }),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text, result.sources)
}
 
main()
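result.sources is an array of source objects rather than plain strings, so you'll usually want to format it. A minimal sketch, assuming the url-type source shape (sourceType, url, and an optional title) returned by search-grounded providers:

import { google } from '@ai-sdk/google'
import { generateText } from 'ai'
import 'dotenv/config'
 
const main = async () => {
  const result = await generateText({
    model: google('gemini-2.0-flash-001', { useSearchGrounding: true }),
    prompt: 'When is the AI Engineer summit?',
  })
  console.log(result.text)
  // Print one line per grounding source; field names assume url-type sources.
  for (const source of result.sources) {
    if (source.sourceType === 'url') {
      console.log(`- ${source.title ?? source.url}: ${source.url}`)
    }
  }
}
 
main()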