

OpenAI integration

OpenAI provides AI models you can use to generate text responses, images, and speech from your prompts. This integration lets your WeWeb backend call OpenAI from APIs and event triggers.

Use cases

  • Generate a reply to a user message.
  • Summarize text from a form submission.
  • Rewrite content in a different tone.
  • Classify text into categories you define.

Setup

  1. In OpenAI, create an API key.
  2. Copy your OpenAI Organization ID (from your organization settings).
  3. In WeWeb, go to the Integrations tab and open the OpenAI integration.
  4. Add your credentials for each environment you use (Editor, Staging, Production), then save:
    • API Key (shown as Secret Key or Restricted Key in WeWeb)
    • Organization ID
  5. Test with the Create response action in a simple API.
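Under the hood, the credentials you save in WeWeb map onto standard OpenAI HTTP headers. A minimal Python sketch of that mapping, assuming WeWeb sends the usual `Authorization` and `OpenAI-Organization` headers (the key and organization values below are placeholders, not real credentials):

```python
def build_openai_headers(api_key, organization_id=None):
    """Return the HTTP headers an OpenAI API call needs."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # the API Key field
        "Content-Type": "application/json",
    }
    if organization_id:
        # Optional: scopes the request to a specific OpenAI organization
        headers["OpenAI-Organization"] = organization_id
    return headers

headers = build_openai_headers("sk-example-123", "org-example-456")
```

If a request fails with 401 despite a saved key, checking that the key would produce a valid `Authorization: Bearer …` header (no whitespace, no truncation) is a quick sanity test.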

Common pitfalls (setup & usage)

401 error (incorrect API key)

If you get a 401 error, the API key is missing, has a typo, or is no longer valid. Create a new key in OpenAI and replace the value saved in WeWeb.

403 error (insufficient permissions)

If you use a restricted key, it may not have access to the model or endpoint you selected. Update the key permissions in OpenAI, or use a key with access.

404 error (resource not found)

This usually happens when the Model value is wrong or no longer available to your account. Double-check the model ID and try again.

Streaming enabled but you expected a single JSON result

When Stream response is enabled in the OpenAI action, the response is sent progressively (in small chunks).

  • If you want a single JSON response at the end, turn Stream response Off.
  • If you want to show text progressively in the UI, keep Stream response On and forward chunks from your API to the interface.

Streaming OpenAI responses

You can stream OpenAI output so your interface receives updates while the model is generating.

1) Backend (API) workflow

  1. Add the Create response action (OpenAI).
  2. In the OpenAI action, turn Stream response On.
  3. While the OpenAI action streams, forward each chunk to the interface using Send streaming response.
    • Put Send streaming response in the OpenAI action’s loop so it runs once per chunk.
    • Set its Event Data to the current chunk.
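The backend loop above can be sketched in plain Python. Here `openai_stream` stands in for the chunks the Create response action yields while streaming, and `send_streaming_response` stands in for WeWeb's Send streaming response action; both names are hypothetical stand-ins, not real APIs:

```python
def openai_stream():
    # Stand-in for the chunks the OpenAI action yields while streaming
    yield {"type": "response.output_text.delta", "delta": "Hel"}
    yield {"type": "response.output_text.delta", "delta": "lo!"}
    yield {"type": "response.completed"}

forwarded = []

def send_streaming_response(event_data):
    # Stand-in for WeWeb's "Send streaming response" action
    forwarded.append(event_data)

# Runs once per chunk, mirroring step 3 above
for chunk in openai_stream():
    send_streaming_response(chunk)
```

The key point is that the forwarding action runs inside the loop, once per chunk, with the current chunk as its Event Data.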

2) Interface workflow (calling the API Endpoint)

  1. Create a global variable with type Array (example: aiStreamChunks).
  2. At the start of the interface workflow, reset that array to [] (so you clear previous outputs).
  3. Call your API Endpoint and enable Handle stream in the action’s advanced options.
  4. For each incoming chunk:
    • Check if it includes text you want to display.
    • If yes, push the new text to the end of your global array.

A common check is:

  • chunk.data.type Equals response.output_text.delta
  • Then take chunk.data.delta (the text piece) and append it
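The check-and-append logic can be sketched as follows, with sample chunks shaped like the ones described above (illustrative data only):

```python
# Sample chunks shaped like those the interface receives while streaming
chunks = [
    {"data": {"type": "response.created"}},
    {"data": {"type": "response.output_text.delta", "delta": "Wel"}},
    {"data": {"type": "response.output_text.delta", "delta": "come!"}},
    {"data": {"type": "response.completed"}},
]

ai_stream_chunks = []  # plays the role of the aiStreamChunks global variable

for chunk in chunks:
    # The common check: only text deltas carry displayable text
    if chunk["data"].get("type") == "response.output_text.delta":
        ai_stream_chunks.append(chunk["data"]["delta"])

# What a Merge-style binding with an empty separator would display
display_text = "".join(ai_stream_chunks)
```

Chunks whose type is anything else (lifecycle events, tool activity) are simply skipped, so the array only ever contains text pieces.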

3) Bind it to the UI

  1. Add a Text element.
  2. Bind it to your global array (example: aiStreamChunks).
  3. Use the Merge formula to combine the array into a readable string (for example, merge with an empty separator or a space).

What you receive while streaming

While streaming is enabled, the interface receives a sequence of “chunk” objects. Each chunk includes a type (what kind of update it is) and a data payload. For text streaming, you’ll usually use the chunks where:

  • chunk.data.type Is response.output_text.delta
  • chunk.data.delta Contains the next piece of text

If you need a full walkthrough of streaming API calls, see Handling streaming responses →.

All Actions

This integration currently provides three actions.

| Action | Description |
| --- | --- |
| Create response | Generate a text response from a model, with optional streaming |
| Create image | Generate image(s) from a prompt |
| Create speech | Convert text into audio |

Action details

Create response

Generate a response using an OpenAI model (Responses API). Supports system instructions, tools (e.g. web search, code interpreter), conversation continuity, and streaming.
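The action's inputs map onto the body of a `POST /v1/responses` request. A hedged sketch of that mapping, assuming the field names used by OpenAI's Responses API (`build_responses_payload` is a hypothetical helper, not part of WeWeb or OpenAI):

```python
def build_responses_payload(model, user_input, instructions=None,
                            max_output_tokens=None, stream=False, **advanced):
    """Assemble a /v1/responses request body from action-style inputs."""
    payload = {"model": model, "input": user_input, "stream": stream}
    if instructions:
        payload["instructions"] = instructions
    if max_output_tokens:
        payload["max_output_tokens"] = max_output_tokens
    # Advanced options pass through: temperature, store,
    # previous_response_id, tools, tool_choice, metadata, ...
    payload.update(advanced)
    return payload

payload = build_responses_payload(
    "gpt-4.1",
    "Write a 1-sentence welcome message for a new user.",
    instructions="You are a helpful assistant.",
    max_output_tokens=150,
    temperature=1,
    tools=[{"type": "web_search"}],
)
```

Conversation continuity works the same way: pass `previous_response_id` (with `store` enabled on the earlier call) as one of the advanced options.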

Inputs

| Display Key | Example Input | Description | Restrictions |
| --- | --- | --- | --- |
| Model | "gpt-4.1" | Model ID used to generate the response | Required |
| Input type | "Text" | Whether the input is a single text prompt or an array of message objects | Text or Array |
| Input | "Write a 1-sentence welcome message for a new user." | Text prompt, or array of message objects with role and content (when Input type is Array) | Required |
| Instructions (Optional) | "You are a helpful assistant." | System instructions defining assistant behavior | |
| Max Output Tokens (Optional) | 150 | Maximum number of tokens to generate | Number |

Streaming options

| Display Key | Example Input | Description | Restrictions |
| --- | --- | --- | --- |
| Stream response (Optional) | false | Return output progressively while generating; use stream.chunk and stream.chunks in subsequent steps | true or false |

Advanced options

| Display Key | Example Input | Description | Restrictions |
| --- | --- | --- | --- |
| Temperature (Optional) | 1 | Sampling temperature (0–2); higher = more random | Number |
| Store (Optional) | true | Whether to store the response for later retrieval (e.g. for Previous response ID) | true or false |
| Previous response ID (Optional) | "resp_123" | ID of a stored previous response to continue the conversation | Response must have been stored |
| Tools (Optional) | [{ "type": "web_search" }] | Tools the model may call (e.g. web_search, file_search, code_interpreter, image_generation) | Array of tool objects |
| Tool choice (Optional) | "auto" | Whether the model can use tools | auto or none |
| Parallel tool calls (Optional) | true | Allow parallel function calling when using tools | true or false |
| Metadata (Optional) | {"user_id": "user_123"} | Custom metadata attached to the response | Object |

Example output

```json
{
  "id": "resp_123",
  "object": "response",
  "model": "gpt-4.1",
  "output": [
    {
      "id": "msg_123",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Welcome aboard - we're glad you're here."
        }
      ]
    }
  ]
}
```
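To get the plain text out of a non-streamed response, walk the `output` array and collect the `output_text` parts. A minimal sketch using the example output above (`extract_output_text` is a hypothetical helper):

```python
response = {  # the example output above, abbreviated
    "id": "resp_123",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text",
                 "text": "Welcome aboard - we're glad you're here."}
            ],
        }
    ],
}

def extract_output_text(resp):
    """Concatenate every output_text part across all message items."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    return "".join(parts)

text = extract_output_text(response)
```

Iterating over all items (rather than hard-coding `output[0]`) matters because the array can also contain non-message items such as tool calls.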

Documentation for the API endpoint that powers this action: OpenAI API – Create response (POST /v1/responses)

Create image

Generate image output from a text prompt.

Inputs

| Display Key | Example Input | Description | Restrictions |
| --- | --- | --- | --- |
| Prompt | "A minimalist poster of a mountain at sunrise" | Text description of the image | Required |
| Model | "gpt-image-1" | Image model to use | Required |
| Number of Images (Optional) | 1 | Number of images to generate | dall-e-2 supports 1 to 10; dall-e-3 and GPT image models use 1 |
| Image Size (Optional) | "1024x1024" | Output dimensions | Available sizes depend on selected Model |
| Quality (Optional) | "high" | Output quality | Hidden when model is dall-e-2 |
| Style (Optional) | "vivid" | Visual style | dall-e-3 only |
| Response format (Optional) | "b64_json" | Return the image as base64 or a URL | dall-e-2 and dall-e-3 only. Valid: b64_json, url |
| Format (Optional) | "png" | Output file format | GPT image models only. Valid: png, jpeg, webp |
| Compression (Optional) | 80 | Compression level for the output image | GPT image models only. Used for jpeg or webp, from 0 to 100 |
| Background (Optional) | "transparent" | Background style for the generated image | GPT image models only. Valid: opaque, transparent |

Provider behavior to know:

  • Selecting a different model can reset defaults for fields like Image Size, Quality, Style, and output settings.
  • On dall-e-2, the Quality field is hidden and fixed to standard behavior.
  • Transparent backgrounds require png or webp.
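The "transparent requires png or webp" constraint can be enforced before the request is sent. A hedged sketch, assuming the `background` and `output_format` field names used by OpenAI's GPT image models (`build_image_payload` is a hypothetical helper):

```python
def build_image_payload(prompt, model="gpt-image-1", background="opaque",
                        output_format="png", **options):
    """Assemble an images/generations request body, validating the
    background/format combination noted above."""
    if background == "transparent" and output_format not in ("png", "webp"):
        raise ValueError("transparent background requires png or webp output")
    return {
        "prompt": prompt,
        "model": model,
        "background": background,
        "output_format": output_format,
        **options,
    }

payload = build_image_payload(
    "A minimalist poster of a mountain at sunrise",
    background="transparent",  # valid: default output_format is png
)
```

Failing fast like this surfaces the mistake in your own workflow instead of as a 400 error from the provider.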

Documentation for the API endpoint that powers this action: OpenAI API – Create image (POST /v1/images/generations)

Create speech

Convert text to audio.

Inputs

| Display Key | Example Input | Description | Restrictions |
| --- | --- | --- | --- |
| Input Text | "Welcome to our app!" | Text to convert into speech | Required |
| Model | "tts-1" | Speech model | Required |
| Voice | "alloy" | Voice preset | Must be a supported voice |
| Response Format | "mp3" | Output audio format | mp3, opus, aac, flac, wav, pcm |
| Speed (Optional) | 1 | Playback speed | 0.25 to 4 |
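The format and speed restrictions above can be checked before calling the endpoint. A minimal sketch, assuming the field names used by OpenAI's speech endpoint (`build_speech_payload` is a hypothetical helper):

```python
VALID_FORMATS = {"mp3", "opus", "aac", "flac", "wav", "pcm"}

def build_speech_payload(text, model="tts-1", voice="alloy",
                         response_format="mp3", speed=1.0):
    """Assemble an audio/speech request body, enforcing the documented
    format and speed restrictions."""
    if response_format not in VALID_FORMATS:
        raise ValueError(f"unsupported format: {response_format}")
    if not 0.25 <= speed <= 4:
        raise ValueError("speed must be between 0.25 and 4")
    return {
        "model": model,
        "input": text,
        "voice": voice,
        "response_format": response_format,
        "speed": speed,
    }

payload = build_speech_payload("Welcome to our app!", speed=1.25)
```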

Documentation for the API endpoint that powers this action: OpenAI API – Create speech (POST /v1/audio/speech)

Error handling

| Error code and type | Reason |
| --- | --- |
| 400 Bad Request | Invalid or missing data sent (for example, missing Model or Input). |
| 401 Unauthorized | Incorrect API key provided. |
| 403 Forbidden | Your key does not have access to the requested resource. |
| 404 Not Found | The resource does not exist (for example, an invalid Model ID). |
| 429 Too Many Requests | Rate limit reached or you exceeded your current quota. |
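Transient 429 errors are commonly handled with retries and exponential backoff. A minimal sketch with a simulated endpoint standing in for the real API call (`call_with_retry` and `fake_request` are hypothetical names, not WeWeb or OpenAI APIs):

```python
import time

def call_with_retry(request_fn, max_attempts=3, base_delay=0.01):
    """Retry a callable when it signals a 429, backing off exponentially."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status != 429:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # wait before retrying
    return status, body

attempts = {"n": 0}

def fake_request():
    # Simulated endpoint: rate-limited twice, then succeeds
    attempts["n"] += 1
    return (429, "rate limited") if attempts["n"] < 3 else (200, "ok")

status, body = call_with_retry(fake_request)
```

Backoff only helps with short-term rate limits; if the 429 comes from an exhausted quota, check your OpenAI plan and billing settings instead.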

FAQs

Why does the streamed text look garbled or have missing spaces?

When streaming is enabled, OpenAI sends the text in very small pieces.

This means:

  • A single piece is not always a full word or sentence.
  • Spaces and punctuation can arrive in separate pieces.

To display streaming text, keep adding each new piece to the end of your existing text.

Why does my response have no text?

Sometimes the model returns something other than plain text: for example, it may call a tool or emit a different output type. Inspect the type of each item in the response's output array to see what was actually returned.

Why does my request get rate limited?

429 means you are sending too many requests too quickly, or you reached your usage limit. Reduce concurrency, add delays, and check your OpenAI plan and billing settings.