OpenAI integration
OpenAI provides AI models you can use to generate responses from text (and, depending on your inputs, other formats). This integration lets your WeWeb backend call OpenAI from APIs and event triggers.
Use cases
- Generate a reply to a user message.
- Summarize text from a form submission.
- Rewrite content in a different tone.
- Classify text into categories you define.
Setup
- In OpenAI, create an API key.
- Copy your OpenAI Organization ID (from your organization settings).
- In WeWeb, go to the Integrations tab and open the OpenAI integration.
- Add your credentials for each environment you use (Editor, Staging, Production), then save:
  - API Key (shown as Secret Key or Restricted Key in WeWeb)
  - Organization ID
- Test with the Create response action in a simple API.
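For reference, here is a minimal sketch of the HTTP call the Create response action makes on your behalf, assuming the standard OpenAI Responses API. The `build_request` helper and its field names are illustrative, not WeWeb internals.

```python
import json

# Hypothetical sketch of the request behind the Create response action,
# assuming the standard OpenAI Responses API shape.
API_URL = "https://api.openai.com/v1/responses"

def build_request(api_key: str, organization_id: str, model: str, prompt: str) -> dict:
    """Assemble headers and body for a POST to /v1/responses."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",    # the API Key saved in WeWeb
            "OpenAI-Organization": organization_id,  # the Organization ID
            "Content-Type": "application/json",
        },
        "body": json.dumps({"model": model, "input": prompt}),
    }

req = build_request("sk-...", "org-...", "gpt-4.1", "Say hello.")
```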
Common pitfalls (setup & usage)
401 error (incorrect API key)
If you get a 401 error, the API key is missing, has a typo, or is no longer valid. Create a new key in OpenAI and replace the value saved in WeWeb.
403 error (insufficient permissions)
If you use a restricted key, it may not have access to the model or endpoint you selected. Update the key permissions in OpenAI, or use a key with access.
404 error (resource not found)
This usually happens when the Model value is wrong or no longer available to your account. Double-check the model ID and try again.
Streaming enabled but you expected a single JSON result
When Stream response is enabled in the OpenAI action, the response is sent progressively (in small chunks).
- If you want a single JSON response at the end, turn Stream response Off.
- If you want to show text progressively in the UI, keep Stream response On and forward chunks from your API to the interface.
Streaming OpenAI responses
You can stream OpenAI output so your interface receives updates while the model is generating.
Recommended streaming pattern (API → Interface)
1) Backend (API) workflow
- Add the Create response action (OpenAI).
- In the OpenAI action, turn Stream response On.
- While the OpenAI action streams, forward each chunk to the interface using Send streaming response:
  - Put Send streaming response in the OpenAI action’s loop so it runs once per chunk.
  - Set its Event Data to the current chunk.
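The backend loop above can be sketched in plain Python. Here `send_streaming_response` is a hypothetical stand-in for the WeWeb action, not a real function:

```python
# Each chunk OpenAI streams triggers one Send streaming response call,
# exactly like placing the action inside the OpenAI action's loop.
def forward_stream(openai_chunks, send_streaming_response):
    for chunk in openai_chunks:          # runs once per chunk
        send_streaming_response(chunk)   # Event Data = the current chunk

sent = []
forward_stream(
    [{"data": {"type": "response.output_text.delta", "delta": "Hel"}},
     {"data": {"type": "response.output_text.delta", "delta": "lo"}}],
    sent.append,
)
```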
2) Interface workflow (calling the API Endpoint)
- Create a global variable with type Array (example: aiStreamChunks).
- At the start of the interface workflow, reset that array to [] so you clear previous outputs.
- Call your API Endpoint and enable Handle stream in the action’s advanced options.
- For each incoming chunk:
  - Check if it includes text you want to display. A common check is: chunk.data.type Equals response.output_text.delta.
  - If yes, take chunk.data.delta (the text piece) and append it to the end of your global array.
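The interface-side steps above can be sketched as plain Python, with the aiStreamChunks array represented as a local list. The chunk shape (chunk.data.type, chunk.data.delta) follows the streaming section of this page:

```python
# Keep only the chunks that carry a piece of output text, and push each
# delta onto the array, in arrival order.
def collect_text_deltas(chunks):
    ai_stream_chunks = []  # reset to [] at the start of the workflow
    for chunk in chunks:
        data = chunk.get("data", {})
        if data.get("type") == "response.output_text.delta":
            ai_stream_chunks.append(data.get("delta", ""))
    return ai_stream_chunks
```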
3) Bind it to the UI
- Add a Text element.
- Bind it to your global array (example: aiStreamChunks).
- Use the Merge formula to combine the array into a readable string (for example, merge with an empty separator or a space).
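What the Merge formula does, expressed in code, is simply join the array of chunks into one string with a chosen separator:

```python
# Combine the collected chunks into a readable string, like Merge.
def merge(chunks, separator=""):
    return separator.join(chunks)
```

With an empty separator, `merge(["Wel", "come", " aboard"])` produces "Welcome aboard".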
What you receive while streaming
While streaming is enabled, the interface receives a sequence of “chunk” objects. Each chunk includes a type (what kind of update it is) and a data payload. For text streaming, you’ll usually use the chunks where:
- chunk.data.type is response.output_text.delta
- chunk.data.delta contains the next piece of text
If you need a full walkthrough of streaming API calls, see Handling streaming responses →.
All Actions
This integration currently provides three actions.
| Action | Description |
|---|---|
| Create response | Generate a text response from a model, with optional streaming |
| Create image | Generate image(s) from a prompt |
| Create speech | Convert text into audio |
Action details
Create response
Generate a response using an OpenAI model (Responses API). Supports system instructions, tools (e.g. web search, code interpreter), conversation continuity, and streaming.
Inputs
| Display Key | Example Input | Description | Restrictions |
|---|---|---|---|
| Model | "gpt-4.1" | Model ID used to generate the response | Required |
| Input type | "Text" | Whether the input is a single text prompt or an array of message objects | Text or Array |
| Input | "Write a 1-sentence welcome message for a new user." | Text prompt, or array of message objects with role and content (when Input type is Array) | Required |
| Instructions (Optional) | "You are a helpful assistant." | System instructions defining assistant behavior | — |
| Max Output Tokens (Optional) | 150 | Maximum number of tokens to generate | Number |
| Streaming options | | | |
| Stream response (Optional) | false | Return output progressively while generating; use stream.chunk and stream.chunks in subsequent steps | true or false |
| Advanced options | | | |
| Temperature (Optional) | 1 | Sampling temperature (0–2); higher = more random | Number |
| Store (Optional) | true | Whether to store the response for later retrieval (e.g. for Previous response ID) | true or false |
| Previous response ID (Optional) | "resp_123" | ID of a stored previous response to continue the conversation | Response must have been stored |
| Tools (Optional) | [{ "type": "web_search" }] | Tools the model may call (e.g. web_search, file_search, code_interpreter, image_generation) | Array of tool objects |
| Tool choice (Optional) | "auto" | Whether the model can use tools | auto or none |
| Parallel tool calls (Optional) | true | Allow parallel function calling when using tools | true or false |
| Metadata (Optional) | {"user_id": "user_123"} | Custom metadata attached to the response | Object |
Example output
```json
{
  "id": "resp_123",
  "object": "response",
  "model": "gpt-4.1",
  "output": [
    {
      "id": "msg_123",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Welcome aboard - we're glad you're here."
        }
      ]
    }
  ]
}
```

Documentation of API endpoint that powers action: OpenAI API – Create response (POST /v1/responses)
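To pull the generated text out of a response like the example above, walk the output array and collect the output_text parts. This is a sketch in plain Python, assuming the response shape shown:

```python
response = {
    "id": "resp_123",
    "object": "response",
    "model": "gpt-4.1",
    "output": [
        {"id": "msg_123", "type": "message", "role": "assistant",
         "content": [{"type": "output_text",
                      "text": "Welcome aboard - we're glad you're here."}]}
    ],
}

def extract_output_text(response: dict) -> str:
    """Concatenate every output_text part from every message item."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part.get("text", ""))
    return "".join(parts)
```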
Create image
Generate image output from a text prompt.
Inputs
| Display Key | Example Input | Description | Restrictions |
|---|---|---|---|
| Prompt | "A minimalist poster of a mountain at sunrise" | Text description of the image | Required |
| Model | "gpt-image-1" | Image model to use | Required |
| Number of Images (Optional) | 1 | Number of images to generate | dall-e-2 supports 1 to 10; dall-e-3 and GPT image models use 1 |
| Image Size (Optional) | "1024x1024" | Output dimensions | Available sizes depend on selected Model |
| Quality (Optional) | "high" | Output quality | Hidden when model is dall-e-2 |
| Style (Optional) | "vivid" | Visual style | dall-e-3 only |
| Response format (Optional) | "b64_json" | Return the image as base64 or a URL | dall-e-2 and dall-e-3 only. Valid: b64_json, url |
| Format (Optional) | "png" | Output file format | GPT image models only. Valid: png, jpeg, webp |
| Compression (Optional) | 80 | Compression level for the output image | GPT image models only. Used for jpeg or webp, from 0 to 100 |
| Background (Optional) | "transparent" | Background style for the generated image | GPT image models only. Valid: opaque, transparent |
Provider behavior to know:
- Selecting a different model can reset defaults for fields like Image Size, Quality, Style, and output settings.
- On dall-e-2, the Quality field is hidden and fixed to standard behavior.
- Transparent backgrounds require png or webp.
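The background/format constraint above can be captured as a small pre-flight check. This is an illustrative sketch, not WeWeb's actual validation:

```python
# Transparent backgrounds only work with png or webp output on GPT image models.
def background_format_ok(background: str, output_format: str) -> bool:
    if background == "transparent":
        return output_format in ("png", "webp")
    return True
```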
Documentation of API endpoint that powers action: OpenAI API – Create image (POST /v1/images/generations)
Create speech
Convert text to audio.
Inputs
| Display Key | Example Input | Description | Restrictions |
|---|---|---|---|
| Input Text | "Welcome to our app!" | Text to convert into speech | Required |
| Model | "tts-1" | Speech model | Required |
| Voice | "alloy" | Voice preset | Must be a supported voice |
| Response Format | "mp3" | Output audio format | mp3, opus, aac, flac, wav, pcm |
| Speed (Optional) | 1 | Playback speed | 0.25 to 4 |
Documentation of API endpoint that powers action: OpenAI API – Create speech (POST /v1/audio/speech)
Error handling
| Error code and type | Reason |
|---|---|
| 400 Bad Request | Invalid or missing data sent (for example, missing Model or Input). |
| 401 Unauthorized | Incorrect API key provided. |
| 403 Forbidden | Your key does not have access to the requested resource. |
| 404 Not Found | The resource does not exist (for example, an invalid Model ID). |
| 429 Too Many Requests | Rate limit reached or you exceeded your current quota. |
FAQs
Why does the streamed text look garbled or have missing spaces?
When streaming is enabled, OpenAI sends the text in very small pieces.
This means:
- A single piece is not always a full word or sentence.
- Spaces and punctuation can arrive in separate pieces.
To display streaming text, keep adding each new piece to the end of your existing text.
Why does my response have no text?
Sometimes the model returns something other than normal text (for example, it may try to use tools or return a different output type). Inspect the items in the output array to see what type of output was actually returned.
Why does my request get rate limited?
429 means you are sending too many requests too quickly, or you reached your usage limit. Reduce concurrency, add delays, and check your OpenAI plan and billing settings.
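A common way to handle 429s is to retry with an increasing delay. This is an illustrative sketch; `call_api` is a hypothetical stand-in for your own request function, expected to return a (status, body) pair:

```python
import time
import random

# Retry with exponential backoff plus jitter after each 429.
def call_with_backoff(call_api, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        status, body = call_api()
        if status != 429:
            return status, body
        # wait longer after each 429; jitter avoids synchronized retries
        time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
    return status, body
```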

