Soracom Flux
AI Action
When your Flux App triggers an AI Action that calls an AI model, Flux Credits are consumed depending on the model and whether you're using your own API key. Refer to Available Models and Credit Consumption under Soracom Flux in our Pricing & Fee Schedule for more information.
The AI Action allows you to specify an AI model and prompt, send instructions or questions to the AI model, and then forward the AI model's response to the next channel.
Configuration
Condition
In the Action Condition section, you can specify the conditions under which the action is executed, using the values (Message or Context) sent from the event source to the channel. For more details on the expressions you can use, refer to Use Expressions in Actions.
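As an illustration only (the exact expression syntax is described in Use Expressions in Actions, and the `temp` field here is a hypothetical value in the incoming message), a condition that runs the action only when the reported temperature exceeds 30 might look like:

```
payload.temp > 30
```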
Config
You can configure the following.
AI Model
Select one of the following available AI models:
- Azure OpenAI (GPT-4o)
- Azure OpenAI (GPT-4o mini)
- OpenAI (GPT-4o mini) *1
- OpenAI (GPT-4o) *1
- Amazon Bedrock - Anthropic Claude 3.5 Haiku
- Amazon Bedrock - Anthropic Claude 3 Haiku
- Amazon Bedrock - Anthropic Claude 3.5 Sonnet
- Amazon Bedrock - Anthropic Claude 3 Opus
- Google Gemini 1.5 Flash
- Google Gemini 1.5 Pro
*1 - To use these AI models, you need to bring your own OpenAI API key.
AI models consume a set number of credits:
- When your Remaining Credit reaches 0, you will no longer be able to use AI actions that consume credits. In this case, use your own OpenAI API key to continue using AI actions.
- Consumed credits cannot be recovered.
- Soracom will provide plans to purchase credits in the future.
Credentials
If you select OpenAI (GPT-4o mini) or OpenAI (GPT-4o) in the AI Model menu, choose the credentials set where your OpenAI API key is registered. If you have not registered one yet, register your OpenAI API key in a credentials set as follows:
- Credentials Set ID: Enter any name to identify the credentials set. Example: OpenAI-API-Key
- Type: Select API Token Credentials.
- API Token: Enter your OpenAI API key.
Prompt
Enter the instructions or questions to be sent to the AI model.
You can enter up to 4096 characters.
- If you want to use data from the event source that triggers the AI action (e.g., the execution result of a Webhook action), specify it like ${payload.daily.temperature_2m_max[0]}.
- If you want to use data from the event source that triggers the Flux App (e.g., data sent by an IoT device belonging to a SIM group to the Unified Endpoint), specify it like ${event.payload.temp}. A sketch of a prompt using this syntax follows this list.
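For example, assuming the device sends a hypothetical `temp` field in its payload, a prompt that embeds device data might look like this:

```
The device reported a temperature of ${event.payload.temp} °C.
Briefly state whether this is within a normal indoor range and why.
```

When the action runs, the placeholder is resolved to the actual value from the event before the prompt is sent to the AI model.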
Format AI Response as JSON
Check this option to restrict the AI model's response to JSON format. If checked, also include instructions in the Prompt field telling the model to return its response as JSON.
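For instance, you might append an instruction like the following to your prompt (the field names here are purely illustrative):

```
Respond only with JSON in the form {"status": "<ok or alert>", "summary": "<one sentence>"}.
```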
Load Image into AI
Enable this option to send a still image to the AI model, and specify the URL of the image. For example, if you are using the Soracom Harvest Files event source in this Flux App, specify ${event.payload.presignedUrls.get} to send the file uploaded to Soracom Harvest Files to the generative AI.
Advanced Options: System Prompt
Enter a System Prompt. The System Prompt is available only when you specify Azure OpenAI (GPT-4o), OpenAI (GPT-4o mini), or OpenAI (GPT-4o) as the AI Model.
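As an illustration, a System Prompt typically sets the assistant's role and output constraints, for example:

```
You are an assistant that analyzes IoT sensor readings. Always answer in English, in two sentences or fewer.
```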
Output
Configure how to handle the output data of the action. Refer to Enable Republishing of Action Output for more details.
Output Data of the Action
The output data of the AI Action is as follows.
```json
{
  "output": {
    <response from AI model>
  },
  "usage": {
    "completion_tokens": 10,
    "prompt_tokens": 300,
    "total_tokens": 310,
    "model": "gpt-4o",
    "byol": false,
    "credit": 10
  }
}
```
The attributes are as follows:
Attribute | Description |
---|---|
output | Response from the AI model. For example, if you Enable Republishing of Action Output, you can retrieve the AI model's response from the destination channel using ${payload.output.xxx}. |
usage | Data related to AI model usage. |
usage.completion_tokens | Number of tokens generated by the AI model. |
usage.prompt_tokens | Number of tokens sent as prompts to the AI model. |
usage.total_tokens | Total of usage.completion_tokens and usage.prompt_tokens . |
usage.model | The AI model used. |
usage.byol | Whether your OpenAI API key was specified. |
usage.credit | Number of credits consumed. |
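As a purely illustrative example, if you checked Format AI Response as JSON and the model returned {"summary": "Temperature is normal."} inside output, a downstream action subscribed to the destination channel could reference that value as:

```
${payload.output.summary}
```

The field name summary is hypothetical here; the actual keys depend on the JSON structure you instruct the AI model to return in your prompt.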