AI command nodes
Basics
For an intro to command nodes and all non-AI commands, see Command nodes.
Details
🧪 Prompt workbench
When you trigger an AI command, the AI is given a prompt composed of a request + your data.
The challenge of writing prompts is that it's hard to get them right the first time. Getting it right requires rapid iteration cycles where you can compare and contrast results, and an understanding of exactly what gets sent to the AI when you run the command.
This is what the Prompt workbench was built for.
Commands with prompts
Many of the AI commands will ask for a prompt. The Prompt workbench is available for these, including:
- Ask AI (formerly Generic AI query)
- Ask AI (non-streaming) (formerly Ask AI)
- Make API request
- Generate image(s) with DALL-E
- Transcribe audio
- Text processing agent
How it works
Here's a quick primer on how it works:
- Building a command: When you use a command that requires a prompt, a button will appear called "Prompt workbench".
- Opening the prompt workbench: This opens up a new panel where you can do rapid iteration testing of a prompt and its parameters to see the kind of answer they yield.
- Node to test with (required): Set this so the workbench has a real node to test the prompt on. By default, the context is sent after the prompt.
- Custom prompt (required): This is where you build the prompt that gets sent to the AI. You must include some kind of reference or expression that embeds your content into the prompt; otherwise the job will run without context. A prompt expression looks like this: ${name}. For more, see below.
- Expanded prompt: Once you have composed your prompt with expressions, you can preview what gets sent to the AI. It shows your custom prompt with the expressions replaced by actual content from the node you're testing with. You'll also see an estimate of how many tokens the prompt will cost if you send it.
- Configure and test: Use the slider to adjust model creativity (temperature).
- Ready to test: When you're ready to send a test job, hit "Test AI completion". This does spend AI credits, which you can monitor by running the command Open GPT log monitor.
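For illustration, a custom prompt in the workbench might look like this (the Author field and the wording are made-up examples; ${name} and ${sys:context} are real prompt expressions, described below):

```
Summarize the note below in two sentences. Mention the author if one is set.

Title: ${name}
Author: ${Author}

${sys:context}
```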
Prompt expressions
Prompt expressions allow you to refer to parts of your content in a prompt.
Prompt expression keywords
When you use a command that generates new content in a draft panel, you can edit the prompt of the command directly in the prompt editor:
You can use @ to trigger a menu of options, to tweak the prompt to give you the output you want - you can also reference specific nodes here:
Full list of prompt expression keywords:
- Context (All data from current node, incl. all fields and children)
- Content (All content from current node, incl. the node's children, excluding content inherited from supertag)
- Chat (All content in chat, for use in chat-commands)
- Created At (Time when the node was created)
- Current Date (The date today)
- Current DateTime (Time and date right now)
- Date from day node (the date from ancestor day node)
- Date from calendar node (Date based on ancestor calendar node, if any)
- Description (The description of the node)
- Done time (What time a checkbox was checked by a user)
- Edited by (All users who have edited the node, most frequent editor at the top)
- Last modified at (The last time the node was modified)
- Last modified by (The last user who modified the node)
- Last edited time (The time of the last edit)
- Last edited by (The user who last edited the node)
- Node ID (The unique ID of the node)
- Node name (The name/title of the current node)
- Node URL (A URL if the node has one)
- Owner (of node) (Where this node lives)
- Tags (The tags on a node)
- Source (All content from source material from voice memos)
- Source or content (Source material from voice memo and derived content)
Prompt expressions (advanced)
Prompt expressions in Tana are built like Title expressions, with some extra functionality:
- To get the name of the node that you are targeting, use ${name}
- To reference field values, use ${field label}
- To reference the entire node context with all fields and children in Tana Paste format, use ${sys:context}
- To return a node's supertags, use ${sys:tags}
- To show a node's children (excluding content inherited from supertag), use ${sys:content}
- To insert the current date or date and time, use ${sys:currentDate} and ${sys:currentDateTime} respectively. Keep in mind that GPT doesn't know the current date or time natively.
- To return the URL and ID of the node, use ${sys:nodeURL} and ${sys:nodeId} respectively.
- All of the normal title expressions are also available.
- Extra: to get the full context of the field content, use a field definition.
Reminder: Prompt expressions only work in nodes. They do not work within references.
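For reference, ${sys:context} expands the node into Tana Paste format. A rough, hypothetical example of what that expansion can look like (the node, tag, and field names are invented):

```
- Weekly sync #meeting
  - Date:: May 1, 2025
  - Discussed the Q2 roadmap
  - Agreed to ship the beta next week
```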
Bonus: Expand References
This operator will include the expanded owned content of references that are direct children of the target. This is handy if you want to include content that is not directly related to the node you're running the command on, if you want the results of a search node reference to be included, or if there are field values you want expanded in the prompt.
All AI commands and parameters
Ask AI
Renamed from Generic AI query on Jan 21, 2025.
A generic AI query command that streams content into Tana (text will appear as soon as it's ready). Accepts user-provided OpenAI keys.
- Prompt
- Insert output strategy
- Target node
- Temperature
- Model to use
- Use your own AI key (check to use your own OpenAI key)
Ask AI (non-streaming)
Can be run on a single node, a single field, or can be configured with prompts, batch processing, temperature etc. Output will not be shown until the whole content is ready. Accepts user-provided OpenAI keys.
- Prompt
- Node filter
- Node context
- Field dependencies
- Target node
- Temperature
- Top P
- Suffix
- Best of
- Max tokens
- Model to use
- Stop sequences
- Presence penalty
- Frequency penalty
- Insert output strategy
- Combination prompt
- Batch prompt context
- Fill context window percentage
- Use your own AI key (check to use your own OpenAI key)
Make API request
Making an API request involves specifying the API endpoint, choosing the appropriate HTTP method, providing any necessary headers or authentication, and handling the response from the API. This process allows different software systems to exchange data and functionality seamlessly.
- Prompt
- Node filter
- Node context
- Target node
- URL
- Insert output strategy
- Payload
- API method
- Parse results
- Authorization header
- Headers
- Avoid using proxy
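As an illustration, here is a hypothetical configuration for fetching JSON from an external API. The endpoint and token are placeholders, and the values shown are examples, not defaults:

```
URL: https://api.example.com/notes/${name}
API method: GET
Authorization header: Bearer <your-api-token>
Headers: Accept: application/json
```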
Generate image(s) with DALL-E
Returns an AI-generated image using DALL-E.
- Prompt
- Node filter
- Node context
- Target node
- Image size
- Number of images to generate
- Metaprompt to enhance prompt with GPT-3
Cluster children with embeddings
Takes a list of children, gets embeddings, and does unsupervised clustering. Given that you have to manually specify the number of clusters, this is more of an experiment until we have server-side processing.
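Tana's implementation isn't public, but the idea can be sketched with a hand-rolled k-means over toy one-dimensional "embeddings" (real embeddings are high-dimensional vectors):

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Cluster 1-D points into k groups with plain k-means."""
    rng = random.Random(seed)
    # Start with k randomly chosen points as centroids.
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Toy "embeddings": two obvious groups, around 0.1 and 0.9.
embeddings = [0.1, 0.12, 0.09, 0.91, 0.88, 0.9]
groups = kmeans(embeddings, k=2)
```

After a few iterations the two groups separate cleanly; the manual step in the command corresponds to choosing `k` yourself.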
Fill in all empty AI fields
System command that uses the field's context to fill empty AI fields
Transcribe audio
Sends an audio file to WhisperAI and returns a transcription.
Autotag
Add tags using GPT interpretation
Add meeting bot
Send a meeting agent to a video call to transcribe what participants are saying. Many of these fields map onto the ones being synced by the Calendar integration.
- Meeting link (required): Where to find the link to the meeting
- Works well with Google Meet and Zoom. Coming soon: MS Teams
- Transcript field (required): Where to place the transcript of the meeting
- Attendees field: Where to find attendees for the meeting
- Having attendees here improves participant identification during the transcription process and when making meeting notes and attributing who said what
- Recording target: The link to the recording will be placed here. It will be deleted after 7 days.
- Meeting date field: Where to find the date of the meeting. The bot will join automatically when the meeting starts.
- If a meeting has no date filled in, the bot will join immediately when you hit the "Add meeting agent" button.
- Transcription provider: Which provider to use for transcription
- Meeting bot name: Name of the bot to use for the meeting
- When the agent joins the meeting, it will appear with the name set here, e.g. "Tam's assistant"
- Post process command: Command to run after transcription is completed. The default is a command with the Text processing agent, see below.
Text processing agent
Take a transcript (or any text), extract summaries and other things, put them in a target location.
General
- Transcript source (required): Where to find the transcript. The field with the text for processing.
- Could be a meeting transcript, article, voice memo or similar.
- Status attribute: Use this if you need to override the default system provided status attribute.
- You may be using several text processors on the same transcript. You can have each text processor triggered based on a different status field.
- Text processing agent mode: Which mode to run the text processing agent in. Available modes: meeting, generic. Defaults to meeting if not set.
- For standard meetings, we recommend meeting, as it has been optimized to process action items, entities, and attribution of ideas to attendees. For other uses, like processing a voice memo, we recommend generic.
Note: If all you define is the Transcript source, the processing agent will create a summary of the transcript, output as a child of the node.
Tag choices (optional)
- Attendees field: Where to find users to be used for speaker identification
- Field where attendees are listed. Most applicable to meetings where the meeting agent identifies speakers and can link them to the list in this field.
- Tags to use for entities: Entities (e.g. persons, companies) will be detected and replaced for these tags
- Tags to use for action items: Action items (e.g. decisions, todos) will be extracted for these tags
- Tags to use for item extraction: Looks for sections of the transcript that match these tags, and extracts them as nodes
Note: If no tags are defined, the processor will not look for anything specific and will only run the summary.
Targets (optional)
- Summary target: Where to place the summary of the meeting
- New entities target: Where to place discovered entities
- Action items target: Where to place discovered action items
- Extracted items target: Where to place extracted items
Note: If you don't set targets, outputs are placed as children of the node.
Prompts (optional)
- Summary prompt: Prompt to use for generating the summary
- Augment prompt: Prompt to use for augmenting the attendees
- Action items prompt: Prompt to use for detecting action items. Use ${sys:actionItemTags} to get configured tags
- Entities prompt: Prompt to use for detecting new entities. Use ${sys:entityTags} to get configured tags
- Extracted items user prompt: Additional prompt to use for extracting items (optional).
EXAMPLE PROMPT:
GOAL:
- You are a senior UX researcher tasked with finding relevant "user observations" in an onboarding transcript and reporting them back classified according to the provided tags.
TASK:
- Find all moments in the transcript when the user experiences any of the described types of observation found in the supplied tags.
- You work for Acme Inc., and it is the experiences of the user - not the
team member - you want to observe and capture. The person with the #team acme
tag works at Acme Inc. The person being onboarded is tagged #person.
- Find at least 10 items, and at least 2 of each type.
Disabling functions (optional)
- Disable augmentation: Checkbox. Do not enhance attendees with additional information
- Disable summary: Checkbox. Do not generate a summary
All parameters
Prompt
- Description: Prompt expression - same syntax as title expression, but can be multiline. ${sys:context} gives the context of the node and its children.
- Source: Tana
- Note: While you can use references in prompts, any variables like ${sys:context} will not convert. See FAQ on why prompt variables have to be plain nodes.
Node filter
- Description: Search query to filter nodes that this command can be run on
- Source: Tana
Node context
- Description: Node context for commands - defaults to node the command is run on
- Source: Tana
Field dependencies
- Description: References to other fields on the same node. If any of these fields are empty, and have AI turned on, then their AI prompts will be run before evaluating this command.
- Source: Tana
Target node
- Description: Defines what Tana object to insert the result into. Can be a reference to a template node, or a field reference. There are some special types of nodes that reference "universal" locations in Tana:
- Current day page
- Draft panel
- Library
- Source: Tana
Temperature
- Description: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. OpenAI generally recommends altering this or top_p but not both. Defaults to 0.
- Source: OpenAI
Top P
- Description: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. OpenAI generally recommends altering this or temperature but not both. Defaults to 1.
- Source: OpenAI
Suffix
- Description: A string that will be appended to the end of the text.
- Source: OpenAI
Best of
- Description: Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.
- Source: OpenAI
Max tokens
- Description: The maximum number of tokens to generate. Will be capped at maximum given prompt length. Defaults to maximum.
- Source: OpenAI
Model to use
- Description: Allows you to select from the models that are available to use in Tana.
- Source: Tana
Stop sequences
- Description: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. One sequence on each node.
- Source: OpenAI
Presence penalty
- Description: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Defaults to 0.
- Source: OpenAI
Frequency penalty
- Description: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0.
- Source: OpenAI
Combination prompt
- Description: After running the normal prompt over all of the context, this prompt will be run with ${sys:context} set to the concatenated output.
- Source: OpenAI
Batch prompt context
- Description: The presence of this field enables splitting up a long context into multiple prompts, which can optionally be combined with the Combination prompt. The field ${sys:context} is determined by the dropdown options, or by a reference to a field or a node.
- Source: OpenAI
Fill context window percentage
- Description: The context window size includes both the prompt and the response. This parameter takes a number from 0 to 1, and determines the percentage of the context window that the prompt is allowed to fill - both for individual and batched prompts. The default is 0.5, which means that the prompt and response are equal in size.
- Source: OpenAI
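As a sketch of the arithmetic, assuming a hypothetical 8,000-token context window and the default fill of 0.5:

```python
# Hypothetical numbers: the actual window size depends on the model you pick.
context_window = 8000
fill_percentage = 0.5  # the default

# The prompt may fill at most this share of the window...
max_prompt_tokens = int(context_window * fill_percentage)
# ...and the remainder is left for the response.
max_response_tokens = context_window - max_prompt_tokens

print(max_prompt_tokens, max_response_tokens)  # 4000 4000
```

Raising the percentage leaves less room for the response, which matters for batched prompts over long contexts.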
URL
- Description: URL, processed using title expressions
- Source: Tana
Insert output strategy
- Description: Default is adding as a child (except in fields, where default is replace)
- Source: Tana
Payload
- Description: Payload type
- Source: Tana
API method
- Description: Defaults to GET
- Source: Tana
Parse results
- Description: Defaults to no parsing (raw)
- Source: Tana
Authorization header
- Description: For authentication (for example "Bearer ....")
- Source: Tana
Headers
- Description: API headers, must start with a word and a colon, like "Authorization: ..."
- Source: Tana
Avoid using proxy
- Description: For local sites, or where you know there is no CORS issue
- Source: Tana
Tags
- Description: Define supertags
- Source: Tana
Image size
- Description: The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.
- Source: OpenAI
Number of images to generate
- Description: The number of images to generate. Must be between 1 and 10. For dall-e-3, only n=1 is supported.
- Source: OpenAI
Metaprompt to enhance prompt with GPT-3
- Description: n/a
- Source: Tana
Number of groups
- Description: Defaults to 3
- Source: OpenAI
Transcription Language
- Description: The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
- Source: OpenAI
View definition
- Description: View definition to apply to node
- Source: Tana
View type
- Description: Set view type by picking from dropdown.
- Source: Tana
Fields to set
- Description: Fields to set, and optionally values
- Source: Tana
Commands
- Description: Commands to execute
- Source: Tana
Tag candidates
- Description: Define supertags that become candidates
- Source: Tana
Fields to remove
- Description: Define fields that, if present, will be removed.
- Source: Tana
Move node target
- Description: Define node to be moved. Reference to a specific node, field or a dropdown option
- Source: Tana
Remove reference after moving node
- Description: Will remove the reference after moving node
- Source: Tana
Move original node
- Description: By default, if you run the command on a reference, the reference will be moved. This option can force the original node to be moved.
- Source: Tana
Done status to set
- Description: Gives you done state options to set. There are four options to choose from:
- Node has no checkbox
- Done
- Not done
- Toggle done status
- Source: Tana
Command to run
- Description: Write out the commands you want to run.
- If you're uncertain about how a command is written out, navigate to it through the command line, create a custom shortcut, then go to Settings > Private keyboard shortcuts, find the shortcut you just made, and expand it. You'll find the command written out in a node.
- Source: Tana
Look for children in field
- Description: Looks for children in a specific field, instead of the node. Add reference to field definition.
- Source: Tana
Relative date string
- Description: Write out a date string like this week, in two months, etc. Interpreted with prompt expressions.
- Source: Tana
Reference date
- Description: Either a date, or use Lookup field to reference a field. If reference date is May 1, and relative date is in two weeks, the output will be May 15th.
- Source: Tana
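Tana resolves the relative date internally; the example above works out like this (a sketch in Python, using the dates from the description):

```python
from datetime import date, timedelta

reference_date = date(2025, 5, 1)     # Reference date: May 1
relative_offset = timedelta(weeks=2)  # Relative date string: "in two weeks"
result = reference_date + relative_offset
print(result)  # 2025-05-15
```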
Date/time granularity
- Description: Allows you to specify the granularity of the date object. Use the dropdown to set year, month, week, day, hour, minute, second, or millisecond.
- Source: Tana
Set only start or end of date
- Description: Allows you to specify whether you want to set a Start or End date/time.
- Source: Tana
