Tana now offers a fully integrated AI meeting notetaker as part of the desktop experience. You can use it to transcribe meetings, generate summaries, and link notes directly to your work - without needing to add a bot in the call.
The meeting notetaker works for both calls and in-person meetings by recording system audio, and can also be used for lectures, interviews, and other audio sources. The feature is available on both Mac and Windows through the Tana desktop app and supports over 60 languages with automatic language detection.
Click the microphone button on any note tagged with an audio-enabled supertag.
Click the desktop notification that appears when Tana detects a meeting or when you've synced your Google Calendar to Tana.
Open the command line with Cmd+K (or Ctrl+K), choose “Start meeting with...”, and select the supertag you want to use (all audio-enabled tags will show).
When meeting transcription starts, you'll see a floating recording menu when Tana is not in focus, or a recording indicator in the sidebar when it is. This gives you quick controls to pause for privacy, or to end the meeting and start post-processing. Clicking the recording indicator opens the meeting note in Tana.
You'll see a Transcribing... button on the meeting that you can click to open the live transcription in the source material. This button remains available when you pause or stop the transcription, so you can revisit the transcript at any time.
Tana AI chat is available on all notes. To chat with a meeting note:
Press space on an empty line inside the meeting
Or click ✨ Ask AI about this content in the top right
Tana supports realtime transcription in over 60 languages. By default, transcription uses auto-detect, which can dynamically switch between languages during a meeting.
To change this:
Shift-click the microphone button before starting the transcription,
Click the globe icon in the transcription view while transcribing, or
Set a default language using the “Set default transcription language to...” command
The summary is generated in the spoken language. You can translate the summary using AI chat.
You can add context to improve the quality of transcription for difficult words or names. This is configured in the Realtime transcribe audio command (automatically added when you enable audio on a tag):
Add the Transcription context parameter to the Realtime transcribe audio command under "Compact menu".
Include names or other vocabulary manually, or reference a field (e.g. Attendees) to include dynamic context from the meeting.
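As a concrete illustration, a Transcription context value might look something like this (the names and terms below are made up; substitute your own):

```
Attendees: Priya Natarajan, Jörgen Lindqvist
Company terms: Tana, supertag, Soniox
Project names: Project Helios, Q3 roadmap
```

Referencing an Attendees field instead of typing names keeps this context in sync with each meeting automatically.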
When you click to stop transcription, Tana can run a post processing command. The post processing command is set up under the audio-enabled toggle in the supertag configuration.
The default command is the Text processing agent, which is adapted to work well for meeting transcripts. You can also set up your own custom post processing using command nodes.
By default, the Text processing agent gives you a summary based on the full meeting transcript. You can further customize its behavior; some of the available customizations are described below.
To automatically extract action items from your meeting:
Edit the Text processing agent in the meeting tag configuration.
Add the Action items target parameter, and reference the field where the items should be placed.
Add the Tag to use for action items parameter and reference the supertag to use for extraction.
Optional: Add the Action items prompt parameter to specify what should be extracted in more detail.
Action items will be extracted from the transcript and created as individual nodes with a tag suggestion. You can simply click the tag suggestion to apply the tag.
If you want extracted action items to be tagged directly, rather than just suggested, use the Add tags directly parameter.
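For example, an Action items prompt could narrow extraction to concrete commitments (the wording below is just an illustration, not a built-in default):

```
Extract only concrete commitments where a specific person agreed to do
something. Include the owner's name and any mentioned deadline in the
item text. Ignore vague ideas and open questions.
```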
You can add a notes field to your meeting to include your own meeting notes as context for the summary that the Text processing agent creates. This allows you to highlight key points during the meeting or add additional context to be included in the summary.
Add a field to your audio-enabled supertag (e.g. Notes).
Add the Meeting notes field parameter to the Text processing agent and reference the field.
Any notes written in the field will be included as context to the summary when the live transcription is stopped.
In addition to action items, you can extract other kinds of information from your meeting transcripts.
Item extraction is better suited for things that are not directly actionable, unlike tasks and follow-ups. It works well for more general extraction, for example user observations, learnings, and insights. There are no restrictions on what you can extract.
Edit the Text processing agent in the meeting tag configuration.
Add the Extracted items target parameter, and reference the field where the items should be placed.
Add the Tags to use for item extraction parameter and reference the supertag to use for extraction.
Optional: Add the Extracted items user prompt parameter to specify what should be extracted in more detail.
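As an illustration, an Extracted items user prompt for a user research call might read (example wording, not a default):

```
Extract each distinct user observation or pain point mentioned in the
call as a separate item. Phrase each item as a single factual sentence,
without interpretation or recommendations.
```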
You can also extract entities using the same method with the parameters New entities target, Tags to use for entities, and optionally Entities prompt. Entities can be things like companies, people, cities, and countries.
You can add a custom prompt to the Text processing agent to tailor the generated summary or extracted items. All functions have system prompts that are tailor-made for the task, but you can override these with the different prompt parameters if you want to customize the output.
In the supertag configuration, edit the Text processing agent.
Add the prompt parameter that you want to override: Summary prompt, Action items prompt, Extracted items user prompt, Entities prompt, or Augment prompt.
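For instance, a custom Summary prompt for weekly team meetings might look like this (illustrative only; adapt it to your meeting type):

```
Summarize the meeting in three sections: Decisions, Open questions,
and Next steps. Keep each section to at most five short bullet points,
and attribute decisions to the person who made them.
```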
Improved: Better transcription model that supports the 60 most common languages and is very good at recognizing language switching and automatically choosing the correct language. Used both for live transcription in the app and for mobile capture. Unsupported languages in the app fall back to Whisper (if set with the Set default transcription language command).
Fixed: Fixed transcription formatting and improved accuracy for Chinese language input.
New: We now detect meetings in Telegram, WhatsApp, Signal, and FaceTime.
Improved: Added a notification if we don't detect speech within 30 seconds of the AI meeting notetaker being turned on.
Fixed: We no longer send notifications for meetings that are in the trash.
Improved: Scheduled meeting notifications now open meetings and start transcription automatically.
Improved: More refinements to bot-less meetings to make audio quality less flaky, stop crashes when using multiple monitors, and use much less CPU in some cases.
Improved: Updated the default prompt for the Text processing agent to achieve better summary results for most meetings. We recommend tweaking your summary prompt for better results for specific meeting types.
Improved: Meeting notifications now stick around until at least two minutes into the meeting before being auto-dismissed.
Fixed: Fixed a bug where the audio command would override compact menu items when no commands were set in the full menu.
Info: Changed the default model for the Text processing agent to o4-mini.
Info: We now extract action items with the Text processing agent as long as there is an "Action items target". If you also want them tagged, configure "Tags to use for action items".
Improved: Some cleanup of Text processing agent parameters: removed some deprecated ones and moved the most important ones to the top. No change for already configured commands.
Fixed: Fixed an issue where applying a tag suggestion didn't properly initialize fields.
Fixed: Fixed an issue where switching languages during real-time transcription lost the transcription context.
Yes, Tana now offers a fully integrated bot-less AI meeting notetaker as part of the desktop experience. You can use it to transcribe meetings, generate summaries, and link notes directly to your work - without needing to install a plugin or add a bot.
How do I change the tags used in the meeting agent and calendar integration?
Dec 02, 2024
Supertags are created or assigned as part of the setup for your meeting agent/calendar integration. Here's a quick guide on how to change these tags post-setup.
Supertags that are used during calendar sync are listed under Classification of events in the Google Calendar settings:
Go to Account settings > Settings > Google Calendar settings > Classification of events
For more on what the different event types mean, see here.
When you change a tag, it needs to use the same fields that are set up for the calendar integration, otherwise the information will get lost. To check which fields are used during sync, go to Google Calendar settings > Advanced > Raw configuration:
The raw configuration defines all the fields that Tana will sync specific calendar event information to.
Supertags are used when processing the transcript to create meeting notes. These supertags are set during setup and can be changed by going into the command node that controls the meeting agent and the transcription process.
First, navigate to the Text processing agent command for your meetings. For instructions on how to find this, see Meeting agent > Text processing agent command.
You should now have the Text processing agent config open. Here you will see parameters like "Tags used for X", which is the list of supertags used when processing meetings.
To change the tags, you simply add or replace tags from these lists.
However, there are several things to consider when you add/replace from these lists:
Study the existing tag: Open the config for the tag you want to replace, and study how it's set up. Try to identify dependencies, like specific fields that are also used in the meeting config, and make sure your replacement tag covers as much of the original one as possible.
Set Base type: Make sure the new supertag you add has the appropriate Base type defined, and that it's suitable for the type of object it is:
Items typically use base tags: Project, Topic, Event
Entities typically use base tags: Person, Organization, Location
Action items typically use base tag Task
Decommission the old tag: Make sure the tag you're no longer using is merged with the one you're using, or deleted, to prevent clutter and confusion.
There are likely more things to consider, let us know if we missed any via Rate this article 👍👎.
Does the GPT log monitor show AI credits usage by the Meeting agent?
Sep 12, 2024
Only partially; the transcription of the meeting does not use OpenAI and is therefore not logged in the GPT log monitor.
Meetings use a service called Soniox and we are working with them to expose the AI credit cost of individual meetings in a simple way, but currently it's not possible.
The post-processing of the meeting transcript is however done using OpenAI and will be visible in the GPT log monitor.
How can I revoke a meeting agent from a meeting I'm not hosting?
Sep 05, 2024
We are looking into creating a button that will make the agent leave a meeting when you press it. Until then, it is possible to force the agent to stop transcribing and kickstart the meeting notes processing if necessary.
To do this:
Run the command Debug node on the meeting node
Set the Meeting bot status to Done
The agent should now have stopped transcribing, and the Text processing agent should start running.