Paid Add-On
Farseer AI is a paid add-on and isn't enabled by default. To enable it for your workspace, reach out to the Farseer sales team.
What Is Farseer AI?
Farseer AI is a built-in assistant that answers questions about your model in natural language. Ask it for a summary of a variable, an explanation of how a number was calculated, the definition of a dimension, or anything else about the structure or contents of your workspace — and it produces an answer grounded in your real data.
Under the hood, the assistant runs Python code in an isolated sandbox using the Farseer SDK. It can fetch entities, evaluate formulas, search dimension members, and inspect uploaded files — all scoped to your workspace.
Opening the Chat
Click the Farseer AI entry in the side navigation. The chat opens in one of two layouts:
Modal — a floating panel that lets you keep the rest of the app visible.
Fullscreen — a focused, full-window view for longer conversations.
Use the layout toggle in the chat header to switch between them. Your choice is remembered across sessions.
Multi-Chat & History
Each topic you work on can live in its own chat. From the chat header you can:
Create a new chat — start a fresh conversation when switching topics.
Switch chats — open the chats dropdown to jump between recent ones.
Rename a chat — give it a clearer title than the auto-generated one.
Delete a chat — remove a conversation you no longer need.
When you send your first message in a workspace with no existing chats, a new chat is created automatically. A few seconds after the first reply arrives, a title based on what you asked is generated for it.
Asking Good Questions
The assistant gives sharper answers when you're concrete about what you're looking for. The more clearly you scope year, version, entity, and metric, the more directly you can act on the result.
Worth specifying whenever it's relevant:
| What to specify | Examples |
| --- | --- |
| Year | "for 2025" · "in 2024" · "2023 vs 2024" |
| Version | "in Actual" · "compare Actual and Plan" · "in Forecast" |
| Entity | "for Aurora" · "for the EMEA region" · "for the whole group" |
| Period | "for June" · "YTD through August" · "Q3 2025" |
| Metric | "Revenue" · "Gross Margin and EBITDA" · "OpEx ratio" |
| Format | "as a table" · "with a short comment" · "sorted by variance" |
If you leave something out, the assistant uses sensible defaults (Actual version, current year) and tells you which assumptions it applied — confirm or correct in the next message.
Refining a Vague Question
Instead of: "What's the gross margin?" Better: "What's the Gross Margin for Aurora in June 2025 in Actual? Include GM%."
Instead of: "Compare revenues." Better: "Compare revenue by sales channel for Nimbus for 2025 in Actual vs Plan."
Instead of: "Analyze costs." Better: "Analyze cost of goods sold per unit for Aurora for 2025 — compare Actual vs Plan and explain the difference."
Starter Prompts
When a chat is empty, Farseer AI shows a set of starter prompts to help you get going. They cover common entry points like:
Summarize a variable's purpose and how it's used.
Explain a dimension's structure and members.
Describe how a measure is computed.
Walk through a date-property setup.
Investigate a fixed-dimension behavior.
Explain on-demand calculation for a variable.
Diagnose calculation delay on a complex formula.
Click any prompt to use it as a starting point, or type your own question.
Worked Examples
These five patterns cover most of what you'll do day-to-day, from quick checks to executive summaries. The examples below use a sample group with four product lines: Aurora, Helios, Vertex, and Nimbus.
Example 1 — Single value lookup
Prompt: "What's the Gross Margin for Aurora in February 2025 in Actual? Include GM%."
What you get: The absolute Gross Margin for the period in your model's reporting currency, GM% as a percentage, and a note confirming the version and period that were used. Best for quickly verifying one specific number.
Example 2 — Tabular breakdown by dimension
Prompt: "Show revenue by sales channel for Nimbus for 2025 in Actual. Sort largest to smallest."
What you get: A table of revenue per channel (Direct, Wholesale, Online, Distribution, …), sorted descending, with a total row. Surfaces concentration and distribution across channels.
Example 3 — Cross-entity comparison
Prompt: "Compare Revenue, Gross Margin %, and OpEx ratio for all four product lines (Aurora, Helios, Vertex, Nimbus) for 2025 in Actual. Which line has the smallest gap between Actual and Plan?"
What you get: A comparison table with the three KPIs per product line in Actual, alongside Plan, with absolute and percentage variance per line — plus a comment identifying the smallest and largest gap to plan.
Example 4 — Variance analysis with drivers
Prompt: "What are the main drivers of the Gross Margin variance between Actual and Plan for the whole group YTD through Q1 2026? Break it down by Revenue, COGS, and Direct OpEx, and call out which product line contributes most to the variance."
What you get: A waterfall-style breakdown of the GM variance — Actual vs Plan — split by line item (Revenue, COGS, Direct OpEx). For each line: absolute variance and share of total variance. Identifies the entity that contributes the largest deviation from plan.
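The variance arithmetic behind a breakdown like this is straightforward. As a minimal pandas sketch of the calculation only (line items and figures are hypothetical, not pulled from a real model; costs are held as negative values so that actual minus plan gives the GM impact directly):

```python
import pandas as pd

# Hypothetical Actual vs Plan figures per line item (illustrative only,
# not real model data). Costs are signed negative so actual - plan
# yields each item's contribution to the Gross Margin variance.
df = pd.DataFrame({
    "line_item": ["Revenue", "COGS", "Direct OpEx"],
    "actual": [1_250_000, -610_000, -180_000],
    "plan": [1_200_000, -575_000, -170_000],
})

# Variance per line item, the net GM variance, and each item's
# share of the total absolute variance (its weight in the waterfall)
df["variance"] = df["actual"] - df["plan"]
net_variance = df["variance"].sum()
df["share"] = df["variance"].abs() / df["variance"].abs().sum()

print(df)
print(f"Net GM variance vs Plan: {net_variance:+,.0f}")
```

In this toy example, revenue above plan helps Gross Margin while the COGS and OpEx overruns offset most of it, which is exactly the shape the waterfall answer makes visible.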
Example 5 — Executive summary
Prompt: "Write an executive summary of group performance for H1 2026. Compare Actual vs Plan at the Revenue, Gross Margin, and EBITDA level — show absolute and percentage variance. Identify 2–3 key drivers, call out which entity or channel contributes most to the variance, and end with a short management comment with one concrete recommendation."
What you get: A structured summary in several parts:
KPI overview — table with Revenue, Gross Margin, and EBITDA for the group (Actual vs Plan, absolute and percentage)
Key drivers — 2–3 specific factors that explain the variance
Entity contribution — which entity over- or under-performs vs plan
Management commentary — narrative conclusion with a concrete recommendation
What the Assistant Can Do
The assistant has access to a single sandboxed Python tool with the Farseer SDK preinstalled. Through this tool it can:
Look up variables, dimension tables, and dimension members by name or fuzzy search.
Evaluate formulas against the live model — for example, summing revenue across a dimension slice.
Inspect uploaded files from the workspace.
Run pandas-style data manipulation in the sandbox to summarize, group, or compare results before responding.
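As an illustration of that last step, here is the kind of pandas summarization the assistant might run in the sandbox. The rows are a hypothetical query result; the Farseer SDK calls that would actually fetch them are not shown:

```python
import pandas as pd

# Hypothetical rows as they might come back from a model query
# (entity and channel names are illustrative, not a real SDK result)
rows = pd.DataFrame({
    "entity": ["Nimbus"] * 4,
    "channel": ["Direct", "Wholesale", "Online", "Distribution"],
    "revenue": [420_000, 310_000, 275_000, 95_000],
})

# Group by channel, sort descending, and compute a total:
# the shape of an answer to "revenue by sales channel, largest first"
by_channel = (
    rows.groupby("channel", as_index=False)["revenue"]
        .sum()
        .sort_values("revenue", ascending=False, ignore_index=True)
)
total = by_channel["revenue"].sum()

print(by_channel)
print(f"Total: {total:,.0f}")
```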
How Replies Work
Replies stream back as the assistant works. While a request is in flight, you'll see one of three live status messages:
Analyzing context — gathering the model details relevant to your question.
Executing Python — running queries or calculations in the sandbox.
Generating response — composing the final answer.
Only one request is processed per user at a time. If you try to send a new message while one is still running, you'll see a "previous message is processing" notice. You can cancel an in-flight request from the chat header.
Frequently Asked Questions
Can I ask questions in my own language?
Yes. The assistant understands any language your team uses and answers in the same language the question was asked in.
Can I ask follow-up questions?
Yes — within the same chat, the assistant retains context. After an executive summary you can ask "break down the GM variance for Aurora by month" without re-stating the year, version, or entity.
What if the answer isn't right?
Correct the assumption in your next message: "I didn't want YTD, just June" or "Use Plan instead of Actual." The assistant recalculates with the corrected scope.
What if the assistant doesn't know the answer?
If the data isn't in the model (e.g., a period that hasn't been imported yet), the assistant says so explicitly rather than inventing a number. Check whether the import or data load for that period has run.
How granular can I get?
The assistant can drill down to whatever level of detail your model supports — including individual dimension members, custom hierarchies, and line items at the lowest level of your P&L. For granular slices, name the entity or member explicitly.
Audit Trail
Chat activity is recorded in the workspace's audit log. Relevant audit types: CHAT.CREATE, CHAT.RENAME, CHAT.DELETE, CHAT.ASK. Admins can review who used the assistant and when from the Audit Logs page.
Read-Only by Design
The assistant only reads from your model — it never edits cells, changes variables, or runs imports. All data entry and model changes still happen through the regular sheets and admin tools.
