Intended Use, Limitations & Transparency Documentation
Intended use, known limitations, and accuracy characteristics of Revial's AI-powered features, prepared to support enterprise customers under the EU AI Act, GDPR, and internal AI governance frameworks.
Version 1.1 | Created: March 2026 | Last updated: May 2026 | Classification: Internal
1. Purpose and Scope
This document describes the intended use, known limitations, and accuracy characteristics of Revial's AI-powered features. It is provided to support enterprise customers in meeting their obligations under the EU AI Act, GDPR, and internal AI governance frameworks.
Revial is a B2B sales enablement platform. All AI features are designed as decision-support tools. They assist sales teams in preparing for meetings, analysing conversations, and generating draft materials. No AI feature in Revial makes autonomous decisions or takes actions without human review.
These AI capabilities are intended solely as decision-support tools. They do not constitute professional advice, automated decision-making, or authoritative interpretations of business data. Users remain fully responsible for reviewing, validating and deciding how AI-generated outputs are used in their business processes.
Customer data processed within the Revial platform is used exclusively for providing the service to the customer. Customer data is not used to train general-purpose AI models and is not shared with third parties for model training purposes. This ensures that confidential business information remains protected and isolated from external AI training environments.
2. AI Features Overview
2.1 Meeting Briefing Generation
Intended use: Generate pre-meeting preparation documents that help salespeople understand a prospect's company, industry context, and relevant discussion topics.
Input data: Company name, industry, website content (via web search), uploaded documents, CRM data (if connected), previous meeting context for follow-up meetings, playbook configuration, language preference.
Output: A structured briefing document containing company overview, industry context, suggested discovery questions, and meeting preparation notes. Stored as structured data within the platform.
How triggered: Manually by the user, or automatically when a calendar-linked meeting is detected.
Limitations and accuracy:
- Company research relies on web search results, which may be outdated or incomplete.
- Web-sourced information is synthesised by the AI model and may contain inaccuracies. Users should verify key facts before relying on them in meetings.
- CRM data accuracy depends on the customer's own CRM hygiene.
- For follow-up meetings, context from previous meetings is included, but the AI may not capture every nuance of the earlier discussions.
- As with all Revial AI features, outputs may contain inaccuracies or incomplete interpretations. They are decision-support material, not a replacement for human judgment, and users are responsible for reviewing and validating AI-generated content before making business decisions based on it.
2.2 Transcript Summarisation
Intended use: Analyse meeting transcripts to produce summaries, extract discussed topics, identify next actions, and surface business signals (budget indicators, decision-maker identification, timeline references).
Input data: Full meeting transcript (produced by external transcription service), speaker identification, meeting metadata, playbook context, language preference.
Output: Executive summary, key discussion points, BANT analysis (Budget, Authority, Need, Timeline), buying signals, relationship health indicators, and suggested next actions.
How triggered: Automatically after a meeting transcription completes, or manually by the user.
Limitations and accuracy:
- Summary quality is directly dependent on transcription quality. Poor audio, overlapping speakers, or technical jargon may produce inaccurate transcripts, which propagate to summaries.
- BANT extraction is inferential. The AI may over-interpret casual mentions of budgets or timelines as firm signals.
- Speaker identification errors can misattribute statements.
- The AI produces a best-effort interpretation of the conversation; it can only analyse what participants said, not what they actually intended.
2.3 Coaching Insights
Intended use: Provide individual salespeople with feedback on their meeting performance to support self-directed skill development. Categories include discovery quality, objection handling, relationship building, and closing effectiveness.
Input data: Meeting transcript, speaker identification, speech metrics (talk time, speaking pace), transcript summary data, playbook context, language preference.
Output: Per-category scores (0–10), identified strengths, areas for improvement with specific suggestions, and questions that could have been asked.
How triggered: Automatically after transcript summarisation completes, or manually.
Limitations and accuracy:
- Coaching scores are subjective assessments produced by a language model, not objective measurements. They should be treated as directional feedback, not performance ratings.
- Scores are not benchmarked against industry averages or historical baselines.
- Communication style, cultural context, and relationship dynamics may not be fully captured.
- This feature is designed for the salesperson's own development. It is not intended for, and should not be used as, automated employee performance evaluation or HR decision-making.
Note on the EU AI Act Annex III, point 4(b)
We acknowledge that Annex III, point 4(b) of the EU AI Act covers not only AI systems used for employment-related decisions (recruitment, promotion, termination, task allocation, compensation) but also systems intended to be used for "monitoring and evaluating the performance and behaviour of persons in such relationships". Because Coaching Insights produces per-category scores (0–10) on aspects such as discovery quality, objection handling, relationship building, and closing effectiveness, this question warrants explicit treatment. Revial's position is that Coaching Insights falls outside the scope of Annex III, point 4(b) for the cumulative reasons set out below.
First, intended purpose. Annex III applies on the basis of the intended purpose of the AI system, not of any technically conceivable use. Coaching Insights is intended as a self-directed sales-skill development tool delivered to the salesperson themselves. Its design objective is to help the individual user reflect on a specific sales conversation in the context of a sales methodology, not to provide an employer, line manager, or HR function with a tool for assessing an employee's overall work performance or workplace behaviour. The intended purpose is therefore sales coaching, not workplace monitoring or evaluation in the sense of Annex III, point 4(b).
Second, scope and object of analysis. The categories analysed by Coaching Insights (discovery quality, objection handling, relationship building, closing effectiveness) and the underlying signals (talk time, speaking pace, transcript content of a single sales meeting) are sales-methodology constructs tied to the progression of a specific commercial opportunity. They are not measurements of the employee's general work conduct, productivity, attendance, compliance, or personality traits. The interest that Annex III, point 4(b) protects (shielding workers from AI-driven workplace surveillance and evaluation) is therefore not engaged in the same way as it would be for, for example, a contact-centre quality-monitoring system that scores agents for managerial review.
Third, contractual use restrictions and provider/deployer allocation. Revial's product documentation, including but not limited to this document, explicitly prohibits using Coaching Insights outputs for employee performance evaluation, HR processes, or any decision affecting employment terms, promotion, compensation, task allocation, or termination. Under Article 25 of the AI Act, a deployer who puts an AI system into service for a different intended purpose, including a high-risk purpose covered by Annex III, becomes a provider for that purpose and assumes the corresponding obligations. Revial does not place Coaching Insights on the market for an Annex III, point 4(b) use, and contractual controls reinforce that allocation of responsibility.
Fourth, Article 6(3) exemption as a secondary safeguard. Even if a regulator were to consider Coaching Insights to fall within the literal wording of Annex III, point 4(b), Article 6(3) of the AI Act provides that an Annex III system is not high-risk where it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. Coaching Insights performs a narrow procedural and preparatory task (post-hoc reflective feedback on a single conversation), does not profile the individual, does not produce inputs into automated employment decisions, and is always subject to the salesperson's own review. These characteristics align with the exemption criteria in Article 6(3), and the exemption is documented as part of Revial's risk assessment.
Conclusion. Taken together, the intended purpose of self-directed sales coaching, the narrow sales-methodology scope, the contractual prohibitions on HR use with the corresponding Article 25 reallocation, and the Article 6(3) safeguard lead Revial to conclude that Coaching Insights is not an AI system intended for the monitoring and evaluation of workers within the meaning of Annex III, point 4(b), and that it is therefore not classified as high-risk on that basis.
2.4 Summary Generation
Intended use: Generate summaries based on discovery answers, meeting notes, and selected products/services.
Input data: Questionnaire responses, sales notes, selected products/services (with pricing from the product catalogue), pitch materials, meeting transcript (if available), playbook configuration, language preference.
Output: A structured summary document containing problem framing, solution description, pricing breakdown, timeline, deliverables, and next steps.
How triggered: Manually by the user, or automatically when a meeting recording ends.
Limitations and accuracy:
- Pricing is sourced from the product catalogue; it is only as accurate as the data the user has configured there.
- Solution descriptions and problem framing are AI-generated and may include inaccurate claims about capabilities. Users must review and edit content before sharing with prospects.
- Timeline estimates are AI-generated without historical project data and should be treated as rough guidance.
- Summaries require human review. They are never sent to prospects automatically.
2.5 Follow-up Email Generation
Intended use: Draft follow-up emails in multiple styles (direct, reactivation, interim, value-add) based on proposal context and meeting history.
Input data: Proposal content, email type, user profile (name, title, contact details), meeting transcript (optional), playbook tone-of-voice settings, language preference.
Output: A draft email (typically 3–5 sentences) with sender signature.
How triggered: Manually by the user.
Limitations and accuracy:
- Emails are drafts for the user to review and edit before sending. They are never sent automatically.
- Generated text follows playbook tone settings but may not perfectly match the user's personal communication style.
- Content quality depends on proposal context completeness.
2.6 Signal Analysis
Intended use: Quantify deal momentum, solution fit, and identify potential blockers or positive sentiment patterns from meeting transcripts.
Input data: Meeting transcript, coaching analysis scores, speaker information, language preference.
Output: Deal momentum scores (0–100), solution fit scores, friction themes (identified blockers with supporting evidence), and sentiment patterns.
How triggered: Automatically after coaching analysis completes, or manually.
Limitations and accuracy:
- Scores are relative indicators, not absolute measurements. There is no universal benchmark for what constitutes a "good" deal momentum score.
- Theme identification is inferential. The AI may flag discussed topics as blockers when they were merely informational.
- Signal analysis should inform, not replace, the salesperson's own judgment about deal health.
2.7 Discovery Questionnaire Generation
Intended use: Generate structured discovery questionnaires for sales teams based on their industry, products, and sales methodology.
Input data: Organisation context, industry category, product/service information, language preference, optional focus areas.
Output: 8–12 structured questions mapped to sales methodology phases, with question types (select, multiselect, number, text).
How triggered: Manually during playbook setup, or during onboarding.
Limitations and accuracy:
- Generated questions are starting points that should be reviewed and customised by the sales team.
- Questions may be generic if the organisation context provided is sparse.
2.8 Product/Service Extraction
Intended use: Extract or infer product/service offerings from a company's website or uploaded documents to populate the product catalogue.
Input data: Website URL or uploaded file (PDF/Excel), requested number of solutions, language preference.
Output: Structured product/service descriptions with features, deliverables, and estimated timeframes.
How triggered: Manually by the user.
Limitations and accuracy:
- Website scraping may be incomplete or blocked by the target site.
- If fewer products are found than requested, the AI will infer additional offerings, which may not reflect the company's actual portfolio.
- All extracted products should be reviewed and corrected before use.
2.9 Stakeholder Communication Guidance (EBK)
Intended use: Generate targeted sales guidance and email templates for specific stakeholder profiles (e.g., CFO, technical lead, champion, sceptic).
Input data: Proposal content, target stakeholder profile (role and type), playbook settings, language preference.
Output: Internal guidance for the salesperson (profile analysis, recommended approach, potential objections) and a draft email for the stakeholder.
How triggered: Manually by the user.
Limitations and accuracy:
- Profile analysis is based on role archetypes, not the specific individual. Guidance should be adapted to the actual person.
- Generated emails are drafts requiring review before sending.
- Arguments are constrained to proposal content. No external research is performed.
2.10 Document Embedding (RAG)
Intended use: Process uploaded documents into searchable vector embeddings to provide relevant context during briefing generation.
Input data: Uploaded documents (PDF, Excel), document category and metadata.
Output: Searchable document chunks used to enrich AI-generated briefings with company-specific product information, contracts, or previous meeting notes.
How triggered: Automatically after document upload.
Limitations and accuracy:
- Document chunking may split information across boundaries, potentially losing context.
- Semantic search retrieves the most relevant chunks but may miss specific details.
- OCR or formatting issues in source documents propagate to the embedded content.
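To illustrate why chunk boundaries can lose context, the sketch below shows fixed-size chunking with overlap, a common pattern in RAG pipelines. The chunk size, overlap value, and function name are assumptions for illustration only, not Revial's actual parameters or implementation.

```python
# Illustrative sketch of fixed-size chunking with overlap, as commonly used
# in RAG pipelines. Parameters are hypothetical, not Revial's actual values.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows.

    Overlap reduces, but cannot eliminate, the risk that a fact is
    split across a chunk boundary and missed by semantic search.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

A sentence that straddles the end of one window and the start of the next survives only inside the overlap region, which is why boundary effects remain a documented limitation.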
3. Cross-Cutting Characteristics
3.1 Human Oversight
Every AI feature in Revial produces draft outputs for human review. No AI-generated content is published, sent, or acted upon without user interaction. Specifically:
- Summaries and emails are never sent to external recipients automatically.
- Briefings are preparation documents; they do not trigger any external actions.
3.2 Data Minimisation
Each AI workload receives only the data necessary for its specific task. For example, a transcript summary receives only the transcript text and relevant playbook context, not the organisation's full CRM data, billing information, or other users' data. Prompt templates are workload-specific and define exactly which data fields are included.
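The workload-scoped approach described above can be sketched as an allow-list per task: the prompt builder drops any field a workload has not declared. The workload names and field names below are hypothetical examples, not Revial's actual schema.

```python
# Illustrative sketch of data minimisation via workload-specific allow-lists.
# Workload and field names are hypothetical, not Revial's actual schema.

ALLOWED_FIELDS = {
    "transcript_summary": {"transcript_text", "playbook_context", "language"},
    "followup_email": {"proposal_content", "email_type", "user_profile", "language"},
}

def build_prompt_context(workload: str, available_data: dict) -> dict:
    """Return only the fields the given workload is permitted to receive."""
    allowed = ALLOWED_FIELDS.get(workload)
    if allowed is None:
        raise KeyError(f"unknown workload: {workload}")
    # Anything not on the allow-list (e.g. CRM exports, billing data,
    # other users' data) never reaches the prompt.
    return {k: v for k, v in available_data.items() if k in allowed}
```

The design choice is deny-by-default: adding a new data field to a prompt requires an explicit allow-list change rather than happening implicitly.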
3.3 Transparency and Labelling
All AI-generated content is presented in dedicated UI sections that are clearly distinguishable from human-created content. Users always know when they are viewing AI-generated output.
3.4 Language Support
AI features support multiple languages (English, Finnish, French, and others). Language consistency is enforced at the prompt level. However, mixed-language transcripts or uncommon terminology may reduce output quality.
3.5 Processing Timeouts
AI workloads operate under strict time budgets (60–120 seconds). In rare cases, complex inputs may cause a timeout, resulting in incomplete output. Users are notified when this occurs.
3.6 Model Providers
AI processing is performed via external API providers (primarily Google Gemini, with OpenAI as fallback) through the Requesty.ai routing layer. No customer data is used for model training by any provider. Data is processed in real time and not retained beyond the immediate request.
4. What Revial Is Not
To support enterprise deployers in their risk classification under the EU AI Act:
- Revial does not process biometric data (voice tone, facial expressions, body language).
- Revial does not infer emotions or emotional states. All analysis is performed on transcribed text.
- Revial does not make or support employment decisions (hiring, promotion, termination, compensation).
- Revial does not perform automated individual profiling with legal or significant effects.
- Revial does not generate scores intended for employee performance evaluation or HR processes.
- Revial does not autonomously contact prospects, send communications, or take actions on behalf of users.
5. Recommended Use
For optimal results, we recommend that enterprise users:
- Review all AI outputs before relying on them in external communications or decision-making.
- Verify factual claims in briefings against primary sources, especially for high-stakes meetings.
- Treat coaching scores as developmental feedback for individuals, not performance ratings.
- Edit proposals and emails to match their personal style and verify accuracy before sending.
- Maintain product catalogue accuracy, as proposal pricing depends on manually configured data.
- Provide complete context (questionnaire answers, meeting notes, CRM data) for higher-quality AI outputs.
6. Risk Mitigation & Safeguards
Revial manages AI-related risks through a structured set of technical and operational controls applied across all AI features.
6.1 Hallucination Risk Management
All AI outputs are grounded in the data explicitly provided to each prompt (transcript content, CRM fields, uploaded documents). Revial uses prompt constraints to prevent the model from generating content outside the defined scope of each task. System-level instructions explicitly prohibit speculation, invention of facts, or referencing information not present in the input. Where factual accuracy is critical — such as in meeting briefings or BANT analysis — outputs are accompanied by source references or confidence indicators to support user verification.
6.2 Output Validation & Evaluation Framework
Revial maintains an internal evaluation (eval) framework to systematically test AI output quality across key use cases. Evals are run against representative test sets whenever the underlying model or prompt templates are updated. Each eval set covers factual accuracy, task completion, and formatting consistency for the relevant feature. Results are reviewed before any change is deployed to production. This ensures that quality regressions are identified and resolved prior to reaching end users.
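The shape of such an eval harness can be sketched as test cases that pair a representative input with named checks on the model output. The case structure, check labels, and the fake model below are illustrative assumptions, not Revial's actual eval framework.

```python
# Minimal sketch of an eval harness: each case pairs a representative input
# with labelled checks on the generated output. Illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt_input: str
    checks: list  # list of (label, predicate over the model output)

def run_evals(generate: Callable[[str], str], cases: list) -> dict:
    """Run every case through `generate`; return pass/fail per check."""
    results = {}
    for case in cases:
        output = generate(case.prompt_input)
        results[case.name] = {label: bool(pred(output))
                              for label, pred in case.checks}
    return results
```

Running the same case set before and after a model or prompt change makes quality regressions visible as newly failing checks.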
7. Model Monitoring & Quality Assurance
7.1 Ongoing Performance Monitoring
Revial monitors AI model behaviour in production on an ongoing basis. Key quality signals — including output length, task completion rate, and user correction patterns — are tracked to detect degradation over time. Monitoring covers both technical performance (latency, error rates) and output quality (relevance, accuracy as indicated by user feedback). Any significant deviation from baseline quality thresholds triggers a review by the product and engineering team.
7.2 User Feedback Loop
Users can provide explicit feedback on AI outputs (e.g., thumbs up/down, edit actions, or direct reporting). This feedback is aggregated and reviewed regularly by the Revial product team. Patterns of negative feedback or frequent manual edits are treated as quality signals and feed directly into prompt improvement and eval test case development. Enterprise customers may additionally report quality concerns through their dedicated support channel.
8. Customer Governance & Admin Controls
8.1 Organisation-Level Configuration
Revial AI features are configurable at the organisation level, allowing enterprise customers to enable or disable specific AI capabilities according to their internal governance policies. This means an organisation can, for example, restrict AI-generated coaching insights from being visible to managers, or disable automated meeting briefing generation for specific teams. Configuration is managed through the admin panel and does not require involvement from Revial's engineering team.
8.2 Admin Controls & Role-Based Access
Organisation administrators have access to controls that govern how AI features operate within their environment. Current and planned admin capabilities include feature-level enable/disable per team or role, visibility settings for AI-generated content (e.g., limiting coaching score visibility to the individual salesperson only), and audit logging of AI feature usage. Enterprise customers with specific governance requirements are encouraged to discuss their needs with their Revial account manager, as custom configuration options may be available.
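As a rough sketch of how the controls above compose, the example below gates coaching-score visibility behind a feature flag first and a visibility setting second. The flag names, setting values, and role model are hypothetical, not Revial's admin API.

```python
# Illustrative sketch of org-level feature gating plus score visibility.
# All flag names and roles are hypothetical examples.

ORG_CONFIG = {
    "features": {"coaching_insights": True, "auto_briefings": False},
    "coaching_scores_visible_to": "salesperson_only",  # or "managers"
}

def can_view_coaching_scores(config: dict, viewer_role: str, is_owner: bool) -> bool:
    """Apply the feature flag first, then the visibility setting."""
    if not config["features"].get("coaching_insights", False):
        return False  # feature disabled for the whole organisation
    if config["coaching_scores_visible_to"] == "salesperson_only":
        return is_owner  # only the salesperson sees their own scores
    return viewer_role in ("manager", "admin") or is_owner
```

The ordering matters: the organisation-wide disable always wins, and the most restrictive visibility setting ("salesperson_only") supports the non-HR-use position described in section 2.3.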
Last updated: 8 May 2026