In this document, we answer frequently asked questions about the usage of AI at Jam Technologies GmbH. Beyond purely technical or legal documentation, this document is meant to facilitate discussions with your internal stakeholders, such as the IT department or the works council.
General Remarks on AI Usage at Jam
Our mission at Jam Technologies GmbH is to provide every employee with a "personal coach."
This coach conducts realistic conversation simulations (voice-based role-plays with fictional personas powered by AI) with users and provides improvement-oriented personalized feedback (by analyzing transcripts based on transparently communicated "scorecards").
Our AI-powered coach helps employees improve their sales success, customer service, or leadership skills in a safe and supportive environment. At Jam, artificial intelligence serves exclusively to train employees' "soft skills" - we use it to develop human capabilities.
We are happy to support your internal approval process and discussions with internal stakeholders through the following FAQs. We are also happy to answer additional questions via email or video call.
System Overview & Technical Architecture
☑️ How Does Jam’s AI System Work?
- The Jam platform is a tool designed to support human teams through the use of AI, particularly through simulations of sales conversations ("role-plays"). We place the highest priority on data protection and security.
- Brief system description:
- Customer company shares information about the required training (scenarios, personas, "scorecards" with evaluation criteria, and possibly sales materials such as PDFs or guidelines)
- Jam and/or the customer company's training administrators create individual AI training scenarios based on this information
- The shared information flows exclusively as instructions into the AI models' prompts, can be viewed and modified at any time by training administrators, and is never used for "model training"
- Employee conducts simulated sales conversation via a voice-to-voice (V2V) interaction powered by a large language model (LLM)
- AI system (LLM + V2V) takes on the customer's role and responds in real-time
- Conversation is recorded and transcribed
- The transcript is analyzed by another LLM
- This LLM evaluates the transcript based on "scorecards" defined by Jam and the customer covering, e.g., conversation skills, objection handling, product knowledge, and customer orientation
- The LLM generates personalized feedback with improvement suggestions for the user
- Employee receives audio, transcript, and improvement-oriented AI feedback
- Trainers/coaches and - if desired - supervisors can access training results with the sole purpose of supporting employees in their development
In summary, the process consists of four phases:
1. Preparation ("content creation")
2. Conversation simulation ("training")
3. AI evaluation ("analysis")
4. Results ("feedback")
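To make the "training" phase concrete, here is a minimal, illustrative sketch of one role-play turn against the OpenAI chat API. The model name, persona wording, and function names are simplified placeholders, not Jam's production configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative persona prompt; in Jam, such prompts are authored and
# inspected by training administrators in Jam Studio.
PERSONA_PROMPT = (
    "You are 'Ms. Weber', a skeptical procurement manager. "
    "Stay in character, raise realistic price objections, and keep "
    "your answers short and conversational."
)

def customer_reply(conversation: list[dict]) -> str:
    """One simulation turn: the LLM answers in the fictional customer's role."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "system", "content": PERSONA_PROMPT}] + conversation,
    )
    return response.choices[0].message.content

# Example turn of a simulated sales conversation:
history = [{"role": "user", "content": "Hello Ms. Weber, thanks for taking my call."}]
print(customer_reply(history))
```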
☑️ Where is AI used in Jam's coaching solution?
- “AI” is used in several places in Jam’s app.
- Speech recognition/transcription: Converting spoken language to written language (see the sketch after this list)
- Conversation simulation: Large Language Model (LLM) conducts conversation simulation with the user by “playing” the role of a fictitious customer
- Feedback: Large Language Model (LLM) evaluates conversation transcript based on evaluation criteria transparently communicated to users
- Content editor: LLM supports training administrators as a "copilot" in creating training content based on Jam's templates
- Important to know:
- ✓ Data storage: All data is stored in the EU
- ✓ Transparency: Employees see their own data and evaluations
- ✓ Data protection: DPAs with all providers, no use for AI training
- ✓ Co-determination options: Access by supervisors to training data can be individually granted or denied
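As an illustration of the speech-recognition step, transcription reduces to a single call to a hosted STT model. The sketch below uses OpenAI's Whisper endpoint as an example, with a placeholder file name:

```python
from openai import OpenAI

client = OpenAI()

# Convert a recorded role-play (spoken language) into written language.
# "roleplay_recording.wav" is a placeholder file name.
with open("roleplay_recording.wav", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcription.text)  # this transcript later feeds the feedback LLM
```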
☑️ What Data Sources Does the AI System Use - and How Does It Ensure Data Reliability and Integrity?
- The AI models used at Jam (currently primarily OpenAI's GPT models) were trained by the model providers using large amounts of publicly available texts such as books, websites, databases, etc.
- Neither these model providers nor Jam itself ever use customer data, uploaded documents, images, videos, or audio data from operational use for training the underlying AI models.
- The only way non-public information from our customer companies is used in connection with Jam's AI is through "prompts," i.e., instructions to the AI models for conversation simulations (e.g., what role the AI persona should play, what scenario is being practiced) and analysis of the resulting transcript for improvement-oriented feedback.
- What information goes to the model as instructions via prompts can be viewed and modified at any time by training administrators in our role-play editor ("Jam Studio"). Customers can determine who receives administrator rights. (A simplified sketch of such a prompt configuration follows at the end of this answer.)
- During operational use, recorded voice training sessions (audio/transcript) are sent to the AI model exclusively as context for real-time analysis (improvement-oriented feedback). No data is incorporated into OpenAI's model training ("no data retention for training purposes" when using the API). OpenAI's Data Processing Agreement (DPA) stipulates that all data transmitted via the API is processed exclusively for service delivery and is not used to improve or train the AI.
- Summary:
- Jam does not train its own AI model with customer data but uses pre-trained models from third-party providers (for AI coaching: OpenAI).
- OpenAI does not use customer data for training purposes in API operations.
- Prompt contents are under the control of our customers' training administrators.
- Evidence includes OpenAI's publicly available API Data Usage Policy and the DPA with OpenAI, which explicitly contains this regulation.
- We are happy to provide or reference the relevant passages from the DPA with OpenAI for your data protection team.
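To illustrate the point that prompts are the only channel for customer-specific information, a role-play configuration can be thought of as a plain, inspectable text template. The structure below is a simplified sketch, not the actual Jam Studio data model:

```python
from dataclasses import dataclass

@dataclass
class RolePlayConfig:
    """Everything the AI model sees about a customer lives in fields like these.

    Training administrators can read and edit the fields at any time;
    nothing here is ever used to train the underlying model.
    """
    persona: str    # e.g. "skeptical procurement manager"
    scenario: str   # e.g. "renewal negotiation for a SaaS contract"
    scorecard: str  # transparently communicated evaluation criteria
    materials: str  # optional excerpts from uploaded sales guidelines

    def to_system_prompt(self) -> str:
        # The assembled prompt is sent to the model as request context only.
        return (
            f"Play this persona: {self.persona}\n"
            f"Scenario: {self.scenario}\n"
            f"Reference material: {self.materials}"
        )
```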
☑️ How Is the AI System Trained?
- The AI models used by Jam (OpenAI GPT) are trained exclusively by OpenAI on publicly available data and licensed material. Customer data from the Jam system is neither permanently stored nor used by OpenAI for training, model improvement, or research. For real-time analysis, data is only processed temporarily ("volatile") and is not incorporated into the training set.
- Improvement of our services occurs through manual creation, curation, and annotation of synthetic data that Jam creates internally and provides to the models only temporarily via few-shot prompting (a minimal sketch follows below).
- We consciously avoid a self-learning, autonomous system and prioritize human control ("human in the loop").
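A minimal sketch of what temporary few-shot prompting means in practice: curated synthetic example pairs are prepended to the request at call time and are never persisted or used for fine-tuning. The example contents below are invented placeholders:

```python
# Curated synthetic examples are passed as in-context demonstrations only;
# the contents below are invented placeholders.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Transcript: <synthetic role-play A>"},
    {"role": "assistant", "content": "Feedback: <curated model answer A>"},
    {"role": "user", "content": "Transcript: <synthetic role-play B>"},
    {"role": "assistant", "content": "Feedback: <curated model answer B>"},
]

def build_messages(system_prompt: str, transcript: str) -> list[dict]:
    # The examples exist only inside this single request; nothing is
    # retained by the model or used for fine-tuning.
    return (
        [{"role": "system", "content": system_prompt}]
        + FEW_SHOT_EXAMPLES
        + [{"role": "user", "content": f"Transcript: {transcript}"}]
    )
```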
☑️ Does the AI System Learn Autonomously or Only Under Supervision?
- No, our system is not an autonomous (self-learning) one.
- We deliberately rely on human control ("human in the loop"). Any changes after operation begins are under the full control of human users.
- Changes in LLM outputs are mediated through the prompts of the AI conversation simulation (role-play prompt) and the AI feedback system. These prompts are maintained by training administrators via a CMS/LMS and can be viewed there at any time.
☑️ Is There a System Description That Makes the AI Model, Its Logic, Functionality, Scope, and Data Sources Understandable?
- Yes, such a system description exists, see answers above and additional documents.
- In a nutshell:
- The system consists of conversation simulations with feedback for training conversation skills. Users conduct role-plays via voice interaction and receive improvement-oriented feedback based on the resulting transcript.
- The role-play is based on voice-based interaction with a large language model (LLM) via speech-to-text (STT) and text-to-speech (TTS) technology.
- The feedback is also based on an LLM that analyzes transcripts from role-plays and outputs structured text feedback (a simplified sketch follows after this list).
- The audio recording of the role-play interaction is also stored for the user to listen to but is not otherwise processed.
- Both LLMs (for conversation and feedback) are instructed through prompts.
- The prompts are under full human control. No third-party data sources are used.
- The system's scope is limited insofar as it serves exclusively training purposes as a coaching tool and makes no autonomous decisions about the user.
- Users also have the option at any time to provide feedback in the app on whether they agree with the AI feedback or not.
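One common way to obtain structured feedback from an LLM is to request JSON output keyed to the scorecard criteria. The sketch below illustrates the idea; the criterion names and model name are placeholders rather than Jam's actual scorecards:

```python
import json

from openai import OpenAI

client = OpenAI()

def structured_feedback(transcript: str) -> dict:
    """Score a transcript against scorecard criteria and return JSON."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Evaluate this sales role-play objectively. Score each "
                    "criterion from 1-5 and return JSON with the keys "
                    "objection_handling, product_knowledge, "
                    "customer_orientation, and improvement_suggestions."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)
```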
Additional Documents - please request access if not yet granted:
- List of Jam’s Subprocessors
- System Description with Notes on Security and Data Protection
Risk & Impact Assessment and Monitoring
☑️ How Would You Classify the Risk of Your AI System?
- We classify the risk of our AI system as "low."
- We justify this with:
- The sole purpose of use: Training of employee skills
- The data basis: The system has no information about the user outside the conversation simulations; no sensitive data (e.g., customer data) is fed into the system. There is also no sensitive personal data about the training users (gender, ethnicity, health).
- The consistent avoidance of autonomous "high-stakes" decisions about users (e.g., hiring an employee vs. rejection)
- The secure storage of data with full GDPR protection: Encrypted on European servers by third-party providers who may not process the data for any other purposes
- Full control over any customer-specific information in the AI models' prompts, which we grant to training administrators
- We have verified this assessment with the EU AI Act compliance checker at https://ai-act-service-desk.ec.europa.eu/en/eu-ai-act-compliance-checker
☑️ How Do You Evaluate Model Quality in Ongoing Operations - and What Emergency Concepts Are in Place?
- Jam Technologies has implemented a multi-level system for continuous evaluation of AI model quality and corresponding emergency concepts.
- Ongoing quality assurance occurs through:
- Systematic monitoring of all AI-generated feedback scores using Sentry and Betterstack for real-time detection of anomalies and outliers in evaluation patterns (a simplified illustration follows after this list)
- Regular manual reviews by our technical team to verify the plausibility and consistency of AI outputs
- Continuous evaluation of customer feedback via Jira Service Desk, with complaints about unfair or inconsistent evaluations escalated immediately
- Additionally, we conduct periodic bias checks through systematic analysis of scoring patterns across different user groups and regular validation of AI outputs against predefined criteria.
- In risk situations, complete shutdown of the AI feedback functionality is possible: the system is cloud-native per Section 13 of our SaaS Agreement and can be stopped at any time by deactivating services and revoking API credentials to OpenAI. Affected customers are transparently informed about quality problems, measures taken, and recommissioning schedules. All incidents are documented and tracked as part of our incident management process to derive systematic improvements and prevent future quality problems.
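For illustration, the core of such score monitoring can be reduced to a simple outlier rule. Jam's production pipeline alerts via Sentry and Betterstack; the z-score check below is only an assumed, simplified stand-in:

```python
import statistics

def flag_score_anomalies(scores: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of feedback scores that deviate strongly from the mean.

    A plain z-score rule; a production system would raise an alert in its
    monitoring stack instead of returning indices. Needs >= 2 scores.
    """
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores) or 1.0  # guard against zero variance
    return [i for i, s in enumerate(scores) if abs(s - mean) / stdev > z_threshold]

# Example: the last score is an implausible outlier and gets flagged.
print(flag_score_anomalies([3.8, 4.1, 4.0, 3.9, 4.2, 4.0, 3.7, 4.1, 0.2]))
```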
☑️ To What Extent Can the AI System Draw Autonomous Conclusions About Users?
- Our AI system serves to train employees in conversation skills (soft skills). It does not draw (partially) autonomous conclusions about the employee's person that could have negative consequences for the employee (no automated individual decisions per Art. 22 GDPR). It merely provides a platform to improve one's own skills through AI-based conversation simulations and AI-based feedback on conversation transcripts.
- Any (partially) autonomous aspects of the AI (= behavior of the AI persona in conversation simulation; evaluation of conversation transcripts) are subject to a high degree of human control, and we have built strong "guardrails" into our prompts.
- Conversation behavior is controlled via prompt-side instructions to the AI on "persona" and "scenario"
- AI feedback is controlled via prompt-side instructions in the form of a "scorecard" with fully transparent evaluation criteria (visible to all users)
- There is also ongoing monitoring of user feedback on the AI system in the app, which enables adjustments if necessary.
☑️ Can You Ensure the Models Are Fair, Accurate, and Free From Bias?
- General: Jam Technologies relies on AI models from certified market leaders like OpenAI, which implement state-of-the-art measures against bias. Additionally, Jam regularly tests and monitors AI feedback processes, takes indications of possible discrimination very seriously, and makes quick technical adjustments when needed. Should bias be identified, direct escalation, technical safeguarding, and - if necessary - temporary shutdown of affected functions occur until the deficiency is remedied. Such a shutdown has never been necessary in more than 10,000 completed role-play sessions.
- Freedom from bias and fairness are further ensured by the AI having no knowledge about the user (e.g., gender, ethnicity, sexual orientation, image files). Evaluation is based exclusively on the role-play transcript, with the explicit instruction (via the feedback model's prompt) to evaluate objectively against clear evaluation criteria.
- High accuracy of feedback evaluation: Accuracy of AI feedback on role-play transcripts is > 93% compared to annotations of the same transcripts by human experts (trainers); one plausible way to compute such agreement is sketched after this list.
- Very high user acceptance: Only 0.26% of all AI-generated feedback received negative feedback from users (as of Oct 16) via the always-visible "thumbs down" button.
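As context for the accuracy figure, agreement between AI and human trainer annotations can be computed as a simple per-criterion match rate. This is one plausible definition, sketched below, and not necessarily the exact metric behind the > 93% figure cited above:

```python
def agreement_rate(ai_scores: list[int], human_scores: list[int]) -> float:
    """Fraction of scorecard items on which AI and human trainer agree."""
    assert len(ai_scores) == len(human_scores) and ai_scores
    matches = sum(a == h for a, h in zip(ai_scores, human_scores))
    return matches / len(ai_scores)

# Example: 14 of 15 annotated criteria match -> roughly 93% agreement.
ai    = [3, 4, 5] * 5
human = [3, 4, 5] * 4 + [3, 4, 4]
print(f"{agreement_rate(ai, human):.0%}")  # 93%
```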
☑️ How Do You Conduct Impact Assessments for Your AI Application?
- Jam Technologies has established comprehensive mechanisms for conducting AI-specific impact assessments, even though our sales training platform is not currently classified as a high-risk application under the AI Act.
- The substantive concept for fundamental rights impact assessment includes:
- Assessment of potential discrimination risks through systematic monitoring of AI scoring patterns for gender-, age-, or origin-specific biases
- Transparency and explainability through structured prompts with customer-specific sales methodologies instead of open content generation
- Documented evaluation criteria accessible to users
- Rights of data subjects are ensured through easy access to own training data and performance scores, objection possibilities for unfair evaluations, and DPO involvement in data protection-relevant complaints.
Data Protection, Compliance & User Rights
☑️ How Long and Where Are Personal Data Stored?
- All personal data is stored in European data centers, primarily via Heroku PostgreSQL databases in AWS EU regions and Amazon S3 file storage in the EU.
- Storage duration is based on contract term and legal retention obligations, with Jam Technologies deleting all customer data after contract termination or upon written request per Section 12 of our DPA, unless there is a legal retention obligation.
- After contract end, we make customer data available for download for 14 days in a market-standard format before it is completely deleted from the server (a simplified illustration of such a retention rule follows after this list).
- Detailed information can be found in our Data Processing Agreement (DPA).
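As an illustration only: a 14-day deletion window of this kind is commonly enforced via object-storage lifecycle rules. The sketch below uses AWS S3 with an invented bucket name and prefix; it shows an assumed mechanism, not a description of Jam's actual infrastructure:

```python
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")

# Illustrative only: expire exported customer data 14 days after it is
# written. Bucket name and prefix are invented placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="customer-data-exports",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-offboarding-exports-after-14-days",
                "Filter": {"Prefix": "offboarding/"},
                "Status": "Enabled",
                "Expiration": {"Days": 14},
            }
        ]
    },
)
```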
☑️ Are Users Informed When AI Draws Conclusions Regarding Their Person?
- No conclusions concerning the person in the strict sense are drawn.
- Users only receive improvement-oriented feedback on transcripts from conversation simulations they conducted with fictional customer personas.
- End users are always the first recipients of any AI output - in this case: feedback on conversation simulations with AI avatars. If desired, end users are also the only recipients.
☑️ To What Extent Are the Experiences and Interests of Users and Affected Employees Included in Training and Impact Assessment of the Application?
- Jam Technologies systematically includes the experiences and interests of users and affected employees in the further development and impact assessment of the AI application.
- Our Customer Success team is in continuous dialogue with users and collects direct feedback on training experiences, AI evaluations, and system functionalities.
- AI-generated performance evaluations are regularly reviewed by our team; indications of possible discrimination or unfair evaluations are taken very seriously and lead to quick technical adjustments.
☑️ Are Terms of Use Posted on Jam's Homepage?
Our website currently has no separate terms of use posted, as we conclude individual SaaS contracts including Data Processing Agreement (DPA) with our business customers. These contractually regulate all usage and data protection aspects after the approval and co-determination process.
We are happy to share the relevant documents directly with you so that your data protection officer can review them.
Additional Documents - please request access if not yet granted:
- SaaS Agreement:
- Data Processing Agreement (DPA):
- Text modules for the consent pop-up and further information on data processing
Jam Technologies GmbH | Jennerstraße 7a, 80999 München | https://www.wejam.ai
👤 Contact for questions:
Dr. Clemens Lechner, Chief Product Officer
Mail: clemens@wejam.ai | Phone: +49 176 83 08 68 22 | Profile on LinkedIn
