FAQs: Artificial intelligence

What recourse is available for model errors or other AI-related issues that impact customers?

All services and models deployed in Genesys Cloud have defined service level agreements (SLAs) on various relevant metrics defined at the design stage, and control and rollback mechanisms for models in specific scenarios. For example:

  1. incorporates failsafe mechanisms that route the call to a specific pool of agents when the model cannot identify a suitable agent within a pre-defined period.
  2. has a pacing failsafe that limits the number of engagements offered to customers when the model erroneously targets too large an audience.
  3. Model Life Cycle Orchestration defines SLAs on certain metrics and rolls back newly deployed active models to the previous version when the number of prediction errors exceeds defined thresholds.

These mechanisms are actively monitored following standard cloud software development processes. This can include alerts to the on-call team when model metrics such as missing feature values or prediction errors exceed pre-defined thresholds.
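The threshold-and-rollback pattern described above can be sketched as follows. This is an illustrative example only, not Genesys code; the class names, metric, and 5% threshold are hypothetical stand-ins.

```python
# Illustrative sketch (not Genesys code): a monitor that alerts and rolls back
# a model deployment when the prediction-error rate exceeds a pre-defined
# threshold, as in the rollback mechanism described above.
from dataclasses import dataclass, field

@dataclass
class ModelDeployment:
    active_version: str
    previous_version: str
    error_threshold: float = 0.05  # hypothetical SLA threshold
    alerts: list = field(default_factory=list)

    def record_metrics(self, predictions: int, errors: int) -> None:
        """Check one monitoring window; alert and roll back on breach."""
        error_rate = errors / predictions if predictions else 0.0
        if error_rate > self.error_threshold:
            self.alerts.append(f"error rate {error_rate:.2%} exceeds threshold")
            self.rollback()

    def rollback(self) -> None:
        # Swap back to the last known-good version.
        self.active_version, self.previous_version = (
            self.previous_version, self.active_version)

deployment = ModelDeployment(active_version="v2", previous_version="v1")
deployment.record_metrics(predictions=1000, errors=20)   # 2% -- within SLA
deployment.record_metrics(predictions=1000, errors=120)  # 12% -- triggers rollback
print(deployment.active_version)  # v1
```

In practice the alerting side would page an on-call team rather than roll back silently; the sketch only shows the control flow.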

What approaches have you used to reduce bias and disparate impact in model selection?

The Genesys approach to managing AI bias and model impact is focused primarily on:

  1. Assessment and curation of model input data to ensure that no sensitive features are included in the models.
  2. Model monitoring that tracks various model metrics to detect data drift and concept drift.
  3. Model cards and dataset cards that document the characteristics of AI models and training datasets in a standardized format; this documentation also supports bias detection.

What is your enterprise governance process and risk assessment framework for AI and ML technologies?

Genesys governance processes are tightly coupled with the development processes. As part of these processes, Genesys incorporates mechanisms such as Data Privacy Impact Assessments, Security and Compliance reviews, and AI Model Risk-focused reviews. These reviews occur at various stages of the software life cycle to promote responsible AI development and risk mitigation.

The Genesys Cloud model training and inference pipeline, including metrics, sampling, and validation, follows the standard review process shared by all Genesys Cloud software. The first round of reviews takes place during the design phase as part of the software design life cycle, where AI architects and data scientists review and approve the design alongside privacy and security reviews. The second round occurs at the development stage, during which all model training and inference code is peer-reviewed before merging and deployment. This stage often includes benchmarks and the addition of model-specific tests.

Also, the cross-functional Genesys AI Ethics Committee, whose membership spans privacy, security, architecture, product, and AI, holds quarterly meetings to assess our products and processes from an ethics and governance perspective.

This comprehensive governance framework helps to ensure that artificial intelligence and machine learning technologies deployed by Genesys meet the highest standards for security, privacy, and ethical considerations.

What guardrails does Genesys Cloud AI ethics provide to protect customer privacy?

Genesys Cloud AI Ethics enables customer privacy through the following key principles:

  1. Balance value creation with empathy: Genesys prioritizes understanding and addressing the needs of all stakeholders during the value-creation process, with privacy considerations integral to any decision.
  2. Incorporate privacy design principles: Privacy is embedded by design at Genesys. The right to privacy is protected from the outset, governed by explicit customer consent through mechanisms like master service agreements (MSA). These principles include opt-in clauses and data-use consent, with a focus on anonymization and regulatory compliance.
  3. Understand and reduce bias: Genesys actively works to mitigate bias in AI models to support ethical and fair decision-making, considering the broader context when handling data.
  4. Value transparency: Genesys takes measures to make sure that stakeholders are informed and understand the decision-making processes behind AI models, promoting trust in how data is used and managed.

Can customers bring their own AI models, choose from existing models, or customize models to suit their business needs?

Yes, as a customer, you can bring your own AI models, select from existing models, or customize models to suit your business needs. Genesys supports a “bring your own” (BYO) approach through connectors and the open platform, allowing you to build custom versions of solutions like or . Also, Genesys Cloud supports integration with third-party services, such as speech-to-text and text-to-speech engines, providing further flexibility in tailoring the AI capabilities to your specific requirements.

What are the key steps for deploying Genesys Cloud AI solutions?

Every AI implementation is unique, so it is important to tailor the deployment to your specific business goals. Start by selecting the correct AI capabilities, configuring and integrating them with your existing systems, and then testing thoroughly.

To support you further, experts who specialize in rapid deployment, customization, and multivendor integration are available. Genesys consultants have decades of experience, so you can avoid common pitfalls and be certain your AI solution is optimized for both customer and employee experience goals.

How does Genesys monitor compliance with industry standards and regulations?

Genesys enables customer compliance with regulations through a robust framework that includes 23 accreditations and certifications for adhering to local, regional and global regulations. Additionally, we require that Genesys Cloud AI solutions successfully pass compliance checks each year to meet market expectations and Genesys regulatory requirements. This can give customers confidence in the security, ethics and compliance of their AI solutions.

What is your privacy policy regarding AI services?

Privacy by Design and Privacy by Default are embedded in the processes around the design, setup, and update of our products and services — from development to release and improvement. Our standard baseline for compliance with privacy and data protection is the EU General Data Protection Regulation (GDPR) and related legislation, which is foundational for our privacy program.

In addition, our risk management framework incorporates AI model actions, with privacy as a pillar. Our corporate and product privacy teams are knowledgeable about other AI risk frameworks that incorporate privacy into the assessment of AI products, such as the NIST AI Risk Management Framework, ISO/IEC 23894:2023, and additional resources such as the Assessment List for Trustworthy AI developed by the High-Level Expert Group on AI set up by the European Commission. Together, these provide a robust framework designed to protect fundamental rights from the design phase of AI models.

Here are additional details about the Genesys position on privacy and compliance:

  • Building large-scale systems that apply AI to optimize customer experience often requires very large datasets that can contain data on many individuals coming from a variety of sources. Genesys is committed to core principles of privacy by design, limiting the data collected about individuals as the default.
  • Beyond enforcing privacy by design principles across systems, Genesys uses rigorous processes to monitor compliance of our AI products with regulations such as GDPR and any other applicable legislation globally.
  • While Genesys has technical and administrative controls in place to limit access to customer data, we have established additional safeguards designed to ensure that all data used for the development of new products is anonymized and governed by a set of processes detailed in the Genesys data anonymization framework.

How does Genesys fine-tune large language models (LLMs)?

Hallucinations are mitigated by fine-tuning models with conversational datasets selected for use cases such as customer care and for industry verticals such as healthcare, financial services, and retail. This process can reduce hallucinations significantly by adapting the model's weights to the use case.

Prompting best practices are set to instruct the large language model (LLM) to avoid fabricating answers and to say “I don’t know” when a question is not relevant or answerable. Responses are further constrained by providing examples of correct outputs and by setting the temperature as low as possible for near-deterministic behavior, which increases confidence that answers stay within the intended bounds.
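These prompting practices can be illustrated with a request builder. This is a generic sketch, not the Genesys implementation: the system instruction, few-shot examples, and payload shape are hypothetical, modeled on common chat-style LLM APIs.

```python
# Illustrative sketch (not the Genesys implementation): a request assembled
# with an anti-fabrication instruction, few-shot examples of correct outputs,
# and a low temperature for near-deterministic responses.
SYSTEM_INSTRUCTION = (
    "Answer only from the provided context. If the question is not "
    "relevant or answerable, reply exactly: I don't know."
)

# Few-shot examples constrain the response format, including the refusal case.
FEW_SHOT_EXAMPLES = [
    {"question": "What are your support hours?",
     "answer": "Support is available 9am-5pm, Monday to Friday."},
    {"question": "What is the capital of Mars?",
     "answer": "I don't know."},
]

def build_request(question: str, context: str) -> dict:
    """Assemble a request payload for a hypothetical chat-style LLM API."""
    messages = [{"role": "system", "content": SYSTEM_INSTRUCTION}]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": ex["question"]})
        messages.append({"role": "assistant", "content": ex["answer"]})
    messages.append(
        {"role": "user", "content": f"Context: {context}\n\n{question}"})
    return {"messages": messages, "temperature": 0.0}  # as low as possible

request = build_request("When is support open?",
                        "Support hours: 9am-5pm weekdays.")
```

The key design choices are visible in the payload: an explicit refusal instruction, examples of correct outputs, and temperature pinned at the minimum.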

Retrieval-augmented generation (RAG) constrains responses so that they are derived from a known-good set of data from the business.
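The RAG control flow can be sketched minimally. This is illustrative only: a real system would use embedding-based vector search and pass the retrieved context to an LLM, whereas this stand-in ranks documents by simple keyword overlap.

```python
# Minimal retrieval-augmented generation (RAG) sketch, illustrative only:
# responses are grounded in a known-good document set from the business.
# Keyword overlap stands in for a production embedding/vector search.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Premium support includes a dedicated account manager.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (vector-search stand-in)."""
    q_words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(query: str) -> str:
    context = retrieve(query)
    if not context:
        return "I don't know."
    # A real system would pass `context` to the LLM as grounding material
    # and generate a response constrained to that context.
    return context[0]

print(answer("how long do refunds take"))
```

Because generation is constrained to retrieved business data, answers outside the knowledge base fall back to a refusal rather than a fabrication.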

In what cases is customer data used to train your AI models?

Customers can consent to participate in service improvements through a rigorously controlled process. Data is sampled and fully anonymized in the production environment before it can be used for AI model training purposes. By default, the Genesys Master Service Agreement (MSA) opts customers out of any data donation.

What procedures do you have in place to keep customer data strictly confidential?

Customers can opt in to help Genesys with service improvements. Before Genesys uses any data, the data is fully anonymized within the production environment and human-validated to confirm that no personally identifiable information (PII) remains before use in model training or fine-tuning. To maintain data security and privacy, the Genesys Compliance and Ethics teams rigorously review the anonymization processes. For more information, see .
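An automated anonymization pass of the kind described above might look like the following. This is a simplified illustration, not the Genesys data anonymization framework; the two regex patterns are hypothetical examples, and real pipelines combine many detectors with human validation.

```python
# Illustrative sketch only (not the Genesys anonymization framework):
# a simple pass that redacts common PII patterns before data is
# considered for model training; human validation would follow.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

sample = "Contact jane.doe@example.com or call 555-123-4567."
print(anonymize(sample))  # Contact <EMAIL> or call <PHONE>.
```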

What data sources do you use to train your large language model (LLM)?

Genesys curates the data used in model fine-tuning from both open-source conversations and from Genesys Cloud customers that agree to participate in product improvements, including voice calls and chats from various digital channels. Care is taken to ensure that the data spans multiple domains and industries, and is reviewed rigorously for integrity and accuracy through both automated processes and manual annotation.

The data reflects the types of conversations the model is expected to encounter in real-world production scenarios. Measures are in place to mitigate bias related to domain, gender, race, or other protected characteristics. Genesys also enforces strict processes to filter out inappropriate language; all data is securely archived in Genesys Cloud with tightly controlled access.

What datasets are used to train and refine your AI services or products?

For fine-tuning models, Genesys uses open-source, purchased and anonymized datasets to improve specific tasks and language coverage. Genesys also works on minimizing hallucinations and validating the LLM (large language model) outputs to help ensure performance, privacy, and compliance around data. Genesys never uses customer data for any AI training or fine-tuning purposes without the consent of the customer.

How does Genesys Cloud AI scale to support business demands and increasing customer interactions?

Genesys Cloud AI uses a cloud-native architecture to scale resources automatically based on demand, enabling easy handling of increasing customer interactions. The flexible, token-based pricing model allows you to allocate tokens easily across different AI services. This enables smooth scaling as your business develops and new use cases emerge, without being tied to rigid pricing structures.

Do you offer any products or services that incorporate generative AI solutions?

Yes, Genesys offers generative AI-powered solutions, such as advanced summarization and answer highlight in . Genesys also provides enhancements in virtual agents to streamline bot building, handle requests, gather information efficiently and automate wrap-ups for self-service interactions. These features are embedded into the secure and compliant Genesys Cloud platform.

Do you have dedicated processes in place to monitor prompts and validate results for AI?

All large language model (LLM) based services have quality assurance acceptance tests. The model is required to pass these tests before being deployed into production. Genesys also monitors customer feedback via the user interface (UI) and other channels. In addition to internal LLM testing processes, Genesys Cloud also runs automated testing against all production instances and assesses the output of LLMs against known-good outputs. Genesys Cloud is also compliant with the EU AI Act.

Which AI models are used on your platform?

Genesys has a three-fold artificial intelligence (AI) model strategy: a structured approach that applies different types of AI models, each serving a unique purpose in the Genesys Cloud AI-powered platform. This approach enables Genesys Cloud to address a wide range of use cases with precision, flexibility, and adaptability.

  1. Proprietary machine learning (ML) models: Custom, enterprise-grade AI models developed in-house and tailored to meet your organizational requirements, with a focus on advanced features and performance.
  2. Open-source models: Genesys Cloud integrates a diverse set of pre-trained, open-source AI models to help facilitate adoption and deliver cost-effective AI capabilities. These models are further fine-tuned with task-specific and industry-specific data to help ensure that they meet the specialized demands of our customers. This process allows Genesys Cloud to provide flexible AI solutions that extend across industries and adapt to unique business requirements.
  3. Foundation models: Innovative large language models (LLMs) delivered as a service within our data and security compliance envelope. Foundation models cater to advanced use cases that require high levels of comprehension. With this option, Genesys Cloud offers customers advanced AI for complex applications, such as retrieval-augmented generation.
  4. Custom models: If you need a custom AI model, Genesys Cloud also supports Bring Your Own (BYO) custom model integrations, which provide a consistent experience for customers with highly specialized needs.
    • Transcription with options to connect Google or Microsoft Azure Transcription.
    • BYO Knowledge Connectors to content management systems.
    • Later, BYO LLM for services such as summarization.

This three-fold approach allows Genesys Cloud to deploy versatile, powerful AI capabilities that give you the best of proprietary innovation, open-source adaptability, and foundation-level advancements.