
Understanding Generative AI

What is Generative AI?

Generative AI refers to a collection of artificial intelligence techniques that create new content based on the data they were trained on, combined with additional context provided by users. This content can encompass text, code, images, audio, and video.


Generative AI relies on Large Language Models (LLMs), which are trained on diverse, usually publicly available, datasets.

Application users provide prompts or instructions to these models, asking them to generate output in formats such as text, images, audio, or video, depending on the specific model being used.

Challenges of Generative AI

The potential of generative AI is huge, but it also presents several challenges:

  • Quality & reliability: LLMs tend to hallucinate, so quality and reliability are crucial factors in the content generated by AI models. Enforcing them involves maintaining accuracy and considering the timeliness of data input. The goal is to produce information that is not only relevant but also accurate and trustworthy.
  • Ethical & societal: Generative AI raises ethical considerations, such as the creation of deepfakes, which could lead to serious privacy concerns.
  • Computational costs & environmental impact: The significant computational costs and environmental impact of generative AI, such as energy consumption equivalent to charging a phone for image generation, must be considered.
  • Intellectual property & copyright: Legal questions also arise, particularly around intellectual property and copyright. It's crucial to determine who owns the copyright of the generated content and ensure that the models are not trained on copyrighted content used to generate new content.
  • Managing & governing AI: Appropriate frameworks for the development and deployment of generative AI technologies are essential to ensure proper management and governance. There are still several open questions in this space, particularly around accuracy (most recent information needs to be available for meaningful answers) and the use of private data (properly tagging it as internal, confidential, sensitive, subject to privacy regulations).

Providing Custom Context and Private Data in Generative AI

Foundational models are trained on publicly available content. There are different ways to provide custom context to these models. The list below is ordered by increasing level of difficulty (combining development effort, AI skills, compute costs, and hardware needs):

  • Prompt engineering provides custom context simply by giving specific instructions to the model, typically guided by prompt templates. It is the simplest approach, it is easy to adjust, and it offers a high degree of flexibility in choosing the LLM and adapting the templates, making it ideal for use cases that do not need much domain context (a minimal prompt-template sketch follows this list).
  • Retrieval Augmented Generation (RAG) offers the highest degree of flexibility to swap individual components (data sources, embeddings, LLM, vector database). It reduces hallucinations and keeps output quality high by supplying the relevant context for response generation from private, i.e., company-owned, data. Knowledge is not incorporated into the LLM, and access control can be implemented to manage who is allowed to access which context (see the simplified retrieval sketch after this list).
  • Fine-tuning incorporates more context into the foundational model by adjusting its parameters, which is particularly useful for building domain-specific models (in the legal or biology domains, for example). However, it lacks access control, remains prone to hallucination, and can be skewed by even a single incorrect training example (a hedged fine-tuning sketch also follows the list).
  • Training a custom foundational model allows a high degree of customization but requires significant resources: trillions of well-curated, tokenized data points, sophisticated hardware infrastructure, and a team of highly skilled ML experts. Such initiatives also call for a substantial budget and timeline.
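
To make the prompt-engineering option concrete, here is a minimal sketch in Python. The support-assistant scenario, the template wording, and the `call_llm` stub are illustrative assumptions only; any LLM client (hosted API or local model) could be plugged in.

```python
# Minimal prompt-template sketch. Only the templating pattern matters here;
# the scenario and the call_llm stub are placeholders.

PROMPT_TEMPLATE = """You are a support assistant for an online shop.
Answer concisely and only about order-related topics.

Customer question:
{question}
"""

def build_prompt(question: str) -> str:
    """Inject the user's question into the fixed instruction template."""
    return PROMPT_TEMPLATE.format(question=question)

def call_llm(prompt: str) -> str:
    """Hypothetical stub: replace with your actual LLM client call."""
    raise NotImplementedError("Plug in a real LLM client here.")

if __name__ == "__main__":
    prompt = build_prompt("Where is my order?")
    print(prompt)  # the assembled prompt that would be sent to the model
    # answer = call_llm(prompt)
```

Changing the template changes the model's behaviour without touching any other component, which is what makes this the lightest-weight option.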
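
The RAG pattern can be sketched in a few lines as well. The sample documents, the bag-of-words `embed` function, and the in-memory `retrieve` helper below are toy stand-ins for a real embedding model and vector database; they only illustrate the retrieve-then-augment flow.

```python
# Simplified RAG sketch: retrieve the most relevant private documents for a
# query, then prepend them to the prompt as context.
import math
from collections import Counter

DOCUMENTS = [  # stand-in for company-owned content indexed in a vector store
    "Orders are shipped within 2 business days from our central warehouse.",
    "Refunds are processed within 14 days after the return is received.",
    "Premium customers get free express shipping on all orders.",
]

def embed(text: str) -> Counter:
    """Toy embedding: word counts. In practice, use a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    """Assemble the prompt: retrieved company data first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    print(build_rag_prompt("How long do refunds take?"))
    # The assembled prompt is then sent to the LLM of your choice.
```

Because retrieval happens at query time, the underlying documents can be filtered per user before they ever reach the prompt, which is how access control fits into this approach.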
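
Finally, a hedged sketch of what causal-LM fine-tuning could look like with the Hugging Face `transformers` and `datasets` libraries. The base model (`distilgpt2`), the `train.txt` corpus, and the hyperparameters are illustrative assumptions; a real domain fine-tune involves far more data preparation, evaluation, and compute.

```python
# Sketch of supervised fine-tuning of a small causal language model.
# Assumes a plain-text domain corpus in train.txt (one example per line).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small base model, chosen only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    # Truncate long lines; the collator below derives labels from input_ids.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()  # the adjusted weights now encode the domain corpus
```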

Want to read more?

How to Build AI-driven knowledge assistants
