Understanding Generative AI

What is Generative AI?

Generative AI refers to a family of artificial intelligence techniques that create new content from the data they were trained on, combined with additional context provided by users. This content can include text, code, images, audio, and video.

Generative AI relies on Large Language Models (LLMs), which are trained on diverse, usually publicly available, datasets. Application users provide prompts or instructions to these models, asking them to generate output in formats such as text, images, audio, or video, depending on the specific model being used.
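
As a minimal illustration of this prompt-to-output flow, the sketch below sends a prompt to a hosted LLM and prints the generated text. It assumes the OpenAI Python SDK (version 1 or later) and an API key in the environment; the model name is a placeholder, and any other provider's client would follow the same pattern.

    # Minimal sketch: a prompt goes in, generated text comes out.
    # Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the environment;
    # the model name is a placeholder and may differ for your account.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a concise technical assistant."},
            {"role": "user", "content": "Summarize what generative AI is in two sentences."},
        ],
    )

    print(response.choices[0].message.content)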

Providing Custom Context and Private Data in Generative AI

Foundational models are trained on publicly available content, but there are several ways to provide custom context to them. The list below is ordered by increasing level of difficulty, combining development effort, AI skills, compute costs, and hardware needs:

  • Prompt engineering. Custom context can be provided simply through the prompt, i.e., by giving the model specific instructions, often guided by prompt templates. This is the simplest approach: it is easy to adjust and offers a high degree of flexibility in adapting both the LLM and the prompt templates. It is ideal for use cases that do not need much domain context; a minimal prompt-template sketch follows this list.
  • Retrieval Augmented Generation (RAG) offers the highest degree of flexibility to exchange individual components (data sources, embeddings, LLM, vector database). It reduces hallucinations and keeps output quality high by supplying the relevant context for response generation from private, i.e., company-owned, data. The knowledge is not incorporated into the LLM itself, and access control can be implemented to manage who is allowed to access which context; a schematic RAG pipeline is sketched after this list. Learn more about RAG pipelines >
  • Fine-tuning incorporates more context into the foundational model by adjusting its parameters, which is particularly useful for building domain-specific models (in the legal and biology domains, for example). However, it lacks access control, is prone to hallucination, and can be skewed by a single incorrect training data entry; a training-data sketch follows this list.
  • Training a custom foundational model allows a high degree of customization but requires significant resources: trillions of well-curated, tokenized data points, sophisticated hardware infrastructure, a team of highly skilled ML experts, and a significant budget and timeline.
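
To make the approaches above more concrete, here is a minimal prompt-engineering sketch. The template wording and variable names are illustrative assumptions; the point is that fixed domain instructions and user input are combined into a single prompt without modifying the model itself.

    # Prompt engineering sketch: a reusable template combines fixed instructions
    # with user-supplied input. Template wording and names are illustrative.
    PROMPT_TEMPLATE = """You are a support assistant for an e-commerce company.
    Answer the customer question below in at most three sentences.
    If you are not sure, say so instead of guessing.

    Customer question: {question}
    """

    def build_prompt(question: str) -> str:
        # Fill the template with the user's input; the model receives one plain string.
        return PROMPT_TEMPLATE.format(question=question)

    print(build_prompt("How do I return an item I bought last week?"))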
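
Next, a schematic RAG pipeline. The toy embedding function and in-memory index below are stand-ins for a real embedding model and vector database; only the flow matters: embed the question, retrieve the most similar private documents, and place them into the prompt as context.

    # Schematic RAG pipeline. The toy embedding and the in-memory index are
    # stand-ins for a real embedding model and a vector database.
    import math

    def embed(text, dim=64):
        # Toy stand-in: hash each token into a fixed-size, normalized vector.
        # A real pipeline would call an embedding model instead.
        vec = [0.0] * dim
        for token in text.lower().split():
            vec[hash(token) % dim] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    # Private, company-owned documents (illustrative content).
    documents = [
        "Refunds are issued within 14 days of receiving the returned item.",
        "Enterprise customers have a dedicated support channel.",
        "Orders are shipped from the warehouse Monday through Friday.",
    ]
    index = [(doc, embed(doc)) for doc in documents]

    def retrieve(question, k=2):
        # Cosine similarity search over the (normalized) document vectors.
        q = embed(question)
        scored = sorted(index, key=lambda item: -sum(a * b for a, b in zip(q, item[1])))
        return [doc for doc, _ in scored[:k]]

    def build_rag_prompt(question):
        context = "\n".join(retrieve(question))
        return ("Answer the question using only the context below.\n\n"
                f"Context:\n{context}\n\nQuestion: {question}")

    # The resulting prompt is then sent to an LLM, as in the earlier sketch.
    print(build_rag_prompt("How long do refunds take?"))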
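
Finally, a fine-tuning sketch: here domain knowledge is baked into the model through curated example pairs rather than retrieved context. The JSONL chat format below is one common convention (used, for example, by hosted fine-tuning APIs); the example content and file name are illustrative.

    # Fine-tuning sketch: domain knowledge enters the model via curated training
    # examples. The JSONL chat format is one common convention; content is illustrative.
    import json

    training_examples = [
        {
            "messages": [
                {"role": "system", "content": "You are a legal research assistant."},
                {"role": "user", "content": "What should a basic non-disclosure agreement cover?"},
                {"role": "assistant", "content": "At a minimum: the definition of confidential information, each party's obligations, the term, and permitted disclosures."},
            ]
        },
        # ... thousands more curated and reviewed examples ...
    ]

    with open("finetune_train.jsonl", "w") as f:
        for example in training_examples:
            f.write(json.dumps(example) + "\n")
    # The file is then uploaded to a fine-tuning job (provider-specific step).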

Learn more about Generative AI in this white paper: How to Build AI-driven knowledge assistants
