Foundation Base LLMs

The foundation base LLMs available to DAMN, grouped into open source and proprietary families (a short usage sketch follows each group):

Open Source Models

Meta AI Models

  • LLaMA 3.1: Latest version with 8B, 70B, and 405B parameter variants

  • LLaMA 2: Previous version with 7B, 13B, and 70B parameter variants

Google Models

  • BERT: An early transformer-based model; encoder-only, so geared toward understanding tasks rather than generation

  • Gemma: Open-weight models built from the same research as Gemini, available in 2B and 7B parameter sizes

Mistral AI Models

  • Mistral 7B: 7.3B parameter model with impressive performance

  • Mixtral 8x22B: 141B total parameters, using 39B active parameters

Stability AI Models

  • StableLM: Series including 3B, 7B variants, with larger models in development

  • Stable LM 2: Available in 1.6B and 12B parameter versions

Other Open Source Models

  • BLOOM: 176B parameter multilingual model developed by BigScience, available through Hugging Face

  • Falcon: Available in 11B (Falcon 2) and 180B parameter versions

  • OPT: Meta's series ranging from 125M to 175B parameters

  • XGen-7B: Developed by Salesforce, focusing on longer context windows

  • GPT-NeoX: 20B parameter model by EleutherAI

  • GPT-J: 6B parameter model by EleutherAI

  • Pythia: Series of EleutherAI models ranging from 70M to 12B parameters

  • DBRX: 132B parameter model by Databricks and Mosaic
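
Any of these open models can be self-hosted. As a minimal sketch (not DAMN's actual loading code), the snippet below serves one of the models above, Mistral 7B, through the Hugging Face transformers library; the model ID is its public Hub identifier, and the transformers, torch, and accelerate packages plus a suitable GPU are assumed.

```python
# Minimal sketch: serve an open model locally via Hugging Face transformers.
# Illustrative only; this is not the DAMN SDK's loader.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.3",  # swap in any open model above
    device_map="auto",   # requires accelerate; spreads weights across GPUs
    torch_dtype="auto",  # load in the checkpoint's native precision
)

prompt = "Introduce yourself as a monster agent in one sentence."
result = generator(prompt, max_new_tokens=64, do_sample=True)
print(result[0]["generated_text"])
```

The same pattern applies to the LLaMA, Gemma, and Falcon checkpoints, though the gated ones require accepting their licenses on the Hub before download.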

Proprietary Models

OpenAI Models

  • GPT-4o: Latest multimodal model with text, image, video, and voice capabilities

  • GPT-3.5: Previous generation model

Anthropic Models

  • Claude 3.5: Known for ethical design and strong performance

Google Models

  • PaLM 2: 340B parameter model, successor to the original PaLM

  • Gemini 1.5: Multimodal model with a long context window of up to one million tokens and improved multilingual capabilities

Other Proprietary Models

  • Grok-1: Developed by xAI, with 314B parameters

  • Inflection-2.5: Powers the conversational AI assistant Pi

  • Jamba: 52B parameter model by AI21 Labs, built on a hybrid SSM-Transformer (Mamba) architecture

  • Command (Cohere): Model family specialized in enterprise applications

  • Luminous: 70B parameter model developed by Aleph Alpha
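
The proprietary models are reached through their vendors' hosted APIs rather than self-hosted weights. As a minimal sketch (again, not the DAMN SDK), the snippet below calls GPT-4o through the official openai Python package; it assumes an OPENAI_API_KEY environment variable, and the other vendors (Anthropic, Google, Cohere) follow the same request/response pattern with their own SDKs.

```python
# Minimal sketch: query a hosted proprietary model via the official OpenAI SDK.
# Illustrative only; requires OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model from the vendor's catalog
    messages=[
        {"role": "system", "content": "You are a monster agent in a simulation."},
        {"role": "user", "content": "Greet a passing trainer in one sentence."},
    ],
    max_tokens=64,
)
print(response.choices[0].message.content)
```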
