DAMN: Evolvable AI Agent Society

References

  1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. In Advances in Neural Information Processing Systems (NeurIPS).

  2. Wu, Q., Bansal, G., Zhang, J., Wu, Y., Li, B., Zhu, E., Jiang, L., Zhang, X., Zhang, S., Liu, J., Awadallah, A. H., White, R. W., Burger, D., & Wang, C. (2023). AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework. arXiv preprint, arXiv:2308.08155.

  3. Wang, C., Liu, S. X., & Awadallah, A. H. (2023). Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference. In Proceedings of AutoML'23.

  4. Zhang, S., Zhang, J., Liu, J., Song, L., Wang, C., Krishna, R., & Wu, Q. (2024). Training Language Model Agents without Modifying Language Models. In Proceedings of ICML'24.

  5. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017). Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. In International Conference on Learning Representations (ICLR).

  6. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training Language Models to Follow Instructions with Human Feedback. In Advances in Neural Information Processing Systems (NeurIPS).

  7. Lu, X., Liu, Z., Liusie, A., Raina, V., Mudupalli, V., Zhang, Y., & Beauchamp, W. (2024). Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM. University of Cambridge, University College London, Chai Research.

  8. Ong, I., Almahairi, A., Wu, V., Chiang, W.-L., Wu, T., Gonzalez, J. E., Kadous, M. W., & Stoica, I. (2024). RouteLLM: Learning to Route LLMs with Preference Data. arXiv preprint, arXiv:2406.18665.
