mozilla.ai

Open-source AI for developers. Building tools that foster transparency, accessibility, and real-world impact.

mozilla.ai @MozillaAI
2025-12-05

Product teams often spend hours each week answering repeat roadmap questions.

We’re exploring early concepts for the Agent Platform, including a Roadmap Agent that turns roadmap details into clear, easy-to-read answers.

Learn more: link.mozilla.ai/roadmap-agent

mozilla.ai @MozillaAI
2025-12-04

📢 We’re rolling out the alpha of the any-llm Managed Platform!

It offers zero-knowledge API-key storage, unified cost tracking, budgeting controls, and usage analytics that never expose prompts or responses.

It also integrates directly with any-llm and any-llm-gateway.

Full blog post: link.mozilla.ai/any-llm-manage

mozilla.ai @MozillaAI
2025-12-03

Do you choose an LLM based on cost, accuracy, or response time?

Or do you switch between models depending on the query?

If you want a clearer way to compare them, here is our blog post: blog.mozilla.ai/the-challenge-

mozilla.ai @MozillaAI
2025-12-02

The recording of Davide Eynard’s (@mala) “Build Your Own Timeline Algorithm” talk from SFSCON is now live.

In the session, Davide walks through how to build a local, customizable timeline algorithm using Mastodon.py, llamafile, and marimo.

Watch the full video: sfscon.it/talks/build-your-own

mozilla.ai @MozillaAI
2025-12-01

We just wrapped our sessions at DataFest Tbilisi (Nov 27–29)!

• Raz Besaleli on systems engineering lessons for more robust AI
• Davide Eynard (@mala) on running your own open-source AI agents
• Irina V. on treating AI software with the same rigor as any other system

Thanks for being part of it — see you next year!

mozilla.ai @MozillaAI
2025-11-28

We’re opening up some early concepts from the Agent Platform.

The Agent Showroom highlights a few prototype agents, including call prep, post-mortem drafting, and roadmap support.

Learn more: link.mozilla.ai/agent-platform

mozilla.ai @MozillaAI
2025-11-27

Why developers use any-llm:

• One interface for every LLM provider
• Cleaner integrations with fewer rewrites
• Consistent behavior across models

Install it here: link.mozilla.ai/any-llm-repo
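The "one interface for every provider" idea can be sketched in a few lines of Python. This is an illustrative toy, not any-llm's actual API: the `complete` function, the `PROVIDERS` table, and the provider names are all hypothetical stand-ins; see the repo for the real interface.

```python
# Toy sketch of a unified LLM interface: one call shape, many providers.
# Everything here is hypothetical, for illustration only.

def _echo_backend(name):
    """Stand-in for a provider SDK; a real backend would call an API."""
    def call(messages):
        return f"[{name}] {messages[-1]['content']}"
    return call

PROVIDERS = {
    "openai": _echo_backend("openai"),
    "mistral": _echo_backend("mistral"),
    "local": _echo_backend("local"),
}

def complete(model, messages):
    """Route a 'provider:model' string to the matching backend.

    The caller's code never changes when the provider does.
    """
    provider, _, _model_name = model.partition(":")
    try:
        backend = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")
    return backend(messages)

# Same call shape whether the model is hosted or local:
print(complete("openai:gpt-4o", [{"role": "user", "content": "hi"}]))
print(complete("local:llama", [{"role": "user", "content": "hi"}]))
```

Swapping providers then means editing one string, which is the "fewer rewrites" point in the list above.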

mozilla.ai @MozillaAI
2025-11-26

DataFest Tbilisi starts tomorrow 🎊

From Nov 27–29, Raz Besaleli, Davide Eynard (@mala), and Irina Vidal will be presenting on AI resilience, software reliability, and practical ways to run open source AI agents with full control.

More info: datafest.ge/

mozilla.ai @MozillaAI
2025-11-24

Encoderfile v0.1.0 introduces a single-binary deployment format for encoder transformers.

No containers and no runtime dependencies, just deterministic, self-contained binaries built with ONNX and Rust.

Learn more: link.mozilla.ai/encoderfile-v0

mozilla.ai @MozillaAI
2025-11-19

The lethal trifecta creates new failure points in agent systems.

Most guardrails miss them.

Read our full analysis: link.mozilla.ai/open-source-gu

mozilla.ai @MozillaAI
2025-11-18

If your team works with MCP tools, here’s something new that may simplify your workflow.

mcpd-proxy gives developers a single access point for all MCP servers in VS Code, Cursor, and other IDEs. No more manual setup or mismatched configs.

Full post: blog.mozilla.ai/mcpd-proxy-cen

mozilla.ai @MozillaAI
2025-11-17

We’ve been exploring a cleaner way to work with MCP tools inside IDEs like VS Code and Cursor.

A small update is coming tomorrow that we think teams will find useful.

mozilla.ai @MozillaAI
2025-11-14

Big cloud or local models. Same interface.

any-llm (v1.0) keeps your workflow stable.

👉 Try it today: github.com/mozilla-ai/any-llm

mozilla.ai @MozillaAI
2025-11-13

Local LLMs are back for good reason:

✅ Control
✅ Privacy
✅ Simplicity

llamafile makes it easy. One file, no setup.

Start using llamafile: github.com/mozilla-ai/llamafile

mozilla.ai @MozillaAI
2025-11-11

Introducing any-llm-gateway 🏗️

A FastAPI-based open-source gateway to manage and monitor LLM usage across providers.

Handle budgets, track costs, manage API keys, and deploy with confidence, all in one place.

Read the full post: blog.mozilla.ai/control-llm-sp
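A per-key budget cap, the kind of control the gateway post describes, can be sketched in a few lines. This is a hypothetical illustration of the idea, not any-llm-gateway's implementation: the `BudgetTracker` class, its fields, and the prices are all made up.

```python
# Minimal sketch of per-key budget enforcement for LLM usage.
# Class name, fields, and pricing are hypothetical, for illustration only.

class BudgetTracker:
    def __init__(self, budgets):
        # budgets: api_key -> spending cap in dollars
        self.budgets = dict(budgets)
        self.spent = {key: 0.0 for key in budgets}

    def record(self, api_key, tokens, price_per_1k):
        """Record a request's cost; refuse it if the key would go over budget."""
        cost = tokens / 1000 * price_per_1k
        if self.spent[api_key] + cost > self.budgets[api_key]:
            raise RuntimeError(f"budget exceeded for key {api_key!r}")
        self.spent[api_key] += cost
        return cost

tracker = BudgetTracker({"team-a": 1.00})
tracker.record("team-a", tokens=500, price_per_1k=0.50)   # $0.25
print(f"spent: ${tracker.spent['team-a']:.2f}")           # spent: $0.25
```

A real gateway sits in front of the provider and applies this check before forwarding each request, so over-budget keys are rejected centrally rather than in every client.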

mozilla.ai @MozillaAI
2025-11-10

MozFest 2025 has wrapped up after three days of creative discussions and collaboration around a better internet for everyone and open-source AI.

Grateful to everyone who joined us and to the organizers for an incredible experience.

See you next year!

mozilla.ai @MozillaAI
2025-11-08

MozFest 2025 🎉

Creators and communities are gathering to explore how technology can better serve people.

@peteski22, Javier, and Mario from Mozilla.ai are attending, and our CEO John is co-hosting “Context is All You Need: The Collective Land Grab Reshaping the Internet.”

🔗 schedule.mozillafestival.org/s

mozilla.ai @MozillaAI
2025-11-06

We put open-source guardrails to the test for agent safety.

Setup:
• Indirect prompt injection: BIPIA email/table + benign WildGuardMix
• Function-call malfunctions: HammerBench

Results:
• PIGuard was effective on injection detection
• Custom judges struggled on function-call correctness

Methods, data, and code: blog.mozilla.ai/can-open-sourc

mozilla.ai @MozillaAI
2025-11-05

Your feed should serve your curiosity, not engagement goals.

Join Davide Eynard (@mala) from Mozilla.ai at SFSCon to explore how to build a personal, local timeline algorithm that runs entirely on your computer using Mastodon.py, llamafile, and marimo.

🕔 November 7 at 17:00 (Seminar 4)

🔗 sfscon.it/talks/build-your-own

mozilla.ai @MozillaAI
2025-11-04

🚀 any-llm v1.0 is now live!

One API for every model, whether cloud-based or local: run OpenAI, Claude, Mistral, or llama.cpp models.

Version 1.0 brings better performance, stability, and a cleaner developer experience.

Read the full post: blog.mozilla.ai/run-any-llm-wi
