#ToolCalling

2025-12-04

A new Python SDK called Consoul has been released that simplifies tool calling with Ollama, Claude, and GPT. It handles the tool-calling loop, file editing, and code search, ships with a handy text user interface (TUI), and offers faster token counting for Ollama. Very useful for AI developers!
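
For context, here is a minimal sketch of the kind of tool-calling loop such an SDK abstracts away, written directly against the `ollama` Python client. It assumes a recent `ollama` package with tools support, a running Ollama server, and a tools-capable local model; the `get_weather` tool and model name are illustrative and not part of Consoul.

```python
# Hand-rolled tool-calling loop (the part an SDK like Consoul wraps for you).
# Assumes: `pip install ollama`, a local Ollama server, and a tools-capable model.
import ollama

def get_weather(city: str) -> str:
    """Illustrative tool; replace with a real implementation."""
    return f"It is sunny in {city}."

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
REGISTRY = {"get_weather": get_weather}

messages = [{"role": "user", "content": "What's the weather in Hanoi?"}]
while True:
    response = ollama.chat(model="llama3.1", messages=messages, tools=TOOLS)
    messages.append(response.message)
    if not response.message.tool_calls:
        break  # no tool calls left: the model has produced its final answer
    for call in response.message.tool_calls:
        # Ollama returns tool arguments as a dict, so they can be splatted directly.
        result = REGISTRY[call.function.name](**call.function.arguments)
        messages.append({"role": "tool", "content": result})

print(response.message.content)
```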

#Python #SDK #Ollama #AI #ToolCalling #DeveloperTools #AITools #Development

reddit.com/r/LocalLLaMA/commen

2025-11-08

💬🤖 The community asks: can “Kimi K2 Thinking” run on vLLM or sglang yet with tool-calling support and without hallucinating? The problem apparently stems from how tool calls are emitted and from grammar mismatches. Kimi is currently trying to apply grammar rules to fix the errors, while resources remain limited where tool calling isn't supported. #KimiK2 #vLLM #sgLang #toolcalling #AI #AICommunity #ToolsLlama #AIhub #TechNews 🚀

reddit.com/r/LocalLLaMA/commen

2025-10-18

Google's AI framework Genkit: the developer tooling has changed

A hands-on guide to Genkit, the open-source AI framework built by the Google Firebase team. It shows how to use Gemini, GPT, and Claude interchangeably through a unified API, raise development productivity with visual debugging tools, and handle everything up to production deployment in one place.

aisparkup.com/posts/5604

2025-10-10

A user is looking for a reliable LLM (under 48 GB of VRAM) to run AI agents with vLLM. Models such as Qwen3, Gemma3, GPT-OSS, and Mistral all suffer from unstable tool calling. Does the community have any suggestions?
#vLLM #LLM #Agent #ToolCalling #AI #LocalLLaMA #AIModels #ArtificialIntelligence
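
One way to put a number on "unstable tool calling" is to hit the model repeatedly through vLLM's OpenAI-compatible endpoint and count well-formed tool calls. A rough sketch follows; it assumes a model is already served locally with vLLM's tool-calling options enabled (e.g. `--enable-auto-tool-choice` plus a matching `--tool-call-parser`), and the model name and tool definition are placeholders.

```python
# Probe how often a vLLM-served model emits a well-formed tool call.
# Assumes an OpenAI-compatible vLLM server on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search internal documentation",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

ok, N = 0, 20
for _ in range(N):
    resp = client.chat.completions.create(
        model="Qwen/Qwen3-32B",  # placeholder; use the model actually being served
        messages=[{"role": "user", "content": "Find the deployment guide."}],
        tools=tools,
    )
    calls = resp.choices[0].message.tool_calls or []
    # Count a run as "ok" only if the model produced a call to our declared tool.
    ok += any(c.function.name == "search_docs" for c in calls)

print(f"{ok}/{N} responses contained a valid tool call")
```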

reddit.com/r/LocalLLaMA/commen

aaron ~# (neuroexception@infosec.exchange)
2025-09-10

Making the most out of a small LLM

Yesterday I finally built my own #AI #server. I had a spare #Nvidia RTX 2070 with 8GB of #VRAM lying around and had wanted to do this for a long time.

The problem is that most #LLMs need a lot of VRAM, and I don't want to buy another #GPU just to host my own AI. Then I came across #gemma3 and #qwen3. Both are impressive #quantized models with surprisingly strong reasoning given how few resources they need.

I chose huihui_ai/qwen3-abliterated:14b since it supports #deepthinking and #toolcalling and is fairly unrestricted. After some testing I noticed that the 8b model performs even better than the 14b variant, with drastically better performance. Honestly, I can't make out any quality loss. The 14b model very often sneaked Chinese characters into its responses; the 8b model doesn't.

Now I've got a very fast model with impressive reasoning (even in German) and tool-calling support. The only thing left to improve is knowledge. #Firecrawl is a great tool for #webscraping, and as soon as I implemented web searching, the setup was complete. At least I thought it was.

I want to make the most of this LLM, so my next step is to implement a basic #webserver that exposes the same #API #endpoints as #ollama, so that anywhere Ollama is supported, I can point it at my Python script instead. This way the model feels far more capable than it actually is, and I can use these advanced features everywhere without being bound to its actual knowledge.
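
A minimal sketch of what such a shim could look like (not the author's code): a FastAPI app that answers Ollama's `/api/chat` and `/api/tags` routes in Ollama's documented non-streaming response shape, with a placeholder pipeline where the real model call and web search would go. The model name is made up.

```python
# Ollama-compatible shim: clients that speak Ollama's HTTP API can point here instead.
from datetime import datetime, timezone
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]
    stream: bool = False  # this sketch only implements non-streaming responses

def run_pipeline(messages: list[dict]) -> str:
    # Placeholder: call the real local model here, optionally after web search/scraping.
    return "Hello from the augmented model."

@app.get("/api/tags")
def tags():
    # Advertise one model name so Ollama clients have something to list.
    return {"models": [{"name": "my-augmented-model", "model": "my-augmented-model"}]}

@app.post("/api/chat")
def chat(req: ChatRequest):
    answer = run_pipeline(req.messages)
    # Mirror Ollama's non-streaming /api/chat response shape.
    return {
        "model": req.model,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "message": {"role": "assistant", "content": answer},
        "done": True,
    }
```

Run it with `uvicorn shim:app --port 11434` (assuming the file is saved as shim.py; 11434 is Ollama's default port) and point any Ollama client at that address.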

To improve this setup even more, I will likely switch to a #mixture_of_experts architecture soon. This project is a lot of fun and I can't wait to integrate it into my homelab.

#homelab #selfhosting #privacy #ai #llm #largelanguagemodels #coding #development

2025-08-27

Go-UTCP: a universal tool-calling protocol for Go! ⚙️ It supports multiple transports (HTTP, WebSocket, TCP/UDP, gRPC, GraphQL, ...), friendly configuration via .env, and an OpenAPI conversion utility. Getting started is as easy as `go get`. Version v1.7.0 is actively maintained!

#golang #utcp #toolcalling #programming #protocol #dev #go

reddit.com/r/programming/comme

Harald Klinke (HxxxKxxx@det.social)
2025-08-22

LiveMCP-101: Benchmarking AI Tool Use
New benchmark with 101 real-world queries testing AI agents on multi-step tasks using diverse MCP tools (search, file ops, math, data analysis).

Key points:
• Ground-truth execution plans for realistic evaluation
• Frontier LLMs succeed <60% → major orchestration challenges
• Error analysis highlights inefficiencies & failure modes

arxiv.org/abs/2508.15760v1
#AI #Agents #ToolCalling #Benchmarking

Deepu K Sasidharan (deepu105)
2025-08-01

I've been diving deep into the world of AI lately. My latest blog post explores how to build an AI agent that can call internal and external APIs using LangGraph and Auth0 Token Vault. Check it out to learn how it all fits together!
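
The Auth0 Token Vault integration is what the post is really about and isn't reproduced here. Purely as a hedged sketch, this is the bare LangGraph side of such an agent: a prebuilt ReAct agent with one external-API tool. The GitHub lookup tool, endpoint, and model choice are illustrative assumptions, not taken from the blog.

```python
# Minimal LangGraph ReAct agent with a single external-API tool.
# Assumes: langgraph, langchain-openai, requests installed and OPENAI_API_KEY set.
import requests
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_github_user(username: str) -> str:
    """Fetch public profile info for a GitHub user."""
    r = requests.get(f"https://api.github.com/users/{username}", timeout=10)
    r.raise_for_status()
    data = r.json()
    return f"{data.get('name')}: {data.get('public_repos')} public repos"

agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[get_github_user])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Who is user 'torvalds' on GitHub?"}]}
)
print(result["messages"][-1].content)
```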

auth0.com/blog/genai-tool-call

2025-07-26

Captain’s Log, Stardate Java: Building a Quarkus-Powered AI Sci-Fi App with Langchain4j and Ollama. Use the power of local LLMs, Quarkus magic, and Langchain4j tool calling to generate dynamic, weekday-aware space captain logs
myfear.substack.com/p/quarkus-
#Java #Quarkus #LangChain4j #ToolCalling #CaptainsLog

Santhosh Thottingal (sthottingal)
2025-06-23

Sharing the documentation of an exploration I did some time back about grounding LLMs in Wikidata facts using tool-calling features - WQ42: Grounding LLMs in Wikidata Facts via Tool Calling. thottingal.in/blog/2025/06/21/

You may try wq42.toolforge.org/ to see this in action.

Natural language questions are answered using the facts available in Wikidata. Some analytical, multi-hop, and mathematical questions are also supported.

Qn: Which river is the longest? Nile river or Amazon river?

This is an analytical, multi-hop question. After fetching the required information, Lua code is written and executed to determine which river is longer.
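
Not the WQ42 implementation, but a hedged sketch of the kind of Wikidata lookup tool an LLM could call for a question like that: resolve each label to a Q-id via the public wbsearchentities API, then read the normalized length statement (property P2043) from the public SPARQL endpoint. It assumes both entities actually carry a P2043 statement.

```python
# Sketch of a Wikidata "length lookup" tool for grounding answers in Wikidata facts.
import requests

def find_entity(label: str) -> str:
    """Return the Q-id whose English label best matches `label`."""
    r = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={"action": "wbsearchentities", "search": label,
                "language": "en", "format": "json"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["search"][0]["id"]

def get_length_km(qid: str) -> float:
    """Fetch the length (P2043) of an entity via SPARQL, normalized to kilometres."""
    query = f"""
    SELECT ?length WHERE {{
      wd:{qid} p:P2043/psn:P2043/wikibase:quantityAmount ?length .
    }} LIMIT 1
    """
    r = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": query, "format": "json"},
        headers={"User-Agent": "wikidata-tool-sketch/0.1"},
        timeout=30,
    )
    r.raise_for_status()
    metres = float(r.json()["results"]["bindings"][0]["length"]["value"])
    return metres / 1000.0  # normalized values come back in metres

for river in ("Nile", "Amazon River"):
    qid = find_entity(river)
    print(river, qid, f"{get_length_km(qid):.0f} km")
```
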
JuanluElGuerre (JuanluElGuerre@hachyderm.io)
2025-06-18

🌤️ What if your LLM could check the weather using your own .NET REST API?
Learn how to connect Semantic Kernel, Google Gemini 2.5 Flash, and your MCP server to build a real-time weather agent.
👉 elguerre.com/2025/06/18/extend
#AI #DotNet #SemanticKernel #GoogleGemini #LLM #ToolCalling #MCP #CSharp

Gavin Morgan (gavmor@ruby.social)
2024-08-30

I suspect that #LLM #ToolCalling is going to eat #Zapier's lunch if they don't lead on it.
