Daniel
2025-04-15

I made a browser extension to enable MCP for claude[.]ai. It works via SSE, so you can just point it directly at your SSE MCP servers. It's kinda cool because it just reuses the existing MCP code Anthropic already ships, only not enabled, so all of the MCP UI works :D
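
For context, an SSE MCP endpoint is just a long-lived HTTP stream. A minimal C/libcurl sketch of subscribing to one at the transport level (the URL is a placeholder, and the extension itself of course does this from the browser, not in C):

    /* Sketch: subscribe to an SSE stream with libcurl. Frames arrive as
     * "event: ...\ndata: ...\n\n" blocks that a real client would parse. */
    #include <stdio.h>
    #include <curl/curl.h>

    static size_t on_chunk(char *data, size_t size, size_t nmemb, void *userdata) {
        (void)userdata;
        fwrite(data, size, nmemb, stdout); /* just echo the raw stream */
        return size * nmemb;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        struct curl_slist *hdrs = curl_slist_append(NULL, "Accept: text/event-stream");
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:3000/sse"); /* placeholder */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_chunk);
        CURLcode res = curl_easy_perform(curl); /* blocks while the stream stays open */
        if (res != CURLE_OK)
            fprintf(stderr, "curl: %s\n", curl_easy_strerror(res));
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }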

Daniel boosted:
pancake :radare2: @pancake@infosec.exchange
2025-04-10

If anyone is curious about r2mcp, yes, it now runs locally with openwebui and mcpo #r2ai #radare2 #reverseengineering #llm
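
For anyone else wondering: mcpo wraps an MCP server and exposes it as an OpenAPI endpoint that openwebui can talk to. The invocation is roughly the following; the port flag follows mcpo's documented pattern, and the r2mcp launch command here is an assumption, so check both projects' docs:

    # wrap the r2mcp stdio server in an OpenAPI endpoint on port 8000
    uvx mcpo --port 8000 -- r2pm -r r2mcp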

2025-04-03

@pancake yooo how do i use this

2025-04-03

@pancake @cryptax yeah, the way Claude Desktop does it, it just asks you to approve the tool. You can't really approve individual calls

Daniel boosted:
pancake :radare2: @pancake@infosec.exchange
2025-04-02

The whole #mcp ecosystem is pure magic, here's a quick demo seamlessly running the r2mcp server. Kudos to the plugin's author @dnakov #r2ai #reverseengineering #llm #claude. If you want to try it out, just run "r2pm -Uci r2mcp" and add the JSON block described in the repo's readme!
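
The exact JSON block lives in the repo's readme; the general shape of a Claude Desktop MCP entry (in claude_desktop_config.json) is shown below, with the command and args being an illustrative guess rather than the readme's actual values:

    {
      "mcpServers": {
        "r2mcp": {
          "command": "r2pm",
          "args": ["-r", "r2mcp"]
        }
      }
    }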

2025-03-31

C is just so much better

2025-03-31

Added exponential backoff to r2ai.c and an execute_javascript tool. It's pretty much feature complete at this point.
The one missing piece is the "ask_to_execute" mode, where it stops and lets you edit each command/script. I'm trying to implement it now
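
A minimal sketch of the backoff pattern, not the actual r2ai.c code; do_request here is a stand-in for the real HTTP call:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_RETRIES 5
    #define BASE_DELAY_MS 500

    /* Stand-in for the real request: this stub fails twice, then succeeds. */
    static bool do_request(const char *payload) {
        static int attempts = 0;
        (void)payload;
        return ++attempts >= 3;
    }

    static bool request_with_backoff(const char *payload) {
        int delay_ms = BASE_DELAY_MS;
        for (int i = 0; i < MAX_RETRIES; i++) {
            if (do_request(payload)) {
                return true;
            }
            fprintf(stderr, "retrying in %dms\n", delay_ms);
            usleep((useconds_t)delay_ms * 1000); /* wait before retrying */
            delay_ms *= 2; /* 500ms -> 1s -> 2s -> 4s ... */
        }
        return false; /* give up after MAX_RETRIES attempts */
    }

    int main(void) {
        return request_with_backoff("{}") ? 0 : 1;
    }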

2025-03-30

@pancake nickname is worth at least 100M

2025-03-29

Vibe-ported r2ai-py's auto mode to C and got pretty much everything working, so we can now just use it from r2 as a plugin without loading all the Python bs
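
Rough shape of what "use it from r2 as a plugin" means in C: a generic core-plugin skeleton, not r2ai's actual source, and note the struct layout varies between radare2 versions:

    #include <r_core.h>

    /* Command handler: r2 calls this for each command; return true if the
     * input was ours ("r2ai ..."), false to let r2 keep dispatching. */
    static int r2ai_call(void *user, const char *input) {
        RCore *core = (RCore *)user;
        (void)core;
        if (r_str_startswith (input, "r2ai")) {
            /* ...run the auto-mode loop here... */
            return true;
        }
        return false;
    }

    RCorePlugin r_core_plugin_r2ai = {
        .name = "r2ai",
        .desc = "AI auto mode for radare2 (illustrative skeleton)",
        .license = "MIT",
        .call = r2ai_call,
    };

    R_API RLibStruct radare_plugin = {
        .type = R_LIB_TYPE_CORE,
        .data = &r_core_plugin_r2ai,
        .version = R2_VERSION
    };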

2024-11-11

@pancake @bemodtwz lol i'm stealing this

2024-11-10

@astralia tbf, random out of 3 pre-selected, but yeah, a step towards :D

2024-11-02

@radareorg awesome!

2024-10-29

@pancake in the US too, but it's already available on the dev build 18.2. It's summarizing everything and it's kind of annoying

2024-10-27

@pancake and the conversation is mostly just so that it doesn't start degrading after the first answer

2024-10-27

@pancake nah it's just text. making the models think out loud supposedly improves quality. So adding <reasoning>some explanation</reasoning> to the dataset will make it more likely for the model to respond with longer explanations

2024-10-27

@pancake yeah that might help with small models like 1-3b but i think bigger ones already know that. For reasoning, i was thinking of combining multiple individual commands and a <reasoning> tag.
We're also going to have to somehow combine all that into various longer conversations
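
As a made-up illustration of that format, one dataset sample might pair a question with a <reasoning> block plus chained r2 commands:

    User: which libc functions does this binary import?
    Assistant: <reasoning>I need the import table first, then filter it
    for libc symbols, so I chain the list command with r2's internal
    grep.</reasoning>
    ii~libc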

2024-05-25

@radareorg r2ai :)

2024-05-25

@pancake i started refactoring the auto providers, adding wait-for-quota, some exception handling, etc. changed the prompt a bit too. starting to get some really nice results from Gemini

2024-05-24

@pancake nice, you'd have to worry about the different special tokens for each model though, no? sys, inst, etc
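
For reference, that's what the per-model chat templates look like, e.g. Llama-2-style vs ChatML-style wrapping of the same system + user turn:

    Llama-2 style:
      <s>[INST] <<SYS>>
      You are a reverse engineering assistant.
      <</SYS>>

      What does this function do? [/INST]

    ChatML style:
      <|im_start|>system
      You are a reverse engineering assistant.<|im_end|>
      <|im_start|>user
      What does this function do?<|im_end|>
      <|im_start|>assistant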
