Ronnie Wong @ronniew
2026-01-20

Who on earth develops a video player like this?

Ronnie Wong @ronniew
2026-01-20

We are truly living in a great AI era.
Those developer-facing tools for maintaining an app's translation strings just got replaced by a single Gemini CLI command.

Ronnie Wong @ronniew
2026-01-18

--enable-videotoolbox
--enable-hwaccel=hevc_videotoolbox
--enable-encoder=h264_videotoolbox

This is not easy.

github.com/qoli/FFmpegKit

Real-time transcoding solution for the black screen when playing H.265 with AVPlayer
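For anyone curious what those build flags enable at runtime, the equivalent plain-ffmpeg invocation looks roughly like this (a sketch only; file names and the bitrate are placeholders, not taken from FFmpegKit):

```shell
# Decode HEVC with VideoToolbox hardware acceleration, re-encode to
# H.264 with the hardware encoder, and pass the audio through untouched.
ffmpeg -hwaccel videotoolbox -i input_hevc.mp4 \
       -c:v h264_videotoolbox -b:v 6000k \
       -c:a copy output_h264.mp4
```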

Ronnie Wong @ronniew
2026-01-18

I’ve been using this small script for ~2 years to send text input into the tvOS Simulator (via Raycast + AppleScript keystrokes).
It’s been a reliable helper for testing flows without an easy on-screen keyboard path.
Sharing it in case it helps other tvOS devs.

gist.github.com/qoli/2f865fadf
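The gist link above is truncated, so here is a rough sketch of the general approach (the app names and typed string are my placeholders, not the gist's contents): System Events can type into whatever field currently has focus in the Simulator.

```shell
# Bring the Simulator forward, wait for focus, then synthesize keystrokes.
# Requires Accessibility permission for the invoking app (e.g. Raycast).
osascript -e 'tell application "Simulator" to activate' \
          -e 'delay 0.5' \
          -e 'tell application "System Events" to keystroke "test-input@example.com"'
```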

Ronnie Wong @ronniew
2026-01-17

I think this is mostly an expectation mismatch on my side.

I ran the same probe script on the same URL with Puppeteer vs Lightpanda. Puppeteer finds the runtime media URL, Lightpanda doesn’t (see screenshot).

This kind of JS + iframe + dynamic player site really needs a real browser runtime. So not a bug, just a capability boundary. Lightpanda is still the only web-runtime-like project I’ve seen that’s realistically portable to tvOS, which is why I tried it.

gist.github.com/qoli/bac04246e

Ronnie Wong @ronniew
2026-01-17

I’m somewhat disappointed with Lightpanda.

1. Seeing the “headless browser” label, I expected something on the level of headless Chrome.
2. It’s written in Zig, which makes it very portable.
3. I even had Codex port it to tvOS, and it runs.
4. But after testing, I found that it’s essentially just a “high-end curl”.

github.com/qoli/browser/blob/m
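A minimal illustration of the "high-end curl" point, assuming a page whose player element is injected by JavaScript (the URL is a placeholder): a static fetch never sees the element, while headless Chrome executes the page's scripts before serializing the DOM.

```shell
# Static fetch: returns the HTML exactly as served, no JS execution.
curl -s https://example.com/player-page | grep -c '<video'

# Headless Chrome: runs the page's JS first, then dumps the live DOM.
chrome --headless=new --dump-dom https://example.com/player-page | grep -c '<video'
```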

Ronnie Wong @ronniew
2026-01-16

@eugenpirogoff

You’re right — I misunderstood the Claude Skills concept before.

I originally thought Skills had some kind of post-validation / QA script layer. After re-reading the docs, it doesn’t. In practice it’s actually very similar to MCP in nature.

So yes — Skills are not the answer either.

Ronnie Wong @ronniew
2026-01-16

I'm starting on a concept called 「observō」. It's interesting: it reports on what Codex is currently doing.

Running on qwen3-coder-30b-a3b-instruct-mlx-6 (24 GB VRAM). Performance is great.

Ronnie Wong @ronniew
2026-01-16

@eugenpirogoff

I tried it. Honestly, I don’t see a big difference.

When it works, it’s still mostly the model fixing things by itself. The failures are still DSL / Swift structure issues (e.g. AppShortcut vs [AppShortcut]). Using the MCP doesn’t really change that.

Ronnie Wong @ronniew
2026-01-16

@eugenpirogoff

Let me test it first and see.

For context, this is in a real project: adding a new App Intent wrapping an existing feature, using iOS 26 interactive snippets (brand new API). I’ll try constraining the model to Cupertino’s patterns and see if it helps.

Ronnie Wong @ronniew
2026-01-16

Codex (and similar AI CLIs) are terrible at writing 「Apple AppIntents」.
They often generate code that doesn’t even compile.

This is not a prompt problem. It needs a domain-specific Agent Skill.

But instead, people build “skills” for:
commit messages, renaming variables, summarizing files.

Safe. Shallow. Demo-friendly.
The real broken parts? Nobody cares.

Ronnie Wong @ronniew
2026-01-15

Releasing versions this way doesn't disrupt my long-standing habits at all. Feels great.

Ronnie Wong @ronniew
2026-01-15

I casually put together a simple command with Gemini + Notion MCP (the prompt says: "1. Read the Notion page 'eisonAI 更新日誌' and report the latest version's changes; 2. then write them to /Volumes/Data/Github/eisonAI/telegram/changelog.md"):
gemini --yolo "1. 讀取 notion page eisonAI 更新日誌;然後回報「最新版本」的更新內容;2. 然後寫入 /Volumes/Data/Github/eisonAI/telegram/changelog.md;"

[Image: terminal log of `eisonAI` updating to version 2.6, showing the script's execution, the errors hit along the way (missing files, network issues), and the new version's changelog: new features, bug fixes, and removed options, plus the current run mode and configuration.]

Ronnie Wong @ronniew
2026-01-15

After a full day of tuning mlc-llm configs, my conclusion is:
prefill_chunk_size is the real bottleneck for long-text pipelines, not the context window.

With 800–2k token inputs, the default of 128 stalls completely during prefill.
I ended up chunking at 640 (1792 / 640 ≈ 2.8), and the load finally evened out.

Qwen3 0.6B q4f16 can now run a long-text pipeline on iPhone 15 Pro / iPad Pro M4.

But web-llm in the Safari extension still blows up:
WebGPU device lost / Object already disposed 🤷

Conclusion:
✅ model + pipeline: usable
❌ WebGPU runtime stability: still engineering hell

"model_id": "Qwen3-0.6B-q4f16_1-MLC",
"overrides": {
"prefill_chunk_size": 640,
"context_window_size": 3072
}
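For anyone double-checking the chunk math above: with prefill_chunk_size = 640, an 1792-token input is prefilled in ceil(1792 / 640) = 3 passes. A quick shell check (numbers taken from the post):

```shell
tokens=1792
chunk=640
# Ceiling division: how many prefill passes this input length needs.
echo $(( (tokens + chunk - 1) / chunk ))  # → 3
```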

Ronnie Wong @ronniew
2026-01-15

@fatbobman @mattiem cool! hi Matt.

Ronnie Wong @ronniew
2026-01-15

I built something called HLN Machine (Hell Light News Factory).
1. It’s not an “AI magic button”.
2. It takes a news seed as input.
3. It runs a multi-stage pipeline: LLM / TTS / ASR / VL / T2V / I2V / S2V
4. And outputs a YouTube Shorts video.

It’s a fully local, white-box AI pipeline, not a black box.

Details here:
qoli.notion.site/HLN-HLN-Machi
