Do we have any owners of one of those Ryzen AI Max+ 395 128GB UMA boxes here who have been running one daily for at least a few months as a local LLM coding server, and who could give a comparative rundown of its performance vs the OG Claude and its collection of formal prose generators?
Also: I'm especially curious about any watt-meter numbers, both daily consumption and base/peak draw. Same for the models you run: their size, the tok/s they achieve, and their response times.
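For anyone sharing tok/s numbers: ollama's non-streaming `/api/generate` response already includes the raw counters (`prompt_eval_count`, `eval_count`, and their `*_duration` fields in nanoseconds), so throughput falls out directly. A minimal sketch, with made-up sample values standing in for a real response:

```python
# Compute tok/s from the counters in ollama's /api/generate JSON response.
# The resp dict below uses invented example numbers; a real one comes from e.g.
#   curl http://localhost:11434/api/generate \
#     -d '{"model": "qwen2.5-coder", "prompt": "...", "stream": false}'
resp = {
    "prompt_eval_count": 26,              # tokens in the prompt
    "prompt_eval_duration": 130_000_000,  # ns spent on prompt processing
    "eval_count": 290,                    # generated tokens
    "eval_duration": 4_700_000_000,       # ns spent generating
}

NS = 1e9  # durations are reported in nanoseconds
prompt_tps = resp["prompt_eval_count"] / resp["prompt_eval_duration"] * NS
gen_tps = resp["eval_count"] / resp["eval_duration"] * NS
print(f"prompt: {prompt_tps:.1f} tok/s, generation: {gen_tps:.1f} tok/s")
```

That way the numbers people post here are at least computed the same way and comparable across boxes.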
And if you've had the chance to compare this against a beefy non-UMA dGPU on the above parameters, that would also be quite interesting.
#claude #aicoding #AIAssisted #ollama #onprem #selfhosting #StrixHalo #ryzenaimaxplus395 #ryzenAiMax #powerconsumption #costefficiency #uma