ZOYA ✪ (@HeyZoyaKhan)
Introduces a tool feature that supports multiple frontier models plus an 'Auto' mode, which picks the best model for each task so users don't have to choose one themselves. The tool works by automatically selecting a speed-first model when speed matters and an accuracy-first model when depth is needed.
Why does AI orchestration succeed? Not the size of the LLM, but hitting ~90% router accuracy. Learn how precise routing, semantic cues, and smart decision logic let specialist models shine in production. A deep dive into model selection and router design that could reshape your AI pipeline. #AIRouterAccuracy #LLMRouting #ModelSelection #SemanticRouting
🔗 https://aidailypost.com/news/ai-orchestration-success-hinges-90-router-accuracy-not-model-size
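The linked article stays at a high level, so here is a minimal Python sketch of what an 'Auto' routing layer can look like. The keyword/length heuristic stands in for a real semantic router, and the model names are placeholders invented for the example, not the tool's or the article's actual logic.

```python
# Minimal sketch of an "Auto" routing layer. A keyword/length heuristic stands
# in for the semantic classifier; model names are placeholders, not any real
# product's models or decision logic.

from dataclasses import dataclass

@dataclass
class Route:
    model: str      # placeholder model identifier
    reason: str     # why the router chose it

DEEP_CUES = ("prove", "analyze", "refactor", "architecture", "debug", "trade-off")

def route(prompt: str) -> Route:
    """Pick a speed-first model for short/simple prompts, an accuracy-first
    model when the prompt signals depth (long input or reasoning keywords)."""
    wants_depth = len(prompt) > 400 or any(cue in prompt.lower() for cue in DEEP_CUES)
    if wants_depth:
        return Route(model="accuracy-first-model", reason="depth cues detected")
    return Route(model="speed-first-model", reason="short, simple request")

if __name__ == "__main__":
    print(route("Summarize this paragraph in one line."))
    print(route("Analyze the trade-offs of sharding this database by tenant."))
```

In a production router the heuristic would be replaced by an embedding or classifier stage, which is where the ~90% routing-accuracy figure from the article applies.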
Diving deep into the world of model selection! Discover how to choose the most effective model for your task and make data-driven decisions. #ModelSelection #MachineLearning #DataScience #AI #Analytics
Robert from Code Web Chat (@robertpiosik)
A proposal for an inline model selection feature: the author says they frequently switch between models (modes) like 'fast', 'thinking', and 'pro', and wants a UI improvement that gathers them in one place. It is an interface change request for quick switching between models.
#statstab #467 Hypothesis testing, model selection, model comparison: some thoughts
Thoughts: An excellent (but too short) discussion on Bayesian inference.
#bayesian #bayesfactor #modelselection #inference #NBHT #BF #ROPE #primer
Not So Prompt: Prompt Optimization as Model Selection
https://www.gojiberries.io/not-so-prompt-prompt-optimization-as-model-selection/
#HackerNews #PromptOptimization #ModelSelection #AIResearch #GojiBerries
#statstab #393 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements [actual post]
Thoughts: #392 has the comments, but this is where the magic happens.
#modelselection #modelcomparison #variance #effectsize #tutorial
#statstab #392 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements (forum thread)
Thoughts: Forums can be great for asking the author for exact answers to complex questions.
#modelselection #causalinference #prediction #bias #information
#statstab #358 What are some of the problems with stepwise regression?
Thoughts: Model selection is not an easy task, but maybe don't naively reach for stepwise regression.
#stepwise #regression #QRPs #issues #phacking #modelselection #bias
https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/
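To make the linked FAQ's point concrete, here is a hedged sketch (mine, not the Stata FAQ's code) showing how forward selection by p-value "discovers" predictors even when the outcome and all candidate predictors are pure noise; the sample size, number of candidates, and 0.05 entry threshold are arbitrary.

```python
# Illustration of the stepwise-regression pitfall: with many candidate
# predictors and pure noise, forward selection by p-value still "finds"
# predictors, inflating Type I error. All numbers are arbitrary.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))      # 50 candidate predictors, all pure noise
y = rng.normal(size=n)           # outcome unrelated to every predictor

selected = []
remaining = list(range(p))
while remaining:
    # p-value of each remaining predictor when added to the current model
    pvals = {}
    for j in remaining:
        cols = selected + [j]
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals[j] = fit.pvalues[-1]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:       # naive entry criterion
        break
    selected.append(best)
    remaining.remove(best)

print(f"Stepwise 'discovered' {len(selected)} noise predictors: {selected}")
```

Because the search reuses the same data at every step, the reported p-values and coefficients of the final model no longer mean what they claim to mean, which is the core of the FAQ's objection.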
IRIS Insights | Nico Formanek: Are hyperparameters vibes?
April 24, 2025, 2:00 p.m. (CEST)
Our second IRIS Insights talk will take place with Nico Formanek.
🟦
This talk will discuss the role of hyperparameters in optimization methods for model selection (currently often called ML) from a philosophy of science point of view. Special consideration is given to the question of whether there can be principled ways to fix hyperparameters in a maximally agnostic setting.
🟦
This is a WebEx talk to which everyone who is interested is cordially invited. It will take place in English. Our IRIS speaker, Jun.-Prof. Dr. Maria Wirzberger, will moderate it. Following Nico Formanek's presentation, there will be an opportunity to ask questions. We look forward to active participation.
🟦
Please join this Webex talk using the following link:
https://lnkd.in/eJNiUQKV
🟦
#Hyperparameters #ModelSelection #Optimization #MLMethods #PhilosophyOfScience #ScientificMethod #AgnosticLearning #MachineLearning #InterdisciplinaryResearch #AIandPhilosophy #EthicsInAI #ResponsibleAI #AITheory #WebTalk #OnlineLecture #ResearchTalk #ScienceEvents #OpenInvitation #AICommunity #LinkedInScience #TechPhilosophy #AIConversations
Can anyone help with understanding how best to do #modelselection in the context of #neuralnetworks? I'm trying to understand how to reduce #bias due to the selection of a particular test set.
More details here
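One standard answer, sketched here with scikit-learn as an illustration rather than the poster's actual setup: nested cross-validation with repeated outer splits, so hyperparameter selection never touches the data used to estimate generalization, and the estimate does not hinge on any single test set. The network sizes and grid below are placeholder values.

```python
# Nested cross-validation sketch: the inner loop (GridSearchCV) does model
# selection, the outer loop (RepeatedStratifiedKFold) estimates generalization
# on data the selection never saw; repeating the outer split reduces the
# dependence on one particular test set. Settings are arbitrary examples.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

inner = GridSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    param_grid={"mlpclassifier__hidden_layer_sizes": [(16,), (64,), (64, 32)]},
    cv=3,
)
outer = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = cross_val_score(inner, X, y, cv=outer)

print(f"generalization estimate: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Passing the GridSearchCV object into cross_val_score is what makes the loop nested: each outer fold re-runs the inner search from scratch, so no outer test fold ever influences which architecture gets picked.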
From the standpoint of model selection, parsimony often boils down to dimensionality reduction.
#modelSelection #parsimony #OccamsRazor #dimensionalityReduction #degreesOfFreedom #complexity #informationTheory #biasVarianceTradeoff #overfitting #underfitting #optimization #parameterTuning #crossValidation #inverseProblems #inference #statisticalLearning #machineLearning #ML #dataScience #modeling #decisionTheory #fitting #regression #classification #residualError #costFunction #performanceLoss
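A deliberately small illustration of that point: an information criterion such as AIC trades goodness of fit against the number of fitted parameters, i.e. against model dimensionality. The data-generating process and degree range below are invented for the example.

```python
# AIC-based model selection over polynomial degree: the penalty term grows with
# the number of parameters, so the chosen model is the lowest-dimensional one
# that still explains the data. Data are synthetic; the true model is degree 2.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 80)
y = 1.5 * x - 2.0 * x**2 + rng.normal(scale=0.2, size=x.size)

def aic_for_degree(d: int) -> float:
    coeffs = np.polyfit(x, y, d)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, d + 1                       # k = number of coefficients (dimensionality)
    sigma2 = np.mean(resid**2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # Gaussian max log-likelihood
    return 2 * k - 2 * log_lik                 # fit term plus dimensionality penalty

aics = {d: aic_for_degree(d) for d in range(1, 9)}
print("selected degree:", min(aics, key=aics.get))
```

BIC would simply replace the 2k penalty with k·log(n), penalizing extra dimensions more heavily and so leaning even harder toward parsimony.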
7/10) This finding led to our #proposal: Can we use α for #modelSelection in an #SSL pipeline?
Two key advantages of α:
1. α doesn’t require labels
2. α is quick to #compute (compared to training a readout)
We study hyperparam selection in #BarlowTwins (Zbontar et al.) as a case study!
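A hedged sketch of how such a label-free α might be computed, assuming α here is the power-law decay exponent fit to the eigenspectrum of the (unlabeled) feature covariance; this is an illustration of the general recipe, not the thread authors' code.

```python
# Sketch: estimate a spectral-decay exponent alpha from unlabeled SSL features,
# assuming alpha is the power-law exponent of the covariance eigenspectrum.
# Candidate hyperparameter settings could then be ranked by alpha instead of
# training a labeled linear readout for each one.

import numpy as np

def alpha_from_features(feats: np.ndarray, fit_range: slice = slice(1, 100)) -> float:
    """Fit eigenvalue_i ~ i^(-alpha) on a log-log scale; feats is (n_samples, dim)."""
    feats = feats - feats.mean(axis=0, keepdims=True)
    cov = feats.T @ feats / feats.shape[0]
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    idx = np.arange(1, eigvals.size + 1)
    slope, _ = np.polyfit(np.log(idx[fit_range]), np.log(eigvals[fit_range]), deg=1)
    return -slope  # larger alpha = faster spectral decay

# Stand-in for embeddings produced by one Barlow Twins hyperparameter setting.
feats = np.random.default_rng(0).normal(size=(2048, 256))
print(f"alpha = {alpha_from_features(feats):.2f}")
```

Both advantages from the thread show up directly: the computation touches only the embeddings (no labels), and one eigendecomposition per setting is far cheaper than training a readout per setting.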