#LanguageModeling

Hacker News (@h4ckernews)
2025-05-30

Tokenization for language modeling: BPE vs. Unigram Language Modeling (2020)

ndingwall.github.io/blog/token
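
The linked post compares BPE and Unigram tokenization. As a quick illustration of the BPE side, here is a minimal sketch of a single BPE merge step (the toy corpus and helper name are my own, not from the post; real BPE tracks symbol boundaries instead of doing raw string replacement):

```python
from collections import Counter

def bpe_merge_step(words):
    """One BPE step: find the most frequent adjacent symbol pair
    across the corpus and merge it everywhere."""
    pairs = Counter()
    for word, freq in words.items():
        syms = word.split()
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return words, None
    best = max(pairs, key=pairs.get)
    merged = {}
    for word, freq in words.items():
        # Toy simplification: merge via string replace on the space-joined pair.
        merged[word.replace(" ".join(best), "".join(best))] = freq
    return merged, best

# Toy corpus: words pre-split into characters, with frequencies.
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
corpus, pair = bpe_merge_step(corpus)
print(pair)
```

Repeating this step until a target vocabulary size is reached yields the BPE merge table; the Unigram approach instead starts from a large candidate vocabulary and prunes it by likelihood.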

Harald Klinke (@HxxxKxxx@det.social)
2025-04-21

Andrey Markov & Claude Shannon Counted Letters to Build the First Language-Generation Models
Shannon’s model said: “OCRO HLI RGWR NMIELWIS”
#Shannon #Markov #NLP #AIhistory #LanguageModeling
spectrum.ieee.org/andrey-marko
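
The letter-counting idea the article describes can be sketched in a few lines: count how often each character follows each other character, then sample a chain from those counts, a first-order character Markov model (the training string below is my own illustration, not Shannon's corpus):

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, how often each character follows it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, seed=0):
    """Sample n characters, each drawn with probability proportional
    to how often it followed the previous character in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("the theory of communication that shannon wrote")
print(generate(model, "t", 30))
```

Shannon's published examples used exactly this recipe at increasing orders (letter pairs, triples, then words), which is what produces garbled-but-English-looking strings like the one quoted above.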

2024-11-06

🎯 #OuteTTS introduces a novel approach to text-to-speech synthesis using pure #languagemodeling
🔧 Built on #LLaMa architecture with just 350M parameters, featuring:

Zero-shot #voicecloning capability
Integration with #WavTokenizer (75 tokens/sec)
Local deployment via #llamacpp
#GGUF format compatibility

🔍 Technical Implementation:

Audio tokenization process
CTC forced alignment
Structured prompt system
Temperature-adjustable outputs

⚠️ Current Limitations:

Limited vocabulary range
String-only input support
Best performance with shorter sentences
Variable temperature sensitivity

github.com/edwko/OuteTTS
huggingface.co/OuteAI/OuteTTS-
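
The "temperature-adjustable outputs" bullet refers to standard temperature scaling of the next-token distribution before sampling. A generic sketch of that mechanism (not OuteTTS's actual code; the logits below are invented for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=None):
    """Divide logits by the temperature, softmax, then sample one index.
    T < 1 sharpens the distribution; T > 1 flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=probs)[0]

# Hypothetical audio-token logits: a low temperature is near-greedy,
# a high temperature spreads probability over more tokens.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2, seed=0))
```

For speech tokens, lower temperatures give more stable prosody at the cost of variety, which is why TTS front-ends expose it as a user knob.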

khushnuma (@khushnuma)
2024-10-10

Mastering these core NLP techniques is crucial for any data scientist dealing with text data. From tokenization to language modeling, each method serves a unique purpose in processing, analyzing, and extracting valuable insights from textual information.

read more: blogulr.com/khushnuma7861/topn

2023-09-15

New #languagemodeling #nlp #ai #paper, led by Angelica Chen! We break the steepest MLM training loss drop into *2* phase changes: first in internal grammatical structure, then external capabilities. Big implications for emergence, simplicity bias, and interpretability! arxiv.org/abs/2309.07311

A screenshot of a paper titled "Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs," with an abstract that starts "Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. In this paper, we present a case study of syntax acquisition in masked language models (MLMs). Our findings demonstrate how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in training when models abruptly acquire SAS and find that this window is concurrent with a steep drop in loss..."
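
For readers unfamiliar with the objective whose loss curve the paper studies: a masked language model hides a random subset of tokens and is trained to predict them, so the training loss is the cross-entropy at just the masked positions. A toy sketch (the dummy model, vocabulary, and function names are my own, not from the paper):

```python
import math
import random

def mlm_loss(tokens, predict, mask_prob=0.15, seed=0):
    """Mask each token with probability mask_prob and average the
    cross-entropy of the model's predictions at masked positions."""
    rng = random.Random(seed)
    losses = []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
            probs = predict(masked, i)   # distribution over the vocabulary
            losses.append(-math.log(probs[tok]))
    return sum(losses) / len(losses) if losses else 0.0

# A dummy "model" that guesses uniformly over a 4-word vocabulary.
vocab = ["the", "cat", "sat", "mat"]
uniform = lambda masked, i: {w: 1 / len(vocab) for w in vocab}
loss = mlm_loss(["the", "cat", "sat"] * 10, uniform, seed=1)
print(round(loss, 3))  # uniform guessing gives loss = ln(4) ≈ 1.386
```

The paper's "sudden drop" is this quantity falling steeply at one point in training, which the authors tie to the abrupt appearance of syntactic attention structure.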
స్వేచ్ఛ | Swecha (@swecha@fsmi.social)
2023-08-24
Join our upcoming #DevDaysHyd on "The Power of Effective Prompt Engineering: Enhance your interactions with Gen-AI." on August 26th.

🗓️ Date: 26th August, 2023
🕒 Time: 10am - 1pm
👥 Mode: In-person
🏢 Venue: ZapCom Group Inc, Dallas Center, Rai Durg, Hyderabad.
📍 Location: https://goo.gl/maps/3D6wnghoN3cowcy17

Find more details and register at swecha.org/devdays

Meet our speaker: Vishal Jaishankar is a Software Engineer at Microsoft, working on programming distributed systems and software supply chain management. He loves learning new technologies and applying them in his work.

Kindly note: Laptops are allowed at the venue, and we encourage you to bring your laptops for hands-on activities.

#PromptEngineering #LanguageModeling #AICommunication #NLPInsights #TextGeneration #CodeGeneration #PromptOptimization #ContextualAI
Webappia (@webappia)
2023-07-07

GPT-4 API by OpenAI Now Available – Analytics India Magazine 

Summary: OpenAI, the leading artificial intelligence research lab, has announced the general availability of its GPT-4 API. GPT-4, or Generative Pre-trained Transformer 4, is the latest version of OpenAI's language model, which has been trained on a vast amount of internet text to generate human-like responses. The GPT-4 API allows developers to…

webappia.com/gpt-4-api-by-open

Webappia (@webappia)
2023-06-28

Chat-GPT: The Unnoticed Transformation of Your Thought Process 

Summary: Generative models, a type of artificial intelligence (AI) model, can generate responses without expressing uncertainty or communicating their lack of confidence. This means they can confidently provide answers even when those answers are false or hallucinated. This poses a problem because people are more likely to…

webappia.com/chat-gpt-the-unno

Webappia (@webappia)
2023-06-26

Improving Dialogue through Optimization of Language Models

Summary: ChatGPT has become one of the most popular technologies in the IT industry due to its impressive capabilities in optimizing language models for dialogue generation. It offers a human-like conversational experience and has gained immense popularity, with over 100 million users overall and 1 million within just 5 days…

webappia.com/improving-dialogu

Victoria Stuart 🇨🇦 🏳️‍⚧️ (@persagen)
2023-06-21

Revealing the structure of language model capabilities
arxiv.org/abs/2306.10062

Building a theoretical understanding of the capabilities of large language models (LLMs) is vital for our ability to predict & explain the behavior of these systems. ... we analyzed data from 29 LLMs / 27 cognitive tasks. LLM capabilities are better explained by 3 well-delineated factors that represent reasoning, comprehension & core language modeling.

Netherlands eScience Center (@eScienceCenter@akademienl.social)
2023-06-01

#LanguageModeling is trending, to a large extent because of #ChatGPT. But did you know language modeling has been with us for more than a century? And that it was born of the collaboration of a poet and a mathematician?

Our engineer Carsten Schnober tells us more:
blog.esciencecenter.nl/languag
