#HistoryOfScience

The Inquisitive Biologist (@inqbiol@scicomm.xyz)
2025-05-24

149 years ago today, #HMSChallenger returned to Spithead from a scientific expedition that birthed the discipline of #Oceanography, but what did they find? Read more about this in Full Fathom 5000, an engaging book that focuses on the many animals the expedition found in the deep sea.

inquisitivebiologist.com/2024/

#Books #BookReview #Bookstodon #Scicomm #Oceans #MarineBiology #HistoryOfScience #ScienceHistory #HistSci

The Inquisitive Biologist (@inqbiol@scicomm.xyz)
2025-05-23

318 years ago today #CarlLinnaeus was born. The scholarly #biography The Man Who Organized Nature provides a full immersion in his life, revealing the polymath behind his reputation as the father of taxonomy.

inquisitivebiologist.com/2024/

#Books #BookReview #Bookstodon #Taxonomy #Botany #HistoryOfScience #ScienceHistory #HistSci #SciComm @princetonupress

2025-05-23

🆕 On 25 May, the exhibition Angola: Saberes em movimento [Angola: Knowledge on the Move] will open at the Frei Manuel do Cenáculo National Museum in Évora, as part of the KNOW.AFRICA project.

👉 We tell you all about it here: ihc.fcsh.unl.pt/en/knowafrica-

@histodons

#Histodons #ColonialCollections #AfricanExpeditions #PortugueseColonialism #HistoryOfScience #Exhibition #HistoryInThePublicSphere #IndigenousKnowledge #ColecçÔesColoniais #ExpediçÔesAfricanas #HistĂłriaDaCiĂȘncia #ColonialismoPortuguĂȘs

Illustration of a group of black men crossing a watercourse carrying white men on their backs and parcels on their heads.
Daniel PomarĂšde (@pomarede)
2025-05-23

CEA: a space history that began in 1959

To track radioactive dust from nuclear tests, the CEA put a Geiger counter aboard a missile. At an altitude of 100 km, a surprise: gamma rays were coming from above. That was the beginning of astrophysics at the CEA.

đŸ“· CEA/D. Baclet/C. Jehanno/J. Labeyrie
cea.fr/Pages/actualites/scienc

The CEA's first space experiment, on 27 January 1959. A Geiger counter was installed aboard a missile for one of the first measurements of gamma rays from the sky.
2025-05-19

This Wednesday, I'll be presenting part of my PhD research on the history of neuroscience in Argentina. The talk brings together scientometric analysis and qualitative research to explore how the field has taken shape, and the factors that influence it. Let me know if you're interested; I'd be happy to share the link.
#neuroscience #STS #historyofscience #scientometrics #bibliometrics #Argentina

Flyer: English Lectures 2025. Presentation title: A history of neuroscience in Argentina. Author: AgustĂ­n Mauro. Date: 21 May at 12:00 (Argentinian time)
2025-05-13

Last week, our students learned how to conduct a proper evaluation of an NLP experiment. To this end, we introduced a small text corpus with sentences about Joseph Fourier, who counts among the discoverers of the greenhouse effect that drives global warming. A toy precision/recall sketch follows the slide description below.

github.com/ISE-FIZKarlsruhe/IS

#ise2025 #nlp #lecture #climatechange #globalwarming #historyofscience #climate @fiz_karlsruhe @fizise @tabea @enorouzi @sourisnumerique

Slide of the Information Service Engineering lecture 03, Natural Language Processing 02, section 2.6: Evaluation, Precision, and Recall
Headline: Experiment
Let's consider the following text corpus (FOURIERCORPUS):
1. In 1807, Fourier's work on heat transfer laid the foundation for understanding the greenhouse effect.
2. Joseph Fourier's energy balance analysis showed atmosphere's heat-trapping role.
3. Fourrier's calculations, though rudimentary, suggested that the atmosphere acts as an insulator.
4. Fourier's greenhouse effect explains how atmospheric gases influence global temperatures.
5. Jean-Baptiste Joseph Fourier's mathematical treatment of heat flow is essential to climate modeling.
6. Climate science acknowledges that Fourier helped to understand the atmospheric absorption of heat.
7. Climate change origins often cite Fourier's mathematical work on radiative heat.
8. J. Fourier published his "Analytical theory of heat" in 1822.
9. Fourier analysis is used in signal processing.
10. Fourier series are key in heat conduction math.
11. Fourier and related algebras occur naturally in the harmonic analysis of locally compact groups.
12. The Fourier number is the ratio of time to a characteristic time scale for heat diffusion.

The corpus is available at https://github.com/ISE-FIZKarlsruhe/ISE-teaching/blob/b72690d38911b37748082256b61f96cf86171ace/materials/dataset/fouriercorpus.txt

On the right side in the background is a portrait engraving of Joseph Fourier
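For intuition, here is a minimal sketch (in Python) of how precision and recall could be computed for a retrieval experiment over the FOURIERCORPUS. The gold relevance labels and the system output below are invented for illustration; they are not taken from the lecture.

```python
# Precision/recall sketch over the FOURIERCORPUS (sentence IDs 1-12).
# Assumption: sentences 1-7 (Fourier and the greenhouse effect/climate)
# form the gold standard; "retrieved" mimics the output of some
# keyword-based search. Both sets are invented for illustration.
relevant = {1, 2, 3, 4, 5, 6, 7}
retrieved = {1, 2, 4, 6, 7, 9, 10}

tp = relevant & retrieved                           # true positives
precision = len(tp) / len(retrieved)                # 5/7 ~ 0.71
recall = len(tp) / len(relevant)                    # 5/7 ~ 0.71
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

In this invented setup, sentences 9 and 10 drag precision down: they mention Fourier but concern signal processing and conduction mathematics rather than climate, which is exactly the kind of confusion such an evaluation exercise is built around.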
2025-05-12

New week, new lesson. How much can make-up tell us about a people? Quite a bit, when you don't have much else to go on.
matildaslab.wordpress.com/2025
#scicomm #historyofscience #ancientegyptians

2025-05-12

The last leg of our brief history of NLP (so far) is the advent of large language models with GPT-3 in 2020 and the introduction of learning from the prompt (a.k.a. few-shot learning). A sketch of such a few-shot prompt follows below.

T. B. Brown et al. (2020). Language models are few-shot learners. NIPS'20

proceedings.neurips.cc/paper/2

#llms #gpt #AI #nlp #historyofscience @fiz_karlsruhe @fizise @tabea @enorouzi @sourisnumerique #ise2025

Slide from Information System Engineering 2025 lecture, 02 - Natural Language Processing 01, A brief history of NLP, NLP Timeline.
The NLP timeline is in the middle of the page from top to bottom. The marker is at 2020. On the left side, an original screenshot of GPT-3 is shown, giving advice on how to present a talk about "Symbolic and Subsymbolic AI - An Epic Dilemma?".
The right side holds the following text: 
2020: GPT-3 was released by OpenAI, based on 45TB data crawled from the web. A “data quality” predictor was trained to boil down the training data to 550GB “high quality” data. Learning from the prompt is introduced (few-shot learning)

Bibliographical Reference:
T. B. Brown et al. (2020). Language models are few-shot learners. In Proceedings of the 34th Int. Conf. on Neural Information Processing Systems (NIPS'20). Curran Associates Inc., Red Hook, NY, USA, Article 159, 1877–1901.
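For intuition, here is what "learning from the prompt" looks like; the format and the sea-otter demonstration follow the examples in Brown et al. (2020). No particular API is assumed, so the snippet only builds and prints the prompt.

```python
# Few-shot learning: the task is specified entirely by in-context
# demonstrations; the model is expected to continue the pattern,
# with no gradient updates or fine-tuning involved.
few_shot_prompt = """\
Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""

# A completion model that "learns from the prompt" should continue
# with " fromage" after seeing just three demonstrations.
print(few_shot_prompt)
```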
2025-05-11

Next stop in our NLP timeline is 2013, the introduction of low-dimensional dense word vectors - so-called "word embeddings" - based on distributional semantics, e.g. word2vec by Mikolov et al. at Google, which enabled representation learning on text. A toy sketch of the analogy arithmetic follows below.

T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space.
arxiv.org/abs/1301.3781

#NLP #AI #wordembeddings #word2vec #ise2025 #historyofscience @fiz_karlsruhe @fizise @tabea @sourisnumerique @enorouzi

Slide from the Information Service Engineering 2025 lecture, lecture 02, Natural Language Processing 01, NLP Timeline. The timeline is in the middle of the slide from top to bottom, indicating a marker at 2013. On the left, a diagram is shown, displaying vectors for "man" and "woman" in a 2D diagram. An arrow leads from the point of "man" to the point of "woman". Above it, there is also the point marked for "king", and the same difference vector is transferred from "man -> woman" to "king -> ?", asking what might be the appropriate completion.
Right of the timeline, the following text is displayed: Word2Vec neural network based framework to learn distributed representations of words as dense vectors in continuous space (word embeddings) was developed by Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean at Google. 
These language models are based on the Distributional Hypothesis in linguistics, i.e. words that are used and occur in the same contexts tend to purport similar meanings.

Bibliographical reference:
T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781
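As a toy illustration of the analogy arithmetic from the slide: the 2-D vectors below are invented for readability; real word2vec embeddings are learned from large corpora and typically have a few hundred dimensions.

```python
import numpy as np

# Invented toy "embeddings" mirroring the slide's 2-D diagram.
vec = {
    "man":   np.array([1.0, 1.0]),
    "woman": np.array([1.0, 3.0]),
    "king":  np.array([4.0, 1.0]),
    "queen": np.array([4.0, 3.0]),
    "apple": np.array([0.0, 1.0]),  # distractor
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman ~ queen
target = vec["king"] - vec["man"] + vec["woman"]
candidates = (w for w in vec if w not in {"king", "man", "woman"})
print(max(candidates, key=lambda w: cosine(vec[w], target)))  # queen
```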
OccasionalDucks (@OccasionalDucks@c.im)
2025-05-10

I finally finished reading "The Invention of Nature" yesterday, having been a bit slow due to the other books I read alongside it.

It's really good, and if you're not aware of both Alexander von Humboldt (not the sail training ship, a.k.a. the Alexander von Becks, if anyone remembers those TV adverts with the barque with a green hull and green sails, but its namesake) and his influence on Darwin, Muir and others, I'd recommend it. We should have been taught about him, and Marsh, Haeckel and Muir, in school; they should be household names - as indeed Humboldt was in his day - hence the Humboldt Current, and the many other things named after him.

Though I'm not 100% convinced there are more things named after von Humboldt than anyone else - I think he's probably pipped to the post by a chap from Nazareth.

#Books #Reading #HistoryOfScience #ClimateChange #Biodiversity #TheInventionOfNature

2025-05-09

In today's featured post, we look at a monument that is twice as old as the pyramids of Giza.
matildaslab.wordpress.com/2021
#scicomm #historyofscience #GöbekliTepe

2025-05-09

Moving on to the 1990s: statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advancements in nearly all NLP techniques of the era, laying the groundwork for today's AI. A minimal bigram model sketch follows below.

F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique

Slide from Information Service Engineering 2025, Lecture 02, Natural Language Processing 01, A Brief History of NLP, NLP timeline. The timeline is located in the middle of the slide from top to bottom. The pointer on the timeline indicates the 1990s. On the left, the formula for the conditional probability of a word following a given series of words is given. Below, an AI-generated portrait of William Shakespeare is displayed with 4 speech bubbles, representing artificially generated text based on 1-grams, 2-grams, 3-grams and 4-grams. The 4-grams text example looks a lot like original Shakespeare text. On the right side the following text is displayed:
N-grams for statistical language modeling were introduced and popularised by Frederick Jelinek and Stanley F. Chen from IBM Thomas J. Watson Research Center, who developed efficient algorithms and techniques for estimating n-gram probabilities from large text corpora for speech recognition and machine translation.

Bibliographical reference:
F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA.
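To make the idea concrete, here is a minimal maximum-likelihood bigram model in Python. The toy corpus is invented; real systems of the era were trained on millions of words and combined with smoothing for unseen n-grams.

```python
from collections import Counter

# Tiny invented corpus; real n-gram models were estimated from
# vast text collections.
corpus = "the cat sat on the mat the cat ate".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p(word, prev):
    """Maximum-likelihood estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(p("cat", "the"))  # 2/3: "the" is followed by "cat" twice, "mat" once
```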
2025-05-08

Next stop on our NLP timeline (as part of the #ISE2025 lecture) was Terry Winograd's SHRDLU, an early natural language understanding system developed in 1968-70 that could manipulate blocks in a virtual world. A toy command-parsing sketch follows below.

Winograd, T. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. MIT AI Technical Report 235.
dspace.mit.edu/bitstream/handl

#nlp #lecture #historyofscience @fiz_karlsruhe @fizise @tabea @sourisnumerique @enorouzi #AI

Slide from the Information Service Engineering 2025 lecture, Natural Language Processing 01, A Brief History of NLP, NLP Timeline. The picture depicts a timeline in the middle from top to bottom. There is a marker placed at 1970. Left of the timeline, a screenshot of the SHRDLU system is shown displaying a block world in simple line graphics. On the right side, the following text is displayed: SHRDLU was an early natural language understanding system developed by Terry Winograd in 1968-70 that could manipulate blocks in a virtual world. Users could issue commands like “Move the red block onto the green block,” and SHRDLU would execute the task accordingly. This demonstration highlighted the potential of NLP in understanding and responding to complex instructions. 

Bibliographical references:
Winograd, Terry (1970-08-24). Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. MIT AI Technical Report 235.
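To give a feel for the interaction style, here is a toy block-world command handler. It is a crude regex sketch under invented assumptions, nothing like SHRDLU's actual procedural grammar and reasoning:

```python
import re

# Invented toy world state: which surface each block rests on.
world = {"red block": "table", "green block": "table"}

def command(text):
    # Only one command shape is understood, via a regex; SHRDLU
    # parsed far richer language procedurally.
    m = re.match(r"move the (\w+ block) onto the (\w+ block)", text, re.I)
    if not m:
        return "I don't understand."
    obj, dest = m.group(1).lower(), m.group(2).lower()
    world[obj] = dest
    return f"OK. The {obj} is now on the {dest}."

print(command("Move the red block onto the green block"))
print(world)  # {'red block': 'green block', 'green block': 'table'}
```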
2025-05-07

With the advent of ELIZA, Joseph Weizenbaum's pioneering psychotherapist chatbot, NLP took another major step with pattern-based substitution algorithms built on simple regular expressions. A minimal pattern-substitution sketch follows below.

Weizenbaum, Joseph (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Com. of the ACM. 9: 36–45.

dl.acm.org/doi/pdf/10.1145/365

#nlp #lecture #chatbot #llm #ise2025 #historyofScience #AI @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique

Slide from the Information Service Engineering 2025 lecture slide deck, lecture 02, Natural Language Processing 01, Excursion: A Brief History of NLP, NLP timeline
On the right side of the image, a historic text terminal screenshot of a starting ELIZA dialogue is depicted. The timeline in the middle of the picture (from top to bottom) indicates the year 1966. The text left of the timeline says: ELIZA was an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum, which simulated conversation, giving users an illusion of understanding on the part of the program, based on pattern matching and pre-scripted response templates.

Bibliographical reference: 
Weizenbaum, Joseph (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM. 9: 36–45.
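A minimal ELIZA-style exchange can be reproduced in a few lines. The rules below are invented stand-ins; Weizenbaum's DOCTOR script used a much larger set of ranked keywords and also reflected pronouns (my -> your), which this sketch omits:

```python
import re

# Pattern -> pre-scripted response template (invented examples).
rules = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance):
    for pattern, template in rules:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."  # fallback when nothing matches

print(eliza("I am stuck with my thesis"))
# -> Why do you say you are stuck with my thesis?
```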
2025-05-06

My featured blog post today is on the development of the boomerang, which occurred at least 14,000 years ago.
matildaslab.wordpress.com/2021
#scicomm #historyofscience

The Inquisitive Biologistinqbiol@scicomm.xyz
2025-05-06

166 years ago today, the world lost #AlexanderVonHumboldt. This admirably concise biography offers a factual and nuanced picture of his life and work, and critically interrogates previous portrayals.

inquisitivebiologist.com/2025/

#Books #BookReview #Bookstodon #Biography #HistoryOfScience #ScienceHistory #HistSci #Scicomm @princetonupress @princetonnature

2025-05-05

Next stop in our NLP timeline: the (mostly) futile attempts at machine translation during the Cold War era. The rule-based machine translation approach was used mostly in the creation of dictionaries and grammar programs. Its major drawback was that absolutely everything had to be made explicit. A toy dictionary-lookup sketch follows below.

#nlp #historyofscience #ise2025 #lecture #machinetranslation #coldwar #AI #historyofAI @tabea @enorouzi @sourisnumerique @fiz_karlsruhe @fizise

Slide from Information Service Engineering lecture 02, Natural Language Processing 1. Title: NLP Timeline
The indicated era on the timeline is 1954-1966. On the right side of the timeline, an AI-generated picture of a military parade with mobile missiles in front of the Kremlin basilica is shown, overlaid with the following machine translation example:
English: "The spirit was willing, but the flesh was weak". This sentence was automatically translated to Russian. Then, it was translated back again into English with the following result: "The vodka was good, but the meat was rotten."

The text left of the timeline says: 1954 - 1966
Futile cold-war motivated efforts in rule-based machine translation from Russian to English. The rule-based machine translation approach was used mostly in the creation of dictionaries and grammar programs. Its major drawback was that everything had to be made explicit.

Bibliographical references: 
John A. Kouwenhoven, 'The Trouble with Translation', in Harper's Magazine, August 1962,
and W. John Hutchins, Machine Translation: Past, Present, and Future, Longman Higher Education, 1985, p. 5.
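A toy word-for-word "translation" shows why the approach struggled: without context, grammar, or word-sense disambiguation, a dictionary lookup happily picks the wrong sense, much as in the spirit/vodka anecdote above. The entries below are simplified and partly invented:

```python
# Each English word maps to a list of candidate Russian senses.
en_ru = {
    "the": [""],                  # Russian has no articles
    "spirit": ["ŃĐżĐžŃ€Ń‚", "ĐŽŃƒŃ…"],   # alcohol sense listed first: the wrong pick
    "was": ["был"],
    "willing": ["ĐłĐŸŃ‚ĐŸĐČ"],
    "but": ["ĐœĐŸ"],
    "flesh": ["ĐŒŃŃĐŸ", "ĐżĐ»ĐŸŃ‚ŃŒ"],   # meat sense listed first: also wrong
    "weak": ["слаб"],
}

def translate(sentence):
    # Always take the first listed sense; ignore syntax and agreement.
    words = sentence.lower().replace(",", "").split()
    return " ".join(filter(None, (en_ru.get(w, [f"<{w}?>"])[0] for w in words)))

print(translate("The spirit was willing, but the flesh was weak"))
# -> roughly "the alcohol was ready, but the meat was weak"
```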
2025-05-04

Next stop in our NLP timeline is Claude Elwood Shannon, who laid the foundations for statistical language modeling by recognising the relevance of n-grams for modeling properties of language and predicting the likelihood of word sequences. A small n-gram sampling sketch follows below.

C. E. Shannon, "A Mathematical Theory of Communication" (1948): web.archive.org/web/1998071501

#ise2025 #nlp #lecture #languagemodel #informationtheory #historyofscience @enorouzi @tabea @sourisnumerique @fiz_karlsruhe @fizise

Slide from the Information Service Engineering lecture 02, Natural Language Processing 01. Title: NLP Timeline.
A black & white portrait picture of Claude Elwood Shannon (1916-2001) is shown on the left side of a timeline marked with "1948". Shannon is depicted in front of an old 1950s "electronic" computer. The text on the right side of the timeline says: Claude Shannon proposed the idea of using n-grams as a means to analyse the statistical properties of language in "A Mathematical Theory of Communication" (1948). While Shannon's primary focus was on communication and information transmission, he recognised the relevance of n-grams in modeling language and predicting the likelihood of word sequences.

Bibliographical reference:
Shannon, Claude Elwood (July 1948). A Mathematical Theory of Communication, Bell System Technical Journal. 27 (3): 379–423.
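Shannon's "series of approximations to English" can be imitated in a few lines: gather character bigram statistics from a sample text and sample from them. The sample text here is an invented stand-in for the printed English Shannon tabulated:

```python
import random
from collections import Counter, defaultdict

text = "the spirit of communication theory lies in the statistics of language"

# Character bigram statistics: which characters follow each character.
followers = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    followers[a][b] += 1

# Sample a second-order approximation: each character is drawn
# according to how often it follows the previous one.
random.seed(0)
out = "t"
for _ in range(40):
    nxt = followers[out[-1]]
    out += random.choices(list(nxt), weights=list(nxt.values()))[0]
print(out)  # gibberish that nonetheless looks vaguely English-like
```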
