What a great meetup! Thanks to everyone who came by yesterday for an evening of real-world AI and bullshit-free strategies for building LLM applications.
Recap and highlights: https://www.linkedin.com/feed/update/urn:li:ugcPost:7340659326027489281/
Code monkey 🐒
Mainly #NLP for applied research in #conversational #search and #information_retrieval
Pushing code in a startup nowadays...
Alternate identities in this fediverse: @ggdupont
"Sometimes it could take employees more time to correct the AI errors than if they’d done everything by hand in the first place, she said." Yes. Make more people aware of this. It's like having a super unreliable coworker, and EVERYONE LOVES THAT SO MUCH
@cardamine #lefuturcetaitmieuxavant
Can't wait for the year 2000 ;-)
Matt is also live now, by the way 👀 https://www.youtube.com/watch?v=YeKixytORFo
@davidschlangen if only this experience with deepseek could make them take the bias issue (of all LLMs) seriously
@daph Why "intelligence artificielle de l'éducation nationale"? I thought the model was an offshoot of OpenLLM?
Can we stop giving women's names to search software?
@flomaraninchi @sncs_inria I suggest
A new position just opened on our side for a senior PM.
Context: Mavenoid => Swedish startup +
AI product with ethics and real impact + great team (well, plus me... but I don't bite)
On humans' trust in AI, I found another study supporting this idea:
https://hal.univ-lorraine.fr/hal-04229467/file/CHI_TRAIT_2023_Paper_17.pdf
It's small scale, but the protocol is sound and the experimental variables are well defined and controlled. In the end: a significant bias toward overtrusting the "AI" (twist: there is no AI; the chatbot is simulated by a human).
FrontierMath my ass. Marching towards AGI one fraud at a time.
Hmm.. 🤔
TikTok can be banned, Twitter can be banned, Meta can be banned, even Bluesky (the major parts of the network run by the official company) can be banned.
If they want to ban the Fediverse, they'll have to ban each of the tens of thousands of servers within the network.
Somewhere in there is a point I guess :blobcatgiggle:
I'm in contact with a recruiter who is looking to fill 5 to 8 data science roles. If you're in the market, DM me, and I'll put you in contact with him. The pay is decent, and it's on-site in either Virginia or the San Francisco Bay area.
Oh, and please boost for reach. Jobs are hard to come by, and you might be saving someone's life.
#DataScience
#MachineLearning
#FediHire
#FediHired
#GetFediHired
@datasciencejobs
The era of ChatGPT is kind of horrifying for me as an instructor of mathematics... Not because I am worried students will use it to cheat (I don't care! All the worse for them!), but rather because many students may try to use it to *learn*.
For example, imagine that I give a proof in lecture and it is just a bit too breezy for a student (or, similarly, they find such a proof in a textbook). They don't understand it, so they ask ChatGPT to reproduce it for them, and they ask followup questions to the LLM as they go.
I experimented with this today, on a basic result in elementary number theory, and the results were disastrous... ChatGPT sent me on five different wild goose-chases with subtle and plausible-sounding intermediate claims that were just false. Every time I responded with "Hmm, but I don't think it is true that [XXX]", the LLM responded with something like "You are right to point out this error, thank you. It is indeed not true that [XXX], but nonetheless the overall proof strategy remains valid, because we can [...further gish-gallop containing subtle and plausible-sounding claims that happen to be false]."
I know enough to be able to pinpoint these false claims relatively quickly, but my students will probably not. They'll instead see them as valid steps that they can perform in their own proofs.
This is a cool website:
Did you realize that we live in a reality where SciHub is illegal, and OpenAI is not?
You may have seen news about a study on cultural differences among scientists regarding ethics. It is mainly cited in reference to Chinese scientists, as on this popular science website stating "Variations in scientific ethics: Chinese scientists prioritize government service more than global peers"¹.
It is worth noting that the original scientific article is (a lot) more nuanced, of course, and not exempt from cultural bias itself.
[…]
¹ https://phys.org/news/2024-10-variations-scientific-ethics-chinese-scientists.html
One of the reasons I've been quiet on Mastodon for a few months is because I've been working on something big - something I'm really happy to share now!
http://Pathoplexus.org
A new, open-source database for pathogen sequence sharing. 👐🧬🌎
1/12
Hi y'all! I'm a writer, researcher, and organizer living on unceded Kumeyaay land. I'm Faculty Director of the #Labor Center at UC San Diego as my day job, where I profess in the #Communication dept, the Science Studies #sts program, and the #Design Lab. I write about #techworkers serving capitalism and fighting capitalism. I help build #tech people want (eg Ride United #coop #taxi #app) and fight the tech we don't. I helped found AI data worker project #turkopticon. Former coop kitchen manager.