⌁ writer
⌁ poet
⌁ human-AI interaction researcher
Currently wrapping up my PhD at Columbia University. Investigating language models, writing assistance, computational creativity, and making sense of generative AI outputs.
Maker of digital & analog poetry, writer of essays, and lover of collage.
🌊✨ 💻✨ ✍✨ 🏳️🌈✨
Also thanks to @michael for introducing me to Kush in the first place!
Wow. I have a comment out in Nature Machine Intelligence about why people don't do data work (enough) and what we might do to encourage it.
https://www.nature.com/articles/s42256-023-00673-x.epdf
Many thanks to everyone at IBM who supported my somewhat circuitous internship: Kush Varshney for letting me explore and being the most encouraging mentor I've ever had. Payel Das for truly making the paper happen at every step. Prasanna Sattigiri, Inkit Padhi, and Pierre Dognin for their guidance and contributions along the way.
@grvsmth Ah, I see. Yea, I think Guyan does a good job identifying the difficulty of getting representative samples in various queer communities and what this means for using collected data. Overall I found the book pretty nuanced and deeply investigated.
@grvsmth Mm, not sure exactly what you mean, but I think he has pretty good opinions about when data collection is good v. bad for queer communities.
For pride non-fiction reading may I suggest:
Ace by Angela Chen
and
Queer Data by Kevin Guyan
...they have shockingly similar covers...
When I get anxious about a research project that's not going very smoothly or quickly, I often try to remind myself that research is hard and if it all went super well then the problem probably wasn't challenging enough or wouldn't be considered research. But perhaps another framing is that research is a practice and I'm always learning and when something isn't going well it just means I'm learning a new thing.
Discovering that it's possible I like boring books. There's a particular kind of not-that-exciting book that I love. It's like abstract art or a movie that's all texture and no plot. But I don't like all boring books; I'm picky.
I like the word boring because it suggests bad and it's fun to turn the idea that boring is bad on its head. But of course boring might not be the right word.
slow calm gradual narrow inefficient sustained slight spacious steady undirected rhythmic even restless
Issue 2 of Crawlspace, a journal of digital-born literature, is out now! Best viewed on desktop: https://crawlspace.cool
Overall I had a shockingly good time at CHI, which in the past has felt super overwhelming. Something clicked this year, and that was really cool to experience.
- Finally, I met a lot of really cool people with similar interests. Technology for creative writing seems to be exploding, or at least getting a lot more popular than it was five years ago. But I don't really have the time to take on new collaborations, so it's a mixed bag. I'm happy to finally find my people! But it's a bummer to not be able to engage with them all very deeply right now.
- Work on creative writing reminded me of the tension between creative exploration of new technologies, which is rich and interesting, and the corporate exploitation many new technologies rely on. If I make something really cool and interesting and introspective with ChatGPT, am I effectively whitewashing the inherent problems of OpenAI? Maybe. This makes me sad, because I want to let creative exploration have free rein over technology.
- The difference between the paper session I presented in and my labmate's paper session was palpable. I was in a smaller room with similar-ish papers; I felt I had a great audience and the room felt full and energetic. My labmate was in a large room with pretty random papers; he didn't get as many questions and the room felt empty. This was just due to the sessions we happened to be slotted into; I think our work was equally good!
Now that I've settled in, some thoughts on my time this year at #CHI2023
- I had a great time at the workshop I co-organized on intelligent and interactive writing assistants. (Check out our website! https://in2writing.glitch.me/) I think what made it great was the mix of participants. We tried hard to recruit from HCI and NLP, as well as linguistics, English, and rhetoric. We had people who worked on accessibility, creativity, and support for non-native speakers. Diversity is great. We all learned a lot.
Taper #10 has just been published with 23 tiny computational poems, as always, all free (libre) software https://taper.badquar.to/10/
Oops, here's the link to the paper http://www.katygero.com/papers/2023_SocialDynamics.pdf
This work answered some of my questions, but opened up so many more! As more and more people start to engage with language models as writing support tools, I think we can start asking more sophisticated questions about these interactions.
[end thread]
But writers also had different ideas about where authenticity lies. Some people would never get help crafting their plotline, while others thought that was fine but would never get help rewriting their sentences.
Overall, writers develop rich understandings of different 'support actors', and have different ideas about the kind of help they want.
These results point towards some big confounding factors when studying writing support tools!
When we study writing support tools, we need to understand the variety of perspectives writers bring to the very idea of getting help. And we need to acknowledge that we simply cannot (yet?) understand computational help in the way we understand human help.
Finally, I'll touch on some concerns about authenticity. Writers worried about how even viewing suggestions can impact their writing. Human help can feel less threatening: because you have a relationship with the person, their help can feel like there's more of you in it. On the other hand, computers can feel more private than a person, perhaps more like "talking to yourself."
Writers also talked about the difficulty of communicating their *intentions*, even to other writers. Lots of writers don't want help early on in a project, when their ideas are too nascent and may be trampled by over-eager feedback or ideas.
But writers worried that computers couldn't understand or respect their intention, especially when it's hard to explain even to other writers. (Writers also said they didn't think computers brought their own intention, which may be a good thing!)