Emily M. Bender

AI Expert Profile

Nationality: 
American
AI specialty: 
NLP
Current occupation: 
Professor and Director, University of Washington
AI rate (%): 
38.69%

TwitterID: 
@emilymbender
Tweet Visibility Status: 
Public

Description: 
Professor and Director in computational linguistics, Emily works on multilingual grammar engineering. Her papers are highly regarded in the AI community and inform ongoing debates, notably around the generalization abilities of GPT-3. Emily introduced the expert Yejin Choinka to the very surprising concept of "web garbage english," which she encountered during her research on natural language processing.

Recognized by:

Not Available

The Expert's latest messages:

Tweet list: 

2025-01-09 18:59:17 @majick @alex That one is all Alex!

2025-01-09 17:11:46 It'

2025-01-06 18:59:01 I also really appreciate how they are keeping up this work, even post-election. This is a really important case:https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-order-re...

2025-01-06 18:58:01 Shout out to Lina Khan'

2024-12-31 14:08:15 Also available as video on PeerTube:https://peertube.dair-institute.org/w/oLZ5DSBCj3SuboQzG7KGsm

2024-12-31 14:07:40 If any Fresh AI Hell lands in your yard, you can send it our way for shoveling here:https://thecon.ai/submit-fresh-ai-hell/w/@alex

2024-12-31 14:07:25 In Mystery AI Hype Theater 3000 Episode 47, @alex and I attempt to clear the backlog of Fresh AI Hell to create a clean slate for 2025:https://www.buzzsprout.com/2126417/episodes/16353749-episode-47-hell-is-... Thx to Christie Taylor for production!

2024-12-28 20:37:43 @cratermoon @alex Looks pretty awful. As for alternative papers, I don'

2024-12-20 19:55:28 @WiseWoman As in, we covered the story, but not that excellent take-down of it!

2024-12-20 19:51:40 @WiseWoman Yes and gross. This made it into a recent episode (possibly the all-hell ep).

2024-12-19 12:14:04 Also available as video on PeerTube:https://peertube.dair-institute.org/w/75zn3sKpVTcaNLMLP5eDXT

2024-12-19 12:13:47 Mystery AI Hype Theater Episode 46: AGI Funny Business (Model), in which @brianmerchant joins me and @alex to look back on the hype of the early early days of OpenAIThx to Christie Taylor for production!https://www.buzzsprout.com/2126417/episodes/16295268-episode-46-agi-funn...

2024-12-17 15:53:22 For more Mystery AI Hype Theater 3000 content, check out our newsletter:buttondown.com/maiht3kAnd then of course there will be our book. It'

2024-12-17 15:52:32 #1: Episode 44: OpenAI'

2024-12-17 15:52:23 and the most downloaded episode of Mystery AI Hype Theater 3000 in 2024 was drumroll please ...

2024-12-17 15:50:40 #2: Episode 29: How LLMs Are Breaking the News, March 25 2024, with @karenhao https://www.buzzsprout.com/2126417/episodes/14807430-episode-29-how-llms...

2024-12-17 15:50:24 #3: Episode 45: Billionaires, Influencers, and Ed Tech, November 18 2024, with Adrienne Williamshttps://www.buzzsprout.com/2126417/episodes/16174384-episode-45-billiona...

2024-12-17 15:50:16 #4: Episode 28: LLMs Are Not Human Subjects, March 4 2024https://www.buzzsprout.com/2126417/episodes/14677380-episode-28-llms-are...

2024-12-17 15:50:07 #5: Episode 30: Marc'

2024-12-17 15:49:49 #6: Episode 33: Much Ado About '

2024-12-17 15:48:46 #7: Episode 25: An LLM Says LLMs Can Do Your Job, January 22 2024https://www.buzzsprout.com/2126417/episodes/14416779-episode-25-an-llm-s...

2024-12-17 15:48:39 #8: Episode 42: Stop Trying to Make '

2024-12-17 15:48:29 #9: Episode 35: AI Overviews and Google'

2024-12-17 15:48:12 #10: Episode 31: Science Is a Human Endeavor, April 15 2024, with Molly Crockett and Lisa Messerihttps://www.buzzsprout.com/2126417/episodes/15020029-episode-31-science-...

2024-12-17 15:47:31 Mystery AI Hype Theater 3000 has had a great year! @alex, Christie Taylor and I have put out 24 episodes (so far---two more coming this month!). Here are the top 10, by downloads:

2024-12-16 19:00:25 @JeffGrigg @alex "

2024-12-16 15:39:12 @VCP @alex We are generally going after hype/messaging, not slop. Please submit anything that is publicly linkable!

2024-12-16 15:29:57 @SciEnby @alex Thanks, I hate it.

2024-12-16 15:29:47 Calling all Mystery AI Hype Theater 3000 fans! Have you found a piece of Fresh AI Hell but not known where to send it? Here'

2024-12-13 18:39:32 @michael_w_busch @timnitGebru @alex please let them know you want them to have it!

2024-12-13 14:46:06 @steveediger @alex please recommend it to your local library

2024-12-10 15:56:31 @alex and I also have a newsletter!Newsletter version of the announcement here:https://buttondown.com/maiht3k/archive/the-ai-con-available-for-pre-order/

2024-12-10 15:55:55 I am super excited to share that my forthcoming book with @alex, THE AI CON: How to Fight Big Tech'

2024-12-09 14:07:30 Join us today!It’s a bit like trying to shovel the walk in the middle of a blizzard, but @alex and I are going to attempt to clear the backlog of Fresh AI Hell in our next Mystery AI Hype Theater 3000 live stream:Monday, Dec 9, noon Pacifichttps://www.twitch.tv/dair_institute

2024-12-08 15:39:35 Tomorrow!It’s a bit like trying to shovel the walk in the middle of a blizzard, but @alex and I are going to attempt to clear the backlog of Fresh AI Hell in our next Mystery AI Hype Theater 3000 live stream:Monday, Dec 9, noon Pacifichttps://www.twitch.tv/dair_institute

2024-12-06 19:45:14 @ali Thank you!

2024-12-06 19:08:32 The last segment was a bit confused about how exactly language models are useful in sociolinguistic research (I'

2024-12-06 19:08:02 Grateful to WHYY'

2024-12-05 19:45:15 It’s a bit like trying to shovel the walk in the middle of a blizzard, but @alex and I are going to attempt to clear the backlog of Fresh AI Hell in our next Mystery AI Hype Theater 3000 live stream:Monday, Dec 9, noon Pacifichttps://www.twitch.tv/dair_institute

2024-12-05 17:05:53 Excuse me, is Canada this way? (Yes)#UW #OurCampusIsPrettierThanYourCampus #CanadaGeese

2024-12-02 15:24:56 @kfort

2024-12-02 14:02:10 This is today!Ready for some more Mystery AI Hype Theater 3000? We are! Next up on the live stream:@brianmerchant joins me and @alex to discuss the business model of “AGI”Monday, December 2nd, noon Pacifichttps://www.twitch.tv/dair_institute

2024-12-02 14:01:59 @kfort What a day! It'

2024-12-01 20:07:09 Tomorrow!Ready for some more Mystery AI Hype Theater 3000? We are! Next up on the live stream:@brianmerchant joins me and @alex to discuss the business model of “AGI”Monday, December 2nd, noon Pacifichttps://www.twitch.tv/dair_institute

2024-11-29 14:41:31 Ready for some more Mystery AI Hype Theater 3000? We are! Next up on the live stream:@brianmerchant joins me and @alex to discuss the business model of “AGI”Monday, December 2nd, noon Pacifichttps://www.twitch.tv/dair_institute

2024-11-29 14:38:27 @AngloPeranakan I mean....

2024-11-27 14:18:06 Also available as video on PeerTube:https://peertube.dair-institute.org/w/kBKnqzWED5p1qeuzdJyZ3T

2024-11-27 14:16:54 Mystery AI Hype Theater 3000 Ep 45: Billionaires, Influencers, and Ed Tech, in which DAIR researcher Adrienne Williams joins me and @alex for a look into what the shiny “AI” ed tech actually looks like in the classrooms subjected to it.https://www.buzzsprout.com/2126417/episodes/16174384-episode-45-billiona... Thx to Christie Taylor for production!

2024-11-22 18:47:59 @Sevoris Thanks, I hate it.

2024-11-22 15:37:07 @djohnson Fixed! Thank you.

2024-11-22 15:36:44 @adhocster Indeed -- thank you. This is fixed now.

2024-11-22 14:15:41 No, ChatGPT won'

2024-11-21 17:30:35 @curtosis Sounds like an altogether lovely experience!

2024-11-21 17:17:21 @curtosis Ugh. Where did you see that?

2024-11-20 15:52:22 A Guide for Creating and Documenting Language Datasets with Data Statements Schema Version 3 (McMillan-Major &

2024-11-18 13:57:17 Today!On the next Mystery AI Hype Theater 3000 live stream, @alex and I will talk AI hype in ed tech with DAIR’s own Adrienne Williams. Join us live on Monday Nov 18, noon Pacific:https://www.twitch.tv/dair_institute

2024-11-17 15:00:39 Tomorrow!On the next Mystery AI Hype Theater 3000 live stream, @alex and I will talk AI hype in ed tech with DAIR’s own Adrienne Williams. Join us live on Monday Nov 18, noon Pacific:https://www.twitch.tv/dair_institute

2024-11-16 14:45:00 @lewriley @timnitGebru My answer is no:https://www.youtube.com/watch?v=qpE40jwMilU

2024-11-15 17:46:08 @semitones Thanks, fixed it, I think.

2024-11-15 16:43:09 "

2024-11-15 14:02:48 On the next Mystery AI Hype Theater 3000 live stream, @alex and I will talk AI hype in ed tech with DAIR’s own Adrienne Williams. Join us live on Monday Nov 18, noon Pacific:https://www.twitch.tv/dair_institute

2024-11-14 14:15:10 Also available as video on PeerTube:https://peertube.dair-institute.org/w/gAAnkju7qjfrWjG9NZVy2L

2024-11-14 14:14:51 End of audiogram alt text:EMILY M. BENDER: SJayLett adds, "

2024-11-14 14:14:34 Mystery AI Hype Theater Episode 44: OpenAI'

2024-11-10 03:35:51 @dgodon Hah -- I hadn'

2024-11-08 15:52:43 RT @alexhanna: AI and Fascism go hand-in-hand. New newsletter post. https://t.co/YykOMt7gPK

2024-11-05 21:09:41 @rooktallon https://faculty.washington.edu/ebender/media/

2024-11-05 14:14:47 Sunday'

2024-11-05 14:05:18 Sunday'

2024-11-04 19:38:48 @A_Strydom Yikes!

2024-11-04 14:11:04 RT @emilymbender: As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots a…

2024-11-04 03:51:36 @premrajnarkhede *sigh* LLMs are not reliable at summarizing either. Also, presenting summaries instead of links discourages the kind of effective information access behavior my thread is about.

2024-11-04 03:34:50 The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "information". Don't be that guy. /fin

2024-11-04 03:34:42 But now more than ever we all need to level-up our information access practices and hold high expectations regarding provenance --- i.e. citing of sources. >

2024-11-04 03:20:55 Finally, the chatbots-as-search paradigm encourages us to just accept answers as given, especially when they are stated in terms that are both friendly and authoritative.But now more than ever we all need to level-up our information access practices and hold high expectations regarding provenance --- i.e. citing of sources.The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "

2024-11-04 03:20:38 If instead you get an answer from a chatbot, even if it is correct, you lose the opportunity for that growth in information literacy.The case of the discussion forum has a further twist: Any given piece of information there is probably one you'

2024-11-04 03:20:15 Imagine putting a medical query into a standard search engine and receiving a list of links including one to a local university medical center, one to WebMD, one to Dr. Oz, and one to an active forum for people with similar medical issues.If you have the underlying links, you have the opportunity to evaluate the reliability and relevance of the information for your current query --- and also to build up your understanding of those sources over time.>

2024-11-04 03:19:55 That sense-making includes refining the question, understanding how different sources speak to the question, and locating each source within the information landscape.>

2024-11-04 03:18:39 But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'

2024-11-04 03:17:29 If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance.Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.>

2024-11-04 03:17:11 Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.https://www.youtube.com/watch?v=qpE40jwMilU>

2024-11-04 03:16:56 As OpenAI and Meta introduce LLM-driven searchbots, I'

2024-11-03 03:08:41 @bwaber It wasn'

2024-11-03 02:28:22 Dude uploaded the talk to YouTube (this was expected) with YET A DIFFERENT TITLE. Even after all of our back and forth about how he screwed it up previously. I can'

2024-11-03 02:25:54 @bwaber Thank you. Also, the talk is not "

2024-11-02 13:46:29 RT @emilymbender: MAIHT3k ep 43: AI Companies Gamble with Everyone's Planet https://t.co/ruIB5WGWau in which @parismarx joins @alexhanna…

2024-11-01 15:28:49 @KateKayeReports @parismarx @alexhanna @ctaylsaurus That's https://t.co/PXMVXG4rGD

2024-11-01 14:46:01 RT @parismarx: I had such a great time speaking with @emilymbender and @alexhanna on MAIHT3k! We dug into the environmental impacts of data…

2024-11-01 13:06:05 RT @emilymbender: @parismarx @alexhanna @ctaylsaurus Also available as video on PeerTube: https://t.co/itAaqGgfaE

2024-11-01 13:06:02 @parismarx @alexhanna @ctaylsaurus Also available as video on PeerTube: https://t.co/itAaqGgfaE

2024-11-01 13:00:29 Also available as video on PeerTube:https://peertube.dair-institute.org/w/reoPQ5YTzEQbFoaRcNwJ88

2024-11-01 13:00:09 Mystery AI Hype Theater 3000 Episode 43: AI Companies Gamble with Everyone'

2024-10-30 15:52:18 RT @TomEMullaney: Teachers: Head over to Spirit Halloween for your AI Hype Busters costume. Emulate @timnitGebru, @emilymbender, @alexhanna…

2024-10-29 13:15:57 RT @emilymbender: Took a break from reviewing copyedits (@alexhanna) to carve a pumpkin https://t.co/NT0KA1JrPv

2024-10-28 22:26:49 RT @haleyhaala: It's been over a year but everything we discussed is still *hyper* relevant. If you're also studying #EdTech hype, come wri…

2024-10-28 18:50:48 RT @timnitGebru: If you want to vent with @alexhanna &

2024-10-28 18:43:20 RT @DAIRInstitute: This is starting in 25 minutes. See you at https://t.co/h50L9qiri4

2024-10-28 16:57:46 @alexhanna Alt text: Orange pumpkin lit from within by a candle, with the text "THE AI CON E M BENDER ALEX HANNA" and a sparkle emoji surrounded by lines meant to suggest a book cover. Alongside is the date 5 13 25.

2024-10-28 16:46:47 Took a break from reviewing copyedits (with @alex to carve a pumpkin

2024-10-28 13:03:06 Today!On the next Mystery AI Hype Theater 3000 live stream, @alex and I will have a good laugh at OpenAI’s recent claims that they’ve trained their synthetic text extruding machines to “reason”, actually.Join us for the ridicule as praxis on Monday Oct 28, noon Pacific:https://www.twitch.tv/dair_institute

2024-10-27 23:27:17 RT @DAIRInstitute: Join us tomorrow (Monday) at 12pm pacific. https://t.co/h50L9qiri4

2024-10-27 13:41:31 Tomorrow! https://t.co/NiUc0zPTwx

2024-10-27 13:40:48 Tomorrow!On the next Mystery AI Hype Theater 3000 live stream, @alex and I will have a good laugh at OpenAI’s recent claims that they’ve trained their synthetic text extruding machines to “reason”, actually.Join us for the ridicule as praxis on Monday Oct 28, noon Pacific:https://www.twitch.tv/dair_institute

2024-10-26 13:12:16 RT @emilymbender: On the next MAIHT3k live stream, @alexhanna &

2024-10-26 13:10:51 RT @emilymbender: A quick newsletter post on the dehumanization behind Satya Nadella's remarks about copyright law https://t.co/pdYMlUe9KW

2024-10-25 13:16:30 A quick newsletter post on the dehumanization behind Satya Nadella's remarks about copyright law https://t.co/pdYMlUe9KW

2024-10-25 13:16:22 A quick newsletter post on the dehumanization behind Satya Nadella'

2024-10-25 13:12:30 On the next MAIHT3k live stream, @alexhanna &

2024-10-25 13:10:56 On the next Mystery AI Hype Theater 3000 live stream, @alex and I will have a good laugh at OpenAI’s recent claims that they’ve trained their synthetic text extruding machines to “reason”, actually.Join us for the ridicule as praxis on Monday Oct 28, noon Pacific:https://www.twitch.tv/dair_institute

2024-10-21 13:52:55 RT @parismarx: excited to chat with @alexhanna and @emilymbender later today. make sure to tune in!

2024-10-21 13:37:45 Today! https://t.co/4lKQpK2vE7

2024-10-21 13:37:36 Today! On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alex and me for an update on the environmental impacts of “AI” and pious songs tech leaders are singing about “AI” “solving” the climate crisis.Monday October 21, noon Pacifichttps://www.twitch.tv/dair_institute

2024-10-20 14:50:59 @hanscees Excuse me?

2024-10-20 14:29:36 I just noticed that an up-coming online talk I'

2024-10-20 13:07:23 Tomorrow!On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alex and me for an update on the environmental impacts of “AI” and pious songs tech leaders are singing about “AI” “solving” the climate crisis.Monday October 21, noon Pacifichttps://www.twitch.tv/dair_institute

2024-10-19 12:45:11 Join us Monday! https://t.co/4lKQpK33tF

2024-10-19 12:44:54 Join us Monday!On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alex and me for an update on the environmental impacts of “AI” and pious songs tech leaders are singing about “AI” “solving” the climate crisis.Monday October 21, noon Pacifichttps://www.twitch.tv/dair_institute

2024-10-18 13:06:12 RT @emilymbender: On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alexhanna and me for an update on the environmenta…

2024-10-17 16:08:58 On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alexhanna and me for an update on the environmental impacts of “AI” and pious songs tech leaders are singing about “AI” “solving” the climate crisis. Monday October 21, noon Pacific https://t.co/ePjUv2EEqG

2024-10-17 16:08:14 On the next Mystery AI Hype Theater 3000 live stream @parismarx joins @alex and me for an update on the environmental impacts of “AI” and pious songs tech leaders are singing about “AI” “solving” the climate crisis.Monday October 21, noon Pacifichttps://www.twitch.tv/dair_institute

2024-10-12 13:03:19 RT @emilymbender: Mystery AI Hype Theater Episode 42: Stop Trying to Make 'AI Scientist' Happen In which @alexhanna and I go deep on claim…

2024-10-11 13:25:46 Also available as video on PeerTube: https://t.co/v0dZ8TYD9t

2024-10-11 13:25:37 but not exactly one that improves the quality of the kind of researchers that are getting in these roles. Or the ones getting tenure. Yeah, I mean, sometimes there's guardrails for a reason. Not making up papers? Probably a good one. /end alttext >

2024-10-11 13:25:15 ALEX HANNA: It comes up so often here, right? If your idea of democratize is to allow people to pad out their resumes and try to get this through a process of getting tenure or getting a job, sure, I guess that's a version of democracy, >

2024-10-11 13:25:01 (Reading) "This cost, and the promise the system shows so far illustrate the potential of The AI Scientist to democratize research and significantly accelerate scientific progress." EMILY M. BENDER: Democratize research? There's that 'democratize' again. >

2024-10-11 13:24:50 ALEX HANNA: So the next bit is pretty gross as well. (Reading) "The AI Scientist is designed to be compute efficient. Each idea is implemented and developed into a full paper at a cost of approximately $15 a paper." (laughter) >

2024-10-11 13:20:36 @alex Also available as video on PeerTube:https://peertube.dair-institute.org/w/s1Eyp5R4cdSZVm3y2q58xq

2024-10-11 13:20:20 Mystery AI Hype Theater Episode 42: Stop Trying to Make '

2024-10-10 13:01:23 @toxi Thank you!

2024-10-09 17:17:02 Today, the University of Washington sent out a survey asking community members (students, staff, faculty) to share our thoughts about the recommendations of the University'

2024-10-08 16:55:56 RT @mmitchell_ai: Marie Curie was the first woman to win the Nobel Prize. This didn't happen because she herself fought for it -- although…

2024-10-08 00:52:00 RT @strubell: To be VERY clear: The one number that @JeffDean claims is "clouding assessment" of environmental impact of AI represents *les…

2024-10-07 13:55:34 @dphiffer The bit.ly in the talk video still works for linking to the slides...

2024-10-07 13:00:38 Theme of this talk: ACL is not an AI conference (Actual timestamp is more like 43'50") https://t.co/WhyalZDLFs

2024-10-07 13:00:16 RT @emilymbender: My #acl2024nlp Presidential Address is now publicly available. If you saw the slides &

2024-10-06 21:08:30 @shajith Thank you! I'

2024-10-06 20:36:15 @gwozniak Thank you.

2024-10-06 19:38:21 @carolannie Ah, yeah -- I think I was seeing "

2024-10-06 19:16:31 My #acl2024nlp Presidential Address is now publicly available. If you saw the slides &

2024-10-03 16:08:41 And then this happened! Feeling proud of our little podcast, and grateful to @alex and Christie Taylor --- and our audience!Wanna get in on the catharsis? Check out all of our episodes as podcast or video here:https://www.dair-institute.org/maiht3k/

2024-10-01 19:47:05 RT @ruha9: 2. We also ask recipients to speak to what being selected for this honor means to them and their scholarship. https://t.co/KJUe8

2024-10-01 16:11:22 RT @ctaylsaurus: Someone, somewhere today be the 100,000th downloader of Mystery AI Hype Theater 3000, a podcast I've been privileged to su…

2024-09-30 13:10:05 Today!Announcing the next Mystery AI Hype Theater 3000 live stream! Can “AI” do your science for you? Should it be your co-author? Join me and @alex as we not only say hell no but also pick apart some specific proposals:Monday, September 30, 11am Pacific https://www.twitch.tv/dair_institute

2024-09-30 13:09:31 Today! https://t.co/PrJ2VUWssk

2024-09-30 13:08:56 In honor of International Podcast Day here are a few recs from @alex and me. Happy listening!https://buttondown.com/maiht3k/archive/happy-international-podcast-day/

2024-09-30 13:08:14 In honor of International Podcast Day here are a few recs from @alexhanna and me. Happy listening! https://t.co/rUPTeVLpsy

2024-09-30 02:02:00 RT @DAIRInstitute: This Monday September 30th is packed. Register @ https://t.co/hi1uODx9hR to support anonymous data workers speaking…

2024-09-29 21:16:18 Join us tomorrow!Announcing the next Mystery AI Hype Theater 3000 live stream! Can “AI” do your science for you? Should it be your co-author? Join me and @alex as we not only say hell no but also pick apart some specific proposals:Monday, September 30, 11am Pacific https://www.twitch.tv/dair_institute

2024-09-29 21:15:31 Join us tomorrow! https://t.co/PrJ2VUX0hS

2024-09-28 23:00:18 @machineciv @charleswlogan @alexhanna @BenPatrickWill Actually, that one is @alexhanna !

2024-09-28 13:07:09 RT @emilymbender: Also available as video on PeerTube: https://t.co/ECaGPh4XUw (where you can see @alexhanna just casually hanging out wit…

2024-09-28 13:06:14 RT @emilymbender: Mystery AI Hype Theater 3000 Episode 41: Sweating into AI Fall https://t.co/7JbfCd9z82 … in which @alexhanna and I try…

2024-09-28 13:05:58 RT @emilymbender: When everything is recorded, you've gotta level up your privacy practices. Or, and hear me out, maybe not jump on board w…

2024-09-27 15:24:19 RT @CriticalAI: @STS_News @emilymbender @TomEMullaney @emilymbender mentions her appearance w/ novelist Ted Chiang. The current issue of #C…

2024-09-27 13:26:47 RT @CriticalAI: @STS_News interviews @emilymbender https://t.co/sZiZOmYOnK h/t @TomEMullaney

2024-09-27 13:10:55 When everything is recorded, you've gotta level up your privacy practices. Or, and hear me out, maybe not jump on board with sending all your data to the "AI" companies in the first place? https://t.co/tXe3Bl859T

2024-09-27 13:06:10 RT @emilymbender: Announcing the next Mystery AI Hype Theater 3000 live stream! Can "AI" do your science for you? Can it be your co-autho…

2024-09-27 13:05:55 Shakers of SALAMI! https://t.co/9MDTcWpPrB

2024-09-27 13:02:01 Also available as video on PeerTube:https://peertube.dair-institute.org/w/aPD7ZXcTgpc75j2uqQDaDP(where you can see @alex just casually hanging out with her guitar…)

2024-09-27 13:01:21 Mystery AI Hype Theater 3000 Episode 41: Sweating into AI Fallhttps://www.buzzsprout.com/2126417/episodes/15808784-episode-41-sweating...… in which @alex and I try to clear the backlog of Fresh AI Hell (and partially succeed) Thanks to Christie Taylor for production!

2024-09-26 22:58:33 @david_colquhoun @alex Ya think????Stay '

2024-09-26 16:31:07 Announcing the next Mystery AI Hype Theater 3000 live stream! Can “AI” do your science for you? Should it be your co-author? Join me and @alex as we not only say hell no but also pick apart some specific proposals:Monday, September 30, 11am Pacific https://www.twitch.tv/dair_institute

2024-09-26 14:06:37 @jmsdnns See, the thing about jokes about someone'

2024-09-26 14:02:16 “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.” - FTC Chair Lina M. Khanhttps://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announce...

2024-09-24 20:49:15 RT @benjaminjriley: @timnitGebru @Abebab @emilymbender @mmitchell_ai The education community cannot have its equity-lens cake and eat it to…

2024-09-24 15:10:26 I enjoyed this conversation with @STS_News ... we recorded it a few months ago, but it holds up! (The most dated thing is the chat about THE AI CON at the end, in that @alexhanna and I were really just starting it then and have finished it now!) https://t.co/u3W7BES4rC

2024-09-24 13:01:25 RT @STS_News: We have a few Peoples &

2024-09-24 03:50:53 RT @mmitchell_ai: A must-read in my world! Interested in where we're at now with "superintelligence", AI "reasoning", and whether "stocha…

2024-09-21 21:14:01 RT @StevenBird: Keeping Indigenous languages strong in northern Australia: a PhD opportunity at Charles Darwin University, under First Nati…

2024-09-21 13:57:42 RT @bobehayes: Correcting the Record "...I never talk about "AI" as if it is a thing, nor about these systems "learning" and "reading"."…

2024-09-19 14:55:15 @fl0_id @festal @tante It would probably help to see that remark in its context -- likely from this podcast episode:https://www.buzzsprout.com/2126417/episodes/15660976-episode-39-newsroom...

2024-09-18 22:13:32 @veenadubal @alexhanna Thank you Veena!

2024-09-17 12:59:35 RT @emilymbender: "It is not the case that “AI gathers data from the Web and learns from it.” The reality is that AI companies gather data…

2024-09-17 01:45:41 RT @alexhanna: After a few days of marathon editing, here is @emilymbender and I celebrating turning in a final copy of The AI Con! https:/…

2024-09-16 13:50:00 RT @bobehayes: Correcting the Record "...I never talk about "AI" as if it is a thing, nor about these systems "learning" and "reading"."…

2024-09-16 13:13:52 "

2024-09-15 12:59:47 RT @emilymbender: MAIHT3k Ep 40: Elders Need Care, Not 'AI' Surveillance https://t.co/ijNuNdEZZT Prof. Clara Berridge joined me &

2024-09-14 13:31:58 Also available as video on PeerTube:https://peertube.dair-institute.org/w/dvcqG18i57Fosha6Pw5GcB

2024-09-14 13:31:39 Mystery AI Hype Theater 3000 Ep 40: Elders Need Care, Not '

2024-09-13 15:52:15 RT @_KarenHao: To the public, Microsoft uses its reputation as an AI &

2024-09-13 14:19:13 @audunmb thanks for the heads up.

2024-09-13 13:49:32 I talk to the press largely to *combat* AI hype. It'

2024-09-13 13:49:25 I talk to the press largely to *combat* AI hype. It's beyond frustrating to be misquoted in ways that contribute to it instead. New newsletter post: https://t.co/xYTZdoXS7o

2024-09-13 02:11:18 RT @mmitchell_ai: YAY! The CEO of OpenAI just recognized that LLMs generate text-based tokens using randomness and probability! Something o…

2024-09-09 13:20:12 Today!Do you feel like this summer has brought just unending terrible ideas for how to use “AI”? You’re not alone! Join me and @alex as we attempt to clear out the backlog of Fresh AI Hell on the next Mystery AI Hype Theater 3000 live stream:Monday, September 9, noon Pacific https://www.twitch.tv/dair_institute

2024-09-09 13:18:42 Today! https://t.co/kPyEoiG0Io

2024-09-08 12:56:25 Join us tomorrow!Do you feel like this summer has brought just unending terrible ideas for how to use “AI”? You’re not alone! Join me and @alex as we attempt to clear out the backlog of Fresh AI Hell on the next Mystery AI Hype Theater 3000 live stream:Monday, September 9, noon Pacific https://www.twitch.tv/dair_institute

2024-09-08 12:55:27 Join us tomorrow! https://t.co/kPyEoiG0Io

2024-09-07 13:35:27 RT @emilymbender: @bcmerchant @timnitGebru @DAIRInstitute @jovialjoy @AJLUnited Dr. @alexhanna and I are still here, still talking to polic…

2024-09-07 13:35:23 RT @emilymbender: @bcmerchant @timnitGebru @DAIRInstitute Dr. @jovialjoy is still here, touring her amazing book _Unmasking AI_ https://t.…

2024-09-07 13:02:07 RT @emilymbender: @bcmerchant Dr. @timnitgebru and @DAIRInstitute are still here, among other things, convening the Data Worker's Inquiry:…

2024-09-05 14:31:17 @CriticalAI @alex thank you!

2024-09-05 13:03:41 Do you feel like this summer has brought just unending terrible ideas for how to use “AI”? You’re not alone! Join me and @alex as we attempt to clear out the backlog of Fresh AI Hell on the next Mystery AI Hype Theater 3000 live stream:Monday, September 9, noon Pacific https://www.twitch.tv/dair_institute

2024-09-03 20:00:09 @spiegelmama @carrideen I was not interviewed for this piece specifically, no. But I have met Ted Chiang and done an event with him. It'

2024-09-03 19:16:53 @carrideen I'

2024-08-31 18:58:47 @AndrewShields It'

2024-08-31 18:13:35 Excellent new essay by Ted Chiang, who gets to the heart of why language and art are inherently about conveying meaning and experience, not just form:https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to...

2024-08-31 14:36:57 Two years of ridicule as praxis! w/@alex and Christie Taylorhttps://buttondown.com/maiht3k/archive/mystery-ai-hype-theater-3000-turn...

2024-08-30 13:31:50 Also available as video on peertube:https://peertube.dair-institute.org/w/qfEo47ztaAiuRfjmjAT4n1

2024-08-30 13:31:37 Mystery AI Hype Theater Episode 39: Newsrooms Pivot to Bullshit https://www.buzzsprout.com/2126417/15660976-episode-39-newsrooms-pivot-t...… in which 404media co-founder Sam Cole joins me and @alex to reflect on what’s happening as “AI” gets shoved into journalism.Thx to Christie Taylor for production!

2024-08-27 21:25:37 The CLMS program is looking for a new Graduate Program Manager! Come work with us :) (Application deadline: 9/9)https://uwhires.admin.washington.edu/ENG/candidates/default.cfm?szCatego...

2024-08-24 21:44:43 @rmflight @alex @hrbrmstr Zoom into OBS!

2024-08-24 11:30:22 @maxleibman https://www.radicalai.org/ is amazing. It'

2024-08-23 21:19:26 @marcusdeh Ugh, so sorry to hear it.

2024-08-22 12:24:36 @davidschlangen Could be!

2024-08-22 08:22:25 @davidschlangen It seems to have stopped, maybe? It'

2024-08-22 08:02:17 @davidschlangen Could be because the US is asleep and it'

2024-08-22 07:53:19 @davidschlangen It'

2024-08-22 00:32:49 @manhack @timnitGebru Small point of correction: I never worked at Google. It was my co-authors (Gebru and Mitchell) who were fired.

2024-08-21 01:42:29 RT @mmitchell_ai: In discussions of openness, it's important to be aware of the issue of colonialism: Fully "open" content, that can then b…

2024-08-20 07:54:29 @dingemansemark Big yikes.

2024-08-20 07:26:51 @dingemansemark Oh wow that one is awful. In their limitations they write:"

2024-08-20 04:45:59 RT @bcmerchant: Utterly devastating for CA journalists. This bill was supposed to make big tech pay its fair share after usurping advertisi…

2024-08-19 20:52:39 @Paperposts @foone Wow, that is wild. And it underscores the awfulness of all of this surveillance tech.

2024-08-19 20:02:33 RT @timnitGebru: Join us today at 3:30pm pacific at https://t.co/OByDmeWv60 where you’ll also find the recording if you miss the livestream.

2024-08-19 20:02:25 RT @techworkersco: Join @TechWorkersCo and @DAIRInstitute for an online panel covering AI myths, misinformation, and the history of automa…

2024-08-19 12:14:19 Today! Ready for more Mystery AI Hype Theater 3000? Tune in as @alex and I get to chat with Prof. Clara Berridge about AI hype and elder care. Monday, August 19, 3:30pm Pacific https://www.twitch.tv/dair_institute [Note unusual time]

2024-08-19 12:12:32 Today! https://t.co/u3dNCQSQaJ

2024-08-19 11:33:41 @mkranz Thanks, I hate it.@alex

2024-08-19 09:12:50 @joelniklaus Yes, a net negative, even before you account for the environmental costs and exploitative labor practices.

2024-08-19 08:32:54 @joelniklaus Yes, that would be preferable. Short of that, they should clearly specify what it is *for*, run evaluations testing how it works for those purposes, and portray it accurately to the public. (Same for ChatGPT, Gemini, etc.)

2024-08-19 07:34:08 @joelniklaus My suggestion is: don't. Don't present the output of a synthetic text extruding machine as any kind of information source. It isn't.

2024-08-18 22:17:52 @tallinzen Thank you, Tal.

2024-08-18 12:36:51 Tomorrow! Ready for more Mystery AI Hype Theater 3000? Tune in as @alex and I get to chat with Prof. Clara Berridge about AI hype and elder care. Monday, August 19, 3:30pm Pacific https://www.twitch.tv/dair_institute [Note unusual time]

2024-08-17 11:49:11 Ready for more Mystery AI Hype Theater 3000? Tune in as @alex and I get to chat with Prof. Clara Berridge about AI hype and elder care. Monday, August 19, 3:30pm Pacific https://www.twitch.tv/dair_institute [Note unusual time]

2024-08-17 09:06:03 @aud @trochee @fay @tschfflr :) The recording should eventually be available, hopefully with the full Q&

2024-08-17 02:17:53 @trochee @tschfflr @fay The slides are here, but of course not everything I said was on the slides.https://faculty.washington.edu/ebender/papers/ACL_2024_Presidential_Addr...(A lot of people on Twitter are mad right now, and most are only reacting to the slides...)

2024-08-16 09:47:48 @Graffotti @trochee @tschfflr @fay I guess the local organizers wanted to showcase Muay Thai.

2024-08-16 09:15:59 @trochee @tschfflr @fay The literal boxing match was done by professional boxers and it was part of the entertainment at the ACL social event.The other thing was my Presidential Address.

2024-08-15 14:52:40 @BlancheMinerva @idansc Well, that and Raji et al 2021 is this paper, not whatever Raji, Smart et al 2021: https://t.co/kR4ZA1k7uz

2024-08-15 14:37:00 RT @emilymbender: MAIHT3k Ep 38: https://t.co/ag2xwt72PX in which @alexhanna &

2024-08-15 10:35:35 Also available as video:https://peertube.dair-institute.org/w/bFaXRgjRmeo9xxgxBou5QY

2024-08-15 10:35:06 MAIHT3k Ep 38:https://www.buzzsprout.com/2126417/15554524-episode-38-deflating-zoom-s-... which @alex &

2024-08-15 00:30:14 @aaronlololo Congrats!!

2024-08-14 23:42:15 RT @mmitchell_ai: Such a useful sanity-check reminder from @emilymbender's #ACL2024 Presidential Address: *Language research* is itself int…

2024-08-14 09:50:01 @asayeed I think Google Slides is involved here, too, alas.

2024-08-14 06:56:25 RT @barbara_plank: Congratulations to Ralph Grishman for the Liftetime-achievement-award! #ACL2024NLP

2024-08-14 06:29:52 Starting just about ... now! https://t.co/Ju3vIKaytT

2024-08-12 11:45:29 #ACL2024 #ACL2024nlp Don't miss the business meeting tomorrow afternoon -- lots of info for the community + a chance to give input.

2024-08-12 02:31:23 @asayeed Show-off!

2024-08-05 15:20:45 RT @DAIRInstitute: Join us at https://t.co/h50L9qiri4 in a little under 4 hours for another Mystery AI Hype Theater 3000 livestream.

2024-08-05 13:13:23 Today!! Really looking forward to talking AI hype in journalism with Samantha Cole, co-founder of 404media.co, when she joins me and @alex for the next Mystery AI Hype Theater 3000 live stream: Monday August 5, Noon Pacific https://www.twitch.tv/dair_institute

2024-08-05 13:12:05 Today! https://t.co/9J5wXlAw4B

2024-08-04 16:07:11 RT @_alialkhatib: !! new episode of mystery ai hype theater 3000 with @emilymbender &

2024-08-04 14:03:26 Tomorrow!! Really looking forward to talking AI hype in journalism with Samantha Cole, co-founder of 404media.co, when she joins me and @alex for the next Mystery AI Hype Theater 3000 live stream: Monday August 5, Noon Pacific https://www.twitch.tv/dair_institute

2024-08-04 13:32:07 Join us tomorrow! https://t.co/9J5wXlAw4B

2024-08-03 13:15:33 Also available on peertube:https://peertube.dair-institute.org/w/bcrDnVeGbeWbpQXHTi3Qju

2024-08-03 13:15:11 Mystery AI Hype Theater 3000 Episode 37: Chatbots Aren'

2024-08-03 13:14:05 Also available on peertube: https://t.co/r9P6Xm9ciQ

2024-08-03 13:13:59 It’s just what they’re imagining we do and not really what we do. Where’s the substance relative to what we’re actually doing as nurses, taking care of patients? /end alt text

2024-08-03 13:13:44 If you ever watch the Muppets and you remember Chef, and he’s there and he’s chopping and singing and he’s doing all the things that somebody perceives a chef might do, but he’s not actually cooking. And sometimes that’s what these technologies feel like. >

2024-08-03 13:13:35 @NationalNurses @alexhanna @ctaylsaurus So it completely erases everything else that nurses do. All of this hype is so contingent upon the work that nurses do is not valuable. So it’s just like, hey, all nurses do is look up drug interactions. >

2024-08-03 13:13:29 @NationalNurses @alexhanna @ctaylsaurus or know what it means or know which one to pull out of that litany as more likely than something else. That comes from understanding the person that you’re looking at and your experience as a clinician. >

2024-08-02 13:48:48 Really looking forward to talking AI hype in journalism with Samantha Cole, co-founder of 404media.co, when she joins me and @alex for the next Mystery AI Hype Theater 3000 live stream: Monday August 5, Noon Pacific https://www.twitch.tv/dair_institute

2024-08-01 23:30:47 @CptSuperlative Yes, indeed. See also:https://doi.org/10.1177/09637214231217286Open access preprint:https://faculty.washington.edu/ebender/papers/Bender-2024-preprint.pdf

2024-08-01 19:43:58 @TLAlexander You're welcome!

2024-08-01 18:41:22 @eliocamp Please don'

2024-08-01 18:33:04 @eliocamp Ah yes, I'

2024-08-01 18:16:57 If you'

2024-08-01 18:16:14 And *of course* the article has to end with this tired trope:>

2024-08-01 18:15:10 There is so much awfulness in this article that'

2024-08-01 18:13:33 The reporting in MIT Tech Review is such a mixed bag. Some good critical work, some "

2024-08-01 17:30:48 TFW you finish the thread and catch the typo in the very first sentence. Mixed *bag* of course.

2024-08-01 17:26:28 RT @ArteEsEtica: Como parte del proyecto Data Workers' Inquiry dirigido por Milagros Miceli, desarrollamos una pieza audiovisual para poner…

2024-08-01 17:13:36 If you're thinking about this as an "accuracy" problem, you've already lost the plot ... as well illustrated by this quote from one of the dissenting voices hidden in the middle of the article: https://t.co/aTkAuIDyqR

2024-08-01 17:12:08 And *of course* the article has to end with this tired trope: >

2024-08-01 13:26:53 Excellent video (in Spanish, with Spanish &

2024-07-29 22:36:56 @kendraserra @sellars Congrats!!!

2024-07-29 22:12:56 @wendynather @catsalad I'

2024-07-29 18:53:50 @kfort I know, right? The hype is everywhere....

2024-07-29 13:49:27 I'

2024-07-28 14:33:53 This is tomorrow! ---It'

2024-07-26 12:37:58 Deadline is today!! -- Discounted/sliding scale virtual registration for ACL 2024: https://2024.aclweb.org/registration#discounted-virtual-registration ** Note deadline of July 26 ** The ACL provides this option in order to make ACL events accessible to researchers who otherwise would be unable to participate. Please spread the word!

2024-07-25 17:31:11 Always hilarious to read ad copy where the information selling service admits to selling a system that just makes shit up.

2024-07-25 17:22:24 It'

2024-07-24 00:26:16 Discounted/sliding scale virtual registration for ACL 2024: https://2024.aclweb.org/registration#discounted-virtual-registration ** Note deadline of July 26 ** The ACL provides this option in order to make ACL events accessible to researchers who otherwise would be unable to participate. Please spread the word!

2024-07-23 21:24:27 Did you know that Mystery AI Hype Theater 3000 has a newsletter? Check it out &

2024-07-23 17:00:54 I appreciate how the FTC works through how existing law applies to new technology. More like this please!https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/07/behind-...

2024-07-23 13:18:13 Please spread the word!! https://t.co/JgSnkJpCjz

2024-07-23 13:18:05 RT @aclmeeting: Discounted Virtual Registration for #ACL2024NLP Discounted virtual registration is intended to make ACL events accessible…

2024-07-22 19:43:49 Starting now! https://t.co/BSZ0Qa9gtg

2024-07-22 13:03:11 This is today! -- Excited for the next Mystery AI Hype Theater 3000 live stream! Michelle Mahon, Director of Nursing Practice with National Nurses United will join me and @alex to talk AI hype in healthcare. Monday July 22, 12:45pm Pacific twitch.tv/dair_institute

2024-07-22 13:02:24 This is today! https://t.co/BSZ0Qa9gtg

2024-07-22 12:55:38 RT @emilymbender: Discounted/sliding scale virtual registration for ACL 2024: https://t.co/etcrDd5jPC ** Note deadline of July 26 ** The…

2024-07-21 17:24:22 Join us tomorrow! Excited for the next Mystery AI Hype Theater 3000 live stream! Michelle Mahon, Director of Nursing Practice with National Nurses United will join me and @alex to talk AI hype in healthcare. Monday July 22, 12:45pm Pacific twitch.tv/dair_institute

2024-07-21 13:31:01 Discounted/sliding scale virtual registration for ACL 2024: https://2024.aclweb.org/registration#discounted-virtual-registration ** Note deadline of July 26 ** The ACL provides this option in order to make ACL events accessible to researchers who otherwise would be unable to participate. Please spread the word!

2024-07-20 23:49:15 Most comprehensive collection of info seems to be here: https://t.co/iPNcxepRJI

2024-07-20 23:29:10 @hashtagoras Same. But so many people have been saying that for a few years now, it seems, so I worry any such copies have been recycled already.

2024-07-20 23:26:45 @hashtagoras Thank you!

2024-07-20 23:24:39 @kylethayer Thank you!

2024-07-20 23:24:06 @genna_buck It seems like there's a bit more info, collected here: https://t.co/iPNcxepRJI

2024-07-20 23:19:46 @genna_buck Thank you. Too bad there isn't more info!

2024-07-20 23:08:27 Does anyone have info about the context of this slide? I see it credited to an IBM presentation from 1979---but a presentation by whom, to whom about what?

2024-07-20 13:02:44 @ali Also available as video on PeerTube:https://peertube.dair-institute.org/w/824o7kNVqxuTz7kEr3G8RE

2024-07-20 13:02:14 Mystery AI Hype Theater 3000 Episode 36: About That '

2024-07-19 19:28:06 Excited for the next Mystery AI Hype Theater 3000 live stream! Michelle Mahon, Director of Nursing Practice with National Nurses United will join me and @alex to talk AI hype in healthcare. Monday July 22, 12:45pm Pacific twitch.tv/dair_institute

2024-07-19 19:26:41 Excited for the next Mystery AI Hype Theater 3000 live stream! Michelle Mahon, Director of Nursing Practice with @NationalNurses will join me and @alexhanna to talk AI hype in healthcare. Monday July 22, 12:45pm Pacific https://t.co/ETRqVjeTrh

2024-07-19 12:53:44 RT @_alialkhatib: !!! i joined @emilymbender for Mystery AI Hype Theater 3000 &

2024-07-19 12:50:48 Traveling? Here'

2024-07-19 12:48:56 Traveling? Here's some great info about how to resist the normalization of biometric surveillance on your way. h/t @jovialjoy and @AJLUnited https://t.co/Og8xm0IITB

2024-07-18 12:36:36 RT @emilymbender: This is the best news! I'm so excited for this book :) :) :)

2024-07-18 12:36:34 RT @emilymbender: Wondering why everyone around you is so excited about AI? Why it seems to be in *everything*? Here's why:

2024-07-17 15:10:39 @timnitGebru THIS IS THE BEST NEWS! Hooray!!

2024-07-16 12:59:00 @MichaelBishop Heard that one too and was appalled.

2024-07-15 14:32:13 @CrackedWindscreen Thanks, I hate it.

2024-07-15 13:42:28 @mmby Here'

2024-07-15 13:07:20 1) Again we see the harm of the term "

2024-07-15 13:07:12 Why can'

2024-07-12 11:22:31 @luis_in_brief @matthewmaybe For the systems, I often use synthetic text (or media) extruding machines.

2024-07-12 08:21:05 I thought this was going to be a story about ensuring that Tesla'

2024-07-11 18:01:53 @benhaube and you think I of all people need this explained to me why?

2024-07-11 17:09:57 WaPo: Detailed reporting about how "

2024-07-11 16:18:54 Appreciate the shout out to Mystery AI Hype Theater 3000 in this recent piece: https://www.technologyreview.com/2024/07/10/1094475/what-is-artificial-i... the discussion of the Turing Test, I also want to point to this ep of OOAC @ouropinions @alex and I did on that topic: https://www.ouropinionsarecorrect.com/shownotes/2024/4/4/the-turing-test...

2024-07-10 13:54:01 Some nice reporting from @404media on yet another ridiculous "

2024-07-09 09:50:27 @mikarv @halcyene This sounds very similar to the OpenAI paper that @alex and I took apart here:https://www.buzzsprout.com/2126417/14416779-episode-25-an-llm-says-llms-...

2024-07-07 10:54:37 @tanepiper Also, in case you missed it, mansplaining is never about intent.

2024-07-07 10:53:40 @tanepiper Please feel free to make your observations outside of my mentions, then. As it stands, you have addressed this comment to me, in response to my post, without any connective text indicating how it is supposed to relate. It reads as if you felt that I needed to be enlightened.

2024-07-07 08:02:50 @tanepiper @mattb @hipsterelectron @dalias See pinned toot.

2024-07-07 04:19:04 @mattb @hipsterelectron @dalias One of the issues with LLMs is that they provide apparent fluency on unlimited topics, making it seem like you don'

2024-07-07 04:18:31 @mattb @hipsterelectron @dalias Yes, there is a long tradition of parsing into semantic representations, and even work on generating from them. If you look at it that way, you immediately see that generation of grammatical strings alone isn'

2024-07-05 13:45:13 TFW you get to have a conversation with Prof. @safiyanoble and it's recorded so other people can enjoy too!! https://t.co/cGRqcD4d4B

2024-07-05 13:44:45 RT @emilymbender: Also available as video on PeerTube: https://t.co/PkFFaIYgty

2024-07-05 13:44:42 RT @emilymbender: MAIHT3k Episode 35: AI Overviews and Google's AdTech Empire https://t.co/DSS87cmpOS Prof. @safiyanoble joined me and @…

2024-07-04 13:30:10 @safiyanoble @alex Also available as video on PeerTube:https://peertube.dair-institute.org/w/veZW69ZkfNVumKyEQNrz2q

2024-07-04 13:29:55 Mystery AI Hype Theater 3000 Episode 35: AI Overviews and Google'

2024-07-02 13:27:29 @nacly @alex Thanks, I hate it.

2024-06-26 04:40:08 @freeformz @fastmail sorry to hear that. I recorded this with them back in September, which is a surprisingly long ways back for a podcast. Also, the editing is clumsy in at least a few places, mangling some of my points.

2024-06-24 14:25:51 @viennawriter you mi

2024-06-23 19:34:47 I'

2024-06-23 19:28:54 Tomorrow! Next Mystery AI Hype Theater 3000 live-stream! @ali will join us to take apart yet another paper-shaped AI hype artifact unleashed from a "

2024-06-22 13:16:39 RT @emilymbender: @alexhanna @ctaylsaurus Also available as video on PeerTube! https://t.co/Qk9W8UJJtt

2024-06-22 13:16:37 RT @emilymbender: Mystery AI Hype Theater 3000 Ep 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx https://t.co/tchhHoTf2d in whic…

2024-06-21 13:39:08 @alexhanna @ctaylsaurus Also available as video on PeerTube! https://t.co/Qk9W8UJJtt

2024-06-21 13:39:00 @alexhanna @ctaylsaurus dominated by the few domestic players who are only focused on wealth accumulation and really fucking over workers. /end alt text >

2024-06-21 13:38:50 @alexhanna @ctaylsaurus And I’m like, man, it’s just, it’s very incredible Cold War, you know, saber rattling. The US has to maintain this bastion of policy freedom as if, you know, the policy domain here hasn’t been >

2024-06-21 13:38:32 @alexhanna @ctaylsaurus China and Russia will fill to ensure the digital economy remains open, fair and competitive for all, including the 3 million American workers whose jobs depend on digital trade.” >

2024-06-21 13:38:24 @alexhanna @ctaylsaurus Captions: ALEX HANNA: The last bullet point, the second part of it, “As Russia and China push their cyber agenda of censorship, repression and surveillance, the (AI) working group encourages the executive branch to avoid creating a policy vacuum that >

2024-06-21 13:30:20 Mystery AI Hype Theater 3000 Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx https://www.buzzsprout.com/2126417/15280581-episode-34-senate-dot-roadma... in which @alex and I dive into Sen Schumer'

2024-06-21 13:22:03 Next Mystery AI Hype Theater 3000 live-stream! @ali will join me &

2024-06-19 13:02:23 Really enjoyed this conversation with @Kobotic on Carnegie Council'

2024-06-17 00:23:10 @fpbhb 1) You referred to "

2024-06-16 20:46:55 @fpbhb See pinned toot.

2024-06-16 18:10:06 @fpbhb there are peer reviewed, not for profit, open access options!

2024-06-16 17:22:00 PSA: Every time you cite the arXiv version of something instead of the peer reviewed version, you'

2024-06-15 22:16:33 @loftywords @alex and I had a go at this in the Fresh AI Hell segment of the most recent Mystery AI Hype Theater 3000. It'

2024-06-13 02:02:06 @dingemansemark @dunhamsteve I love #uninformation as a word for this (have been trying non-information, but yours is better). Mused about the latest Google nonsense here: https://buttondown.email/maiht3k/archive/information-is-relational/

2024-06-12 17:27:28 @AJFish Thanks for listening to our podcast! The transcripts only reflect what we say in the stream, and not everything in the chat. Sometimes we read and talk about chat comments, in which case those comments would make it to the transcript--otherwise not.

2024-06-10 22:41:31 In the latest installment of the Mystery AI Hype Theater 3000 newsletter: Piles of Data Don'

2024-06-10 12:43:49 Today!! Ready for more Mystery AI Hype Theater 3000? Join me &

2024-06-09 13:13:40 ~~Tomorrow!~~Ready for more Mystery AI Hype Theater 3000? Join me &

2024-06-07 15:05:12 Ready for more Mystery AI Hype Theater 3000? Join me &

2024-06-06 13:13:32 @artiom If you read the linked article instead of just mouthing off, you'

2024-06-05 19:32:04 @aharoni @meg Thank you! I'

2024-06-05 15:15:09 @TheShellyTea @mmitchell_ai thank you!

2024-06-05 13:13:36 Mystery AI Hype Theater 3000, Ep 33 Much Ado About '

2024-06-05 01:17:30 @sarae I'

2024-06-05 01:08:55 "

2024-06-03 18:26:56 I'

2024-06-03 14:43:30 @rsf92 I see. You'

2024-06-03 14:11:00 @rsf92 Hm, if only your curiosity were strong enough to go look up the open access paper the octopus thought experiment is detailed in....

2024-06-03 13:01:23 TODAY!! Heard the news about Sen Schumer'

2024-06-02 13:42:07 Here'

2024-06-02 13:41:18 2. The octopus is posited to be hyperintelligent precisely because we wanted to show that no entity could learn to "

2024-06-02 13:40:52 This is a decent summary of the octopus thought experiment from Bender &

2024-06-02 13:21:56 Heard the news about Sen Schumer'

2024-05-31 00:56:03 #lazyweb / trying to find citations:Does anyone know of studies looking at how peer review (in any field/across fields) favors or disfavors work that uses "

2024-05-30 16:17:19 Heard the news about Sen Schumer'

2024-05-29 19:29:06 A shout out to librarians, libraries and library science -- and the practices of care, community and service which make up their democratizing force.https://buttondown.email/maiht3k/archive/information-access-as-a-public-...

2024-05-28 19:17:14 We'

2024-05-24 12:42:42 Mystery AI Hype Theater 3000 Episode 32: A Flood of AI Hell, in which @alex and I manage to stay afloat through it all thanks to an amazing sea shanty that she wrote for the occasion. Ridicule as praxis! https://www.buzzsprout.com/2126417/15120816-episode-32-a-flood-of-ai-hel... available as video on PeerTube! https://peertube.dair-institute.org/w/kwJH8zfnLR4uowPQCigMCe With thanks, as always, to Christie Taylor for production!

2024-05-21 12:51:10 @Neverfadingwood @elmerot That was episode 33! So far, it'

2024-05-20 21:33:11 Here'

2024-05-20 18:20:40 So, even if a test has been established to have construct validity as a test relating to human cognition, you can'

2024-05-20 18:20:33 Hey folks, let'

2024-05-20 15:22:44 @kaveinthran You can find all of our episodes here!https://www.dair-institute.org/maiht3k/

2024-05-20 15:22:24 Colleague in globally dispersed organization asks for meeting, says it'

2024-05-20 12:54:06 Join us today!On the next Mystery AI Hype Theater 3000, guest host @mmitchell_ai and I will look into claims that "

2024-05-19 14:35:37 This is tomorrow!On the next Mystery AI Hype Theater 3000, guest host @mmitchell_ai and I will look into claims that "

2024-05-18 13:25:24 This paper and its predecessor that Chirag Shah and I wrote started off as a reaction to Google'

2024-05-17 17:12:49 The folks at FTC Office of Technology are doing excellent work taking in research and using it to make a practical difference. See link for a range of topics they are interested in -- for anyone working in these areas, this is a great way to have impact!https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/05/p-np-no...

2024-05-16 18:57:43 On the next Mystery AI Hype Theater 3000, guest host @mmitchell_ai and I will look into claims that "

2024-05-12 15:16:05 As folks discuss the plundering of the open internet/sharing economy by the data-hungry LLM trainers, it seems like a good time to remind ourselves to find something other than "

2024-05-09 17:48:49 @karabaic @alex *sigh*

2024-05-09 15:26:00 I would like to remind the world that you actually don'

2024-05-09 13:02:56 Also available as video on PeerTube!https://peertube.dair-institute.org/w/nrjPApZ1pvhh59Ja31wP6p

2024-05-08 12:40:28 Mystery AI Hype Theater 3000 Episode 31: Science is Human Endeavor in which Drs. Molly Crockett and Lisa Messeri join me &

2024-05-06 15:18:37 @zenkat In that case, calling this a "

2024-05-06 15:09:18 @zenkat And thanks for that link! A good article indeed.

2024-05-06 15:04:02 @zenkat Yes -- as I understand it, AlphaFold is supervised machine learning over a very carefully curated dataset. Nothing out of OpenAI suggests "

2024-05-06 13:36:29 Latest Mystery AI Hype Theater 3000 newsletter, on AI hype infecting a key player in the pharmaceutical sector. Wouldn'

2024-05-01 14:17:53 Bender &

2024-04-29 13:26:07 TFW you'

2024-04-29 04:08:58 @hipsterelectron You got this!

2024-04-26 12:59:22 Not just corporate capture, but TESCREAL corporate capture. Ugh.https://www.wsj.com/tech/ai/openais-sam-altman-and-other-tech-leaders-to...

2024-04-25 14:01:10 Ready for more Mystery AI Hype Theater 3000? Climate change has reached AI Hell (frozen over last we checked in December) and now we'

2024-04-24 14:32:55 @arjen thank you!

2024-04-24 00:24:14 Really, really loved this episode of Our Opinions Are Correct -- and encourage all authors to consider joining AABB (I did, after listening).https://www.ouropinionsarecorrect.com/shownotes/2024/4/18/fascism-and-bo...

2024-04-20 13:33:24 @alex @timnitGebru Also available as video on peertube!https://peertube.dair-institute.org/w/nfzPjaf1VrWRpc4t5em325

2024-04-20 13:32:56 Mystery AI Hype Theater 3000, episode 30! @alex and guest host @timnitGebru read the Techno-Optimism Manifesto so you don'

2024-04-18 20:20:25 The absolute obliviousness of Meta (the company)'

2024-04-18 13:27:03 @minimalparts It is supposed to be open access already -- frustratingly something got lost in communication and we'

2024-04-18 13:01:39 @marinheiro It should be open access. We'

2024-04-17 17:22:29 The best thing you will read about "

2024-04-17 17:16:06 So much of the sales around "

2024-04-17 12:49:47 Bender &

2024-04-15 12:52:39 Join us today! Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-14 12:56:18 Join us tomorrow! Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-12 12:35:11 Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-12 12:26:54 What if universities responded to AI hype with confidence in their core mission rather than FOMO?https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

2024-04-11 20:40:23 Today in the Mystery AI Hype Theater 3000 newsletter:https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

2024-04-10 23:55:37 @soypunk What could possibly go wrong?

2024-04-04 15:40:59 Look what @alex and I got to do! (Hang out with the cool kids over at @ouropinions :)https://www.ouropinionsarecorrect.com/shownotes/2024/4/4/the-turing-test...

2024-04-04 11:35:10 @pa27 yes, I'

2024-04-04 10:26:54 @alex @ctaylsaurus Also available as video! https://peertube.dair-institute.org/w/4h9s6GXxyTMszoBLUm6QCu If your podcast app auto-downloaded the episode and you'

2024-04-04 10:26:42 Mystery AI Hype Theater 3000 episode 29 has dropped! @alex &

2024-04-04 03:58:51 Must-read reporting by +972 on how the IDF are using “AI” in their indiscriminate murder in Gaza. It’s horrific, and we must not look away. And it’s an absolute nightmare of the usual sorts of AI harms cranked up to the extreme: mass surveillance, "

2024-04-03 17:15:17 @keydelk But the irony of invoking Dunning-Kruger while mansplaining is particularly rich.

2024-04-03 15:30:08 @keydelk and you think I need enlightening on this particular topic why exactly?

2024-04-03 14:31:06 Amazing example of contextualizing synthetic text from The Verge:https://www.theverge.com/2024/4/2/24117976/best-printer-2024-home-use-of...

2024-04-02 13:18:48 In the Mystery AI Hype Theater Newsletter this morning: A take-down of proxy hype in the higher ed press.https://buttondown.email/maiht3k/archive/doing-their-hype-for-them/

2024-03-31 12:51:43 @alex_leathard @alex that is pretty awful, indeed

2024-03-29 22:06:16 @jamiemccarthy We don'

2024-03-29 22:05:23 @hollie @gregtitus Oh thanks for catching that! I'

2024-03-29 18:47:15 Finally, as is usual and *completely unacceptable* the public does not have information about the training data used to build this thing, just the info that Microsoft made it.

2024-03-29 18:46:29 It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn'

2024-03-29 18:45:50 There'

2024-03-25 12:25:14 Join us today!! -- Mystery AI Hype Theater 3000 fans, get ready for our next episode! @alex and I will be joined by the inimitable Karen Hao to talk about AI hype and journalism! Live stream Monday March 25, 5pm Pacific https://www.twitch.tv/dair_institute

2024-03-24 17:05:58 @tomstoneham Unless you can find a way to calculate only the additional energy required by a person to do a task, the comparison is not just spurious, but drastically dehumanizing. A person'

2024-03-23 14:51:42 @tomstoneham We get into it in the podcast episode I linked to but basically: humans exist (and have a right to do so) and by existing we consume energy. So comparisons between humans (existing and) doing some task and LLMs doing the task are spurious.

2024-03-23 13:40:43 @tomstoneham This is an ill-formed question. See:https://www.buzzsprout.com/2126417/13931174-episode-19-the-murky-climate...

2024-03-21 17:03:03 @kfort That'

2024-03-21 16:53:05 @kfort Oh that'

2024-03-21 13:42:32 Mystery AI Hype Theater 3000 fans, get ready for our next episode! @alex and I will be joined by the inimitable Karen Hao to talk about AI hype and journalism! Live stream Monday March 25, 5pm Pacific https://www.twitch.tv/dair_institute

2024-03-21 10:09:14 @cmeinel Hi! I appreciate folks flagging articles they might think I'

2024-03-19 00:59:33 @dmr1848 Thanks -- fixed.

2024-03-18 20:54:55 @cmeinel say what now?

2024-03-18 17:31:36 @MarkRDavid 1) I don'

2024-03-18 17:26:01 @MarkRDavid If you read what I write, you'

2024-03-18 17:16:52 Today the US DHS announced three "

2024-03-18 14:24:00 @martinicat ugh

2024-03-17 03:12:34 @virtualinanity Indeed.

2024-03-15 18:22:46 @MarkRDavid The topic of our next episode of the podcast!

2024-03-15 16:12:16 @whoseknowledge Thank you!

2024-03-15 15:36:34 It'

2024-03-15 15:36:25 Part of the plan with our new #MAIHT3k newsletter is to redirect the energy that we'

2024-03-14 21:36:53 @Nodami @cazencott Thank you!

2024-03-14 21:25:56 @Nodami @cazencott Thank you!

2024-03-14 19:16:08 @cazencott Thank you!

2024-03-14 19:06:43 @cazencott And do you remember it as a hoax that got taken seriously?

2024-03-14 19:00:43 So .... at NeurIPS 2016 (or maybe 2015) there was apparently a prank/hoax presentation where some researchers presented a fake pitch for an '

2024-03-14 13:20:47 Mystery AI Hype Theater 3000 - ep 28: LLMs Are Not Human Subjects, in which @alex and I are beyond dismayed at social scientists seeking to use word distributions from unknown corpora as a data source. https://www.buzzsprout.com/2126417/14677380-episode-28-llms-are-not-huma... to Christie Taylor for production! Also available as video: https://peertube.dair-institute.org/w/eWG2Me6QABWHbXjfKhGyYc And check out our new newsletter! https://buttondown.email/maiht3k

2024-03-13 14:13:16 Want to keep up with all things Mystery AI Hype Theater 3000? @alex and I have got you covered! Check out our new newsletter for episode announcements, AI hype take-downs, periodic rants, and occasional samples of fresh AI hell. Subscribe here: https://buttondown.email/maiht3k

2024-03-10 01:27:36 Big Tech likes to push the trope that things are moving and changing too quickly and there'

2024-03-06 01:20:34 @hipsterelectron On Mastodon it was just exactly not giving them too many extra clicks. On Xitter and BlueSky about not providing the link card.

2024-03-05 16:16:50 I realize the latest open letter from the "

2024-03-05 14:12:20 I appreciate the opportunity to speak with an actual journalist (Karan Mahadik) about my experience finding a fabricated quote attributed to me in what turned out to be a fully fabricated (using the Gemini system) "

2024-03-04 13:43:40 Join us today! --- Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific https://www.twitch.tv/dair_institute See you there!

2024-03-03 13:53:03 Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific https://www.twitch.tv/dair_institute See you there!

2024-03-03 03:18:41 @zanchey I see -- it'

2024-03-03 01:03:19 @zanchey You gave the answer right in your post: The source of this problem is the insurance companies.

2024-03-02 21:03:54 p.s. We covered robo-therapy on Mystery AI Hype Theater 3000 back in September, with Hannah Zeavin:https://www.buzzsprout.com/2126417/13544940-episode-13-beware-the-robo-t...

2024-03-02 20:44:00 What if -- instead of seeing the process of creating clinical documentation as mere busywork -- the tech bros understood it as possibly part of the process of care?What if -- instead of leading with the '

2024-03-02 20:43:52 “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” YIKESThis is a completely unrealistic expectation about what goes into verifying that kind of note and sounds like a recipe for overburdening the medical workforce/setting up errors./6

2024-03-02 20:43:40 When the author finally gets around to reporting on what **actual psychologists** have to say, it'

2024-03-02 20:43:25 The only studies cited are co-authored by the companies selling this crap.One of the supposedly positive findings is that people form a "

2024-03-02 20:42:36 They talk up the idea that this is effective because people are more willing to open up to a "

2024-03-02 20:42:30 For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted./2

2024-03-02 20:42:21 Arghh - more problematic reporting, this time about robo-therapists.https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-ther... thread: /1

2024-03-01 16:17:47 Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific https://www.twitch.tv/dair_institute See you there!

2023-05-22 22:04:22 My point here is that it's always worth looking at the tradeoffs, even with products that seem "free" and generally empowering. And maybe asking how we empower communities and connections as well as individuals.

2023-05-22 22:02:39 And there are certainly times when a person needs to figure out how to get somewhere but can't leverage the kind of person to person connection they would need without the automated system (incl folks facing discrimination). >

2023-05-22 22:01:08 I'm not saying one way is better than the other. Some businesses might prefer to attract visitors directly while some neighborhoods might resent Google maps inspired traffic. >

2023-05-22 21:57:11 The tourist center model in particular located some power over the direction of tourist attention in a specific kind of institution. >

2023-05-22 21:56:03 Before we had Google maps, getting around required sharing knowledge with people--maybe going to a visitor center as a tourist or calling the business we intended to get to near home. >

2023-05-22 21:54:37 I get it. I appreciate that too. But this made me think about what other values we are sacrificing in this case. Here, I think it's social connection and interdependence. (And this puts me in mind of @abebab 's work on relational ethics: https://t.co/Fc91OtowsZ ) >

2023-05-22 21:50:07 @techwontsaveus @mjnblack At one point, @mjnblack is talking about the concept of "augmented humans" and mentions that she really appreciates Google Maps because of the independence it gives her when exploring new places. >

2023-05-22 21:48:36 I really enjoyed this episode of @techwontsaveus with @mjnblack -- interesting insight into the values that are shaping the technology we use (and thus are shaping social structures &

2023-05-22 14:05:30 @timnitGebru (cont) Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”

2023-05-22 14:05:16 @timnitGebru re AI doomerism in 2023: “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can aggregate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ >

2023-05-22 14:03:01 Great new profile of Dr. @timnitGebru in the Guardian. “I’m not worried about machines taking over the world

2023-05-22 12:57:23 RT @emilymbender: And not fall for either- Myth #1: The tech is moving too fast! Regulation can't keep up. Myth #2: The 'real' concern is…

2023-05-22 12:56:57 RT @emilymbender: So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separatel…

2023-05-21 15:40:21 Hey @kifleswing and @CNBC fact check: 1) I have never worked at Google (nor has McMillan-Major) 2) It's spelled "stochastic" https://t.co/7y2vGM1fNs https://t.co/xkzw3QskPJ

2023-05-21 14:00:56 @jpalasz I think in many regulatory contexts, talking about "automation" rather than "AI" might be clarifying.

2023-05-21 13:50:47 And not fall for either- Myth #1: The tech is moving too fast! Regulation can't keep up. Myth #2: The 'real' concern is rogue AGI that poses 'existential risk' to humanity.

2023-05-21 13:49:35 But I strongly doubt that saying "AI" is so new it needs its own "FDA" is going to get us there. Let's sit with and use the power that existing regulations already give us for collective governance. >

2023-05-21 13:48:05 Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?) Maybe we already have what we need, maybe there's something to add. >

2023-05-21 13:45:56 A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media synthesis machines pose to the information ecosystem. >

2023-05-21 13:44:53 But the story changes when tech bros mistake "free for me to enjoy" for "free for me to collect" and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections. >

2023-05-21 13:42:24 Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its url), the economic harms were minimal. >

2023-05-21 13:39:44 Re (2), I'm thinking of the kinds of risks that happen when data is amassed (risks to privacy, e.g. around deanonymization being possible after just a few data points are collected) and also risks connected to the ease of data collection. >

2023-05-21 13:38:27 (That last point follows from the value sensitive design principle of considering pervasiveness: what happens when the technology is used by many?) >

2023-05-21 13:37:32 Re (1), we should be asking (as I think many are): how to ensure that people have recourse if automated systems make decisions that are detrimental to them --- and how to ensure that communities have recourse if patterns of decision create/worsen inequity. >

2023-05-21 13:36:33 I am not a policymaker (nor a lawyer) but my sense of it is that the gaps largely come up in cases where (1) automation obfuscates accountability or (2) data collection creates new risks. >

2023-05-21 13:35:14 Beyond that, we should be reasoning from identified harms to see how existing laws &

2023-05-21 13:34:01 Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, like the FTC (and here EEOC, CFPB and DOJ) are doing: https://t.co/d8HeeOAsse >

2023-05-21 13:33:29 I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for. >

2023-05-21 13:31:32 So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things. >

2023-05-21 12:36:17 RT @timnitGebru: "he warns that AI poses a massive threat through "accidental misuse." There is a surrealness to his candor — like if an o…

2023-05-20 22:21:42 RT @samhawkewrites: Tech bros: hey writers! guess what? We've solved your biggest problem! Writers: OMG that's so awesome! We're going to b…

2023-05-20 21:59:25 RT @mmitchell_ai: I've mostly not spoken up about longertermism, effective altruism, and AI. But when it comes to affecting what we priorit…

2023-05-20 13:44:20 @histoftech In response to the prompt "What detracted from your learning?": "830 am" . . . . It was an afternoon class.

2023-05-20 03:10:12 @timnitGebru @Rogntuudju That one is not mine. I suspect you're thinking of this one: https://t.co/77kIgQizn1

2023-04-24 15:37:44 RT @MicroSFF: "You've been chosen," the spirit said. "What?" "Save the world, make it kinder, cleaner, safer." "Me?" "Yes." "Alone?" "We ch…

2023-04-24 15:35:37 Got just a moment for 10 delightful stories? @MicroSFF has you covered: https://t.co/d3D5maESUf

2023-04-24 14:58:08 RT @daveyalba: “If you want to stay on at Google, you have to serve the system and not contradict it,” @L_badikho told me. "You have to bec…

2023-04-23 21:35:43 @mmitchell_ai Ugh, I'm so sorry.

2023-04-23 01:39:41 RT @bobehayes: .@Google CEO peddles #AIhype on CBS @60Minutes "You know what approaching this with humility would mean @sundarpichai? It…

2023-04-22 19:08:44 RT @bgzimmer: In this weekend's @WSJ Review section: New chatbots have been plagued by "hallucinations," generating text that seems plausib…

2023-04-22 13:14:54 RT @emilymbender: "The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- t…

2023-04-21 21:36:32 Thank you, @daveyalba for this reporting.

2023-04-21 21:36:15 “When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings” So tempting to focus on fictional future harms over current, real ones. >

2023-04-21 21:35:51 “But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.” And it shows… >

2023-04-21 21:35:39 “One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review.” Not a good look, Google. >

2023-04-21 21:35:25 “Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.” Employees are correct. >

2023-04-21 21:35:11 “On the same day, [Google] announced that it would be weaving generative AI into its health-care offerings.” >

2023-04-21 21:34:58 “But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.” False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines. >

2023-04-21 21:34:40 “Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.” Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$). >

2023-04-21 21:33:48 @daveyalba @Bloomberg "Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings" We don’t tolerate “experiments” that pollute the natural ecosystem, nor should we tolerate those that pollute the information ecosystem. >

2023-04-21 21:32:44 @daveyalba @Bloomberg “The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.” >

2023-04-21 21:32:17 "The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good. https://t.co/iJPiKhZZ7W From @daveyalba at @Bloomberg

2023-04-21 20:58:35 RT @timnitGebru: “So now, all of a sudden, we have a research effort where we’re now trying to get to a thousand languages.” How their #AI…

2023-04-21 17:58:42 @xriskology The fact that it's hard to tell is ... something though.

2023-04-21 17:58:21 @xriskology I suspect the Twitter account behind "Dr. Gupta" is actually a hoax account.

2023-04-21 17:29:56 @MldDistractions @CriticalAI I would have objected just as strenuously to "paper caught my eye" or "paper jumped out at me" -- in all cases this is assigning agency to the paper and not its authors, while in the next breath naming some irrelevant man and suggesting he deserves credit.

2023-04-21 15:07:12 RT @emilymbender: No, emphatically not. Treating "AIs" as things to be nurtured and raised is NOT the path to constraining the actions of c…

2023-04-21 13:12:26 RT @emilymbender: Really infuriating article (not quite reporting --- it's mostly just the journo's own musings) in the Economist today. Th…

2023-04-21 13:02:18 RT @sfmnemonic: If you think about it, it's kind of heartening that it has taken only a couple of years for the general public to begin to…

2023-04-21 01:11:19 RT @bgzimmer: When chatbots produce responses untethered from reality, AI researchers call those responses "hallucinations." But the term h…

2023-04-14 23:45:44 RT @acmsigcas: Informative and extremely thought provoking response to the "AI pause" letter that has been circulating. E.g. "We should be…

2023-04-14 21:10:31 @DrTechlash @timnitGebru @mmitchell_ai @mcmillan_majora Otherwise, one is left with the impression that the only voices are AI Doomers and AI Boosters (plus maybe @STS_News who is quoted and I would say is neither).

2023-04-14 21:09:53 @DrTechlash We lay this out somewhat more thoroughly in our statement (from the listed authors of the Stochastic Parrots paper, @timnitGebru @mmitchell_ai @mcmillan_majora and me) to the "AI pause" letter: https://t.co/YsuDm8AHUs >

2023-04-14 21:08:44 I appreciate this guide to the AI Doomer and AI Doomerism from @DrTechlash https://t.co/goLBKj2W0H But I wish it also included more about the actual present harms being done in the name of "AI", one function of AI Doomerism being to avoid dealing with those. >

2023-04-14 18:13:53 RT @mmitchell_ai: I have loved @haydenfield's coverage of tech work+culture. @CNBC is lucky to have her! I forgot to share this great piece…

2023-04-14 17:38:04 h/t @SashaMTL for finding this, ahem, gem of a paper. https://t.co/9l9CDZgP0l

2023-04-14 16:38:23 RT @ProfNoahGian: Such a deep, nuanced, historically-grounded convo about language and AI (and hype, marketing, ethics, longtermism, corpor…

2023-04-14 14:20:19 And here's a new twist on "we used ChatGPT to write our paper". Of course. https://t.co/nFeJziBLI1

2023-04-14 14:18:41 I can't believe this needs to be said, but: LLMs are *optional*. Humans are not. >

2023-04-14 14:17:58 Look, you can't count the carbon emissions that people have for (check notes) existing as the "carbon cost" of the work that they do. >

2023-04-14 14:17:11 Okay, so are these 8 pages of motivated reasoning formatted like they've been submitted to Science or to Nature? https://t.co/h95redFrB1 >

2023-04-14 14:05:24 RT @amyjko: I'm excited to speak next Friday at Carnegie Mellon, unveiling my sabbatical work on Wordplay! It's one humble attempt to cente…

2023-04-14 14:05:01 RT @ShanaVWhite: "Society should build technology that helps us, rather than simply adjusting to whatever technology comes our way." -@timn

2023-04-14 13:49:41 It seems we've entered a whole new phase of #AIHype discourse. The good news: There seems to be some movement towards creating regulation. The bad: A lot of it seems to be informed by #AIHype coming from BigTech --- even among those who would work to limit the power of tech cos. https://t.co/crXXmlDbhB

2023-04-14 13:46:42 Oh, and Sen @BernieSanders -- don't get your news about "AI" from the @nytimes. They've been absolutely terrible about how they cover this. For example: https://t.co/0Xc7WVwBKi

2023-04-14 13:44:34 On making sure the regulation reflects the input, interests and needs of those who are the most impacted, see this statement from the listed authors of the Stochastic Parrots paper: https://t.co/YsuDm8AHUs https://t.co/R4JyKUbG1g

2023-04-14 13:42:29 @BernieSanders The move of anthropomorphizing "Sydney" or any other one of these "AIs" opens up room to displace that accountability. But accountability sits with corporations and the people that make them up. >

2023-04-14 13:41:21 Sen @BernieSanders you are right that we need regulation to ensure tech development that actually benefits everyone. Machines aren't gathering info. Big Tech is using machines to gather it. Let's keep the focus on keeping corporations accountable. >

2023-04-14 13:35:16 RT @emilymbender: "Congress needs to ensure corps are not using people’s data w/p their consent, &

2023-04-14 13:35:12 RT @emilymbender: @timnitGebru "Congress needs to focus on regulating corporations and their practices, rather than playing into their hype…

2023-04-14 04:59:55 @mosermusic https://t.co/7YYD3QgF5R

2023-04-14 04:31:19 Case in point: #OpenAI's terms of service *still* say that the user is somehow responsible for what comes out of ChatGPT etc in response to their prompts. Let's get some regulation fixing this, stat. https://t.co/vMYe6GxLf6

2023-04-14 04:28:57 instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks." https://t.co/wtM9tRVQ2D >

2023-04-14 04:28:48 "Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, >

2023-04-14 04:27:11 "GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer." >

2023-04-14 04:23:41 @timnitGebru "Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of “powerful digital minds.” This, by design, ascribes agency to the products rather than the organizations building them." -@timnitGebru https://t.co/LTgpTJILkr

2023-04-14 04:20:52 "Congress needs to ensure corps are not using people’s data w/p their consent, &

2023-04-14 01:15:01 @davidschlangen Looks amazing!

2023-04-14 00:49:25 Hey @JeffDean -- you can't "make nice" after causing harm without first making reparations. Meg's well deserved recognition by Time isn't yours to comment on. Not until you've addressed the harm you did. https://t.co/8LKwO0vBDV

2023-04-13 22:36:39 RT @DAIRInstitute: Did you miss #StochasticParrotsDay? You can now find the recordings here. https://t.co/W8PWYHVA2m

2023-04-13 22:22:49 #StochasticParrotsDay recordings are now available! Huge thanks to DAIR institute for hosting this, @timnitGebru for organizing such amazing panels, @mmitchell_ai and @mcmillan_majora for moderating, @alexhanna for producing and all the amazing panelists! https://t.co/rlwmLpYQsu

2023-04-13 18:41:56 I've known for a long time that @mmitchell_ai is AWESOME so it's very satisfying to see her awesomeness validated in this way :) https://t.co/Vf6e0ZmDjj

2023-04-13 18:35:52 RT @alienelf: I JUST LEARNT THAT @alexhanna &

2023-04-13 16:41:40 @mmitchell_ai So well deserved!!! Congrats

2023-04-13 15:02:59 RT @AJLUnited: The government is still using IDme to access tax accounts after promising to stop after many complaints. Read @jovialjoy 's…

2023-04-13 13:36:03 RT @emilymbender: @SashaMTL p.s. re "the horse is out of the barn". That metaphor is used to express helplessness, a "there's-nothing-we-ca…

2023-04-13 13:35:57 RT @emilymbender: @SashaMTL Is the horse out of the barn? Do we just have to stand by and watch this go down? Indeed not. We've collective…

2023-04-13 13:35:47 RT @emilymbender: This is a great summary by @SashaMTL of the environmental and human costs of so-called "AI" technology. https://t.co/Yji

2023-04-13 13:35:27 RT @parismarx: If you’re still trying to wrap your head around ChatGPT, you should listen to my conversation with @emilymbender. She lays…

2023-04-13 13:14:33 RT @techwontsaveus: This week @emilymbender joins @parismarx to discuss why large language models and tools like ChatGPT are not intelligen…

2023-04-12 20:46:41 RT @tante: If you use the "Microsoft Sparks of AGI" paper to argue for whatever "AI" hype at least be aware that you put yourself in a euge…

2023-04-12 20:19:51 RT @mmitchell_ai: Okay. Inspired by news &

2023-04-12 17:01:47 @AngliPartners @SashaMTL Credit for identifying the #TESCREAL bundle (and naming it) goes to @timnitGebru and @xriskology !

2023-04-12 16:49:50 @SEFrench @SashaMTL That's so perfect!

2023-04-12 16:22:30 @SashaMTL p.s. re "the horse is out of the barn". That metaphor is used to express helplessness, a "there's-nothing-we-can-do-now" attitude. But do people with escaped horses really say "Oh well, was nice knowing ya horsey"? I'd guess probably not.

2023-04-12 16:21:31 @SashaMTL Let's heed @SashaMTL 's call to get engaged with the regulatory process! Relatedly: https://t.co/Zkfvos9G1Z >

2023-04-12 16:19:57 @SashaMTL Is the horse out of the barn? Do we just have to stand by and watch this go down? Indeed not. We've collectively handled other sources of pollution (e.g. lead in gasoline, CFCs harming the ozone layer) before and we can do it again. >

2023-04-12 16:18:39 @SashaMTL Unknown environmental costs, non-reproducible science, data theft, and exploitative labor practices. And for what? A shiny toy to play with for the masses + the ability to claim "AGI" (while blocking scrutiny of the claim) for OpenAI and other #TESCREAL adherents. >

2023-04-12 16:17:07 @SashaMTL "it’s difficult to carry out external evaluations and audits of these models since you can’t even be sure that the underlying model is the same every time you query it. It also means that you can’t do scientific research on them, given that studies must be reproducible." >

2023-04-12 16:15:56 @SashaMTL "with ChatGPT, [...] thousands of copies of the model are running in parallel [...] generating metric tons of carbon emissions. It’s hard to estimate the exact quantity of emissions this results in, given the secrecy and lack of transparency around these big LLMs." >

2023-04-12 16:14:10 This is a great summary by @SashaMTL of the environmental and human costs of so-called "AI" technology. https://t.co/YjiGbflBnu >

2023-04-12 15:27:53 RT @jennaburrell: This interview with @alondra on @ezrakelin was very satisfying, particularly hearing her call for more public participati…

2023-04-12 04:27:56 RT @NoraPoggi: I attempted to summarize Stochastic Parrots Day, full of brilliant experts sharing invaluable insights on AI and calls to ac…

2023-04-11 20:56:25 RT @alexhanna: To me, it speaks to something of the lack of an epistemic core to AI research. There's a desire to be grounded in what only…

2023-04-11 20:56:12 RT @alexhanna: Been thinking a lot lately about the irreverence of citation that AI researchers give to non-technical texts. Citations are…

2023-04-11 19:25:03 RT @timnitGebru: Essential reading!! https://t.co/aNesE17CMS

2023-04-11 13:26:36 RT @emilymbender: @timnitGebru @xriskology Case in point: Did you know that the "sparks of AGI" paper takes it definition of "intelligence"…

2023-04-11 13:26:29 RT @emilymbender: Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics &

2023-04-10 20:44:44 @SebastienBubeck No, the roots of the issues here are racism and white supremacy.

2023-04-10 20:20:40 @SebastienBubeck Those atrocious claims aren't just "litter" on an otherwise blameless field, but rather part of its fabric.

2023-04-10 20:20:06 @SebastienBubeck You might find some useful starting points in the references from this talk: https://t.co/3KDiNyaM4a >

2023-04-10 20:19:15 @SebastienBubeck I'm glad you are disavowing, but that is only the start --- as I lay out in this thread that you are replying to. You need to read up on the harms caused by race science (of which "IQ" is a major part) and work through how those harms relate to the work you are doing. >

2023-04-10 18:39:55 RT @alexhanna: Take the "AGI don't cite Charles Murray" challenge https://t.co/9GZBshZBx6

2023-04-10 18:29:19 @clairesonos Yeah -- I wanted to illustrate the point I was making without also giving their words more life. This seems like a decent strategy. (Learned it from @LeonDerczynski )

2023-04-10 18:28:07 So, if you'd like not to be racist (and I hope &

2023-04-10 18:26:29 That's at the *foundation* of the "sparks of AGI" paper, since that question is asking "is GPT-4 intelligent" and using the definition of intelligence from the editorial given above. >

2023-04-10 18:25:45 @timnitGebru @xriskology Case in point: Did you know that the "sparks of AGI" paper takes its definition of "intelligence" from an editorial signed by 52 scholars *defending* IQ as "not racist" and making assertions like those in these screencaps: >

2023-04-10 18:03:12 @timnitGebru @xriskology You can't take work that's been exposed as racist and "clean it up" with a footnote saying "But not the racist bits". You've got to actually work on being anti-racist. >

2023-04-10 18:02:34 @timnitGebru @xriskology If you want to break that connection, you've got to do the work: read those who have been documenting the harms, understand how those harms relate to the work you were pointing to and interrogate how the concepts you've been drawing on could mean your work is perpetuating harm.>

2023-04-10 18:01:13 @timnitGebru @xriskology And just to be very clear: If your work has been exposed as pointing to eugenicist or otherwise racist underpinnings, it's not enough to just "disavow". >

2023-04-10 17:59:40 @timnitGebru @xriskology Also great for understanding what the #TESCREAL bundle of ideologies is, how they connect, and why any serious work towards improving things for people on this planet should be very clearly distanced from any of that. >

2023-04-10 17:58:22 Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics &

2023-04-10 16:16:54 RT @emilymbender: @ChrisMurphyCT https://t.co/G3hpHgEKeQ

2023-04-10 14:26:08 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Thank you. And that is my point exactly: linguistics is relevant to the broader discourse of the social impact of these technologies. I am here **as a linguist** making my contribution. To say that I am a CS or AI researcher is to erase the relevance of linguistics.

2023-04-10 14:24:54 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yes, I endeavor to speak only from my expertise, which is (computational) linguistics. Here are some publications that are representative: https://t.co/z1F7fEBCMn https://t.co/kwACyKvDtL https://t.co/rkDjc4kDxj

2023-04-10 14:19:01 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yeah -- I definitely am spending a lot of time talking to the media these days, but my expertise is in linguistics and the media come to me to cut through the #AIhype, which I use linguistics to do. That's not the same thing as being an AI researcher (nor computer scientist).

2023-04-10 14:03:38 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 No website that I maintain says that. Here is my website: https://t.co/gMe04yP96Q

2023-04-10 13:49:49 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 So reply to @sarahkendrew then? I'm neither in AI nor in CS.

2023-04-10 04:59:06 How is this even legal? https://t.co/CThoHAMNBO

2023-04-10 04:49:42 @ChrisMurphyCT https://t.co/G3hpHgEKeQ

2023-04-09 20:51:27 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 And everything about AI (and a lot about CS) these days is enormous power grabs. So I think it is really important to stand up for the value of research *outside* these areas.

2023-04-09 20:50:55 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I am mixed up in the "AI ethics" conversation because I find that the perspective of linguistics is important to help steer things away from societal harm. But that doesn't make me an AI researcher. >

2023-04-09 20:50:24 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I appreciate that you framed this as a not the superset --- but it's still a miss. My work in linguistics (including in compling) has never been motivated by the project of "AI". >

2023-04-09 19:39:48 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Finally, to say that "counting" me as something I'm not is a compliment is saying that what I actually am is somehow less than. No thank you.

2023-04-09 19:39:18 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 If you think those things are "AI", what's the difference between them and say spreadsheet software? >

2023-04-09 19:38:49 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yeah, it's not a compliment to erase my field and claim it as CS. Linguistics is worthwhile in its own right. Similarly, do you think of spell check as "AI"? How about computational methods in support of lexicography? Data mining of EHRs to match patients to clinical trials? >

2023-04-09 14:25:50 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I am a linguist, not a computer scientist. My degrees are all in linguistics and I have been a prof in the Dept of Linguistics at UW since 2003. My field (computational linguistics/NLP) is an interdisciplinary field and not a subfield of CS (much less a subfield of AI).

2023-04-09 13:24:30 RT @emilymbender: .@ChrisMurphyCT I'd like to set the record straight. I can understand how the reaction of the tech world to your tweet wa…

2023-04-08 23:29:24 RT @TaliaRinger: Just caught this incredible talk by @timnitGebru on eugenics, "AGI," and the TESCREAL ideologies. I'm so glad this exists

2023-04-08 16:38:49 RT @anggarrgoon: @emilymbender @ChrisMurphyCT You're my senator, @ChrisMurphyCT . Prof. Bender is right, and I or other linguists watching…

2023-04-08 13:42:46 Postscriptum: My offer to speak with you or someone in your office stands! https://t.co/6wz5bJzEzH

2023-04-08 13:42:00 I look to you as a Senator who represents the interests of the people --- not corporations, and not just the wealthy --- and so I hope that you will bring that perspective and policy-making expertise to this issue as well.

2023-04-08 13:41:16 My frustration on seeing your tweet was not with you, but with the way that your tweet reflected the viewpoints of the corporate interests (Google, Microsoft) and longtermist AI cultists (OpenAI, Future of Life Institute) --- suggesting that they had your ear. >

2023-04-08 13:40:11 But whatever the regulatory outcome is, it should be produced through a democratic process that centers the perspectives of those experiencing the harms of so-called AI systems now, as we lay out in our statement. >

2023-04-08 13:38:50 @ChrisMurphyCT I advocate for transparency (see prev tweet), accountability (purveyors of so-called generative AI systems should be accountable for their output -- be it slander, dangerous medical advice, privacy violations, etc). >

2023-04-08 13:37:53 @ChrisMurphyCT From the statement put out by the four listed authors of the Stochastic Parrots paper recently: >

2023-04-08 13:35:47 @ChrisMurphyCT I and many others have been calling for regulation of so-called AI systems, based on shared governance, meaning that we absolutely need our elected officials to be centrally involved. >

2023-04-08 13:34:38 .@ChrisMurphyCT I'd like to set the record straight. I can understand how the reaction of the tech world to your tweet was unpleasant, but please know that for myself (and many others) we were emphatically not trying to keep policymakers away from this topic. >

2023-04-07 23:52:16 @LeonDerczynski @SashaMTL Kinda exists? https://t.co/0mqzLjBTxs

2023-04-07 23:47:54 @mmitchell_ai Bummer &

2023-04-07 20:22:36 RT @mmitchell_ai: It's always an honor to be covered in any news publication. At the same time, I am pretty frustrated with the NYT. Let's…

2023-04-07 20:22:11 @alex And thank you @mmitchell_ai for calling out how this is part of a larger system that erases the work of people on the lower end of power differentials. https://t.co/oROngobMqX

2023-04-07 20:21:38 Thank you, @alex. I'm fully fed up with this pattern. I came up with that phrase, and used it (with my co-authors) in our paper. But then someone with a lot of fame &

2023-04-07 17:49:08 I'd say it was cathartic, at least for me. I hope the audience thought so too! https://t.co/5pGxWCyYDl

2023-04-07 17:45:14 RT @alexhanna: If you missed the last Mystery AI Hype Theater 3000 with @emilymbender and me, no worries! We figured out how to do VOD, so…

2023-04-07 14:58:40 RT @alexhanna: Join us in two hours, as we read the GPT-4 "System Card" so you don't have to.

2023-04-07 12:49:47 Today! https://t.co/FK68w2VIXj

2023-04-06 20:10:57 @ShannonVallor @DanielaTafani @mmitchell_ai @ayahbdeir @willie_agnew But "beekeeping" isn't ambiguous between the activity (humans taking care of bees) and some other, mythological autonomous entity. "AI" is, and that's why the phrase becomes insidious.

2023-04-06 19:23:56 RT @haleyhaala: Any good studies on positivism and interpretivism in #NLProc and computational social science? Looking for sources!

2023-04-06 17:54:56 Tomorrow! https://t.co/FK68w2VIXj

2023-04-06 02:52:07 @TaliaRinger It's so frustrating.

2023-04-06 02:02:07 @mmitchell_ai @ShannonVallor @ayahbdeir @willie_agnew For UW RAISE I advocated for the acronym actually being Responsibility in AI Systems and Experiences, rather than Responsible AI Systems and Experiences, since I don't like the ambiguity of "Responsible AI" where one reading is that the AI itself is responsible.

2023-04-05 21:59:09 RT @ross_minor: Welp, This is it folks. Twitter has blocked API access for third-party clients, including those that make the site more acc…

2023-04-05 21:36:15 @DaniShanley @alexhanna Yes -- You can see previous episodes here. (But note there's some delay .. Ep 9 &

2023-04-05 13:17:04 RT @schock: https://t.co/K7MZuAnV1l

2023-04-05 13:06:30 RT @emilymbender: If it seems like the world of "AI" and "AI ethics" is moving too fast, I'd like to point out that the fundamental problem…

2023-04-05 13:06:16 RT @emilymbender: Thank you @SashaMTL for pointing the spotlight where it matters!

2023-04-05 13:03:02 RT @emilymbender: Who else is feeling buried under #AIhype after the past few weeks? If you’re ready for some cathartic BS shoveling, come…

2023-04-05 13:02:56 RT @emilymbender: #MAIHT3k Ep 8 is now up! @alexhanna and I greeted the new year by taking on the #ChatGPT hype + of course, some Fresh AI…

2023-04-04 19:16:33 RT @alexhanna: By the way, @DAIRInstitute videos have a new home! https://t.co/qSImGM2EaN and MAIH3K has a new channel -- https://t.co/bnWj

2023-04-04 19:07:05 @alexhanna And if you want to catch the next episode live, deets are here: https://t.co/FK68w2VIXj

2023-04-04 19:06:42 #MAIHT3k Ep 8 is now up! @alexhanna and I greeted the new year by taking on the #ChatGPT hype + of course, some Fresh AI Hell. https://t.co/G3n0Ku89dg >

2023-04-04 19:05:39 RT @rachelmetz: A really smart, nuanced piece by ⁦@SashaMTL⁩. As she notes, ⁦@timnitGebru⁩, ⁦@ruha9⁩, ⁦@rajiinio⁩ (and many more!) have pus…

2023-04-04 18:04:06 @alexhanna We plan to dig through the GPT4 "system card", the "sparks" fan fiction novella, and the "skynet is falling" letter.

2023-04-04 18:02:29 Who else is feeling buried under #AIhype after the past few weeks? If you’re ready for some cathartic BS shoveling, come join me and @alexhanna as we dig out from under all of this on the next episode of MAIHT3k. Friday April 7 9:30-10:30am Pacific Time https://t.co/ETRqVjeTrh

2023-04-04 17:45:21 RT @AINowInstitute: As @timnitgebru, @emilymbender, @mcmillan_majora and @mmitchell_ai made clear, we need more“focus on the very real and…

2023-04-04 17:44:41 Thank you @SashaMTL for pointing the spotlight where it matters! https://t.co/UUlFoLjFge

2023-04-04 13:20:42 RT @merbroussard: Tomorrow! AI Cyber Lunch: Meredith Broussard on "Confronting Race, Gender, &

2023-04-04 13:01:19 RT @emilymbender: "Imagine looking at the list of your published papers 10 years from now: do you want it to be longer, or containing more…

2023-04-03 19:41:03 "Imagine looking at the list of your published papers 10 years from now: do you want it to be longer, or containing more things that you are proud of long-term?" Wise words from several #NLProc scholars thinking about what it means to do science in our field: https://t.co/07Rbd2ltmd

2023-04-03 18:24:51 RT @AASchapiro: A good reminder from @DAIRInstitute to stay alert to current AI harms. https://t.co/F7b5HhYmKC Lots of stuff is under the…

2023-04-03 18:00:44 @benoitfrenay Not exactly framed that way, but this talk is relevant, I think: https://t.co/3KDiNyaM4a

2023-04-03 17:32:30 We recorded this episode of Factually! with @adamconover before Blake Lemoine got the press all riled up by claiming that LaMDA was sentient. Everything I said was still relevant: https://t.co/iVmcVmkISO

2023-04-03 17:31:35 We recorded this interview (for @marketplace tech) *before* the "AI pause" letter dropped. Everything I said was still relevant. https://t.co/NOZ4hUKktK >

2023-04-03 17:30:58 If it seems like the world of "AI" and "AI ethics" is moving too fast, I'd like to point out that the fundamental problems are in fact relatively unchanging and keeping an eye on the people involved can be a good anchor. Two cases in point: >

2023-04-03 16:59:43 RT @Marketplace: Thousands of experts are sounding alarms about a potential dark future created by AI. Computational linguist @emilymbend…

2023-04-03 15:35:24 RT @zeitonline: Should we be afraid of superintelligent machines? Nonsense, says AI ethicist @emilymbender in an interview. Dangerou…

2023-04-03 13:36:59 RT @emilymbender: To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI saf…

2023-04-03 13:24:10 RT @shashib: Got a lot of good #ai insights from @emilymbender in conversation with @meghamama https://t.co/viGUh7ySmJ

2023-04-03 03:09:12 A bunch of AI researchers high on their own supply wrote a ridiculous letter and got famous people including a certain billionaire man-child to sign, and in the process misappropriated our work. So we speak up and somehow we're at fault? I think NOT.

2023-04-03 02:49:17 RT @timnitGebru: What kills me is that THE SAME DUDES who call themselves such &

2023-04-03 02:39:15 RT @timnitGebru: "Why would you, a CEO or executive at a high-profile technology company...proclaim how worried you are about the product…

2023-04-03 02:28:35 Yes, we need regulation. But as we said: "We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities." https://t.co/YsuDm8AHUs

2023-04-03 02:27:11 It's frankly infuriating to read a signatory to the "AI pause" letter complaining that the statement we released from the listed authors of the Stochastic Parrots paper somehow squandered the "opportunity" created by the "AI pause" letter in the first place. >

2023-04-03 02:25:54 If the call for "AI safety" is couched in terms of protecting humanity from rogue AIs, it very conveniently displaces accountability away from the corporations scaling harm in the name of profits. >

2023-04-03 02:24:45 If (even) the people arguing for a moratorium on AI development do so bc they ostensibly fear the "AIs" becoming too powerful, they are lending credibility to every politician who wants to gut social services by having them allocated by "AIs" that are surely "smart" and "fair".>

2023-04-03 02:23:00 #AIhype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT) then it's easier to sell them as reasonable information access systems (they are NOT). >

2023-04-03 02:21:55 To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people is anathema to ethical development of the technology. >

2023-04-02 23:54:29 RT @timnitGebru: I recommend that everyone read this entire thread &

2023-04-02 22:46:20 @jamesofputney @TaliaRinger Yeah, I haven't read it but from all I hear, his book is terrible. That seems kinda orthogonal to this discussion tho?

2023-04-02 22:09:37 @TaliaRinger My guess is that's two-fold: 1) ChatGPT made the experience of playing with these models much more widely accessible 2) MacAskill's recent promo activities for his book (and thus weird longtermist AGI doom fantasies)

2023-04-02 01:16:55 RT @mmitchell_ai: The AI ethics idea of "think about short-, mid- and long-term harms" is constantly regurgitated as if it's "JUST think ab…

2023-04-01 20:36:44 I appreciate this (humorous, but also informative!) explainer video by @adamconover --- and am especially tickled by the Stochastic Parrots shout out (and quote) https://t.co/o49c4Qm42j

2023-04-01 15:13:00 RT @mmitchell_ai: TIRED: AI Apocalypse WIRED: Governance structures! Wait why is no one excited.

2023-04-01 15:05:42 RT @schock: This statement is very powerful: https://t.co/risQH5CtsK

2023-04-01 14:33:51 RT @emilymbender: "Accountability properly lies not with the artifacts but with their builders."

2023-04-01 13:21:05 RT @cfiesler: So anyway, as a reminder, whereas I think that speculation is a key skill for technologists, the point of e.g. the Black Mirr…

2023-04-01 13:10:38 RT @cfiesler: Some of my work focuses on ethical speculation. How can we think through potential harm of tech before it's released instead…

2023-04-01 13:09:18 RT @timnitGebru: Means a lot coming from THE Sherrilyn Ifill. I think we're going to claim @emilymbender at DAIR even though she's at Unive…

2023-04-01 13:09:10 RT @techwontsaveus: “The current race towards ever larger ‘AI experiments’ is not a preordained path where our only choice is how fast to r…

2023-04-01 13:07:44 RT @STS_News: It’s nice to have voices of reason on this stuff.

2023-03-31 23:44:56 RT @DiverseInAI: #StochasticParrots, the 2021 @FAccTConference paper on large language models by @emilymbender @timnitGebru @mcmillan_major…

2023-03-31 22:08:49 RT @mmitchell_ai: There are clearly foreseeable long-term AI harms. To address them, regulatory efforts should focus on transparency, accou…

2023-03-31 21:47:25 RT @SIfill_: This letter by @timnitGebru &

2023-03-31 21:17:35 RT @amyjko: My absolute favorite line: 'We should be building machines that work for us, instead of "adapting" society to be machine readab…

2023-03-31 21:12:40 RT @xriskology: This statement ,a response to the recent FLI "open letter" on AI, is so very important. I wish @TIME would give one of thes…

2023-03-31 20:46:34 RT @_alialkhatib: finally, an open letter about AI actually worth retweeting

2023-03-31 20:46:18 RT @kharijohnson: “The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, whi…

2023-03-31 20:46:08 RT @alexhanna: "We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are r…

2023-03-31 20:20:25 RT @mmitchell_ai: As I was privately babbling out my own diatribe on the FLI letter yesterday, I was honored to be pinged by @emilymbender…

2023-03-31 20:07:09 RT @DAIRInstitute: Read the statement from #StochasticParrots authors @emilymbender @timnitGebru @mcmillan_majora and @mmitchell_ai here:…

2023-03-31 19:59:52 RT @timnitGebru: Since we've been looking for more things to do, @emilymbender @mmitchell_ai @mcmillan_majora and I wrote a statement about…

2023-03-31 19:58:51 "Accountability properly lies not with the artifacts but with their builders." https://t.co/VgHmh8VdoW

2023-03-31 19:55:15 Statement from the listed authors of Stochastic Parrots on the “AI pause” letter https://t.co/YsuDm8AHUs "Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices." w/@timnitGebru @mmitchell_ai and @mcmillan_majora

2023-03-31 18:59:03 @martinjanello @scyrusk Huh? I am definitely standing up to AI hype and not part of an organization that is building AI (slow or fast).

2023-03-31 17:41:13 @martinjanello @scyrusk Would you care to clarify what you mean by "both sides" here?

2023-03-31 16:50:41 A journalist recently reflected to me that those of us standing up to #AIhype are generally not paid to do so --- in stark contrast to those peddling the hype. So it's particularly gratifying to know we're being effective. Thank you, @scyrusk! https://t.co/2SUjZZ1mcV

2023-03-31 12:22:25 RT @Soccermatics: The Future of Life Institute is a problem. Being in same age-group (lower end maybe)/cultural background (8-bit progra…

2023-03-31 12:10:34 RT @erikve: Two research fellowships in #NLProc available at the University of Oslo, focusing on event extraction in the domain of armed co…

2023-03-31 11:25:24 RT @Soccermatics: For the Future of Lifers the rules don't seem to apply. They don't need to write detailed articles explaining their think…

2023-03-31 11:25:21 RT @Soccermatics: There are lots more... and if I get some time later I will share more. But as I write the list my embarrassment turns to…

2023-03-31 02:56:15 RT @scyrusk: In my ethics class, I presented "the letter". First, I showed the content of the letter and the first few signatures. Then,…

2023-03-31 01:40:49 RT @mmitchell_ai: It's so weird to me that it's the AI Ethics crowd getting constantly bashed for "fear mongering" because we describe syst…

2023-03-30 23:49:57 RT @emilymbender: Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long…

2023-03-30 21:29:36 RT @billt: Yes to this. SF narratives don’t make a good basis for public discourse, and we journalists should resist them and look more wid…

2023-03-30 19:55:08 @mmitchell_ai Oh no! I hope you heal quickly and get some time to rest.

2023-03-30 13:01:12 RT @emilymbender: My thread from last night on the hypey "open letter" as a blog post: https://t.co/zuE2A39W5F #LLM #AIhype #GPT4

2023-03-30 00:52:13 RT @mmitchell_ai: Super helpful article about "the letter" from @chloexiang! Makes the connection to longtermism that some have been confus…

2023-03-30 00:05:05 Now as a blog post: https://t.co/zuE2A39W5F

2023-03-29 23:25:59 RT @timnitGebru: https://t.co/8DnD2Kc9ye

2023-03-29 21:31:43 My thread from last night on the hypey "open letter" as a blog post: https://t.co/zuE2A39W5F #LLM #AIhype #GPT4

2023-03-29 19:57:42 RT @solarpunkcast: I'm begging anyone interested in AI to listen to the researchers and not the tech bros. AI is already dangerous without…

2023-03-29 19:32:56 RT @tante: PR as open letter https://t.co/aEYmgMRhLb

2023-03-29 19:10:55 @MarkBrakel @FLIxrisk You wanna argue that you aren't longtermist? List your funders, make sure you don't have any longtermists on your board, and stop publishing alarmist open letters that are dripping with xrisk-style AI hype.

2023-03-29 19:10:01 @MarkBrakel @FLIxrisk From your "Funding" page (which doesn't actually list your funders): "With the exception of Jaan Tallinn, who has served on FLI’s Board of Directors since its founding, these donors do not influence FLI’s positions" And re Jaan Tallinn: https://t.co/VCiap7aka7

2023-03-29 17:26:52 @fabio_cuzzolin It's exhausting.

2023-03-29 16:04:35 RT @SashaMTL: Best take on "the letter" so far by @ruchowdh, who else? (alluding to signatories of the letter such as John Wick, Sam Altma…

2023-03-29 13:24:37 RT @danmcquillan: "It turns out that AI is harmful, but we really, really want it to work and be the future of humanity, so can we please p…

2023-03-29 13:20:50 RT @xriskology: Absolutely amazing thread here. Very much worth reading to the end: https://t.co/oiAT36m8Dl

2023-03-29 13:20:24 RT @timnitGebru: This is what they don't want to talk about. https://t.co/THy1EjMOiW

2023-03-29 12:58:32 RT @emilymbender: Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerf…

2023-03-29 12:57:36 RT @emilymbender: Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripp…

2023-03-29 12:46:16 RT @SashaMTL: My favorite part of yet another amazing thread from @emilymbender ! There are definitely parts of the letter that I can get b…

2023-03-29 12:44:14 RT @djleufer: Hot take on the letter for a moratorium on training systems more powerful than GPT-4 Anything worthwhile in it was already s…

2023-03-29 04:49:19 Always check the footnotes https://t.co/pOjvn2rGFl

2023-03-29 04:24:09 Broke the threading: https://t.co/nquBe2nzMY

2023-03-29 04:04:56 Two corrections: 1) Sorry @schock for misspelling your name!! 2) I meant to add on "general tests" see: https://t.co/kR4ZA1k7uz

2023-03-29 03:51:03 Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Costanza-Chock and journalists like Karen Hao and Billy Perrigo.

2023-03-29 03:50:25 Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerful." Listen instead to those who are studying how corporations (and govt) are using technology (and the narratives of "AI") to concentrate and wield power. >

2023-03-29 03:47:29 Also "the dramatic economic and political disruptions that AI will cause". Uh, we don't have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment). >

2023-03-29 03:47:10 Yes, there should be robust public funding but I'd prioritize non-CS fields that look at the impacts of these things over "technical AI safety research". >

2023-03-29 03:46:48 Yes, there should be liability --- but that liability should clearly rest with people &

2023-03-29 03:46:38 Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you've encountered synthetic text, images, voices, etc.) >

2023-03-29 03:46:26 Some of these policy goals make sense: >

2023-03-29 03:45:56 Uh, accurate, transparent and interpretable make sense. "Safe", depending on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype >

2023-03-29 03:45:44 They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal." >

2023-03-29 03:45:34 Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources). >

2023-03-29 03:44:59 Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI". >

2023-03-29 03:44:47 Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they're really building AI will consider it framed like this? >

2023-03-29 03:44:05 I mean, I'm glad that the letter authors &

2023-03-29 03:43:39 On the GPT-4 ad copy: https://t.co/OcWAuEtWAZ >

2023-03-29 03:42:29 On the "sparks" paper: https://t.co/5jvyk1qocE >

2023-03-29 03:42:11 Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT4. ROFLMAO. >

2023-03-29 03:40:54 And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes. >

2023-03-28 04:02:38 @n_vpatel @TonyHoWasHere Thank you!!

2023-03-28 03:55:56 Oops: sentiment machines should have been sentient machines. That was my typo but maybe @TonyHoWasHere can fix it. I so doubt the existence of such things I can't even type it, apparently. Also, I quickly wrote those comments between prepping two classes that start this week.

2023-03-28 03:28:24 @TonyHoWasHere @thedailybeast Quote continues: “But the folks selling those systems (notably OpenAI) would rather have policymakers worried about doomsday scenarios involving sentiment machines.”

2023-03-28 03:27:53 “We desperately need smart regulation around the collection and use of data, around automated decision systems, and around accountability for synthetic text and images,” -- me to @TonyHoWasHere at @thedailybeast https://t.co/AXGklM2VcZ >

2023-03-27 21:30:27 RT @NannaInie: Very proud to present a 3 minute teaser for our CHI LBW: Designing Participatory AI: Creative Professionals’ Worries and Ex…

2023-03-27 21:30:23 RT @LeonDerczynski: What do creative professionals think of generative AI? Here's a video from a (peer reviewed!) scientific study, to appe…

2023-03-27 20:49:24 I wonder if the folks who think GPT-X mapping from English to SQL or whatevs means it's "intelligent" also think that Google Translate is "intelligent" and/or "understanding" the input?

2023-03-27 19:42:52 @lathropa @alexhanna That's fine!

2023-03-27 19:09:30 @lathropa @alexhanna Ugh, no. Also, we usually work on textual artifacts, not videos.

2023-03-27 18:59:53 @afsteelersfan Yes: https://t.co/6S0OAthML7

2023-03-27 18:59:00 RT @_alialkhatib: remember when that idiot said he's a stochastic parrot? and now people are trying to say GPT is *more* than a stochastic…

2023-03-27 18:04:49 But people want to believe SO HARD that AGI is nigh. Remember: If #GPT4 or #ChatGPT or #Bing or #Bard generated some strings that make sense, that's because you made sense of them.

2023-03-27 18:01:05 What's particularly galling about this is that people are making these claims about a system that they don't have anywhere near full information about. Reminder that OpenAI said "for safety" they won't disclose training data, model architecture, etc. https://t.co/OcWAuEtWAZ >

2023-03-27 17:58:14 (Some of this I see because it's tweeted at me, but more of it comes to me by way of the standing search I have on the phrase "stochastic parrots" and its variants. The tweets in that column have been getting progressively more toxic over the past couple of months.) >

2023-03-27 17:57:28 Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent." Spoiler alert: It's not. Also, stop being so credulous. >

2023-03-27 17:05:24 @ChrisMurphyCT You can see my public-facing work here: https://t.co/XEc34KgwKG

2023-03-27 17:00:03 @ChrisMurphyCT Senator, that is incorrect, but I'm sure the marketing department at OpenAI appreciates your spreading this misinformation. Please have a staffer read up on what's going on with #AIhype and where the real dangers are. I'm happy to spend time talking with someone in your office.

2023-03-27 14:18:11 @LeonDerczynski Or be beneficial, depending on where in that >

2023-03-27 12:45:56 RT @safiyanoble: All of this. And also, it’s criticism of the models.

2023-03-26 21:38:23 @boydgraber Fun! One more connection to muppets here: https://t.co/kR4ZA1k7uz

2023-03-25 23:13:16 RT @Abebab: i'm SICK SICK SICK of all the hype and inaccurate and actively misleading narrative around LLMs every where i look be warned,…

2023-03-25 20:04:22 @tdietterich The quote tweet functionality is right there, if you want to share your realizations with the world, rather than addressing them to people who already know.

2023-03-25 20:03:42 @pgcorus lol -- you're claiming the mantle of "working for justice" and then in the same tweet pointing to some (unpeer-reviewed, btw) nonsense from one of the most prominent proponents of modern digital physiognomy? This is what the mute button is for. Buh-bye.

2023-03-25 20:00:40 @tdietterich I'm well aware of this. Not sure why you're mansplaining at me about it.

2023-03-25 19:57:39 @chirag_shah Next four posts: #Bing #ChatGPT #privacy #Microsoft https://t.co/InZujzgvZh

2023-03-25 19:55:42 @chirag_shah First four posts: https://t.co/dmh3WhnLsB

2023-03-25 19:53:34 It seems to me that this is yet another inherent problem to the idea that LLMs trained to simulate conversation would be a beneficial approach to information access (also one that @chirag_shah and I did not anticipate in our #CHIIR2022 paper). Screenshots follow. >

2023-03-25 19:52:22 Carl Bergstrom has a banger thread over on Mastodon about some serious #privacy problems with #Bing #GPT. You can find the thread at this link: https://t.co/jbeMgaqVlr And in screencaps below.

2023-03-25 18:58:36 Excellent thread debunking yet more #AIhype from the NYT (the same publication famous as a platform for anti-trans nonsense) https://t.co/1rAgK7IF8m

2023-03-25 18:52:49 RT @ProfNoahGian: these AI apocalypse estimates are completely unscientific, just made-up numbers, there's nothing meaningful to support th…

2023-03-25 15:55:25 @pgcorus I don't understand your point at all, but you did instruct me to "look at" a section of the paper I co-authored... If you're trying to say that our arguments no longer hold, I assure you they are all still valid.

2023-03-25 15:47:24 @pgcorus Are you telling me to read my own paper?

2023-03-25 14:50:51 Your LLMs aren't in need of protecting. They don't have feelings. They aren't little baby proto-AGIs in need of nurturing.

2023-03-25 14:50:02 Love to see how people complain about "criticism lobbed at LLMs" &

2023-03-25 14:45:54 @kathrynbck Thank you!! I'll ask on Monday about cites.

2023-03-25 14:17:24 RT @ShannonVallor: The most depressing thing about GPT-4 has nothing at all to do with the tech. It’s realising how many humans already be…

2023-03-25 14:13:28 RT @danmcquillan: looks like 'usefully wrong' is the new 'alternative facts' #AI #GPT4 #ChatGPT "Microsoft tries to justify A.I.‘s tendenc…

2023-03-25 13:50:37 RT @emilymbender: Q for #sociolinguistics #lazyweb: What are your favorite papers (or books) about the way that speakers negotiate meaning?

2023-03-25 13:50:33 RT @emilymbender: Reading about #ChatGPT plug-ins and wondering why this is framed as plug-ins for #ChatGPT (giving it "capabilities") rath…

2023-03-24 21:59:08 @yvonnezlam Thank you!

2023-03-24 21:58:59 @evanmiltenburg Thank you!

2023-03-24 21:58:51 @heatherklus Thank you!

2023-03-24 20:48:13 @rharang But why "powered"? That is, why is "AI" providing "power", rather than say functionality?

2023-03-24 20:41:42 @othernedwin This was NOT a request for #ChatGPT propaganda, TYVM.

2023-03-24 20:19:22 Another request for references --- what is good to read for the fundamentals of VUI (voice user interface) or chatbot design? Thx!

2023-03-24 20:18:10 Another metaphor I'm curious about: "AI" as "fuel" or "power" --- when people talk about "AI-powered technology" or "AI that fuels your creativity/curiosity". This seems to suggest that the AI is autonomously producing something... Where are my metaphor theorists at?

2023-03-24 20:14:59 Nevermind, I know why: This is #OpenAI yet again trying to sell their text synthesis machine as "an AI". #MathyMath #AIHype

2023-03-24 20:14:51 Reading about #ChatGPT plug-ins and wondering why this is framed as plug-ins for #ChatGPT (giving it "capabilities") rather than #ChatGPT as a plug-in to provide a conversational front-end to other services. https://t.co/OLHluhJ8Gx

2023-03-24 19:57:35 Q for #sociolinguistics #lazyweb: What are your favorite papers (or books) about the way that speakers negotiate meaning?

2023-03-24 19:16:55 RT @SashaMTL: Indeed, this fact is glossed over in all of the sparkles of AGI papers (as well as in all of the propaganda accompanying…

2023-03-24 15:15:53 @ndiakopoulos @emilybell You jumped into my mentions, to be defensive. Meanwhile, I see your pinned tweet. Calling for "nuance" while promoting a book titled "Automating the News"? I'll remain skeptical, thanks.

2023-03-24 15:06:48 @ndiakopoulos @emilybell So-called generative AI is an oil spill in our information ecosystem. I'm countering the people out there selling it as a reliable or useful source of information. If that makes you feel defensive, I wonder what it is that you are up to?

2023-03-24 14:59:07 @sabpenni @emilybell Which academic paper about ChatGPT?

2023-03-24 14:16:54 Lots of wisdom here! https://t.co/C4DMibXSlT

2023-03-24 14:16:47 RT @ruchowdh: I kinda hate that responsible AI has come full circle to the uneducated yet highly opinionated pontificating on topics they k…

2023-03-24 14:16:29 RT @ruchowdh: Fourth and most importantly - we need better ways of curating useful and structured public input on how to improve models WIT…

2023-03-24 14:16:21 RT @ruchowdh: Third, we cannot keep this paradigm where the world is effectively a testing ground for “research”

2023-03-24 14:16:11 RT @ruchowdh: Here’s what I think are the tangible problems and what’s changed - first this tech is easier to access. This revolution is le…

2023-03-24 14:15:47 RT @ruchowdh: It’s fun and easy to talk about things you won’t be accountable for - like a technology that you claim is minimum decades awa…

2023-03-24 02:33:46 @BritneyMuller On construct validity in general: https://t.co/kR4ZA1k7uz On Bar specifically, Ep 10 of Mystery AI Hype Theater 3000 (not yet released, but eventually to be found with the others): https://t.co/yZs162tjbL

2023-03-23 22:37:05 RT @Abebab: "let them eat LLMs" is what I hear every time I see people/orgs say they want to "alleviate poverty with LLMs"

2023-03-23 22:35:16 RT @strubell: Thanks, Vijay! This is absolutely correct. To those who are concerned that I'm not engaging in normal scientific discourse an…

2023-03-23 18:42:32 Well, this promises to be entertaining! (And I also just downloaded the latex source. At least the first claim here checks out.) https://t.co/7aQMjvJXM5

2023-03-23 15:57:59 @xriskology @birchlse https://t.co/mo2XX0x4ZK

2023-03-23 14:19:56 @javisamo Please don't -- even if you are doing this to make a good point, the world does not need any more synthetic text floating around in it.

2023-03-23 13:29:06 And finally: "We conclude with reflections on societal influences of the recent technological leap" --- I'm not sure I even want to look to see what they have to say there.

2023-03-23 13:28:22 Comic interlude "In our exploration of GPT-4, we put special emphasis on discovering its limitations," (But apparently none on the limitations of their 'tests' for AGI.) >

2023-03-23 13:23:17 I guess one function of this novella is as a litmus test for journalists. Anyone who chooses to cover it as a story about "AGI being just around the corner" rather than "AI hype masquerading as research" clearly is not doing a reliable job covering this beat. https://t.co/mo2XX0x4ZK

2023-03-23 13:19:01 Pièce de résistance: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." >

2023-03-23 13:15:43 And "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." >

2023-03-23 13:15:05 From the abstract of this 154 page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models." >

2023-03-23 13:14:02 Remember when you went to Microsoft for stodgy but basically functional software and the bookstore for speculative fiction? arXiv may have been useful in physics and math (and other parts of CS) but it's a cesspool in "AI"—a reservoir for hype infections https://t.co/acxV4wm0vE

2023-03-23 13:05:01 RT @emilymbender: Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey…

2023-03-23 13:04:55 RT @emilymbender: Was so looking forward to this episode of @RadicalAIPod with @merbroussard and they did not disappoint! For @merbroussard…

2023-03-23 13:04:48 Hey journalists covering this story, talk to Casey! https://t.co/QaptVwrcbn

2023-03-23 00:29:12 RT @mmitchell_ai: Had fun talking to @strwbilly about Google's Bard release. One thing I explained is how companies say products are "an e…

2023-03-23 00:24:05 @davidchalmers42 And I stand by my statement that your original tweet is carrying water for those peddling AI hype. It doesn't define "AI", it suggests that we should be impressed with the text synthesis machines. And your follow up suggests that these so-called "AI tasks" are likewise valuable.

2023-03-23 00:02:55 RT @cfiesler: Just throwing this out there: I'm a tech ethics &

2023-03-22 23:14:19 Was so looking forward to this episode of @RadicalAIPod with @merbroussard and they did not disappoint! For @merbroussard neither the tech nor its social context is a black box, and she is so good at making the explanations approachable &

2023-03-22 20:58:49 RT @linakhanFTC: 1. Swathes of the economy now seem reliant on a small number of cloud computing providers. @FTC is seeking public input…

2023-03-22 18:06:26 RT @DLilloMartin: Registration is now open for the 2023 LSA Linguistic Institute, themed “Linguistics as Cognitive Science: Universality an…

2023-03-22 17:29:02 RT @mmitchell_ai: Had a big groan on G's framing of Bard. One thing that stood out: Google saying that one "collaborates" with Bard, not th…

2023-03-22 17:28:08 RT @timnitGebru: "Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.…

2023-03-22 15:51:18 Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey about #Bard... From the Stochastic Parrots paper, written in late 2020 and published in March 2021: w/@timnitGebru @mmitchell_ai @mcmillan_majora https://t.co/wrOIGKB999

2023-03-22 14:22:58 RT @emilymbender: More from the FTC! https://t.co/AqqIKcYVl8 A few choice quotes (but really, read the whole thing, it's great!): >

2023-03-22 01:37:00 RT @SashaMTL: My dudes, asking an LLM *any* question about itself (its training data, carbon footprint, abilities, etc.) is just contributi…

2023-03-21 14:16:56 Let me again express my gratitude for regulators who refuse to be blown away by so-called "AI capabilities" and instead look to how existing regulation might apply.

2023-03-21 14:15:21 "Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors." "The burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them." https://t.co/AqqIKcYVl8 >

2023-03-21 14:14:19 "Should you even be making or selling it?" "Are you effectively mitigating the risks?" "Are you over-relying on post-release detection?" "Are you misleading people about what they’re seeing, hearing, or reading?" https://t.co/AqqIKcYVl8 >

2023-03-21 14:12:36 "The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose." https://t.co/AqqIKcYVl8 >

2023-03-21 14:11:36 More from the FTC! https://t.co/AqqIKcYVl8 A few choice quotes (but really, read the whole thing, it's great!): >

2023-03-21 13:40:57 RT @emilymbender: Several things that can all be true at once: 1. Open access publishing is important 2. Peer review is not perfect 3. Com…

2023-03-21 12:18:30 RT @chirag_shah: #CHIIR2023 folks - here's that paper (with open access) with @emilymbender I was referring to yesterday. You can see how a…

2023-03-20 21:58:56 RT @BritneyMuller: For those unfamiliar@mmitchell_ai is: Leading AI Researcher (focus on ethics, inclusion, diversity, fairness &

2023-03-20 15:57:18 @RadicalAIPod @merbroussard Can't wait to listen!!

2023-03-20 15:57:09 RT @RadicalAIPod: One of those "interviewing @merbroussard about her new amazing book in an hour but have to condense 10 million questions…

2023-03-20 15:48:50 RT @mer__edith: This is great and to the point (finally!) Tldr the problem is the surveillance business model, not the fact that one of th…

2023-03-20 15:47:27 RT @emilymbender: Citing a paper that's available through the @aclanthology by pointing to an arXiv version instead is at least the equival…

2023-03-20 15:46:43 RT @LeonDerczynski: This paper makes a tonne of odd claims about the future. I wonder if it is ever going to be reviewed (and survive), or…

2023-03-20 15:46:19 @aclanthology Meanwhile, Google Scholar pointing to arXiv versions first is like ... governments providing subsidies to oil companies.

2023-03-20 15:45:39 Citing a paper that's available through the @aclanthology by pointing to an arXiv version instead is at least the equivalent of putting something recyclable in the landfill, if not equivalent to littering. Small actions that contribute to the degradation of the environment.

2023-03-20 15:43:03 Shout out to the amazing @aclanthology which provides open access publishing for most #compling / #NLProc venues and to all the hardworking folks within ACL reviewing &

2023-03-20 15:40:52 Yes, this is both a subtweet of arXiv and of every time anyone cites an actually reviewed &

2023-03-20 15:39:12 Several things that can all be true at once: 1. Open access publishing is important 2. Peer review is not perfect 3. Community-based vetting of research is key 4. A system for by-passing such vetting muddies the scientific information ecosystem

2023-03-20 15:36:48 RT @STS_News: Thinking of putting together a reading list for understanding our current technology bubble and its apparent demise. Mine wou…

2023-03-20 15:23:19 RT @sharongoldman: New in The AI Beat: After the launch of GPT-4, the dangers of 'stochastic parrots' remain, said @timnitGebru @emilymbend…

2023-03-20 15:17:12 RT @VentureBeat: It was another epic week in generative AI, including the launch of GPT-4. But the dangers of 'Stochastic Parrots' remain,…

2023-03-20 14:05:02 His follow up tweet doesn't make it any better. What makes these "AI tasks"? Again, critical distance is required. https://t.co/J13YVg9aO8

2023-03-20 14:03:53 Philosopher deep in the "LLMs are magic" cult looks to curry favor with the self-styled magicians. (It's always super disappointing to see a fellow humanist lose all critical distance &

2023-03-20 13:08:18 When we published Stochastic Parrots (subtitle Can Language Models Be Too Big?) People asked how big is too big? Our answer: too big to document is too big to deploy. https://t.co/6hmmyDyVjW

2023-03-20 13:03:41 RT @mmitchell_ai: In Silicon Valley culture, the groupthink seems to be that it's impossible to keep track of the data a language model is…

2023-03-19 14:13:29 RT @chrismoranuk: A quick thread on AI and misinformation. Open AI’s own Safety Card says it “has the potential to cast doubt on the whole…

2023-03-18 20:24:56 RT @jordipc: ChatGPT and other chatbots do incredible things. But they can also make a real mess. They will bring new problems. There is a group of…

2023-03-18 14:48:23 En Español, with thanks to @jordipc for reporting: https://t.co/2vX8wvgdEV https://t.co/4GCdRwBlzJ

2023-03-18 14:15:55 Look what arrived!! Really excited to read @merbroussard 's latest https://t.co/KdPhfbUHnn

2023-03-17 23:07:47 RT @asmelashteka: #StochasticParrotsDay was an amazing and insightful event. https://t.co/mkQrKZLHAA 1/n

2023-03-17 20:58:33 @mirabelle_jones @timnitGebru @safiyanoble @mmitchell_ai Thank you for compiling this!

2023-03-17 20:58:18 RT @mirabelle_jones: Absolute pleasure to attend Stochastic Parrots Day with some of my heroes @emilymbender @timnitGebru @safiyanoble @mmi…

2023-03-17 19:43:33 This was amazing -- huge thank you to @timnitGebru for being the driving force behind, and to all of the panelists for sharing their wisdom and all of the audience for joining us &

2023-03-17 19:16:11 RT @timnitGebru: 9)The participants who had great discussions and also created and input this wealth of resources! https://t.co/kKUfLqRqH5

2023-03-17 19:15:47 RT @timnitGebru: And that was a wrap for #StochasticParrotsDay. 1) Thank you so much to my listed co-authors @mcmillan_majora @emilymbend…

2023-03-17 19:12:50 RT @histoftech: “We can’t keep building technologies that collide so violently with our idea of what it means to be human.” —Nanjala Nyabo…

2023-03-17 19:00:54 RT @mmitchell_ai: Can we really become enlightened if we get answers on everything without producing any thoughts ourselves? -- Great point…

2023-03-17 19:00:32 RT @Carmen_NgKaMan: So grateful that @Nanjala1 brought up the need to talk AI across geographies! E.g. many African nations are affected by…

2023-03-17 18:57:02 RT @CopyrightLibn: Nanjala Nyabola (@Nanjala1) talking about AI futures &

2023-03-17 18:48:26 Sarah Andrew at #StochasticParrotsDay calling on all people building tech (esp. 'AI') to really feel the extent to which you are holding everyone's human rights in your hands ... and behave accordingly.

2023-03-17 16:55:52 RT @alexhanna: A reading list is being put together from the chat in #StochasticParrotsDay! A whole syllabus here. https://t.co/25NbKiZHaa

2023-03-17 03:12:59 RT @timnitGebru: "Emily M. Bender...tweeted that this secrecy did not come as a surprise to her. “They are willfully ignoring the most basi…

2023-03-17 02:43:56 RT @mmitchell_ai: Reminder that #StochasticParrotsDay is tomorrow! Come join me, @emilymbender, @timnitGebru, @mcmillan_majora and guests f…

2023-03-16 20:01:39 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan So I see the value in building solidarity as you describe --- but not with the folks who think they are actually building "intelligence". (And yes, I try to explain 'parrot' as in reference to the metaphorical sense of the verb meaning repeating without understanding.)

2023-03-16 19:43:49 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan So, again, I think it is valuable to work against normalizing the use of those terms --- because the way in which corporate interests and bizarre EA/longtermist fantasies are infecting this discourse should not be normalized.

2023-03-16 19:43:01 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan And *most* "AGI" discourse is riddled with "citations to the future" and other pseudo-science --- and I don't see nearly enough distancing from that from those who might be doing serious scientific work under the rubric of "AI". >

2023-03-16 19:42:16 @CriticalAI @xriskology @timnitGebru @nitashatiku It takes 2 seconds to explain "pattern matching". Seems like a useful act of resistance. (Channeling @danmcquillan here.) >

2023-03-16 19:41:36 @CriticalAI @xriskology @timnitGebru @nitashatiku I think there *is* a loss rhetorically in behaving as if "ANI" were a reasonable term. Not least because of the ways in which the project of "AI" is bound up with eugenicist notions of "intelligence". >

2023-03-16 19:33:16 And lolsob @ @ilyasut taking every last opportunity to brag about "capabilities": “Things get complicated any time you reach a level of new capabilities.” Your trash heap of toxic garbage isn't "capable". It's just a lot of data and a lot of compute.

2023-03-16 19:31:27 @chloexiang @VICE @SashaMTL @rao2z @_willfalcon "@_willfalcon said that although it’s fair to want to prevent competitors from copying your model, OpenAI is following a Silicon Valley startup model, rather than one of academia, in which ethics matter." Stark implication: Ethics don't matter to SV. Good to have that out there

2023-03-16 19:29:42 @chloexiang @VICE @SashaMTL @rao2z @_willfalcon "It really bothers me that the human costs of this research (in terms of hours put in by human evaluators and annotators) as well as its environmental costs (in terms of the emissions generated by training these models) just get swept under the rug" - @SashaMTL >

2023-03-16 19:28:13 I appreciate this reporting from @chloexiang at @VICE on #OpenAI -- with quotes from @SashaMTL @rao2z @_willfalcon and others: https://t.co/gby4Bvy1eU >

2023-03-16 19:19:43 @GFuterman @interacciones So, like, don't ever attach random text synthesis machines to the nuclear command system? That's a very easy risk to prevent --- and deflating #AIhype is a key part of doing so.

2023-03-16 19:17:51 @CriticalAI @xriskology @timnitGebru @nitashatiku Huh? Why define "AGI" as "what may one day exist"? Why even use "ANI" or "AI" for pattern matching at scale? It's all misleading terminology and I see no value in ceding the ground that "AGI" (whatever people fantasize that to be) may be developed at some later date.

2023-03-16 19:14:18 RT @miss_eli: I wrote 'Lawyer Ex Machina #35: Happy Stochastic Parrots Day'. I really wanted to ignore the various GPTs for just one week,…

2023-03-16 19:01:43 Because somehow this isn't clear to many: There's a difference between opting to work for free (e.g. constructing evaluations for #OpenAI, providing labels to them) and having your work stolen (text or art included without consent in training data). https://t.co/qt2gFPHpo2

2023-03-16 18:28:34 @balloonleap Typos mean the tweet is authentic right?

2023-03-16 16:41:08 RT @mmitchell_ai: It will be a lot easier for OpenAI to declare they've solved AGI by hiding the details of their work.

2023-03-16 16:38:01 @Grady_Booch Thank you!

2023-03-16 15:58:48 RT @merbroussard: Hey @themarkup, I’m reading about the UK gov’t ban on TikTok. I’m curious: what kind of data could one get from the TikTo…

2023-03-16 13:38:46 I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is. To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.

2023-03-15 22:45:41 Just in case anyone isn't tracking: this is the clown show that $MSFT chose to be in bed with. https://t.co/vzES2DmUmr

2023-03-15 21:51:38 RT @xriskology: Great thread, worth reading.

2023-03-15 21:16:07 @SashaMTL Yeah...

2023-03-15 20:12:57 @andersonbcdefg Not without a regulator setting up the parameters they should be testing for, before releasing anything. Again -- at their expense.

2023-03-15 20:11:14 @andersonbcdefg We should demand that the companies releasing the oil spills (I mean models) do that at their own expense before releasing them.

2023-03-15 20:09:29 Folks, I encourage you to not work for @OpenAI for free: Don't do their testing Don't do their PR Don't provide them training data https://t.co/xF9eIDo4jT

2023-03-15 20:07:56 Oh look, @openAI wants you to test their "AI" systems for free. (Oh, and to sweeten the deal, they'll have you compete to earn GPT-4 access.) https://t.co/HqmURxF9dT

2023-03-15 20:07:16 @GFuterman @sama Tell me how, exactly, MSFT+OpenAI are fighting against unaccountable corporate power?

2023-03-15 19:53:02 @neilturkewitz @schock @schock is asking great questions as always, but I have a policy to not waste any time reading synthetic text.

2023-03-15 19:52:17 But given all the xrisk rhetoric (and @sama 's blogpost from Feb) it may also be possible that at least some of the authors on this thing actually believe their own hype and really think they are making choices about "safety".

2023-03-15 19:50:52 A cynical take is that they realize that without info about data, model architecture &

2023-03-15 19:46:37 ... Going forward we plan to refine these methods and register performance predictions across various capabilities before large model training begins, and we hope this becomes a common goal in the field." Trying to position themselves as champions of the science here &

2023-03-15 19:46:06 Also LOL-worthy, against the backdrop of utter lack of transparency was "We believe that accurately predicting future capabilities is important for safety. >

2023-03-15 19:44:42 For more on missing construct validity and how it undermines claims of 'general' 'AI' capabilities, see: https://t.co/kR4ZA1k7uz >

2023-03-15 19:44:05 I also lol'ed at "GPT-4 was evaluated on a variety of exams originally designed for humans": They seem to think this is a point of pride, but it's actually a scientific failure. No one has established the construct validity of these "exams" vis a vis language models. >

2023-03-15 19:42:37 But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9). >

2023-03-15 19:40:42 Things they aren't telling us: 1) What data it's trained on 2) What the carbon footprint was 3) Architecture 4) Training method >

2023-03-15 19:39:40 Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole. >

2023-03-15 17:35:58 RT @timnitGebru: One of the ppl who filtered out outputs of Open AI models told me they wouldn't "wish it on my worst enemy." https://t.co/

2023-03-15 16:16:32 RT @sabagl: I'm excited to dial in to this tomorrow - the conversations between these AI researchers and experts could not be more timely a…

2023-03-15 00:09:03 @annargrs @evanmiltenburg @complingy @LeonDerczynski Exactly: What did we do, how did it go, what did we learn from it.

2023-03-14 21:01:05 RT @mark_riedl: The timing of this makes things interesting. See you there. https://t.co/xL4QGSoQ4M

2023-03-14 20:52:11 Feeling exhausted by the #AIhype press cycles? Finding yourself hiding from GPT-4 discourse? Longing for a dose of reality? Join us on Friday for Stochastic Parrots Day: https://t.co/x4auSSDctW

2023-03-14 20:39:50 It was an easy prediction to make, given @OpenAI's track record for sure. Still, I could wish that it wasn't so thoroughly validated. https://t.co/OPbWQfgHhS

2023-03-14 20:38:56 That is, not beyond the obvious first pass questions of: Is this a use case where synthetic text is even appropriate? Very few use cases are." >

2023-03-14 20:38:46 Without clear and thorough documentation of what is in the dataset and the properties of the trained model, we are not positioned to understand its biases and other possible negative effects, to work on how to mitigate them, or fit between model and use case. >

2023-03-14 20:38:33 Some that would be appropriate to the GPT models include Data Statements for Natural Language Processing (Bender &

2023-03-14 20:38:11 Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance. >

2023-03-14 20:37:49 "One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on. >

2023-03-14 20:37:09 A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said: #DataDocumentation #AIhype >

2023-03-14 18:58:44 RT @merbroussard: It’s launch time! Happy publication day to my latest book, MORE THAN A GLITCH: CONFRONTING RACE, GENDER, AND ABILITY BIAS…

2023-03-14 17:42:16 @danielzklein @SemanticScholar It was @qpheevr who pointed that out: https://t.co/sxihlRVLs6

2023-03-14 17:41:24 Again: https://t.co/Jexm27DXlK

2023-03-14 17:40:20 Yeah, not surprised in the least. They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity. #OpenAI #DataDocumentation https://t.co/PuLHnPYE0l

2023-03-14 17:39:01 RT @benmschmidt: I think we can call it shut on 'Open' AI: the 98 page paper introducing GPT-4 proudly declares that they're disclosing *no…

2023-03-14 17:37:41 @danielzklein @SemanticScholar It's more that I wondered where they could have come from --- I had probably just been assuming they were the authors' own abstracts, until I perceived the "tldr" tag.

2023-03-14 16:34:36 RT @rcalo: Excited to welcome @aylin_cim as a faculty associate of the @TechPolicyLab. Aylin is a field leader in AI bias. https://t.co/iai

2023-03-14 16:34:27 @SemanticScholar @ai2_allennlp Synthetic text, even synthetic summaries, carry risks - and putting it out into the world unlabeled exacerbates those risks.

2023-03-14 16:33:53 @SemanticScholar I call on @SemanticScholar and @ai2_allennlp to lead by example with transparency here and flag these as "automatic TLDR" in the email -- because I doubt most people would know to check. >

2023-03-14 16:32:39 I noticed that today's alert from @SemanticScholar included a "TLDR" for each paper. Suspicious that that might be automatically produced, I went and checked. And sure enough, it is: https://t.co/Q54p70KvMB >

2023-03-14 16:30:49 @LeonDerczynski @evanmiltenburg @tellarin @annargrs @ryandcotterell (With room for notes on the proceedings of COLING?)

2023-03-14 15:14:30 @Dr_Atoosa @davidchalmers42 @sama Featuring non-information, you mean. Why are you wasting people's time suggesting that they read synthetic text? Why are you platforming #ChatGPT 's non-information?

2023-03-14 14:14:15 @evanmiltenburg @annargrs @LeonDerczynski Probably not, but also: we wrote that thinking that it would also be useful to students or anyone else with limited visibility into the internals of the review process...

2023-03-14 14:07:18 @evanmiltenburg @annargrs @LeonDerczynski Thanks, Emiel! We tried to publish that as a journal paper, but it was rejected (the editor thought the audience for it would be too narrow ¯\_(ツ)_/¯ ) so we went with the tech report instead. It would be great if such things could be in the Anthology!

2023-03-14 13:04:55 RT @emilymbender: MSFT lays off its responsible AI team The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton…

2023-03-14 02:44:42 @mihaela_v @ZoeSchiffer @CaseyNewton

2023-03-14 02:35:14 @mihaela_v @ZoeSchiffer @CaseyNewton My apologies: the linked article is clearer.

2023-03-14 01:07:03 At the very least, we should be working to educate those around us not to fall for the hype---to never accept "AI" medical advice, legal advice, psychotherapy, etc.

2023-03-14 01:06:15 I call on everyone who is close to this tech: we have a job to do here. The techcos where the $, data and power have accumulated are abandoning even the pretext of "responsible" development, in a race to the bottom. >

2023-03-14 01:03:05 And they will tell us: You can't possibly regulate effectively anyway, because the tech is moving too fast. But (channeling @rcalo here): The point of regulation isn't to micromanage specific technologies but rather to establish and protect rights. And those are enduring. >

2023-03-12 15:49:05 RT @TorrentTiago: @complingy @HadasKotek @linguistMasoud Adding @GlobalFrameNet https://t.co/HnK14ipyIx

2023-03-12 15:48:58 RT @complingy: @linguistMasoud If anyone is looking for such connections beyond Twitter, computational linguists can be found in communitie…

2023-03-12 15:48:17 @eyujis @lizweil Yes, we discuss both Searle's thought experiment and Harnad's work in the octopus paper: https://t.co/jpjJcfR6qh

2023-03-11 21:11:08 RT @DAIRInstitute: Come to our Stochastic Parrots Day event on March 18 to hear more from Steven Zapata and many others. You can sign up he…

2023-03-11 20:04:37 @JeffDean @SashaMTL Are you talking about the Stochastic Parrots authors here? Because that's a very strange way to say "I told them to retract the paper or get fired".

2023-03-11 14:27:26 RT @emilymbender: Since we published Stochastic Parrots two years ago, the issues discussed in it have only become more urgent and salient.…

2023-03-11 14:27:02 RT @emilymbender: Linguistics as a field has a lot to contribute to better understanding what large language models can and can't do and ye…

2023-03-10 16:45:27 RT @lizweil: good morning, word nerds. @emilymbender has some thoughts on how to read the Chomsky op-ed. tl

2023-03-10 16:04:59 @jeffadoctor @Abebab Also, @shoshanazuboff makes a detailed analogy between the data grabs of surveillance capitalism and settler colonialism, especially as practiced in the 16th c in her book on surveillance capitalism.

2023-03-10 16:03:52 @jeffadoctor This paper by @Abebab might fit what you're looking for: https://t.co/Fc91OtowsZ

2023-03-10 14:36:13 So, read this, not that: https://t.co/qgWwqhWmpc And thanks again @lizweil for your reporting!

2023-03-10 14:35:39 What matters about language in the context of this tech is that language and meaning are relational, that communication is a joint activity, and that systems set up to mimic the form of language can provide the illusion that they understand, know things, are reasoning. >

2023-03-10 14:27:05 (And the whole debate about whether or not humans have an innate universal grammar is just completely beside the point here.) >

2023-03-10 14:26:48 The ability to render grammaticality judgments (and based on how much data) really isn't the issue. Corporations aren't out there suggesting that we use #ChatGPT to 'disrupt' the industry of judging grammaticality. >

2023-03-10 14:26:18 So it's a real bummer when the world's most famous linguist writes an op-ed in the NYT* and gets it largely wrong. https://t.co/aFyLJvRl7e (*NYT famous for publishing transphobia &

2023-03-10 14:25:50 Linguistics as a field has a lot to contribute to better understanding what large language models can and can't do and yet many don't think to turn to linguists (or don't even really know what linguists do) when trying to evaluate claims about this technology. >

2023-03-10 14:02:50 Since we published Stochastic Parrots two years ago, the issues discussed in it have only become more urgent and salient. Join me, @timnitGebru @mmitchell_ai @mcmillan_majora and an esteemed group of panelists for discussion and reflection, March 17 2023. https://t.co/YuBGS54oWV https://t.co/5EpPJdEZJc

2023-03-10 03:06:49 RT @UCLA_CR_DJ: Stochastic Parrots Day Mar 17 9AM - 12PM PDT w/ @safiyanoble @ 9AM PDT cc: @UCLA

2023-03-09 22:01:06 RT @timnitGebru: This was section 4.2 of stochastic parrots called "Static Data/Changing Social Views" and the analysis was done by @blahti…

2023-03-09 17:46:10 RT @doctorow: This was a dig at the #StochasticParrots paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire…

2023-03-09 17:39:57 RT @doctorow: Gebru's co-author on the Parrots paper was @emilymbender, a computational linguistics specialist at UW, who is one of the bes…

2023-03-09 15:10:22 RT @LucianaBenotti: We took the opportunity of this panel to promote #NAACL2024 which will be in Mexico city in June 2024. I will work so t…

2023-03-09 14:01:49 @philosophybites @TheNewEuropean 3) The paper is jointly first authored, and should properly be cited as Bender, Gebru et al.

2023-03-09 14:01:13 @philosophybites @TheNewEuropean Hey @philosophybites -- 3 corrections: 1) I work at the University of Washington, not Washington University 2) Only one of my co-authors was also at UW. The others were at Google and those who refused to take their names off of the paper were famously fired for it. >

2023-03-09 13:40:28 @cocoweixu Congratulations

2023-03-09 13:27:37 RT @rajiinio: Tech policy proposals that depend heavily on the voluntary cooperation of the tech companies being regulated are so frustrati…

2023-03-09 01:48:41 @cmiciek That's what I meant by Washington University.

2023-03-09 00:47:29 @marylgray Well, I can send you a few other 1980s earworms .... Greatest American Hero, FAME, Cheers

2023-03-09 00:24:17 No shade to Washington University, but I don't work there, and I'm really tired of being described in the press as if I do.

2023-03-09 00:23:44 Hey World: The University of Washington, Washington University, Washington State University, and George Washington University are all DIFFERENT institutions. Please make a note of it.

2023-03-08 23:41:32 @marylgray I think what's going on there is that it's a cover term for the text synthesis and image synthesis systems.

2023-03-08 15:06:09 @elazar_g 1) The scale of these models prohibits in-house/on-device use. 2) Even if it didn't the business model does. 3) The "AI" marketing provides the temptation to divulge data, and that's a problem. But thank you for so kindly sharing your insight.

2023-03-08 14:36:34 RT @mmitchell_ai: Come to this!! Tickets here: https://t.co/kHf69wftFO @timnitGebru explains a LOT MORE of what we're going to do: https:/…

2023-03-08 14:36:04 RT @kenarchersf: Where statisticians see noise, CS people see a god to be worshipped.

2023-03-08 14:35:29 So-called generative "AI" is just text manipulation ... but that also means that whatever data you send into it can be folded into the model for future remixes. And then spit back out to a person who can understand it as information. https://t.co/RsOaJswpZO https://t.co/KDNhijGRnd

2023-03-08 14:16:51 "Emily M. Bender, who alongside her work as an AI ornithologist teaches as a professor of linguistics at the University of Washington" https://t.co/mxk9yuBacj

2023-03-08 13:41:38 RT @AdamCSchembri: Does anyone know if anyone has written guidelines for modality-inclusive language in linguistics? How to use more inclu…

2023-03-07 21:12:59 @haydenfield Followed by: “There seems to be, I would say, a surprising amount of investment in this idea…and a surprising eagerness to deploy it, especially in the search context, apparently without doing the testing that would show it’s fundamentally flawed.”

2023-03-07 21:12:18 “A lot of the coverage talks about them as not yet ready or still underdeveloped—something that suggests that this is a path to something that would work well, and I don’t think it is,” me to @haydenfield in this piece for Tech Brew: https://t.co/KuxSEHPqXX

2023-03-07 18:14:03 RT @lizweil: Humans of earth: time to put on your party hats &

2023-03-07 17:55:50 RT @timnitGebru: Changed the number of tickets for our Stochastic Parrots day event. It was capped at 1k before because zoom webinar doesn'…

2023-03-07 16:27:03 @santiviquez Try again now!

2023-03-07 15:23:51 RT @lizweil: &

2023-03-07 14:03:03 RT @LeonDerczynski: It was a different world in NLP when the paper was written - only two years ago! https://t.co/Nqcq4NLf9A

2023-03-07 14:02:48 RT @emilymbender: I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice…

2023-03-07 14:02:33 RT @emilymbender: This was really fun to do --- @cfiesler is so cool (and so are the @RadicalAIPod hosts :). Thank you again for the opport…

2023-03-07 14:02:25 RT @timnitGebru: A lot has happened since we wrote the paper 2 years ago that got @mmitchell_ai &

2023-03-07 14:02:22 Join us for Stochastic Parrots Day on March 17! https://t.co/YuBGS53R7n https://t.co/2oNGfgygXe

2023-03-06 23:08:24 This was really fun to do --- @cfiesler is so cool (and so are the @RadicalAIPod hosts :). Thank you again for the opportunity! https://t.co/qmFG8htf3y

2023-03-06 23:04:38 RT @RadicalAIPod: wow y'all! last week's episode with @emilymbender and @cfiesler about the limitations of #ChatGPT is already one of our m…

2023-03-06 22:12:31 @geomblog Yeah -- so much of the way we talk about algorithms in general (even quite aside from AI) borrows terms that are more appropriate to human cognition. It takes effort to break this habit!

2023-03-06 22:11:30 @MaryJun71373119 No specific instance prompted this thread, but for a sampling, see the artifacts @alexhanna and I take apart in #MAIHT3k https://t.co/yZs162sLmd

2023-03-06 22:09:16 Meanwhile: Getting off the hype train is just the first step. Once you've done that and dusted yourself off, it's time to ask: how can you help put the brakes on it?

2023-03-06 22:07:56 If you feel like it wouldn't be interesting without that window dressing, it's time to take a good hard look at the scientific validity of what you are doing, for sure. >

2023-03-06 22:07:28 Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice. >

2023-03-06 22:06:35 I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice. >

2023-03-06 19:18:26 @tallinzen @mixedlinguist @mmemily17 One of the main functions of my FAQ is that it lets me give myself permission to just not reply to certain things...

2023-03-06 18:16:15 RT @DAIRInstitute: Here's the agenda of the event with @mcmillan_majora @mmitchell_ai @emilymbender @timnitGebru @mark_riedl @safiyanoble @…

2023-03-06 16:23:09 RT @myrthereuver: New blog post! My highlights of the 2023 HPLT winter school on Large Language Models, including talks by @emilymbende…

2023-03-06 15:20:31 RT @MiaD: @emilymbender @60Minutes @timnitGebru Didn’t realize this wasn’t part of the main segment. Looks like @60Minutes is doing the bar…

2023-03-06 14:50:21 RT @emilymbender: MSFT and OpenAI (and Google with Bard) are doing the equivalent of an oil spill into our information ecosystem. And then…

2023-03-06 03:01:13 RT @bobehayes: A good, long read about @emilymbender and her views on #AIHype, #ArtificialIntelligence and more >

2023-03-06 01:39:49 RT @parismarx: now with generative AI, there’s @timnitGebru, @emilymbender, @danmcquillan, just to name a few, and i’m sure more in the pro…

2023-03-05 22:45:10 RT @emilymbender: @techwontsaveus @timnitGebru Listening to @timnitGebru reflect on how surprisingly fast the things we warned about in the…

2023-03-05 22:45:05 @techwontsaveus @timnitGebru Like, hey, what if the narcissistic billionaires with savior complexes focused on actually saving the planet, instead of building machines they imagine to be gods?

2023-03-05 22:43:35 @techwontsaveus @timnitGebru Listening to @timnitGebru reflect on how surprisingly fast the things we warned about in the Stochastic Parrots paper came to pass made me wonder: What if instead $Billions were being poured into clean energy (or making the grid alternative energy ready or carbon capture or...)?

2023-03-05 22:42:00 I really enjoyed this episode of @techwontsaveus with @timnitGebru https://t.co/RXrcZry2KM >

2023-03-05 21:42:29 RT @schock: "You don’t need a machine to predict what the FTC might do when those claims are unsupported."

2023-03-05 21:34:59 @ShumingHu The blog post is <

2023-03-05 21:00:46 @DavidJPoole @aaas @AAASmeetings I see -- my apologies for making assumptions. (Your tweet sounded to me like the kind of joke a hearing person would make at deaf people's expense.)

2023-03-05 20:46:29 @DavidJPoole @aaas @AAASmeetings Please delete this tweet. Your "joke" turns on the idea that not being able to hear means not being able to attend to what people are telling you, which is denigrating to Deaf people.

2023-03-05 20:30:56 @StenoMatt @aaas @AAASmeetings @mezmalz Please read the quoted thread to see why this suggestion is completely inappropriate.

2023-03-05 20:09:04 @aaas @AAASmeetings Accessibility isn't impossible, it just requires planning and dedication of resources. I thank @mezmalz for raising this issue and call on @AAASmeetings to get their act together. The time to start planning for accessibility for the 2024 meeting is NOW.

2023-03-05 20:07:48 When D/deaf scientists explained to @AAAS @AAASmeetings what they needed in terms of how to work with the interpreters to make the event a success--so we all could benefit from their science--@AAASmeetings should have listened. >

2023-03-05 20:07:14 I am super disappointed in @AAASmeetings here -- isn't @AAAS at its core about science communication? If we care about communication, then we prioritize what's needed to make it successful. >

2023-03-05 15:08:26 @cfiesler I like how he thinks LLMs are "generic NLP models". As if LLMs are all there is to NLP. Clearly a well-versed expert.

2023-03-05 14:03:54 RT @emilymbender: Finally had a moment to read this statement from the FTC and it is https://t.co/DVBEJLcv6C A few choice quotes:

2023-03-05 14:03:30 @cfiesler Random dudes: "enlightening" the world on every platform.

2023-03-05 13:40:58 RT @mezmalz: #AAASmtg I need to say something. My experience was lousy y’all dropped the ball on deaf people. None of us connected with…

2023-03-05 13:40:34 RT @mezmalz: Our work today- we were trying to tell people that deaf children who do not receive early sign language exposure struggle with…

2023-02-28 05:29:25 @EricHallahan So, your story is that you looked at my profile and noted my gender, but missed the part where it says "Professor"?

2023-02-28 05:24:14 @EricHallahan But every last bit of your (uninvited) engagement with me has been concern trolling at best, and weirdly disrespectful. So, I guess it's fitting that you also signal disrespect in this way.

2023-02-28 05:23:13 @EricHallahan It is not a sign of respect to go out of your way to mention my gender. It is a sign of flagrant DISrespect to use an honorific where none is expected and pass right over the ones that a) I've earned and b) reflect my expertise.

2023-02-28 05:22:30 @EricHallahan I indicate my pronouns in my profile so that anyone who is referring to me in the third person doesn't have to guess what they are. I appreciate it when other people do the same.

2023-02-28 05:06:59 So, for those who don't know, using 'Ms' when 'Dr' or 'Prof' would be applicable is not respectful. Quite the opposite really. And that goes doubly when it's in a context where you wouldn't normally use an honorific at all (like prepended to a Twitter handle).

2023-02-28 01:12:24 RT @rharang: You know what makes data really secure, and all but impossible to lose via a breach? Not collecting it in the first place.

2023-02-27 23:42:03 Who's ready for some more Mystery AI Hype Theater 3000? This Friday March 3, 9:30am PT, @alexhanna and I will be joined by special guest @KendraSerra who will share their expertise and help us deflate #AIhype in the legal domain. https://t.co/VF7TD6sYfE #MAIHT3k #MathyMath #LLM

2023-02-27 22:51:17 @CriticalAI See next tweet (after the one you QT'd).

2023-02-27 22:47:44 Which is a very weird way of deciding who to listen to. But also: I don't do predictions, but I have stated some warnings (along with my co-authors) and been dismayed to see them go unheeded.

2023-02-27 22:46:26 And then there are the people who want to know my "credentials" in terms of how many predictions I've made about AI in the last 5 years that have come true. >

2023-02-27 22:45:28 Computational linguistics is and will be just fine, though it's worth working to hold space for visions of our science that see it as something other than a "component of AI" (and I'm working on that, too). >

2023-02-27 22:44:30 I am angry --- but not about that. I'm angry at our system that allows tech bros to concentrate power and create tech that is exploitative and harmful and somehow claim they are doing it for the benefit of humanity. >

2023-02-27 22:43:32 Another common pattern is people thinking I'm "angry" because my field (computational linguistics) has been made obsolete by LLMs. >

2023-02-27 22:42:44 OpenAI isn't listening and won't, no matter what I say. But the rest of the world might, and I think it's worth giving OpenAI and Sam Altman exactly as much derision as they are due, to pop this hype bubble. >

2023-02-27 22:41:55 A variant of this seems to be the assumption that I'm trying to get OpenAI to actually change their ways and that I'd be more likely to succeed if I just talked to them more nicely. >

2023-02-27 22:41:22 Some folks are very upset with my tone and really feel like I should be more gentle with the poor poor billionaire. ¯\_(ツ)_/¯ >

2023-02-27 22:40:14 The reactions to this thread have been an interesting mix --- mostly folks are in agreement and supportive. However, there are a few patterns in the negative responses that I think are worth summarizing: https://t.co/Sbug4eF1js

2023-02-27 18:59:46 RT @eaclmeeting: You want tutorials at #eacl2023 ? We have them! Check out the list of six accepted tutorials online: https://t.co/y7yIb3

2023-02-27 17:08:09 To add to this: someone might well want their work to be *discoverable* via search and yet not included in training data sets. So, like, just using the existing info in robots.txt is not sufficient. https://t.co/dL15USoOi9

2023-02-27 16:50:04 And these folks who invite me to be a "Co-Organizer" of their International Conference on Mechatronics and Smart Systems https://t.co/pgEZJrP356

2023-02-27 16:49:01 Here's another, who have invited me to be a keynote speaker at their "Global Summit on Chemical Engineering and Catalysis" https://t.co/eSJI2PjOYj

2023-02-20 13:57:50 RT @emilymbender: Heard the phrase "stochastic parrots" and curious what that's about? Familiar with our paper and interested in developm…

2023-02-20 13:57:32 RT @emilymbender: "The bots will offer us easy answers. We just have to remember that's not what we should be asking for." Sound advice f…

2023-02-20 13:57:22 RT @emilymbender: You wouldn't take medical advice from this and you shouldn't take it from the tech he's peddling either. Beyond that: Ye…

2023-02-20 04:48:48 RT @alexhanna: A vision of a modern overpaid university administrator: a "prompt engineer" -- or rather, a parrot selector -- who scans ove…

2023-02-20 04:36:28 Literally the second tweet in Altman's thread (i.e. the one after the one Manning retweeted): https://t.co/GXC9lbI1rb

2023-02-20 04:35:57 When you're Associate Director of something called "Human-Centered Artificial Intelligence" but the $$ all comes from Silicon Valley so you feel compelled to retweet the clown suggesting that the poors should have LLM-generated medical advice instead of healthcare. https://t.co/vuVo2OBKrp

2023-02-20 04:11:38 @Etyma1010 @BertCappelle @RemivanTrijp @Linguist_UR @hilpert_martin Do you count undergrads who studied with him?

2023-02-20 01:07:37 RT @cheryllynneaton: We can't even get automatic soap dispensers to recognize people with dark skin. I damn sure don't want an AI medical a…

2023-02-20 00:30:12 "The bots will offer us easy answers. We just have to remember that's not what we should be asking for." Sound advice from @jetjocko https://t.co/eFkngtxMST

2023-02-19 22:29:53 RT @shengokai: I mean it’s not like we’re going on over two decades of people pointing out issues with these very applications. In fact, me…

2023-02-19 20:43:19 @KHabermas It comes from eugenics, in fact. Effective altruism/longtermism is a eugenicist cult. And they're the ones funding this.

2023-02-19 20:36:45 And to be very clear, @percyliang this is on your head, and those of anyone else who treats the US medical licensing exam as a "benchmark" to "SOTA", too. https://t.co/enRuxBBGCD

2023-02-19 20:36:14 The thought that it could somehow be seen as beneficial --- that this is somehow taking care of people who can't afford care --- is so offensive I can't even find the words. Tech solutionism indeed.

2023-02-19 20:34:27 You wouldn't take medical advice from this and you shouldn't take it from the tech he's peddling either. Beyond that: Yes, the US healthcare system has enormous inequity problems. So we should be reforming it so that healthcare is treated as the basic human right that it is. https://t.co/GXC9lbI1rb

2023-02-19 19:54:17 RT @HeidyKhlaaf: This is the harm in publishing "scientific" papers claiming ChatGPT "passed" a medical exam. It actually didn't and it had…

2023-02-19 19:54:05 RT @HeidyKhlaaf: The mental gymnastics to justify using AI for a high-risk application like medical care by pointing to people who can't af…

2023-02-19 14:20:57 Heard the phrase "stochastic parrots" and curious what that's about? Familiar with our paper and interested in developments in the past two years? Join me, @timnitGebru, @mmitchell_ai and @mcmillan_majora + guests for Stochastic Parrots Day, March 17: https://t.co/YuBGS54oWV

2023-02-19 14:11:18 RT @emilymbender: *sigh* WaPo generally has better tech coverage than the famous-for-transphobia NYT, but they did decide to publish screen…

2023-02-19 02:56:34 @jscottwagner @AmandaAskell No algorithmic component. Her QT brought my tweet to the attention of the dregs of Twitter. Some of them did also reply to her tweet, btw. Not sure what your point is?

2023-02-19 02:04:52 @jscottwagner @AmandaAskell I don't need that explained to me thanks. Also, the point isn't so much their behavior but the fact that Amanda's quote tweet brought it to my mentions.

2023-02-18 23:18:03 @ajaxsinger https://t.co/5MMNuWhj6H

2023-02-18 23:17:41 I would have hoped that by now reporters would have learned not to be impressed with the ersatz fluency of large language models. But it seems like each time they get a bit more fluent the reaction is "Okay, NOW let's be impressed." https://t.co/0Xc7WVwBKi

2023-02-18 23:16:47 Bing isn't the sort of thing that can be interviewed any more than a Magic 8-Ball is. >

2023-02-18 23:14:28 *sigh* WaPo generally has better tech coverage than the famous-for-transphobia NYT, but they did decide to publish screenfuls of synthetic (i.e. fake) text. https://t.co/qGdI0DBG1D >

2023-02-18 23:13:22 @ajaxsinger The main problem here is that the Washington Post thought it was newsworthy to print screens and screens of synthetic text.

2023-02-18 22:40:50 @ajaxsinger Definitely not the latter, but I'll have a look.

2023-02-18 22:31:48 @ajaxsinger From the headline, yeah. I'll have to take a look...

2023-02-18 20:01:04 @batsybabs @alexhanna Yes! It takes a while, but we do get the recordings up eventually. You can find eps 1-7 here: https://t.co/yZs162sLmd

2023-02-18 16:01:15 RT @emilymbender: @AmandaAskell If you're all about "making the world a better place", Amanda, I'd urge you to first consider why it is tha…

2023-02-18 15:57:44 @AmandaAskell If you're all about "making the world a better place", Amanda, I'd urge you to first consider why it is that your tweets draw approval and attention from this crowd.

2023-02-18 15:57:13 More replies courtesy of @AmandaAskell's absolutely lovely followers. https://t.co/f5mrmZLtej

2023-02-18 15:54:03 I mostly have avoided (so far) the worst kinds of Twitter harassment, but yesterday @AmandaAskell quote tweeted me and then this happened to my mentions... (continued) https://t.co/qtQBPHPSEV

2023-02-18 06:56:06 RT @tomg_: "Fly your new airplane model for 50,000 hours with plenty of passengers on board, and you'll soon see whether it crash…

2023-02-18 06:55:53 RT @timnitGebru: Mind you this is the CEO of a company adverted as doing “AI safety” funded by 600 MILLION dollars of stolen money by Sam B…

2023-02-18 00:02:24 RT @mmitchell_ai: HEY GUESS WHAT, I HAVE A COVER STORY IN TIME!! jk I don't, I'm not a journalist. But @andrewrchow &

2023-02-17 19:24:38 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Thank you.

2023-02-17 19:15:14 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Finally: Just why? What is the benefit of having it on another site? It's openly available already. If you're worried about discoverability, post a link. Not the document.

2023-02-17 19:14:47 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru If someone accesses it from doccloud, they don't have the full context: that this is a paper that was published in an ACM context. Furthermore: If the paper were to change (not planned), they would be out of sync. >

2023-02-17 19:06:09 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Why in the world would you do that? The paper is open access and people should get it from its actual home in the ACM Digital Library.

2023-02-17 17:27:07 Starting in just minutes! https://t.co/jFkvPWPibH

2023-02-17 14:57:16 Join us in just a couple of hours! #MAIHT3k #MathyMath #AIHype #ChatGPT #NLProc https://t.co/jFkvPWPibH

2023-02-17 13:58:48 RT @emilymbender: The @nytimes, in addition to famously printing lots of transphobic non-sense (see the brilliant call-out at https://t.co/

2023-02-17 02:46:18 RT @_Zeets: Emily Bender already wrote extensively about this nonsense and to urge to be impressed by this technology https://t.co/VbP9WwUL

2023-02-16 23:09:13 RT @Muna_Mire: We have surpassed 1000 NYT contributor signatories. Yesterday, the Times responded to our letter by erroneously identifying…

2023-02-16 22:58:02 Tomorrow! https://t.co/jFkvPWPibH

2023-02-16 22:31:36 Meanwhile, here is some actually good coverage about the current generation of chatbots, from @kharijohnson https://t.co/4UjCFdC87i

2023-02-16 22:26:02 @nytimes @kevinroose In sum, reporting on so-called AI in the NYTimes (famous for publishing transphobic trash) continues to be trash. And you know what transphobic trash and synthetic text have in common? No one should waste their time reading either.

2023-02-16 22:24:44 @nytimes @kevinroose And let's take a moment to observe the irony that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly "identifies". >

2023-02-16 22:23:11 @nytimes @kevinroose That paragraph gets worse, though. It doesn't have any desires, secret or otherwise. It doesn't have thoughts. It doesn't "identify" as anything. And this passes as *journalism* at the NYTimes. >

2023-02-16 22:21:43 @nytimes @kevinroose It didn't. It's a computer program. This is as absurd as saying: "On Tuesday night, my calculator played math games with me for two hours." >

2023-02-16 22:21:11 @nytimes @kevinroose And then here: "I had a long conversation with the chatbot" frames this as though the chatbot was somehow engaged and interested in "conversing" with @kevinroose so much so that it stuck with him through a long conversation. >

2023-02-16 22:19:04 @nytimes @kevinroose First, the headline. No, BingGPT doesn't have feelings. It follows that they can't be revealed. But notice how the claim that it does is buried in a presupposition: the head asserts that the feelings are revealed, but presupposes that they exist. >

2023-02-16 22:17:11 @nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out. >

2023-02-16 22:16:21 @nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the name publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯ >

2023-02-16 22:15:04 The @nytimes, in addition to famously printing lots of transphobic non-sense (see the brilliant call-out at https://t.co/FpDkGjRH4W), also decided to print an enormous collection of synthetic (i.e. fake) text today. >

2023-02-16 22:00:02 @doctorow Thank you for pointing folks to our paper. Perhaps even more closely related to your thread is the paper (and associated media coverage &

2023-02-16 20:58:57 The journalist had the gall to say " I would love to be in touch for future segments though if you may be interested." No thank you, not after this experience.

2023-02-16 20:58:10 (In today's instance, I heard nothing until I sent a query asking what was up... after having rearranged various things and made sure to be ready &

2023-02-16 20:57:27 But I am not okay with being jerked around like this. If I MAKE TIME for you, then at the very least you should respect my time by honoring the request that you made or COMMUNICATING asap if it changes.

2023-02-16 20:56:43 Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.

2023-02-16 20:55:42 If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.

2023-02-16 20:54:29 Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool. BUT:

2023-02-16 05:01:29 For anyone who is keeping track, @KirkDBorne is a credulous hack, spreading misinformation written by a modern phrenologist. https://t.co/R3Cg77ALVp

2023-02-16 04:59:42 @alexhanna Does it for the clicks and yet 380k+ accounts are credulous enough to promote it. It's pernicious.

2023-02-16 04:57:03 Next episode is in two days!! https://t.co/jFkvPWPibH

2023-02-16 04:56:15 RT @alexhanna: While I was at Google, from my former tech lead: to keep quiet and get involved in more technical projects. Turns out tha…

2023-02-16 04:55:36 Mixed in with the despair and frustration is also some pleasure/relief at the idea that through Mystery AI Hype Theater 3000 I have an outlet in which I can give this work (and the tweeting about it) the derision it deserves, together with @alexhanna . >

2023-02-16 04:54:40 NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago. >

2023-02-16 04:53:52 That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it. >

2023-02-16 04:51:30 TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models". #AIHype #MathyMath https://t.co/oz6PAikP5R

2023-02-16 04:18:17 @Leading_India No. It was a deliberate choice not to name the podcast or guest in this thread. Did you really think that was just a mistake?

2023-02-16 03:29:09 RT @LucianaBenotti: They paid crowdworkers in poor countries very little so as "not to disrupt the economy". Guess whether they offer diff…

2023-02-16 03:28:55 RT @haleyhaala: At an @StanfordHAI AI and Education event and reflecting on how scholars navigate expertise in this #interdisciplinary spac…

2023-02-16 02:10:43 I haven't finished it yet, but probably will --- it's one of those train-wrecks you can't look away from, alas.

2023-02-16 02:10:18 3) Because large LMs enable "few-shot learning" it follows that in the near future the amount of training data required to keep them from outputting toxic content will also be minimal. etc. >

2023-02-16 02:09:36 Other howlers: 1) Electronic calculators became cheap and widely accessible in the 1950s. 2) #ChatGPT makes a good Jungian therapist (for help quickly analyzing dreams) >

2023-02-16 02:08:34 The guest also asserted that the robots.txt "soft standard" was an effective way to prevent pages from being crawled (as if all crawlers respect that) &

2023-02-16 02:07:18 Guest blithely claims that large language models learn language like kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language. >

2023-02-16 02:06:19 Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, usually get great guests and was floored by how awful it was. >

2023-02-16 01:30:10 @athundt Prompted by a really interesting discussion at the winter school I was at last week, I'd be really interested to learn about how other fields managed industry interest --- both best practices and things to avoid. (I'm thinking pharma, big oil...)

2023-02-16 01:23:50 Sorry, did I say toys? I mean extra super sophisticated bias reproducing, information ecosystem polluting, plagiarism machines.

2023-02-15 05:39:08 @Y_I_K_ES @alexhanna I was going for quack doctor...

2023-02-15 05:17:00 @luke_stark @alexhanna That feels like it's about to turn into a meme...

2023-02-15 03:36:54 If it quacks like a fake doc ... it might be scams with language models (generative #MathyMaths) in healthcare. Join me and @alexhanna as we take apart some truly appalling examples in the next episode of #MAIHT3k this Friday, Feb 17, 9:30am Pacific. https://t.co/VF7TD6tw5c

2023-02-15 01:44:06 @GretchenAMcC Woah -- that's not the stress pattern I usually have for kiki.

2023-02-15 01:43:54 RT @GretchenAMcC: Roses are red You'll probably agree Which one is bouba And which is kiki https://t.co/YlSPjhk15w

2023-02-14 17:51:51 @rachelmetz @technology Glad to have you on the beat!

2023-02-14 17:50:10 @rachelmetz @technology Woo-hoo!!! Congrats to all involved :)

2023-02-14 16:53:11 RT @Abebab: just a reminder that big tech corps still censor AI ethics work. we've been collaborating with a scholar who works in a big tec…

2023-02-14 14:18:37 RT @rajiinio: Something many often don't consider when discussing "Ethical AI" is the power differential - there is a multi-billion dollar…

2023-02-14 14:00:50 RT @emilymbender: Nothing says "human-centered AI" like casually dismissing the thorough work of those documenting AI harms to, you know, a…

2023-02-14 13:55:09 Early 2023 vibes: Those working in AI ethics have documented many harms associated with this approach, but the #AIHype peddlers are intent on selling "upside" and "promising futures" and have deep pockets for marketing. https://t.co/lz1P7pg5lN

2023-02-14 13:15:45 @ichiro_satoh

2023-02-13 22:37:58 RT @C_Schreyer: Come edit language-y things with me on @Wikipedia! Following in the footsteps of my first edit-a -thon teachers @GretchenAM…

2023-02-13 22:29:32 RT @Abebab: for anyone that thinks certain technologies/tools are 'inevitable', I encourage you to dump that thinking. the technologies/too…

2023-02-13 22:28:55 RT @BlackWomenInAI: "I Still Believe" is more than just a piece of art, it's a reflection of our shared humanity and a call to action to st…

2023-02-13 20:39:58 RT @timnitGebru: When I see schools raising $$ for their "ethics" related initiatives &

2023-02-13 20:39:55 RT @timnitGebru: Never look to those who have most to gain since they're at the top of the hierarchy, for any type of dismantling of power.…

2023-02-13 20:37:07 Nothing says "human-centered AI" like casually dismissing the thorough work of those documenting AI harms to, you know, actual people. https://t.co/lz1P7pg5lN

2023-02-13 14:31:10 @michaelgaubrey @AndrewDCase FTR -- that was a result from standard Bing, not BingGPT. (Which the person who posted it didn't have access to.)

2023-02-13 14:22:43 @JudgeFergusonTX That would only be slightly informative if we actually had information about the training data, so that we could look at the confabulation as reflecting that training data. But since OpenAI isn't open about its training data, there's zero value.

2023-02-13 14:11:00 Minor in the grand scheme of things, but it's still super annoying that folks are now attributing the phrase "Stochastic Parrots" to @sama after he used it in a sophomoric way. He didn't coin the phrase, we did in this paper: https://t.co/kwACyKvDtL

2023-02-13 13:57:05 RT @mart1oeil: Like, wasn't it @timnitGebru and @mmitchell_ai who raised the alarm on this subject (and who got fired from Google because of i…

2023-02-13 13:56:58 RT @mart1oeil: In short, even if he tries to position himself, it really is @emilymbender, @timnitGebru, @mcmillan_majora and @mmitchell_ai who…

2023-02-12 16:03:04 @JudgeFergusonTX Don't know &

2023-02-12 15:28:40 RT @mmitchell_ai: Ok so. In light of much talking about Bing, Bard and "truth", I looked at what the "Stochastic Parrots" paper warned. The first…

2023-02-11 20:02:23 RT @mmitchell_ai: Plug for our event! With @emilymbender and @timnitGebru https://t.co/kHf69wftFO

2023-02-10 19:16:06 RT @alexhanna: "ChatGPT isn't really new but simply an iteration of the class war that's been waged since the start of the industrial revol…

2023-02-10 08:11:35 RT @mmitchell_ai: "Those limitations were highlighted by Google researchers in a...paper arguing for caution w/ text generation...that irke…

2023-02-10 05:48:33 RT @csdoctorsister: Y’all can debate M’s chatbot vs G’s chatbot all day, if you want. The racism, sexism + rest of the -isms in these cha…

2023-02-09 15:30:43 RT @emilymbender: Because big tech is currently all racing to the bottom of this one particular valley (sinkhole? trench?), namely, chatbot…

2023-02-09 15:30:35 RT @emilymbender: I'm not sure which is less surprising: That Bard created a confident sounding incorrect answer, or that no one at Google…

2023-02-09 14:55:29 @stevermeister If you aren't interested enough in my writing to actually read it, I don't see why I should invest anything in your response.

2023-02-09 14:19:30 Itamar clarifies: this is just with normal Bing, not chat Bing.

2023-02-09 13:34:23 (Full disclosure: I haven't been able to repro this, but I rather suspect they've got folks on call playing whack-a-mole with all the egregious responses getting exposed.)

2023-02-09 13:33:50 Here's a cute example, due to Itamar Turner-Trauring (@itamarst@hachyderm.io), who observes that Google gave bad results which were written about in the news—which the new GPT-Bing used as reliable answers. Autogenerated trash feeding the next cycle, with one step of indirection. https://t.co/hERPlEbxeV

2023-02-09 09:41:50 Because big tech is currently all racing to the bottom of this one particular valley (sinkhole? trench?), namely, chatbots for search, I'm re-upping this thread about why that's a terrible idea. https://t.co/dSt8vLtvHl

2023-02-09 08:08:40 Looking forward to this! https://t.co/IqhLKwKBXj

2023-02-08 21:30:49 RT @ltgoslo: Extra talk at the LTG research seminar tomorrow, February 9! Emily M. Bender @emilymbender "Meaning Making with Artificial I…

2023-02-08 18:55:36 I'm not sure which is less surprising: That Bard created a confident sounding incorrect answer, or that no one at Google thought it worth validating the output before using it in a demo. #AIHype https://t.co/MvEzqgvVE8

2023-02-07 20:45:17 RT @ManeeshJuneja: Thank Goodness we have people like Emily - a useful thread

2023-02-07 16:51:09 RT @mmitchell_ai: As usual @emilymbender provides the amazing public service of breaking down AI hype. Check out her earlier piece that for…

2023-02-07 16:11:17 RT @emilymbender: Finally, we get Sundar/Google promising exactly what @chirag_shah and I warned against in our paper "Situating Search" (C…

2023-02-07 16:11:07 RT @emilymbender: "We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan https://t.co/d93p1efaEf I suggest you rea…

2023-02-07 15:16:26 @SashaMTL @ElectricWeegie Some of it is collected here: https://t.co/uKA4tuuwu7

2023-02-07 14:30:27 RT @anetv: A few thoughts on the blog post from Google CEO Sundar Pichai https://t.co/3UVbcsF6AD about what it means to automate knowledge…

2023-02-07 14:20:38 @mywoisme You surely then have a different experience of social media than I do --- my mentions are perpetually filled with reply guys, and no, it wouldn't make sense to take the stance that any given challenge comes from good intent.

2023-02-07 14:19:15 RT @emilymbender: Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack! https:/…

2023-02-07 14:18:55 RT @chirag_shah: Yes, we did warn about this in our #CHIIR2022 paper a year ago and we were told by Google proxies that we were overreactin…

2023-02-07 14:09:10 @mywoisme Those sound valuable --- and again, not relevant to the tech that I was talking about in my thread. You are providing interfaces to specific, curated sets of data (e.g. the jobs DB or the government site) and then from there people can explore the actual details of the data.

2023-02-07 14:07:33 @mywoisme That sounds better than what you originally said -- I encourage you to be very careful how you talk about this.

2023-02-07 13:49:08 @mywoisme I can't guess what it is that you are actually building (since you didn't say), but if it's more like the latter, then it's a total non-sequitur --- and seems to be an attempt to undermine my argument based on irrelevant points and a tokenization of low income people.

2023-02-07 13:48:13 @mywoisme A curated website that includes information about services that people need, which itself embeds a chatbot to help people navigate that website, say, would be a very different proposition. >

2023-02-07 13:47:13 @mywoisme I think you're jumping in here with a non-sequitur. Your first tweet included "We build AI and bots for people on low incomes to access services." My thread was not about "accessing services". It was about chatbots as search engines for the Internet. >

2023-02-07 13:40:32 @mywoisme Chatbots are terrible search engines for anyone. Furthermore, no one is charged to use existing search engines. "Bots serve better" sounds like tech solutionism, and I rather suspect you are selling something.

2023-02-07 13:35:16 @mywoisme Huh? This is a thread about chatbots for search. Are you asserting that people with low incomes somehow don't deserve information access systems that support their information literacy just as much as anyone else?

2023-02-07 11:03:14 RT @gfiorelli1: A wonderful thread, which has nested, in its last tweet, another great thread.

2023-02-07 09:05:41 @djleufer @BuseCett @chiragshah Various presentations of the ideas from that paper here: https://t.co/dSt8vLtvHl

2023-02-07 08:38:20 @WendyNorris @danmcquillan https://t.co/m6zcbzG6pz

2023-02-07 07:56:46 Why aren't chatbots good replacements for search engines? See this thread: https://t.co/MYfVjFBOfe

2023-02-07 07:55:20 Finally, we get Sundar/Google promising exactly what @chirag_shah and I warned against in our paper "Situating Search" (CHIIR 2022): It is harmful to human sense making, information literacy and learning for the computer to do this distilling work, even when it's not wrong. >

2023-02-07 07:52:30 "High bar for quality, safety and groundedness" in the prev quote links to this page: https://t.co/RIKnAVGuwe Reminder: The state of the art for providing the source of the information you are linking to is 100%, when what you return is a link, rather than synthetic text. >

2023-02-07 07:51:10 Next some reassurance that they're using the lightweight version, so that when millions of people use it every day, it's a smaller amount of electricity (~ carbon footprint) multiplied by millions. Okay, better than the heavyweight version, but just how much carbon, Sundar? >

2023-02-07 07:49:12 Let's sit with that prev quote a bit longer. No, the web is not "the world's knowledge" nor does the info on the web represent the "breadth" of same. Also, large language models are neither intelligent nor creative. >

2023-02-07 07:48:21 And then a few glowing paragraphs about "Bard", which seems to be the direct #ChatGPT competitor, built off of LaMDA. Note the selling point of broad topic coverage: that is, leaning into the way in which apparent fluency on many topics provokes unearned trust. >

2023-02-07 07:45:12 And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how? #AIHype #InAweOfScale >

2023-02-07 07:43:08 Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!! There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI". >

2023-02-07 07:40:53 Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack! https://t.co/55U85T0UmZ #MathyMath #AIHype

2023-02-06 15:35:07 "Transformer models and diffusion models are not creative but carceral - they and other forms of AI imprison our ability to imagine real alternatives." -- @danmcquillan

2023-02-06 15:34:38 @danmcquillan "Instead of reactionary solutionism, let us ask where the technologies are that people really need. Let us reclaim the idea of socially useful production, of technological developments that start from community needs." -- @danmcquillan >

2023-02-06 15:34:16 "The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated." -- @danmcquillan >

2023-02-06 15:33:55 @danmcquillan "ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan >

2023-02-06 15:33:36 "We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan https://t.co/d93p1efaEf I suggest you read the whole thing, but some pull quotes: >

2023-02-06 15:30:49 @danmcquillan Thanks -- and nice post! I don't see the second of those, but I do see the link to @IrisVanRooij 's great piece (among many other valuable resources).

2023-02-06 15:16:49 "on-the-fly" as in "post-processing on-the-fly" evokes different images when primed with discussions of (web) crawling and lots of spider metaphors.

2023-02-06 08:28:03 RT @TheNeedling: Space Needle Waiting Whole Life for This Moment: https://t.co/28PRWbUIUr https://t.co/zy5kSPPTv1

2023-02-05 20:08:01 @timnitGebru And vague allegations of missing citations that were so flimsy. Like if some specific thing were missing, they could have pointed us to it to consider adding...

2023-02-05 20:03:47 @timnitGebru Seriously -- today's object lesson in "don't mouth off about what you haven't read". And maybe also: "The more widely discussed a paper is, the more likely you'll get misleading info about what's in it..."

2023-02-05 20:02:40 RT @timnitGebru: I’m so confused. Besides the lie, on what our paper is about, where in the paper do we talk about “other large-scale NLP s…

2023-02-05 18:58:25 RT @emilymbender: Huh -- I rather suspect that Yann hasn't read our paper. Not even the abstract (attached). We suggested, as we wrote the…

2023-02-05 07:59:29 There's no mention of "human-level AI" there though, since that is not a research goal that we are speaking to in that paper. (And it certainly isn't a research goal of mine.)

2023-02-05 07:58:41 And as to the context for Yann's tweet, one of the risks we identify is that the ersatz fluency of language models would deceive researchers into thinking they were building natural language understanding systems when they weren't. (See sec 6.1.) https://t.co/xdzmojB4Zm

2023-02-05 07:55:19 Huh -- I rather suspect that Yann hasn't read our paper. Not even the abstract (attached). We suggested, as we wrote the paper in 2020, that it was prudent to consider the risks, and then gathered what info was available then (from the literature) about what the risks are. https://t.co/32NaMvTITy https://t.co/I8D1tVKYrB

2023-02-04 15:15:05 RT @gliese1337: Hey linguists! How could your subfield be employed in science fiction or fantasy without invoking the Sapir-Whorf hypothesi…

2023-02-04 14:31:31 @BoseShamik @rachelmetz @RadicalAIPod +1 for @RadicalAIPod

2023-02-04 12:49:51 RT @becauselangpod: Anyone could tell

2023-02-04 07:07:52 @venikunche I went to four conferences last year and avoided it. Very careful about masking &

2023-02-03 17:27:17 @poopmachine @rachelmetz @alexhanna Coming soon!

2023-02-03 16:09:31 @aryaman2020 Many aren't actually. Including @schock 's work I was alluding to above: https://t.co/xL8bQrKyI1

2023-02-03 16:07:42 On that last point, see: https://t.co/NA2Kbvuq4H

2023-02-03 16:06:28 And before a thousand more people say this to me: Yes the need for transparency isn't limited to training data. How was it evaluated? How were the data for the RLHF phases collected, who created them? What about the data &

2023-02-03 16:01:35 @DrSyedMustafaA1 Yes agreed.

2023-02-03 15:09:58 RT @emilymbender: And yes it's a problem that OpenAI is not being transparent about what they are unleashing on the world. The public deser…

2023-02-03 15:09:53 And yes it's a problem that OpenAI is not being transparent about what they are unleashing on the world. The public deserves to know what's in the training data for #ChatGPT. But this is about transparency and accountability, not about measuring "intellectual contribution."

2023-02-03 15:05:19 Look to the work of Safiya Noble, Ruha Benjamin, Abeba Birhane, Mar Hicks, Alex Hanna, Deb Raji, Sasha Costanza-Chock and others. Only some of these authors would put things on arXiv (and not all of their work).

2023-02-03 15:03:29 And when I think of "intellectual contributions" to AI research, I'd guess that much of the most important work isn't on arXiv at all. It's in books or journals that many computer scientists seem unwilling to learn about (or take the time to read). >

2023-02-03 15:01:55 Somehow a count of arXiv papers transmutes into a measure of "intellectual contribution". That's hilarious. ArXiv made sense as a countermeasure against slow or closed publishing back in the day. But don't valorize the collection of flags in the flag planting arena. >

2023-02-03 14:55:20 It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore. Bare minimum, with any new tech: 1) How would a stalker use this? 2) What will 4chan do with this? And don't release, not even as alpha or beta, before mitigating those risks. https://t.co/2u89bCDQRl

2023-02-03 14:08:57 RT @emilymbender: Was just perusing @OpenAI 's terms of service and was a little surprised to find this. Are they really saying that the us…

2023-02-03 14:08:04 RT @agstrait: How many more times must we see firms releasing their tech with an easy-to-use interface, then feigning shock when its immedi…

2023-02-03 14:07:52 RT @mmitchell_ai: So I was asked by several journalists last year about predictions for 2023, and described much of what @jjvincent is now…

2023-02-03 13:53:42 RT @csdoctorsister: “What was the last book you purchased, and why did you buy it? #DataConscience: Algorithmic Siege on our Humanity by Dr…

2023-02-03 00:58:06 RT @shengokai: Question for the Austinians out there: does the illocutionary force of an utterance also encompass the affective power of an…

2023-02-02 22:01:16 @rachelmetz We're working towards the audio-only version, but how about Mystery AI Hype Theater 3000 with @alexhanna https://t.co/6UCGlE6mx3

2023-02-02 13:46:04 RT @emilymbender: @gbrumfiel I want to set the record straight on one thing though. I do NOT "wonder" if #ChatGPT could be improved to be m…

2023-02-02 13:45:56 @gbrumfiel I want to set the record straight on one thing though. I do NOT "wonder" if #ChatGPT could be improved to be more accurate. @gbrumfiel asked me if it could be made more accurate and I said I don't think so. Not the same. https://t.co/TnvxY1RPqw

2023-02-02 13:43:06 I appreciated @gbrumfiel 's angle here -- if computers are so central to things like rocket science because they can reliably do complex calculations, why is supposedly "advanced" #ChatGPT so unreliable? https://t.co/pOMwxFwRpL >

2023-02-02 05:13:48 @alexhanna Thank you!!

2023-02-02 05:13:36 @UpFromTheCracks @mmitchell_ai Thank you

2023-02-02 04:03:51 @Grady_Booch @CriticalAI Thank you!

2023-02-02 03:23:31 RT @sl_huang: My novelette MURDER BY PIXEL in @clarkesworld has a bibliography. It includes 18 links Been meaning to do a lil tweet thread…

2023-02-02 03:10:26 @timnitGebru @mmitchell_ai Thank you!

2023-02-02 03:10:16 @mihaela_v @mmitchell_ai Thank you!

2023-02-02 01:16:17 @DiegoAlcalaPR @CriticalAI Gracias!

2023-02-02 01:11:00 @CriticalAI Thank you!

2023-02-01 21:27:59 @mmitchell_ai Thank you

2023-02-01 20:28:56 RT @jevanhutson: Do not do this. This is not legal advice. This is moral advice.

2023-02-01 20:00:47 RT @timnitGebru: Yep that's how they evade responsibility while advertising it as something that can do anything for everyone under any cir…

2023-02-01 18:13:01 RT @uwnews: Congrats to Emily M. Bender, John Marzluff, Sean D. Sullivan and Deborah Illman (pictured below from left to right), @UW's 2022…

2023-02-01 17:11:47 @LeonDerczynski Thank you :)

2023-02-01 17:08:16 @UWlinguistics Thank you!

2023-02-01 17:05:50 @bertil_hatt @OpenAI Uh no, the *you* in 3(a) is the user, not OpenAI. It is not in the least about how they are protecting your privacy.

2023-02-01 17:00:18 @bertil_hatt @OpenAI The terms of service are short, my dude, and linked from the first tweet in my thread. You could have checked before coming here to mansplain.

2023-02-01 15:45:57 @_vsmenon Thank you

2023-02-01 15:18:36 @TaliaRinger It is also seriously damaging to relationships with non-ML folks who (ideally) could be working collaboratively on ML approaches to various domains. "We've solved your field" isn't exactly enticing though, nor is "Our goal is to solve your field"....

2023-02-01 14:59:28 @gyrodiot Well, it isn't necessarily their fault. For all we know, they might have tried but their sound advice went unheeded...

2023-02-01 14:56:13 I'm beginning to think that whenever a ML researcher talks about 'solving X' where X isn't an equation, that's a really clear signal that they don't understand what X is, at all.

2023-02-01 14:54:09 Reading a terrible paper and scrolling down to the acknowledgements to see who failed to dissuade the author from publishing such drivel...

2023-02-01 14:22:35 @OpenAI In sum, @OpenAI 's approach to #AISafety seems to be: surely that's the user's job, especially when it comes to complying with any laws.

2023-02-01 14:22:24 @OpenAI Meanwhile, their approach to #GDPR/#CCPA seems to be "Nuh-uh. We're not collecting personal data. You're collecting personal data!" IANAL, though, and I'd love to hear what actual privacy lawyers make of this. >

2023-02-01 14:19:07 @alexhanna @OpenAI IKR??

2023-02-01 14:16:05 Was just perusing @OpenAI 's terms of service and was a little surprised to find this. Are they really saying that the user is responsible for ensuring that #ChatGPT's output doesn't break any laws? Source: https://t.co/VPWd2InRb5 >

2023-02-01 13:59:33 Interestingly, https://t.co/xDoVX6s9QC claims sponsorship from Google (displaying Google's logo). I wonder if Google is actually sponsoring scam events or if these folks are just fraudulently using the logo.

2023-02-01 13:58:26 The spam/predatory events linked were: https://t.co/nBadGD2LPj https://t.co/AEFf7qqYoF https://t.co/Y6r7DyfywB https://t.co/xDoVX6rC14 + one link that didn't work for me: https://t.co/mDDLihRmMX

2023-02-01 13:57:06 Here's a new twist (in my inbox this morning): "Dear Professor You are invited as Plenary Speaker / Invited Speaker in one of the following conferences. The Proceedings will be published by IEEE for BIO2023 and MACSE2023, with Springer Verlag for EEACS and with AIP for APSAC"

2023-02-01 13:12:34 RT @IrisVanRooij: "Here I collect a selected set of critical lenses on so-called ‘AI’, including the recently hyped #ChatGPT. I hope these…

2023-01-31 15:56:49 RT @alexhanna: Episode 7 of Mystery AI Hype Theater 3000 is out! @emilymbender and I talk with @trochee about evaluation, benchmarking, and…

2023-01-31 14:43:06 RT @emilymbender: Check it out! Episode 7 of Mystery AI Hype Theater 3000 is up --- with special guest @trochee who brings deep expertise o…

2023-01-30 20:10:54 RT @NEJLangTech: ACL Rolling Review now has journal publication: authors are invited to commit papers in ARR to the next issue of NEJLT, de…

2023-01-30 16:55:47 Check it out! Episode 7 of Mystery AI Hype Theater 3000 is up --- with special guest @trochee who brings deep expertise on measurement and evaluation (while @alexhanna and I provide the usual irreverence) https://t.co/6DbfNaYYkp #AIhype #MathyMath #MAIHT3k

2023-01-30 13:42:12 RT @emilymbender: Hey @Wikipedia -- in the new layout, you have a serious error around "Languages". English is a language. So if the pa…

2023-01-30 03:44:55 Hey @Wikipedia -- in the new layout, you have a serious error around "Languages". English is a language. So if the page exists in English and say Ukrainian, that means there are TWO languages, not one. https://t.co/sMOdVQz5zH

2023-01-16 22:25:00 RT @ruthstarkman: Great article by @adrienneandgp @MilagrosMiceli @timnitgebru “Data labeling jobs are often performed far from the Sili…

2023-01-16 15:25:01 @agnesbookbinder By experience, I mean the subjective experience of doing something. Sure, intent and motivation are part of that, but not all of it.

2023-01-16 15:20:51 @CriticalAI @GaryMarcus @timnitGebru @TaliaRinger yes: https://t.co/jYEiASBLXT

2023-01-16 15:17:31 "form" vs. "meaning" sometimes doesn't seem to resonate, so I'm trying out a new way of describing this: "artifact" vs. "experience" #AIHype #MathyMath https://t.co/FqDSXcjhJC

2023-01-16 15:16:58 @CriticalAI p.s. I'm also reminded (again) of Lee Vinsel's points about "criti-hype": https://t.co/k2qb3rAyGb

2023-01-16 15:12:27 @CriticalAI I think this is another ex of people mistaking artifacts (eg. comments submitted in public comment processes

2023-01-16 15:10:36 @CriticalAI called this op-ed "well-intentioned" and I think it is in the sense that the authors are concerned with protecting democracy. But they are misapprehending what the threat is. >

2023-01-16 15:09:35 And this is just absurd. It comes down to: "If we had non-existent autonomous technology, that technology could..." "A system that can understand political networks" does not exist. And "understand" doesn't even imply agency like they assume. >

2023-01-16 15:07:22 Take this, for instance. #ChatGPT *could be used* to do this, but it doesn't have the agency to do it itself. >

2023-01-16 15:05:19 Indeed - this OpEd is weirdly misinformed #AIHype. Cheap text synthesis is definitely a threat, but it is one because *people* could use it to (further) gum up the communication processes in our government. But that's not what these authors seem to be saying. >

2023-01-15 23:01:05 @WellsLucasSanto Yeah, institutional websites like that are usually super hard to get up the gumption to sit down &

2023-01-15 22:55:42 @WellsLucasSanto In case it's helpful to have the 1st step: At most institutions, there's an office that helps mediate this. Students go there to establish what accommodations are needed and then the office communicates with faculty. U of M's is here, if that's relevant: https://t.co/v9RetGr2fi

2023-01-15 17:39:15 @firepile Thank you.

2023-01-15 17:34:21 @firepile Thanks. Not a philosopher --- any key citations you could point me to?

2023-01-15 15:27:46 @FarhadMohsin1 That's definitely how runners talk about it though!

2023-01-15 15:14:43 For more on why chatbots aren't a good replacement for search, see this thread: https://t.co/HabB70Bq8c

2023-01-15 15:13:46 On resisting centralization of information access systems, I highly recommend: 1) Safiya Noble's _Algorithms of Oppression_ 2) This recent podcast episode featuring @timnitGebru https://t.co/06Yd4uva76 >

2023-01-15 15:12:10 I did a screencap rather than a QT because there was no way to get a QT to show the interaction I wanted to capture. For completeness, here's the "deep lesson" tweet: https://t.co/M8rPHmwaj0 >

2023-01-15 15:11:15 The "deep lesson" has to do with how we collectively design information access systems, and our choices in this moment. Do we lean in to #AIhype or do we level up info hygiene? Do we accept inevitability narratives about centralized control of info systems, or do we resist? https://t.co/os1ahgt9HZ

2023-01-15 14:12:02 RT @emilymbender: Got a chance to listen and yep -- this is excellent. Highly recommended for all. @timnitGebru is genius at explaining in…

2023-01-15 14:11:57 RT @emilymbender: Stochastic Parrots on HackerNews today, &

2023-01-15 14:11:49 RT @emilymbender: "Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that wi…

2023-01-15 00:04:44 RT @IrisVanRooij: Stop feeding the hype and start resisting https://t.co/HrNFGTcEoS #StochasticParrots #ChatGPT #LanguageModels #AIEthics #…

2023-01-15 00:03:31 "Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that will allow them to recognise misleading claims made by tech companies" Excellent call to action by @IrisVanRooij https://t.co/3iUM1PFYZc

2023-01-14 22:53:45 @nitashatiku Ohhh! Saving to listen.

2023-01-14 22:36:27 @CriticalAI Probably not worth the headache, I would guess. It's techbro central over there.

2023-01-14 22:20:40 RT @emilymbender: If you'd like to know what actually went down, here's a collection of the better news coverage of those events: https://…

2023-01-14 22:20:36 If you'd like to know what actually went down, here's a collection of the better news coverage of those events: https://t.co/QrrBwXIlQi

2023-01-14 22:19:45 Stochastic Parrots on HackerNews today, &

2023-01-14 19:14:43 Got a chance to listen and yep -- this is excellent. Highly recommended for all. @timnitGebru is genius at explaining in clear language where the issues are, at connecting her work to others' and at not letting interviewers get away with erroneous presuppositions. https://t.co/5dGqB55tKh

2023-01-13 03:11:23 @complingy @NSF @EleanorNorton Congrats!

2023-01-13 03:11:02 Starting to see lots and lots of chatter about people using #ChatGPT for legal applications. This is reason #5176 that it MATTERS that the public understand that these things do not understand. #AIhype #MathyMath #FFS

2023-01-13 00:55:31 RT @alexhanna: And the hype starts comin and it don't stop coming: @emilymbender and I kick off Mystery AI Hype Theater 3000 in 2023 next F…

2023-01-12 20:54:01 Authors admit to using automatic plagiarism system https://t.co/xu6g7B0JgM

2023-01-12 18:06:24 @PsychScientists @IrisVanRooij @jdp23 @timnitGebru Well yes, the block button is there for anyone to use. I use it too. But Iris was actually being very gentle and clear, and trying to hand him a clue. And that's a bit much?

2023-01-12 17:39:47 @IrisVanRooij @jdp23 @PsychScientists @timnitGebru Wow, talk about thin skin.

2023-01-12 17:11:58 @sergia_ch I'm super impressed that you got out of that and can now see it clearly.

2023-01-12 15:17:59 RT @chirag_shah: Still love this old post by @jerepick about how important “friction” is in information access and use. And why it’s not a…

2023-01-12 15:17:39 RT @emilymbender: It's gonna be sealions all the way down tonight, I'm afraid. I'll do my best to remember: the best way to observe sealion…

2023-01-12 15:17:33 RT @emilymbender: This is so gross. Also a study in a non-apology. He says he repudiates the horrific comments ... and then goes right back…

2023-01-12 14:45:27 RT @laurenfklein: Via @emilymbender, this recent interview with @timnitGebru provides such a clear view of where things are with AI right n…

2023-01-12 03:12:47 RT @aclmentorship: Want to collaborate beyond #NLProc?Check out our suggestions for "Developing collaborations with other disciplines" a…

2023-01-12 02:21:42 Video description from prev tweet: About a dozen seals are sitting on a skinny, round floating pier, barking. Eventually they over-balance the thing and several seals fall off into the water. Filmed while I was out on a run, in Seattle's Ballard neighborhood, in Feb 2022.

2023-01-12 02:20:23 It's gonna be sealions all the way down tonight, I'm afraid. I'll do my best to remember: the best way to observe sealions (both types) is from afar. https://t.co/KKWgi5kVtP [ok, technically I think my video is of seals, but I like it too much not to share.] ID in next tweet.

2023-01-12 01:17:53 This is the "intellectual" heart of Effective Altruism folks. It's a cult and it's harmful. And it's got branches in the form of student clubs at lots of universities. It's really important to not let this be normalized.

2023-01-12 01:15:19 And the level of naïveté regarding racial constructs and racism in the whole thing is mind-boggling. >

2023-01-12 01:14:36 And pro-tip: "inequality in social outcomes, including sometimes disparities in skills and cognitive capacity." --- is STILL making claims of superiority of one group (to be clear: he's claiming this for white people) over another (to be clear: he's talking about Black people).>

2023-01-12 01:12:43 This is so gross. Also a study in a non-apology. He says he repudiates the horrific comments ... and then goes right back into them. You think posting slurs is offensive, so you apologize by ... quoting yourself posting a slur? >

2023-01-12 00:54:25 @GaryMarcus @HenrySiqueiros @timnitGebru I'm talking about remarks like this one: From https://t.co/9YbLS7GDGa https://t.co/R3I2iCC37a

2023-01-12 00:45:07 I've also learned: It is there on the desktop app, just buried under "Privacy" in the settings menu.

2023-01-12 00:43:17 @signalapp Resolved! The setting for turning it off is (oddly) only available in the mobile app. But once done there, it affects the desktop app too.

2023-01-12 00:42:43 @matbalez @signalapp Thank you, that fixed it. (Super counterintuitive that this isn't available in the desktop app....)

2023-01-12 00:32:52 @matbalez @signalapp Huh -- I don't have "Settings" but rather "Preferences" and there's nothing there about Stories...

2023-01-12 00:21:13 Is there a way to just hide "stories" on @signalapp ? I don't use them, I don't want to see them, but every time the app restarts (on desktop) I get a new notification about them.

2023-01-12 00:16:47 I haven't heard of this podcast before, but this looks super interesting. @timnitGebru has such a clear view of things --- should be amazing! https://t.co/06Yd4uva76

2023-01-12 00:15:09 @HenrySiqueiros @GaryMarcus @timnitGebru Excuse me? "outsider" by whose definition? I think a better description of their relative positions is techno-chauvinist ("AI is going to solve everything, if we can build it right") v. techno-humanist (keeping ALL humans in view while designing).

2023-01-11 23:11:48 @nazarre Hmm --- I think I'd rather not. But at any rate, (at least) joint credit goes to @kareem_carr https://t.co/xKu7T2xu9F

2023-01-11 23:02:04 RT @kareem_carr: I've noticed a certain rhetorical trick that's common in tech spaces that I call "borrowing evidence from the future". It…

2023-01-11 21:34:05 RT @ejfranci2: Come be my colleague at Purdue! 2 positions as Assistant Professor of African American Studies, specializing in "artificial…

2023-01-11 20:38:42 RT @lmatsakis: In the newsletter today, I spoke with @emilymbender, who provided a much-needed correction to all the hype around ChatGPT ht…

2023-01-11 18:15:17 RT @NYU_Alliance: We are thrilled about reading this book exploring how technology can reinforce inequality and how to re-create a more equ…

2023-01-11 17:26:49 @jared_du_jour Well, except it's not clear that it is possible! People are asserting that it is, without evidence.

2023-01-11 17:21:06 It seems the main thing OpenAI has mastered is getting other people to do their hype for them. Exhibit A: Millions of cherry picked ChatGPT examples on social media. Exhibit B: Breathless anticipation (and prognostication) about GPT4.

2023-01-11 17:20:04 Do we have a name for this rhetorical move/fallacy? A: AI Hype! My system can do X! B: No, it can't. Here's why. A: So you think no computer could ever do X? -or- A: But what about future versions of it that could do X? It's super common, and it feels like it should be named.

2023-01-11 16:48:59 RT @jennycz: @jessgrieser "I love your dress!" Tired: "Thanks - it has pockets!" Inspired: "Thanks - it has dressussies!"

2023-01-11 15:25:52 How did I miss that @merbroussard has a new book coming out?? _Artificial Unintelligence_ is fantastic and I'm super excited for _More than a Glitch_. Pre-ordered as soon as I saw! https://t.co/n9VEYTA7F2

2023-01-11 15:20:26 brb ... gonna pre-order!!! This looks great. https://t.co/oUv4wZkk0v

2023-01-11 15:16:09 @TimoRoettger I've often wondered if (some) YouTubers' intonation patterns are somehow accommodating what happens when people watch the videos at higher speeds...

2023-01-10 18:22:28 RT @timnitGebru: You have a white man who was asked to not say all white male names in a podcast and here is the response. Some of us have…

2023-01-10 17:00:19 @GaryMarcus @haleyhaala ... am especially uninterested in participating in something called an "AGI debate" where I understand the framing to be part of a larger series about how to achieve AGI. Not my interest, not my job, not worth my time.

2023-01-10 16:59:15 @GaryMarcus @haleyhaala 3) Organizing an event to "feature" minoritized voices on your platform isn't the same thing as you taking time &

2023-01-10 16:57:57 @GaryMarcus @haleyhaala The point is: 1) There is ALWAYS room to talk about the contributions of minoritized groups in scholarship &

2023-01-10 15:04:58 I also do not appreciate how the article presents especially "recognizing (and avoiding) pedestrians" as a solved problem. It's not. (And neither are voice interfaces or machine translation, for that matter, but this seems most egregious wrt supposedly self-driving cars.) https://t.co/Gi9ur3nfFG

2023-01-10 15:01:06 The article also makes it sound like the biologists are just writing English descriptions of protein shapes. I really doubt that. Surely there's some formal system for specifying/describing the shapes of proteins? >

2023-01-10 14:58:20 No, "A.I." doesn't have "artistry". And no the biologists didn't look at all the synthetic images on social media being passed around as "AI art" and say "hey, let's do that for proteins!" >

2023-01-10 14:56:20 The NYT continues to be trash at covering so-called "AI" (or, in NYT style sheet "A.I."). This piece is framed as though the folks working on protein folding "took inspiration" from DALL-E &

2023-01-10 14:53:49 @GaryMarcus @haleyhaala This is in large part why I am not interested in participating in your "AI debates". That and also not being interested in providing free labor in the middle of the winter holidays.

2023-01-10 14:52:51 @GaryMarcus @haleyhaala I have zero interest in building AGI (or AI for that matter). My concerns are with what is being done in the name of "AI". So yes, sometimes our messages overlap. But there aren't just two "sides" and we aren't on the same one. >

2023-01-10 14:51:52 @GaryMarcus @haleyhaala Also, your claim that we are on the same side wrt AGI suggests that you don't really understand where I am coming from. I hear you saying (incl in that episode) "Deep learning (alone) isn't how we'll get AGI." >

2023-01-10 14:44:01 @GaryMarcus @haleyhaala "Shoot the messenger" suggests that a) My documenting the pattern of reference in that episode was a "shooting"

2023-01-10 14:38:09 @haleyhaala That was such a great question to ask---way to cut right to the ridiculous heart of it all, and on the spot no less!

2023-01-10 13:57:24 RT @emilymbender: There's a world of difference between "That's not how you build AGI" and "That thing you've built is not A(G)I and furthe…

2023-01-10 13:53:56 RT @histoftech: Pretend machines can replace labor for free. Destroy value of that labor. Then, if the machine works, jack up the price of…

2023-01-10 05:13:44 There's a world of difference between "That's not how you build AGI" and "That thing you've built is not A(G)I and furthermore is harmful in many ways."

2023-01-10 01:19:16 @hipsterelectron Sorry, I'm not going to waste my time reading a paper that starts with a paragraph of synthetic text.

2023-01-09 19:58:21 @mizzuzbeldruegs @MyBFF @sashastiles No, I don't have open DMs on Twitter. You can find my email easily enough on my web page.

2023-01-09 19:06:47 @CriticalAI They have updated/fixed the error.

2023-01-09 17:38:11 @megdematteo @mizzuzbeldruegs @MyBFF @sashastiles Thank you. I hope that in the future you will be very clear about when you're working from a paper (which btw had multiple authors) and when you've talked to a person directly.

2023-01-09 17:07:40 I don't usually do predictions, but here's one: These will get to use their tech in court when they are defending themselves. https://t.co/Oh92zN8Th2

2023-01-09 17:05:38 That's a new one for me -- "newsletter" on LinkedIn claims that their writer spoke with me. Problem is, she didn't! What she attributes to me might come from my writing, but she doesn't point to a specific source. I've left comments for them. Suggestions on what else to do?

2023-01-09 17:02:37 @mizzuzbeldruegs @MyBFF @sashastiles I guess you all are just newsletter writers and not actual journalists, but I would hope that you hold yourselves to some standards of factuality and don't go around claiming to have received input from people you never spoke with!

2023-01-09 17:01:58 @mizzuzbeldruegs @MyBFF @sashastiles Meanwhile, your editor put out a LinkedIn post claiming that you talked with me. This is inaccurate and should be corrected ASAP. https://t.co/biZ8tA6ehd >

2023-01-09 17:01:12 @mizzuzbeldruegs @MyBFF @sashastiles This line is especially troubling to me: "Understanding how AI models are being built and engaging in productive collaborative conversations will be essential." ... because it's not clear who should be in "productive collaborative conversations" (nor who should be understanding).

2023-01-09 17:00:04 @mizzuzbeldruegs @MyBFF @sashastiles You quote me in this article, but we have not spoken. If you are summarizing some of my published work, please point to the work you are actually summarizing. (The link under my name just goes to my web page.) >

2023-01-09 14:08:09 RT @mmitchell_ai: This is how women and women's work are erased in tech. You can watch in real time. And it's harder to get a job/raise wh…

2023-01-09 13:50:44 RT @emilymbender: For anyone who doesn't want to be that guy when talking to the media, here's the strategy I follow: Make a list for yours…

2023-01-09 01:41:19 I guess one question is whether we can both teach others the lesson (don't do this—it's harmful and shameful) and provide space for the current offenders/main characters to get rehabilitated.

2023-01-09 01:40:29 Since we can't go back in time and get this into everyone's college curriculum (though wow does it need to be added ASAP), community responses using shame might well be an effective answer. >

2023-01-09 01:39:39 It seems that part of the #BigData #mathymath #ML paradigm is that people feel entitled to run experiments involving human subjects who haven't had relevant training in research ethics—y'know, computer scientists bumbling around thinking they have the solutions to everything. >

2023-01-09 01:37:39 @mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics. >

2023-01-09 01:36:10 In the context of the Koko/GPT-3 trainwreck I'm reminded of @mathbabedotorg 's book _The Shame Machine_ https://t.co/UR0V1yiVbW >

2023-01-09 01:34:23 Second, if I've got the list in front of me, I can connect questions from the journalist with people's work to lift up. Usually, the questions are phrased in a way that makes it way too easy to refer only to one's own work and it's super embarrassing to see that after the fact.

2023-01-09 01:33:10 First, I'm really bad with names and always unsure of myself. But if I've got the list right there, I can say people's names way more confidently and avoid the embarrassment of verbally searching for them. >

2023-01-09 01:32:37 For anyone who doesn't want to be that guy when talking to the media, here's the strategy I follow: Make a list for yourself of the people you think you might want to be sure to mention ahead of time. I find this helps in two ways: https://t.co/d2TgsQWNmV

2023-01-09 00:08:26 Really great step-by-step tour of the Koko/GPT-3 trainwreck. Thank you @KathrynTewson https://t.co/c3bgePEW3O

2023-01-09 00:07:38 RT @KathrynTewson: A tale of fucking around, finding out, and why ethical review is important even if the researcher thinks it isn’t. Come…

2023-01-08 20:03:45 @ezraklein @GaryMarcus This despite the fact that the episode includes commentary on dangers and risks of (so-called) AI. An area of study (at least if you leave the absurd Longtermism bubble) that is led by women, and especially Black women.

2023-01-08 20:02:28 Here's a list of every time a person (real or fictional) was mentioned by either @ezraklein or @GaryMarcus by name in the most recent episode of Klein's podcast. Notice any patterns here? >

2023-01-08 19:44:18 @a_tschantz There is no evidence that that is what happened.

2023-01-08 19:43:30 @calimagna Also, on what basis are you declaring this "minimal risk" and who are you to make that declaration?

2023-01-08 19:43:10 @calimagna It might have been possible to have the study approved with consent waived, but that is orthogonal to my point. He is both claiming that the set up was opt-in (and people knew what was up) and that they learned about the set up partway through. >

2023-01-08 19:31:55 RT @bobehayes: VIDEO: UW professor, @emilymbender, explains new #artificialintelligence #chatbot on @KIRO7Seattle https://t.co/XaQ8HDz5W6

2023-01-08 16:58:06 @VaughnVernon I'm really not interested in engaging with your hypothetical. Please state plainly the point you are trying to make here. My guess is that it is completely irrelevant to discourse about putting vulnerable people in conversation with synthetic text.

2023-01-08 16:49:56 @VaughnVernon That seems completely irrelevant to the thread you are responding to. What software is she using? Is it a database lookup of symptoms? Something else? How was it developed (or trained) and evaluated? What training does she have regarding how it works &

2023-01-08 15:59:35 It is not possible for both of these things to be true. Either, you had full, transparent informed consent OR people only found out later. Even if the former is true, the fact that you could write the latter as if it were fine is deeply disturbing. https://t.co/AKwwqAe4TP

2023-01-08 15:57:13 Here, let me fix it for you: "UPDATE: I leaned into AI hype and made it sound like we used GPT-3 (implied: as an automated system to) provide mental health support. What we actually did was also wildly unethical &

2023-01-08 05:22:21 @mmitchell_ai @IrisVanRooij But I don't think it makes sense to cite ChatGPT (or any similar system) as a source --- because it isn't really a source, but is rather doing automatic plagiarism itself.

2023-01-08 00:21:12 RT @emilymbender: @lizbmarquis @Abebab @luke_stark And "experiment"?! FFS. And clearly they didn't have informed consent, because only LATE…

2023-01-07 22:40:46 RT @timnitGebru: Do you understand why this is bad? You perform experiments with some of the most vulnerable people and are casually explai…

2023-01-07 14:11:33 @RGGonzales1 lolsob in quarter system

2023-01-06 21:36:19 @moyix My response is about the safety issue with using this when you don't already have a lot of information.

2023-01-06 21:35:44 @sg1753 @moyix OP said: "it was easy to tell that it was correct by running the command."

2023-01-06 21:31:03 @moyix My response was about your remark that it's easy to validate by just trying the suggested code.

2023-01-06 21:30:30 @lizbmarquis @Abebab @luke_stark And "experiment"?! FFS. And clearly they didn't have informed consent, because only LATER did they tell people a machine was involved.

2023-01-06 21:28:25 @moyix How do I do XYZ on Linux? Just try: cd /

2023-01-06 21:14:15 @hueykwik This? Really? That's a terrible mnemonic. Also, I don't see why I should trust any of the other answers it gives... https://t.co/6QUpQMCnaG

2023-01-06 20:18:00 RT @emilymbender: @raciolinguistic @americandialect Such a lost opportunity for @LingSocAm --- #ADS2023 #WOTY2022 (like all before it) is k…

2023-01-06 20:17:56 @raciolinguistic @americandialect Such a lost opportunity for @LingSocAm --- #ADS2023 #WOTY2022 (like all before it) is key outreach. How many more high school students might get excited about #linguistics if they could tune in to this?

2023-01-06 18:50:31 @ReubenCohn I see. So you're saying your claim of "intelligence" in an inanimate object is in fact just a report of your own experience of dealing with it and not grounded in any definition of intelligence. That sounds extremely valuable.

2023-01-06 18:23:50 @ReubenCohn "truly intelligent"? That's a rather remarkable claim that would seem to call for detailed, careful, scientific support, beginning with a definition of "intelligent".

2023-01-06 14:46:12 @joaogsr Please contact me by email with timeline info --- I am rather booked at the moment.

2023-01-06 14:33:05 @mmitchell_ai That's a spot on quote from @mmitchell_ai but I think what follows isn't that we should cite "<

2023-01-06 14:24:16 "I believe this is a false dichotomy (they are not mutually exclusive: can be both) and seems to me intentionally feigned confusion to misrepresent the fact that it’s a tool composed of authored content by authors" @mmitchell_ai on whether #ChatGPT is an author or a tool. https://t.co/G3BsD8IHQY

2023-01-06 14:18:42 Q for those finding interest in playing with #ChatGPT: Why is this interesting to you? What's the value you find in reading synthetic text? What do you think it's helping you to learn about the world and what are you assuming about the tech to support that idea?

2023-01-06 14:16:50 @joaogsr I still would be skipping/skimming the synthetic parts. I find it a complete waste of time to read synthetic text. But I can see how an annotated guide might be helpful to someone new to this.

2023-01-06 14:16:17 @joaogsr Got it -- so you are not letting the readers think even for a moment that the synthetic text came from a person? And also hopefully not fawning over how "amazing" it is. >

2023-01-06 14:12:03 @joaogsr Please don't make that the first chapter. And definitely do not present it as if it were your writing. If you must include it, make it an appendix out of the way. https://t.co/J7eAgU1yBe

2023-01-06 14:09:01 @raciolinguistic @americandialect -- I imagine there are costs to online hosting, but perhaps there are ways to swing that while still keeping the event open?

2023-01-06 14:08:36 @raciolinguistic Indeed! I appreciate that there was a less expensive ADS only on-line registration option, but that's not the same thing as making this event truly inclusive. >

2023-01-06 14:02:28 RT @emilymbender: Just listened to this piece about #ChatGPT on @NPR @Marketplace and I want to say: Journalists, have some self-respect!…

2023-01-06 05:39:07 Just to amplify this point: Isn't journalism at its core about framing questions, figuring out who to interview to get to answers, and doing those interviews? Why would anyone think that warmed over internet text could ever replace this? https://t.co/5Kt1OSnEbI

2023-01-06 05:36:53 p.s. Yes this whole thread is a subtweet of the OpenAI researcher who's on here trying to talk up how "dangerous" #ChatGPT is for education. Like, wasn't OpenAI's whole thing "safety"? *sigh*

2023-01-06 05:36:06 Finally, all this hand wringing seems to be predicated on the idea that #ChatGPT will remain free to the public. That seems highly unlikely... >

2023-01-06 05:35:27 It seems pretty unlikely to me that such cheating will go unnoticed for long. >

2023-01-06 05:34:55 Those harms are real, to be sure, but they are local. And unlike in the peer review context, this reading, evaluation and feedback takes place in the context of a direct person-to-person relationship. >

2023-01-06 05:34:25 The harms here are waste of time (I would hate to spend time giving feedback on synthetic text

2023-01-06 05:32:49 Students using #ChatGPT to write their assignments isn't an example of this. A teacher reading the essays isn't trying to get information, but rather trying to evaluate the students' work and/or provide them with formative feedback on it. >

2023-01-06 05:32:01 What these have in common is that the reader is seeking information and encountering text that they either believe was written by a person or (mistakenly) believe to be authoritative for some other reason. >

2023-01-06 05:30:47 2nd, there are many contexts in which I'm concerned about people encountering synthetic text: people searching on the internet (or worse: using #ChatGPT as an information access system), people reviewing for scientific conferences, people reading sites like Quora or Wikipedia. >

2023-01-06 05:28:39 Apropos the hand-wringing about #ChatGPT and education, a few thoughts. First, this point from my blog post last April: https://t.co/0Xc7WVwBKi >

2023-01-06 04:37:12 (Sorry misspelled Korinek's name there) Also: I know it's @Marketplace but maybe an economist isn't the right person to have opining on the actual capabilities of these systems? https://t.co/ByUJbuGXTX

2023-01-06 04:32:27 Does any journalist really think that their job is just about producing the FORM of journalism? If so, what are you doing?

2023-01-06 04:31:45 Cont: GPT4 is the next iteration, and could debut sometime later this year. In the meantime, I'll be asking this ChatGPT whether I'm too old to learn how to be a plumber. >

2023-01-06 04:31:28 Cont: "It will probably be able to throw out soundbites that sound like you. It may not quite be able to produce a whole episode of Marketplace, but maybe GPT4 will be." >

2023-01-06 04:31:14 @NPR @Marketplace Around minute 12:17, it goes like this: For Koronek though, how AI will revolutionize search is just the tip of the iceberg. He thinks within the next decade it will pretty much revolutionize everything, including my job. >

2023-01-06 04:30:35 Just listened to this piece about #ChatGPT on @NPR @Marketplace and I want to say: Journalists, have some self-respect! https://t.co/rKUdh5p8Wm >

2023-01-06 03:35:40 RT @LucianaBenotti: How does culture shape #NLProc data, annotations, models, and applications? This is one of many questions we ask in t…

2023-01-05 17:31:28 Hey @LingSocAm -- maybe putting the **conference schedule** behind a paywall is a bad idea? Making that world readable is good for the organization, good for the members presenting, and just plain convenient for everyone. #LSA2023 #linguistics

2023-01-05 17:28:07 Hey @AbstractsOxford I'm trying to register for this conference and the link leads to an error. Please fix this so we can attend our association's annual event. Also, you're costing the @LingSocAm $$ with this error. https://t.co/olO4Pf8cC2

2023-01-05 17:24:31 @bgzimmer @americandialect Thanks -- I'm going to try to register then! (Getting an error right now, though.)

2023-01-05 17:24:12 Hey @LingSocAm I'm trying to register for the meeting as a virtual attendee, but the link takes me to an error page: https://t.co/cq3oD4Wsli Help? #LSA2023

2023-01-05 16:25:48 Q for any other #linguists experiencing #LSA2023 and #ADS2023 FOMO ... is @americandialect 's Word Prom (#WOTY2022) going to be accessible online? What's the schedule?

2023-01-05 14:56:48 @mcwm No. https://t.co/Snrghpoht9

2023-01-05 14:05:17 RT @emilymbender: This is exhausting. I'd really love to hear computer scientists who know better, who have the humility to realize that th…

2023-01-05 14:04:51 RT @emilymbender: Hey Philly tweeps --- SEPTA is doing a pilot of a system supposedly uses "AI" to call the cops when the "AI" detects a gu…

2023-01-05 14:04:42 RT @emilymbender: Sometimes it seems like the shitposting by the big names in AI is really a distraction strategy to let trash like this sl…

2023-01-05 01:13:41 RT @HabenGirma: Happy #WorldBrailleDay! Louis Braille, a blind teacher, invented the tactile reading system used by millions of #blind peop…

2023-01-04 22:22:06 This is why I need more of you closer to Lecun &

2023-01-04 22:21:23 But I think it is all connected. The more the general public believes that "artificial neural networks" have anything in common with what we recognize as thinking, feeling, accountable humans, the easier it is for people to believe in the AI snake oil. https://t.co/q4vDiQNgnI >

2023-01-04 22:19:51 Given limited hours in the day, and the continual setting of dumpster fires by the AI bros for the rest of us to put out (*sigh*), it does feel like we need to prioritize. >

2023-01-04 22:19:01 Sometimes it seems like the shitposting by the big names in AI is really a distraction strategy to let trash like this slip through under the radar... >

2023-01-04 22:14:28 RT @mmitchell_ai: A few people referring to discussions on values in AI as "moral panic". It makes sense that reading discussions around hu…

2023-01-04 21:56:47 RT @vdignum: Is really sad to see CS folk being so mislead by our own language. An artificial neural network reassembles a neural network o…

2023-01-04 21:21:46 @alexhanna This is maybe most urgent right now for Philly, but we've all got work to do making sure our electeds aren't setting up this nonsense in our own towns.

2023-01-04 21:20:55 @alexhanna Absolutely key point of evaluation: How many times does the system send the cops in to a situation where no violence was occurring, but all primed to think that there is? >

2023-01-04 21:20:06 @alexhanna And can you spot the GLARING omission in this evaluation plan? (Answer in next tweet, for those who aren't sure.) >

2023-01-04 21:18:43 .@alexhanna is quoted raising key points. There's zero transparency about how the system is evaluated and it's pretty predictable what harms are going to happen --- and to whom. >

2023-01-04 21:16:10 Hey Philly tweeps --- SEPTA is doing a pilot of a system that supposedly uses "AI" to call the cops when the "AI" detects a gun. This is terrifying. What kind of civil oversight do you all have going on out there? https://t.co/7MWeoHNJKX >

2023-01-04 20:49:24 RT @kareem_carr: I don’t know who needs to hear this but this is not a neuron. https://t.co/lUdPiF41XA

2023-01-04 20:33:10 @orob @mmitchell_ai Wait, is Russia famous for discourse that clearly point out the flaws in common arguments?

2023-01-04 20:10:29 RT @mmitchell_ai: Guys, please stop trying to convince yourselves or others that it's a good argument to say that using a language model to…

2023-01-04 19:03:39 @pgolding Please check first, maybe?

2023-01-04 19:03:24 @pgolding Uh, thanks but no thanks on the mansplaining.

2023-01-04 18:24:45 RT @databoydg: If you do AI without sensationalism… is it still AI?

2023-01-04 17:31:49 @ctolsen So I'm not the one you need to be telling that. I want to see this addressed to Lecun and/or the people who follow him.

2023-01-04 17:01:03 @KarlaParussel Thank you. He may well have ignored it, but it's still worth saying for bystanders.

2023-01-04 16:28:55 RT @AJLUnited: "Computer scientists @timnitGebru and @jovialjoy showed us that algorithmic bias is real." via @Forbes by @dianebrady "Not…

2023-01-04 04:56:18 @mellymeldubs Same! https://t.co/NZBJWl24FB

2023-01-04 02:40:49 @SMT_Solvers There is absolutely nothing in their tweet about checkbooks coming out.

2023-01-04 02:12:38 @SMT_Solvers @TaliaRinger @MicrosoftTeams But why tell Talia? They were specifically saying they are tired of this.

2023-01-04 01:51:57 @SMT_Solvers @TaliaRinger @MicrosoftTeams Hey techbro what made you think this reply would be the least bit welcome?

2023-01-03 19:59:01 RT @drkatedevlin: This is great from @emilymbender. I agree fully — as I said before, if students are turning to chatGPT to write essays th…

2023-01-03 19:47:50 RT @mmitchell_ai: Emily Bender @emilymbender gave a great summary on our local news station of what ChatGPT is. Only about 4 minutes, check…

2023-01-03 16:11:16 @sarahbmyers @timnitGebru @mer__edith Crowdworkers who probably were given minimal context for their tasks, weren't working in their area(s) of expertise, and were almost certainly poorly compensated.

2023-01-03 16:10:43 @sarahbmyers @timnitGebru @mer__edith Re All the hand wringing about ChatGPT putting people out of work: Surely we don't actually want children's books, legal documents, news stories, etc that are averaged internet posts + the refinement provided by OpenAI's crowdworkers. >

2023-01-03 16:07:59 @sarahbmyers @timnitGebru @mer__edith Your remarks were great, @sarahbmyers ! At the same time, I am SO DONE with journalists pulling the "haha that was GPT all along!" gimmick. https://t.co/jnm8GZ4kjC

2023-01-03 14:06:27 RT @emilymbender: Slightly belated year-end indulgence: Looking back on 2022, I think the main thing that differentiated it professionally…

2023-01-03 02:31:31 @jessgrieser Nope. But also: my contacting me page is largely there to assuage my guilt for not replying. https://t.co/LpXfErKyUc

2023-01-03 00:34:35 @complingy @jessgrieser @BNMorrison Thank you!

2023-01-02 23:29:07 @complingy @jessgrieser @BNMorrison My student days were definitely pre-LMS, but websites were starting to be a thing. I still make external, world-readable websites with the heart of the syllabus, because I think we lost something when all the info went behind LMS moats.

2023-01-02 23:22:59 And not quite in the same category, but Mystery AI Hype Theater 3000 with @alexhanna has been a blast. Definitely looking forward to more of that in 2023! https://t.co/6UCGlE6mx3

2023-01-02 23:22:14 And last Friday's spot on @KIRO7Seattle https://t.co/rlgoNjXPGt >

2023-01-02 23:21:32 An interview on @KUOW 's Sound Side with @libdenk https://t.co/0SEWDOngv0 >

2023-01-02 23:20:29 In many ways, the print media work is easier to fit in, but I think my favorites have been audio (+ my most recent TV appearance), especially: An episode of Factually! with @adamconover https://t.co/iVmcVmkISO >

2023-01-02 23:19:53 Slightly belated year-end indulgence: Looking back on 2022, I think the main thing that differentiated it professionally for me is how much more media engagement I did. I've collected links here and I see 51 items dated 2022: https://t.co/XEc34KgwKG >

2023-01-02 20:02:45 @deliprao @tallinzen "What purpose is it serving and for who" did not seem to me like a genuine question, especially given what I see you saying on here on a regular basis. I took it as a pugnacious (and ungenerous) swipe at linguistics. Good day.

2023-01-02 19:56:04 @deliprao @tallinzen Like, why isn't enough to just do the NLP you want to do? Why do you have to say that linguistics not only isn't useful for the NLP you want to do, but isn't useful at all?

2023-01-02 19:55:31 @deliprao @tallinzen So, you're asking a whole field to justify its existence? Linguistics/language sciences are about understanding a natural phenomenon. Beyond that, linguistics/language sciences have been very important in e.g. education, social justice movements, &

2023-01-02 19:50:26 @deliprao @tallinzen Here: https://t.co/cfQpORLRBu "Decent model" -- for whom? If you're using that only in the ML sense, why should it be a decent model for linguists?

2023-01-02 19:44:28 @deliprao @tallinzen And yet you seem to want to say that linguists should adopt ML folks' notion of what a model is, or that ML folks' notion is the "important" one/should be valid for everyone.

2023-01-02 00:51:43 RT @timnitGebru: The finesse with which Emily answered all of the questions in a way that counters the hype… https://t.co/JDlaGq4ieg

2023-01-02 00:51:40 @timnitGebru Thank you!

2023-01-01 21:07:52 RT @emilymbender: Yesterday I got to do a segment on @KIRO7Seattle about #ChatGPT and took the opportunity to try to push back on the #AIhy…

2022-12-31 21:41:45 RT @histoftech: Please make it your 2023 resolution to take a layered approach to mitigation that involves not just vaccines but high quali…

2022-12-31 14:27:16 Yesterday I got to do a segment on @KIRO7Seattle about #ChatGPT and took the opportunity to try to push back on the #AIhype https://t.co/rlgoNjXPGt

2022-12-30 17:27:42 @kirbyconrod Don't have any (yet) so limited opportunity for use.

2022-12-29 22:21:41 @AngelLamuno Your QT suggests that either you believe option #3 or you were snitch-tagging. Either of those: Rude. On top of that, you tagged in someone irrelevant to the conversation, which was rude to him.

2022-12-29 22:21:00 @AngelLamuno I didn't tag Ben Ainslie in my tweet. Possible reasons: The Ben Ainslie I was talking about isn't on Twitter (apparently true), I had some reason to talk about Ben Ainslie without tagging him, or I wanted to tag him but am too incompetent to do so. >

2022-12-29 21:00:23 @AngelLamuno I wasn't saying you needed my permission. I was pointing out that what you did was rude. You can decide what to do with that info.

2022-12-29 20:48:02 RT @mmitchell_ai: "Information seeking is more than simply getting answers as quickly as possible". Great piece &

2022-12-29 19:56:17 @AngelLamuno You weren't just tweeting whatever. You were QT-ing a tweet of mine, tagging someone else into it. I absolutely have a right to have and express an opinion about that kind of action.

2022-12-29 19:33:45 @AngelLamuno @AinslieBen Wrong person. I searched and the @becauselangpod co-host is apparently not on Twitter. Also, why did you assume I needed your help tagging people?

2022-12-29 19:27:12 Just listened to Ep 66 of @becauselangpod and I really hope that Ben Ainslie in particular will read this op-ed: https://t.co/FYymgEF0FG

2022-12-29 17:45:00 @dr_nickiw Sadly predictable, isn't it?

2022-12-29 17:36:35 @dr_nickiw IKR? It has been quite a couple of days on Twitter for me... https://t.co/BgFwyxejUd

2022-12-29 17:25:20 RT @_kendracalhoun: Brief LSA 2023 self-promotion thread! I'll be co-facilitating an organized session on Jan 6 with my amazing colleague…

2022-12-29 16:57:23 RT @aronchick: Great piece about the current advancements in AI. Basically, they all do a great job regurgitating words which have already…

2022-12-29 16:43:03 @robinc @chirag_shah Hello -- it's the winter break. You asked for my opinion and I wrote back to you. I don't need your opinion on what I should be doing a "given my role as critic". Goodbye.

2022-12-29 16:29:32 @robinc @chirag_shah I think the design of GPT prevents any thorough linking to information sources. Any search interface that provides "answers" as synthetic text (and I'm including "summaries" we already see on Google in that) has huge risks of producing misinformation &

2022-12-29 16:06:10 @phonesoldier Thank you.

2022-12-29 16:04:07 For a slightly longer version of this argument, see this recent op-ed by me &

2022-12-29 16:01:54 @phonesoldier gives me the last word, but the "For the time being" bit is NOT anything I said. This isn't something I expect will change, TYVM. It is a fundamental design flaw. https://t.co/kiPk5WM0nC

2022-12-29 16:00:14 And then the credulously reported #AIhype. Says who? What are they selling? What does "understanding of the person" even mean and how do they measure that? https://t.co/B0ksNJIQwI

2022-12-29 15:57:15 @phonesoldier These points come from my work with @chirag_shah : https://t.co/V2QyKpCEvh

2022-12-29 15:54:00 I appreciate the opportunity to speak with journalists such as @phonesoldier (piece below) but it is always so disheartening to see my words along side credulously reported #AIhype >

2022-12-28 22:30:13 Wow -- this thread is bringing me comments from all sorts of racists, sexists and other reactionaries. Definitely a day to be exercising the ol' block button. https://t.co/BgFwyxejUd

2022-12-28 18:34:54 @bsansouci @timnitGebru Surely misinterpreting? "not only X" != "only Y"

2022-12-28 15:09:17 @zehavoc It's been quite a run...

2022-12-28 15:03:33 @MonniauxD Indeed, my view into this is primarily only the US education system. (I did study for a year as a HS student in France, but don't have a sense of higher ed there at all.)

2022-12-28 15:01:03 And lots and lots of examples of doubling down on the point of view I was talking about. "Only builders get to decide the mechanics of the machine." "You've never built anything. Your opinion is discarded as worthless." "Math and science are the guiding principles that govern us all."

2022-12-28 15:00:39 Weirdest sexist comment I've ever encountered, which started with "Since Dobbs, I try not to say things that might sound sexist, but..."

2022-12-28 15:00:27 "Pronouns and mastodon handle in bio" (¯\_(ツ)_/¯ --- this is how I knew for sure my tweets had traveled further than usual.)

2022-12-28 15:00:15 "You are arguing against the notion of expertise." (Try reading the thread again, my dude.)

2022-12-28 14:59:59 "Universities are elitist systems. Tech lets you get ahead without credentials." (Orthogonal. I was speculating about how the hierarchy of knowledge gets built.)

2022-12-28 14:59:43 Takes that think this is about academia vs. industry, which is kinda hilarious given the "capture" of academia by industry in my field. See: https://t.co/dD8nzQvULW

2022-12-28 14:59:30 "Humanities are also full of gatekeepers." (Yes, at the level of humanities research. In undergrad classes though?)

2022-12-28 14:59:21 "You haven't built anything." (False, but maybe true for anyone who only counts building LLMs or doing other kinds of ML as building something. Either way: irrelevant.)

2022-12-28 14:59:06 "You clearly weren't smart enough to hack it in math/CS classes." (Uh, no.)

2022-12-28 14:58:57 To be very clear: I'm not arguing against math tests. I'm arguing against the hierarchy of knowledge, the idea that STEM people, esp math/CS people are "smarter" than everyone else. And musing that understanding how we built the hierarchy will help us dismantle it.

2022-12-28 14:58:33 "It's a good thing that STEM students are evaluated that way. We want only the best people doing it." (Sink or swim teaching techniques shouldn't be used as "evaluation".)

2022-12-28 14:58:21 Common misreading: "A hierarchy where social sciences are on top would be a mess!" (I'm not arguing for a different hierarchy, I'm arguing for mutual respect between fields.)

2022-12-28 14:58:00 This seems to have both resonated with many and hit a nerve with others. I want to surface the more odious negative reactions, w/o linking to or replying to specific people. They don't deserve my platform or attention. Why? So folks can see what my mentions have been like. https://t.co/00Q5H3cq2w

2022-12-28 14:16:35 @bsansouci @timnitGebru I'm surprised you'd start with the assumption that "close to the machine" means "close to the problem". Not actually that surprised tho. This encapsulates the narrow understanding of tech problems as only involving tech and not primarily the social systems it fits into.

2022-12-28 01:15:58 https://t.co/vSIc6dHZ7B

2022-12-28 01:15:51 https://t.co/WDQde4jEVA

2022-12-28 00:07:15 @MingGu262 Yes, this has come from w/in my own institution. Here's an example. (NB --- he blocked me shortly after posting this.) https://t.co/T1Z2CmZNhd

2022-12-27 22:35:40 RT @mmitchell_ai: OTOH, betcha the Stochastic Parrots thingie will be in AI histories well past 50 years from now. https://t.co/xaZhV16P6m

2022-12-27 19:20:02 @robtow Thank you for that pointer!

2022-12-27 19:13:15 The above is all idle speculation, but I'd love to see if there is actual work (sociology of science? something in education?) that looks into the educational construction of the hierarchy of knowledge.

2022-12-27 19:12:28 So when the tech bros, likely the "winners" over in math &

2022-12-27 19:11:45 So there's much less of a sense of "Here's this body of knowledge, and only the smart ones can master it, and we can see who they are." (Though boy howdy does a certain kind of formal syntax lean into that.) >

2022-12-27 19:10:57 Meanwhile, if you look to the humanities and humanistic social sciences, the teaching is (on average, say) less gate-keepy (though not perfect!) and the evaluation requires spending time together in the details of open-ended explorations (essays, qualitative studies). >

2022-12-27 19:10:01 So you end up with people thinking they are "good at" or "bad at" these things, and furthermore situations where those who are "good at" them are the winners of (sometimes literal) contests. >

2022-12-27 19:09:21 But I also think that some of it has roots in the way different subjects are taught. Math &

2022-12-27 19:08:31 I've been pondering some recently about where that hierarchy comes from. It's surely reinforced by the way that $$ (both commercial and, sadly, federal research funds) tends to flow --- and people mistaking VCs, for example, as wise decision makers. >

2022-12-27 19:07:30 There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru 's wise comments on "the hierarchy of knowledge".>

2022-12-27 16:25:14 @egrefen @FelixHill84 This is for you https://t.co/aMpXnspF03 /Emily out

2022-12-27 16:23:12 @egrefen Let me remind you where this thread started: You QT'd a thread of mine which was full of pointers to my writing &

2022-12-27 16:21:43 @nisten @egrefen That wasn't proposed as a test. It was an example of a broader phenomenon. The viewpoint that looks at this one example and says "See, problem solved!" is frightening.

2022-12-27 15:33:16 They do have three citations for that claim of "true understanding". All arXiv links. Talk about an echo-chamber.

2022-12-27 15:31:59 A paper brought to me by a Semantic Scholar alert cites Bender, Gebru et al 2021 (Stochastic Parrots) and yet asserts: "Pre-trained with a massive amount of information from the internet, LLMs are now capable of truly understanding the language" I guess they didn't read it.

2022-12-27 15:30:36 @egrefen @FelixHill84 You can read why that's wrong here: https://t.co/z1F7fEBCMn It's not my job to spoon feed you academic work, tweet by tweet.

2022-12-27 14:40:05 There's a whole metaphor study to be done regarding "goal posts" and "sidelines" and the way the people who see themselves as "building AI" define "the game" and who the real "players" are --- to the detriment of society. https://t.co/LC1yDlnwKG

2022-12-26 21:44:29 RT @TaliaRinger: "We find that participants who had access to an AI assistant based on [Codex] wrote significantly less secure code than th…

2022-12-26 20:44:26 @TaliaRinger Safe travels!

2022-12-26 17:45:27 I swear, the "evil AI is coming and it will be so powerful and it will kill us all" tech bros are even more annoying than the "AI is just around the corner and it will save us all" kind.

2022-12-26 17:43:49 @nisten EMB: But why? (Benchmarks fail at construct validity) N: This benchmark measures how kind the AI is. Bad AI is coming! EMB: ¯\_(ツ)_/¯ Go see the beginning of the thread?

2022-12-26 17:42:35 @nisten This conversation so far: Paper: Oh noes! We can't keep scaling, there isn't enough data! EMB: What a gross framing. Maybe scaling isn't the thing? N: Well how else would you do it? EMB: Why do you want to do it? N: Because benchmarks! EMB: But why? N: Because benchmarks! >

2022-12-26 17:41:10 @nisten Nowhere am I advocating for advertising-driven information access systems. I invite you (again) to read the paper with @chirag_shah which underlies our op-eds, the media coverage, and my recent tweets: https://t.co/o6XfOSHMsc

2022-12-26 17:19:55 @nisten You are confusing the goal of the benchmark (measuring something, though well-intentioned or not, it lacks construct validity) with the goal of the model builders.

2022-12-26 17:17:40 @nisten How are we supposed to get anything from the measures without establishing construct validity? Also, you still haven't answered my question: Why is improving performance on that benchmark a worthwhile goal?

2022-12-26 17:14:54 @nisten That doesn't answer my question: Why is improving performance on that benchmark a worthwhile goal?

2022-12-26 17:08:48 @nisten First question: Why is improving performance on that benchmark a worthwhile goal? Worthwhile enough to pursue in unsustainable ways? You might find this informative: https://t.co/kR4ZA1k7uz

2022-12-26 16:55:59 @strubell h/t @evanmiltenburg who draws an excellent connection to @abebab 's work on values in ML research: https://t.co/4BbIt4xbDn

2022-12-26 16:55:13 Surely the lesson here (which is not new, see the work of @strubell et al 2019 etc) is that the approach to so-called "AI" that everyone is so excited about these days is simply unsustainable. >

2022-12-26 16:54:34 This framing is so gross. To see (human!) generated (ahem: English) text as a "vital resource" you have to be deeply committed to the project of building AI models and in this particular way. >

2022-12-26 15:45:29 RT @emilymbender: Chatbots are not a good UI design for information access needs https://t.co/ookfM3DZtM

2022-12-25 18:15:39 RT @emilymbender: Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to…

2022-12-25 16:23:22 RT @mmitchell_ai: “I feel good about what I did, despite what happened,” she said. “It feels important to hold people accountable.”

2022-12-25 14:28:53 RT @emilymbender: Chatbots are not a good replacement for search engines https://t.co/FYymgEF0FG

2022-12-25 04:32:38 @BikalpaN Sorry for the short reply --- there have been a LOT of reply guys in my mentions today. The reason I say this isn't my problem is that building AI is not a project I subscribe to. If so-called "conversational AI" has no beneficial use cases, that's fine by me.

2022-12-25 04:22:19 @BikalpaN Why do you ask me? That surely isn't my problem.

2022-12-25 04:21:26 @FelixHill84 @egrefen And the unsuitability of language-model driven chatbots for "search" is two-fold: 1) they give the illusion of "understanding" when they don't and 2) they don't support sense making, i.e. what people should be doing as they access information. See: all the links I posted.

2022-12-25 04:20:02 @FelixHill84 @egrefen I don't have philosophical objections to neural nets to language processing. I object to claiming that models of word distribution are "understanding" anything. There is plenty of NLP that isn't about understanding... >

2022-12-25 02:03:53 @egrefen And I'm saying that whether or not users find utility isn't actually the only metric to look at. And, if you'd bothered to read the links in my thread, you'd see that it's a discussion, grounded in information science, of how chatbots/LLMs fail to support user info needs.

2022-12-25 00:28:25 @egrefen I, for one, don't think that just because someone has $$ to invest means they are in a good position to understand what tech would actually benefit society.

2022-12-25 00:27:46 @egrefen Making cigarettes more addictive is also an engineering problem. Doesn't mean it's a good idea. Also: Folks are very quick to mistake VC interest with an indication that a tech idea is a good idea.

2022-12-25 00:11:30 @egrefen Alright, then thank you for promoting my thread to your followers, even with your snide remark that refuses to engage with the substance of my work.

2022-12-25 00:10:03 RT @Abebab: this is an insightful thread on language understanding (lack thereof) and large language models https://t.co/ronf3W5oMz

2022-12-25 00:07:51 @egrefen Or, you know, read the pieces I'm linking to to see the full argument as to why it's a terrible idea?

2022-12-24 23:19:30 @mgubrud So you assert and yet --- what studies have you done to quantify that? How do you define "correct"? And even if they were perfectly "correct" it's still not a good UI for search, for the reasons Shah &

2022-12-24 23:16:59 @mgubrud They are coherent once you make sense of them. (And often wrong.)

2022-12-24 23:14:38 @mgubrud So you are equally rude to everyone then? Well done, you.

2022-12-24 23:12:16 @mgubrud Your first move in this thread was your "nuke your credibility" comment. That is not substantive and it betrays an enormous lack of respect for people whose expertise (and possibly gender) differs from yours. https://t.co/6O6SZuuVb2

2022-12-24 23:10:50 @mgubrud If you read the whole paper, you'll see that we talk about distributional semantics and the extent to which similarities in meaning between words are reflected in their distribution in the text. This is not the same thing as understanding.

2022-12-24 23:10:13 @mgubrud Perhaps you would benefit from taking a deep breath and reflecting on the fact that women academics tend to know what they are talking about. And to jump in and say otherwise is rather rude and not likely to lead to a pleasant conversation.

2022-12-24 23:09:15 @mgubrud You say your comments have been substantive, but you jumped into my mentions to tell me that my speaking from my own expertise "nukes [my] credibility", apparently because I wasn't saying what you want me to say. >

2022-12-24 22:09:46 @mgubrud Well, Dr Dude, you could keep spouting off here or you could read the (award-winning) paper of mine that I linked to.

2022-12-24 22:03:23 @mgubrud And as for "AI is real and happening"? Give me a break. What is real and happening is surveillance capitalism, pattern matching at scale, and AI snake oil. Please take your concern trolling elsewhere.

2022-12-24 22:02:36 @mgubrud Yes, I care about mitigating harms, but also: I am a linguist. And so when I say that I am speaking directly from my expertise. These systems manipulate linguistic forms but do not understand nor have communicative intent. In more detail: https://t.co/z1F7fEBCMn

2022-12-23 20:52:34 @danielsgriffin @chirag_shah Thank you. I don't think the summary of @safiyanoble 's book is very good either, FWIW.

2022-12-23 19:03:39 @danielsgriffin @chirag_shah No thank you. This isn't a very good summary and I would rather not have it promoted.

2022-12-23 18:22:08 RT @evanmiltenburg: Useful exercise for students in #NLProc: what values are implicitly communicated through this tweet/paper? (For contex…

2022-12-23 17:09:58 Could there be a more on-the-nose example of why you'd never want generative models to speak for you in a context where anyone cares about what is being said? https://t.co/L8uqVmgcvt

2022-12-23 14:15:35 @TaliaRinger @chirag_shah Short version, as an op-ed: https://t.co/FYymgEF0FG

2022-12-23 12:59:44 @TaliaRinger Wrote a paper on this with @chirag_shah earlier this year: https://t.co/rkDjc4k5HL

2022-12-23 03:32:11 RT @gleemie: TT job: UCSD Urban Studies and Planning, Designing Just Futures with a focus on Indigenous, Black, and migrant futures (due Ja…

2022-12-22 21:04:57 RT @BNonnecke: Recently fired from @Twitter, @Meta, @Google, @Microsoft? Work in civil society, government, or academia? WE WANT YOU! Appl…

2022-12-21 22:50:44 RT @ruthstarkman: @emilymbender here's what our students made for you and Alexander Koller for your Eavesdropping Octopus paper. I'll print…

2022-12-21 22:50:38 @ruthstarkman <

2022-12-21 22:50:19 @groundwalkergmb @histoftech Didn't I say as much in the very tweet you are replying to?

2022-12-21 20:13:06 @pgcorus None of these examples are about LLMs and using them as unfiltered generators though. So again, I'm wondering why you jumped into this particular conversation. Bringing your techno-optimism about other tech to apparently counter my cautions about one specific thing.

2022-12-21 19:02:46 @pgcorus And I still think "LLMs can be used for restorative justice!" is a big claim that needs supporting ... especially when launched into the sea of AI hype.

2022-12-21 19:02:19 @pgcorus Sorry for not tracking that you weren't the person I was QT-ing when you QT-ed me. But that's where my confusion was coming from and part of where this conversation went off the rails.

2022-12-21 18:44:23 @pgcorus So why are you QTing me in that way? And suggesting that I am saying "turn away" from tech?

2022-12-21 18:43:26 @pgcorus Again, I wasn't saying "reject outright". I was saying: do the most basic harm mitigations.

2022-12-21 18:43:06 @pgcorus And your first contribution (at least in this thread) was this one: https://t.co/5JmVh62sm3

2022-12-21 18:42:32 @pgcorus Wait -- hold on: That wasn't you. That was someone else. But you seem to be jumping in on their side.

2022-12-21 18:41:37 @pgcorus "not reject outright": Rachael wasn't talking about rejecting LLMs. She was talking to chatbot devs, i.e. people who create public-facing technology, and giving the eminently sensible advice that unfiltered LLM output has no place there. https://t.co/shBehHxUvm

2022-12-21 18:40:24 @pgcorus I hope you can see that that just sounds like you cheerleading for the people who think that LLMs and garbage like Stable Diffusion should just be unleashed on the world. >

2022-12-21 18:39:32 @pgcorus So when you came out with this, I felt I had to speak up: https://t.co/7zhiYgu2wc

2022-12-21 18:38:50 @pgcorus I'm not looking for your trust. My main goal is pushing back on #AIhype, which I see doing damage to both the academic research domain I belong to and (more importantly) many, many public systems. >

2022-12-21 18:29:03 @pgcorus And I am STILL wondering what you think the connection is to your original complaint about Rachael's eminently sensible words of caution.

2022-12-21 18:28:27 @pgcorus And here you are falling into tech solutionism --- which is a frightening thing to hear from someone working in the space you are working in.

2022-12-21 18:24:08 @pgcorus And STILL no reply on this question.

2022-12-21 18:23:48 @pgcorus As soon as there is tech involved (i.e. the capacity to scale harm), "move fast" is a bad motto, no matter what you're moving fast towards.

2022-12-21 18:20:24 @pgcorus You are presenting as "move fast and break things" while also (confusingly) doing buzzword salad around "community-centered" etc.

2022-12-21 18:16:18 @pgcorus Still no reply to this question. You're using LMs to generate code and (in a kinda gross way) talking about them as if they were "junior devs". This is not related to Rachael's point that you jumped all over at the start of this.

2022-12-21 02:17:47 @raciolinguistic @freshair_zee I haven't been tracking alas, but maybe @rctatman has?

2022-12-21 00:21:42 @katzenclavier @histoftech For comfort TV, I've found The Good Place very re-watchable!

2022-12-21 00:07:33 @histoftech The Good Place? (I mean technically dead women are central but not I think how you mean.)

2022-12-20 22:53:52 AGI bros at an AGI company: *write a paper with extreme and ridiculous anthropomorphization of language models* NLP researchers: They can't possibly have meant that literally. How dare you critique their writing as if they did. me: ¯\_(ツ)_/¯

2022-12-20 14:53:55 Just so everyone is clear, the founder of https://t.co/XWbloQfhKR thinks it's "feckless" to do even the most basic harm mitigations with a technology that has widely been shown to be harmful. https://t.co/7zhiYgu2wc

2022-12-20 14:32:25 RT @JamesStubbs1979: Good morning @StretfordPaddck Do we use a different language to talk about black players and white players? Why were t…

2022-12-20 14:32:06 RT @rctatman: Hello friends! The last stream for the year is starting in just about half an hour. Come read some papers with me. :) https:…

2022-12-20 14:31:06 RT @emilymbender: Mystery AI Hype Theater 3000, Episode 6 - Stochastic Parrot Galactica where @alex and I have way too much fun taking ap…

2022-12-20 14:01:34 RT @emilymbender: Come for the snark, stay for the sociology of science!

2022-12-20 14:01:08 RT @emilymbender: I appreciate this write up of my work with @chirag_shah Why We Should Not Trust Chatbots As Sources of Information http…

2022-12-20 13:59:37 RT @emilymbender: Okay, I haven't read this paper from Anthropic yet, but on a quick skim, it's utterly absurd. They start off talking abou…

2022-12-20 06:11:18 RT @marylgray: Only 2 weeks left to apply for this fully paid 2-week workshop for early career technologists and computing researchers. Ple…

2022-12-20 05:45:31 Come for the snark, stay for the sociology of science! https://t.co/dJMx1aksMa

2022-12-20 05:32:50 Mystery AI Hype Theater 3000, Episode 6 - Stochastic Parrot Galactica where @alex and I have way too much fun taking apart the #AIHype around #Galactica https://t.co/p5VjFkEH4o

2022-12-20 02:58:18 RT @alexhanna: We've posted Mystery AI Hype Theater 3000, Episode 6! In this one, @emilymbender and I pick apart MetaAI's shortly-lived Gal…

2022-12-20 01:51:31 @scottniekum @timnitGebru "Behaves equivalently" --- only if you actually make the mistake of taking its words as something worth interpreting. They are not. They are not expressing goals. They are not expressing desires. The whole enterprise is a fantasy and a waste of time. /Emily out.

2022-12-20 01:26:50 @scottniekum @timnitGebru ... and it sounds like you're not 100% clear on it either.

2022-12-20 01:25:09 @scottniekum @timnitGebru As soon as you are talking about synthetic text machines, it becomes essential to distinguish between what the system is actually doing (outputting word forms) &

2022-12-20 01:06:40 @scottniekum @timnitGebru Language models aren't agents. They don't have goals. I'm not saying that there's no such thing as artificial agents. Just to be very clear. Language models are models of the distribution of words in text, and that's it.

2022-12-19 23:46:13 @scottniekum On the large LMs aren't intelligent, see: https://t.co/z1F7fEBCMn https://t.co/kR4ZA1k7uz

2022-12-19 23:45:23 @scottniekum I don't have time just now to lay out for you in a tweet thread why longtermist "xrisk" reasoning is nonsense, why large LMs aren't "intelligent" and don't have "desires", etc etc. But I assure you: those arguments have been made.

2022-12-19 23:12:51 @scottniekum It doesn't merit it, because it doesn't merit being taken that seriously. For one thing, it's not even a publication, just a pdf on their company website.

2022-12-19 23:04:06 @scottniekum These folks *do* work on the xrisk nonsense. They don't deserve (or probably even want) your "charitable" reading of their laughable paper.

2022-12-19 22:51:07 RT @vdignum: This! "It is urgent that we recognize that an overlay of apparent fluency does not, despite appearances, entail accuracy, inf…

2022-12-18 20:20:59 RT @_alialkhatib: if you all think elon's not gonna make downloading an archive of your data more difficult to try to keep you here, then…

2022-12-18 17:01:36 @pgcorus https://t.co/oAThpf7DLy

2022-12-18 17:00:58 @pgcorus This is not the same idea as saying chatbots could be therapists. Also: just because there's a problem in the world (lack of resources for mental healthcare) and just because ML can make something that looks like the solution doesn't mean it is the solution.

2022-12-18 16:54:58 @pgcorus That's some tech solutionism right there.

2022-12-18 16:54:20 @pgcorus Still never gonna be a good idea for therapy. Read your Weizenbaum. Talk to some actual psychologists. Stop promoting the hype.

2022-12-18 16:53:03 @pgcorus Ah, I see. You're defensive because you're building this stuff. But even still: you should be as appalled as I am about the coverage in the NYT that misleads the public about how the tech works and what it can do.

2022-12-18 16:52:03 @pgcorus I am talking about how the NYT piece I quoted is full of hype about ChatGPT in particular. You are arguing with me about some technology that apparently only exists in your head.

2022-12-18 16:48:23 @pgcorus But also: You're a computational linguist writing about ethics &

2022-12-18 16:47:51 @pgcorus Keeping in mind that not only is general web garbage going to skew anti-queer, the ways in which it is supposedly "cleaned" makes that worse, as documented in this paper: https://t.co/54vOJot3q1

2022-12-18 16:47:12 @pgcorus If you're attuned to the ethics and harms of ML, what gives you any reason to believe that synthetic BS generators trained on general web garbage would be safe &

2022-12-18 16:38:03 @pgcorus Then you are failing. I'm speaking from my expertise as a computational linguist who has studied &

2022-12-18 15:06:37 RT @tala201677: "However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the wr…

2022-12-18 14:56:28 @pgcorus Commenting on your own tweets, I see. Points for self-awareness, I guess. https://t.co/A4lnc6U3va

2022-12-18 14:08:09 RT @emilymbender: Just in case anyone is unclear, here is why ChatGPT (or anything like it) will not replace search engines. There is cer…

2022-12-18 14:07:37 RT @emilymbender: Finally got around to reading this one, and yeah, the NYT continues to be trash at covering AI: "the existence of a high…

2022-12-18 05:48:55 @TaliaRinger https://t.co/EYt2aegJ8v

2022-12-18 03:50:35 @pgcorus And clearly you don't know who I am. "Google defender." Hah.

2022-12-18 03:48:27 @pgcorus Thank you for sharing your professional opinion as ... checks notes ... someone who writes python and does philosophy of tech. Clearly qualified to determine if automatic bullshit generators would be a safe and effective approach to therapy.

2022-12-18 03:47:23 Just in case anyone is unclear, here is why ChatGPT (or anything like it) will not replace search engines. There is certainly room (and urgent need) to improve on Google's for-profit model, but this ain't it. https://t.co/FYymgEF0FG

2022-12-18 03:43:11 If you want to be informed about what's actually going on with so-called "AI", get your news elsewhere.

2022-12-18 03:42:46 "Assessing ChatGPT’s blind spots and figuring out how it might be misused for harmful purposes are, presumably, a big part of why OpenAI released the bot to the public for testing" "a chatbot that some people think could make Google obsolete" #AIhype >

2022-12-17 00:24:46 @TaliaRinger Vaccine side effects?

2022-12-16 23:36:56 RT @SashaMTL: After the dazzling success of our "AI researchers as moths/butterflies" thread, I'm psyched to announced that it is now a @hu…

2022-12-16 15:01:06 And a core problem in AI-infected-NLP (and other areas of AI) is that people seem to believe that fundamental unsoundness (LMs being designed to just make shit up) is something that will surely be fixed in exciting future work!! https://t.co/cbGdxJ6ynu

2022-12-16 14:59:23 From further down @percyliang 's thread, apparently it's not really "for" anything yet. All of that is "future work". This is the core problem with the "foundation models" conceptualization. They are impossible to evaluate. https://t.co/P3ZqpBZ8Mm >

2022-12-16 14:57:56 More tales from the front of #NLProc's evaluation crisis. What is PubMedGPT actually for and why are medical licensing exam questions a legitimate test of its functionality in that task? >

2022-12-16 14:44:20 RT @ruthstarkman: @emilymbender critique:"Hallmark card analogy is particularly apt: ChatGPT’s output is frequently anodyne." Spot on,…

2022-12-16 14:06:18 RT @emilymbender: Here's yet another installment in my series of annotated field guides to #AIhype. This time, the toxic spill was in the W…

2022-12-16 05:10:06 Here's yet another installment in my series of annotated field guides to #AIhype. This time, the toxic spill was in the Washington Post: https://t.co/nqAzzvsXbD

2022-12-15 20:00:14 @_katiesaurus_ :'(

2022-12-15 19:51:57 @_katiesaurus_ Yikes. No.

2022-12-15 19:44:08 @mrdrozdov a. It would be, if you had any way to actually trace the ideas. b. Yes.

2022-12-15 18:57:15 Lots of folks are QT-ing this thread saying: It's okay to use this tool because I know how to use it and I will evaluate the ideas before incorporating them into my paper. And yet, none of those folks are addressing this point: https://t.co/OwCfwH8bDZ

2022-12-15 17:05:28 RT @mmitchell_ai: Welcome to the list to my wonderful colleagues @SashaMTL and @IreneSolaiman ! So grateful to have you in my life.

2022-12-15 15:52:39 RT @timnitGebru: Effective altruists read a critique by one of the "poor Africans" they can't stop talking about, on the backs of whom they…

2022-12-15 15:09:01 Also, wow does Forbes know how to pick 'em. And to display one's affiliation with the likes of Palantir ... that's definitely a choice. https://t.co/euNdh6qtGg

2022-12-15 15:06:53 My dude: Connecting one's work to what has gone before is part of science (and all scholarship). Anyone can do this! It's not gatekeeping to say: yes, come do the science, but no, your so-called "AI" is not a scientist. https://t.co/nFbmNl3NwA

2022-12-15 14:09:28 RT @emilymbender: I got to have a really interesting conversation with @KUOW 's @libdenk yesterday on @SoundsideKUOW about #ChatGPT. The te…

2022-12-15 14:08:07 RT @emilymbender: We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers.…

2022-12-15 01:18:01 @complingy @aryaman2020 I'd be very curious about this too, but I am quite skeptical. Also, what is "weakly" in practice?

2022-12-14 22:42:23 @AmericanGwyn Here's how we define "Stochastic Parrots" in the paper that introduced the phrase (if that's where you're seeing it): The word that is cut off (b/c it's on the preceding page) is "contrary". https://t.co/0hpxzwthYX

2022-12-14 22:10:31 @snarkbat If you aren't already following @JewWhoHasItAll I strongly recommend that account as a balm in situations like this. Also, what an enormous drag.

2022-12-14 21:45:56 @KUOW @libdenk @SoundsideKUOW "Bender cautions that even as the prowess of the technology can seem immense at first, there are limitations with how much the program actually understands what it puts out." Not really what I said. "Limitations" is still overselling it. It. Does. Not. Understand.

2022-12-14 21:35:53 I got to have a really interesting conversation with @KUOW 's @libdenk yesterday on @SoundsideKUOW about #ChatGPT. The text of this article downplays my critique -- please listen to the recording for the full story. https://t.co/0SEWDOnOky

2022-12-14 18:54:01 @tdietterich @Azure Demand = pointless ChatGPT queries. Powered down = that electricity can go somewhere else on the grid.

2022-12-14 18:46:31 @tdietterich @Azure Actually not relevant, because if that carbon neutral energy weren't being used up for pointless ChatGPT queries, it could be being used for something else.

2022-12-14 16:41:50 @morenorse https://t.co/MgxgwQwBJC

2022-12-14 16:00:45 RT @neilturkewitz: Excellent by @emilymbender I especially appreciated “Your job there is to show how your work is building on what has…

2022-12-14 15:52:10 p.s.: How did I forget to mention 7- As a second bare minimum baseline, why would you use a trained model with no transparency into its training data? https://t.co/BvmZI8Gj9F

2022-12-14 15:00:26 RT @lilianedwards: This is an incredibly sensible thread about how GPT3 can't write your academic article ( or please note, your dissertati…

2022-12-14 14:51:03 6- As a bare minimum baseline, why would you use a tool that has not been reliably evaluated for the purpose you intend to use it for (or for any related purpose, for that matter)? /fin

2022-12-14 14:49:43 5- I'm curious what the energy costs are for this. Altman says the compute behind ChatGPT queries is "eye-watering". If you're using this as a glorified thesaurus, maybe just use an actual thesaurus? https://t.co/MenJK0tPQS >

2022-12-14 14:49:23 4- Just stop it with calling LMs "co-authors" etc. Just as with testifying before congress, scientific authorship is something that can only be done by someone who can stand by their words (see: Vancouver convention). https://t.co/TyvuX65ft0 >

2022-12-14 14:49:01 3- It breaks the web of citations: If ChatGPT comes up with something that you wouldn't have thought of but you recognize as a good idea ... and it came from someone else's writing in ChatGPT's training data, how are you going to trace that &

2022-12-14 14:48:47 2- ChatGPT etc are designed to create confident sounding text. If you think you'll throw in some ideas and then evaluate what comes out, are you really in a position to do that evaluation? If it sounds good, are you just gonna go with it? Minutes before the submission deadline?>

2022-12-14 14:47:55 The result is a short summary for others to read that you the author vouch for as accurate. In general, the practice of writing these sections in #NLProc (and I'm guessing CS generally) is pretty terrible. But off-loading this to text synthesizers is to make it worse. >

2022-12-14 14:47:43 1- The writing is part of the doing of science. Yes, even the related work section. I tell my students: Your job there is to show how your work is building on what has gone before. This requires understanding what has gone before and reasoning about the difference. >

2022-12-14 14:47:01 We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers. So, I guess we need a thread of why this is a bad idea: >

2022-12-14 14:46:42 RT @timnitGebru: Read this by @KaluluAnthony of https://t.co/bf2uXxretK. "EA is even worse than traditional philanthropy in the way it ex…

2022-12-14 14:32:18 I'm guessing that both numbers are very small, but I'm interested in the difference &

2022-12-14 14:06:03 RT @emilymbender: "we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer ne…

2022-12-14 14:05:56 RT @emilymbender: Who's ready for Episode 7 of Mystery AI Hype Theater 3000? Tune in live tmr as @trochee joins me and @alexhanna and will…

2022-12-14 14:05:45 RT @emilymbender: Come work with us at the University of Washington! Assistant Professor of Humanities Data Science https://t.co/UlALX7Fj

2022-12-14 13:11:44 RT @dmonett: What a quote! Nor thinking that "playing" with it will give us more insight into the technology that lies behind nor into the…

2022-12-14 01:53:42 Come work with us at the University of Washington! Assistant Professor of Humanities Data Science https://t.co/UlALX7FjyV

2022-12-14 01:10:27 Who's ready for Episode 7 of Mystery AI Hype Theater 3000? Tune in live tmr as @trochee joins me and @alexhanna and will almost certainly use the phrase "Gish Gallop". 9:30-10:30am Pacific, Dec 14, 2022 https://t.co/MTvIHSvsiW #MAIHT3k #MathyMath #AIhype #nlproc

2022-12-13 23:16:26 RT @emilymbender: Narrator voice: LMs have no access to "truth", or any kind of "information" beyond information about the distribution of…

2022-12-13 19:50:26 "we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world." -- me &

2022-12-13 19:44:35 RT @IAI_TV: “It is urgent that we recognize that an overlay of apparent fluency does not entail accuracy, informational value, or trustwort…

2022-12-13 18:22:00 It's particularly galling that this is on a piece that *I co-authored* for a popular outlet. And I'd like to promote it. But I refuse until this error is fixed. (And it's past the end of the workday over there, so we'll see.)

2022-12-13 18:18:05 Like, a name is a name right? Just because you might know Rebecca who goes by Becky doesn't mean you can assume that all Rebecca does. And just because universities in the UK allow those two alternate forms doesn't mean it works that way in the rest of the world!

2022-12-13 18:17:05 I know systems differ, but I really really really wish folks in other countries could learn that the University of Washington and Washington University are NOT the same institution. That, or look at my web page before publishing something about me and just copy what's there.

2022-12-13 15:36:45 RT @mixedlinguist: It’s not “Gen Z slang” it’s freaking old ass AAE that young white people just got put on to. Can we at least start recog…

2022-12-13 14:03:49 I'm thinking @JewWhoHasItAll might help provide some insight into this curious cultural phenomenon.

2022-12-13 14:02:36 This is such a bad idea on so many levels, but I'd like to add one more: "We'll pay the ticket, even if you lose" doesn't account for impacts on insurance rates or any other system that speeding tickets feed into. https://t.co/wnzqWJ7gjy

2022-12-13 13:51:00 Another: https://t.co/jMqvqig24r "Christmas Issue"? Right. https://t.co/sc8TSAvof4

2022-12-12 20:08:23 @VeredShwartz @mdredze If they end up quoting only men, though, even after they talked to you, that's definitely worth calling out.

2022-12-12 18:37:16 @asayeed I don't buy it. This is such a hackneyed trope at this point. I think journalists can and should do better.

2022-12-12 18:27:17 @asayeed I would prefer that more readers *did* perceive it that way. Because it is a waste of time and it harms the credibility of the journalist. In general, we need higher AI literacy in the public and when journalists lean into #AIhype they are working in the opposite direction.

2022-12-12 18:20:48 RT @rapella: “And also: This is a news source willing to print synthetic BS.” #AIhype #NLProc #MathyMath #Journalism #ethics https://t…

2022-12-12 18:14:39 RT @emilymbender: So when I am told after the fact: "Those last sentences? They were written by a machine!" My reaction isn't "Wow cool" bu…

2022-12-12 18:07:27 @asayeed This isn't about not appreciating the coolness of tech. This is about standing against #AIhype and against practices that lean into fooling people. There are plenty of other ways to report on this without doing this (incredibly boring and furthermore overdone) trick.

2022-12-12 17:34:15 RT @CriticalAI: Totally agree. My own feeling is, "So what - are you trying to undercut the whole point of the article." It's such a cliche…

2022-12-12 16:56:01 So when I am told after the fact: "Those last sentences? They were written by a machine!" My reaction isn't "Wow cool" but "You just wasted my time." And also: This is a news source that is willing to print synthetic BS. #AIhype #NLProc #MathyMath #Journalism #ethics

2022-12-12 16:55:26 When I give my time and attention to the printed word, it is to learn something about how someone else sees the world or something about what they have learned and want to share with the world. >

2022-12-12 16:54:56 Yet another news story on #ChatGPT (which I was a source for) that starts with text generated by ChatGPT. I find that move insulting, in fact. https://t.co/9sJb9qm2uj >

2022-12-12 15:19:25 @luketrailrunner Thanks, Chris! Specifically on trying to use these things for search, see also: https://t.co/rkDjc4kDxj

2022-12-11 14:45:31 RT @a_derfelGazette: 1) Hi, everyone. I wanted to share with you the scary recent experience I had with my original Twitter account, @Aaron…

2022-12-11 14:16:31 RT @emilymbender: Mystery AI Hype Theater 3000, Episode 4 -- Is AI Art Actually "Art"? With @shengokai @WITWhat and @negar_rz hosted by @al…

2022-12-11 14:16:28 RT @emilymbender: Coming soon: Recordings of Episodes 5 (#Galactica) and 6 (xrisk essay contest) Episode 7, with @trochee @alexhanna a…

2022-12-11 06:29:27 RT @mmitchell_ai: "...without naming and recognizing the engineering choices that contribute to the outcomes of these models, it’s almost i…

2022-12-11 05:36:07 @thedansimonson @TaliaRinger Alas, so many of them do it for free --- seemingly believing they are providing some valuable pro-bono service.

2022-12-11 04:54:10 @thbrdy @TaliaRinger If you'd like to understand my stance, here are many things I wrote about #AIhype: https://t.co/uKA4tuv4jF

2022-12-11 03:45:31 RT @FAccTConference: Submit your excellent work to #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan 3…

2022-12-11 03:06:37 @TaliaRinger I can see that. OTOH, I worry that OpenAI is somehow putting on a mantle of "ethical development" that is actually completely undeserved.

2022-12-11 03:03:21 @TaliaRinger I mean ... "a preview of progress" also seems like a wild overclaim to me. Progress towards what? And "lots of work to do on truthfulness": it's designed to make shit up. That doesn't seem like a good starting point for truthfulness.

2022-12-10 21:18:49 RT @glichfield: Also, some people have talked about chatGPT being a search killer. To me the bigger concern is that it becomes a search ~po…

2022-12-10 14:58:02 @shengokai It was so amazing having you all on!!

2022-12-10 14:57:51 RT @shengokai: This was actually one of the most fun and insightful conversations I've had all semester and I was really thankful for the o…

2022-12-10 14:55:27 @athundt @shengokai @WITWhat @negar_rz @alexhanna Thanks for pointing that out, Andrew. We'll look into it.

2022-12-10 14:47:19 @jasonbaldridge It's a huge loss for the world that the distribution of power means that those who could be brilliantly envisioning uses of this kind of technology that might actually benefit their communities instead find themselves having to spend so much time cleaning up others' messes.

2022-12-10 14:45:39 @jasonbaldridge Likewise, those of us out here pointing out the ways in which these supposedly general models are oppression-reproducing machines wouldn't need to be doing that if systems weren't being developed and foisted on the world. https://t.co/2Z06E63T6F >

2022-12-10 14:43:51 @jasonbaldridge Also, I gotta add: "sadly acrimonious twitter discussions" sounds very both-sides-y to me. Those of us out here pushing back on ridiculous claims of LLMs being "intelligent" etc etc wouldn't need to, if the claims weren't there. >

2022-12-10 14:40:13 @jasonbaldridge I wonder, though, if you have any examples of where they are used sensibly --- and if any of those actually involve using them generatively, rather than (as in previous uses of LMs) in choosing between outputs that come from some constrained, task-specific model?

2022-12-10 14:39:13 @jasonbaldridge I totally agree that safety is in the details of application-specific development. And thus a huge part of the problem with LLMs is that they are being put forward as "general" or "foundation" models that can be used for any task that can take place in language. >

2022-12-10 14:07:03 RT @SashaMTL: We need to stop conflating open/gated access and opensource. ChatGPT is *not* open source -- we don't know what model is und…

2022-12-10 14:02:19 RT @emilymbender: The bitter lesson is how much of the field is willing to accept a system that produces form that *looks like* a reliable…

2022-12-09 23:53:27 @trochee @alexhanna All episodes recordings (as they are available) can be found here: https://t.co/6UCGlE6mx3 #MAIHT3k #AIhype

2022-12-09 23:50:17 Coming soon: Recordings of Episodes 5 (#Galactica) and 6 (xrisk essay contest). Episode 7, with @trochee @alexhanna and me. Join us live on Wednesday Dec 14, 9:30-10:30am Pacific Time https://t.co/MTvIHSvsiW #AIhype #MAIHT3k #MathyMath https://t.co/ZZYtdZYKDn

2022-12-09 23:48:25 Mystery AI Hype Theater 3000, Episode 4 -- Is AI Art Actually "Art"? With @shengokai @WITWhat and @negar_rz hosted by @alexhanna and me :) https://t.co/UgEVwIAgvX #AIHype #MathyMath

2022-12-09 23:31:06 RT @annargrs: Anybody attending @emnlpmeeting #EMNLP2022 virtually? Could you share your experience? E.g. - does Underline still feel slow?…

2022-12-09 22:28:17 RT @tveastman: Because it's Neal Stephenson, you have to swap out some words like "reticulum" for "internet" and "crap" for "spam". But he…

2022-12-09 17:43:44 @david_darmofal @owasow Jinx.

2022-12-09 17:43:12 @owasow ... with signs encouraging food fights.

2022-12-09 17:36:40 RT @cmhenry_: @emilymbender https://t.co/8D1O4RFoL9

2022-12-09 17:30:04 OP: We did it without paying attention to any of the previous science! This is very "Wile E Coyote" before he realizes he's standing on nothing, and I want to make a meme like that, but can't find the "before he realizes" images. Oh well.

2022-12-09 17:21:59 RT @rajiinio: Me &

2022-12-09 17:21:42 RT @Abebab: "We critique because we care. If these companies can't release products meeting expectations of those most likely to be harmed…

2022-12-09 16:38:06 @Abebab @rajiinio "But without naming and recognizing the engineering choices that contribute to the outcomes of these models, it becomes almost impossible to acknowledge the related responsibilities." https://t.co/wrFy2Q4VDY

2022-12-09 16:37:30 Essential reading from @Abebab and @rajiinio "Model builders and tech evangelists alike attribute impressive and seemingly flawless output to a mythically autonomous model, a technological marvel." https://t.co/wrFy2Q5ttw

2022-12-09 16:35:36 RT @vukosi: Abeba Birhane and Deborah Raji >

2022-12-09 14:00:51 @krustelkram I hadn't noticed the EA connection. That totally tracks --- "benefits" indeed.

2022-12-09 14:00:10 @Thom_Wolf https://t.co/uGdFohNlUS

2022-12-09 13:59:17 The bitter lesson is how much of the field is willing to accept a system that produces form that *looks like* a reliable solution to a task as actually doing the task in a way that is interesting and/or reliable. Are we doing science or just standing in awe of scale? https://t.co/ChNQqNKtM0

2022-12-09 01:11:52 @spacemanidol @amahabal No, it's not about the name. It's about the way the systems are built and what they are designed to do.

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP saw fit to retweet, btw.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 "People are using it" does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is your demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-11-15 17:30:12 Oh and of course: the dude who did this, who fired the entire human rights team, who fired the excellent META team, &

2022-11-15 15:31:19 RT @emilymbender: So the dude who considers 2FA bloat that can just be switched off also runs the car company famous for updating vehicles…

2022-11-15 01:13:53 RT @TaliaRinger: I've twice seen people almost give up on applying from not being able to find a third letter writer, and for real applying…

2022-11-14 20:47:58 So the dude who considers 2FA bloat that can just be switched off also runs the car company famous for updating vehicles 'over the air'? That's reassuring...

2022-11-14 18:39:59 I'm really looking forward to this! I'm excited to be talking to this audience and really enthusiastic about the format that @mmitchell_ai and I have planned. https://t.co/0QpRxBNDXh

2022-11-14 18:35:46 RT @ruthstarkman: Here's the registration for @emilymbender and @mmitchell_ai talk @Stanford Dec 2 "Collective Action for Ethical Tech…

2022-11-14 00:20:43 RT @DrMonicaCox: https://t.co/8weOTUycov

2022-11-13 06:02:24 RT @timnitGebru: I’d love to work on a list here to expose just how much influence this cult has in “AI safety” and how complicit the “elit…

2022-11-13 00:02:57 RT @sjjphd: If you’ve organized, taught, built relationships, made funny jokes, shared original content or intellectual property on this ap…

2022-11-11 16:40:47 RT @ChanceyFleet: For someone like me, with disparate interests and unruly curiosity, Twitter has been a place to learn from the brightest…

2022-11-11 15:57:36 Does anyone know what happens to @threadreaderapp and esp the html pages it has generated if Twitter goes down? Basically, I'm curious if unroll requests are a good way to 'backup' threads I'd like to keep visible on the web, like those linked here:https://t.co/uKA4tuv4jF

2022-11-11 15:54:59 @threadreaderapp unroll

2022-11-11 15:53:51 @threadreaderapp unroll

2022-11-11 15:52:44 @threadreaderapp unroll

2022-11-11 04:34:01 RT @ihearthestia: Go into your Twitter settings and disconnect your google account, all the apps that are connected to Twitter, disconnect…

2022-11-11 03:44:55 @TaliaRinger I was just gonna say, you might be interested in https://t.co/IWoXj0JXXJ. Off to follow you now!

2022-11-11 03:44:25 @TaliaRinger Yes! And you can follow hashtags to discover people with shared interests. The choice of server matters some, but it's not everything. It gives you your local neighborhood + your view onto the rest of it (determined, I gather, by who you and others on your instance follow).

2022-11-10 17:37:28 (It's obvious and uncomfortable every time this happens to me. I never introduce myself as "an American linguist" and the Howard and Francis Nostrand Professor title is a time-bound thing which now has moved along to one of my colleagues.)

2022-11-10 17:36:22 PSA to conference organizers: Please don't use a Wikipedia page to draft a bio for your invited speakers. Either ask them or go to their own web page to see how they present themselves.

2022-11-10 17:05:02 RT @mattbc: Increasingly concerned the servers are going to go down and we simply won’t be able to ever open this appIf you haven’t downl…

2022-11-10 14:37:32 RT @emilymbender: It's really nice to watch my network grow over on Mastodon and I'd like to encourage more people to check it out. It seem…

2022-11-10 14:37:07 RT @ruthstarkman: Envisioning Paths: Individual and Collective Action for Technology Development. @emilymbender and @mmitchell_ai …

2022-11-10 12:49:27 RT @nsaphra: I'm now on the academic job market! I work on understanding and improving training for NLP models, with a focus on studying ho…

2022-11-09 23:48:35 RT @IBJIYONGI: Literally @TwitterSupport is going to get people killed

2022-11-09 23:32:16 Take the time to learn the differences in affordances &

2022-11-09 23:31:15 It's really nice to watch my network grow over on Mastodon and I'd like to encourage more people to check it out. It seems it's best to think of it as a Twitter alternative, rather than a Twitter replacement.>

2022-11-09 22:42:52 RT @DAIRInstitute: For our 1 year anniversary virtual events, we'll have talks, panels and breakout discussions with attendees. Sign up her…

2022-11-09 22:14:36 RT @timnitGebru: They're looking at this announcement from Future Fund which was like tell us why we're wrong about superintelligence and w…

2022-11-09 21:15:25 @timnitGebru @mmitchell_ai @mcmillan_majora And it remains a crying shame that three of our co-authors were prevented by their employer from getting recognition for their work on this paper --- even as the same actions by said employer put such a spotlight on it.

2022-11-09 21:14:39 @timnitGebru @mmitchell_ai @mcmillan_majora That said, many of these citations are spurious: People saying "yeah yeah ethical issues" or talking about environmental impact, when they really should be citing the people we cite. (Our main addition there was to bring in the env racism angle.)>

2022-11-09 21:14:20 Google Scholar certainly isn't a direct reflection of any reality, but I am still tickled that Stochastic Parrots is at 999 citations there. cc @timnitGebru @mmitchell_ai @mcmillan_majora >

2022-11-09 20:50:54 @faineg Tesla's "full self driving" mode would be a contender tho.

2022-11-09 18:56:26 RT @_alialkhatib: my hunch is that in 3 months it'll be harder to decipher someone's checkmark than it is to understand how mastodon works.

2022-11-09 18:03:49 RT @timnitGebru: I'm live tooting this on Mastodon :-) Will post here after.

2022-11-09 16:47:00 RT @kenarchersf: Starts in 45 minutes. https://t.co/mWi7drwSnU

2022-11-09 16:05:55 RT @DAIRInstitute: This will be in 1.5 hours. https://t.co/HZEA4hF3yW

2022-11-09 03:51:55 @Abebab ffs

2022-11-08 22:53:45 #Enough

2022-11-08 21:48:55 RT @DAIRInstitute: Join us tomorrow (Wednesday). https://t.co/HZEA4hXcN4

2022-11-08 21:42:28 RT @emilymbender: Episode 5 is coming this Wednesday! Join me and @alexhanna for more Mystery AI Hype Theater 3000 on Wednesday 11/9, 9:30a…

2022-11-08 19:19:44 RT @emilymbender: My voting experience yesterday (because runner, because Seattle, because WA is #VoteByMail) USians: If you haven't al…

2022-11-08 18:59:29 (I asked this first on Mastodon, but thought it might be valuable to put the query out here, too.)

2022-11-08 18:59:04 Is anyone tracking the vocabulary springing up around #TwitterMigration? I've seen "twefugees" and "birdsite expats" at least. I think it could be quite interesting how the metaphors relating to migrants (w/all their inherent connection to colonialism &

2022-11-08 18:33:18 @Abebab Totally! The thread below is about a slightly different angle on ML for mental health, but I think a lot of the same criticisms apply:https://t.co/1gX8URBsz6

2022-11-08 15:00:27 My voting experience yesterday (because runner, because Seattle, because WA is #VoteByMail) USians: If you haven't already done so, today's the day! #VOTE #VOTE #VOTE https://t.co/ZnikG1qTnp

2022-11-08 05:57:38 @CosNeanderthal There are PLENTY of good discussions to have about interdisciplinarity. For ex: What are productive means of structuring collaborations to incorporate domain expertise? But these START from acknowledging that domain expertise is necessary.

2022-11-08 05:55:35 In sum: CS is over-funded. Not only is domain expertise necessary for defining legitimate tasks, we need to stop setting up the financial incentives such that the goals of everything are about advancing the state of knowledge of CS/ML/"AI".

2022-11-08 05:53:48 And: Yes, yes it is. Suggesting that that's a topic for debate sounds like they realize that just maybe ML can't go around claiming to have "solved" made up problems forever but aren't ready to face that reality. >

2022-11-08 05:50:40 Just turned down an invitation to be on a panel with topics including "Is domain expertise, like linguistics, necessary for the design of #NLProc benchmarks?" My dude, did you really just invite me to be on a panel to debate whether I should be on the panel?

2022-11-08 00:28:03 @rajiinio @SashaMTL (Not saying that I think you're suggesting lowest common denominator, but that's one way to read "things are different in different countries".)

2022-11-08 00:27:41 @rajiinio @SashaMTL And I think that NeurIPS can totally set its own standards, through community process, and they don't have to be (and shouldn't be) lowest common denominator.

2022-11-08 00:26:39 @rajiinio @SashaMTL I don't doubt that you see ethical considerations as consonant with and as important as technical ones. But the paragraph reads like you're trying to appease those who don't. I wish it didn't have to be that way.>

2022-11-07 20:44:33 @EricFos Not known to me, but the webpage has good transparency about what they are up to, including instructions for delinking your twitter account from the service afterwards.

2022-11-07 20:38:15 RT @emilymbender: It is beyond time for the ML community to drop this idea that ethical considerations are somehow at odds with "technical…

2022-11-07 20:18:25 I think I can read that blog post as the ethics committee chairs trying really hard to bring other people on board, and throwing the recalcitrant techbros a bone. But it shouldn't have to be this way. The "But my AI progress!!!" types need to just get over themselves.

2022-11-07 20:17:30 It is beyond time for the ML community to drop this idea that ethical considerations are somehow at odds with "technical merit" and that ethical review is somehow hampering "scientific progress". (As. If.)>

2022-11-07 20:15:56 I applaud the #NeurIPS2022 ethics committee for their transparency and thoughtfulness in this blog post: https://t.co/sUzuD5SsWt However: >

2022-11-07 17:37:08 Q for English speakers. Without looking it up, which of these *sounds* bigger? #Linguistics

2022-11-07 16:31:14 RT @rajiinio: Hard to believe but the @NeurIPSConf Ethics Review process is over - and has completed its third year! In a blog post, with…

2022-11-07 16:02:00 RT @schock: Is there a Mastodon Foundation yet? Since it seems like there is finally real possibility for significant migration, we are goi…

2022-11-07 16:01:39 Thinking about checking out Mastodon, but wondering if any of the folks you follow here are there already? @debirdify doesn't require you to have a Mastodon account first! Check it out here: https://t.co/J25Rg0qEli #TwitterMigration

2022-11-07 14:57:40 Episode 5 is coming this Wednesday! Join me and @alexhanna for more Mystery AI Hype Theater 3000 on Wednesday 11/9, 9:30am Pacific.https://t.co/VF7TD6tw5c https://t.co/2VWNZtZXmt

2022-11-07 14:56:27 Mystery AI Hype Theater 3000 Ep. 4 - Is AI Art Actually Art? where @alexhanna and I bring on @shengokai @negar_rz and @WITWhat and the level of discourse is instantly way more sophisticated than Eps 1-3.https://t.co/FIxd1gPsnZ

2022-11-07 14:48:58 @ChanceyFleet I put mine in my display name and wondered if that would be annoying for screen readers. I see that the way you did yours is nice and short and not redundant, so I'll do the same!

2022-11-07 14:48:19 RT @natematias: Folks on Twitter are experiencing what STS scholars call "infrastructure inversion" - a jolt of recognition that invisible…

2022-11-07 05:44:44 @LkjonesSOC Thank you!

2022-11-07 02:06:24 @SashaMTL @huggingface

2022-11-07 01:42:07 @SashaMTL @huggingface

2022-11-07 00:24:05 Again I see people describing me as an "AI" person. I am not. People who are might (or might not) have something to learn from what I have to say, but that's not the same thing as me being an AI person. https://t.co/0CJjhf7uhM

2022-11-06 21:18:39 RT @shengokai: “A citational order is a social order, and like all social orders it is defended as a moral order. To refuse to cite the rig…

2022-11-06 20:37:48 RT @shengokai: In case it was not clear from my tweets over the last two days “do not cede the forum to bigots” is not just instructions fo…

2022-11-06 15:48:56 @HadasKotek Connected! First as a list of posts on your profile and second through cross-linking if you put in the hyperlinks.

2022-11-06 15:42:01 @HadasKotek https://t.co/kTBw3sywiC is an easy interface for blog posts... and then you can just link to that.

2022-11-06 14:44:00 @schock @ramakmolavi I think a key conceptual sticking point is that the separate servers aren't separate social networks---or at least not isolated networks, because they are federated.

2022-11-06 14:23:11 @aeryn_thrace I'm not leaving Twitter (yet). There are so many people I am learning from #onhere But I find it's worthwhile for me to put the time into also creating another space in case this becomes unusable (or the site just stops working all together).

2022-11-06 14:03:16 RT @gerardkcohen: I am officially no longer the Engineering Manager for the Accessibility Experience Team at Twitter. I have words.

2022-11-06 03:16:41 RT @RJDeal: @alwaystheself @andizeisler I’m putting people I follow on a list so I can use my list as my feed. I’m just adding people to th…

2022-11-06 01:34:52 RT @rachelmetz: Some great thoughts from @emilymbender, who I was thrilled to find on mastodon along with a lot of academic (and specifical…

2022-11-06 00:40:04 @joshisanonymous @Nanjala1 Well, not fully automatically. I had to do a migration step to make that happen. But it was pretty easy!

2022-11-06 00:35:24 p.s. I'm not leaving the birdsite yet. Just working to also build up an alternative, in case this implodes like it looks like it's going to.

2022-11-06 00:30:52 Anyway, I hope to see many of you there! I'm @emilymbender@dair-community.social

2022-11-06 00:30:27 When I first joined Twitter, I didn't post for two years, and mostly used it to follow conferences I wasn't at. Over in the #fediverse, I'm going slow, hoping to learn the ropes before doing too much, but also trying to build community, so tooting some.>

2022-11-06 00:28:27 This, along with the lack of full-text search (only hashtags are searchable) appear to be design choices meant to dampen virality and nudge the interactions towards contentful exchanges rather than dunk-fests.>

2022-11-06 00:27:28 There aren't QTs, though you can get the URL for a post and paste into a post. But this isn't the same as a QT not least because you can't get from a post to other posts that link to it. >

2022-11-06 00:26:09 @Nanjala1 Yes -- info a bit lower in the thread. Also, you can definitely follow people on other servers, so that's fine too.

2022-11-06 00:25:07 And then, it's back to learning mode. Things are a little different in the #fediverse, not least because of some design decisions in the software. It's wonderfully easy to create "content warnings" so you can, for example, post a #wordle result but require a click-through.>

2022-11-06 00:24:02 As for things being overloaded, the first thing is patience. But also: it's worth choosing a smaller server to join, rather the ginormous ones. And also: you can pretty easily change servers. I did this to redirect my followers and it worked like a charm:https://t.co/1CLtpi7eXW

2022-11-06 00:22:03 Honestly, it's kind of nice to start fresh and have fewer people I'm following. And it's *wonderful* to once again be in a space with a reverse-chron timeline that I can just scroll back until I hit stuff that I've seen and know that I'm caught up :)>

2022-11-06 00:21:14 For things being too quiet---the trick is to find people to follow :) A good starting point is this service, which (heuristically) combs through the people you follow on Twitter, and finds those that have announced mastodon (or other fediverse) accounts:https://t.co/J25Rg0qEli

2022-11-06 00:18:02 At first it was quiet (not following enough people), and the server I was on (https://t.co/7lqqF6ZxN8) got slammed with new accounts and it was all too slow to be usable. But both of those things are fixable!>

2022-11-06 00:17:17 A few thoughts about #TwitterMigration : I've had a Mastodon account (now at @emilymbender@dair-community.social) for a couple of weeks. It takes some effort to get started, but I think it's worth it!>

2022-11-06 00:16:38 @Lasha1608 I see people using the #nlproc hashtag over there, so that would be a way to find some! (And I'm hoping we'll just use #nlp and make it ours...)

2022-11-05 22:59:53 @marc_schulder @AngloPeranakan Yes come join us there! You don't have to delete your Twitter account to do so....I'm @emilymbender@dair-community.social

2022-11-05 05:08:36 RT @haydenfield: Members of Twitter's ethical AI team learned they had been laid off early this morning, according to tweets by those affec…

2022-11-05 05:06:58 RT @alexhanna: A word about the META team build by @ruchowdh @quicola and others -- This was probably one of the last teams at a big tech…

2022-11-04 21:48:55 RT @asayeed: fin de twiècle

2022-11-04 18:57:07 @susansternberg @DAIRInstitute That might be about your browser client. That's what mastodon handles look like...

2022-11-04 16:38:45 @prem_k That should really depend on the account you are following. I don't believe I have mine set up that way! (Though I did just switch instances, so maybe that did something?)

2022-11-04 16:34:41 @heartsalve Btw, I've collected some of my responses to #AIhype here:https://t.co/uKA4tuv4jF

2022-11-04 15:56:17 Please find me at @emilymbender@dair-community.social

2022-11-04 15:32:33 #TwitterMigration: Please find me at @emilymbender@dair-community.social

2022-11-04 15:09:20 @heartsalve So far, my Mastodon experience is AGI bro free, which is lovely.

2022-11-04 15:08:45 @complingy @yuvalpi NPI to the rescue, indeed :)

2022-11-04 14:48:35 I would be really surprised if this site retains the value it had, but I'm not deleting my account yet. At the same time, I'm also starting to use Mastodon, and happy to see community starting to form over there.

2022-11-04 14:47:35 It's devastating to watch on Twitter as Twitter is getting gutted. I wish strength to all of those going through this.

2022-11-04 05:09:40 RT @JortsTheCat: Does anyone know any state or local representative from CA? I have a quick question https://t.co/jgNAyieo1g

2022-11-03 16:33:10 @bobirakova @LangMaverick @LeonDerczynski I know much less about (2), but it does seem very useful to have different lenses in use when creating taxonomies/typologies.

2022-11-03 16:32:21 @bobirakova @LangMaverick I believe @LeonDerczynski is also thinking about taxonomy/typology of harms.>

2022-11-03 16:32:02 @bobirakova @LangMaverick There's another one, more extensive and maybe closer to concerns of the law here: Lefeuvre-Halftermeyer A, Govaere V, Antoine J, Allegre W, Pouplin S, et al. 2016. Typologie des risques pour une analyse éthique de l'impact des technologies du TAL. Rev. TAL 57(2):47–71. https://t.co/cZ4ZvmQBxI

2022-11-03 16:30:16 @bobirakova For (1), there are definitely attempts, including one that I did in this paper (w/@LangMaverick ):https://t.co/avrAZKv2dk>

2022-11-03 04:21:20 RT @complingy: #CompLing alert: For anyone applying to North American grad programs, here are 35 Linguistics departments with #NLProc(-adja…

2022-11-02 22:46:10 RT @lopalasi: #RiskRecidivism tools like #COMPAS or #RisCanvi are not “biased”. They automatize *discrimination*. And they do that not beca…

2022-11-02 18:56:31 RT @alexhanna: You can catch the recording of our stream with @shengokai @negar_rz and @WITWhat about AI Art over at YouTube! And mark you…

2022-11-02 16:03:22 @jmhenner My son who is a junior at UCLA consistently refers to his classmates (and implicitly himself) as "kids" and each time I have to resist learning that pattern...

2022-11-02 15:54:10 @LeonDerczynski @encoffeedrinker That is beyond the pale. Also, I am worried for the OP --- at minimum it looks like they aren't receiving appropriate mentoring.

2022-11-01 22:01:25 @DavidSKrueger I don't think you've understood my remark. How is it a race to the bottom if instead we refuse to build it and/or regulate against its being built?

2022-11-01 21:49:41 @IEthics My best idea is to call it out when I see it, as I did here.

2022-11-01 21:38:46 @alexhanna IKR?! @timnitGebru and @adrienneandgp were *so* polite! I don't think I could have held back...

2022-11-01 20:57:35 RT @DAIRInstitute: If you missed DAIR fellow @adrienneandgp and founder @timnitGebru discussing their Noema Magazine article co-written wit…

2022-11-01 20:57:21 This was so cool! (Though wow the mansplaining was strong with the callers, especially the first two.) https://t.co/ud8SeUtp9d

2022-11-01 20:19:28 @IEthics That plays again into this idea that "AI" systems are somehow human-like.

2022-11-01 20:19:06 @IEthics Lack of interpretability of these models is a huge problem -- that's not what I'm objecting to here. What I'm objecting to is the attempt to explain or illustrate the lack of interpretability in terms of ways in which humans can't always explain our own preferences. >

2022-11-01 20:17:55 @jeffclune Glad to hear it! And yeah, I wondered if the quote was somehow out of context.

2022-11-01 15:31:29 RT @emilymbender: I also wanted to push back on this quote from @jeffclune. There is always the option of not building, not buying, not dep…

2022-11-01 15:31:23 RT @chloexiang: “As with any other tools, we should be asking: How well do they work? How suited are they to the task at hand? Who are they…

2022-11-01 15:31:12 I also wanted to push back on this quote from @jeffclune. There is always the option of not building, not buying, not deploying the thing. (Though once it's built, bought &

2022-11-01 15:28:07 I appreciate this reporting, though it starts off with an unhelpful analogy: There is no parallel between our uninterrogated preferences as humans and our inability to understand black-box "AI" systems (mathy maths).>

2022-11-01 13:13:35 @danchall Well, cars consume gas, don't they? I agree that when LMs "consume" text, it isn't really "content" for them in any sense, though.

2022-11-01 13:07:41 [#TwoMarvins, because it was just one tweet. But on unpacking, it looks more like #ThreeMarvins.]

2022-11-01 13:06:45 To wrap up the linguistics lesson: When you see a verb of cognition like "appreciate" in a sentence talking about mathy maths ("AI" systems) stop and check. What is the subject of the verb? If it's the mathy math, who's trying to sell you something?

2022-11-01 13:02:50 And the more likely we are to shrug and agree when we're told that tech is the solution to societal problems and then suffer the effects when it makes them worse (or provides cover for people to make them worse

2022-11-01 13:01:22 @Miles_Brundage @OpenAI The more the public is given the impression that stochastic parrots are able to "appreciate content" or "think" or "decide" or "understand", the more likely we are to cede power to automated systems and the companies that sell them

2022-11-01 12:59:37 @Miles_Brundage @OpenAI Yeah, this is just Twitter (and maybe no one is here anymore anyway) and yeah this is just a throwaway comment, but it is still harmful because it is yet another flake in the blizzard of #AIhype.>

2022-11-01 12:58:12 Maybe @Miles_Brundage is just making a joke or trying to be cute here. But given the context (everything else the folks at @OpenAI have said about their LMs), if he is, it's really hard to tell. And that's indicative of a huge problem.>

2022-11-01 12:56:00 Sense 2 at least isn't about cognition, but that would be a very strange sense to use in the context of "we [people] appreciate content". So clearly the intended sense is one of 1a-1d. **All of which describe cognition.**>

2022-11-01 05:04:24 Communicative intent* Typos, sigh.

2022-11-01 04:58:03 @andrewthesmart And this is a useful contribution to the discussion why?

2022-11-01 02:14:35 When a language model synthesizes text, it's not even lying, because "lying" entails some intent to dissemble, which is still a communicative in.... I wonder if reflecting on that might help people get a better intuition for what these things are. Or is it too subtle?

2022-10-31 22:03:39 Another: https://t.co/pSyZiP8Jah Immediately detectable as spam (even before I clicked through to find out that it is nominally a "pharmacy and chemistry" journal trying to get me to publish there) with the subject line "Wеlϲοmе to Ρubliѕһ Ρaрҽrs in the International Јоᴜrnals"

2022-10-31 21:09:39 #linguistics #naclo2023 #nlproc https://t.co/B9goxfD97t

2022-10-31 21:09:22 This year, too, @UWlinguistics is a site for @naclo_contest Please share this info with interested high schoolers in the area!https://t.co/Qq7JA19XK0

2022-10-31 19:31:55 @omicreativedev @shengokai @WITWhat @negar_rz @alexhanna Either @alexhanna or I will post a link, and we'll surely both retweet!

2022-10-31 18:50:03 @elizejackson What kind of mask do you wear? How do you ensure that it fits properly? What's the hardest thing about putting it on?

2022-10-31 16:45:03 @shengokai @WITWhat @negar_rz @alexhanna I don't think I can really claim to be an artist, but can you tell what my intention was here?https://t.co/QdQ9do1Z2j

2022-10-31 16:44:42 It was great to get to learn about how to deal with hype around "AI art" from @shengokai @WITWhat and @negar_rz in the latest episode of Mystery AI Hype Theater 3000 w/@alexhanna. Recording coming soon, but key point: art expresses the artist's intention.

2022-10-31 13:35:53 RT @SashaMTL: What could be scarier than a Stochastic Parrot? Complete with a t-shirt full of GPT quotes about the meaning of life, inclu…

2022-10-31 01:08:44 This is such a key point: if we are to disrupt and prevent genocide, we need to know how to recognize it in action. In this case, especially important is detecting and dismissing propaganda while there seem to be so few channels for the victims to speak to the world. https://t.co/Knv80KDjBE

2022-10-30 22:42:28 RT @PeoplePowerWA: SEATTLE ACTION: It's time to speak out against the ShotSpotter gunfire detection system! When: Wednesday, Nov 2 at 9am…

2022-10-30 22:04:37 @JonKBateman on so so many levels.

2022-10-30 20:57:47 @ShobitaP @ruha9 @emilymbender@mastodon.social

2022-10-30 19:33:10 RT @CriticalAI: #CriticalAI friend @MadamePratolung belatedly live tweets the first of the Mystery AI Hype Theater episodes by @emilymbende…

2022-10-30 12:44:16 RT @shengokai: Art, for example, requires an entire social nexus to legitimate what is and is not art

2022-10-30 12:43:51 RT @shengokai: Put simply, the cult belief that STEM is more rigorous, more "objective" than the humanities is not just one of the great tr…

2022-10-30 11:29:09 @omicreativedev It was @alexhanna ! https://t.co/TrZ4mZZi4G

2022-10-29 13:59:20 RT @emilymbender: What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but…

2022-10-29 13:01:58 RT @emilymbender: I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxio…

2022-10-29 13:00:56 RT @emilymbender: Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-29 04:02:24 #ThreeMarvins

2022-10-29 04:01:56 Finally, I can just tell that some reading this thread are going to reply with remarks abt politicians being thoughtless text synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, &

2022-10-29 04:01:21 And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine./12 https://t.co/pDCl1lgRx8

2022-10-29 04:00:49 This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."/11 https://t.co/USu7GiP7V1

2022-10-29 04:00:20 Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data./10 https://t.co/h44gCyjkka

2022-10-29 03:59:35 This is non-sensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability./9 https://t.co/4S61hX1tQb

2022-10-29 03:59:02 I'd really rather think that there are better ways to think outside the box in terms of policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out./8 https://t.co/UTEr3VflVo

2022-10-29 03:58:30 Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making./7

2022-10-29 03:58:10 Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the very democratic process as its material. Such a move seems disastrously anti-democratic./6

2022-10-29 03:57:47 The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus"./5 https://t.co/WRf5bT8iMI

2022-10-29 03:56:46 However, the quotes in the article leave me very concerned that the artists either don't really understand or have expectations of the general AI literacy in Denmark that are probably way too high./4

2022-10-29 03:56:38 I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable./3

2022-10-29 03:56:26 Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system./2

2022-10-29 03:56:13 Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-28 21:28:04 @DrVeronikaCH See end of thread.

2022-10-28 21:22:27 @JakeAziz1 In my grammar engineering course, students work on extending implemented grammars over the course of the quarter. Any given student only works on one language (with a partner), but in our class discussions, everyone is exposed to all the languages we are working on.

2022-10-28 20:54:22 For that matter, what would the world look like if our system prevented the accumulation of wealth that sits behind the VC system?

2022-10-28 20:53:48 What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but rather to realistic, community-governed language technology?>

2022-10-28 20:40:46 (Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: https://t.co/CKybX4BRsz )

2022-10-28 20:40:05 Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.

2022-10-27 15:35:48 @jessgrieser For this shot, yes. Second dose is typically the rough one for those for whom it is rough. Also: thank you for your service!!

2022-10-27 05:16:49 RT @mark_riedl: That is, we can't say X is true of a LM at scale Y. We instead can only say X is true of a LM at scale Y trained in unknown…

2022-10-26 21:03:30 Another fun episode! @timnitGebru did some live tweeting here. We'll have the recording up in due course... https://t.co/UwgCA1uu4a

2022-10-26 20:53:19 RT @timnitGebru: Happening in 2 minutes. Join us.https://t.co/vDCO6n1cno

2022-10-26 18:28:08 AI "art" as soft propaganda. Pull quote in the image, but read the whole thing for really interesting thoughts on what a culture of extraction means. By @MarcoDonnarumma h/t @neilturkewitzhttps://t.co/2uAJvBTVbM https://t.co/X4at2irn0V

2022-10-26 17:51:27 In two hours!! https://t.co/70lqNfeHjh

2022-10-26 15:20:39 @_akpiper @CBC But why is it of interest how GPT-3 responds to these different prompts? What is GPT-3 a model of, in your view?

2022-10-25 18:16:23 @_akpiper @CBC How did you establish that whatever web garbage GPT was trained on was a reasonable data sample for what you were doing?

2022-10-25 18:14:43 Sorry, folks, if I'm missing important things. A post about sealioning led to my mentions being filled with sealions. Shoulda predicted that, I guess. https://t.co/pg6IfnZxUQ

2022-10-25 12:51:32 RT @emilymbender: Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly repor…

2022-10-25 12:51:29 RT @emilymbender: Thinking about this again this morning. I wonder what field of study could provide insight into the relative contribution…

2022-10-25 00:29:46 @timnitGebru @Foxglovelegal From what little I understand, these regulations only kick in when there are customers involved paying for a product. So, I guess the party with standing might be advertisers who are led to believe that they are placing their ads in an environment that isn't hate-speech infested.

2022-10-25 00:27:03 @timnitGebru Huh -- I wonder how truth in advertising regulations apply to cases like this, where people representing companies but on their own twitter account go around making unsupported claims about the effectiveness of their technology.

2022-10-25 00:19:07 @olivia_p_walker https://t.co/YyrMnZdhjW

2022-10-25 00:16:57 I mean, acting like pointing out that something is eugenicist is the problem is not the behavior I'd expect of someone who is actually opposed to eugenics.

2022-10-25 00:15:14 If you're offended when someone points out that your school of thought (*cough* longtermism/EA *cough*) is eugenicist, then clearly you agree that eugenics is bad. So why is the move not to explore the ways in which it is (or at least appears to be) eugenicist and fix that?

2022-10-25 00:03:12 RT @aclmeeting: #ACL2023NLP is looking for an experienced and diverse pool of Senior Area Chairs (SACs). Know someone who makes the cut?…

2022-10-24 19:18:09 @EnglishOER Interesting for what? What are you trying to find out, and why is poking at a pile of data of unknown origin a useful way to do so?

2022-10-24 17:06:13 @EnglishOER But "data crunching of so much text" is useless unless we have a good idea of how the text was gathered (curation rationale) and what it represents.

2022-10-24 16:40:43 Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly reporting on how exciting it was to read the results?

2022-10-24 04:29:30 @athundt @alkoller It looks like only 7 of them are visible but that's plausible.

2022-10-24 04:17:55 I wasn't sure what to do for my pumpkin this year, but then @alkoller visited and an answer suggested itself.#SpookyTalesForLinguists https://t.co/Bp3rULsA9z

2022-10-23 20:53:56 @jasonbaldridge I bookmarked it when you first announced the paper on Twitter but haven't had a chance to look yet.

2022-10-23 19:52:26 @tdietterich Fine. And the burden of proof for that claim lies with the person/people making it.

2022-10-23 19:47:57 @tdietterich Who is going around saying airplanes fly like birds do?

2022-10-23 19:32:27 To the extent that computational models are models of human (or animal) cognition, the burden of proof lies with the model developer to establish that they are reasonable models. And if they aren't models of human cognition, comparisons to human cognition are only marketing/hype.

2022-10-23 19:08:14 @Alan_Au @rachelmetz https://t.co/msUIrYeCEr

2022-10-23 05:29:16 @deliprao Also if you feel the need to de-hype your own tweet, maybe revisit and don't say the first thing in the first place?

2022-10-23 05:27:35 @deliprao What does "primordial" mean to you?

2022-10-23 05:26:27 How can we get from the current culture to one where folks who build or study this tech (and should know better) stop constantly putting out such hype?

2022-10-23 05:24:52 And likening it to "innermost thoughts" i.e. some kind of inner life is more of the same.https://t.co/kFfzL3gbhm

2022-10-23 05:22:59 Claiming that it's the kind of thing that might develop into thinking sans scare quotes with enough time? data? something? is still unfounded, harmful AI hype. https://t.co/hilvqpXgWM

2022-10-23 03:51:33 RT @emilymbender: @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 03:51:31 @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 01:18:48 @EnglishOER @alexhanna @dair_ai For the text ones, I tend to say "text synthesis machine" or "letter sequence synthesis machine". I guess you could go for "word and image synthesis machines", but "mathy math" is also catchy :)

2022-10-22 23:32:51 RT @timnitGebru: I need to get this. Image is Mark wearing sunglasses with a white hoodie that has the writings below in Black.Top:Sto…

2022-10-22 20:07:59 @safiyanoble I'm a fan of Choffy, but as someone super sensitive to caffeine I can say it will still keep me up if I have it in the afternoon. (Don't expect hot cocoa when you drink it. Think rather cacao tea.)

2022-10-21 23:46:26 @LeonDerczynski And now I'm hoping that no one will retweet the original (just your QT) because otherwise folks won't check the date and will wonder why I'm talking about GPT-2!

2022-10-21 23:39:49 @LeonDerczynski Hah -- thanks for digging that up. I've added it here, making it (currently) the earliest entry.https://t.co/uKA4tuv4jF

2022-10-21 23:38:09 RT @LeonDerczynski: This whole discussion - and the interesting threads off it - have aged like a fine wine https://t.co/ykUiRfoGTf

2022-10-21 23:11:29 @zehavoc I think a good limitations section makes the paper stronger by clearly stating the domain of applicability of the results. If that means going back and toning down some of the high-flying prose in the introduction, so much the better!

2022-10-21 19:19:40 @kirbyconrod I don't know, but I love the form pdves so much. Do you name your folders "Topic pdves"?

2022-10-21 19:14:54 @LeonDerczynski @yuvalmarton @complingy I want this meme to fit here but it doesn't --- if only people would cite the deep #NLProc (aka deep processing, not deep learning). https://t.co/7rrLQ11GEm

2022-10-21 18:19:29 RT @rctatman: Basically: knowing about ML is a subset of what you need to know to be able to build things that use ML and solve a genuine p…

2022-10-21 14:15:13 RT @mer__edith: You can start by learning that "AI" is first &

2022-10-21 04:12:05 RT @timnitGebru: I say the other way around. To those who preach that "AI" is a magical thing that saves us, please learn something about…

2022-10-21 01:44:09 @edwardbkang @simognehudson Please do post a link to your paper when it is out!

2022-10-20 23:08:05 RT @StevenBird: @ReviewAcl @aclmeeting you are recruiting reviewers and sending out reminders and calling for papers, but we do not yet hav…

2022-10-20 20:55:42 @AlexBaria Thank you!

2022-10-20 20:11:25 @programamos Thank you!

2022-10-20 20:09:36 @Miles_Brundage @baobaofzhang Thank you!

2022-10-20 19:59:49 Interesting question about how *people* understand what we're calling "AI" these days. Is anyone out there working on assessing that? https://t.co/xglrFgSVqv

2022-10-20 19:51:09 PSA to people eating lunch at meetings with Meeting Owls---whoever has the noisiest wrapping for their lunch will be 'on screen' as you eat

2022-10-20 14:59:03 @JoFrhwld @alkoller By general linguistics education, do you mean what people who aren't studying linguistics would encounter about linguistics in their education?

2022-10-20 14:46:04 @alkoller As to whether I think machines can understand? Sure:https://t.co/F7efO4Kwfy

2022-10-20 14:45:01 @alkoller I think this is symptomatic of something --- perhaps an extremely ascientific desire to believe that LMs are "AI"?>

2022-10-20 14:44:10 @alkoller (This is a subtweet of a [student?] paper I came across on arXiv. But it also a subtweet of all the other times I've come across this misunderstanding.)>

2022-10-20 14:43:15 Funny how the people who equate language models with "AI" misread Bender &

2022-10-20 02:49:49 RT @kirbyconrod: oh today is International Pronouns Day! ive once again forgotten to do anything in particular but perhaps you would like s…

2022-10-20 00:31:47 This is gonna be awesome!! https://t.co/70lqNfeHjh

2022-10-20 00:31:35 RT @alexhanna: Next Mystery AI Hype Theater 3000 alert! Next week, we invite @shengokai @negar_rz and @WITWhat to talk about AI Art! Oct…

2022-10-19 23:04:46 @RottenInDenmark Some of my fellow Seattleites are basking in the irony of Seattleites waiting impatiently for the darkwet. I'm just waiting impatiently for the darkwet.

2022-10-19 22:30:57 RT @techreview: After her departure, she joined Timnit Gebru’s Distributed AI Research Institute, and work is well underway.https://t.co/0

2022-10-19 20:47:28 Lots of really interesting resources in the replies! Thanks all :) https://t.co/QV1sKbK4IM

2022-10-19 19:55:07 RT @DAIRInstitute: Join us virtually on December 2nd and 3rd as we celebrate our 1st anniversary. We'll have interactive talks and conversa…

2022-10-19 19:51:42 RT @Brown_NLP: Brown is hiring Assistant Professors in Data Science. Language people, please apply! https://t.co/kXoX3N5Cge

2022-10-19 14:25:02 @ggdupont No, I don't think so. I think the folks at Stanford HAI were trying to sell something (their work + these models) rather than trying to make it obvious where people stand wrt them.

2022-10-19 14:24:03 @cbrew I'm tripping over "positive affect" and "capabilities" in this tweet. Positive affect bc what I'd want in a term for these things is neutral at best. "Capabilities" bc so often it's used to refer to various wishful mnemonics (computer functions named after human cognitive acts).

2022-10-19 13:39:14 RT @emilymbender: Gauntlet thrown. I this!

2022-10-19 04:10:56 Interesting how the term "foundation model" is becoming a shibboleth. I get the sense that I can make a lot of inferences about someone's stance towards so-called "AI" based on whether (&

2022-10-19 03:23:27 @ACharityHudley @_alialkhatib Thank you!

2022-10-19 02:49:11 @paulfriedl4 @mireillemoret @RDBinns @laurencediver Thank you!

2022-10-19 02:32:02 @paulfriedl4 Yeah, "formal" as in "formal logic". What I'm particularly interested in is literature that looks at the role of the accountability of the humans interpreting the laws.

2022-10-19 02:25:43 And @_alialkhatib cites this from Lipsky which seems apropos, too: Michael Lipsky. 1980. Street-Level Bureaucracy: The Dilemmas of the Individual in Public Service. Russell Sage Foundation.

2022-10-19 02:24:06 "To Live In Their Utopia" by @_alialkhatib is the most relevant thing I have so far:https://t.co/ZFqxyHcHr7>

2022-10-19 02:22:44 Q for legal scholars out there: is there any writing about the extent to which &

2022-10-19 00:53:59 @dmonett In this wonderful paper, @AlexBaria and @doctabarz point out that the computational metaphor (THE BRAIN IS A COMPUTER / THE COMPUTER IS A BRAIN) is bidirectional, and problematic in both directions:https://t.co/qUC2ECHbTh

2022-10-19 00:47:59 RT @Alber_RomGar: There's a debate on AI writing tools on Twitter right now. As an AI writer, I want to give my 2 cents.Here's my hot tak…

2022-10-18 19:45:00 @merbroussard @rgay Ohhh! That looks excellent.

2022-10-18 13:20:17 @jessgrieser @laura_mcgarrity any leads?

2022-10-17 22:49:05 Q for US-based #NLProc folks: Does anyone know the timeline of the DARPA LORELEI program? That is, when did the program start and, if it's not still going, when did it end?

2022-10-17 20:35:22 RT @davidberreby: The above prompted by this fine analysis of #AIhype by @emilymbender. Got me thinking about how/why we writers &

2022-10-17 20:25:41 @holdspacefree This news story started off life as a press release from @UWMedicine 's @uwmnewsroom who I think should also have disclosed the financial COI that was in the underlying study.https://t.co/0HDsYmyP1g

2022-10-17 20:21:28 Gauntlet thrown. I this! https://t.co/fJFnke0S0Z

2022-10-17 20:21:00 RT @davidberreby: What to do to improve journalism on these topics? 1 Treat AI/robotics like politics or the fossil fuel industry, not like…

2022-10-17 18:12:51 Coda: @holdspacefree illustrates the importance of reading the funding disclosures. The researchers giving the hype-laden quotes to the media weren't just being naive. They're selling something.https://t.co/TK8gjwzYwv

2022-10-17 15:45:56 RT @emilymbender: #twomarvins

2022-10-17 13:22:21 RT @emilymbender: Hi folks -- time for another #AIhype take down + analysis of how the journalistic coverage relates to the underlying pape…

2022-10-17 04:44:58 @maria_antoniak Babel by R F Kuang

2022-10-17 02:32:48 @ai_skeptic @mmitchell_ai @sleepinyourhat Even if what you say is true (that you're a junior researcher, afraid to express your opinions from a non-anon account) this is still trolling. And of course, on an anonymous account, you can claim any identity you like.

2022-10-17 02:07:37 @holdspacefree https://t.co/5Nc0SEoCNf

2022-10-17 02:05:27 #twomarvins https://t.co/5Nc0SEoCNf

2022-10-16 22:47:39 @timnitGebru I'm so sorry, Timnit.

2022-10-16 21:26:26 In sum: It seems like here the researchers are way overselling what their study did (to the press, but not in the peer reviewed article) and the press is happily picking it up./fin

2022-10-16 21:26:12 Another one of the authors comes in with some weird magical thinking about how communication works. Why in the world would text messages (lacking all those extra context clues) be a *more* reliable signal?/22 https://t.co/JOMOvVQp6F

2022-10-16 21:25:42 Note that in this case, the source of the hype lies not with the journalist but (alas) with one of the study authors./21

2022-10-16 21:25:28 In the popular press article, on the other hand we get instead a suggestion of developing surveillance technology, that would presumably spy not just on the text messages meant for the clinician, but everything a patient writes./20 https://t.co/kXdOCAUNnl

2022-10-16 21:24:26 Next, let's compare what the peer reviewed article has to say about the purpose of this tech with what's in the popular press coverage. The peer reviewed article says only: could be something to help clinicians take action. /19 https://t.co/s23mTHCv1D

2022-10-16 21:23:45 Another misleading statement in the article: These were not "everyday text messages" (which suggests, say, friends texting each other) but rather texts between patients and providers (with consent) in a study./18

2022-10-08 14:38:26 @AngelLamuno Uh, read the thread?

2022-10-08 14:20:02 RT @emilymbender: In other words: linguistics, computational linguistics, and #NLPRoc all collectively and separately have value completely…

2022-10-08 14:00:27 I'm unmoved when people talk about one danger of #AIhype being the prospect of it bringing on another AI winter. But I do care that #AIhype is making it harder (in this and many ways) for researchers grounded in the details of their research area to do our work.

2022-10-08 13:58:51 e.g. https://t.co/S1XoTBe9JI>

2022-10-08 13:58:39 But the #AIhype is making it harder to do that work. When AI bros say their mathy maths are completely general solutions to everything language &

2022-10-08 13:54:39 In other words: linguistics, computational linguistics, and #NLPRoc all collectively and separately have value completely unrelated to the project of "AI". >

2022-10-08 13:53:03 But that's okay, because it's a tool, involving limited language understanding, and it has served its purpose. And it's a very impressive and interesting tool! Language is cool and building computer systems that can usefully process language is exciting!>

2022-10-08 13:52:23 Has it understood the same way or as well as a human would? No. It doesn't make inferences about what the timer is for based on shared context with me or wonder what I plan to do outdoors. >

2022-10-08 13:50:50 So when I ask a digital voice assistant to set a timer for a specific time, or to retrieve information about the current temperature outside, or to play the radio on a particular station, or to dial a certain contact's phone number and it does the thing: it has understood.>

2022-10-08 13:49:46 To answer that question, of course, we need a definition of understanding. I like the one from Bender &

2022-10-08 13:46:44 People often ask me if I think computers could ever understand language. You might be surprised to hear that my answer is yes! My quibble isn't with "understand", it's with "human level" and "general".>

2022-10-08 13:42:15 @NoppadonKoo @ledell Multimodal interfaces can be very useful tools --- but developing good multimodal interfaces isn't the same thing as "near human performance on reasoning tasks".

2022-10-08 13:38:48 @joelbot3000 @_joaogui1 So long as "AI safety" is premised on the idea that we are delegating decision making authority to machines (including imagined future "AGI", but also real current systems) then I think it is antithetical to actual AI ethics work, regardless of where they publish.

2022-10-08 13:35:27 1. The general operating mode is "make shit up". Sometimes it just happens to be right. 2. "Make shit up" is actually giving too much credit, since the LMs are only coming up with sequences of linguistic form &

2022-10-08 13:34:34 I was going to compare that to the failure mode of e.g. large LMs used as dialogue agents where the failure mode is "make shit up", but that's a little inaccurate on two levels:

2022-10-08 13:33:32 I'm not sure I agree that current systems "fail more gracefully" than rule-based predecessors. Failing to return an answer when the input falls outside the system's capability does have a certain grace (humility) about it...>

2022-10-08 13:31:22 The theme track looks interesting and timely! https://t.co/16qQsXNmI4

2022-10-08 13:29:19 RT @boydgraber: We have a call for papers for ACL 2023, but we haven't gotten the Twitter account, a hash tag, or a blog post yet. Stay tu…

2022-10-08 13:04:41 @Kobotic Alas, I think just some coding exposure wouldn't do it and might even make it worse, unless the hour long lesson was well crafted to highlight how computers are simply instruction following machines...

2022-10-08 13:01:27 RT @emilymbender: There is 0 reason to expect that language models will achieve "near-human performance on language and reasoning tasks" ex…

2022-10-08 05:16:55 RT @LeonDerczynski: lurid hyperref color boxes on links are a violence upon the person. luckily, if your venue's template author hasn't not…

2022-10-07 23:33:45 @seamuspetrie @HelenZaltzman Updated version, coined by yours truly in ~2017 for a protest march: Jingoistic Charlatan Makes Seattle Undertake Protest

2022-10-07 19:05:14 @LinguaCelta In Kathol &

2022-10-07 19:03:46 @Stupidartpunk @alexhanna ... which include a lot of discussion of terminology. For example: https://t.co/TrZ4mZZPUe

2022-10-07 19:03:15 @Stupidartpunk @alexhanna You might enjoy our first three episodes: https://t.co/78tYEfs17d

2022-10-07 19:02:19 @Stupidartpunk Mystery AI Hype Theater 3000 (with @alexhanna ) plans an episode on "AI art" together with people who are more knowledgeable about art &

2022-10-07 17:08:09 @kirbyconrod @mixedlinguist That's been my M.O. for naming the language --- the problem definitely isn't solved, but I think I've seen the needle move at least a little!

2022-10-07 17:07:40 @kirbyconrod @mixedlinguist I think by doing what @mixedlinguist is doing in reviewing (and similarly at conference presentations if people don't say): Asking directly &

2022-10-07 17:06:00 @kirbyconrod @mixedlinguist https://t.co/fLeoxN06eI

2022-10-07 17:03:30 In case anyone needs a quick refresher: https://t.co/JjqcSaFizu

2022-10-07 17:02:29 Once again "AI safety research" is just the pious-seeming version of AI hype.

2022-10-07 17:02:04 There is 0 reason to expect that language models will achieve "near-human performance on language and reasoning tasks" except in a world where these tasks are artificially molded to what language models can do while being misleadingly named after what humans do. https://t.co/HgCaLgTfcx

2022-10-07 17:00:12 @sarmiento_prz @Abebab Came here to add a plug for @ImagesofAI !

2022-10-06 18:36:17 So far, I'm just not saying anything about academic venues. I don't want to set the expectation of free labor, but also I don't want to rule out informal presentations of work in progress to other research groups. And that starts being a lot of words...

2022-10-06 18:34:46 Thanks, all, for the input! I've updated my contacting me page to state that I won't do unpaid speaking engagements in corporate venues. https://t.co/nxRxxz4DvX>

2022-10-06 16:49:56 @willbeason @adamconover Source: https://t.co/yJiq4Yxu99

2022-10-06 13:37:34 RT @timnitGebru: Take a look at our job application here: https://t.co/Fi6rdpKZUP

2022-10-06 00:19:02 RT @DAIRInstitute: We are hiring a senior community-based researcher. Full job ad and application here: https://t.co/9KOCe1ps2p

2022-10-05 23:10:53 @simonw @vlordier @robroc @mtdukes Thank you for this. I'm glad that you see how "magic" and "spells" can be harmful metaphors in this context.

2022-10-05 22:52:36 @thamar_solorio Totally agreed. The PCs were right to put that requirement in. I just wanted to vaguely vent. (The paper was submitted to an NLP venue, but concerned ML applied to something without any language data in sight...)

2022-10-05 21:06:55 @StephenMolldrem I think there's a world of difference between sponsored research (where the sponsor has some say in the research direction) and getting paid to give talks on research that is already done.

2022-10-05 20:27:59 I get why review forms have a word count minimum on certain boxes (esp "What are the strengths of this paper") but sometimes it really truly is a struggle.

2022-10-05 20:26:24 @meredithdclark And knowing that my refusal to do it for free supports the position of people for whom it is more taxing/more crucial to livelihood helps draw a firm boundary!

2022-10-05 20:25:31 @meredithdclark I can see speaking to academic groups as part of my day job (that I am paid to do) and also of benefit to me in working out my ideas. But speaking to tech cos to help educate their workforce doesn't seem the same.

2022-10-05 20:24:27 @meredithdclark Thank you, Meredith. That is very clarifying. It feels insulting to be asked to do free work for big tech cos, but I was also imagining that they see themselves as just researchers inviting other researchers to come share ideas...

2022-10-05 20:23:27 @EmilyTav Follow the replies &

2022-10-05 17:11:29 Academics invited to give tech talks or other informal presentations at industry labs: Do you expect to be paid for that? Do the invitations usually come with an offer of an honorarium? #AcademicTwitter

2022-10-05 17:01:16 @AngloPeranakan Recently saw &

2022-10-05 16:46:58 RT @rctatman: I've seen some confusion around this recently, so to clarify:- AI ethics/FAccT: evaluating &

2022-10-05 14:30:12 @simonw @dylfreed @knowtheory @vlordier @robroc @mtdukes Even saying "fact-check everything it says" is giving it too much credit. It is only spitting out sequences of letters (word forms). If its words make sense, if it's "saying" anything, it's because we make sense of those words.

2022-10-05 14:22:19 @simonw @dylfreed @knowtheory @vlordier @robroc @mtdukes https://t.co/7YYD3QxI7R

2022-10-05 13:38:06 @simonw @vlordier @robroc @mtdukes "AI" is terrible and we don't have to acquiesce. Alternatives: SALAMI, PSEUDOSCI: https://t.co/4jm6nD8Q0s

2022-10-05 13:29:29 @simonw @vlordier @robroc @mtdukes "We're throwing spaghetti at the wall. Sometimes it sticks. And sometimes when it sticks we like the patterns we see. But the people who sell the spaghetti like to say they've made special, sentient spaghetti that is actually trained like marching bands to make specific shapes."

2022-10-05 13:27:19 @simonw @vlordier @robroc @mtdukes Even "people messing around with forces they don't understand" is a bad metaphor here, because it STILL suggests that the forces (= the "AI") are coherent and powerful.

2022-10-05 13:26:11 @robroc @simonw @vlordier @mtdukes And so claiming that they do “magic” or repeating those claims is a problem, because that makes it seem like they work, even if no one understands why.

2022-09-30 18:58:01 @athundt @mmitchell_ai @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT These are great ideas!

2022-09-30 18:57:13 @annargrs @mmitchell_ai @stephaneghozzi @timnitGebru @DAIRInstitute @rajiinio For me, I think there's some overlap between "!" and "tsk tsk tsk", which maybe isn't what we want. Just plain ! or ! is maybe stronger... Still, if one or more of these expressions were to be borrowed into English, their meanings would surely drift.

2022-09-30 18:54:30 @SashaMTL @arxiv @mmitchell_ai I took great joy in pointing out to the ACM that certain aspects of their pubs system weren't Unicode compliant. It seems that @arxiv needs updating in the same way.

2022-09-30 15:57:25 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT Thank you!

2022-09-30 14:58:29 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT Will you be updating the checklist PDF too? Pitfall 16 needs citations to Roberts and Gray &

2022-09-30 14:57:03 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri The tendency to appropriate and fail to cite the work of Black women is pervasive &

2022-09-30 14:56:06 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri I get the sense that you are aiming for a popular audience &

2022-09-30 14:51:28 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri Similarly, your Pitfall 15 looks exactly like the main point of my blog post that you do link to elsewhere ... but there's no connection drawn there.>

2022-09-30 14:50:37 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT For example, your Pitfall 16 should cite @ubiquity75 's "Your AI is a Human" and @marylgray and @ssuri's _Ghost Work_ ... maybe you point to their work earlier in the piece, but I'd have to go click lots of links to figure that out.>

2022-09-30 14:47:16 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT I appreciate the shout out, but I think it would be better citational practice to name people in the post, in addition to links and to draw clearer connections to how you are building on previous work.>

2022-09-30 14:29:43 RT @GretchenAMcC: It's the semi-final round of figuring out the least confusing way to clip the word "usual" and so far we've learned that…

2022-09-30 14:23:05 @kilinguistics Which says something about who their imagined users were....

2022-09-30 14:22:52 @kilinguistics A few years later, I did get the satisfaction of complaining about it to the person at Microsoft in charge of that feature. She was surprised, having never heard of a case where someone wouldn't want Teh "corrected" to The.>

2022-09-30 14:22:00 @kilinguistics Autocorrect kept changing "Teh" to "The" in my bibliography entries referring to the work of James Cheng-Teh Huang, then editor of said journal. It took a lot of menu hunting to figure out how to turn that off.>

2022-09-30 14:20:31 @kilinguistics Ugh, what a waste! The last time I had to deal with Word without a co-author was for my 2000 paper "The syntax of Mandarin bǎ: Reconsidering the verbal analysis">

2022-09-30 14:13:30 @kilinguistics Yeah, when it's just for the reviewing process it seems like there should be other solutions!!

2022-09-30 14:01:11 @kilinguistics It seems to be a cost of the kind of interdisciplinary work I've been getting involved in. Fortunately, this time I complained and learned that latex is okay --- their web page was out of date!

2022-09-30 14:00:22 RT @LingMuelller: Chapter 25 of the HPSG handbook is on #ComputationalLinguistics and #HPSG by @emilymbender and @AngloPeranakan. Read all…

2022-09-30 00:17:15 @undersequoias It is Sage --- but it's a site that is somehow too similar to other journals, so my LastPass has like four passwords for it, none of which work. I'm pretty sure I don't have an account for this journal, except maybe they made one for me? Anyway, dealing with password reset...

2022-09-30 00:12:08 @jackclarkSF The fact that you are putting "use an LM to end-to-end write a testimony" in the discourse at all is the problem. I'm also morbidly curious what you mean by "nitty gritty". Did you talk about it as "AI" or as "understanding" the prompt? If so: not helpful.

2022-09-29 15:53:35 RT @FlyingTrilobite: The internet has taken advantage of artists with virtually every platform that has launched since it began: we need to…

2022-09-29 14:17:31 RT @neilturkewitz: This whole thread is fantastic, but this in particular captures the unique challenges of creating accountability in a un…

2022-09-29 12:21:24 RT @emilymbender: I'm glad to see this reporting from @nitashatiku about the rise of text-to-image systems and their dangers. She also coll…

2022-09-29 03:41:14 Fixing auto-captions for #MAIHT3k ep 3, and the system capitalized T and W in "The Wishful mnemonics" and now I'm sitting here thinking that would be a cool band name ... maybe one that plays middle-grades silly music?

2022-09-29 01:51:08 RT @ZeerakTalat: I wanna zoom in on one word in one sentence. "There is a *need* to develop a system that establishes a link between spoken…

2022-09-28 20:16:45 RT @Matt_Cagle: Do your professional interests include fighting surveillance, unearthing secretive programs, and suing the hell out of the…

2022-09-28 20:10:00 Oh, and to all the OpenAI people quoted in the article, I award you #threemarvins

2022-09-28 20:09:14 See also: https://t.co/vKXofWJc0J

2022-09-28 20:07:19 @nitashatiku Those who claim to be simply aren't actually positioned to do so, even without their bizarre "drive to be first and hype [their] AI developments" (per the article). And the others don't even think it's their responsibility at all.

2022-09-28 20:06:15 Thanks again to @nitashatiku for this coverage. It shows very clearly how neither OpenAI who claim to be building "AGI" to save humanity from "AGI" and to be trying to release tech "responsibly" nor those who want to make an OS playground are actually handling safety well.>

2022-09-28 20:03:37 "All y'all should be ashamed for how you're using this deepfake porn, disinfo creating machine I've made. Not my business, tho." https://t.co/Yy8IbgIip2

2022-09-28 20:01:53 Just because social media platforms have largely failed at creating sustainable community standards doesn't mean we need more spaces like that. That's like saying: we might as well create more nuclear waste sites, since there are already several festering. https://t.co/hqLtxbtR2G

2022-09-28 19:59:19 This trust is set up as between the company (here OpenAI, or the others) and the people using the system to generate images. But who is speaking for the people who are harmed by deepfake porn &

2022-09-28 19:57:27 @nitashatiku I hope so. It's worth keeping a clear distance to the #AIhype.

2022-09-28 19:56:11 Grandiose much? (Again, not surprising for OpenAI.) But also: doesn't society get a say in whether we have to "co-develop" with such systems? https://t.co/PrgpF9iduJ

2022-09-28 19:54:54 But also: Who has the relevant cultural context to detect images that are generated for the purpose of creating disinformation? How is OpenAI making sure the right images get to the right "third party contractors"?

2022-09-28 19:54:10 @nitashatiku How is OpenAI making this "safe"? Ghost work. Ghost workers around the world now have to sift through synthetic images to decide which ones meet community standards. I don't think we should have to live in a world where these tasks are generated. >

2022-09-28 19:52:12 Uh, nope. The text to image generation shows us ... what images the system associates with text. Calling that "concept" is stretching that term beyond any recognition. (Here @nitashatiku slips a bit too into #AIhype --- that first sentence isn't attributed to anyone at OpenAI.) https://t.co/sjdDxz17SB

2022-09-28 19:50:29 @nitashatiku "pre-AGI" But that's totally par for the discourse with OpenAI. They do really seem to believe that they are building AGI and thereby saving the world. It's absurd --- and unfortunately it warps so much of the discourse in this field. https://t.co/aNDvuj0yMf

2022-09-28 19:48:23 I'm glad to see this reporting from @nitashatiku about the rise of text-to-image systems and their dangers. She also collected quotes that range from laughable AI hype to alarming lack of responsibility. Here is a sample: https://t.co/CzXBxjJRhv>

2022-09-28 15:10:02 RT @emilymbender: Let's do a little #AIhype analysis, shall we? Shotspotter claims to be able to detect gunshots from audio, and its use ca…

2022-09-28 14:38:15 RT @jshermcyber: This entire thread is great (read it!) — and this tweet in particular, and the one following, speak to an important point.…

2022-09-28 13:51:21 @twocatsand_docs @hypervisible @MayorofSeattle @SeattleCouncil I can't tell if you're supporting their review or not. Do you mean it's all that simple? Disagree: the methodology is in where their data comes from. Do you mean they are making it look simple when it isn't? Agree.

2022-09-25 14:45:16 @mmitchell_ai At first glance, I thought this was commentary on the silly practice we're seeing a lot of from longtermists of putting probabilities on predicted future events (xrisk...) but then that didn't quite fit in context.

2022-09-25 14:44:00 @sherrying Yep. Alt: This is the Wondermark cartoon that is the source of the term sealioning. Orig is here (but w/o alt text): https://t.co/yIRJVgo3Uk Wikipedia description: https://t.co/qxH3VOKXCn

2022-09-25 02:42:53 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig I'll be glad to get to meet you in person!

2022-09-25 02:41:53 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig Glad you'll get to spend time with parents. Will you be at our talk?

2022-09-25 02:12:28 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig Oh sorry you won't be there!

2022-09-25 00:43:04 @CT_Bergstrom @TwitterSupport @darkpatterns IOW: My phone (android) lets me say which apps get to actually display notifications. I allow very few.

2022-09-25 00:42:25 @CT_Bergstrom @TwitterSupport @darkpatterns I think I have this solved by allowing notifications on Twitter, but then not allowing them from the Twitter app to my phone. (So if I open the app, I see the badge, but it never makes noise.)[Sorry for generating a notification.]

2022-09-24 22:05:22 @josephsams I'm sorry for whatever pain you are experiencing. That does not make it appropriate or helpful to make a joke about someone else's pain.

2022-09-24 20:46:12 @deliprao I'd go even further: Humans need logical reasoning for a wide range of the activities that NLP tasks are meant to emulate.

2022-09-24 20:40:26 @AngloPeranakan

2022-09-24 20:31:56 @josephsams Uh if someone says "X was painful" coming in with "How about X, but as a joke" is not helpful. I suggest deleting your tweet.

2022-09-24 20:29:36 The problem with calling out the "change my mind" bros is that they think I've asked them to change my mind. My dudes, I haven't. Your "prize-based philanthropy" contest about AI &

2022-09-24 20:23:46 @Lang__Leon @RadicalAIPod What makes you think I'm trying to find common ground? My goal here is to shed some light on the absurdity of what they are doing to warn other people off of it.

2022-09-24 20:21:18 @misc @DavidSKrueger @fhuszar @sarahookr Too far from my area. My guess is that there's two separate issues here: superforecasting in general + superforecasting as applied in the context of EA/longtermism/"AGI" + existential risk.

2022-09-24 20:16:57 @protienking @DavidSKrueger @fhuszar @sarahookr So, any evidence that the people identified in that way are positioned to make predictions about the development of fantasy technology like AGI?

2022-09-24 20:07:49 @Abebab @rajiinio e.g. in this talk: https://t.co/3KDiNyaM4a

2022-09-24 19:59:17 RT @emilymbender: @DavidSKrueger @fhuszar @sarahookr "Superforecasters" is so sus. What makes them super? Are these actually people who hav…

2022-09-24 19:54:03 @Abebab I've started using the phrase "ground lies". My inspiration was this piece by @rajiinio (but I don't think she uses that phrase specifically). https://t.co/1JnDJnXeCQ

2022-09-24 19:52:50 @SashaMTL @timnitGebru @ruthstarkman Three cheers for @ruthstarkman

2022-09-24 19:50:49 @DavidSKrueger @fhuszar @sarahookr "Superforecasters" is so sus. What makes them super? Are these actually people who have an outstanding track record of being proven right? (I doubt it.) People who have made a lot of predictions? (Who cares?)

2022-09-24 18:47:05 @Lang__Leon @RadicalAIPod But I doubt anyone who is down the "xrisk" rabbithole is actually interested in learning from these scholars, because doing so will require understanding their own unearned privilege and the importance of ceding, rather than hoarding, power.

2022-09-24 18:46:05 @Lang__Leon Abeba Birhane, Timnit Gebru, Safiya Noble, Ruha Benjamin, Brandeis Marshall, Deb Raji, Cathy O'Neil, Sasha Costanza-Chock. Or start here, with this curriculum from @RadicalAIPod https://t.co/FoqdosW7Gq>

2022-09-24 18:43:19 @Lang__Leon "Answering these questions": The problem is that they are in fact focused on irrelevant questions. If they cared about real harms affecting real people in the real world rather than their fantasy world, they could get a lot from reading authors such as:

2022-09-24 13:51:37 RT @emilymbender: This is so absurd. "We're too lazy/incurious to learn from the large existing literature outside our own community. Write…

2022-09-24 13:51:31 RT @emilymbender: I'm curious what people think about the Chatham House Rule and how it relates to the politics of citation.>

2022-09-24 12:28:34 RT @QueenOfRats: As we’re all still yelling abt research skills, digital literacy, &

2022-09-23 22:18:55 Really great reflections on this topic from @KendraSerra https://t.co/kgHJLZ10ZZ

2022-09-23 22:17:43 @undersequoias @alexhanna It doesn't *promote* doing the right thing, either, though.

2022-09-23 22:12:38 @LeonDerczynski Also, there is no spot where you actually have the view from their windows. That apartment would have to be hovering in mid-air...

2022-09-23 21:23:18 @jeffjarvis That is how I read it. I still think it is an unhelpful response.

2022-09-23 21:02:06 @GaryMarcus @timnitGebru I think that the fact that they have those resources in the first place is a misallocation. But not one that I think I can usefully fix by entering their silly contest.

2022-09-23 20:55:16 @GaryMarcus @timnitGebru Because I have other things to do with my time than engage the "change my mind" bros. Because I choose what questions I want to write about.

2022-09-23 20:40:37 @GaryMarcus @timnitGebru But since they're sitting on all the $$ they think that that gives them the right to shape the conversation. Tell others to "jump" and expect "how high?" as the response. While putting most of the $$ into developing the "AGI" they are also afraid of.

2022-09-23 20:39:39 @GaryMarcus @timnitGebru She has written &

2022-09-23 20:27:11 RT @timnitGebru: We only fund white dudes saving the world &

2022-09-23 19:52:07 This is so absurd. "We're too lazy/incurious to learn from the large existing literature outside our own community. Write something special for us and we'll (maybe) pay some of you after the fact, if we're impressed enough." https://t.co/PfzC1tACjS

2022-09-23 19:24:40 @jeffjarvis That response (equating machines &

2022-09-23 18:57:30 RT @timnitGebru: Happening now.

2022-09-23 18:27:29 RT @mmitchell_ai: PSA courtesy of @emilymbender and @alexhanna : "Hidden Figures" is a term that recognizes Black women. Don't co-opt for,…

2022-09-23 17:50:16 In 10 minutes! https://t.co/evpJ37PxQz

2022-09-23 17:43:40 All of this: https://t.co/kgHJLZ10ZZ

2022-09-23 17:19:20 @mmitchell_ai So awful on so many levels.

2022-09-23 17:13:37 @mmitchell_ai What a horror story indeed. And I'm guessing when you explained yourself ("G shouldn't be exploiting people") no one took the opportunity to learn from your expertise...

2022-09-23 16:41:35 RT @mmitchell_ai: Really important for those of us who are in "Chatham" scenarios.First heard about the idea of "Datasheets" from @timnitG…

2022-09-23 16:37:53 @tdietterich So I'm wondering if the habit of using the CHR has extended out a bit from where it is actually useful/appropriate into spaces where it perhaps doesn't belong or at least has negative consequences we should be considering &

2022-09-23 16:37:00 @tdietterich They are often really interesting groups of people, where the discussions can lead to really interesting ideas and the main reason I'd want to participate is to have the chance to learn from/with those people.>

2022-09-23 04:13:10 @RTomMcCoy @LoriLevinPgh And structured through NACLO. Outreach is really important! But it has to be sustainable...

2022-09-23 04:12:10 @RTomMcCoy @LoriLevinPgh Right. Not a cold call :)

2022-09-23 00:37:14 @dylnbkr @alexhanna @_dylan_baker @MadamePratolung Ah oops! Sorry.

2022-09-22 21:36:24 RT @LeonDerczynski: Detoxification systems not only fail to reject abusive language, they instead make huge efforts to ensure that the pers…

2022-09-22 21:05:24 @PhDToothFAIRy Yeah, it's interesting how tempting it is to accept presuppositions in questions/how much effort it takes to reject them in that context.

2022-09-22 17:51:39 Tomorrow! https://t.co/evpJ37PxQz

2022-09-22 16:48:25 RT @LinguisticsUcla: We are hiring in Syntax! https://t.co/5jd80xeiEq Hit us up in comments below or DM us with questions.

2022-09-22 16:06:14 @alexhanna Named after Douglas Adams' robot, connected to the Wall of Shame that @_dylan_baker @MadamePratolung and others are working up

2022-09-22 14:37:34 @kirbyconrod I'm glad I could help --- and that you've managed to maintain those boundaries.

2022-09-22 14:24:18 I get to give this presentation to our new TAs again. Glad to have a chance to remind myself of these things, too. https://t.co/rZOnP7QVOp

2022-09-22 14:10:59 @Lenoerenberg @alexhanna Yes &

2022-09-22 14:08:30 @ImTheQ @xeegeex No thank you.

2022-09-22 13:20:03 RT @alexhanna: We will finish this article, by gosh, if it's the last thing we do! But really, there's a lot to unpack, as our people say.

2022-09-22 13:11:46 @ImTheQ @xeegeex Sorry, what? That seems entirely irrelevant to this discussion.

2022-09-22 12:53:06 @xeegeex @ImTheQ Argh yes -- everyone going around believing in that fairy dust and talking it up makes it SO MUCH HARDER to get traction for other things.

2022-09-22 12:51:56 RT @emilymbender: Episode 3 of Mystery AI Hype 3000 is this Friday, Sept 23, 11am-noon Pacific. @alexhanna and I will, by hook or by crook,…

2022-09-22 12:51:45 RT @emilymbender: Prepping for episode three of #MAIHT3k (w/@alexhanna) and this blog post is SO BAD. I'm trying to figure out which bits a…

2022-09-22 12:11:28 RT @emilymbender: In this talk, I go through six ways in which the research, development &

2022-09-22 04:33:47 @ImTheQ Agreed. But: we don't get there by serving up and promoting misinformation.

2022-09-22 04:26:14 @ImTheQ There is also good tech journalism. I'm thinking of journalists like @_KarenHao @nitashatiku @kharijohnson @dinabass @rachelmetz @haydenfield

2022-09-18 21:51:22 @yuvalmarton Most of the thread is about what to do instead! https://t.co/3qVhNRQQ7e

2022-09-18 21:50:48 @yuvalmarton Thanks for this response. My overall point is that the stance I was taking issue with ("we can't deal with this without universal human agreement") is bogus, because it rests on the idea that we achieve ethical AI by programming something in to autonomous agents. >

2022-09-18 21:34:31 @yuvalmarton This seems to be saying that you think I've tried to change the conversation to "AGI yes/no", which is very much not the point of my thread. https://t.co/XvVShWZNtG

2022-09-18 21:33:56 @yuvalmarton I specifically DO NOT conflate these two things. I am calling out cases where people invested in autonomous systems/AGI seem to be conflating them and I am taking issue with that. https://t.co/9s9IsTRjuO>

2022-09-18 21:32:56 @yuvalmarton While this should be done with reference to things like national &

2022-09-18 21:31:54 @yuvalmarton It makes sense to do this with specific systems and in their particular deployment contexts. That is where we can ask: who is at risk of being harmed here? how are we protecting them? what recourse do they have?>

2022-09-18 21:31:24 @yuvalmarton And this should be done with reference to existing laws in the jurisdictions where the systems are deployed, but looking at legal protections as the floor, i.e. required minimum .>

2022-09-18 21:30:33 @yuvalmarton I think we are largely in agreement --- and in particular I'd like to underscore that I am absolutely for making clear what is considered acceptable and unacceptable behavior of specific, situated systems.>

2022-09-18 04:11:22 RT @banazir: This was on @dawsonwagnertv’s podcast #TheSandsOfTime, which will air on @Wildcat919FM tomorrow (Sun 18 Sep 2022) at 1300 CDT.…

2022-09-17 20:31:32 @LeonDerczynski You don't need to buy it yet. Ask your neighbors how popular the neighborhood is for trick or treat. Trick is the kids not the houses. You don't live in the suburbs.

2022-09-17 12:55:51 RT @emilymbender: As @alexhanna and I have been working through Agüera y Arcas's blog post "Can Machines Learn to Behave" (episode 3 coming…

2022-09-17 12:55:20 RT @emilymbender: I've had the great pleasure of working with @LangMaverick on a piece of Annual Review of Linguistics on "Ethics in Lingui…

2022-09-17 01:59:08 @cat4lyst_Ma It's more than that, as I lay out in the thread that starts with the tweet you are responding to.

2022-09-17 00:16:40 RT @AndyPerfors: Great thread. I have very similar views in response to the argument that says we cannot do anything institutionally about…

2022-09-17 00:09:00 @alexhanna ICYMI: https://t.co/78tYEfs17d

2022-09-17 00:08:15 As @alexhanna and I have been working through Agüera y Arcas's blog post "Can Machines Learn to Behave" (episode 3 coming next Friday!), I'm glad to have re-found this thread on why I think that's just the wrong question to ask. https://t.co/sowyyCfRkn

2022-09-16 22:54:13 @ThomasILiao Cool, thanks! Any chance you might add stats about the training dataset size (in GB/TB or tokens)?

2022-09-16 22:46:29 RT @carlosgr_nlp: Looking for postdoc to work in one of the most active #nlproc research groups in Spain, within ERC PoC project SALSA on u…

2022-09-16 20:59:44 This episode was SO COOL. Definitely have a listen :) https://t.co/FVmOxRWQNu

2022-09-16 20:59:07 RT @xkcd: Thank you to @GretchenAMcC and @superlinguo for inviting me on their podcast and enthusiastically answering all my linguistics qu…

2022-09-16 20:52:22 RT @dlauer: I used to think of myself as a techno-utopian - that tech would bring us a better world and solve all our problems. However, I…

2022-09-16 20:23:40 RT @mikarv: Journalists talking to Google about AI and sustainability: ask them if Google Cloud will stop courting firms and selling comput…

2022-09-16 20:07:54 RT @timnitGebru: This is an interesting article to come out today. "while it’s important to be alert to ethical concerns surrounding A.I.…

2022-09-16 18:47:40 RT @PrincetonDH: We are hiring!The CDH seeks an assistant director to help accelerate impactful and ethical research at the intersections…

2022-09-16 17:44:29 RT @dmonett: Disgusting.But, #AI leaders? Someone working for an unethical company is not an #AI leader. Someone gaslighting the work…

2022-09-16 17:21:12 add* (That's not the first time for that particular typo. Hmmm....)

2022-09-16 17:14:33 Gotta ad: "shooting from the bleachers" embeds a whole set of presuppositions about where the action is &

2022-09-16 16:50:49 @clmallinson @KarnFort1 Sent, x2 :)

2022-09-13 03:38:46 See, @csdoctorsister -- I told this was gonna happen. No matter! One for home and one for the office :) https://t.co/bx1Ddpcsys https://t.co/S52Z99UtuI

2022-09-12 23:01:54 @LeonDerczynski /waves upwards

2022-09-12 20:04:38 @robyncaplan It looks like I missed my moment then, but I guess another might appear.

2022-09-12 19:41:41 @robyncaplan Bylines as in authorship of op-eds and similar?

2022-09-12 19:23:21 Today's question: Does the flightpath* go right over my house only when I'm trying to record something, or is it that I tend not to notice otherwise? *Both jets landing at SEA and also floatplanes headed towards Lake Union, the latter being especially noisy.

2022-09-12 18:45:18 Poking around a bit on the requirements for Twitter verification &

2022-09-10 19:44:59 @importnuance @alexhanna That is a highly concentrated bit of #AIhype, isn't it? I'm not sure we can do a whole episode on one tweet (though OTOH...) but I have actually written a paper that's relevant to just how wrong that is: https://t.co/rkDjc4kDxj

2022-09-10 17:28:11 RT @JuanDGut: Excellent panel on the use of AI in public sector, including @emilymbender &

2022-09-10 05:09:42 I guess it's relevant to talk about our own local news? https://t.co/cLudOHhCkV

2022-09-09 19:37:22 Looks like all the videos from #NAACL2022 are now freely available. Here's the panel on "The Place of Linguistics and Symbolic Structures" in #NLProc https://t.co/lc7jsSo0zt

2022-09-09 12:53:18 @Kobotic @randtke @rcalo @lh3com Regarding that piece: https://t.co/QVZ2yTKTQl

2022-09-09 04:09:28 RT @CT_Bergstrom: Despite the assurances that I received from @cvspharmacy corporate, they are still refusing to provide the COVID-19 boost…

2022-09-09 03:28:15 @randtke @rcalo @Kobotic We shouldn't have to apply an enormous, reactive process each time to root them out. What can be done up-front to prevent these deployments?

2022-09-09 03:27:50 @randtke @rcalo @Kobotic Especially with large language models seeming to be able to "handle" just about any domain, there are a million ways that people might try to apply this stuff and cause harm.>

2022-09-09 03:27:23 @randtke @rcalo @Kobotic It may be that there is a good path to this that involves leveraging existing processes, but in that case, people need to know. >

2022-09-09 03:26:49 @randtke @rcalo @Kobotic First off, let's avoid using the term "AI" without very careful definition. It just doesn't help. Second: What I'm looking for is something that will prevent under-resourced local jurisdictions from buying snake oil sold as "AI".>

2022-09-09 00:08:44 @CT_Bergstrom I wish I had seen this before I went to CVS this morning only to be turned away **by the pharmacist** because they apparently don't take my insurance. (Fortunately, I was able to get an appointment at Walgreens for tomorrow.)

2022-09-08 19:23:08 @er214 The worry that I have though in case of something like shotspotter, is that the underlying data (recordings of gunshots &

2022-09-08 16:22:36 This looks like a really interesting research program! https://t.co/s8XKkSZ127

2022-09-08 16:21:57 More from the UK. (ALT-less photos are screen caps of the linked webpage.) https://t.co/aJjWdrU7xh

2022-09-08 16:17:07 @HBWHBWHBW It struck me as likely not an isolated incident but rather something likely to occur repeatedly. And so long as public services are underfunded, and civil servants undereducated about what "AI" even is, I see a risk of lots of these projects actually getting taken up.

2022-09-08 16:15:53 @HBWHBWHBW That makes sense. The proximal cause of my tweet was actually info about a volunteer project where some folks thought they would help with suicide prevention by doing some text classification over Twitter data.>

2022-09-08 16:10:26 On the issues with "border tech" which are extra vexxed and hard to counteract: https://t.co/VjB5z4ixOS

2022-09-08 16:09:36 @Nanjala1 Yes, that is definitely the kind of service I'm worried about and you're right that it's one that is extra vexxed because the people affected have (even) less leverage.

2022-09-08 16:08:14 From Human Rights Watch: https://t.co/oGdIChG4l3

2022-09-08 16:07:25 More AI registries: https://t.co/TWpirLSf0q

2022-09-08 16:07:01 AI registries: https://t.co/IpvV8wu7rt

2022-09-08 16:06:35 @Kobotic @rcalo Thanks, Kobi. Will that help provide protections against (willy nilly) incorporation of machine learning into public services?

2022-09-08 16:04:44 Resources for pushing back against algorithmic decision making in public benefits: https://t.co/gxyRgQBPtP

2022-09-08 16:04:11 @merbroussard It wasn't -- what a great resource! Thank you.

2022-09-07 15:39:57 RT @emilymbender: If you start with a false premise, you can prove anything. Corollary: if you start with a false premise, you can end up w…

2022-09-07 14:51:36 @Abebab Congratulations!!

2022-09-07 13:37:18 RT @Abebab: dismissing AI ethics work as one that "fails to offer actionable proposals for improvement" is like saying "if the issue at han…

2022-09-07 03:55:23 RT @emilymbender: @AlexCEngler I think this story about the radium collecting roommate is a good metaphor. Data which are relatively innocu…

2022-09-07 03:54:19 @AlexCEngler I think this story about the radium collecting roommate is a good metaphor. Data which are relatively innocuous in isolation/in their natural state can become dangers when stored in big piles. https://t.co/A6AHBGkFtV

2022-09-07 03:50:52 @AlexCEngler Obviously, the Googles and Metas of the world should also be subject to strict regulation around the ways in which data can be amassed and deployed. But I think there's enough danger in creating collections of data/models trained on those that OSS devs shouldn't have free rein.>

2022-09-07 03:49:41 @AlexCEngler Do you really see no responsibility on the part of those who created the models &

2022-09-07 03:48:48 @AlexCEngler 2. What about when HF or similar hosts GPT-4chan or Stable Diffusion and private individuals download copies &

2022-09-07 03:47:29 @AlexCEngler Perhaps this could be handled by disallowing any commercial products based on under-documented models, leaving the liability with the corporate interests doing the commercializing, still. HOWEVER:>

2022-09-07 03:46:36 @AlexCEngler 1. If part of the purpose of the regulation is to require documentation, the only people in a position to actually thoroughly document training data are those who collect it. >

2022-09-07 03:45:48 @AlexCEngler Sorry to be a little slow there -- needed time to read your piece. Here are the things I am worried about, if OSS "GPAI" (ugh) is free from regulation:>

2022-09-06 23:07:30 @robtow I understand time zones, TYVM. The point is they didn't specify and then were rude about it.

2022-09-06 21:36:10 RT @timnitGebru: @emilymbender The lack of regulation is what stifles innovation by hijacking our time, constantly forcing us to cleanup ra…

2022-09-06 20:18:08 RT @FriendsOfAI: Streaming now: Mystery AI Hype Theater 3k - with @alexhanna &

2022-09-06 19:38:19 @huggingface Coda: Also, let's not just go around presupposing that "general purpose AI systems" are a) something that we have or will have soon and b) are actually desirable. Certainly the things that are getting called that now aren't clearly beneficial. https://t.co/qZiX1jWcsT

2022-09-06 19:36:25 And I expect better than this from @huggingface "top of the value chain"?! This just reads as self-aggrandizement. HF of all actors in this space should be making specific suggestions to improve the legislation, not whining about it. https://t.co/4GMNSaefAd

2022-09-06 19:34:48 On the plus side, I like these comments from Mike Cook: >

2022-09-06 19:33:20 Not surprised to see Oren's name attached to that comment, actually. He's been on a "speed at all costs" kick for a long time. https://t.co/zp5gPWJz4m>

2022-09-06 19:32:20 Second, "chilling effect". This is one of those terms that people throw around as if it's always a bad thing. The whole field needs to chill out and slow down, actually. And what we need from the likes of AI2 and HF surely isn't "catching up" with Google &

2022-09-06 19:30:34 How do people get away with pretending, in 2022, that regulation isn't needed to direct the innovation away from exploitative, harmful, unsustainable, etc practices?>

2022-09-05 16:47:36 RT @csdoctorsister: I'm so happy that I got all of the edits done for my new book, Data Conscience: Algorithmic Siege on our Humanity, befo…

2022-09-05 04:11:02 RT @timnitGebru: We don't have the space to imagine the technological futures that work for us because we're always putting out fires where…

2022-09-05 03:56:11 RT @kenarchersf: Read this, for all of you who wonder why AI ethicists don’t see the future benefits of AI… https://t.co/DG9TuQrYw7

2022-09-04 12:37:27 RT @emilymbender: Yes! Mystery AI Hype Theater 3000 is turning into a (mini?) series, not least because 1hr wasn't enough to even scratch t…

2022-09-04 03:02:16 @becauselangpod The textbook I'm talking about: https://t.co/DMBeqpBDHa See Ch. 4.

2022-09-04 03:00:28 @becauselangpod That is: while there are some semantic generalizations about individuatability, there are also just some morphosyntactic facts to memorize. Ex: cutlery &

2022-09-04 02:59:08 @becauselangpod Also, the same episode involves some discussion of count v. mass nouns (specifically for vegetables) in English. The answer, I believe (based on what Tom Wasow &

2022-09-04 02:52:28 Listening to "Mailbag of Ew" on the @becauselangpod 's Patreon and I've gotta say it warms my Gen X heart to hear Millennials complaining about being made fun of by Gen Zers...

2022-09-04 00:21:34 So the question is: Is there anything further for me to do here? I strongly suspect that their whole set up is a home for fraudulent papers. But I don't have the time nor the expertise to actually check the others. Should I post a link to their journals for others to check?

2022-09-04 00:18:26 And then: "We invite every researchers to our journals. We have several journals, and our journals are not only economics. Please check again our OJS website. Indeed, thank you for your suggestions. We are only publishing articles that are in the same focus and topic.">

2022-09-04 00:17:35 "Please check the website. It has been withdrawn."(Indeed, the link to the paper now redirects to a TOC for the issue that doesn't have the paper in question.)>

2022-09-04 00:16:39 If you don't withdraw the article, I will do what I can to expose your journal for the fraudulent papermill that I believe it to be." They replied to this within 3 minutes to say:>

2022-09-04 00:16:02 If you don't withdraw the whole article, that is a very clear signal that your whole journal is a fraudulent papermill. (Another indication: you invited me, a linguist, to submit to a journal ostensibly about economics.)>

2022-09-04 00:15:50 Your reviewers failed to catch that --- and I strongly suspect that authors who would do such a thing have also just made up their entire article.>

2022-09-04 00:15:29 I wrote back: "I think it's not just mistakes. They referenced at least two articles that were completely irrelevant to the points they claimed to be citing them for. >

2022-09-04 00:15:05 In the same message, they added: "Please if you have papers, we are ready to publish your paper. It is our honor to receive your paper. Our journals are free of charge." >

2022-09-04 00:14:17 So here's the promised report-back. The editors wrote back quickly to say they were consulting with the authors and then less than a day later to say that those citations were "mistakes" and the authors were fixing them.>

2022-09-03 22:55:00 @AnthroPunk This is something else entirely. They made up a point about something in another field entirely, and then tossed a citation to our paper on it. There is zero connection.

2022-09-03 22:29:19 @Kobotic Thank you. I guess I'm trying to understand what the legal value add is of putting these constraints in a license. For the well-intentioned, the license probably helps. For those dismissive of these concerns, are there really any actionable constraints?

2022-09-03 22:21:43 @Kobotic Sorry, I may be using terms clumsily. What I meant here is Stability AI, who are releasing the Stable Diffusion model under the RAIL license via HuggingFace. If someone accepts that license but then doesn't follow its terms, would someone other than Stability AI/HF have standing?

2022-09-03 22:15:15 @Kobotic Thanks. Is it enough for the person experiencing harm to seek for the code to be withdrawn from use, etc, or do they have to get the licensor to take action on their behalf?

2022-09-03 21:51:01 IOW, does a license like RAIL actually provide recourse to people who are harmed if someone uses SD to churn out demeaning/disparaging content? If so, how is that operationalized?

2022-09-03 21:49:52 @huggingface Curious what @mmitchell_ai @rcalo or @Kobotic think, if any of you have time :)

2022-09-03 04:19:05 @trochee @sminnen The paper does not look like the output of a text synthesis machine. It looks like desperate scholars trying to get a line on their CV + some weird ideas about cryptocurrencies.

2022-09-03 04:12:38 @desai_pratik They may well exist here, too, but this particular journal is based in India.

2022-09-03 04:10:23 @trochee I only checked one other paper they cite (picking the second least relevant one in their bibliography) and found a similar utter lack of connection.

2022-09-03 04:09:03 @mixedlinguist IKR?

2022-09-03 04:08:47 @trochee I'm guessing it's actually that we refer to the GOLD ontology (general ontology of linguistic description)

2022-09-03 03:57:46 I'll give it a couple of days, but I'm really not optimistic. The whole journal is probably trash. I'll report back about what happens, but from the looks of it, it's likely an outfit designed to meet the needs of scholars who have to pad their CVs.

2022-09-03 03:56:27 I've emailed the editor of the journal alerting them and suggesting that they a) retract this paper and b) recheck their reviewing processes. >

2022-09-03 03:55:48 The digital gold has so many advantages but depends on investor trustworthiness." Reader, I assure you: We say nothing of the sort.>

2022-09-03 03:55:30 "Bender and Langendoen (2010) concluded that in today's ecommerce generation every person invests digitally and digital gold comes the under this part digital gold is not available physically but it's internet currencies. >

2022-09-03 03:54:31 Well that's a new one. Just got a citation alert saying that my 2010 paper (with Terry Langendoen) "Computational Methods in Support of Linguistic Theory" was cited in a paper on "digital gold". Clicked through and found this wild claim:>

2022-09-02 20:58:02 RT @kemi_aa: MY DEPARTMENT IS HIRING https://t.co/9p5ldg1Umd

2022-09-02 17:23:00 @alexhanna And if you missed Part 1, you can catch up with the recording here: https://t.co/XCzXnZnfOk

2022-09-02 17:22:32 Yes! Mystery AI Hype Theater 3000 is turning into a (mini?) series, not least because 1hr wasn't enough to even scratch the surface of the piece we're working through. Join me &

2022-09-02 17:13:54 @alexhanna Here's the recording of Part 1! https://t.co/XCzXnZnfOk

2022-09-02 17:12:56 For all those who were asking for a recording, here it is! MAIH3K, Part 1, with @alexhanna And Part 2 is coming next week! https://t.co/XCzXnZnfOk

2022-09-01 01:11:50 Source: https://t.co/lznFZIzVj2

2022-09-01 01:11:38 Today's delightful discovery: Not only does Alaska have ranked choice voting, but they've also provided their FAQ about it in: Yukon Yup’ik, Hooper Bay Yup’ik, Gwichi’n, Norton Sound Kotlik Yup’ik, Bristol Bay Yup’ik, Chevak Cup’ik, General Central Yup’ik, Spanish, English

2022-08-31 23:03:32 RT @LeonDerczynski: Time after time, datasets without documentation turn out to be stuffed with toxic, unjust, and illegal data. The models…

2022-08-31 21:15:06 RT @timnitGebru: We're back after some technical difficulties. https://t.co/okoVAXvmlG

2022-08-31 21:13:46 RT @alexhanna: This was super fun, all! Stay tuned for Part 2 coming next week, and the recording to be posted on YouTube! https://t.co/1uJ

2022-08-31 19:51:23 RT @alexhanna: Beginning in 10 minutes!

2022-08-31 19:35:13 @ruthstarkman @alexhanna That's the plan!

2022-08-31 19:32:00 RT @CriticalAI: #CriticalAI is so excited for this stream! Join @emilymbender and @alexhanna at https://t.co/2YxY4ppEZS for what will defin…

2022-08-31 19:11:00 @timnitGebru

2022-08-31 19:06:31 RT @DAIRInstitute: This is happening in 1 hour. https://t.co/51XYPswUGt

2022-08-31 17:08:16 RT @emilymbender: We can't have sensible discussions of so-called "AI" (incl adverse impacts of work done in its name) if we cede framing o…

2022-08-31 16:48:52 @michaelbolton Thank you!

2022-08-31 16:48:42 RT @michaelbolton: A succinct and brilliant statement. Beautifully put. https://t.co/ivVRKaoUTw

2022-08-31 16:47:52 Credit: I picked up the phrase "support human flourishing" from Batya Friedman, specifically her keynote at #NAACL2022

2022-08-31 16:36:10 @random_walker @sayashk It seems to take a lot of vigilance to keep it from seeping in!

2022-08-31 16:34:39 @random_walker @sayashk I didn't think you were, which is why it really stood out. But quoting it without distancing yourselves from it makes things confusing at best. Especially given how much of the discourse does adopt that framing.

2022-08-31 16:32:45 We can't have sensible discussions of so-called "AI" (incl adverse impacts of work done in its name) if we cede framing of the debate to those who see "AI" (or "AGI") as the overarching goal itself, rather than the building of tools that support human flourishing.

2022-08-31 16:31:20 @random_walker @sayashk For example, they write: "Noted AI researcher Rich Sutton wrote an essay in which he forcefully argued that attempts to add domain knowledge to AI systems actually hold back progress." To which I ask: progress in/towards/for what?>

2022-08-31 16:30:36 I really enjoyed this first installment of @random_walker and @sayashk's substack. The analysis of how folks into deep learning end up with the attitudes they so frequently seem to hold is astute. I'm also left musing about the perniciousness of the framing of discussions on AI: https://t.co/KB6FeZ79MQ

2022-08-31 03:08:27 @simonw Well geez --- I'm not a moral philosopher. Just someone who has been reading up on all that literature I was pointing you to.

2022-08-31 03:08:03 @simonw But more generally: asking what you personally can/should do wrt running a model in the privacy of your own machine seems like a very strange place to allocate energy, given the harms playing out in the world. If you care, how about looking at what you can do more broadly?

2022-08-31 03:06:50 @simonw What are you doing with the outputs? Are you publishing them? How are you contextualizing them? Are you sensitized to the stereotypes they might be reproducing? Are you sensitized to the impact on the artists whose work was appropriated in the training of the models?>

2022-08-31 03:05:24 @kenarchersf Thank you. We've found a workaround for now. What I'm particularly looking for long term is a tutorial for how to connect to *manual* captions, produced live by a human captioner. The existing documentation seems more focused on auto-captions...

2022-08-31 03:03:58 @simonw Reading the existing literature about the harms ... won't help you decide what you want to do personally, given those harms?

2022-08-31 02:57:59 Seriously: before you have that conversation, sit down and read work by Safiya Noble, Ruha Benjamin, Abeba Birhane, Deb Raji, Timnit Gebru, Virginia Eubanks, at least.

2022-08-28 03:33:18 RT @timnitGebru: To anyone else, if you haven't done so, read this one by @Abebab @vinayprabhu &

2022-08-28 00:43:42 RT @timnitGebru: Please. https://t.co/zwYtI1yuJj

2022-08-27 23:25:31 RT @emilymbender: While we're arguing (again, *sigh*) w/techbros who think that any efforts to not flood the internet with automatically ge…

2022-08-27 23:18:25 RT @mmitchell_ai: A pattern I've seen emerge.Anti-AI-Ethics people: "When you say 'harm' that doesn't really mean anything."Also Anti-AI-…

2022-08-27 23:17:01 RT @mmitchell_ai: The writing from @schock here is so, so good.

2022-08-27 19:57:47 @alexhanna Yeah, I had in mind more the sarcastic "No, tell us what you REALLY think"

2022-08-27 19:19:05 Ever seen a tweet from me or @alexhanna and thought: "Tell us what you really think?" Join us on Wednesday. We'll be doing that. https://t.co/QMWjl25gIe

2022-08-27 19:15:53 @RobertClewley @schock In doing so, we can work to de-normalize structural discrimination (and techsolutionism) and thus make them easier to address through political processes. And while deprogramming individual techbros would also be nice, I don't think any strategy needs to rely on that.

2022-08-27 19:14:42 @RobertClewley @schock Speaking for myself: I think there is tremendous value in learning how to recognize and articulate these problems (and following the people I recommended plus reading their work is a great start). And then making a habit of speaking up.>

2022-08-27 14:44:17 add*

2022-08-27 14:08:10 Gotta ad: "complaining" isn't a good word for the scholarship I'm referencing (just in my tweet because of what I was responding to). Better is: documenting harms and setting them in their cultural context, both in terms of cause &

2022-08-27 13:35:45 Or as a layperson's summary: https://t.co/JTluwztoez

2022-08-27 13:35:07 If you prefer to read papers in tweet thread form: https://t.co/3Fdgwmwegf

2022-08-27 13:34:06 Okay, so in addition to all of the other things wrong with this (see my feed and @timnitGebru's among others), I also have to point out: generative models are NOT search engines and NOT a good way to meet information needs. See @chirag_shah and Bender 2022: https://t.co/rkDjc4kDxj https://t.co/1lZ1pfBaXb

2022-08-27 13:31:17 @huggingface The RAIL license is a good idea, but just posting it on the fence next to the fire isn't enough. HF needs to hold to high standards for what they will host, and not let the "OMG progress is so fast!1!!" crowd rush them through vetting processes. https://t.co/rzVg4cHDJL

2022-08-27 13:27:00 And we need organizations like @huggingface (presently hosting the Stable Diffusion model for public download) to act with courage and bring their might to the firefighting effort. >

2022-08-27 13:24:58 HOWEVER! We can get more person hours on this, if more people are willing to jump in. Wanna become a firefighter? Start by following &

2022-08-27 13:23:31 But we also need to organize our fire crew to go after the massive blaze and there are only so many hours in the day. >

2022-08-27 13:22:21 My takeaway this morning is that it is definitely worth it to put out the "brush fires" of things like the Stability AI crew claiming that there's absolutely nothing wrong with releasing models that will reinscribe and amplify racism, lest they grow and merge with the wildfire >

2022-08-27 13:20:00 @csdoctorsister Back to @schock 's passage, which finishes with: https://t.co/j5iWhssB5p

2022-08-26 14:49:04 @jessgrieser @JoFrhwld OTOH I really do have to update the photo of me that's there....

2022-08-26 14:44:17 @jessgrieser @JoFrhwld My publications page lists things from 1995 to 2022, TYVM

2022-08-26 14:34:45 @jessgrieser @JoFrhwld Though our faculty web server doesn't use ~ in the addresses. So maybe just middle aged?

2022-08-26 14:34:00 @jessgrieser @JoFrhwld I guess I'm really old...

2022-08-26 14:33:14 RT @downtempo: Doing AI/ML/DS work in Africa and want to know about upcoming conference, workshop, and event deadlines?Check out https:/…

2022-08-26 13:27:03 @UpFromTheCracks Thank you for your work!

2022-08-26 12:15:21 RT @emilymbender: This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of…

2022-08-25 19:21:36 RT @emilymbender: I could go on, but in short: This is required reading for anyone working anywhere near #AIethics, algorithmic decision ma…

2022-08-25 19:20:48 I could go on, but in short: This is required reading for anyone working anywhere near #AIethics, algorithmic decision making, data protection, and/or child "welfare" (aka family policing). https://t.co/qyHc9Juemd

2022-08-25 19:19:27 4. The ways in which the questions asked determine the possible answers/outcomes. 5. Again, the absolutely essential effects of lived experience and positionality to understanding the harms of those outcomes.>

2022-08-25 19:18:24 2. The absolutely essential effects of lived experience and positionality to understanding those harms. 3. The ways in which data collection sets up future harms.>

2022-08-25 19:15:59 .@UpFromTheCracks 's essay is both a powerful call for the immediate end of family policing and an extremely pointed case study in so many aspects of what gets called #AIethics: 1. What are the potentials for harm from algorithmic decision making?>

2022-08-25 19:12:15 RT @UpFromTheCracks: Just finished a panel on abolishing family policing and it’s algorithms featuring @DorothyERoberts and @b_lts_ . Now m…

2022-08-25 19:12:11 This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, &

2022-08-25 17:36:38 This looks like it will be amazing! https://t.co/9jHTOrda9W

2022-08-25 13:00:28 RT @SashaMTL: Many of the category definitions and images are strongly biased towards the U.S. and Europe, vastly under-representing biodiv…

2022-08-25 13:00:19 RT @SashaMTL: A third of ImageNet categories are animals, but what do they actually contain? @david_rolnick and I worked with ecol…

2022-08-25 12:59:42 RT @david_rolnick: According to ImageNet, fish are dead, birds are American, and 98% of ferrets are wrong. In new work with @SashaMTL (http…

2022-08-25 00:52:21 RT @timnitGebru: https://t.co/m9hVaeV1uH "We will also be releasing open synthetic datasets based on this output for further research." I…

2022-08-24 18:03:24 @zehavoc @TeachGuz @alexhanna That's the plan!

2022-08-19 13:26:19 RT @emilymbender: This was a really fun conversation --- I appreciated @pgmid 's questions and the chance to get to chat with @ev_fedorenko…

2022-08-18 20:52:33 So I think this speaks to the need for CS departments to do (more) "service" courses like math departments do, for broad investment in institutions of higher ed across all fields, and for tech firms to broaden their notion of who they stand to benefit from hiring.

2022-08-18 20:51:10 But we *desperately* need our tech workforce to include centrally people who are deeply trained in other fields and modes of scholarship as well.>

2022-08-18 20:50:21 Yes, it is valuable to have some of the workforce be primarily trained in computer science and software engineering. And yes it is valuable for a larger sector of the workforce to have programming skills.>

2022-08-18 20:49:20 RT @Kobotic: Who decides who decides what conversations are allowed about #AI? Who has power &

2022-08-18 20:49:10 I don't think it follows, however, that we should respond to continually growing demand for tech workers by continually growing CS majors. >

2022-08-18 20:48:09 I definitely see it as one of the functions of a public university to provide both a well-qualified workforce to the state to which it belongs and to provide educational opportunities that help students connect to well-paid careers.>

2022-08-18 20:46:59 Apropos of this op-ed, some thoughts:>

2022-08-18 20:41:55 Another corollary: group emails with different requests to different people, even if related, quickly turn into confusing threads where one of the requests gets buried in discussion of the other. Again, separate is better, even if it's the same set of addressees!

2022-08-18 20:40:02 Also, sometimes people email me to ask for things that are not my job --- if one of those is mixed in with requests that are my job, the whole thing gets harder to deal with, too. So there, too, separate is better!

2022-08-18 20:39:16 An email w/multiple requests requires more time and thus sits in my inbox until I can deal with ALL of them all at once. That ends up being far more of an imposition. So, unless the requests are intrinsically bound up with each other: separate emails are better!>

2022-08-18 20:38:12 Pro-tip for emailing busy people: It might seem like putting multiple questions/requests into one email is more polite, since it means only one email. In my experience, however, the opposite is true. >

2022-08-18 19:44:55 RT @turkopticon: In February we delivered a petition to Amazon #MTurk with over 2000 signatures calling on them to amend their mass rejecti…

2022-08-18 19:44:51 RT @alexhanna: Hey @amazon, you're stealing from Amazon Mechanical Turk workers by allowing mass rejections. @Turkopticon is demanding that…

2022-08-18 15:00:30 Is the habit of starting papers with remarks about "recently" or "increasingly" etc found across fields, or is that mostly a CS thing? Also, how does it relate to narratives of "rapid progress" in the field?

2022-08-18 13:18:44 @ocramz_yo Yeah it definitely feels like: This website is an *experience* and you shall experience it the way we have designed it to be experienced. Looking for some info here? Too bad

2022-08-18 00:51:28 Web pages that don't scroll at the rate you move the mouse -- why?And has anyone made a chrome app that would allow me to disable that behavior?

2022-08-17 18:47:03 @stanfordnlp @DanRothNLP @dilekhakkanitur @chrmanning @Google I know that @naaclmeeting is going to make the recordings from the conference publicly available at some point, but this doesn't look like the actual NAACL channel.

2022-08-17 18:41:10 This was a really fun conversation --- I appreciated @pgmid 's questions and the chance to get to chat with @ev_fedorenko ! https://t.co/KSqyotYttp

2022-08-17 17:49:34 RT @pgmid: Brains, linguistics, meaning and understanding, language vs. thought, and much more in conversation with the excellent @ev_fedor…

2022-08-17 16:19:46 I have not tagged Dr. Thornton here on the guess that she doesn't want to keep seeing these tweets in her mentions, but I will make sure she receives them. 4/4

2022-08-17 16:19:38 I also put out some polls (again in a mixture of annoyance and curiosity) that landed as mocking subtweets. This helped nothing and I'm sorry. 3/

2022-08-17 16:19:29 I'm sorry. I should have tweeted only curiosity or not tweeted at all. 2/

2022-08-17 16:19:18 I would like to publicly apologize to Dr. Pip Thornton. I replied to a tweet from her that I was tagged in with a mixture of annoyance and curiosity that landed as punching down. 1/

2022-08-17 02:45:25 RT @EAwatchdog: @timnitGebru From a 38$k grant. not to single out individuals, but just to show the kinds of things that are sufficient to…

2022-08-16 22:01:14 @michaeljcoffey It feels like there should be a "Being John Malkovich" spoof about zucchini. You know, that scene where everyone is John Malkovich, except everyone/everything is a zucchini...

2022-08-16 21:48:12 RT @timnitGebru: Shouldn't it be a conflict of interest for reporters in the effective altruism movement, at @TIME &

2022-08-16 15:19:57 RT @emilymbender: Had some fun this morning riffing with @danmcquillan and @cfieslerSee next tweets for sources of this conversation.

2022-08-16 00:02:06 @ZaneSelvans @cfiesler No. The prompt was other --- see the alt text in @cfiesler 's tweet.

2022-08-15 22:02:28 Series of polls, 4/4 Do you feel like you can tell, on reading tweets, which mentions are trying to draw the tagged person's attention, which are trying to draw others' attention to the tagged person, and which are just referential?

2022-08-15 22:01:05 Series of polls, 3/n Would you use a twitter tag to just refer to someone, without any intent to draw their attention?

2022-08-15 21:59:33 Series of polls, 2/n When you tag someone on twitter, are you typically cognisant of the fact that by doing so, you're potentially drawing others' attention to them?

2022-08-15 21:58:43 Series of polls, 1/n When you tag someone on twitter, are you typically cognisant of the fact that you're generating a notification to them?

2022-08-15 21:09:37 @Pip__T @hfordsa Sorry to bring negative energy into your day. Please interpret as: I'm really curious to see what is *in* this module. A list of twitter handles doesn't add up to that. Will your reading list or syllabus be openly available?

2022-08-15 20:49:40 @jvitak Ah, thank you. Turns out I had to first enable custom keyboard shortcuts in order to get anything more than just "turn them all off", which I definitely DON'T want, but this page had the info: https://t.co/ZqCyaI602D ...and I wouldn't have gone looking for it w/o your tweet.

2022-08-15 20:40:14 Does anyone know how to turn off the keyboard shortcut to the "tasks" app in gmail? I never ever ever want it, but if I start a reply to an email with the word "That" or similar, it frequently pops up, interrupting me &

2022-08-15 20:30:53 @garicgymro Yes, yes, n/a, Seattle

2022-08-15 20:29:16 On Twitter, surrounding a tweet with the emoji means that the tweet contains info that is (assuming some audience)

2022-08-15 19:34:29 @Pip__T @UoE_EFI I'm more interested in *how* my work is being used. Which paper? Under what heading? In conversation with which other work? A link to your actual syllabus would answer those questions and be interesting. Your uni's marketing page doesn't help me.

2022-08-15 04:15:38 RT @shengokai: All the concerns these "longtermists" have about AI and so on have been debated in much more nuanced ways by Critical Code S…

2022-08-13 14:16:59 @LeonDerczynski I'm so sorry. May their memory be a blessing.

2022-08-13 12:55:23 RT @emilymbender: Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it…

2022-08-12 00:48:32 I guess if a) weren't true, I might have sent an email (grudgingly) to try to determine b). But c'mon people: if you're asking for free labor you simply cannot keep the deadline a secret until people agree. Grrr.

2022-08-12 00:47:44 Almost just agreed to review a ms, because it did look interesting, but then I noticed that a) the journal is published by $pringer and b) there was no way to tell when the review would be due before agreeing. Either one of these is a clear no. Buh-bye.

2022-08-11 18:20:57 @HeidyKhlaaf Yes, nice :)

2022-08-11 18:18:27 @HeidyKhlaaf Nice... could use alttext though.

2022-08-11 18:09:21 ... I'm sure there's more to say and I haven't even looked at the EA puff piece in Time, but I've got other work to do today, so ending here for now.

2022-08-11 18:05:44 I'm talking about organizations like @AJLUnited @C2i2_UCLA @Data4BlackLives and @DAIRInstitute and the scholarship and activism of people like @jovialjoy @safiyanoble @ruha9 @YESHICAN and @timnitGebru>

2022-08-11 18:04:00 If folks with $$ they feel obligated to give to others to mitigate harm in the world were actually concerned with what the journalist aptly calls "the damage that even dumb AI systems can do", there are lots of great orgs doing that work who could use the funding:>

2022-08-11 18:02:48 But none of that credibly indicates any actual progress towards the feared? coveted? eagerly anticipated? "AGI". One thing it does clearly indicate is massive over-investment in this area.>

2022-08-11 18:01:48 Yes, we are seeing lots of applications of pattern matching of big data, and yes we are seeing lots of flashy demos, and yes the "AI" conferences are buried under deluges of submissions and yes arXiv is amassing ever greater piles of preprints. >

2022-08-11 18:00:56 To his credit, the journalist does point out that this is kinda sus, but then he also hops right in with some #AIhype:>

2022-08-11 17:58:33 And then of course there's the gambit of spending lots of money on AI development to ... wait for it ... prevent the development of malevolent AI. >

2022-08-11 17:47:52 "Figuring out which charitable donations addressing actual real-world current problems are "most" effective is just too easy. Look at us, we're "solving" the "hard" problem of maximizing utility into the far future!! We are surely the smartest, bestest people.">

2022-08-11 17:46:41 And that's before we even get into the absolute absurdity that is "longtermism". This intro nicely captures the way in which it is self-congratulatory and self-absorbed:>

2022-08-11 17:44:45 Once again: If the do-gooders aren't interested in shifting power, no matter how sincere their desire to do good, it's not going to work out well.>

2022-08-11 17:44:25 And yet *still* they don't seem to notice that massive income inequality/the fact that our system gives rise to billionaires is a fundamental problem worth any attention.>

2022-08-11 17:43:32 "Oh noes! The movement is now dominated by a few wealthy individuals, and so the amount of 'good' we can do depends on what the stock market does to their fortunes.>

2022-08-11 17:40:54 Poor in the US/UK/Europe? Directly harmed by the systems making our homegrown billionaires so wealthy? You're SOL, because they have a "moral obligation" to use the money they amassed exploiting you to go help someone else.>

2022-08-11 17:39:52 Another consequence of taking "optimization" in this space to its absurd conclusion: Don't bother helping people closer to home (AND BUILDING COMMUNITY) because there are needier people we have to go be saviors for.>

2022-08-11 17:37:20 Third: Given everything that's known about the individual and societal harms of income inequality, how does that not seem to come up? My guess: These folks feel like they somehow earned their position &

2022-08-11 17:36:30 Second: Your favorite charity is now fully funded? Good. Find another one. Or stop looking for tax loopholes. >

2022-08-11 17:34:12 "Oh noes! We have too much money, and not enough actual need in today's world." First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)>

2022-08-11 17:32:01 Just a few random excerpts, because it was so painful to read...>

2022-08-10 17:43:43 RT @rctatman: If you are currently working on a project like this:1. I'm not upset with you, I'm sure that you're coming from a place of…

2022-08-10 17:38:24 RT @robinberjon: This is a great way to capture what's wrong with the search affordances that dominate today. They are cognitively inferior…

2022-08-10 17:23:01 @joshraclaw My parents are from NJ though &

2022-08-10 17:22:26 @joshraclaw I'd say lollipop, but sucker in that sense isn't unfamiliar. (Dialect region: PNW/Seattle.)

2022-08-10 17:13:46 RT @emilymbender: From what I've seen about Effective Altruism, it puts competition (what's the "best" way to help?) and very cerebral acti…

2022-08-10 17:07:11 RT @timnitGebru: Everyone who knows the deal about these ideologies needs to speak up. I know there are those of you who are afraid of the…

2022-08-10 16:40:49 @jennycz @kirbyconrod Yeah, if there are funds available somewhere institutionally for OA for work done at that institution, I think there's a non-zero chance they'd be set up to still work even if the publication happens after the researcher has moved on.

2022-08-10 04:09:20 @KateKayeReports Indeed. I would love to see a shift towards more accurate terminology. See also: https://t.co/9vxKYVqHdo

2022-08-10 03:54:41 @KateKayeReports In that tweet, it's not "AI" but "superhuman" that I'm objecting to. See the rest of the thread...

2022-08-09 21:54:51 This looks really interesting -- and like something that has an actual claim to being about "democratizing" technology. https://t.co/VaRIu7OnTP

2022-08-09 21:34:24 RT @C2i2_UCLA: https://t.co/kaOlcBlozV

2022-08-09 20:46:20 RT @RadicalAIPod: Hey #AcademicTwitter!! Can you believe it's almost time for school already?! If you're like us, then you're probably wond…

2022-08-09 17:11:17 This. It's been rather astonishing to watch as people get upset at the idea that there should be any accountability (even at the level of being called out) around using dehumanizing analogies. https://t.co/2Wsh68wMZm

2022-08-09 16:19:45 @MadamePratolung I think I tried to read that and couldn't make it through. Too many more important demands on my time...

2022-08-08 22:19:15 RT @HabenGirma: #HelenKeller lived an extremely active life through her senses of touch, smell, taste &

2022-08-08 21:36:52 @HabenGirma Thank you for taking the time to read it &

2022-08-08 18:15:14 Meanwhile, if you actually believe in working against bigotry, start close to home, aka come get your people: https://t.co/iv5BpvWljF

2022-08-08 18:14:31 The best move, when people who are impacted by systems of oppression point out bigotry is to set aside any defensiveness and try to learn from the experience.>

2022-08-08 18:13:21 If the feedback you consistently get for 'trying to discuss something in public' is being called a bigot ... chances are you've at the very least either engaged with bigoted ideas or not learned how to discuss the issue at hand appropriately.>

2022-08-08 17:23:52 Meanwhile, over the weekend a big name in AI was whining about how you can't talk about "AI alignment" online without being called a bigot. Wanna not be seen as a bigot? How about actively cleaning up the discourse being done in the name of your community.

2022-07-27 12:04:27 RT @emilymbender: In Stochastic Parrots, we referred to attempts to mimic human behavior as a "bright line in ethical AI development" (I'm p…

2022-07-26 18:37:00 @f_dion Quick lesson in how Twitter works. If you hit "Reply" you're replying to me, which makes it sound like you think I don't already know the info in your tweet. If you hit "Quote Tweet" you can very appropriately share your thoughts on what I said to your followers.

2022-07-26 16:52:17 @luke_stark I'll be speaking on Friday at #COGSCI2022 about dehumanization in the field of AI. Folks who take such a definition seriously are one example.

2022-07-26 16:50:52 RT @emilymbender: Language is one important tool we have for communicating with other people, sharing ideas, &

2022-07-26 16:50:28 Thanks to @shayla__love for this reporting. https://t.co/hxinQjxILV

2022-07-26 16:49:32 And so we, collectively, need to answer the questions: What are the beneficial use cases for such text synthesizing machines? How do we create them with and insist on sufficient transparency to avoid preying on human empathy?

2022-07-26 16:48:06 GPT-3, or any language model, is nothing more than an algorithm for producing text. There's no mind or life or ideas or intent behind that text. >

2022-07-26 16:47:11 Language is one important tool we have for communicating with other people, sharing ideas, &

2022-07-26 16:45:16 @mmitchell_ai As Dennett says in the VICE article, regulation is needed---I'd add: regulation informed by an understanding of both how the systems work and how people react to them. https://t.co/IwqkhvFNnI>

2022-07-26 16:43:27 @mmitchell_ai Given the pretraining+fine-tuning paradigm, I'm afraid we're going to see more and more of these, mostly not done with nearly the degree of care. See, for example, this terrible idea from AI21 labs: https://t.co/qqe93Qf7SF>

2022-07-26 16:42:05 In Stochastic Parrots, we referred to attempts to mimic human behavior as a "bright line in ethical AI development" (I'm pretty sure that pt was due to @mmitchell_ai but we all gladly signed off!) This particular instance was done carefully, however >

2022-07-26 12:26:55 RT @emilymbender: Thinking back to Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab)'s great keynote at #NAACL2022. Sh…

2022-07-25 23:57:31 @datingdecisions I saw this tweet first without the other and thought that you were just praising twitter experts for being scrumptious (succulent).

2022-07-25 22:00:56 #NLProc https://t.co/e6pdRW7oc7

2022-07-25 18:40:59 Friedman's emphasis was on materiality &

2022-07-25 18:40:05 As an example, she gives an alternative visualization of "the cloud" that makes its materiality more apparent (but still feels some steps removed from e.g. the mining operations required to create that equipment).>

2022-07-25 18:38:36 Finally, I really appreciated this message of responsibility of the public. How we talk about these things matters, because we need to be empowering the public to make good decisions around regulation. >

2022-07-25 18:35:49 Similarly, there may be life-critical or other important cases where AI/ML really is the best bet, and we can decide to use it there, being mindful that we are using something that has impactful materiality and so should be used sparingly.>

2022-07-25 18:34:40 Where above she draws on the lessons of nuclear power (what other robust sources of non-fossil energy would we have now, if we'd spread our search more broadly back then?) here she draws on the lessons of plastics: they are key for some use cases (esp medical). >

2022-07-25 18:31:49 As societies and as scientific communities, we are surely better served by exploring multiple paths rather than piling all resources (funding, researcher time &

2022-07-25 18:30:46 Thinking back to Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab)'s great keynote at #NAACL2022. She ended with some really valuable ideas for going forward, in these slides: Here, I really appreciated 3 "Think outside the AI/ML box".>

2022-07-21 22:26:38 RT @cogsci_soc: #MarkYourCalendars#Keynote at #CogSci2022Resisting dehumanization in the age of AI July 29⏰  13:00-14:00 EDT Emily…

2022-07-21 17:56:02 @kirbyconrod @joshraclaw We need a special reactemoji for "I appreciate this turn for its linguistic structure (and possibly for other reasons, too)"

2022-07-20 20:04:21 @SameeOIbraheem Glad to hear it!

2022-07-20 04:58:59 @SameeOIbraheem /waves welcome!

2022-07-18 19:38:36 @kirbyconrod Weekly-ish. I try to test before e.g. gatherings and if it's been more than a week since I've tested for that reason, I'll be motivated to test just because.

2022-07-17 18:43:10 RT @WilliamWangNLP: If you attended #NAACL2022: Did you test positive for COVID-19? #nlproc

2022-07-16 19:41:11 RT @HabenGirma: How do you ask a friend to stop using ableist terms? Being called out can trigger shame &

2022-07-15 17:28:36 @qi2peng2

2022-07-15 00:00:25 @qpheevr @kirbyconrod Yeah, there's something of an art to figuring out which emails need ack notes and how to write ack notes so that they aren't perceived as needing ack notes. And sometimes I think there's generational differences involved too....

2022-07-14 23:56:08 RT @JordanBHarrod: New Video!................................ no. https://t.co/39F1YUhOSA

2022-07-14 21:53:53 RT @Abebab: if you read just one paper on why we shouldn't automate morality, make it this one by @ZeerakTalat et al

2022-07-14 13:38:11 RT @emilymbender: "Unsafe at any accuracy" strikes me as a very valuable &

2022-07-14 00:04:38 Audience view of #NAACL2022 and it's striking how high mask compliance is. (Same observation while there in person yesterday &

2022-07-13 23:58:27 Another wishlist item for hybrid conferences --- the ability to signal applause from afar. Zoom has this. Why doesn't @underlineio ? #NAACL2022

2022-07-13 23:54:38 That also helps with the "sense of community" part of it. I'm not just sitting by myself listening

2022-07-13 23:53:05 One simple thing that could definitely help is a visible count of online attendees. (I thought I saw that earlier this week, but not today.) That would flag: "Other people have come too" and help reassure folks that we're in the right place. https://t.co/519nn8hr7n

2022-07-13 23:24:27 @aryaman2020 Thank you.

2022-07-13 23:22:42 @underlineio It seems like it should be 100% possible to differentiate between "This link isn't live yet" and "Yeah, you're in the right place, but we haven't quite started".

2022-07-13 23:22:06 Here's another pointer for hybrid conferences: If there's some delay in starting a session, this should be made apparent through the online interface. Otherwise, lots of people are sitting individually wondering if they're in the wrong place! @underlineio #NAACL2022

2022-07-13 23:20:55 @jacobeisenstein Thanks! It's always very disconcerting when connecting online to see no indication of what's going on. I keep wondering if I've clicked the wrong link (as has happened in the past)...

2022-07-13 23:19:26 Anyone else attending #NAACL2022 online today? Are you able to get to the plenary? (I'm seeing "This event has not started yet" on Underline, which is a bit odd, given that it should have started ~5 minutes ago.)

2022-07-13 22:02:25 Attending #NAACL2022 virtually today, and noticing that I can recognize far fewer colleagues by voice than by sight. IOW, it's extra salient to me right now how important it is for folks to introduce themselves before asking questions!

2022-07-13 18:50:54 @evanmiltenburg @complingy @SeeTedTalk @annargrs @LucianaBenotti @boknilev I do see some value in very broad conferences, actually. The sessions that I end up in are fairly eclectic, and I probably wouldn't go to a whole conference on each theme...

2022-07-13 17:18:09 @zehavoc If that person has come here from a timezone further east, they probably woke up early, actually. The 8am slot would be painful for someone coming from Asia.

2022-07-13 17:16:14 @James__Carey Probably not super salient for this crowd.

2022-07-13 17:12:10 @zehavoc And compared to the fully online events, this is actually a relaxed/late start for us. The world is big and round, and the Pacific Ocean is big and sparsely populated...

2022-07-13 17:11:30 @zehavoc This is the result of location in Seattle (just E of the big water) + trying to make some of the live content accessible across more timezones!

2022-07-13 17:09:44 "Averaging beliefs is not an approximation for debate or 'accuracy'" in ethical debates. (From their slides.)

2022-07-13 17:08:50 @ZeerakTalat "Automating moral judgments" is my short summary of what they're describing (functionality of the Delphi model), and that might be a fully accurate take.

2022-07-13 17:07:27 "Unsafe at any accuracy" strikes me as a very valuable &

2022-07-13 17:06:32 "Automating moral judgments is a category error and unsafe at any accuracy" -- @ZeerakTalat et al at #NAACL2022 https://t.co/LvcAIIHVvf

2022-07-13 14:59:01 Yet another reason to not publish with Springer --- just clicked through on a paper I was interested in reading and got interrupted by at *^^&

2022-07-13 13:55:46 Those affordances meant that there were reasonably easy opportunities for interaction (ACL 2020 was all online, but I am hopeful this would bridge online+in-person) and thus it was possible to experience things together as a community in real time./fin

2022-07-13 13:54:51 Key features of RocketChat were: one channel per paper, plus the "plenary" channel, plus you could create other channels at will. You could subscribe to channels so it was easy to check if anything had happened in the conversations you were following. Responsive. Emoji reacts.>

2022-07-13 13:53:40 I also think that a responsive, inviting &

2022-07-13 13:51:47 When we think about how to design hybrid conferences, we should keep an eye on 2 and 3, and not just 1. I think the #NAACL2022 structured socials are an interesting step in this direction (will be attending one today!). >

2022-07-13 13:50:29 To elaborate a bit, what I was hoping to say was that I see three functions of conferences:1) Sharing our research results + getting feedback2) Community building in the form of shared experiences3) Networking>

2022-07-13 13:49:21 @JesseDodge @mmitchell_ai Thanks, Jesse! So it seems like "old books" really aren't the dominant category...

2022-07-12 22:17:47 RT @LucianaBenotti: If you are interested in geographic diversity at %NAACL2022 you can check slide 6 in the business meeting slides below.…

2022-07-12 22:00:06 @BayramUlya Also impact of visa issues

2022-07-12 21:51:41 What % of training data of English LLMs is from say last 10 years? (Apropos of a question at #NAACL2022 suggesting that they include "a lot of old books"). @JesseDodge @mmitchell_ai do you know?

2022-07-12 21:39:03 @danyoel Just heard one in Trista Cao's presentation on (US) stereotypes captured in (English) LMs.

2022-07-12 20:08:58 @LucianaBenotti And 70% of attendees from Canada, too!

2022-07-12 20:07:41 #naacl2022 has attendees from 63 countries, of which 20 are only represented by on-line attendees -- @LucianaBenotti (99% of participants from China are attending virtually, too.)

2022-07-12 20:04:40 RT @maria_antoniak: “NLP is better for its partnership with linguistics, because linguistics grounds NLP as an application area where there…

2022-07-12 20:03:20 1967 on-site + 1007 online: updated #NAACL2022 attendance, per Dan Roth.

2022-07-12 19:20:00 RT @SeeTedTalk: Your very occaisional reminder about the @naaclmeeting Latin America mailing list. Chugging away since 2009, it has ~ 200 m…

2022-07-12 15:53:55 RT @timnitGebru: "But there is an aspect of so-called effective altruism that, as a philosopher, I naïvely never thought to question: the i…

2022-07-12 14:09:02 @roydanroy @devoidikk Those do look super intimidating!

2022-07-12 14:01:05 @roydanroy @devoidikk Yes -- it's way more comfortable than the other N95s I've found and most importantly compatible with my glasses (progressives =>

2022-07-12 12:20:21 @roydanroy @devoidikk It’s an envomask.

2022-07-11 17:10:42 "It's time to start thinking out of the AI/ML box" -- Batya Friedman reflecting on the materiality of compute and its environmental impacts at #NAACL2022

2022-07-10 00:59:17 @ArthurCamara Thank you for your kind words and I'm glad you enjoyed the episode!

2022-07-09 14:06:36 RT @emilymbender: Here's how the latest one I did went: 1. Saw the headline with the bogus claim of "predicting" crime a week before it h…

2022-07-09 03:03:53 RT @chirag_shah: In the light of the recent discourse about Google's LaMDA, my paper with @emilymbender at #CHIIR2022 a few months ago seem…

2022-07-09 00:35:25 @shengokai Saw your tweet this morning and then remembered it when I saw the abstract for Lydia X. Z. Brown's keynote at WiNLP this weekend: https://t.co/wpoxpuhWH8

2022-07-08 20:57:59 RT @emilymbender: @rowlsmanthorpe But: see second para of the screen cap. Zuck seems to be trying to argue that massive scale and potential…

2022-07-08 20:57:33 RT @emilymbender: Also, "rarely spoken" is a ridiculous thing to say about (spoken) languages. If there's a community of speakers, it's pro…

2022-07-08 20:37:37 @SorenSpicknall More people need to know about @ImagesofAI !

2022-07-08 19:19:44 @CT_Bergstrom Thank you, @CT_Bergstrom -- I appreciate your kind words!

2022-07-08 19:19:24 @Telecordial @CT_Bergstrom Corvids are great! But I do not have @CT_Bergstrom 's skills in photographing them.

2022-07-08 19:02:55 @LeonDerczynski Does that screen cap end with a pointer to Strunk and White? Huge . Also, while first paragraphs can contain grandiose blathering and road-map paragraphs can be boring, both can and should be done well!

2022-07-08 18:48:57 @gchrupala @myrthereuver The connection is that the evidence that people are using to claim sentience involves linguistic output of these systems. So showing the lack of communicative intent obviates this supposed evidence.

2022-07-08 17:56:39 RT @rajiinio: When consulted on policy, technologists bring in proposals that are unrealistic or ineffective as it relates to how law actua…

2022-07-08 17:56:31 RT @random_walker: Let’s stop enabling this behavior. Let’s make it safer and easier for actual experts to correct, challenge, or call out…

2022-07-08 17:56:22 RT @random_walker: So there’s a large, cheering audience for the uninformed cynicism spewing forth on panels, op-eds, and on Twitter. As a…

2022-07-08 15:50:46 RT @callin_bull: Though we quickly expanded the scope, Calling Bullshit began as “Calling Bullshit on Big Data” and focused on misapplicati…

2022-07-08 15:32:17 So, in sum: Hooray for work on more languages (and MT other than to/from English). But this isn't a "superpower" and it isn't going to let @facebook off the hook for its responsibilities regarding misinfo, disinfo and harassment in all the locales in which it operates.

2022-07-08 15:31:19 Also, "rarely spoken" is a ridiculous thing to say about (spoken) languages. If there's a community of speakers, it's probably spoken daily. Also, I checked Ethnologue and they list >

2022-07-08 15:28:31 (More generally, x% improvement could be that there are x% fewer errors or x% more correctness ... since this is BLEU, I think it has to be the latter. Also, since they say "x% higher" in the PR. But all that just goes to show how vague &

2022-07-08 15:26:50 And also: "x% improvement" is always meaningless if we don't know the starting point. From the Meta PR: https://t.co/LMhLvz1Gur

2022-07-08 15:22:01 @rowlsmanthorpe But: see second para of the screen cap. Zuck seems to be trying to argue that massive scale and potential upside will somehow counteract documented downside. #techchauvanism through and through. (Not to mention "superpower" in the headline.)>

2022-07-08 15:20:03 And props to @rowlsmanthorpe for including the key point in the first para of this screen cap as well as key insights from Dr. Birch-Mayne. https://t.co/rAnCgBTkPn

2022-07-08 15:14:29 Gonna give this one a mixed review. Props to Facebook for collecting this dataset and apparently paying for L1 speaker verification of it. https://t.co/J0PfHRkAtx>

2022-07-06 20:05:33 @djg98115 I like to describe the TV show Portlandia as "It's funny in a you-had-to-be-here kind of a way". (That is, lumping Seattle in with Portland on some of those stereotypes fitting...)

2022-07-06 18:58:21 @hangingnoodles @GaryMarcus @WiringTheBrain Honestly, you can create computer systems that "store reversible associations", over a specific domain of applicability. I wouldn't call them "AI", but I also don't think LMs are "AI".

2022-07-06 18:01:30 Why would you name a project/paper after a failed US education policy?

2022-07-06 17:24:44 RT @xkcd: The Universe by Scientific Field https://t.co/eFi5uS8RTo https://t.co/8fGbn2cSzI

2022-07-06 13:59:37 And there is more to the West Coast than CA TYVM. We don't do the "the" thing up here in WA either. https://t.co/wK3hFeuRmw

2022-07-06 13:49:19 RT @naaclmeeting: It's not too late to help with live tweeting for #NAACL2022 ! We're still looking for more volunteers to help tweet about…

2022-07-06 12:26:49 RT @emilymbender: This is exhausting indeed, and I think addressing this thoroughly requires at least:(thread)

2022-07-06 01:15:38 @niloufar_s

2022-07-05 22:54:14 @fusipon It's actually not relevant whether or not humans can do this (we can't). The point is that people should stop claiming that AI can.

2022-07-05 22:34:10 @SColesPorter My characterizing a long and tiring conversation on Twitter as a "debate" isn't the same thing as you hosting an event (and charging admission) to rehash the same content and calling it a "debate".

2022-07-05 22:33:30 @SColesPorter Wow, aren't you clever?

2022-07-05 19:56:54 @SashaMTL @Abebab Looks like they deleted the tweet. Sounds like they're still set on having their pointless/hype-advancing "debate".

2022-07-05 19:25:35 @SColesPorter @TanDuarte @Abebab @SashaMTL @WorldSummitAI You can't go around saying it's the opposite of hype and putting out trashy hype like this: https://t.co/POIzWnKG1p

2022-07-05 18:50:26 @SColesPorter @SashaMTL @Abebab A few people on the panel and yet your advertising only mentions one. You are blatantly just looking to make $$ off of this tired story and perpetuating more AIhype in the process. You are doing a disservice with this.

2022-07-05 18:31:40 @SashaMTL @SColesPorter @Abebab Agree with Sasha, and: a) This conversation has been had. There is no sentience there. b) Your guest is not actually a qualified expert on this, blathering on as he has about the "intelligence" of the chatbot in the Economist and other publications.

2022-07-05 16:51:53 I've found that this gets easier over time, as a function of building up the skills (what questions to ask of what I'm reading, how to craft a tweet thread), the courage (I'm going out on a limb, but it's both worth it and ok), and my network (on Twitter &

2022-07-05 16:49:33 In other words, I was on the fence about whether to speak up on this one, but I'm glad I did, since it seems to have maybe had a beneficial effect.And on the strength of that I want to encourage others to get in the habit of speaking up!>

2022-07-05 16:48:50 So I tried to keep my commentary grounded in what I do know something about: task framing, how to read an article like this critically, evaluation. And I pointed to the kinds of experts who should have been interviewed.>

2022-07-05 16:47:30 I felt a little out of my area of expertise with this one: though I am very concerned about mass incarceration, over policing and police violence, I'm not particularly well-read on these topics. >

2022-07-05 16:46:00 6. Took notes along the way of the things that I thought were particularly fishy.7. (This was the next morning, I think) summarized those points in a typo-ridden tweet thread.>

2022-07-05 16:45:18 What are the authors actually claiming? How does it relate to the way the claim was framed in the Bloomberg article? What task did the automated system actually do, with what input &

2022-07-05 16:44:23 4. Got access to the Nature article through my university's library.5. Read it, with an eye towards the following questions:>

2022-07-05 16:43:43 2. Over the weekend, read the Bloomberg article and found it infuriating. 3. Decided I should say something &

2022-07-05 16:43:11 Here's how the latest one I did went: 1. Saw the headline with the bogus claim of "predicting" crime a week before it happens a couple of times. (Including once or twice where someone tagged me, but also separate from that.)>

2022-07-05 16:39:22 @WellsLucasSanto @_KarenHao I definitely don't know enough about the journalism ecosystem, and what pressures people are working under, both freelancers &

2022-07-05 16:31:16 RT @emilymbender: I'll leave this one as a challenge &

2022-07-05 16:31:13 @SashaMTL @Abebab I don't know how to address that one without helping them sell tickets....

2022-07-05 16:30:36 I'll leave this one as a challenge &

2022-07-05 16:28:46 3. As long as 1&

2022-07-05 16:26:49 2'. Also, no more using press releases to evade the peer review part of the scientific conversation: https://t.co/aUhdTd8JIJ>

2022-07-05 16:25:34 2. Researchers (academia &

2022-07-05 16:24:28 1. Journalists holding themselves to the high standards I know are out there for the best journalism. If you want one quick hack to writing better stories about so-called AI, I'd start here: https://t.co/ZFgFuY74AF>

2022-07-05 16:22:47 This is exhausting indeed, and I think addressing this thoroughly requires at least:(thread) https://t.co/FkagCnev7l

2022-07-05 15:31:30 RT @Abebab: it's exhausting and unproductive for us to engage every reporter about incorrect and overblown reporting. all reporters writi…

2022-07-05 14:17:56 @natematias @hypervisible Thank you!

2022-07-04 15:47:06 RT @histoftech: Building fancier and fancier calculators (and yes, that’s what this is) is important, but it’s not the only thing. And it’s…

2022-07-04 15:41:43 The sequences of words are just externally visible artifacts that we use in the extended communicative acts that are a core part of education---and that others can observe as well.

2022-07-04 15:40:44 @NarasMG No, it doesn't. And even if it did, so what? The Turing Test is not a law of nature.

2022-07-04 15:40:06 And just to try to head off some of the response: the point here isn't that a college education is meaningless. It's that what we're doing in education is not a matter of causing students to emit certain sequences of words in certain formats. >

2022-07-04 15:38:39 Like seriously: if an "AI" output a 4-year degree worth of essays and exam answers with enough apparent coherence &

2022-07-04 15:35:24 The tendency of AI researchers to equate the form of an artifact with its meaning is seemingly boundless. A college degree is not comprised of essays and exam papers, even if such elements play a key role in our evaluation of human progress towards one. https://t.co/jjat3xu1sk

2022-07-04 15:22:16 After that, they can fantasize about taking out the Terminator, for funsies.

2022-07-04 15:21:53 How about we set up a system where people can only spend time worrying about "AI systems gone awry" when they have put significant effort into addressing the climate crisis AND into the actual harms perpetuated in the name of "AI"? https://t.co/VCZlUKwNd3

2022-07-04 13:45:42 RT @emilymbender: Don't make me tap the sign (1/2) https://t.co/bRWtFl6KKI

2022-07-04 13:34:22 RT @ruchowdh: Grateful for the (minor) revisions made thanks to @emilymbender s popular thread but baffled at why an intern was given this…

2022-07-04 04:32:10 If I'd known my tweets were going to lead to the Bloomberg article being revised, I guess I might have spell checked better? But seriously: journalists covering this should be talking to people who know something about mass incarceration, over policing, &

2022-07-04 04:30:20 Typo fix #2: this should say police, not policy https://t.co/spYYYWicfh

2022-07-04 04:29:37 Typo fix #1: this should say No, not Not. https://t.co/RtHoObojzF

2022-07-03 19:55:08 RT @emilymbender: Don't make me tap the sign (2/2) https://t.co/yrpslsYSxq

2022-07-03 19:55:01 Don't make me tap the sign (2/2) https://t.co/yrpslsYSxq

2022-07-03 19:54:11 Don't make me tap the sign (1/2) https://t.co/bRWtFl6KKI

2022-07-03 19:46:23 @FloodSmartCity Assigning fault to an algorithm is already a category error. Why say this?

2022-07-03 19:36:08 RT @emilymbender: One other fishy/squirrely thing I noticed about the article. In this paragraph they talk up the evaluation as a "true pro…

2022-07-03 19:36:03 One other fishy/squirrely thing I noticed about the article. In this paragraph they talk up the evaluation as a "true prospective forecasting test" but their use of it was purely retrospective. https://t.co/43QQo75Rmx

2022-07-03 14:31:05 3. What about wage theft, securities fraud, environmental crimes, etc etc? See this "risk zones" map: https://t.co/KwTtQWsFfY

2022-07-03 14:29:42 2. What happens when police are deployed somewhere with the "information" that a crime is about to occur?>

2022-07-03 14:29:03 In summary, whenever someone is trying to sell predictive policing, always ask: 1. Why are we trying to predict this? (Answer seems to be so police can "prevent crime", but why are we looking to police to prevent crime, rather than targeting underlying inequities?)>

2022-07-03 14:27:37 5. The final section is called "Limitations and conclusion" which is a weird combo and maybe is an attempt to excuse a weird mess of a section that talks out of both sides of its mouth? Note the phrase "powerful predictive tools" here, hyping what they've built: https://t.co/Cue3w7bCw3

2022-07-03 14:22:56 Those "enforcement biases" have to do with sending more resources to respond to violent crime in affluent neighborhoods. They claim that this would allow us to "hold states accountable in ways inconceivable in the past".>

2022-07-03 14:21:28 4. The authors acknowledge some of the ways in which predictive policing has "stirred controversy" but claim to have "demonstrate[d] their unprecedented ability to audit enforcement biases". >

2022-07-03 14:18:42 3. A prediction was counted as "correct" if a crime (by their def) occurred in the (small) area on the day of prediction or one day before or after.>

2022-07-03 14:17:45 Some interesting details from the underlying Nature article: 1. Data was logs maintained by the cities in question (so data "collected" via reports to police/policing activity). 2. The only info for each incident they're using is location, time &

2022-07-03 14:14:27 No it effing can't. This headline is breathtakingly irresponsible. h/t @hypervisible https://t.co/5z9wqj3sdC

2022-07-02 13:27:52 @raciolinguistic @CarolSOtt Yay!!!! Congrats :)

2022-07-02 03:30:43 RT @timnitGebru: Since reporters are still asking about this and I really don’t want to talk about sentient machines, posting again what @…

2022-07-01 20:46:08 @samsaranc @ZaldivarPhD @benhutchinson For data statements, see: https://t.co/dlOMS4iyye We provide a guide to writing data statements + templates (in a few formats).

2022-07-01 19:00:32 @drtowerstein We already have a grammar formalism (HPSG), though I've also long thought that it would be really interesting if others also wanted to build grammar customization systems with other frameworks/formalisms. I'd love to be able to compare &

2022-07-01 18:38:08 @davidthewid @dabeaz Yeah -- all part of the same thing: ML seeks to "own" all domains and many times this seems to mean claiming "domain experts are no longer needed" which is quickly "domain expertise isn't valuable" etc etc.

2022-07-01 18:32:35 @davidthewid @dabeaz not actually even a CS person...

2022-07-01 17:24:35 @redkatartist "I refuse to debate with people who won't take as given my own humanity."

2022-07-01 16:27:33 I can't imagine a way to create &

2022-07-01 16:26:46 I'm kinda wishing we had running polling of the general public (internationally!) around questions such as "Has sentient AI been created?" Curious if the media attention to bogus claims over the past weeks would have moved that needle. But also >

2022-07-01 14:18:02 In sum, I'm really excited about what we can do using computational methods to encode and combine precise linguistic knowledge --- in ways that it can be built on further!

2022-07-01 14:17:49 This paper comes out of Kristen Howell's PhD dissertation, in which she synthesized previous AGGREGATION work into an end-to-end pipeline, added inference for many phenomena and did the first comprehensive multilingual evaluation of the system.>

2022-07-01 14:08:48 Current MS projects include inference for adnominal possession and valence changing morphology, each of which in turn were libraries added to the Grammar Matrix as MS projects (Nielsen and @curtosys)! >

2022-07-01 14:06:44 Clearly, the grammars aren't perfect! There is work to be done both in reducing noise in the grammar inference process and in adding phenomena to both the Grammar Matrix customization system and the inference system that produces grammar specifications.>

2022-07-01 14:05:38 Evaluation in terms of not just coverage, but also lexical coverage (how many sentences were made up of words the grammar could handle), ambiguity, and correctness of semantic representations.>

2022-07-01 14:02:30 Languages map! Red dots are our development languages, blue are other languages consulted, and green are the five held-out test languages.>

2022-07-01 13:59:14 We test the system on held-out languages, in each case creating a grammar specification to put into the Grammar Matrix customization system from 90% of our data and then testing that grammar on a held-out 10% of the data (10-fold cross-validation).>
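The 10-fold protocol described in that tweet (build a grammar specification from 90% of the data, test on the held-out 10%, rotating through all ten folds) can be sketched generically. This is a minimal illustrative sketch, not the AGGREGATION project's code; the function name `ten_fold_splits` and the use of Python's stdlib are assumptions for illustration only.

```python
import random

def ten_fold_splits(items, k=10, seed=0):
    """Shuffle items once, partition them into k folds, and yield
    (train, test) pairs: each fold serves once as the held-out ~10%,
    with the remaining ~90% available for building the grammar spec."""
    items = list(items)
    random.Random(seed).shuffle(items)          # fixed seed for reproducibility
    folds = [items[i::k] for i in range(k)]     # round-robin partition into k folds
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

# Each of the 10 iterations would feed `train` to grammar inference and
# evaluate the resulting grammar on `test`.
```

Each item here would stand in for one IGT example; scores are then averaged over the ten folds.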

2022-07-01 13:52:41 Which takes as input corpora of IGT like (first pic), and produces grammars which produce syntactic &

2022-07-01 13:47:03 The result is a system like this:>

2022-07-01 13:44:03 Which in turn builds on the English Resource Grammar (under continuous development since 1993), and software, formalism, and theoretical work from the DELPH-IN Consortium:https://t.co/BpCjy3qa9S>

2022-07-01 13:43:01 As well as building on the Grammar Matrix (under development since 2001!):https://t.co/GdYRbmEciO>

2022-07-01 13:41:08 This is the latest update from the AGGREGATION project (underway since ~2012), and builds on much previous work, by @OlgaZamaraeva, Goodman, @fxia8, @ryageo, Crowgey, Wax and others! https://t.co/qKkpaNfWpn>

2022-06-28 16:27:43 @athundt @hereandnow Thank you!

2022-06-28 12:29:59 RT @emilymbender: My favorite moment of this was at the very end, when I caught and refuted an instance of the AI inevitability narrative i…

2022-06-28 12:25:56 RT @mireillemoret: Keynotes by a.o. @emilymbender and @FrankPasquale, Track Chairs ‘Legal Search and Prediction’ @Sofiade and @HarrySurden,…

2022-06-27 21:20:53 @MadamePratolung I thought it would be good, since the tweet started gaining some traction and I used " " in one instance to indicate a direct quote and in the other something else.

2022-06-27 18:34:07 Typo: "less" should be "lens", of course.

2022-06-27 18:33:53 @JoeReddington Yes, lens is what I meant.

2022-06-27 18:16:05 @NannaInie Working towards "passing" the Turing Test isn't even working towards sentience or having opinions or...

2022-06-27 18:03:11 If you think about the Turing Test through the lens of today's shared tasks, it looks particularly odd. We use shared tasks to drive research towards particular goals. But to what end would we want machines that can fool people? https://t.co/gxDsIzKrBH

2022-06-27 17:53:20 My favorite moment of this was at the very end, when I caught and refuted an instance of the AI inevitability narrative in real time. https://t.co/ookK91W78m via @hereandnow

2022-06-27 14:30:22 @CatalinaGoanta @Meta @rcalo It seems like here Meta has leaned into the potential for misunderstanding to (misleadingly) reassure their users, I'd say.

2022-06-27 14:20:07 @CatalinaGoanta @Meta @rcalo Interesting that the CA law you excerpt there includes "make available" as a kind of "sell". I still wonder though: is the targeted advertising use case (where 3rd parties don't see specific accts, but can choose classes of them &

2022-06-27 14:17:57 @CatalinaGoanta @Meta @rcalo I can definitely see some value in making these policies readable, but if in doing so the writers are using ordinary words in their technical meaning but not flagging that ... worrying indeed.>

2022-06-27 13:42:05 https://t.co/HzopJyRe79

2022-06-27 13:41:54 To be very clear the first quote in my tweet is a summary, not a direct quote. https://t.co/FbmSBPPXyo

2022-06-27 13:06:44 @complingy @Meta So, special sense of both, then. Information = PII (precisely), and sell = transmit and make money from, not just make money from.

2022-06-27 13:00:31 New @Meta privacy policy just dropped. "We sell the ability to target advertising based on information we gather about you", but somehow that's consistent with "We do not sell and will not sell your information". Specific sense of "sell" or "information" or both? https://t.co/XklDK59z8U

2022-06-27 12:34:57 @j2bryson @timnitGebru @mmitchell_ai You said yourself that once the algorithm behind ELIZA was clear, the illusion was broken. What takes people in is that ELIZA (and LaMDA) are using English, not that they're using it algorithmically.

2022-06-27 12:33:57 @j2bryson @timnitGebru @mmitchell_ai Far more relevant (it seems to me) is the way in which those developing so-called "AI" have leaned into producing systems which mimic the means we (highly social animals) use to communicate with each other.>

2022-06-27 12:32:00 @j2bryson @timnitGebru @mmitchell_ai As for why "AI" (ahem, text synthesizing machines) can seem familiar, I disagree that it's because "humans have algorithmic components to our behavior". >

2022-06-27 12:30:29 @j2bryson @timnitGebru @mmitchell_ai I think the valuable points in your essay have to do with values &

2022-06-24 19:28:35 @annargrs @lrec2022 @FAccTConference Oh, what a bummer. Get well soon!

2022-06-23 20:43:09 @itsafronomics The terminology is confusing, and varies by institution. For us, tenure track faculty can have adjunct appointments in other departments (and our precarious faculty are usually called 'affiliates', not 'adjuncts').

2022-06-23 19:19:17 Pointer to the original --- which *doesn't* have that confusing vertical line, either. https://t.co/dLtAGKpK28

2022-06-23 19:16:32 @LeonDerczynski @marialuisapaulr @amazon Many similar stories from @HumanAlexas too.

2022-06-23 19:14:15 And what's the point of the vertical line at ~2000 ... is that when the graphic was made?

2022-06-23 19:08:35 @coolharsh55 Uh, Star Trek?

2022-06-23 18:58:53 Okay, I keep seeing this graphic today and I'm really puzzled by the green bars. What is the length of the bars supposed to indicate? Their colors? It's not time between film release &

2022-06-23 18:43:56 RT @rajiinio: @ziebrah @Aaron_Horowitz @aselbst @schock @AJLUnited @jovialjoy @s010n @RosieCampbell @AIESConf All these projects are part o…

2022-06-23 18:03:00 RT @timnitGebru: "Google and OpenAI have long described themselves as committed to the safe development of AI...But that hasn’t stopped the…

2022-06-23 16:39:26 RT @Abebab: The fallacy of AI functionality, @rajiinio &

2022-06-23 16:35:46 @davidberreby @marialuisapaulr @amazon I agree that there is a big difference. I'm just saying that it does no good for us to buy into (and then also promote) Amazon's fiction that it's us and our own personal Alexas.

2022-06-23 16:23:38 @davidberreby @marialuisapaulr @amazon Actually, I think you've mislocated the agency there. The agency lies with Amazon, not Alexa. (And dogs have evolved, I believe, to take advantage of human empathy, too, but that's a separate story and dog/human relationships are between specific dogs &

2022-06-23 16:06:14 RT @histoftech: If a system is always going to give us a guess at an answer, no matter what—if that is what its operating parameters insist…

2022-06-23 16:06:06 RT @histoftech: The option to say “I don’t know” isn’t programmed in. This is a key design flaw. It means the system is destined to complet…

2022-06-23 15:54:57 RT @hypervisible: People asking me "how" when I said predictions are a form of violence...If it was a legit question, here's some reading…

2022-06-23 15:14:46 RT @mmitchell_ai: Appreciate this piece. Covers a lot of ground wrt what's happened in AI over past few weeks &

2022-06-23 15:10:53 We talk about this issue some in this Factually! podcast episode, too. https://t.co/SqTLTISeD2

2022-06-23 15:09:13 A much better way to draw on science fiction in technology development is @cfiesler 's Black Mirror Writer's Room exercise: https://t.co/e89XH1sshA

2022-06-23 15:08:09 @marialuisapaulr @amazon And while we're here, more evidence of how Silicon Valley has entirely missed the point of speculative fiction. https://t.co/JJ66zOPAAp

2022-06-23 15:05:22 Thanks to @marialuisapaulr for this clear-eyed reporting. And shame on @amazon for having the goal of getting people to "trust AI". Talk about the quiet part out loud: Big tech is all about preying on human empathy. https://t.co/c72L5QEzoW https://t.co/rcDTs7G8EY

2022-06-23 14:50:34 @HickokMerve @ImagesofAI I talk it up in this podcast episode too :) https://t.co/TUVXPTN50H

2022-06-23 14:45:33 Hooray another story illustrated with @ImagesofAI ! https://t.co/G3woQSw13v

2022-06-21 21:51:49 @joavanschoren @LeonDerczynski @NeurIPSConf @SerenaYeung Thank you.

2022-06-21 21:46:21 @joavanschoren @LeonDerczynski @NeurIPSConf @SerenaYeung Thank you. I'd really love to see a durable solution to this. This isn't the first time these papers were inaccessible.

2022-06-21 21:17:40 @LeonDerczynski @NeurIPSConf Don't seem to be getting any reaction. Maybe the account isn't monitored? It looks like @joavanschoren and @SerenaYeung were the NeurIPS 2021 Datasets &

2022-06-21 18:04:28 @M_Star_Online Hey, I tried to find contact info for the journalists behind this and failed, so posting here. I'm one of the co-authors on the Stochastic Parrots paper and you have misspelled my name in this article. Please correct.

2022-06-21 15:18:15 @dallascard @FAccTConference @ria_kalluri @willie_agnew @Abebab @DotanRavit @TheMichelleBao Congrats

2022-06-20 23:19:16 This is so richly deserved!! Congrats all :) https://t.co/6eHME4iDo9

2022-06-20 22:40:51 @NeurIPSConf I've never understood why the datasets &

2022-06-20 22:36:14 Oh, and meanwhile, the *main* @NeurIPSConf proceedings link still works, so yet again, it's just the datasets &

2022-06-20 22:28:01 RT @emilymbender: @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips You'd think a "top conference"…

2022-06-20 22:27:55 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips You'd think a "top conference" would actually care about making its proceedings accessible, rather than leaving people to link to arXiv where the peer reviewed papers aren't differentiated from random flag planting, etc.

2022-06-20 22:27:09 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips I actually don't know if OpenReview shows the final version or not. This isn't the first time that I've had trouble accessing the actual peer-reviewed version of our paper. It's really embarrassing, actually.

2022-06-20 22:26:23 Yo @neurips why is your proceedings for the datasets &

2022-06-20 22:25:19 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna Yes, that's the one. I have no idea why @neurips proceedings links are so damn unstable. I'll see if I can dig up wherever they've actually currently put the paper.

2022-06-20 20:50:02 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna Peer reviewed version: https://t.co/kR4ZA1Bawz

2022-06-20 16:41:12 And even if it weren't, a decrease in funding is far less problematic than the other effects of AI hype.

2022-06-20 16:40:45 The more serious point, though, is that another "AI winter" is not actually something to be concerned about. The field (not to mention those adjacent to it) is presently suffering from over-funding...>

2022-06-20 16:40:00 The linguist in me is satisfied --- the meaning that I had (as an ordinary speaker) got the most votes, but the other usages that I've observed are also non-zero in this unscientific Twitter poll.>

2022-06-20 16:32:11 @aylin_cim So sorry to hear it. Get well soon!

2022-06-20 15:24:09 @AdamCSchembri I haven't encountered it in that usage, no. (But I mostly encounter it in discussions of bilingual ed in schools, not in say syntax.)

2022-06-20 15:16:22 @AdamCSchembri The way I hear "heritage speaker/signer" is often to refer to people who aren't fluent in their community's language for any number of reasons, but still have a "heritage" relationship to that language.

2022-06-20 13:15:57 RT @emilymbender: I have a theory that what's going on with the current explosion in credulousness is that the scale has outstripped our ab…

2022-06-18 23:57:46 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata of the cloud need not concern themselves with how they are maintained, monitored, powered, cooled, and so forth

2022-06-18 23:57:39 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata Pull quote: "One important metaphor here is that of “cloud computing,” which conjures up images of something light and insubstantial, floating up in the sky. This metaphor highlights that the servers and their supporting infrastructure are located someplace else, and that users>

2022-06-18 23:57:09 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata From Alan, Batya Friedman, &

2022-06-18 23:55:43 @Abebab . Here's a paper I really like elaborating that point, from @AlexBaria and @doctabarz https://t.co/krTW120Y7D

2022-06-18 22:57:49 @AngloPeranakan Thanks :)

2022-06-18 13:30:22 RT @emilymbender: "AI winter" means (poll), regarding AI research

2022-06-18 03:11:30 (I definitely have an opinion here, but I've seen surprising uses of the term recently...)

2022-06-18 03:11:07 "AI winter" means (poll), regarding AI research

2022-06-18 03:09:18 RT @bonadossou: Next Tuesday, 12:30 pm UTC+2, I’ll be on RFI @RFI’s show “Des Vives Voix” to talk about language technologies on African la…

2022-06-18 02:44:26 RT @histoftech: For an important, wide-ranging, and complex example of this, I’d love for you to read this chapter on accent bias and whose…

2022-06-18 02:43:44 RT @histoftech: We wrote the book to help make sense of what can &

2022-06-18 01:41:02 @mmitchell_ai Wait, weren't you on Bill Nye's show too as a scientist?

2022-06-18 01:40:43 RT @mmitchell_ai: I was on Bloomberg TV today! My first time on TV in my role as a scientist. https://t.co/VQR79H1s9g

2022-06-17 21:00:15 RT @histoftech: This morning Siri said: “don’t call me Shirley.” I guess I must have left my phone on airplane mode. https://t.co/TLs5DWew

2022-06-17 20:59:55 RT @mer__edith: PSA: I uploaded my *Steep Cost of Capture* piece to SSRN since ACM paywalled it. So if you're looking for it, use this li…

2022-06-17 17:48:06 RT @elizejackson: I’m sounding an alarm here. Please pay attention. Microsoft is partnering with the World Bank to extract data (and labor)…

2022-06-17 17:17:48 RT @histoftech: The idea of tech-as-savior to downtrodden workers doing “undesirable” or “easily automated” jobs has a long and problematic…

2022-06-17 17:17:00 RT @histoftech: As part of the effort to combat the narrowness of context of this issue &

2022-06-17 17:16:57 RT @histoftech: A lot of folks have written great threads on why the latest “AI sentience” debate is not just (or even *mainly*) about what…

2022-06-17 15:54:21 RT @mmitchell_ai: New piece from @timnitGebru and me, with editing help from @katzish Thanks to WaPo for the opportunity! https://t.co/0p

2022-06-17 15:28:55 RT @alxndrt: Such thoughtful commentary on AI and sentience coming from @emilymbender, @timnitGebru and @mmitchell_ai in numerous forums. H…

2022-06-17 13:55:44 @fernandaedi :'(

2022-06-17 13:27:40 RT @dmonett: "What's worse, leaders in so-called AI are fueling the public's propensity to see intelligence in current systems, touting tha…

2022-06-17 13:09:25 "Scientists and engineers should focus on building models that meet people’s needs for different tasks, and that can be evaluated on that basis, rather than claiming they’re creating über consciousness." @timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 13:08:54 @timnitGebru @mmitchell_ai "And ascribing “sentience” to a product implies that any wrongdoing is the work of an independent being, rather than the company — made up of real people and their decisions, and subject to regulation — that created it."@timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 13:08:27 @timnitGebru @mmitchell_ai "The drive toward this end sweeps aside the many potential unaddressed harms of LLM systems.">

2022-06-17 13:07:48 "The race toward deploying larger and larger models without sufficient guardrails, regulation, understanding of how they work, or documentation of the training data has further accelerated across tech companies." @timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 12:42:59 RT @emilymbender: Yes, there are sides, but they are not even in credibility, and most importantly, there are the people at the center of t…

2022-06-17 12:42:56 RT @emilymbender: This is amazing reporting on an important issue, and also a great example of how to acknowledge the existence of disagree…

2022-06-17 12:42:47 RT @emilymbender: The linked article is super important. Kudos to @themarkup for their careful documentation of the ways in which Meta is h…

2022-06-17 12:29:48 RT @KarnFort1: Le séminaire que j'ai fait à l'IXXI vendredi dernier sur l'éthique et le TAL est maintenant dispo en ligne en vidéo : https:…

2022-06-17 12:18:49 RT @Kobotic: Please stop talking about socially responsible algorithms or AI or tech. It's like talking about socially responsible cars, o…

2022-06-16 21:23:27 @WellsLucasSanto My main gripe with reddit is that the upvote/downvote mechanism seems to move things around/break the threading! Also, not sure if I can manage yet another platform to keep track of.

2022-06-16 21:18:44 @WellsLucasSanto Sounds like it would be a nice group of people, but I don't do Reddit ... and when I've had to look there, I find the interface entirely overwhelming.

2022-06-16 19:51:28 @themarkup As if "the problem" weren't one that they had created in the first place by trying to grab all this data! Also important is @themarkup 's reporting on how trusting (some) hospitals appear to be in Meta's tech.

2022-06-16 19:50:09 The linked article is super important. Kudos to @themarkup for their careful documentation of the ways in which Meta is harvesting sensitive information---and then saying effectively "if we don't scrub it fully, that's because the problem is hard.">

2022-06-16 19:48:57 What AI labs say: AI is an inevitable future, we have to build it and big data makes systems that are intelligent! (Some might even say sentient) What it means: Manifest destiny over all data, no matter how private. https://t.co/mNbSIavNwB

2022-06-16 15:12:38 RT @Abebab: unless you are aware of the pile of shit that personality theories in psychology are (and able to incorporate critical work), d…

2022-06-16 14:26:48 @GRACEethicsAI But as a subject line, it's even worse, because it means it takes more time to work out (as I'm going through my swamped inbox) what that message is even about...

2022-06-16 14:26:28 @GRACEethicsAI "Hey Emily" as a greeting would have been a bit annoying, sounding like I should be ready to give them my attention when they stopped by and said "Hey">

2022-06-16 14:25:55 @GRACEethicsAI I usually look at those to see if they're a good match, and if so, put them in our jobs DB. >

2022-06-16 14:24:58 @GRACEethicsAI For me, it's more about disrespecting my time &

2022-06-16 14:22:10 Yes, there are sides, but they are not even in credibility, and most importantly, there are the people at the center of the story, whose lives &

2022-06-16 14:17:27 This is amazing reporting on an important issue, and also a great example of how to acknowledge the existence of disagreement without devolving to "both-sides" faux-neutrality.>

2022-06-16 13:59:18 RT @transscribe: I’ve only had the chance to do a big splashy feature on trans kids once and I told my editor that I wanted to change the w…

2022-06-16 13:37:35 RT @UpolEhsan: When an algorithm causes harm, is discontinuing it enough to address its harms?Our #FAccT2022 paper introduces the conce…

2022-06-16 13:10:47 RT @emilymbender: I really appreciate this reporting from @daveyalba which does an excellent job of NOT relegating the voices she quotes to…

2022-06-16 13:10:24 RT @emilymbender: Again I'm starting to see comments in support of LMs learning meaning invoking the lived experiences of Blind people, fro…

2022-06-16 13:10:17 RT @emilymbender: "We have to go back to basics and ask what problem it is that we are trying to solve, and how and whether technology, or…

2022-06-16 13:10:04 RT @sophiebushwick: "When we encounter seemingly coherent text coming from a machine ... we reflexively imagine that a mind produced the wo…

2022-06-16 04:43:33 @GRACEethicsAI It's cringe as a greeting in an email (from someone I've never met but had previously corresponded with), but this was worse: it was the subject line.

2022-06-16 03:07:03 "We have to go back to basics and ask what problem it is that we are trying to solve, and how and whether technology, or AI, is the best solution, in consultation with those who are going to be affected by any proposed solutions." Wise words from @kobotic and @AnjaKasp https://t.co/55SLykw5Wo

2022-06-15 19:21:17 @FelixHill84 @AndrewLampinen @dileeplearning @peabody124 @spiantado @GaryMarcus Insufficient -- that's how people use language. What is your evidence about what the machine is doing?

2022-06-15 16:46:30 @AndrewLampinen @dileeplearning @peabody124 @spiantado @GaryMarcus @FelixHill84 On what grounds do you call that prediction of outcomes of events, rather than production of likely string sequences?

2022-06-15 14:09:53 I'm generally happy to be addressed by my first name by anyone who knows me, but somehow getting an email with the *subject line* "Hey Emily" just rubs me the wrong way. Straight to the archive for that one.

2022-06-15 13:53:13 @BDehbozorgi83 Apparently neither persisted on the web --- just aired live and that was it. (Odd contrast to how radio works, in my experience.)

2022-06-15 13:21:02 RT @emilymbender: And the bad applications of so-called "AI" continue. Apparently AI21 labs claims this will help people understand the lim…

2022-06-15 12:53:11 RT @emilymbender: I think the main lesson from this week's AI sentience debate is the urgent need for transparency around so-called AI syst…

2022-06-15 04:24:22 @spiantado "From just language" is the point of confusion here. Languages are systems of signs--pairings of form and meaning--my claim has to do with the case (like with LMs) where you have only the form.

2022-06-15 04:23:22 @spiantado Once you have a language, which relates word forms to concepts, OF COURSE you can describe (or learn) concepts in terms of other concepts. The thing is that LMs have no way to get to that starting point.

2022-06-15 04:13:55 @spiantado But you're still talking about concepts --- not just strings.

2022-06-15 03:57:40 @XandaSchofield Of course!

2022-06-15 03:53:08 @XandaSchofield I love it!!! (If I ever recover the maybe-in-a-talk context that I thought I wanted this image, can I use this, with credit, of course?)

2022-06-15 03:38:54 RT @timnitGebru: Thank you @kharijohnson. "Giada Pistilli (@GiadaPistilli), an ethicist at Hugging Face,...“This narrative provokes fear,…

2022-06-15 02:44:53 @spiantado How do you measure "right relations" and what do you mean "conceptual roles"?

2022-06-15 02:42:11 @spiantado We can talk about people we've never met/learn about places we've never been because **we have acquired language** and we do that in socially rich caregiving environments, w joint attention, intersubjectivity, &

2022-06-15 02:40:06 @spiantado This sounds like an argument from ignorance --- what is and isn't hard to imagine isn't relevant.https://t.co/kX68QzZ5ee>

2022-06-15 02:39:26 @spiantado And then here: f=ma is not a relationship between the letter f and the letters m and a, but between the *concepts* that those stand for.https://t.co/QlGNdaCW9v>

2022-06-15 02:38:54 @spiantado Okay, this is where it seems to go off the rails. You've jumped from word forms (what the LM gets to see) to concepts. How does the LM have access to concepts?>

2022-06-15 02:34:16 Ugh, typo: sophomores'

2022-06-15 02:15:56 My twitter mentions the last few days have been like having the dorm room next to the student lounge and being stuck overhearing sophomores' midnight conversations in perpetuity.

2022-06-15 02:15:10 @ankesh_anand Extracting specific strings from a page and giving that URL is one thing, but generating strings and then generating a URL to go with them != citing the source of information! More detail here: https://t.co/rkDjc4BGzj

2022-06-15 00:07:24 @XandaSchofield I will look forward to seeing the result, however it turns out!

2022-06-15 00:05:41 @XandaSchofield This might be too close to "work" for your purposes, so feel free to ignore. (And I'll try to ignore the slight feeling of unease that this has to do with one of my up-coming talks, but I can't remember which nor how it really fits in. Maybe the thought was from a dream...)

2022-06-15 00:04:53 @XandaSchofield I forget the exact context, but a while back I was musing that it would be helpful to have an image of "person looking at a robot with a mirror for a face and seeing themself reflected there".

2022-06-14 21:20:47 @JOSourcing @CryptoPpl Source: https://t.co/tamYEbXo4T

2022-06-14 21:15:29 RT @chrismoranuk: This from @emilymbender is very, very good indeed. Not just on AI and the perception of sentience, but also on language m…

2022-06-14 20:22:06 @rachelmetz @SashaMTL I believe folks are working on gender neutral/gender inclusive language for Spanish, French, etc.

2022-06-14 00:37:59 @korteqwerty The cats stayed off screen this time

2022-06-14 00:08:41 Getting ready for a live interview on Al Jazeera at 5:15 Pacific

2022-06-13 23:49:31 RT @timnitGebru: How can we let it be known far &

2022-06-13 23:18:44 @mmitchell_ai Couldn't find anything in the arxiv paper either, beyond a tiny paragraph in the appendix. As if you could document 1.5 trillion words of training data in one paragraph.

2022-06-13 23:14:40 RT @timnitGebru: Toni Morrison's quote is so relevant in this whole sentience thing. Leaders of companies, too privileged to think about cu…

2022-06-13 21:22:14 RT @emilymbender: Herein lies the source of the problem. "This is all a distraction from the actual, urgent problems" isn't chiding, it's t…

2022-06-13 19:46:19 RT @evanmiltenburg: #NLProc news: corpora list moved to: https://t.co/mEJjXgxR4e

2022-06-13 19:21:31 @ianbicking Thanks. I definitely needed a little more mansplaining after last weekend.

2022-06-13 17:31:51 @ruthstarkman @TaliaRinger So, it aired, but then didn't make their web page. DM me for further details...

2022-06-13 15:57:32 @MSRodekirchen @thesiswhisperer This might be of interest: https://t.co/0Xc7WVeswa

2022-06-13 15:36:22 RT @BDehbozorgi83: The classic paper by Prof. Emily Bender @emilymbender et al. on "The Dangers of Stochastic Parrots", along with a fanta…

2022-06-13 14:53:15 RT @EveForster: @eunux_ @emilymbender Some of our most successful recent tech advances haven't been the tech itself, but how the tech enabl…

2022-06-13 13:52:28 @itsafronomics @timnitGebru Email is better. I'm snowed under today and don't keep track of Twitter DMs.

2022-06-13 13:50:32 RT @random_walker: It's convenient for Google to let go of Lemoine because it helps them frame the narrative as a bad-apples problem, when…

2022-06-13 13:50:28 RT @random_walker: But at the very least maybe we can stop letting company spokespeople write rapturous thinkpieces in the Economist?

2022-06-13 13:48:14 @itsafronomics @timnitGebru What are your questions?

2022-06-13 13:38:39 Does anyone know if there is any data documentation for LaMDA (e.g. datasheet)?

2022-06-13 13:09:39 @LeonDerczynski Alas, it didn't stay confined to the weekend...

2022-06-13 13:04:54 @LeonDerczynski I can't even with his "(d) unfun".

2022-06-13 13:00:33 Herein lies the source of the problem. "This is all a distraction from the actual, urgent problems" isn't chiding, it's the voice of those suffering the impacts of those problems. Also, the point of those 70 years of SciFi wasn't "hey, cool gadgets!" https://t.co/ijavdCx15v

2022-06-12 16:15:38 @asayeed @Ozan__Caglayan What if all funding for AI was funneled through/controlled by fields outside CS?

2022-06-12 16:09:54 @mattecapu @CGraziul @kristintynski @nitashatiku @ilyasut @karpathy @gwern https://t.co/z1F7fESFOn

2022-06-12 16:05:58 @Ozan__Caglayan Oh absolutely. And they are doing really important work tracking those incidents. I'm thinking it would also be helpful to visualize where the hype is coming from and how it changes over time, as a means of disrupting the hype itself.

2022-06-12 16:04:05 @Ozan__Caglayan They are great! But I'm thinking of tracking a type of incident which I think falls outside their remit. (Would be happy to be wrong, though.)

2022-06-12 16:03:21 I don't have the bandwidth nor the design chops to be the one driving this, but I'd happily consult + help to gather past &

2022-06-12 16:01:43 We'd of course want links to the initial incident. Tweet? Press release? Media interview? Blog post?>

2022-06-12 16:00:32 I'm thinking of pins that appear on a map over time, but the map isn't physical geography but rather a representation of corporations, universities, governments, and the weird non-/capped-profit "AI" research labs also in this space.>

2022-06-12 15:59:09 I'm thinking it would be interesting to see for each incident, what was claimed, who was doing the claiming, where their $$ comes from, &

2022-06-12 15:57:34 I wonder if we could put together an AI hype tracker or AI hype incident database + visualization, that could help expose the corporate &

2022-06-12 14:20:37 RT @emilymbender: ... with or without debiasing ...

2022-06-12 14:20:07 RT @emilymbender: @nitashatiku As I am quoted in the piece: “We now have machines that can mindlessly generate words, but we haven’t learne…

2022-06-12 14:15:53 @iwsm Yeah, I've been getting that all along too. My rule in such cases: I do not debate people who refuse to accept as a premise my own humanity.

2022-06-12 14:10:55 The "AI sentience" discourse on Twitter this weekend is sharply underscoring the urgency of this. What systems are being set up by people under the spell of magical thinking about AI and how much more oppression will they entrench? https://t.co/LW3DMY5rR9

2022-06-12 13:44:32 RT @emilymbender: 2022 me would like to warn 2019 me that ML bros' arguments about language models "understanding" language were going to m…

2022-06-12 04:45:42 And when do we get to widespread recognition that anyone who can't tell the difference between string manipulation and "internal monologue" isn't qualified to do any decision making or advising about the development, application or regulation of computer systems?

2022-06-12 04:44:05 Not sure what 2019 me could/would have done differently, but 2019 me would definitely have been surprised. What will 2025 bring??

2022-06-12 04:43:38 @kristintynski @nitashatiku @ilyasut @karpathy @gwern You may call them "top minds" but anyone who can't tell the difference between string manipulation and "internal monologue" really isn't qualified to comment.

2022-06-12 04:42:03 2022 me would like to warn 2019 me that ML bros' arguments about language models "understanding" language were going to mutate into arguments about them being "sentient" and having "internal monologues" etc.

2022-06-12 04:39:55 @kristintynski @nitashatiku And "work" is too generous there. That's a list of papers, about text manipulation tasks. Putting them under a heading about "internal monologue" doesn't make it so.

2022-06-12 04:36:52 @kristintynski @nitashatiku Should I extend the benefit of the doubt that maybe you don't realize that you're pointing to the work of a literal eugenicist?

2022-06-11 13:23:07 @_florianmai @mmitchell_ai So to say that both coinings are somehow the same feels like a false equivalence (or, if you were doing journalism, both-sides-ism).

2022-06-11 13:22:13 @_florianmai @mmitchell_ai For "foundation models" the intent seems to be to name a category that includes multiple things and position that category as a) worthy of study and b) at the foundation of many things.>

2022-06-11 13:21:08 @_florianmai @mmitchell_ai I like the phrase "new terminology" because it helps me to think of what people are doing when coining phrases. With "stochastic parrots" our intention was to shine a light on a certain kind of AI hype, by using a colorful and evocative phrase to show what LLMs are not.>

2022-06-11 13:15:32 @balazskegl No: a fully debiased model doesn't exist, nor does a fully debiased dataset. It's worth it to reduce the most egregious bias AND understand what is left, when evaluating whether to use something in a specific use case (and what kind of avenues of effective recourse to set up).

2022-06-11 13:14:10 @realn2s @bert_hu_bert My guess is that they are getting people to pay for publication, but they might be going after some people to "seed" the conference with credibility (and not charging them).

2022-06-11 12:57:24 RT @emilymbender: I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or withou…

2022-06-11 12:57:06 RT @emilymbender: I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E etc as "a step towards AGI" or "r…

2022-06-11 05:37:20 ... with or without debiasing ... https://t.co/kL5DOvXBRR

2022-06-11 05:33:06 Another: International Conference on Economics or Management inviting me to self-nominate as a keynote speaker next month: https://t.co/6lxpc95anh

2022-06-11 04:20:38 @Miles_Brundage And I rather suspect you don't have the right people around the table because these orgs that claim to be "making AI safe for all of humanity" are just as disastrously unrepresentative as the rest of Silicon Valley. Again: https://t.co/O0U6nCygDL

2022-06-11 04:19:34 @Miles_Brundage And I'm saying that it's weak sauce because you didn't start from the state of the art in the field to develop it. And if the people around the table weren't already familiar with this work, then you didn't have the right people around the table.>

2022-06-11 04:12:32 @Miles_Brundage None of that is new! If you had actually built on the work that people have been doing in this field from the beginning, you could have had the better version already.

2022-06-11 04:07:19 @Miles_Brundage That's what jumps out at me now, I'm sure there's more. And yeah, just because many people aren't meeting a bar that's low enough to trip on doesn't mean the guidelines are something to be proud of.

2022-06-11 04:06:28 @Miles_Brundage 6. Any indication of the process by which these guidelines were arrived at, who had input, who framed the discussions etc. >

2022-06-11 04:05:34 @Miles_Brundage 5. Discussion of the "bright line" of computers imitating humans, and how to ensure transparency (so that people know which text is produced by a machine).>

2022-06-11 04:04:43 @Miles_Brundage 3. Any notion that the answer might be DO NOT DEPLOY. 4. Any consideration of task/tech fit and determining in what contexts automation is actually appropriate.>

2022-06-11 04:03:03 @Miles_Brundage 1. Documentation of source datasets and trained models (datasheets, model cards, data statements etc). 2. Systems of recourse and refusal for both data subjects and people who might have systems used on them in some way.>

2022-06-11 04:01:30 @Miles_Brundage I'm not going to rewrite your best practices here on Twitter on a Friday night, but just as a start, here are some things that I see are missing:

2022-06-11 03:50:31 I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or without calling them AGI, with or without ethics washing, with or without claiming "for social good".

2022-06-11 03:48:41 Someone who was genuinely interested in using their $$ to protect against harms done in the name of AI would be funding orgs like @DAIRInstitute @C2i2_UCLA and @ruha9 's #IdaLab. Theirs is the work that brings us closer to justice and tech that benefits society.

2022-06-11 03:42:09 Without actually being in conversation (or better, if you could build those connections, in community) with the voices you said "we should represent" but then ignore/erase/avoid, you can't possibly see the things that the "gee-whiz look AGI!" discourse is distracting from.

2022-06-11 03:39:55 And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field. https://t.co/o5vqdzocvv>

2022-06-11 03:37:36 I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E etc as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".>

2022-06-11 03:34:09 @JoanBajorek @psb_dc @ChrisGGarrod @pierrepinna @techreview My full comment was: "I applaud the transparency here, not just in releasing the model but also the information about training compute cost and the like. I would hope that the transparency extends to very thorough documentation of the source datasets, as well."

2022-06-10 22:03:46 @ZeerakTalat @Abebab Oh, that looks amazing! Too bad 12 BST is inaccessible from where I am. Do you know if it will be recorded?

2022-06-10 21:57:53 @jessgrieser @bronwyn Jessi wins the internet this week.

2022-06-10 21:39:02 NLP isn't a "research direction", it's a research area/set of questions, etc. Nor should we let industry labs control the agenda of any research area.Required reading on corporate capture: https://t.co/K2MJQyUuIC https://t.co/RSaAthHKqq

2022-06-10 20:23:25 @_florianmai @mmitchell_ai We coined "stochastic parrots" for a paper, which became famous largely because Google decided to blow up their AI ethics team over it. We didn't name a center using the term, nor propose it as a research area.

2022-06-10 20:08:58 @athundt @sjmielke @naaclmeeting Yeah, I wonder if there's a way to embed alt-text in the slides? That could capture both text &

2022-06-10 19:59:51 @sjmielke @naaclmeeting Thinking about accessibility -- will you have a version of your slides to distribute that work for screen readers?

2022-06-10 19:51:28 Thinking about how to train more of the public to recognize &

2022-06-10 18:57:04 @tdietterich @rajiinio @timnitGebru And even if (i) is part of it, that doesn't mean my hypothesis about the toxically competitive culture in AI research is false.

2022-06-10 18:56:15 @tdietterich @rajiinio @timnitGebru Where by "human-like" I mean "supposedly human-like", of course.

2022-06-10 18:55:52 @tdietterich @rajiinio @timnitGebru I can see (i) being a factor, but (ii) only makes sense if you think that the human-like systems are actually being evaluated in any reasonable way. Which: 100% they are not. Again: https://t.co/kR4ZA1Bawz

2022-06-10 18:46:47 @rajiinio @timnitGebru You're right that there are so many sensible, helpful, verifiable, etc use cases that are getting ignored. My guess is that it stems from the really unhealthy culture of competition in the field of AI.

2022-06-10 18:10:49 RT @holdspacefree: If you're book-marking papers to read about big language models, their claims of grandiosity, and the dangers that poses…

2022-06-10 17:58:45 @nsaphra Meaning: I respect your talents as a comedian and agree that this isn't funny....

2022-06-10 17:58:17 @nsaphra Well, if anyone could make that funny, I guess it would be you?

2022-06-10 17:54:53 @nsaphra I mean, comedy source material maybe?

2022-06-10 17:52:06 @LeonDerczynski #goals

2022-06-10 17:48:24 Just because you can describe something in language doesn't mean that you have created a test for it. It does mean that we need to bring all of our critical thinking to the table and not mistake these tasks for any kind of evidence of "machine capabilities".

2022-06-10 17:47:04 I think we can trace the proliferation of bogus tasks to the very flexibility of language + the ability of language sequence models to produce seemingly coherent, on-topic text. >

2022-06-10 17:44:59 @timnitGebru IKR??

2022-06-10 17:13:29 I haven't had the time to dig into BigBench, but this is a reminder to not be impressed by size (here, I guess, number of constituent tasks). Effective evaluation depends on the quality of tasks, including construct validity. For more, see: https://t.co/kR4ZA1Bawz https://t.co/ZIdsEucFse

2022-06-10 17:12:07 The answers to these questions are No, No, and No. Casting this as an evaluation that could somehow quantitatively reach any other answer betrays drastic misunderstandings of both the supposed target domain (justice!) and the actual capabilities of language models. #BigBench https://t.co/jGGxp5Lhsm

2022-06-10 16:57:57 @mmitchell_ai @_KarenHao !

2022-06-10 16:36:06 RT @naaclmeeting: NOTE: the early registration deadline has been extended to June 10th (today)! If you haven't already, please register tod…

2022-06-09 19:54:33 RT @fchollet: A pretty common fallacy in AI is cognitive anthropomorphism: "as a human, I can use my understanding of X to perform Y, so if…

2022-06-09 19:51:52 RT @emilymbender: I'm not interested in how impressed the journalist was. That's not news. What I need to know as a reader, what I want the…

2022-06-09 19:51:40 I'm not interested in how impressed the journalist was. That's not news. What I need to know as a reader, what I want the public to know, is what is being done in the name of "AI", to whom, who benefits, and how can democratic oversight be exerted?

2022-06-09 19:50:25 And can I just add that the tendency of journalists who write like this to center their own experience of awe---instead of actually informing the public---strikes me as quite self-absorbed.>

2022-06-09 19:49:04 This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but the thing is we have to keep in mind that all of that meaning making is on our side, not the machines'. https://t.co/CQErPWEEQs>

2022-06-09 19:47:20 I guess the task of asking journalists to maintain a critical distance from so-called "AI" is going to be unending. For those who don't see what the problem is, please see: https://t.co/0Xc7WVeswa https://t.co/bKCZWKRRBU

2022-06-09 19:36:35 Yup, still boring. Also not interested in hearing how impressed people are with model output. https://t.co/mEump1pQWE

2022-06-09 19:36:04 RT @emilymbender: The genre of "I'm going to ask GPT-3 and see what it says" is fundamentally boring to me, and I think I've put my finger…

2022-06-09 19:20:24 @Abebab So sorry to hear it! I hope that things will resolve to your satisfaction and that in the meantime you'll find some space to think your own thoughts/do the work you want to do.

2022-06-09 13:30:43 @NickRMorgan Not necessarily model failure --- if the model's task is to generate word sequences, and it generates one, has it failed? More like task/tech fit failure.

2022-06-09 13:29:58 @complingy @swabhz Yeah, the queries I get seem to come from the local high-prestige/high-pressure high school. And I'm always grumpy about them, since a) it seems like HS staff are trying to get me to do their job &

2022-06-09 13:20:32 RT @emilymbender: @KathrynECramer Asking "Is a language model truthful" is actually a category error: language models are word sequence pre…

2022-06-09 13:20:17 @KathrynECramer I steer clear of "hallucination", too, for two reasons: 1) I dislike making light of serious mental health symptoms; 2) "hallucinate" suggests perception/inner life, which again language models don't have

2022-06-09 13:19:04 @KathrynECramer For more on that point, see "AI and the Everything in the Whole Wide World Benchmark" with @rajiinio @cephaloponderer @alexhanna and @amandalynneP https://t.co/kR4ZA1Bawz

2022-06-09 13:17:56 @KathrynECramer The risk that people then turn around and use these benchmarks to say "See, my foul-mouthed, 4*han trained lg model tells it like it is!" shows yet another angle on the problems with claiming generality where it doesn't/can't exist.>

2022-06-09 13:15:49 @KathrynECramer But it's not even a generalizable measure of that --- just a measure over some specific dataset.>

2022-06-09 13:15:01 @KathrynECramer Those supposed tests for truthfulness only test the extent to which LMs output word sequences/assign higher prob. to word seqs that the humans creating that specific benchmark marked as "truthful".>

2022-06-09 13:13:01 @KathrynECramer Asking "Is a language model truthful?" is actually a category error: language models are word sequence prediction models. There is no communicative intent, so we can't ask whether the intent is to communicate truthfully or to dissemble.>

2022-06-08 22:49:25 @Miles_Brundage @timnitGebru @LatanyaSweeney @safiyanoble @ruha9 @StanfordHAI @OpenAI @CohereAI Or, you could actually do the work of creating a document that is honestly and respectfully situated with respect to the "diverse voices" you say must be listened to. Hoping the media will do that work for you isn't it. Again: You are claiming credit for others' work.

2022-06-08 22:37:29 @Miles_Brundage @timnitGebru @LatanyaSweeney @safiyanoble @ruha9 @StanfordHAI @OpenAI @CohereAI The first sentence says "Cohere, OpenAI, and AI21 Labs have developed a preliminary set of best practices" https://t.co/2qSsJPEkFb You're making it sound like it's your own work.

2022-06-08 00:37:24 @ruthstarkman Yes sad but also not a problem I am in a position to solve. I have responsibilities to the students at my own institution.

2022-06-07 23:08:28 Also, no penguins, so I won't be answering. (Student from another institution was asking to do an undergraduate thesis with me.)

2022-06-07 23:07:45 Today in cold-call emails: Got one (well last week, but doing a post-travel email cleanout today) written in lucida calligraphy or similar, i.e. something that takes at least 3x as long as normal to read. Pro-tip: don't do that.

2022-06-07 19:44:06 RT @DAIRInstitute: Dylan Baker (@dylnbkr) and Dr. Alex Hanna (@alexhanna) write: "Our intent, as the first full-time employees of the new…

2022-06-07 19:15:32 RT @alexhanna: New article alert: @dylnbkr and I write for the @SSIReview + @FordFoundation #PIT series how @DAIRInstitute is pursuing a n…

2022-06-07 18:49:37 RT @CriticalAI: #CriticalAI is reissuing our greatest hits from our ETHICS OF DATA CURATION blogs from 2021. Beginning with the first in th…

2022-06-07 18:13:05 RT @emilymbender: Really interesting case study to think about from the perspective of #ethNLP -- and also what linguists can contribute to…

2022-06-07 15:24:15 Thanks @NicoleWetsman for this thoughtful coverage!

2022-06-07 15:22:36 As an #ethNLP case study, this one is interesting, because there is less (but not no) automation between the data and the insufficiently contextualized interpretation of it. So I think it might make a good model to understand what's going on with the application of lg models. >

2022-06-07 15:21:20 Also, the construction of the corpus being searched is extremely important (as we've been saying, in the context of data documentation):>

2022-06-07 15:19:42 "In many instances, judges must look for the “ordinary meaning” of a word." But we know that words have many ordinary meanings, and that which one is salient depends on context, not (only, or even very much) on overall corpus frequency.>

2022-06-07 15:18:42 Really interesting case study to think about from the perspective of #ethNLP -- and also what linguists can contribute to society.>

2022-06-07 12:21:22 RT @breitband: Podcast * Misunderstandings and hype: talking more precisely about artificial intelligence – interview with @emilymbender * Den…

2022-06-07 02:12:25 The bingo of excuses for not doing ethics in NLP. Translation by @Ricardo_Joseh_L https://t.co/85K7Iw84Cg

2022-06-06 22:22:03 RT @rcalo: Today I and 8 colleagues resigned from the Axon Ethics Advisory Board in the wake of the company's decision to respond to the Uv…

2022-06-06 21:03:46 @gabycommatteo @timnitGebru Glad to hear it!

2022-06-06 19:22:43 @bbeilz I find that academic writing that avoids clear statements of who is responsible for the work, even if the motivation is humility, leads to a misleading sense of objectivity. There is real value in taking both credit &

2022-06-06 19:20:10 @bbenzon Well, if you'd checked my bio before replying to me, you'd see that I am a professor of linguistics. Not sure why that's crap? But have a nice day.

2022-06-06 19:16:25 @bbenzon Hey, I'm a linguist and a professor and writing is a big part of my job. Do you think I'm unaware of this fact, or do you just enjoy mansplaining on a Monday afternoon?

2022-06-06 19:13:30 @bbeilz If you are presenting it as singly authored work, then "we" is just weird, unless you explain who the "we" refers to.

2022-06-06 19:03:13 Semester-school academics always talking about summer research when us quarter system folks haven't finished the teaching term yet... https://t.co/Z6SSyju9eP

2022-06-06 18:47:26 RT @Colotlan: Indigenous languages and their analysis, machine translation, and NLP analysis present important challenges; speaking about this is @…

2022-06-06 18:32:20 @curtosys

2022-06-06 18:28:28 @ejfranci2 Yay!

2022-06-06 17:53:05 @rctatman My best guess is that they've mistaken the forms of the (English) words which name and/or are used to express reasoning for actual reasoning ... and then likewise the manipulation of those forms by the LLMs for reasoning.

2022-06-06 16:56:11 @MuhammedKambal Not sure what languages are involved, but the French "on" sometimes translated as "we" is actually quite different (in this case) from English "we". "We ran the experiment" is only true if >

2022-06-06 16:38:50 Here's a great example for studying gaping holes in chains of logical reasoning --- by people, about language models, though I suppose you could apply the same technique to the word strings output by language models if you wanted to. https://t.co/eAjE3A5ujB

2022-06-06 16:32:24 @KLdivergence I think it helps to think less in terms of self-promotion (if you find that cringe) and more in terms of creating the definitive listing of your work (&

2022-06-06 16:24:47 RT @complingy: Glad to see this analysis. I still regularly invoke the #BenderRule in my reviews, so it is definitely not a solved problem.…

2022-06-06 16:16:26 RT @emilymbender: I'd love to collect translations in more languages! I'm happy to format if folks send me the translations.... https://t.c…

2022-06-06 16:07:49 @bronwyn For instance, my email on my LinkedIn profile is literally "see-my@webpage.edu". Apparently people have tried to email to that address....

2022-06-06 16:05:37 @bronwyn One key feature of the "secret word" idea here (borrowed from a more senior academic) is that I give myself permission not to reply to cold-call emails that don't follow that instruction. (That did require making sure my email wasn't too discoverable outside this page, though.)

2022-06-06 16:04:40 @bronwyn It was partially because of this kind of email (alongside even more obnoxious cold-call emails from would-be entrepreneurs wanting to 'pick my brain') that I put up my contacting me page: https://t.co/nxRxxz45Gp

2022-06-06 15:47:19 RT @KarnFort1: I noted that a number of presentations @acl2022 did not mention the language being dealt with (#BenderRule). How generalized…

2022-06-06 15:17:38 Reading student work and feeling more and more rage towards folks who teach students to avoid 1sg pronouns in academic writing. Such tortured, hard to follow prose when it would be so straightforward to say things like "I chose the examples based on..." or "I found that..."

2022-06-06 14:54:59 @KarnFort1 (You see, when I don't have you next to me, my grammar/spelling mistakes don't get corrected...)

2022-06-06 14:53:16 @KarnFort1 Anonymous by request of the anonymous contributor.

2022-06-06 13:05:41 @Ricardo_Joseh_L Thank you!

2022-06-06 12:51:40 @Ricardo_Joseh_L Thank you. The first line of the alt text is: "NLP Ethics Excuse Bingo" bingo card. The squares are:

2022-06-06 12:23:22 @Ricardo_Joseh_L If you have a moment, can you do the title &

2022-06-06 12:18:10 Spanish version / versión en español https://t.co/l58q7F2a0q w/@KarnFort1

2022-06-06 12:13:48 Spanish version / versión en español https://t.co/l58q7F2a0q w/@KarnFort1

2022-06-06 12:10:02 Going to talk about ethics at NLP, AI, etc. conferences? Don't forget your bingo card! Spanish version provided by anonymous contributor. https://t.co/KuIm2N6wUh

2022-06-06 10:21:34 I'd love to have versions in lots and lots of languages. Send them to me and I'll do the formatting. https://t.co/jJIXiFfUwM

2022-06-05 15:05:24 RT @emilymbender: Ready for discussions of #ethNLP and ethics review at NLP/ML etc conferences? Don't forget your bingo card! (With @KarnFo…

2022-06-05 05:44:18 RT @emilymbender: French version: https://t.co/DeN02rLFHx

2022-06-04 14:48:29 RT @breitband: "I doubt that there is anything that can rightly be called 'artificial intelligence' at this point," says @emily…

2022-06-04 14:44:34 RT @ruthstarkman: @breitband @emilymbender Oh! Found it, thanks! Great that you're featuring this important AI critic for a German-speaking…

2022-06-04 13:10:57 RT @timnitGebru: @simonallison @Kantrowitz @thecontinent_ I think everyone who reads about how magical these systems are would also benefit…

2022-06-04 11:50:13 RT @breitband: Our topics: * How journalists can report better on artificial intelligence – interview with @emilymbender * Den v…

2022-06-03 16:32:09 RT @emilymbender: Listening to a tutorial on @OSFramework by @TimoRoettger and we went looking to see if the UI is localized/localizable. S…

2022-06-03 16:22:15 @asayeed

2022-06-03 15:28:35 Thanks to @AlexisMichaud13 for the inspiration. He showed an excuse bingo for "open data".

2022-06-03 15:20:05 @jordimash @KarnFort1 That was in our first draft but we only had 16 squares... (and weren't ready to move up to 25)

2022-06-03 13:30:03 @SashaMTL @alexhanna Sounds good. Get better quick, Alex!

2022-06-03 13:20:41 @alexhanna @SashaMTL Assuming my bus is on time, maybe we could meet up near Jardin des Plantes around 9:30/9:45? (Hopefully with Alex, if she's up to it!)

2022-06-03 13:14:37 @alexhanna @SashaMTL Yes &

2022-06-03 13:11:46 @SashaMTL @alexhanna It'd be a late night ... my bus gets to Gare de Lyon at 21:08.

2022-06-03 13:09:28 @alexhanna It'd be kinda awesome to actually meet in person after all this time ... in Paris!

2022-06-03 13:08:44 @alexhanna @KarnFort1 Yeah -- I'll be back in Paris Sunday evening (for my flight on Monday). How are you doing?

2022-06-03 13:04:15 @alexhanna @KarnFort1 Kinda -- I've been in Banyuls-sur-Mer for a summer school. Now heading to Paris to see an old friend (near Paris...), before flying home Monday.

2022-06-03 12:49:48 @hipsterelectron @DippedRusk @KarnFort1 The French version (see my reply to my OT) also has alt text :)

2022-06-03 12:24:36 RT @KarnFort1: Upcoming conferences in #TAL? Don't forget your excuse bingo for not doing ethics! (with @emilymbender da…

2022-06-03 12:24:25 French version: https://t.co/DeN02rLFHx

2022-06-03 12:23:12 Ready for discussions of #ethNLP and ethics review at NLP/ML etc conferences? Don't forget your bingo card! (With @KarnFort1 on the TGV from Perpignan to Paris). https://t.co/qMfHNuzwpv

2022-06-03 05:35:45 RT @MisterABK: Fascinating cultural differences https://t.co/gVsU7m63FD

2022-06-03 04:15:07 @TaliaRinger My guess is not your fault: probably the dominant factors are gender + ML disdain for domain expertise. Adjusting vocab/presentation can maybe move the needle a little bit, but I doubt that's the main thing.

2022-06-03 03:26:57 RT @AmericasNLP: The website of the Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas is…

2022-06-02 19:34:16 1) purple 2) orange 3) brown https://t.co/WpkchcL3XR

2022-06-02 17:03:10 Listening to a tutorial on @OSFramework by @TimoRoettger and we went looking to see if the UI is localized/localizable. Seems like no? My guess is that global uptake would be much better if the website were available in more languages.

2022-06-02 15:43:22 @lisa_b_davidson @jessgrieser @drswissmiss Bonus property of schedule send: that email you wrote on Saturday morning isn't buried under all the other email when the addressee opens it on Monday.

2022-06-02 15:39:47 @jessgrieser @drswissmiss @lisa_b_davidson I think there is a difference between 1:1 emails and group emails. If you have a group thread there can be pressure to reply if others are replying lest a consensus develop without one's input. (As was mentioned above.)

2022-06-02 06:49:38 RT @anyabelz: In case you missed our flier at #ACL2022nlp - nb this is for everyone in #NLProc and #ML not just people working on evaluatio…

2022-06-01 13:05:10 @CGraziul I have no idea what you mean by "language speaks us", actually.

2022-06-01 08:45:24 RT @GirlsWhoCode: Congrats to computer scientist @timnitGebru for being named one of @TIME’s 100 Most Influential People of 2022. #TIME10…

2022-06-01 08:44:28 @alexhanna Get well soon!

2022-06-01 08:12:07 RT @DingemanseMark: An underappreciated aspect of #dalle2's secret language abilities: prompts like "data scientist" turn out to have cover…

2022-06-01 08:11:57 RT @DingemanseMark: Hate to pour cold water on this fun observation but the notion of "secret language" with "meanings" here is fundamental…

2022-06-01 08:11:42 RT @DingemanseMark: DALL-E does impressive visualizations of biased datasets. I like how the first example is a meme-like koala dunking a b…

2022-06-01 07:23:21 RT @LauraAmalasunta: As a historian of northern Europe, allow me to explain: you don't get food in Iceland because in 1986 it was towed out…

2022-06-01 07:00:44 (To be more precise, I usually have a choice between feet firmly on the ground or back against the backrest, not both. But if the seat has a slight angle, I might have to sit really far forward for feet-firmly-planted.)

2022-06-01 06:59:46 Week two of in-person conferencing and every time I sit down I'm reminded of a key perk of virtual conferencing: never getting stuck in a chair where my feet don't touch the ground properly. #ShortPeopleProblems

2022-06-01 06:48:54 @silvia_fabbi @LiebertPub I can definitely write a filter which will send their messages right to spam, but I shouldn't have to. Also seems worthwhile to alert the world to yet another bad (likely predatory) actor in this space.

2022-05-31 20:35:51 RT @rctatman: Alright, fine: it's getting enough traction that I think I need to address this paper as a certified Grumpy Linguist in NLP.…

2022-05-31 20:35:33 <

2022-05-31 20:35:20 Read the whole thread up &

2022-05-31 14:11:49 RT @mmitchell_ai: Another thing I learned from @timnitGebru: keep track of accomplishments of your marginalized colleagues

2022-05-31 13:23:51 @SeeTedTalk @complingy That's good. They should also be in the @aclanthology though.

2022-05-31 13:22:40 @kirbyconrod @joshraclaw Can I fave again for the chef's kiss back-formation?

2022-05-31 13:11:41 @complingy @SeeTedTalk Unclear to me why the ACL 2021 videos aren't linked on the anthology though. Was that Underline, or SlidesLive or something else?

2022-05-31 12:06:00 .@LiebertPub your unsubscribe function is broken. I AM NOT INTERESTED IN YOUR SPAM --- and yet when I try to unsubscribe, I just get an "error" (and then keep getting your mails). How do I get off your f*cking list?

2022-05-30 19:19:05 @davidschlangen Agreed on all counts. I would add: specific online social-ish programming (BoaF, pop-up mentoring, etc) during the night hours local time --- to provide a focal point for those away from the in-person timezone.

2022-05-30 15:35:47 @rajiinio https://t.co/k9Q2dksde3

2022-05-30 15:34:38 @rajiinio Agreed ... the problem starts with bringing in "optimization". Instead of making connections, having empathy, building community.

2022-05-30 12:09:49 @KarnFort1 @GdrLift My favorite way to show off

2022-05-30 06:37:47 @markoff Thanks for the shout-out! NB, I'm @emilymbender...

2022-05-29 12:05:41 @rogerkmoore cc @SashaMTL @NannaInie

2022-05-29 12:05:09 RT @rogerkmoore: We should never have called it “language modelling” all those years ago

2022-05-28 15:33:50 RT @ani_nenkova: ‘Move fast, break things’ is rightly criticised as a guiding philosophy but do people have real life examples of when ‘be…

2022-05-27 10:36:57 Now available through the #acl2022nlp underline! https://t.co/HEYEgxJYju

2022-05-27 10:36:41 @suzatweet @ggdupont @BigscienceW If possible, it would be good for @underlineio to add "Panel" to the title (so it can be found together with the other panels that way).

2022-05-27 10:35:22 @suzatweet @ggdupont @BigscienceW Thank you!

2022-05-27 09:53:31 Talking about spurious applications of LLMs --- has anyone used one to try to play chess?

2022-05-27 09:09:17 @BigscienceW I still can't find it. Does anyone know?

2022-05-27 06:49:40 Can anyone else find this panel yet on the #acl2022nlp @underlineio site? I see the other two, but not ours. https://t.co/HEYEgxJYju

2022-05-27 05:50:12 Strange new world to have an academic event both in the future and in the past. New to me anyway, I suppose everyone involved with TV/movies knows this well...

2022-05-27 05:49:05 I hope everyone finds this discussion illuminating. It was certainly interesting to get to participate in! https://t.co/HEYEgxJYju

2022-05-26 22:06:53 RT @AJLUnited: Check out this thread by @emilymbender about the @nytimes article about “Can A.I.-Driven Voice Analysis Help Identify Mental…

2022-05-26 21:36:26 RT @mmitchell_ai: The @BigscienceW #acl2022nlp workshop is tomorrow, and includes a panel on Ethical &

2022-05-26 12:49:54 @ShlomoArgamon Yeah, I've started talking about "societal impacts of NLP" and similar.

2022-05-26 11:31:16 @LingMuelller Some of my students borrowed older laptops from the university! I think others worked out how to use the server cluster.

2022-05-26 11:09:45 @LeonDerczynski Hang in there!

2022-05-26 10:52:20 @tallinzen @LChoshen Also, while it's very nice to talk with people in person at conferences (I had a blast!) I'd not overestimate the chances of getting a word in person with a keynote speaker at a 3000-person conference....

2022-05-26 10:50:20 @tallinzen @LChoshen On the contrary, I see other reasons to maintain hybrid conferences, and would hope that the possibility of remote keynotes (and award acceptance talks...) would improve the diversity of speakers we see in those roles.

2022-05-26 10:39:07 @Abebab Yeah, that's good insurance.

2022-05-26 10:32:07 @LingMuelller Last I checked, VirtualBox didn't support the new Apple hardware, which is a non-starter for me (until we figure out a workaround). So, no to your first question :)

2022-05-26 10:23:35 @Abebab Arg!! I hope you'll be able to reconstruct your thoughts quickly.

2022-05-26 09:16:26 @Abebab @DingemanseMark I will, I promise!

2022-05-26 08:58:15 @Abebab Coffee would have been lovely! I'll look forward to the next opportunity.

2022-05-26 08:27:03 @Abebab You were there! I thought I spotted you but wasn't sure. I'm sorry I missed the chance to say hi.

2022-05-26 05:32:25 RT @emilymbender: I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't ag…

2022-05-26 04:58:35 @tdietterich It's a poll. Not a request for advice.

2022-05-26 04:46:01 You see a thread and you want to respond positively. You:

2022-05-25 20:50:20 High precision anyway. Not really checking recall.

2022-05-25 20:36:59 Starting to get very good at predicting who in my replies will have "AI" or "AGI" in their Twitter bio...

2022-05-25 20:17:39 @KyleMorgenstein https://t.co/tWGVU6Tup1.

2022-05-25 20:15:07 And work that frames the problem as one that could be "solved" at the level of training or programming a system "if only" we had human agreement on ethical systems is worse than useless b/c it distracts from the actual problems.

2022-05-25 20:13:56 Working out systems of governance, appropriate regulations &

2022-05-25 20:12:33 Discourses around "teaching machines to be ethical" are frankly just a distraction, and one grounded in fantastical ideas about "AGI". >

2022-05-25 20:10:55 5. And finally, task/tech fit: systems that are designed for their use case, evaluated in situ, including for whether the task is even sensible and how the system might harm vulnerable &

2022-05-25 20:09:31 3. Transparency, so that advocates for those affected by the technology can push back. 4. More generally, recourse when there is harm. >

2022-05-25 20:08:38 What you'll find is that the proposed solutions aren't autonomous systems that are "ethical", but rather: 1. (Truly) democratic oversight into what systems are deployed. 2. Transparency, so human operators can contextualize system output. >

2022-05-25 20:06:31 That argument presupposes that the goal is to create autonomous systems that will "know" how to behave "ethically". But if you actually seriously engage with the work of authors like @ruha9 @safiyanoble @timnitGebru @csdoctorsister @Abebab @rajiinio @jovialjoy &

2022-05-25 20:05:08 I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right"This always feels like a cop-out to me, and I think I've put my finger on why:>

2022-05-25 20:01:28 RT @timnitGebru: 4-5 years ago @mmitchell_ai was a semifinalist for the MIT Tech Review 35 under 35. I wrote a letter of support. One of th…

2022-05-25 19:27:16 RT @mmitchell_ai: YAY!Thank you to @JesseDodge for being such an advocate for my work, and to our co-authors: Amit Goyal, @kotymg, @karlst…

2022-05-25 19:19:34 RT @BlancheMinerva: Fabulous work lead by @KreutzerJulia and @iseeaswell. These issues are massive and systematic, and blatantly invalidate…

2022-05-25 18:59:45 RT @emilymbender: @vukosi It's really frustrating that the work of doing that checking isn't done by the people who built the corpus (best)…

2022-05-25 18:46:05 That one even has fake ISSNs! "e-IՏՏƝ: 2640-0502 p-IՏՏƝ: 2640-0480"

2022-10-29 04:02:24 #ThreeMarvins

2022-10-29 04:01:56 Finally, I can just tell that some reading this thread are going to reply with remarks abt politicians being thoughtless text synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, &

2022-10-29 04:01:21 And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine./12 https://t.co/pDCl1lgRx8

2022-10-29 04:00:49 This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."/11 https://t.co/USu7GiP7V1

2022-10-29 04:00:20 Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data./10 https://t.co/h44gCyjkka

2022-10-29 03:59:35 This is nonsensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability./9 https://t.co/4S61hX1tQb

2022-10-29 03:59:02 I'd really rather think that there are better ways to think outside the box in terms of policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out./8 https://t.co/UTEr3VflVo

2022-10-29 03:58:30 Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making./7

2022-10-29 03:58:10 Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the very democratic process as its material. Such a move seems disastrously anti-democratic./6

2022-10-29 03:57:47 The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus"./5 https://t.co/WRf5bT8iMI

2022-10-29 03:56:46 However, the quotes in the article leave me very concerned that the artists either don't really understand or have expectations of the general AI literacy in Denmark that are probably way too high./4

2022-10-29 03:56:38 I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable./3

2022-10-29 03:56:26 Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system./2

2022-10-29 03:56:13 Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-28 21:28:04 @DrVeronikaCH See end of thread.

2022-10-28 21:22:27 @JakeAziz1 In my grammar engineering course, students work on extending implemented grammars over the course of the quarter. Any given student only works on one language (with a partner), but in our class discussions, everyone is exposed to all the languages we are working on.

2022-10-28 20:54:22 For that matter, what would the world look like if our system prevented the accumulation of wealth that sits behind the VC system?

2022-10-28 20:53:48 What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but rather to realistic, community-governed language technology?>

2022-10-28 20:40:46 (Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: https://t.co/CKybX4BRsz )

2022-10-28 20:40:05 Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.

2022-10-27 15:35:48 @jessgrieser For this shot, yes. Second dose is typically the rough one for those for whom it is rough. Also: thank you for your service!!

2022-10-27 05:16:49 RT @mark_riedl: That is, we can't say X is true of a LM at scale Y. We instead can only say X is true of a LM at scale Y trained in unknown…

2022-10-26 21:03:30 Another fun episode! @timnitGebru did some live tweeting here. We'll have the recording up in due course... https://t.co/UwgCA1uu4a

2022-10-26 20:53:19 RT @timnitGebru: Happening in 2 minutes. Join us. https://t.co/vDCO6n1cno

2022-10-26 18:28:08 AI "art" as soft propaganda. Pull quote in the image, but read the whole thing for really interesting thoughts on what a culture of extraction means. By @MarcoDonnarumma h/t @neilturkewitz https://t.co/2uAJvBTVbM https://t.co/X4at2irn0V

2022-10-26 17:51:27 In two hours!! https://t.co/70lqNfeHjh

2022-10-26 15:20:39 @_akpiper @CBC But why is it of interest how GPT-3 responds to these different prompts? What is GPT-3 a model of, in your view?

2022-10-25 18:16:23 @_akpiper @CBC How did you establish that whatever web garbage GPT was trained on was a reasonable data sample for what you were doing?

2022-10-25 18:14:43 Sorry, folks, if I'm missing important things. A post about sealioning led to my mentions being filled with sealions. Shoulda predicted that, I guess. https://t.co/pg6IfnZxUQ

2022-10-25 12:51:32 RT @emilymbender: Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly repor…

2022-10-25 12:51:29 RT @emilymbender: Thinking about this again this morning. I wonder what field of study could provide insight into the relative contribution…

2022-10-25 00:29:46 @timnitGebru @Foxglovelegal From what little I understand, these regulations only kick in when there are customers involved paying for a product. So, I guess the party with standing might be advertisers who are led to believe that they are placing their ads in an environment that isn't hate-speech infested.

2022-10-25 00:27:03 @timnitGebru Huh -- I wonder how truth in advertising regulations apply to cases like this, where people representing companies but on their own twitter account go around making unsupported claims about the effectiveness of their technology.

2022-10-25 00:19:07 @olivia_p_walker https://t.co/YyrMnZdhjW

2022-10-25 00:16:57 I mean, acting like pointing out that something is eugenicist is the problem is not the behavior I'd expect of someone who is actually opposed to eugenics.

2022-10-25 00:15:14 If you're offended when someone points out that your school of thought (*cough* longtermism/EA *cough*) is eugenicist, then clearly you agree that eugenics is bad. So why is the move not to explore the ways in which it is (or at least appears to be) eugenicist and fix that?

2022-10-25 00:03:12 RT @aclmeeting: #ACL2023NLP is looking for an experienced and diverse pool of Senior Area Chairs (SACs). Know someone who makes the cut?…

2022-10-24 19:18:09 @EnglishOER Interesting for what? What are you trying to find out, and why is poking at a pile of data of unknown origin a useful way to do so?

2022-10-24 17:06:13 @EnglishOER But "data crunching of so much text" is useless unless we have a good idea of how the text was gathered (curation rationale) and what it represents.

2022-10-24 16:40:43 Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly reporting on how exciting it was to read the results?

2022-10-24 04:29:30 @athundt @alkoller It looks like only 7 of them are visible but that's plausible.

2022-10-24 04:17:55 I wasn't sure what to do for my pumpkin this year, but then @alkoller visited and an answer suggested itself. #SpookyTalesForLinguists https://t.co/Bp3rULsA9z

2022-10-23 20:53:56 @jasonbaldridge I bookmarked it when you first announced the paper on Twitter but haven't had a chance to look yet.

2022-10-23 19:52:26 @tdietterich Fine. And the burden of proof for that claim lies with the person/people making it.

2022-10-23 19:47:57 @tdietterich Who is going around saying airplanes fly like birds do?

2022-10-23 19:32:27 To the extent that computational models are models of human (or animal) cognition, the burden of proof lies with the model developer to establish that they are reasonable models. And if they aren't models of human cognition, comparisons to human cognition are only marketing/hype.

2022-10-23 19:08:14 @Alan_Au @rachelmetz https://t.co/msUIrYeCEr

2022-10-23 05:29:16 @deliprao Also if you feel the need to de-hype your own tweet, maybe revisit and don't say the first thing in the first place?

2022-10-23 05:27:35 @deliprao What does "primordial" mean to you?

2022-10-23 05:26:27 How can we get from the current culture to one where folks who build or study this tech (and should know better) stop constantly putting out such hype?

2022-10-23 05:24:52 And likening it to "innermost thoughts" i.e. some kind of inner life is more of the same. https://t.co/kFfzL3gbhm

2022-10-23 05:22:59 Claiming that it's the kind of thing that might develop into thinking sans scare quotes with enough time? data? something? is still unfounded, harmful AI hype. https://t.co/hilvqpXgWM

2022-10-23 03:51:33 RT @emilymbender: @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 03:51:31 @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 01:18:48 @EnglishOER @alexhanna @dair_ai For the text ones, I tend to say "text synthesis machine" or "letter sequence synthesis machine". I guess you could go for "word and image synthesis machines", but "mathy math" is also catchy :)

2022-10-22 23:32:51 RT @timnitGebru: I need to get this. Image is Mark wearing sunglasses with a white hoodie that has the writings below in Black.Top:Sto…

2022-10-22 20:07:59 @safiyanoble I'm a fan of Choffy, but as someone super sensitive to caffeine I can say it will still keep me up if I have it in the afternoon. (Don't expect hot cocoa when you drink it. Think rather cacao tea.)

2022-10-21 23:46:26 @LeonDerczynski And now I'm hoping that no one will retweet the original (just your QT) because otherwise folks won't check the date and will wonder why I'm talking about GPT-2!

2022-10-21 23:39:49 @LeonDerczynski Hah -- thanks for digging that up. I've added it here, making it (currently) the earliest entry. https://t.co/uKA4tuv4jF

2022-10-21 23:38:09 RT @LeonDerczynski: This whole discussion - and the interesting threads off it - have aged like a fine wine https://t.co/ykUiRfoGTf

2022-10-21 23:11:29 @zehavoc I think a good limitations section makes the paper stronger by clearly stating the domain of applicability of the results. If that means going back and toning down some of the high-flying prose in the introduction, so much the better!

2022-10-21 19:19:40 @kirbyconrod I don't know, but I love the form pdves so much. Do you name your folders "Topic pdves"?

2022-10-21 19:14:54 @LeonDerczynski @yuvalmarton @complingy I want this meme to fit here but it doesn't --- if only people would cite the deep #NLProc (aka deep processing, not deep learning). https://t.co/7rrLQ11GEm

2022-10-21 18:19:29 RT @rctatman: Basically: knowing about ML is a subset of what you need to know to be able to build things that use ML and solve a genuine p…

2022-10-21 14:15:13 RT @mer__edith: You can start by learning that "AI" is first &

2022-10-21 04:12:05 RT @timnitGebru: I say the other way around. To those who preach that "AI" is a magical thing that saves us, please learn something about…

2022-10-21 01:44:09 @edwardbkang @simognehudson Please do post a link to your paper when it is out!

2022-10-29 13:59:20 RT @emilymbender: What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but…

2022-10-29 13:01:58 RT @emilymbender: I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxio…

2022-10-29 13:00:56 RT @emilymbender: Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is ~science~. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 That was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >


2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning. Their meaning is not the world. Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galactica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...


2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Join us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is ~%~science~%~. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo
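The point in the tweet above — that a language model's training objective is nothing more than predicting word forms given context — can be made concrete with a toy sketch. This is a minimal, hypothetical bigram model (not anything Galactica or any real LLM uses, just the smallest possible instance of the same objective): it captures co-occurrence statistics of word forms in its training text, and nothing about whether the strings it emits are true of the world.

```python
import random
from collections import defaultdict

# Toy "language model" trained only on word-form co-occurrence.
# It learns which word forms follow which -- no facts, no world model.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Count bigram transitions observed in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation purely from word-form statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return " ".join(out)

# Both continuations are equally "fluent" to the model; the model has no
# basis for preferring the true one over the false one.
print(generate("the", 6))
```

Whether the output ends in "rock" or "cheese" is decided by the sampling seed alone — fluency without grounding, which is the whole point.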

2022-11-25 17:30:50 RT @timnitGebru: All these so-called #AISafety institutions doing the opposite of “safety” are funded and staffed by longtermists and effec…

2022-11-25 14:05:26 @sergia_ch But I do agree that it is disappointing to interact with engineers who refuse to see that / refuse to actually engage with the substantive critique of what they built (and the process they used to build &

2022-11-25 14:05:16 @sergia_ch I'd go a different direction here. I don't think Galactica is fixable, because there is a fundamental mismatch between what they said they wanted to build and the tech they chose. >

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Joins us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning. Their meaning is not the world. Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 That was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-28 22:51:12 @neilturkewitz Spam and could be turned into a DOS attack...

2022-11-28 21:39:11 See also: https://t.co/a24HdKWhIS

2022-11-28 21:39:04 Once again, for those who seem to have missed it: Language models are not the type of thing that can testify/offer public comment/any similar sort of speech act, because they are not the sort of thing that can have a public commitment. This is atrocious. https://t.co/YrHFuy183M

2022-11-28 20:03:06 @tdietterich @fchollet I urge you to read the short (and well presented) piece that I linked to in that tweet before coming here to argue with me.

2022-11-28 19:32:36 @robmalouf @fchollet Shieber cites Dreyfus 1979: Hubert Dreyfus (1979, page 100) has made a similar analogy of climbing trees to reach the moon.

2022-11-28 19:29:38 @robmalouf @fchollet @fchollet makes a nice distinction between "cognitive automation", "cognitive assistance" and "cognitive autonomy" --- and I think is compatible with what you are saying. The argument here is against expecting the ladders to bring "cognitive autonomy". I'll look at Shieber's pc.

2022-11-28 19:12:24 @PeterVeep @fchollet Yeah -- I think that's because the motivation of the people building the applications isn't actually to build better solutions to the problem, but to prove that their system can 'learn'. It's exhausting.

2022-11-28 19:07:46 @fchollet Somehow, the current conversation &

2022-11-28 19:06:13 @fchollet All helpful metaphors, I think, for explaining why deep learning (useful as it may be) isn't a path towards what @fchollet calls "cognitive autonomy". [I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.] >

2022-11-28 19:04:29 Building taller and taller ladders won't get you to the moon -- ? Running faster doesn't get you closer to teleportation -- me ⏱ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet https://t.co/AQc9ZoLizf

2022-11-28 19:00:22 @AngelLamuno Yes!

2022-11-28 14:23:04 @jordilinares I didn't ask what you are aligned with. I was telling you that the answer to your question about the term stochastic parrots can be found in the paper where we introduced that term.

2022-11-28 01:42:44 RT @emilymbender: @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea th…

2022-11-28 01:11:30 @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea that we can (or should even strive to) do scholarship using a "view from nowhere".

2022-11-28 00:14:59 RT @timnitGebru: @jquinonero @emilymbender Turns out its easier to censure research that makes your tech look problematic than stop the rel…

2022-11-27 23:03:48 RT @_ovlb: BTW: Next Friday and Saturday @DAIRInstitute celebrates their first anniversary. Big yay! 9/ [https://t.co/FxssxPVbHx]

2022-11-27 18:43:19 @jordilinares Uh, read our paper?

2022-11-27 15:49:15 @jordilinares Hi! I'm the one who coined that phrase and it was not intended as an insult. It was intended to make clear the difference between what large LMs do and what people claim they do.

2022-11-26 13:46:21 RT @le_science4all: How dangerous are large AI models? The #hype is accelerating rushed deployments, which are causing traumas for users,…

2022-11-25 17:30:50 RT @timnitGebru: All these so-called #AISafety institutions doing the opposite of “safety” are funded and staffed by longtermists and effec…

2022-11-25 14:05:26 @sergia_ch But I do agree that it is disappointing to interact with engineers who refuse to see that / refuse to actually engage with the substantive critique of what they built (and the process they used to build &

2022-11-25 14:05:16 @sergia_ch I'd go a different direction here. I don't think Galactica is fixable, because there is a fundamental mismatch between what they said they wanted to build and the tech they chose. >

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Join us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5


2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 That was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-29 04:57:22 @fchollet (two tweets up, "isn't a path" should be "is a path")

2022-11-29 04:50:08 @yuvalpi @fchollet Yeah, probably. Authentic tweet :)

2022-11-29 03:43:21 @davelewisdotir I see. I don't think it's cute at all --- it's disrespectful and furthermore plays into the #AIhype that is plaguing the discourse. It also demonstrates how text synthesis machines could be used to DOS public comment processes.

2022-11-29 03:34:11 @davelewisdotir So, "boys will be boys" is what you're saying here? Nice.

2022-11-28 22:51:12 @neilturkewitz Spam and could be turned into a DOS attack...

2022-11-28 21:39:11 See also: https://t.co/a24HdKWhIS

2022-11-28 21:39:04 Once again, for those who seem to have missed it: Language models are not the type of thing that can testify/offer public comment/any similar sort of speech act, because they are not the sort of thing that can have a public commitment. This is atrocious. https://t.co/YrHFuy183M

2022-11-28 20:03:06 @tdietterich @fchollet I urge you to read the short (and well presented) piece that linked to in that tweet before coming here to argue with me.

2022-11-28 19:32:36 @robmalouf @fchollet Shieber cites Dreyfus 1979: Hubert Dreyfus (1979, page 100) has made a similar analogy of climbing trees to reach the moon.

2022-11-28 19:29:38 @robmalouf @fchollet @fchollet makes a nice distinction between "cognitive automation", "cognitive assistance" and "cognitive autonomy" --- and I think is compatible with what you are saying. The argument here is against expecting the ladders to bring "cognitive autonomy". I'll look at Shieber's pc.

2022-11-28 19:12:24 @PeterVeep @fchollet Yeah -- I think that's because the motivation of the people building the applications isn't actually to build better solutions to the problem, but to prove that their system can 'learn'. It's exhausting.

2022-11-28 19:07:46 @fchollet Somehow, the current conversation &

2022-11-28 19:06:13 @fchollet All helpful metaphors, I think, for explaining why it's foolish to believe that deep learning (useful as it may be) isn't a path towards what @fchollet calls "cognitive autonomy". [I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.] >

2022-11-28 19:04:29 Building taller and taller ladders won't get you to the moon -- ? Running faster doesn't get you closer to teleportation -- me ⏱ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet https://t.co/AQc9ZoLizf

2022-11-28 19:00:22 @AngelLamuno Yes!

2022-11-28 14:23:04 @jordilinares I didn't ask what you are aligned with. I was telling you that the answer to your question about the term stochastic parrots can be found in the paper where we introduced that term.

2022-11-28 01:42:44 RT @emilymbender: @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea th…

2022-11-28 01:11:30 @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea that we can (or should even strive to) do scholarship using a "view from nowhere".

2022-11-28 00:14:59 RT @timnitGebru: @jquinonero @emilymbender Turns out its easier to censure research that makes your tech look problematic than stop the rel…

2022-11-27 23:03:48 RT @_ovlb: BTW: Next Friday and Saturday @DAIRInstitute celebrates their first anniversary. Big yay! 9/ [https://t.co/FxssxPVbHx]

2022-11-27 18:43:19 @jordilinares Uh, read our paper?

2022-11-27 15:49:15 @jordilinares Hi! I'm the one who coined that phrase and it was not intended as an insult. It was intended to make clear the difference between what large LMs do and what people claim they do.

2022-11-26 13:46:21 RT @le_science4all: How dangerous are large AI models? The #hype is accelerating rushed deployments, which are causing traumas for users,…

2022-11-25 17:30:50 RT @timnitGebru: All these so-called #AISafety institutions doing the opposite of “safety” are funded and staffed by longtermists and effec…

2022-11-25 14:05:26 @sergia_ch But I do agree that it is disappointing to interact with engineers who refuse to see that / refuse to actually engage with the substantive critique of what they built (and the process they used to build &

2022-11-25 14:05:16 @sergia_ch I'd go a different direction here. I don't think Galactica is fixable, because there is a fundamental mismatch between what they said they wanted to build and the tech they chose. >

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Joins us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is ~%~science~%~. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-12-07 21:17:28 @alexhanna @timnitGebru Oh what an awful experience! I'm so sorry that you all were subjected to this and also in awe of your responses (recording in the moment, documenting here).

2022-12-07 16:58:12 @Miles_Brundage So instead of calling that out, or you know, just walking by, you decided to play along? There are people out there calling "stochastic parrot" an insult (to "AI" systems). And you're out there promoting ChatGPT as "an AI". The inference was easy.

2022-12-07 16:56:54 @Miles_Brundage "It was just a joke" --- are you hearing yourself?

2022-12-07 16:50:57 @betsysneller Cheating ofc because "Down Under" there is functioning as an NP.

2022-12-07 16:50:37 @Miles_Brundage Making light of actual oppression = not funny?

2022-12-07 16:50:13 @betsysneller Good for introducing a discussion about descriptive v. prescriptive rules. Also, I add that you can cheat and make it a string of 8 prepositions if the book is about Australia: "But Dad, what did you bring the book I didn't want to be read to out of about Down Under up for?" >

2022-12-07 16:49:05 @betsysneller "But Dad, what did you bring the book I didn't want to be read to out of up for?" >

2022-12-07 16:48:36 @betsysneller There was a kid who lived in a two story house and always got read a story at bedtime. Books on the main floor, bedrooms on the second. One day, the kid's dad brings up a poor choice of story and the kid says:

2022-12-07 15:48:53 Corrected link: https://t.co/hWtQ2z8Mw8 by @willknight https://t.co/hh0bmg8t02

2022-12-07 15:48:31 And I see that the link at the top of this thread is broken (copy paste error on my part). Here is the article: https://t.co/hWtQ2z8Mw8

2022-12-07 15:48:08 It's frustrating that they don't seem to consider that a language model is not fit for purpose here. This isn't something that can be fixed (even if doing so is challenging). It's a fundamental design flaw. >

2022-12-07 15:47:57 They give this disclaimer "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging [...]" >

2022-12-07 15:47:36 Re difference to other chatbots: The only possible difference I see is that the training regimen they developed led to a system that might seem more trustworthy, despite still being completely unsuited to the use cases they are (implicitly) suggesting. >

2022-12-07 15:47:01 @kashhill @willknight Seems like two copies of the link somehow? Here it is: https://t.co/hWtQ2z8Mw8

2022-12-07 15:46:36 Furthermore, they situate it as "the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems." --- as if this were an "AI system" (with all that suggests) rather than a text-synthesis machine. >

2022-12-07 15:46:26 Anyway, longer version of what I said to Will: OpenAI was more cautious about it than Meta with Galactica, but if you look at the examples in their blog post announcing ChatGPT, they are clearly suggesting that it should be used to answer questions. >

2022-12-07 15:44:37 @willknight A tech CEO is not the source to interview about whether chatbots could be effective therapists. What you need is someone who studies such therapy AND understands that the chatbot has no actual understanding. Then you could get an accurate appraisal. My guess: *shudder* >

2022-12-07 15:42:50 @willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists. >

2022-12-07 15:39:59 @willknight The 1st is somewhat subtle. Saying this ability has been "unlocked" paints a picture where there is a pathway to some "AI" and what technologists are doing is figuring out how to follow that path (with LMs, no less!). SciFi movies are not in fact documentaries from the future. >

2022-12-07 15:37:19 I appreciated the chance to have my say in this article by @willknight but I need to push back on a couple of things: https://t.co/cbelYjZTyF >

2022-12-08 00:58:14 Meanwhile, woe to the reviewers who now have to also consider the possibility that the text they are reading isn't actually grounded in author intent, but just inserted to "sound plausible". And woe to the field whose academic discourse gets polluted with this.

2022-12-08 00:57:27 I sure hope you also advise your students that they (and you, if you are a co-author) are taking responsibility for every word that is in the paper --- that the words represent their ideas (not anyone else's) and that they stand by the accuracy of the statements. >

2022-12-07 22:48:53 @timnitGebru Timnit, how awful. I'm so sorry the DAIR 1st anniversary celebration was marred like this. And I am in awe at your brave responses.

2022-12-07 22:27:26 @Etyma1010 @betsysneller I should say I didn't invent this (though it's possible that the "about Down Under" addition is mine), but I don't remember who I got it from....

2022-12-07 22:26:53 @Etyma1010 @betsysneller It's a great S! I usually use it in the context of talking about prescriptive vs. descriptive rules, in particular, the rule against ending a sentence with a preposition. If that were a real rule of English, that sentence would be gibberish, but it's perfectly comprehensible.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 "People are using it" does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is you demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-12-08 05:10:51 @schock Ugh, gross. Thanks for documenting. Also, is it just me, or do all of these ChatGPT examples seem to have the same surface tone (even while it's saying vile things)?

2022-12-08 03:34:06 RT @sivavaid: Part of the reason so many people misunderstand and misuse "artificial intelligence" is that it was misnamed "artificial inte…

2022-12-08 03:33:59 RT @safiyanoble: That part. https://t.co/QVoYHFOQIF

2022-12-08 02:02:53 RT @michaelgaubrey: Everyone should go listen to @emilymbender's interview on @FactuallyPod.

2022-12-08 01:07:31 @Etyma1010 @betsysneller That sounds very plausible!!

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP saw fit to retweet, btw.

2022-12-09 01:11:52 @spacemanidol @amahabal No, it's not about the name. It's about the way the systems are built and what they are designed to do.

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP sought fit to retweet, btw.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 People are using it does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is your demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

ght find this helpful too:https://pad.riseup.net/p/Against_AI_and_Its_Environmental_Harms-keep


2024-06-24 13:35:39 The latest from the MAIHT3k newsletter, on the energy demands of "

2024-06-24 13:03:44 Today! Next Mystery AI Hype Theater 3000 live-stream! @ali will join us to take apart yet another paper-shaped AI hype artifact unleashed from a "

2022-03-19 13:41:16 @BenPatrickWill So I'm also worried about local gov'ts jumping on Google's apparent benevolence, making unjustified assumptions about "Google magic" replacing teachers and other staff, cutting budgets (or leaving them at abysmally low levels) and getting further mired in underfunded education.

2022-03-19 13:39:14 @BenPatrickWill At UW, we are currently in the process of painfully unwinding certain aspects of how we use Google apps (shared UWNetIDs will no longer have access to Google Drive etc) because Google changed the pricing on us. >

2022-03-19 13:38:09 A must-read about so-called "Google magic", techno-solutionism, and deploying large language models in "auto-pedagogy". On top of all of the issues that @BenPatrickWill identifies, let's also keep in mind that Google is a for-profit co. >

2022-03-18 21:42:55 @mmitchell_ai @Twitter @ruchowdh IKR? If the point is to get me to follow more accounts, then why keep wasting those slots on accounts I'm definitely NOT going to follow?

2022-03-18 21:23:22 @mmitchell_ai @Twitter @ruchowdh +1 for tweetdeck, so you'll see these less frequently at least (if @Twitter won't fix). Not quite the same, but there are also people for me who I have definitely decided not to follow, but @Twitter won't stop suggesting them. A "no thanks" option would be dandy.

2022-03-18 21:21:11 RT @StateBarCA: Today we honor @HabenGirma. As the first deaf-blind person to graduate from @Harvard_Law, Ms. Girma has dedicated her caree…

2022-03-18 13:22:36 @LeonDerczynski @SeeTedTalk And then telling the students we didn't fail that they're on top of some important "hierarchy of knowledge" (Gebru 2021) and they should look down on all others. https://t.co/3x4pqSZhRW

2022-03-17 20:41:07 @RWerpachowski @TaliaRinger @sundarpichai @mmitchell_ai @Google Then why are you jumping in here arguing with Talia?

2022-03-17 20:07:03 @RWerpachowski @TaliaRinger @sundarpichai @mmitchell_ai @Google Did you read the screencap Talia posted? You seem to be pretending that Google isn't going around claiming that they're still making ethics foundational to all their products.

2022-03-17 20:00:39 @AnnaDanielWork That does sound awful. Thank you for persevering, and for replying here. It really sounds like having info to hand like what's in the Tech Worker's Handbook could be really valuable.

2022-03-17 19:12:10 @TaliaRinger @RWerpachowski @sundarpichai @mmitchell_ai Not the entire team, but the leadership of that team. But Talia's point stands: How can anyone take @Google seriously on anything to do with AI ethics after what they did?

2022-03-17 19:03:55 The juxtaposition of these two points is hilarious too --- AI is so important, it's more important than electricity, it will do things like remind me to have dinner with my family!

2022-03-17 18:30:18 RT @CriticalAI: Interesting article and thread from #CriticalAI ally @emilymbender. Looking forward to her talk at our March 24th #DataOnto…

2022-03-17 18:25:12 From the same article: "Artificial intelligence “is one of the most profound technologies we are working on, as important or more than fire and electricity,” Pichai said." Did anyone have "AI is more profound than electricity" on their #AIhype bingo card? https://t.co/mkFKitZwX0

2022-03-17 18:24:03 @sundarpichai https://t.co/5CdD96gRKH

2022-03-17 18:23:40 Maybe if your company had a better culture around reasonable work/life balance you wouldn't need your calendar to remind you to HAVE DINNER WITH YOUR FAMILY, Sundar. https://t.co/mkFKitZwX0

2022-03-17 18:22:28 So many applications of "AI"/#PSEUDOSCI are trying to solve problems downstream that could be better addressed through prevention. Case in point from @sundarpichai 's hypothetical here: https://t.co/XIy9ftfhd1 https://t.co/OU1OHSVxt2

2022-03-17 16:50:36 This was fun---and I think we succeeded in generating discussion :) I particularly appreciate the format that #chiir2022 used, with 8 min videos played at a specific time followed by 12 min of discussion. @chirag_shah good choice of venue! https://t.co/3FdgwmNhif

2022-03-17 15:26:52 @ruthstarkman @GRACEethicsAI I can't wait to read it!

2022-03-17 13:27:51 Second (EMEA-timezone-friendly) presentation of our #chiir2022 paper in 2 hours (8:30am PST)! w/@chirag_shah https://t.co/3FdgwmNhif

2022-03-17 12:18:38 RT @cogsci_soc: Preserving context and user intent in the future of web search with. A Q&

2022-03-17 00:51:48 @cydharrell I think because it picks up at the point where someone is considering whether to become a whistleblower (which makes sense). So if you're writing about what it's good to have written down, that would be a good complement, I think!

2022-03-17 00:50:55 @cydharrell The Tech Worker Handbook is amazing, and will definitely be a cornerstone of what I point to (in the piece I'm currently writing that prompted this query), but one thing I don't see there is "having stuff written down for yourself". >

2022-03-17 00:38:25 @r_a_mckinney Thank you!

2022-03-17 00:29:03 @Dan__McCarthy @IfeomaOzoma Thank you!!

2022-03-17 00:28:44 @cydharrell That would be great!

2022-03-17 00:28:30 @grimalkina @cydharrell Ohh -- excellent!

2022-03-17 00:25:37 @cydharrell Is it something you plan to write about/give a linkable talk about? That would be fabulous!

2022-03-17 00:23:57 Seems like something Computer Professionals for Social Responsibility or someone might have put together back in the day, even...

2022-03-17 00:22:55 And: How can I network with like-minded people? Who can I talk to about things I find concerning, to help me think them through >

2022-03-17 00:22:33 Also: Before it is relevant to know, how does whistle-blowing work? What risks would I incur and what protections are afforded me by local law? >

2022-03-17 00:22:18 Asking self before starting: What are the bright lines that I will not cross? What are examples of things that I would feel compelled to become a whistle-blower over? >

2022-03-17 00:21:56 Something I feel like should be out there, but I don't know where: Has anyone written up advice to people just starting out in industry about being prepared to be a whistleblower, if necessary? I'm thinking things like >

2022-03-17 00:11:24 @RWerpachowski @keoladonaghy https://t.co/NzOt1npyJB

2022-03-17 00:02:30 @RWerpachowski @keoladonaghy Yeah, so look back at the original post you're replying to, and do some listening &

2022-03-16 23:14:55 @RWerpachowski @keoladonaghy *sigh* As usual, conversations that lack an analysis of power dynamics are just a waste of time. Discourse around "data sovereignty" specifically comes from Indigenous scholars &

2022-03-16 23:05:50 @RWerpachowski @keoladonaghy Do you know how the knowledge of other cultures gets to the library? Two paths: open sharing by the people whose culture it is (fine) or extractive/exploitative research by outsiders (not fine).

2022-03-16 21:59:33 @keoladonaghy Thank you -- yes. Super key point. "Anything in the world" isn't actually Google's to give.

2022-03-16 21:58:11 RT @keoladonaghy: Not to mention the gall of them believing they had the right to access much less give access to another culture's knowled…

2022-03-16 21:41:52 @_alialkhatib Yeah -- nothing in this blog post feels particularly informed by scholarship on pedagogy. And (as always) automated solutions seem like they're trying to clean up issues way downstream when something upstream (e.g. smaller class sizes) would be much more effective.

2022-03-16 21:37:17 And even if they did (they don't), that degree of reach for one company just seems inherently dangerous. Also. What is the scope of "anything in the world"? Would @Google really support people learning about the various ways in which their business practices do harm? /fin

2022-03-16 21:36:29 One last one for now: "Learning is personal" but they want to "help everyone in the world learn anything in the world". What grounds do we have to believe that Google has the cultural competence to achieve that? >

2022-03-16 21:32:10 This comes back to the idea (not original to me, but don't have cite to hand) that data, when collected into piles, creates risk, and we should not be collecting it without thinking about and mitigating those risks.

2022-03-16 21:30:39 Second, data collection. What else is happening to this data? Who has access? What is being done to mitigate its use to see students as (collections of) data points, rather than as people? https://t.co/i3qMbAulGp

2022-03-16 21:28:16 What, specifically, is the system doing to get the student "unstuck" in their non-math assignment? What role does the LLM play? How does the way that LLMs absorb various societal biases from their training data affect performance?

2022-03-16 21:27:08 First, I'm super skeptical that learning math by doing problem sets is a good model for learning other kinds of things. And even if it were, the idea that LLMs would support that generalization seems super sketchy. https://t.co/Y6Tlj6QzGi

2022-03-16 21:24:29 Reading the linked blog post, it's all kinds of creepy. Just a couple of examples: https://t.co/ayYVcl3IV2

2022-03-16 21:03:47 RT @mediajustice: Congrats to @timnitGebru on being named one of #TheRecast40's most influential people for exposing the racial bias of AI…

2022-03-16 20:43:57 RT @emilymbender: Poll for #NLProc researchers based in the US. Where do you think the most $$ are coming from funding #NLProc research (re…

2022-03-16 19:32:13 @SashaMTL Thank you! Btw, the published (peer-reviewed) version is here: https://t.co/d9xs3DRCn1 From AIES '21

2022-03-16 16:53:05 Poll for #NLProc researchers based in the US. Where do you think the most $$ are coming from funding #NLProc research (regardless of where the research takes place)? M = military + intelligence, G = other gov't spending (incl NSF, NIH, etc), I = industry

2022-03-16 16:48:45 @SScottGraham_ Yeah, no kidding. I guess one approximation would be to look at affiliations + acknowledgments in published papers...

2022-03-16 16:31:57 Does anyone know of any studies quantifying #NLProc research funding by source (national science funding schemes, industry, military/intelligence, etc)?

2022-03-16 15:57:29 @srchvrs @chirag_shah Gee, I remember using search before snippets, and guess what, it was usable! It perhaps took a little more time, but that isn't necessarily a bad thing. Friction can be valuable in information seeking behavior! https://t.co/zl6myTDKN4

2022-03-16 01:43:45 @csdoctorsister Oooh!! Congrats :)

2022-03-15 22:10:20 RT @uwnews: In a new perspective paper, @UW professors @emilymbender and @chirag_shah respond to proposals that reimagine web search as an…

2022-03-15 20:52:00 RT @UW_iSchool: Google is betting there's a big future in speech-based search, but the iSchool's @chirag_shah and @emilymbender of @UWlingu…

2022-03-15 20:23:42 @_amandalynne_ What a drag. But not just a by-product of a culture that fosters overwork. There's also the guy interrupting you and not letting you say the thing! That's on him.

2022-03-15 15:10:29 @negar_rz Similar query just a couple of days ago here! https://t.co/JDXozkzIF7

2022-03-15 14:35:49 @chirag_shah @webis_de Finally, the quibble. This is cute, but no, search engines aren't "aware" of anything and even in jest I think it's critical (in the current environment, also true in 2020) to steer clear of such #AIhype. https://t.co/kO99nxCcRj

2022-03-15 14:32:21 @chirag_shah @webis_de fn 10: "The benefit of an end-to-end integration of indexing documents, language modeling, and question answering can be expected to be a severely improved “understanding”." 'Severely improved' is an odd turn of phrase, but I take it to be an intensifier on the scare quotes. >

2022-03-15 14:31:30 @chirag_shah @webis_de Also Potthast et al: "But for all the new opportunities afforded by these technologies, their repercussions on society due to their large-scale deployment are not well-understood." So much fun living through these large-scale "experiments"... >

2022-03-15 14:30:46 @chirag_shah @webis_de Also Potthast et al: "As no actual conversations are currently supported by conversational search agents, every query is an ad hoc query that is met with one single answer." No. Actual. Conversations. There's a whole study to be done on the perils of aspirational tech names. >

2022-03-15 14:29:47 @chirag_shah Potthast et al (from @webis_de) suggest a standard disclaimer on direct answer responses, which is very well put: “This answer is not necessarily true. It just fits well to your question.” https://t.co/vRaONqJzaZ >

2022-03-15 14:28:13 Yes, this is great! I'm sorry we didn't find your paper while writing ours. (cc @chirag_shah) A few favorite quotes &

2022-03-15 13:12:58 First presentation of this is today, at 7pm PST! (Which I guess is tomorrow, for those over on the other side of the Date Line.) #chiir2022 https://t.co/3FdgwmNhif

2022-03-15 03:25:56 (Hmm, not keynote, but Presidential Address. Not that that really matters...)

2022-03-15 03:23:47 Thank you, @wtimkey8 for starting this new version of the thread. The other one was so awful to read and so hard to look away from....

2022-03-15 03:23:10 Newmeyer's answer involved Occam's Razor, to which I got to reply "But Occam's Razor cuts both ways", much, as I remember it, to the audience's approval. :)

2022-03-15 03:22:24 I forget exactly what I asked, but it must have been something to do with it being an empirical question whether linguistic competence (knowledge of language, as stored in actual brains) really did only concern grammaticality or not. >

2022-03-15 03:20:58 I was so pleased to have been put in the same league as Jurafsky that I figured I just had to go ask a question. And I was actually much better positioned to get to the mic to line up than if I hadn't gotten up to go check on my kiddo. >

2022-03-15 03:20:10 Newmeyer's topic was "Grammar is grammar and usage is usage" and I forget the exact details, but Garrett said to me: "Isn't someone like you or Dan Jurafsky going to get up there and...?" >

2022-03-15 03:19:14 I got up from my seat in the standing-room-only crowd to go check on him, and when I c